Efficient LLM Inference on CPUs
Length:
18 minutes
Released:
Dec 2, 2023
Format:
Podcast episode
Description
Large language models (LLMs) have demonstrated remarkable performance and tremendous potential across a wide range of tasks. However, deploying these models has been challenging due to the astronomical number of model parameters, which demands large memory capacity and high memory bandwidth. In this paper, we propose an effective approach that makes the deployment of LLMs more efficient. We support an automatic INT4 weight-only quantization flow and design a special LLM runtime with highly optimized kernels to accelerate LLM inference on CPUs. We demonstrate the general applicability of our approach on popular LLMs, including Llama2, Llama, and GPT-NeoX, and showcase the extreme inference efficiency on CPUs. The code is publicly available at: https://github.com/intel/intel-extension-for-transformers.
2023: Haihao Shen, Hanwen Chang, Bo Dong, Yu Luo, Hengyu Meng
https://arxiv.org/pdf/2311.00502v1.pdf
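The core technique the abstract names is group-wise INT4 weight-only quantization: weights are stored as 4-bit integers with a per-group scale and dequantized on the fly inside the optimized kernels. The NumPy sketch below illustrates only that quantize/dequantize idea under assumed choices (symmetric rounding, a group size of 32, and illustrative function names); it is not the paper's actual flow, which lives in the linked repository.

```python
import numpy as np

def quantize_int4_weight_only(w, group_size=32):
    """Group-wise symmetric INT4 quantization of a 2-D weight matrix.

    Minimal sketch of the INT4 weight-only idea; group_size and
    symmetric rounding are illustrative assumptions, not the paper's
    exact recipe.
    """
    rows, cols = w.shape
    assert cols % group_size == 0
    groups = w.reshape(rows, cols // group_size, group_size)
    # One scale per group: map the group's max magnitude to the INT4 limit (7).
    scales = np.abs(groups).max(axis=-1, keepdims=True) / 7.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid divide-by-zero
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    """Recover an FP32 approximation of the original weights."""
    return (q.astype(np.float32) * scales).reshape(q.shape[0], -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((4, 64)).astype(np.float32)
    q, s = quantize_int4_weight_only(w)
    w_hat = dequantize(q, s)
    print("max abs reconstruction error:", np.abs(w - w_hat).max())
```

In an actual runtime the INT4 weights stay packed in memory (cutting bandwidth roughly 4x versus FP16) and dequantization happens per group inside the matmul kernel; the sketch above only shows the numerics.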
Titles in the series (100)
STaR: Bootstrapping Reasoning With Reasoning: Generating step-by-step "chain-of-thought" rationales improves language model performance on complex reasoning tasks like mathematics or commonsense question-answering. However, inducing language model rationale generation currently requires either con... by Papers Read on AI