
Efficient LLM Inference on CPUs

From Papers Read on AI

Length: 18 minutes
Released: Dec 2, 2023
Format: Podcast episode

Description

Large language models (LLMs) have demonstrated remarkable performance and tremendous potential across a wide range of tasks. However, deploying these models has been challenging due to the astronomical number of model parameters, which demands large memory capacity and high memory bandwidth. In this paper, we propose an effective approach that can make the deployment of LLMs more efficient. We support an automatic INT4 weight-only quantization flow and design a special LLM runtime with highly optimized kernels to accelerate LLM inference on CPUs. We demonstrate the general applicability of our approach on popular LLMs, including Llama2, Llama, and GPT-NeoX, and showcase the extreme inference efficiency on CPUs. The code is publicly available at: https://github.com/intel/intel-extension-for-transformers.

2023: Haihao Shen, Hanwen Chang, Bo Dong, Yu Luo, Hengyu Meng



https://arxiv.org/pdf/2311.00502v1.pdf
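
For a concrete feel for the core technique the episode discusses, here is a minimal NumPy sketch of group-wise INT4 weight-only quantization, the general idea behind the paper's automatic flow. The group size of 32, symmetric scaling, and function names are illustrative assumptions, not the paper's exact recipe or the library's API.

```python
# Hedged sketch of group-wise symmetric INT4 weight-only quantization.
# Assumptions (not from the paper): group size 32, symmetric scaling,
# and a weight length divisible by the group size.
import numpy as np

def quantize_int4(weights: np.ndarray, group_size: int = 32):
    """Quantize a 1-D FP weight array to INT4 values in [-8, 7], per group."""
    w = weights.reshape(-1, group_size)
    # One FP scale per group: map the largest magnitude onto the INT4 range.
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid divide-by-zero
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_int4(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate FP weights; activations stay FP (weight-only)."""
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_int4(w)
print("max abs error:", np.abs(w - dequantize_int4(q, s)).max())
```

In a weight-only scheme like this, only the stored weights are compressed to 4 bits; they are dequantized (or consumed by specialized kernels) at matmul time, which is what cuts the memory capacity and bandwidth costs the abstract highlights.
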

Titles in the series (100)

Keeping you up to date with the latest trends and best-performing architectures in this fast-evolving field of computer science. Selecting papers by comparative results, citations, and influence, we educate you on the latest research. Consider supporting us on Patreon.com/PapersRead for feedback and ideas.