56. Eldar Kurtic - Efficient Inference through sparsity and quantization - Part 1/2

Author: Manuel Pasieka | June 7, 2024 | Duration: 51:59

Hello and welcome back to the AAIP


If you are an active machine learning engineer or simply interested in large language models, I am sure you have seen the discussions around quantized models and all kinds of new frameworks that have appeared recently and achieve astonishing inference performance of LLMs on consumer devices.


If you are curious how modern large language models with their billions of parameters can run on a simple laptop or even an embedded device, then this episode is for you.


Today I am talking to Eldar Kurtic, researcher in the Alistarh group at IST Austria in Lower Austria and senior research engineer at the American startup Neural Magic.


Eldar's research focuses on optimizing inference of deep neural networks. On the show he is going to explain in depth how sparsity and quantization work, and how they can be applied to accelerate inference of big models, like LLMs, on devices with limited resources.


Because of the length of the interview, I decided to split it into two parts.


This one, the first part, is going to focus on sparsity to reduce model size and enable faster inference by reducing the amount of memory and compute that is needed to store and run models.

The second part is going to focus on quantization as a means to find representations of models with lower numeric precision that require less memory to store and process, while retaining accuracy.
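To make the idea of lower numeric precision concrete, here is a minimal, illustrative sketch (not Neural Magic's actual pipeline) of symmetric int8 quantization: each float32 weight is mapped to an 8-bit integer plus one shared floating-point scale, cutting storage roughly fourfold while keeping the reconstruction close to the original.

```python
import numpy as np

# Toy weight matrix standing in for one layer of a model.
rng = np.random.default_rng(0)
weights = rng.normal(0, 0.1, size=(4, 4)).astype(np.float32)

# Symmetric quantization: map the largest-magnitude weight to 127.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to approximate the original float32 weights.
dequantized = q.astype(np.float32) * scale

# Rounding error per weight is bounded by half the scale.
max_err = np.abs(weights - dequantized).max()
print(q.nbytes, weights.nbytes, max_err)
```

Real quantization schemes refine this basic recipe with per-channel or per-group scales and calibration data, but the storage-versus-precision trade-off is the same.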


In this first part about sparsity, Eldar will explain fundamental concepts like structured and unstructured sparsity: how and why they work, and why performant inference with unstructured sparsity is currently achievable mostly on CPUs and far less so on GPUs.
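The distinction between the two flavors can be sketched with plain magnitude pruning on a toy weight matrix (the names and 50% thresholds below are illustrative choices, not from the episode):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))

# Unstructured: zero out the 50% of individual weights with the smallest
# magnitude, anywhere in the matrix. Flexible and accuracy-friendly, but
# the irregular zero pattern is hard for dense GPU kernels to exploit.
threshold = np.quantile(np.abs(W), 0.5)
W_unstructured = np.where(np.abs(W) >= threshold, W, 0.0)

# Structured: drop the 50% of whole rows (think neurons or channels)
# with the smallest L2 norm. Coarser, and usually costlier in accuracy,
# but the result is a smaller dense matrix any hardware runs faster.
row_norms = np.linalg.norm(W, axis=1)
W_structured = W[row_norms >= np.median(row_norms)]

print(W_unstructured.shape, W_structured.shape)
```

Note that the unstructured matrix keeps its original shape; the zeros save nothing unless the storage format and the compute kernels are sparsity-aware, which is exactly the hardware question discussed in the episode.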


We will discuss how to achieve remarkable numbers of up to 95% unstructured sparsity while retaining model accuracy, but also why it is difficult to leverage this, in quotes, "reduction in model size" to actually accelerate model inference.
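Why a 95%-sparse model does not automatically mean 95% less memory or 20x faster inference can be seen in a small sketch: a sparse matrix stored densely occupies its full footprint, and only a compressed format (here a minimal CSR-style encoding, chosen for illustration) realizes the size reduction; dense matmul kernels cannot consume that format directly, which is why speedups require dedicated sparse kernels.

```python
import numpy as np

# A 100x100 matrix with ~95% of entries zeroed out.
rng = np.random.default_rng(2)
W = rng.normal(size=(100, 100))
W[rng.random(W.shape) < 0.95] = 0.0

# Stored densely, the zeros change nothing: still 100*100*8 bytes.
dense_bytes = W.nbytes

# Minimal CSR encoding: nonzero values, their column indices, and
# per-row pointers into the value array.
values = W[W != 0]
cols = np.nonzero(W)[1].astype(np.int32)
indptr = np.concatenate([[0], np.cumsum((W != 0).sum(axis=1))]).astype(np.int32)
csr_bytes = values.nbytes + cols.nbytes + indptr.nbytes

print(dense_bytes, csr_bytes)
```

The compressed form is a small fraction of the dense one, but the indirection through `cols` and `indptr` breaks the regular memory access patterns GPUs are built for, which is the crux of the CPU-versus-GPU discussion in this episode.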


Enjoy.


## AAIP Community

Join our Discord server and ask guests directly or discuss related topics with the community.

https://discord.gg/5Pj446VKNU


### References

Eldar Kurtic: https://www.linkedin.com/in/eldar-kurti%C4%87-77963b160/

Neural Magic: https://neuralmagic.com/

IST Austria Alistarh Group: https://ist.ac.at/en/research/alistarh-group/


