57. Eldar Kurtic - Efficient Inference through sparsity and quantization - Part 2/2

Author: Manuel Pasieka June 25, 2024 Duration: 46:38

Hello and welcome back to the AAIP


This is the second part of my interview with Eldar Kurtic about his research on how to optimize inference of deep neural networks.


In the first part of the interview, we focused on sparsity and how high unstructured sparsity can be achieved without losing model accuracy on CPUs and, in part, on GPUs.


In this second part of the interview, we are going to focus on quantization. Quantization reduces model size by representing the model in numeric formats with lower precision while retaining model performance. For example, a model that has been trained in a standard 32-bit floating point representation is, during post-training quantization, converted to a representation that uses only 8 bits, reducing the model size to one fourth.
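To make the idea concrete, here is a minimal sketch of symmetric round-to-nearest post-training quantization in NumPy. The weight matrix and the per-tensor scale are illustrative assumptions, not the specific method discussed in the episode:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical FP32 weight matrix standing in for one layer of a trained model.
w = rng.standard_normal((256, 256)).astype(np.float32)

# Symmetric post-training quantization: map [-max|w|, +max|w|]
# onto the int8 range [-127, 127] with a single per-tensor scale.
scale = float(np.abs(w).max()) / 127.0
w_int8 = np.round(w / scale).astype(np.int8)

# At inference time the weights are dequantized (or consumed by int8 kernels directly).
w_dequant = w_int8.astype(np.float32) * scale

# Storage drops from 4 bytes to 1 byte per weight: a 4x reduction.
print(w.nbytes // w_int8.nbytes)  # 4
```

The rounding error per weight is bounded by half a quantization step (`scale / 2`), which is why well-behaved weight distributions survive 8-bit quantization with little accuracy loss.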


We will discuss how current quantization methods can quantize model weights down to 4 bits while retaining most of the model's performance, and why doing the same for the model's activations is much trickier.
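Four-bit weight quantization typically uses small groups of weights, each with its own scale, so that a single outlier cannot blow up the precision of an entire tensor. The sketch below shows naive round-to-nearest group quantization; the group size and symmetric [-7, 7] range are illustrative assumptions, and real 4-bit methods are more sophisticated than this:

```python
import numpy as np

def quantize_4bit_groups(w, group_size=64):
    """Weight-only 4-bit quantization with one scale per group (illustrative)."""
    flat = w.reshape(-1, group_size)
    # Symmetric int4 range [-7, 7]; epsilon guards against all-zero groups.
    scales = np.maximum(np.abs(flat).max(axis=1, keepdims=True) / 7.0, 1e-8)
    q = np.clip(np.round(flat / scales), -7, 7).astype(np.int8)
    return q, scales

def dequantize(q, scales, shape):
    return (q.astype(np.float32) * scales).reshape(shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 128)).astype(np.float32)
q, s = quantize_4bit_groups(w)
w_hat = dequantize(q, s, w.shape)
```

Activations are harder precisely because they are produced at runtime and often contain large outliers, so their scales cannot be calibrated offline as cleanly as weight scales can.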


Eldar will explain how current GPU architectures create two different types of bottlenecks: memory-bound and compute-bound scenarios. In memory-bound situations, most of the inference time is spent transferring model weights from memory rather than computing. It is exactly in these situations that quantization has its biggest impact, since reducing the model's size directly accelerates inference.
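A back-of-the-envelope calculation shows why token-by-token generation is usually memory bound. The hardware numbers below are rough assumptions (in the ballpark of a modern datacenter GPU), not figures from the episode:

```python
# Assumed hardware: ~2 TB/s memory bandwidth, ~312 TFLOP/s peak FP16 compute.
bandwidth = 2.0e12      # bytes/s
peak_flops = 312e12     # FLOP/s

params = 7e9            # assumed 7B-parameter model
bytes_per_weight = 2.0  # FP16 storage

# Generating one token reads every weight once and does roughly
# 2 FLOPs (one multiply, one add) per weight.
t_memory = params * bytes_per_weight / bandwidth
t_compute = 2 * params / peak_flops

print(f"memory: {t_memory * 1e3:.2f} ms, compute: {t_compute * 1e3:.3f} ms")
```

With these assumptions the memory time dominates by orders of magnitude, so halving `bytes_per_weight` (e.g. going from FP16 to INT8) roughly halves per-token latency, which is the acceleration quantization buys in memory-bound scenarios.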


Enjoy.


## AAIP Community

Join our Discord server and ask guests directly or discuss related topics with the community.

https://discord.gg/5Pj446VKNU


### References

Eldar Kurtic: https://www.linkedin.com/in/eldar-kurti%C4%87-77963b160/

Neural Magic: https://neuralmagic.com/

IST Austria Alistarh Group: https://ist.ac.at/en/research/alistarh-group/


