57. Eldar Kurtic - Efficient Inference through sparsity and quantization - Part 2/2

Author: Manuel Pasieka June 25, 2024 Duration: 46:38

Hello and welcome back to the AAIP


This is the second part of my interview with Eldar Kurtic about his research on how to optimize inference of deep neural networks.


In the first part of the interview, we focused on sparsity and how high unstructured sparsity can be achieved without losing model accuracy on CPUs and, in part, on GPUs.


In this second part of the interview, we are going to focus on quantization. Quantization tries to reduce model size by representing the model in numeric formats with less precision while retaining model performance. For example, a model that has been trained in the standard 32-bit floating-point representation is, during post-training quantization, converted to a representation that uses only 8 bits, reducing the model size to one quarter.
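As a rough illustration of the idea (a minimal sketch, not the specific method discussed in the episode), symmetric post-training quantization of a weight tensor to 8 bits can look like this:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude onto 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# int8 storage is one quarter of float32: 1 byte per weight instead of 4
```

Each weight is now stored in 1 byte instead of 4, and the rounding error per weight is at most half a quantization step (`scale / 2`).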


We will discuss how current quantization methods can be applied to quantize model weights down to 4 bits while retaining most of the model's performance, and why doing the same with the model's activations is much trickier.
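Modern weight-only schemes usually keep 4-bit quantization accurate by storing one scale per small group of weights rather than per tensor. A simplified sketch of that idea (an illustration, not the exact algorithm from the episode):

```python
import numpy as np

def quantize_int4_grouped(weights, group_size=8):
    """Asymmetric 4-bit quantization with one scale/offset per weight group."""
    flat = weights.reshape(-1, group_size)
    lo = flat.min(axis=1, keepdims=True)
    hi = flat.max(axis=1, keepdims=True)
    scale = (hi - lo) / 15.0                  # 4 bits -> 16 levels (0..15)
    scale = np.where(scale == 0, 1.0, scale)  # guard against constant groups
    q = np.clip(np.round((flat - lo) / scale), 0, 15).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    return (q.astype(np.float32) * scale + lo).reshape(-1)

w = np.random.randn(64).astype(np.float32)
q, scale, lo = quantize_int4_grouped(w)
w_hat = dequantize(q, scale, lo)
```

Because each group has its own scale, a single outlier weight only degrades the precision of its own group instead of the whole tensor, which is one reason 4-bit weights work far better than 4-bit activations, whose outliers appear at runtime and cannot be calibrated away as easily.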


Eldar will explain how current GPU architectures create two different types of bottleneck: memory-bound and compute-bound scenarios. In memory-bound situations, most of the inference time is spent transferring model weights from memory. It is exactly here that quantization has its biggest impact, because reducing the model's size directly accelerates inference.
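A back-of-the-envelope way to see this is a roofline-style comparison of a workload's arithmetic intensity against the hardware's balance point. The numbers below (model size, FLOP rate, bandwidth) are illustrative assumptions, not figures from the episode:

```python
def bottleneck(flops, bytes_moved, peak_flops, peak_bandwidth):
    """Roofline-style check: compare the workload's FLOPs per byte
    to the FLOPs per byte the hardware can sustain."""
    intensity = flops / bytes_moved        # FLOPs per byte of the workload
    balance = peak_flops / peak_bandwidth  # FLOPs per byte of the hardware
    return "compute bound" if intensity > balance else "memory bound"

# Batch-1 LLM decoding: each generated token reads every weight once,
# at roughly 2 FLOPs per weight per token (assumed 7B-parameter model).
params = 7e9
flops_per_token = 2 * params
peak_flops = 300e12  # assumed ~300 TFLOP/s accelerator
peak_bw = 2e12       # assumed ~2 TB/s memory bandwidth

fp16 = bottleneck(flops_per_token, params * 2.0, peak_flops, peak_bw)  # 2 bytes/weight
int4 = bottleneck(flops_per_token, params * 0.5, peak_flops, peak_bw)  # 0.5 bytes/weight
```

Both cases come out memory bound, but the 4-bit model moves a quarter of the bytes per token, so each decoding step finishes roughly four times faster; this is the regime where quantization pays off most.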


Enjoy.


## AAIP Community

Join our Discord server and ask guests directly or discuss related topics with the community.

https://discord.gg/5Pj446VKNU


### References

Eldar Kurtic: https://www.linkedin.com/in/eldar-kurti%C4%87-77963b160/

Neural Magic: https://neuralmagic.com/

IST Austria Alistarh Group: https://ist.ac.at/en/research/alistarh-group/


Hosted by Manuel Pasieka, the Austrian Artificial Intelligence Podcast offers a grounded, local perspective on a global phenomenon. Instead of abstract theorizing, each conversation focuses on the tangible impact and practical applications of AI within Austria's unique ecosystem. You'll hear from a diverse range of guests-researchers, entrepreneurs, policymakers, and creatives-who are actively shaping this landscape, discussing both the remarkable opportunities and the nuanced challenges specific to the region. The discussions delve into how these technologies are being integrated into Austrian industry, academia, and society, moving beyond hype to examine real-world implementation and ethical considerations. This podcast serves as an essential audio forum for anyone in Austria, or with an interest in the European tech scene, looking to understand how artificial intelligence is evolving right here. It’s about the people behind the algorithms and the local stories within a global revolution. For those engaged with the content, questions and suggestions are always welcome at the provided email address.