44. Andreas Stephan - University of Vienna - Weak Supervision in NLP

Author: Manuel Pasieka December 27, 2023 Duration: 49:54

# Summary

I am sure that most of you are familiar with the training paradigms of supervised and unsupervised learning: in supervised learning one has a label for each training datapoint, while in the unsupervised setting there are no labels at all.

Although there can be exceptions, everyone is well advised to perform supervised training whenever possible. But where do you get labels for your training data if traditional labeling strategies, like manual annotation, are not feasible?

Well, often you might not have perfect labels for your data, but you do have some idea of what those labels might be.

And this, my dear listener, is exactly the area of weak supervision.
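To make the idea concrete, here is a minimal sketch of the classic labeling-function style of weak supervision (popularized by systems like Snorkel): instead of hand-annotating every example, you write several noisy heuristics and combine their votes into a training label. The function names, keyword rules, and sentiment task below are invented for illustration, not taken from the episode.

```python
# Minimal weak supervision sketch: noisy labeling functions + majority vote.
from collections import Counter

ABSTAIN = None  # a labeling function may decline to vote
POS, NEG = 1, 0

def lf_positive_words(text):
    # Heuristic: positive keywords suggest a positive label.
    return POS if any(w in text.lower() for w in ("great", "excellent")) else ABSTAIN

def lf_negative_words(text):
    return NEG if any(w in text.lower() for w in ("terrible", "awful")) else ABSTAIN

def lf_exclamation(text):
    # Very weak signal: enthusiastic punctuation leans positive.
    return POS if text.count("!") >= 2 else ABSTAIN

LABELING_FUNCTIONS = [lf_positive_words, lf_negative_words, lf_exclamation]

def weak_label(text):
    """Majority vote over the non-abstaining labeling functions."""
    votes = [v for lf in LABELING_FUNCTIONS if (v := lf(text)) is not ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

reviews = [
    "This movie was great, truly excellent!!",
    "Terrible plot, awful acting.",
    "It exists.",
]
print([weak_label(r) for r in reviews])  # → [1, 0, None]
```

Real systems replace the majority vote with a label model that estimates each labeling function's accuracy and correlations, which is exactly where the noise-separation research discussed in this episode comes in.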

Today on the show I am talking to Andreas Stephan, who is doing his PhD in Natural Language Processing at the University of Vienna in the Digital Text Sciences group led by Professor Benjamin Roth.

Andreas will explain his recent research in the area of weak supervision, as well as how Large Language Models can be used as weak supervision sources for image classification tasks.
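The LLM-as-weak-labeler idea can be sketched like this: prompt a language model about an example (here, an image's caption) and treat its answer as one noisy labeling source that may abstain. The prompt wording, class names, and `fake_llm` stub below are all hypothetical; in practice the stub would be a call to a real model API.

```python
# Hedged sketch: an LLM used as a single noisy labeling source for
# image classification, queried via the image's caption. The LLM is
# stubbed out so the example is self-contained.
CLASSES = ["cat", "dog"]

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call: answers with whichever class name
    # appears verbatim in the caption, simulating a zero-shot classifier.
    caption = prompt.split("Caption:")[-1].lower()
    for c in CLASSES:
        if c in caption.split():
            return c
    return "unknown"

def llm_weak_label(caption: str):
    prompt = (
        f"Which of these classes best describes the image: {CLASSES}?\n"
        f"Caption: {caption}"
    )
    answer = fake_llm(prompt).strip().lower()
    # Return a class index, or None (abstain) if the answer is off-list.
    return CLASSES.index(answer) if answer in CLASSES else None

print(llm_weak_label("A dog catching a frisbee"))  # → 1
print(llm_weak_label("Sunset over mountains"))     # → None
```

Treated this way, the LLM's votes can be combined with other weak sources just like any hand-written labeling function.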


# TOC

00:00:00 Beginning

00:01:38 Weak supervision a short introduction (by me)

00:04:17 Guest Introduction

00:08:48 What is weak supervision?

00:16:02 Paper: SepLL: Separating Latent Class Labels from Weak Supervision Noise

00:26:28 Benefits of priors to guide model training

00:29:38 Data quality & Data Quantity in training foundation models

00:36:10 Using LLMs for weak supervision

00:46:51 Future of weak supervision research


# Sponsors

- Quantics: Supply Chain Planning for the new normal - the never normal - https://quantics.io/

- Belichberg GmbH: We do digital transformations as your innovation partner - https://belichberg.com/


# References

- Andreas Stephan - https://andst.github.io/

- Stephan et al. "SepLL: Separating Latent Class Labels from Weak Supervision Noise" (2022) - https://arxiv.org/pdf/2210.13898.pdf

- Gunasekar et al. "Textbooks are all you need" (2023) - https://arxiv.org/abs/2306.11644

- Introduction into weak supervision: https://dawn.cs.stanford.edu/2017/07/16/weak-supervision/


