GPT Reviews
This episode dives into Google's Gemma 2, which claims to outperform GPT-3.5 while tackling responsible AI practices. We explore Black Forest Labs' Flux model, featuring 12 billion parameters and tailored versions for various users. Olivia sheds light on the ethical concerns surrounding the resurgence of pseudoscience in machine learning, particularly physiognomy. Lastly, Belinda reviews critical research on AI safety, advocating for clearer metrics to prevent misleading claims about safety advancements.
Contact: sergi@earkind.com
Timestamps:
00:34 Introduction
01:37 Google's tiny AI model bests GPT-3.5
02:48 Announcing Flux by Black Forest Labs: The Next Leap in Text-to-Image Models
04:28 The reanimation of pseudoscience in machine learning and its ethical repercussions
06:06 Fake sponsor
08:04 MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
09:55 Spectra: A Comprehensive Study of Ternary, Quantized, and FP16 Language Models
11:41 Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?
13:33 Outro