EmerGPT

Author: CNA | January 13, 2023 | Duration: 36:05

Andy and Dave discuss the latest in AI and autonomy news and research, including a report from Human-Centered AI that assesses the progress (or lack thereof) in implementing the three pillars of America's strategy for AI innovation. The Department of Energy is offering a total of $33M for research on leveraging AI/ML for nuclear fusion. China's Navy appears to have launched a naval mothership for aerial drones. China is also set to introduce regulation on "deepfakes," requiring users to give consent and prohibiting the technology's use for fake news, among many other things. Xiamen University and other researchers publish a "multidisciplinary open peer review dataset" (MOPRD), aiming to provide ways to automate the peer review process. Google executives issue a "code red" for Google's search business over the success of OpenAI's ChatGPT. New York City schools have blocked access to ChatGPT for students and teachers unless it involves the study of the technology itself. Microsoft plans to launch a version of Bing that integrates ChatGPT into its answers. And the International Conference on Machine Learning bans authors from using AI tools like ChatGPT to write scientific papers (though it still allows the use of such systems to "polish" writing). In February, an AI from DoNotPay will likely be the first to represent a defendant in court, telling the defendant what to say and when. In research, the UCLA Departments of Psychology and Statistics demonstrate that analogical reasoning can emerge from large language models such as GPT-3, which show a strong capacity for abstract pattern induction. Research from Google Research, Stanford, UNC Chapel Hill, and DeepMind shows that certain abilities emerge only in large language models with a sufficient number of parameters and a large enough dataset. And finally, John H. Miller publishes Ex Machina through the Santa Fe Institute Press, examining the topic of Coevolving Machines and the Origins of the Social Universe.

https://www.cna.org/our-media/podcasts/ai-with-ai

Tune into AI with AI: Artificial Intelligence with Andy Ilachinski for a grounded and insightful conversation about a field that often feels like science fiction. Host Andy Ilachinski, alongside David Broyles, breaks down complex topics without the hype, focusing on what the latest breakthroughs in AI and autonomy actually mean. This isn't just a technical deep dive; each episode carefully considers the real-world ramifications, particularly how these technologies intersect with global security and military strategy. You'll hear clear explanations of emerging research, thoughtful analysis of current events, and discussions that connect laboratory advances to their broader societal impact. Produced by CNA, this podcast serves as a vital resource for anyone looking to move beyond headlines and understand the forces shaping our future. The perspectives offered are those of the hosts and commentators, providing a focused lens on a rapidly evolving landscape. For listeners curious about the intersection of technology and policy, this series offers consistently substantive content that clarifies the present while thoughtfully examining the path ahead.
Author: CNA | Language: English | Episodes: 100

AI with AI: Artificial Intelligence with Andy Ilachinski
Podcast Episodes
Someday My 'Nets Will Code

Duration: 45:01
Information about the AI Event Series mentioned in this episode: https://twitter.com/CNA_org/status/1400808135544213505?s=20 To RSVP contact Larry Lewis at LewisL@cna.org. Andy and Dave discuss the latest in AI news, inc…
Just the Tip of the Skyborg

Duration: 34:55
Information about the AI Event Series mentioned in this episode: https://twitter.com/CNA_org/status/1400808135544213505?s=20 To RSVP contact Larry Lewis at LewisL@cna.org. Andy and Dave discuss the latest in AI news, inc…
Rebroadcast: A.I. in the Sky

Duration: 36:08
Andy and Dave welcome Arthur Holland Michel to the podcast for a discussion on predictability and understandability in military AI. Arthur is an Associate Researcher at the United Nations Institute for Disarmament Resear…
Doggone

Duration: 39:35
Andy and Dave discuss the latest in AI news, including a new AI website from the White House at AI.gov, which provides a variety of resources on recent reports, news, key US agencies, and other information. The U.S. Navy…
Superhumans

Duration: 15:06
Andy's out this week, but Dave recently had a chance to do a series of interviews on a paper he wrote, Superhumans: Implications of Genetic Engineering and Human-Centered Bioengineering. So this week's podcast will…
Mnemosyne That Before

Duration: 37:25
Andy and Dave discuss the latest AI news and research, including a blog post from the Federal Trade Commission that businesses can and will be held accountable for the fairness of their algorithms. A bipartisan coalition…
Xen and the Art of Motorcell Maintenance

Duration: 40:14
Andy and Dave discuss the latest in AI news, including the European Commission's proposal for the regulation of AI. A report in Nature Medicine examines the limitations of the evaluation process for medical devices using…
Donkey Pong

Duration: 39:14
Andy and Dave discuss the latest in AI news, including the National Intelligence Council's 7th Edition Global Trends 2040 Report, which sprinkles the importance of AI and ML throughout future trends. A BuzzFeed report cl…
Xenomania

Duration: 37:19
Andy and Dave discuss the latest in AI news, including the resignation of Samy Bengio from Google Brain, which had fired ethicists Timnit Gebru in December and Margaret Mitchell in February. The Joint AI Center releases its request for prop…
Guise of the Machines

Duration: 36:28
Andy and Dave discuss the latest in AI news, including a report that systematically examined 62 studies on COVID-19 ML methods (from a pool of 2,200+ studies) and found that none of the models were of potential clinical u…