Someday My 'Nets Will Code

Author: CNA | June 11, 2021 | Duration: 45:01

Information about the AI Event Series mentioned in this episode: https://twitter.com/CNA_org/status/1400808135544213505?s=20

To RSVP contact Larry Lewis at LewisL@cna.org.

Andy and Dave discuss the latest in AI news, including a report on Libya from the UN Security Council's Panel of Experts, which notes the March 2020 use of the "fully autonomous" Kargu-2 to engage retreating forces; it's unclear whether any person died in the incident, and many other important details are missing. The Biden Administration releases its FY22 DoD Budget, which increases the RDT&E request, including $874M in AI research. NIST proposes an evaluation model for user trust in AI and seeks feedback; the model includes definitions for terms such as reliability and explainability. EleutherAI has provided an open-source version of GPT-3, called GPT-Neo, which trains on "the Pile," an 825GB dataset, and comes in 1.3B and 2.7B parameter versions. CSET takes a hands-on look at how transformer models such as GPT-3 can aid disinformation, with their findings published in Truth, Lies, and Automation: How Language Models Could Change Disinformation. IBM introduces CodeNet, a project aimed at teaching AI to code, with a large dataset containing 500 million lines of code across 55 legacy and active programming languages. In a separate effort, researchers at Berkeley, Chicago, and Cornell publish results on using transformer models as "code generators," creating a benchmark (the Automated Programming Progress Standard) to measure progress; they find that GPT-Neo could pass approximately 15% of introductory problems, with GPT-3's 175B parameter model performing much worse (presumably due to the inability to fine-tune the larger model). The CNA Russia Studies Program releases an extensive report on AI and Autonomy in Russia, capping off their biweekly newsletters on the topic. Arthur Holland Michel publishes Known Unknowns: Data Issues and Military Autonomous Systems, which clearly identifies the known issues in autonomous systems that cause problems. The short story of the week comes from Asimov in 1956, with "Someday."
And the Naval Institute Press publishes a collection of essays in AI at War: How Big Data, AI, and Machine Learning Are Changing Naval Warfare. Finally, Diana Gehlhaus from Georgetown's Center for Security and Emerging Technology (CSET) joins Andy and Dave to preview an upcoming event, "Requirements for Leveraging AI."

Interview with Diana Gehlhaus: 33:32


Tune into AI with AI: Artificial Intelligence with Andy Ilachinski for a grounded and insightful conversation about a field that often feels like science fiction. Host Andy Ilachinski, alongside David Broyles, breaks down complex topics without the hype, focusing on what the latest breakthroughs in AI and autonomy actually mean. This isn't just a technical deep dive; each episode carefully considers the real-world ramifications, particularly how these technologies intersect with global security and military strategy. You'll hear clear explanations of emerging research, thoughtful analysis of current events, and discussions that connect laboratory advances to their broader societal impact. Produced by CNA, this podcast serves as a vital resource for anyone looking to move beyond headlines and understand the forces shaping our future. The perspectives offered are those of the hosts and commentators, providing a focused lens on a rapidly evolving landscape. For listeners curious about the intersection of technology and policy, this series offers consistently substantive content that clarifies the present while thoughtfully examining the path ahead.
Author: CNA | Language: English | Episodes: 100

AI with AI: Artificial Intelligence with Andy Ilachinski
Podcast Episodes
All Good Things

Duration: 28:29
For the final (for now?) episode of AI with AI, Andy and Dave discuss the latest in AI news and research, including a political declaration from the US Department of State on the responsible military use of AI and autono…
Up, Up, and Autonomy!

Duration: 37:19
Andy and Dave discuss the latest in AI news and research, including the update of the Department of Defense Directive 3000.09 on Autonomy in Weapon Systems. NIST releases the first version of its AI Risk Management Frame…
Dr. GPT

Duration: 36:54
Andy and Dave discuss the latest in AI news and research, starting with an AI education program that teaches US Air Force personnel the fundamentals of AI across three roles: leaders, developers, and users. The US E…
EmerGPT

Duration: 36:05
Andy and Dave discuss the latest in AI and autonomy news and research, including a report from Human-Centered AI that assesses progress (or lack thereof) in the implementation of the three pillars of America's strategy for…
The Kwicker Man

Duration: 32:18
Andy and Dave discuss the latest in AI news and research, including the release of the US National Defense Authorization Act for FY2023, which includes over 200 mentions of "AI" and many more requirements for the Departm…
Battledrone Galactica

Duration: 36:15
Andy and Dave discuss the latest in AI news and research, including the introduction of a lawsuit against Microsoft, GitHub, and OpenAI for allegedly violating copyright law by reproducing open-source code using AI. The T…
The AI Who Loved Me

Duration: 30:44
Andy and Dave once again welcome Sam Bendett, research analyst with CNA's Russia Studies Program, to the podcast to discuss the latest unmanned and autonomous systems news from the Russia-Ukraine conflict. The group discuss…
Drawing Outside the Box

Duration: 33:19
Andy and Dave discuss the latest in AI-related news and research, including a bill from the EU that will make it easier for people to sue AI companies for harm or damages caused by AI-related technologies. The US Office…
Keep Watching the AIs!

Duration: 36:25
Andy and Dave discuss the latest in AI news and research, starting with a publication from the UK's National Cyber Security Centre, providing a set of security principles for developers implementing machine learning mode…