EP 560: Inside Multi-Agentic AI: 3 Critical Risks and How to Navigate Them
Podcast: Everyday AI Podcast – An AI and ChatGPT Podcast
Published On: Thu Jul 03 2025
Description: Multi-agentic AI is rewriting the future of work... but are we racing ahead without checking for warning signs? Microsoft's new agent systems can split up work, make choices, and act on their own. The possibilities? Massive. But it's not without risks, which is why you need to listen to Sarah Bird. She's the Chief Product Officer of Responsible AI at Microsoft and is constantly building out safer agentic AI. So what's really at stake when AIs start making decisions together? And how do you actually stay in control? We're pulling back the curtain on the three critical risks of multi-agentic AI and unveiling the playbook to navigate them safely.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the conversation here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email the Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Responsible AI: Evolution and Challenges
Agentic AI's Ethical Implications
Multi-Agentic AI Responsibility Shift
Microsoft's AI Governance Strategies
Testing Multi-Agentic Risks and Patterns
Agentic AI: Future Workforce Skills
Observability in Multi-Agentic Systems
Three Risk Categories in AI Implementation

Timestamps:
00:00 Evolving Challenges in Responsible AI
05:50 Agent Technology: Benefits and Risks
09:27 Complex System Governance and Observability
12:26 AI Monitoring and Human Intervention
15:14 Essential Testing for Trust Building
19:43 Securing AI Agents with Entra
22:06 Exploring Human-AI Interface Innovation
26:06 AI Workforce Integration Challenges
28:22 AI's Transformative Impact on Jobs

Keywords: Agentic AI, multi-agentic AI, responsible AI, generative AI, Microsoft Build conference, AI governance, AI ethics, AI systems, AI risk, AI mitigation, AI tools, human in the loop, Foundry observability, AI testing, system security, AI monitoring, user intent, AI capability, prompt injection, Copilot, AI orchestration, AI deployment, system governance, Entra agent ID, AI education, AI upskilling, AI workforce integration, systemic risk, AI misuse, AI malfunctions, AI systemic risk, AI-powered solutions, AI development, AI innovation, AI technology, AI security measures.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info.)

Start Here ▶️
Not sure where to start when it comes to AI? Start with our Start Here Series. You can listen to the first drop -- Episode 691 -- or get free access to our Inner Circle community and all episodes: StartHereSeries.com. Also, here's a link to the entire series on a Spotify playlist.