
The safety panorama is present process one more main shift, and nowhere was this extra evident than at Black Hat USA 2025. As synthetic intelligence (particularly the agentic selection) turns into deeply embedded in enterprise techniques, it’s creating each safety challenges and alternatives. Right here’s what safety professionals have to learn about this quickly evolving panorama.
AI techniques—and notably the AI assistants which have change into integral to enterprise workflows—are rising as prime targets for attackers. In one of the crucial attention-grabbing and scariest shows, Michael Bargury of Zenity demonstrated beforehand unknown “0click” exploit strategies affecting main AI platforms together with ChatGPT, Gemini, and Microsoft Copilot. These findings underscore how AI assistants, regardless of their sturdy safety measures, can change into vectors for system compromise.
AI safety presents a paradox: As organizations broaden AI capabilities to reinforce productiveness, they need to essentially improve these instruments’ entry to delicate knowledge and techniques. This enlargement creates new assault surfaces and extra complicated provide chains to defend. NVIDIA’s AI purple group highlighted this vulnerability, revealing how massive language fashions (LLMs) are uniquely vulnerable to malicious inputs, and demonstrated a number of novel exploit methods that reap the benefits of these inherent weaknesses.
Nevertheless, it’s not all new territory. Many conventional safety ideas stay related and are, the truth is, extra essential than ever. Nathan Hamiel and Nils Amiet of Kudelski Safety confirmed how AI-powered growth instruments are inadvertently reintroducing well-known vulnerabilities into fashionable functions. Their findings recommend that fundamental utility safety practices stay basic to AI safety.
Wanting ahead, menace modeling turns into more and more crucial but additionally extra complicated. The safety group is responding with new frameworks designed particularly for AI techniques corresponding to MAESTRO and NIST’s AI Threat Administration Framework. The OWASP Agentic Safety Prime 10 undertaking, launched throughout this 12 months’s convention, offers a structured strategy to understanding and addressing AI-specific safety dangers.
For safety professionals, the trail ahead requires a balanced strategy: sustaining sturdy fundamentals whereas creating new experience in AI-specific safety challenges. Organizations should reassess their safety posture by way of this new lens, contemplating each conventional vulnerabilities and rising AI-specific threats.
The discussions at Black Hat USA 2025 made it clear that whereas AI presents new safety challenges, it additionally affords alternatives for innovation in protection methods. Mikko Hypponen’s opening keynote offered a historic perspective on the final 30 years of cybersecurity developments and concluded that safety shouldn’t be solely higher than it’s ever been however poised to leverage a head begin in AI utilization. Black Hat has a method of underscoring the explanations for concern, however taken as a complete, this 12 months’s shows present us that there are additionally many causes to be optimistic. Particular person success will depend upon how nicely safety groups can adapt their present practices whereas embracing new approaches particularly designed for AI techniques.
