AI developers urged for temporary halt in AI integration due to unsecured tools leaving teams in chaos
In the rapidly evolving digital landscape, security teams are increasingly focusing on proactive, AI-enhanced defenses to combat the potential of generative AI-driven attacks. This shift comes as security leaders consider changing their approach to cybersecurity defense strategies, recognising the growing threat posed by AI-empowered cybercriminals.
According to recent findings, the resolution rate for serious LLM pentest findings is the lowest across all test types conducted by Cobalt. This highlights the need for a more effective and efficient response to these threats.
One of the key strategies being adopted by security teams is the use of AI-powered threat detection and prevention. By utilising generative AI tools, teams can automatically analyse threats, predict potential breaches, and generate real-time countermeasures. This dynamic and predictive capability helps stay ahead of AI-enhanced cybercriminal tactics such as AI-crafted phishing, malware automation, and sophisticated social engineering schemes.
Another important strategy is hardening AI systems themselves. Since AI models can be vulnerable to prompt attacks, data exfiltration methods, and other security risks, defenders are embedding security at every layer of AI system development and deployment. This involves rigorous security assessments, continuous learning to adapt to novel threats, and layered defenses around AI applications.
Human oversight and ethical use of AI are also crucial in this context. Although AI can automate threat detection and generate simulated phishing for training, human supervision remains essential to manage the double-edged nature of generative AI, which can also be used by attackers to craft believable scams or fake identities.
Compliance with regulations like the EU’s DORA is also playing a significant role in enhancing operational resilience and supply chain risk management, influencing proactive cybersecurity practices across sectors.
Incident response and collaboration are also vital. Industry groups such as OWASP have developed incident response guides focused on generative AI threats, emphasising the importance of preparedness, governance, and collaboration among business leaders, AI developers, and cybersecurity communities to build resilient AI ecosystems.
To further improve defences, organisations should continuously monitor AI systems for emerging vulnerabilities and patch them promptly. They should also invest in layered security architectures combining AI-driven and traditional controls, train employees regularly with AI-generated realistic phishing simulations, foster collaboration and information-sharing between AI developers, security teams, and regulators, and prioritise ethical AI use and human oversight to mitigate misuse risks.
Top concerns among all survey respondents included sensitive information disclosure, model poisoning or theft, inaccurate data, and training data leakage. Gunter Ollmann, CTO at Cobalt, states that AI security readiness has been exposed as a fundamental gap due to the rush of LLM adoption. More than seven-in-ten (72%) cited generative AI-related attacks as their top IT risk.
As the threat landscape evolves, it is clear that the foundations of security must evolve in parallel with generative AI, or we risk building tomorrow's innovation on today's outdated safeguards. Gunter Ollmann, CTO at Cobalt, stated that "Threat actors aren't waiting around, and neither can security teams."
The integration of AI technology in cybersecurity is evident, with teams utilizing AI-powered threat detection and prevention to combat AI-driven attacks. This strategy involves automatically analyzing threats, predicting potential breaches, and generating real-time countermeasures, such as those used in AI-crafted phishing.
Recognizing the vulnerability of AI systems to prompt attacks, data exfiltration methods, and other security risks, defenders are embedding security at every layer of AI system development and deployment. This approach includes rigorous security assessments, continuous learning, and layered defenses around AI applications.