
Day Two of RSAC Conference: Emphasis on Emerging Hacking Methods

Panelists debated how hackers could use quantum computing and artificial intelligence to breach data and bypass encryption.



Modern Menaces in the Digital Age: Unveiling AI in Cybercrime

Day two of the RSAC Conference focused on how cybercriminals are exploiting emerging technologies such as AI and quantum computing.

In a series of keynote presentations, Google executives shared their insights on how threat actors, including nation-states, are making use of AI tools.

Sandra Joyce, Google's VP for Threat Intelligence, detailed how APT groups from countries including Iran, China, and North Korea accessed Google's AI service, Gemini, to augment their attacks. The groups were observed conducting reconnaissance, researching vulnerabilities, generating malicious scripts, refining phishing messages, and probing evasion techniques.

Despite appearing threatening on the surface, these activities were relatively benign, Joyce noted, because the attackers did not deploy any fundamentally novel methods with these AI models. AI safety safeguards, according to Joyce, successfully thwarted some APT groups from executing more advanced, AI-amplified research and attacks. On the defensive side, Google's Big Sleep AI agent discovered a previously unknown vulnerability, marking the first instance of an AI agent unearthing an unknown memory-safety issue in widely used software.

While analyzing data on malicious activity on Gemini, Microsoft Copilot, or ChatGPT can reveal valuable clues about attacker behavior, another Google executive offered a crucial caveat during a keynote panel discussion: that type of data has limits.

John 'Four' Flynn, Google DeepMind's VP for Security and Privacy, pointed to the operational security practices of the most dangerous nation-state actors, which tend to keep the industry in the dark about their activities. "I suspect that most adversarial work will likely be carried out on on-prem, open-weight models, or some sort of custom models they're constructing, as there is a matter of visibility," Flynn said. "If you're an attacker, you'll naturally test out all available tools, but if you're performing serious AI activities, it might not be something you do in the open."

Another panelist described how quickly attackers are adapting as AI technology advances, creating new threats. Jade Leung, CTO of the UK AI Security Institute, stressed the urgency of tracking rapidly evolving AI capabilities and their potential national security implications, such as those related to chemical and biological attacks or terrorism.

With AI dominating the security discourse at RSAC Conference, another main stage panel examined the threat quantum computing poses to current encryption practices. The panelists agreed that quantum computers capable of breaking today's encryption are likely more than a decade away, but they warned that nation-states are already taking preemptive action.

Raluca Ada Popa, an associate professor and senior staff research scientist at UC Berkeley and Google DeepMind, described the technique as "harvest now, decrypt later": attackers can intercept and store encrypted, confidential data now and decrypt it later, once quantum computers are ready. Whitfield Diffie, a pioneer of public-key cryptography, proposed employing newer post-quantum encryption algorithms alongside current algorithms like RSA or Diffie-Hellman to safeguard sensitive data. "The pragmatic thing to do is embrace what's called hybrid encryption," Diffie said.
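The hybrid approach Diffie describes is commonly implemented by running a classical key exchange and a post-quantum KEM in parallel, then feeding both shared secrets into a single key-derivation step, so the session key remains safe unless both schemes are broken. The sketch below illustrates only that derivation step; the random byte strings standing in for the two shared secrets, the salt, and the info label are illustrative placeholders, not anything discussed on the panel:

```python
import hashlib
import hmac
import os

def hkdf_sha256(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869, extract-then-expand) using HMAC-SHA256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()   # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                             # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder shared secrets: in a real protocol these would come from a
# classical key exchange (e.g. an elliptic-curve Diffie-Hellman) and a
# post-quantum KEM (e.g. ML-KEM), respectively.
classical_secret = os.urandom(32)
post_quantum_secret = os.urandom(32)

# Hybrid derivation: concatenate both secrets into one KDF input, so
# breaking only one of the two schemes reveals nothing about the key.
session_key = hkdf_sha256(
    salt=b"example-hybrid-salt",
    ikm=classical_secret + post_quantum_secret,
    info=b"example hybrid session key",
)
print(len(session_key))  # 32-byte symmetric key
```

Against "harvest now, decrypt later," the point of combining the inputs is that recorded traffic stays confidential even after a future quantum computer breaks the classical half of the exchange.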



1. The application of AI in cybercrime is a growing concern, particularly in finance, as seen in meticulously orchestrated AI-powered attacks.
2. In cybersecurity, AI is a tool not only for defense but also for offense, as cybercriminals use it to identify and exploit vulnerabilities in data, cloud computing, and technology infrastructure.
3. Rapid advances in AI and related technology, including quantum computing, are driving the evolution of cyberthreats, demanding urgent attention to data security and national security risks such as chemical and biological attacks or terrorism.
