Google’s Bug Bounty Program Pays Off: Researchers Expose Security Gaps in Bard AI and Cloud Console

Unveiling the Invisible: Researchers Unearth Vulnerabilities in Google's AI


Researchers attending Google’s LLM bugSWAT event in Las Vegas uncovered serious security flaws in Bard (since rebranded as Gemini) and Google Cloud Console, earning a combined reward of $50,000. These vulnerabilities could have allowed attackers to steal user-uploaded images, extract the sensitive text inside them, and launch denial-of-service attacks against Google’s backend. This article explores the details of these vulnerabilities and the role of Google’s bug bounty program in securing its AI-powered systems.

The flaws were uncovered by a team of three collaborating researchers: Roni Carta, Justin Gardner, and Joseph Thacker.

3 Security Flaws Discovered at Bug Bounty Event

If exploited, the flaws could have had a significant impact on user privacy and system stability. Three issues stood out:

  • Unauthorized Image Access in Bard

A vulnerability in Bard’s image processing function allowed unauthorized access to user-uploaded images. As researcher Roni Carta explained on his blog, “the flaw granted us access to another user’s images without any permissions or verification process.” An attacker could therefore steal sensitive information stored within images, such as personal documents or screenshots containing confidential data (a sketch of this bug class follows the list).

  • Denial-of-Service Attacks on Google Cloud Console

The researchers also identified a way to launch Denial-of-Service (DoS) attacks against Google Cloud Console. By manipulating the GraphQL API, a core component for data querying, they discovered a method to overload Google’s backend servers with excessive requests. This could have potentially crippled the system’s availability, impacting legitimate users.

  • Exfiltrating Sensitive Text via OCR

The situation becomes even more concerning when Bard’s Optical Character Recognition (OCR) capabilities are factored in. Coupled with OCR, the image access vulnerability could have allowed attackers not only to steal images but also to extract the sensitive text embedded within them. Imagine an attacker gaining access to an image containing a private email or financial notes: a chilling prospect, and, as the second sketch after this list shows, a trivially automatable one.
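To make the first flaw concrete, below is a minimal, self-contained Python sketch of the bug class Carta describes: an insecure direct object reference (IDOR), where an image lookup never checks ownership. All names and data here are hypothetical; this illustrates the pattern, not Bard’s actual code.

```python
# A sketch of the bug class the researchers describe: an insecure direct
# object reference (IDOR). Every name and record here is hypothetical;
# this illustrates the pattern, not Bard's actual code.

IMAGES = {
    "img-1001": {"owner": "alice", "data": b"<alice's screenshot bytes>"},
    "img-1002": {"owner": "bob", "data": b"<bob's tax document bytes>"},
}

def get_image_vulnerable(requesting_user: str, image_id: str) -> bytes:
    # BUG: the image is handed to any caller who knows (or enumerates)
    # its ID; ownership is never verified.
    return IMAGES[image_id]["data"]

def get_image_fixed(requesting_user: str, image_id: str) -> bytes:
    record = IMAGES[image_id]
    # FIX: confirm the requester owns the image before returning it.
    if record["owner"] != requesting_user:
        raise PermissionError("not your image")
    return record["data"]

# "bob" pulling alice's upload succeeds against the vulnerable version:
print(get_image_vulnerable("bob", "img-1001"))  # leaks alice's image
```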
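And once an image has leaked, extracting its text is nearly a one-liner. The sketch below uses the open-source Tesseract engine via the pytesseract package purely for illustration; Bard’s own OCR pipeline is internal to Google, and the filename is hypothetical.

```python
# The exfiltration step, assuming the attacker already holds a victim's
# image. Shown with the open-source Tesseract engine for illustration;
# Bard's actual OCR stack is internal to Google.
from PIL import Image    # pip install pillow
import pytesseract       # pip install pytesseract (plus the tesseract binary)

stolen = Image.open("victim_upload.png")      # hypothetical leaked image
text = pytesseract.image_to_string(stolen)    # OCR: pixels to plain text
print(text)  # any email, password, or account number in the screenshot
```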

Exploiting Google Cloud Console’s GraphQL API

Researchers at the LLM bugSWAT event turned their attention to recently released AI features within the Google Cloud Console. Their focus landed on the GraphQL API, a powerful tool that allows for efficient data querying within applications. By meticulously analyzing the communication between the front-end user interface and the back-end servers, they unearthed a critical vulnerability.

This vulnerability stemmed from a concept known as directive overloading. In essence, GraphQL allows developers to specify directives within queries, which instruct the server on how to process the data. The researchers discovered that the API lacked proper safeguards against malicious actors crafting queries containing an excessive number of directives.

Where a legitimate query might include just a handful of directives, an attacker exploiting this vulnerability could craft a weaponized query containing millions of them. The sheer volume would force Google’s backend servers to expend significant resources processing junk data. That resource drain amounts to a Denial-of-Service (DoS) attack, potentially crippling the entire system and hindering access for legitimate users.

The researchers elaborated that “a malicious actor could easily compute a request with millions of directives and send thousands of requests per minute to hang some part of Google’s Backend.” This highlights the potential severity of the vulnerability, where a single attacker could disrupt service for a vast number of users.
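To get a feel for the scale, here is a short Python sketch of directive overloading against a generic GraphQL endpoint. The query shape, field names, and the repeated @aa directive are all hypothetical; the point is how cheaply an attacker can inflate the server’s parsing work.

```python
# Directive overloading, sketched against a generic GraphQL endpoint.
# Every name below is hypothetical, not Google's actual schema.

# A legitimate query carries a handful of directives at most:
legit_query = '{ project { name @include(if: true) } }'

# An attacker mechanically repeats a directive. Even an undefined
# directive such as @aa must be tokenized and validated on every
# occurrence, so the server's parsing cost grows with the count.
DIRECTIVE_COUNT = 1_000_000
weaponized_query = '{ project { name ' + ('@aa ' * DIRECTIVE_COUNT) + '} }'

# A few bytes of attacker code yield a multi-megabyte query; sending
# thousands of these per minute is the DoS the researchers describe.
print(f'{len(weaponized_query) / 1_000_000:.1f} MB payload')

# Common server-side mitigations: cap query byte size, directive count,
# and parse depth before executing anything.
```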

Google’s Bug Bounty Program

Google’s bug bounty program is designed to encourage security researchers to find and report vulnerabilities in their products. This LLM bugSWAT event specifically targeted flaws in Google’s AI systems, highlighting the growing importance of securing AI-powered technology.

The researchers were rewarded for their findings. Here’s how the bounties broke down:

  • $20,000 bounty: awarded for the vulnerability in Bard that allowed unauthorized access to other users’ images.
  • $1,337 bounty: awarded for the “third-coolest bug” of the event, the image access vulnerability with its potential for information exfiltration (1337 is leetspeak for “elite,” a nod to hacker culture).
  • $1,000 bounty: awarded for the DoS vulnerability in Google Cloud Console.
  • Additional $5,000 bounty: a bonus awarded for the “Coolest Bug of the Event,” which was also the DoS vulnerability in Google Cloud Console.

Fortunately, Google’s bug bounty program gave the researchers a platform to disclose these vulnerabilities responsibly. By working with security researchers, tech giants like Google can identify and address flaws before attackers exploit them, sustaining stability and trust in AI-powered systems. The event underscores the ongoing need for robust security measures as AI becomes increasingly integrated into our daily lives.

Author

  • Maya Pillai is a tech writer with 20+ years of experience curating engaging content. She can translate complex ideas into clear, concise information for all audiences.
