A hacker said they stole private details from millions of OpenAI accounts, but researchers are skeptical, and the company is investigating.
OpenAI says it's investigating after a hacker claimed to have stolen login credentials for 20 million of the AI firm's user accounts and put them up for sale on a dark web forum.
The pseudonymous hacker posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering what they claimed was sample data containing email addresses and passwords. As reported by GBHackers, the full dataset was being offered for sale "for just a few dollars."
"I have over 20 million access codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out, this is a goldmine, and Jesus agrees."
If legitimate, this would be the third major security incident for the AI company since the release of ChatGPT to the public. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, a simpler bug involving jailbreaking prompts allowed hackers to obtain the personal information of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack occurred. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted too."
No evidence this alleged OpenAI breach is genuine.
Contacted every email address from the supposed sample of login credentials.
At least 2 addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted too. https://t.co/yKpmxKQhsP
- Mikael Thalen (@MikaelThalen) February 6, 2025
OpenAI takes it 'seriously'
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the spokesperson said, adding: "We have not seen any evidence that this is connected to a compromise of OpenAI systems to date."
The scale of the alleged breach raised concerns given OpenAI's enormous user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A legitimate breach could expose private conversations, commercial projects, and other sensitive information.
Until there's a final report, some precautionary steps are always recommended:
- Go to the "Settings" tab, log out from all connected devices, and enable two-factor authentication (2FA). This makes it practically impossible for a hacker to access the account, even if the login and password are compromised. If you want to check whether a password you reuse has already surfaced in a known breach, see the sketch after this list.
- If your bank supports it, create a virtual card number to handle OpenAI subscriptions. This way, it is easier to spot and prevent fraud.
- Always keep an eye on the conversations stored in the chatbot's memory, and be aware of any phishing attempts. OpenAI does not ask for any personal information, and any payment update is always handled through the official OpenAI.com link.
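As a rough illustration of that last point about credential hygiene (this is not an OpenAI tool, and the User-Agent string below is just a placeholder), a minimal Python sketch can query the public Have I Been Pwned "Pwned Passwords" range API to check whether a password has already appeared in known breach dumps. Only the first five characters of the password's SHA-1 hash ever leave your machine:

```python
import hashlib
import urllib.request


def password_breach_count(password: str) -> int:
    """Return how many times a password appears in the public
    'Pwned Passwords' corpus, using the k-anonymity range API:
    only a 5-character hash prefix is sent over the network."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "credential-hygiene-check"},  # placeholder UA
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    # Response lines look like "HASH_SUFFIX:COUNT"; match our suffix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    # Illustrative value only; never hard-code a real password.
    hits = password_breach_count("hunter2")
    print(f"Seen in {hits} known breaches" if hits else "Not found in known breach data")
```

If a password you use for OpenAI (or anywhere else) shows up here, rotate it regardless of how this particular claim pans out.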