A hacker claims to have stolen personal data from millions of OpenAI accounts, but researchers are skeptical, and the company is investigating.

OpenAI says it is investigating after a hacker claimed to have stolen login credentials for 20 million of the AI firm's user accounts and put them up for sale on a dark web forum.
The pseudonymous attacker posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering potential buyers what they claimed was sample data containing email addresses and passwords. As reported by GBHackers, the full dataset was being sold "for just a few dollars."
"I have more than 20 million gain access to codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out-this is a goldmine, and Jesus agrees."
If genuine, this would be the third major security incident for the AI company since the release of ChatGPT to the general public. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, an even simpler bug involving jailbreaking prompts allowed hackers to access the private data of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack took place. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence [suggests] this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."
No evidence this alleged OpenAI breach is legitimate.
Contacted every email address from the supposed sample of login credentials.
At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP
- Mikael Thalen (@MikaelThalen) February 6, 2025

OpenAI takes it 'seriously'
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the spokesperson said, including: "We have actually not seen any proof that this is connected to a compromise of OpenAI systems to date."
The scope of the alleged breach sparked concern because of OpenAI's massive user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A genuine breach could expose private conversations, commercial projects, and other sensitive information.

Until there's a final report, some precautionary steps are always a good idea:
- Go to the "Configurations" tab, bphomesteading.com log out from all linked gadgets, and allow two-factor authentication or 2FA. This makes it practically impossible for a hacker to gain access to the account, even if the login and passwords are compromised.
- If your bank supports it, create a virtual card number to manage OpenAI subscriptions. This makes it easier to detect and prevent fraud.
- Always keep an eye on the conversations stored in the chatbot's memory, and watch for phishing attempts. OpenAI does not request personal details, and any payment update is always handled through the official OpenAI.com site.