Reports say that OpenAI has shut down the ChatGPT account of Eric Hartford, a developer known for creating open-source, uncensored AI models such as Dolphin.
Hartford says his account was deleted on October 13, 2025, without warning or explanation, erasing years of conversation history despite his $200-a-month Pro subscription.
Hartford said he received no notice of policy violations and denied any wrongdoing, such as generating prohibited content.
He also said his appeal was rejected automatically, describing the process as opaque and dismissive.
The move has sparked debate on X (formerly Twitter), with some users defending OpenAI’s right to enforce its rules and others calling the action heavy-handed.
Open-source advocates argued that OpenAI was punishing developers who champion open, decentralised AI alternatives.
OpenAI may well have acted on internal security or compliance grounds that it cannot discuss publicly. But the lack of transparency breeds suspicion and distrust, especially at a time when AI companies already face scrutiny over ethics and privacy.
What caused the suspension remains unknown. If a private company believes its platform is being misused, it is within its rights to enforce its policies.
Tech commentators say the dispute points to deeper tensions between the open-source movement and corporate AI governance.
The case shows how quickly user trust erodes when platforms hold full control over access and data. It is less about one developer than about how centralised AI power operates.
It could serve as a warning to anyone who relies on cloud-based AI services, or it may simply be a company protecting its systems from abuse.
OpenAI has not yet issued an official statement. The episode adds to ongoing debates about data sovereignty, content moderation, and the ethical boundaries of AI development.