There's a Secret Way to Opt Out of ChatGPT Model Training
Not only that: OpenAI uses human reviewers from “trusted service providers” to process your de-identified chats, but fails to mention this on ChatGPT’s homepage. Google also employs human reviewers on Gemini chats to train and improve its model, but Google clearly informs users on the homepage.
While OpenAI offers a Data Controls option in ChatGPT’s settings page to disable model training on all of your chats, this setting also turns off Chat History. Tying the opt-out to losing access to past conversations feels punitive and discourages users from prioritizing their privacy.
Having access to your past conversations is a basic feature, and it should in no way be tied to your privacy choices. Both can coexist — and, as it turns out, they do.
So I kept digging and opened OpenAI’s privacy portal page, where I filed a request to stop training on my content. If you are already signed in with your OpenAI account, the portal logs you in automatically. Otherwise, you’ll have to enter the email address associated with your ChatGPT account, and OpenAI will send you an email. Click the link you receive and file the request. Both free and ChatGPT Plus users can do this. Here’s what the process looks like:
Of course, the request applies only to future chats, not your past conversations. An active request will be created; reload the page after a minute or two and it should show as processed (at least mine did). You will also receive an email stating the following:
“We successfully processed your request to not train on content provided to our consumer services. We will no longer use your content to train our models. As a reminder, this request is forward looking and does not apply to content that had previously been disassociated from your account.”
Now you can use ChatGPT with Chat History turned on (including chat sharing) while staying out of model training. I would have truly appreciated it if OpenAI had added a link to the privacy portal inside the Data Controls settings page so users could make informed decisions about their privacy.
This kind of dark pattern raises questions about OpenAI’s commitment to transparency, particularly when it comes to user chat history. It would bode well for the company to walk the talk on transparency; otherwise, it risks eroding user trust.