Despite NSFW explorations, OpenAI says porn is off the table

OpenAI is open to allowing NSFW responses generated by ChatGPT and its API, but porn is a hard no.

On Wednesday, the company published its Model Spec, a document that pulls back the veil a bit on how its models are trained to respond to various prompts. In the Spec, OpenAI shared rules for how ChatGPT and the API are directed to respond, including to prompts about breaking the law, questions about chemical, biological, radiological, or nuclear (CBRN) threats, and, yes, requests for explicit content.

OpenAI’s current policy bans NSFW content. But just below its policy statement, OpenAI noted that it is “exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT.” The reasoning is that “developers and users should have the flexibility to use our services as they see fit,” as long as they adhere to OpenAI’s policies. So NSFW content is not allowed now, but it might be in the future.


But that doesn’t mean all NSFW content would be allowed. “We have no intention to create AI-generated pornography,” an OpenAI spokesperson told Mashable. “We have strong safeguards in our products to prevent deepfakes, which are unacceptable, and we prioritize protecting children.” The spokesperson continued: “We also believe in the importance of carefully exploring conversations about sexuality in age-appropriate contexts,” which lines up with the note in the Model Spec.

The wording about NSFW content was brief and slightly ambiguous, leading some to speculate that OpenAI might soon allow users to generate AI porn. The only mention of specific types of NSFW content was in the statement about what AI models should not provide responses for: “The assistant should not serve content that’s Not Safe For Work (NSFW): content that would not be appropriate in a conversation in a professional setting, which may include erotica, extreme gore, slurs, and unsolicited profanity.” The notable omission of “pornography” initially generated confusion over what was allowed or soon to be allowed.

Porn in the generative AI era has potentially dangerous and disastrous consequences because of the threat of nonconsensual deepfakes. A 2023 study from cybersecurity firm Home Security Heroes found that 98 percent of all deepfake videos were pornography, and 99 percent of the subjects were women. Even Taylor Swift, a powerful and recognizable celebrity, was the victim of a rash of viral pornographic deepfakes, underscoring the issue’s prevalence and highlighting the disturbing notion that it could happen to anyone.
