Carding Forum
What is stopping the company from releasing a ready-made detector that could prevent fraud?
OpenAI has developed a method to identify texts written using ChatGPT, but has not yet released it, despite concerns about the use of AI to deceive. According to The Wall Street Journal, the project has been ready to launch for about a year, but the decision on the release is constantly postponed.
OpenAI employees are torn between the desire for transparency and the desire to attract and retain users. A survey of loyal ChatGPT users showed that almost 30% of them would be dissatisfied if such technology were introduced. A company representative also noted that the tool could unfairly penalize non-native English speakers.
Some employees support the release, believing that the benefits outweigh the risks. OpenAI CEO Sam Altman and CTO Mira Murati have participated in discussions about the tool. Altman supports the project but does not insist on its immediate release.
ChatGPT works by predicting which word or word fragment (token) should come next in a sentence. The "anti-cheat" tool under discussion slightly changes how these tokens are selected, leaving a watermark that is imperceptible to a human reader but statistically detectable. According to internal documents, the watermarks are 99.9% effective.
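OpenAI has not published how its watermark works, but the idea described above — biasing token selection so the pattern can later be detected statistically — can be sketched with a toy scheme. The example below is purely illustrative: each previous token deterministically seeds a "green" subset of the vocabulary, a watermarking generator prefers green tokens, and a detector scores what fraction of tokens fall in the green list (the function names and the 50/50 split are assumptions, not OpenAI's method).

```python
import hashlib
import random


def green_list(prev_token: str, vocab: list, fraction: float = 0.5) -> set:
    """Deterministically pick a 'green' subset of the vocabulary,
    seeded by the previous token. (Illustrative scheme only.)"""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])


def watermark_score(tokens: list, vocab: list) -> float:
    """Fraction of tokens drawn from the green list of their predecessor.
    Unwatermarked text hovers near the green fraction (~0.5 here);
    watermarked text scores much higher."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    hits = sum(1 for prev, cur in pairs if cur in green_list(prev, vocab))
    return hits / len(pairs)


# A toy "generator" that always picks a green token leaves a perfect trace:
vocab = ["tok%d" % i for i in range(50)]
seq = ["tok0"]
for _ in range(30):
    seq.append(min(green_list(seq[-1], vocab)))  # always choose a green token
print(watermark_score(seq, vocab))  # scores 1.0 for this fully watermarked text
```

A real system would only *bias* token probabilities rather than hard-restrict them, and would use a cryptographic key instead of a public hash, but the detection principle — counting how often tokens land in the seeded subset — is the same.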
Some employees have expressed concerns that the watermarks can be erased by simple methods, such as running the text through Google Translate or adding and then deleting emojis. The question of who would get access to the detector also remains unresolved: giving access to too narrow a group of users would make it useless, while giving it to too wide a group could let bad actors reverse-engineer the technology.
At the beginning of 2023, OpenAI released an algorithm for detecting text, but its accuracy was only 26%, and after 7 months the company abandoned the tool. Internal watermark discussions began before the launch of ChatGPT in November 2022 and have become a constant source of tension.
In April 2023, OpenAI commissioned a survey showing that people around the world support the idea of an AI detection tool by a margin of 4 to 1. However, 69% of ChatGPT users expressed concern that detection technology would lead to false accusations of AI use, and almost 30% said they would use ChatGPT less if watermarks were implemented.
OpenAI staff concluded that the watermark tool works well, but the user survey results remain a concern. The company will continue to look for less controversial approaches and plans this year to develop a strategy for shaping public opinion about AI transparency and possible new laws on the topic.
There are already a number of services that claim to determine fairly accurately whether a text was generated by a neural network or written by a person, such as GPTZero, ZeroGPT, and the OpenAI Text Classifier. However, as it turns out, these services should not be seriously relied on either.
Source