The ChatGPT software is causing quite an uproar these days, because it can formulate texts so well that they are sometimes hard to distinguish from those written by people. That makes cheating easier. Now ChatGPT's creators are trying to limit the potential damage.
The creators of the writing software ChatGPT are now trying to get a grip on the consequences of their invention. The developer company OpenAI has released a program that is meant to distinguish whether a text was written by a human or by a computer. ChatGPT mimics human language so convincingly that there are concerns it could be used, among other things, to cheat on schoolwork or to run large-scale disinformation campaigns.
Detection still works only moderately well, as OpenAI admitted in a blog post on Tuesday. In test runs, the software correctly identified computer-generated texts in just 26 percent of cases. At the same time, nine percent of texts written by humans were wrongly attributed to a machine. OpenAI therefore recommends, for the time being, not relying primarily on the verdict of this "classifier" when evaluating texts.
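To make the two figures concrete: the 26 percent is a true-positive rate and the nine percent a false-positive rate. The following is a minimal sketch of how such rates are computed; the counts are hypothetical round numbers, not OpenAI's actual evaluation data.

```python
# Hypothetical evaluation counts (illustrative only, not OpenAI's data)
ai_texts_total = 1000        # texts actually written by the model
ai_texts_flagged = 260       # of those, correctly flagged as AI-written

human_texts_total = 1000     # texts actually written by people
human_texts_flagged = 90     # of those, wrongly flagged as AI-written

true_positive_rate = ai_texts_flagged / ai_texts_total         # 0.26 -> "26 percent"
false_positive_rate = human_texts_flagged / human_texts_total  # 0.09 -> "nine percent"

print(f"AI texts correctly identified: {true_positive_rate:.0%}")
print(f"Human texts misattributed:     {false_positive_rate:.0%}")
```

The asymmetry is why OpenAI warns against relying on the tool: a detector that misses most machine-written texts while still flagging one in eleven human authors is of limited use for, say, grading schoolwork.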
ChatGPT is AI-based software that has been trained on massive amounts of text and data to imitate human language. At the same time, the program can convincingly mix completely false information with accurate information. OpenAI made ChatGPT publicly available last year, prompting both admiration for the software's capabilities and concerns about fraud.
Google has also been developing software that can write and speak like a human for years, but has so far refrained from releasing it. Now the internet giant is letting employees test a chatbot that works similarly to ChatGPT, the broadcaster CNBC reported on Wednesday night. According to an internal email, a response to ChatGPT is the priority. Google is also experimenting with a question-and-answer version of its internet search engine.