OpenAI: New York Times hacked ChatGPT and generated fake evidence

OpenAI has accused The New York Times (NYT) of hacking the ChatGPT chatbot. According to the developers, this was done to generate fake evidence in a copyright infringement case.

The company claims that the publication paid a third party to hack the language model. Working with unnamed experts, the journalists then used “deceptive prompts” that violated ChatGPT’s terms of use, inducing the chatbot to reproduce the publication’s copyrighted material.

“The allegations contained in the NYT complaint do not meet its famously rigorous journalistic standards. The truth that will come to light in this case is that the Times paid someone to hack OpenAI products,” the company said in a statement.

The publication’s lawyers do not deny hiring specialists to work with ChatGPT. According to them, the work in question was prompt engineering: the process of creating and refining text queries (prompts) for a generative model in order to obtain the desired output.
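For illustration only, here is a minimal sketch of what such prompt engineering might look like in practice, assuming the official OpenAI Python client (the openai package); the model name, system instruction, and prompt text are hypothetical and are not the queries the NYT’s experts actually used.

# Minimal prompt-engineering sketch using the official OpenAI Python client.
# The model name and prompts below are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, system: str = "You are a helpful assistant.") -> str:
    """Send one prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        temperature=0.2,  # lower temperature for more repeatable output
    )
    return response.choices[0].message.content

# Prompt engineering is the iterative rewording of the prompt until the
# model returns the desired kind of answer, e.g.:
print(ask("Summarize today's top technology headline in one sentence."))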

The team of specialists reportedly probed the chatbot for evidence that NYT materials had been used unlawfully. The lawyers see no problem with this approach and compare it to so-called “red teaming”, a simulated attack used to assess the security of a system.

In November 2023, a copyright infringement lawsuit was filed against OpenAI and Microsoft. The complaint alleges that the chatbot’s developers used tens of thousands of articles and scientific materials without permission to train the language model.

Meanwhile, OpenAI has introduced a new text-to-video service. The product, called Sora, is currently available to artists, designers, and filmmakers, who are expected to provide feedback on how to make the model as useful as possible, the company said.

