OpenAI has accused The New York Times (NYT) of hacking the ChatGPT chatbot. According to the developers, this was done to manufacture evidence for a copyright infringement case.
The company claims the publication paid a third party to hack the language model. Journalists, together with unnamed experts, then used "deceptive prompts" that violated ChatGPT's terms of use, forcing the chatbot to reproduce the publication's copyrighted material.
The publication's lawyers do not deny hiring specialists to work with ChatGPT. According to them, this was prompt engineering: the process of crafting and optimizing text queries (prompts) for generative models to obtain the desired answers.
A team of experts allegedly examined the chatbot for evidence of the illegal use of NYT materials. The lawyers see no problem with this approach and compare it to so-called "red teaming" – simulated attacks carried out to assess the security of systems.
In November 2023, the NYT filed a copyright infringement lawsuit against OpenAI and Microsoft. The filing alleges that the chatbot's developers illegally used tens of thousands of articles and scientific materials to train the language model.
Meanwhile, OpenAI has introduced a new text-to-video service. The product, called Sora, is currently available to artists, designers, and filmmakers, who are expected to provide feedback on how to improve the model and make it as useful as possible, the company said.
Source: Cryptocurrency