Copyright Lawsuit from The New York Times Against OpenAI

In the last week of the year, the compensation lawsuit filed by The New York Times against Microsoft and OpenAI came to the fore. The possibility of such a lawsuit had been discussed since the summer, but it was the filing of the complaint with the court that put it back on the agenda.

The court will decide on The New York Times' claim for damages and on its allegation that Microsoft and OpenAI were unjustly enriched by using content the newspaper has produced with billions of dollars of investment. There is, however, another dimension to the case. The New York Times claims that ChatGPT, used by Microsoft and OpenAI, produced false results, that these false results were attributed to The New York Times to make them appear true, and that this continued even though the companies knew it was happening. In other words, the newspaper has declared a kind of war on the disinformation side of the business.

Fraud and disinformation are on the rise

While the development of artificial intelligence creates various opportunities for humanity, misuse of the technology poses risks to both ethical values and security. Fake photos and videos created with artificial intelligence can be used to manipulate the public through social media. In October, researchers at Tsinghua University in China announced an artificial intelligence tool that can produce approximately 700,000 photos per day.

In August, Google released a tool that places invisible digital watermarks on AI-generated images. In 2022, Intel developed a technology that determines whether an image is real by analyzing changes in skin color that indicate blood flow beneath a person's skin. Hitachi is developing anti-fraud technology for online authentication.
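
Google's watermarking tool itself is not public, but the general idea of an invisible watermark can be illustrated with a deliberately simplified sketch: hiding a short bit pattern in the least significant bits of a few pixel values. The WATERMARK_BITS tag and the pixel positions below are hypothetical, and unlike a production system this naive scheme does not survive recompression.

```python
# Illustrative sketch only: the real tool is proprietary; this shows the
# general idea of an "invisible" watermark via least-significant-bit (LSB)
# embedding. All names and positions here are hypothetical.
import numpy as np
from PIL import Image

WATERMARK_BITS = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical "AI-generated" tag

def embed_watermark(path_in: str, path_out: str) -> None:
    """Write the tag into the LSBs of the first row's red-channel pixels."""
    pixels = np.array(Image.open(path_in).convert("RGB"))
    for i, bit in enumerate(WATERMARK_BITS):
        pixels[0, i, 0] = (pixels[0, i, 0] & 0xFE) | bit  # clear LSB, set tag bit
    # Save losslessly (e.g. PNG); JPEG compression would destroy the LSBs.
    Image.fromarray(pixels).save(path_out)

def read_watermark(path: str) -> list[int]:
    """Recover the tag bits from the same pixel positions."""
    pixels = np.array(Image.open(path).convert("RGB"))
    return [int(pixels[0, i, 0] & 1) for i in range(len(WATERMARK_BITS))]
```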

Collaboration from camera manufacturers

Japanese manufacturers Canon, Nikon and Sony Group, which together account for 90% of the world camera and camcorder market, have agreed on a global standard for using digital signatures in photographs, and the three companies are developing camera technology that places these signatures on images.
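
The standard itself is not reproduced here, but the basic mechanism of signing an image at capture can be sketched roughly as follows. This is a minimal illustration assuming an Ed25519 key pair held by the camera; the sign_photo function and the photo.jpg file name are hypothetical, not part of any manufacturer's actual implementation.

```python
# Illustrative sketch only: a minimal "sign at capture" flow.
# Real in-camera signing follows an industry provenance standard and embeds
# the credentials in the image metadata; names here are hypothetical.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_photo(image_bytes: bytes, camera_key: Ed25519PrivateKey) -> bytes:
    """Hash the raw image data and sign the digest with the camera's key."""
    digest = hashlib.sha256(image_bytes).digest()
    return camera_key.sign(digest)

# Example: generate a per-camera key and sign a freshly captured frame.
camera_key = Ed25519PrivateKey.generate()
image_bytes = open("photo.jpg", "rb").read()  # hypothetical captured frame
signature = sign_photo(image_bytes, camera_key)
# The signature (plus the camera's public key or certificate) would be written
# into the file's metadata so downstream tools can verify it later.
```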

Meanwhile, an alliance of global news organizations, technology companies and camera manufacturers has announced a free web-based verification tool called Verify for checking images. If an image was created or manipulated by artificial intelligence, the tool warns that its provenance information cannot be found. As these technologies become widespread, fake images created by artificial intelligence will be easier for people to identify, and it looks like this kind of digital theft will soon be curbed.

Verifying cameras

Camera manufacturers are also working on new models that digitally sign photos the moment they are taken. In spring 2024, Sony will roll out the technology for embedding digital signatures to three professional-grade mirrorless cameras via a firmware update. The company is also considering making the technology compatible with video.

With this technology, when a photographer sends images to a news organization, Sony's authentication servers detect the digital signatures and determine whether the images were created by artificial intelligence. Sony and the Associated Press conducted field tests of the tool in October.
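
A rough sketch of what such a server-side check involves is shown below. It assumes the same kind of Ed25519 key pair as the signing sketch above; the verify_photo function is hypothetical and stands in for, rather than reproduces, Sony's actual authentication service.

```python
# Illustrative sketch only: a newsroom-side check of a signed image.
# In practice the public key would come from the camera maker's certificate
# chain; here it is passed in directly for simplicity.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_photo(image_bytes: bytes, signature: bytes,
                 camera_public_key: Ed25519PublicKey) -> bool:
    """Return True if the signature matches the image's SHA-256 digest."""
    digest = hashlib.sha256(image_bytes).digest()
    try:
        camera_public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        # No valid signature: the image may be AI-generated or altered after
        # capture, so the tool would flag it rather than confirm it.
        return False
```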

Sony is preparing both to expand the range of camera and camcorder models compatible with this technology and to encourage media organizations to adopt it. Canon will also launch a camera with similar features in early 2024 and is developing technologies that add digital signatures to video. In addition, Canon is releasing an image management app that can determine whether images were taken by humans.