AI fooled by itself: AI detectors cannot reliably tell real photos from generated ones

The New York Times recently ran a test: it took five common AI-image detectors on the market and fed them more than 100 photos. The detectors not only mistook AI-generated photos for real ones, but also classified some real photos as AI-generated.

Two of the five detectors judged this photo of Musk and his "robot girlfriend" to be real:

And this photo of a person standing next to a 3-meter giant was unanimously judged real by all five detectors:

The five AI detectors used in the test were:

Umm-maybe

Illuminarty

A.I or Not

Hive

Sensity

The AI image-generation tools involved included Midjourney, Stable Diffusion, DALL·E, and others.

What criteria do the AI detectors use?

In general, they judge by different criteria than humans do. Humans mostly rely on the plausibility of the image's content, while the detectors start from low-level image parameters such as pixel arrangement, sharpness, and contrast.

That may explain why all five detectors judged the giant photo mentioned earlier to be real: its content is absurd, but its low-level image statistics look natural.

In the year or more since AI image generation took off, many detectors have appeared on the market. Some are hosted on Hugging Face for free use, while others are run by companies that expose them only through an API.

This test raises a deeper question: when AI can fabricate photos and AI itself cannot tell real from fake, how should we handle the trust crisis of the AI era?


Origin blog.csdn.net/haisendashuju/article/details/131888004