Knowing things by learning | Fake ××× is rampant: what can artificial intelligence do?

"Knowing things by learning" is a brand column created by NetEase Yunyidun. The words come from Han Wang Chong's "Lun Heng · Real Knowledge". People have different abilities. Only by learning can they know the truth of things, and only afterward can they be wise. If you don't ask, you won't know. "Knowing things by learning" hopes to bring you gains through technical dry goods, trend interpretation, character thinking and precipitation, and also hopes to open your eyes and achieve a different you.

This article was written by Louise Matsakis of Wired magazine, who covers cybersecurity, internet law, and internet culture, and previously worked at VICE's tech site Motherboard and at Mashable.

[Image: AIDetectsDeepfakes.jpg]

Gfycat: an animated-image hosting platform dedicated to making it faster and easier to upload and share videos and GIFs.

The company was founded to improve the GIF viewing experience for the 21st century. "Gfy" stands for "GIF Format Yoker," a name that neatly reflects the company's purpose: yoking the GIF format to HTML5 video.

As facial recognition and machine learning have become more accessible, people on the internet have begun using these technologies to create fake xxx videos. As Motherboard reported, people are making AI face-swapped xxx films that graft celebrities' faces onto the bodies of xxx actresses, such as a fake video that appears to show Gal Gadot sleeping with her stepbrother. While Reddit, Pornhub, and other communities struggle to ban deepfakes, the GIF-hosting company Gfycat has found a promising solution.

Gfycat says it has found a way to use artificial intelligence to identify fake videos, and it has already started using the technology to moderate GIFs on its platform. The approach hints at how fake video content might be fought in the future. There is little doubt that this fight will intensify as more platforms, following Snapchat's lead, bring video content into the news business.

With at least 200 million active users, Gfycat hopes to offer a more comprehensive approach to filtering out deepfakes than Reddit, Pornhub, or Discord. Mashable reported that Pornhub had failed to remove some deepfake videos from its site, including some with millions of views (they were taken down after the article was published). In early March, Reddit banned several deepfake communities but left related boards intact, such as r/DeepFakesRequests and r/deepfaux, until WIRED brought them to its attention in the course of reporting this story.

These efforts should not be dismissed, but they also show how difficult it is for internet platforms to rely on human moderation alone, and why it matters if computers can find deepfakes without human help.

Artificial intelligence fights back

Gfycat developed two AI-powered tools, both named after cats: Project Angora and Project Maru. When a user uploads a low-quality GIF of Taylor Swift to Gfycat, Project Angora can search the web for a higher-resolution version and swap it in. In other words, it can find the same clip of Swift singing "Shake It Off" and upload that better version.
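Gfycat hasn't published how Project Angora matches a low-quality upload to a higher-resolution source, but a common technique for this kind of lookup is perceptual hashing, which survives resizing and recompression. Below is a minimal, purely illustrative sketch in Python, with tiny synthetic grayscale frames standing in for real images:

```python
# Illustrative only: match a low-quality upload to a high-res source via an
# average hash. The frames, sizes, and threshold are invented for this demo.

def average_hash(pixels, size=8):
    """Downsample a grayscale image (list of pixel rows) to size x size,
    then emit a 1 bit for each pixel brighter than the mean."""
    h, w = len(pixels), len(pixels[0])
    small = [
        [pixels[r * h // size][c * w // size] for c in range(size)]
        for r in range(size)
    ]
    flat = [p for row in small for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A "high-res" source frame: bright left half, dark right half.
source = [[200] * 32 + [20] * 32 for _ in range(64)]
# A "low-quality upload": same scene at half resolution with shifted levels.
upload = [[190] * 16 + [30] * 16 for _ in range(32)]

distance = hamming(average_hash(source), average_hash(upload))
print(distance)  # → 0: a small distance suggests the same underlying clip
```

Because the hash is computed on a heavily downsampled, thresholded image, the degraded upload still lands on (or near) the same hash as the pristine source, which is the property a reverse-lookup system needs.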

Now suppose you didn't tag your clip as Taylor Swift. That's not a problem: Project Maru can reportedly distinguish between individual faces and will automatically tag the GIF with Swift's name. That makes sense for Gfycat, which wants to index the material that millions of users upload to the platform every month.

Most deepfakes made by amateurs are not entirely convincing. Look closely and the frames don't quite match; in the clip below (https://youtu.be/5hZOcmqWKzY, note: requires ×××), Donald Trump's face doesn't fully cover Angela Merkel's. But your brain does some of the work, filling in the gaps where the technology failed to turn one person's face into another's.

Project Maru is far less forgiving than the human brain. When Gfycat's engineers ran deepfakes through their AI tool, it registered that a clip looked something like Nicolas Cage, but not enough to declare a positive match, because the face isn't rendered perfectly in every frame. Using Maru is one way Gfycat spots deepfakes: it likely won't conclude that a GIF shows a given celebrity when the face only partially resembles them.
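That "resembles Nicolas Cage, but not confidently enough" behavior can be sketched as a thresholded similarity check over per-frame face embeddings. This is purely illustrative; the embeddings, thresholds, and labels below are invented and are not Gfycat's actual system:

```python
# Illustrative sketch: a clip matches a celebrity only if EVERY frame's face
# embedding is close to the reference; a "close but imperfect" band is itself
# a deepfake signal. All numbers here are made up for the demo.
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(frame_embeddings, celeb_embedding,
             match_at=0.90, suspect_at=0.70):
    scores = [cosine(f, celeb_embedding) for f in frame_embeddings]
    if min(scores) >= match_at:
        return "match"          # confidently the celebrity in every frame
    if min(scores) >= suspect_at:
        return "possible fake"  # resembles them, but some frames slip
    return "no match"

celeb = [1.0, 0.0, 0.0]                                   # reference face
real_clip = [[0.99, 0.05, 0.0], [0.98, 0.02, 0.01]]       # consistent match
fake_clip = [[0.9, 0.4, 0.1], [0.7, 0.6, 0.2]]            # face slips in frame 2

print(classify(real_clip, celeb))  # match
print(classify(fake_clip, celeb))  # possible fake
```

Using the minimum per-frame score (rather than the average) mirrors the article's point: a deepfake fails on its worst frames, even when most frames look right.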

Project Maru probably can't stop every deepfake on its own, and as deepfakes grow more sophisticated they will become even harder to catch. Sometimes a deepfake features not a celebrity's face but a private citizen, or even someone only the creator knows. To fight that variety, Gfycat developed a masking technique that works similarly to Project Angora.

If Gfycat suspects that a video has been altered to show someone else's face (say, Maru can't positively identify it as Taylor Swift), the company can "mask" the victim's face and then search for whether the body and background footage exist elsewhere. For a video that pastes someone else's face onto Trump's body, for example, the AI could search the internet and turn up the original State of the Union footage it borrows from. If the new GIF and the source don't match, the AI can conclude that the video has been altered.
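The masking idea can be illustrated with a toy frame comparison: black out the suspect face region in both the upload and the candidate source footage, then check whether everything else agrees. A hypothetical sketch, with made-up frames and thresholds:

```python
# Illustrative only: if the pixels OUTSIDE the masked face region match the
# candidate source footage while the face itself differed, the face was
# likely swapped in. Frames here are tiny synthetic grayscale grids.

def masked_mismatch(frame_a, frame_b, face_box, tol=10):
    """Count pixels outside face_box (r0, r1, c0, c1) whose intensities
    differ by more than `tol`."""
    r0, r1, c0, c1 = face_box
    mismatches = 0
    for r, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for c, (pa, pb) in enumerate(zip(row_a, row_b)):
            if r0 <= r < r1 and c0 <= c < c1:
                continue  # ignore the masked face region
            if abs(pa - pb) > tol:
                mismatches += 1
    return mismatches

# Candidate source frame, and an upload whose "face" (rows 2-3, cols 2-3)
# was replaced while the body/background pixels were left untouched.
source = [[50] * 8 for _ in range(8)]
upload = [row[:] for row in source]
for r in range(2, 4):
    for c in range(2, 4):
        upload[r][c] = 200  # the swapped-in face

print(masked_mismatch(upload, source, face_box=(2, 4, 2, 4)))  # → 0
```

Zero mismatches outside the mask means the body and background footage exist elsewhere verbatim, which is exactly the evidence of alteration the article describes.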

Gfycat plans to use its masking technique on more than faces, to detect other kinds of fake content, such as fraudulent weather or science videos. Gfycat has long relied heavily on AI to categorize, manage, and moderate content. "As the pace of AI innovation accelerates, with the potential to dramatically change our world, we will continue to adapt our technology to these new developments," Gfycat CEO Richard Rabbat said in a statement.

Not foolproof

Gfycat's technology doesn't work in at least one deepfake scenario: a face and body that appear nowhere else. Say two people film a ××× together and then swap in someone else's face. If no public figure is involved and the footage exists nowhere else online, Maru and Angora have no way of knowing whether the content was altered.

For now that seems a fairly unlikely scenario, since making a deepfake requires access to a video plus photos of the target. But it's not hard to imagine an ex-lover using videos saved on a phone, footage that was never made public, to create a deepfake of the victim.

Even with deepfakes featuring ××× stars or celebrities, the AI sometimes can't be sure what's going on, which is why Gfycat employs humans to help. The company also uses other metadata, such as where a clip was shared or who uploaded it, to determine whether it is a deepfake.
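Combining a model's uncertainty with metadata signals like these is often done as a simple scoring rule before escalating to human review. The rule below is purely illustrative, with invented signal names and weights, not Gfycat's actual policy:

```python
# Illustrative only: fold weak metadata signals (uploader history, where the
# clip was shared) into the model score to decide whether a human moderator
# should look at the clip. Weights and threshold are made up for this demo.

def needs_human_review(model_score, uploader_flagged, shared_on_fake_board):
    score = model_score
    if uploader_flagged:
        score += 0.2  # uploader previously posted fakes (hypothetical signal)
    if shared_on_fake_board:
        score += 0.2  # shared in a known deepfake community (hypothetical)
    return score >= 0.5

print(needs_human_review(0.2, True, False))  # False: signals too weak alone
print(needs_human_review(0.2, True, True))   # True: escalate to a human
```

The point of such a rule is that no single weak signal triggers review, but several together do, which matches the article's description of AI and human moderators working in tandem.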

Besides, not all face-swapped videos are malicious. As the Electronic Frontier Foundation noted in a blog post, examples like the Merkel/Trump mashup above are simply political commentary or satire. There are other legitimate uses for the technology too, such as anonymizing people who need identity protection or creating consensually altered ××× works.

Still, it's easy to see why so many people find deepfakes distressing. They mark the beginning of a future in which it may be impossible to tell whether a video is real or fake, which could have far-reaching consequences for propaganda and beyond. Russia flooded Twitter with fake bots during the 2016 presidential election; in the 2020 election, the same might be done with fabricated videos of the candidates themselves.

A long battle

While Gfycat offers a potential solution, it may be only a matter of time before deepfake creators learn to circumvent its safeguards. The ensuing struggle could drag on for years.

As Hany Farid, a Dartmouth College computer science professor who specializes in digital forensics, image analysis, and human perception, put it: "We're decades away from having forensic technology that you can unleash on a ××× site or on Reddit and conclusively tell a real from a fake. If you really want to fool the system, you will start building ways to defeat the forensic system into the fakes themselves."
