Content moderation: left, or right?

Recently, news aggregation apps and live-streaming platforms have developed a soft spot for hiring content reviewers.

 

As early as the beginning of this year, Toutiao posted openings for as many as 2,000 content review editors, with priority given to Communist Party members. On April 10, Toutiao CEO Zhang Yiming said in an open letter that the existing 6,000-person operations and review team would be expanded again, bringing the total to 10,000.

 

Kuaishou is following Toutiao's lead. After Kuaishou CEO Su Hua published his public reflection in early April, he immediately announced the urgent recruitment of 3,000 reviewers, expanding the review team from 2,000 to 5,000.

 

Behind the hiring frenzy lies strict supervision by the relevant authorities. Take Toutiao: since the start of 2018 it has been named, criticized, or penalized by CCTV, the State Administration for Industry and Commerce, and other regulators as many as eight times.

 

Strict oversight, however, is only one contributing factor; attributing the "renaissance" of manual review to it alone would be a bit perfunctory. So, objectively speaking, what is the fundamental reason for the revival of content moderation?

 

Strict supervision is the "effect", not the "cause"

 

First of all, it must be made clear that strict supervision is not the root cause of Toutiao's and Kuaishou's hiring sprees.

 

Shift your attention from China to abroad, and you will find that manual review has been reviving for some time.

 

Facebook announced in 2017 that it would add 3,000 content moderator jobs. By mid-year, its moderation staff had nearly doubled over the previous eight months, reaching a headcount roughly equal to the entire workforce of Twitter or Snapchat at the time.

 

Beyond Facebook, Google is also re-embracing human review. Google-owned YouTube said in April that it plans to release a new version of the YouTube Kids app that drops algorithmically recommended videos: every video will instead be curated by YouTube employees.

 

Why this transformation? It traces back to a few incidents from last year.

 

In 2017, a man in the United States shot and killed an innocent elderly man on Facebook Live. Some time later, a man in Thailand killed his own daughter on a Facebook live stream. Not only did many viewers witness the brutality as it happened, but for 24 hours after the video was posted it could still be viewed on Facebook. Ironically, Facebook had said earlier that it would use artificial intelligence to monitor live broadcasts and prevent users from committing suicide. The incidents that followed showed that the AI was not playing its intended role.

 

YouTube, meanwhile, was for a time flooded with videos featuring cartoon characters such as Princess Elsa, Mickey Mouse, Spider-Man, and Peppa Pig, in which the beloved characters were subjected to violent and abusive scenarios. Click on one, and more related videos would be recommended. Some people even used the algorithm to cultivate this subculture deliberately and profit from it.

 

At this point it becomes clear why the major platforms are re-hiring large numbers of content reviewers and why regulators have tightened supervision: everyone has discovered that the AI algorithms championed for years have many drawbacks, and content problems cannot be solved by algorithms alone.

 

Artificial intelligence falls from the content altar

 

If Facebook's large-scale recruitment of content moderators last year signaled this year's revival of manual review, then the YouTube Kids app's switch to purely human curation can be read as another sign: in the content field, artificial intelligence is no longer the star of the show, or, to be precise, its applications are no longer dominant.

 

People's perceptions are easily swayed, as the classical allusions "Zeng Shen kills a man" and "three men make a tiger" amply demonstrate. If one person repeats a rumor, no one believes it; if two people repeat it, some may; but once three or more people say the same thing, most listeners will conclude it is probably true.

 

When content recommendation is handled entirely by machines, what we are served is neither what we truly want nor what is good for us, but whatever we happen to dwell on: the more we look at something, the more of it is pushed toward us. And people with ulterior motives exploit this property of the algorithm to influence us.
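This feedback loop can be illustrated with a toy engagement-driven recommender. It is a minimal sketch: the topics, counts, and "recommend what was clicked most" rule are illustrative assumptions, not any platform's real system.

```python
# Toy sketch of an engagement-driven recommendation feedback loop.
# Topics, counts, and the ranking rule are illustrative assumptions.

from collections import Counter

def recommend(interests: Counter, catalog: list) -> str:
    """Recommend the topic the user has engaged with most so far."""
    return max(catalog, key=lambda topic: interests[topic])

# A user starts with only a slight preference for gossip over news.
interests = Counter({"news": 1, "gossip": 2, "science": 1})
catalog = ["news", "gossip", "science"]

# Each click feeds back into the profile, amplifying the initial bias.
for _ in range(10):
    topic = recommend(interests, catalog)
    interests[topic] += 1  # the user clicks what is shown

print(interests)  # gossip dominates after ten rounds
```

A tiny initial tilt (2 vs. 1) snowballs into total dominance: the system never learns what the user might have valued, only what it already showed them.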

 

"I believe the biggest risk we face is that we ourselves are losing the ability to discern reality," Reddit CEO Steve Huffman wrote worriedly in an essay. This is also the fundamental reason Facebook's data breach caused such an uproar: if data is not strictly supervised, we will no longer be ourselves.

 

Zhang Yiming said that the first thing Toutiao will change going forward is to build correct values into its technology and products. This signals that Toutiao intends to set things right and comprehensively correct the defects of algorithmic and machine review.

 

Technology is not a panacea, and many problems cannot be solved simply by throwing large numbers of engineers at them.

 

A teacher on my WeChat Moments put it this way: products need values, because consuming ourselves to death and entertaining ourselves to death really will cause big problems. Seen in this light, the "return" of manual review was inevitable.

 

But is manual review really such a profitable business?

 

Today's content volumes are massive, and relying on human labor makes costs soar. Suppose a platform expands by recruiting 2,000 reviewers at a monthly salary of 4,000 yuan each: the visible salary cost alone comes to nearly 100 million yuan a year. Where, then, does the company's competitiveness come from? Not every company is like Toutiao or Facebook, hiring thousands at a stroke and maintaining a team of tens of thousands to achieve 7x24 full-coverage review. That is the first problem.
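A quick back-of-the-envelope check of that salary figure (the headcount and monthly wage come from the text above; the rest is simple arithmetic):

```python
# Verify the "nearly 100 million yuan a year" claim.
# 2,000 reviewers at 4,000 yuan/month are figures from the article,
# not any platform's published payroll.

headcount = 2_000
monthly_salary_yuan = 4_000
annual_cost = headcount * monthly_salary_yuan * 12

print(f"{annual_cost:,} yuan/year")  # 96,000,000 yuan, close to 100 million
```

And this is only base salary: benefits, management, office space, and turnover (high in this line of work, as the next point shows) push the real number higher.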

 

Second, content review looks undemanding but is in fact grueling. Forced to watch large volumes of horrific, pornographic, and violent images and videos every day, many reviewers develop psychological after-effects: frequent insomnia, nightmares, minds crowded with what they have seen. Content reviewers need an unusually strong capacity to process and digest these experiences and to withstand the emotional and mental toll; otherwise the job very easily breaks them.

 

One content moderator abroad described the emotional damage and psychological scars left by browsing violent, pornographic, and disturbing content: "When I left MySpace, I didn't shake hands for about three years, because I had decided people were disgusting. I couldn't touch people." Even the most ordinary people, she said, are freaks underneath. "I came out of there disgusted with humanity. Most of my colleagues felt the same way, and we all left with a terrible impression of humanity."

 

While harming employees, employers are sometimes implicated as well. Early last year, media reported that two Microsoft cybersecurity employees took their employer to court seeking damages: they had developed PTSD (post-traumatic stress disorder) from the horrific material they were required to watch at work, including images and videos of child pornography and murder.

 

Seen this way, manual review is not necessarily a good solution either.

 

Left or right?

 

The author is fond of one person's chosen alias, "Xingzhong," taken from the Book of Changes; it means acting from the middle, in a state of balance and harmony. Returning to content review: solving this problem likewise calls for walking the middle path.

 

Setting aside humans' advantages, the reason algorithms replaced manual review and recommendation in the first place was that human labor has many drawbacks: efficiency, cost, and the toll on people. Algorithms are flawed, but their efficiency and low cost remain compelling. Human + algorithm is in fact a natural pairing, and there is no need to break it up over temporary "friction."

 

That leaves a question to face: some companies are in fact already "walking this middle path," using a combination of human and algorithmic review to tackle content problems. Why isn't it working for them?

 

An expert from NetEase Yidun content security, analyzing the problem, argued that algorithm + human is the most suitable form both now and in the future. The issue today is not that algorithms or people are inadequate, but that some domestic companies have too little accumulated experience, believing it is enough to take a machine-learning framework and feed it some data. "The reason NetEase does content security well is that it has 20 years of content review and technical experience behind it." The expert, a veteran of the security business, continued: without accumulating small steps, you cannot travel a thousand miles. What matters most in review is sufficient accumulation, backed by "experience."

 

"This accumulation has two aspects: the continuous accumulation of effective data, and the accumulated experience and judgment of content operators and reviewers. Combining the two, we can keep training the machine, keep abstracting and correcting its direction, and arrive at a reasonable boundary for combining human and artificial intelligence that comes ever closer to human judgment." He stressed that none of this happens overnight, nor can it be replaced by simply piling on engineers.
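The algorithm + human division of labor described here can be sketched as a confidence-threshold router: the machine acts on confident calls and queues uncertain content for people. This is a toy illustration; the scoring function, word list, and thresholds are assumptions of mine, not Yidun's actual system.

```python
# Minimal sketch of "algorithm + human" content triage.
# The classifier, word list, and thresholds are illustrative assumptions.

def risk_score(text: str) -> float:
    """Stand-in for a trained classifier: fraction of flagged words."""
    flagged = {"violence", "gore", "scam"}
    words = text.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def route(text: str, block_at: float = 0.5, pass_at: float = 0.1) -> str:
    """Auto-handle confident cases; send the uncertain middle to humans."""
    score = risk_score(text)
    if score >= block_at:
        return "blocked"        # machine is confident: reject
    if score <= pass_at:
        return "published"      # machine is confident: accept
    return "human_review"       # uncertain: queue for a reviewer

print(route("cute cat video"))           # published
print(route("graphic violence gore"))    # blocked
print(route("news report on violence"))  # human_review
```

The expert's two kinds of accumulation map onto this sketch directly: effective data improves `risk_score`, while reviewer experience tunes where the `block_at`/`pass_at` boundary should sit.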

 

After the conversation, the author fell into further thought: people have ideas and emotions. Massive, repetitive, low-value work will certainly be taken over by technology, while people move on to more advanced and more creative things.

 

If we could leap into the future, we might well find that content review goes neither left nor right, but develops in an inevitable direction: the tacit cooperation of humans and artificial intelligence.

 

Concluding remarks

 

To close, I want to say a few words about regulation and commercial interests.

 

Judging from the regulatory policies now being applied with growing force, a content product that wants to develop rapidly must first clear the regulatory bar. Some see this as a curse chanted over their heads; I do not.

 

A flood of bad content may seem to bring heavy traffic and commercial gain, but bad money drives out good: the platform is destined to lose its high-quality users, which undermines long-term development and can even push it to the margins of fierce competition. Second, unfettered growth also means eventually running headlong into a wall, and by the time you want to correct course, you may find the chance is gone.

 

On April 11, Toutiao CEO Zhang Yiming issued a statement which, condensed into a few sentences, roughly reads:

 

I started out as an engineer, and my original intention in founding the company was to build a product that would make it easy for users around the world to interact and communicate. In the past few years we put more energy and resources into the company's growth, but did not take adequate measures on the homework we owed in platform oversight and corporate social responsibility, such as effective governance of vulgar, violent, and harmful content and false advertising.
PS: The above is a paraphrase; for the original wording, see Zhang Yiming's statement.

 

A chart circulating online tallies the history of Toutiao being named and criticized. By its count, Toutiao was called out 16 times in four months this year, an average of once a week. Most recently, Neihan Duanzi was ordered to shut down, never to return.

 

Finally, the author would like to make two points. First, technology itself is indeed neutral, but how it is used and what content it spreads carry bias, and that bias must be corrected by people. Second, long-term commercial interests and regulation are not in conflict: for a content platform, taking regulation and content security seriously is a vital part of development, and I hope content entrepreneurs keep this in mind.

 

If you are a small or medium-sized startup that finds the cost of content review too high, lacks operations experience, worries about not fully grasping the relevant policies, or fears the investment won't pay off, you can try NetEase Yidun's anti-spam service for one-click access to the Yidun content security solution.
