After the Facebook data breach, what other AI crises await us?

from Medium

Author: François Chollet

Compiled by Heart of the Machine

Contributors: Bai Yue, Li Zenan


Since March this year, discussion of data privacy and concern about the future of artificial intelligence have surged once again, triggered by the Facebook data leak and by revelations of "big data price discrimination". These events remind us that private data, and the AI technology built on it, are no longer confined to a few countries or industries: they have penetrated many corners of everyday life.

Recently, François Chollet, a Google researcher and the author of the deep learning library Keras, spoke out about the Facebook incident, laying out his concerns about the development of AI and his suggestions for it.

Social networking services increasingly control the information we consume; what we see in our news feeds has become algorithmically "curated". Social media algorithms increasingly decide which articles we read, which movie trailers we watch, who we keep in touch with, and whose feedback we receive on the opinions we express. —François Chollet

Disclaimer: These are my personal views, not my employer's. If you cite this article, please present these views for what they are: personal, speculative opinions, to be judged on their own merits.

If you were around in the 1980s and 1990s, you may remember the now-extinct phenomenon of "computerphobia", which I witnessed firsthand as late as the early 2000s. As personal computers entered our lives, our workplaces, and our homes, many people reacted with anxiety, fear, and even aggression. While some of us were fascinated by computers and in awe of their potential, most people did not understand them. They found them alien and abstruse, and felt threatened by them in many ways. People worried about being replaced by technology.

Most of us react to technological change with rejection, even panic. Perhaps any change would provoke that reaction. But remarkably, most of the things we worry about never end up happening.

Fast-forward a few years, and those who feared computers have learned to live with them and to enjoy the convenience they bring. Computers did not replace us and did not trigger mass unemployment; today we cannot imagine life without laptops, tablets, and smartphones. A change that once felt threatening has become a comfortable status quo. But at the same time, precisely because we stopped fearing them, computers and the Internet have enabled threats that almost no one warned us about in the 1980s and 1990s: ubiquitous mass surveillance; hackers going after our devices and personal data; the psychological alienation of social media; the erosion of our ability to be patient and focused; vulnerability to political or religious radicalization; and hostile foreign powers using social networks to undermine Western democracies.

If most of our fears turn out to be absurd, then conversely, the genuinely worrisome developments that technological change has brought about in the past happened precisely because few people feared them before they arrived. A hundred years ago, we could not really foresee that the transportation and manufacturing technologies we were developing would enable a new form of industrial warfare that would kill tens of millions in two world wars. We did not recognize early on that the invention of radio would enable a new form of mass propaganda that would contribute to the rise of fascism in Italy and Germany. The advances in theoretical physics of the 1920s and 1930s were not accompanied by newspaper articles about how these developments would lead to nuclear weapons that would forever threaten to destroy the world. And even now, as the defining issue of our time, climate change, looms, a large portion of the American public (44%) chooses to ignore it. As a civilization, we seem to be very bad at correctly identifying future threats and holding legitimate fears about them, just as we are prone to panic over absurd ones.

Now, as so many times before, we face a new wave of fundamental change: cognitive automation, roughly summed up by the keyword "AI". And as so many times before, we fear that this new technology will hurt us: that AI will cause mass unemployment, or that AI will develop a will of its own, become superintelligent, and choose to destroy us.


Image credit: facebook.com/zuck

But what if we are worrying about the wrong things, as we have every time before? What if the real danger of AI lies far from the "superintelligence" and "singularity" scenarios that many people panic about today? In this post, I want to raise awareness of what I see as the real concern with AI: the efficient, highly scalable manipulation of human behavior that AI enables, and its malicious use by businesses and governments. Of course, this is not the only tangible risk arising from the development of cognitive technologies; there are many others, especially those related to the harmful biases of machine learning models, and other people are raising awareness of those issues far better than I could. I choose to write about mass population manipulation because I believe the risk is both urgent and not widely recognized.

This risk is already a reality, and it will be further amplified by long-term technological trends over the coming decades. As our lives become more digital, social media companies gain ever deeper visibility into our lives and our minds. At the same time, they increasingly control the channels through which we consume information, in particular through algorithmic news feeds, which act as behavioral control vectors. This turns human behavior into an optimization problem, an AI problem: a social media company can iteratively adjust its control vectors to elicit specific behaviors, just as a game-playing AI iteratively refines its strategy to improve its score. The only bottleneck in this process is the intelligence of the algorithm in the loop, and as it happens, the largest social networking companies are currently investing billions in fundamental AI research.

Let me explain in detail.

Social media as a psychological prison

Over the past 20 years, our private and public lives have moved online. We spend more time every day staring at screens, and more of what we do consists of consuming, modifying, or creating digital information.

A side effect of this long-term trend is that companies and governments are collecting vast amounts of data about us, especially through social networking services. Who we communicate with, what we say, what we've been consuming (images, movies, music and news), our mood at a given time. Eventually, almost everything we perceive and everything we do will be recorded on some remote server.

In theory, this data allows those who collect it to build very accurate psychological models of individuals and groups. Your opinions and actions can be cross-correlated with those of thousands of similar people, yielding an uncanny understanding of what makes you tick, probably more than you could gain through mere introspection (for instance, Facebook's algorithms can assess your personality more accurately than your own friends can). This data can predict, days in advance, when you will start a new relationship (and with whom) and when you will end your current one; who is at risk of suicide; or which side you will end up voting for in an election, even while you are still undecided. And this power is not limited to the individual level: it is even more predictive for large groups, because averaging behavior cancels out randomness and individual outliers.


Digital information consumption as a psychological control vector

And it does not stop at passive data collection. Social networking services increasingly control the information we consume; what we see in our news feeds has become algorithmically "curated". Opaque social media algorithms increasingly decide which articles we read, which movie trailers we watch, who we keep in touch with, and whose feedback we receive on the opinions we express.

Through years of exposure, the algorithmic curation of the information we consume gives these algorithms considerable power over our lives: over who we are and who we become. If Facebook decides, over many years, which news you see (real or fake), whose political status updates you see, and who sees yours, then Facebook effectively controls your worldview and your political beliefs.

Facebook's business lies in influencing people; that is the service it sells to its customers: advertisers, including political advertisers. To that end, Facebook has built a finely tuned algorithmic engine that does exactly that. This engine can influence not only which brand of smart speaker you buy next, but also your mood, making you angry or happy at will by adjusting what is shown to you. It may even be able to swing elections.

Human behavior as an optimization problem

In short, social networking companies can simultaneously know everything about us and control the information we consume. This trend is accelerating, and when you have access to both perception and action, you are looking at an AI problem. You can start building an optimization loop for human behavior, in which you observe the target's current state and keep adjusting the information fed to it until you start observing the opinions and behaviors you want. A large part of the field of AI, in particular "reinforcement learning", studies algorithms that solve such optimization problems as effectively as possible, closing the loop and achieving full control of the target at hand; in this case, full control of us. By moving our lives into the digital realm, we become ever more vulnerable to such algorithms.


Reinforcement learning loops for human behavior
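To make this concrete, here is a minimal sketch in Python of the kind of closed loop described above, reduced to an epsilon-greedy bandit. Everything in it is hypothetical: the content pool, the reward function, and the names are invented for illustration and are not drawn from any real system. The point is only that once "perception" (behavioral feedback) and "action" (choosing what to show) are both available, steering behavior becomes an ordinary optimization problem.

```python
import random

# Toy sketch, not any platform's actual system: an epsilon-greedy bandit
# that learns which piece of content best elicits a target behavior.
CONTENT_POOL = ["item_a", "item_b", "item_c"]  # hypothetical content variants

def observe_reward(content):
    """Stand-in for real-world feedback: a click, a like, dwell time, an expressed opinion."""
    return random.random()  # placeholder behavioral signal

def feed_loop(steps=1000, epsilon=0.1):
    value = {c: 0.0 for c in CONTENT_POOL}  # estimated reward per content item
    count = {c: 0 for c in CONTENT_POOL}
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the current best estimate ("action").
        if random.random() < epsilon:
            chosen = random.choice(CONTENT_POOL)
        else:
            chosen = max(CONTENT_POOL, key=lambda c: value[c])
        reward = observe_reward(chosen)  # "perception" of the user's reaction
        count[chosen] += 1
        value[chosen] += (reward - value[chosen]) / count[chosen]  # running mean
    return value  # which content best drives the target behavior

if __name__ == "__main__":
    print(feed_loop())
```

A production system would be far more elaborate, but the structure of the loop (select content, observe the reaction, update the policy) is the same.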

This is made easier by the fact that the human mind is highly susceptible to simple patterns of social manipulation. Consider, for example, the following attack vectors (a toy sketch of the audience-selection mechanism behind several of them follows the list):

  • Identity reinforcement: This is an old trick, exploited by the earliest ads in history and still as effective as ever. It consists of associating a given view with markers you identify with (or wish you did), so that you automatically side with that view. In the context of AI-optimized social media consumption, the controlling algorithm can make sure you only see content in which the views it wants you to hold co-occur with your own identity markers, and, conversely, that you rarely see those markers paired with views it wants you to abandon.

  • Negative social reinforcement: If you publish a post expressing a view the controlling algorithm does not want you to hold, the system can choose to show it only to people who hold the opposite view or who are extremely critical of yours (whether acquaintances, strangers, or bots). Repeated many times, this social backlash can pull you away from your initial views.

  • Positive social reinforcement: If you publish a post expressing a view the controlling algorithm wants to spread, it can choose to show it only to people (perhaps even bots) who will "like" it. This reinforces your belief and gives you the impression that you are part of a supportive majority.


  • Sampling bias: The algorithm may also preferentially show you posts from your friends (or from the media) that support the views it wants you to hold. Inside such an information bubble, you come to feel that these views enjoy far broader support than they actually do.

  • Argument personalization: The algorithm may observe that exposing people with psychological attributes similar to yours to certain content has produced the desired shift in their views. It can then serve you the content expected to be most effective for someone with your particular views and life experience. In the long run, the algorithm may even be able to generate such maximally effective content from scratch, specifically for you.
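As a purely illustrative sketch of the social-reinforcement and sampling-bias vectors above (hypothetical names, not any real platform's logic), the core mechanism reduces to a single design decision: choosing a post's audience according to predicted agreement.

```python
# Toy illustration only: the feed decides WHO sees a post depending on whether
# the author's view should be amplified (positive reinforcement) or
# discouraged (negative reinforcement). `predicted_agreement` stands in for a
# learned model; stances are toy scalars in [-1, 1].

def predicted_agreement(viewer, post):
    """Hypothetical score of how much a viewer will agree with a post."""
    return viewer["stance"] * post["stance"]

def select_audience(post, candidate_viewers, amplify=True, k=50):
    # Amplify: show the post to likely supporters; discourage: show it to likely critics.
    ranked = sorted(
        candidate_viewers,
        key=lambda v: predicted_agreement(v, post),
        reverse=amplify,
    )
    return ranked[:k]

# Example: a post with stance +0.8 routed either to supporters or to critics.
viewers = [{"name": f"user{i}", "stance": s}
           for i, s in enumerate([-0.9, -0.2, 0.1, 0.7, 0.95])]
post = {"stance": 0.8}
print([v["name"] for v in select_audience(post, viewers, amplify=True, k=2)])
print([v["name"] for v in select_audience(post, viewers, amplify=False, k=2)])
```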

From an information security perspective, you would call these vulnerabilities: known exploits that can be used to take over a system. In the case of the human mind, these vulnerabilities will never be patched; they are simply the way we work. They are in our DNA. The human mind is a static, vulnerable system that will come under increasing attack from ever-smarter AI algorithms that simultaneously observe everything we do and believe, and exert complete control over the information we consume.

The current situation

It is worth noting that the large-scale population manipulation, especially political manipulation, made possible by placing AI algorithms in charge of our information diet does not require very advanced AI. There is no need for self-aware, superintelligent machines for this to be a terrifying threat; current technology may well suffice. Social networking companies have been working on this for several years with remarkable results. While they may only be trying to maximize "engagement" and influence your buying decisions, rather than manipulate your worldview, the tools they have developed have already been hijacked by hostile states for political purposes, as in the 2016 Brexit referendum and the 2016 US presidential election. This is already a reality. But if mass population manipulation is already possible in theory, why hasn't the world been turned upside down yet?

In short, I think it is because we are still not very good at AI. But that may be about to change.

Until 2015, all ad-targeting algorithms across the industry ran on mere logistic regression. In fact, to a large extent that is still true today; only the biggest players have switched to more advanced models. Logistic regression, a technique that predates the computing era, is one of the most basic techniques you could use for personalization. It is why so many of the ads you see online are irrelevant to you. Likewise, the social media bots used by hostile states to sway public opinion contain little AI. For now, they are all very primitive.
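For readers unfamiliar with the technique, here is a minimal sketch of what logistic-regression ad targeting looks like. The features and data are invented for illustration; a real system would use millions of sparse features (demographics, browsing history, past clicks) and rank candidate ads by predicted click probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per (user, ad) pair.
# Features: age, interested_in_tech, interested_in_sports. Label: clicked or not.
X = np.array([
    [25, 1, 0],
    [34, 0, 1],
    [29, 1, 1],
    [41, 0, 0],
    [22, 1, 0],
    [38, 0, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# At serving time: score candidate ads for a user and show the one with the
# highest predicted click-through probability.
candidate = np.array([[30, 1, 0]])
print(model.predict_proba(candidate)[:, 1])
```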

Machine learning and AI have made rapid progress in recent years, and that progress is only beginning to be deployed in targeting algorithms and social media bots. Machine learning only started making its way into news feeds and ad networks around 2016, and no one knows what comes next. It is striking how heavily Facebook has invested in AI research and development, with the explicit goal of becoming a leader in the field. When your product is a social news feed, what use do you have for natural language processing and reinforcement learning?

We are looking at a company that builds fine-grained psychological models of nearly two billion humans, that serves as the primary news source for many of them, that runs large-scale behavioral manipulation experiments, and that aims to develop the best AI technology to date. Personally, this scares me. And Facebook may not even be the most worrisome threat. Many people like to imagine that big corporations are the all-powerful rulers of the modern world, but their power pales next to that of governments. If algorithmic control over our minds becomes possible, governments, not corporations, could turn out to be the worst offenders.

Now, what can we do? How can we protect ourselves? What can we do as technologists to avoid the risk of mass manipulation through our social news feeds?

The other side of the coin: what AI can do for us

Importantly, the existence of this threat does not mean that all algorithmic curation is bad, or that all targeted advertising is bad. On the contrary, both can serve valuable purposes.

With the rise of the Internet and artificial intelligence, applying algorithms to our information diet is not just an inevitable trend but a desirable one. As our lives become more digital, more connected, and more information-dense, we will need AI to serve as our interface to the world. In the long run, education and self-development will be among the most impactful applications of AI, and the dynamics at work there will almost entirely mirror those of a malicious AI-powered news feed trying to manipulate you. Algorithmic information management has great potential to help us, enabling individuals to better realize their potential and helping society to manage itself better.

The problem is not AI itself; the problem is control.

Instead of letting news feed algorithms manipulate users toward opaque goals, such as swaying their political views or wasting inordinate amounts of their time, we should give users control over the goals the algorithms optimize for. After all, we are talking about your news, your worldview, your friends, your life; the influence technology has over you should be under your control. Information management algorithms should not be mysterious forces serving goals that run counter to our own interests; instead, they should be tools in our hands, serving our own purposes, such as education and personal growth rather than entertainment.

Here's an idea: any algorithmic news feed with mass adoption should do the following (a configuration sketch follows the list):


  • Transparently convey what objectives the feed algorithm is currently optimizing for, and how those objectives affect your information diet.

  • Give you intuitive tools to set these goals yourself; for instance, you should be able to configure your news feed to maximize learning and personal growth in specific directions.

  • Feature an always-visible measure of how much time you are spending on the feed.

  • Feature tools to control the amount of time you spend on the feed, such as a daily time target that the algorithm takes into account to nudge you off the feed once the target is exceeded.
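As a sketch of what such user-facing controls might look like (field names invented, not any existing product's API), the objectives, time budget, and visibility settings could be an explicit, user-editable structure that the ranking algorithm is required to respect:

```python
from dataclasses import dataclass, field

@dataclass
class FeedSettings:
    """Hypothetical user-owned configuration passed to the feed's ranking algorithm."""
    # What the algorithm is allowed to optimize for, weighted by the user.
    objectives: dict = field(default_factory=lambda: {
        "learning": 0.6,          # long-form, educational content
        "staying_in_touch": 0.4,  # updates from close friends
        "engagement": 0.0,        # the user can zero out raw engagement
    })
    topics_to_emphasize: list = field(default_factory=lambda: ["machine learning"])
    daily_time_budget_minutes: int = 30  # the feed nudges you out past this
    show_time_spent: bool = True         # always-visible usage counter
    explain_ranking: bool = True         # show why each item was selected

settings = FeedSettings()
print(settings.objectives)
```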

Empower yourself with AI while staying in control

We should build AI to serve humans, not to manipulate them for profit or political gain. What would things look like if news feed algorithms did not behave like casino operators or propagandists? What if they were instead closer to a mentor or a good librarian: someone who uses a keen understanding of your psychology, and that of millions of others, to recommend the next book that will most resonate with you and help you grow? A navigation tool for your life: an AI that could guide you along the best path through experience space to get where you want to go. Can you imagine looking at your own life through the lens of a system that has seen millions of lives unfold? Or writing a book together with a system that has read every book? Or collaborating with a system that sees the full scope of current human knowledge?

In a product where you are fully in control of the AI you interact with, a more sophisticated algorithm is a benefit rather than a threat, since it lets you achieve your goals more effectively.

Building an Anti-Facebook

Overall, AI will in the future be our interface to the world of digital information. This could give individuals greater control over their lives, perhaps even with less dependence on institutions. Unfortunately, today's social media is headed down the wrong path, and turning it around will take time.

As an industry, we need to develop product categories and markets whose incentives put the algorithms that influence users under the users' own control, rather than building AI that exploits users' minds for profit or political gain. We need to strive to build the anti-Facebook.

In the more distant future, such products could take the form of AI assistants: digital mentors programmed to help you, whose goals you remain in control of as you interact with them. For now, search engines can be seen as an early, more primitive example of an AI-driven information interface that serves users rather than trying to hijack their mental world. Search is a tool you deliberately use to reach a specific goal, rather than a passive feed that chooses what you get to see; you tell it what it should do for you. A search engine should try to minimize the time from question to answer, from problem to solution, rather than maximizing the time you spend on it.

You might wonder: since a search engine is still an AI layer between us and the information we consume, could it bias its results to try to manipulate us? Yes, that risk exists for every information management algorithm. But in stark contrast to social networks, market incentives in this case are actually aligned with users' needs, pushing search engines to be as relevant and objective as possible. If they fail to be maximally useful, there is essentially nothing preventing users from switching to a competing product. Importantly, a search engine also has a considerably smaller psychological attack surface than a social news feed. The threat described in this article requires most of the following features to be present in a product:

  • Both perception and action: The product should not only control the information it shows you (news and social updates), it should also be able to "perceive" your current mental state through "likes", chat messages, and status updates. Without both perception and action, no reinforcement learning loop can be established. A read-only feed would only be dangerous as a potential channel for classical propaganda.

  • Centrality to our lives: The product should be a primary source of information for at least a subset of its users, and typical users should spend several hours a day on it. An ancillary, specialized feed (such as Amazon's product recommendations) is not a serious threat.

  • A social component: The social fabric of the product makes a far broader and more effective array of psychological control vectors available (in particular social reinforcement); an impersonal news feed has only a fraction of the leverage over our minds.


  • Business incentives that push toward manipulating users and making them spend ever more time on the product.

Most AI-driven information-management products do not meet these requirements. Social networks, on the other hand, are a frightening combination of risk factors. As technologists, we should gravitate toward products that do not have these characteristics and push back against products that combine them all, if only because of their potential for dangerous misuse. Build search engines and digital assistants instead of social news feeds; make your recommendation engines transparent, configurable, and constructive, rather than slot-machine-like systems that maximize "engagement" and waste people's time. Invest your UI, UX, and AI expertise in building good configuration panels for your algorithms, so that your users can use your product on their own terms.

Importantly, we should educate users about these issues so that they learn to reject manipulative products, generating enough market pressure to align the incentives of the tech industry with those of consumers.


Conclusion: a fork in the road ahead

  • Not only does social media know us well enough to build powerful psychological models of individuals and groups, it also increasingly controls our information diet. It has access to a range of potent psychological exploits with which to manipulate what we believe, how we feel, and what we do.

  • A sufficiently advanced AI algorithm, with access to a continuous loop of both perception of our mental states and action upon them, can be used to effectively hijack our beliefs and behavior.

  • Having AI serve as our information interface is not a problem in itself. Such AI interfaces, if well designed, could bring enormous benefits to all of us. The key point is that the user should stay in full control of the algorithm's objectives, using it as a tool to pursue their own goals (in the same way they use a search engine).

  • As technologists, we have a responsibility to push back against products that users cannot control, and to dedicate our efforts to building information interfaces that place users in control. Don't use AI as a tool to manipulate your users; instead, give your users AI as a tool to gain greater agency over their own lives.


One path leads to a place that genuinely scares me; the other leads to a more humane future. There is still time to take the better one. If you work on these technologies, keep this in mind: you may have no nefarious intentions, you may simply not care, or you may simply value your restricted stock units (RSUs) more than our shared future. But whether or not you care, because you have a hand in building the infrastructure of the digital world, your choices affect us all. And you may eventually be held responsible for them.
