What is ethical artificial intelligence? A preliminary understanding

Last night, while revising this post, I was trying to decide how to cover this topic. First, there is the issue of artificial intelligence (AI) itself. It is a broad field of study, and machine learning is just one of its applications. It has become the industry norm to refer to machine learning in general as "artificial intelligence", which isn't entirely accurate. In this article, we will use "artificial intelligence" to refer to the field of study and all of its applications, including machine learning.

Then there are the ethical issues. I mean, ethics -- if you look it up on Wikipedia, you'll find that the concept predates formal philosophical theory (although its early meaning was closer to "virtue" or "excellence"). As you'd expect, a lot has been written about the definition alone. And ethical issues arise at least as often in other branches of software as they do in my own. As Charles Humble and Heather Joslyn recently pointed out, "Even without artificial intelligence, bad software can ruin lives".

When it comes to "AI ethics", people usually think of Asimov's famous Three Laws of Robotics, which is unfortunate, because those are based on a huge misunderstanding. Perhaps we can find a better definition in a recent piece of US news: "Portman, Heinrich urge NSF to prioritize safety and ethics in AI research and innovation".

The press release is about a letter from two senators to the National Science Foundation (NSF), which -- in addition to political and scientific merit -- contains the following definitions:

"Broadly, AI safety refers to technical efforts to improve AI systems in order to reduce their dangers, while AI ethics refers to the quantitative analysis of AI systems to address issues ranging from fairness to potential discrimination."

It was not specified which dangers might be relevant to AI safety, although the image that comes to mind is Terminator's Skynet. However, "artificial intelligence has the potential to discriminate" is repeated several times in the letter. This is absolutely understandable, especially given the current sociopolitical climate in the United States. At the same time, repeated use of the word "discrimination" makes "AI ethics" appear to be reduced to "AI bias". This is a dangerous downscaling, because while AI bias is undoubtedly a problem, many other aspects of our daily lives that are affected by AI can be viewed through an ethical lens.

How can I help you help me help you?

Earlier this year, I had the opportunity to watch a lecture entitled "L'éthique ou comment protéger l'IA de l'Humain pour protéger l'Humaine de l'IA" -- which can be loosely translated as "ethics, or how to protect artificial intelligence from humans in order to protect humans from artificial intelligence". In this talk, Professor Amal El Fallah-Seghrouchni -- a world-class artificial intelligence researcher and a member of UNESCO's World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) -- divides the fields affected by artificial intelligence into:

  • society (more specifically, the impulse to control that comes with collecting all available data and everything that can be inferred from it);
  • the ecosystem (increased use of computing resources leads to increased energy consumption); and
  • "human life and the human spirit" (new ways of acting and thinking, human interactions, and the resulting cognitive decision-making processes).

AI has an impact on each of these fields—some easier to identify than others. For example, if we consider the massive use of user data by AI-powered systems, it is quite reasonable to understand that AI can directly affect everything related to privacy. Also, given the nature of most AI-driven decision-making processes, transparency can be an issue -- we'll talk about that later.

Other affected aspects can also be deduced from simple causal analysis, and they relate to responsibility and accountability. Let's say an automobile factory decides to use artificial intelligence to create a more cost-effective manufacturing process, and the AI-powered software concludes that a certain part of the vehicle could be assembled differently. If it doesn't work out and the new process actually ends up costing more than the old one, the software vendor or development team can be blamed, the change will be rolled back, the factory will lose some money, someone might get fired -- but in the end, life goes on.

Now -- imagine we're talking about an AI-powered justice system where software replaces juries. As far-fetched as it sounds, we don't really need a giant piece of software for this to matter: existing legal systems (and the decisions made within them) are already influenced by the data and algorithms used in localization systems and by other automated decisions. If someone is found guilty when they are not, as a consequence of a bug (and, as we know, no software is free of bugs) -- who is to blame? What are the consequences for all parties involved? And if we accept that transparency is an issue, how do we even evaluate justice?

Analyzing the impact of AI computing on our ecosystem may not have been easy in the recent past, but the global chip crisis and the growing criticism of resource-intensive cryptocurrency mining allow us to draw a very clear parallel. AI modeling is computationally expensive. Processing more data requires more resources and more energy -- we can't escape that. Many AI workloads run in the cloud, but cloud data centers are themselves huge consumers of energy, and while the major cloud providers have pledged to become carbon neutral, they haven't gotten there yet and may never fully do so. And that's just the data centers, which are the relatively easy part of the problem. As our smartphones get more powerful, we need more energy and bigger batteries -- a need fueled by ever more complex, resource-hungry apps. As Holly Cummins' recent article for WTF shows, we've gotten to the point where we're talking about green software engineering. Unlimited clean energy remains a utopia.
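To put a rough number on "computationally expensive", here is a purely illustrative back-of-envelope sketch of how one might estimate the energy footprint of a single training run; every figure below is an assumption made up for the example, not a measurement from any real system.

```python
# Back-of-envelope estimate of the energy used by a hypothetical training run.
# All values are illustrative assumptions, not measurements.
gpu_count = 8               # assumed number of accelerators
gpu_power_watts = 300       # assumed average power draw per GPU
training_hours = 72         # assumed wall-clock training time
pue = 1.5                   # assumed data-centre overhead (Power Usage Effectiveness)
grid_kg_co2_per_kwh = 0.4   # assumed carbon intensity of the local grid

energy_kwh = gpu_count * gpu_power_watts * training_hours / 1000 * pue
co2_kg = energy_kwh * grid_kg_co2_per_kwh
print(f"~{energy_kwh:.0f} kWh, roughly {co2_kg:.0f} kg of CO2")
```

Multiply that by every experiment, every retraining, and every team doing the same thing, and the ecosystem question stops being abstract.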

How ethical is it to have to charge our phones two or three times every day just to keep using our very expensive data-crunching apps (whether powered by AI or not)?

And then we have the less obvious aspects, like those related to human psychology: we already know that digital addiction is a real condition. So -- how much should we expect the number of digital addicts to change because of the use of AI in, say, targeted advertising? Is it ethical to design products that deliberately use artificial intelligence and psychology to seduce consumers, as advocated by the likes of Nir Eyal? And once the already-new paradigm of digital dating becomes driven by artificial intelligence, what will happen to our relationships with other human beings? To what extent does entrusting our reasoning to AI influence other parts of our lives? And if you choose to do so -- who exactly is in charge?

Looking at these issues from a higher level, we come to two different conclusions. The first -- as stated earlier -- is that AI ethics is not just about bias in AI. Of course, bias exists and is a problem, as it affects anything related to decision-making (especially analytical decision-making). But if we analyze all the fields affected by AI and ask what is right or wrong in each of them, there are still many other questions that need to be answered and resolved.

The second -- as proposed by Prof. Fallah-Seghrouchni -- is that we need to see ethics as the systematic study of behavior, grounded in a global and evolving framework of values, independent principles, and actions. Without this, we can't provide answers in a responsible way, or even know whether we should adopt AI-driven systems in the first place. In other words, ethics must be viewed as a dynamic basis for evaluating, situating, and using AI-driven technologies.

In practice, this means that adopting AI ethically also means looking at every question we expect AI to answer and considering how the answer affects everything else. That is something even we humans struggle to do -- there's a whole discussion here about morality versus ethics, and about our different, often culturally shaped notions of right and wrong. As humans, we make decisions and are held responsible for them through our conscience, through social norms, or ultimately through the law. But we also know that it is impossible, at least given the current state of technology, to fully replicate the human reasoning process in a system. So if it is hard for us to reach a "globally correct" conclusion, it is impossible for any software to do so. That doesn't mean we can't start tackling the problem, though.

AI bias, or the "This is Minecraft" problem

We've touched on this topic at least twice, so let's address the question: what does it mean to be biased when we talk about AI? Let's start by defining what we consider "bias". We can find a lot just by googling the term, but I especially like this article. The authors list 20 biases that can affect (or impair) our cognitive processes -- or, if you prefer, our decision-making processes.

We can immediately connect several of these biases to our context. For example, take the "availability heuristic", which involves overestimating the importance of the information that happens to be available. This is a classic "don't" in economics: if you give equal importance to all data in a purely analytical procedure, you end up with the "clustering illusion" bias. That's when you find correlations that aren't really there, like linking how hot Miami is to the birth rate of kangaroos in Australia. Machine learning gets a little more complicated, but the principle is the same: if you (i) don't choose the right learning algorithm for your problem or (ii) feed it too much data, you can definitely expect your system not to give you correct answers. The same analogy applies to many of the other biases listed in the article.
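To make the "clustering illusion" concrete, here is a minimal, illustrative Python sketch (the "temperatures" are fake): generate enough unrelated random series and one of them will "correlate" surprisingly well with almost anything.

```python
# Clustering illusion in miniature: with enough unrelated series,
# some will correlate strongly with our target purely by chance.
import numpy as np

rng = np.random.default_rng(42)
miami_temps = rng.normal(30, 3, size=24)        # fake monthly temperatures
candidates = rng.normal(0, 1, size=(1000, 24))  # 1,000 unrelated random series

correlations = [np.corrcoef(miami_temps, series)[0, 1] for series in candidates]
best = max(correlations, key=abs)
print(f"Strongest 'correlation' found in pure noise: {best:.2f}")
```

A purely analytical procedure that weighs all of this data equally would happily report that "discovery"; a person (or a well-designed pipeline) should recognize it as noise.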

Other types of bias that can affect your algorithm are related to "selective perception" and "stereotypes". You're probably familiar with the "Is this a Chihuahua or a muffin?" problem (which is very real, by the way). In machine learning, selective perception can also come from training a learning algorithm on "the right data" -- a dataset that merely confirms your existing analysis. Using thousands of photos of the same two or three people to train a facial recognition algorithm doesn't teach the algorithm to recognize faces that are too different from those original ones. That is a clear bias. But what if your dataset is biased and you don't even know it?
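One simple, if crude, way to catch that kind of blind spot before training is to look at how the data is distributed. The sketch below is hypothetical -- the records, field names, and groups are made up -- but the idea is just counting:

```python
# Minimal sanity check: how many samples per identity and per group?
# The records and field names below are hypothetical.
from collections import Counter

training_samples = [
    {"person_id": "A", "group": "light-skinned"},
    {"person_id": "A", "group": "light-skinned"},
    {"person_id": "B", "group": "dark-skinned"},
    # ... in a real dataset, thousands of rows would follow
]

by_person = Counter(s["person_id"] for s in training_samples)
by_group = Counter(s["group"] for s in training_samples)
print("Samples per identity:", by_person)
print("Samples per group:   ", by_group)
# If a handful of identities (or a single group) dominate the counts,
# the model will mostly learn to recognize *them*, not faces in general.
```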

Many articles have been written about the problem of biased datasets and algorithms in automated decision-making. Staying with facial recognition algorithms: their failure rates vary widely based on factors like gender or race. This happens because the data used to train them was biased -- most of the faces in the training sets were white, or male, or both. So -- if your software fails to identify a person, and that failure (i) can be traced back to a biased dataset and (ii) has other effects (mental distress, financial loss) on one or more individuals -- who is responsible, and what should the consequences be? If there is no law regulating the provider of the data (most likely a different company that sells services to software developers), where does the chain of responsibility stop?
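Before the legal questions even arise, there is a technical step that is easy to skip: breaking evaluation results down by subgroup instead of looking only at overall accuracy. A minimal sketch, with hypothetical results, might look like this:

```python
# Per-group failure-rate audit on a held-out evaluation set.
# The (group, prediction_correct) pairs below are hypothetical.
from collections import defaultdict

results = [("group_a", True), ("group_a", True), ("group_a", False),
           ("group_b", False), ("group_b", False), ("group_b", True)]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    print(f"{group}: failure rate {errors[group] / totals[group]:.0%}")
# A large gap between groups is a red flag, even when the overall
# failure rate looks acceptable.
```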

Of course, there are many other types of bias to consider. If hardcoded inferences are based on cultural or local norms and rules (e.g., the legal drinking age in different countries), the algorithm may be biased as well; see the tiny example below. We already know that it's impossible to be truly ethical as a machine (unless you actually solve the whole problem of replicating human reasoning, patent pending). But as a developer, engineer, manager, or executive, you've probably arrived at the real question: can we measure how ethical our software is?
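The drinking-age point can be made with a deliberately trivial, hypothetical snippet: a check that silently assumes one country's rule applies everywhere.

```python
# A hard-coded cultural/legal norm hiding inside an innocent-looking check.
LEGAL_DRINKING_AGE = 21  # true in the US, not in most other countries

def may_purchase_alcohol(age: int) -> bool:
    return age >= LEGAL_DRINKING_AGE

print(may_purchase_alcohol(19))  # False here, even where 19 is perfectly legal
```

Nothing about the code is "wrong" in isolation; the bias only appears once you ask where the constant came from and whom the system will be used on.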

Know the rules of ethics

There is a lot of research related to AI ethics, from how to implement ethical principles to actual ethical guidelines for AI. Oddly enough, in this context, Asimov's aforementioned Three Laws are not a code of ethics -- in fact, they might even be considered unethical: complex moral questions rarely have a simple yes-or-no answer. The Three Laws are often used to convey a sense of trust -- as long as robots follow them, they can be trusted not to harm any human being. But that is a matter of safety: following the Three Laws doesn't make every decision inherently good or bad.

Establishing a code of ethics for AI is hard. Last year, AI ethics researcher Thilo Hagendorff published a comprehensive assessment of the guidelines used in the development of AI systems. In this paper, Hagendorff compares 22 different guidelines currently in use, examining the extent to which their ethical principles are actually implemented in AI systems (by the way, he also examines bias among the authors of those guidelines). The conclusion is blunt: AI ethics fails in many cases, mainly due to the lack of an enforcement mechanism:

"In practice, AI ethics is often seen as irrelevant, a residue or some kind of 'addition' to technical issues, a non-binding framework imposed by bodies 'outside' the tech community. Decentralized Responsibility combined with a lack of understanding of the long-term or broader socio-technical consequences leads to a lack of accountability among software developers or perceptions of the moral significance of their work."

The author also notes that these considerations have two consequences (paraphrased here) for AI ethics:

i. In AI and ML, a stronger focus on technical details is needed to bridge the gap between ethical and technical discourse.
ii. AI ethics should move from describing purely technological phenomena to a stronger focus on social and personality-related aspects. AI ethics, then, is less about AI itself and more about ways of deviating from problematic routines of action, discovering blind spots in knowledge, and gaining individual self-responsibility.

If you thought these two consequences were somewhat contradictory, you'd be right. To actually measure and understand how ethical your system is, you need to find a way to technically implement abstract values. But to do that, you need to understand what's really going on inside your AI-powered system (I won't get into it here, but know that "explainable AI" is a big research trend, for a reason). At the same time, the more you focus on the abstract values, the more you distance yourself from the technical part. While he doesn't explain how to resolve this dilemma, Hagendorff acknowledges that "finding a balance between the two approaches" is a challenge for AI ethics.
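As a small illustration of what "technically implementing" part of this can look like, here is a minimal sketch of permutation importance, one common model-agnostic explainability technique (my example, not something prescribed by Hagendorff): shuffle one feature at a time and see how much the model's accuracy drops. The model and feature names are placeholders.

```python
# Permutation importance: how much does accuracy drop when one feature
# is shuffled (i.e., its information is destroyed)?
import numpy as np

def permutation_importance(model, X, y, feature_names, seed=0):
    rng = np.random.default_rng(seed)
    baseline = np.mean(model.predict(X) == y)
    scores = {}
    for i, name in enumerate(feature_names):
        X_shuffled = X.copy()
        X_shuffled[:, i] = rng.permutation(X_shuffled[:, i])
        scores[name] = baseline - np.mean(model.predict(X_shuffled) == y)
    return scores  # big drop => the model leans heavily on that feature

# Usage sketch (assumes `model` exposes a scikit-learn-style .predict):
# print(permutation_importance(model, X_test, y_test, ["age", "income", "zip_code"]))
```

If "zip_code" turned out to drive most of the decisions of, say, a lending model, you would have found a very concrete place where abstract values and technical details meet.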

Of course, this leads us to the practical question at hand: if we want to know whether (or at least how) our AI systems are ethical, we need to adopt a set of guidelines -- while also understanding that those guidelines are not, and probably will not be for some time, all-encompassing.

A balance needs to be struck, and recognizing that in itself is a very difficult first step.

New romantic cyborg

What we know so far is that (1) the ethics of artificial intelligence involves a whole universe of considerations and questions, and (2) whichever side of the problem we choose to address may be just the tip of the iceberg. Also, huge as that is, so far we have only looked at things from the system's perspective -- how the immediate application or implementation of AI affects us. You might think this is a one-way street, as in "software's impact on us humans" -- but unfortunately, that's not the case.

We already understand that the human psyche can be influenced by AI, and this influence can be direct or indirect. A direct impact can be seen in how an e-learning system reacts when you get a third question wrong in a row: seeing the same "you're wrong" message again is demotivating. Indirect effects come, for example, from the AI-driven algorithms used in dating apps (as discussed above).

However, our relationship with AI-driven systems -- the interaction itself -- can also be considered ethical or unethical, depending on which abstract aspects we take into account. This means that AI experiences are themselves subject to ethical analysis -- you can follow all the guidelines and still have problems if your system doesn't interact with users ethically, and vice versa. For a dating app: is it okay if you know enough about the parameters entered by users to "rig the game"? Going back to the question of liability -- if your actions had reprehensible results, to what extent is the AI to blame as well? Or, to put it another way: if an AI is susceptible to unethical interactions, is it itself unethical? This is actually an old question, if you consider every judicial battle ever fought over "terms of use" -- but it remains open.

And what about those without access to smartphones, high-speed internet, or any modern communication technology? Do these people become outcasts once our behavior changes because of our use of artificial intelligence? Is it ethical if our software doesn't handle situations like theirs? Are we (exposed to and altered by AI) being ethical if we push our new ways of doing things as the "new default"?

Facts don't cease to exist because they're ignored

At this point, you may feel intimidated, perhaps even afraid to go near artificial intelligence. Please don't. "Ethical AI" is much bigger than any of the problems illustrated here in isolation. We don't have a perfect solution; the topic is deep, the questions are many, and the impact on our society is huge. We don't need to banish all AI, though, as if we were entering a technological dark age. But we do need to understand that "AI ethics" is not just a hyped marketing term, or the tech version of the neighborhood-friendly "Hey, we recycle".

Before we start buying and using AI-powered systems (or building them ourselves) just because "they're the future", there are a lot of things we need to understand -- including the ethics involved. We're talking about something that affects how we think, decide, and -- ultimately -- live. Consequences, responsibility, and accountability are all part of the package. This shouldn't be scary, but it doesn't change the fact that the discussion is already here. So, let me ask you: how ethical is your AI?

Origin: blog.csdn.net/community_717/article/details/129631482