Shenzhen University "Computer Topics" Homework: The Impact of Big Data and Artificial Intelligence on Human Life

Notes

This is a group assignment based on a report by a Tencent researcher (i.e., a written reflection on the report). It is divided into four sub-questions discussing ethical issues in the era of artificial intelligence. For copyright reasons, the report itself is not reproduced here; only our answers are shown.

First Question

(1) How do you view the abuse of AI technology? Please analyze from the perspective of IT companies' professional ethics and social responsibility. (20 points)

In recent years, AI technology has made breakthroughs in more and more fields. As artificial intelligence matures, its core techniques are becoming cheaper to apply, and the accompanying risks to privacy, personal safety, and data security cannot be ignored. For example, criminals have used AI to swap celebrities' faces onto pornographic video performers, and Facebook user data was harvested to influence a presidential election.

From the perspective of professional ethics, the typical case described on page 23 of the report, the fake pornographic videos, violates several of the General Ethical Principles in the ACM Code of Ethics and Professional Conduct, including "contribute to society and to human well-being", "avoid harm", "be honest and trustworthy", and "respect privacy", as well as the additional obligations the Code places on computing professionals. The forgers abused artificial intelligence technology, did not carry out their work with dignity, disregarded the relevant laws, and violated the women's right to reputation. They failed to fulfill their social and legal responsibilities and thus violated professional ethics.

From the perspective of social responsibility, on the other hand, the abuse of AI technology violates both the responsibility for social development and the responsibility toward people. The progress of spiritual civilization requires healthy and noble ideology and morality, and demands resisting and eliminating the influence of negative and decadent content (including the abuse of AI for pornography). This is also part of the responsibility that IT companies and related practitioners bear toward human society.

As Google CEO Sundar Pichai has said, new artificial intelligence tools such as self-driving cars and disease-detection algorithms need ethical guardrails, and their creators must consider what it would mean for the technology to be misused. Once we use the word "abuse", the phenomenon under discussion already falls into the "should not do" part of IT professional ethics. AI technology is powerful, but the more it can do, the more harm it can cause. If an IT company promotes the abuse of AI technology, it not only discredits itself but also damages the reputation of the entire industry.

Whether AI technology can develop well is closely tied to how those who master it choose to use it. If IT companies do things that "should not be done", such as sensationalizing face-swap videos or even violating privacy to produce pornographic videos, the public will come to fear AI and the government will guard against the technology more strictly. AI is undoubtedly a very powerful technology, but only when IT companies abide by professional ethics and shoulder their social responsibilities can they create a healthy industry environment in which society holds good expectations of, and tolerance for, AI.

We believe that at this stage the abuse of AI technology should be curbed by those who actually master it, namely IT companies and their engineers. Observing professional ethics and assuming social responsibility protect not only the public's privacy and security but also the industry's own prosperity. Scientific and technological progress requires tolerance and freedom, so misuse that could easily cause public panic must not be allowed to happen. IT companies should therefore regulate themselves, abide by professional ethics, dare to take on social responsibility, and prevent AI technology from being abused, thereby promoting the healthy flourishing of the AI field.

Second Question

(2) Will artificial intelligence algorithms such as news recommendation, recruitment screening, and crime risk assessment cause harm to humans? Please analyze from the perspectives of software quality, IT risk, and risk management. (20 points)

We believe that these types of AI algorithms can cause harm to humans.

With the rapid development of big data and artificial intelligence, algorithmic recommendation tailored to individual needs is becoming a new technical paradigm for information distribution. Algorithms are growing ever more intelligent and more automated; beyond following natural and logical reasoning, they can even replace collective human intelligence in decision-making. Judging from current practice, the communication revolution brought about by algorithmic recommendation is subtly affecting every aspect of social life, and it naturally carries hidden risks.

From the perspective of software quality, artificial intelligence algorithms carry risks and hazards when widely deployed, because the software is highly complex, the underlying principles are hard to understand, and the decision process is a black box. Consider the recommendation results themselves: in the mainstream news recommendation mechanism, the platform infers a user's interests and needs from data such as likes and comments, then pushes content that matches those inferred interests [1]. On the one hand, this achieves efficient matching of people and information and saves users time; on the other hand, it lets users avoid content they are not interested in and selectively consume only topics they like, which in the long run intensifies the formation of information cocoons.
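The content-based matching described above can be sketched as a minimal, hypothetical example (the term-frequency profile, cosine similarity, and sample articles are all our own illustration, not any real platform's mechanism):

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_history, candidates, k=2):
    """Rank candidate articles by similarity to the user's reading history."""
    profile = Counter()
    for doc in user_history:          # aggregate terms the user engaged with
        profile.update(doc.split())
    scored = sorted(candidates,
                    key=lambda d: cosine(profile, Counter(d.split())),
                    reverse=True)
    return scored[:k]

history = ["football match goal", "football transfer news"]
pool = ["stock market report", "football final tonight", "new phone review"]
print(recommend(history, pool, k=1))  # the football article ranks first
```

Even this toy version shows the cocoon mechanism: every recommendation reinforces the existing profile, so topics the user never clicked on gradually disappear from the feed.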

Recruitment and crime risk assessment algorithms scan a user's social circle, mine occupation, location, gender, age, and other personal information, and perform tag matching [1]. In practice, the designers of an algorithm often place certain value appeals at the top, or deliberately highlight certain value orientations, in order to achieve some expected result, so a bias of values is always embedded in the algorithm and expressed through its concrete form [1]. In operation, the algorithm not only perpetuates this bias but may keep strengthening and amplifying it as data volume grows and the algorithm is iterated.
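The feedback loop by which a small initial bias gets amplified across retraining iterations can be illustrated with a toy simulation (the group labels, the proportional selection rule, and the "squaring" retraining step are invented purely for illustration; real screening systems are far more complex):

```python
def simulate(rounds=5, base_rate=(0.55, 0.45)):
    """Toy feedback loop: a screener selects candidates from groups A and B
    in proportion to past selection rates, then 'retrains' on its own output.
    Squaring the observed shares mimics a model sharpening around its
    majority class. Returns group A's selection share per round."""
    rates = dict(zip("AB", base_rate))   # slight initial bias toward A
    history = []
    for _ in range(rounds):
        total = rates["A"] + rates["B"]
        # select 100 candidates this round, proportional to current rates
        picks = [g for g in "AB" for _ in range(round(100 * rates[g] / total))]
        counts = {g: picks.count(g) / len(picks) for g in "AB"}
        rates = {g: counts[g] ** 2 for g in "AB"}  # feedback: retrain on output
        history.append(counts["A"])
    return history

shares = simulate()
print(shares)  # group A's share grows every round despite a tiny initial gap
```

Starting from a 55/45 split, group A's share climbs monotonically toward dominance: the model's output becomes its own training signal, which is exactly the amplification dynamic described above.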

Analyzed from the perspective of IT risk, the existence of algorithmic bias reflects errors and imperfections in the development of the application system.

We believe that artificial intelligence algorithms carry the following risks: disordered information values, information addiction, and information narrowing.

The wide application of recommendation algorithms has shifted control over information production and circulation to intelligent machines and to diverse individual users. Under a profit chain where traffic is supreme, recommendation algorithms attend to users' information preferences rather than the public value of the information itself: as long as users pay attention, information makes the headlines no matter how low its quality; as long as users ignore it, information is worthless in the eyes of the algorithm no matter how high its quality [2].

By pushing content that matches users' preferences, recommendation algorithms have strengthened users' dependence on information platforms to an unprecedented degree. People are immersed in a digital world woven by algorithms, and much of their time is consumed by digital production and digital consumption. Moreover, the information addiction fostered by recommendation algorithms and the generalization of entertainment are closely linked: driven by the wave of consumerism and the addiction mechanisms of algorithmic recommendation, the generalization of entertainment is spreading into the public sphere as well [2].

The powerful information-filtering capability of AI algorithms not only improves the efficiency of information allocation but also subtly builds a wall that separates different user groups from exchanging views and integrating values: users more readily endorse like-minded opinions and ignore or reject competing ones. Over time, what people receive is a "narrowed" stream of information.

How should these risks be managed? We believe that after risk identification and risk analysis, effort should go into the stages of risk planning, risk tracking, and risk control. First, in the planning stage, the algorithm should be regulated through technology itself: many problems brought about by technology can be solved by further optimizing the technology, opening the algorithm's black box to improve transparency, and ultimately establishing an information balance between users and the platform. Second, without the guidance of mainstream values an algorithm will run wild and out of control, so mainstream values should be placed at the core of the algorithm. In the risk tracking stage, compared with the after-the-fact mode of publishing first and deleting later, we believe pre-publication checks should be strengthened, combining manual review with algorithmic review to exploit the strengths of both and provide real-time feedback on risk information. Finally, for risk control, we believe that governing artificial intelligence through the rule of law is the trend of the times: on the one hand, the relevant departments' legal awareness of supervising AI algorithms should be further enhanced; on the other, legislative efficiency in the field of algorithmic recommendation should be improved to make legislation more forward-looking and effective.

Third Question

(3) Is artificial intelligence technology a helper or an enemy of mankind? Please analyze from the perspective of the social impact of information technology. (20 points)

Is artificial intelligence friend or foe? We believe it should be viewed dialectically. In terms of its nature as a tool, AI, like traditional tools, can assist human production and life and is a good helper; on the other hand, because of the new problems that arise as the technology keeps developing, it may also become a potential enemy.

At the current level of technology, artificial intelligence can only exist as a tool for humans for a long time to come, and cannot reach the levels and benchmarks of strong artificial intelligence. But if one day artificial intelligence achieves intelligence equal to or even higher than humans', what the technology means for society and humanity becomes an open question.

We should think about this problem dialectically. On the one hand, strong artificial intelligence could bring us a more convenient life and a more developed social model, and even solve social, academic, and engineering problems that humans cannot. With its own judgment and less susceptibility to outside influence than humans, it is well suited to large-scale application in medicine, rescue, the military, scientific research, and other fields. Computing power beyond human capability also makes it likely to crack important problems in mathematics, physics, or computer science that humans cannot currently answer. From this point of view, the development of artificial intelligence technology is both necessary and worthwhile.

On the other hand, the impact of artificial intelligence on human society may have long since gone beyond the "information rights", "digital divide", and "globalization" discussed in textbooks. In the future, the unprecedented development of artificial intelligence may have an equally unprecedented impact on traditional social forms, social structures, and social ethics.

If artificial intelligence really becomes that powerful, where will human beings stand? Strong AI, by its mechanical nature, could occupy almost every occupation in the world that does not require creativity, and employers would surely prefer artificial intelligence that needs no wages over human workers. Unemployment would then rise sharply, whether social security could adapt to the change would be unknown, and the value and meaning of human beings themselves would be called into question. The "Omnic Crisis" (Fig. 1) reflects fears of a rebellion by strong artificial intelligence, but even if AI never rebels, its impact is seriously double-edged. Humanity should treat this technology more cautiously and rationally, strengthen legislation and other regulatory work, study the ethical issues of artificial intelligence in depth, and think about how to maximize the convenience it brings while minimizing its negative impact.

No matter how far artificial intelligence develops, we should make it a good helper that brings equality and well-being to mankind, not a tool that exacerbates human inequality or a new overlord that rules over humanity.
Figure 1: Omnic Crisis

Fourth Question

(4) Do you think autonomous driving has loopholes, and can humans trust it? Who should be held responsible in a driverless car accident? Please analyze from the perspective of IT users' risk awareness. (20 points)

We believe that at this stage humans cannot fully trust autonomous driving technology; it can only be treated as an auxiliary tool and a research subject.

First, we believe that current autonomous driving technology is still immature and certainly has loopholes. What the international mainstream calls "autonomous driving" today mainly exists in the form of "computer-assisted driving". Humans are still a long way from being able to fully trust autonomous driving.

Technically, although advances in artificial intelligence and computer vision algorithms such as deep learning and reinforcement learning have brought autonomous driving to a high standard in object detection and path planning, in complex situations such as extreme weather and complicated road conditions, with so much "noise", the recognition rate of the driving algorithm degrades sharply. Research on the robustness of neural networks is still at an early stage, and the frequent accidents of self-driving cars raise ethical and liability questions that are even harder to settle than the technical ones.
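How input noise erodes recognition accuracy can be illustrated with a deliberately simple toy classifier (the nearest-centroid model, the "car"/"pedestrian" labels, and the noise levels are our own assumptions for illustration, not a real perception stack):

```python
import random

random.seed(42)

# Two well-separated 2-D classes and a nearest-centroid classifier.
centroids = {"car": (0.0, 0.0), "pedestrian": (4.0, 4.0)}

def classify(x, y):
    """Assign the class whose centroid is nearest to the point (x, y)."""
    return min(centroids,
               key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2)

def accuracy(noise_std, n=2000):
    """Fraction of noisy samples still classified correctly."""
    correct = 0
    for _ in range(n):
        label, (cx, cy) = random.choice(list(centroids.items()))
        x = cx + random.gauss(0, noise_std)   # sensor noise on each axis
        y = cy + random.gauss(0, noise_std)
        correct += classify(x, y) == label
    return correct / n

clean, noisy = accuracy(0.5), accuracy(4.0)
print(clean, noisy)  # accuracy collapses as noise grows
```

With mild noise the classifier is nearly perfect, but once the noise is comparable to the class separation a large fraction of samples cross the decision boundary, which is the same failure mode, in miniature, that extreme weather induces in real perception systems.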

Take the multiple recent traffic accidents involving Tesla vehicles as an example. Who should bear the main responsibility for a driverless traffic accident: the car owner or the car manufacturer? Or should the main responsibility shift to local regulators and the legislators of laws governing autonomous driving?

For accidents like these, it is unfair to impose excessive responsibility on car owners. The owner has legally purchased a self-driving car permitted by local law and is driving it legally on the road, with the main driving operations performed by algorithms; why, then, should the owner bear the main responsibility for an accident? It is like taking a taxi: if the taxi has an accident, how can the main responsibility be pushed onto a passenger? What is more, most autonomous driving accidents stem from algorithm failures or mechanical faults, which are not the owner's fault.

Therefore, we believe the main responsibility lies in the negligence of the car's designers and of the transportation authorities.

From the perspective of IT users' risk awareness, car owners do bear some responsibility for failing to take precautions against and foresee potential risks; the transportation department, however, also falls into the category of "IT users". By issuing Tesla a license without fully verifying its safety, the department showed a clear lack of awareness of, and precaution against, the risks of autonomous driving technology, which amounts to a dereliction of duty.

Therefore, given considerations of IT ethics and social responsibility, the immaturity of the algorithms and other software and hardware, weak supervision by the relevant departments, and serious lag in legislation, autonomous driving technology has not been and should not be widely promoted before it matures. Yet there is nothing wrong with the technology itself: technological revolution underpins the growth of productivity. When the automobile was first invented, many similar accidents occurred, and public opinion and social movements against it were fierce; people then would rather ride an old, bumpy horse-drawn carriage than a car, but today the carriage has long since vanished into the dust of history. The future looks at the present as the present looks at the past: once self-driving technology matures, it will be far better received.

References:
[1] Zhang Lin. Ideological Risk and Its Governance in Intelligent Algorithm Recommendations [J]. Exploration, 2021(01): 176-188.
[2] Chen Haijun, Zou Junbo, Zhao Ying, Fei Dexin. Research on Ethical Issues of Intelligent Recommendation Algorithms on Information Platforms [J]. New Media Research, 2021, 7(07): 32-34+63.

Reflections

This assignment was completed as a team. While completing it, every student voiced opinions, so the answers contain not only analysis based on textbook knowledge but also broader thinking beyond the textbook. This way of learning has benefited us greatly. We are fortunate to be computer science students at the forefront of the artificial intelligence era; the future of artificial intelligence, and of computer technology as a whole, is bound up with us. Going forward we must not only improve our professional skills but also cultivate our ethical capacity. Only with sufficient ethical quality and conviction can we better serve society, the motherland, and the people.

Origin blog.csdn.net/weixin_46655675/article/details/129334953