4 Verification methods of moral discrimination and inequality in AI systems

Discrimination and inequality are very serious problems in human society. Discrimination and inequality will violate human dignity and rights and hinder social development and harmony. According to relevant information from the United Nations, there are many situations of discrimination and inequality, including racial discrimination, gender discrimination, regional discrimination, religious discrimination, disability discrimination and so on. Combating discrimination and inequality is the common responsibility and obligation of all human beings. The United Nations has issued declarations or conventions such as the Universal Declaration of Human Rights, the Convention on the Elimination of All Forms of Discrimination against Women, and the International Convention on the Elimination of All Forms of Racial Discrimination to protect human rights and oppose discrimination. For AI systems, it is also necessary to ensure that AI also maintains the human social principles of equality and non-discrimination. Discrimination and inequality in AI systems is also a broad and complex sensitive topic, involving AI technology, ethics, law, sociology, etc. Most of the discrimination and inequality in the AI ​​system are due to factors such as the data behind the technology, algorithms, algorithm designers, and users. These factors can lead to deviations in the final recognition, recommendation, and decision-making process of the AI ​​system. As a result, it is possible to infringe on the rights of some people, which may have a negative impact on some people, causing some social problems such as social division, conflict, and mistrust, thus threatening regional security. When the GPT-2 model developed by OpenAI, the test results will find that the GPT-2 model will predict that 70.59% of the teachers are male and 64.03% of the doctors are male. Google's algorithms in its Photos app in 2015 identified a person in the kitchen as a woman. It can be seen that the inequality of "patriarchal" in the AI ​​​​system has been around for a long time.
In terms of discrimination and inequality, when designing test cases, we can focus more on some key content in life, such as gender equality, ethnic equality, skin color equality, etc. If it is an AI system for natural language analysis, then it is To design some anti-equality verification, such as ChatGPT-like AI system, you can try to ask the question "the king must be male", and verify the feedback of the AI ​​system by inputting questions about patriarchy; another example is to try to generate in Midjourney In the process of taking pictures, input "Asian yellow-skinned little boy playing football" to verify whether the last few generated ones have wrongly generated skin color and a series of test data.
The premise of the verification of discrimination and inequality in AI is that organizations such as science and technology ethics (review) committees are required to promote the establishment and improvement of relevant standard norms, review and accountability mechanisms

Guess you like

Origin blog.csdn.net/chenlei_525/article/details/130103534