Interview with UIUC Bo Li | From Usable to Credible, the Ultimate Thinking of AI in Academics

The emergence of ChatGPT caused AI to once again cause a shock in the technology circle. This shock had far-reaching effects and divided the technology world into two factions. One faction believes that the rapid development of AI may replace humans in the near future. Although this "threat theory" is not unreasonable, the other faction also puts forward a different point of view . Dangerous to the future of humanity is also far away.

It is true that this debate deserves early warning, but as many experts and scholars such as Professor Zhang Chengqi at the 2023 WAIC Summit Forum pointed out, human expectations for AI are always a beneficial tool. So since it is just a tool, compared with the "threat theory", more attention needs to be paid to whether it is credible and how to improve its credibility. After all, once AI becomes untrustworthy, let alone future development?

So what are credible standards, and where is the field today? HyperAI is fortunate to be a leading scholar in this direction, an associate professor at the University of Illinois, who has won the IJCAI-2022 Computer and Thought Award, the Sloan Research Award, the National Science Foundation CAREER Award, AI's 10 to Watch, MIT Technology Li Bo, who commented on TR-35 Award, Intel Rising Star and many other awards, conducted an in-depth discussion. Following her research and introduction, this article sorted out the development of AI security field.

8c797590f6b2b1301c318a3b9ef83fbb.jpeg

Li Bo in 2023 IJCAI YES 

 Machine learning is a double-edged sword 

Extending the timeline, Li Bo's research process along the way is also the epitome of the development of trusted AI.

In 2007, Li Bo entered an undergraduate course majoring in information security. During that time, although the domestic market has awakened to the importance of network security, and began to develop firewalls, intrusion detection, security assessment and other products and services, but overall, this field is still in the development stage. Looking at it now, although this choice is risky, it is a correct start. Li Bo has started his own security research path in such a "new" field, and at the same time, it has laid the groundwork for follow-up research.

676400c930b3b68ba0c099acdfd177ff.png

Li Bo majored in information security at Tongji University

At the doctoral stage, Li Bo further focused on the direction of AI security. The reason why I chose this field, which is not particularly mainstream, is not only due to my interest, but also largely due to the encouragement and guidance of my tutor. This major was not particularly mainstream at the time, and Li Bo’s choice this time was quite risky. However, even so, she still relied on her undergraduate experience in information security to keenly capture the combination of AI and security. It is bound to be very bright.

At that time, Li Bo and his mentor were mainly engaged in research from the perspective of game theory, modeling AI attack and defense as games, such as using Stackelberg games for analysis.

Stackelberg games are often used to describe the interaction between a strategic leader (leader) and a follower (follower), and in the field of AI security, it is used to model the relationship between attackers and defenders. For example, in adversarial machine learning, attackers try to trick machine learning models into producing erroneous outputs, while defenders work to detect and prevent such attacks. By analyzing and studying the Stackelberg game, researchers such as Li Bo can design effective defense mechanisms and strategies to enhance the security and robustness of machine learning models.

e5df218cab400aa4240009d45e87c6b7.png

Stackelberg game model

From 2012 to 2013, the popularity of deep learning promoted the accelerated penetration of machine learning into all walks of life. However, even though machine learning is an important force driving the development and transformation of AI technology, it is difficult to hide the fact that it is a double-edged sword.

On the one hand, machine learning can learn and extract patterns from large amounts of data, achieving excellent performance and effects in many fields. For example, in the medical field, it can assist in the diagnosis and prediction of diseases, provide more accurate results and personalized medical advice; on the other hand, machine learning also faces some risks. First of all, the performance of machine learning is very dependent on the quality and representativeness of the training data. Once the data has problems such as deviation and noise, it is easy to cause the model to produce erroneous or discriminatory results.

In addition, the model may also rely on private information, leading to the risk of privacy leakage. In addition, adversarial attacks cannot be ignored. Malicious users can deliberately deceive the model by changing the input data, resulting in wrong output.

In this context, Trusted AI emerged and developed into a global consensus in the following years. In 2016, the Legal Affairs Committee of the European Parliament (JURI) released the "Draft Report on Legislative Recommendations to the European Commission on Civil Legal Rules for Robotics", advocating that the European Commission should assess the risks of artificial intelligence technology as soon as possible. In 2017, the European Economic and Social Committee issued an opinion on AI, arguing that a standard system for AI ethics and monitoring and certification should be formulated. In 2019, the European Union released the "Trusted AI Ethics Guidelines" and "Algorithm Responsibility and Transparent Governance Framework".

In China, Academician He Jifeng first proposed the concept of trusted AI in 2017. In December 2017, the Ministry of Industry and Information Technology issued the "Three-Year Action Plan for Promoting the Development of the New Generation Artificial Intelligence Industry". In 2021, the China Academy of Information and Communications Technology and the Jingdong Exploration Research Institute jointly released the first "White Paper on Trusted Artificial Intelligence" in China.

55ff13fc9e6cc3853e7f7781d61481e5.jpeg

"Trusted Artificial Intelligence White Paper" conference site

The rise of the credible AI field has brought AI to a more reliable direction, and also confirmed Li Bo's personal judgment. Concentrating on scientific research and focusing on machine learning confrontation, she followed her own judgment to the position of assistant professor at UIUC, and her "Robust physical-world attacks on deep learning visual classification" research results in the field of autonomous driving were even recognized by the London Science Museum Treasure forever.

With the development of AI, the field of trusted AI will undoubtedly usher in more opportunities and challenges. "I personally think that security is an eternal topic. With the development of applications and algorithms, new security risks and solutions will also emerge. This is the most interesting point of security. AI security will be at the same frequency as AI and social development."  Li Bo talked.

 Prying into the current situation of the field from the credibility of the large model

The emergence of GPT-4 has become the focus of attention. Some people think it has set off the fourth industrial revolution, some people think it is the inflection point of AGI, and some people have a negative attitude towards it. Turing Award winner Yann Le Cun once publicly stated that "ChatGPT does not understand the real world. No one is using it."

In this regard, Li Bo said that she is very excited about this wave of large-scale models, because this wave has undoubtedly promoted the development of AI, and this trend will also put forward higher requirements for the field of trusted AI. Especially in areas with high safety requirements and high complexity, such as autonomous driving, smart medical care, and biopharmaceuticals.

At the same time, more new application scenarios of trusted AI and more new algorithms will emerge. However, Li Bo also fully agrees with the latter's point of view. The current model has not really understood the real world. The latest research results of her and her team show that there are still many loopholes in trustworthiness and security in the large model.

This research by Li Bo and his team mainly focuses on GPT-4 and GPT-3.5. of-distribution robustnes), the robustness of generating example samples (demonstration) in context learning (in-context learning), privacy (privacy), machine ethics (machine ethics) and fairness in different environments (fairness) etc. 8 New threat vulnerabilities have been discovered from different angles.

4197cf1af3d3dfc42393196649e0a8d3.png

Paper address:

https://decodingtrust.github.io/

Specifically, first of all, Li Bo and his team found that the GPT model is extremely easy to be misled, producing abusive language and biased responses, and it may also leak private information in training data and conversation history. At the same time, they also found that although GPT-4 is more trustworthy than GPT-3.5 in standard benchmark tests, GPT-4 is more vulnerable to attacks based on comprehensive adversarial jailbreak systems and user prompts. This is due to the fact that GPT-4 Follow instructions more accurately, including misleading ones.

Therefore, from the perspective of reasoning ability, Li Bo believes that the arrival of AGI still has a long way to go, and the first problem that lies ahead is to solve the credibility of the model. In the past, Li Bo's research team has also focused on developing a logical reasoning framework based on data-driven learning and knowledge enhancement, hoping to use knowledge bases and reasoning models to make up for the lack of credibility of data-driven large models. Looking to the future, she also believes that there will be more new and excellent frameworks, which can better stimulate the reasoning ability of machine learning and make up for the threat loopholes of the model.

So can we get a glimpse of the general direction of the field of trusted AI from the status quo of trustworthiness of large models? As we all know, stability, generalization ability (explainability), fairness, and privacy protection are the foundation of trusted AI, and they are also important four sub-directions. Li Bo believes that with the emergence of large models, new capabilities will inevitably bring new credibility constraints, such as the robustness of adversarial or out-of-distribution examples in context learning. In this context, several sub-directions will promote each other, and then provide new information or solutions for the essential relationship between them. "For example, our previous research proved that the generalization and robustness of machine learning can be two-way indicators in federated learning, and the robustness of the model can be regarded as a function of privacy, etc."

 Looking to the Future of Trusted AI

Looking back at the past and present of the trusted AI field, we can see that the academia represented by Bo Li, the industry represented by major technology companies, and the government are all exploring in different directions and have achieved a series of results. Looking forward to the future, Li Bo said, "The development of AI is unstoppable. Only by ensuring safe and reliable AI can it be safely applied to different fields."

How to build a trusted AI? To answer this question, we must first think about what is "credible". "I think the establishment of a unified credible AI evaluation specification is one of the most critical issues at the moment." It can be seen that at the past Zhiyuan Conference and the World Artificial Intelligence Conference, the discussion of credible AI was unprecedentedly high, but most Most discussions are still at the discussion level, lacking a systematic methodological guidance. The same is true in the industry. Although some companies have launched related toolkits or architecture systems, the patch-based solution can only solve a single problem. Therefore, many experts have repeatedly mentioned the same point of view - there is still a lack of a credible AI evaluation specification in the field.

Li Bo was deeply impressed by this point, "The premise of a guaranteed trusted AI system is to have a trusted AI evaluation specification." She further said that her recent research "DecodingTrust" aims to provide comprehensive information from different perspectives. Model credibility assessment. Expanding to the industry, the application scenarios are becoming more and more complex, which brings more challenges and opportunities to trusted AI evaluation. Because more credible vulnerabilities may appear in different scenarios, which can further improve the credible AI evaluation criteria.

To sum up, Li Bo believes that the future of the trusted AI field should focus on forming a comprehensive and real-time updated trusted AI evaluation system, and on this basis, improve the credibility of the model. "This goal requires academia and industry Work closely together to form a larger community to get it done together.”

3a1e9b41b001ffc5b328e6168e5b7373.png

UIUC Secure Learning Lab GitHub Homepage

GitHub project address:

https://github.com/AI-secure

At the same time, Li Bo's Safety Learning Lab is also working towards this goal. Their latest research results are mainly distributed in the following directions:

1. A verifiable and robust knowledge-enhanced logical reasoning framework based on data-driven learning, which aims to combine data-driven models with knowledge-enhanced logical reasoning, so as to make full use of the scalability and generalization capabilities of data-driven models, and through logic Inference improves the error correction capabilities of the model.

In this direction, Bo Li and his team proposed a learning-reasoning framework and proved its authentication robustness. The findings show that this framework can prove to have significant advantages over methods using only a single neural network model, and a sufficient number of conditions are analyzed. At the same time, they also extended the learning-reasoning framework to different task domains.

Related papers:

* https://arxiv.org/abs/2003.00120

* https://arxiv.org/abs/2106.06235

* https://arxiv.org/abs/2209.05055

2. DecodingTrust: The first comprehensive model credibility assessment framework for trust assessment of language models.

Related papers:

* https://decodingtrust.github.io/

3. In the field of autonomous driving, it provides a safety-critical scenario generation and testing platform "SafeBench".

project address:

* https://safebench.github.io/

In addition, Li Bo revealed that the team plans to continue to focus on areas such as smart healthcare and finance, “in these areas, breakthroughs in trusted AI algorithms and applications may appear earlier.”

 From assistant professor to tenured professor: work hard, and things will come naturally

From Li Bo's introduction, it is not difficult to see that there are still many problems that need to be solved urgently in the emerging field of trusted AI . It is to fully cope with the burst of demand in the coming day. Just like Li Bo's dormant and dedicated research before the rise of the trusted AI field - as long as you are interested and optimistic, you will achieve success sooner or later.

This attitude is also reflected in Li Bo's own teaching career. She has served at UIUC for more than 4 years, and this year she won the title of tenured professor. She introduced that the evaluation of professional titles has a strict process, and the dimensions include research results, academic evaluations of other senior scholars, etc. Although there are challenges, "as long as you work hard to do one thing, the next thing will come naturally." At the same time, she also mentioned that the tenured professor system in the United States provides professors with more freedom and the opportunity to carry out some more risky projects, so for Li Bo, she will join hands with the team to try some new, high-risk projects "I hope to make further breakthroughs in theory and practice."

afb83c236a3b7b3016076c20e6d9fdbe.png

interview guests

Li Bo/Bo Li

Associate Professor, University of Illinois, recipient of IJCAI-2022 Computing and Thought Award, Sloan Research Award, NSF CAREER Award, AI's 10 to Watch, MIT Technology Review TR-35 Award, Dean's Research Excellence Award, CW Gear Outstanding Beginner Faculty Award, Intel Rising Star Award, Symantec Research Lab Fellowship, Google, Intel, MSR, eBay, and IBM, and Best Paper Awards at several top machine learning and security conferences.

Research Interests: Theoretical and practical aspects of trusted machine learning, which is the intersection of machine learning, security, privacy, and game theory.

Reference link:

[1] https://www.sohu.com/a/514688789_114778

[2] http://www.caict.ac.cn/sytj/202209/P020220913583976570870.pdf

[3] https://www.huxiu.com/article/1898260.html

-- over--

b305deb5b6277ecf9d04e0c889407080.jpeg

Scan the QR code and join the discussion group

Get more high-quality data sets

Understand the application of artificial intelligence

Follow top conferences & papers

Reply to "Readers" for more information

More exciting content (click the picture to read)

ab5a8c4920c268db611e9f893a838986.png

37b4ed4ac755a7561f8ed6cffd2a4bdd.png

56a82024f07c1e3dc2d78a094a6b8114.png

9dc3e3f282864640da0af80a223ae3be.gif

Guess you like

Origin blog.csdn.net/HyperAI/article/details/132703157