Technology Cloud Report: Generative AI has become an emerging risk for enterprises, but we should not give up eating for fear of choking

An original report by Technology Cloud Report.

In 2023, generative AI technology broke out of its cocoon and triggered a global digital revolution.

From its early applications in chatting and playing chess to healthcare, finance, manufacturing, education, and scientific research, generative AI has shown strong creativity and enormous potential. According to incomplete statistics, more than one hundred industry-scale large AI models had been released across China by the end of August this year.

However, the accompanying concerns about data protection, compliance risks, and privacy leaks have also made the industry pay close attention to the security risks that arise when large AI models are deployed.

At the same time, generative AI is a double-edged sword: it can help enterprises solve practical problems, but it also carries major risks such as data leakage.

At the beginning of this year, a large global enterprise leaked confidential database information during large-model training, which had a serious negative impact on the company. Such incidents continue to occur.

Regarding the security challenges posed by generative AI, some AI companies believe that generative AI has demonstrated an unprecedented level of intelligence and will occupy a key position in enterprise IT. That importance will also attract more frequent attacks, making large models a new security battlefield after cloud computing, big data, the Internet of Things, and the mobile Internet.

At the same time, generative AI technology can also improve the efficiency of network security operations and maintenance in many respects and reshape the foundations of the security landscape at a deeper level.

Mainstream enterprise use cases for generative AI are emerging

Generative AI is a class of artificial intelligence that uses deep learning to generate high-quality content, built on generative algorithms and models that produce output without relying on fixed templates.

It has wide applications in computer vision, natural language processing, machine translation, and other fields. As deep learning technology continues to develop and application scenarios expand, generative AI will see even broader use. Currently, three enterprise use cases are becoming mainstream in the industry.

First, in customer support, generative AI, including GPT and other large language models, is transforming conversational chatbots into assistants that feel natural, respond more accurately, and better perceive and respond to tone and emotion.

Conversational AI in product-support chatbots is therefore one of the first enterprise use cases we are seeing in the industry. These chatbots can search and query existing internal information and communicate in a human-like manner to answer customer questions and resolve common issues.

For companies already using some form of conversational AI, GPT improves response quality and customer satisfaction. For companies looking to convert their manual call centers to be more responsive, always-on, and more efficient, GPT becomes an attractive option.
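The retrieval step such chatbots rely on can be sketched in a few lines. The FAQ entries and the word-overlap scoring heuristic below are illustrative assumptions; a production system would pair retrieval like this with a large language model to phrase the final reply.

```python
import re

def tokens(text: str) -> set:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(question: str, faq: dict) -> str:
    """Return the FAQ answer whose stored question overlaps most with the query."""
    best = max(faq, key=lambda entry: len(tokens(question) & tokens(entry)))
    if not tokens(question) & tokens(best):
        return "Sorry, I could not find an answer; escalating to a human agent."
    return faq[best]

# Illustrative internal FAQ (assumed content).
faq = {
    "how do I reset my password": "Visit the account page and click 'Reset password'.",
    "how do I cancel my subscription": "Open billing settings and choose 'Cancel plan'.",
}

print(answer("I forgot my password, how can I reset it?", faq))
```

Even this crude overlap score routes the question to the right answer; swapping it for embedding-based retrieval changes only the `tokens`/scoring step, not the overall pattern.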

Second, around business insights: one of the biggest challenges in data science is the gap between business users and data scientists.

The former best understand the nuances of the business and the questions that need answering, but only the latter can actually write code to get the answers to those questions. Generative AI now allows business users to ask questions in natural language.

AI can convert these questions into SQL queries, run them against an internal database, and return answers as a structured narrative, all within minutes. The advantage here isn't just efficiency; it's faster decision-making and the ability for business users to interrogate the data directly and interactively.
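The question-to-SQL-to-answer pipeline described above can be sketched as follows. In production the translation step would call a large language model; here a stand-in lookup (`translate_to_sql`, a hypothetical helper) keeps the example runnable against an in-memory SQLite database.

```python
import sqlite3

def translate_to_sql(question: str) -> str:
    """Stand-in for the LLM call that turns a business question into SQL."""
    canned = {
        "total revenue by region":
            "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region",
    }
    return canned[question.lower()]

def ask(conn, question: str):
    """Translate the question, run the SQL, and return the rows."""
    return conn.execute(translate_to_sql(question)).fetchall()

# Illustrative in-memory database standing in for an internal data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 100.0), ("West", 250.0), ("East", 50.0)])

print(ask(conn, "Total revenue by region"))
```

The business user only ever sees the natural-language question and the result rows; the generated SQL in the middle is where governance checks (discussed later in this article) would attach.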

Third, in programming automation: large language models achieve high accuracy in many languages, including programming languages.

Software developers can spend almost 50% less time writing code and related documentation. For example, Microsoft's Power Automate, a robotic process automation tool, can now use natural language to automate tasks and workflows in a more intuitive and user-friendly way.

Not only is this more efficient than involving large teams of programmers and testers, but it also reduces the time and iterations it takes to get the automation up and running.

Preventing generative AI risks has become a required course for enterprises

We need to realize that generative AI technology is a "double-edged sword". While it promotes social progress, it may also bring security risks in technology, design, algorithms, and data. Therefore, we must conduct forward-looking research and establish and improve laws, regulations, institutional systems, and ethics to ensure the healthy development of artificial intelligence. While paying attention to risk prevention, we must simultaneously establish error-tolerance and error-correction mechanisms and strive to achieve a dynamic balance between regulation and development.

To understand the source of security risks, we first need to understand the characteristics and principles of their operation. Generative AI has three major technical characteristics: "big data, big models, and big computing power", as well as key technologies such as natural language understanding, knowledge engineering methods, and brain-like interactive decision-making.

As a result, generative artificial intelligence carries five major categories of risk: social values, user misuse, data compliance, data security, and data quality. These risks have already surfaced in fields and application scenarios such as finance, healthcare, education, e-commerce, and media.

Since the beginning of the year, the relevant departments in China have promoted the implementation of a number of regulatory measures. Among them, on April 11, the Cyberspace Administration of China released the draft "Measures for the Management of Generative Artificial Intelligence Services (Draft for Comments)" (hereinafter the "Measures") for public comment.

The "Measures" focus on three major issues: privacy security, technology abuse, intellectual property rights and the rights of others, and establish a protective mechanism for the development of AIGC.

Like any emerging technology, one of the biggest challenges with generative AI is its relative immaturity. While generative AI is great for experimenting with chatbots for personal use, it's still in its early days for mainstream enterprise applications.

Organizations deploying it must do much of the heavy lifting themselves: experimenting to find the best use cases, sifting through an ever-growing and confusing list of available options (for example, choosing between OpenAI's ChatGPT service and Microsoft's Azure offering), and integrating the technology into their business processes and application workflows.

As the technology matures, much of this friction will disappear, and vendors will incorporate more of the technology into their core products in an integrated fashion.

Second, one of the major drawbacks of generative AI is the potential to produce incorrect but apparently convincing answers. Because GPT has made significant advances in natural language processing, there is a considerable risk that it will provide responses that sound correct but are actually wrong.

In industries where accuracy is critical, such as healthcare or financial services, this is something that cannot be allowed to happen. Companies must carefully select the right application areas and then establish governance and oversight to mitigate this risk.
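One concrete governance pattern for such accuracy-critical domains is to check model output against explicit policy rules before it reaches users or downstream systems. The sketch below validates generated SQL against an allow-list of tables and a deny-list of destructive statements; the table names and rules are illustrative assumptions, not a complete policy engine.

```python
import re

# Illustrative policy: the allowed tables and forbidden keywords are
# assumptions for this sketch, not a real deployment's ruleset.
APPROVED_TABLES = {"sales", "customers"}
FORBIDDEN = re.compile(r"\b(INSERT|UPDATE|DELETE|DROP|ALTER)\b", re.IGNORECASE)

def is_allowed(sql: str) -> bool:
    """Reject generated SQL that writes data or touches unapproved tables."""
    if FORBIDDEN.search(sql):
        return False
    tables = set(t.lower() for t in re.findall(r"\bFROM\s+(\w+)", sql, re.IGNORECASE))
    return bool(tables) and tables <= APPROVED_TABLES

print(is_allowed("SELECT region FROM sales"))  # read on an approved table
print(is_allowed("DROP TABLE sales"))          # destructive, rejected
```

Guardrails like this do not make the model's answers correct, but they bound the damage an incorrect or malicious answer can do, which is the point of the oversight the paragraph above calls for.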

Third, companies need to pay attention to setting and managing corporate guidelines. Data privacy and maintaining the confidentiality of protected corporate data are key to business success. Therefore, as a first step, it is crucial to define and set appropriate corporate guidelines.

In addition to the risk of loss of confidential or personally identifiable or other protected data, an additional risk of training a publicly available language model with proprietary data is that it may result in an inadvertent loss of intellectual property, particularly when the results based on the training are provided to other competitors.
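One practical safeguard implied by the paragraph above is filtering obvious identifiers out of proprietary text before it is sent to an external model or used for training. Below is a minimal redaction sketch; the regex patterns are assumptions for demonstration, not a vetted PII detector.

```python
import re

# Illustrative patterns only; a real deployment would use a vetted
# PII-detection library and cover far more identifier types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed labels before the text leaves the company."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@corp.com or 555-123-4567."))
```

Running redaction at the boundary where prompts or training data leave the organization is a cheap first line of defense; it complements, rather than replaces, the corporate guidelines discussed above.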

Having sound policies and frameworks in place is difficult because they must balance the need for innovation on the one hand and the risks associated with generative AI on the other.

Finally, it can be challenging to find the right balance between overindulging in hyped technology and focusing on the highest return initiatives. Organizations need to ensure they are allocating appropriate capital and resources to their most urgent initiatives.

On the other hand, organizations that wait too long for the technology to mature may lose the opportunity to mainstream AI in the industry, fall behind the latest technology that could have a significant impact on their business, and reduce their lasting competitive advantage.

Governance ideas return to the origins of AI

In terms of ideas and strategies for addressing generative AI security risks, the characteristics of the technology itself have drawn industry attention. Traditional ICT network security models can no longer cope with the security challenges posed by generative AI: AI's reverse-inference capabilities, especially against the anti-reverse-engineering protections of some malware, are highly effective.

At present, the industry generally agrees that AI's service capabilities can reach the level of technical personnel with four to five years of experience. Improving the digitalization of defenses through AI computing power, combined with assisted automated operations, gives defenders an inherent advantage in dealing with generative AI risks.

Against this background, using large security models to manage the risks of generative AI has become an industry consensus.

This requires building along three lines: scenario traction, technology drive, and ecosystem collaboration. It means applying AI to reshape the working paradigm of the security industry, and using large models to solve three big problems: command and dispatch in live security operations, decision support in red-blue confrontation exercises, and improving security operations efficiency.

On the one hand, we must seize the most fundamental elements of generative AI, data and algorithms, and use them as the breakthrough point: manage big data at its source, lay out and build systematic big data firewalls in advance, and control and prevent data leaks.

At the same time, independently developed national-level database and backup systems should be launched and optimized in a timely manner, along with an AI big data control system, a tiered and classified data-sharing system, a network risk prevention system, and the resulting AI control and supervision architecture.

On the other hand, various measures should be taken to promote the technological development and governance of generative AI: encourage innovative applications of generative AI technology across industries and fields, explore and optimize application scenarios, and build an application ecosystem.

Support industry organizations, enterprises, educational and scientific research institutions, public cultural institutions, and relevant professional bodies in collaborating on generative AI technology innovation, data resource construction, application, and risk prevention. Promote the construction of generative AI infrastructure and public training data resource platforms.

Promote the collaborative sharing of computing resources and improve the efficiency of computing resource utilization. Promote the orderly opening of public data classification and classification, and expand high-quality public training data resources.

Nowadays, the fight for "data sovereignty" has become a new trend in the development of global data security. As countries have introduced data strategies one after another, they have legislated to safeguard national data sovereignty.

In this context, improving the security of generative AI will become one of the future industrial tasks of countries around the world, and China's security companies have already taken an important step.

[About Technology Cloud Report]

Technology Cloud Report focuses on original enterprise-level content. Founded in 2015, it is one of the top 10 media outlets in the cutting-edge enterprise IT field, recognized by the Ministry of Industry and Information Technology, and an official communication medium for Trusted Cloud and the Global Cloud Computing Conference. It produces in-depth original reporting on cloud computing, big data, artificial intelligence, blockchain, and other fields.


Origin blog.csdn.net/weixin_43634380/article/details/132764898