Innovation Guide|How CEOs should respond to the 4 major disruptive innovation opportunities brought by generative AI

Generative AI is a rapidly developing disruptive technology that can help companies innovate business models, improve efficiency, and unlock business value. This guide describes how to address the challenges and opportunities presented by generative AI and how to proactively leverage the technology to outperform competitors while boosting organizational creativity and efficiency. If you want to learn more about generative AI as an enterprise innovation strategy, read our complete guide.

ChatGPT, released at the end of 2022, has aroused unprecedented global interest in generative AI (GenAI for short). Bill Gates commented that it is as revolutionary a technology as the PC, the Internet, and mobile. In the days since its release, many users who tried the new technology have discovered and shared countless ways it enhances creativity and unlocks productivity. In the weeks and months that followed, organizations scrambled to keep up and fend off unforeseen challenges. Some organizations have taken a more formal approach, creating dedicated teams (my organization already had one) to explore how generative AI can unlock potential value and improve efficiency.

The AI era has arrived! Generative AI has the ability to revolutionize the way businesses operate, create new competitive advantages, and potentially hasten the demise of some existing business models. However, rather than trying to become technical experts, business leaders should focus on how this technology will impact their organization and business, and on how to make the best strategic decisions to capture the benefits while reducing operational risks.

This can be achieved by focusing on four main areas: disruption, potential exploration, people transformation, and risk policy. Disruption requires the enterprise to reach consensus on, and stay aware of, the disruptive power of generative AI. Potential exploration involves identifying which use cases can differentiate the company from its competitors by seizing innovation opportunities. People transformation involves preparing employees to support deployment and making the necessary organizational-structure and staffing changes. Risk policy involves establishing ethical and legal boundaries. Each of these areas has short- and long-term implications and unanswered questions, but CEOs must prepare for the sea change that generative AI may bring.

1. Recognize the disruptive power of generative AI

For CEOs, however, generative AI poses a greater challenge. The focus today may be on productivity improvements and technological constraints, but a revolution in business model innovation is coming. Just as Mosaic, the world's first free web browser, ushered in the internet age and disrupted the way we work and live, generative AI has the potential to disrupt nearly every industry, reshaping competitive advantage through creative disruption. The implication for leaders is clear: today's buzz around ChatGPT needs to evolve into a generative AI innovation strategy for enterprise decision-makers.

This is no small task, and CEOs, who may be several steps removed from the technology itself, may feel uncertain about next steps. But in our view, a CEO's first priority is not to immerse themselves in the technology; instead, they should focus on how generative AI will impact their organizations and businesses, and on which strategic choices will let them capture first-mover opportunities while managing the challenges.

This raises pressing questions for CEOs. What innovations become possible when every employee has access to generative AI products with seemingly unlimited potential? How will this technology change how employee roles are defined and managed? How should leaders respond to the fact that generative AI models may produce deepfaked or biased outputs?

How ChatGPT works

The variety of generative AI models that have emerged recently is mind-boggling. They can take in different kinds of content, such as images, longer text formats, emails, social media content, audio recordings, code, and structured data. They can output new content, such as translations, answers to questions, sentiment analyses, summaries, and even videos. These content machines have many potential business applications, and we'll discuss a few of them now.

1) Marketing application

These generative models have potential value in many areas of business, but their application is perhaps most common in marketing. For example, Jasper is a marketing-focused version of GPT-3 that can generate blog posts, social media posts, web copy, sales emails, ads, and other content for customers. Jasper frequently A/B tests its output and optimizes its content for search engine rankings. The company has also fine-tuned its GPT-3 model using its customers' best outputs, an approach its senior executives say has produced substantial improvements. Most of Jasper's customers are individuals and small businesses, but some teams at large companies also use its capabilities. For example, writers at cloud computing company VMware use Jasper when creating original marketing content, from emails to product campaigns to social media copy. Rosa Lear, director of product-led growth, said that Jasper helped the company ramp up its content strategy, and writers now have time for better research, ideation, and strategy.
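The A/B-testing loop described above is simple to make concrete. The sketch below picks the better-performing of two generated copy variants by click-through rate; the variant names and metrics are invented for illustration, and a production system would also test for statistical significance before declaring a winner.

```python
def ab_winner(variants):
    """Pick the copy variant with the highest click-through rate.

    variants: {variant_name: (clicks, impressions)} -- illustrative
    metrics, not real campaign data.
    """
    rates = {name: clicks / impressions
             for name, (clicks, impressions) in variants.items()}
    return max(rates, key=rates.get)

# Two AI-generated subject lines, tracked over the same send volume
winner = ab_winner({
    "variant_a": (38, 1000),   # 3.8% click-through rate
    "variant_b": (52, 1000),   # 5.2% click-through rate
})
```

The winning variants can then feed back into fine-tuning, which is essentially the loop Jasper describes.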

Kris Ruby, principal of public relations and social media agency Ruby Media Group, is now using generative models for both text and images. She says the models have greatly improved her effectiveness at SEO, and in public relations they provide well-tailored, personalized copy that has worked very well. She believes these new tools open up a new frontier in copyright, and she also helps clients formulate AI policies. When she uses the tools, she says, "the AI is 10% and I am 90%," because of all the prompting, editing, and iteration involved. She believes the tools make her writing better and more complete, which helps search engines find it, and that image generation tools may replace the stock photo market and help revitalize creative work.

DALL-E 2 and other image generation tools are already being used in advertising. For example, Heinz used a generated image of a ketchup bottle with a label similar to its own and the message, "This is what the AI sees as 'ketchup.'" Of course, this just means the model's training data included a relatively large number of photos of Heinz ketchup bottles. Nestlé is using an AI-enhanced version of a Vermeer painting to help sell one of its yogurt brands. Clothing company Stitch Fix, which already uses AI to recommend personalized clothing to customers, is experimenting with DALL-E 2 to create visualizations of clothing based on customers' preferences for color, fabric, and style. Mattel is also using the technology to generate images for toy design and marketing.

2) Code generation application

GPT-3, in particular, has also proven effective (if imperfect) at generating computer code. Its descendant Codex has been specially trained to generate code: describe a "snippet" (a small piece of reusable code) or a small program function to it, and it can generate code in a variety of programming languages. Microsoft's GitHub also offers a code-generating version of GPT-3 called Copilot. The latest version of Codex can now find bugs in your code, fix them, and even explain what the code is doing, at least some of the time. Microsoft says its goal is not to replace human programmers but to make tools like Codex and Copilot "pair programmers" that work with humans to improve their speed and effectiveness.
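The snippet workflow is straightforward in practice: the developer writes a natural-language description, and the tool returns a small, self-contained function. The prompt and completion below are a hypothetical illustration of that pattern, not actual Codex output.

```python
# Prompt a developer might give Codex or Copilot, as a comment or docstring:
#   "Write a function that removes duplicates from a list while
#    preserving the original order."
#
# A plausible completion -- small, self-contained, and easy to review:
def dedupe_preserve_order(items):
    seen = set()
    result = []
    for item in items:
        if item not in seen:   # keep only the first occurrence
            seen.add(item)
            result.append(item)
    return result
```

The human's job shifts to reviewing output like this and wiring it into the larger program, which is exactly the division of labor described below.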

There is broad agreement that LLM-based code generation works well for these snippets, but integrating the snippets into a larger program, and integrating that program into a specific technical environment, still requires human programming and design skill. Deloitte has experimented extensively with Codex over the past several months and has found that it increases productivity for experienced developers and gives inexperienced developers some programming capability.

Deloitte conducted a six-week trial involving 55 developers, and most users rated the resulting code (mostly from Codex) as more than 65% accurate. Overall, Deloitte's experiments found a 20% improvement in code development speed for relevant projects. Deloitte also uses Codex to translate code from one language to another. The firm concluded that professional developers will still be needed for the foreseeable future, but the productivity gains may mean fewer are required. As with other types of generative AI tools, Deloitte found, the better the input prompt, the better the output code.

3) Intelligent conversation application

LLMs are also increasingly being used at the core of conversational AI, or chatbots. Compared with current conversational technologies, they potentially offer greater understanding of dialogue and context awareness. Facebook's BlenderBot, for example, was designed for dialogue and can hold long conversations with humans while maintaining context. Google's BERT is used to understand search queries and is also a component of the company's DialogFlow chatbot engine. Another Google LLM, LaMDA, was likewise designed for dialogue; a conversation with it once convinced one of the company's engineers that it was sentient. Its ability to predict the words a conversation calls for, based solely on past conversations, is breathtaking.
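The context-keeping these bots rely on is often just an accumulating message list, the convention popularized by chat-style LLM APIs. A minimal sketch of the idea (no model is called here; the replies are supplied by hand for illustration):

```python
class ChatSession:
    """Accumulates the conversation so each model call sees full context."""

    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text, model_reply):
        # In a real system, model_reply would come from an LLM given the
        # whole of self.messages; here it is passed in for illustration.
        self.messages.append({"role": "user", "content": user_text})
        self.messages.append({"role": "assistant", "content": model_reply})
        return model_reply

session = ChatSession("You are a helpful support agent.")
session.ask("My order hasn't arrived.",
            "Sorry to hear that. What's the order number?")
session.ask("It's 1234.", "Thanks, checking order 1234 now.")
# The second turn can resolve "It's 1234" only because the earlier
# question is still present in session.messages.
```

Maintaining context is thus less about model magic and more about always resending the full history, which also explains why very long conversations eventually hit context-length limits.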

None of these LLMs is a perfect conversationalist. Because they are trained on past human content, they tend to replicate any racist, sexist, or otherwise offensive language they encountered in training. Although the companies building these systems filter for hate speech, they are not entirely successful.

4) Knowledge management applications

An emerging application of LLMs is managing an organization's textual knowledge (and potentially its image and video knowledge as well). Building a structured knowledge base is highly labor-intensive, which has made large-scale knowledge management difficult for many big companies. However, some research suggests that if a model is fine-tuned on a specific body of the organization's textual knowledge, the LLM can manage that knowledge effectively. The knowledge in the LLM is then accessed by asking questions, that is, by entering prompts.

Some companies are working with leading commercial LLM vendors to explore this idea of LLM-based knowledge management. For example, Morgan Stanley is working with OpenAI to fine-tune GPT-3 on wealth management content, so that financial advisors can both search the firm's existing knowledge and easily tailor content for clients. Users of such systems may need training or assistance to write effective prompts, and the LLM's output may still require editing or review before use. If these issues are addressed, however, LLMs could reinvigorate the field of knowledge management and allow it to scale far more effectively.
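One common way to implement this pattern is to retrieve the most relevant internal documents and place them in the prompt. Production deployments use embedding-based retrieval; the keyword-overlap scoring below is a deliberately naive stand-in, and the documents are invented for illustration.

```python
def build_knowledge_prompt(question, documents, k=2):
    """Assemble a prompt from the k documents most relevant to a question.

    Naive keyword-overlap ranking stands in for the embedding search a
    production knowledge-management system would use.
    """
    q_words = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    context = "\n---\n".join(ranked[:k])
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

docs = [
    "Our municipal bond funds target a 3 percent annual yield.",
    "The cafeteria is closed on weekends.",
    "Client onboarding requires two forms of identification.",
]
prompt = build_knowledge_prompt("What yield do the bond funds target?",
                                docs, k=1)
```

The assembled prompt is then sent to the fine-tuned model; restricting the model to the supplied context is also a common way to reduce fabricated answers.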

2. Explore the innovation potential of GenAI

Artificial intelligence has never been more accessible. Tools like ChatGPT, DALL-E 2, Midjourney, and Stable Diffusion let anyone create a website, generate an advertising strategy, or produce a video; the possibilities are endless. This "low-code, no-code" quality will also make it easier for organizations to adopt AI capabilities at scale. (See "Functional characteristics of generative AI" below.)

Functional characteristics of generative AI

These just-in-time productivity improvements can significantly reduce costs. For example, generative AI can summarize, in seconds and with impressive accuracy, documents that might take a researcher hours (at an estimated $30 to $50 per hour).
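The savings claim is easy to make concrete with back-of-envelope arithmetic. In the sketch below, the document volume, hours per document, and the fraction of effort still spent reviewing AI output are assumptions for illustration; only the hourly rate comes from the estimate above.

```python
def summarization_savings(docs_per_year, hours_per_doc, hourly_rate,
                          review_fraction=0.25):
    """Estimate annual savings when AI drafts summaries and humans review.

    review_fraction is the share of the manual effort still spent
    checking the AI output -- an assumption, not a measured figure.
    """
    manual_cost = docs_per_year * hours_per_doc * hourly_rate
    assisted_cost = manual_cost * review_fraction
    return manual_cost - assisted_cost

# 1,000 documents a year, 2 hours each, at the $40/hour midpoint
savings = summarization_savings(1000, 2, 40)
```

Even with a generous allowance for human review, the arithmetic favors the assisted workflow at almost any realistic document volume.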

But the democratizing power of generative AI also means, by definition, that a company's competitors have the same access and capabilities. Many use cases built on existing large language model (LLM) applications, such as the productivity gains programmers get from GitHub Copilot or marketing content developers get from Jasper.ai, will be needed just to keep pace with other organizations. But they will not provide differentiation, because the only variability comes from the user's skill at prompting the system.

1) Choose the right use case

For CEOs, the key is to identify the company's "golden" use cases: those where generative AI offers a clear advantage over the best existing solutions, delivers real competitive advantage, and has the greatest impact.

These use cases can come from any point in the value chain. Some companies will be able to drive growth through improved products; Intercom, which provides customer service solutions, is running pilots to integrate generative AI into its customer engagement tools to automate and prioritize service. Growth can also be found in reduced time to market and cost savings, as well as in the ability to stimulate imagination and generate new ideas. For example, in biopharmaceuticals, today's 20-year patent period is largely consumed by R&D; speeding up this process can significantly increase the value of a patent. In February 2021, biotech company Insilico Medicine announced that its AI-generated anti-fibrosis drug had gone from concept to Phase 1 clinical trials in under 30 months and for roughly $2.6 million, orders of magnitude faster and cheaper than traditional drug discovery.

Once leaders identify the company's golden use cases, they will need to work with the digital and technology teams to decide whether to fine-tune an existing LLM or train a custom model.

2) Fine-tune existing models

Adapting an existing open-source or paid model is cost-effective: in 2022 experiments, Snorkel AI found that fine-tuning an LLM for complex legal classifications cost between $1,915 and $7,418. Such applications can save hours of time for lawyers, who may bill as much as $500 an hour.
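Those two figures imply a very fast break-even, which can be checked directly. This is a rough sketch; it ignores hosting, maintenance, and integration costs.

```python
def breakeven_hours(fine_tune_cost, hourly_rate=500):
    """Billable hours that must be saved to recoup a fine-tuning spend."""
    return fine_tune_cost / hourly_rate

# Snorkel AI's reported cost range for a complex legal classification task
low = breakeven_hours(1915)    # ~3.8 lawyer-hours
high = breakeven_hours(7418)   # ~14.8 lawyer-hours
```

Even at the top of the range, saving roughly fifteen lawyer-hours pays back the fine-tuning investment.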

Fine-tuning also enables experiments that would otherwise consume time, talent, and investment if built entirely with in-house capabilities. And it prepares companies for a future in which generative AI may evolve along the lines of cloud services: companies buy solutions expecting quality at scale, thanks to the standardization and reliability of the providers.

But this approach also has drawbacks. These models depend entirely on the capabilities and domain knowledge embodied in the core model's training data, and they are limited to the modalities on offer, which today are mainly language models. Options for protecting proprietary data are also limited; for example, few vendors allow a fine-tuned LLM to be hosted entirely on-premises.

3) Train new or existing models

Training a custom LLM provides greater flexibility, but it comes with high costs and capability requirements: AI21 Labs estimates that it would cost about $1.6 million to train a 1.5-billion-parameter model with two configurations and ten runs. To put this investment in context, AI21 Labs estimates that Google spent about $10 million to train BERT and that OpenAI spent $12 million on a single training run of GPT-3. (Note that a successful LLM requires multiple training runs.)

These costs, along with the data center, compute, and talent requirements, are significantly higher than for other AI models, even when managed through a partner. The bar to justify this investment is high, but for truly differentiated use cases, the value generated by the model may offset the cost.

4) Plan your GenAI investment

Leaders need to carefully evaluate the timing of such investments, weighing the potential cost of moving too early on complex projects, before the talent and technology are ready, against the risk of falling behind. Today's generative AI is still limited by its propensity for error and is best suited to use cases with a high tolerance for variability. If CEOs determine that custom development is a critical and time-sensitive need, they will also have to consider new funding mechanisms for data and infrastructure, for example, whether budgets should come from IT, R&D, or other sources.

The "tune versus train" debate has further implications for long-term competitive advantage. Until recently, most generative AI research was public, with models available through open-source channels. Now that much of this research is conducted behind closed doors, open-source models have begun to lag well behind state-of-the-art solutions. In other words, we are on the verge of a generative AI arms race.

As research accelerates and becomes more proprietary, and algorithms become more complex, keeping up with state-of-the-art models will be challenging. Data scientists will need special training, advanced skills, and deep expertise to understand how models work—their capabilities, limitations, and usefulness for new business use cases. Large players looking to remain independent while using the latest AI technology will need to build strong in-house technology teams.


3. Plan GenAI-driven people transformation

Like existing forms of AI, generative AI is a disruptive force for the workforce. In the short term, CEOs will need to work with their leadership teams and HR leaders to determine how this transformation should unfold in their organizations, redefining employee roles and responsibilities and adjusting operating models accordingly.

1) Redefine roles and responsibilities

Several AI-related shifts are already under way. Traditional AI and machine-learning algorithms (sometimes called analytical AI) use logic or statistics to analyze data and to automate or augment decision-making. As a result, people can work more autonomously, and managers increasingly focus on team dynamics and goal setting.

Now, generative AI, as a first-draft content generator, will augment many roles by increasing productivity, performance, and creativity. Employees in more clerical roles, such as paralegals and marketers, can use generative AI to create first drafts, leaving more time to refine content and identify new solutions. Coders will be able to focus on activities such as improving code quality and ensuring compliance with security requirements on tight timelines.

Of course, these changes cannot (and should not) happen in a vacuum. CEOs need to be mindful of the impact of AI on employees' emotional well-being and professional identity. Productivity gains are often conflated with workforce reductions, and AI has caused real concern among employees; many college graduates believe AI will make their jobs irrelevant within a few years. But AI also has the potential to create as many jobs as it displaces.

The impact of AI is therefore a key cultural and workforce issue, and CEOs should work with HR to understand how roles will evolve. As AI initiatives roll out, regular pulse checks should track employee sentiment, and CEOs will need transparent change-management plans that help employees embrace their new AI colleagues while ensuring that employees retain autonomy. The message should be that humans are not going anywhere; in fact, AI needs humans in order to be deployed effectively and ethically.

As AI adoption accelerates, CEOs will need to keep applying these lessons to develop a strategic workforce plan; in fact, they should start developing the plan now and adapt it as the technology evolves. It's not just about determining how certain job descriptions will change; it's about ensuring the company has the right people and management in place to stay competitive and make the most of its AI investments. Questions CEOs should ask when assessing their company's strengths, weaknesses, and priorities include:

  • What capabilities do project leaders need to ensure that the quality of individual contributors' work is high enough?
  • How can CEOs create optimal experience curves to generate the right future talent pipeline—for example, ensuring that more junior employees gain skills in AI augmentation and that executives are prepared to lead an AI-augmented workforce?
  • How should training and recruiting be adapted to build a high-performing workforce now and in the future?

2) Adjust the company’s operating model

We expect that, in the long term, the agile (or bionic) operating model will remain the most effective and scalable, but with centralized IT and R&D functions staffed by experts who can train and customize LLMs. This centralization should ensure that employees working with similar types of data have access to the same data sets. When data is siloed across departments, as it all too often is, companies will struggle to realize generative AI's true potential. Under the right conditions, however, generative AI can eliminate the trade-off between agility and scale.

Given the growing importance of data science and engineering, many companies would benefit from a senior executive role (e.g., a chief AI officer) overseeing the business and technical requirements of AI initiatives. This executive should embed small data science or engineering teams within each business unit to fit models to specific tasks or applications. The technical teams then have the domain expertise and direct connections to support individual contributors, ideally with no more than one layer between platform or technical leaders and individual contributors.

Structurally, this might involve departmentally focused teams with cross-functional members (e.g., a sales team with sales reps and dedicated technical support), or preferably cross-department and cross-functional teams that are aligned with the business and technology platforms.

4. Actively prevent GenAI risks

1) Deepfakes and other legal and ethical issues

We have already seen generative AI systems raise numerous legal and ethical issues. "Deepfakes," AI-created images and videos that purport to be real but are not, have already appeared in media, entertainment, and politics. Creating deepfakes used to require considerable computing skill, but now almost anyone can do it. To help control fake images, OpenAI adds a unique symbol as a "watermark" to each DALL-E 2 image. More controls may be needed in the future, however, especially as generative video creation becomes mainstream.

Generative AI also raises many questions about what counts as original and proprietary content. Because the generated text and images don't exactly match any past content, the providers of these systems argue that the outputs belong to those who entered the prompts. But the outputs are clearly derivative, drawing on the past texts and images used to train the models. Needless to say, these technologies will keep intellectual property lawyers busy for years to come.

Generative AI lacks a reliable truth function, meaning it does not know when information is factually incorrect. The consequences of this trait, known as "hallucination," range from the comical to the destructive or dangerous. Generative AI also poses other key risks for companies, including copyright infringement; proprietary data breaches; and unplanned capabilities discovered after a product is released, also known as capability overhang. For example, Riffusion uses the text-to-image model Stable Diffusion to create new music by representing audio as spectrogram images.

2) Be prepared for risks

Companies need policies that help employees use generative AI safely and that limit its use to situations where its performance falls within established guardrails. Experimentation should be encouraged, but it is important to track all experiments across the organization and avoid "shadow experiments" that could expose sensitive information. These policies should also guarantee clear data ownership, establish review processes to prevent incorrect or harmful content from being published, and protect the proprietary data of the company and its customers.

Another near-term priority is training employees to use generative AI within their areas of expertise. The low-code, no-code nature of generative AI may make employees overconfident in their ability to complete tasks for which they lack the necessary background or skills; marketers, for example, may try to circumvent company IT rules and write code to build a new marketing tool. According to the NYU Cybersecurity Center, approximately 40% of AI-generated code is insecure, and since most employees are not qualified to assess code for vulnerabilities, this creates a serious security risk. AI coding assistance also creates quality risks: according to a Stanford University study, programmers tend to be overconfident in the AI's ability to avoid vulnerabilities.
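The kind of vulnerability behind that 40% figure is often mundane. In the hypothetical example below, an AI-generated query builds SQL by string interpolation, a classic injection flaw, alongside the parameterized version a reviewer should insist on:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Pattern an AI assistant might plausibly generate: user input is
    # interpolated into the SQL string, so a crafted name alters the query.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Reviewed version: a parameterized query treats the input as data only.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
leaked = find_user_unsafe(conn, payload)  # injection returns every row
safe = find_user_safe(conn, payload)      # no user has this literal name
```

Both functions look equally plausible at a glance, which is exactly why untrained employees cannot be expected to catch the difference.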

Therefore, leaders should encourage all employees, especially programmers, to maintain healthy skepticism about AI-generated content. Company policy should state that employees use only data they fully understand and that all AI-generated content is thoroughly vetted by the data's owners. Generative AI applications such as Bing Chat are already beginning to cite their source data, a capability that could be extended to identify the data's owner.

3) Ensure quality and safety

Leaders can adapt existing guidance on responsible publishing to govern the release of AI-generated content and code. They should mandate robust documentation and establish an institutional review board to assess a project's potential impacts before release, similar to the review process for publishing scientific studies. Licenses governing downstream uses, such as the Responsible AI License (RAIL), provide another mechanism for managing the risks created by generative AI's missing truth function.

Finally, leaders should remind employees not to feed sensitive information into public chatbots. Information entered into a generative AI tool may be stored and used to continue training the model; even Microsoft, which has invested heavily in generative AI, has warned its employees not to share sensitive data with ChatGPT.

Today, there are few ways for companies to leverage LLMs without disclosing their data. One option for data privacy is to host the complete model locally or on a dedicated server. (BLOOM, an open-source model from Hugging Face's BigScience group, is GPT-3-sized but requires only about 512 gigabytes of storage.) However, this may limit access to state-of-the-art solutions. Beyond sharing proprietary data, using LLMs raises other data concerns, including protecting personally identifiable information. Leaders should consider data-cleansing techniques such as named entity recognition to remove the names of people, places, and organizations. As LLMs mature, solutions for protecting sensitive information will become more sophisticated, and CEOs should regularly update their security protocols and policies.
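That cleansing step can be sketched minimally as typed-placeholder substitution. Here the entity spans are supplied by hand for illustration; in practice a named entity recognition model (such as spaCy's) would detect them automatically.

```python
import re

def scrub_pii(text, entities):
    """Replace detected entity mentions with typed placeholders.

    entities: {label: [mention, ...]} -- hand-written here; a production
    pipeline would get these spans from an NER model.
    """
    for label, mentions in entities.items():
        for mention in mentions:
            text = re.sub(re.escape(mention), f"[{label}]", text)
    return text

cleaned = scrub_pii(
    "Alice Chen of Acme Corp met regulators in Berlin.",
    {"PERSON": ["Alice Chen"], "ORG": ["Acme Corp"], "GPE": ["Berlin"]},
)
```

Keeping the placeholders typed (rather than simply deleting names) preserves enough structure for the LLM to reason about the text while the identifying details stay behind the firewall.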

5. Promote GenAI initiatives now

When dealing with generative AI technologies, companies can adopt the following strategies:

1.  Redefine employee roles and responsibilities: Businesses need to re-evaluate employee roles and responsibilities to ensure they can adapt to the new technology environment. For example, some companies may need to hire more data scientists or AI experts to manage generative AI technologies.

2.  Innovate business models: Generative AI can help companies create new business models. For example, some companies use it to create personalized products or services that better meet customer needs.

3.  Improve operational efficiency: Generative AI can automate many repetitive tasks, increasing efficiency and reducing errors. For example, some companies use it to automatically generate reports or analyze data.

4.  Rebuild differentiated competition: By leveraging generative AI to create unique products or services, companies can differentiate themselves from competitors. For example, some companies use it to create distinctive advertising or marketing materials.

For example, an online retailer could use generative AI to create a personalized shopping experience, using the technology to recommend products, create personalized ads, and optimize website layout. This would help improve customer satisfaction and increase sales.

As another example, a healthcare company might use generative AI to automatically generate medical reports. This would improve efficiency, reduce errors, and let doctors make diagnostic and treatment decisions faster.

Generative AI offers unprecedented opportunities. But it also forces CEOs to grapple with towering unknowns and do so in a space that may feel unfamiliar or uncomfortable. Developing an effective strategic approach to generative AI can help distinguish signal from noise. Leaders who are ready to reimagine their business models—identifying the right opportunities, organizing their employees and operating models to support generative AI innovation, and ensuring experimentation does not come at the expense of safety and ethics—can create long-term competitive advantage.

 Original link:

Innovation Guide|How CEOs should respond to the 4 major disruptive innovation opportunities brought by generative AI



Origin blog.csdn.net/upskill2018/article/details/132584867