Thoughtworks: Two roads to applying large language models amid the AI wave

On June 16, 2023, Thoughtworks, a global software and technology consultancy, held its 28th Technology Radar media deep-dive session. Xu Hao, CTO of Thoughtworks China; Liu Shangqi, General Manager for Hong Kong and Macau and member of the global Technology Advisory Board; and Nina Zhou, Head of Social Impact and Sustainability for China, attended and shared their expert views on the latest topics in this volume of the Technology Radar.

About Technology Radar

As a pioneer in the technology field, Thoughtworks has long been committed to driving innovation and leading the industry's development. The Technology Radar was born of our mission to champion software excellence and revolutionize the IT industry; it has now been published for 14 years.

The Technology Radar is a technology trends report that Thoughtworks publishes every six months. It is produced by the Technology Advisory Board (TAB), a group of more than 20 senior technologists from Thoughtworks offices around the world. Drawing on the observations, conversations, and front-line experience gained while solving clients' toughest business challenges, the board distills its findings through rounds of summarization and debate, aiming to give stakeholders from CTOs to developers clear, information-rich industry insight.

The Four Quadrants of the Technology Radar

The Technology Radar presents its content graphically as individual entries ("blips"), each representing a single technology. Blips are grouped into four quadrants: Techniques, Tools, Platforms, and Languages & Frameworks. The rings Adopt, Trial, Assess, and Hold represent our evaluation of their maturity. The software landscape changes rapidly, and so do the blips we track: their positions on the radar shift with technology trends.

The Rings of the Technology Radar

Volume 28 of the Technology Radar covers 107 blips and five themes: "The Rapid Rise of Practical AI", "Accessible, Easy-to-Use Design", "The Lambda Trap", "Engineering Rigor in Data Analysis and AI", and "To Declare or to Program?". At this media session, three of these themes were analyzed in depth:

The Rapid Rise of Practical AI

In the past few months, tools like ChatGPT have transformed the public perception of artificial intelligence and made such capabilities widely available. As a large language model (LLM) that has "read" billions of web pages, ChatGPT can offer additional perspectives and assist with many tasks, from generating ideas and requirements to creating code and tests.

On applying AI, the Technology Radar advises against excessive or inappropriate use. Using these tools may raise intellectual property and data privacy concerns, including some unresolved legal questions, so we recommend that businesses consult their legal teams before adopting them. AI models can now generate a good first draft, but generated content always needs to be monitored, verified, vetted, and used responsibly by humans. Organizations and users that ignore these warnings face reputational and security risks. Even some products remind users: "AI-generated content may contain errors. Please verify that it is correct and reasonable before using it."

"ChatGPT is impressive enough, but if I can't use it safely, or find a comparable alternative, it will only cause anxiety," said Xu Hao, CTO of Thoughtworks China, during his interpretation of the theme "Two Roads to Large Language Models".

On using large language models, Xu Hao sees two roads at present. "One is rooted in traditional machine learning and follows the logic of transfer learning: given a pre-trained model, you continue training it on data closer to a specific domain, transferring the pre-trained capability into that domain," he explained. This volume of the Technology Radar mentions domain-specific LLMs: fine-tuning a general-purpose large language model with domain-specific data makes it usable for a variety of tasks, including information retrieval, enhanced user support, and content creation, and the practice is already showing its potential in law and finance. In addition, self-hosted LLMs have become a reality. Self-hosting brings many benefits, such as finer control over fine-tuning for specific use cases, improved security and privacy, and offline access. By contrast, hosted LLM services may retain or re-share your data, putting confidential information at risk; the compute cost of self-hosting, however, also has to be weighed.
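The transfer-learning logic Xu Hao describes can be illustrated with a deliberately tiny numeric sketch: start from "pretrained" parameters and nudge them with a small amount of domain data, rather than training from scratch. The one-parameter model and all numbers below are invented for illustration; real LLM fine-tuning updates billions of parameters, but the update logic is analogous.

```python
# Toy illustration of transfer learning behind domain-specific fine-tuning:
# begin from a "pretrained" weight and run a few gradient steps on a small
# domain dataset, instead of learning the weight from zero.

def fine_tune(w_pretrained, domain_data, lr=0.1, epochs=50):
    """Gradient descent on squared error for a one-parameter model y = w * x."""
    w = w_pretrained
    for _ in range(epochs):
        for x, y in domain_data:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

# "Pretrained" weight learned on general data where roughly y = 2x.
w_general = 2.0

# Small domain-specific dataset where the true relationship is y = 3x.
domain = [(1.0, 3.0), (2.0, 6.0), (0.5, 1.5)]

w_domain = fine_tune(w_general, domain)
print(round(w_domain, 2))  # → 3.0
```

The point of the sketch is that the starting weight already encodes general knowledge, so only a handful of domain examples are needed to adapt it, which is why fine-tuning a pre-trained LLM is far cheaper than training one.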

The other road relies on the reading-comprehension and reasoning abilities of the large language model itself. "This way, you need no elaborate data collection, no expensive GPUs, and no lengthy training. With a few conversational corrections, the large language model can be steered into the domain you want," Xu Hao said. Examples are prompt engineering and LangChain, both mentioned in this volume of the Technology Radar. The former is the process of designing and refining prompts for generative AI models to elicit high-quality responses: carefully crafting prompts that are specific, clear, and relevant to the task at hand, so as to guide the model toward useful output. The latter is a framework for building applications on top of large language models (LLMs). These models have sparked a race to apply generative AI across all kinds of scenarios.
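This second road can be sketched in plain Python: instead of retraining the model, we "migrate" a general-purpose LLM to a domain by structuring the context we hand it. The function, field names, and example content below are all illustrative, not any specific library's API.

```python
# Minimal sketch of prompt engineering: a role, a task description, and a
# few-shot example are assembled into one structured prompt string that
# would then be sent to an LLM.

def build_prompt(role, task, examples, question):
    """Assemble a structured prompt: role, task, few-shot examples, question."""
    lines = [f"You are {role}.", f"Task: {task}", ""]
    for q, a in examples:                  # few-shot demonstrations
        lines += [f"Q: {q}", f"A: {a}", ""]
    lines += [f"Q: {question}", "A:"]      # the model completes after "A:"
    return "\n".join(lines)

prompt = build_prompt(
    role="a contract-law assistant",
    task="answer questions about lease agreements concisely",
    examples=[("What is a security deposit?",
               "Money held by the landlord against damage or unpaid rent.")],
    question="Can a landlord raise rent mid-lease?",
)
print(prompt)
```

Frameworks like LangChain essentially industrialize this pattern, managing prompt templates, chaining model calls together, and wiring in external data sources.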

Against this backdrop, Xu Hao believes the future of LLMs is still contested: which road to take, and what the mainstream model will look like, remain questions for the industry to explore.

Accessible, easy-to-use design

Accessible design has been valued by organizations for many years. In this volume of the Technology Radar, Thoughtworks highlights our teams' growing experience with tools and techniques that lead to better accessible design. Under accessibility annotations in design, we recommend Figma's accessibility annotation plugins, including The A11y Annotation Kit, Twitter's Accessibility Annotation Library, and the Axe toolset's Axe for Designers. These tools ease communication within the team and help it consider important elements such as document structure, semantic HTML, and alternative text from the very start of the work. Tools like axe DevTools, Accessibility Insights for Web, and the ARC Toolkit help practitioners carry out intelligently guided accessibility testing. We love seeing this emphasis on accessible design, opening up features to more people.

"To keep social exclusion from worsening, tearing down these invisible digital walls is an important responsibility of every technologist and enterprise." On how to achieve accessible design, Nina Zhou, Head of Social Impact and Sustainability at Thoughtworks China, gave an interpretation titled "Tearing Down the Digital Wall: Making Information Accessible and Easy to Use". She pointed out: "In the digital age, many groups face digital walls raised by factors such as age, education, disability, income, and geography," while at the same time being gradually shut out by a physical world that increasingly assumes digital access. To combat social exclusion, it is the responsibility of every technologist and business to build digital inclusion and information accessibility into their products and organizations.

What should responsible enterprises and product teams focus on, and how can they take the long view and weave accessible, easy-to-use technology into the end-to-end product development process? Drawing on Thoughtworks' own system transformations and client work, Nina Zhou summarized an approach along four dimensions:

  1. Ground universal design in people's deeper needs
  2. Adopt accessibility technology early to reduce customer acquisition costs
  3. Incorporate accessible practices and tools throughout the agile delivery lifecycle
  4. Cultivate a corporate culture that serves as the cornerstone of digital inclusion

Technology, people, and society are deeply intertwined, and demand for accessible products and services is bound to grow. Enterprises that fail to prepare in advance will face sharply higher transformation costs later; those that cannot offer accessible services will gradually lose a sizable group of users and fall behind in the competition.

The Lambda Trap

Serverless functions such as AWS Lambda appear ever more often in architects' and developers' toolboxes and are used for a wide variety of cloud-based tasks. But like many useful things, a solution can start out clean and helpful, then, as it grows with its own success, strain against the constraints of the paradigm, become unwieldy, and eventually be abandoned. We include the Lambda trap as a theme in this volume because, while we have seen many serverless-style solutions deployed successfully, we have also heard cautionary tales from projects: when complex execution and data flows span multiple interdependent Lambdas, the Lambda pinball anti-pattern can arise. At the code level, a simple mapping between domain concepts and the many Lambdas involved becomes impossible, making every change or addition a challenge.

At the press conference, Liu Shangqi, General Manager of Thoughtworks China for Hong Kong and Macau and member of the global Technology Advisory Board, shared his views in a talk titled "The Lambda Trap: Cut Costs by 90% by Migrating from Microservices to a Monolith?".

Liu Shangqi argues: "Serverless functions are not a panacea. You need to understand their limitations and make trade-offs before adopting them. One challenge is managing their complexity and dependencies. As applications grow and evolve, they may need more and more serverless functions to handle different tasks and events, until the functions become redundant, interdependent, and hard to maintain. This is what we call the Lambda pinball."

Like any technical solution, serverless has its niche, but many of its features come with trade-offs. Liu suggests that serverless functions are best suited to simple, stateless, short-lived tasks that benefit from the scalability and cost-effectiveness of the cloud. For more complex or long-running tasks that require state management, data consistency, or transactional integrity, other architectures or technologies are recommended.

One alternative worth trying is to move from a function-based serverless architecture to a more coarse-grained microservices architecture, or even a modular monolith. A monolith is a single application containing all of a system's functionality and logic, as opposed to microservices, which are small independent services that communicate with one another. Monoliths have traditionally been seen as outdated and inflexible, but some companies report cutting cost and complexity by switching from microservices back to monoliths, reflecting the industry's current rethink of architectural complexity.
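The consolidation Liu describes can be sketched as follows: instead of an order bouncing through a chain of tiny, separately deployed functions (the "pinball"), the same domain steps live in one coarse-grained module with a single entry point. All names, the order fields, and the tax rate below are illustrative, not any real system's API.

```python
# Sketch of consolidating fine-grained serverless functions. Imagine
# validate, price, and add_tax were each a separate Lambda wired together
# by events; here they become plain functions behind one entry point,
# giving one deployable unit and one place to trace a change.

def validate(order):
    if order.get("qty", 0) <= 0:
        raise ValueError("quantity must be positive")
    return order

def price(order):
    order["subtotal"] = order["qty"] * order["unit_price"]
    return order

def add_tax(order, rate=0.1):
    order["total"] = round(order["subtotal"] * (1 + rate), 2)
    return order

def process_order(order):
    """Coarse-grained entry point: the whole domain flow is visible here,
    instead of being scattered across interdependent Lambdas."""
    return add_tax(price(validate(order)))

result = process_order({"qty": 3, "unit_price": 10.0})
print(result["total"])  # → 33.0
```

The design point is not that functions are bad, but that when domain concepts map onto many interdependent deployment units, pulling them back into one module restores a simple mapping between concept and code.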

Choosing an architectural style means weighing many factors: the size, complexity, domain, requirements, and goals of both the application and the organization. Serverless functions are powerful tools for building cloud-based applications, but they are not without challenges or limits, and developers need to stay alert to the Lambda trap.

The above is Thoughtworks' interpretation of three themes from Volume 28 of the Technology Radar; for the remaining themes and blips, please visit our website for the full report. The Technology Radar tries to capture as much of the software industry's evolution as possible. To make room for new content, we adjust the blips that appear in each volume, and some that have not moved recently may be dropped, but leaving out a technology does not mean we no longer care about it. Every macro shift begins with small signals; we will keep watching them, supporting the growth of excellent software businesses, and driving the IT revolution forward.
