Observe.AI Launches 30-Billion-Parameter Contact Center LLM and Generative AI Suite

Conversation intelligence platform Observe.AI has announced the launch of a 30-billion-parameter contact center large language model (Contact Center LLM), along with a generative AI suite designed to improve agent performance.

The company claims that its proprietary LLM, trained on a massive dataset of real-world contact center interactions, can handle a variety of AI-based tasks such as call summarization, automated QA, and coaching. Observe.AI emphasizes that the model's unique value lies in the calibration and control it gives users: the platform allows them to fine-tune and customize the model to suit their specific contact center requirements.
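Observe.AI has not published an API for these tasks, so the following is a hypothetical sketch only; the endpoint URL, model name, and request fields are invented to illustrate what invoking a task like call summarization against a hosted contact center model might look like:

```python
# Hypothetical sketch: the endpoint, model name, and parameters below are NOT
# Observe.AI's published API; they only illustrate the kind of task described.
import requests

TRANSCRIPT = """Agent: Thanks for calling, how can I help?
Customer: I was double-billed on my May invoice.
Agent: I see the duplicate charge; I'll refund it within 3-5 business days."""

response = requests.post(
    "https://api.example.com/v1/summarize",  # placeholder URL
    headers={"Authorization": "Bearer <token>"},
    json={
        "model": "contact-center-llm-30b",   # hypothetical model identifier
        "transcript": TRANSCRIPT,
        "task": "call_summary",              # e.g. summarization vs. auto-QA
    },
    timeout=30,
)
print(response.json()["summary"])
```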

“The LLM is trained at several parameter scales (7B, 13B, and 30B) to maximize contact center performance. In preliminary tests, our Contact Center LLM was 35% more accurate than GPT-3.5 at automatic conversation summarization and 33% more accurate at in-call sentiment analysis.”

Leveraging these LLM capabilities, Observe.AI's generative AI suite aims to improve agent performance across all customer interactions: the calls and chats, inquiries, complaints, and everyday conversations handled by contact center teams. According to Swapnil Jain, CEO of Observe.AI, “Our LLM was extensively trained on a domain-specific dataset of contact center interactions. The training process involved leveraging a vast number of data points.”

He emphasized the importance of quality and relevance in the instruction dataset, which contains hundreds of curated instructions covering tasks directly applicable to contact center use cases. According to Jain, this meticulous approach to dataset curation improves the LLM's ability to deliver the accurate, contextual responses the industry demands.
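The article does not show the dataset format. Instruction-tuning data is commonly stored as (instruction, input, output) triples, so a single curated record might look like the hypothetical example below; the field names follow that common convention, not Observe.AI's actual schema:

```python
# Hypothetical example of one curated instruction record; field names follow
# the common (instruction, input, output) convention, not Observe.AI's schema.
record = {
    "instruction": "Summarize the call and state whether the issue was resolved.",
    "input": (
        "Customer: My router keeps dropping the connection.\n"
        "Agent: I've pushed a firmware update and the line tests clean now."
    ),
    "output": (
        "Customer reported intermittent connectivity; agent applied a firmware "
        "update and verified the line. Issue resolved on the call."
    ),
    "task": "call_summarization",  # other tasks: auto-QA scoring, coaching notes
}
```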

Observe.AI's claim that its proprietary model outperforms GPT-3.5 in consistency and relevance marks a notable advance. “Our LLM is trained only on data from which all sensitive customer information and PII have been fully redacted. Our redaction benchmark is exemplary in the industry: across 100 million calls we redacted sensitive information more than 150 million times, with fewer than 500 reported errors. This protects sensitive information and maintains privacy and compliance while preserving the maximum amount of information for LLM training.”
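Observe.AI has not described its redaction pipeline. As a minimal sketch of the general technique, pattern-based PII redaction over a transcript might look like the following; production systems typically layer NER models, checksum validation, and human review on top of patterns like these:

```python
import re

# Minimal pattern-based redaction sketch; real pipelines add NER models,
# checksum validation (e.g. Luhn for card numbers), and human review.
PII_PATTERNS = {
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a bracketed type label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Card 4111 1111 1111 1111, call me at 555-867-5309."))
# -> "Card [CARD], call me at [PHONE]."
```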

Jain also revealed that the company has implemented a robust data protocol for storing all customer data, including LLM-generated data, in full compliance with regulatory requirements. Each customer account is assigned a dedicated storage partition with its own encryption and unique identification.
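The article gives no implementation detail here. As a hedged sketch of the general pattern (a dedicated partition and a dedicated encryption key per tenant), one might structure it as below, using the `cryptography` library's Fernet symmetric encryption; the class and paths are invented for illustration:

```python
from pathlib import Path
from cryptography.fernet import Fernet

# Sketch of per-tenant partitioning with per-tenant keys; a production system
# would hold keys in a KMS/HSM rather than generating them in-process.
class TenantStore:
    def __init__(self, root: str):
        self.root = Path(root)
        self.keys: dict[str, bytes] = {}  # tenant_id -> encryption key

    def _cipher(self, tenant_id: str) -> Fernet:
        if tenant_id not in self.keys:
            self.keys[tenant_id] = Fernet.generate_key()
        return Fernet(self.keys[tenant_id])

    def write(self, tenant_id: str, name: str, data: bytes) -> None:
        partition = self.root / tenant_id  # dedicated partition per account
        partition.mkdir(parents=True, exist_ok=True)
        (partition / name).write_bytes(self._cipher(tenant_id).encrypt(data))

store = TenantStore("/tmp/llm-data")
store.write("acct-42", "call-001.txt", b"redacted transcript ...")
```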

Source: www.oschina.net/news/246226/observe-ai-30-billion-parameter-contact-center-llm