Appen and Reka AI join forces to build high-quality multimodal LLM applications

Appen recently announced a partnership with Reka AI, an AI startup, to combine world-class data services with multimodal large language models.

The rise of innovative applications such as ChatGPT has propelled the development of large language models (LLMs) by leaps and bounds. LLMs can help enterprises improve operational efficiency and give end users a refreshing experience. However, large enterprises often run into friction when deploying LLMs, because these models are not off-the-shelf enterprise solutions. To take full advantage of LLMs, companies need to fine-tune the underlying models for their own application scenarios and continuously evaluate and monitor model performance in the real world.
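As a simple illustration of the "continuously evaluate and monitor" step described above, the following Python sketch scores a model against a fixed evaluation set and reports a mean metric. It is a generic, minimal example, not Appen's or Reka's tooling; the dummy_model function and the exact-match metric are placeholders for a real fine-tuned endpoint and task-specific scoring.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class EvalCase:
    prompt: str      # input sent to the model
    reference: str   # expected answer used for scoring


def exact_match(prediction: str, reference: str) -> float:
    """Crude placeholder metric; real evaluations use task-specific scoring."""
    return 1.0 if prediction.strip().lower() == reference.strip().lower() else 0.0


def evaluate(generate: Callable[[str], str],
             cases: List[EvalCase],
             metric: Callable[[str, str], float] = exact_match) -> float:
    """Run the model on a fixed evaluation set and return the mean score."""
    scores = [metric(generate(case.prompt), case.reference) for case in cases]
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    # Stand-in "model": replace this with a call to a real fine-tuned endpoint.
    def dummy_model(prompt: str) -> str:
        return "Paris" if "France" in prompt else "unknown"

    eval_set = [
        EvalCase("What is the capital of France?", "Paris"),
        EvalCase("What is the capital of Japan?", "Tokyo"),
    ]
    print(f"mean score: {evaluate(dummy_model, eval_set):.2f}")
```

Running a loop like this on a recurring schedule, with fresh production-style prompts, is one way to track whether a fine-tuned model keeps performing as expected after deployment.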


Reka AI is a full-stack model provider, offering solutions that turn base models into enterprise-grade production models. Before joining Reka, the team conducted AI research at DeepMind, FAIR, and Google Brain, achieving major breakthroughs. Reka's proprietary multimodal large language model is trained to read text, images, and tabular data. Reka uses proprietary algorithms to quickly and cost-effectively customize models for any data and application scenario, and its unique approach to personalization and compression lets customers deploy a wide range of applications while meeting constraints such as quality, latency, and data privacy requirements.

Reka AI draws on its team's deep industry expertise to develop advanced proprietary algorithms; its researchers previously led major breakthroughs in AI research at companies such as Google Brain and DeepMind. Appen, with 26 years of experience in AI training data and language services, is uniquely positioned to help enterprises accelerate LLM deployment and fully unlock the potential of generative AI.


Through this alliance, Appen and Reka AI will develop effective, comprehensive generative AI solutions for enterprises, helping them create and own enterprise-grade production models that meet their specific deployment requirements.

Partnering with Reka will enable leading enterprises to build highly secure, custom proprietary models. Today the industry is largely limited to public APIs, leaving businesses exposed to data breaches and privacy risks around highly sensitive data. Together, Appen and Reka will give enterprises an unprecedented ability to secure their LLM applications.

——Appen CEO Armughan Ahmad

Productizing a generative AI solution requires not only expertise in data curation, but also continuous human feedback to improve model performance and a robust model evaluation platform. Backed by Appen's high-quality human evaluation data, Reka can build, test, and deploy LLMs faster, enabling its proprietary algorithms to quickly customize Yasa for many use cases. The partnership gives enterprises a full-stack solution for deploying Yasa across their application scenarios.

Our approach is flexible, enabling enterprises to deploy Yasa under varying quality, latency, and privacy constraints. By partnering with Appen, our customers can further benefit from Appen's world-class data services expertise, greatly simplifying the path to production-ready models.

——Reka CEO Dani Yogatama

Origin blog.csdn.net/Appen_China/article/details/130606936