Open-source AI chatbot MLC LLM released for multiple platforms

At present, most AI chatbots need a cloud connection for processing, and even those that can run locally demand high-end hardware. So is there a lightweight chatbot that works without an internet connection?

A new open-source project called MLC LLM has been launched on GitHub. It runs entirely locally with no network connection, and even older computers with integrated graphics, as well as Apple iPhones, can run it.

The MLC LLM project introduction states: "MLC LLM is a universal solution that allows any language model to be deployed natively on a diverse set of hardware backends and native applications, plus a productive framework for everyone to further optimize model performance for their own use cases. Everything runs locally with no server support and is accelerated with local GPUs on phones and laptops. Our mission is to enable everyone to develop, optimize, and deploy AI models natively on their own devices."

According to the GitHub page, the developers of this project come from Carnegie Mellon University's Catalyst program, the SAMPL machine learning research group, the University of Washington, Shanghai Jiao Tong University, and OctoML. They also maintain a related project called WebLLM, which runs an AI chatbot entirely in a web browser.

MLC LLM uses Vicuna-7B-V1.1, a lightweight LLM based on Meta's LLaMA. Although its output quality does not match GPT-3.5 or GPT-4, its small size is an advantage.

Currently, MLC LLM is available for the Windows, Linux, macOS, and iOS platforms; there is no Android version yet.

According to testing by Tom's Hardware, the Apple iPhone 14 Pro Max and iPhone 12 Pro Max, both with 6GB of memory, successfully ran MLC LLM, which has an installation size of 3GB. The Apple iPhone 11 Pro Max, with 4GB of memory, could not run it.

In addition, a ThinkPad X1 Carbon (6th Gen) also ran MLC LLM successfully in testing; this is a notebook with an i7-8550U processor, an integrated Intel UHD 620 GPU, and no discrete graphics card. On the PC platform, MLC LLM must be run from the command line. Performance in the test was mediocre: responses took nearly 30 seconds, and the bot showed almost no ability to sustain a multi-turn conversation. Hopefully subsequent versions will improve this.
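For readers who want to try the command-line workflow described above, a setup along the following lines was suggested in the project's early instructions. This is a sketch only: the conda package name (`mlc-chat-nightly`), the Hugging Face weight repository, and the `mlc_chat_cli` command are assumptions based on the project at the time of writing and may have changed since, so check the MLC LLM GitHub page for current steps.

```shell
# Sketch of a local MLC LLM setup (assumed package/repo names; verify
# against the official MLC LLM instructions before running).

# Create and activate an isolated conda environment
conda create -n mlc-chat -y
conda activate mlc-chat

# Install Git with LFS support (the model weights are large files)
conda install -y git git-lfs
git lfs install

# Install the chat CLI from the mlc-ai conda channel
conda install -y -c mlc-ai -c conda-forge mlc-chat-nightly

# Download the prebuilt Vicuna-7B weights and model libraries
mkdir -p dist
git clone https://huggingface.co/mlc-ai/demo-vicuna-v1-7b-int3 dist/vicuna-v1-7b
git clone https://github.com/mlc-ai/binary-mlc-llm-libs.git dist/lib

# Start an interactive chat session in the terminal
mlc_chat_cli
```

The model download is several gigabytes, which matches the roughly 3GB installation size reported in the test above.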

Origin: blog.csdn.net/u014389734/article/details/130716178