Accelerated Machine Learning on Consumer GPUs with MLC.ai

MLC.ai is a machine learning compiler that lets real-world language models run smoothly on the consumer GPUs found in phones and laptops, with no server support required. It can target multiple GPU backends, including Vulkan, Metal, and CUDA, making it possible to run large language models such as Vicuna at impressive speed.

The performance is impressive: the mlc_chat_cli demo runs more than three times faster than 7B q4_2-quantized Vicuna on llama.cpp on an M1 Max MacBook Pro. The project was originally designed for WebGPU, an implementation of only a few hundred lines, and extending it to other GPU backends took just tens of lines more.

One of the developers commented that this is the first demo of a machine learning compiler helping to deploy a real-world LLM (Vicuna) to consumer-class GPUs on phones and laptops. The tool opens up many possibilities, and combining it with a powerful frontend like SillyTavern, which can even run on a smartphone, would be very interesting.

Among those backends, Vulkan deserves special mention: performance on AMD cards using Vulkan is excellent, making it practical to run LLMs on GPUs that lack CUDA support. This is great news for AMD users, who can now take advantage of MLC.ai's speed as well.

To try out MLC.ai, users can swap in and test different language models; the tool is versatile and easy to use. Other models, such as Pygmalion, can be installed by copying their weight files into the 'dist' folder, as sketched below.
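As a minimal sketch of that workflow (the paths are hypothetical and the --model flag is an assumption; check `mlc_chat_cli --help` on your build for the exact option):

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical source path: a quantized model downloaded locally.
src = Path("~/models/pygmalion-7b-q4f16").expanduser()
dist = Path("mlc-llm/dist") / src.name

# Copy the model directory into MLC's dist/ folder so the CLI can find it.
shutil.copytree(src, dist, dirs_exist_ok=True)

# Launch the chat CLI against the new model. The --model flag is an
# assumption; consult `mlc_chat_cli --help` for the exact option name.
subprocess.run(["mlc_chat_cli", "--model", src.name], check=True)
```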


Similar Posts


New Advances in AI Model Handling: GPU and CPU Interplay

With recent breakthroughs, it appears that AI models can now be split between the CPU and GPU, potentially making expensive, high-VRAM GPUs less of a necessity. Users have reported impressive results with models like Wizard-Vicuna-13B-Uncensored.ggml.q8_0.bin using this technique, yielding fast execution with minimal VRAM use. This could be a game-changer for those with limited VRAM but ample system RAM, such as owners of a 3070 Ti mobile GPU paired with 64GB of RAM; a sketch of the technique follows below.
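The post doesn't name a specific tool, but as an illustration, here is a minimal sketch using the llama-cpp-python bindings, where n_gpu_layers controls how many transformer layers are offloaded to the GPU (the layer count shown is an assumption to adjust for your hardware):

```python
from llama_cpp import Llama  # pip install llama-cpp-python (built with GPU support)

llm = Llama(
    model_path="models/Wizard-Vicuna-13B-Uncensored.ggml.q8_0.bin",
    n_gpu_layers=20,  # layers offloaded to VRAM; the rest run on CPU/system RAM
    n_ctx=2048,       # context window size
)

out = llm("Q: Why split model layers between CPU and GPU? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

The more layers you offload, the faster inference runs, up to the point where VRAM is exhausted; everything left over executes on the CPU out of system RAM.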

There's an ongoing discussion about the possibilities of splitting … click here to read


Exploring the Best GPUs for AI Model Training

Are you looking to enhance your AI model performance? Having a powerful GPU can make a significant difference. Let's explore some options!

If you're on a budget, there are alternatives. You can run LLaMA-based models purely on your CPU, or split the workload between your CPU and GPU: consider downloading KoboldCPP and assigning as many layers as your GPU can handle while letting the CPU and system RAM take the rest (a back-of-the-envelope sketch for choosing the layer count follows below). Additionally, you can … click here to read
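Choosing the layer count is simple arithmetic: divide free VRAM by the approximate per-layer size. A rough sketch, where every figure is an illustrative assumption rather than a measurement:

```python
# Rough estimate of how many transformer layers fit in free VRAM.
# All figures are illustrative assumptions; measure your own model.

model_size_gb = 13.0     # ~13 GB for a 13B model at 8-bit quantization (assumption)
n_layers = 40            # LLaMA-13B has 40 transformer layers
free_vram_gb = 8 * 0.9   # 8 GB card, keeping ~10% headroom (assumption)

per_layer_gb = model_size_gb / n_layers
gpu_layers = int(free_vram_gb / per_layer_gb)
print(f"~{per_layer_gb:.2f} GB per layer -> offload about {gpu_layers} layers")
```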


Bringing Accelerated LLM to Consumer Hardware

MLC AI, a startup specializing in advanced language models, has announced its latest breakthrough: a way to bring accelerated large language model (LLM) training to consumer hardware. This development will make training advanced LLMs more accessible and affordable for companies and organizations, paving the way for faster and more efficient natural language processing.

The MLC team has achieved this by optimizing its training process for consumer-grade hardware, which typically lacks the computational power of high-end data center infrastructure. This optimization … click here to read


Open Source Projects: Hyena Hierarchy, Griptape, and TruthGPT

Hyena Hierarchy is a new subquadratic-time layer for AI models that combines long convolutions with gating, significantly reducing compute requirements. The technique can increase the context length of sequence models while making them faster and more efficient, and it could pave the way for models like GPT-4 that run much faster and use 100x less compute, yielding substantial improvements in speed and performance. Check out Hyena on GitHub for more information; a simplified sketch of the long-convolution-plus-gating idea follows below.
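This is not the actual Hyena implementation, only a simplified sketch of the core idea: a learned filter as long as the sequence, applied via FFT in O(L log L) time instead of attention's O(L^2), combined with an elementwise gate. (The plain learned filter below stands in for Hyena's implicitly parameterized one.)

```python
import torch
import torch.nn as nn

class LongConvGate(nn.Module):
    """Simplified long-convolution + gating layer in the spirit of Hyena."""

    def __init__(self, dim: int, seq_len: int):
        super().__init__()
        # Stand-in for Hyena's implicitly parameterized long filter.
        self.filter = nn.Parameter(torch.randn(dim, seq_len) * 0.02)
        self.proj_v = nn.Linear(dim, dim)  # value branch
        self.proj_g = nn.Linear(dim, dim)  # gate branch
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        v = self.proj_v(x).transpose(1, 2)   # (batch, dim, seq_len)
        g = torch.sigmoid(self.proj_g(x))    # elementwise gate

        # Causal long convolution via FFT: O(L log L) instead of O(L^2).
        L = v.shape[-1]
        v_f = torch.fft.rfft(v, n=2 * L)
        h_f = torch.fft.rfft(self.filter, n=2 * L)
        conv = torch.fft.irfft(v_f * h_f, n=2 * L)[..., :L]

        return self.out(conv.transpose(1, 2) * g)

y = LongConvGate(dim=64, seq_len=128)(torch.randn(2, 128, 64))
print(y.shape)  # torch.Size([2, 128, 64])
```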

Elon Musk has been building his own … click here to read


Alternatives for Running Stable Diffusion Locally and in the Cloud

If you are looking for ways to run Stable Diffusion locally or in the cloud without having to spin up a GPU each time and load models, there are several options available. Here are some of the most cost-effective and reliable solutions:
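For the local route, here is a minimal sketch using the Hugging Face diffusers library (the library choice and model ID are assumptions; the post does not name a specific tool here). Loading the pipeline once and reusing it avoids spinning the model up for every prompt:

```python
import torch
from diffusers import StableDiffusionPipeline

# Model ID and dtype are assumptions; swap in whichever checkpoint you prefer.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # keep the pipeline resident so prompts reuse it

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```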


Top AI Sites and Tools for 2024

Embark on a journey to the forefront of artificial intelligence with these premier platforms, each dedicated to offering groundbreaking AI tools and applications.


Enhancing GPT's External Data Lookup Capacity: A Walkthrough

Accessing external information and blending it with AI-generated text would significantly enhance AI applications. For instance, combining OpenAI's GPT with efficient external data lookup can produce more comprehensive and contextually accurate output.

One promising approach is to leverage the LangChain API to extract and split text, embed it, and create a vectorstore which can be queried for relevant context to add to a prompt … click here to read
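As a sketch of that pipeline (chunk sizes and the embedding model are illustrative assumptions; the import paths match LangChain's 2023-era layout):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# 1. Split the source document into overlapping chunks (sizes are assumptions).
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(open("document.txt").read())

# 2. Embed the chunks and index them in a FAISS vectorstore.
store = FAISS.from_texts(chunks, OpenAIEmbeddings())

# 3. At query time, retrieve relevant chunks and prepend them to the prompt.
question = "What does the document say about GPU backends?"
context = "\n".join(d.page_content for d in store.similarity_search(question, k=3))
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```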



© 2023 ainews.nbshare.io. All rights reserved.