UltraLM-13B on the Leaderboard

UltraLM-13B has now been tested on this open leaderboard. Click here to view the leaderboard. It comes in as only the 25th-best 13B model there. If that ranking is accurate, does UltraLM's high AlpacaEval score point to a problem with its dataset, or is it evidence of how unreliable AlpacaEval, and the broader practice of using LLMs to judge other LLMs, can be? Edit: it scores quite badly on this leaderboard too. Here is the leaderboard.

Just have a look at the training dataset. If all of that was really used during training, the result could be believable: that's 8 GB of data! Here is the dataset.

UltraChat contains 1.5 million high-quality multi-turn dialogues and covers a wide range of topics and instructions.
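To get a feel for the scale and multi-turn structure yourself, here is a minimal sketch using the Hugging Face datasets library; the dataset id and field names below are assumptions, so check the dataset page linked above for the actual ones.

# Hypothetical sketch: stream UltraChat from the Hugging Face Hub so you
# don't have to download all ~8 GB up front. The id "stingning/ultrachat"
# and the "data" field are assumptions.
from datasets import load_dataset

ds = load_dataset("stingning/ultrachat", split="train", streaming=True)
sample = next(iter(ds))
print(sample.keys())       # inspect the record schema
print(sample["data"][:2])  # first two turns of one multi-turn dialogue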

The paper argues that the most straightforward lever, namely the quality and diversity of the data used during training, plays a vital role in further improving the performance of chat language models. The paper: Click here to read the paper.

It's worth noting that some baselines seem underwhelming. Benchmarks don't tell the whole story, and the data generation strategy could be flawed. Still, given the sheer scale of the ChatGPT-generated data, it's hard not to gain an edge over ordinary fine-tunes, even well-engineered ones like WizardLM. (By the way, people who are skeptical of the model because it's Chinese should note that all of WizardLM's authors are Chinese as well, just working in Hong Kong and at Microsoft.)

Always remember the rule of thumb: if there's no change in architecture, training, or fine-tuning methodology, then the gains are most likely gamification of the dataset to score better on the leaderboard. Here is additional information available on their GitHub. It appears that the model was developed at Tsinghua University in China. Judgment should be withheld until transparent, reproducibly designed benchmarks from other reputable sources become available.
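For the skeptical, one crude but illustrative check for that kind of dataset gamification (not something from the original post) is to search for long n-gram overlaps between the fine-tuning data and benchmark questions:

# Illustrative sketch only: flag training examples that share a long n-gram
# with any benchmark prompt, a rough proxy for test-set leakage.
def ngrams(text, n=13):
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def find_overlaps(train_texts, benchmark_texts, n=13):
    bench = set().union(*(ngrams(t, n) for t in benchmark_texts))
    return [t for t in train_texts if ngrams(t, n) & bench]

# Any non-empty result suggests benchmark items leaked into the training set.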

It's remarkable that a 13B model can be so performant. Perhaps it's fine-tuned specifically to pass those tests; some models may prioritize passing benchmarks over genuinely helping people. WizardLM is still preferred here because it feels like the smartest of all the models tested, so there's some sense to that leaderboard.


Similar Posts


Stack Llama and Vicuna-13B Comparison

Stack Llama, available through the TRL library, is an RLHF model that handles logical tasks well, performing similarly to the standard Vicuna-13B 1.1 in initial testing. However, it requires about 25.2 GB of dedicated GPU VRAM and takes approximately 12 seconds to load.

The Stack Llama model was trained using the StableLM training method, which aims to improve the stability of the model's training and make it more robust to the effects of noisy data. The model was also trained on a … click here to read
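As a side note, a VRAM figure like the ~25.2 GB quoted above can be sanity-checked with back-of-the-envelope arithmetic: the weights alone need roughly (parameter count) x (bytes per parameter), plus overhead for activations. An illustrative sketch (not from the original post):

# Rough VRAM estimate for model weights; actual usage adds activation
# and KV-cache overhead on top. Numbers are illustrative only.
def weight_gib(n_params, bytes_per_param):
    return n_params * bytes_per_param / 2**30

print(f"7B  fp32: {weight_gib(7e9, 4):.1f} GiB")   # ~26.1 GiB
print(f"13B fp16: {weight_gib(13e9, 2):.1f} GiB")  # ~24.2 GiB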


Comparing Large Language Models: WizardLM 7B, Alpaca 65B, and More

A recent comparison of large language models, including WizardLM 7B, Alpaca 65B, Vicuna 13B, and others, showcases their performance across various tasks. The analysis highlights how the models perform despite their differences in parameter count. The GPT4-X-Alpaca 30B model, for instance, comes close to the performance of Alpaca 65B. Furthermore, the Vicuna 13B and 7B models demonstrate impressive results given their lower parameter counts.

Some users … click here to read


Exploring AI Models for Role-playing

If you're into role-playing and interactive fiction, there are several exciting AI models and projects worth checking out. Here's a roundup of some intriguing options:

  • KoboldCPP: You want to be running KoboldCPP, not ooba. Not only is it better optimized for pure CPU inference, but it has a lot of tools built in to facilitate RP. Setting up lorebooks and world info takes some time, but once done, it's pretty slick.
  • click here to read

Building an AI-Powered Chatbot using lmsys/fastchat-t5-3b-v1.0 on Intel CPUs

Discover how you can harness the power of the lmsys/fastchat-t5-3b-v1.0 language model and leverage Intel CPUs to build an advanced AI-powered chatbot. Let's dive in!

Python Code:

# Installing the Intel® Extension for PyTorch* CPU version
python -m pip install intel_extension_for_pytorch

# Importing the required libraries
import torch
from transformers import T5Tokenizer, AutoModelForSeq2SeqLM
import intel_extension_for_pytorch as ipex

# Loading the T5 model and tokenizer
tokenizer = T5Tokenizer.from_pretrained("lmsys/fastchat-t5-3b-v1.0")
model = AutoModelForSeq2SeqLM.from_pretrained("lmsys/fastchat-t5-3b-v1.0", low_cpu_mem_usage=True)

# Setting up the conversation prompt
prompt …

click here to read
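The snippet cuts off at the prompt setup; a hedged sketch of how the rest might look, with an assumed prompt format and the usual IPEX optimization step (not the original post's exact code):

# Hypothetical continuation: optimize the model with IPEX, then generate.
# The prompt format below is an assumption; FastChat defines its own template.
model = model.eval()
model = ipex.optimize(model, dtype=torch.bfloat16)

prompt = "### Human: What is the capital of France?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad(), torch.cpu.amp.autocast():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))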

Exciting News: Open Orca Dataset Released!

It's a moment of great excitement for the AI community as the highly anticipated Open Orca dataset has been released. This dataset has been the talk of the town ever since the research paper was published, and now it's finally here, thanks to the dedicated efforts of the team behind it.

The Open Orca dataset holds immense potential for advancing natural language processing and AI models. It promises to bring us closer to open-source models that can compete with the likes of … click here to read


Stable Diffusion Forks: Auto1111 vs. Vladmandic

Recently, there has been a lot of buzz about the different forks of Stable Diffusion, particularly Auto1111 and Vladmandic. While many have praised Auto1111 for his contributions to the diffusion-based community, others have raised concerns about his controversial past. Meanwhile, Vladmandic's fork has gained popularity for its additional optimization options and faster performance.

Some users have reported difficulty in setting up Vladmandic's fork on Windows, … click here to read


What has changed in Transformer architecture?

There have been close to no improvements on the original transformer architecture. Different architectures are better at different tasks, and the training objective can also vary. There's a major error in the paper "Attention is All You Need": the layer norms were accidentally placed after the sublayers rather than before them, an ordering later work found to train less stably. Putting attention layers and MLPs in parallel makes the model run much faster but doesn't really affect performance. The original … click here to read
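To make the layer-norm point concrete, here is a minimal PyTorch sketch of the two orderings (illustrative only, not from the original post):

# Illustrative sketch: post-norm (original paper) vs pre-norm ordering.
import torch.nn as nn

class PostLNBlock(nn.Module):
    """Ordering as published in "Attention is All You Need"."""
    def __init__(self, d, heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, x):
        x = self.ln1(x + self.attn(x, x, x)[0])  # norm AFTER the residual add
        return self.ln2(x + self.mlp(x))

class PreLNBlock(nn.Module):
    """Pre-norm ordering used by most modern decoder stacks."""
    def __init__(self, d, heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, x):
        h = self.ln1(x)                  # norm BEFORE the sublayer
        x = x + self.attn(h, h, h)[0]
        return x + self.mlp(self.ln2(x))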



© 2023 ainews.nbshare.io. All rights reserved.