Using LLMs to Implement Abstract Methods in Python: The llm-strategy Package

llm-strategy is a Python package that uses LLMs (such as OpenAI's GPT-3) to implement abstract methods in interface classes. It does this by forwarding calls to the LLM and converting the responses back into Python data via dataclasses. The package can use docstrings, type annotations, and method/function names as prompts for the LLM, and automatically converts the results back into Python types (currently only @dataclasses are supported). It can also extract a data schema to send to the LLM for interpretation. While llm-strategy still relies on some Python code, it could reduce the need for such code in the future by using additional, cheaper LLMs to automate the parsing of structured data.
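
As a rough sketch of the pattern (the class, fields, and task below are invented for illustration; the decorator and imports follow the project's README at the time of writing and may differ in newer versions), an unimplemented method's docstring and type annotations become the prompt, and the LLM's reply is parsed back into the annotated return type:

    from dataclasses import dataclass

    from langchain.llms import OpenAI
    from llm_strategy import llm_strategy


    # The decorator wraps the class: any method that raises NotImplementedError
    # is implemented by querying the LLM with its docstring, signature, and the
    # instance's field values, then parsing the reply into the return type.
    @llm_strategy(OpenAI(max_tokens=256))
    @dataclass
    class EmailTriage:
        """Decides how urgent an incoming support email is."""

        subject: str
        body: str

        def urgency(self) -> int:
            """Return the urgency of this email on a scale from 1 (ignore) to 5 (act now)."""
            raise NotImplementedError()


    email = EmailTriage(subject="Server down", body="Production API has returned 500s since 9am.")
    print(email.urgency())  # e.g. 5, produced by the LLM rather than hand-written logic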

The GitHub repository of the package is available here, and the documentation can be found here.

The package includes an example implementation of the llm_strategy decorator for Customer and CustomerDatabase classes using OpenAI's LLMs. The example code shows how to find the keys of the customers that best match a natural language query and how to load and store the customer database from/to a file using LLMs. There is also a full example in customer_database_search.py for searching the customer database.
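
A condensed sketch of what such classes might look like follows; the method names and signatures here are paraphrased from the description above rather than copied from customer_database_search.py, so the repository's actual example may differ in detail:

    from dataclasses import dataclass

    from langchain.llms import OpenAI
    from llm_strategy import llm_strategy

    base_llm = OpenAI(max_tokens=1024)


    @llm_strategy(base_llm)
    @dataclass
    class Customer:
        key: str
        first_name: str
        last_name: str
        birthdate: str
        address: str


    @llm_strategy(base_llm)
    @dataclass
    class CustomerDatabase:
        customers: list[Customer]

        def find_customer_keys(self, query: str) -> list[str]:
            """Find the keys of the customers that best match a natural language
            query, sorted by closeness of the match."""
            raise NotImplementedError()

        def load(self, file_contents: str) -> None:
            """Load the customer database from the string contents of a file."""
            raise NotImplementedError()

        def store(self) -> str:
            """Return a string representation of the database suitable for
            writing back to a file."""
            raise NotImplementedError()

The point is that the search behaviour lives entirely in the docstrings: the LLM interprets the query against the dataclass schema instead of hand-written filtering code.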

If you're interested in using LLMs in Python to improve your software engineering practices, the llm-strategy package may be worth exploring. For a wider perspective on why this package could be important in the future, check out this blog post.

Tags: Python, OpenAI, LLMs


Similar Posts


LLAMA-style LLMs and LangChain: A Solution to the Long-Term Memory Problem

LLaMA-style large language models (LLMs) are gaining popularity for tackling long-term memory (LTM) problems, although building such a system is currently a fully manual process. Users may wonder whether any existing GPT-powered applications perform similar tasks. One option is gpt-llama.cpp, a project that uses llama.cpp to mock an OpenAI endpoint so that existing GPT-powered applications can run against llama.cpp-hosted models such as Vicuna.
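
As a rough illustration of what mocking the OpenAI endpoint buys you: an existing GPT-powered application only needs its API base URL changed to talk to the local server. The sketch below assumes the 0.x openai Python client that was current when this was written; the port and model name are placeholders, not values taken from the gpt-llama.cpp docs.

    import openai

    # Point the standard OpenAI client at the local gpt-llama.cpp server instead
    # of api.openai.com; the key is unused locally but the client requires one.
    openai.api_base = "http://localhost:8000/v1"
    openai.api_key = "not-needed-locally"

    response = openai.ChatCompletion.create(
        model="vicuna-13b",  # whichever llama.cpp model the local server exposes
        messages=[{"role": "user", "content": "In one sentence, what is long-term memory for an agent?"}],
    )
    print(response["choices"][0]["message"]["content"])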

LangChain, a framework for building agents, provides a solution to the LTM problem by combining LLMs, tools, and memory. … click here to read
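
The teaser stops there, but a minimal sketch of the memory mechanism it refers to, using the 2023-era LangChain API (these imports have moved in newer releases), could look like this:

    from langchain.chains import ConversationChain
    from langchain.llms import OpenAI
    from langchain.memory import ConversationBufferMemory

    # The memory object stores previous turns and re-injects them into the prompt
    # on every call, giving the chain a simple form of long-term memory.
    conversation = ConversationChain(
        llm=OpenAI(temperature=0),
        memory=ConversationBufferMemory(),
    )

    conversation.predict(input="Hi, my name is Ada and I work on compilers.")
    print(conversation.predict(input="What do I work on?"))  # answered from memory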


Re-Pre-Training Language Models for Low-Resource Languages

Language models are initially pre-trained on a huge corpus of mostly-unfiltered text in the target languages, then they are made into ChatLLMs by fine-tuning on a prompt dataset. The pre-training is the most expensive part by far, and if existing LLMs can't do basic sentences in your language, then one needs to start from that point by finding/scraping/making a huge dataset. One can exhaustively go through every available LLM and check its language abilities before investing in re-pre-training. There are surprisingly many of them … click here to read
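
One quick way to do that capability check before spending on re-pre-training is to load an existing open model and see whether it continues a prompt in the target language coherently; the model name and prompt below are purely illustrative, not taken from the original post.

    from transformers import pipeline

    # Load an existing multilingual open model and inspect whether it can continue
    # a basic sentence in the target language before committing to re-pre-training.
    generator = pipeline("text-generation", model="bigscience/bloom-560m")

    prompt = "Bore da! Sut mae'r tywydd heddiw?"  # Welsh: "Good morning! How's the weather today?"
    output = generator(prompt, max_new_tokens=40, do_sample=False)
    print(output[0]["generated_text"])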


Transforming LLMs with Externalized World Knowledge

The concept of externalizing world knowledge to make language models more efficient has been gaining traction in the field of AI. Current LLMs store enormous amounts of factual knowledge in their parameters, but not all of it is useful or relevant. Therefore, it is important to offload the "facts" and allow LLMs to focus on language and reasoning skills. One potential solution is to use a vector database to store world knowledge.
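
As a rough sketch of that idea (the post names no specific library; chromadb here is an assumption), the facts live in a vector store and only the retrieved ones are handed to the LLM as context:

    import chromadb

    # In-memory vector store holding externalized "world knowledge" facts.
    client = chromadb.Client()
    facts = client.create_collection("world_knowledge")

    facts.add(
        documents=[
            "The Eiffel Tower is located in Paris and was completed in 1889.",
            "Water boils at 100 degrees Celsius at sea level.",
        ],
        ids=["fact-1", "fact-2"],
    )

    # At query time, retrieve only the most relevant facts and prepend them to the
    # prompt, so the LLM reasons over them instead of recalling them from its weights.
    question = "When was the Eiffel Tower completed?"
    results = facts.query(query_texts=[question], n_results=1)
    context = results["documents"][0][0]
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"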

However, some have questioned the feasibility of this approach, as it may … click here to read


Toolkit-AI: A Powerful Toolkit for Generating AI Agents

In the ever-evolving realm of artificial intelligence, developers constantly seek to create intelligent and efficient AI agents that automate tasks and engage with users meaningfully. Toolkit-AI aims to support that goal by giving developers tools for generating AI agents that are both capable and effective.

What is Toolkit-AI?

Toolkit-AI, a Python library, allows developers to generate AI agents that harness either Langchain plugins or ChatGPT … click here to read


Exploring Pygmalion: The New Contender in Language Models

Enthusiasm is building in the OpenAI community for Pygmalion, a cleverly named new language model. While initial responses vary, the community is undeniably eager to delve into its capabilities and quirks.

Pygmalion exhibits some unique characteristics, particularly in role-playing scenarios. It's been found to generate frequent emotive responses, similar to its predecessor, Pygmalion 7B from TavernAI. However, some users argue that it's somewhat less coherent than its cousin, Wizard Vicuna 13B uncensored, as it … click here to read


LMFlow - Fast and Extensible Toolkit for Finetuning and Inference of Large Foundation Models

Some users recommend LMFlow, a fast and extensible toolkit for fine-tuning and inference of large foundation models. Fine-tuning LLaMA-7B takes just 5 hours on a 3090 GPU.

LMFlow is a powerful toolkit designed to streamline the process of finetuning and performing inference with large foundation models. It provides efficient and scalable solutions for handling large-scale language models. With LMFlow, you can easily experiment with different data sets, … click here to read


Suitable Open Source Recommendation Engine for Insurance Recommendations

When it comes to open source recommendation engines tailored for insurance recommendations, two popular choices are:

  • ActionML Engines: This open source project provides a collection of recommendation engines, including the Universal Recommender, which can be customized for insurance recommendations based on user behavior and other relevant data.
  • Cornac: Cornac is a flexible and scalable recommender system library in Python; a minimal usage sketch follows after this list. It offers various recommendation algorithms that … click here to read
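
As a small illustration of the Cornac option, the snippet below follows Cornac's documented quick-start pattern and trains a BPR model on the public MovieLens dataset; in an insurance setting the feedback triples would instead be (customer, policy, interaction) records, and the hyperparameters shown are illustrative.

    import cornac
    from cornac.datasets import movielens
    from cornac.eval_methods import RatioSplit
    from cornac.metrics import NDCG, Recall
    from cornac.models import BPR

    # (user, item, rating) triples; substitute your own (customer, policy, interaction)
    # data when building insurance recommendations.
    data = movielens.load_feedback(variant="100K")

    eval_method = RatioSplit(data=data, test_size=0.2, seed=42)
    model = BPR(k=10, max_iter=100, learning_rate=0.01, lambda_reg=0.01, seed=42)

    cornac.Experiment(
        eval_method=eval_method,
        models=[model],
        metrics=[Recall(k=10), NDCG(k=10)],
    ).run()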

Local Language Models: A User Perspective

Many users are exploring Local Language Models (LLMs) not because they outperform ChatGPT/GPT-4, but to learn about the technology, understand its workings, and personalize its capabilities and features. Users have been able to run several models, learn about tokenizers and embeddings, and experiment with vector databases. They value the freedom and control over the information they seek, without ideological or ethical restrictions imposed by Big Tech. … click here to read


