Getting Started

What is UniLLM?

UniLLM lets you call any supported LLM through the OpenAI API interface, with 100% type safety.

Benefits

  • ✨ Integrate with any provider and model using the OpenAI API
  • πŸ’¬ Consistent chatCompletion responses and logs across all models and providers
  • πŸ’― Type safety across all providers and models
  • πŸ” Seamlessly switch between LLMs without rewriting your codebase
  • βœ… If you write tests for your service, you only need to test it once
  • πŸ”œ (Coming Soon) Request caching and rate limiting
  • πŸ”œ (Coming Soon) Cost monitoring and alerting
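The "switch without rewriting" benefit comes down to the model being a plain string argument rather than provider-specific client code. A minimal self-contained sketch of that idea (the `ChatFn` type and `makeAsk` helper are hypothetical illustrations, not part of UniLLM's API):

```typescript
// A chat-completion call reduced to its essentials: a model string plus
// OpenAI-style messages. The shapes here are simplified for illustration.
type ChatFn = (
  model: string,
  opts: { messages: { role: string; content: string }[] }
) => Promise<string>;

// Bind a model string once; the call sites never mention a provider.
function makeAsk(createChatCompletion: ChatFn, model: string) {
  return (question: string) =>
    createChatCompletion(model, {
      messages: [{ role: "user", content: question }],
    });
}

// Switching providers is then a one-string change, e.g. reading the model
// identifier from configuration instead of hard-coding it:
// const ask = makeAsk(unillm.createChatCompletion, process.env.LLM_MODEL!);
```

Because the rest of the codebase only ever sees `ask(question)`, swapping the underlying LLM touches a single configuration value.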

Getting Started

Install UniLLM

npm i unillm

Make an API call

import { UniLLM } from 'unillm';
 
/*
  #SETUP#
*/
 
// Setup UniLLM
const unillm = new UniLLM();
 
// Use any LLM provider and model
const response = await unillm.createChatCompletion("#MODEL#", {
  stream: true,
  temperature: 0,
  messages: [
    {
      role: "user",
      content: "How are you?"
    }
  ],
});
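Since the example above sets `stream: true`, the response is consumed incrementally rather than as one finished object. A self-contained sketch of that consumption loop, assuming OpenAI-style delta chunks (the `Chunk` type and `mockStream` below are illustrative stand-ins, not UniLLM's actual types; check the UniLLM docs for the real response shape):

```typescript
// Simplified OpenAI-style streaming chunk shape (an assumption for
// illustration; real chunks carry more fields).
type Chunk = { choices: { delta: { content?: string } }[] };

// Mock stream standing in for the result of a real
// createChatCompletion(..., { stream: true }) call.
async function* mockStream(): AsyncGenerator<Chunk> {
  for (const piece of ["I'm ", "doing ", "well!"]) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

// Accumulate the streamed deltas into the full reply text.
async function collect(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  return text;
}
```

In a real handler you would typically write each delta to the client as it arrives (for example with `process.stdout.write`) instead of collecting the whole string first.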

Supported Providers & Models

Below is a list of models that are currently supported by UniLLM. If you would like to see a model added to the roadmap, please open an issue on GitHub.

| Provider/LLM               | Chat Completions | Streaming      |
| -------------------------- | ---------------- | -------------- |
| OpenAI gpt-3.5-turbo       | ✅               | ✅             |
| OpenAI gpt-4               | ✅               | ✅             |
| Anthropic claude-2         | ✅               | ✅             |
| Anthropic claude-instant-1 | ✅               | ✅             |
| Azure OpenAI (all models)  | ✅               | ✅             |
| Llama 2                    | 🚧 Coming Soon   | 🚧 Coming Soon |
| Falcon                     | 🚧 Coming Soon   | 🚧 Coming Soon |
| Mistral                    | 🚧 Coming Soon   | 🚧 Coming Soon |
| AWS Bedrock                | 🚧 Coming Soon   | 🚧 Coming Soon |
| AI21                       | 🚧 Coming Soon   | 🚧 Coming Soon |
| Huggingface                | 🚧 Coming Soon   | 🚧 Coming Soon |