# What is UniLLM?

UniLLM lets you call any LLM through the OpenAI API, with 100% type safety.
## Benefits

- ✨ Integrate with any provider and model using the OpenAI API
- 💬 Consistent chat-completion responses and logs across all models and providers
- 🎯 Type safety across all providers and models
- 🔁 Seamlessly switch between LLMs without rewriting your codebase
- ✅ If you write tests for your service, you only need to write them once
- 🚀 (Coming Soon) Request caching and rate limiting
- 📊 (Coming Soon) Cost monitoring and alerting
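The "switch without rewriting" point can be sketched as follows. This is an illustration only: the model identifiers and request type below are assumptions, not UniLLM's confirmed API; the idea is that the request body stays identical while only the model string changes.

```typescript
// Illustrative sketch: types and model identifiers are assumptions,
// not UniLLM's actual definitions.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };
type ChatRequest = {
  temperature?: number;
  stream?: boolean;
  messages: ChatMessage[];
};

// Build one request shape that works for any provider.
function buildRequest(prompt: string): ChatRequest {
  return { temperature: 0, messages: [{ role: "user", content: prompt }] };
}

// Switching providers is then a one-string change (hypothetical identifiers):
const models = ["openai/gpt-3.5-turbo", "anthropic/claude-2"];
const request = buildRequest("How are you?");
```

Because every provider accepts the same request shape, swapping models means changing one argument rather than rewriting call sites.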
## Getting Started

### Install UniLLM

```bash
npm i unillm
```
### Make an API call

```ts
import { UniLLM } from "unillm";

/*
#SETUP#
*/

// Set up UniLLM
const unillm = new UniLLM();

// Use any LLM provider and model
const response = await unillm.createChatCompletion("#MODEL#", {
  stream: true,
  temperature: 0,
  messages: [
    {
      role: "user",
      content: "How are you?",
    },
  ],
});
```
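Since the call above sets `stream: true`, the response arrives incrementally rather than as one message. If UniLLM mirrors the OpenAI SDK's streaming shape (an assumption; the chunk type below is illustrative, not UniLLM's documented type), the stream can be consumed like this:

```typescript
// Assumed chunk shape, modeled on the OpenAI streaming API; the actual
// UniLLM chunk type may differ.
type StreamChunk = { choices: { delta: { content?: string } }[] };

// Concatenate the incremental deltas into the full completion text.
async function collectStream(
  stream: AsyncIterable<StreamChunk>
): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  return text;
}
```

With streaming disabled, the full message would instead be read from the response object in one piece, as in the OpenAI API.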
## Supported Providers & Models

Below is a list of the models UniLLM currently supports. If you would like to see a model added to the roadmap, please open an issue on GitHub.
| Provider/LLM | Chat Completions | Streaming |
|---|---|---|
| OpenAI gpt-3.5-turbo | ✅ | ✅ |
| OpenAI gpt-4 | ✅ | ✅ |
| Anthropic claude-2 | ✅ | ✅ |
| Anthropic claude-instant-1 | ✅ | ✅ |
| Azure OpenAI (all models) | ✅ | ✅ |
| Llama 2 | 🚧 Coming Soon | 🚧 Coming Soon |
| Falcon | 🚧 Coming Soon | 🚧 Coming Soon |
| Mistral | 🚧 Coming Soon | 🚧 Coming Soon |
| AWS Bedrock | 🚧 Coming Soon | 🚧 Coming Soon |
| AI21 | 🚧 Coming Soon | 🚧 Coming Soon |
| Hugging Face | 🚧 Coming Soon | 🚧 Coming Soon |