## Usage

```ts
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.tributary.cc/openai/v1",
  apiKey: "<TRIBUTARY_API_KEY>",
});

const completion = await client.chat.completions.create({
  model: "meta:llama-3.3-70b-instruct",
  messages: [
    { role: "user", content: "Hello!" },
  ],
});

console.log(completion.choices[0].message.content);
```

## Providers
| Provider | Model | Context | Max output | Input $/M | Output $/M |
|---|---|---|---|---|---|
| DeepInfra | meta-llama/Llama-3.3-70B-Instruct-Turbo | 131,072 | 131,072 | $0.10 | $0.32 |
| Nebius | meta-llama/Llama-3.3-70B-Instruct | 131,072 | 8,192 | $0.13 | $0.40 |
| Novita | meta-llama/llama-3.3-70b-instruct | 131,072 | 120,000 | $0.14 | $0.40 |
| Parasail | meta-llama/Llama-3.3-70B-Instruct | 131,072 | 131,072 | $0.22 | $0.50 |
| Nebius | meta-llama/Llama-3.3-70B-Instruct-fast | 131,072 | 8,192 | $0.25 | $0.75 |
| Together | meta-llama/Llama-3.3-70B-Instruct-Turbo | 131,072 | 131,072 | $0.88 | $0.88 |
Context/output in tokens. Prices in USD per 1M tokens.
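Per-request cost under these rates is a straightforward per-token calculation. A minimal sketch, using the DeepInfra row's prices from the table above; the token counts in the example call are hypothetical, not measured values:

```ts
// Prices from the DeepInfra row: USD per 1M input/output tokens.
const INPUT_PER_M = 0.10;
const OUTPUT_PER_M = 0.32;

// Estimate the USD cost of one request from its token usage.
function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1_000_000) * INPUT_PER_M +
         (outputTokens / 1_000_000) * OUTPUT_PER_M;
}

// e.g. a 2,000-token prompt with a 500-token reply:
console.log(estimateCostUSD(2000, 500).toFixed(6)); // "0.000360"
```

In practice the actual token counts come back in `completion.usage` (`prompt_tokens` and `completion_tokens`) on each response, so the same arithmetic can be applied to real traffic.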