Llama3.1 70b
Llama3.1 70b is available via Cerebras with a 128K context window and up to 128,000 output tokens. Pricing: $0.60 per 1M input tokens, $0.60 per 1M output tokens.
Llama3.1 70b Pricing & Specifications
What is Llama3.1 70b?
Llama3.1 70b is a large language model available via Cerebras with a 128K context window and up to 128,000 output tokens. It costs $0.60 per 1M input tokens and $0.60 per 1M output tokens.
Capabilities
Text generation, function calling
Llama3.1 70b Cost Examples
Short prompt (500 tokens): $0.000300
Medium prompt (2K tokens): $0.00120
Long output (4K tokens): $0.00240
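The cost examples above all follow from the per-token rate. A minimal sketch of the arithmetic, using the $0.60-per-1M pricing listed on this page (the function name and token counts are illustrative):

```python
# Estimate Llama3.1 70b request cost from token counts.
# Rates come from the pricing listed above ($0.60 per 1M tokens each way).

INPUT_RATE = 0.60 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.60 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Return the estimated cost of one request, in dollars."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

print(f"{estimate_cost(500):.6f}")      # short prompt example from the table
print(f"{estimate_cost(2_000):.5f}")    # medium prompt example
print(f"{estimate_cost(0, 4_000):.5f}")  # long output example
```

Because input and output are priced identically here, only the total token count matters; for models with asymmetric pricing the two rates would differ.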
Frequently Asked Questions
How much does Llama3.1 70b cost per token?
Llama3.1 70b costs $0.60 per 1M input tokens and $0.60 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.000900.
What is the context window for Llama3.1 70b?
Llama3.1 70b supports a context window of 128,000 tokens (128K). This determines the maximum combined length of your prompt and conversation history in a single API call.
What is the maximum output length for Llama3.1 70b?
Llama3.1 70b can generate up to 128,000 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
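The multi-call approach mentioned above can be sketched as a simple loop. This is an assumption-laden illustration: `call_model` is a hypothetical placeholder for your actual API client, and the continuation-prompt strategy is one common pattern, not a documented Cerebras feature:

```python
# Sketch: stitching a long output together across multiple API calls.
# `call_model` is a hypothetical stub standing in for a real client call;
# replace it with your provider's SDK. The continuation prompting below
# is an illustrative pattern, not an official API.

def call_model(prompt: str, max_tokens: int) -> str:
    # Stubbed for illustration only.
    return f"[{max_tokens}-token chunk for: {prompt[:20]}...]"

def generate_long(prompt: str, chunks: int = 3, max_tokens: int = 4_000) -> str:
    """Call the model repeatedly, feeding back the tail of prior output."""
    parts: list[str] = []
    context = prompt
    for _ in range(chunks):
        part = call_model(context, max_tokens)
        parts.append(part)
        # Keep only the tail of accumulated text to stay within the
        # 128K-token context window on follow-up calls.
        context = prompt + "\n\nContinue from:\n" + "".join(parts)[-2000:]
    return "".join(parts)
```

In practice you would also need a stopping condition (e.g. the model emitting an end marker) rather than a fixed chunk count.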
Is Llama3.1 70b good for coding tasks?
Yes. Llama3.1 70b handles common coding tasks such as code generation, debugging, and refactoring, and its function-calling support makes it usable in tool-augmented coding workflows.