
Meta Llama 3.1 70b Instruct

Meta Llama 3.1 70b Instruct is available via FriendliAI with an 8K context window and up to 8,192 output tokens. Pricing: $0.60/1M input tokens, $0.60/1M output tokens.

Meta Llama 3.1 70b Instruct Pricing & Specifications

Input Price: $0.60 per 1M tokens
Output Price: $0.60 per 1M tokens
Context Window: 8,192 tokens (8K)
Max Output: 8,192 tokens
Provider: FriendliAI

What is Meta Llama 3.1 70b Instruct?

Meta Llama 3.1 70b Instruct is a large language model from Meta, served via FriendliAI with an 8K context window and up to 8,192 output tokens. It costs $0.60 per 1M input tokens and $0.60 per 1M output tokens.

Capabilities

Text, function calling, JSON mode
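Of these, JSON mode is typically used through an OpenAI-style chat-completions request. A minimal sketch of the request body follows; the endpoint URL and model identifier shown are assumptions for illustration, so verify both against FriendliAI's documentation before use:

```python
import json

# Both values below are assumptions for illustration; check FriendliAI's
# documentation for the exact serverless endpoint and model identifier.
FRIENDLI_CHAT_URL = "https://api.friendli.ai/serverless/v1/chat/completions"
MODEL_ID = "meta-llama-3.1-70b-instruct"

def build_json_mode_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat-completions payload with JSON mode enabled."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        # JSON mode constrains the model to emit syntactically valid JSON.
        "response_format": {"type": "json_object"},
        "max_tokens": max_tokens,
    }

payload = build_json_mode_request("List three HTTP methods as a JSON array.")
print(json.dumps(payload, indent=2))
```

The same payload shape, minus `response_format`, works for plain text requests; function calling adds a `tools` array in the same OpenAI-compatible format.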

Meta Llama 3.1 70b Instruct Cost Examples

Short prompt (500 input tokens): $0.000300

Medium prompt (2K input tokens): $0.00120

Long output (4K output tokens): $0.00240
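These figures follow directly from the per-token rates. A small helper, assuming the $0.60/1M rates listed above, reproduces them:

```python
PRICE_PER_M_INPUT = 0.60   # USD per 1M input tokens
PRICE_PER_M_OUTPUT = 0.60  # USD per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int = 0) -> float:
    """Estimate the API cost in USD for one request at this model's rates."""
    return (input_tokens * PRICE_PER_M_INPUT
            + output_tokens * PRICE_PER_M_OUTPUT) / 1_000_000

print(f"{cost_usd(500):.6f}")       # short prompt    -> 0.000300
print(f"{cost_usd(2_000):.5f}")     # medium prompt   -> 0.00120
print(f"{cost_usd(0, 4_000):.5f}")  # long output     -> 0.00240
```

Because input and output tokens are priced identically here, a 1,000-token prompt with a 500-token response costs `cost_usd(1_000, 500)`, i.e. $0.000900, matching the FAQ below.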

Count tokens for Meta Llama 3.1 70b Instruct

Paste your prompt to see exact token counts and API cost estimates.

Open Token Counter

Similar Models to Meta Llama 3.1 70b Instruct

FriendliAI: Meta Llama 3.1 8b Instruct ($0.10/1M input, 8K context)

Azure OpenAI: GPT Audio Mini 2025-10-06 ($0.60/1M input, 128K context)

Azure OpenAI: GPT-4o Mini Realtime Preview 2024-12-17 ($0.60/1M input, 128K context)

Azure OpenAI: GPT Realtime Mini 2025-10-06 ($0.60/1M input, 32K context)

Frequently Asked Questions

How much does Meta Llama 3.1 70b Instruct cost per token?
Meta Llama 3.1 70b Instruct costs $0.60 per 1M input tokens and $0.60 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.000900.
What is the context window for Meta Llama 3.1 70b Instruct?
Meta Llama 3.1 70b Instruct supports a context window of 8,192 tokens (8K). This determines the maximum combined length of your prompt and conversation history in a single API call.
What is the maximum output length for Meta Llama 3.1 70b Instruct?
Meta Llama 3.1 70b Instruct can generate up to 8,192 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
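The multi-call approach described above can be sketched as a continuation loop. `call_api` here is a hypothetical stand-in for one real chat-completions call; in the OpenAI-compatible format, a `finish_reason` of `"length"` signals that the output cap truncated the response:

```python
def generate_long(prompt: str, call_api, max_calls: int = 4) -> str:
    """Concatenate successive completions until the model stops on its own.

    `call_api(messages)` is a hypothetical callable returning
    (text, finish_reason) for one chat completion; finish_reason is
    "length" when the 8,192-token output cap truncated the response.
    """
    messages = [{"role": "user", "content": prompt}]
    parts = []
    for _ in range(max_calls):
        text, finish_reason = call_api(messages)
        parts.append(text)
        if finish_reason != "length":
            break  # model finished naturally
        # Feed the partial answer back and ask the model to continue.
        messages.append({"role": "assistant", "content": text})
        messages.append({"role": "user", "content": "Continue exactly where you left off."})
    return "".join(parts)

# Demo with a fake API that truncates twice before finishing.
script = iter([("part one, ", "length"), ("part two, ", "length"), ("done.", "stop")])
print(generate_long("write something long", lambda messages: next(script)))
# -> part one, part two, done.
```

Note that stitched outputs can repeat or drop tokens at the seams, so it helps to deduplicate overlap between consecutive parts.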
Is Meta Llama 3.1 70b Instruct good for coding tasks?
Yes. Meta Llama 3.1 70b Instruct handles coding tasks such as code generation, debugging, and refactoring, and its function calling and JSON mode support structured developer workflows.