
Llama 3.1 70b Instruct

Llama 3.1 70b Instruct is available via Perplexity with a 131K context window and up to 131,072 output tokens. Pricing: $1.00/1M input tokens, $1.00/1M output tokens.

Llama 3.1 70b Instruct Pricing & Specifications

Input Price: $1.00 per 1M tokens
Output Price: $1.00 per 1M tokens
Context Window: 131,072 tokens (131K)
Max Output: 131,072 tokens
Provider: Perplexity

What is Llama 3.1 70b Instruct?

Llama 3.1 70b Instruct is a 70-billion-parameter instruction-tuned large language model from Meta's Llama 3.1 family, served via Perplexity. It supports a 131K (131,072-token) context window, can generate up to 131,072 output tokens per response, and costs $1.00 per 1M input tokens and $1.00 per 1M output tokens.

Capabilities

text

Llama 3.1 70b Instruct Cost Examples

Short prompt (500 tokens): $0.000500
Medium prompt (2K tokens): $0.00200
Long output (4K tokens): $0.00400
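The example costs above follow directly from the per-token rates. A minimal sketch of the arithmetic in Python, with the rates copied from the pricing table:

```python
# Per-token rates for Llama 3.1 70b Instruct on Perplexity,
# taken from the pricing table above (US dollars per token).
INPUT_RATE = 1.00 / 1_000_000   # $1.00 per 1M input tokens
OUTPUT_RATE = 1.00 / 1_000_000  # $1.00 per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Estimated cost in US dollars for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

print(f"{estimate_cost(500):.6f}")       # short prompt -> 0.000500
print(f"{estimate_cost(2_000):.5f}")     # medium prompt -> 0.00200
print(f"{estimate_cost(0, 4_000):.5f}")  # 4K-token output -> 0.00400
```

Because input and output are billed at the same rate here, only the total token count matters; for models with asymmetric pricing the two terms diverge.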

Count tokens for Llama 3.1 70b Instruct

Paste your prompt to see exact token counts and API cost estimates.

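For a quick offline approximation, a characters-per-token heuristic can stand in for an exact tokenizer count. This is a rough sketch only: the ~4-characters-per-token figure is a common rule of thumb for English text, and actual counts depend on the Llama 3.1 tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text.

    This is an approximation; use the model's own tokenizer for exact
    counts and billing-accurate estimates.
    """
    return max(1, round(len(text) / 4))

prompt = "Explain the difference between a context window and max output."
print(estimate_tokens(prompt))  # roughly 16 tokens
```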

Similar Models to Llama 3.1 70b Instruct

Sonar (Perplexity): $1.00/1M input, 128K context
Sonar Reasoning (Perplexity): $1.00/1M input, 128K context
Codellama 70b Instruct (Perplexity): $0.70/1M input, 16K context
Llama 2 70b Chat (Perplexity): $0.70/1M input, 4K context

Frequently Asked Questions

How much does Llama 3.1 70b Instruct cost per token?
Llama 3.1 70b Instruct costs $1.00 per 1M input tokens and $1.00 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.001500.

What is the context window for Llama 3.1 70b Instruct?
Llama 3.1 70b Instruct supports a context window of 131,072 tokens (131K). This determines the maximum combined length of your prompt and conversation history in a single API call.

What is the maximum output length for Llama 3.1 70b Instruct?
Llama 3.1 70b Instruct can generate up to 131,072 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.

Is Llama 3.1 70b Instruct good for coding tasks?
Llama 3.1 70b Instruct can handle basic coding tasks, but there are models specifically optimized for code generation that may perform better on complex programming problems.
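The multiple-calls approach mentioned in the FAQ can be sketched generically. Here `generate` is a placeholder callable, not a real client: in practice it would wrap your API client of choice (for example, a call to Perplexity's OpenAI-compatible chat-completions endpoint). The callable signature and the "length" finish reason are assumptions for illustration; check the provider's current documentation.

```python
from typing import Callable, Tuple

def generate_long(prompt: str,
                  generate: Callable[[str], Tuple[str, str]],
                  max_calls: int = 4) -> str:
    """Chain completions until the model stops on its own.

    `generate` is a hypothetical callable that takes the full prompt so
    far and returns (text, finish_reason). A finish_reason of "length"
    is assumed to mean the per-response output cap was hit, so we feed
    the accumulated text back in and ask for a continuation.
    """
    parts = []
    for _ in range(max_calls):
        text, finish_reason = generate(prompt + "".join(parts))
        parts.append(text)
        if finish_reason != "length":  # model finished naturally
            break
    return "".join(parts)
```

Note that each continuation call re-sends the accumulated text as input, so chained generation pays input-token costs on every call; with a 131K output cap, chaining is rarely needed for this model in practice.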