
Meta Llama 3.1 8B Instruct

Meta Llama 3.1 8B Instruct is available via SambaNova with a 16,384-token (16K) context window and up to 16,384 output tokens. Pricing: $0.10 per 1M input tokens, $0.20 per 1M output tokens.

Meta Llama 3.1 8B Instruct Pricing & Specifications

Input Price: $0.10 per 1M tokens
Output Price: $0.20 per 1M tokens
Context Window: 16,384 tokens (16K)
Max Output: 16,384 tokens
Provider: SambaNova

What is Meta Llama 3.1 8B Instruct?

Meta Llama 3.1 8B Instruct is a large language model from Meta, served via SambaNova with a 16K context window and up to 16,384 output tokens. It costs $0.10 per 1M input tokens and $0.20 per 1M output tokens.

Capabilities

Text generation, function calling, and JSON mode.
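JSON mode constrains the model's reply to valid JSON. A minimal sketch of building such a request body, assuming SambaNova exposes an OpenAI-compatible chat completions schema and accepts the model ID "Meta-Llama-3.1-8B-Instruct" (both are assumptions; check the provider's API docs for the exact model ID and endpoint):

```python
import json

def build_json_mode_request(prompt: str, max_tokens: int = 1024) -> str:
    """Serialize a chat completions request that asks for JSON-only output."""
    body = {
        # Hypothetical model ID; confirm against the provider's model list.
        "model": "Meta-Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": prompt}],
        # OpenAI-style switch that constrains output to a valid JSON object.
        "response_format": {"type": "json_object"},
        "max_tokens": max_tokens,
    }
    return json.dumps(body)

request = build_json_mode_request("List three colors as a JSON object.")
```

The serialized body would then be POSTed to the provider's chat completions endpoint with an API key; no network call is made here.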

Meta Llama 3.1 8B Instruct Cost Examples

Short prompt (500 input tokens): $0.00005

Medium prompt (2,000 input tokens): $0.0002

Long output (4,000 output tokens): $0.0008
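The example costs above follow directly from the listed rates. A minimal sketch of the arithmetic (the rates are from this page; the helper name is illustrative):

```python
INPUT_PRICE_PER_M = 0.10   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.20  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Return the USD cost of one request at the listed SambaNova rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

short = request_cost(500)         # short prompt, input only
medium = request_cost(2_000)      # medium prompt, input only
long_out = request_cost(0, 4_000) # 4K generated tokens
```

The same function reproduces the FAQ figure below: 1,000 input tokens plus 500 output tokens come to $0.0002.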


Similar Models to Meta Llama 3.1 8B Instruct

Meta Llama 3.2 3B Instruct (SambaNova): $0.08/1M input tokens, 4K context

Meta Llama 3.2 1B Instruct (SambaNova): $0.04/1M input tokens, 16K context

Meta Llama Guard 3 8B (SambaNova): $0.30/1M input tokens, 16K context

Llama 4 Scout 17B 16E Instruct (SambaNova): $0.40/1M input tokens, 8K context

Frequently Asked Questions

How much does Meta Llama 3.1 8B Instruct cost per token?
Meta Llama 3.1 8B Instruct costs $0.10 per 1M input tokens and $0.20 per 1M output tokens. For a typical request with 1,000 input tokens and a 500-token response, that works out to roughly $0.0002.
What is the context window for Meta Llama 3.1 8B Instruct?
Meta Llama 3.1 8B Instruct supports a context window of 16,384 tokens (16K). This determines the maximum combined length of your prompt and conversation history in a single API call.
What is the maximum output length for Meta Llama 3.1 8B Instruct?
Meta Llama 3.1 8B Instruct can generate up to 16,384 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
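In practice, the context window means trimming older conversation turns before each call so the prompt fits. A minimal sketch, assuming a rough 4-characters-per-token estimate for English text (exact counts require the Llama tokenizer, which this heuristic does not replace):

```python
CONTEXT_WINDOW = 16_384  # tokens, per the specs above

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # This is an assumption; real counts need the model's tokenizer.
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget: int = CONTEXT_WINDOW) -> list[str]:
    """Keep the most recent messages whose estimated tokens fit the budget."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Remember to leave headroom in the budget for the system prompt and the tokens the model will generate.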
Is Meta Llama 3.1 8B Instruct good for coding tasks?
Meta Llama 3.1 8B Instruct supports text generation, function calling, and JSON mode, capabilities well suited to coding tasks such as code generation, debugging, and refactoring.