
Meta Llama/Llama 4 Maverick 17b 128e Instruct

Meta Llama/Llama 4 Maverick 17b 128e Instruct is available via Groq with a 131K context window and up to 8,192 output tokens. Pricing: $0.20 per 1M input tokens, $0.60 per 1M output tokens.

Meta Llama/Llama 4 Maverick 17b 128e Instruct Pricing & Specifications

Input Price: $0.20 per 1M tokens
Output Price: $0.60 per 1M tokens
Context Window: 131,072 tokens (131K)
Max Output: 8,192 tokens
Provider: Groq

What is Meta Llama/Llama 4 Maverick 17b 128e Instruct?

Meta Llama/Llama 4 Maverick 17b 128e Instruct is a large language model from Meta, served by Groq with a 131K context window and up to 8,192 output tokens. It costs $0.20 per 1M input tokens and $0.60 per 1M output tokens.

Capabilities

Text, vision, function calling, JSON mode
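
Because the model supports JSON mode and function calling, structured-output requests can go through Groq's OpenAI-compatible endpoint. A minimal sketch in Python, assuming the model id string below matches Groq's published model list (verify before use):

    # Sketch: JSON-mode request via Groq's OpenAI-compatible API.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.groq.com/openai/v1",
        api_key="YOUR_GROQ_API_KEY",  # placeholder
    )

    response = client.chat.completions.create(
        model="meta-llama/llama-4-maverick-17b-128e-instruct",  # assumed id
        response_format={"type": "json_object"},  # JSON mode
        messages=[
            {"role": "system", "content": "Reply with a single JSON object."},
            {"role": "user", "content": "List three primary colors as {\"colors\": [...]}."},
        ],
    )
    print(response.choices[0].message.content)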

Meta Llama/Llama 4 Maverick 17b 128e Instruct Cost Examples

Short prompt (500 input tokens): $0.0001

Medium prompt (2,000 input tokens): $0.0004

Long output (4,000 output tokens): $0.0024
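
These figures follow directly from the per-token rates above. A quick sketch of the arithmetic:

    # Estimate one request's cost from token counts at the listed rates.
    INPUT_PRICE_PER_1M = 0.20   # USD per 1M input tokens
    OUTPUT_PRICE_PER_1M = 0.60  # USD per 1M output tokens

    def request_cost(input_tokens: int, output_tokens: int = 0) -> float:
        """Estimated cost in USD for a single API call."""
        return (input_tokens * INPUT_PRICE_PER_1M
                + output_tokens * OUTPUT_PRICE_PER_1M) / 1_000_000

    print(request_cost(500))       # short prompt  -> 0.0001
    print(request_cost(2_000))     # medium prompt -> 0.0004
    print(request_cost(0, 4_000))  # long output   -> 0.0024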

Count tokens for Meta Llama/Llama 4 Maverick 17b 128e Instruct

Paste your prompt into the token counter tool to see exact token counts and API cost estimates.
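
If you only need a rough offline estimate, a common heuristic for English text is about 4 characters per token. That ratio is an approximation, not a property of this model's actual tokenizer:

    # Rough token and cost estimate (assumes ~4 chars/token for English).
    def estimate_tokens(text: str) -> int:
        return max(1, len(text) // 4)

    prompt = "Summarize the following report in three bullet points: ..."
    tokens = estimate_tokens(prompt)
    print(f"~{tokens} tokens, ~${tokens * 0.20 / 1_000_000:.6f} input cost")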

Similar Models to Meta Llama/Llama 4 Maverick 17b 128e Instruct

Meta Llama/Llama Guard 4 12b (Groq): $0.20/1M input, 8K context

Openai/Gpt Oss 120b (Groq): $0.15/1M input, 131K context

Qwen/Qwen3 32b (Groq): $0.29/1M input, 131K context

Meta Llama/Llama 4 Scout 17b 16e Instruct (Groq): $0.11/1M input, 131K context

Frequently Asked Questions

How much does Meta Llama/Llama 4 Maverick 17b 128e Instruct cost per token?
Meta Llama/Llama 4 Maverick 17b 128e Instruct costs $0.20 per 1M input tokens and $0.60 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.000500.
What is the context window for Meta Llama/Llama 4 Maverick 17b 128e Instruct?
Meta Llama/Llama 4 Maverick 17b 128e Instruct supports a context window of 131,072 tokens (131K). This determines the maximum combined length of your prompt and conversation history in a single API call.
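To stay within that limit in a long-running chat, one common approach is to drop the oldest turns until the remaining history plus a reserved output budget fits. A sketch, assuming per-message token counts come from a tokenizer:

    # Sketch: trim oldest turns to fit the 131,072-token context window.
    CONTEXT_WINDOW = 131_072
    MAX_OUTPUT = 8_192

    def trim_history(history: list[tuple[str, int]], reserve: int = MAX_OUTPUT):
        """history: (message, token_count) pairs, oldest first."""
        budget = CONTEXT_WINDOW - reserve
        total = sum(n for _, n in history)
        while history and total > budget:
            _, dropped_tokens = history.pop(0)  # drop the oldest turn
            total -= dropped_tokens
        return history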
What is the maximum output length for Meta Llama/Llama 4 Maverick 17b 128e Instruct?
Meta Llama/Llama 4 Maverick 17b 128e Instruct can generate up to 8,192 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
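A sketch of that multi-call pattern over Groq's OpenAI-compatible API (the model id string is an assumption; finish_reason == "length" signals the 8,192-token cap was hit):

    # Sketch: continue generation across calls when the output cap is hit.
    from openai import OpenAI

    client = OpenAI(base_url="https://api.groq.com/openai/v1",
                    api_key="YOUR_GROQ_API_KEY")  # placeholder

    def generate_long(prompt: str, max_rounds: int = 4) -> str:
        messages = [{"role": "user", "content": prompt}]
        parts = []
        for _ in range(max_rounds):
            resp = client.chat.completions.create(
                model="meta-llama/llama-4-maverick-17b-128e-instruct",  # assumed id
                messages=messages,
            )
            choice = resp.choices[0]
            parts.append(choice.message.content)
            if choice.finish_reason != "length":
                break  # model finished on its own
            # Feed the partial answer back and ask the model to continue.
            messages.append({"role": "assistant", "content": choice.message.content})
            messages.append({"role": "user", "content": "Continue."})
        return "".join(parts)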
Is Meta Llama/Llama 4 Maverick 17b 128e Instruct good for coding tasks?
Yes. Meta Llama/Llama 4 Maverick 17b 128e Instruct supports capabilities well-suited to coding tasks, including code generation, debugging, and refactoring.