
Meta.Llama4 Maverick 17b Instruct

Meta.Llama4 Maverick 17b Instruct is available via AWS Bedrock with a 128K context window and up to 4,096 output tokens. Pricing: $0.24/1M input tokens, $0.97/1M output tokens.

Meta.Llama4 Maverick 17b Instruct Pricing & Specifications

Input Price: $0.24 per 1M tokens
Output Price: $0.97 per 1M tokens
Context Window: 128,000 tokens (128K)
Max Output: 4,096 tokens
Provider: AWS Bedrock

What is Meta.Llama4 Maverick 17b Instruct?

Meta.Llama4 Maverick 17b Instruct is a large language model from Meta, served via AWS Bedrock, with a 128K context window and up to 4,096 output tokens. It costs $0.24 per 1M input tokens and $0.97 per 1M output tokens.

Capabilities

text, function calling

Meta.Llama4 Maverick 17b Instruct Cost Examples

Short prompt (500 input tokens): $0.00012

Medium prompt (2K input tokens): $0.00048

Long output (4K output tokens): $0.00388
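The figures above follow directly from the per-token rates in the pricing table. A minimal sketch of the arithmetic (rates hard-coded from this page; the function name is illustrative, not part of any AWS SDK):

```python
# Estimate Meta.Llama4 Maverick 17b Instruct request cost on AWS Bedrock.
# Rates are taken from the pricing table above, in USD per 1M tokens.
INPUT_RATE = 0.24
OUTPUT_RATE = 0.97

def estimate_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

print(estimate_cost(500))       # short prompt:  0.00012
print(estimate_cost(2_000))     # medium prompt: 0.00048
print(estimate_cost(0, 4_000))  # long output:   0.00388
```

The same formula reproduces the FAQ figure below: 1,000 input tokens plus 500 output tokens comes to roughly $0.000725.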


Similar Models to Meta.Llama4 Maverick 17b Instruct

Us.Meta.Llama4 Maverick 17b Instruct (AWS Bedrock): $0.24/1M input, 128K context

Google.Gemma 3 27b It (AWS Bedrock): $0.23/1M input, 128K context

Anthropic.Claude 3 Haiku 20240307 (AWS Bedrock): $0.25/1M input, 200K context

Apac.Anthropic.Claude 3 Haiku 20240307 (AWS Bedrock): $0.25/1M input, 200K context

Frequently Asked Questions

How much does Meta.Llama4 Maverick 17b Instruct cost per token?
Meta.Llama4 Maverick 17b Instruct costs $0.24 per 1M input tokens and $0.97 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.000725.

What is the context window for Meta.Llama4 Maverick 17b Instruct?
Meta.Llama4 Maverick 17b Instruct supports a context window of 128,000 tokens (128K). This determines the maximum combined length of your prompt and conversation history in a single API call.

What is the maximum output length for Meta.Llama4 Maverick 17b Instruct?
Meta.Llama4 Maverick 17b Instruct can generate up to 4,096 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.

Is Meta.Llama4 Maverick 17b Instruct good for coding tasks?
Meta.Llama4 Maverick 17b Instruct supports text generation and function calling, which cover common coding workflows such as code generation, debugging, and refactoring.
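The multiple-calls-and-concatenate approach mentioned above can be sketched as a simple loop. This is a hedged illustration only: `generate` is a hypothetical stand-in for a real model call (it is not an AWS Bedrock API) that returns the generated text and whether the response was cut off at the output limit.

```python
def generate_long(prompt: str, generate, max_rounds: int = 4) -> str:
    """Stitch several length-capped responses into one longer output.

    `generate(prompt)` is a placeholder callable, NOT an AWS Bedrock
    API: it must return (text, truncated), where `truncated` is True
    when the model stopped at its output-token limit.
    """
    pieces = []
    current = prompt
    for _ in range(max_rounds):
        text, truncated = generate(current)
        pieces.append(text)
        if not truncated:  # the model finished on its own
            break
        # Feed everything generated so far back in and ask to continue.
        current = prompt + "".join(pieces) + "\nContinue."
    return "".join(pieces)
```

Each continuation call consumes the accumulated output as input tokens, so long chained generations grow more expensive per round.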