Meta Llama/Llama 4 Maverick 17B 128E Instruct FP8
Meta Llama/Llama 4 Maverick 17B 128E Instruct FP8 is available via DeepInfra with a 1.0M context window and up to 1,048,576 output tokens. Pricing: $0.1500/1M input tokens, $0.6000/1M output tokens.
Meta Llama/Llama 4 Maverick 17B 128E Instruct FP8 Pricing & Specifications
What is Meta Llama/Llama 4 Maverick 17B 128E Instruct FP8?
Meta Llama/Llama 4 Maverick 17B 128E Instruct FP8 is a large language model by Meta, served via DeepInfra with a 1.0M-token context window and up to 1,048,576 output tokens. It costs $0.15 per 1M input tokens and $0.60 per 1M output tokens.
Meta Llama/Llama 4 Maverick 17B 128E Instruct FP8 Cost Examples
Short prompt (500 input tokens): $0.000075
Medium prompt (2K input tokens): $0.00030
Long output (4K output tokens): $0.00240
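The examples above follow directly from the listed rates. As a minimal sketch, the per-request cost can be computed by multiplying token counts by the per-token price (the rate constants below are taken from the pricing on this page; the helper name is illustrative):

```python
# Per-token rates derived from the listed pricing:
# $0.15 per 1M input tokens, $0.60 per 1M output tokens.
INPUT_RATE = 0.15 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.60 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Estimated USD cost for one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

print(f"{estimate_cost(500):.6f}")       # short prompt  -> 0.000075
print(f"{estimate_cost(2_000):.5f}")     # medium prompt -> 0.00030
print(f"{estimate_cost(0, 4_000):.5f}")  # long output   -> 0.00240
```

The same formula scales to any workload; for example, a request with 10K input and 1K output tokens would cost 10,000 × $0.00000015 + 1,000 × $0.0000006 = $0.0021.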
Count tokens for Meta Llama/Llama 4 Maverick 17B 128E Instruct FP8
Paste your prompt to see exact token counts and API cost estimates.
Open Token Counter