Meta Llama 3.2 1B Instruct
Meta Llama 3.2 1B Instruct is available via SambaNova with a 16K context window and up to 16,384 output tokens. Pricing: $0.04/1M input tokens, $0.08/1M output tokens.
Meta Llama 3.2 1B Instruct Pricing & Specifications
What is Meta Llama 3.2 1B Instruct?
Meta Llama 3.2 1B Instruct is a lightweight large language model from Meta, served via SambaNova with a 16K context window and up to 16,384 output tokens. It costs $0.04 per 1M input tokens and $0.08 per 1M output tokens.
Capabilities
text
Meta Llama 3.2 1B Instruct Cost Examples
Short prompt (500 input tokens): $0.00002
Medium prompt (2K input tokens): $0.00008
Long output (4K output tokens): $0.00032
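The cost examples above follow directly from the per-token rates. Here is a minimal sketch of the arithmetic in Python; the rates come from this page, while the token counts are placeholders you would replace with real tokenizer output:

```python
# Estimate Meta Llama 3.2 1B Instruct request costs from token counts.
# Rates are from this page; token counts are illustrative assumptions.

INPUT_RATE = 0.04 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.08 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Return the estimated request cost in dollars."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

print(f"{estimate_cost(500):.6f}")       # short prompt  -> 0.000020
print(f"{estimate_cost(2_000):.5f}")     # medium prompt -> 0.00008
print(f"{estimate_cost(0, 4_000):.5f}")  # long output   -> 0.00032
```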
Count tokens for Meta Llama 3.2 1B Instruct
Paste your prompt to see exact token counts and API cost estimates.
Frequently Asked Questions
How much does Meta Llama 3.2 1B Instruct cost per token?
Meta Llama 3.2 1B Instruct costs $0.040 per 1M input tokens and $0.080 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.000080.
What is the context window for Meta Llama 3.2 1B Instruct?
Meta Llama 3.2 1B Instruct supports a context window of 16,384 tokens (16K). This determines the maximum combined length of your prompt and conversation history in a single API call.
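A simple pre-flight check can catch requests that would exceed the window. This sketch assumes the 16,384-token window is shared between the prompt and the generated tokens (a common arrangement); the token counts are illustrative, and in practice you would obtain them from the model's tokenizer:

```python
# Check whether a request fits in the 16,384-token context window,
# assuming the window covers prompt tokens plus requested output tokens.
# Token counts below are illustrative, not from a real tokenizer.

CONTEXT_WINDOW = 16_384

def fits_context(prompt_tokens: int, max_output_tokens: int) -> bool:
    """Return True if prompt + requested output fit in the window."""
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_context(12_000, 4_000))  # True  (16,000 <= 16,384)
print(fits_context(14_000, 4_000))  # False (18,000 > 16,384)
```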
What is the maximum output length for Meta Llama 3.2 1B Instruct?
Meta Llama 3.2 1B Instruct can generate up to 16,384 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
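The multi-call approach can be sketched as a simple loop. Everything here is a hedged illustration: `complete` is a hypothetical stand-in for your provider's chat-completion call, and the "continue" prompt is only one possible way to stitch chunks together:

```python
# Sketch of producing longer outputs via multiple calls and concatenation.
# `complete` is a hypothetical placeholder for a real chat-completion API;
# the continuation prompt below is illustrative, not a documented pattern.

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    return "<model output>"

def generate_long(prompt: str, max_rounds: int = 3) -> str:
    """Call the model repeatedly and concatenate the chunks."""
    parts: list[str] = []
    for _ in range(max_rounds):
        chunk = complete(prompt + "".join(parts))
        parts.append(chunk)
        # In a real loop you would also ask the model to continue from
        # where it stopped, and break once it signals completion.
    return "".join(parts)
```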
Is Meta Llama 3.2 1B Instruct good for coding tasks?
Meta Llama 3.2 1B Instruct can handle basic coding tasks, but there are models specifically optimized for code generation that may perform better on complex programming problems.