Mistral.Mistral Small 2402
Mistral.Mistral Small 2402 is available via AWS Bedrock with a 32K context window and up to 8,191 output tokens. Pricing: $1.00/1M input tokens, $3.00/1M output tokens.
Mistral.Mistral Small 2402 Pricing & Specifications
What is Mistral.Mistral Small 2402?
Mistral.Mistral Small 2402 is a large language model from Mistral AI, available via AWS Bedrock with a 32K context window and up to 8,191 output tokens. It costs $1.00 per 1M input tokens and $3.00 per 1M output tokens.
Capabilities
text, function calling
Mistral.Mistral Small 2402 Cost Examples
Short prompt (500 tokens)
$0.000500
Medium prompt (2K tokens)
$0.00200
Long output (4K tokens)
$0.01200
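The examples above follow directly from the listed per-token rates. A minimal cost estimator, assuming the rates stated on this page ($1.00/1M input, $3.00/1M output):

```python
# Cost estimator for Mistral.Mistral Small 2402 on AWS Bedrock,
# using the listed rates: $1.00 per 1M input tokens, $3.00 per 1M output tokens.
INPUT_RATE_PER_M = 1.00
OUTPUT_RATE_PER_M = 3.00

def estimate_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Return the estimated request cost in USD."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Reproduce the table's examples:
print(f"{estimate_cost(500):.6f}")       # short 500-token prompt
print(f"{estimate_cost(2_000):.5f}")     # medium 2K-token prompt
print(f"{estimate_cost(0, 4_000):.5f}")  # long 4K-token output
```

The same function covers mixed requests, e.g. `estimate_cost(1000, 500)` gives $0.0025 for a 1,000-token prompt with a 500-token response.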
Count tokens for Mistral.Mistral Small 2402
Paste your prompt to see exact token counts and API cost estimates.
Frequently Asked Questions
How much does Mistral.Mistral Small 2402 cost per token?
Mistral.Mistral Small 2402 costs $1.00 per 1M input tokens and $3.00 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.002500.
What is the context window for Mistral.Mistral Small 2402?
Mistral.Mistral Small 2402 supports a context window of 32,000 tokens (32K). This determines the maximum combined length of your prompt and conversation history in a single API call.
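In practice this means long conversations must be trimmed before each call. A sketch of one common approach, dropping the oldest turns first; the token count here is a rough chars/4 heuristic (an assumption for illustration), not the actual Mistral tokenizer:

```python
# Sketch: keep a conversation within the 32K-token context window while
# reserving room for the model's output. Token counts use a rough chars/4
# heuristic -- a real client should count with the actual Mistral tokenizer.
CONTEXT_WINDOW = 32_000
MAX_OUTPUT = 8_191

def rough_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], reserve_output: int = MAX_OUTPUT) -> list[dict]:
    """Drop the oldest messages until the prompt plus reserved output fits."""
    budget = CONTEXT_WINDOW - reserve_output
    kept: list[dict] = []
    used = 0
    # Walk newest-first so the most recent turns survive.
    for msg in reversed(messages):
        cost = rough_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```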
What is the maximum output length for Mistral.Mistral Small 2402?
Mistral.Mistral Small 2402 can generate up to 8,191 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
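The multi-call approach can be sketched as a loop that keeps requesting a continuation while the model reports it stopped at the output cap. Here `generate` is a hypothetical stand-in for your Bedrock invocation, assumed to return the generated text and a stop reason:

```python
# Sketch: work around the 8,191-token output cap by issuing follow-up
# requests and concatenating the pieces. `generate` is a placeholder for
# the actual Bedrock call; it returns (text, stop_reason).
def generate_long(prompt: str, generate, max_calls: int = 4) -> str:
    parts = []
    text, stop_reason = generate(prompt)
    parts.append(text)
    # A "max_tokens" stop reason means the output cap was hit, so ask the
    # model to pick up where it left off.
    for _ in range(max_calls - 1):
        if stop_reason != "max_tokens":
            break
        continuation_prompt = (prompt + "".join(parts)
                               + "\nContinue exactly where you left off.")
        text, stop_reason = generate(continuation_prompt)
        parts.append(text)
    return "".join(parts)
```

Note that each follow-up call re-sends the accumulated text, so the running prompt must itself stay inside the 32K context window.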
Is Mistral.Mistral Small 2402 good for coding tasks?
Yes. With text generation and function-calling support, Mistral.Mistral Small 2402 handles common coding tasks such as code generation, debugging, and refactoring.