Ollama

Mistral Large Instruct 2407

Mistral Large Instruct 2407 is available via Ollama with a 65,536-token (66K) context window and up to 8,192 output tokens. Pricing: $0.00 per 1M input tokens and $0.00 per 1M output tokens (Ollama runs models locally, so there is no per-token charge).

Mistral Large Instruct 2407 Pricing & Specifications

Input Price: $0.000 per 1M tokens
Output Price: $0.000 per 1M tokens
Context Window: 65,536 tokens (66K)
Max Output: 8,192 tokens
Provider: Ollama

What is Mistral Large Instruct 2407?

Mistral Large Instruct 2407 is a large language model from Mistral AI, served through Ollama with a 65,536-token (66K) context window and up to 8,192 output tokens. Run locally via Ollama, it costs $0.00 per 1M input tokens and $0.00 per 1M output tokens.

Capabilities

text, function calling

Mistral Large Instruct 2407 Cost Examples

Short prompt (500 tokens): $0.000000
Medium prompt (2K tokens): $0.000000
Long output (4K tokens): $0.000000
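The example costs above follow from the standard per-token formula: tokens divided by one million, multiplied by the per-million rate. A minimal sketch (the rates here are this page's $0.00 Ollama prices, so every result is zero; swap in real per-million rates for a hosted API):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in dollars for one request: tokens / 1M * price per 1M tokens."""
    return ((input_tokens / 1_000_000) * input_price_per_m
            + (output_tokens / 1_000_000) * output_price_per_m)

# Ollama runs locally, so both rates are $0.00 per 1M tokens.
print(request_cost(500, 0, 0.0, 0.0))      # short prompt -> 0.0
print(request_cost(2_000, 4_000, 0.0, 0.0))  # medium prompt, long output -> 0.0
```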


Similar Models to Mistral Large Instruct 2407

Codegeex4 (Ollama): $0.000/1M input, 33K context
Deepseek Coder V2 Instruct (Ollama): $0.000/1M input, 33K context
Deepseek Coder V2 Lite Instruct (Ollama): $0.000/1M input, 33K context
Deepseek V3.1:671b Cloud (Ollama): $0.000/1M input, 164K context

Frequently Asked Questions

How much does Mistral Large Instruct 2407 cost per token?
Mistral Large Instruct 2407 costs $0.000 per 1M input tokens and $0.000 per 1M output tokens when run locally via Ollama. For a typical 1,000-token request with a 500-token response, that works out to $0.00.
What is the context window for Mistral Large Instruct 2407?
Mistral Large Instruct 2407 supports a context window of 65,536 tokens (66K). This determines the maximum combined length of your prompt and conversation history in a single API call.
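A quick way to check whether a prompt will fit is a rough character-based estimate (roughly 4 characters per token for English text; for exact counts use the model's own tokenizer). A minimal sketch, with the window and output limits taken from this page:

```python
CONTEXT_WINDOW = 65_536  # Mistral Large Instruct 2407 context window
MAX_OUTPUT = 8_192       # maximum tokens per response

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def fits_context(prompt: str, reserved_output: int = MAX_OUTPUT) -> bool:
    """The prompt plus room reserved for the completion must fit in the window."""
    return estimate_tokens(prompt) + reserved_output <= CONTEXT_WINDOW

print(fits_context("Summarize this paragraph."))  # True
```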
What is the maximum output length for Mistral Large Instruct 2407?
Mistral Large Instruct 2407 can generate up to 8,192 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
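The multi-call pattern can be sketched as a loop that feeds each chunk of output back in as context and concatenates the pieces. Here `generate` is a placeholder for whatever client you use (for example, a request to a local Ollama server); it returns a fixed string so the loop is runnable as-is:

```python
def generate(prompt: str, max_tokens: int = 8_192) -> str:
    """Placeholder for a real model call (e.g., a request to Ollama).
    Echoes a fixed chunk so the loop below can run without a server."""
    return "chunk "

def generate_long(prompt: str, rounds: int = 3) -> str:
    """Work around the 8,192-token output cap: call the model repeatedly,
    appending previous output to the prompt, and concatenate the pieces."""
    output = ""
    for _ in range(rounds):
        piece = generate(prompt + output)
        if not piece:
            break  # an empty response signals the model is done
        output += piece
    return output

print(generate_long("Write a long report."))  # "chunk chunk chunk "
```

In practice you would also trim the accumulated context once it approaches the 65,536-token window, keeping only the most recent portion.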
Is Mistral Large Instruct 2407 good for coding tasks?
Yes. Its text generation and function-calling capabilities make it well suited for coding tasks, including code generation, debugging, and refactoring.