
Meta Llama 3.3 70B Instruct FP8 Dynamic

Meta Llama 3.3 70B Instruct FP8 Dynamic is available via OCI (Oracle Cloud Infrastructure) with a 128K context window and up to 4,000 output tokens. Pricing: $0.72 per 1M input tokens and $0.72 per 1M output tokens.

Meta Llama 3.3 70B Instruct FP8 Dynamic Pricing & Specifications

Input Price: $0.72 per 1M tokens
Output Price: $0.72 per 1M tokens
Context Window: 128,000 tokens (128K)
Max Output: 4,000 tokens
Provider: OCI (Oracle Cloud Infrastructure)

What is Meta Llama 3.3 70B Instruct FP8 Dynamic?

Meta Llama 3.3 70B Instruct FP8 Dynamic is a large language model from Meta, served on OCI in an FP8 dynamically quantized variant, with a 128K context window and up to 4,000 output tokens. It costs $0.72 per 1M input tokens and $0.72 per 1M output tokens.

Capabilities

Text, function calling

Meta Llama 3.3 70B Instruct FP8 Dynamic Cost Examples

Short prompt (500 tokens): $0.000360
Medium prompt (2K tokens): $0.00144
Long output (4K tokens): $0.00288
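Because the input and output rates are the same flat $0.72 per 1M tokens, every figure on this page reduces to one multiplication. A minimal sketch of the arithmetic (the rate and token counts come from this page; the helper function name is illustrative):

```python
# Flat OCI rate for this model, from the pricing table above.
PRICE_PER_1M_TOKENS = 0.72  # USD, same rate for input and output

def request_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Return the USD cost of one request at a flat per-token rate."""
    total_tokens = input_tokens + output_tokens
    return total_tokens * PRICE_PER_1M_TOKENS / 1_000_000

# Reproduce the cost examples above:
print(f"{request_cost(500):.6f}")       # short prompt  -> 0.000360
print(f"{request_cost(2_000):.5f}")     # medium prompt -> 0.00144
print(f"{request_cost(0, 4_000):.5f}")  # long output   -> 0.00288
```

The same function reproduces the FAQ estimate: 1,000 input tokens plus a 500-token response costs `request_cost(1_000, 500)`, about $0.00108.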


Similar Models to Meta Llama 3.3 70B Instruct FP8 Dynamic

Meta Llama 3.3 70B Instruct (OCI): $0.72/1M input, 128K context
Meta Llama 4 Maverick 17B 128E Instruct FP8 (OCI): $0.72/1M input, 512K context
Meta Llama 4 Scout 17B 16E Instruct (OCI): $0.72/1M input, 192K context
Meta Llama 3.1 70B Instruct (OCI): $0.72/1M input, 128K context

Frequently Asked Questions

How much does Meta Llama 3.3 70B Instruct FP8 Dynamic cost per token?
Meta Llama 3.3 70B Instruct FP8 Dynamic costs $0.72 per 1M input tokens and $0.72 per 1M output tokens. For a typical request with 1,000 input tokens and a 500-token response, that works out to roughly $0.00108.
What is the context window for Meta Llama 3.3 70B Instruct FP8 Dynamic?
Meta Llama 3.3 70B Instruct FP8 Dynamic supports a context window of 128,000 tokens (128K). This sets the maximum combined length of your prompt, conversation history, and generated output in a single API call.
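Since the 128K window bounds the whole request, a prompt only fits if it leaves room for the output you plan to request. A minimal budget check, assuming (as is typical) that the window covers prompt plus generation; the function name is illustrative:

```python
CONTEXT_WINDOW = 128_000  # tokens, prompt + output combined (from this page)
MAX_OUTPUT = 4_000        # hard cap on generated tokens (from this page)

def fits_in_context(prompt_tokens: int, max_output_tokens: int = MAX_OUTPUT) -> bool:
    """Check whether a prompt leaves room for the requested output."""
    if max_output_tokens > MAX_OUTPUT:
        return False  # the model cannot generate more than MAX_OUTPUT anyway
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context(120_000))  # True:  120,000 + 4,000 <= 128,000
print(fits_in_context(125_000))  # False: 125,000 + 4,000 >  128,000
```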
What is the maximum output length for Meta Llama 3.3 70B Instruct FP8 Dynamic?
Meta Llama 3.3 70B Instruct FP8 Dynamic can generate up to 4,000 tokens in a single response. If you need longer outputs, make multiple API calls, feeding each response back in as context, and concatenate the results.
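The multi-call approach described above can be sketched as a loop that appends each capped response to the context and asks the model to continue until it stops early. Everything here is illustrative: `generate` stands in for whatever model client you use and is not an OCI API.

```python
from typing import Callable

def generate_long(prompt: str,
                  generate: Callable[[str], str],
                  max_calls: int = 5) -> str:
    """Concatenate several length-capped responses into one long output.

    `generate` is a placeholder for your model client; each call is
    assumed to return at most the model's 4,000-token output cap.
    """
    parts = []
    context = prompt
    for _ in range(max_calls):
        chunk = generate(context)
        if not chunk:
            break  # model finished early, nothing more to continue
        parts.append(chunk)
        # Feed the output so far back in so the next call continues it.
        context = prompt + "".join(parts)
    return "".join(parts)

# Usage with a toy stand-in for the real client:
fake = iter(["first half, ", "second half.", ""])
result = generate_long("Write an essay:", lambda ctx: next(fake))
print(result)  # first half, second half.
```

In practice you would also want a stop condition based on the response's finish reason (length cap hit versus natural stop), which this sketch approximates by treating an empty chunk as completion.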
Is Meta Llama 3.3 70B Instruct FP8 Dynamic good for coding tasks?
Yes. Meta Llama 3.3 70B Instruct FP8 Dynamic supports text generation and function calling, capabilities well suited to coding tasks such as code generation, debugging, and refactoring.