AI API Cost Calculator

Compare inference costs across GPT-5, Claude 4.5, Gemini 3, and more.
Now with agentic loops, caching, and reasoning tokens

Last Updated: February 6, 2026

Configure Your Workload

Input tokens: 50,000
Output tokens: 2,000

Agentic Loop Configuration

In 2026, most AI applications use multi-step "agentic loops" rather than single prompts. Calculate costs for workflows like: "5 research steps + 1 summary step = 6 total steps."

Steps: 1
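The step arithmetic above is plain multiplication, but it compounds quickly. A minimal sketch in Python (the function name is ours; the per-step token counts are illustrative, and the prices are GPT-5 Mini's rates from the breakdown table on this page):

```python
# Estimate the cost of a multi-step agentic loop.
# Assumption: every step sends roughly the same token counts.

def agentic_loop_cost(steps, input_tokens, output_tokens,
                      input_price_per_m, output_price_per_m):
    """Total dollars for `steps` sequential model calls."""
    per_step = (input_tokens * input_price_per_m
                + output_tokens * output_price_per_m) / 1_000_000
    return steps * per_step

# "5 research steps + 1 summary step = 6 total steps" on GPT-5 Mini
# ($0.25 input / $2.00 output per 1M tokens), 50K in / 2K out per step:
cost = agentic_loop_cost(steps=6, input_tokens=50_000, output_tokens=2_000,
                         input_price_per_m=0.25, output_price_per_m=2.00)
print(f"${cost:.4f}")  # $0.0990 (6 x the single-step $0.0165)
```

In practice later steps often carry a growing conversation history, so real loop costs tend to exceed this flat-rate estimate.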

Context Caching (Save up to 90%)

Modern APIs (Anthropic, Gemini, OpenAI) offer context caching that reduces input costs by up to 90% for repeated data. Set your cache hit rate to see potential savings.

Cache hit rate: 0%
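One way to reason about the savings: cached input tokens bill at the discounted cached rate and the remainder at the normal input rate. A minimal sketch using GPT-5's quoted rates ($1.75 input, $0.44 cached per 1M tokens); the function name and the simple hit-rate model are our simplification of how providers actually meter cache reads:

```python
# Effective input cost under context caching: cached tokens bill at
# the cached rate, the rest at the normal input rate.

def cached_input_cost(input_tokens, hit_rate,
                      input_price_per_m, cached_price_per_m):
    cached = input_tokens * hit_rate
    fresh = input_tokens - cached
    return (cached * cached_price_per_m + fresh * input_price_per_m) / 1_000_000

# 50,000 input tokens on GPT-5 ($1.75 input, $0.44 cached per 1M):
no_cache = cached_input_cost(50_000, 0.0, 1.75, 0.44)
hot_cache = cached_input_cost(50_000, 0.9, 1.75, 0.44)
print(f"0% hits: ${no_cache:.4f}  90% hits: ${hot_cache:.4f}")
```

With GPT-5's rates the per-token discount on a cache hit is about 75%; the headline "up to 90%" figure depends on the provider's cached rate.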

Reasoning Tokens

Reasoning models (such as OpenAI's o-series) "think" before responding. These hidden thinking tokens are billed in addition to the visible answer and are often missed by older calculators. Enable reasoning to see the true cost.
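To see why this matters for the bill: OpenAI bills reasoning tokens at the output rate, on top of the tokens in the visible answer. A sketch, where the 10,000 reasoning-token figure is purely illustrative and the $40.00-per-1M output rate is o3's price from the table on this page:

```python
# Reasoning ("thinking") tokens are billed at the output rate even
# though they never appear in the visible answer.

def reasoning_output_cost(visible_tokens, reasoning_tokens, output_price_per_m):
    return (visible_tokens + reasoning_tokens) * output_price_per_m / 1_000_000

# 2,000 visible tokens plus a purely illustrative 10,000 reasoning
# tokens at o3's $40.00-per-1M output rate:
print(f"${reasoning_output_cost(2_000, 10_000, 40.00):.4f}")  # $0.4800
```

In this illustration the hidden tokens multiply the output bill six-fold, which is why calculators that ignore them understate reasoning-model costs.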

Cost Comparison

GPT-5
OpenAI
Total Cost: $0.1155
  Input tokens: $0.0875
  Output tokens: $0.0280

GPT-5 Mini (Lowest)
OpenAI
Total Cost: $0.0165
  Input tokens: $0.0125
  Output tokens: $0.0040

o3 (Reasoning)
OpenAI
Total Cost: $0.5800
  Input tokens: $0.5000
  Output tokens: $0.0800

Context Caching Savings

Enable context caching to save up to 90% on repeated inputs

Comparison Heat Map

Visual comparison showing which model ranks best for each use case: Low Latency (fast response times, minimal processing), High Intelligence (complex reasoning, large context, advanced capabilities), Cost Optimized (lowest total cost with caching), and High Volume (bulk processing, high throughput).

Model            Provider   Low Latency   High Intelligence   Cost Optimized   High Volume
GPT-5            OpenAI     3rd           2nd                 2nd              2nd
GPT-5 Mini       OpenAI     1st (Best)    3rd                 1st (Best)       1st (Best)
o3 (Reasoning)   OpenAI     2nd           1st (Best)          3rd              3rd

Legend: Best for this scenario / Good option / Average / Not recommended

Detailed Breakdown

Model            Provider   Input Price   Output Price   Context   Total Cost   vs. Lowest
GPT-5 Mini       OpenAI     $0.25/1M      $2.00/1M       128K      $0.0165      Lowest
GPT-5            OpenAI     $1.75/1M      $14.00/1M      256K      $0.1155      +$0.0990 (+600%)
o3 (Reasoning)   OpenAI     $10.00/1M     $40.00/1M      200K      $0.5800      +$0.5635 (+3415%)
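The Total Cost column can be reproduced directly from the listed per-1M prices. A sketch assuming the workload the card figures imply (50,000 input and 2,000 output tokens, single step, no caching):

```python
# Reproduce the Total Cost column from the listed per-1M prices.
# Assumed workload: 50,000 input + 2,000 output tokens, no caching.

PRICES = {  # model: (input $/1M, output $/1M)
    "GPT-5 Mini":     (0.25, 2.00),
    "GPT-5":          (1.75, 14.00),
    "o3 (Reasoning)": (10.00, 40.00),
}

def total_cost(model, input_tokens=50_000, output_tokens=2_000):
    input_price, output_price = PRICES[model]
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

for model in PRICES:
    print(f"{model}: ${total_cost(model):.4f}")
# GPT-5 Mini: $0.0165
# GPT-5: $0.1155
# o3 (Reasoning): $0.5800
```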

Developer Code Snippet

Copy-paste code to integrate GPT-5 Mini into your application

import openai

client = openai.OpenAI(api_key="your-api-key")

response = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[
        {"role": "user", "content": "Your prompt here"}
    ],
    # newer OpenAI models use max_completion_tokens instead of max_tokens
    max_completion_tokens=2000
)

print(response.choices[0].message.content)

💡 Replace "your-api-key" with your actual API key. Adjust tokens and parameters as needed.

GPT-5 vs GPT-5 Mini: Complete Cost Comparison 2026

Compare pricing, features, and total cost of ownership between GPT-5 (OpenAI) and GPT-5 Mini (OpenAI). Find out which AI model offers the best value for your use case.

Pricing Overview

GPT-5

  • Input: $1.75 per 1M tokens
  • Output: $14.00 per 1M tokens
  • Cached: $0.44 per 1M tokens
  • Context: 256,000 tokens

GPT-5 Mini

  • Input: $0.25 per 1M tokens
  • Output: $2.00 per 1M tokens
  • Cached: $0.06 per 1M tokens
  • Context: 128,000 tokens

Key Differences

GPT-5 Mini is more affordable

86% cheaper input pricing

GPT-5 has larger context

2.0x the context capacity (256K vs. 128K tokens)

Which Model Should You Choose?

Choose GPT-5 if:

  • You require larger context windows (256K tokens) for long documents
  • Your tasks need stronger reasoning than GPT-5 Mini provides

Choose GPT-5 Mini if:

  • You're optimizing for lower costs (86% cheaper input pricing)
  • You're running high-volume or latency-sensitive workloads

Frequently Asked Questions

Which is cheaper: GPT-5 or GPT-5 Mini?

GPT-5 Mini offers lower input pricing at $0.25 per 1M tokens compared to $1.75 for GPT-5. However, total costs depend on your usage patterns, output requirements, and whether you can leverage context caching. Use the calculator above to estimate costs for your specific use case.

What's the difference in context window size?

GPT-5 supports up to 256,000 tokens, while GPT-5 Mini supports 128,000 tokens. GPT-5 can handle longer documents and conversations without truncation, which is important for applications requiring extensive context.

Can I use both models together?

Yes, many developers use multiple AI models for different tasks. You might use GPT-5 Mini for high-volume, cost-sensitive operations and GPT-5 for tasks requiring specific capabilities. The calculator above helps you compare costs across both models.

How do I reduce API costs?

Both models support context caching, which can significantly reduce costs for repeated inputs. GPT-5 offers cached input at $0.44 per 1M tokens; GPT-5 Mini offers cached input at $0.06 per 1M tokens. Additionally, optimize your prompts, use streaming for faster responses, and consider agentic workflows to minimize token usage.
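For the two models quoted above, the per-token discount on cached input follows directly from the listed rates (a sketch; the dictionary layout is ours):

```python
# Percent saved per input token when it is served from cache,
# for the two models whose cached rates are quoted above.

RATES = {  # model: (input $/1M, cached input $/1M)
    "GPT-5":      (1.75, 0.44),
    "GPT-5 Mini": (0.25, 0.06),
}

for model, (input_price, cached_price) in RATES.items():
    saving = (1 - cached_price / input_price) * 100
    print(f"{model}: {saving:.0f}% cheaper on cached input tokens")
# GPT-5: 75% cheaper on cached input tokens
# GPT-5 Mini: 76% cheaper on cached input tokens
```

Note these per-token discounts are 75% to 76%; the "up to 90%" headline earlier on this page reflects providers with deeper cache discounts.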