AI API Cost Calculator

Compare inference costs across GPT-5, Claude 4.5, Gemini 3, and more.
Now with agentic loops, caching, and reasoning tokens

Last Updated: February 6, 2026

Configure Your Workload


Agentic Loop Configuration

In 2026, most AI applications use multi-step "agentic loops" rather than single prompts. Calculate costs for workflows like: "5 research steps + 1 summary step = 6 total steps."

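The loop math above can be sketched in a few lines. A minimal sketch, assuming each step re-sends the full prompt and returns a fresh response; the token counts and per-million prices are illustrative inputs, not live API rates:

```python
def agentic_loop_cost(steps, input_tokens, output_tokens,
                      input_price_per_m, output_price_per_m):
    """Total cost when every step sends `input_tokens` and returns `output_tokens`."""
    per_step = (input_tokens * input_price_per_m +
                output_tokens * output_price_per_m) / 1_000_000
    return steps * per_step

# 6 steps (5 research + 1 summary), 50K input / 2K output per step,
# at the Gemini 3 Pro rates listed below ($1.25 in / $10.00 out per 1M)
cost = agentic_loop_cost(steps=6, input_tokens=50_000, output_tokens=2_000,
                         input_price_per_m=1.25, output_price_per_m=10.00)
print(f"${cost:.4f}")
```

In practice agent context often grows with each step, so treat this flat-per-step estimate as a lower bound.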

Context Caching (Save up to 90%)

Modern APIs (Anthropic, Gemini, OpenAI) offer context caching that reduces input costs by up to 90% for repeated data. Set your cache hit rate to see potential savings.

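The blended input price works out as a weighted average of the fresh and cached rates. A minimal sketch, using the cached-token price listed for Gemini 3 Pro below as an example input:

```python
def cached_input_cost(tokens, hit_rate, price_per_m, cached_price_per_m):
    """Cost of `tokens` input tokens when `hit_rate` (0..1) of them hit the cache."""
    cached = tokens * hit_rate
    fresh = tokens - cached
    return (fresh * price_per_m + cached * cached_price_per_m) / 1_000_000

# 50K input tokens at $1.25/1M fresh, $0.32/1M cached
full = cached_input_cost(50_000, 0.0, 1.25, 0.32)
blended = cached_input_cost(50_000, 0.9, 1.25, 0.32)  # 90% cache hit rate
print(f"${blended:.5f} with caching vs ${full:.4f} without")
```

At a 90% hit rate the blended input cost here drops by roughly two thirds; the exact savings depend on the provider's cached-token discount.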

Reasoning Tokens

Reasoning models (such as OpenAI's o-series) "think" before responding. These thinking tokens are typically billed at the output-token rate even though they never appear in the response, and older calculators often miss them. Enable reasoning to see the true cost.
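A minimal sketch of how reasoning tokens inflate the output bill, assuming they are billed at the output rate (common practice, but check your provider's pricing page); the token counts are illustrative:

```python
def output_cost_with_reasoning(visible_tokens, reasoning_tokens, output_price_per_m):
    """Output-side cost when hidden reasoning tokens bill at the output rate."""
    return (visible_tokens + reasoning_tokens) * output_price_per_m / 1_000_000

# 2K visible tokens plus 8K hidden reasoning tokens at $14.00/1M
without = output_cost_with_reasoning(2_000, 0, 14.00)
with_reasoning = output_cost_with_reasoning(2_000, 8_000, 14.00)
print(f"${without:.4f} without reasoning vs ${with_reasoning:.4f} with reasoning")
```

Here the visible response is unchanged, but the bill is 5x higher, which is exactly the gap a calculator that ignores reasoning tokens would miss.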

Cost Comparison

GPT-5 (OpenAI)
Total Cost: $0.1155
Input tokens: $0.0875
Output tokens: $0.0280

Gemini 3 Pro (Google)
Total Cost: $0.0825 (Lowest)
Input tokens: $0.0625
Output tokens: $0.0200

Context Caching Savings

Enable context caching to save up to 90% on repeated inputs

Comparison Heat Map

Visual comparison showing which model performs best for different use cases

Model                   Low Latency   High Intelligence   Cost Optimized   High Volume
GPT-5 (OpenAI)          #1 (Best)     #2                  #2               #2
Gemini 3 Pro (Google)   #2            #1 (Best)           #1 (Best)        #1 (Best)

Low Latency: fast response times, minimal processing.
High Intelligence: complex reasoning, large context, advanced capabilities.
Cost Optimized: lowest total cost with caching.
High Volume: best for bulk processing, high throughput.

Detailed Breakdown

Model          Provider   Input Price   Output Price   Context   Total Cost   vs. Lowest
Gemini 3 Pro   Google     $1.25/1M      $10.00/1M      2M        $0.0825      Lowest
GPT-5          OpenAI     $1.75/1M      $14.00/1M      256K      $0.1155      +$0.0330 (40%)
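The totals in the table are consistent with a 50K-input / 2K-output workload. A short sketch that reproduces them from the listed per-million prices:

```python
# Listed prices in $/1M tokens, taken from the breakdown table
MODELS = {
    "Gemini 3 Pro": {"input": 1.25, "output": 10.00},
    "GPT-5": {"input": 1.75, "output": 14.00},
}
INPUT_TOKENS, OUTPUT_TOKENS = 50_000, 2_000  # workload implied by the totals

def total_cost(prices):
    return (INPUT_TOKENS * prices["input"] +
            OUTPUT_TOKENS * prices["output"]) / 1_000_000

costs = {name: total_cost(p) for name, p in MODELS.items()}
lowest = min(costs.values())
for name in sorted(costs, key=costs.get):
    delta = costs[name] - lowest
    tag = "Lowest" if delta == 0 else f"+${delta:.4f} ({delta / lowest:.0%})"
    print(f"{name}: ${costs[name]:.4f}  {tag}")
```

Swapping in your own token counts and candidate models turns this into a quick offline version of the calculator above.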

Developer Code Snippet

Copy-paste code to integrate Gemini 3 Pro into your application

import google.generativeai as genai

# Authenticate with your API key
genai.configure(api_key="your-api-key")
model = genai.GenerativeModel('gemini-3-pro')

# Cap output tokens to keep per-request cost predictable
response = model.generate_content(
    "Your prompt here",
    generation_config={
        "max_output_tokens": 2000
    }
)

print(response.text)

💡 Replace "your-api-key" with your actual API key. Adjust tokens and parameters as needed.

Cheapest Video AI API 2026

Compare the most affordable video AI APIs in 2026. Find cost-effective solutions for video processing, analysis, transcription, and generation across major providers.

Most Affordable Option

Gemini 3 Pro (Google)
Input: $1.25/1M tokens
Output: $10.00/1M tokens

Complete Pricing Comparison

Model          Provider   Input      Output      Cached     Context
Gemini 3 Pro   Google     $1.25/1M   $10.00/1M   $0.32/1M   2M        Cheapest
GPT-5          OpenAI     $1.75/1M   $14.00/1M   $0.44/1M   256K

Key Insights

Price Range

Input pricing ranges from $1.25 per 1M tokens (Gemini 3 Pro) to $1.75 per 1M tokens (GPT-5). The difference represents a 40% price variation, so choosing the right model can significantly impact your costs.

Consider Total Cost of Ownership

While input pricing is important, consider output costs, context caching availability, and additional features. Some models offer better value when factoring in cached input pricing, larger context windows, or specialized capabilities that reduce overall token usage.

Usage Patterns Matter

The cheapest model for high-volume input may not be cheapest for high-output scenarios. Use the calculator above to estimate costs based on your specific input/output token ratios, agentic steps, and caching usage.
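The crossover effect described above is easy to see numerically. A sketch with hypothetical prices (chosen so the two curves actually cross; they are not the models from the tables above, where one model is cheaper on both sides):

```python
def cost(in_tok, out_tok, in_price, out_price):
    """Request cost given per-1M input and output prices."""
    return (in_tok * in_price + out_tok * out_price) / 1_000_000

model_a = (0.50, 12.00)   # hypothetical: cheap input, expensive output
model_b = (1.25, 5.00)    # hypothetical: pricier input, cheap output

for out_tok in (1_000, 5_000, 20_000):
    a = cost(50_000, out_tok, *model_a)
    b = cost(50_000, out_tok, *model_b)
    winner = "A" if a < b else "B"
    print(f"{out_tok:>6} output tokens -> A=${a:.4f}  B=${b:.4f}  cheaper: {winner}")
```

As the output share grows, the cheap-input model loses its lead, which is why the input/output ratio belongs in any cost estimate.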

Frequently Asked Questions

Which is the cheapest AI API?

Gemini 3 Pro offers the lowest input pricing at $1.25 per 1M tokens. However, the best choice depends on your specific use case, output requirements, and whether you can leverage context caching.

How do I reduce API costs?

Use context caching when available, optimize your prompts to reduce token usage, implement streaming for faster responses, and consider using cheaper models for high-volume operations while reserving premium models for complex tasks.

Should I use multiple AI APIs?

Many developers use multiple APIs for different tasks. You might use a cheaper model for high-volume operations and a more capable model for complex reasoning or specialized tasks. The calculator above helps you compare costs across different models.
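One common way to combine APIs is a simple router in front of them. A minimal sketch; the model names and the 100K-token threshold are hypothetical placeholders, not real product identifiers:

```python
def pick_model(task):
    """Route complex or very long tasks to a premium model, the rest to a budget one."""
    if task.get("needs_reasoning") or task.get("input_tokens", 0) > 100_000:
        return "premium-model"   # placeholder for a frontier reasoning model
    return "budget-model"        # placeholder for a cheap high-throughput model

print(pick_model({"needs_reasoning": True}))
print(pick_model({"input_tokens": 5_000}))
```

Real routers usually add fallbacks and per-model budgets, but even this two-branch rule captures the cheap-bulk / premium-reasoning split described above.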