Process massive datasets with Gemini 2.0 Flash, which features a 1M-token context window for long-document analysis. The model offers cost-effective pricing ($0.10 per 1M input tokens and $0.40 per 1M output tokens) and native tool calling support. Access Gemini 2.0 Flash via the LLM Gateway API with up to 8K output tokens.
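As a rough sketch of what a long-document request could look like, the snippet below builds an OpenAI-style chat completion payload and sends it to the gateway. The base URL, model id, and `LLM_GATEWAY_API_KEY` environment variable are assumptions for illustration; check the LLM Gateway docs for the actual values.

```python
import os
import json
from urllib import request

# Hypothetical endpoint and model id -- verify against the LLM Gateway docs.
BASE_URL = "https://api.llmgateway.io/v1/chat/completions"
MODEL = "gemini-2.0-flash"

def build_payload(document_text: str, question: str, max_tokens: int = 8192) -> dict:
    """Build an OpenAI-style chat payload; the 1M-token context window
    lets the entire document travel inline in a single user message."""
    return {
        "model": MODEL,
        "max_tokens": max_tokens,  # output is capped at 8K tokens
        "messages": [
            {"role": "user", "content": f"{question}\n\n---\n\n{document_text}"},
        ],
    }

def ask(document_text: str, question: str) -> str:
    """POST the payload and return the assistant's reply text."""
    payload = build_payload(document_text, question)
    req = request.Request(
        BASE_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['LLM_GATEWAY_API_KEY']}",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the context window is so large, a whole report or codebase dump can simply be concatenated into the user message rather than chunked.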
Gemini 2.0 Flash by LLM Gateway costs $0.10 per 1M input tokens and $0.40 per 1M output tokens. Cached reads cost $0.03 per 1M tokens.
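A small helper makes the per-request arithmetic at these rates concrete. It assumes cached input tokens are billed at the cached-read rate in place of the standard input rate; confirm the exact billing semantics against the gateway's pricing docs.

```python
def gateway_cost_usd(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate one request's cost at the listed Gemini 2.0 Flash rates:
    $0.10/1M input, $0.40/1M output, $0.03/1M cached input reads.
    Assumes cached tokens replace (rather than add to) input-rate billing."""
    billable_input = input_tokens - cached_tokens
    return (billable_input * 0.10
            + cached_tokens * 0.03
            + output_tokens * 0.40) / 1_000_000

# Example: 500K fresh input tokens + 100K output tokens
# = 0.5 * $0.10 + 0.1 * $0.40 = $0.05 + $0.04 = $0.09
print(f"${gateway_cost_usd(500_000, 100_000):.2f}")
```

Cached reads cut the input side by 70%, so repeatedly querying the same long document gets substantially cheaper after the first request.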