AI Models

235 models | Free & Paid | Updated: 16 minutes ago

GPT-5.1 Chat (a.k.a. Instant) is the fast, lightweight member of the 5.1 family, optimized for low-latency chat while retaining strong general intelligence. It uses adaptive reasoning to selectively “think” on...

Nov 2025 | 128K context | $1.25/M input | $10.00/M output
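All rates in this list are quoted per million tokens, so a request's cost is a simple weighted sum. A minimal sketch (the rates below are the GPT-5.1 Chat figures above; the token counts are hypothetical):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD for one request, given per-million-token rates."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# GPT-5.1 Chat: $1.25/M input, $10.00/M output
cost = request_cost(12_000, 1_500, 1.25, 10.00)
print(f"${cost:.4f}")  # 12,000 in + 1,500 out -> $0.0300
```

The same function works for any entry in the list; only the two rate arguments change.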

GPT-5.1-Codex is a specialized version of GPT-5.1 optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of complex engineering tasks....

Nov 2025 | 400K context | $1.25/M input | $10.00/M output

GPT-5.1-Codex-Mini is a smaller and faster version of GPT-5.1-Codex.

Nov 2025 | 400K context | $0.25/M input | $2.00/M output

Kimi K2 Thinking is Moonshot AI’s most advanced open reasoning model to date, extending the K2 series into agentic, long-horizon reasoning. Built on the trillion-parameter Mixture-of-Experts (MoE) architecture introduced in...

Nov 2025 | 262K context | $0.60/M input | $2.50/M output

Amazon Nova Premier is the most capable of Amazon’s multimodal models for complex reasoning tasks and for use as the best teacher for distilling custom models.

Oct 2025 | 1M context | $2.50/M input | $12.50/M output

Exclusively available on the OpenRouter API, Sonar Pro's new Pro Search mode is Perplexity's most advanced agentic search system. It is designed for deeper reasoning and analysis. Pricing is based...

Oct 2025 | 200K context | $3.00/M input | $15.00/M output
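Since Sonar Pro is reached through the OpenRouter API, which follows the OpenAI-compatible chat-completions schema, the request shape can be sketched as follows. The `perplexity/sonar-pro` slug and the prompt are assumptions (check the catalog entry for the exact model id), and the actual network call is left commented out so the sketch runs without a key:

```python
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble an OpenRouter chat-completions request (not sent here)."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("perplexity/sonar-pro", "Summarize today's AI news.",
                    os.environ.get("OPENROUTER_API_KEY", "sk-..."))
# urllib.request.urlopen(req) would send it; omitted so this runs offline.
print(json.loads(req.data)["model"])  # perplexity/sonar-pro
```

Swapping the model slug is all it takes to target any other entry in this list through the same endpoint.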

Voxtral Small is an enhancement of Mistral Small 3, incorporating state-of-the-art audio input capabilities while retaining best-in-class text performance. It excels at speech transcription, translation and audio understanding. Input audio...

Oct 2025 | 32K context | $0.10/M input | $0.30/M output

gpt-oss-safeguard-20b is a safety reasoning model from OpenAI built upon gpt-oss-20b. This open-weight, 21B-parameter Mixture-of-Experts (MoE) model offers lower latency for safety tasks like content classification, LLM filtering, and trust...

Oct 2025 | 131K context | $0.075/M input | $0.30/M output

NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, combining transformer-level accuracy with Mamba’s...

Oct 2025 | 128K context | Free input | Free output

NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, combining transformer-level accuracy with Mamba’s...

Oct 2025 | 131K context | $0.20/M input | $0.60/M output

MiniMax-M2 is a compact, high-efficiency large language model optimized for end-to-end coding and agentic workflows. With 10 billion activated parameters (230 billion total), it delivers near-frontier intelligence across general reasoning,...

Oct 2025 | 197K context | $0.255/M input | $1.00/M output

Qwen3-VL-32B-Instruct is a large-scale multimodal vision-language model designed for high-precision understanding and reasoning across text, images, and video. With 32 billion parameters, it combines deep visual perception with advanced text...

Oct 2025 | 131K context | $0.104/M input | $0.416/M output

Granite-4.0-H-Micro is a 3B-parameter model from the Granite 4 family, IBM's latest series of models. These models are fine-tuned for long...

Oct 2025 | 131K context | $0.017/M input | $0.11/M output

GPT-5 Image Mini combines OpenAI's advanced language capabilities, powered by [GPT-5 Mini](https://openrouter.ai/openai/gpt-5-mini), with GPT Image 1 Mini for efficient image generation. This natively multimodal model features superior instruction following, text...

Oct 2025 | 400K context | $2.50/M input | $2.00/M output

Claude Haiku 4.5 is Anthropic’s fastest and most efficient model, delivering near-frontier intelligence at a fraction of the cost and latency of larger Claude models. Matching Claude Sonnet 4’s performance...

Oct 2025 | 200K context | $1.00/M input | $5.00/M output

Qwen3-VL-8B-Thinking is the reasoning-optimized variant of the Qwen3-VL-8B multimodal model, designed for advanced visual and textual reasoning across complex scenes, documents, and temporal sequences. It integrates enhanced multimodal alignment and...

Oct 2025 | 131K context | $0.117/M input | $1.37/M output

Qwen3-VL-8B-Instruct is a multimodal vision-language model from the Qwen3-VL series, built for high-fidelity understanding and reasoning across text, images, and video. It features improved multimodal fusion with Interleaved-MRoPE for long-horizon...

Oct 2025 | 131K context | $0.08/M input | $0.50/M output

[GPT-5](https://openrouter.ai/openai/gpt-5) Image combines OpenAI's GPT-5 model with state-of-the-art image generation capabilities. It offers major improvements in reasoning, code quality, and user experience while incorporating GPT Image 1's superior instruction following,...

Oct 2025 | 400K context | $10.00/M input | $10.00/M output

o3-deep-research is OpenAI's advanced model for deep research, designed to tackle complex, multi-step research tasks. Note: This model always uses the 'web_search' tool which adds additional cost.

Oct 2025 | 200K context | $10.00/M input | $40.00/M output

o4-mini-deep-research is OpenAI's faster, more affordable deep research model—ideal for tackling complex, multi-step research tasks. Note: This model always uses the 'web_search' tool which adds additional cost.

Oct 2025 | 200K context | $2.00/M input | $8.00/M output

Llama-3.3-Nemotron-Super-49B-v1.5 is a 49B-parameter, English-centric reasoning/chat model derived from Meta’s Llama-3.3-70B-Instruct with a 128K context. It’s post-trained for agentic workflows (RAG, tool calling) via SFT across math, code, science, and...

Oct 2025 | 131K context | $0.10/M input | $0.40/M output

ERNIE-4.5-21B-A3B-Thinking is Baidu's upgraded lightweight MoE model, refined to boost reasoning depth and quality for top-tier performance in logical puzzles, math, science, coding, text generation, and expert-level academic benchmarks.

Oct 2025 | 131K context | $0.07/M input | $0.28/M output

Gemini 2.5 Flash Image, a.k.a. "Nano Banana," is now generally available. It is a state-of-the-art image-generation model with contextual understanding. It is capable of image generation,...

Oct 2025 | 33K context | $0.30/M input | $2.50/M output

Qwen3-VL-30B-A3B-Thinking is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Thinking variant enhances reasoning in STEM, math, and complex tasks. It excels...

Oct 2025 | 131K context | $0.13/M input | $1.56/M output

Qwen3-VL-30B-A3B-Instruct is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Instruct variant optimizes instruction-following for general multimodal tasks. It excels in perception...

Oct 2025 | 131K context | $0.13/M input | $0.52/M output

GPT-5 Pro is OpenAI’s most advanced model, offering major improvements in reasoning, code quality, and user experience. It is optimized for complex tasks that require step-by-step reasoning, instruction following, and...

Oct 2025 | 400K context | $15.00/M input | $120.00/M output

Compared with GLM-4.5, this generation brings several key improvements: Longer context window: The context window has been expanded from 128K to 200K tokens, enabling the model to handle more complex...

Sep 2025 | 205K context | $0.39/M input | $1.90/M output

Claude Sonnet 4.5 is Anthropic’s most advanced Sonnet model to date, optimized for real-world agents and coding workflows. It delivers state-of-the-art performance on coding benchmarks such as SWE-bench Verified, with...

Sep 2025 | 1M context | $3.00/M input | $15.00/M output

DeepSeek-V3.2-Exp is an experimental large language model released by DeepSeek as an intermediate step between V3.1 and future architectures. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism...

Sep 2025 | 164K context | $0.27/M input | $0.41/M output

Uncensored and creative writing model based on Mistral Small 3.2 24B with good recall, prompt adherence, and intelligence.

Sep 2025 | 131K context | $0.30/M input | $0.50/M output

Relace Apply 3 is a specialized code-patching LLM that merges AI-suggested edits straight into your source files. It can apply updates from GPT-4o, Claude, and others into your files at...

Sep 2025 | 256K context | $0.85/M input | $1.25/M output

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance...

Sep 2025 | 1M context | $0.10/M input | $0.40/M output

Qwen3-VL-235B-A22B Thinking is a multimodal model that unifies strong text generation with visual understanding across images and video. The Thinking model is optimized for multimodal reasoning in STEM and math....

Sep 2025 | 131K context | $0.26/M input | $2.60/M output

Qwen3-VL-235B-A22B Instruct is an open-weight multimodal model that unifies strong text generation with visual understanding across images and video. The Instruct model targets general vision-language use (VQA, document parsing, chart/table...

Sep 2025 | 262K context | $0.20/M input | $0.88/M output

Qwen3-Max is an updated release built on the Qwen3 series, offering major improvements in reasoning, instruction following, multilingual support, and long-tail knowledge coverage compared to the January 2025 version. It...

Sep 2025 | 262K context | $0.78/M input | $3.90/M output

Qwen3 Coder Plus is Alibaba's proprietary version of the Open Source Qwen3 Coder 480B A35B. It is a powerful coding agent model specializing in autonomous programming via tool calling and...

Sep 2025 | 1M context | $0.65/M input | $3.25/M output

GPT-5-Codex is a specialized version of GPT-5 optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of complex engineering tasks....

Sep 2025 | 400K context | $1.25/M input | $10.00/M output

DeepSeek-V3.1 Terminus is an update to [DeepSeek V3.1](/deepseek/deepseek-chat-v3.1) that maintains the model's original capabilities while addressing issues reported by users, including language consistency and agent capabilities, further optimizing the model's...

Sep 2025 | 164K context | $0.21/M input | $0.79/M output

Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. It comes in two flavors: non-reasoning and reasoning. Read more about the model...

Sep 2025 | 2M context | $0.20/M input | $0.50/M output

Tongyi DeepResearch is an agentic large language model developed by Tongyi Lab, with 30 billion total parameters activating only 3 billion per token. It's optimized for long-horizon, deep information-seeking tasks...

Sep 2025 | 131K context | $0.09/M input | $0.45/M output

Qwen3 Coder Flash is Alibaba's fast and cost efficient version of their proprietary Qwen3 Coder Plus. It is a powerful coding agent model specializing in autonomous programming via tool calling...

Sep 2025 | 1M context | $0.195/M input | $0.975/M output

Qwen3-Next-80B-A3B-Thinking is a reasoning-first chat model in the Qwen3-Next line that outputs structured “thinking” traces by default. It’s designed for hard multi-step problems: math proofs, code synthesis/debugging, logic, and agentic...

Sep 2025 | 131K context | $0.0975/M input | $0.78/M output

Qwen3-Next-80B-A3B-Instruct is an instruction-tuned chat model in the Qwen3-Next series optimized for fast, stable responses without “thinking” traces. It targets complex tasks across reasoning, code generation, knowledge QA, and multilingual...

Sep 2025 | 262K context | Free input | Free output

Qwen3-Next-80B-A3B-Instruct is an instruction-tuned chat model in the Qwen3-Next series optimized for fast, stable responses without “thinking” traces. It targets complex tasks across reasoning, code generation, knowledge QA, and multilingual...

Sep 2025 | 262K context | $0.09/M input | $1.10/M output

Qwen Plus 0728, based on the Qwen3 foundation model, is a 1 million context hybrid reasoning model with a balanced performance, speed, and cost combination.

Sep 2025 | 1M context | $0.26/M input | $0.78/M output

NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and...

Sep 2025 | 128K context | Free input | Free output

NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and...

Sep 2025 | 131K context | $0.04/M input | $0.16/M output

Kimi K2 0905 is the September update of [Kimi K2 0711](moonshotai/kimi-k2). It is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32...

Sep 2025 | 262K context | $0.40/M input | $2.00/M output

Qwen3-30B-A3B-Thinking-2507 is a 30B parameter Mixture-of-Experts reasoning model optimized for complex tasks requiring extended multi-step thinking. The model is designed specifically for “thinking mode,” where internal reasoning traces are separated...

Aug 2025 | 131K context | $0.08/M input | $0.40/M output