AI Models

235 models | Free & Paid | Updated: 1 hour ago

Opus 4.6 is Anthropic’s strongest model for coding and long-running professional tasks. It is built for agents that operate across entire workflows rather than single prompts, making it especially effective...

by |Feb 2026 |1M context |$5.00/M input |$25.00/M output
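The "/M" prices in each row are quoted per million tokens, so a request's cost is (token count / 1,000,000) × rate, computed separately for input and output. A quick sketch using the Opus 4.6 rates above and a hypothetical request of 10,000 input and 2,000 output tokens:

```python
# Prices are quoted per million tokens ("/M") in the rows above.
input_price_per_m = 5.00    # $/M input tokens (Opus 4.6 row)
output_price_per_m = 25.00  # $/M output tokens

# Hypothetical request: 10,000 input tokens, 2,000 output tokens.
cost = (10_000 / 1e6) * input_price_per_m + (2_000 / 1e6) * output_price_per_m
print(f"${cost:.2f}")  # → $0.10
```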

Qwen3-Coder-Next is an open-weight causal language model optimized for coding agents and local development workflows. It uses a sparse MoE design with 80B total parameters and only 3B activated per...

by |Feb 2026 |262K context |$0.1500/M input |$0.8000/M output

The simplest way to get free inference. openrouter/free is a router that selects free models at random from the models available on OpenRouter. The router smartly filters for models that...

by |Feb 2026 |200K context |Free input |Free output
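The free router above is addressed through the standard OpenRouter chat-completions endpoint like any other model id. A minimal sketch, assuming an `OPENROUTER_API_KEY` environment variable holds your key (the prompt text and variable name are illustrative; `openrouter/free` is the router id from the listing):

```python
import json

# Build a chat-completions request body targeting the free router.
payload = {
    "model": "openrouter/free",  # router id from the listing above
    "messages": [{"role": "user", "content": "Say hello in one word."}],
}
body = json.dumps(payload)
print(body)

# Sending it (requires a real key and network access; sketch only):
# import os, urllib.request
# req = urllib.request.Request(
#     "https://openrouter.ai/api/v1/chat/completions",
#     data=body.encode("utf-8"),
#     headers={
#         "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
#         "Content-Type": "application/json",
#     },
# )
# print(urllib.request.urlopen(req).read().decode("utf-8"))
```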

Step 3.5 Flash is StepFun's most capable open-source foundation model. Built on a sparse Mixture of Experts (MoE) architecture, it selectively activates only 11B of its 196B parameters per token....

by |Jan 2026 |262K context |$0.1000/M input |$0.3000/M output

Trinity-Large-Preview is a frontier-scale open-weight language model from Arcee, built as a 400B-parameter sparse Mixture-of-Experts with 13B active parameters per token using 4-of-256 expert routing. It excels in creative writing,...

by |Jan 2026 |131K context |Free input |Free output

Kimi K2.5 is Moonshot AI's native multimodal model, delivering state-of-the-art visual coding capability and a self-directed agent swarm paradigm. Built on Kimi K2 with continued pretraining over approximately 15T mixed...

by |Jan 2026 |262K context |$0.3827/M input |$1.72/M output

Solar Pro 3 is Upstage's powerful Mixture-of-Experts (MoE) language model. With 102B total parameters and 12B active parameters per forward pass, it delivers exceptional performance while maintaining computational efficiency. Optimized...

by |Jan 2026 |128K context |$0.1500/M input |$0.6000/M output

MiniMax M2-her is a dialogue-first large language model built for immersive roleplay, character-driven chat, and expressive multi-turn conversations. Designed to stay consistent in tone and personality, it supports rich message...

by |Jan 2026 |66K context |$0.3000/M input |$1.20/M output

Palmyra X5 is Writer's most advanced model, purpose-built for building and scaling AI agents across the enterprise. It delivers industry-leading speed and efficiency on context windows up to 1 million...

by |Jan 2026 |1M context |$0.6000/M input |$6.00/M output

LFM2.5-1.2B-Thinking is a lightweight reasoning-focused model optimized for agentic tasks, data extraction, and RAG—while still running comfortably on edge devices. It supports long context (up to 32K tokens) and is...

by |Jan 2026 |33K context |Free input |Free output

LFM2.5-1.2B-Instruct is a compact, high-performance instruction-tuned model built for fast on-device AI. It delivers strong chat quality in a 1.2B parameter footprint, with efficient edge inference and broad runtime support.

by |Jan 2026 |33K context |Free input |Free output

The gpt-audio model is OpenAI's first generally available audio model. The new snapshot features an upgraded decoder for more natural sounding voices and maintains better voice consistency. Audio is priced...

by |Jan 2026 |128K context |$2.50/M input |$10.00/M output

A cost-efficient version of GPT Audio. The new snapshot features an upgraded decoder for more natural sounding voices and maintains better voice consistency. Input is priced at $0.60 per million...

by |Jan 2026 |128K context |$0.6000/M input |$2.40/M output

As a 30B-class SOTA model, GLM-4.7-Flash offers a new option that balances performance and efficiency. It is further optimized for agentic coding use cases, strengthening coding capabilities, long-horizon task planning,...

by |Jan 2026 |203K context |$0.0600/M input |$0.4000/M output

GPT-5.2-Codex is an upgraded version of GPT-5.1-Codex optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of complex engineering tasks....

by |Jan 2026 |400K context |$1.75/M input |$14.00/M output

Olmo 3.1 32B Instruct is a large-scale, 32-billion-parameter instruction-tuned language model engineered for high-performance conversational AI, multi-turn dialogue, and practical instruction following. As part of the Olmo 3.1 family, this...

by |Jan 2026 |66K context |$0.2000/M input |$0.6000/M output

Seed 1.6 Flash is an ultra-fast multimodal deep thinking model by ByteDance Seed, supporting both text and visual understanding. It features a 256k context window and can generate outputs of...

by |Dec 2025 |262K context |$0.0750/M input |$0.3000/M output

Seed 1.6 is a general-purpose model released by the ByteDance Seed team. It incorporates multimodal capabilities and adaptive deep thinking with a 256K context window.

by |Dec 2025 |262K context |$0.2500/M input |$2.00/M output

MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers a major jump in real-world...

by |Dec 2025 |197K context |$0.2900/M input |$0.9500/M output

GLM-4.7 is Z.ai’s latest flagship model, featuring upgrades in two key areas: enhanced programming capabilities and more stable multi-step reasoning/execution. It demonstrates significant improvements in executing complex agent tasks while...

by |Dec 2025 |203K context |$0.3900/M input |$1.75/M output

Gemini 3 Flash Preview is a high-speed, high-value thinking model designed for agentic workflows, multi-turn chat, and coding assistance. It delivers near Pro-level reasoning and tool...

by |Dec 2025 |1M context |$0.5000/M input |$3.00/M output

Mistral Small Creative is an experimental small model designed for creative writing, narrative generation, roleplay and character-driven dialogue, general-purpose instruction following, and conversational agents.

by |Dec 2025 |33K context |$0.1000/M input |$0.3000/M output

MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, adopting hybrid attention architecture. MiMo-V2-Flash supports a...

by |Dec 2025 |262K context |$0.0900/M input |$0.2900/M output

NVIDIA Nemotron 3 Nano 30B A3B is a small language MoE model with highest compute efficiency and accuracy for developers to build specialized agentic AI systems. The model is fully...

by |Dec 2025 |256K context |Free input |Free output

NVIDIA Nemotron 3 Nano 30B A3B is a small language MoE model with highest compute efficiency and accuracy for developers to build specialized agentic AI systems. The model is fully...

by |Dec 2025 |262K context |$0.0500/M input |$0.2000/M output

GPT-5.2 Chat (AKA Instant) is the fast, lightweight member of the 5.2 family, optimized for low-latency chat while retaining strong general intelligence. It uses adaptive reasoning to selectively “think” on...

by |Dec 2025 |128K context |$1.75/M input |$14.00/M output

GPT-5.2 Pro is OpenAI’s most advanced model, offering major improvements in agentic coding and long context performance over GPT-5 Pro. It is optimized for complex tasks that require step-by-step reasoning,...

by |Dec 2025 |400K context |$21.00/M input |$168.00/M output

GPT-5.2 is the latest frontier-grade model in the GPT-5 series, offering stronger agentic and long-context performance compared to GPT-5.1. It uses adaptive reasoning to allocate computation dynamically, responding quickly...

by |Dec 2025 |400K context |$1.75/M input |$14.00/M output

Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic coding. It is a 123B-parameter dense transformer model supporting a 256K context window. Devstral 2 supports exploring...

by |Dec 2025 |262K context |$0.4000/M input |$2.00/M output

The relace-search model uses 4-12 `view_file` and `grep` tools in parallel to explore a codebase and return relevant files to the user request. In contrast to RAG, relace-search performs agentic...

by |Dec 2025 |256K context |$1.00/M input |$3.00/M output

GLM-4.6V is a large multimodal model designed for high-fidelity visual understanding and long-context reasoning across images, documents, and mixed media. It supports up to 128K tokens, processes complex page layouts...

by |Dec 2025 |131K context |$0.3000/M input |$0.9000/M output

DeepSeek V3.1 Nex-N1 is the flagship release of the Nex-N1 series — a post-trained model designed to highlight agent autonomy, tool use, and real-world productivity. Nex-N1 demonstrates competitive performance across...

by |Dec 2025 |131K context |$0.1350/M input |$0.5000/M output

Rnj-1 is an 8B-parameter, dense, open-weight model family developed by Essential AI and trained from scratch with a focus on programming, math, and scientific reasoning. The model demonstrates strong performance...

by |Dec 2025 |33K context |$0.1500/M input |$0.1500/M output

Transform your natural language requests into structured OpenRouter API request objects. Describe what you want to accomplish with AI models, and Body Builder will construct the appropriate API calls. Example:...

by |Dec 2025 |128K context |Free input |Free output

GPT-5.1-Codex-Max is OpenAI’s latest agentic coding model, designed for long-running, high-context software development tasks. It is based on an updated version of the 5.1 reasoning stack and trained on agentic...

by |Dec 2025 |400K context |$1.25/M input |$10.00/M output

Nova 2 Lite is a fast, cost-effective reasoning model for everyday workloads that can process text, images, and videos to generate text. Nova 2 Lite demonstrates standout capabilities in processing...

by |Dec 2025 |1M context |$0.3000/M input |$2.50/M output

The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger Mistral Small 3.2 24B counterpart. A powerful and efficient language...

by |Dec 2025 |262K context |$0.2000/M input |$0.2000/M output

A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient tiny language model with vision capabilities.

by |Dec 2025 |262K context |$0.1500/M input |$0.1500/M output

The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny language model with vision capabilities.

by |Dec 2025 |131K context |$0.1000/M input |$0.1000/M output

Mistral Large 3 2512 is Mistral’s most capable model to date, featuring a sparse mixture-of-experts architecture with 41B active parameters (675B total), and released under the Apache 2.0 license.

by |Dec 2025 |262K context |$0.5000/M input |$1.50/M output

Trinity Mini is a 26B-parameter (3B active) sparse mixture-of-experts language model featuring 128 experts with 8 active per token. Engineered for efficient reasoning over long contexts (131k) with robust function...

by |Dec 2025 |131K context |$0.0450/M input |$0.1500/M output

DeepSeek-V3.2-Speciale is a high-compute variant of DeepSeek-V3.2 optimized for maximum reasoning and agentic performance. It builds on DeepSeek Sparse Attention (DSA) for efficient long-context processing, then scales post-training reinforcement learning...

by |Dec 2025 |164K context |$0.4000/M input |$1.20/M output

DeepSeek-V3.2 is a large language model designed to harmonize high computational efficiency with strong reasoning and agentic tool-use performance. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism...

by |Dec 2025 |164K context |$0.2600/M input |$0.3800/M output

INTELLECT-3 is a 106B-parameter Mixture-of-Experts model (12B active) post-trained from GLM-4.5-Air-Base using supervised fine-tuning (SFT) followed by large-scale reinforcement learning (RL). It offers state-of-the-art performance for its size across math,...

by |Nov 2025 |131K context |$0.2000/M input |$1.10/M output

Claude Opus 4.5 is Anthropic’s frontier reasoning model optimized for complex software engineering, agentic workflows, and long-horizon computer use. It offers strong multimodal capabilities, competitive performance across real-world coding and...

by |Nov 2025 |200K context |$5.00/M input |$25.00/M output

Olmo 3 32B Think is a large-scale, 32-billion-parameter model purpose-built for deep reasoning, complex logic chains and advanced instruction-following scenarios. Its capacity enables strong performance on demanding evaluation tasks and...

by |Nov 2025 |66K context |$0.1500/M input |$0.5000/M output

Nano Banana Pro is Google’s most advanced image-generation and editing model, built on Gemini 3 Pro. It extends the original Nano Banana with significantly improved multimodal reasoning, real-world grounding, and...

by |Nov 2025 |66K context |$2.00/M input |$12.00/M output

Grok 4.1 Fast is xAI's best agentic tool calling model that shines in real-world use cases like customer support and deep research. 2M context window. Reasoning can be enabled/disabled using...

by |Nov 2025 |2M context |$0.2000/M input |$0.5000/M output

Cogito v2.1 671B MoE represents one of the strongest open models globally, matching the performance of frontier closed and open models. This model is trained using self-play with reinforcement learning...

by |Nov 2025 |128K context |$1.25/M input |$1.25/M output

GPT-5.1 is the latest frontier-grade model in the GPT-5 series, offering stronger general-purpose reasoning, improved instruction adherence, and a more natural conversational style compared to GPT-5. It uses adaptive reasoning...

by |Nov 2025 |400K context |$1.25/M input |$10.00/M output