
OpenAI

Text

gpt-5.4-pro

Text

gpt-5.4-nano

Our cheapest GPT-5.4-class model for simple, high-volume tasks

Text

gpt-5.4-mini

Our strongest mini model yet for coding, computer use, and subagents

Text

gpt-5.4

Text

gpt-5.3-codex

Text

gpt-5.3-chat-latest

Text

gpt-5.2

Text

gpt-5.1

Text

OpenAI GPT OSS 120B

gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
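
Since the entry above highlights native tool use and function calling, the snippet below is a minimal sketch (not an official example) of calling a gpt-oss-120b deployment through an OpenAI-compatible chat completions endpoint with a single function tool. The base URL, API key, tool definition, and exact model ID are assumptions that will vary by provider.

```python
# Minimal sketch: function calling against a gpt-oss-120b deployment via an
# OpenAI-compatible endpoint. base_url, api_key, and the model ID are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # hypothetical gateway endpoint
    api_key="YOUR_API_KEY",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative tool, not part of the model itself
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-oss-120b",  # assumed ID; check the provider's model list
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# If the model chooses to call the tool, the arguments arrive as a JSON string.
message = response.choices[0].message
if message.tool_calls:
    print(message.tool_calls[0].function.name)
    print(message.tool_calls[0].function.arguments)
else:
    print(message.content)
```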

Text

gpt-5-codex

Text

OpenAI: GPT OSS 20B

gpt-oss-20b is an open-weight 21B parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimized for lower-latency inference and deployability on consumer or single-GPU hardware. The model is trained in OpenAI’s Harmony response format and supports reasoning level configuration, fine-tuning, and agentic capabilities including function calling, tool use, and structured outputs.
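
The entry above notes that gpt-oss-20b is trained on OpenAI's Harmony response format and supports configurable reasoning levels. The sketch below assumes a local OpenAI-compatible server (for example a vLLM or Ollama instance) that applies the model's chat template and maps a "Reasoning: high" line in the system message onto the Harmony reasoning level; the port, model ID, and that convention are assumptions to verify against your serving stack's documentation.

```python
# Minimal sketch: requesting a higher reasoning level from a locally served
# gpt-oss-20b. The URL, model ID, and system-message convention are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="gpt-oss-20b",  # assumed ID; depends on how the server registers the model
    messages=[
        {"role": "system", "content": "Reasoning: high"},  # requested reasoning depth
        {"role": "user", "content": "Summarize the trade-offs of MoE models in two sentences."},
    ],
)

print(response.choices[0].message.content)
```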

Text

gpt-5.1-chat-latest

Text

gpt-5

Text

gpt-5-mini

Text

gpt-5-nano

Text

gpt-5-pro

Text

gpt-5.2-codex

Text

gpt-5.2-pro

Text

gpt-5.2-chat-latest

Text

gpt-5.1-codex-max

Text

gpt-5.1-codex-mini

Text

gpt-5.1-codex

Text

gpt-5-chat-latest

Text

gpt-4.1-mini

Text

gpt-4.1-nano

Text

gpt-4.1

Text

o3

chatgpt-o3

Text

o3-mini

chatgpt-o3-mini

Text

o1-mini

chatgpt-o1-mini

Text

o1

chatgpt-o1

Text

gpt-4o-mini

chatgpt-4o-mini

Text

gpt-4o

chatgpt-4o

Embedding

text-embedding-3-large
