OpenAI
gpt-5.4-pro
gpt-5.4
gpt-5.3-codex
gpt-5.3-chat-latest
gpt-5.2
gpt-5.1
OpenAI: GPT OSS 120B
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for reasoning-heavy, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
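As a sketch of how the configurable reasoning depth might be exercised through an OpenAI-compatible chat endpoint: the `reasoning_effort` field below is an assumption modeled on OpenAI's reasoning-model parameter, and the accepted values may differ depending on the serving stack (vLLM, Ollama, etc.), so verify against your deployment.

```python
# Illustrative request payload for gpt-oss-120b via an OpenAI-compatible
# chat-completions API. "reasoning_effort" is an assumed parameter name;
# check your serving stack's docs before relying on it.
import json


def build_request(prompt: str, effort: str = "medium") -> str:
    """Serialize a chat request with a configurable reasoning depth."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unsupported reasoning effort: {effort!r}")
    payload = {
        "model": "gpt-oss-120b",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
    }
    return json.dumps(payload)


request_body = build_request("Summarize the MoE routing trade-offs.", "high")
```

Higher effort levels trade latency and token cost for longer chains of thought, so the depth is best chosen per request rather than fixed globally.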
gpt-5-codex
OpenAI: GPT OSS 20B
gpt-oss-20b is an open-weight, 21B-parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimized for lower-latency inference and deployability on consumer or single-GPU hardware. The model is trained on OpenAI’s Harmony response format and supports reasoning-level configuration, fine-tuning, and agentic capabilities including function calling, tool use, and structured outputs.
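A minimal sketch of what the Harmony response format looks like on the wire, assuming the published `<|start|>role<|message|>content<|end|>` token layout; for production use, prefer OpenAI's official harmony rendering library rather than hand-building strings like this.

```python
# Minimal, illustrative Harmony-style prompt rendering. The special tokens
# follow the <|start|>role<|message|>content<|end|> layout; treat this as
# a sketch, not a replacement for the official harmony renderer.
def render_harmony(messages: list[dict[str, str]]) -> str:
    parts = []
    for m in messages:
        parts.append(f"<|start|>{m['role']}<|message|>{m['content']}<|end|>")
    # Leave the prompt open at an assistant turn so the model completes it.
    parts.append("<|start|>assistant")
    return "".join(parts)


prompt = render_harmony([
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain MXFP4 quantization briefly."},
])
```

Because the model is trained specifically on this format, sending plain untagged text instead of Harmony-rendered turns degrades output quality, which is why fine-tuning pipelines for gpt-oss models also keep data in this layout.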
gpt-5.1-chat-latest
gpt-5
gpt-5-mini
gpt-5-nano
gpt-5-pro
gpt-5.2-codex
gpt-5.2-pro
gpt-5.2-chat-latest
gpt-5.1-codex-max
gpt-5.1-codex-mini
gpt-5.1-codex
gpt-5-chat-latest
gpt-4.1-mini
gpt-4.1-nano
gpt-4.1
o3
chatgpt-o3
o3-mini
chatgpt-o3-mini
o1-mini
chatgpt-o1-mini
o1
chatgpt-o1
gpt-4o-mini
chatgpt-4o-mini
gpt-4o
chatgpt-4o