OpenAI
gpt-5-codex
OpenAI GPT OSS 120B
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
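Since the entry highlights configurable reasoning depth and an OpenAI-compatible interface, the following is a minimal sketch of what a request might look like, assuming gpt-oss-120b is served behind an OpenAI-compatible Chat Completions endpoint such as a local vLLM server; the base_url, api_key, and the reasoning_effort value are illustrative assumptions, not part of the model card.

    # Minimal sketch: call gpt-oss-120b through an OpenAI-compatible endpoint.
    # The local endpoint and key below are assumptions for illustration.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # hypothetical local deployment
        api_key="EMPTY",                      # many local servers ignore the key
    )

    response = client.chat.completions.create(
        model="gpt-oss-120b",
        reasoning_effort="high",  # configurable reasoning depth (low/medium/high)
        messages=[
            {"role": "user", "content": "Outline a plan to migrate a cron job to a queue."}
        ],
    )
    print(response.choices[0].message.content)

Because the snippet targets the standard Chat Completions surface, it should port across deployments with only the base_url changed, though support for the reasoning_effort parameter depends on the serving stack.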
OpenAI GPT OSS 20B
gpt-oss-20b is an open-weight, 21B-parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimized for lower-latency inference and deployment on consumer or single-GPU hardware. The model is trained on OpenAI's Harmony response format and supports configurable reasoning levels, fine-tuning, and agentic capabilities, including function calling, tool use, and structured outputs.
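Given that the entry calls out function calling and structured outputs, here is a minimal tool-calling sketch against gpt-oss-20b, again assuming an OpenAI-compatible endpoint; the endpoint details and the get_weather tool are hypothetical, included only to illustrate the request shape.

    # Minimal sketch: function calling with gpt-oss-20b over an
    # OpenAI-compatible API. Endpoint and tool are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-oss-20b",
        messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
        tools=tools,
    )

    # If the model chose to call the tool, the structured call is returned
    # as JSON arguments rather than free text.
    msg = response.choices[0].message
    if msg.tool_calls:
        call = msg.tool_calls[0]
        print(call.function.name, call.function.arguments)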
gpt-5
gpt-5-mini
gpt-5-nano
gpt-5-pro
gpt-5-chat-latest
gpt-4o
chatgpt-4o-latest