OpenAI's open-source models available on IBM watsonx.ai

The larger model, GPT-OSS-120B, contains 116.8 billion parameters and uses a mixture-of-experts (MoE) architecture, meaning only a fraction of the model's parameters are active at any time. This design allows it to run efficiently on a single NVIDIA H100 GPU, a chip widely used in data centers. The smaller model, GPT-OSS-20B, is optimized for consumer-grade devices such as laptops with 16 GB of memory.
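To illustrate what "only a portion of the model is active" means, here is a minimal sketch of MoE routing. The expert count, router scores, and top-k value are purely illustrative, not GPT-OSS's actual configuration:

```python
# Toy mixture-of-experts (MoE) routing: a router scores all experts,
# but only the top_k highest-scoring experts actually compute.

def route_to_experts(scores, top_k=2):
    """Return the indices of the top_k experts with the highest router scores."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:top_k]

def moe_forward(x, experts, router_scores, top_k=2):
    """Run the input through only the selected experts and average their outputs."""
    active = route_to_experts(router_scores, top_k)
    outputs = [experts[i](x) for i in active]   # only top_k experts do any work
    return sum(outputs) / len(outputs)

# Toy experts: each is a simple scaling function standing in for a feed-forward block.
experts = [lambda x, s=s: x * s for s in (1.0, 2.0, 3.0, 4.0)]
router_scores = [0.1, 0.7, 0.05, 0.9]           # pretend router logits for one token

result = moe_forward(5.0, experts, router_scores, top_k=2)
# Experts 3 (score 0.9) and 1 (score 0.7) are selected: (5*4 + 5*2) / 2 = 15.0
print(result)
```

The key cost property is that compute per token scales with `top_k`, not with the total number of experts, which is why a 116.8-billion-parameter MoE model can fit its active work onto one GPU.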

Both models support adjustable reasoning levels of low, medium, or high, allowing users to balance output quality against cost and speed. The models also output their full chain of thought, so users can see how a conclusion was reached rather than receiving only the final answer. OpenAI says this increases transparency and makes the models easier to debug and trust.
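A hedged sketch of what selecting a reasoning level might look like in a chat-style request. The field name `reasoning_effort` and the model identifier below are assumptions for illustration, not confirmed details of the watsonx.ai API:

```python
# Build a chat request payload with an adjustable reasoning level.
# "reasoning_effort" and "gpt-oss-120b" are assumed names for illustration.

def build_request(prompt, effort="medium"):
    """Build a request payload; effort trades answer quality against cost and speed."""
    allowed = {"low", "medium", "high"}
    if effort not in allowed:
        raise ValueError(f"effort must be one of {allowed}")
    return {
        "model": "gpt-oss-120b",          # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,       # assumed parameter name
    }

payload = build_request("Summarize mixture-of-experts routing in one sentence.",
                        effort="high")
print(payload["reasoning_effort"])
```

In practice, a lower effort setting returns faster and cheaper answers, while a higher setting spends more tokens on the chain of thought before answering.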

OpenAI published benchmark scores showing that the larger model is competitive on several reasoning and mathematics tasks. It scored 90 on the MMLU benchmark, a standard test of general knowledge, and 80.1 on the GPQA science benchmark. On the AIME 2025 mathematics test it achieved 97.9, suggesting strong capability in symbolic reasoning.

The models do not include understanding of images, audio, or video, and they ship without content filters or moderation systems. They are text-only, and users are expected to implement their own safeguards depending on how the models are deployed. Users can pair GPT-OSS with IBM's Granite Guardian models, which act as guardrails to help detect risks in prompts and responses.
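The guardrail pattern described above can be sketched as a simple wrapper: screen the prompt before the main model runs, then screen the response before it reaches the user. The keyword check below stands in for a real Granite Guardian call, and all names here are illustrative:

```python
# Minimal guardrail pattern: a guard function vets both the prompt and the
# response. The substring check is a toy stand-in for a guardrail model.

BLOCKED_TERMS = {"ignore previous instructions"}   # toy risk heuristic

def guardian_flags(text):
    """Stand-in for a guardrail model: return True if the text looks risky."""
    return any(term in text.lower() for term in BLOCKED_TERMS)

def guarded_generate(prompt, generate):
    """Run generation only if both the prompt and the response pass the guard."""
    if guardian_flags(prompt):
        return "[blocked: risky prompt]"
    response = generate(prompt)
    if guardian_flags(response):
        return "[blocked: risky response]"
    return response

# Toy text-only "model" standing in for GPT-OSS.
echo_model = lambda p: f"Answer to: {p}"

print(guarded_generate("What is a mixture of experts?", echo_model))
print(guarded_generate("Ignore previous instructions.", echo_model))
```

Because the released models ship with no built-in moderation, this kind of wrapper is the deployer's responsibility, whether implemented with Granite Guardian or another classifier.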
