Social media giant Meta announced on July 1 that Alexandr Wang, former CEO of Scale AI, has been appointed Meta's Chief AI Officer and will co-lead Meta Superintelligence Labs (MSL) alongside former GitHub CEO Nat Friedman. The move follows Meta's $14.3 billion investment in Scale AI at the beginning of June.
The company has been on a hiring blitz, recruiting top talent from OpenAI, Anthropic, Google, and DeepMind, with signing bonuses reportedly reaching up to $100 million.
The following individuals will form the Superintelligence team:
Trapit Bansal
Trapit Bansal is recognized for his pioneering work in applying reinforcement learning to chain-of-thought reasoning in large language models. As a co-creator of OpenAI's o-series models, Bansal played a pivotal role in improving their interpretability and robustness. His research focuses on training methodologies that enhance both the reasoning capabilities and the efficiency of modern AI systems.
Shuchao Bi
Shuchao Bi made major contributions to the development of GPT-4o's voice mode and the o4-mini model. At OpenAI, he led multimodal post-training efforts, improving how models process inputs and generate outputs across text, audio, and visual modalities.
Huiwen Chang
Huiwen Chang worked on the image generation capabilities of GPT-4o and has a strong background in generative AI. Previously at Google Research, she invented the MaskGIT and Muse architectures, both foundational to the field of text-to-image generation.
Ji Lin
Ji Lin played a key role in building an influential set of models, including o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o-ImageGen, and the Operator reasoning stack. His contributions span advanced reasoning mechanisms and architectural improvements that allow these models to handle a wide range of complex AI tasks more effectively.
Joel Pobar
At Anthropic, Joel Pobar led work on inference optimization. He previously spent over a decade at Meta, contributing to core infrastructure projects including HHVM, Hack, Flow, Redex, and a variety of performance and machine learning tools. His deep experience in software engineering and AI has been instrumental in improving the speed and scalability of AI inference systems.
Jack Rae
Jack Rae served as pre-training technical lead for Gemini and leads reasoning for Gemini 2.5. At DeepMind, he led early large-scale language modeling efforts, including Gopher and Chinchilla. His expertise lies in large-scale pre-training strategies and in improving the reasoning capabilities of cutting-edge AI models.
Hongyu Ren
Hongyu Ren is a co-creator of several OpenAI models, including GPT-4o, 4o-mini, o1-mini, o3-mini, o3, and o4-mini. He previously led a post-training group at OpenAI, focusing on refining and optimizing large language models for higher accuracy and efficiency through advanced post-training techniques.
Johan Schalkwyk
Johan Schalkwyk, a former Google Fellow, was an early contributor to the Sesame project and served as technical lead for Maya. His extensive background in AI research and development has shaped advances in machine learning frameworks and foundational AI technology.
Pei Sun
Pei Sun worked on post-training, coding, and reasoning for Gemini at Google DeepMind. Previously, he created the last two generations of Waymo's perception models, demonstrating his expertise in AI for autonomous vehicles. His current focus is on reasoning and on strengthening the real-world capabilities of advanced AI systems.
Jiahui Yu
Jiahui Yu is a co-creator of o3, o4-mini, GPT-4.1, and GPT-4o. He previously led OpenAI's perception team and co-led multimodal research on Gemini. His research focuses on advancing AI perception, integrating multimodal understanding, and enabling models to process and generate information across diverse data types.
Shengjia Zhao
Shengjia Zhao is a co-creator of ChatGPT, GPT-4, the mini model series, GPT-4.1, and o3. At OpenAI, he led synthetic data initiatives, focusing on improving the diversity and quality of training data. His innovations in data synthesis have been important to enhancing model generalization and overall performance.
