Imagine a world in which developing and producing iPhones or other advanced smartphones requires no expensive team of engineers and designers, no complex global supply chain with factories in multiple countries manufacturing screens, chips and sensors, and no tens of thousands of workers to assemble and ship the devices around the world.
Imagine a world where almost anyone could build their own smartphone, at least as advanced as Samsung’s or Google’s and customized to their needs, developed not by a company worth trillions of dollars but by a small startup, or even a lone open-source researcher. Sounds surreal, doesn’t it? But when it comes to today’s most advanced and most discussed technology, generative artificial intelligence, you don’t need to imagine anything at all. This is today’s reality, not tomorrow’s.
Diagram showing the battle between open source and closed source research (Credit: MindJourney)
This was recently confirmed by the publication of an alleged leak of an internal Google document titled “We have no moat, and neither does OpenAI.” A moat in technology refers to a barrier that makes it difficult for competitors to catch up with the big players in a field; it may be financial, logistical, technical or otherwise. Smartphone makers, for example, enjoy logistical and financial moats that make it difficult for new entrants to mount significant competition. But when it comes to generative AI, and especially to large language models like the one behind ChatGPT, the document argues that no such moat exists at all.
The document was posted on a public Discord server, and Dylan Patel, chief analyst at the research firm SemiAnalysis, confirmed that it was written by a Google employee. But even if that were not the case, the document raises important insights, chief among them that players like Google and OpenAI are not positioned to win the AI race. “While we’ve been squabbling, a third faction has been quietly eating our lunch. I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider ‘major open problems’ are solved and in people’s hands today.” Among the examples the author cites are large language models running on phones and scalable personal AI.
According to the document, the open source community’s progress began in March, when Meta’s LLaMA model was leaked. This was the first time the open source community had direct access to a fully capable foundation model. “A tremendous outpouring of innovation followed,” the document says. “Anyone can tinker. Many of the new ideas come from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”
According to the author, Google’s and OpenAI’s models still hold a slight edge in quality, but the gap is closing rapidly. “Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13 billion parameters that we struggle with at $10 million and 540 billion parameters, and they are doing it in weeks, not months.”
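The “$100 and 13 billion parameters” claim rests largely on low-rank adaptation (LoRA), a fine-tuning technique the leaked document itself highlights: instead of updating a model’s full weight matrices, you train two small low-rank factors on top of frozen weights. A minimal NumPy sketch of the idea (the layer sizes and rank here are illustrative assumptions, not figures from the document):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Apply a frozen weight matrix W plus a trainable low-rank
    update B @ A -- the LoRA adapter."""
    return x @ (W + alpha * (B @ A)).T

# Illustrative layer sizes, not from the document
d_out, d_in, rank = 1024, 1024, 8

W = np.random.randn(d_out, d_in) * 0.01  # frozen pretrained weights
A = np.random.randn(rank, d_in) * 0.01   # trainable down-projection
B = np.zeros((d_out, rank))              # trainable up-projection, zero-init

full_params = W.size           # parameters touched by full fine-tuning
lora_params = A.size + B.size  # parameters touched by LoRA
print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"ratio: {full_params // lora_params}x")
# → full: 1,048,576  lora: 16,384  ratio: 64x
```

Because B starts at zero, the adapter begins as a no-op, and only about 1.5% of this layer’s parameters need gradients; this is roughly why a single consumer GPU can fine-tune a model that took a supercomputer to pre-train.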
“We have no secret sauce,” the document continues. “People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality.”
“Right now, the solutions developed by the open-source community are sufficient for everyday commercial applications,” Matty Mariansky, an AI artist, entrepreneur, machine-learning instructor for designers at the Bezalel Academy, and founder of the Rise of the Machines community on Facebook, told Calcalist. “You also don’t have to hand all your personal data to model owners, or depend on owners who can choose to censor or restrict the specific use cases your startup wants to pursue.”
“The author of this document is essentially saying that the idea that building models is such an expensive business that everyone would have to go through OpenAI or Google was a mistake. University researchers with modest resources are now teaching the big companies a lesson.” The open-source community’s approach, he said, is smarter than training one model that knows everything: “If you need a bot to answer customer-service queries, why should that bot know how to quote James Joyce? Why not collect the data you already have and turn a small GPT into a great expert in a narrow field?”
Mariansky added that the situation creates considerable difficulties for anyone seeking to regulate the sector. “From a regulatory perspective, this is very bad news. Enforcement becomes very difficult,” he said. “If the model is owned by a law-abiding company, that’s fine, but bad actors can easily step in and acquire enormous power without accountability. If the EU thought it could solve the problem by regulating four or five big companies, that idea has now been shattered. Everyone will have their own AI, trained on who knows what, capable of doing all sorts of things, sitting on a private, hard-to-monitor server that no one can oversee. It also means that none of the big companies will stop. Before, it was ‘If we stop, the Chinese will pass us’; now it’s ‘Everyone will pass us.’”
But some researchers in the field believe the gaps will remain, and that open-source models will always lag behind the larger, more expensive ones, even if those gaps don’t matter for existing applications. Andrej Karpathy, a founding member of OpenAI and former senior director of AI at Tesla, noted that the recent open-source boom was made possible largely by Meta’s leaked model: pre-training an LLM base model is still very expensive, while fine-tuning one takes only a few GPUs and a day, which is precisely why access to an already pre-trained foundation model mattered so much.
As far as current practical applications of generative AI are concerned, the large OpenAI and Google models offer no significant advantages over free open-source models that would justify their cost, especially for limited, well-defined tasks such as customer service, summarizing business documents, or drafting medical reports. But that is a short-term view. Looking ahead, ambitious applications such as long-term stock-market investing, managing complex international logistics systems, or even addressing political issues and crises may depend on larger and more powerful models, and that could yet form a moat the open-source community has no chance of overcoming.
