Google and OpenAI are Walmarts surrounded by fruit stands



Image credit: Tim Boyle/Getty Images

OpenAI may be synonymous with machine learning right now, and Google is doing its best to pick itself up off the floor, but both could soon face a new threat: rapidly multiplying open source projects that leave the lumbering corporate giants in their dust. This zerg-like threat may not be an existential one, but it will certainly keep the dominant players on the defensive.

The idea is nothing new — in the fast-moving AI community, this kind of disruption is expected on a weekly basis — but a widely shared document, purportedly written inside Google, puts the situation in perspective. “We have no moat, and neither does OpenAI,” the memo reads.

I won’t burden the reader with a lengthy summary of this perfectly readable and interesting piece, but the gist is this: while GPT-4 and other proprietary models have captured most of the attention and, indeed, the revenue, the head start they bought with all that funding and infrastructure is looking slimmer by the day.

The pace of OpenAI’s releases may seem blistering by the standards of ordinary major software launches — GPT-3, ChatGPT, and GPT-4 certainly came hot on each other’s heels compared to, say, versions of iOS or Photoshop. But they still occur on a timescale of months and years.

The memo points out that in March a foundational language model, LLaMA, leaked from Meta in fairly rough form. Within weeks, people tinkering on laptops and penny-a-minute servers had added core capabilities like instruction tuning, multiple modalities, and reinforcement learning from human feedback. OpenAI and Google were probably poking around these approaches too, but they couldn’t — can’t — replicate the level of collaboration and experimentation happening on subreddits and Discords.

Is it really possible that the titanic computational requirements that once seemed an insurmountable obstacle — a moat — to challengers are already a relic of a different era of AI development?

Sam Altman has already pointed out that we should expect diminishing returns from throwing more parameters at the problem. Bigger isn’t always better, sure — but few guessed that smaller was instead.

GPT-4 is a Walmart, and nobody actually likes Walmart

The business paradigm currently pursued by OpenAI and others is a direct descendant of the SaaS model: you have some high-value software or service, and you offer carefully gated access to it, for instance via an API. It’s a simple, proven approach that makes perfect sense when you’ve invested hundreds of millions of dollars into developing a single monolithic yet versatile product like a large language model.

If GPT-4 generalizes well to questions about precedents in contract law, great — never mind that a vast amount of its “intellect” is dedicated to being able to parrot the style of every author who ever published a work in English. GPT-4 is like a Walmart: no one actually wants to go there, so the company makes sure there’s no other option.

But customers are starting to wonder: why am I walking through 50 aisles of junk to buy a few apples? Why am I hiring the services of the largest and most general-purpose AI model ever created when all I want is a bit of intelligence to match the language of this contract against a couple hundred others? At the risk of torturing the metaphor (to say nothing of the reader), if GPT-4 is the Walmart you go to for apples, what happens when a fruit stand opens in the parking lot?

In the AI world, it didn’t take long for a large language model to be run — in highly truncated form, of course — on (fittingly) a Raspberry Pi. For a business like OpenAI, its jockey Microsoft, Google, or anyone else in the AI-as-a-service world, that effectively undermines the entire premise of their business: that these systems are so hard to build and run that they have to do it for you. In fact, it starts to look like these companies picked and engineered a version of AI that fit their existing business model, not the other way around!

Once upon a time, you had to offload the computation involved in word processing to a mainframe — your terminal was just a display. Of course, that was a different era, and it has long since been possible to fit the whole application on a personal computer. That process has occurred many times since, as our devices have repeatedly and exponentially increased their capacity for computation. These days, when something has to be done on a supercomputer, everyone understands it’s just a matter of time and optimization.

For Google and OpenAI, that time came much sooner than expected. And they weren’t the ones doing the optimizing — and may never be, at this rate.

Now, that doesn’t mean they’re plain out of luck. Google didn’t get where it is by being the best — not for a long time, anyway. And being a Walmart has its benefits. Companies don’t want to hunt down the bespoke solution that performs the task they need 30% faster if they can get a decent price from their existing vendor and not rock the boat too much. Never underestimate the value of inertia in business!

Sure, people are iterating on LLaMA so fast that they’re running out of camelids to name the variants after. (As an aside, I’d like to thank the developers for giving me an excuse to scroll through hundreds of pictures of adorable, tawny vicuñas instead of working.) But few if any corporate IT departments are going to jury-rig an implementation of Stability’s open source derivative of a quasi-legally leaked Meta model over OpenAI’s simple, effective API. They’ve got a business to run!

But at the same time, I stopped using Photoshop for image editing and creation years ago because open source options like Gimp and Paint.net have gotten so incredibly good. At this point, the argument goes the other direction: pay how much for Photoshop? No way, we’ve got a business to run!

What Google’s anonymous author is clearly worried about is that the distance from the first situation to the second is going to be much shorter than anyone thought — and there doesn’t appear to be a thing anyone can do about it.

Except, the memo argues: embrace it. Open up, publish, collaborate, share, compromise. As the author concludes:

Google should establish itself as a leader in the open source community, taking the lead by cooperating with, rather than ignoring, the broader conversation. This probably means taking some uncomfortable steps, like publishing the model weights for small ULM variants. This necessarily means relinquishing some control over our models. But this compromise is inevitable. We cannot hope to both drive innovation and control it.





