Presumably, when you dump $14 billion into a company to buy a 49 percent stake, as Meta did with Scale AI, you expect that the company a) will help you make a lot of money and b) knows what it's doing.
But a new scoop from Inc. Magazine suggests that Scale AI — co-founded by 28-year-old billionaire Alexandr Wang (the missing "e" between the "d" and the "r" is intentional) — is a massive mess behind the scenes.
While working for Google (which just ended the relationship following Meta's investment), Scale AI was reportedly overrun with countless "spammers" who cost the company untold sums through fake work, exploiting lax security and review protocols — an episode that encapsulates its struggle to meet the demands of giant clients like Google.
Scale AI is essentially a data annotation hub that performs grunt work essential to the AI industry. Quality data is required to train AI models, and for that data to mean anything, you need to know what the AI model is looking at. That's where annotators come in, manually adding that context.
Like much of corporate America, Scale AI has built its business model on an army of poorly paid gig workers, many of them overseas. Conditions are reportedly so bad that the operation has been described as a "digital sweatshop," and many workers have accused Scale AI of wage theft.
It turns out this was not an environment that fostered high-quality work.
According to internal documents obtained by Inc., Scale AI's "Bulba Experts" program for training Google's AI systems was supposed to be staffed with authorities in relevant fields. Instead, during a chaotic eleven-month stretch between March 2023 and April 2024, dubious "contributors" flooded the program with "spam," described as "false information, misinformation, GPT-generated thinking processes."
Often, the spammers — independent contractors working through Scale AI-owned platforms like Remotasks and Outlier — were paid for submitting complete nonsense, because it became almost impossible to catch everything. And even when they were caught, some would simply use a VPN to come back.
"People made a lot of money," a former contributor told Inc. "They just hired everyone who could breathe."
The work often called for advanced degrees that many contributors didn't have, former contributors said. And apparently, no one was vetting them on the way in.
"There were no background checks at all," a former remote queue manager, who was responsible for reviewing and approving contributors' work, told Inc. "Clients would have requirements, for example, that people working on a project have a specific degree. But there were no validation checks… Often, it was people who weren't native English speakers."
Spammers could get away with simply submitting trash, and there weren't enough people to track them down, the former queue manager added. They also recalled how Scale AI's allocation team, responsible for assigning contributors, would "throw 800 spammers" into teams, who then spammed "all the tasks."
Attempts to crack down were crude. Various memos and guidelines cited by Inc. called for rejecting or removing contributors from certain countries, including Egypt, Pakistan, Kenya, and Venezuela.
The program was also bitten by the very technology it was helping to create. Because spammers had submitted so much AI-generated junk, supervisors were advised to use a tool called ZeroGPT, which is intended to detect the use of ChatGPT.
It makes you wonder how much slipped through the cracks and ended up being internalized by Google's AI models. Perhaps it explains a bit about its infamously janky AI Overviews feature.
For its part, a Scale AI spokesperson rejected the claims.
"This story is filled with so many inaccuracies that it's difficult to keep track," the spokesperson said in a statement to Inc. "What these documents show, and what we explained to Inc. prior to publication, is that we had clear safeguards in place to detect and remove spam before anything went to the customer."
More on AI: WhatsApp Is Deploying AI for People Who Don't Understand Simple Messages From Friends and Family
