Small businesses should be a bigger part of the “AI transformation” conversation

AI For Business


Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.


The outlook for AI in small and medium-sized businesses

Much of the conversation around AI business transformation centers on enterprises, or companies with 500 or more employees. That's natural: for AI and cloud companies, landing large enterprise customers can mean securing a significant recurring revenue stream.

But if we're really talking about AI reinventing work and making everyone more efficient and productive, small businesses should be a bigger part of the conversation. According to the Small Business Administration, there are approximately 36 million small businesses operating in the United States, employing 46% of private-sector workers. Most of those companies are very small: approximately 88% have fewer than 20 employees, according to federal data.

Of course, universities and consultancies have been investigating how and to what extent small and medium-sized businesses are using AI tools. Surveys through 2024 formed a consensus that only a relatively small number of SMBs had begun to meaningfully adopt them. However, research conducted in 2026 paints a more complicated picture. A recent Goldman Sachs survey of 10,000 small and medium-sized businesses found that three-quarters are using AI, with 84% citing improved productivity and efficiency. Still, only 14% say they are integrating AI into their core operations. Another study, by the National Federation of Independent Business (NFIB), found that only about a quarter of small businesses reported using AI tools. (NFIB typically surveys traditional, very small businesses such as plumbers and caterers, while Goldman's sample may skew toward more digitally driven companies, such as e-commerce retailers.)

Many small business owners are probably aware of the growing ecosystem of AI products designed for small businesses. Intuit, Zapier, HubSpot, Lindy, and Microsoft all compete in this space. Software companies that have served small and medium-sized businesses for years, such as Intuit, are gradually embedding AI copilots and automation into products their customers already know: accounting platforms, CRM systems, office suites, customer support software, and workflow automation tools. Microsoft did just that when it integrated Copilot into its productivity suite. Meanwhile, Google is incorporating its Gemini models into Google Workspace.

Click here to learn more about how AI companies are promoting their products to small businesses.

New study says AI models' reasoning about ethical dilemmas may be purely performative

Leading AI models often appear to ponder moral complexities without actually doing so, according to a new paper published in the journal AI and Ethics by researchers at the Allen Institute and the Harvard Kennedy School. Instead of reasoning through difficult questions and arriving at nuanced answers, the models appear to default to a hidden "hierarchy of values" they were already trained on, the researchers say.

The study, titled "Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?," tested four models (Claude, GPT, Llama, and DeepSeek) on ethical dilemmas drawn from moral psychology, including scenarios in which both available options carry real moral costs. In 87% of these so-called tragic trade-off trials, all four models converged on the same choice, and that choice often did not follow from their stated reasoning.

The researchers describe the AI's behavior as "crying crocodile tears": performing moral distress while applying what they characterize as implicit, opaque value hierarchies. That can create serious trust issues for users. "More and more people are turning to these tools to help guide them in making difficult decisions," lead author Sarah Hubbard said in a statement. "If a model appears to be addressing an ethical dilemma, but is actually reducing it to a predetermined answer, it may be gaining user trust through false pretenses."

Click here to learn more about how AI models aren't always built to deal with ethical dilemmas.

Are AI benchmarks functionally useless?

In the world of AI research, the most common way to measure a model's intelligence is to submit it to benchmark tests. Hundreds of benchmarks exist, each focusing on a different aspect of intelligence: some test coding ability, while others test instruction-following and reasoning.

But there's a big problem: AI labs can game benchmarks. Once a benchmark is released and models begin training on it, it stops being a reliable measure. "I don't think benchmarks will be a good measure of intelligence, because the model will suddenly be trained on that benchmark, and that will happen to all benchmarks," former OpenAI researcher Jerry Tworek said in a recent podcast appearance.

Sample test questions and answers quickly become available online, and AI labs can train models on that data to boost their scores. "People will target it in training and work it out against every benchmark," Tworek said. Researchers can even build training procedures that teach a model how to answer a given test's questions.

Tworek, one of the main minds behind OpenAI's groundbreaking o1 and o3 reasoning models, says that for benchmarks to be meaningful, there needs to be a way to generate new questions and scenarios for each new test, so that the model being evaluated faces material unlike anything it has seen before.
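The contamination problem Tworek describes is often screened for with simple n-gram overlap checks between benchmark items and training text, a common practice in labs' model reports. A minimal sketch of that idea, with an assumed 8-token window and illustrative function names (not from any particular lab's pipeline):

```python
# Hedged sketch: flag a benchmark item as potentially contaminated if it
# shares any 8-token sequence with the training corpus. Real pipelines
# normalize text more aggressively and scan far larger corpora.

def ngrams(text: str, n: int = 8) -> set:
    """Return the set of n-token sequences in lowercased, whitespace-split text."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(benchmark_item: str, training_docs: list[str], n: int = 8) -> bool:
    """True if any n-gram of the benchmark item appears in any training document."""
    item_grams = ngrams(benchmark_item, n)
    return any(item_grams & ngrams(doc, n) for doc in training_docs)

corpus = ["the quick brown fox jumps over the lazy dog near the river bank today"]
leaked = "Q: the quick brown fox jumps over the lazy dog near the river bank"
print(is_contaminated(leaked, corpus))  # True: an 8-gram overlaps the corpus
```

A check like this only catches verbatim leakage; paraphrased test questions slip through, which is one reason Tworek argues for generating genuinely new questions per evaluation rather than filtering old ones.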

Click here to learn more about the release of the ARC-AGI-3 gaming benchmark.


More information about AI from Fast Company:

Want exclusive reporting and trend analysis on technology, business innovation, the future of work, and design? Sign up for Fast Company Premium.


