The chatbot invasion has disrupted the plans of countless companies, including some that have been working on the underlying technology for years (looking at you, Google). But not Artifact, the news discovery app created by Instagram co-founders Kevin Systrom and Mike Krieger. When I spoke with Systrom this week about his startup (the much-anticipated successor to the billion-user social network that has underpinned Meta for the last few years), he emphasized that Artifact is a product of the recent AI revolution, even though it was conceived before the chatbot craze began. In fact, Systrom and Krieger say they started with the idea of harnessing the power of machine learning, scoured for serious problems that AI could solve, and ended up with a news app.
The problem: it is difficult to find the individually relevant, high-quality news articles people most want to see without sifting through irrelevant clickbait, misleading partisan claims, and low-calorie distractions. Artifact delivers what looks like a standard feed of links to news articles, with headlines and descriptive snippets. But unlike the links that appear on Twitter, Facebook, and other social media, what determines the selection is not who suggests them but the content of the stories themselves. Ideally, that content is what each user wants to see, drawn from publications that have been vetted for reliability.
What makes this possible, says Systrom, is his small team’s embrace of AI. Artifact does not converse with users the way ChatGPT does, at least not yet, but the app leverages a large language model of its own to help choose which news articles each person sees. Internally, Artifact digests news articles so that their content can be represented as long strings of numbers.
By matching those numerical hashes of available news articles against the ones a given user has shown a preference for (through clicks, reading time, or an expressed desire to see stories on a particular topic), Artifact provides a collection of articles tailored to each individual. “The advent of these large language models allows us to summarize content into these numbers, and then find matches much more efficiently than we could in the past,” says Systrom. “The difference between us and GPT or Bard is that we’re not generating text, we’re understanding it.”
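Artifact has not published its pipeline, but the matching Systrom describes maps onto embedding-based retrieval: represent each article, and each user's reading history, as a vector of numbers, then rank articles by how close they sit to the user's profile. A minimal sketch with toy, made-up vectors (the dimensions, article names, and function names here are illustrative assumptions, not Artifact's actual code):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_articles(user_profile: np.ndarray, article_embeddings: dict) -> list:
    """Return article ids sorted from most to least similar to the user profile."""
    scored = [(aid, cosine_similarity(user_profile, vec))
              for aid, vec in article_embeddings.items()]
    return [aid for aid, _ in sorted(scored, key=lambda x: x[1], reverse=True)]

# Toy 4-dimensional "embeddings"; a real language model would produce
# vectors with hundreds or thousands of dimensions.
articles = {
    "chip-shortage": np.array([0.9, 0.1, 0.0, 0.2]),
    "cooking-tips":  np.array([0.0, 0.8, 0.6, 0.1]),
    "ai-policy":     np.array([0.5, 0.0, 0.1, 0.4]),
}

# One simple profile: the average of the embeddings of articles
# the user actually engaged with (clicked, read to the end, etc.).
profile = np.mean([articles["chip-shortage"], articles["ai-policy"]], axis=0)

print(rank_articles(profile, articles))
# → ['chip-shortage', 'ai-policy', 'cooking-tips']
```

The key property Systrom alludes to is that similarity search over fixed-length vectors is cheap, so "understanding" an article happens once at ingest time, not on every feed request.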
That doesn’t mean Artifact has ignored the recent boom in AI that generates text for users. The startup has a business relationship with OpenAI that provides access to the API for GPT-4, OpenAI’s latest and greatest language model, which powers the premium version of ChatGPT. When an Artifact user selects a story, the app offers the option of summarizing the news article in a few bullet points, letting the user get the gist of the story before committing to reading it. (Artifact warns that because the synopsis was generated by AI, it “may contain mistakes.”)
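Feature-wise, this is a single completion call per article. A hedged sketch of the flow, with the model call kept behind a plain function so it can be exercised without network access (the prompt wording, the `summarize` helper, and the appended caveat are my own assumptions, not Artifact's implementation):

```python
from typing import Callable

def build_summary_prompt(article_text: str, bullets: int = 3) -> str:
    """Prompt asking the model for a short, factual bulleted synopsis."""
    return (
        f"Summarize the following news article in {bullets} bullet points. "
        "Stick strictly to facts stated in the article.\n\n" + article_text
    )

def summarize(article_text: str, complete: Callable[[str], str]) -> str:
    """Summarize via any completion function, e.g. a GPT-4 API wrapper.

    `complete` takes a prompt string and returns the model's text; injecting
    it keeps the summarization flow testable with a stub.
    """
    summary = complete(build_summary_prompt(article_text))
    # Mirror the caveat Artifact shows users: AI output can be wrong.
    return summary + "\n\n(Summary generated by AI; may contain mistakes.)"

if __name__ == "__main__":
    # Stand-in for a real GPT-4-backed completion function.
    fake_llm = lambda prompt: "- Point one\n- Point two\n- Point three"
    print(summarize("Full article text goes here.", complete=fake_llm))
```

With OpenAI's Python client, `complete` would wrap a chat-completion request against a GPT-4 model and return the message text.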
Today, Artifact is once again riding the generative-AI rocket, this time to tackle the annoying problem of clickbait headlines. The app already provides a way for users to flag articles as clickbait, and if multiple people tag an article, Artifact will not spread it. But, Systrom says, sometimes the problem isn’t with the article but with the headline. It can promise too much, or be misleading, or lure readers into clicking to find information that was withheld from the headline. From a publisher’s perspective, garnering more clicks is a big plus, but it’s frustrating for users, who feel manipulated.
Systrom and Krieger have devised a futuristic way to mitigate the problem. If a user flags a headline as dicey, Artifact submits the content to GPT-4. The algorithm then analyzes the content of the story and writes its own headline. That more descriptive title is what the user sees in their feed. “Ninety-nine times out of a hundred, that title is both factual and clearer than the original headline the user is complaining about,” says Systrom. At first, the rewritten headline is shared only with the user who complained. But if multiple users report a clickbait title, all of Artifact’s users will see the AI-generated headline instead of the publisher-provided one. Eventually, Systrom says, the system will figure out how to identify and replace offending headlines without user input. (GPT-4 could do that on its own now, but Systrom doesn’t trust it enough to hand the process off to the algorithm entirely.)
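The escalation Systrom describes (show the rewrite to each complainer immediately, then to everyone once enough users have flagged the headline) is essentially a per-article counter with a threshold. A toy sketch of that logic; the threshold value, class shape, and the idea of caching one rewrite per article are assumptions for illustration, since Artifact hasn't detailed its internals:

```python
from collections import defaultdict

FLAG_THRESHOLD = 3  # assumed; Artifact hasn't said how many reports it takes

class HeadlineModerator:
    """Tracks clickbait flags and decides which headline each user sees."""

    def __init__(self, rewrite):
        self.rewrite = rewrite              # e.g. a GPT-4-backed headline rewriter
        self.flaggers = defaultdict(set)    # article_id -> ids of users who flagged
        self.ai_headlines = {}              # article_id -> cached AI headline

    def flag(self, article_id: str, user_id: str, story_text: str) -> None:
        """Record a clickbait report and generate the rewrite once per article."""
        self.flaggers[article_id].add(user_id)
        if article_id not in self.ai_headlines:
            self.ai_headlines[article_id] = self.rewrite(story_text)

    def headline_for(self, article_id: str, user_id: str, original: str) -> str:
        """Complainers see the rewrite right away; everyone does past the threshold."""
        flags = self.flaggers[article_id]
        if user_id in flags or len(flags) >= FLAG_THRESHOLD:
            return self.ai_headlines.get(article_id, original)
        return original
```

The cache matters in practice: one model call per flagged article, rather than one per reader, keeps the cost of the feature proportional to the number of bad headlines, not the size of the audience.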