
Artificial intelligence (AI) should not be left to decide on its own what content people should read, as it still lacks the key capabilities needed to properly vet information.
Arjun Narayan, Head of Trust, Safety, and Customer Experience at SmartNews, said human oversight and guardrails are critical to ensuring the right content is pushed to users.
The news aggregator platform curates articles from 3,000 news sources around the world, and users spend an average of 23 minutes per day on the app. Available for Android and iOS, the app has been downloaded over 50 million times. Headquartered in Tokyo, SmartNews has teams of linguists, analysts and policymakers in Japan and the United States.
The company’s mission is to provide authoritative and relevant news to its users, given the vast amount of information available online. “News must be trustworthy. Our algorithm evaluates millions of articles, signals and human interactions to deliver the top 0.01% of the most important stories at the moment,” the company states on its website.
The platform uses machine learning and natural language processing to identify and prioritize the news users want, and applies metrics to assess the credibility and accuracy of news sources.
This matters because information is increasingly consumed through social media, where veracity is often questionable, Narayan said.
A proprietary AI engine powers a news feed tailored to each user’s preferences, including topics they follow. Various machine learning systems also analyze and evaluate indexed articles to determine whether the content complies with company policies; non-compliant sources are filtered out, he said.
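SmartNews has not disclosed how its filtering works, but the idea of scoring articles against policy checks and dropping sources that repeatedly fail can be sketched minimally as follows. The threshold values, field names, and per-source fail-rate rule here are illustrative assumptions, not the company's actual logic.

```python
# Hypothetical sketch of source-level policy filtering: score each
# article against policy checks, then drop sources whose articles
# fail too often. All thresholds and field names are assumptions.
from collections import defaultdict

POLICY_FAIL_THRESHOLD = 0.5   # assumed per-article compliance cutoff (0-1)
SOURCE_FAIL_RATE_LIMIT = 0.3  # assumed: filter sources failing >30% of checks

def filter_sources(articles):
    """Return the set of sources to filter out.

    articles: list of dicts with 'source' and 'compliance_score' (0-1),
    where the compliance score would come from upstream ML classifiers.
    """
    totals, fails = defaultdict(int), defaultdict(int)
    for article in articles:
        totals[article["source"]] += 1
        if article["compliance_score"] < POLICY_FAIL_THRESHOLD:
            fails[article["source"]] += 1
    return {s for s in totals if fails[s] / totals[s] > SOURCE_FAIL_RATE_LIMIT}
```

Filtering at the source level rather than per article matches the quote above ("non-compliant sources are filtered out"), though a production system would weigh many more signals.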
He added that customer support reports directly to his team, so user feedback can be quickly reviewed and acted upon if necessary.
Like many others, the company is now looking at generative AI and evaluating how best to use new technologies to further enhance content discovery and search. Narayan declined to provide details on what these new features might look like.
However, he stressed the importance of maintaining human oversight, given that AI still falls short in some areas.
For example, large language models are less effective at handling breaking news and current affairs, but perform with greater accuracy and reliability when analyzing evergreen content such as DIY and how-to articles.
These AI models are also good at summarizing large amounts of content and supporting features such as enhanced content delivery, he noted. His team is evaluating the effectiveness of using large language models to determine whether specific content satisfies the company’s editorial policies. “It’s still early days,” he said. “What we’ve learned is that the accuracy or precision of an AI model is only as good as the data that feeds and trains it.”
Most models today are not “conscious” and lack an understanding of context, Narayan said. These issues may be resolved over time as models are trained on more data, he said.
Equal effort must be invested in ensuring that the training data is “processed” and free of bias or normalized discrepancies. He noted that this is especially important for generative AI, where open datasets are commonly used to train models. He called this a “shady” part of the industry, one that will lead to issues of copyright and intellectual property infringement.
“At the moment, there isn’t much public disclosure about what kind of data is being fed into the AI models,” he said. “This needs to change. There should be transparency about how the AI is trained and the decision-making logic, as these AI models will shape the way we see the world.”
He expressed concern about “hallucinations,” in which AI generates false information that people take to be true.
Issues like this further highlight the need for some form of governance, with human oversight of content pushed to users, he said.
Organizations should also audit what they get from AI models and implement the necessary guardrails. For example, if an AI system is asked for instructions on building a bomb or writing a plagiarized article, a safety net should be put in place.
“At this point AI is not ready to run on its own,” Narayan said, adding that investments in human capabilities and oversight will always be required. “We need guardrails. We don’t want content that isn’t proofread or fact-checked.”
And amid all the hype, it’s important to keep in mind the limitations of generative AI. Generative AI models have not yet been trained to handle breaking news and do not perform well with real-time data.
Where AI has worked well is in enhancing the recommendation engine SmartNews uses to prioritize articles it believes will be of interest, based on background signals such as a user’s reading patterns. These AI systems have been in use for the past decade, and their rules and algorithms have been continuously fine-tuned, he explained.
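At its simplest, prioritizing articles by reading-pattern signals amounts to scoring each candidate against a user's topic affinities and sorting. The affinity weights and scoring formula below are illustrative assumptions; SmartNews' actual engine is proprietary and far more sophisticated.

```python
# Hedged sketch of reading-pattern-based article ranking: score each
# article by the user's affinity for its topics, then sort descending.
# The affinity weights and linear scoring are illustrative assumptions.

def rank_articles(articles, topic_affinity):
    """Sort articles by a simple topic-affinity score.

    articles: list of (title, topics) tuples.
    topic_affinity: dict mapping topic -> weight derived from reading history.
    """
    def score(item):
        _, topics = item
        return sum(topic_affinity.get(topic, 0.0) for topic in topics)
    return sorted(articles, key=score, reverse=True)
```

A decade of fine-tuning, as described above, would mean continually adjusting how such affinity weights are derived and combined with other signals like recency and credibility.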
He was reluctant to give details on how generative AI could be incorporated, but noted its potential to facilitate human-machine interaction.
Anyone, even those with no technical background, can get the answers they need if they know how to ask the right questions, and those answers can then be reused in daily activities, he said.
However, parts of generative AI remain gray areas.
According to Narayan, the news platform is in ongoing discussions with publishers about how to manage articles written entirely by AI, as well as articles written by humans but enhanced with AI. And if rules were established for such articles, how would they be enforced?
Additionally, there are questions about the level of disclosure that should be applied to different variations so that readers know how and when AI is used.
Regardless of how these questions are ultimately addressed, editorial oversight remains an obligation. Narayan again stressed the importance of transparency, saying that all content must meet SmartNews’ editorial policy of accuracy and authenticity.
He expressed concern over tech layoffs that have eliminated AI ethics and trust teams. “I’ll say it now: it’s very important to continue [human] monitoring and to invest in safety guardrails. Lack of diligence will create monsters,” he said. “Automation is great [and allows] systems to scale, but nothing beats human ingenuity.”
