Science has an AI problem. This group says it can fix it.

Marta Serra-Garcia, Associate Professor of Economics and Strategy

One of the main points is transparency. The checklist asks researchers to describe each machine learning model in detail, including the code, the data used to train and test the model, the hardware specifications used to produce the results, the experimental design, the project goals, and the model's limitations. According to the authors, the standard is flexible enough to accommodate a wide range of nuances, such as private datasets and complex hardware configurations.
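To make the idea concrete, here is a minimal sketch of how such a reporting checklist might be represented and verified programmatically. The field names below paraphrase the items listed above; they are illustrative assumptions, not the paper's actual checklist wording, and the example URL is hypothetical.

```python
from dataclasses import dataclass, fields

# Illustrative only: these fields paraphrase the reporting items described
# in the article; they are not the paper's actual checklist wording.
@dataclass
class ModelReport:
    code_availability: str = ""    # where the code can be found
    training_data: str = ""        # data used to train the model
    test_data: str = ""            # data used to evaluate the model
    hardware: str = ""             # hardware used to generate results
    experimental_design: str = ""  # how the experiments were set up
    project_goals: str = ""        # what the study set out to show
    limitations: str = ""          # known limitations of the model

def missing_items(report: ModelReport) -> list[str]:
    """Return the names of checklist items that were left blank."""
    return [f.name for f in fields(report) if not getattr(report, f.name).strip()]

# A partially completed report: the check flags what still needs disclosure.
report = ModelReport(code_availability="https://example.org/repo",
                     training_data="public benchmark, 2015-2020 split")
print(missing_items(report))
```

A reviewer-side tool along these lines could flag incomplete submissions automatically before a paper ever reaches human review.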

Although the increased stringency of the new standards may delay the publication of some studies, the authors believe that widespread adoption could significantly increase the overall rate of discovery and innovation.

“What we ultimately care about is the pace of scientific progress,” says lead author Emily Cantrell, a sociologist pursuing a Ph.D. at Princeton. “Making sure that published papers are high quality and provide a solid foundation for future papers to build on can accelerate the pace of scientific progress. That is where we really need to focus.”

Kapoor agreed, noting that such errors are costly. “At the collective level, this is just a huge waste of time,” he says. And that time costs money. Once wasted, that money can have devastating downstream effects: limiting the kinds of science that attract funding and investment, undermining ventures mistakenly built on flawed science, and demotivating countless young researchers.

In working toward a consensus on what the guidelines should include, the authors said they aimed to strike a balance: simple enough to be widely adopted, yet comprehensive enough to catch as many common mistakes as possible.

Researchers can adopt the standards to improve their own work, reviewers can use the checklist to evaluate papers, and journals can then adopt the standard as a requirement for publication.

“There are a lot of avoidable errors in the scientific literature, especially in applied machine learning research,” Narayanan says. “And we want to help people. We want to keep honest people honest.”

The paper, “Consensus-based recommendations for machine-learning-based science,” published May 1 in Science Advances, includes the following authors: Emily Cantrell, Princeton University; Kenny Peng, Cornell University; Thanh Hien (Hien) Pham, Princeton University; Christopher A. Bail, Duke University; Odd Erik Gundersen, Norwegian University of Science and Technology; Jake M. Hofman, Microsoft Research; Jessica Hullman, Northwestern University; Michael A. Lones, Heriot-Watt University; Momin M. Malik, Center for Digital Health, Mayo Clinic; Priyanka Nanayakkara, Northwestern University; Russell A. Poldrack, Stanford University; Inioluwa Deborah Raji, University of California, Berkeley; Michael Roberts, University of Cambridge; Matthew J. Salganik, Princeton University; Marta Serra-Garcia, University of California, San Diego; Brandon M. Stewart, Princeton University; Gilles Vandewiele, Ghent University; and Arvind Narayanan, Princeton University.

Adapted from a Princeton University news release.


