
Nature will not publish images created in whole or in part using generative AI. Credit: Artem Medvediev/Alamy
Should Nature allow the use of generative artificial intelligence (AI) in the creation of images and videos? The explosion of content produced with generative AI tools such as ChatGPT and Midjourney has rapidly expanded the capabilities of these platforms. As a result, the journal has been debating, discussing and consulting on this question for several months.
Apart from articles that are specifically about AI, Nature will not, at least for the time being, publish content in which photographs, videos or illustrations have been created wholly or partly using generative AI.
Artists, filmmakers, illustrators and photographers whom we commission and work with are asked to confirm that none of the work they submit has been generated or augmented using generative AI (go.nature.com/3c5vrtm).

Tools such as ChatGPT threaten transparent science; here are the ground rules for their use.
Why ban the use of generative AI in visual content? Ultimately, it is a question of integrity. The publishing process is underpinned by a shared commitment to integrity in both science and art. That commitment includes transparency: as researchers, editors and publishers, we all need to know the sources of our data and images so that we can verify that they are accurate and true. Existing generative AI tools do not provide access to their sources, so such verification is not possible.
Then there is attribution. When pre-existing work is used or cited, it must be attributed. This is a core principle of science and art, and one that generative AI tools do not satisfy.
Consent and permission are also factors: consent must be obtained, for example, when an identifiable individual is depicted, and permission when the intellectual property of an artist or illustrator is involved. Again, general applications of generative AI fail these tests.
Generative AI systems are often trained on images whose sources have not been identified. Copyrighted works are routinely used to train generative AI without the appropriate permissions. In some cases, privacy is violated, for example when a generative AI system creates what look like photographs or videos of people without their consent. Beyond these privacy concerns, the ease with which such "deepfakes" can be created also accelerates the spread of disinformation.
Proper precautions
For now, Nature permits the inclusion of text generated with the aid of generative AI, subject to certain caveats (see go.nature.com/3cbrjbb). Use of such large language model (LLM) tools must be documented in a paper's methods or acknowledgements section, and authors are expected to provide the sources of all data, including those generated with AI assistance. In addition, no LLM tool will be accepted as an author of a research paper.
The world is on the brink of an AI revolution. Much is expected of this revolution, but AI, and generative AI in particular, is rapidly upending long-established conventions in fields such as science, art and publishing. In some cases these conventions took centuries to develop, and the result is a system that protects the integrity of science and shields content creators from exploitation. All of these achievements are at risk if AI is not handled with care.
Regulatory and legal systems in many countries are still formulating their responses to the rise of generative AI. Until they catch up, Nature, as a publisher of research and creative work, will maintain a simple "no" to the inclusion of visual content created using generative AI.
