With the recent emergence of deep learning text-to-image platforms such as Midjourney and Stable Diffusion, people can produce striking digital works of art in seconds simply by entering a short descriptive text prompt, something like "a wizard casting a spell on top of a mountain."
Many of these new tools are relatively easy for ordinary people to use and don’t require years of learning the basics of drawing and painting.
Many of us are well aware of the potential benefits of machine learning: it helps businesses manage their data more smoothly, helps medical professionals make more accurate diagnoses, sifts misinformation out of the daily news, and more. But, not surprisingly, there are also legitimate concerns about potential AI pitfalls, from the technology's exploitation to create eerily convincing deepfakes, to the social implications of algorithmic bias, to fears of mass surveillance surrounding AI technologies like facial recognition.
And now this new generation of tools is raising a fresh series of concerns about the ethics of AI.
Human Artists ‘Fight the Code’
These systems are known as diffusion models, a type of generative model that produces output resembling the data it was trained on.
First introduced in 2015, diffusion models work by learning to progressively destroy training data through the repeated addition of Gaussian noise, and then to recover it by reversing this "noising" or diffusion process. This approach has proven more powerful than generative adversarial networks (GANs) for image generation.
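To make that "noising" process concrete, here is a minimal sketch of the forward half of a diffusion model. It assumes NumPy and a common linear variance schedule; the step count and schedule values are illustrative choices, not taken from any specific paper or product. During training, a neural network learns to predict the noise added at each step so that the process can be run in reverse at generation time.

```python
# Minimal sketch of the forward ("noising") half of a diffusion model.
# The step count and variance schedule below are illustrative assumptions.
import numpy as np

T = 1000                                # total number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)      # per-step noise variances (a common linear schedule)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # fraction of the original signal surviving to step t

def add_noise(x0: np.ndarray, t: int, rng: np.random.Generator):
    """Sample x_t from q(x_t | x_0): blend the clean image with Gaussian noise."""
    eps = rng.standard_normal(x0.shape)  # pure Gaussian noise
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return x_t, eps                      # eps is what the denoising network learns to predict

# Example: noise a fake 64x64 "image" most of the way toward pure static.
rng = np.random.default_rng(0)
clean = rng.uniform(-1.0, 1.0, size=(64, 64))
noisy, target_noise = add_noise(clean, t=900, rng=rng)
```

Generating an image then amounts to starting from pure noise and repeatedly removing a little of it, step by step, using the trained denoising network.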
Because these models are trained on millions of images collected from the internet, this "noise" and "denoise" diffusion process can even be used to create images in the style of a specific artist simply by typing in that artist's name. A recent study indicates that these training sets may also contain harmful or illegal content.
One particularly popular artist whose name and images are often used to train and prompt these mimetic AI artworks is Polish concept artist Greg Rutkowski.
The problem is that Rutkowski himself never approved of using his images in this way. Worse, some of these AI-generated knockoffs even have his signature on them.
"The way [AI art generation] is developing and the direction it's going in is scary," Rutkowski said in an interview with Crypto Mile. "Nowadays it takes only 5-10 minutes to create something a human could previously only create in two weeks. We'll probably only have to wait a year before it's good enough to compete with a living artist."
There is an ethical conundrum behind such AI image generators: to create something in the style of a particular artist, that artist's work must be scraped from the internet and fed into AI training datasets.
However, none of the companies behind these image generators have explicitly asked the artists themselves for permission, nor do they compensate them.
Vladimir Petkovic, creative director at Adobe, said in a recent LinkedIn post:
"Copyrighted artwork, the artist's personal name, and style are simply ingested with no respect for their rightful author. AI has established itself as a powerful tool that can enhance many creative workflows, but until the right systems are in place to [rightfully] attribute and reward everyone whose work is being used to train these algorithms, I personally don't consider it ethical to use them to create 'art.'"
Unforeseen Consequences
Beyond copyright infringement, which can threaten the livelihoods of human artists, AI could have unexpected and far-reaching ramifications within the industry. For example, the widespread use of AI imagery could discourage would-be artists from pursuing creative careers, since they may come to see it as futile to compete in a market that could one day be dominated by machine-generated art.
Additionally, AI has the potential to disrupt the educational pipeline within the art industry, where it is common for budding creators to invest large sums in courses taught by established artists and art schools to gain the marketable skills that help them move up.
In fact, the threat of AI automating the work of professional artists and illustrators is no longer a nebulous prospect. Some artists who typically take small commissions have already found work drying up, especially from clients with tighter budgets.
Springfield, Missouri-based artist Danielle Harris told CBC News: "[Clients] frankly said they'd just let this AI do it: not as good, but much cheaper."
Conversely, there have already been reports of clients being scammed: paying for what they believe to be original work but actually receiving something AI-generated. Similarly, a US-based artist won first prize in a state fair digital painting contest with a work actually produced by Midjourney and printed on canvas.
So far, while the number of artists who embrace the technology is growing, others are speaking out against it. Recently, artists on an online portfolio site protested by posting "NO TO AI GENERATED IMAGES" graphics alongside their original work. Other artists have come together to form collectives like Spawning, the group behind Have I Been Trained?, which lets users see whether their work has been scraped for AI models and opt out. There have also been suggestions that AI models should exclude images created by living artists.
Some Kind of Data Laundering?
Other experts speculate that these models also serve as a form of data laundering, in which stolen data is converted for sale or use in supposedly legitimate databases. In essence, big tech companies can avoid copyright and accountability by creating and funding nonprofits that build datasets and train models for "research purposes," creating a pipeline from academia to commerce. The resulting models can then be shared with commercial companies, which monetize them by selling access through APIs.
That might seem like a stretch, until you make the eye-opening comparison between how generative AI is being deployed in the art world and in the music industry.
"Technically, these models create something new and should be protected by fair use," Devansh Devansh, Head of AI and Data Science at Clientell, said in a blog post. "But Stability AI [the company behind Stable Diffusion] has also been creating diffusion-based models for music. Unlike Stable Diffusion, this [Dance Diffusion] model does not use copyrighted data. It's no coincidence that the model shuns the work of an industry with better lawyers."
Art, literature, journalism, and music may be the first testing grounds for a wide range of rapidly developing AI models, but in fields such as film, photography, and fashion, human actors, directors, and models may likewise one day be replaced by AI-generated imagery.
After all, while it may seem fun to experiment with technologies that ostensibly "democratize" these industries, such experimentation should be done carefully, transparently, and without harming people's livelihoods. Beyond these practical questions, we also need to ask ourselves: is "art" made without a soul really art?