Will AI be able to read minds someday? Japan’s breakthrough sparks debate

Tokyo, Japan – Yu Takagi couldn’t believe his eyes. One Saturday afternoon in September, he sat alone at his desk and watched in awe as artificial intelligence decoded a subject’s brain activity and created images of what the person was seeing on a screen.

“I still remember the first time I saw the [AI-generated] images,” Takagi, a 34-year-old neuroscientist and assistant professor at Osaka University, told Al Jazeera.

“I went to the bathroom, looked at myself in the mirror, saw my face, and thought, ‘Okay, that’s normal. Maybe I’m not going crazy.’”

Takagi and his team used Stable Diffusion (SD), a deep-learning AI model developed in Germany in 2022, to analyze the brain scans of subjects who viewed up to 10,000 images inside an MRI machine.

After Takagi and his research partner Shinji Nishimoto built a simple model to “translate” brain activity into a readable format, Stable Diffusion was able to generate high-fidelity images that looked strikingly similar to the originals.

The AI could do this without being shown the photos beforehand or being trained in any way to produce the results.
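For readers curious about how the “translation” works: according to the published paper, activity from early visual cortex is mapped to Stable Diffusion’s image latent, while activity from higher visual cortex is mapped to the text-conditioning embedding, using simple linear models. The Python sketch below illustrates that idea; the file names, array shapes, and ridge-regression settings are illustrative assumptions, not the authors’ code.

```python
# A minimal sketch of the "translation" step, assuming hypothetical
# preprocessed data files. File names, shapes, and the ridge-regression
# choice are illustrative, not the authors' actual code.
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical training data: one row of voxel activations per viewed image.
X_early = np.load("voxels_early_visual_train.npy")    # (n_images, n_voxels)
X_higher = np.load("voxels_higher_visual_train.npy")  # (n_images, n_voxels)
Z_image = np.load("sd_image_latents_train.npy")       # (n_images, latent_dim)
C_text = np.load("caption_embeddings_train.npy")      # (n_images, embed_dim)

# Two simple linear "translators": one maps activity in early visual
# cortex to Stable Diffusion's image latent, the other maps activity in
# higher visual cortex to the text-conditioning embedding.
to_image_latent = Ridge(alpha=1.0).fit(X_early, Z_image)
to_text_embed = Ridge(alpha=1.0).fit(X_higher, C_text)

# For a held-out brain scan, predict both representations...
x_early_test = np.load("voxels_early_visual_test.npy")
x_higher_test = np.load("voxels_higher_visual_test.npy")
z_pred = to_image_latent.predict(x_early_test)
c_pred = to_text_embed.predict(x_higher_test)

# ...which would then be handed to Stable Diffusion's denoising loop:
# the predicted latent is partially noised and denoised again, guided by
# the predicted conditioning embedding, to produce the reconstructed image.
# (The diffusion step itself is omitted from this sketch.)
```

The striking part, as the article notes, is that the generative model itself needs no retraining: all the subject-specific learning happens in the two linear maps.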

“I really didn’t expect this kind of result,” Takagi said.

Takagi stressed that the breakthrough does not, at this point, represent mind-reading; the AI can only reconstruct images a person has actually seen.

“This is not mind-reading,” Takagi said. “Unfortunately, there are many misconceptions about our research.”

“We can’t decode imaginations or dreams; I think this is too optimistic. But, of course, there is potential in the future.”

However, the development has raised concerns about how such technology might be used in the future.

Despite his excitement, Takagi admits that such fears are not without merit, given that the technology could be exploited by people with malicious intent or used without consent.

“For us, the issue of privacy is of paramount importance. If a government or institution can read people’s minds, it’s a very sensitive issue,” Takagi said. “There needs to be a high-level discussion to make sure this can’t happen.”

Yu Takagi and colleagues have developed a method to analyze and visually represent brain activity using AI [Yu Takagi]

Takagi and Nishimoto’s work has generated a lot of buzz in the tech community, which has been electrified by rapid advances in AI, including the release of ChatGPT, which produces human-like text in response to user prompts.

Their paper detailing the findings ranks in the top 1% of engagements among the more than 23 million studies tracked to date, according to data company Altmetric.

The research was also accepted to the Conference on Computer Vision and Pattern Recognition (CVPR) in June 2023, a common route for validating significant breakthroughs in neuroscience.

However, Takagi and Nishimoto are careful not to get carried away by their findings.

Takagi argues that there are two main bottlenecks on the way to true mind-reading: brain-scanning technology and AI itself.

Despite advances in neural interfaces, including electroencephalography (EEG) brain-computer interfaces, which detect brain waves via electrodes attached to a subject’s head, and fMRI, which measures brain activity by detecting changes related to blood flow, scientists believe we could still be decades away from being able to accurately and reliably decode imagined visual experiences.

Yu Takagi and his colleagues used MRI to scan the brains of subjects for the experiment. [Yu Takagi]

In Takagi and Nishimoto’s study, subjects had to sit in an fMRI scanner for up to 40 hours, which was both expensive and time-consuming.

In a 2021 paper, researchers from the Korea Advanced Institute of Science and Technology pointed out that traditional neural interfaces “lack chronic recording stability” due to the soft and complex nature of neural tissue.

Additionally, the researchers wrote, acquiring a fine signal with high sensitivity from the target region is still not trivial, because electrical noise greatly perturbs the sensitivity of the recording.

Takagi acknowledges that these capabilities are improving by the day, but the limitations of current AI present the second bottleneck.

“I’m optimistic about AI, but I’m not optimistic about brain technology,” Takagi said. “I think this is the consensus among neuroscientists.”

Takagi and Nishimoto’s framework could be used with brain-scanning devices other than MRI, such as EEG, or with hyper-invasive technologies like the brain-computer implants being developed by Elon Musk’s Neuralink.

Even so, Takagi believes there is currently little practical application for his AI experiments.

For a start, the method cannot yet be transferred to novel subjects. Because the shape of the brain differs from person to person, a model built for one individual cannot be applied directly to another.

However, Takagi sees a future where the technology could be used for clinical, communication, and even entertainment purposes.

Ricardo Silva, a professor of computational neuroscience at University College London and research fellow at the Alan Turing Institute, told Al Jazeera:

“This could be one way of developing markers for the detection and progression assessment of Alzheimer’s disease, by assessing in which ways one could spot persistent anomalies in images of visual navigation tasks reconstructed from a patient’s brain activity.”

Some scientists believe AI could one day be used to detect conditions such as Alzheimer’s disease. [Yu Takagi]

Silva shares concerns about the ethics of a technology that could one day be used for genuine mind-reading.

“The most pressing question is the extent to which data collectors should be compelled to disclose in full detail the uses of the data collected,” he said.

“It’s one thing to sign up for a way of taking a snapshot of your mind at a younger age, perhaps for future clinical use, but it is a completely different thing to have it used in secondary tasks such as marketing, or worse, in legal cases against someone’s own interests.”

Still, Takagi and his partner have no intention of slowing down their research. They are already planning version two of the project, which will focus on improving the technique and applying it to other modalities.

“We are now developing a much better [image] reconstruction technique,” Takagi said. “And it’s happening at a very rapid pace.”


