Is AI dulling our minds?


A recent MIT Media Lab study reports that “overreliance on AI-driven solutions” can contribute to “cognitive atrophy” and diminished critical-thinking abilities. The study is small and has not been peer-reviewed, but it raises a warning that even artificial intelligence assistants are happy to acknowledge. When we asked ChatGPT whether AI could make us dumber or smarter, it replied, “It depends on how we interact with it: as a crutch or as a tool for growth.”

The Gazette spoke with faculty from a variety of fields, including education researchers, philosophers, and the director of the Derek Bok Center for Teaching and Learning, to discuss critical thinking in the age of AI. We asked them how AI can promote or inhibit critical thinking, and whether over-reliance on the technology is dulling our minds. The interviews have been edited for length and clarity.


Tina Grotzer.

Veasey Conway/Harvard Staff Photographer

We are better than Bayesian

Tina Grotzer
Principal Research Scientist in Education, Graduate School of Education

Many students use AI without a good understanding of how it works from a computational or Bayesian perspective, which leads to overconfidence in the AI’s output. It is therefore important to teach them to be critical and insightful about how to use it and what it has to offer. But it is even more important to help them understand how the embodied human mind works and how powerful it can be when used well.

Neuroscience research makes a convincing case that while the human mind is computational and uses Bayesian processes, it is in many ways “better than Bayesian.” For example, work by Antonio Damasio and colleagues highlights how somatic markers enable us to make quick, intuitive leaps. Research in my lab has found that kindergarten children who use strategic information when playing games make more informed moves more quickly than a purely Bayesian approach would allow. Moreover, our minds can detect important distinctions and exceptions in covariation patterns that drive conceptual change, revising the model in ways a purely Bayesian approach cannot capture. This is just the tip of the iceberg of how the human mind is more powerful than AI. There are many other examples (for instance, as far as I know, AI can provide analogies but cannot reason analogically).

In my Becoming an Expert Learner course, I aim to help students explore the wealth of research on how the human mind works and to make the most of their particular minds (with their normative and non-normative properties). I then ask them to consider carefully how and when to use their own minds versus AI. I hope this leads to a deeper understanding of their wonderful minds and abilities.


Dan Levy.

Photo by Dan Levy

The output is not the end goal

Dan Levy
Senior Lecturer in Public Policy, Harvard Kennedy School; co-author of “Teaching Effectively with ChatGPT”

In the book that Ángela Pérez Albertos and I wrote, we emphasize that there is no such thing as “AI is good for learning” or “AI is bad for learning.” I think that AI can be used to benefit learning, and it can also be used to hinder learning.

When students use AI to do the work for them rather than with them, they don’t learn as much. Learning cannot occur unless the brain is actively engaged in making sense of what is being learned. It doesn’t happen by simply asking ChatGPT, “Please tell me the answer to the question the instructor is asking.”

If you think you’re in school to produce output, it might seem fine for AI to help you produce that output. But if you are in school because you want to learn, remember that output is just a means to achieving that learning; it is usually not the final goal. If you confuse the two, you can end up using AI in ways that aren’t helpful for learning. AI can also hinder learning when students are overstretched and overworked and come to see it only as a time-saving device. If, however, AI saves time on tedious tasks and that time goes toward more serious learning, then I think that’s a positive.

There are reasons to be optimistic about AI and reasons to be concerned about it, but it’s not like we can say, “Okay, let’s forget about AI,” because AI is here to stay. We need to figure out how to work with it and leverage it to advance our goals as educators, learners, and human beings.


Christopher Dede.

Niles Singer/Harvard Staff Photographer

There’s an owl on my shoulder

Christopher Dede
Senior Research Fellow, Graduate School of Education

Athena, the Greek goddess of wisdom, is always depicted with an owl on her shoulder. We should now be asking, “Can AI be like the owl that helps us become smarter?”

I think the key to making the owl a positive force rather than a negative one is to not let it do your thinking for you. We know that generative AI cannot understand human context, so it cannot provide wisdom about social, emotional, and contextual situations; those are simply not part of its repertoire. What GenAI is very good at is absorbing large amounts of data and making computational predictions in ways that can enhance thinking.

For me, the contrast is between doing things better and doing better things. Ninety-five percent of what I’ve read about AI in education is about how it can help us do things better, when we should also be asking whether we are doing better things. One of the traps of GenAI is that even when you’re using it well, if you’re only using it to do the same old work better and faster, you end up doing the wrong work faster.

When AI does the thinking for you, whether through autocomplete or in more sophisticated ways like “let the AI write the first draft and then I’ll edit it,” it undermines your critical thinking and creativity. And since other people are using AI too, you could end up submitting the same job-application letter as everyone else, which could cost you the job. You must always remember that the owl sits on your shoulder, not the other way around.


Fawwaz Habbal.

Stephanie Mitchell/Harvard Staff Photographer

Only humans can solve human problems

Fawwaz Habbal
Senior Lecturer in Applied Physics, John A. Paulson School of Engineering and Applied Sciences

The AI and Human Cognition course I teach aims to demystify AI, distinguish between human intelligence and machine intelligence, and explore the fundamentals of AI and how to use it effectively.

AI is great at data processing and statistics, but it lacks the ability to create truly innovative and creative solutions. Machines calculate, but they don’t have human experience. Although AI systems run advanced statistics and advanced mathematics on extremely fast chips operating at incredible computational speeds, we must remember that they rely on human-generated data, and that data is more or less the same across different AI platforms. Ask different platforms the same question and the answers are almost always very similar, because they draw on essentially the same data. AI can show us how to put things together, but it cannot help us build things that are relevant to human context. Machine learning relies on statistical adjustment, whereas humans organize their lives around meaning.

AI can engage in processes similar to critical thinking, such as data analysis, problem solving, and modeling, but it also has limitations. Critical thinking requires human experience, human insight, and ethics and moral reasoning. Today’s machines lack all of that, and their processes are only recursive.

I worry that students will rely too much on AI. We must remind them that we are trying to help them become future leaders of society, and that as part of their leadership development they will add new value to society. That is a human endeavor. I’ve never seen AI do really good systems analysis or deep critical thinking, and I find it very difficult to imagine that AI could engage in reflective thinking, at least today. We must be careful not to assume that AI will solve our problems. Human challenges are complex, and only humans can solve them.


Karen Thornber.

Stephanie Mitchell/Harvard Staff Photographer

Taking shortcuts without knowing the map

Karen Thornber
Harry Tuchman Levin Professor of Literature and Professor of East Asian Languages and Civilizations; Richard L. Menschel Faculty Director, Derek Bok Center for Teaching and Learning

AI is forcing us to think differently about various elements of critical thinking. For example, while AI can be a helpful partner in analysis and reasoning and certain types of problem solving, it is not always successful when it comes to evaluation, and reflection cannot be outsourced to AI (yet).

Certainly, it is possible to use AI in ways that degrade some of our skills, both lower-order abilities such as memory and factual knowledge and higher-order skills such as critical thinking. Just as turn-by-turn navigation has meant that many of us now know the streets of the cities we live in in far less detail than we knew cities before smartphones and car-based GPS became popular, the ease of using an LLM will likely let students sidestep certain kinds of difficult mental work, making it harder to persuade them to develop those skills in the first place.

The key is to use AI to support our learning and critical thinking, and, in the words of the American Historical Association’s recent Guiding Principles for Artificial Intelligence in History Education, to “support the intentional and conscientious development of AI literacy.” Some critical thinking skills cannot be outsourced to AI (yet), so they will become more valuable. The proliferation of “cheap intelligence” (more code, text, and images than ever before) means that skills of discernment, evaluation, judgment, thoughtful planning, and deliberation are more important than ever.


Jeff Behrends.

Veasey Conway/Harvard Staff Photographer

Lessons from other cognitive labor tools

Jeff Behrends
Senior Researcher and Associate Senior Lecturer, Department of Philosophy

I am very concerned about the impact that general-purpose LLMs will have on critical reasoning skills. We already know that the tools we use for cognitive labor can change the way we do that work. For example, we know that taking notes by hand yields better recall than typing them, and that the predictive-text features built into word processors and email clients change our word choices. Given these patterns, I would be stunned if frequent use of LLMs across many contexts did not lead to real changes in how users approach reasoning tasks.

A recent study by the MIT Media Lab provides at least some initial evidence of that. I have fewer concerns about AI as an aid to expert-level reasoning within a subject area, such as using AI in diagnosis to help ensure that doctors don’t miss unusual illnesses. The problem is the hype surrounding LLMs as general reasoners that allow us to (at least partially) think less about any topic whatsoever. It is in the interests of those who create this technology to make us believe that its possibilities are limitless and will usher in a wonderful new future for everyone. We should be cautious before getting too excited about the latest technology trends.


