Artificial intelligence (AI) tools such as ChatGPT, Claude, and Gemini are becoming increasingly popular in higher education, raising questions about how students can engage with AI meaningfully. Rather than using AI as a shortcut to complete assignments, students need to develop strong prompting skills that enhance their analytical and problem-solving abilities. Well-crafted AI prompts foster critical thinking by encouraging problem-solving, analysis, and synthesis, while also supporting ethical reasoning (Mollick and Mollick, 2023). By guiding students to create effective prompts and critically assess AI-generated content, educators can help them leverage AI as a thinking partner rather than a content generator.
Why is AI important for critical thinking?
Effective engagement with AI requires intentionality. Students who learn to craft precise, open-ended, and iterative prompts can use AI to support cognitive engagement. Thoughtful prompting helps students improve their problem-solving skills by encouraging them to refine their questions and seek deeper insights. It also strengthens analysis and synthesis by requiring students to compare perspectives, justify responses, and engage critically with information. Furthermore, effective prompts can surface ethical concerns such as AI bias and misinformation, promoting digital literacy (Kasneci et al., 2023). Instead of resisting AI, educators should focus on integrating it into coursework in ways that support critical thinking and intellectual engagement.
Teach students how to interact with AI
Teaching students how to interact with AI starts with an understanding of prompt engineering. A strong AI prompt combines clarity, specificity, and iterative refinement. Teachers can guide students by demonstrating the difference between open-ended and closed-ended prompts. For example, rather than asking, “What is photosynthesis?”, which yields a general response, students could ask, “Explain how photosynthesis affects global climate patterns.” Additionally, layering prompts can deepen AI engagement. By refining responses with added constraints, such as “explain this in the context of deforestation and its impact on atmospheric carbon levels,” students can gain a more nuanced understanding of the topic. It is also important to encourage iteration: students should adjust prompts based on AI responses and reflect on how revision shapes their learning (Mollick, 2023).
Another important aspect of AI literacy is the critical assessment of AI-generated content. Because AI output is not always reliable, students should be trained to verify information and compare AI-generated responses against academic sources. Educators can incorporate tasks in which students validate AI responses, identify biases, and analyze conflicting perspectives. For example, students can ask an AI tool, “What are the advantages and disadvantages of AI in employment?” and assess whether the response reflects bias (Bender et al., 2021). By engaging students in these exercises, educators can help them develop the skills needed to critically evaluate AI-generated information.
Designing AI-driven assignments for active learning
Teachers can also integrate scaffolded, AI-driven tasks into coursework to promote critical thinking. AI-supported Socratic questioning can be a valuable tool here. For example, students can use AI to generate rebuttals to a thesis statement and then assess the quality of the AI-generated responses. Similarly, debate preparation can be strengthened by prompting AI to act as an opponent that challenges students' positions on a topic. AI-generated case studies offer another useful application, allowing students to analyze and refine AI-generated scenarios to sharpen their critical reasoning skills. Additionally, using AI as a brainstorming partner, where students must justify accepting, revising, or rejecting AI suggestions, encourages deeper engagement with the material.
Ethical considerations for AI use
Beyond the practical application of AI prompts, ethical considerations must also be addressed. Avoiding over-reliance on AI is essential, and faculty members must ensure that AI complements rather than replaces students' thinking processes (Cotton, Cotton, and Shipway, 2023). Transparency is another important factor in responsible AI use. Students should be encouraged to disclose when they use AI and to reflect on how it affects their learning. Academic integrity policies should provide clear guidelines for the ethical use of AI in coursework while fostering an environment that promotes learning rather than punitive measures.
Best Practices for Teacher Implementation
To successfully integrate AI prompting skills into higher education, teachers need to provide opportunities for structured AI engagement. Assignments should be designed with specific AI-driven tasks aligned with learning goals. Additionally, educators can use AI for formative feedback, allowing students to refine their work before submitting final assignments. It is also important to assess students' ability to engage critically with AI. Rather than focusing on AI detection, faculty members should assess how well students interpret, critique, and refine AI-generated content.
Conclusion
As AI continues to reshape education, students need to develop the skills required to engage with it in meaningful ways. By promoting effective prompting strategies, encouraging critical assessment, and designing assignments that require intellectual interaction with AI, educators can transform AI from a passive tool into an active partner in learning. The goal is not to resist AI but to harness its potential to reinforce critical thinking and prepare students for an AI-driven future.
Rick Holbeck is the executive director of Online Teaching and Learning at Grand Canyon University. With extensive experience in educational technology, faculty development, and online education, he specializes in integrating new technologies into teaching and learning. His expertise includes AI-driven pedagogy, instructional design, and strategies for engaging online learners. Rick regularly presents on topics such as AI literacy, academic integrity in the digital age, and best practices for online instructors.
References
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922.
Cotton, Debby R. E., Peter A. Cotton, and J. Reuben Shipway. 2023. “Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT.” Innovations in Education and Teaching International. https://doi.org/10.1080/14703297.2023.2190148.
Kasneci, Enkelejda, Kathrin Seßler, et al. 2023. “ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education.” Learning and Individual Differences 103: 102274. https://doi.org/10.1016/j.lindif.2023.102274.
Mollick, Ethan. 2023. Co-Intelligence: Living and Working with AI. Harvard Business Review Press.
Mollick, Ethan, and Lilach Mollick. 2023. “Assigning AI: Seven Approaches for Students, with Prompts.” SSRN Working Paper.
