Over the past two years, I have witnessed an increase in students' overall use of generative AI. Not surprisingly, more students are using generative AI to help with writing.
In my undergraduate business communication course, the percentage of students who declared the use of generative AI for written assessments (business proposals) has steadily increased over four semesters, from 35% in 2023 to 61% in 2025.

*Please note that there were about 350 students per semester, for a total of about 1,400 students over the four semesters/two years.
You may wonder how these students used generative AI in their presentations.
They reported using generative AI to:
- Create and edit visuals (e.g., images, prototypes/mockups, logos)
- Get inspiration for rhetorical devices (e.g., taglines, stories, alliteration)
- Prepare for Q&A (e.g., generating questions, reviewing/structuring answers)
Beyond verbal language, visuals are an important aspect of communication, and students need to be prepared for increasingly multimodal communication tasks in the workplace (Brumberger, 2005). Digital media has shifted the balance between words and images, as can be seen on websites, in reports, and even in manuals (Bolter, 2003). Students' ability to communicate through writing and speaking should be complemented by proficiency in visual language. Currently, generative AI can lower the barriers to creating visual representations (Ali et al., 2024).
For example, students in my business communication course use AI tools to complement their descriptions by creating prototypes and mockups of project ideas. If they cannot generate exactly what they need, they can edit the output with traditional editing software or, more recently, with software that has generative AI editing capabilities, such as Adobe Firefly, which allows users to select specific areas of an image to modify. This makes brainstorming and editing possible without advanced technical skills. These and other AI image generators, including DALL-E (OpenAI) and Midjourney, have opened up possibilities for communicators to use visuals to enhance their messages.
Here are the AI visual tools students reported using in written and spoken assessments over the past two years.
What is interesting from the list is not only the increase in the number of AI tools used, but also (1) the types of tools used for specific purposes, such as Logopony for logos, Galileo AI for app interface design, and tools for creating slides, and (2) the AI tools used for editing, such as Photoshop AI and Adobe Firefly. Beyond that, you can see how students use a variety of tools from constantly evolving providers, such as Magic Studio, Dream Lab, OpenAI (which integrates DALL-E with ChatGPT and has newer products such as Sora), and Google (with Gemini 2.0 Flash). Generative AI is also accessible on a variety of platforms; Meta AI, for instance, is integrated into WhatsApp, a cross-platform messaging app.
Ultimately, this list gives us a glimpse of the tools undergraduate business students are getting their hands on, and educators should consider trying them out. More importantly, because not all graphics are equally effective, educators can guide students to think carefully about the visuals and graphics they will ultimately use (Mayer and Moreno, 2003).
Graphics can be of the following types:
- Decorative
They are neutral: they can enhance aesthetics, but they are neither interesting nor directly relevant.
- Seductive
They may be highly interesting, but they are not directly relevant; they can distract the audience and direct cognitive processing toward material that is not relevant.
- Instructive
They are directly relevant to the topic (Sung and Mayer, 2012).
However, this does not mean that all visuals should be instructive; the choice depends on the communicator's goals. For example, if the main goal is enjoyment, decorative visuals can enhance aesthetics, while seductive visuals are highly interesting and can increase satisfaction. That said, AI tools tend to create visuals with many unrelated details that can be distracting and lead to cognitive overload (DeLeeuw and Mayer, 2008). Students therefore need to refine their prompts by being more specific and precise (Hwang and Wu, 2024).
There are limits to what AI can currently do:
- It is not truly innovative, as it learns from existing data.
- It cannot fully understand subtle aspects such as culture, values, and emotional nuances (Hwang and Wu, 2024).
However, it can provide a stepping stone for students to visualize their ideas.
Encourage students to be clear about what they want to achieve when using AI tools and to be proactive in selecting, rearranging, editing, and refining visuals to suit their goals.
Eileen Wanli Lam is a senior lecturer and technology enthusiast at the National University of Singapore. She is fascinated by educational technology and enjoys conversations about the latest industry developments. She is also passionate about professional communication, student engagement, and educational leadership.
References
Ali, Safinah, Prerna Ravi, Randi Williams, Daniella DiPaola, and Cynthia Breazeal. “Constructing Dreams Using Generative AI.” Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (2024): 23268-23275.
Bolter, Jay David. “Critical Theory and the Challenge of New Media.” (2003).
Brumberger, Eva R. “Visual Rhetoric in the Curriculum: Pedagogy for a Multimodal Workplace.” Business Communication Quarterly 68, no. 3 (2005): 318-333.
DeLeeuw, Krista E., and Richard E. Mayer. “A Comparison of Three Measures of Cognitive Load: Evidence for Separable Measures of Intrinsic, Extraneous, and Germane Load.” Journal of Educational Psychology 100, no. 1 (2008): 223.
Hwang, Yongjun, and Yi Wu. “Methodology of Visual Communication Design Based on Generative AI.” International Journal of Advanced Smart Convergence 13, no. 3 (2024): 170-175.
Mayer, Richard E., and Roxana Moreno. “Nine Ways to Reduce Cognitive Load in Multimedia Learning.” Educational Psychologist 38, no. 1 (2003): 43-52.
Sung, Eunmo, and Richard E. Mayer. “When Graphics Improve Liking but Not Learning from Online Lessons.” Computers in Human Behavior 28, no. 5 (2012): 1618-1625.