Managers who use AI to write emails deemed less honest, less compassionate, and less confident

Applications of AI


While many professionals now use artificial intelligence tools to help them write, new research suggests that managers who use AI to compose everyday workplace emails risk appearing unreliable. AI-assisted messages were generally considered polished and professional, but managers who relied heavily on such tools were viewed as less honest, less compassionate, and less competent by their employees. The survey results were published in the International Journal of Business Communication. The study provides evidence that AI-generated messages are often considered effective and efficient, but may come at a social cost.

The release of generative artificial intelligence tools like ChatGPT has caused a surge of interest in using them for everyday writing tasks, including in professional settings. Many workers now rely on these tools to draft emails, reports, or internal notes. Research has already shown that AI-assisted writing can increase the clarity, accuracy, and professionalism of workplace messages. However, little is known about how the senders of such messages are perceived.

The goal of the new research was not to study the writing itself, but to examine how readers judge the character of people who use AI to compose messages. In other words, does using AI affect whether authors appear trustworthy, honest, or capable? And does the answer change depending on whether the message was written primarily by AI or only lightly edited by it?

The study also aimed to explore how these perceptions change depending on who is using AI. Are people more tolerant of their own AI use than of others'? Do they judge managers differently than their peers?

“I think AI will have a big impact on our interpersonal relationships,” said study author Peter W. Cardon. “People are using AI a lot to help them communicate. This is already happening in the workplace. I want people to recognize the impact of communicating through AI.”

The research team surveyed 1,158 full-time working professionals in the United States. Participants were randomly presented with one of eight different scenarios describing an email message congratulating a team on achieving its goals. The scenarios varied along two dimensions: who the message came from (the participant themselves or their supervisor) and how much of the message was generated by AI (from low to high assistance).

Some messages showed only light editing by AI, while others were written primarily by AI tools based on short prompts. In some cases, participants were shown the original prompt given to the AI; in others, they were not. After reading their assigned message, participants answered a series of questions about perceived authorship, effectiveness, professionalism, honesty, compassion, confidence, and their comfort with the use of AI.

The study included both numerical rating scales and open-ended questions asking participants to explain why they felt authorship mattered in workplace communication.

Overall, the results showed that people generally viewed AI-assisted messages as professional and effective, but were less likely to trust the sender, especially when the sender was a supervisor using a high level of AI assistance.

In particular, participants were less likely to believe that supervisors were the true authors of messages that were heavily AI-assisted. While 93% agreed that the supervisor was the author in the low-assistance condition, only 25% agreed in the high-assistance condition without a visible prompt.

Nevertheless, heavily AI-assisted messages were not rated as ineffective. In fact, messages with higher AI involvement were sometimes seen as slightly more effective than those with less assistance. Participants often described AI as a useful tool for improving grammar, tone, and structure. Many said they did not mind AI being used to polish writing, as long as the content still reflected the sender's own ideas.

“The minor use of AI, primarily for small edits to professional emails, is generally considered appropriate,” Cardon told PsyPost.

Still, there was a clear tension between message quality and sender perception. Supervisors who relied heavily on AI were consistently rated as less honest, less compassionate, and less confident. Only about 40% of participants considered supervisors in the high-assistance condition to be honest, compared to over 80% in the low-assistance condition.

“The biggest surprise was the strength of the emotions,” Cardon said. “Many respondents expressed frustration about bosses using AI in their emails.”

The open-ended responses revealed several reasons behind this skepticism. Many participants expressed disappointment and frustration upon learning that a message, especially a congratulatory one, was AI-generated. Some described it as “lazy,” “disingenuous,” or “insincere.” Others said they felt the manager did not care enough to write a personal message. Some perceived this lack of effort as a lack of investment in the team's success.

Some participants also questioned the competence of supervisors who relied heavily on AI. Many respondents said that managers should be able to write simple emails without outside help, and that using AI for this purpose could signal a lack of leadership and communication skill.

The results also revealed an important perception gap between how participants viewed their own use of AI and how they judged others, particularly supervisors. People tended to rate their own AI-assisted writing more favorably than their supervisors'. When they imagined themselves using AI, they were more likely to view it as a helpful support tool. When supervisors used it, however, that use was more likely to raise questions about honesty and reliability, especially when transparency was low.

Despite these concerns, most participants said they were generally comfortable with AI being used for this type of message. Even in the high-assistance condition, the majority saw no problem with supervisors using AI to write congratulatory emails. But their comfort was often qualified. Many participants emphasized that the acceptability of AI use depends on the nature of the message. Messages with a relational or emotional tone, such as praise or support, were deemed less suitable for AI generation than factual updates or routine reminders.

Several respondents also raised long-term concerns about the repeated use of AI in workplace communication. Some worried that overuse could erode human connection or undermine team cohesion. Others feared that once AI becomes the default for all types of messages, even interpersonal ones, the workplace could begin to feel impersonal or transactional.

“Professionals should be aware of the reputational and relational risks of overusing AI in business communication,” Cardon advised.

Like all studies, this one has limitations. It focused on a single type of message (an email congratulating a team) and may not generalize to all workplace communication. Responses might differ for messages involving dispute resolution, feedback, or performance reviews. Future research could explore how perceptions vary across communication genres and professional contexts.

The study also focused on relationships with supervisors, where power dynamics can heighten concerns about honesty and trust. Perceptions may differ in peer-to-peer scenarios, or when subordinates use AI to communicate upward.

“We are in the early stages of mass AI use,” Cardon said. “The tools will continue to evolve, and people's attitudes may change.”

The researchers recommend further study of whether people feel AI use should be disclosed, and how such disclosure affects trust. They also suggest exploring how attitudes toward AI-assisted writing shift over time as these tools become more embedded in everyday work life.

“We want to accurately capture people's opinions, attitudes, and experiences as AI becomes embedded in daily communication,” Cardon explained. “We hope this information will help individuals use AI in ways that improve their lives and relationships. We are all on our own AI journeys. We need to discuss it, and use it thoughtfully and with purpose.”

The study, “AI-Assisted Workplace Writing: Professionalism, Reliability, and the Benefits and Disadvantages of Writing with AI,” was authored by Peter W. Cardon and Anthony W. Coman.
