Since 2022, Dr. Gloria Washington has been at the forefront of a human-centered approach to the artificial intelligence boom. As director of Howard's Human-Centered Artificial Intelligence Institute (HCAI), which is funded by the Office of Naval Research (ONR), she and her team of researchers work to ensure that AI is useful to the people it actually serves, including other HBCUs, industry, and government.
You may already be familiar with Washington's work in the AI space. This summer, Howard announced the completion of a Google-sponsored, Washington-led effort, Project Elevate Black Voice, which produced a database of more than 600 hours of African American dialect speech recorded across the United States. The dataset is designed to reduce errors when automatic speech recognition systems encounter different dialects, and Washington says future work on the project will help build a consortium of HBCUs to investigate how this data can be most effectively stored, protected, and used in future technologies, including AI.
“This is interesting because HBCUs that we have already worked with in the past are excited about using the dataset and developing fair use guidelines for how AI will impact the larger community of people who speak African American dialects of English,” Washington said. “This is an ongoing project, and I believe it will take us into new territory.”
As the spring 2026 semester approaches, Washington and HCAI staff continue to push the boundaries of how AI can best serve people, even people under incredible pressure.
Supporting decision-making under high stress
HCAI's current research focuses on improving tactical decision-making in high-stress situations. Specifically, Washington's team of researchers is working to design chatbots built on large language models (LLMs) – AI models trained on vast amounts of text – along with augmented reality tools that will help naval officers make better-informed decisions in the field. The effort is vast and complex, requiring model development on highly restricted military datasets, simulations of high- and low-stress environments, and in-depth studies of the effects of stress on cognitive load and situational awareness.
“The purpose of this tool is to reduce the burden of decision-making,” explained Christopher Watson, a third-year Ph.D. student, software engineer, and former educator. “So it would be a large language model combined with an augmented reality component that could interface with the model.” Simply put, it transforms text output into an interactive augmented reality display that uses colors, icons, and other graphical cues to indicate the importance of decisions.
Watson works on the LLM side of the project, fine-tuning the Tactical Decision Making Under Stress (TADMUS) model, which integrates a technique known as retrieval-augmented generation. In this process, the model is given access to a database of external materials, in this case military protocol documents, and searches it for potentially relevant information before responding to the user. This reduces hallucinations in the model and eases the cognitive load on users who may struggle to recall exact protocols in stressful situations.
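The retrieve-then-respond loop described above can be sketched in a few lines. This is a minimal illustration, not the TADMUS system: the protocol snippets are invented stand-ins, and a simple word-overlap score stands in for the real retriever.

```python
# Sketch of retrieval-augmented generation (RAG): before answering, search an
# external document store and ground the prompt in the best match, so the model
# answers from retrieved text rather than (possibly hallucinated) memory.
# The documents and scoring below are illustrative, not the project's data.
import re
from collections import Counter

PROTOCOLS = {
    "fire": "Protocol A-1: In case of onboard fire, seal adjacent compartments.",
    "flood": "Protocol B-2: For flooding, activate bilge pumps and report depth.",
    "collision": "Protocol C-3: After a collision, assess hull integrity first.",
}

def tokens(text: str) -> Counter:
    """Lowercase word multiset, ignoring punctuation."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    ranked = sorted(
        PROTOCOLS.values(),
        key=lambda doc: sum((tokens(query) & tokens(doc)).values()),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Prepend the retrieved protocol text as grounding context."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."
```

In a production system the retriever would typically use vector embeddings over real protocol documents, and the assembled prompt would be passed to the fine-tuned model; the keyword overlap here only illustrates the loop.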
However, for the models to be used in the field, they must reflect real-world contexts, and that has proven to be a challenge. Understandably, detailed images of active-duty naval vessels are rare, and few are unclassified, meaning there are limited datasets for simulating real-world naval missions.
This is where the work of senior researcher Saurav Aryal (B.S. '18, Ph.D. '21) begins. After combing through YouTube, Aryal was able to find about 100 images covering 30 types of ships, but not enough to reliably train the model. By flipping and zooming the images, his team was able to increase that number. But to make ships appear farther away – the view most useful for naval missions, and the hardest to find source imagery for – his lab turned to AI.
“The idea was that we could use generative AI to fill in the background of an image, thereby making it appear as if it were far away,” Aryal explained. “You can zoom in and make it look close, but previously you couldn't make it look far away. And now we're blowing up 100 images to something like 1,000 and pushing them further and further away. This is promising.”
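The augmentations Aryal describes can be sketched on toy two-dimensional "images" (lists of pixel rows). This is only an illustration of the idea: flipping doubles the dataset for free, and shrinking the ship while padding the frame with background mimics pushing it farther away. In the lab's pipeline, generative AI synthesizes that background; here a constant sea value stands in for it.

```python
# Toy sketch of two augmentations on grayscale pixel grids:
#   1) horizontal flip (doubles the dataset),
#   2) "push farther away" (downsample the ship, pad with background so the
#      frame size grows and the ship occupies a smaller share of it).
# A constant SEA value stands in for the generatively filled background.

SEA = 0  # background pixel value standing in for generated ocean/sky

def flip_horizontal(img):
    """Mirror each row of the image."""
    return [row[::-1] for row in img]

def push_farther(img, factor=2, pad=1):
    """Downsample the ship by `factor`, then surround it with `pad` rings of
    background pixels so it appears more distant in a larger frame."""
    small = [row[::factor] for row in img[::factor]]
    width = len(small[0]) + 2 * pad
    out = [[SEA] * width for _ in range(pad)]          # top padding
    out += [[SEA] * pad + row + [SEA] * pad for row in small]  # padded ship rows
    out += [[SEA] * width for _ in range(pad)]         # bottom padding
    return out

# One source image yields several training variants:
ship = [[0, 5, 5, 5],
        [5, 5, 5, 5]]
augmented = [ship, flip_horizontal(ship), push_farther(ship)]
```

Real pipelines would operate on actual image arrays and use an inpainting/outpainting model for the background, but the bookkeeping is the same: each transform multiplies the effective size of a scarce dataset.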
Tools designed for people
The accuracy of the TADMUS model means little if the tool is not useful to real humans. For it to effectively reduce stress and cognitive load, it must blend computer science, design, and psychology.
Chief researcher Dr. Lucrecia Williams focuses on human-computer interaction in health and education. For the ONR project, her lab is testing how stress affects decision-making.
“Specifically, we created two simulated environments: a calm environment and a stressful environment,” she said. “A quiet environment has light ambient noise, like you might hear in a coffee shop, and plenty of time to read and make decisions. However, a stressful environment includes loud ambient noise, people may be shouting orders, and very strict time limits.”
As a pilot test, Williams asked students to run a simulation and respond to prompts using TADMUS in either a calm or stressful environment. Students also completed the NASA Task Load Index questionnaire, designed to measure the cognitive load of a task, and the Perceived Stress Scale questionnaire. The results of this study will help further refine the model and allow researchers to identify what kind of information should be provided to effectively reduce stress in different environments.
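The NASA Task Load Index mentioned above is scored in a standard way: participants rate six subscales from 0 to 100, and the "raw TLX" score is simply their mean, while the full version weights each subscale by how often a participant picks it across 15 pairwise comparisons. The sketch below shows that arithmetic; the ratings are invented examples, not study data.

```python
# Sketch of standard NASA-TLX scoring (not the lab's analysis pipeline).
# Six subscales, each rated 0-100; raw TLX is their mean, weighted TLX
# weights each rating by its tally from 15 pairwise comparisons.

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings: dict[str, float]) -> float:
    """Unweighted (raw) TLX: mean of the six subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

def weighted_tlx(ratings: dict[str, float], tallies: dict[str, int]) -> float:
    """Weighted TLX: ratings weighted by pairwise-comparison tallies,
    which sum to 15 across the six subscales."""
    return sum(ratings[s] * tallies[s] for s in SUBSCALES) / 15

# Invented example responses for the two simulated environments:
calm = {"mental": 40, "physical": 10, "temporal": 20,
        "performance": 30, "effort": 35, "frustration": 15}
stressful = {"mental": 80, "physical": 20, "temporal": 90,
             "performance": 60, "effort": 75, "frustration": 70}
tallies = {"mental": 5, "temporal": 4, "effort": 3,
           "frustration": 2, "performance": 1, "physical": 0}  # sums to 15
```

Comparing scores like these across the calm and stressful conditions is how a study can quantify whether a tool such as TADMUS actually lowers perceived workload.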
As important as the information a model provides is how it is provided. Especially in high-stakes scenarios, it's important to ensure that the model is a tool and not another distraction.
“In computer science, we only focus on the technical side, but what does it look like in a real-world functional state? We're training this large language model and using AI to be able to provide better information, but that's only half the problem,” said senior researcher Dr. Simone Smarr, whose work focuses on applying text-based models to augmented reality tools. “Then the question becomes, ‘How do we display that information?’ And we're trying to explore this different, more interactive way of displaying it.”
Smarr's lab is still searching for the best augmented reality device for the job, but has narrowed it down to wearable glasses similar to Ray-Ban Meta glasses that can quickly provide information to the crews of naval vessels. To ensure the final form is intuitive for users, she draws on her experience in UX design, and her team is constantly testing ways to alert users.
“We have an alert system built into the system,” Smarr explained. “So the question is, number one, it's noticeable, but is it helpful? At what point is there too much going on? Within the cognitive load space, and especially within augmented reality, this is the kind of UX design problem I'm very interested in, because I've done a lot of general UX work on what happens when there's too much on a screen.”
Dr. Jay Nias, who leads the evaluation of TADMUS' accuracy, perhaps best sums up the interdisciplinary, boundary-pushing spirit of HCAI, reflecting on the tool's potential civilian uses in the high-stress moments, like driving, that we experience every day.
“I think everything we do can always be applied to other parts of our lives,” Nias said.
New horizons in human-centered research
Although currently focused on naval missions, researchers at the institute see potential applications in a wide range of fields. The tool could one day be used in medical emergencies, evacuations, disaster response, and any other situation that requires rapid decision-making. Aryal sees potential for the image enhancement research in any field where data is lacking, naming everything from astronomy to satellite image analysis. Meanwhile, Williams envisions a version of her lab's simulation test that could measure the effectiveness of AI educational tools, which will only grow in importance as the technology becomes increasingly pervasive in classrooms.
Under Washington's leadership, HCAI serves to proclaim Howard's position as a technology leader, ensuring that the next generation of computer scientists remains at the cutting edge of AI research without forgetting the human element.
“Howard has always been at the forefront of developing STEM professionals. Within the Institute, our researchers are not only leading the way in creating software innovations, but also in showing how the nation's workforce can come to Howard to learn advanced skills in artificial intelligence,” Washington said. “We are studying how our unique methods of mentoring and educating young scientists are creating a new workforce for the technical jobs of the future.”
