Malvern, Pa. — Nicholas Gahman loves the challenge of exploring new frontiers of technology.
“What I find most enjoyable is when I'm out in the wilderness, when I don't know what I'm doing and I have to figure something out,” he said.
His undergraduate courses in machine learning and artificial intelligence (AI) left him eager to learn more about these new tools. As a Schreyer Honors Scholar at Penn State Abington, he was able to create his own custom integrated undergraduate/graduate degree, combining a bachelor's degree in computer science with a master of artificial intelligence through Penn State Great Valley.
“All of the professors are really enthusiastic,” Gahman said. “They seem to really love what they do, and they seem to care about us, their students. When they encourage us, teach us, and work with us, that enthusiasm is infectious.”
He said he especially enjoyed the challenge of his capstone project. He and a team of classmates investigated whether machine learning models could accurately reduce the number of cybersecurity alerts professional analysts need to triage. Using hyperparameter tuning, Gahman and his team trained machine learning models on a large hierarchical dataset of cybersecurity incidents to predict whether a given incident was a benign alert or an active malicious cyberattack.
“These results show that integrating machine learning into cybersecurity workflows can significantly reduce workloads for human analysts and improve the scalability of incident response systems,” Gahman and his team wrote.
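The article doesn't include the team's code, but a minimal sketch of this kind of workflow might look like the following. It assumes a tabular incident dataset and scikit-learn; the synthetic data, model choice, and parameter grid are all illustrative assumptions, not the team's actual setup.

```python
# Minimal sketch: hyperparameter tuning a classifier to separate
# benign alerts (0) from malicious incidents (1).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report

# Stand-in for a labeled incident dataset: rows are alerts, and the
# class imbalance mimics the fact that most alerts are benign.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Hyperparameter tuning via cross-validated grid search.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300],
                "max_depth": [None, 10, 20]},
    scoring="f1",  # favor catching true attacks on imbalanced data
    cv=3,
)
grid.fit(X_train, y_train)

# Alerts the model confidently scores as benign could be deprioritized,
# shrinking the queue a human analyst has to triage.
print(grid.best_params_)
print(classification_report(y_test, grid.predict(X_test)))
```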
Through hands-on projects like these, Gahman said, he became proficient at selecting and implementing the appropriate AI model for a given task. That experience, he said, gave him a deeper understanding of AI principles and of how model training works.
“I feel like [the master’s program] gave me the tools to understand what's going on, so AI isn't a black box to me,” he said.
Without that understanding, he said, people might uncritically accept the output that AI tools give them without considering how reliable it is.
Gahman mentioned a classic example of an AI-powered applicant tracking system that uses historical data about a company's employees to identify desirable traits and help select future hires. If that data comes from companies that discriminated against certain groups in their hiring practices, the system can perpetuate human bias by latching onto irrelevant demographic attributes such as race, gender, and sexual orientation, and continue to deny employment opportunities to minority groups. Alternatively, the system could simply make mistakes, drawing inaccurate conclusions from the data and leading to poor decisions, Gahman said.
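One way to surface the kind of bias Gahman describes is to audit a model's outcomes across demographic groups. The sketch below is purely illustrative (the data, scoring model, threshold, and group labels are all invented for demonstration): it computes per-group selection rates and the gap between them, a simple demographic-parity check.

```python
# Illustrative bias audit: compare hiring rates across groups.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical hiring data: 'group' is a protected attribute that
# should not influence outcomes; 'score' is a model's output.
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=1000, p=[0.7, 0.3]),
    "score": rng.random(1000),
})
# Simulate a biased model that systematically scores group B lower,
# as could happen if it learned proxies from discriminatory data.
df.loc[df["group"] == "B", "score"] *= 0.8

hired = df["score"] > 0.5
rates = hired.groupby(df["group"]).mean()
print(rates)
# Demographic parity gap: a large difference flags potential bias.
print("parity gap:", rates.max() - rates.min())
```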
“There are so many good things about AI, but there are also a lot of risks,” Gahman said. “The AI ethics course I took gave me a lot of food for thought on how to use AI responsibly.”
Understanding the explainability of AI, or why AI tools reach the conclusions they do, was one of the key lessons he learned in the ethics course for reducing risk and improving tool performance.
“The only real way to prevent AI from making poor or unfair decisions in these complex situations is to know why the AI is making the decisions it does,” he said.
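The article doesn't say which explainability techniques the course covered. As one illustrative possibility (an assumption, not Gahman's method), the sketch below uses permutation importance from scikit-learn to ask which input features a trained model actually relies on:

```python
# Illustrative explainability sketch using permutation importance:
# shuffle each feature and measure how much the model's accuracy
# drops, revealing which features drive its decisions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Features whose shuffling hurts accuracy most are the ones the model
# really depends on -- a first step toward answering "why?".
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```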
Even as he recognizes the potential pitfalls, Gahman said he remains amazed by the power of AI and its myriad applications, including autonomous driving, cancer and disease detection, stock market prediction, and language translation.
“It's affecting our lives in so many ways. There's so much about it that's cool, and so much it can be used for. We're living in science fiction,” he said.
Gahman received his master's degree in December and began work this month as an AI research engineer at Lockheed Martin Space.
“It's like my dream job. I'm going to learn a lot, and it's going to be so fun and interesting,” he said. “I'm pleased to be on the cutting edge of technology, and I'm really looking forward to using my skills in real-life situations.”
