The world around us is constantly being scanned by adaptive radar systems. From salt flats to mountains and everywhere in between, adaptive radar is used to detect, locate, and track moving objects. Just because the human eye can't see into these ultra-high frequency (UHF) ranges doesn't mean pictures aren't being taken.
Adaptive radar systems have been around since World War II, but over the past few decades they have run up against fundamental performance barriers. By leveraging modern AI approaches and lessons learned from computer vision, Duke University researchers are now breaking through those barriers and leading the way for the rest of the field.
In a new paper published July 16 in the journal IET Radar, Sonar and Navigation, Duke University engineers have shown that convolutional neural networks (CNNs), a type of AI that has revolutionized computer vision, can be used to significantly enhance modern adaptive radar systems.
And in a move that echoes the growing computer vision boom, they have opened up a large dataset of their digital landscapes for other AI researchers to further their work.
“Traditional radar approaches are very good, but they're not enough to meet industry demand for products like autonomous vehicles,” said Shyam Venkatasubramanian, a graduate research assistant in the lab of Vahid Tarokh, the Rhodes Family Professor of Electrical and Computer Engineering at Duke.
“We are working to bring AI to the adaptive radar space to address problems such as object detection, location and tracking that the industry needs to solve.”
At the most basic level, radar isn't hard to understand: High-frequency radio pulses are sent out and an antenna collects data from the reflected waves. But as technology advances, so do the concepts used in modern radar systems.
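At its core, every radar converts the round-trip delay of an echo into a range. The sketch below illustrates that pulse-echo principle; the numbers are purely illustrative and not tied to any particular radar system or to the paper:

```python
# Minimal pulse-echo ranging sketch: a pulse travels to the target and
# back at the speed of light, so the round-trip delay implies the range.
# Illustrative values only; not tied to any particular radar system.

C = 299_792_458.0  # speed of light in m/s

def range_from_delay(round_trip_delay_s: float) -> float:
    """Target range implied by an echo's round-trip delay."""
    return C * round_trip_delay_s / 2.0  # halve: the pulse goes out and back

# An echo arriving about 66.7 microseconds after transmission
# corresponds to a target roughly 10 km away.
print(f"{range_from_delay(66.7e-6) / 1000:.1f} km")
```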
With the ability to shape and direct signals, process multiple contacts at once, and filter out background noise, the technology has advanced greatly over the past century. But radar can only progress so far using these techniques alone.
Adaptive radar systems still struggle to accurately identify and track moving objects, especially in complex environments like mountainous terrain.
To move adaptive radar into the AI era, Venkatasubramanian and Tarokh looked to the history of computer vision for inspiration: In 2010, researchers at Stanford University released a massive image database of more than 14 million annotated images called ImageNet.
Researchers around the world have used ImageNet to test and compare new AI approaches that have become industry standards.
In the new paper, Venkatasubramanian and his collaborators show that using the same AI approach can significantly improve the performance of current adaptive radar systems.
“Our work is similar to that of early users of AI in computer vision and developers of ImageNet, but within the scope of adaptive radar,” Venkatasubramanian said. “Our proposed AI takes processed radar data as input and outputs a prediction of the target's location through a simple architecture that we believe is comparable to the predecessors of most modern computer vision architectures.”
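To make that description concrete, here is a minimal sketch of the general idea: a small CNN that regresses a target-location estimate from a processed radar heatmap. This is an illustration under invented assumptions, not the authors' published architecture; the input size, layer widths, and output convention are all made up for the example:

```python
# Illustrative sketch only: NOT the architecture from the paper.
# A tiny CNN that maps a processed radar heatmap to a 2-D location estimate.
import torch
import torch.nn as nn

class RadarLocalizerCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Two conv/pool stages extract spatial features from the heatmap.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        # A small head regresses two coordinates (e.g., range/azimuth bins).
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# One 64x64 processed radar map in, one (x, y) location estimate out.
model = RadarLocalizerCNN()
heatmap = torch.randn(1, 1, 64, 64)  # batch, channel, height, width
print(model(heatmap).shape)          # torch.Size([1, 2])
```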
While the research group has not yet tested its method in the field, it has benchmarked the AI's performance with a modeling and simulation tool called RFView, which incorporates Earth's topography and terrain into its modeling toolbox to improve accuracy.
Continuing down the computer vision path, they created 100 airborne radar scenarios based on landscapes across the continental United States and released them as an open-source asset called “RASPNet.”
The dataset is a valuable asset: only a handful of teams have access to RFView, but the researchers received special permission from RFView's creators to build more than 16 terabytes of data over several months and make it publicly available.
“I am pleased to see this groundbreaking research published, and especially to see the associated data now available in the RASPNet repository,” said Hugh Griffiths, OBE FREng, Fellow of the IEEE and the IET and THALES/Royal Academy of Engineering Chair of RF Sensors at University College London, who was not involved in the research.
“This will undoubtedly stimulate further research in this important area and allow results to be more easily compared.”
The included scenarios were curated by radar and machine learning experts and vary in geographic complexity: the easiest for an adaptive radar system to tackle is the Bonneville Salt Flats, while Mount Rainier is the most challenging. Venkatasubramanian and his group hope that others will take their ideas and datasets and build even better AI approaches.
For example, in a previous paper, Venkatasubramanian showed that an AI tuned to a specific geographic location can deliver up to a sevenfold improvement in object localization over traditional methods. If the AI can choose a scenario it has already been trained on that is similar to the current environment, it should see a significant performance boost.
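One way to picture that scenario-selection step is as a nearest-neighbor lookup over simple environment descriptors, where the system picks the pretrained scenario closest to current conditions. The descriptors, names, and values below are hypothetical illustrations, not part of RASPNet's actual interface:

```python
# Hypothetical sketch of scenario matching; names and descriptors are
# invented for illustration and are not RASPNet's actual interface.
import math

# (terrain_roughness, clutter_level) per pretrained scenario, each in 0..1.
SCENARIO_DESCRIPTORS = {
    "salt_flats": (0.05, 0.10),
    "rolling_hills": (0.45, 0.50),
    "mountains": (0.95, 0.90),
}

def closest_scenario(current: tuple[float, float]) -> str:
    """Return the pretrained scenario whose descriptor is nearest."""
    return min(
        SCENARIO_DESCRIPTORS,
        key=lambda name: math.dist(SCENARIO_DESCRIPTORS[name], current),
    )

# A moderately rough, cluttered environment matches "rolling_hills",
# so the model trained on that scenario would be the one deployed.
print(closest_scenario((0.5, 0.4)))
```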
“We think this will have a tremendous impact on the adaptive radar community,” Venkatasubramanian said. “As we move forward and continue to add features to our dataset, we want to give the community everything they need to move the field forward toward using AI.”
More information:
Shyam Venkatasubramanian et al. “Data-Driven Target Localization Using Adaptive Radar Processing and Convolutional Neural Networks” IET Radar, Sonar and Navigation (2024). DOI: 10.1049/rsn2.12600
Shyam Venkatasubramanian et al. “RASPNet: A Benchmark Dataset for Radar Adaptive Signal Processing Applications” arXiv (2024). DOI: 10.48550/arxiv.2406.09638
Courtesy of Duke University
Citation: Powering Adaptive Radar with AI and Enormous Open Source Dataset (July 19, 2024), retrieved July 19, 2024 from https://techxplore.com/news/2024-07-radar-ai-enormous-source-dataset.html
This document is subject to copyright. It may not be reproduced without written permission, except for fair dealing for the purposes of personal study or research. The content is provided for informational purposes only.