Neuroanatomy powers hybrid AI in acoustic target detection

“Current automated sound target detection (STD) methods perform well under controlled conditions, but their performance degrades rapidly at low signal-to-noise ratios (SNR) and with visually hidden targets, while standalone BCI systems suffer from high false alarm rates. To overcome these limitations, we proposed a hybrid approach that combines the complementary strengths of neural perception and acoustic feature learning,” explained study author Luzheng Bi, a researcher at Beijing Institute of Technology. Core innovations include (a) Tri-SDANet, an EEG decoding model that incorporates neuroanatomical priors from signal source analysis, (b) a reliability-driven fusion strategy that adaptively integrates BCI and automated detection outputs, and (c) experimental validation of a streaming mode that simulates real-world scenarios. “This integrated solution delivers robust detection performance with high versatility, providing a practical tool for security protection and environmental reconnaissance.”

The hybrid system builds on several technical advances. The Tri-SDANet model employs a neuroanatomy-based spatial partitioning strategy, splitting the 60-channel EEG signal into channel groups corresponding to the temporal, frontal, and parieto-occipital lobes, each processed by a dedicated spatiotemporal filter. The automatic detection module integrates a state-of-the-art model trained on log-Mel spectrogram features. “The fusion framework calls the BCI only when the automatic detector is uncertain, reducing human workload while maintaining accuracy,” said first author Jianting Shi.
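The "call the BCI only when the detector is uncertain" idea can be sketched as a simple confidence-gated fallback. This is an illustrative assumption, not the authors' implementation: the paper's fusion adaptively weights the two outputs by reliability, whereas the fixed confidence band, function names, and thresholds below are hypothetical.

```python
# Minimal sketch of an uncertainty-gated fusion rule (illustrative only;
# not the Tri-SDANet authors' actual method). The acoustic detector decides
# confident cases alone; only ambiguous segments are deferred to the BCI.

def fuse_decision(acoustic_prob, bci_decision_fn, low=0.3, high=0.7):
    """Return (target_present, used_bci).

    acoustic_prob   : detector's estimated P(target present) for a segment
    bci_decision_fn : callable invoked only in the uncertain band; returns
                      True (target) or False (no target) from EEG decoding
    low, high       : hypothetical thresholds defining the uncertain band
    """
    if acoustic_prob >= high:      # detector confident: target present
        return True, False
    if acoustic_prob <= low:       # detector confident: no target
        return False, False
    # Ambiguous segment: defer to the human-in-the-loop BCI output,
    # so operator workload is limited to hard cases.
    return bci_decision_fn(), True
```

In practice the band (`low`, `high`) would be tuned on validation data to trade BCI workload against false alarms; the fixed thresholds here only illustrate the gating concept.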

“While hybrid systems have shown promising results, they still face challenges such as EEG decoding delays, operator fatigue, and adaptation to a more diverse range of audio targets. Future work will focus on optimizing the algorithms and hardware to reduce delays, developing user-friendly training protocols, and expanding the dataset to cover a wider range of acoustic scenarios,” said Shi. Overall, this brain-machine hybrid intelligence framework provides a generalizable solution for robust STD and bridges the gap between laboratory performance and real-world application demands.

Authors of this paper include Jianting Shi, Jiaqi Wang, Weijie Fei, Aberham Genetu Feleke, and Luzheng Bi.

This research was supported by the National Natural Science Foundation of China under Grant 62573053 and the Beijing Natural Science Foundation of China under Grant IS23064.

The paper “Brain-machine hybrid intelligence for powerful neuroanatomical-based acoustic target detection” was published in the journal Cyborg and Bionic Systems on October 17, 2025 (DOI: 10.34133/cbsystems.0438).



