summary: Researchers have published a paper detailing their BigNeuron project, an effort to establish a standard for accurate and fast automated neuron reconstruction, including algorithms based on deep learning.
This project provides an extensive set of publicly accessible neural reconstruction images and powerful tools for independent analysis. This could help researchers understand how the brain functions and changes over time.
Important facts:
- BigNeuron is an international effort involving computer scientists and neuroscientists from multiple institutions, aiming to create a standard framework for automated neuron reconstruction.
- This project provides a large dataset of publicly available neural reconstruction images and robust tools for analysis.
- The researchers used deep learning to develop an automated algorithm that identifies the shape of each neuron in an image, overcoming challenges such as differences in species, brain location, developmental stage, and image quality across datasets.
Source: Texas A&M
Dr. Shuiwang Ji, Professor of Computer Science and Engineering at Texas A&M University, is part of the collaborative research community that recently published a paper titled “BigNeuron: A Resource for Benchmarking and Predicting Algorithm Performance for Automated Tracing of Neurons in Light Microscopy Datasets” in the April issue of the journal Nature Methods.
Launched in 2015 and led by the Allen Institute for Brain Science, BigNeuron is an international effort bringing together computer scientists and neuroscientists from 12 institutions.
Its goal is to develop a standard framework that helps researchers define the best methods and algorithms for fast and accurate automated neuron reconstruction, and then to “bench test” those algorithms on large-scale image datasets using supercomputers.
This project will produce a large set of publicly available images and neuron reconstructions, along with robust tools and algorithms that researchers can use in their own analysis work.
The human brain alone contains tens of billions of neurons, each connected to others through thousands of thin “branches” that form 3D tree-like structures.
To understand how the brain functions and changes over time, scientists need to be able to digitally reconstruct these neuronal structures to determine the shape of each neuron in an image.
Scientists have worked for nearly 40 years to develop fully automated neuron reconstruction methods, using high-resolution microscopy to capture 3D images of individual neurons.
Reconstructing these neurons remains a challenge because of the diversity of species, brain locations, developmental stages, and the varying quality of microscopy image sets.
These factors make it difficult for existing algorithms to generalize effectively when applied to large numbers of images acquired in different laboratories.
To alleviate this problem, the team developed an automated algorithm that uses deep learning to identify the shape of each neuron in a given image.
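To make the idea concrete, here is a minimal, hypothetical sketch (in PyTorch) of the kind of voxel-wise deep learning segmentation such an algorithm performs. It is not the authors' actual model; every layer size and name below is an illustrative assumption.

```python
# Minimal, illustrative sketch (not the authors' model): a tiny 3D convolutional
# network that takes a single-channel microscopy volume and predicts, for every
# voxel, the probability that it belongs to a neuron. Layer sizes are arbitrary.
import torch
import torch.nn as nn

class TinyNeuronSegmenter(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=1),  # per-voxel logit
        )

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, 1, depth, height, width) image stack
        return torch.sigmoid(self.net(volume))  # voxel-wise neuron probability

if __name__ == "__main__":
    model = TinyNeuronSegmenter()
    fake_stack = torch.rand(1, 1, 32, 64, 64)  # stand-in for a 3D image volume
    probs = model(fake_stack)
    print(probs.shape)  # torch.Size([1, 1, 32, 64, 64])
```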
About this AI and neuroscience research news
Author: Leslie Henton
Source: Texas A&M
Contact: Leslie Henton – Texas A&M
Image: The image is credited to Neuroscience News
Original research: closed access.
“BigNeuron: A Resource for Benchmarking and Predicting Algorithm Performance for Automated Tracing of Neurons in Light Microscopy Datasets” by Shuiwang Ji et al. Nature Methods
Abstract
BigNeuron is an open community bench testing platform that aims to establish an open standard for accurate and fast automated neuron tracing. We collected a diverse set of image volumes across several species, representative of data obtained in many neuroscience laboratories interested in neuron tracing.
Here we report the gold-standard manual annotations generated on a subset of the available image datasets and the quantified tracing quality of 35 automated tracing algorithms. The purpose of generating such a diverse, hand-curated dataset is to advance the development of tracing algorithms and enable generalizable benchmarking.
Together with image quality features, we pooled the data into an interactive web application that allows users and developers to perform principal component analysis, t-distributed stochastic neighbor embedding, correlation and clustering analyses, visualization of imaging and tracing data, and benchmarking of automated tracing algorithms on user-defined data subsets. Image quality metrics account for most of the variance in the data, followed by neuromorphological features related to neuron size.
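As an illustration only, the following sketch reproduces offline the kinds of analyses the web application exposes interactively, using scikit-learn's PCA and t-SNE on a synthetic feature table; the feature values are made up and merely stand in for the paper's image quality and morphology metrics.

```python
# Hedged, offline sketch of the analyses the interactive web application offers:
# PCA and t-SNE on a per-image feature table. The data below are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical feature table: rows = image volumes, columns = features such as
# signal-to-noise ratio, contrast, neuron size, total branch length, ...
features = rng.normal(size=(200, 12))

scaled = StandardScaler().fit_transform(features)

pca = PCA(n_components=2)
pca_coords = pca.fit_transform(scaled)
print("variance explained by first two PCs:", pca.explained_variance_ratio_)

tsne_coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(scaled)
print("t-SNE embedding shape:", tsne_coords.shape)
```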
We observed that diverse algorithms can provide complementary information for accurate results and developed a method to iteratively combine techniques to generate consensus reconstructions.
The resulting consensus tree usually provides better ground truth estimates of neuronal structure than single algorithms on noisy datasets. However, certain algorithms may outperform consensus tree strategies in certain imaging conditions.
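The sketch below illustrates the consensus intuition in a deliberately simplified form: a voxel-wise majority vote over several hypothetical tracings rasterized as binary masks. The paper's actual consensus method operates on tree-structured reconstructions, so this is only a rough analogy.

```python
# Simplified sketch of the consensus idea: keep voxels that a majority of
# automated tracings agree on. The real method merges tree reconstructions,
# not voxel masks; this only illustrates combining complementary algorithms.
import numpy as np

def voxel_consensus(tracings: list, min_votes: int) -> np.ndarray:
    """Return a mask of voxels marked as neuron by at least `min_votes` tracings."""
    votes = np.sum(np.stack(tracings, axis=0), axis=0)
    return votes >= min_votes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Three hypothetical tracings of the same (16, 32, 32) volume.
    tracings = [rng.random((16, 32, 32)) > 0.7 for _ in range(3)]
    consensus = voxel_consensus(tracings, min_votes=2)
    print("consensus voxels:", int(consensus.sum()))
```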
Finally, to enable users to predict the most accurate automated tracing results without manual annotations for comparison, we used support vector machine regression to predict the expected quality of a reconstruction given an image volume and a set of automated tracings.
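A hedged sketch of that last step, using scikit-learn's support vector regression on synthetic features, might look like the following; the features and quality scores are invented and do not reproduce the paper's training data.

```python
# Sketch: regress reconstruction quality from features of an image volume and
# its automated tracings with support vector machine regression (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 10))        # per-reconstruction feature vectors (hypothetical)
y = rng.uniform(0.0, 1.0, size=300)   # stand-in quality scores (e.g. agreement with gold standard)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.05))
model.fit(X_train, y_train)

predicted_quality = model.predict(X_test)
print("predicted quality for first few test reconstructions:", predicted_quality[:5].round(3))
```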
