Lab AI Applications



In recent years, the scientific community has increasingly viewed artificial intelligence (AI) as a tool that can deliver significant benefits to the operations and measurements that lab equipment performs. Here we will cover the basics of AI and discuss the potential benefits of implementing AI-based solutions to lab problems.

First, let's define some terms you may have heard. Machine learning (ML) is the process by which AI is created. Machine learning can be applied through several different mechanisms, such as fuzzy logic, discriminant analysis, and neural networks. With their ability to handle computationally intensive problems, neural networks are the foundation of the most commercially viable AI tools.


The practical limit on a neural network implementation is the number of nodes, or possible connections, that the processor can support.

A simplified network diagram showing input, output, and two hidden layers. A node (circle) represents a mathematical function, and an arrow represents data flow through the network.

Credit: Steve Knight, Lab Manager

Neural Network Properties

To successfully apply an AI solution to a particular problem, you must first characterize the problem. This usually means a mathematical solution to a specific data-analysis task, such as a smoothing algorithm or data trend identification. A neural network is designed as a series of "layers," each representing a different mathematical operation. Applying a combination of these operations to a data stream allows you to extract features from the data. A feature may be the useful answer itself, or an intermediate result that requires further processing. For example, a network may compute two or three different averages of a particular data set. The outputs can then be fed to a comparator, which selects the best answer from the three presented.
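That comparator idea can be illustrated outside of any neural network with a short, hypothetical Python sketch: three averaging operations run on the same noisy data stream, and a simple comparator selects whichever output lies closest to a reference signal (the reference is assumed known here purely for illustration).

```python
import numpy as np

# Three different ways to "average" a noisy data stream.
def moving_average(x, w=5):
    return np.convolve(x, np.ones(w) / w, mode="same")

def exponential_average(x, alpha=0.3):
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def median_filter(x, w=5):
    pad = w // 2
    padded = np.pad(x, pad, mode="edge")
    return np.array([np.median(padded[i:i + w]) for i in range(len(x))])

# A simple "comparator": pick the candidate closest to a reference signal.
def comparator(candidates, reference):
    errors = [np.mean((c - reference) ** 2) for c in candidates]
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))   # reference, known here only for illustration
noisy = clean + rng.normal(0.0, 0.2, clean.shape)

candidates = [moving_average(noisy), exponential_average(noisy), median_filter(noisy)]
best = candidates[comparator(candidates, clean)]
```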

When first created, a neural network is effectively a blank slate: no node interconnections are pre-determined, and the nodes and possible paths through the network carry no assigned weights or biases. This is the starting point for AI.

A neural network combines a series of mathematical logic operations in a particular sequence to achieve the desired result.

Credit: Steve Knight, Lab Manager


Determining which dataset to use is essential for characterizing a problem to be solved with AI. The dataset could be, for example, a .png image file generated by an instrument that indicates the presence or absence of sample tubes in a rack; that dataset would be the starting point. AI has been particularly useful for image analysis and has found applications in tissue science, such as determining cervical cancer cell types in smears, where AI algorithms have been shown to be nearly 100 percent accurate in identifying potential tumor cells.

The input layer of a neural network is defined by the size of the input data stream. In the example above, the input could be a 50 x 50 pixel image in which each 8-bit pixel takes a value between zero and 255. The data is fed through each layer in sequence, each layer applying a separate mathematical operation. The combination of those mathematical functions yields an overall function, such as image classification.
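As a minimal sketch of how the input layer matches the data stream, here is what such a network might look like in Keras (one of the frameworks discussed later). The layer types and sizes are illustrative assumptions, not a prescribed design.

```python
import tensorflow as tf

# Input layer sized to the data stream: a 50 x 50, single-channel
# image of 8-bit pixel values.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(50, 50, 1)),
    tf.keras.layers.Rescaling(1.0 / 255),              # map 0-255 pixel values to 0-1
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # extract local image features
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),    # two classes: tube / no tube
])
```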

The role of a classification network is to determine, or classify, the content of an image according to a pre-determined set of parameters. The output might be a numeric class, a zero or a one for example, or a simple yes/no answer; in the example above, does a tube exist? In this case, the classification network generates two answers: the probability that the image contains a tube and the probability that it does not. The output is fed to a comparator that examines the results and answers the question: does the image contain a tube? If the network is incorrect, it must be "retrained." In sophisticated networks, techniques such as the Adam optimizer can be used to explore the solution space, the region containing all possible valid solutions to the network's mathematical operations.
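Continuing the sketch above, the two-probability output and the comparator step might look like this; the class ordering and the placeholder image batch are assumptions for illustration only.

```python
import numpy as np

# Adam adjusts the weights and biases during (re)training.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Placeholder batch standing in for real rack photos.
images = np.zeros((4, 50, 50, 1), dtype="float32")

probs = model.predict(images)    # shape (4, 2): [P(tube), P(no tube)] per image
answers = probs.argmax(axis=1)   # the "comparator": pick the likelier answer
```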

In reality, networks are "trained" by presenting many known examples, usually thousands of them. Supervised training allows the network to adjust the weights and biases of each node, "growing" the network toward an acceptable solution. In the example provided, the network would be shown thousands of different tube images and would gradually adjust until it could reliably identify images containing tubes; further training would allow it to determine when a tube is not present. It sounds obvious and easy, but in reality many factors in the lab can affect how the imager sees the tubes, such as overhead lighting, proximity to a window, time of day, and reflections from adjacent populated wells.
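A hedged sketch of that training loop, again continuing the running example; randomly generated arrays stand in where a real project would load thousands of labeled rack images.

```python
# Placeholder data standing in for thousands of labeled examples
# (label 0 = tube present, 1 = tube absent).
train_images = np.random.rand(1000, 50, 50, 1).astype("float32") * 255
train_labels = np.random.randint(0, 2, size=1000)

# Each pass over the data (epoch) nudges the weights and biases
# toward an acceptable solution.
history = model.fit(train_images, train_labels, epochs=20, batch_size=32)
```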

The Importance of Data Validation

Once trained, the network must be validated. Network performance is scored using a validation data set, typically about 20 percent of the size of the training set, containing new but related images.
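A minimal sketch of that scoring step, with placeholder arrays as before; the validation set here is 20 percent of the training set's size and was never shown to the network during training.

```python
# 200 held-out images the network has never seen (placeholders here).
val_images = np.random.rand(200, 50, 50, 1).astype("float32") * 255
val_labels = np.random.randint(0, 2, size=200)

loss, accuracy = model.evaluate(val_images, val_labels)
print(f"Validation accuracy: {accuracy:.2%}")
```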

In reality, networks tend to become overtrained: they perform very well on the training set but fail miserably when presented with new data. Many techniques can be employed to mitigate this overtraining problem, such as augmenting the training data or forcing a partially trained network to discard some of its current weight and bias values. Data augmentation lets developers apply translation, rotation, and other image preprocessing operations to the training set. Augmenting the data reduces the tendency to create an overtrained network because it guarantees greater variety in the images the network trains on.
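In Keras, for example, augmentation can be expressed as preprocessing layers that randomly shift, rotate, and flip each image before the network sees it; the ranges below are illustrative choices.

```python
augment = tf.keras.Sequential([
    tf.keras.layers.RandomTranslation(0.1, 0.1),  # shift up to 10% each way
    tf.keras.layers.RandomRotation(0.05),         # rotate up to ~18 degrees
    tf.keras.layers.RandomFlip("horizontal"),
])

# Each call produces a newly perturbed version of the same batch,
# so the network never trains on exactly the same picture twice.
augmented_batch = augment(train_images[:32], training=True)
```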

Dropout is a second process that helps reduce overtraining. A percentage of the trained weights/biases is randomly removed between training epochs, and the network must then relearn those weights when presented with another iteration of the training data. If brand-new data is presented to the network and it scores about as well as it did on the training set, overtraining has been successfully prevented. A network that performs this way is considered ready for deployment.
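In the earlier sketch, dropout would appear as one extra layer; the 25 percent rate is an illustrative choice.

```python
model_with_dropout = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(50, 50, 1)),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.25),   # randomly zero 25% of activations each training step
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
```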

There are many frameworks that can be used to develop and train neural networks, such as TensorFlow, PyTorch, Torch, Keras, and more. These frameworks are driven from popular programming languages such as Python. They provide end-to-end, open-source platforms for machine learning, with sets of tools and libraries that allow developers to easily build and deploy ML-driven applications such as scientific image analysis.

Ultimately, once the neural network is fully trained, the next step is to move it to an FPGA, or field-programmable gate array. These devices combine billions of transistors on a single platform, from which custom architectures can be created to solve user-specific problems. FPGAs have many advantages over other processing technologies: they provide a flexible platform whose configuration can be updated in the field, they deliver high performance compared to other processors, and they allow quick development turnarounds compared to custom silicon. Translating a neural network in this way determines the architecture, size, capacity, and cost of the FPGA required. Most FPGAs can run many times faster than the embedded PCs used to control them, so they can accommodate larger and more powerful networks at a future point if necessary.
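The exact path onto an FPGA varies by vendor toolchain, but one common preparatory step (an assumption here, not the article's specific workflow) is to quantize the trained model to 8-bit weights, which suits the fixed-point arithmetic FPGAs favor; TensorFlow Lite offers one way to do this.

```python
# Quantize the trained Keras model to shrink it before hardware
# translation (a pre-deployment step, not the FPGA mapping itself).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables 8-bit weight quantization
quantized = converter.convert()

with open("tube_classifier.tflite", "wb") as f:
    f.write(quantized)
```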

As mentioned above, ML is beneficial for specific image analysis problems. It can identify very high resolution features that the human eye finds difficult to resolve. AI solutions can also be useful for deconvolution of overlapping data such as spectra and chromatograms; potentially, any lab problem that requires pattern recognition is a candidate for AI. Other areas of development aim to reduce the amount of training data required to condition a neural network, making applications easier and faster to build. The promise of AI in the lab is data processing that is faster, more sensitive, and cheaper than what is possible today.


