HOW DOES AI WORK?
Artificial Intelligence Methods
Artificial intelligence systems can be divided roughly into two main categories: symbolic learning and statistical (machine) learning.
Symbolic learning is the earliest approach to artificial intelligence, sometimes called GOFAI ("Good Old-Fashioned Artificial Intelligence"). This is the form of artificial intelligence upon which most research was based from the mid-1950s until the late 1980s. It rests on the assumption that the world can be represented as symbols, which can then be manipulated according to specific logical processes (such as If-Then statements). It was generally used in simple robotics performing routine tasks, or in highly structured logical problem-solving (such as playing a game of checkers). Its scope is limited to situations in which the variables and outputs are clearly defined.
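The If-Then style of reasoning described above can be sketched in a few lines of code. This is a minimal, hypothetical example: the facts and rules (loosely inspired by checkers) are invented for illustration, not drawn from any particular GOFAI system.

```python
# A minimal sketch of GOFAI-style symbolic reasoning: the world is encoded
# as symbolic facts, and If-Then rules fire whenever their conditions hold.
facts = {"piece_at_a3", "square_b4_empty", "piece_is_king"}

rules = [
    # (required facts, conclusion) -- invented checkers-flavored rules
    ({"piece_at_a3", "square_b4_empty"}, "move_a3_to_b4_legal"),
    ({"piece_is_king"}, "can_move_backward"),
]

def infer(facts, rules):
    """Forward-chain: add each rule's conclusion once its conditions hold."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(infer(facts, rules) - facts))
```

The limitation the paragraph mentions is visible here: the system can only conclude what its hand-written symbols and rules anticipate, so any situation outside those definitions is invisible to it.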
Statistical learning is focused on pattern recognition. This type of machine learning underlies speech recognition and natural language processing systems. These applications demonstrate how judgment in uncertain contexts is crucial to producing the kind of intelligence humans enact. In both cases, the results typically improve as more data is acquired through use, demonstrating the experiential learning that is essential to producing artificial intelligence.
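The pattern-recognition idea can be illustrated with one of the simplest statistical classifiers: assign a new point to whichever class centroid it lies nearest. The data, labels, and two-dimensional "features" below are entirely invented for illustration.

```python
# A minimal sketch of statistical pattern recognition: a nearest-centroid
# classifier. Each class is summarized by the average of its examples.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, classes):
    """classes: dict mapping label -> list of example feature points."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    cents = {label: centroid(pts) for label, pts in classes.items()}
    return min(cents, key=lambda label: dist2(x, cents[label]))

# Invented training data: two clusters of 2-D feature points.
training = {
    "vowel-like": [(1.0, 1.2), (0.8, 1.0)],
    "consonant-like": [(4.0, 3.8), (4.2, 4.1)],
}
print(classify((1.1, 0.9), training))  # nearest centroid wins
```

Note how the "improves with use" property falls out naturally: appending new labeled examples to `training` shifts the centroids, refining future classifications without changing any code.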
Deep Learning Explained
Deep learning systems are the pinnacle of artificial intelligence programming. Relying upon convolutional and recurrent neural networks (CNNs and RNNs, respectively), deep learning systems accept complex input data, which each neuron interprets by weighing various factors and then "voting" its result forward into a connected network of neurons. The relative weights of the neurons within the connected network create a kind of synthetic "conceptual" framework of the information, which produces a judgment based on the input by weighing relative probabilities through the neural network. Machine learning and neural networks, with their ability to weigh competing information and thereby build an artificial understanding of that information, are crucial to the breakthroughs demonstrated in contemporary artificial intelligence systems.
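The "weighted vote" of a single neuron can be written out directly. The inputs, weights, and bias below are invented numbers; a real network learns these values during training.

```python
import math

# A minimal sketch of one neuron's vote: multiply each input by its weight,
# sum, add a bias, and squash the total into a confidence between 0 and 1.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

inputs  = [0.9, 0.1, 0.4]    # evidence arriving from the previous layer
weights = [2.0, -1.5, 0.5]   # how strongly each piece of evidence counts
print(neuron(inputs, weights, bias=-0.5))
```

Stacking many such neurons in layers, each taking the previous layer's votes as its inputs, yields the "conceptual framework" the paragraph describes.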
Convolutional Neural Networks (CNN)
Convolutional neural networks are designed to mimic aspects of the visual cortex, and are often applied to more advanced forms of computer vision. Whereas computer vision in symbolic systems is determined by images meeting preset criteria for certain object designations (height and width relationships, shapes, etc.), CNN-based computer vision can account for a wider set of parameters, whose relative weights can be adjusted to specific circumstances. This gives convolutional neural networks used for computer vision the capacity to adjust their classification of images based upon the angle of the object, its relative distance from the vantage point, and which specific part of the object is in view.
For example, a human head looks completely different from the front and from the back, yet it remains the same object. Likewise, a hand and a leg are both parts of a person, but a system must first recognize the body part and weigh it against the body parts of other animals before classifying it as human. It is this weighing of a multiplicity of factors across various layers of analysis that demonstrates the power of convolutional neural networks. Convolutional neural networks are also used in natural language processing, drug discovery, games, and more.
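The core operation behind a CNN, the convolution, can be shown on a toy example. The tiny "image" and the 2x2 edge-detecting filter below are invented for illustration; real networks learn their filter values from data.

```python
# A minimal sketch of convolution: a small filter slides across an image,
# and its response is largest where the local pattern matches the filter.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
kernel = [
    [-1, 1],
    [-1, 1],
]   # responds strongly to a vertical dark-to-bright edge

def convolve(img, k):
    kh, kw = len(k), len(k[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(img[r + i][c + j] * k[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)   # the large values mark where the edge sits
```

A CNN applies many such filters in sequence, so early layers respond to edges and textures while deeper layers combine them into parts and whole objects, which is the layered weighing the paragraph above describes.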
Recurrent Neural Networks (RNN)
Recurrent neural networks differ from convolutional neural networks in that they are not strictly feed-forward artificial neural networks: that is, processing does not flow exclusively from input to output through the neural layers. Instead, recurrent neural networks build feedback loops into the layered information processing, which helps contextualize each step based on previous inputs and states of the network. In this sense, they reproduce data sequencing in a way that is similar to how the mind structures thought, creating a type of memory that allows information to persist and influence outputs dynamically over time. Recurrent neural networks assist in speech recognition that adjusts to the particulars of an individual voice, and in natural language processing in which meaning is deciphered from the context of previously used words.
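The feedback loop can be reduced to a single line: the new hidden state depends on both the current input and the previous hidden state. The scalar weights here are invented; real RNNs use learned weight matrices.

```python
import math

# A minimal sketch of recurrence: the hidden state h carries context forward,
# so the same input can produce different outputs depending on what came before.
def rnn_step(x, h, w_x=1.0, w_h=0.8):
    return math.tanh(w_x * x + w_h * h)   # new hidden state

sequence = [1.0, 0.0, 0.0, 0.0]
h = 0.0
for x in sequence:
    h = rnn_step(x, h)
    print(h)
# The first input's influence persists, slowly decaying through the loop.
```

This decay is also the weakness of a plain RNN: the trace of early inputs fades with each pass through the loop, which motivates the LSTM units discussed next in this article.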
Long Short-Term Memory (LSTM)
Long short-term memory units build upon the inherent promise of recurrent neural networks by enhancing their memory capacity when information processing must pass through a large number of layers. When an RNN has many layers (or time steps), references back to the earliest inputs become increasingly difficult to maintain. Long short-term memory units solve this problem by categorizing information within a recurrent neural network as either short-term or long-term, allowing the network to selectively loop information back into the layered processing as needed. LSTM applications include robot control, speech recognition, grammar learning, sign language translation, business process management, medical care pathway prediction, and more.
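The short-term/long-term split can be sketched with the LSTM's gating idea: a long-term cell state `c` and a short-term hidden state `h`, with gates deciding what to keep, store, and expose. This is a heavily simplified scalar version with invented weights; real LSTMs use separate learned weight matrices for each gate.

```python
import math

# A minimal sketch of LSTM gating: explicit gates control a long-term cell
# state (c) and a short-term hidden state (h), instead of one squashed value.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, w=1.0):
    f = sigmoid(w * x + h)           # forget gate: how much old memory to keep
    i = sigmoid(w * x + h)           # input gate: how much new info to store
    c_tilde = math.tanh(w * x + h)   # candidate memory content
    c = f * c + i * c_tilde          # updated long-term cell state
    o = sigmoid(w * x + h)           # output gate
    h = o * math.tanh(c)             # updated short-term hidden state
    return h, c

h, c = 0.0, 0.0
for x in [2.0, 0.0, 0.0, 0.0]:
    h, c = lstm_step(x, h, c)
    print(c, h)
```

Because the cell state is updated by gated addition rather than being repeatedly squashed through an activation, the trace of an early input can survive far more steps than in the plain RNN sketch above.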
Reinforcement Learning
In reinforcement learning, the outcomes a neural network produces are evaluated for relative success or failure, and the relative weights of the different neuronal inputs in the network are adjusted based upon the outcome. This allows for machine learning that adjusts to experience. Reinforcement learning can be applied to convolutional or recurrent neural networks. The power of reinforcement learning algorithms as applied to artificial neural networks was best demonstrated by the victory of Google DeepMind's AlphaGo program over a world-class Go champion. The program had been trained on thousands upon thousands of Go games to adjust its neural network weights to the point that it was able to beat the champion in four out of five games.
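The act-observe-adjust loop can be shown with tabular Q-learning, a far simpler relative of the deep reinforcement learning behind AlphaGo but built on the same principle: outcomes feed back into adjusted values. The five-cell corridor environment, rewards, and all parameters below are invented for illustration.

```python
import random

# A minimal tabular Q-learning sketch: an agent in a 5-cell corridor learns,
# from trial and error, that stepping right (+1) reaches the rewarded goal.
random.seed(0)
N, GOAL = 5, 4
q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}  # value of each move

for _ in range(200):                      # training episodes
    s = 0
    while s != GOAL:
        # Mostly act greedily, but explore randomly 20% of the time.
        if random.random() < 0.2:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda a: q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)    # take the step, staying in bounds
        r = 1.0 if s2 == GOAL else 0.0    # success signal at the goal only
        best_next = 0.0 if s2 == GOAL else max(q[(s2, b)] for b in (-1, +1))
        # Adjust this move's value toward observed reward + future value.
        q[(s, a)] += 0.5 * (r + 0.9 * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy heads toward the goal.
print([max((-1, +1), key=lambda a: q[(s, a)]) for s in range(GOAL)])
```

The update line is the whole idea in miniature: success at the goal propagates backward through the table, just as, at vastly larger scale, game outcomes propagated back through AlphaGo's network weights.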
If you would like to find out what artificial intelligence methods would work best for your application, contact ArtificialIntelligence.health today.