What Hides Behind the Effectiveness of Neural Networks and Deep Learning

Not long ago, artificial intelligence was associated mostly with disappointment and inflated expectations. How did it so quickly become the hottest field in technology, one that ordinary people now interact with on a regular basis? Does artificial intelligence really deliver tangible results, or is it once again promising far more than it can deliver?
History Lesson
In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. Abilities that were once considered uniquely human are now characteristics of powerful machines. Deep neural networks now outperform humans at tasks such as face recognition and object recognition. They have also mastered the game of Go and beaten the best human players.
Current artificial intelligence success can be traced back to 2010 and an online contest called the ImageNet Challenge. ImageNet contains several hundred images for any given category, such as “dog” or “plane”. During the contest, participants compete in image recognition and share their techniques and results.
In 2015, during the ImageNet Challenge, neural networks finally surpassed humans, accurately recognizing 96% of images compared to a human average of 95%. Since then, neural networks have occupied a solid position in the technology world, and therefore in our lives.
How It Works
The operating principle of neural networks is based on the universal approximation theorem. Proved by George Cybenko in 1989, it states that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of R^n. Given appropriate parameters, even simple neural networks can represent a wide range of interesting functions. In short, the theorem says that simple neural networks are universal approximators: neural networks work by approximating complex mathematical functions with simpler ones.
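Written out, the approximating function in Cybenko's result is just a weighted sum of sigmoid units. This is a sketch of the standard textbook form, where σ is a sigmoidal activation and the weights w_i, biases b_i, and coefficients α_i are the network's parameters:

```latex
F(x) = \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^{\top} x + b_i\right),
\qquad
\sup_{x \in K} \lvert F(x) - f(x) \rvert < \varepsilon
```

For any continuous function f on a compact set K and any tolerance ε > 0, some finite number of hidden neurons N and some choice of parameters makes the bound hold.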
Each artificial neural network consists of an input layer of neurons where data is fed into the network, an output layer where results come out, and one or more hidden layers in the middle where information is processed, forming a kind of artificial brain. Each neuron within the network has a set of “weights”, and training a neural network involves adjusting those weights so that a given input produces the desired output. For a neural network to learn, there has to be an element of feedback: the bigger the difference between the intended and actual output, the larger the adjustments that have to be made. The algorithm that propagates this error back through the network and computes those adjustments is called backpropagation; repeating it is what makes the network learn, steadily reducing the difference between actual and intended output.
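As a minimal sketch of these ideas (a toy example, not any particular framework), the snippet below trains a single-hidden-layer network with backpropagation in NumPy. The dataset, layer sizes, and learning rate are illustrative assumptions:

```python
import numpy as np

# Toy data: 4 samples, 2 input features, 1 binary target (an XOR-style made-up example)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input layer -> hidden layer weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden layer -> output layer weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(5000):
    # Forward pass: input -> hidden -> output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Feedback: difference between actual and intended output
    error = out - y

    # Backpropagation: push the error back through the network
    d_out = error * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust the weights in proportion to the error signal
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # predictions drift toward the intended targets [0, 1, 1, 0]
```

Each pass through the loop is one round of the feedback described above: compute the output, measure how far it is from the target, and nudge every weight to shrink that gap.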
There are a number of challenges data scientists face while attempting to make a neural network function like a human brain. Domain knowledge is one of the most important ingredients in successfully fine-tuning and launching a neural network, along with a solid understanding of which events actually influence the expected results. Although more and more businesses already rely on neural networks, it is still a complicated task to formulate the problem the network has to solve: identifying what we have as the input layer, what we want as the output layer, and which data influences the output. Running the training can be hard too, since it requires a representative, well-reasoned data set.
How We Can Use It
Neural networks are most commonly used for tasks such as:
- Classification – sorting data into categories by its parameters. For example, the inputs describe people applying for a loan at a bank, and the output is which of them qualify for the loan (see the sketch after this list).
- Prediction – neural networks can predict further steps and actions based on historical data, for example, a fall in stock prices.
- Recognition – neural networks can recognize images, for example, when a phone camera identifies a person’s face.
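As a hedged sketch of the loan-classification example above (the applicant features, labelling rule, and model settings are invented purely for illustration), a small feed-forward classifier could be trained like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic applicant data (made up for illustration): income, debt ratio, years employed
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.normal(50_000, 15_000, 1000),   # annual income
    rng.uniform(0.0, 1.0, 1000),        # debt-to-income ratio
    rng.integers(0, 30, 1000),          # years employed
])
# A made-up labelling rule standing in for historical loan decisions
y = ((X[:, 0] > 40_000) & (X[:, 1] < 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 16 neurons between the input and output layers
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
clf.fit(X_train, y_train)
print("accuracy on held-out applicants:", clf.score(X_test, y_test))
```

The same pattern, with different inputs and labels, covers the prediction and recognition tasks as well: the network only ever sees numbers going in and numbers coming out.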
Based on these tasks, many different applications can be found for neural networks, all involving recognizing patterns and making simple decisions about them. Neural networks might be used in factories for quality control: measure the main properties of the produced goods, feed those measurements into the network as inputs, and let the network decide whether each sample meets the requirements. The banking industry also benefits from neural networks. Because they can take in each client’s history and process the data quickly, neural networks are well suited to identifying fraudulent transactions among credit card holders and marking them as suspicious.
The effectiveness of neural networks lies in their ability to make computer systems more and more human-like, which opens up endless possibilities for businesses to automate certain processes, improve predictions for customers, and avoid human errors.
InData Labs helps tech startups and enterprises explore new ways of leveraging data, implement highly complex and innovative projects, and build breakthrough AI products, using machine learning, AI and Big Data technologies. Our core services include AI Consulting, Big Data Engineering, Data Science Consulting.