Deep Learning: A $100 Billion Technological Revolution

You may have caught yourself lately using Google Translate with some regularity, handing control of the brakes to your car in the middle of the highway, or dictating your replies in that WhatsApp group and finding that your phone now understands you when you speak to it.

The technology behind all these developments is called Deep Learning, and it is ushering artificial intelligence into a new era thanks to the surprising results and new applications it is opening up.

It is the great bet not only of technology giants such as Google, Amazon, Facebook, Baidu, and IBM, but also of the large car manufacturers, asset managers, and biotechnology companies, which are making massive investments ($6 billion since 2014, according to Fortune magazine).

We are talking about a technology where, for the first time, the hiring of its leading researchers is compared to the signing of great sports stars. Let us first look at what is meant by Deep Learning. It is a set of information-processing methods that fall within the branch of artificial intelligence called machine learning. The fundamental characteristic of Deep Learning techniques is that they learn a hierarchical representation of the observed data, giving rise to models made up of a large number of layers (hence the name deep), which allow much richer representations than any classical method.

How is it different from other machine learning (ML) methods? The classic ML approach is based on the manual design of mathematical operators (descriptors) that extract the representations characterizing each sample or observed element. In the vast majority of cases, these variables or descriptors are specific to the process or application, and defining them requires knowledge of both the application domain and the technologies that automate their computation. Deep Learning, in contrast, extracts knowledge directly from the data, without the need to explicitly program rules, through the mere observation of examples (a data-driven approach).
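
To make the contrast concrete, here is a minimal, purely illustrative sketch, assuming scikit-learn and TensorFlow/Keras are available; the dataset, the brightness-histogram descriptor, and the model sizes are all hypothetical:

```python
import numpy as np
from sklearn.svm import SVC
import tensorflow as tf

# --- Classic ML: an expert designs the descriptor by hand ---
def handcrafted_descriptor(image):
    """A domain-specific feature: a 16-bin brightness histogram."""
    hist, _ = np.histogram(image, bins=16, range=(0.0, 1.0))
    return hist / hist.sum()

X_images = np.random.rand(100, 28, 28)        # stand-in grayscale images
y = np.random.randint(0, 2, size=100)         # stand-in binary labels
X_feat = np.array([handcrafted_descriptor(im) for im in X_images])
classic_model = SVC().fit(X_feat, y)          # learns only on the histogram

# --- Deep Learning: the representation itself is learned from raw pixels ---
deep_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),   # learned features
    tf.keras.layers.Dense(2, activation="softmax"), # classifier on top
])
deep_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
deep_model.fit(X_images, y, epochs=1, verbose=0)
```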

Conventional neural networks went through several cycles of boom and subsequent oblivion between the 1970s and the 1990s. They have now definitively re-emerged in the form of Deep Learning, thanks to three sets of concurrent favorable circumstances:

  • A set of favorable technical developments
  • The availability of millions of data samples for training large networks
  • An increase in computing capacity, which allows such training to be carried out in a reasonable time, going from years to hours

1. New Technical Developments

Although Deep Learning techniques make use of proposals dating back to the 1960s, in recent years a series of technical advances has made the difference. They were developed by groups that have since become world leaders in deep techniques, led by figures such as Geoffrey Hinton, Yann LeCun, Yoshua Bengio, and Jürgen Schmidhuber (a code sketch combining several of these ingredients follows the list):

  • new techniques for stabilizing the training of multilayer networks, which solve problems such as vanishing or exploding gradients during training with the backpropagation algorithm
  • the proposal of specific architectures (convolutional networks) for image processing
  • the appearance of regularization techniques such as dropout
  • networks with memory capacity
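
As a hedged illustration only, the following sketch combines several of these ingredients in Keras (mentioned below among the development platforms): ReLU activations and gradient clipping against vanishing and exploding gradients, convolutional layers for images, and dropout for regularization; the layer sizes are arbitrary:

```python
import tensorflow as tf

# A small convolutional network combining several of the advances above:
# ReLU activations help keep gradients from vanishing, convolutional
# layers exploit the 2-D structure of images, and dropout regularizes.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),  # randomly silences units during training
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Gradient clipping is one common remedy against exploding gradients.
optimizer = tf.keras.optimizers.Adam(clipnorm=1.0)
model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy")
```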

The development of this type of system is not trivial, and the publication of various open-source development platforms has greatly contributed to its popularization. Currently, the main ones are Caffe (University of California, Berkeley), Torch/PyTorch (New York University and IDIAP), Theano (University of Montreal), and TensorFlow (Google), with wrappers such as Lasagne (for Theano) and Keras (for Theano and TensorFlow). The most common programming languages are Python, C++, and Lua.

2. Data by the Millions

The availability of large amounts of data has been possible thanks to the massive use of the internet and the data uploaded by its users. This has made it possible to compile large databases, which are the foundations of the great international research competitions. It was at ImageNet, a competition dedicated to the recognition of objects in images, that Deep Learning techniques began their great expansion: in 2012, a group from the University of Toronto led by Geoffrey Hinton won it by a wide margin. Although the need for large amounts of annotated data may seem, a priori, an insurmountable problem in many applications, techniques known as transfer learning have been developed that allow what a network has learned in a domain with abundant data to be transferred to another domain where data is scarce.
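
A minimal sketch of the idea, assuming TensorFlow/Keras: a network pretrained on ImageNet is reused as a frozen feature extractor, and only a small new classification head is trained on the data-poor domain (the five-class task is hypothetical):

```python
import tensorflow as tf

# Reuse a network pretrained on ImageNet as a fixed feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze what was learned on the data-rich domain

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # new, small task
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(small_dataset, ...)  # only the new head is trained
```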

3. More Powerful Computers

The third ingredient, the increase in computing capacity, leads us to the natural hardware for this type of system: graphics processing units (GPUs), which go from the handful of cores in a PC processor to several thousand cores in a single GPU.
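
For instance, in TensorFlow it takes only a few lines to check for an accelerator and place a computation on it; a small sketch:

```python
import tensorflow as tf

# Check which accelerators TensorFlow can see; on a machine with a GPU,
# heavy tensor operations are placed there automatically.
gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs available: {len(gpus)}")

# Explicit placement, if desired: a large matrix product runs across the
# thousands of GPU cores when one is present, otherwise on the CPU.
with tf.device("/GPU:0" if gpus else "/CPU:0"):
    x = tf.random.normal((2048, 2048))
    y = tf.matmul(x, x)
```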

At the moment the market is dominated by Nvidia, which offers multiple GPU models: the Titan series for home use, the Tesla series for professional use, Jetson for embedded systems, and the Drive PX series for driving assistance and autonomous cars. In addition, there are cloud computing services, and other companies are entering this field with alternatives to GPUs. This is the case of Lake Crest (scheduled for the first half of 2017) from Nervana (an Intel company), Google's Tensor Processing Unit (TPU), Graphcore's Intelligence Processing Unit (IPU), and the EyeQ from Mobileye, recently acquired by Intel.

New Opportunities

Under the umbrella of that name, Deep Learning encompasses a set of interrelated but independent methods, each specially suited to solving certain types of problems, some of which were impossible to address before their appearance.

Convolutional Neural Networks (CNNs) have made it possible, for the first time, to surpass human performance in tasks such as identifying the objects present in images. These improvements are being applied in sectors such as manufacturing (Surfin), healthcare (Enlitic, IBM Watson, DeepMind), autonomous vehicles (Mobileye, Tesla), and multimedia (Google image search).
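
As a hedged example of this kind of object identification, a network pretrained on ImageNet can be queried in a few lines with TensorFlow/Keras (the image file name is hypothetical):

```python
import numpy as np
import tensorflow as tf

# Load a CNN pretrained on ImageNet and ask it what is in a photo.
model = tf.keras.applications.ResNet50(weights="imagenet")

# "photo.jpg" is a placeholder for any local image file.
img = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))
x = tf.keras.applications.resnet50.preprocess_input(
    tf.keras.utils.img_to_array(img)[np.newaxis, ...])

preds = model.predict(x)
# Top-3 ImageNet classes with their confidence scores.
print(tf.keras.applications.resnet50.decode_predictions(preds, top=3)[0])
```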

For their part, Recurrent Neural Networks (RNNs), and especially Long Short-Term Memory (LSTM) networks and their variants, are well suited to modeling all kinds of sequential signals; the most representative fields are automatic speech recognition (e.g., Siri, Xbox voice commands), automatic translation (the Google and Facebook translators), and video.
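
A minimal, illustrative LSTM in Keras for a sequential task; the 40-dimensional feature vectors (think acoustic features per audio frame) and the 10 output classes are assumptions:

```python
import tensorflow as tf

# An LSTM reads a sequence of feature vectors (of any length) and emits
# a single classification, e.g. which command was spoken.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, input_shape=(None, 40)),  # 40 features per step
    tf.keras.layers.Dense(10, activation="softmax"),    # 10 assumed classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```
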
Outside of purely supervised learning, Reinforcement Learning (RL) is being studied in fields such as robotics and the automatic generation of data adapted to the context.
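
To give a flavor of RL, here is a hedged toy example, tabular Q-learning on a hypothetical five-state corridor where the agent is rewarded for reaching the right end:

```python
import numpy as np

# Tabular Q-learning on a 5-state corridor: the agent starts on the left
# and earns a reward of 1 only upon reaching the rightmost state.
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))   # value of each action in each state
alpha, gamma, eps = 0.1, 0.9, 0.3     # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = np.random.randint(n_actions) if np.random.rand() < eps else int(Q[s].argmax())
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Move Q(s, a) toward the reward plus the discounted best future value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))  # learned action per state (1 = move right)
```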

The incipient emergence of unsupervised techniques for learning generative models deserves special mention. Having been trained to capture the semantics inherent in the variety of observed data, these models are able to generate new, realistic, genuine samples never seen before. It is thus possible to generate audio, images, and video, with obvious applications in the entertainment industries, and even industrial or pharmacological designs (e.g., the generation of molecules with characteristics that make them candidates to be tested as a vaccine against malaria).
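
One popular family of generative models is the generative adversarial network (GAN); the following is a bare-bones sketch in Keras, with arbitrary layer sizes, of the two competing networks and how the generator turns pure noise into new samples (the full adversarial training loop is omitted):

```python
import tensorflow as tf

# Generator: maps random noise to a synthetic 28x28 "image" (flattened).
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
])

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(28 * 28,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# New samples appear from pure noise; training pits the two networks
# against each other until the fakes become realistic.
noise = tf.random.normal((16, 100))
fake_images = generator(noise)
realism_scores = discriminator(fake_images)
```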

In general, the use of deep techniques is spreading to many other applications and sectors, ranging from the study of molecules for medicines to the analysis of CERN data, risk estimation in insurance companies, personalized marketing, the study of DNA mutations, income tax optimization, and gaming systems (such as AlphaGo, the first automatic system to beat a master of the Chinese game Go).

Does this mean that deep techniques are the solution for every process that involves producing a response or taking an action based on input data?

The versatility of Deep Learning and its ability to extract knowledge in previously unknown domains is impressive. This enormous potential is reflected in growth forecasts estimating that the market for enterprise deep learning applications will grow from $109 million in 2015 to $10.4 billion in 2024, the year in which annual revenues driven by deep technologies are expected to exceed $100 billion, especially in financial markets, image classification, biomedical analysis, and predictive maintenance.
