One hundred billion neurons with trillions of connections: the human brain is an almost perfect machine. Even today, after decades of huge technological advances, no machine comes close to this level of perfection. Giants like Google and Facebook have been working for years to create a true Artificial Intelligence: can you imagine a supercomputer with the consciousness and feelings of a human being?
Several technology theorists have already predicted the date by which machines will exhibit genuinely human behavior. Vernor Vinge, the author who popularized the notion in the 1980s with his novel “The Peace War”, places this milestone in 2030. Raymond Kurzweil, a writer and scientist specializing in computer science and Artificial Intelligence, says it will be 2045. And Stuart Armstrong, a member of the Future of Humanity Institute at Oxford University, pointed to 2040 at the Singularity Summit 2012. Hence, the time horizon is somewhere between 15 and 30 years.
The truth is that, although Vinge, Kurzweil and Armstrong are renowned experts in the field of Artificial Intelligence, what we generally envision as a “human” machine may well take more than 15 or 30 years to become a reality. Nevertheless, research and startups in machine learning and deep learning are already laying the groundwork for that future.
What is deep learning and how will it change the world?
Deep learning uses layered algorithms, typically artificial neural networks, to build increasingly abstract representations of data and thus enable automatic learning (machine learning). Based on the patterns it finds in that data, a machine can learn to recognize speech, movement, signals or images. This is not a new line of work, but years ago the cost of this type of research was very high; today it is much cheaper, and many companies are investing in it.
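To make the idea of layered, increasingly abstract representations concrete, here is a minimal sketch in Python. All weights, shapes and thresholds are hand-picked for illustration; in real deep learning they are learned automatically from data:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Layer 1: two hand-picked feature detectors over a flattened 3x3 image.
# A real network would learn these weights from data; they are fixed here
# purely to illustrate the idea of layered representations.
HORIZONTAL = [0, 0, 0, 1, 1, 1, 0, 0, 0]  # responds to a lit middle row
VERTICAL   = [0, 1, 0, 0, 1, 0, 0, 1, 0]  # responds to a lit middle column

def detect(weights, pixels, bias=-2.5):
    return sigmoid(sum(w * p for w, p in zip(weights, pixels)) + bias)

# Layer 2: combine the low-level stroke features into a more abstract
# concept -- "does this image contain a cross shape?"
def cross_score(pixels):
    h1 = detect(HORIZONTAL, pixels)
    h2 = detect(VERTICAL, pixels)
    return sigmoid(4 * h1 + 4 * h2 - 4)

cross = [0, 1, 0,
         1, 1, 1,
         0, 1, 0]
blank = [0] * 9

print(round(cross_score(cross), 2))  # clearly above 0.5
print(round(cross_score(blank), 2))  # clearly below 0.5
```

Each layer transforms the output of the one below it, which is exactly why these networks are called “deep”: raw pixels become strokes, and strokes become shapes.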
Major IT companies are betting on the development and improvement of algorithms that can recognize voices, images and text. Google has successfully developed neural networks that recognize voices on Android phones and images on Google Plus. Facebook uses deep learning to target ads and identify faces and objects in pictures and videos; Microsoft is using it in speech recognition projects; and Baidu, the leading Chinese search engine, opened a large deep learning research center in Silicon Valley in 2013, about 10 kilometers from Google’s campus in Mountain View.
Google has been taking interesting steps in deep learning and Artificial Intelligence for more than two years. In January 2014 it bought DeepMind for around 290 million euros. The company, founded in 2012 by Demis Hassabis, Shane Legg and Mustafa Suleyman, made its name in the technology arena by applying machine learning algorithms to e-commerce and video games. Ultimately, Google’s goal is to move toward a search engine capable of understanding users’ requests and answering them the way a person would.
In addition, in 2013 the company headquartered in Mountain View recruited one of the world's leading experts in machine learning, Geoffrey Hinton, who in the 1980s investigated how to build computers that, like the human brain, learn by finding patterns in data. He currently leads Google’s Knowledge Graph project.
Facebook wants the same: the human machine
The other major player in the deep learning field is Facebook, and its (human) asset is Yann LeCun, a professor at the Courant Institute of Mathematical Sciences at New York University and an expert in machine learning. He is one of the few people in the world capable of developing this kind of algorithm from scratch: he created an early version of “error backpropagation”, a supervised learning algorithm used to train artificial neural networks.
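As a rough illustration of what error backpropagation does, here is a minimal sketch: a tiny one-hidden-layer network trained on the classic XOR problem by propagating the output error backwards and nudging the weights. The layer sizes, learning rate and epoch count are arbitrary choices for this example, not LeCun's original formulation:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: a classic toy problem that cannot be solved without a hidden layer
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# Weights: 2 hidden units (2 inputs + bias each), 1 output unit (2 inputs + bias)
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]
lr = 0.5  # learning rate (arbitrary choice for this sketch)

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

error_before = total_error()

for _ in range(20000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: the output error is propagated back to the hidden layer
        d_y = (y - t) * y * (1 - y)
        d_h = [d_y * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        # Gradient-descent weight updates
        for i in range(2):
            w_o[i] -= lr * d_y * h[i]
        w_o[2] -= lr * d_y
        for i in range(2):
            for j in range(2):
                w_h[i][j] -= lr * d_h[i] * x[j]
            w_h[i][2] -= lr * d_h[i]

print(error_before, "->", total_error())  # training drives the error down
```

The same backward-propagation-of-error idea, scaled up to millions of weights and many layers, is what trains the networks behind today's speech and image recognition.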
His research on image understanding and speech recognition is what led Mark Zuckerberg to hire him for Facebook’s Artificial Intelligence laboratory. As LeCun himself has acknowledged in interviews, the idea is to find an algorithm able to understand the content that users upload to the Internet; and by understanding, we mean the way a human being would.
Spanish companies working with deep learning
In Spain there are also companies applying machine learning for the benefit of their customers. One of the most important is Inbenta, which specializes in natural language processing software. Its technology allows a machine to understand and remember a conversation with a person by building cognitive retention, memory and context detection into the interactions between its machines and users.
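The general idea of conversational context retention can be sketched in a few lines: the assistant remembers the topic of earlier turns, so a follow-up question that omits the topic can still be resolved. This is an illustration of the concept only, not Inbenta's actual technology; the class, knowledge base and answers below are all hypothetical:

```python
# A minimal sketch of conversational context retention (hypothetical
# example; not Inbenta's actual technology).
class ContextualAssistant:
    def __init__(self, knowledge):
        self.knowledge = knowledge   # topic -> {attribute: answer}
        self.last_topic = None       # conversational memory

    def answer(self, topic, attribute):
        # If the user omits the topic, fall back on the remembered one.
        if topic is None:
            topic = self.last_topic
        if topic is None or topic not in self.knowledge:
            return "Could you tell me what you are asking about?"
        self.last_topic = topic      # retain context for later turns
        return self.knowledge[topic].get(attribute, "I don't know that yet.")

bot = ContextualAssistant({"mortgage": {"rate": "2.1% fixed",
                                        "term": "up to 30 years"}})
print(bot.answer("mortgage", "rate"))  # first turn sets the context
print(bot.answer(None, "term"))        # follow-up resolved from memory
```

Real systems track far richer state (entities, intent, dialogue history), but the principle is the same: without that remembered context, the second question would be unanswerable.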
“This has many applications in the field of virtual assistants, in the customer service departments of big companies and in their communications in general, such as emails, chats… and in sectors such as banking, insurance, transport, retail or telecommunications,” says Julio Prada, one of the managers of this Spanish startup.
Another of the Spanish leaders in deep learning is Sherpa, a company that has designed a system combining the functions of search engines, personal assistants and predictive modelling. It is designed entirely for mobile devices and is one of the international competitors of the two major virtual assistants on the market: Apple’s Siri, for iOS, and Google Now, for Android devices.
Indisys, a Spanish company that in 2012 attracted investment from Intel, has also been working in this field for some time. Its research area is the same as Sherpa’s and Inbenta’s: natural language processing. As a result, Indisys has designed its own personal assistant, capable of holding a conversation the way your father, your brother or a friend would. If developments in deep learning continue at this pace, in the future you may not be able to tell whether an article like this one was written by a person… or a machine!
BBVA – Follow us on @BBVAAPIMarket