Michael Drew, Partner & Head of Technology & IT Services at Odgers Berndtson in London, talks to Alex White, Vice President of Enterprise Business EMEA at NVIDIA, about the power of so-called "deep learning" artificial intelligence.
Michael Drew: Artificial Intelligence (AI) is a very hot topic, but there is still confusion around the terms used to describe it. Help us demystify it.
Alex White: When people describe AI, they normally refer to three main terms: AI, machine learning and deep learning. One way to think about each of these terms is as if they were a set of Russian dolls. AI is the largest ‘doll’, which encases machine learning and, in turn, deep learning.
AI has been around since the 1950s when Alan Turing speculated that machines would one day be able to think like humans. Over the succeeding decades, scientists developed machine learning algorithms that allowed computers to begin using very basic pattern recognition, for example, to map a route for a travelling salesperson.
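The travelling-salesperson routing mentioned above is a classic example of those early algorithmic approaches. A minimal sketch of the well-known nearest-neighbour heuristic (an illustrative example, not anything from NVIDIA; the function name is hypothetical) looks like this:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def nearest_neighbour_route(cities):
    """Greedy tour: from the starting city, always visit the
    closest unvisited city next. Fast, but not optimal in general."""
    route = [cities[0]]
    remaining = set(range(1, len(cities)))
    while remaining:
        here = route[-1]
        # Pick the unvisited city closest to the current position.
        nxt = min(remaining, key=lambda i: dist(here, cities[i]))
        route.append(cities[nxt])
        remaining.remove(nxt)
    return route
```

Heuristics like this were hand-designed by scientists from their own knowledge of the problem, which is exactly the approach the interview contrasts with deep learning below.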
Then, in the 1990s, work on machine learning shifted from a knowledge-driven approach to a data-driven approach. Scientists began creating programmes for computers to analyse large amounts of data and draw conclusions – or ‘learn’ – from the results. This shift led to the development of deep learning algorithms that learn from massive amounts of data to create software that can tackle such challenges as translating languages, diagnosing cancer and teaching autonomous cars to drive. The fundamental difference is that, with deep learning, the software learns its behaviour directly from the data, rather than relying on a scientist to craft the algorithm from their acquired knowledge.
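As a toy illustration of what "learning from data" means (a deliberately tiny sketch, far simpler than real deep learning, with all names hypothetical): rather than hand-coding the rule y = 2x + 1, gradient descent can recover it purely from examples.

```python
# Training examples generated by the hidden rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0   # the model starts with no knowledge of the rule
lr = 0.01         # learning rate: how large each correction step is

for _ in range(2000):          # many passes over the data
    for x, y in data:
        pred = w * x + b       # model's current guess
        err = pred - y         # how wrong it was
        w -= lr * err * x      # nudge w downhill on the squared error
        b -= lr * err          # nudge b likewise

print(round(w, 2), round(b, 2))  # converges towards 2 and 1
```

The same principle, scaled up to millions of parameters and layered representations, is what lets deep learning tackle translation or image recognition without anyone writing the rules by hand.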
Deep learning goes far beyond anything that was possible in the past with machine learning; as such, AI is accelerating at a pace never seen before. Examples of the type of change taking place can be found in every data centre: not so long ago, data centres served up web pages, advertising and video content. Modern data centres recognise voices, detect objects in video streams and connect us with richer sources of information exactly when we need it. Increasingly, those capabilities are enabled by deep learning.
MD: And how does NVIDIA fit into the field of AI?
AW: Deep learning is achieving remarkable results, but the approach demands that computers process vast amounts of data in the shortest possible time, precisely at the time when Moore’s Law is slowing (Moore’s Law is the observation that the number of transistors on a chip – and with it processor performance – roughly doubles every two years). In effect, deep learning is a new computing model that has required the invention of a new computing architecture.
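As a back-of-envelope illustration (my own arithmetic, not a claim from the interview), a doubling every two years compounds to 2^(years/2) over a given span:

```python
def moores_law_factor(years: float) -> float:
    """Expected performance multiple if capability doubles every two years."""
    return 2 ** (years / 2)

print(moores_law_factor(10))  # a doubling every two years gives 32x per decade
```

On that historical trend, five years of scaling would yield roughly a 5.7x improvement, which puts the 26x five-year figure quoted below into perspective.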
NVIDIA has been working hard over the past five years to develop a new AI computing platform with many technological advances specifically built for AI and deep learning. Our current Pascal graphics processing unit (GPU) architecture delivers 26x the deep learning performance of the GPU architecture available five years ago, and the world’s first supercomputer specifically designed for deep learning is equivalent to 250 traditional servers, delivering performance improvements far exceeding Moore’s Law. AI researchers are discovering that the GPU-accelerated computing model is ideal for deep learning.
MD: How widespread will the application of this technology be? Which sectors and markets are already seeing the impact?
AW: AI won’t be an industry – it will be part of every industry. From intelligent assistants to smart homes to self-driving cars, it’s clear that this new computing model will infuse consumer technology as much as it will reinvent enterprise computing.
At NVIDIA, we’re evolving and taking actions so every customer has a unique entry point to our company. This is not just our opportunity; it’s an opportunity for every company, in every industry, to ensure their customers enjoy products and services designed specifically for their needs. Early adopters include online retailers like Amazon and Netflix, who use deep learning to suggest products that fit our preferences.
Healthcare has also been quick to embrace the power of AI. DreamQuark, for instance, has used deep learning to develop a technique for diagnosing diabetic retinopathy, a condition that usually leads to blindness. In the near future, we can expect more personalised care and improvements in the detection and treatment of devastating diseases like cancer.
In warehouses and manufacturing plants, industrial robots that can learn new processes, rather than require costly modification or replacement, will bring huge gains in effectiveness. French start-up Akeoplus (a company focused on future factory and industrial robots) and retail giant Zalando are already making headway here.
MD: Developing the technology is one thing, but what are some of the challenges for creating widespread adoption of AI?
AW: With the recent explosion of AI and the desire to deploy deep learning, the demand for talented deep learning developers outweighs supply. The need for AI is being recognised on an international scale by many governments around the world. In February the UK government pledged to invest millions of pounds in realising its AI opportunity. Accenture claims AI could add around £654 billion ($814 billion) to the UK economy by 2035.
Canada has introduced its Vector Institute to expand the applications of AI through explorations in deep learning and other forms of machine learning. It has received millions of dollars in funding from the Canadian and Ontario governments and a group of 30 businesses, including NVIDIA and Google.

NVIDIA’s role in alleviating some of the challenges for AI is multifaceted. Not only does its core hardware technology, the GPU, accelerate deep learning, but it has also made significant investments in deep learning software tools, training and education to make it easier for scientists and developers to work on AI projects, increase productivity and shorten development cycles.
The NVIDIA Inception Program has been developed to support disruptive AI start-ups by providing access to hardware, marketing support and training. Our Deep Learning Institute offers hands-on training for developers, data scientists and researchers who are looking to solve challenging problems with deep learning. We have also committed to training 100,000 developers through our Deep Learning Institute this year.
MD: The long-held nervousness about the impact that AI and automation will have on certain professions seems to be intensifying. How widespread, significant and quickly implemented do you expect its impact to be?
AW: AI will certainly impact the workplace and our roles within the business. However, this change should be embraced and not feared. Historically, the changes brought about by new technology have been the source of economic growth, and the changes that AI brings will not be any different. AI will augment many jobs, enhance productivity and drive down costs in manufacturing and commercial industries.
In 1999, NVIDIA invented the graphics processing unit, which sparked the growth of the PC gaming market. Today it is revolutionising the next era of computing – artificial intelligence – with its technology powering driverless cars and robotics across manufacturing, healthcare and business.