This article was published on July 17, 2017, in VentureBeat.
Advances in deep learning and other machine learning algorithms are currently causing a tectonic shift in the technology landscape. Technology behemoths like Google, Microsoft, Amazon, Facebook and Salesforce are engaged in an artificial intelligence (AI) arms race, gobbling up machine learning talent and start-ups at an alarming pace. They are building AI technology war chests in an effort to develop an insurmountable competitive advantage.
While AI and machine learning are not new, the momentum behind AI today is distinctly different, for several reasons. First, advances in computing technology (GPU chips and cloud computing, in particular) are enabling engineers to solve problems in ways that weren’t possible before. These advances have a broader impact than just faster, cheaper processors, however. The low cost of computation and the ease of accessing cloud-managed clusters have democratized AI in a way we’ve never seen before. In the past, building a computer cluster to train a deep neural network would have required access to deep pockets or a university research facility. You would also have needed someone with a Ph.D. in mathematics who could understand academic research papers on subjects like convolutional neural networks.
Today, you can watch a 30-minute deep learning tutorial online, spin up a 10-node cluster over the weekend to experiment with, and shut down the cluster on Monday when you’re done – all for the cost of a few hundred bucks. Cloud providers are betting big on an AI future and are investing resources to simplify and promote machine learning in order to win new cloud customers. This has led to an unprecedented level of accessibility, which is breeding grassroots innovation in AI. A comparable technology democratization occurred with the Internet in the 1990s. If AI innovation follows a similar trajectory, the world will be a very interesting place in five years.
Although everybody points to improvements in CPU and GPU hardware as the primary driver of AI innovation, this is only half of the equation. Advances in AI algorithms in the mid-1980s broke the spell of the AI winter of the 1970s. The work of deep learning pioneers like Geoffrey Hinton and Yann LeCun solved some of the critical shortcomings that plagued earlier algorithms. In many ways, algorithms like backpropagation, which Hinton helped popularize, opened the floodgates for future algorithmic innovations, though those improvements happened at a slower, academic pace. DeepMind’s AlphaGo program, for example, combined deep learning with reinforcement learning to enable a computer to beat a world Go champion in 2016 – roughly three decades later.
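To make the idea concrete, here is a minimal sketch of backpropagation training a tiny two-layer network on the classic XOR problem, the kind of task that stumped the single-layer models of the earlier era. The network size, learning rate, and NumPy implementation are illustrative assumptions, not anything drawn from Hinton’s original work or DeepMind’s systems.

```python
# Minimal backpropagation sketch on XOR (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a task a single-layer model cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a tiny 2-4-1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network prediction

    # Backward pass: push the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```

The entire trick is in the backward pass: the error at the output is multiplied back through the network’s weights to tell every earlier layer how to adjust, which is what made training multi-layer networks practical.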
Another aspect that has changed this time around is that we’ve implicitly redefined AI to mean something different, something more realistic. Historically, AI has been defined by the ability of a computer to pass the Turing test, which meant the public wasn’t going to be happy with AI until they had a walking, talking robot. Anything less was considered a failure. We are still far away from creating this kind of general AI, but we are already solving some advanced problems with machine learning, a subset of AI proper. Rather than focus on general intelligence, machine learning algorithms work by improving their ability to perform specific tasks using data. Problems that used to be the exclusive domain of humans – computer vision, speech recognition, autonomous movement – are being solved today by machine learning algorithms.
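A toy example makes that distinction tangible. The short Python sketch below trains a small model to recognize handwritten digits purely from labeled examples: a narrow, specific task learned from data, rather than general intelligence. The choice of scikit-learn and its bundled digits dataset is mine, for illustration only; nothing here is prescribed by the vendors discussed in this article.

```python
# Learning a specific task from data: digit recognition with scikit-learn.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Small labeled dataset of 8x8 handwritten digit images.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network; its accuracy comes entirely from the examples it sees.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
```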
In fact, machine learning has become such a huge area of focus, thanks to its success, that for all practical purposes the term has become synonymous with AI. Ultimately this is a good thing. The more people associate the term AI with real-world applications of machine learning like self-driving cars, the more they realize that AI is a real thing, it is here to stay, and it holds the promise of reshaping the technology landscape over the next several years.
Enterprises should take advantage of this shift by aligning their cloud and technology stacks with providers that are leaders in AI. The gap between the AI haves and have-nots will be wide, so picking the right technology providers is critical. For example, a non-AI-powered CRM system might require your sales force to find prospective customers based on the last time they were contacted. This is okay, but it’s a fairly rudimentary approach. An AI-powered CRM system, in contrast, could proactively feed leads to sales reps in real time, using algorithms designed to maximize the likelihood of a sale based on breaking information about the customer, their company, and even the sales rep herself. Choosing the right CRM vendor, in this case, could have a direct and significant impact on revenue.
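As a rough illustration of what sits under the hood of such a system, the sketch below scores hypothetical leads with a logistic regression and surfaces the most promising ones first. Every feature name and data point here is invented for the example; a real CRM vendor’s model would be far richer.

```python
# Hedged sketch of a lead-scoring model (all features and data are hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000

# Hypothetical features per lead: days since last contact, website visits,
# company size (log employees), and whether the rep has closed deals in
# this industry before.
X = np.column_stack([
    rng.integers(0, 90, n),     # days_since_contact
    rng.poisson(3, n),          # site_visits
    rng.normal(5, 2, n),        # log_company_size
    rng.integers(0, 2, n),      # rep_industry_experience
])

# Synthetic "converted" labels with a plausible relationship to the features.
logits = -0.03 * X[:, 0] + 0.4 * X[:, 1] + 0.2 * X[:, 2] + 0.8 * X[:, 3] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train the model, then score a new batch of leads and rank them, best first.
model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X[:5])[:, 1]
print(np.argsort(scores)[::-1])
```

The point is not the particular algorithm but the workflow: instead of sorting leads by last-contact date, the system estimates each lead’s probability of converting and hands reps the ranked list.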
This year, enterprises and mid-sized technology companies will start to realize that they’ll end up on the wrong side of the growing gap between the AI haves and have-nots if they don’t develop strong in-house machine learning capabilities. However, rather than hiring teams of AI innovators, as the first wave of AI tech giants has done, these companies should build their AI capabilities using out-of-the-box machine learning tools from AI-focused platform providers like Microsoft and Google. The increasing demand for AI-driven technology, combined with the dearth of machine learning talent in the labor pool, will force the democratization of data science. Indeed, Amazon and Microsoft are betting the farm on this trend, which is why they’re making such huge investments in machine learning education and easy-to-use AI tools.