Modern AI is more successful than at any point in its history and has become part of our daily lives almost without our noticing. Despite this success, AI still falls short of the promise many hold for the technology, which is why some fear the field is heading into its third winter. This article examines the factors that will determine whether we are heading for an AI summer, AI autumn, AI winter or AI spring, and provides stories that encapsulate the plausible future that each of these AI seasons represents.
Artificial intelligence: overview
Over the past 250 years, the primary drivers of economic growth have been technological innovations that became general-purpose technologies, creating waves of complementary innovations and opportunities. Artificial intelligence, and more specifically machine learning, is a machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it is given. Machine learning is a subset of AI; thus, when this article refers to either AI or machine learning, it means the ability of a system to learn and improve automatically from experience without being explicitly programmed.
PwC predicts that AI will add $16 trillion to the global economy by 2030, while McKinsey estimates this figure at $13 trillion. AI’s most important attributes, which will only grow in importance in the coming years, are connectivity and updatability. AI can connect to an integrated network of countless similar applications, which enables it to process data and learn far faster than humans could ever hope to. The other attribute is updatability, whereby AI stays current with the latest data, which is impossible for humans to do.
The promise of AI has not materialised in its application, which has led some researchers to suggest that the field is heading for its third winter. The previous two AI winters took place when the promise of what AI could do went unmet, causing research funding to dry up and the importance of the research field to be questioned. The Boston Consulting Group surveyed 2 500 CEOs, of whom seven out of ten stated that their AI projects had generated little benefit thus far. Furthermore, two-fifths of the respondents who had invested significantly in AI had not realised any benefits. A PwC survey found that only 4% of respondents planned to deploy AI across their organisations, down from 20% the year before.
Both surveys suggest a cooling of enthusiasm for AI and could indicate that an AI winter is approaching.
The revival of AI over the past few years can be ascribed to three trends that converged, namely the improvement in AI algorithms, the use of big data and cloud supercomputing. These three trends not only influenced the revival of AI but will also determine whether AI is moving into its third winter or not. They will also shape the future use of AI as discussed in the next section.
AI algorithms
One of the limiting factors of current AI algorithms is their lack of cognitive ability: they can only correlate inputs with outputs, and they do so blindly, with no understanding of the broader context. Therefore, although they are powerful pattern-recognition tools, they lack the top-down reasoning that characterises the way humans approach problems and tasks. Instead, AI algorithms are currently trained with bottom-up reasoning that requires mountains of big data, which creates serious limitations because the algorithms do not know how to handle situations where little or no data exist. If future algorithms can improve their cognitive ability by using top-down reasoning, relying less on bottom-up big data, they will more closely resemble the way humans approach problems and tasks and will be applied more broadly.
To create cognitive algorithms that use top-down reasoning, researchers are encouraged to widen the scope, rather than the volume, of what machines are taught. Furthermore, it is suggested that researchers combine current machine-learning techniques with older, symbolic AI approaches that emphasised formal logic, hierarchical categories and top-down reasoning.
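The hybrid approach described above can be illustrated with a minimal sketch. The following example is purely hypothetical and not from any cited research: a bottom-up classifier learns label frequencies from (scarce) training examples, and when it has seen no relevant data at all it falls back on hand-written symbolic rules, the kind of top-down knowledge that needs no training data.

```python
# Hypothetical sketch: a bottom-up learner with a top-down symbolic fallback.
# All data, features and rules here are invented for illustration.
from collections import Counter, defaultdict

# Bottom-up: learn label frequencies per feature from a tiny training set.
training = [({"wings", "feathers"}, "bird"),
            ({"wings", "feathers"}, "bird"),
            ({"fur", "whiskers"}, "cat")]

counts = defaultdict(Counter)
for features, label in training:
    for f in features:
        counts[f][label] += 1

# Top-down: hand-written symbolic rules that require no training data.
rules = {"scales": "fish", "fur": "mammal"}

def classify(features):
    votes = Counter()
    for f in features:
        votes.update(counts[f])
    if votes:                      # data available: use the learned pattern
        return votes.most_common(1)[0][0]
    for f in features:             # no data: fall back on symbolic rules
        if f in rules:
            return rules[f]
    return "unknown"

print(classify({"wings"}))   # seen in training -> "bird"
print(classify({"scales"}))  # never seen -> rule fires -> "fish"
```

The point of the sketch is the fallback path: where the purely statistical component has no data, the symbolic component still produces a sensible answer.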
Data requirements
The data requirements of AI algorithms depend on whether algorithms will develop the cognitive function of top-down reasoning in the future or remain limited to bottom-up reasoning, which requires vast amounts of training data. Researchers found that one of the key factors that differentiates AI leaders from laggard organisations is their approach to data.
Current AI algorithms require large numbers of carefully labelled examples, and those labels usually have to be applied by humans, a process known as data wrangling. Data wrangling takes approximately 80% of the time spent on an AI project. The more complex the AI algorithms and the more parameters they have, the higher the cost in terms of training data; more computing power, which tends to be cloud computing, is then also required.
There are some problems with training data that need to be taken into account:
- In many cases the training data contain a bias which is then transferred to the algorithm, creating or reinforcing unfair bias.
- The use of personal data has brought about ethical constraints and also privacy issues.
- Apart from the training data, AI algorithms require new data that will allow them to improve their accuracy in terms of prediction.
One of the ways to deal with these data problems is to make up some of the data by creating synthetic virtual training data. It has been found that algorithms trained on synthetic data performed better than algorithms trained on real data alone. Synthetic data is also attractive because it addresses privacy concerns relating to personal data.
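One common way to create synthetic training data, sketched below purely as an illustration (the dataset and noise level are invented, not drawn from any study mentioned in this article), is to generate new examples by adding random jitter to real ones. The synthetic points resemble the real distribution without reproducing any individual record verbatim, which is part of why synthetic data can ease privacy concerns.

```python
# Hypothetical sketch: augmenting a small real dataset with synthetic
# examples created by adding Gaussian noise to real points.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# A toy "real" dataset: 2-D points with class labels A and B.
real = [((1.0, 1.2), "A"), ((0.9, 1.1), "A"),
        ((3.0, 3.1), "B"), ((3.2, 2.9), "B")]

def synthesize(dataset, n_per_example=10, noise=0.1):
    """Generate synthetic examples by jittering each real example."""
    synthetic = []
    for (x, y), label in dataset:
        for _ in range(n_per_example):
            synthetic.append(((x + random.gauss(0, noise),
                               y + random.gauss(0, noise)), label))
    return synthetic

augmented = real + synthesize(real)
print(len(augmented))  # 4 real + 40 synthetic = 44
```

In practice, more sophisticated generators (simulators or generative models) are used, but the principle is the same: expand a scarce, sensitive dataset into a larger one the algorithm can train on.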
Computational power
One of the key trends that has brought about the revival of AI is near-unlimited access to supercomputing in the cloud, a market estimated at $70 billion in 2015 which, driven by the continual growth of big data, has grown at a compound annual rate of more than 50% since 2010. One of the central laws used to predict computing power is Moore’s Law, which states that computing power doubles roughly every two years at a constant cost.
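Moore’s Law as stated above is simple compound growth, which a few lines make concrete (the 2020–2030 horizon below is chosen only for illustration):

```python
# Moore's Law: computing power doubles roughly every two years
# at constant cost, i.e. growth factor = 2 ** (years / 2).
def moores_law_factor(years, doubling_period=2.0):
    """Growth factor in computing power after `years`."""
    return 2 ** (years / doubling_period)

# Over a decade at the historical rate: 2 ** (10 / 2) = 32x the power.
print(moores_law_factor(10))  # 32.0
```

A 32-fold increase per decade is what the industry has come to rely on, which is why the slowdown discussed next matters so much for AI.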
The explosion in demand for computing power has put pressure on Moore’s Law, as shrinking computer chips is getting harder and the associated benefits are not what they were. This has led to optimisation techniques such as changing the computer architecture to follow the structure of the data being processed, or eliminating the time AI models spend multiplying numbers by zero.
Other researchers are looking at alternative ideas such as quantum computing and neuromorphic chips. These two ideas are explained below:
- Quantum computing uses the counter-intuitive properties of quantum mechanics to provide big speed-ups for some types of computation in which a computer is trying to make trade-offs between millions of variables to arrive at a solution that minimises as many of them as possible.
- Neuromorphic chips are inspired by biology: chip makers are investigating chips containing components designed to mimic more closely the electrical behaviour of the neurons that make up biological brains.
Companies should take full advantage of collaboration between AI and humans. They must understand how humans can most effectively augment machines, how machines can enhance what humans do best, and how to redesign business processes to support this partnership.
Four AI development scenarios by 2030
Figure 1 was developed by identifying the two main trends that will shape the future of AI by 2030: the development of cognitive ability by AI algorithms to use top-down reasoning, and the development of data sharing and use according to a global framework. This global framework would protect data privacy, govern what the data may be used for and ensure that the data are usable and uniform, because without data even the most cognitive algorithms cannot function. The two trends influence each other: as cognitive ability improves, algorithms become less data-dependent, and vice versa. Both trends also affect the computing power that will be needed in the future, which is why they were chosen as the axes of the scenarios.
Figure 1 shows the four scenarios that will be described by way of stories in the next series of posts depicting AI summer, AI autumn, AI winter and AI spring.