
The concept of Machine Intelligence was coined by Alan Turing, the famous British mathematician, in the 1950s. Even then, he believed that machines could think as humans do, so he developed the Turing Test to assess whether they exhibited intelligent behaviour.

Since then, AI systems have experienced impressive development. Examples include autonomous vehicles, chatbots and smart homes, among others.

It is quite common for AI breakthroughs to fuel human imagination in the science-fiction realm. Films like The Terminator illustrate how dangerous machines can become when they are no longer under our control.

AI has also greatly impacted the business landscape. According to Gartner, AI-derived business value will reach $3.9 trillion in 2022. For the same year, the World Economic Forum forecast in The Future of Jobs Report 2018 that AI will create 133 million new jobs.

Despite all the hype around it, the academic perspective on the future development of AI is more cautious. Considering all the milestones achieved, the question that should be answered is: what limits AI?

Stormy Waters

The answer may sound a little tricky, but it exposes important shortcomings in machine-learning systems. Take the example of an image-classification system trained to identify cars. If you want the same system to identify bikes, it has to be trained all over again. This phenomenon is known as Catastrophic Forgetting. Another example occurs when a chatbot is not able to solve a query and human intervention is required to help the customer.
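A toy sketch of catastrophic forgetting (the functions and numbers here are my own illustration, not from any real system): a single-weight linear model trained by stochastic gradient descent on task A, then fine-tuned on task B, loses everything it knew about task A.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_fit(w, xs, ys, lr=0.05, epochs=200):
    """Fit y ≈ w * x by stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            w -= lr * 2.0 * (w * x - y) * x
    return w

xs = rng.uniform(-1, 1, 50)

# Task A: learn y = 2x.
w_a = sgd_fit(0.0, xs, 2 * xs)

# Task B: fine-tune the SAME weight on y = -2x.
w_b = sgd_fit(w_a, xs, -2 * xs)

print(round(w_a, 2), round(w_b, 2))  # ≈ 2.0, then ≈ -2.0: task A is forgotten
```

After fine-tuning, the model answers every task-A query with the wrong sign; nothing of the first task survives, which is exactly the retraining problem described above.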

The reason behind these problems is causation. Current AI systems are unable to understand that cause A produces effect B. Now, you may think that is untrue, because computers are able to see a certain image containing a car and correctly classify it as a car. You are tempted to consider the car image the cause, and the label assigned to the image the effect. If that were true, AI systems would be able to deal with causation, right? Unfortunately, that's not the case!

A Meaningful Distinction

Elias Bareinboim is the director of the Causal Artificial Intelligence Lab at Columbia University. He is a protégé of Judea Pearl, a computer scientist and statistician at UCLA and a Turing Award winner. Both are trying to bring the science of causation into AI.

In the previous example of the car image, Bareinboim and Pearl would argue that it is correlation, not causation, that lets the AI classify the image correctly. A car image has certain patterns, shapes and textures that help the computer classify it.

Both concepts are related but different. The image below illustrates this by comparing ice-cream sales with shark attacks.

Photo by Robert Nichols on Quora

Causation implies correlation, but correlation does not imply causation. In causation, order matters: A causes B is not the same as B causes A.
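The ice-cream/shark pattern can be reproduced in a few lines of simulation (all numbers below are made up for illustration): a hidden common cause, temperature, drives both variables, so they correlate strongly even though neither causes the other. Forcing one of them to change independently, Pearl's "do" intervention, makes the correlation vanish.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hidden common cause: daily temperature.
temp = rng.normal(25, 5, n)

# Temperature drives BOTH variables; neither causes the other.
ice_cream = 3.0 * temp + rng.normal(0, 5, n)    # daily ice-cream sales
sharks = 0.5 * temp + rng.normal(0, 2, n)       # shark attacks (more swimmers)

r_obs = np.corrcoef(ice_cream, sharks)[0, 1]
print(round(r_obs, 2))                          # strong positive correlation

# Intervention: set sales by fiat, independently of temperature.
ice_cream_do = rng.normal(75, 15, n)
r_do = np.corrcoef(ice_cream_do, sharks)[0, 1]
print(round(r_do, 2))                           # ≈ 0: no causal link to sharks
```

A system that only looks at the observational data would happily "predict" shark attacks from ice-cream sales, yet banning ice cream would save no swimmers.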

AI systems are good at spotting correlations, but they are clueless about causation. Techniques such as deep learning can produce precise predictions when a great amount of familiar data is available, but those predictions are limited by the data at hand.

Machine learning has clearly experienced impressive growth. But that progress rests on the many problems that can be solved by relying solely on correlation, and AI will stagnate if systems can't deal with causation. Imagine for a moment that computers didn't need to learn everything from scratch each time you wanted to apply them in a new domain. If they understood cause-effect relationships, computers could take what they have already learnt and apply it to new domains. That's the power of understanding causation!

At the moment, there is little room for computers to establish cause-effect relationships. Reinforcement learning is one technique where a sort of causation is learnt by the AI: an agent learns in an interactive environment (typically games like StarCraft or chess) by trial and error, using feedback from its own actions. Its main limitation is that it doesn't work in more complex environments where, unlike games, the rules are not explicitly stated.
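The trial-and-error loop can be sketched with tabular Q-learning in a tiny made-up environment (a five-state corridor, not any real benchmark): the agent acts, observes the consequence of its own action, and updates its value estimates until the learnt policy reliably reaches the goal.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy corridor: states 0..4, a reward of 1 only for reaching state 4.
N, GOAL = 5, 4
ACTIONS = (-1, +1)                       # step left, step right
q = np.zeros((N, 2))                     # action-value table
alpha, gamma = 0.5, 0.9                  # learning rate, discount factor

for _ in range(500):                     # episodes of trial and error
    s = 0
    while s != GOAL:
        a = int(rng.integers(2))         # explore with random actions
        s2 = min(max(s + ACTIONS[a], 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Feedback from the agent's own action improves its value estimate.
        q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
        s = s2

policy = [ACTIONS[int(q[s].argmax())] for s in range(N - 1)]
print(policy)                            # the learnt policy heads right: [1, 1, 1, 1]
```

This works because the corridor has fixed, simple rules; the same recipe breaks down when the environment's dynamics are open-ended and cannot be enumerated as states and actions.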

A New Hope

The work of Bareinboim and Pearl lets us imagine a future where computers can apply causal reasoning. However, there is still a long way to go.

Humans themselves struggle to interpret causation and correlation. It is not clear that we can implement causation in computers when there are so many examples of humans confusing the two.

Consider how alcohol affects car accidents. Increased alcohol consumption raises the likelihood of car accidents. But is this correlation or causation? It is clear that more alcohol is correlated with more car accidents; it is harder to determine whether alcohol causes them. Maybe there is an independent variable, such as stress, causing people to both drink more and have more accidents.

Pearl and other statisticians have designed a mathematical approach to identify what facts are required to support a causal claim. In the previous example, given the prevalence of alcohol consumption and car accidents, an independent variable causing both would be highly unlikely, so we can state that alcohol consumption causes more car accidents.
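One piece of that mathematical machinery, Pearl's backdoor adjustment, can be shown on simulated data (every probability below is a hypothetical number chosen for illustration): stress confounds drinking and accidents, so the naive comparison overstates the effect, while comparing within stress strata and averaging recovers the true causal effect we built into the simulation.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

stress = rng.random(n) < 0.3                     # hidden confounder
drink = rng.random(n) < np.where(stress, 0.6, 0.2)

# True causal effect of drinking on accident risk: +0.10.
p_acc = 0.05 + 0.10 * drink + 0.15 * stress
accident = rng.random(n) < p_acc

# Naive comparison mixes the effect of drinking with the effect of stress.
naive = accident[drink].mean() - accident[~drink].mean()

# Backdoor adjustment: compare within stress strata, then average.
adjusted = 0.0
for s in (False, True):
    m = stress == s
    effect_s = accident[m & drink].mean() - accident[m & ~drink].mean()
    adjusted += m.mean() * effect_s

print(round(naive, 3), round(adjusted, 3))   # naive is inflated; adjusted ≈ 0.10
```

The adjusted estimate matches the +0.10 effect we planted, while the naive one absorbs part of the stress effect, which is exactly why a causal claim needs more than a raw difference in rates.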

Another important contribution built on Pearl's work is causal Bayesian networks. GNS Healthcare, a company based in Massachusetts, uses this technology to suggest promising paths for researchers to explore. Efforts can thus be focused more efficiently, rather than relying on trial and error or intuition.

Within deep learning, there is an effort to make neural networks do meta-learning and establish causal relationships. Yoshua Bengio, a computer scientist at the University of Montreal and a 2018 Turing Award winner, believes that neural networks may be able to obtain fundamental knowledge of the world by studying similar information across datasets. After enough meta-learning, a neural network could capture that similar, invariant information and reuse it in multiple domains.

The Last Mile

There are promising initiatives that could make AI truly intelligent. According to Pearl, causal reasoning is necessary to reach artificial general intelligence, because it enables the introspection that is at the core of cognition: “What if questions are the building blocks of science, of moral attitudes, of free will, of consciousness.”

Who knows? Maybe in the near future AI will unlock the vast potential we are only beginning to scratch.

References

  1. Brian Bergstein, What AI still can’t do
  2. Stephanie Overby, AI careers and salaries: 7 telling statistics

