A new perspective on AI
At the recent O’Reilly AI Conference, two keynote presentations discussed human-machine partnerships as the foundation for better AI:
- ‘Machines as thought partners’ by Dr David Ferrucci (Elemental Cognition)
- ‘Building machines that learn and think like people’ by Josh Tenenbaum (MIT)
In a nutshell, both agree on some core characteristics of current Artificial Intelligence (AI):
- AI is great at taking very specific questions and providing specific answers in a very narrow domain;
- Before it can do anything, AI requires training on huge data sets;
- Currently, AI is really bad at explaining how it came to its answers.
This kind of AI is very useful in fields like marketing, where it has proven its value many times over by now (think Google, Amazon, etc.).
Outside of marketing, IBM’s Watson famously won Jeopardy! in 2011 and has more recently been applied in fields like health (IBM Watson Health), showing it can have great value there as well.
Limitations of (current) AI
The main limitations of current AI (at least, the two I want to focus on here) are:
- it requires huge training sets;
- it is a black box.
You first have to feed it huge amounts of training data and train it; only then does it provide you with answers (which can be very accurate).
The great thing about this is that it allows an AI to analyse huge amounts of data far faster than any human will ever be able to.
The not-so-great thing is that while current AI gives us really powerful correlations, it doesn’t show us the causal relationships. One of the difficulties in training AIs is making sure a neural network is actually answering your question, rather than some other question that happens to produce the same correlations. Because you cannot see how it reaches its conclusions, there is no way to tell the difference.
The AI is basically a huge mathematical equation that calculates an outcome for any input you provide. You don’t get to see the actual formula, though, let alone understand it. It truly is a black box.
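To make this concrete, here is a minimal sketch (all weights and names are made up for illustration, not taken from any real system) of what a trained network boils down to: a fixed numerical function. You can compute an answer for any input, but the numbers themselves explain nothing.

```python
import math

# Made-up weights of a tiny 2-input, 2-hidden-unit, 1-output network.
# In a real system there would be millions of these numbers.
W1 = [[0.9, -1.2], [0.4, 0.7]]   # input -> hidden
W2 = [1.5, -0.8]                 # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(x1, x2):
    # Hidden layer: weighted sums passed through a non-linearity.
    h = [sigmoid(x1 * W1[0][i] + x2 * W1[1][i]) for i in range(2)]
    # Output: another weighted sum, squashed to a value between 0 and 1.
    return sigmoid(h[0] * W2[0] + h[1] * W2[1])

# We can compute an answer for any input we like...
print(predict(1.0, 0.0))
# ...but nothing in W1 or W2 tells us *why* the network answers as it does.
```

Inspecting `W1` and `W2` is perfectly possible; interpreting them is not. That gap between “visible” and “understandable” is the black box.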
AI vs human thinking and learning
This is rather similar to the problem of understanding how our own brains work. We know what our brains can do, but we still don’t really know how they create our consciousness, creativity and personalities.
However, what we do know is that we build mental models of the world from the day we are born (in fact, already in the womb), and that we can build such models from very little data. As Tenenbaum nicely illustrates in his presentation, from as little as a single image of an unfamiliar object, we can immediately recognise that object in other pictures, even when they show different variations of it in different orientations.
This is something no AI is currently capable of, while even the smallest child can do this with ease.
In addition to building these models, we know what we don’t know or understand yet. When we don’t fully understand something, we ask ‘Why?’ (anyone who has children knows this all too well). We use the answers to our ‘why’ questions to update our mental model. We also test our mental models against factual data we find in our environment; when that data doesn’t fit the model, we again ask ‘why’.
Overly simplified: we learn by building our own mental models of the world, and we keep asking ‘why’ until we are confident enough about those models.
In a sense, our learning is a constant dialogue. The core enabler of this dialogue is that we are able to communicate about our mental models. The parent has a deeper, better-tested mental model than the child, and the ‘why game’ allows the child to improve its own models.
In order to communicate effectively about our mental models, we first need an underlying lower-level model that we already share: a shared frame of reference. This underlying model forms the scaffolding of our shared learning.
Moving towards human-machine partnerships
These capabilities allow us to form (large) teams of people that coordinate and learn together.
In his great pair of books, ‘Sapiens’ and ‘Homo Deus’, Yuval Noah Harari explains that this is how we humans moved from being rather irrelevant, weak animals to completely ruling the world in a very short time (roughly 70,000 years).
It is obvious that the way we learn differs from the way most current AI systems learn, and this is exactly where most current AI is less useful than we’d like it to be.
Wouldn’t it be great if we could shape AI to be a very powerful new thought partner in our working and learning teams?
This seems to be a much nicer scenario for our future than the bleak predictions that we will be replaced by our machines. We should build AI that augments us and makes us better at dealing with our challenges. As IBM puts it: AI should stand for ‘Augmented Intelligence’, not ‘Artificial Intelligence’.
This is also the message that David Ferrucci, Josh Tenenbaum, Yuval Noah Harari and others are broadcasting.
Kasparov and freestyle chess
A great proponent of building human-machine partnerships is someone with first-hand experience of facing an AI on the verge of ‘replacing’ humans: Garry Kasparov. In 1997 he was famously beaten by IBM’s Deep Blue chess computer.
Until then, many people were convinced that chess was one of those things computers could never do better than humans.
Since then, Kasparov has pioneered new forms of chess like ‘freestyle chess’, in which human-machine partnerships compete against each other. Freestyle chess has repeatedly demonstrated that the winners are often neither the teams with the best chess players nor those with the best chess computers: the key to success turns out to be forming the best human-machine partnership.
It is critical to balance the power of the human mind with the computational power of the machine. Additionally, the best teams know how to optimally coordinate the work between the different parties.
That doesn’t sound all that different from the success factors of human-only partnerships, now does it?
How does this relate to SUIRON?
The ideas described above are part of the fundamental principles behind SUIRON (which is why the SUIRON logo is an abstract Yin/Yang sign of two brains: one human, one machine).
Let’s start with a difference in approach between typical AI development and the development of SUIRON: the starting point. SUIRON’s development started firmly in the realm of knowledge models and data, and works towards including AI. Ferrucci, Tenenbaum and others start from an AI perspective and work towards including knowledge models (of the non-statistical kind).
The goal of SUIRON is to provide a platform for storing and sharing models and data, and for reasoning and communicating about them.
It provides fundamental support for multiple hierarchical levels of models, each providing a more powerful language to build on. Each level can function as a possible shared frame of reference for humans and machines to collaborate and learn together.
It also allows storing factual data based on the grammar and semantics defined in the models. Through this combined storage of models and data, SUIRON provides a platform for testing models against factual observations, and for searching for clues about new concepts and the relationships between them.
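As an illustration only (the names and structures below are hypothetical and do not come from SUIRON’s actual API), the idea of a model defining the grammar for factual data, which can then be tested against it, might be sketched like this:

```python
# Hypothetical sketch: a "model" defines concepts and which relationships
# may link them; factual data is then checked against that model.
# None of these names or structures are taken from SUIRON itself.

# A minimal model: a set of concepts, plus the allowed relationships.
model = {
    "concepts": {"Person", "City"},
    "relations": {"lives_in": ("Person", "City")},
}

# Factual data expressed in the model's vocabulary:
# (subject_type, relation, object_type)
facts = [
    ("Person", "lives_in", "City"),
    ("City", "lives_in", "Person"),   # deliberately violates the model
]

def check(model, fact):
    """Return True if a fact fits the grammar the model defines."""
    subject_type, relation, object_type = fact
    expected = model["relations"].get(relation)
    return expected == (subject_type, object_type)

for fact in facts:
    print(fact, "fits model" if check(model, fact) else "does NOT fit model")
```

The point of the sketch is the division of labour: the model supplies the vocabulary and constraints, while facts that fail the check become exactly the kind of ‘why’ signal that prompts refining either the data or the model.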
This process of searching and matching is exactly where AI is very powerful, so SUIRON integrates with AI technologies and delegates that work to them.
SUIRON is not an AI solution itself; in fact, it doesn’t need to be. There are multiple very good AI technologies available on the market, increasingly packaged into ready-to-use services. The approach in SUIRON is to pick and choose whichever AI technology best fits a specific type of challenge.
This also includes employing more traditional logical, functional and imperative methods. Any newly inferred knowledge can be used to improve and enrich the models and data in SUIRON.
The following blog posts will continue exploring these ideas and developments: ‘context is everything’ and ‘we are temporal beings in a temporal world’.