Is a new AI winter coming? Maybe it is AI springtime instead.
The term Artificial Intelligence was coined in 1956 to describe a new scientific field built on the following assertion: “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”.
During the following decades, different schools of thought gained momentum, only to end up in what have been called AI winters: periods of “reduced funding and interest in Artificial Intelligence research”.
Since 2012 we have been living through a new hype cycle around Artificial Intelligence, driven mainly by major scientific results and heavy investment in neural networks. The question, then, is: is a new AI winter for neural networks coming soon?
Let’s first take a step back. Have neural networks been a huge leap in AI? Definitely. It is now clear that this approach has helped fields like Natural Language Processing (NLP) and Computer Vision advance significantly, generating many opportunities for companies that understand its advantages and limitations. Neural networks are also opening new possibilities in health research and diagnosis, which is also great news.
On the other hand, will neural networks be able to meet the high expectations of creating an Artificial General Intelligence? It is clear by now that this is very unlikely. Does this mean we are approaching a new AI winter? In my opinion, what is different now is that many AI experts agree that the problem we are trying to solve is enormous, and that it will require completely different approaches and fields working together on solutions that demand thinking outside the box.
We already have good examples of this kind of collaboration proposal:
- Demis Hassabis, founder of DeepMind, has pointed out in several interviews that we are still far from that goal, and that the way to reach the next level is for different scientific fields to work together on the problem.
- AI scientists at Google Brain and DeepMind have acknowledged that machine learning is falling short of human cognition. They have proposed a new approach based on graph networks that may allow computers to find relations between things and generalize more broadly about the world. You can find their full paper, “Relational inductive biases, deep learning, and graph networks”, here.
In conclusion: we now have the computing power and the data; we have scientific results that back up strong new investment; and, most importantly, we now have the collaboration culture we will need, since moving forward demands very broad approaches spanning scientific and humanistic fields. Hopefully this will also include ethics at a global level, as it stands as one of the biggest challenges around AI.
If we truly embrace this collaboration mindset as the basis for AI research, it will open the door to an AI spring rather than a new AI winter.