Now that AI, and neural networks in particular, has become commonplace, we can clearly identify its strongest domains and its current limitations. These limitations are responsible for the misunderstanding between AI evangelists, who promise wonders, and users whose expectations remain unfulfilled today.
This technology has existed since the 1950s, with the arrival of Frank Rosenblatt’s first perceptron. The race for computing power described by Moore’s Law has driven its democratization over the last 20 years, and Deep Blue served as the ambassador of this kind of AI to the general public.
Given the growing challenges of data management, the neural network is now at the heart of everyone’s concerns. To respond to the various challenges of data processing and limited computing power, optimized topologies have been developed, but each is restricted to a few very specific needs. For example, convolutional neural networks reproduce the topology of the visual cortex for image processing.
There are many topological models of neural networks today, but each fixed choice of topology unfortunately limits and confines the network’s possibilities of evolution.
Current limitations of neural networks
My background has given me a broad enough perspective to realize that the root of the problem lies in the lack of communication between research fields such as mathematics, biology, statistics, and the behavioral sciences. In this context, I strive to ensure that cross-domain discussions take place.
The importance of topology
One of the keys to moving to the next step in AI is topology. The neural network models as we know them are based on mathematical optimizations or on biomimicry.
This topological confinement is the barrier we must break to move beyond current limitations. Compare artificial neurons with actual biological neurons and it becomes clear: the power of the biological brain derives from a complex topology shaped by millions of years of evolution. Such power cannot come from a single fixed topology; plasticity of the connections alone is not enough.
Fortunately, there are solutions for topological optimization: the alliance of neural-network learning algorithms with the genetic algorithms responsible for evolution. Some off-the-shelf libraries are beginning to incorporate these optimizations. The convergence of deep learning with Topology and Weight Evolving Artificial Neural Networks (TWEANNs) is a step in the right direction. The “traditional” deep-learning approach requires data; the genetic approach requires only a simulation model in which entities evolve and are selected.
I am convinced that the future of AI lies not in using pre-wired libraries but in developing a neuro-evolutionary approach that gives complete mastery of how the AI works, both its learning and its topological evolution.
AI will not manage to fulfill its promises unless it breaks free of the limitations inherent to its dedicated topological approaches.
Current challenges to the evolution of AI
In addition to the hardware architectures, which need to improve, the software foundation must evolve. We must abandon Python, R, and the like, and choose a development language adapted to the specific issues of neural networks. Of course, the huge code libraries of these languages are an advantage, but they are also a prison. We must change the paradigm: the language must support concurrent, distributed, real-time computing to truly represent neurons and their connections.
Erlang is one example of a language that satisfies all of these prerequisites.
Many companies use AI more as a marketing argument than as the revolutionary tool it should really be. To meet the high expectations placed on AI, a number of barriers must be overcome, both technical and social. There is no doubt that the limits described here will disappear within the next 5 to 10 years, and the companies that understand these points first will flourish.