OpenAI, a non-profit artificial intelligence research organization, and search giant Google have introduced a new method for visualizing the interactions between neurons, called Activation Atlases. Designed to provide a better understanding of the internal decision-making processes of AI systems and to spot their weak points and failures, Activation Atlases build on feature visualization, a technique for understanding what the hidden layers of neural networks represent, with the goal of making machine learning more accessible and interpretable.
The collaborative technique answers the question of what an image classification neural network actually sees when presented with an image, giving users insight into the hidden layers of a network. With Activation Atlases, people can surface unexpected issues in neural networks: for example, places where the network relies on spurious correlations to classify images, or where re-using a feature between two classes leads to strange bugs. They can even use the atlas to attack the model, modifying images to fool it. Activation Atlases were built from a convolutional image classification network, InceptionV1, trained on the ImageNet dataset. The network progressively evaluates image data through about ten layers, each made up of hundreds of neurons, with every neuron activating to varying degrees on different kinds of image patches.
An activation atlas is created by collecting the internal activations of each of these layers of the neural network over a large set of images. These activations, represented by a set of high-dimensional vectors, are then projected into useful 2-dimensional layouts with UMAP, a dimensionality reduction technique. The researchers anticipate that their work will let users peer into convolutional vision networks in a new way and perceive the inner workings of intricate AI systems in a simplified form.
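The pipeline described above can be sketched in a few lines. This is a simplified illustration, not the authors' code: the activation vectors here are random stand-ins for the ones that would be collected from a hidden layer of InceptionV1, and PCA (via SVD) is used as a dependency-free stand-in for UMAP's 2-D projection step. The final step, binning the layout into a grid and averaging the activations in each cell, is what produces the atlas cells that feature visualization would then render as icons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: in the real pipeline these would be
# activation vectors recorded from a hidden layer of InceptionV1
# while classifying a large set of ImageNet images.
n_samples, n_features = 1000, 512
activations = rng.normal(size=(n_samples, n_features))

# Project the high-dimensional activations to a 2-D layout.
# The paper uses UMAP; PCA via SVD is a simple stand-in here.
centered = activations - activations.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # shape: (n_samples, 2)

# Divide the 2-D layout into a grid and average the activation
# vectors that land in each cell; each averaged vector is one
# atlas cell, which feature visualization would turn into an image.
grid = 8
x_bins = np.linspace(coords[:, 0].min(), coords[:, 0].max(), grid)
y_bins = np.linspace(coords[:, 1].min(), coords[:, 1].max(), grid)
ix = np.digitize(coords[:, 0], x_bins)
iy = np.digitize(coords[:, 1], y_bins)

cells = {}
for cx, cy, vec in zip(ix, iy, activations):
    cells.setdefault((cx, cy), []).append(vec)
atlas = {cell: np.mean(vecs, axis=0) for cell, vecs in cells.items()}

print(f"{len(atlas)} atlas cells, each an average over its activations")
```

In the actual system, each averaged vector would then be fed to feature visualization, which optimizes an input image to maximally excite that direction in activation space, producing the tiled icons that make up the published atlases.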