
Researchers Develop An AI Robotic Perception System That Can Identify Objects By Touching Them

Artificial Intelligence News

In a paper titled ‘Learning to Identify Object Instances by Touch: Tactile Recognition via Multimodal Matching’, a team of researchers at the University of California, Berkeley, together with researchers at Carnegie Mellon University and elsewhere, devised an Artificial Intelligence framework that lets robots learn to classify objects by touch alone. The researchers set out to design an AI system able to determine whether a set of physical observations corresponds to a particular object. As they explain in the paper, perception is inherently multimodal: humans naturally connect the appearance and material properties of objects across multiple senses.
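To make the multimodal matching idea concrete, below is a minimal sketch in PyTorch of how separate visual and tactile encoders could be trained to map observations of the same object into a shared embedding space. The network sizes, input resolution, and contrastive margin are illustrative assumptions, not values taken from the paper.

# Sketch of multimodal matching: two encoders map images and tactile
# readings into a shared embedding space, trained so that pairs from the
# same object land close together. All sizes here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small CNN mapping a 3x64x64 input to a unit-norm embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        z = self.fc(self.conv(x).flatten(1))
        return F.normalize(z, dim=1)  # unit-length embeddings

visual_enc = Encoder()   # encodes RGB images of the object
tactile_enc = Encoder()  # encodes GelSight readings (also camera images)

def matching_loss(img, touch, same_object, margin=0.5):
    """Contrastive loss: pull matching image/touch pairs together,
    push non-matching pairs at least `margin` apart."""
    d = (visual_enc(img) - tactile_enc(touch)).pow(2).sum(1)
    return torch.where(same_object, d, F.relu(margin - d.sqrt()).pow(2)).mean()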

According to the researchers, tactile sensors lack the global view of image sensors; instead, they operate on local surface properties, and their readings tend to be more difficult to interpret. To overcome these limitations, the team paired a convolutional neural network with high-resolution GelSight touch sensors, which use a camera to capture readings of the gel deformations made by contact with objects. The researchers mounted two GelSight sensors on the fingers of a parallel jaw gripper and used the rig to compile a dataset of camera observations and tactile readings from cases where the gripper successfully closed its fingers around target objects. In all, they accumulated samples for 98 different objects, 80 of which were used to train the convolutional neural network.
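A small sketch of the data split described above, assuming one identifier per physical object: samples from 80 of the 98 objects train the network, while the remaining objects are held out entirely, so the test measures recognition of instances never seen during training. The helper name and the tuple layout are hypothetical.

# Hypothetical object-level train/test split: 80 objects for training,
# 18 held out so evaluation covers never-before-touched instances.
import random

random.seed(0)
object_ids = list(range(98))          # one id per physical object
random.shuffle(object_ids)
train_objects = set(object_ids[:80])  # used to train the CNN
test_objects = set(object_ids[80:])   # unseen at training time

def split(samples):
    """samples: list of (object_id, image, left_touch, right_touch)
    tuples, one per successful grasp with the two-finger gripper."""
    train = [s for s in samples if s[0] in train_objects]
    test = [s for s in samples if s[0] in test_objects]
    return train, test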

During the experiments, the AI system was able to correctly infer the identity of objects from tactile data roughly 64.3 percent of the time, including objects it hadn’t encountered during training. The researchers claim it outperformed comparable methods, as well as 11 human volunteers who, across 420 trials, were asked to identify objects by feel as they held them in their hands. They add that there is room for improvement: all images came from the same environment, and their work considered only individual grasps rather than multiple tactile interactions.
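Reusing the encoders from the earlier sketch, one way such identification could work at test time is nearest-neighbor matching in the shared embedding space: a tactile reading is assigned the identity of the candidate image whose embedding lies closest to it. This is a hedged illustration of the general technique, not the paper’s exact inference procedure.

# Sketch of instance identification at test time, assuming recognition
# reduces to nearest-neighbor matching in the shared embedding space.
import torch

@torch.no_grad()
def identify(touch_reading, candidate_images, candidate_ids):
    """touch_reading: 1x3x64x64 tensor; candidate_images: Nx3x64x64;
    candidate_ids: list of N object identifiers."""
    t = tactile_enc(touch_reading)    # 1 x embed_dim
    v = visual_enc(candidate_images)  # N x embed_dim
    scores = (v @ t.T).squeeze(1)     # cosine similarity (unit norms)
    return candidate_ids[scores.argmax().item()]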