As demand for Artificial Intelligence technologies continues to grow, e-commerce giants keep expanding as well. To quickly fulfill a tremendous volume and variety of orders, businesses like Amazon, Walmart, and Alibaba are investing heavily in new warehouses. To address labor shortages, several companies are turning to robots. However, reliably grasping a diverse array of products remains a major challenge for robotics.
In a paper published last week, a team of engineers at the University of California, Berkeley, presents a new ambidextrous approach to grasping an assorted range of object shapes without per-object training. Jeff Mahler, a postdoctoral researcher at UC Berkeley and lead author of the paper, noted that no single gripper can handle all objects: a suction cup cannot form a seal on porous objects like clothing, and parallel-jaw grippers may not be able to reach both sides of some tools and toys. The robotic systems used in most e-commerce fulfillment centers rely on suction grippers, which limits the range of objects they can grip.

The UC Berkeley paper instead proposes an ambidextrous approach that is compatible with a range of gripper models. The approach is based on a common “reward function” for each gripper model that estimates the probability that a grasp with that gripper will succeed, enabling the system to quickly decide which gripper to use in each situation. To compute a reward function for each gripper model efficiently, the paper describes a process for learning reward functions from large synthetic datasets, generated rapidly by applying structured domain randomization along with analytic models of the sensors and of the physics and geometry of each gripper.
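The core selection logic described above, scoring candidate grasps with a per-gripper reward function and executing the highest-scoring one, can be sketched roughly as follows. This is purely illustrative: the names (`plan_pick`, `suction_reward`, `jaw_reward`) and the toy scoring heuristics are assumptions for the sake of a runnable example, whereas the actual system learns its reward functions from large synthetic datasets.

```python
# Illustrative sketch of ambidextrous gripper selection.
# The real system trains reward functions on synthetic data; here each
# reward function is a hand-written stand-in returning a success
# probability in [0, 1] for a candidate grasp.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Grasp:
    gripper: str                       # "suction" or "parallel_jaw"
    pose: Tuple[float, float, float]   # simplified 3-D grasp point


def suction_reward(grasp: Grasp) -> float:
    # Toy heuristic: suction scores highest near the center of the bin.
    x, y, _ = grasp.pose
    return max(0.0, 1.0 - 0.1 * (abs(x) + abs(y)))


def jaw_reward(grasp: Grasp) -> float:
    # Toy heuristic: parallel jaws prefer points away from the center,
    # where object edges are reachable from both sides.
    x, y, _ = grasp.pose
    return min(1.0, 0.2 + 0.1 * (abs(x) + abs(y)))


REWARDS: Dict[str, Callable[[Grasp], float]] = {
    "suction": suction_reward,
    "parallel_jaw": jaw_reward,
}


def plan_pick(candidates: List[Grasp]) -> Tuple[Grasp, float]:
    """Score every candidate with its gripper's reward function and
    return the grasp (and thus gripper) most likely to succeed."""
    return max(
        ((g, REWARDS[g.gripper](g)) for g in candidates),
        key=lambda pair: pair[1],
    )


candidates = [
    Grasp("suction", (0.1, 0.2, 0.0)),
    Grasp("parallel_jaw", (2.0, 1.5, 0.0)),
]
best, score = plan_pick(candidates)
print(best.gripper, round(score, 2))  # → suction 0.97
```

Because both reward functions return values on the same probability scale, the planner can compare grasps across gripper types with a single `max`, which is what makes the shared reward formulation convenient.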
When the researchers trained reward functions for a parallel-jaw gripper and a suction-cup gripper on a two-armed robot, they found that their system cleared bins containing up to 25 previously unseen objects at a rate of more than 300 picks per hour with 95 percent reliability. The research was conducted at UC Berkeley’s Laboratory for Automation Science and Engineering (AUTOLAB) in association with the Berkeley AI Research (BAIR) Lab, the Real-Time Intelligent Secure Execution (RISE) Lab, and the CITRIS “People and Robots” (CPAR) Initiative.