Tech giant Microsoft has taken the wraps off its Azure Kinect Developer Kit, an all-in-one perception system for computer vision and speech solutions that appears to be a more polished version of Project Kinect for Azure, at Mobile World Congress in Barcelona. According to the company, Azure Kinect is an intelligent edge device that not only sees and hears but understands people, objects, their actions, and the environment. Microsoft said it only makes sense to develop a new device when the company has unique capabilities or technologies to contribute.
Microsoft’s Azure Kinect comprises a 1-megapixel depth sensor, a 12-megapixel RGB camera, and a spatial 7-microphone array. It all comes in a package roughly 5 inches long and 1.5 inches thick that draws less than 950 milliwatts of power. Thanks to a user-toggleable field of view, it works with a range of compute types, and multiple units can be used together to capture a panoramic understanding of an environment. Through Microsoft’s early adopter program, customer Ava Retail used Azure Kinect and the Azure cloud to build self-checkout and grab-and-go shopping platforms, while another customer, a healthcare systems provider, leveraged it to detect when patients fall and to proactively alert nurses to likely falls.
The company’s Azure Kinect Developer Kit is a PC peripheral with advanced artificial intelligence sensors for sophisticated computer vision and speech models. It combines a best-in-class depth sensor and spatial microphone array with a video camera and an orientation sensor, all in one small device with multiple modes, options, and SDKs. The developer kit arrives nearly nine years after Microsoft introduced the first Kinect as a motion-sensing gaming peripheral for its Xbox 360 console. The kit is available in the U.S. and China, starting at USD 399.