Silicon Valley-based lidar technology company Velodyne Lidar Inc. has demonstrated its surround-view lidar solutions for collecting rich perception data in testing and validation at Nvidia’s GPU Technology Conference (GTC) in San Jose, CA. The company’s solutions enable full 360-degree perception in real time, facilitating highly accurate localization and path-planning capabilities, and are available on the NVIDIA DRIVE™ autonomous driving platform. Its sensors’ characteristics are also available on NVIDIA DRIVE Constellation™, an open, scalable simulation platform that allows large-scale, bit-accurate hardware-in-the-loop testing of AVs.
The platform’s DRIVE Sim™ software simulates lidar and other sensors, recreating the inputs of an autonomous vehicle with high fidelity in the virtual world. Velodyne Lidar’s President and Chief Commercial Officer said that Velodyne and NVIDIA are at the forefront of delivering the high-resolution sensing and high-performance computing needed for autonomous driving. He added that, as an NVIDIA DRIVE ecosystem partner, the company’s intelligent lidar sensors are foundational to advancing vehicle autonomy, safety, and driver assistance systems at leading global manufacturers. Headquartered in San Jose, California, Velodyne offers its customers the industry’s largest portfolio of lidar solutions, spanning the full product range required for advanced driver assistance and autonomy by automotive OEMs, truck OEMs, delivery manufacturers, and Tier 1 suppliers.
Velodyne sensors, validated through millions of road miles, help determine the safest way to navigate and direct a driverless vehicle. Additionally, Velodyne sensors improve Level 2+ Advanced Driver Assistance System (ADAS) capabilities, including Automatic Emergency Braking (AEB), Adaptive Cruise Control (ACC), and Lane Keep Assist (LKA). Glenn Schuster, Senior Director of sensor ecosystem development at NVIDIA, pointed out that Velodyne’s lidar sensors help deliver the intelligence needed for automated driving systems and roadway safety by detecting more objects and presenting vehicles with more in-depth views of their surrounding environments.