Ford Releases Self-Driving Data to Improve the Breed
Ford Motor Company is doing its part to improve the development of autonomous vehicles. Every second a self-driving vehicle is operating, it gathers information about the world around it. Cameras and LiDAR help it identify vehicles, pedestrians, signs and anything else in or near the streets, while radar helps it track how fast things are moving around it. Without all this data, self-driving cars wouldn’t even be able to leave a parking lot.

These vehicles need to process a constant stream of information to safely navigate their surroundings. But even before they can do that, engineers and researchers need high-quality data to create the software that teaches self-driving vehicles how to analyze their environments. To further spur innovation in this exciting field, Ford is releasing a comprehensive self-driving vehicle dataset to the academic and research community. There’s no better way of promoting research and development than ensuring the academic community has the data it needs to create effective self-driving vehicle algorithms.

As part of this package, Ford is releasing data from multiple self-driving research vehicles collected over a span of one year — part of advanced research efforts separate from the work we’re doing with Argo AI to develop a production-ready self-driving system. The dataset includes not only LiDAR and camera sensor data and GPS and trajectory information, but also unique elements such as multi-vehicle data and 3D point cloud and ground reflectivity maps. A plug-in is also available that makes it easy to visualize the data, which is offered in the popular ROS format.

There are a number of reasons why these data are noteworthy to researchers. Since the dataset spans an entire year, it includes seasonal variations and varied environments throughout Metro Detroit.
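As a rough illustration of how the trajectory and point cloud data fit together, the sketch below transforms LiDAR returns from a vehicle’s local frame into a shared map frame using a pose from the trajectory. The 2D pose format and the function name are simplifying assumptions for illustration, not the dataset’s actual schema.

```python
import math

def lidar_to_map(points, pose):
    """Transform 2D LiDAR points from the vehicle frame into the map frame.

    points: list of (x, y) returns in the vehicle's local frame (meters)
    pose:   (x, y, yaw) of the vehicle in the map frame -- a simplified
            2D stand-in for the dataset's full 3D trajectory (assumption)
    """
    px, py, yaw = pose
    cos_y, sin_y = math.cos(yaw), math.sin(yaw)
    # Standard 2D rigid-body transform: rotate each point by yaw,
    # then translate by the vehicle's map position.
    return [(px + cos_y * x - sin_y * y,
             py + sin_y * x + cos_y * y) for x, y in points]

# A return 10 m directly ahead of a vehicle rotated 90 degrees
# at map position (5, 0) lands near (5, 10) in the map frame.
mapped = lidar_to_map([(10.0, 0.0)], (5.0, 0.0, math.pi / 2))
```

Registering every vehicle’s sensor returns into one common frame like this is what lets data from separate drives, or separate vehicles, be compared against the same 3D point cloud and ground reflectivity maps.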
It features data from sunny, cloudy and snowy days, not to mention freeways, tunnels, residential complexes and neighborhoods, airports and dense urban areas. Toss in construction zones and pedestrian activity, and researchers now have access to the diverse scenarios self-driving vehicles will find themselves in, helping them design more robust algorithms that can account for dynamic environments.

Most datasets only offer data from a single vehicle, but sensor information from two vehicles can help researchers explore entirely new scenarios, especially when the two encounter each other at different points along their respective routes. Right now, a single vehicle has limited “vision”; you’ll note in our visualizations that some areas are not colored in because the vehicle’s sensors could not penetrate them. But with multiple vehicles in the same general area, it’s feasible that one would detect things the others simply cannot, potentially opening up new routes for multi-vehicle communication, localization, perception and path planning.
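The occlusion-filling idea above can be sketched with a toy occupancy-grid merge: each vehicle reports only the cells its sensors could observe, and combining two views fills gaps neither vehicle could cover alone. The grid-of-cells representation and the `merge_views` helper are hypothetical simplifications, not part of the released dataset.

```python
def merge_views(view_a, view_b):
    """Merge two vehicles' occupancy observations.

    Each view maps a grid cell -> 'occupied' or 'free'; cells a vehicle's
    sensors could not penetrate are simply absent (occluded). When both
    vehicles saw a cell, any report of 'occupied' wins -- a conservative
    choice for path planning.
    """
    merged = dict(view_b)
    for cell, state in view_a.items():
        if merged.get(cell) == 'occupied':
            continue  # keep the conservative 'occupied' report
        merged[cell] = state
    return merged

# Vehicle A cannot see past the obstacle at (2, 0); vehicle B,
# approaching from the other side, can.
a = {(0, 0): 'free', (1, 0): 'free', (2, 0): 'occupied'}
b = {(2, 0): 'occupied', (3, 0): 'free', (4, 0): 'free'}
combined = merge_views(a, b)
```

Here the merged view covers five cells where either vehicle alone covered three, which is the intuition behind using multi-vehicle data for cooperative perception.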