[Image: 3D scan by Streamline Automation]
Writing at IEEE Spectrum, Erin Rapacki suggests that the 3D scanning industry is about to greatly impact robotics and asks the question, "Can we somehow automate or crowdsource image tagging of almost every object imaginable?" Her answer is to create robot-accessible 3D scanning databases in the cloud.
The Microsoft Kinect 3D gaming system illustrates the challenge. Like the Kinect, a robot working with people must rapidly and accurately recognize parts of the human body, especially hands, and it must do so in any home, with any age group, any clothing, and any kind of background object. A purely computer-based approach to this calibration has limitations: algorithms would sometimes fail to identify a human hand in a Kinect-generated image, or would "see" a hand where none existed. So Microsoft is said to have turned to humans for help, crowdsourcing the image-tagging job through Amazon's Mechanical Turk, the online service where people are paid to perform relatively simple tasks that computers are not good at. As a result, the Kinect now knows what all (or most) hands look like -- a great outcome, according to Rapacki.
If this scenario becomes reality, all of these 3D images could be aggregated into a robot-friendly database that bots would use as a reference. A robot would capture 3D sensor data of an object in front of it and check whether the data matches one or more of the reference scans. Over time, and with feedback ("Yes, Rosie, this is a plate"), the robot's object-recognition capabilities would continually improve. So you want smarter robots? Then start demanding that online retailers offer 3D scans of their products -- and start creating your own scans. With this data set, robots will finally start to be able to recognize and understand our world of objects.
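The matching step described above can be sketched in a few lines. The snippet below is a minimal illustration, not any production system: it compares a scanned point cloud against labelled reference clouds using a crude Chamfer distance (average nearest-neighbor distance in both directions) and returns the closest label. The function names, labels, and the brute-force distance computation are all illustrative assumptions; a real system would use robust local features and an indexed database.

```python
import numpy as np

def chamfer_distance(a, b):
    """Crude symmetric Chamfer distance between two point clouds.

    For each point in one cloud, find the distance to its nearest
    neighbor in the other cloud; average both directions.
    """
    # Pairwise distances between every point in a and every point in b.
    d_ab = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return d_ab.min(axis=1).mean() + d_ab.min(axis=0).mean()

def recognize(scan, reference_db):
    """Return the label of the reference cloud closest to the scan."""
    return min(reference_db,
               key=lambda label: chamfer_distance(scan, reference_db[label]))
```

A noisy rescan of a known object would then match its reference cloud, and the feedback loop ("Yes, Rosie, this is a plate") amounts to adding corrected scans to `reference_db`.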
How do we collect 3D data for every possible object? Luckily, according to Rapacki, a large hacker community has formed around the Kinect sensor, and startups like MatterPort are enabling quick 3D rendering of objects simply by capturing images with the Kinect from a few angles. The results are still crude, but as sensors and algorithms improve, you can imagine "3D-fying" a scene becoming as easy as snapping a picture of it. In fact, technologies like the Lytro and other "computational cameras" that capture both the intensity and the angle of light, allowing users to refocus already-snapped photos, could also help with the creation of 3D images.
Underpinning the many scenarios of digital information interacting with physical spaces will be robust methods for representing architectural geometry. The Columbia University Robotics Group has been pioneering methodologies to acquire and integrate spatial information through laser scanning. Laser scanning produces dense point clouds that give extremely accurate measurements of the spatial and material reality around us. This data set will be utilized for future research projects on augmented reality and other spatial/informational hybrids.
A plethora of 3D scanning systems is available now, including those from Scantech, 3D Systems, NextEngine, Streamline Automation, Artec and more. There are even do-it-yourself systems that work quite well. Databases of 3D content are growing exponentially.
In the future, cloud robotics applications, such as those built on ROS (the open-source Robot Operating System), could make use of these databases. Fast on-board 3D scanners, with their acquired data linked to the cloud, would enable object familiarization, tracking, error correction, and much more.
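The cloud side of this loop can be sketched as a toy stand-in for a shared reference database: robots upload descriptors computed from their scans, query for the nearest labelled match, and send corrective feedback that grows the database. The class below is a hypothetical illustration (the names `CloudObjectDB`, `query`, and `feedback` are assumptions, and a real service would run remotely with an indexed store rather than a linear scan).

```python
class CloudObjectDB:
    """Toy stand-in for a cloud-hosted 3D object reference database.

    Stores labelled feature descriptors and answers nearest-match
    queries; feedback() adds corrected examples so that recognition
    improves over time, as more robots contribute scans.
    """

    def __init__(self):
        self.entries = []  # list of (descriptor, label) pairs

    def query(self, descriptor):
        """Return the label whose stored descriptor is closest, or None."""
        if not self.entries:
            return None
        best = min(self.entries,
                   key=lambda e: sum((x - y) ** 2
                                     for x, y in zip(e[0], descriptor)))
        return best[1]

    def feedback(self, descriptor, label):
        """Record a human- or robot-confirmed (descriptor, label) pair."""
        self.entries.append((descriptor, label))
```

Each confirmed correction makes the database a little better for every robot that queries it -- the core appeal of the cloud approach.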
It's Time to Start 3D Scanning the World - IEEE Spectrum