Wednesday, December 18, 2019
Autonomous Vehicles with Depth Perception, Part 1
When we look at something, we see quite a bit more than the object we're focusing on, thanks to our peripheral vision. And we can immediately and intuitively tell how far away that something is. Not so with robots.

Imaging systems that combine cameras and software allow robots to see, but they need complicated and involved vision systems to achieve panoramic views and to perceive depth.

Yet the wide-screen views and depth perception they typically lack would give robots such as drones and self-driving cars much more feedback to use as they navigate their worlds, says Donald Dansereau, a Stanford University postdoctoral fellow in electrical engineering.

Dansereau was part of a research team at two California universities that developed a 4D camera with capabilities unseen in single-lens, robotic-vision image-capture systems.

Researchers with a prototype of their single-lens, panoramic-view camera for easier robotic navigation. Image credit: L.A. Cicero

The single-lens panoramic light-field camera gives robots a 138-degree field of view and can quickly calculate the distance to objects, lending depth perception to robotic sight, say researchers at Stanford University and the University of California, San Diego, which teamed up for the project. Those capabilities will help robots move through crowded areas and across semi-transparent landscapes obscured by sleet and snow.

"As autonomous cars take to the streets and delivery drones to the skies, it's crucial that we endow them with truly reliable vision," Dansereau says.

Today's robots have to move and shift through their environment while their onboard imaging systems gather different images and piece them together into a complete view assembled from separate perspectives, he adds. With the new 4D camera, robots could gather the same information from a single image, says Gordon Wetzstein, Stanford assistant professor of electrical engineering. Wetzstein's lab collaborated with the lab of Joseph Ford, an electrical engineering professor at the University of California, San Diego, on the project.

The extra dimension is the wider field of view. Humans typically have about a 135-degree vertical and a 155-degree horizontal field of view; those numbers describe the total area across which we can see objects with our peripheral vision while focusing on a central point, according to the researchers. For a robot, the difference between the new lens and a typical lens is like the difference between looking through a window and looking through a peephole, Dansereau says.

Researchers in Ford's lab designed the new camera's spherical lens, which gives the camera a sightline spanning nearly one third of the circle around it. The camera no longer needs the fiber bundles used in an earlier version the lab developed; instead, the new lens relies on several smaller lenses nested behind it along with digital signal processing.

All of this solved one set of issues. But how would the researchers add another critical component, depth perception? To find out, read Part 2.

Jean Thilmany is an independent writer.

For Further Discussion

"As autonomous cars take to the streets and delivery drones to the skies, it's crucial that we endow them with truly reliable vision." Prof. Donald Dansereau, Stanford University
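The depth-perception details arrive in Part 2, but the basic cue a light-field (4D) camera exploits can be sketched with simple geometry: the small lenses nested behind the main lens each see the scene from a slightly shifted vantage point, and the amount a feature shifts between those views, its disparity, depends on how far away it is. The short Python sketch below is only a generic illustration of that depth-from-disparity idea, using made-up focal length, baseline, and disparity values; it is not the Stanford/UC San Diego team's actual processing pipeline.

```python
# Illustrative sketch only: generic depth-from-disparity triangulation of the
# kind light-field cameras rely on. All numbers below are assumptions chosen
# for the example, not specifications of the researchers' camera.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Estimate distance to a point seen from two slightly offset views.

    focal_length_px -- focal length expressed in pixels
    baseline_m      -- separation between the two viewpoints, in meters
    disparity_px    -- how far the point shifts between the views, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

if __name__ == "__main__":
    # A feature that shifts 8 pixels between two views spaced 5 mm apart,
    # imaged with a 700-pixel focal length, works out to roughly 0.44 m away.
    print(depth_from_disparity(focal_length_px=700, baseline_m=0.005, disparity_px=8))
```

Because a light-field camera records many such slightly offset views in a single exposure, one shot can carry this kind of distance information; how the researchers turn it into usable depth perception is the subject of Part 2.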