Visual Rotation Detection and Estimation for Mobile Robot Navigation

There are a number of sensor possibilities for mobile robots. Unfortunately many of these are relatively expensive (e.g., laser scanners) or only provide sparse information (e.g., sonar rings). As an alternative, vision-based navigation is very attractive because cameras are cheap these days and computer power is plentiful. The trick is to figure out how to get valuable information out of at least some fraction of the copious pixel stream. In this paper we demonstrate how environmental landmarks can be visually extracted and tracked in order to estimate the rotation of a mobile robot. This method is superior to odometry (wheel turn counting) because it will work with a wide range of environments and robot configurations. In particular, we have applied this method to a very simple motorized base in order to get it to drive in straight lines. As expected, this works far better than ballistic control. We present quantitative results of several experiments to bolster this conclusion.

By: Matthew E. Albert, Jonathan H. Connell

Published in: IBM Research Report RC23029, 2003

LIMITED DISTRIBUTION NOTICE:

This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).

