Three-dimensional occupancy grids with the use of vision and proximity sensors in a robotic workcell

Article

This paper discusses the use of multiple vision sensors and a proximity sensor to obtain a three-dimensional occupancy profile of a robotic workspace, identify key features, and build a 3-D model of the objects in the workspace. The present research uses three identical vision sensors: two mounted on a stereo rig on the sidewall of the robotic workcell, and a third located above the workcell. The vision sensors on the stereo rig provide the three-dimensional position of any point in the robotic workspace. The camera-to-robot calibration for these sensors in stereo configuration has been obtained with the help of a three-layered feedforward neural network, and the Sum of Squared Differences (SSD) algorithm has been used for stereo matching. Similarly, the camera-to-robot transformation for the camera located above the workcell has been obtained with a three-layered feedforward neural network. Three-dimensional positional information from the vision sensors on the stereo rig, two-dimensional positional information from the camera above the workcell, and measurements from a proximity sensor mounted on the robot wrist have been fused with the help of a Bayesian technique to obtain more accurate positional information about locations in the workspace. Copyright © 2004 by ASME.
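The abstract names two of the techniques without implementation detail: SSD stereo matching and Bayesian fusion of positional estimates. A minimal NumPy sketch of both is given below, under assumptions not stated in the paper (rectified stereo pair, a fixed square window, and Gaussian sensor models fused by inverse-variance weighting); the function names and parameters are illustrative, not the authors' code.

```python
import numpy as np

def ssd_disparity(left, right, row, col, window=5, max_disp=16):
    """Estimate disparity at (row, col) of a rectified stereo pair by
    minimizing the sum of squared differences (SSD) between a window in
    the left image and candidate windows along the same row on the right."""
    h = window // 2
    patch_l = left[row - h:row + h + 1, col - h:col + h + 1].astype(float)
    best_d, best_ssd = 0, np.inf
    for d in range(max_disp + 1):
        c = col - d                      # candidate column in the right image
        if c - h < 0:                    # window would leave the image
            break
        patch_r = right[row - h:row + h + 1, c - h:c + h + 1].astype(float)
        ssd = np.sum((patch_l - patch_r) ** 2)
        if ssd < best_ssd:
            best_ssd, best_d = ssd, d
    return best_d

def fuse_gaussian(z1, var1, z2, var2):
    """Bayesian fusion of two Gaussian estimates of the same quantity
    (inverse-variance weighting); returns the fused mean and variance.
    The fused variance is always smaller than either input variance."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    z = var * (z1 / var1 + z2 / var2)
    return z, var
```

For example, fusing two position estimates of equal variance returns their midpoint with half the variance, which is the sense in which combining the stereo-rig, overhead-camera, and proximity-sensor readings yields "more accurate positional information".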

Cited Authors

  • Kumar, M; Garg, DP

Published Date

  • 2004

Published In

  • American Society of Mechanical Engineers, Dynamic Systems and Control Division (Publication) DSC

Volume / Issue

  • 73 / 2, Part B

Start / End Page

  • 1029 - 1036