Last update: Sept. 8, 1998
Publications on the Use of Perspective:
BibTeX references.
Massimo Bertozzi, Alberto Broggi, & Alessandra Fascioli,
In Proc. IEEE Intelligent Vehicles Symposium, Stuttgart, Germany, October 1998.
This paper addresses a key problem for autonomous road driving. A large number of autonomous driving systems developed worldwide rely on the strong assumption of motion on a flat road. This assumption is of basic importance since, in the great majority of these systems, the first stage of processing is based on the Inverse Perspective Mapping (IPM), which removes the perspective effect from the acquired image. This paper presents an extension to the IPM that removes the assumption of a flat road ahead of the vehicle and allows the road texture to be recovered even in the presence of bumps and hills. The paper discusses the results of applying this technique to synthetic images; it is now being integrated into the GOLD system on the ARGO autonomous vehicle.
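For readers unfamiliar with the transform the paper extends: classic flat-road IPM projects each cell of a bird's-eye ground grid into the image through a pinhole model and samples the pixel there. The sketch below is a minimal illustration of that baseline (not the paper's non-flat extension); all parameter values (focal length, principal point, camera height, pitch) are illustrative, not ARGO's actual calibration.

```python
import numpy as np

def ipm_flat_road(img, f, cx, cy, h, pitch, x_range, z_range, out_shape):
    """Flat-road Inverse Perspective Mapping (bird's-eye remap).

    Each ground cell (x lateral, z forward, metres) is projected into
    the image assuming a pinhole camera at height h above a flat road,
    pitched down by `pitch` radians, then sampled (nearest neighbour).
    """
    rows, cols = out_shape
    xs = np.linspace(x_range[0], x_range[1], cols)
    zs = np.linspace(z_range[1], z_range[0], rows)   # far rows at top
    X, Z = np.meshgrid(xs, zs)
    c, s = np.cos(pitch), np.sin(pitch)
    # Camera-frame coordinates of ground points (image y axis points down).
    x_cam = X
    y_cam = h * c - Z * s
    z_cam = h * s + Z * c
    u = np.round(cx + f * x_cam / z_cam).astype(int)
    v = np.round(cy + f * y_cam / z_cam).astype(int)
    out = np.zeros(out_shape, dtype=img.dtype)
    valid = (u >= 0) & (u < img.shape[1]) & (v >= 0) & (v < img.shape[0])
    out[valid] = img[v[valid], u[valid]]
    return out

# Toy usage: remap a synthetic 640x480 gradient image to a 10 m x 35 m grid.
img = np.tile(np.arange(640, dtype=np.float64), (480, 1))
bev = ipm_flat_road(img, f=400.0, cx=320.0, cy=240.0, h=1.2, pitch=0.1,
                    x_range=(-5.0, 5.0), z_range=(5.0, 40.0),
                    out_shape=(200, 100))
```

When the road is not flat, the ground cells no longer sit on the plane y = -h, which is exactly the assumption the paper removes.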
Massimo Bertozzi, Alberto Broggi, & Alessandra Fascioli,
Image & Vision Computing, vol.16(8), pp.585-590, June 1998.
This paper discusses an extension of the Inverse Perspective Mapping (IPM) geometrical transform to the processing of stereo images and presents the calibration method used on the ARGO autonomous vehicle. The article also features an example of application in the automotive field, in which the stereo Inverse Perspective Mapping helps to speed up the processing.
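The core idea behind stereo IPM is that, after both views are remapped to the ground plane, pixels lying on the road coincide while anything rising above the plane maps to different ground cells and therefore shows up in the difference image. A minimal sketch of that comparison step, on synthetic bird's-eye data (the threshold and patch are illustrative, not the paper's method in detail):

```python
import numpy as np

def stereo_ipm_difference(bev_left, bev_right, thresh):
    """Flag ground cells where the two bird's-eye remaps disagree.

    On a flat road the remapped views agree; above-plane structure
    disagrees, so a simple thresholded difference isolates candidates.
    """
    diff = np.abs(bev_left.astype(np.float64) - bev_right.astype(np.float64))
    return diff > thresh

# Toy example: identical ground texture in both remapped views, plus one
# patch that disagrees between them (standing in for an obstacle).
ground = np.random.default_rng(0).uniform(0.0, 1.0, (100, 100))
left, right = ground.copy(), ground.copy()
right[40:60, 45:55] += 0.5            # obstacle seen differently in the right view
mask = stereo_ipm_difference(left, right, thresh=0.25)
```

The difference image is cheap to compute, which is one way such a remap can speed up later processing: expensive analysis is restricted to the flagged cells.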
Antonio Guiducci *
Computer Vision & Image Understanding, vol.70(2), pp.212-226, May 1998.
* Istituto Elettrotecnico Nazionale Galileo Ferraris,
Systems Engineering Department - Computer Vision Laboratory, Torino, Italy.
A new algorithm is presented for the local 3D reconstruction of a road from its image plane boundaries. Given, on the same scan line, the images of two boundary points of the road and the tangents to the boundaries at the same two points, the algorithm computes the position in 3D space of the road cross segment that has one of the two points as its end point. The approximations introduced are fully discussed and formulas are given for the determination and comparison of the various sources of error. The algorithm is very fast and has been implemented in real time on the mobile laboratory under development at the author's institution.
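The simplest version of this recovery, which the paper refines with tangent information and an error analysis, is back-projecting the two boundary pixels of one scan line onto a flat ground plane to obtain a road cross segment. A hedged sketch under that flat-ground, zero-roll approximation (camera parameters are illustrative, not those of the author's vehicle):

```python
import numpy as np

def backproject_to_ground(u, v, f, cx, cy, h, pitch):
    """Intersect the viewing ray through pixel (u, v) with the flat
    ground plane a height h below the camera; returns (x, z) in metres
    (x lateral, z forward). Pinhole model, pitch in radians, zero roll."""
    xp = (u - cx) / f
    yp = (v - cy) / f
    c, s = np.cos(pitch), np.sin(pitch)
    denom = yp * c + s            # ray must point below the horizon
    if denom <= 0:
        raise ValueError("pixel is at or above the horizon")
    t = h / denom                 # range along the ray to the plane
    return t * xp, t * (c - yp * s)

# Two road-boundary pixels on one scan line -> one 3D road cross segment.
xl, zl = backproject_to_ground(250, 300, f=400.0, cx=320.0, cy=240.0,
                               h=1.2, pitch=0.1)
xr, zr = backproject_to_ground(420, 300, f=400.0, cx=320.0, cy=240.0,
                               h=1.2, pitch=0.1)
width = xr - xl                   # lateral extent of the cross segment
```

Under this model a scan line maps to a constant depth z, so the two end points share the same forward distance; the paper's contribution is doing the reconstruction without assuming the road stays in that plane, and quantifying the resulting errors.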
Filiberto Pla Bañón, J.M. Sanchiz, J.A. Marchant, R. Brivot
Image & Vision Computing, vol.15(6), pp.465-473, June 1997.
Shishir Shah and J.K. Aggarwal **
Machine Vision & Applications, vol.10(4), pp.159-173, 1997.
** Computer and Vision Research Center, Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712-1084, USA
We present an autonomous mobile robot navigation system using stereo fish-eye lenses for navigation in an indoor structured environment and for generating a model of the imaged scene. The system estimates the three-dimensional (3D) position of significant features in the scene and, by estimating its position relative to the features, navigates through narrow passages and makes turns at corridor ends. Fish-eye lenses are used to provide a large field of view, which images objects close to the robot and helps in making smooth transitions in the direction of motion. Calibration is performed for the lens-camera setup, and the distortion is corrected to obtain accurate quantitative measurements. A vision-based algorithm that uses the vanishing points of segments extracted from the scene in a few 3D orientations provides an accurate estimate of the robot orientation. This is used, in addition to 3D recovery via stereo correspondence, to maintain the robot motion along a purely translational path and to remove the effects of any drift from this path in each acquired image. Horizontal segments provide a qualitative estimate of change in the motion direction, and correspondence of vertical segments provides precise 3D information about objects close to the robot. By treating detected linear edges in the scene as boundaries of planar surfaces, a 3D model of the scene is generated. The robot system was implemented and tested in a structured environment at our research center. Results from robot navigation in real environments are presented and discussed.
Key words: Motion stereo - Scene modeling - Fish-eye lens - Depth integration - Navigation
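The orientation-from-vanishing-point step can be illustrated compactly: each extracted segment yields a homogeneous image line, the vanishing point is the least-squares common intersection of those lines, and (for an idealized pinhole view with zero roll and pitch) the yaw needed to face the corridor follows from the vanishing point's horizontal offset. A sketch under those assumptions (segment coordinates and camera parameters are invented for illustration, and fish-eye distortion is taken as already corrected):

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares intersection of 2D segments [(u0, v0, u1, v1), ...].

    Each segment gives a homogeneous line l = p0 x p1; the vanishing
    point is the unit vector minimizing |L vp| (smallest singular vector).
    """
    lines = []
    for u0, v0, u1, v1 in segments:
        p0 = np.array([u0, v0, 1.0])
        p1 = np.array([u1, v1, 1.0])
        lines.append(np.cross(p0, p1))
    _, _, vt = np.linalg.svd(np.array(lines))
    vp = vt[-1]
    return vp[:2] / vp[2]

def yaw_from_vp(vp_u, cx, f):
    """Rotation about the vertical axis aligning the camera with the
    corridor direction (pinhole model, zero roll/pitch assumed)."""
    return np.arctan((vp_u - cx) / f)

# Three corridor-edge segments, all concurrent at pixel (320, 240):
segs = [(120, 340, 220, 290), (520, 340, 420, 290), (100, 240, 200, 240)]
vp = vanishing_point(segs)
yaw = yaw_from_vp(vp[0], cx=320.0, f=400.0)
```

With noisy real segments the SVD solution averages out detection error, which is why a handful of corridor edges can give a stable heading estimate.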
Page created & maintained by Frederic Leymarie,
1998.
Comments, suggestions, etc., mail to: leymarie@lems.brown.edu