Perception for collision avoidance and autonomous driving

Romuald Aufrère, Jay Gowdy, Christoph Mertz*, Chuck Thorpe, Chieh-Chih Wang, Teruko Yata

*Corresponding author for this work

Research output: Contribution to journal › Article

71 Scopus citations


The Navlab group at Carnegie Mellon University has a long history of developing automated vehicles and intelligent systems for driver assistance. The group's earlier work concentrated on road following, cross-country driving, and obstacle detection. The new focus is on short-range sensing, looking all around the vehicle to enable safe driving. The current system uses video sensing, laser rangefinders, a novel light-stripe rangefinder, software to process each sensor individually, a map-based fusion system, and a probability-based predictive model. The complete system has been demonstrated on the Navlab 11 vehicle, monitoring the surroundings as the vehicle drives through a cluttered urban environment and detecting and tracking fixed objects, moving objects, pedestrians, curbs, and roads.

Original language: English
Journal: Mechatronics
Volume: 13
Issue number: 10 SPEC.
Pages (from-to): 1149-1161
Number of pages: 13
State: Published - 1 Jan 2003


Keywords

  • Autonomous driving
  • Collision avoidance
  • Collision prediction
  • Curb detection
  • LIDAR object detection
  • Optical flow
  • Sensor fusion
  • Short-range surround sensing
  • Triangulation laser sensor


Cite this

Aufrère, R., Gowdy, J., Mertz, C., Thorpe, C., Wang, C.-C., & Yata, T. (2003). Perception for collision avoidance and autonomous driving. Mechatronics, 13(10 SPEC.), 1149-1161.