
Hindawi Publishing Corporation
The Scientific World Journal
Research Article

Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

Ki-Yeong Park and Sun-Young Hwang
Department of Electronic Engineering, Sogang University, CPO Box 1142, Seoul 100-611, Republic of Korea

Received 30 August 2013; Accepted 22 October 2013; Published 16 January 2014
Academic Editors: G. Berrettini and A. Lahiri

Copyright © 2014 Ki-Yeong Park and Sun-Young Hwang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates a virtual horizon from the size and position of vehicles in the captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented, and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.

1. Introduction

As the number of vehicles increases, driving safety has become an important issue. Since a considerable number of traffic accidents are caused by driver inattention, driver assistance systems which detect an imminent collision and provide timely warnings to drivers can significantly reduce the number and severity of these accidents []. A forward collision warning (FCW) system detects vehicles ahead and issues warnings to the driver in advance, to avoid or mitigate the harm caused by rear-end crashes [–]. An FCW system identifies a target vehicle within the trajectory of the subject vehicle and determines the range to the target vehicle. Time-to-collision (TTC) is computed using the range, and a collision warning is issued when the TTC is lower than a certain threshold [, ].

To issue a timely collision warning, the range to the target vehicle has to be determined accurately. Radar sensors have been commonly used for this purpose, as they can detect objects over a long range even under poor illumination conditions [, , ]. Due to the cost and limited angular accuracy of radar, vision-based FCW systems which use a low-cost image sensor have been widely investigated [, , , , ]. Contrary to radar sensors, image sensors do not provide range information; the range must be estimated from the size and/or position of vehicles in the image [, , –]. The width of a vehicle in the image is inversely proportional to the range to the vehicle, so the range can be estimated from the vehicle width in the image. However, range estimation from vehicle width without prior knowledge of the real vehicle width may introduce significant error, as real vehicle widths vary from 1.4 m to 2.6 m. The distance between the vehicle bottom and the horizon in the image is also inversely proportional to the range to the vehicle, and the range can be estimated accurately from the vehicle bottom position in the image if the horizon is located accurately. As the position of the horizon varies due to vehicle motion and road inclination, it has to be located at run-time. The horizon can be located by analyzing lane markings [, ]. However, this approach cannot be used appropriately when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads.
Moreover, a horizon estimated by analyzing lane markings in close proximity may result in unbounded range error, especially when vehicles are on a hill.

In this paper, we propose a robust range estimation method which provides range information even when road inclination varies continuously or lane markings are not seen. The proposed method estimates a virtual horizon from the size and position of vehicles and calculates range from the vehicle position with the estimated virtual horizon. The rest of this paper is organized as follows. In Section 2, we present a brief introduction of vision-based FCW systems and describe conventional vision-based range estimation methods. The proposed range estimation method is presented in Section 3. Experimental results are presented in Section 4, and the conclusions are drawn in Section 5.

2. Backgrounds

2.1. Vision-Based FCW System

In this section, a brief introduction of the vision-based FCW system is presented, which adopts an object detection algorithm to determine the size and position of vehicles in the image. Object detection distinguishes a class of objects with large intraclass variability, such as faces, pedestrians, and vehicles, from a cluttered background [, ]. Many vision-based vehicle detection systems have been designed only for highway traffic, where lanes are usually well marked and built with slowly changing curvature [, , ]. Whereas vehicles appear well separated from the background in highway traffic, vehicles have to be distinguished from a more cluttered background in urban traffic []. Most FCW systems adopt an object detection algorithm for this purpose [, –].

Figure 1 shows the flow of tasks performed by a vision-based FCW system which adopts an object detection algorithm. It consists of two stages: an object detection/tracking stage and a warning stage. In the object detection/tracking stage, the size and position of vehicles in the captured image are determined. Most object detection algorithms consist of hypothesis generation and hypothesis verification phases []. The presence of vehicles is hypothesized in the hypothesis generation phase by using prior knowledge about vehicles such as color, texture, symmetry, corners, and horizontal/vertical edges []. In the hypothesis verification phase, the presence of vehicles is verified with a classifier which distinguishes vehicles from background by computing a series of features. Sun et al. employed Haar wavelet decomposition for feature extraction and support vector machines (SVM) for classification []. Cui et al. employed Haar-like features for feature extraction and AdaBoost for classification []. The accuracy of object detection can be improved by combining it with a tracking mechanism: the presence of vehicles in future frames is hypothesized from the detection result of the current frame, and false detections are reduced by validating detection results against past history. Betke et al. used a refined search within a tracking window to validate detections []. Cui et al. employed a simplified Lucas-Kanade algorithm to remove false detections [].
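The detection stage itself is outside the scope of this paper; as a rough, hedged illustration, the sketch below uses OpenCV's Haar cascade detector (Haar-like features with AdaBoost, in the spirit of the detectors cited above). The cascade file name is a placeholder, since the trained model of any particular system is not published here.

```python
# Illustrative sketch only: cascade detection with Haar-like features + AdaBoost.
# "vehicle_cascade.xml" is a hypothetical pre-trained vehicle cascade file.
import cv2

cascade = cv2.CascadeClassifier("vehicle_cascade.xml")

def detect_vehicles(frame_bgr):
    """Return a list of (x, y, w, h) boxes for detected vehicles.
    For each box, w is the vehicle width in pixels and y + h approximates
    the bottom-line row used later for range estimation."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                     minSize=(24, 24))
    return [tuple(b) for b in boxes]
```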
Figure 1: Vision-based FCW system.

In the warning stage, the TTC is computed for a target vehicle and a collision warning is issued [, ]. The trajectory of the subject vehicle is predicted by using road lane information and vehicle signals such as steering angle, yaw, and speed []. The nearest vehicle within the trajectory of the subject vehicle is identified as the target vehicle. To issue a collision warning, the range to the target vehicle is determined and the TTC is computed using the range. A collision warning is triggered when the TTC is lower than a certain threshold, which is in the range of 2.0 to 2.4 sec according to the NHTSA NCAP Forward Collision Warning System Confirmation Test [].
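The paper does not spell out the TTC formula; assuming the usual constant closing-speed definition, the warning decision could be sketched as follows. The 2.1 s threshold is an arbitrary value inside the 2.0 to 2.4 sec interval quoted above.

```python
def time_to_collision(range_m, closing_speed_mps):
    """TTC in seconds under a constant closing-speed assumption."""
    if closing_speed_mps <= 0.0:      # target is not getting closer
        return float("inf")
    return range_m / closing_speed_mps

def should_warn(range_m, closing_speed_mps, ttc_threshold_s=2.1):
    """Trigger a forward collision warning when the TTC falls below the threshold."""
    return time_to_collision(range_m, closing_speed_mps) < ttc_threshold_s
```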
2.2. Range Estimation with a Monocular Camera

In this section, conventional range estimation methods used in vision-based systems with a monocular camera are described. As image sensors do not provide range information, the range should be estimated from the size and/or position of the vehicle in the image by using the pinhole camera geometry [].

2.2.1. Range Estimation Using Size Information

The width of a vehicle in the image is proportional to the real width of the vehicle according to the pinhole camera geometry. If the real width W of a vehicle is known, the range d to the vehicle can be calculated as

    d = f W / w,    (1)

where f is the focal length of the camera and w and W are the vehicle width in the image and the real vehicle width, respectively. Real vehicle widths vary from 1.4 m to 2.6 m. Applying this formula without prior knowledge of the real vehicle width may introduce significant error, which can be as much as 30% if a fixed width (e.g., 2.0 m) is used. It is not accurate enough for computing TTC, but it can be used as a sanity check [].
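A minimal sketch of (1); the numbers in the usage comment are assumed values for illustration only.

```python
def range_from_width(focal_px, width_px, real_width_m=2.0):
    """Equation (1): d = f * W / w, with W fixed at an assumed average of 2.0 m.
    Because real widths span roughly 1.4 m to 2.6 m, the fixed-width estimate
    can be off by up to about 30%."""
    return focal_px * real_width_m / width_px

# Example (assumed numbers): focal length 1000 px, vehicle 50 px wide
# -> estimated range 1000 * 2.0 / 50 = 40 m.
```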
2.2.2. Range Estimation Using Position Information

With the assumption that both roll and yaw angles are zero, the range can be estimated from the vehicle position in the image by using the pinhole camera geometry, as shown in Figure 2. Range estimation when the camera pitch angle is zero and nonzero is illustrated in Figures 2(a) and 2(b), respectively. Let the contact line between the vehicle bottom and the road surface in the image be the bottom line, and let the horizontal line passing through the vanishing point of the road lanes be the horizon. The bottom line of a vehicle approaches the horizon as the vehicle moves away from the camera. The horizon passes through the centre of the image when the optical axis of the camera is parallel to the road surface; it moves upward or downward depending on the camera pitch angle.

Figure 2: Range estimation from vehicle position in image by using the pinhole camera geometry. (a) Camera pitch angle is zero; (b) camera pitch angle is nonzero.

The distance between the bottom line of a vehicle and the horizon is inversely proportional to the range to the vehicle. When the camera pitch angle is negligibly small, the range d to the vehicle can be calculated as

    d = f H / (y_b - y_h),    (2)

where f is the camera focal length, H is the camera height, and y_b and y_h are the vertical image coordinates of the vehicle bottom line and the horizon, respectively. When the camera pitch angle θ is considerably large, the range has to be calculated as

    d = f H / ((y_b - y_h) cos²θ) - H tanθ.    (3)

When θ is small, (2) can be used instead of (3), since the range error caused by a small θ is negligible. For example, when the camera height H is 1.3 m and θ is 10°, the factor 1/cos²θ is 1.03 and the second term in (3) becomes 0.23 m. However, when θ is nonzero, the horizon no longer passes through the centre of the image, and its position has to be determined.

Small variations in the horizon position may result in large range error, since the denominator term (y_b - y_h) in (2) becomes very small as the vehicle moves away from the camera. In highway traffic, where the horizon varies within a small range, the range can be calculated with a fixed horizon determined by camera calibration. In urban traffic, where the horizon can vary considerably due to vehicle motion and road inclination, it should be located at run-time. The horizon can be determined by analyzing lane markings [, ]. However, this method cannot be used appropriately when road inclination varies continuously or lane markings are not seen.
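A minimal sketch of (2) and (3) as reconstructed above; setting the pitch angle to zero recovers (2). The example numbers in the comment are the ones quoted in the text.

```python
import math

def range_from_position(focal_px, cam_height_m, y_bottom_px, y_horizon_px,
                        pitch_rad=0.0):
    """Equations (2)/(3): d = f*H / ((y_b - y_h) * cos^2(theta)) - H*tan(theta).
    With pitch_rad = 0 this is exactly (2)."""
    dy = y_bottom_px - y_horizon_px          # bottom line must lie below the horizon
    if dy <= 0:
        raise ValueError("vehicle bottom line is not below the horizon")
    c = math.cos(pitch_rad)
    return focal_px * cam_height_m / (dy * c * c) - cam_height_m * math.tan(pitch_rad)

# With H = 1.3 m and theta = 10 deg, 1/cos^2(theta) is about 1.03 and
# H*tan(theta) is about 0.23 m, so (2) is an adequate approximation for small pitch.
```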
3. Proposed Range Estimation Method

In this section, we propose a robust range estimation method which provides range information even when road inclination varies continuously or lane markings are not seen. The proposed method determines a virtual horizon only from the size and position of vehicles in the image and calculates range with the estimated virtual horizon.

3.1. Virtual Horizon Estimation

If the real width W of a vehicle is known and both the size and position of the vehicle in the image are given, the vertical coordinate y_h of the horizon can be determined from (1) and (2) as

    y_h = y_b - w H / W,    (4)

where H is the camera height, which is constant, and y_b and w are the vehicle bottom line position and the vehicle width in the image, respectively, both obtained in the object detection stage. The real vehicle width W can be represented as

    W = W̄ + ΔW,    (5)

where W̄ is the average real width of vehicles and ΔW is the difference between the real width of a vehicle and the average real width. If sufficiently many vehicles are detected, the average of ΔW converges to zero and can be ignored. As a result, the horizon can be determined only from the position and width of detected vehicles with a fixed average real width (e.g., 2.0 m).

When several vehicles are detected in the object detection stage, the average horizon ȳ_h can be determined from the average of the vehicle positions and the average of the vehicle widths with the fixed real width as

    ȳ_h = (1/N) Σ_i [ y_b(i) - w(i) H / W̄ ] = ȳ_b - H w̄ / W̄,    (6)

where N is the number of detected vehicles and y_b(i) and w(i) are the bottom line position and width of the i-th detected vehicle, respectively. ȳ_b and w̄ are the average position and average width of the detected vehicles, respectively. ΔW(i), the difference between the real width of the i-th detected vehicle and the fixed average real width W̄, is ignored in (6).
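A small sketch of (4) and (6): each detection votes for a horizon position using the fixed average width, and the votes are averaged.

```python
def average_horizon(detections, cam_height_m, avg_width_m=2.0):
    """Equation (6). `detections` is a list of (y_bottom_px, width_px) pairs
    from the object detection stage. Each detection yields a horizon estimate
    y_h = y_b - w*H/W_bar (equation (4) with the fixed average width), and the
    estimates are averaged so that per-vehicle width deviations tend to cancel."""
    if not detections:
        return None
    return sum(y_b - w * cam_height_m / avg_width_m
               for y_b, w in detections) / len(detections)
```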
The estimated horizon position can fluctuate due to false detections and an insufficient number of detections, as well as due to pitch motion of the subject vehicle such as vibration and acceleration. Fluctuation of the horizon due to pitch motion of the subject vehicle can be ignored, since both the vehicle positions and the horizon in the image are influenced by the pitch motion at the same time, and the range error due to the pitch angle itself is negligible for small pitch angles, as described in Section 2.2.2. However, fluctuation due to false detections and an insufficient number of detections needs to be removed. As road inclination changes slowly in most cases, this fluctuation can be reduced by incorporating the previously estimated horizon. The virtual horizon y_v(t) at image frame t can be estimated as

    y_v(t) = α y_v(t-1) + (1 - α) ȳ_h(t),    (7)

where y_v(t) and y_v(t-1) are the virtual horizons estimated at image frames t and t-1, respectively, and ȳ_h(t) is the average horizon calculated at image frame t by applying (6). α is a constant determined experimentally. For the first frame, a default horizon position is used in place of the previously estimated virtual horizon. Once the virtual horizon is determined, the range can be calculated from the vehicle position with the virtual horizon by (2).
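A sketch of the recursive update (7). The actual value of α used by the authors is not preserved in this copy, so the 0.9 below is only a placeholder.

```python
class VirtualHorizonFilter:
    """Recursive smoothing of equation (7):
    y_v(t) = alpha * y_v(t-1) + (1 - alpha) * avg_horizon(t)."""

    def __init__(self, default_horizon_px, alpha=0.9):
        # alpha is experimentally determined in the paper; 0.9 is a placeholder.
        self.alpha = alpha
        self.y_v = default_horizon_px        # default used for the first frame

    def update(self, avg_horizon_px):
        """avg_horizon_px is the output of average_horizon(); when no vehicles
        are detected (None), the previous virtual horizon is kept."""
        if avg_horizon_px is not None:
            self.y_v = self.alpha * self.y_v + (1.0 - self.alpha) * avg_horizon_px
        return self.y_v
```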
False detections can be reduced by restricting the width of detected vehicles in the object detection stage. Applying the virtual horizon y_v(t-1) estimated at the previous image frame to (4), the minimum and maximum width of a vehicle whose bottom line is at position y_b in the image can be bounded as

    w_min = (y_b - y_v(t-1)) W_min / H,    w_max = (y_b - y_v(t-1)) W_max / H,    (8)

where W_min and W_max are the minimum and maximum real vehicle widths (e.g., W_min = 1.4 m, W_max = 2.6 m). A detected vehicle whose width in the image is out of these bounds should be regarded as a false detection.
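A direct sketch of the width gating in (8), using the example bounds of 1.4 m and 2.6 m.

```python
def is_plausible_detection(y_bottom_px, width_px, prev_virtual_horizon_px,
                           cam_height_m, w_min_m=1.4, w_max_m=2.6):
    """Equation (8): given the previous virtual horizon, a vehicle whose bottom
    line is at y_bottom_px can only have an image width between the bounds
    implied by the minimum and maximum real vehicle widths."""
    dy = y_bottom_px - prev_virtual_horizon_px
    if dy <= 0:
        return False                 # a detection above the horizon is implausible
    lo = dy * w_min_m / cam_height_m
    hi = dy * w_max_m / cam_height_m
    return lo <= width_px <= hi
```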
3.2. Accuracy Consideration

Range estimation from the vehicle width in the image can provide accurate range information only when the real width of the vehicle is given; the range error can be as much as 30% if a fixed width is used instead of the real width of the vehicle, as described in Section 2.2.1. On the other hand, range estimation from the vehicle position in the image can provide accurate range information only when the horizon is located accurately, and a small variation in the position of the horizon may result in a large range error. The horizon always has to be located above the vehicle bottom lines; the range cannot be determined if a vehicle is located above the horizon. However, a horizon estimated by analyzing lane markings in close proximity, as in the conventional method, may be located even below the vehicle bottom lines, especially when the vehicles are on a hill.

In the proposed method, the virtual horizon is always located above the vehicle bottom lines, since it is also estimated from the vehicle positions, and the range error is bounded provided that both the size and position of the detected vehicles are accurate. In case only one vehicle is detected, the proposed method in (6) reduces to the range estimation using only size information in (1). Especially when the width of the only detected vehicle is very small or very large, the range error may amount to as much as 30%, which is the upper limit of the range error with the proposed method.

4. Experimental Results

For the experiments, a vision-based FCW system has been implemented, which employs an object detection algorithm based on Haar-like features and AdaBoost [, , ]. Test video clips were recorded with a vehicle-mounted camera both in highway and in urban traffic environments. The resolution of the camera sensor is 1280 × 672 pixels, and the frame rate of the camera is 15 frames per second. The highway traffic video clips were recorded on a test track according to the NHTSA NCAP Forward Collision Warning System Confirmation Test []. The confirmation test consists of three scenarios: stopped, decelerating, and slower moving target vehicle. In all the scenarios, the subject vehicle was moving at a constant speed. The urban traffic video clip includes a crossroad section where lane markings are not seen and a hill section where road inclination varies. Examples of captured image frames from the video clips are shown in Figure 3.

Figure 3: Example of captured image frames from test video clips. (a) Highway; (b) urban traffic environment.

Image frames were captured from each video clip, and reference horizons were manually identified by analyzing lane markings for each captured frame. The lengths of the highway traffic video clips are 56, 152, and 88 frames, respectively, and that of the urban traffic video clip is 884 frames. Virtual horizons estimated by the proposed method are compared with the reference horizons. In our experiments, a constant value of α, determined experimentally, is used to calculate the virtual horizon in (7). Ranges measured by using a differential GPS were prepared for each highway traffic video clip. Ranges estimated by using the proposed method are compared with the measured ranges and with ranges estimated by using the conventional methods described in Section 2.2. With the conventional method which estimates range from the vehicle position, ranges are calculated both with the reference horizons and with a fixed horizon; the fixed horizon was determined for each clip by averaging the reference horizons. For the urban traffic video clip, ranges measured by using differential GPS were not prepared, and ranges estimated by using the proposed method are compared with ranges estimated by using the conventional methods.

4.1. Evaluation of Virtual Horizon Estimation

Virtual horizons estimated by using the proposed method are compared with the reference horizons. The position of the reference horizons varies in a range of 12 pixels (between 364 and 376) in the highway traffic video clips, and in a range of 66 pixels (between 311 and 377) in the urban traffic video clip. The urban traffic video clip includes a hill section as well as a crossroad section; as a result, the variation in horizon positions is much larger in that clip.

Figure 4 shows both the reference horizon and the virtual horizon estimated by the proposed method for the highway traffic video clips. In the first clip, where the subject vehicle encounters a stopped target vehicle, the horizon error of the proposed method is very low, as shown in Figure 4(a). The average and standard deviation of the differences between the reference horizon and the virtual horizon are 1.4 pixels and 1.2 pixels, respectively. In the second and third clips, where the subject vehicle follows a moving target vehicle, the horizon error is increased, as shown in Figures 4(b) and 4(c). As both the subject and target vehicles move, degraded accuracy of object detection results in increased error in these clips. The average and standard deviation of the differences are 2.8 pixels and 1.9 pixels for the second clip, respectively, and 3.4 pixels and 2.0 pixels for the third clip, respectively. In the highway traffic video clips, there is only one vehicle ahead of the subject vehicle, as shown in Figure 3(a). As the proposed method estimates the virtual horizon from the average width and average position of vehicles, estimation accuracy can be improved if the accuracy of object detection is improved and several vehicles are detected in a frame.

Figure 4: Reference horizon and virtual horizon estimated by the proposed method for highway traffic video clips. (a) Stopped; (b) decelerating; (c) slower moving target vehicle.

Figure 5 shows both the reference horizon and the virtual horizon for the urban traffic video clip. Even though there are large fluctuations in both the reference and virtual horizons, their positions are quite similar in most frames. The average and standard deviation of the differences between the reference and virtual horizons are 6.0 pixels and 5.0 pixels, respectively. Considering that the reference horizons vary in a range of 66 pixels in the urban traffic video clip, whereas they vary in a range of 12 pixels in the highway traffic video clips, the accuracy is not degraded in the urban traffic video clip. Even though the clip includes a crossroad section and a hill section, accuracy is not degraded, since there are several vehicles in the clip, as shown in Figure 3(b). This result confirms that the proposed method can be appropriately used in urban traffic environments.

Figure 5: Reference horizon and virtual horizon estimated by the proposed method for urban traffic video clip.

While the subject vehicle goes over a hill between frame number 400 and frame number 700 in the urban traffic video clip, the difference between the reference horizon and the virtual horizon increases in Figure 5. In a hill section where road inclination varies continuously, a horizon located by analyzing lane markings in close proximity may be inappropriate for estimating the range to a vehicle far away, since vehicles will appear above their usual position in an uphill section and below it in a hilltop section. On the other hand, the virtual horizon estimated by the proposed method is appropriate for estimating range on hilly roads, since it is estimated only from information on vehicle size and position. Figure 6 shows both the reference horizon and the virtual horizon for image frames from the hill section. Virtual horizons estimated by the proposed method are located above the reference horizons in an uphill section image and below them in a hilltop section image.

Figure 6: Reference horizon (solid line) and virtual horizon (dotted line) estimated by the proposed method for image frames from a hill section.
4.2. Evaluation of Range Estimation

For the highway traffic video clips, ranges estimated by using the proposed method are compared with measured ranges and with ranges estimated by using the conventional methods. Figure 7 shows the measured and estimated ranges for each video clip. For all the video clips, each estimation method provides quite accurate range when the target vehicle is within 50 m. When the target vehicle is beyond 50 m, each estimation method shows increased range error due to degraded accuracy of object detection in our FCW system. In each highway traffic video clip, there was only one vehicle ahead of the subject vehicle; as mentioned in Section 4.1, range error may be reduced if several vehicles are detected in a frame. Table 1 shows the average and standard deviation of range error for each highway traffic video clip. The average range error of the proposed method is comparable to that of estimation from vehicle position information with reference horizons.

Table 1: Average (μ) and standard deviation (σ) of range error for highway traffic video clips.

Figure 7: Measured and estimated ranges for highway traffic video clips. (a) Stopped; (b) decelerating; (c) slower moving target vehicle.

For the urban traffic video clip, measured ranges were not prepared, so ranges estimated by using the proposed method are compared with those estimated by using the conventional methods. In the previous experiment, estimated ranges were quite accurate when the target vehicle was within 50 m. To obtain more accurate ranges, the urban traffic video clip was recorded while the subject vehicle followed a target vehicle, maintaining a headway of about 30 m. Figure 8 shows the estimated ranges for the urban traffic video clip. Ranges estimated from vehicle position with a fixed horizon show large fluctuations in some frames, while those estimated by the other methods are very similar in most frames. This result shows that estimation from vehicle position with a fixed horizon cannot be used in urban traffic environments.

Figure 8: Estimated ranges for urban traffic video clip.

From frames 400 to 700, where the target vehicle goes over a hill section, ranges estimated from vehicle position with reference horizons become smaller than those estimated from vehicle size, while those estimated by the proposed method remain similar to those estimated from vehicle size. The target vehicle appears in a lower position than usual on a hilltop, so ranges estimated from vehicle position become smaller. As ranges estimated from vehicle size are not influenced by vehicle position, they can be more accurate than those estimated from vehicle position in those frames. In the proposed method, the lowered vehicle position is already compensated for when the virtual horizon is estimated, so range estimation is not influenced by the lowered vehicle position; the virtual horizons are located lower than the reference horizons in those frames, as shown in Figure 6. This result confirms that the proposed method is more appropriate when road inclination varies continuously on hilly roads.

5. Conclusion

In this paper, we propose a range estimation method which can be used for vision-based forward collision warning systems both in highway and in urban traffic environments. The proposed method estimates a virtual horizon from information on the size and position of vehicles in the image, obtained by an object detection algorithm, and calculates range from the vehicle position in the image with the virtual horizon. In the conventional approach, where horizons are determined by analyzing lane markings, horizons cannot be appropriately located when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. On the other hand, virtual horizons are always located if vehicles are detected in the image, as the proposed method estimates horizons only from information on the size and position of vehicles in the image. A small variation in the position of the horizon may result in a large range error, and horizons determined by analyzing lane markings can be located even below vehicles, especially when the vehicles are on a hill, which can result in unbounded range error. Virtual horizons are always located above the bottom lines of detected vehicles, as they are estimated from the vehicle positions, and the range error is bounded. For the experiments, a vision-based forward collision warning system has been implemented, and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Experimental results confirm that the proposed method provides robust results in urban traffic environments as well as in highway traffic environments.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the Technology Innovation Program (Industrial Strategic Technology Development Program, Development of Driving Assist System and Integrated Image Recognition Software for Lane and Preceding/Oncoming Vehicles) funded by the Ministry of Knowledge Economy (MKE, Korea).

References
G. Widmann, W. Bauson, and S. Alland, "Development of collision avoidance systems at Delphi Automotive Systems," in Proceedings of the IEEE International Conference on Intelligent Vehicles, vol. 2, pp. 353–358, Stuttgart, Germany, October 1998.
E. Dagan, O. Mano, G. P. Stein, and A. Shashua, "Forward collision warning with a single camera," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 37–42, Parma, Italy, June 2004.
L. Zhang, J. Wang, and K. Li, "Forward collision warning system based on THASV-II platform," in Proceedings of the IEEE International Conference on Vehicular Electronics and Safety (ICVES '06), pp. 255–258, Beijing, China, December 2006.
Y. Wei, H. Meng, H. Zhang, and X. Wang, "Vehicle frontal collision warning system based on improved target tracking and threat assessment," in Proceedings of the 10th IEEE International Conference on Intelligent Transportation Systems (ITSC '07), pp. 167–172, Seattle, Wash, USA, October 2007.
E. Raphael, R. Kiefer, P. Reisman, and G. Hayon, "Development of a camera-based forward collision alert system," SAE International Journal of Passenger Cars, vol. 4, no. 1, pp. 467–478, 2011.
H. Lin, L. Chen, and Y. Lin, "Lane departure and front collision warning using a single camera," in Proceedings of the IEEE International Symposium on Intelligent Signal Processing and Communication Systems, pp. 64–69, New Taipei, Taiwan, November 2012.
R. van der Horst and J. Hogema, "Time-to-collision and collision avoidance systems," in Proceedings of the ICTCT Workshop: Safety Evaluation of Traffic Systems: Traffic Conflicts and Other Measures, pp. 109–121, Salzburg, Austria, 1993.
C. Li, H. Meng, H. Zhang, and X. Wang, "Evaluation and improvement of required deceleration algorithm in frontal collision warning systems," in Proceedings of the 11th International IEEE Conference on Intelligent Transportation Systems (ITSC '08), Beijing, China, October 2008.
S. K. Kenue, "Selection of range and azimuth angle parameters for a forward looking collision warning radar sensor," in Proceedings of the Intelligent Vehicles Symposium, pp. 494–499, Detroit, Mich, USA, September 1995.
E. Lemaire, E. El Koursi, P. Deloof, and J. Ghys, "Safety analysis of a frontal collision warning system," in Proceedings of the IEEE Intelligent Vehicle Symposium, vol. 2, pp. 453–458, Versailles, France, June 2002.
J. Cui, F. Liu, Z. Li, and Z. Jia, "Vehicle localisation using a single camera," in Proceedings of the IEEE Intelligent Vehicles Symposium (IV '10), pp. 871–876, San Diego, Calif, USA, June 2010.
Y. Zhang, W. Li, X. Wang, J. Zhao, and Y. Yuan, "General design of vehicle safety assistant system based on machine vision and electronic control steering," in Proceedings of the International Conference on Computer Design and Applications (ICCDA '10), vol. 3, pp. 20–23, Qinhuandao, China, June 2010.
G. Stein, O. Mano, and A. Shashua, "Vision-based ACC with a single camera: bounds on range and range rate accuracy," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 120–125, Columbus, Ohio, USA, June 2003.
Y. Chen, M. Das, and D. Bajpai, "Vehicle tracking and distance estimation based on multiple image features," in Proceedings of the 4th Canadian Conference on Computer and Robot Vision (CRV '07), pp. 371–378, Montreal, Canada, May 2007.
Y. Yan and Y. Zhang, "Technique of measuring leading vehicle distance based on digital image processing theory," in Proceedings of the International Conference on Intelligent Computation Technology and Automation (ICICTA '10), pp. 674–677, Changsha, China, May 2010.
G. P. Stein, Y. Gdalyahu, and A. Shashua, "Stereo-assist: top-down stereo for driver assistance systems," in Proceedings of the IEEE Intelligent Vehicles Symposium (IV '10), pp. 723–730, San Diego, Calif, USA, June 2010.
S.-H. Chen and R.-S. Chen, "Vision-based distance estimation for multiple vehicles using single optical camera," in Proceedings of the 2nd International Conference on Innovations in Bio-Inspired Computing and Applications (IBICA '11), pp. 9–12, Shenzhen, China, December 2011.
G. Kim and J. Cho, "Vision-based vehicle detection and inter-vehicle distance estimation," in Proceedings of the International Conference on Control, Automation and Systems, pp. 625–629, Jeju, Korea, October 2012.
M. Oren, C. Papageorgiou, P. Sinha, E. Osuna, and T. Poggio, "Pedestrian detection using wavelet templates," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 193–199, June 1997.
P. Viola and M. J. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, 2004.
M. Betke, E. Haritaoglu, and L. S. Davis, "Real-time multiple vehicle detection and tracking from a moving vehicle," Machine Vision and Applications, vol. 12, no. 2, pp. 69–83, 2000.
F. Chi, C. Huo, Y. Yu, and T. Sun, "An effective approach for forward vehicle detection with tracking," in Proceedings of the IEEE Global Conference on Consumer Electronics, pp. 683–686, Tokyo, Japan, October 2012.
U. Franke, D. Gavrila, S. Görzig, F. Lindner, F. Paetzold, and C. Wöhler, "Autonomous driving goes downtown," IEEE Intelligent Systems and Their Applications, vol. 13, no. 6, pp. 40–48, 1998.
Z. Sun, R. Miller, G. Bebis, and D. DiMeo, "A real-time precrash vehicle detection system," in Proceedings of the IEEE International Workshop on Application of Computer Vision, pp. 171–176, Orlando, Fla, USA, December 2002.
L. Prevost, P. Negri, X. Clady, and S. M. Hanif, "A cascade of boosted generative and discriminative classifiers for vehicle detection," EURASIP Journal on Advances in Signal Processing, vol. 2008, 2008.
A. Haselhoff and A. Kummert, "A vehicle detection system based on Haar and triangle features," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 261–266, Xi'an, China, June 2009.
Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: a review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 694–711, 2006.
M. Betke, E. Haritaoglu, and L. S. Davis, "Multiple vehicle detection and tracking in hard real-time," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 351–356, Tokyo, Japan, September 1996.
National Highway Traffic Safety Administration, "Forward collision warning system confirmation test," Tech. Rep., 2010.
Y. Wang, J. Ostermann, and Y. Zhang, Video Processing and Communications, Prentice-Hall, NJ, USA, 2002.
