End-to-end Learning of Image based Lane-Change Decision

Seong-Gyun Jeong, Jiwon Kim, Sujung Kim, Jaesik Min
IEEE Intelligent Vehicles Symposium (IV) 2017, Redondo Beach, CA, USA, 11 - 14 June 2017

We propose an image-based end-to-end learning framework that assists lane-change decisions for human drivers and autonomous vehicles. The proposed system, Safe Lane-Change Aid Network (SLCAN), trains a deep convolutional neural network to classify the status of adjacent lanes from rear-view images acquired by cameras mounted on both sides of the vehicle. Rather than depending on any explicit object detection or tracking scheme, SLCAN reads the whole input image and directly decides whether initiating a lane change at that moment is safe or not. We collected and annotated 77,273 rear-side-view images to train and test SLCAN. Experimental results show that the proposed framework achieves 96.98% classification accuracy even though the test images come from unseen roadways. We also visualize the saliency map to understand which parts of the image SLCAN attends to when making correct decisions.
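The core idea above is that the whole rear-view image is mapped directly to a safety decision, with no intermediate detection or tracking stage. Below is a minimal illustrative sketch of that end-to-end pattern in NumPy: a toy "convolve, pool, classify" pipeline that turns an image into a single P(safe) probability. The function names and the tiny hand-rolled layers are hypothetical, chosen only to make the idea concrete; the paper's actual SLCAN is a much deeper trained CNN.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def lane_change_decision(image, kernels, weights, bias):
    """Toy end-to-end decision: image in, lane-change safety probability out.

    Mirrors the abstract's idea only in spirit: the whole image is read and
    mapped directly to a 'safe to initiate lane change' probability, with no
    explicit object detection or tracking in between. (Hypothetical sketch,
    not the paper's architecture.)
    """
    feats = []
    for k in kernels:
        fmap = np.maximum(conv2d(image, k), 0.0)  # convolution + ReLU
        feats.append(fmap.mean())                  # global average pooling
    logit = float(np.dot(weights, feats) + bias)   # linear classifier head
    return 1.0 / (1.0 + np.exp(-logit))            # sigmoid -> P(safe)
```

With trained (rather than random) kernels and weights, the returned probability could be thresholded to emit the binary safe/unsafe decision the abstract describes.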

File download
End-to-end_Learning_of_Image_based_Lane-Change_Decision.pdf (3.63MB)