LiDAR Perception for Autonomous Driving

This training covers classical point cloud processing methods for ADAS as well as deep learning-based methods for Autonomous Driving.

Duration
8 hours
Location
Online
Language
English
Code
AUT-028
€ 400 *

Training for 7-8 or more people? Customize trainings for your specific needs

Description

With the introduction of LiDAR sensors (Light Detection and Ranging) to ADAS (Advanced Driver Assistance Systems) and Autonomous Driving, a need for perception algorithms that directly analyze point clouds has become apparent.

In this training you’ll:
  • get an overview of ADAS and Autonomous Driving from the perspective of processing LiDAR data
  • understand the typical system setup and coordinate frames, and the notions of latency and jitter
  • understand the details of classical point cloud processing algorithms for the ADAS scenario
  • get hands-on experience implementing at least one of the classical algorithms in C++
  • get an overview of deep learning approaches to perception in the autonomous driving scenario
  • understand how to measure the accuracy of these algorithms and deploy them to state-of-the-art hardware
  • get an overview of open datasets for autonomous driving
Certificate
After completing the course, a certificate is issued on the Luxoft Training form.

Objectives

  • After this training, you'll be able to understand a spectrum of classical and deep learning perception algorithms that process point cloud data from a LiDAR
  • You'll gain hands-on experience implementing a selected algorithm in C++

Target Audience

  • This course is designed for computer vision algorithm developers in the automotive field

Roadmap

Brief introduction to ADAS and Autonomous Driving
  • Levels of autonomy, classic AD stack
  • Players on the market, LiDAR mount options
  • LiDAR technological directions
  • Overview of LiDAR vendors and models
  • Characteristics of Velodyne’s LiDARs
  • ASIL levels, ISO 26262

Basic system setup
  • Coordinate systems (global, local, ego-vehicle, sensor, other traffic participants’); see the Eigen sketch after this list
  • Calibration
  • Synchronization
  • Latency and jitter
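
As a minimal illustration of how the frames chain together, the sketch below transforms a raw LiDAR return into the global frame via the ego-vehicle frame using Eigen. The extrinsic calibration and ego pose are made-up placeholder values, not from any real sensor setup.

    #include <Eigen/Geometry>
    #include <iostream>

    int main() {
      // Hypothetical extrinsic calibration: sensor frame -> ego-vehicle frame.
      Eigen::Isometry3d sensor_to_ego = Eigen::Isometry3d::Identity();
      sensor_to_ego.translate(Eigen::Vector3d(1.2, 0.0, 1.8));  // LiDAR 1.2 m ahead of, 1.8 m above the ego origin
      sensor_to_ego.rotate(Eigen::AngleAxisd(0.01, Eigen::Vector3d::UnitY()));  // slight mounting pitch

      // Ego pose in the global frame, e.g. from the localization stack.
      Eigen::Isometry3d ego_to_global = Eigen::Isometry3d::Identity();
      ego_to_global.translate(Eigen::Vector3d(100.0, 50.0, 0.0));
      ego_to_global.rotate(Eigen::AngleAxisd(1.5708, Eigen::Vector3d::UnitZ()));  // ~90 deg heading

      // A single LiDAR return in the sensor frame.
      Eigen::Vector3d p_sensor(10.0, -2.0, 0.5);

      // Transforms compose right to left: sensor -> ego -> global.
      Eigen::Vector3d p_global = ego_to_global * sensor_to_ego * p_sensor;
      std::cout << "global: " << p_global.transpose() << std::endl;
    }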

Classical point-cloud perception algorithms
  • Overview of perception tasks solvable with LiDAR
  • Multi-frame accumulation (motion compensation)
  • Ground detection/subtraction
  • Occupancy grid
  • Clustering (DBSCAN); a PCL-based clustering sketch follows this list
  • Convex hull estimation
  • Lane detection from a point cloud
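
The course covers DBSCAN; as a rough stand-in, the sketch below uses PCL's Euclidean cluster extraction, which grows clusters with a fixed distance threshold (a close cousin of DBSCAN without the core-point density test). The thresholds are illustrative starting values, not tuned parameters.

    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/search/kdtree.h>
    #include <pcl/segmentation/extract_clusters.h>
    #include <vector>

    // Cluster a ground-free cloud into object candidates.
    std::vector<pcl::PointIndices> clusterCloud(
        const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
      pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
          new pcl::search::KdTree<pcl::PointXYZ>);
      tree->setInputCloud(cloud);

      pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
      ec.setClusterTolerance(0.5);  // 50 cm neighbor radius (illustrative)
      ec.setMinClusterSize(10);     // drop tiny noise blobs
      ec.setMaxClusterSize(25000);
      ec.setSearchMethod(tree);
      ec.setInputCloud(cloud);

      std::vector<pcl::PointIndices> clusters;
      ec.extract(clusters);
      return clusters;
    }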

Practical exercise
  • Review of code that implements ground plane removal, clustering, convex hull extraction, and visualization in C++ with the Eigen and PCL libraries
  • Practical task to implement one of the following algorithms: ground plane removal with RANSAC or convex hull calculation with Graham scan (a RANSAC sketch follows)
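
A minimal sketch of the first exercise option, assuming PCL's built-in RANSAC plane segmentation; the distance threshold is a made-up starting value, and in practice the scan is often cropped to the area around the ego vehicle first.

    #include <pcl/ModelCoefficients.h>
    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/segmentation/sac_segmentation.h>
    #include <pcl/filters/extract_indices.h>

    // Remove the dominant plane (assumed to be the ground) from a scan.
    pcl::PointCloud<pcl::PointXYZ>::Ptr removeGround(
        const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
      pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
      pcl::PointIndices::Ptr inliers(new pcl::PointIndices);

      pcl::SACSegmentation<pcl::PointXYZ> seg;
      seg.setModelType(pcl::SACMODEL_PLANE);
      seg.setMethodType(pcl::SAC_RANSAC);
      seg.setDistanceThreshold(0.2);  // 20 cm inlier tolerance (illustrative)
      seg.setMaxIterations(100);
      seg.setInputCloud(cloud);
      seg.segment(*inliers, *coeffs);  // inliers = points on the ground plane

      // Keep everything that is NOT on the plane.
      pcl::ExtractIndices<pcl::PointXYZ> extract;
      extract.setInputCloud(cloud);
      extract.setIndices(inliers);
      extract.setNegative(true);

      pcl::PointCloud<pcl::PointXYZ>::Ptr obstacles(
          new pcl::PointCloud<pcl::PointXYZ>);
      extract.filter(*obstacles);
      return obstacles;
    }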

Perception with neural networks
  • Introduction to deep learning-based approaches
  • Taxonomy of neural networks for point cloud processing
  • Basic block: PointNet (a toy sketch of its core idea follows this list)
  • VoxelNet (BEV detection)
  • SECOND (BEV detection)
  • PointPillars (BEV detection)
  • Fast and Furious (BEV detection and prediction)
  • Frustum PointNet (projection view, detection)
  • MV3D (multiview detection)
  • Multiview fusion, MVF (multiview detection)
  • Multi-View LidarNet (multitarget: segmentation and detection)
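
PointNet's core trick is a shared per-point MLP followed by a symmetric max-pool, which makes the global feature invariant to the ordering of the points. The toy Eigen sketch below illustrates just that idea with one random-weight layer; it is a conceptual illustration, not the actual network.

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      const int num_points = 1024;
      const int in_dim = 3;     // x, y, z
      const int feat_dim = 64;  // per-point feature size

      // Random point cloud and random "shared MLP" weights (stand-ins for
      // learned parameters; the real PointNet stacks several such layers).
      Eigen::MatrixXf points = Eigen::MatrixXf::Random(num_points, in_dim);
      Eigen::MatrixXf W = Eigen::MatrixXf::Random(in_dim, feat_dim);

      // Shared MLP: the same weights applied to every point, then ReLU.
      Eigen::MatrixXf feats = (points * W).cwiseMax(0.0f);  // [N, feat_dim]

      // Symmetric aggregation: column-wise max-pool over all points.
      // Permuting the rows of `points` leaves this vector unchanged.
      Eigen::RowVectorXf global_feat = feats.colwise().maxCoeff();

      std::cout << "global feature size: " << global_feat.size() << std::endl;
    }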

Open datasets for autonomous driving
  • KITTI (a minimal scan reader is sketched after this list)
  • Semantic KITTI
  • nuScenes
  • Waymo
  • Argoverse
  • Lyft Level-5
  • Udacity
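
KITTI stores each Velodyne scan as a flat little-endian binary file of float32 records (x, y, z, reflectance). A minimal reader is sketched below; the file name is a placeholder.

    #include <fstream>
    #include <iostream>
    #include <vector>

    int main() {
      // Placeholder path to one scan from the KITTI data.
      std::ifstream f("000000.bin", std::ios::binary);
      f.seekg(0, std::ios::end);
      const std::size_t bytes = f.tellg();
      f.seekg(0, std::ios::beg);

      std::vector<float> buf(bytes / sizeof(float));
      f.read(reinterpret_cast<char*>(buf.data()), bytes);

      // Record i: buf[4*i + 0..2] = x, y, z; buf[4*i + 3] = reflectance.
      const std::size_t num_points = buf.size() / 4;
      std::cout << "points: " << num_points << std::endl;
    }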

Continuous deployment of deep learning models
  • Accuracy metrics (an IoU sketch follows this list)
  • Non-regressive deployment
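
Detection accuracy is typically reported as average precision at an IoU threshold. The sketch below computes IoU for axis-aligned bird's-eye-view boxes; rotated-box IoU, as used by most benchmarks, additionally needs polygon clipping and is omitted here.

    #include <algorithm>

    // Axis-aligned BEV box: (x1, y1) = min corner, (x2, y2) = max corner.
    struct Box { float x1, y1, x2, y2; };

    // Intersection-over-union of two axis-aligned BEV boxes.
    float iou(const Box& a, const Box& b) {
      const float ix = std::max(0.0f, std::min(a.x2, b.x2) - std::max(a.x1, b.x1));
      const float iy = std::max(0.0f, std::min(a.y2, b.y2) - std::max(a.y1, b.y1));
      const float inter = ix * iy;
      const float area_a = (a.x2 - a.x1) * (a.y2 - a.y1);
      const float area_b = (b.x2 - b.x1) * (b.y2 - b.y1);
      const float uni = area_a + area_b - inter;
      return uni > 0.0f ? inter / uni : 0.0f;
    }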

Compute platforms for autonomous driving
  • Overview of platforms: NVIDIA Drive PX2, Pegasus, Mobileye, Tesla's onboard computer
  • TensorRT inference library (a rough usage sketch follows this list)
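
The rough shape of running a pre-built serialized engine with the TensorRT C++ API, assuming TensorRT 8-era interfaces; buffer sizing, input/output copies, and error handling are trimmed, and the engine path is a placeholder.

    #include <NvInfer.h>
    #include <cuda_runtime_api.h>
    #include <cstdio>
    #include <fstream>
    #include <iterator>
    #include <vector>

    // Minimal logger required by the TensorRT runtime.
    class Logger : public nvinfer1::ILogger {
      void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::fprintf(stderr, "%s\n", msg);
      }
    };

    int main() {
      Logger logger;

      // Load a serialized engine built offline ("model.engine" is a placeholder).
      std::ifstream file("model.engine", std::ios::binary);
      std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                             std::istreambuf_iterator<char>());

      auto* runtime = nvinfer1::createInferRuntime(logger);
      auto* engine = runtime->deserializeCudaEngine(blob.data(), blob.size());
      auto* context = engine->createExecutionContext();

      // One device buffer per binding (e.g. input tensor, output detections).
      // Sizes are placeholders; query the engine's binding dimensions in real code.
      void* bindings[2];
      cudaMalloc(&bindings[0], 4 * 1024 * 1024);
      cudaMalloc(&bindings[1], 1 * 1024 * 1024);

      // Enqueue inference on a CUDA stream; input/output copies go around this.
      cudaStream_t stream;
      cudaStreamCreate(&stream);
      context->enqueueV2(bindings, stream, nullptr);
      cudaStreamSynchronize(stream);
    }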
"TYPE"html";}