LiDAR Perception for Autonomous Driving

This training covers classical point-cloud processing methods for ADAS as well as deep-learning-based methods for Autonomous Driving.
Duration
8 hours
Location
Online
Language
English
Code
AUT-028
Training for 7-8 or more people? Customize the training to your specific needs
Price: € 400 *

Descriere

With the introduction of LiDAR sensors (Light Detection and Ranging) to ADAS (Advanced Driver Assistance Systems) and Autonomous Driving, a need for perception algorithms that directly analyze point clouds has become apparent.

In this training you’ll:
  • get an overview of ADAS and Autonomous Driving from the perspective of processing LiDAR data
  • understand the traditional system setup and coordinate frames, notions of latency and jitter
  • understand the details of classical point cloud processing algorithms for the ADAS scenario
  • get hands-on experience with implementing at least one of the classical algorithms in C++
  • get an overview of deep learning approaches to perception in the autonomous driving scenario
  • understand how to measure the accuracy of the algorithms and deploy them to state-of-the-art hardware
  • get an overview of open datasets for autonomous driving
Certificate
After completing the course, a certificate is issued
in the Luxoft Training format

Objectives

  • After this training you'll be able to understand a spectrum of classical and deep learning perception algorithms that process point cloud data from a LiDAR.
  • You will also get hands-on experience implementing a selected algorithm in C++.

Target audience

  • This course is designed for computer vision algorithm developers in the automotive field

Roadmap

Brief introduction to ADAS and Autonomous Driving
  • Levels of autonomy, classic AD stack
  • Players on the market, LiDAR mount options
  • LiDAR technological directions
  • Overview of LiDAR vendors and models
  • Characteristics of Velodyne’s LiDARs
  • ASIL levels, ISO 26262

Basic system setup
  • Coordinate systems (global, local, ego-vehicle, sensor, other traffic participants’)
  • Calibration
  • Synchronization
  • Latency and jitter
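To give a flavor of the coordinate-frame material, here is a minimal sketch of mapping a LiDAR point from the sensor frame into the ego-vehicle frame, assuming a calibration with negligible roll and pitch (yaw plus translation only). The mount values in the comments are invented for illustration, not taken from the course.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Planar rigid transform: rotation about the vertical axis plus a
// translation -- the typical shape of a simplified LiDAR extrinsic.
struct RigidTransform {
    double yaw;  // rotation about the vertical axis, in radians
    Vec3 t;      // sensor origin expressed in the ego frame, in meters

    // Map a point from the sensor frame into the ego-vehicle frame.
    Vec3 apply(const Vec3& p) const {
        const double c = std::cos(yaw), s = std::sin(yaw);
        return {c * p[0] - s * p[1] + t[0],
                s * p[0] + c * p[1] + t[1],
                p[2] + t[2]};
    }
};
```

A full 6-DoF calibration would use a rotation matrix or quaternion instead of a single yaw angle; the planar case is shown only to keep the example short.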

Classical point-cloud perception algorithms
  • Overview of perception tasks solvable with LiDAR
  • Multi-frame accumulation (motion compensation)
  • Ground detection/subtraction
  • Occupancy grid
  • Clustering (DBSCAN)
  • Convex hull estimation
  • Lane detection from a point cloud
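As a small illustration of the occupancy-grid and ground-subtraction topics above, the sketch below bins LiDAR returns into a 2D grid centered on the ego vehicle, discarding points below a crude ground-height threshold. Cell size, extents, and the threshold are arbitrary choices for the example, not values from the course material.

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

struct Point { double x, y, z; };

// Minimal 2D occupancy grid: each cell counts LiDAR returns above a
// ground-height threshold. Illustrative sketch only.
class OccupancyGrid {
public:
    OccupancyGrid(double half_extent_m, double cell_m)
        : cell_(cell_m),
          dim_(static_cast<std::size_t>(std::ceil(2.0 * half_extent_m / cell_m))),
          half_(half_extent_m),
          counts_(dim_ * dim_, 0) {}

    // Accumulate a point; returns false if it is ground or outside the grid.
    bool insert(const Point& p, double ground_z = 0.2) {
        if (p.z < ground_z) return false;  // crude ground subtraction
        const int ix = static_cast<int>(std::floor((p.x + half_) / cell_));
        const int iy = static_cast<int>(std::floor((p.y + half_) / cell_));
        if (ix < 0 || iy < 0 || ix >= static_cast<int>(dim_) ||
            iy >= static_cast<int>(dim_))
            return false;
        ++counts_[static_cast<std::size_t>(iy) * dim_ + ix];
        return true;
    }

    int count(int ix, int iy) const {
        return counts_[static_cast<std::size_t>(iy) * dim_ + ix];
    }

private:
    double cell_;
    std::size_t dim_;
    double half_;
    std::vector<int> counts_;
};
```

Production systems typically replace the fixed height threshold with a fitted ground model and add per-cell statistics (max height, return count over time) rather than a plain counter.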

Practical exercise
  • Review of the code that implements ground plane removal, clustering, convex hull extraction, and visualization in C++ with the Eigen and PCL libraries.
  • Practical task to implement one of the following algorithms: ground plane removal with RANSAC or convex hull calculation with Graham scan
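One possible shape of the RANSAC ground-plane step from the practical exercise is sketched below: repeatedly fit a plane to three random points and keep the candidate with the most inliers. The iteration count, distance threshold, and seed are illustrative defaults, not the reference solution.

```cpp
#include <vector>
#include <cmath>
#include <random>
#include <cstddef>

struct P3 { double x, y, z; };
struct Plane { double a, b, c, d; };  // a*x + b*y + c*z + d = 0, unit normal

// Plane through three points via the cross product of two edge vectors.
static bool planeFrom3(const P3& p, const P3& q, const P3& r, Plane& out) {
    const double ux = q.x - p.x, uy = q.y - p.y, uz = q.z - p.z;
    const double vx = r.x - p.x, vy = r.y - p.y, vz = r.z - p.z;
    double a = uy * vz - uz * vy;
    double b = uz * vx - ux * vz;
    double c = ux * vy - uy * vx;
    const double n = std::sqrt(a * a + b * b + c * c);
    if (n < 1e-9) return false;  // degenerate (collinear) sample
    a /= n; b /= n; c /= n;
    out = {a, b, c, -(a * p.x + b * p.y + c * p.z)};
    return true;
}

static double distToPlane(const Plane& pl, const P3& p) {
    return std::fabs(pl.a * p.x + pl.b * p.y + pl.c * p.z + pl.d);
}

// RANSAC: sample minimal (3-point) models, score by inlier count.
Plane ransacGroundPlane(const std::vector<P3>& cloud,
                        int iters = 200, double thresh = 0.15,
                        unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<std::size_t> pick(0, cloud.size() - 1);
    Plane best{0, 0, 1, 0};
    std::size_t bestInliers = 0;
    for (int it = 0; it < iters; ++it) {
        Plane cand;
        if (!planeFrom3(cloud[pick(rng)], cloud[pick(rng)], cloud[pick(rng)],
                        cand))
            continue;
        std::size_t inliers = 0;
        for (const auto& p : cloud)
            if (distToPlane(cand, p) < thresh) ++inliers;
        if (inliers > bestInliers) { bestInliers = inliers; best = cand; }
    }
    return best;
}
```

Once the plane is found, ground points are those within the threshold of it; the remaining points go on to clustering. PCL offers this via `pcl::SACSegmentation`, but writing it by hand, as the exercise asks, makes the sampling/scoring loop explicit.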

Perception with neural networks
  • Introduction into deep learning based approaches
  • Taxonomy of neural networks for point cloud processing
  • Basic block: PointNet
  • VoxelNet (BEV detection)
  • SECOND (BEV detection)
  • PointPillars (BEV detection)
  • Fast and Furious (BEV detection and prediction)
  • Frustum PointNet (projection view, detection)
  • MV3D (multiview detection)
  • Multiview fusion, MVF (multiview detection)
  • Multi-View LidarNet (multitarget: segmentation and detection)
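The "basic block" in this list, PointNet, rests on one idea: apply the same function to every point, then pool with a symmetric operation (max), so the output does not depend on point order. The toy sketch below uses fixed, invented weights in place of a learned MLP purely to demonstrate the permutation invariance; it is not the actual PointNet architecture.

```cpp
#include <vector>
#include <array>
#include <algorithm>

using Feat = std::array<double, 4>;
using Pt = std::array<double, 3>;

// A stand-in for the shared per-point MLP: one linear layer + ReLU with
// fixed toy weights (a real PointNet learns these).
Feat pointFeature(const Pt& p) {
    const double W[4][3] = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 1}};
    Feat f{};
    for (int i = 0; i < 4; ++i) {
        double v = 0.0;
        for (int j = 0; j < 3; ++j) v += W[i][j] * p[j];
        f[i] = std::max(0.0, v);  // ReLU
    }
    return f;
}

// Symmetric max pooling over points: the global feature is invariant to
// the ordering of the input cloud.
Feat globalFeature(const std::vector<Pt>& cloud) {
    Feat g{};  // zeros are a safe identity here since ReLU outputs are >= 0
    for (const auto& p : cloud) {
        const Feat f = pointFeature(p);
        for (int i = 0; i < 4; ++i) g[i] = std::max(g[i], f[i]);
    }
    return g;
}
```

VoxelNet, SECOND, and PointPillars all reuse this shared-function-plus-pooling block inside each voxel or pillar before running a 2D detection backbone.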

Open datasets for autonomous driving
  • KITTI
  • Semantic KITTI
  • nuScenes
  • Waymo
  • Argoverse
  • Lyft Level-5
  • Udacity

Continuous deployment of deep learning models
  • Accuracy metrics
  • Non-regressive deployment
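A common ingredient of the accuracy metrics above is intersection-over-union between predicted and ground-truth boxes. The sketch below handles only axis-aligned bird's-eye-view boxes for brevity; real BEV benchmarks typically use rotated boxes.

```cpp
#include <algorithm>

// Axis-aligned BEV box given by two corners, with x1 < x2 and y1 < y2.
struct Box { double x1, y1, x2, y2; };

// Intersection-over-union: overlap area divided by combined area.
double iou(const Box& a, const Box& b) {
    const double ix =
        std::max(0.0, std::min(a.x2, b.x2) - std::max(a.x1, b.x1));
    const double iy =
        std::max(0.0, std::min(a.y2, b.y2) - std::max(a.y1, b.y1));
    const double inter = ix * iy;
    const double areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
    const double areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
    const double uni = areaA + areaB - inter;
    return uni > 0.0 ? inter / uni : 0.0;
}
```

Detection metrics such as average precision then count a prediction as a true positive when its IoU with a ground-truth box exceeds a dataset-specific threshold (e.g. 0.7 for cars on KITTI).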

Compute platforms for autonomous driving
  • Overview of platforms: NVIDIA Drive PX 2, Pegasus, Mobileye, Tesla's on-board computer
  • TensorRT inference library
"TYPE"html";}
Still have questions?
Connect with us