LiDAR Perception for Autonomous Driving
Duration: 8 hours
Location: Online
Language: English
Code: AUT-028
Description
With the introduction of LiDAR (Light Detection and Ranging) sensors to ADAS (Advanced Driver Assistance Systems) and Autonomous Driving, a need has emerged for perception algorithms that analyze point clouds directly. In this training you'll:
- get an overview of ADAS and Autonomous Driving from the perspective of processing LiDAR data
- understand the traditional system setup and coordinate frames, and the notions of latency and jitter
- understand the details of classical point-cloud processing algorithms for the ADAS scenario
- get hands-on experience implementing at least one of the classical algorithms in C++
- get an overview of deep learning approaches to perception in the autonomous driving scenario
- understand how to measure the accuracy of the algorithms and deploy them to state-of-the-art hardware
- get an overview of open datasets for autonomous driving
A certificate on the Luxoft Training letterhead is issued upon completion of the course.
Objectives
- After this training you'll understand a spectrum of classical and deep learning perception algorithms that process point-cloud data from a LiDAR
- You'll gain hands-on experience implementing a selected algorithm in C++
Target audience
- This course is designed for computer vision algorithm developers in the automotive field
Roadmap
Brief introduction to ADAS and Autonomous Driving
- Levels of autonomy, classic AD stack
- Players on the market, LiDAR mount options
- LiDAR technological directions
- Overview of LiDAR vendors and models
- Characteristics of Velodyne’s LiDARs
- ASIL levels, ISO 26262
Basic system setup
- Coordinate systems (global, local, ego-vehicle, sensor, other traffic participants’)
- Calibration (see the Eigen sketch after this list)
- Synchronization
- Latency and jitter
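To make the calibration item concrete: applying an extrinsic calibration is a single rigid-body transform from the sensor frame into the ego-vehicle frame. Below is a minimal sketch using Eigen (the library the practical exercise relies on); the mounting numbers are invented purely for illustration.

```cpp
#include <cmath>
#include <iostream>
#include <Eigen/Geometry>

int main() {
    // Extrinsic calibration: the pose of the LiDAR in the ego-vehicle frame.
    // Hypothetical values: mounted 1.5 m forward, 1.8 m up, 1 degree yaw offset.
    Eigen::Isometry3d lidar_to_ego = Eigen::Isometry3d::Identity();
    lidar_to_ego.translate(Eigen::Vector3d(1.5, 0.0, 1.8));
    lidar_to_ego.rotate(Eigen::AngleAxisd(1.0 * M_PI / 180.0,
                                          Eigen::Vector3d::UnitZ()));

    // A point measured in the sensor frame...
    Eigen::Vector3d p_sensor(10.0, -2.0, 0.3);
    // ...expressed in the ego-vehicle frame.
    Eigen::Vector3d p_ego = lidar_to_ego * p_sensor;
    std::cout << "ego frame: " << p_ego.transpose() << "\n";
    return 0;
}
```

The same pattern chains further: composing with an ego-to-global pose expresses the point in the global frame.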
Classical point-cloud perception algorithms
- Overview of perception tasks solvable with LiDAR
- Multi-frame accumulation (motion compensation)
- Ground detection/subtraction (a PCL-based sketch follows this list)
- Occupancy grid
- Clustering (DBSCAN)
- Convex hull estimation
- Lane detection from a point cloud
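As a taste of this module, here is a hedged sketch of the ground-detection step using PCL's RANSAC plane segmentation. The removeGround function name is ours, and the distance threshold and iteration count are assumed values to tune per sensor and scene.

```cpp
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr removeGround(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
    // Fit a single dominant plane with RANSAC and treat it as the ground.
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.2);  // 20 cm inlier band (assumed, tune it)
    seg.setMaxIterations(100);      // assumed iteration budget
    seg.setInputCloud(cloud);

    pcl::ModelCoefficients coefficients;
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    seg.segment(*inliers, coefficients);

    // Keep everything that is NOT on the ground plane.
    pcl::ExtractIndices<pcl::PointXYZ> extract;
    extract.setInputCloud(cloud);
    extract.setIndices(inliers);
    extract.setNegative(true);
    pcl::PointCloud<pcl::PointXYZ>::Ptr obstacles(
        new pcl::PointCloud<pcl::PointXYZ>);
    extract.filter(*obstacles);
    return obstacles;
}
```

A single global plane is a simplification; on sloped or uneven roads, grid- or patch-based ground models are often used instead.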
Practical exercise
- Review of code that implements ground-plane removal, clustering, convex hull extraction, and visualization in C++ using the Eigen and PCL libraries
- Practical task: implement one of the following algorithms: ground-plane removal with RANSAC, or convex hull calculation with the Graham scan (an illustrative hull sketch follows this list)
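For orientation before the exercise, here is a hedged sketch of a 2D convex hull using Andrew's monotone chain, a close variant of the Graham scan. It is not the course's reference solution; the Pt type and convexHull name are our own. In the ADAS context the input would be a cluster's points projected onto the ground (BEV) plane.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Cross product (b - a) x (c - a); > 0 means a counter-clockwise turn.
static double cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

std::vector<Pt> convexHull(std::vector<Pt> pts) {
    if (pts.size() < 3) return pts;
    std::sort(pts.begin(), pts.end(), [](const Pt& a, const Pt& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    std::vector<Pt> hull(2 * pts.size());
    std::size_t k = 0;
    for (const Pt& p : pts) {  // lower hull
        while (k >= 2 && cross(hull[k - 2], hull[k - 1], p) <= 0) --k;
        hull[k++] = p;
    }
    const std::size_t lower = k + 1;
    for (auto it = pts.rbegin() + 1; it != pts.rend(); ++it) {  // upper hull
        while (k >= lower && cross(hull[k - 2], hull[k - 1], *it) <= 0) --k;
        hull[k++] = *it;
    }
    hull.resize(k - 1);  // the last point repeats the first
    return hull;
}
```

Both the classic polar-angle Graham scan and this variant run in O(n log n), dominated by the sort.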
Perception with neural networks
- Introduction to deep learning based approaches
- Taxonomy of neural networks for point cloud processing
- Basic block: PointNet
- VoxelNet (BEV detection)
- SECOND (BEV detection)
- PointPillars (BEV detection; the shared pillarization step is sketched after this list)
- Fast and Furious (BEV detection and prediction)
- Frustum PointNet (projection view, detection)
- MV3D (multiview detection)
- Multiview fusion, MVF (multiview detection)
- Multi-View LidarNet (multitarget: segmentation and detection)
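VoxelNet, SECOND, and PointPillars all begin by scattering the cloud into a BEV grid before a PointNet-like block encodes each cell. Here is a plain-C++ sketch of that pillarization step; the grid extents and the 0.16 m cell size mirror the PointPillars KITTI configuration, and the Point type and pillarize function are our own illustration.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Point { float x, y, z, intensity; };

std::unordered_map<std::uint64_t, std::vector<Point>>
pillarize(const std::vector<Point>& cloud) {
    const float x_min = 0.f, x_max = 69.12f;      // detection range (m)
    const float y_min = -39.68f, y_max = 39.68f;
    const float cell = 0.16f;                     // pillar size (m)
    const std::uint64_t nx =
        static_cast<std::uint64_t>((x_max - x_min) / cell);

    // Each non-empty BEV cell becomes one pillar (keyed by a flat index).
    std::unordered_map<std::uint64_t, std::vector<Point>> pillars;
    for (const Point& p : cloud) {
        if (p.x < x_min || p.x >= x_max || p.y < y_min || p.y >= y_max)
            continue;  // outside the detection range
        const auto ix = static_cast<std::uint64_t>((p.x - x_min) / cell);
        const auto iy = static_cast<std::uint64_t>((p.y - y_min) / cell);
        pillars[iy * nx + ix].push_back(p);
    }
    return pillars;
}
```

In the real networks this step also caps the number of points per pillar and augments each point with offsets from the pillar center before the learned encoder runs.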
Open datasets for autonomous driving
- KITTI (a scan-loader sketch follows this list)
- Semantic KITTI
- nuScenes
- Waymo
- Argoverse
- Lyft Level-5
- Udacity
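A practical note on KITTI: each Velodyne scan is stored as a flat binary file of float32 (x, y, z, reflectance) records, so a loader fits in a few lines of C++. The sketch below (our own loadKittiScan helper) assumes that layout.

```cpp
#include <cstdio>
#include <vector>

struct KittiPoint { float x, y, z, reflectance; };

// Reads one scan from KITTI's velodyne directory (e.g. 000000.bin).
std::vector<KittiPoint> loadKittiScan(const char* path) {
    std::vector<KittiPoint> points;
    if (FILE* f = std::fopen(path, "rb")) {
        KittiPoint p;
        while (std::fread(&p, sizeof(KittiPoint), 1, f) == 1)
            points.push_back(p);
        std::fclose(f);
    }
    return points;
}
```

The other datasets use their own containers and SDKs (e.g. nuScenes and Waymo ship Python toolkits), so loaders differ per dataset.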
Continuous deployment of deep learning models
- Accuracy metrics (a BEV IoU sketch follows this list)
- Non-regressive deployment
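Detection accuracy metrics such as average precision (AP) are built on intersection-over-union between predicted and ground-truth boxes. Below is a minimal sketch for axis-aligned BEV boxes (the BevBox type and iou function are illustrative); the rotated-box IoU used by most AD benchmarks adds polygon clipping on top of this idea.

```cpp
#include <algorithm>

struct BevBox { float x_min, y_min, x_max, y_max; };

// Intersection-over-union of two axis-aligned boxes in the BEV plane.
float iou(const BevBox& a, const BevBox& b) {
    const float ix = std::max(0.f, std::min(a.x_max, b.x_max) -
                                   std::max(a.x_min, b.x_min));
    const float iy = std::max(0.f, std::min(a.y_max, b.y_max) -
                                   std::max(a.y_min, b.y_min));
    const float inter = ix * iy;
    const float area_a = (a.x_max - a.x_min) * (a.y_max - a.y_min);
    const float area_b = (b.x_max - b.x_min) * (b.y_max - b.y_min);
    return inter / (area_a + area_b - inter);
}
```

Matching predictions to ground truth at a fixed IoU threshold (commonly 0.7 for cars on KITTI) yields the true/false positives from which precision-recall curves and AP are computed.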
Compute platforms for autonomous driving
- Overview of platforms: NVIDIA Drive PX 2 and Pegasus, Mobileye, Tesla's onboard computer
- TensorRT inference library (a minimal inference sketch follows)
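To ground the TensorRT item, here is a hedged skeleton of loading a serialized engine and running inference through the TensorRT C++ API (TensorRT 8-era enqueueV2 interface). The engine file name and buffer sizes are placeholders; real code needs error handling, correctly sized buffers per binding, and host-device copies.

```cpp
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>
#include <NvInferRuntime.h>
#include <cuda_runtime_api.h>

// Minimal ILogger implementation required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::fprintf(stderr, "%s\n", msg);
    }
};

int main() {
    // Load a pre-built serialized engine (hypothetical file name).
    std::ifstream file("detector.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // Device buffers for each engine binding; sizes are placeholders and
    // depend on the network's input/output tensor shapes.
    void *d_input = nullptr, *d_output = nullptr;
    cudaMalloc(&d_input, 1 << 20);
    cudaMalloc(&d_output, 1 << 20);
    void* bindings[] = {d_input, d_output};

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    context->enqueueV2(bindings, stream, nullptr);  // asynchronous inference
    cudaStreamSynchronize(stream);
    return 0;
}
```

Building the engine in the first place (from ONNX, with FP16/INT8 calibration) happens offline; only the deserialized engine runs on the target platform.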