Created: 2022-1-6 20:05:09
Modified: 2022-1-21 15:23:02
computer vision
video
attack/safety
sim, test
Model
lane-change:
traditional rule-based model: assesses the positions and speeds of surrounding vehicles and decides gap acceptance (see the gap-acceptance sketch below)
Reinforcement Learning-based: end-to-end; a decision-making module (deep Q-learning) + an optimal lattice planner + a model predictive controller
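A minimal sketch of the rule-based gap-acceptance idea above; the thresholds and the time-headway term are illustrative assumptions, not values from any cited model:

```python
def accept_lane_change(lead_gap_m, lag_gap_m,
                       ego_speed_mps, lag_vehicle_speed_mps,
                       min_lead_gap_m=10.0, min_lag_gap_m=8.0,
                       time_headway_s=1.0):
    """Rule-based gap acceptance: accept the lane change only if both the
    gap to the leading vehicle and the gap to the lagging vehicle in the
    target lane exceed speed-dependent critical gaps."""
    # Critical lead gap grows with ego speed (time-headway term).
    critical_lead = min_lead_gap_m + time_headway_s * ego_speed_mps
    # Critical lag gap grows when the lagging vehicle is closing in faster.
    closing_speed = max(0.0, lag_vehicle_speed_mps - ego_speed_mps)
    critical_lag = min_lag_gap_m + time_headway_s * closing_speed
    return lead_gap_m >= critical_lead and lag_gap_m >= critical_lag

# Example: 25 m lead gap, 15 m lag gap, ego at 15 m/s, lag vehicle at 18 m/s
print(accept_lane_change(25.0, 15.0, 15.0, 18.0))  # True
```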
ChauffeurNet
ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst
ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst | Papers With Code
Iftimie/ChauffeurNet (github.com)
Apollo
map:
DAVE-2 (a steering angle model; see the architecture sketch after the DAVE variants below)
(refer to DeepXplore: Automated Whitebox Testing of Deep Learning Systems peikexin9/deepxplore: DeepXplore code release (github.com))
Dave-2 (End to end learning for self-driving cars, 2016, 3808 citations, 1+114 code implementations):
End to End Learning for Self-Driving Cars | Papers With Code
End-to-End Deep Learning for Self-Driving Cars | NVIDIA Technical Blog (video; also introduce how the simulator works)
Dave-norminit (Visualizations for understanding the regressed wheel steering angle for self driving cars):
Visualizations for regressing wheel steering angles in self driving cars (jacobgil.github.io)
Dave-dropout (Behavioral cloning: end-to-end learning for self-driving cars)
End-to-end learning for self-driving cars - Alex Staravoitau’s Blog
Self_Driving_Car/CarND-Behavioral-Cloning-P3 at master · nachiket273/Self_Driving_Car (github.com)
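A sketch of the DAVE-2 / PilotNet layer stack from "End to End Learning for Self-Driving Cars", written with tf.keras; the layer sizes and the 66x200x3 input follow the paper, while the ELU activations, the normalization lambda, and the Adam/MSE training setup are implementation assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dave2(input_shape=(66, 200, 3)):
    """CNN from 'End to End Learning for Self-Driving Cars' (Bojarski et al., 2016):
    5 conv layers + 3 fully connected layers regressing a single steering value."""
    model = models.Sequential([
        # Normalize pixel values to [-1, 1] (the paper's normalization layer).
        layers.Lambda(lambda x: x / 127.5 - 1.0, input_shape=input_shape),
        layers.Conv2D(24, (5, 5), strides=(2, 2), activation="elu"),
        layers.Conv2D(36, (5, 5), strides=(2, 2), activation="elu"),
        layers.Conv2D(48, (5, 5), strides=(2, 2), activation="elu"),
        layers.Conv2D(64, (3, 3), activation="elu"),
        layers.Conv2D(64, (3, 3), activation="elu"),
        layers.Flatten(),
        layers.Dense(100, activation="elu"),
        layers.Dense(50, activation="elu"),
        layers.Dense(10, activation="elu"),
        layers.Dense(1),  # steering angle (regression output)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_dave2()
model.summary()
```

The Dave-norminit and Dave-dropout variants above keep this overall stack and, per their names, differ mainly in normalization/initialization and in adding dropout around the dense layers.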
Other steering angle models
(refer to DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars & DeepRoad: GAN-Based Metamorphic Testing and Input Validation Framework for Autonomous Driving Systems)
self-driving-car/challenges/challenge-2 at master · udacity/self-driving-car · GitHub (Udacity self-driving car challenge)
https://medium.com/udacity/challenge-2-using-deep-learning-to-predict-steering-angles-f42004a36ff3
Teaching a Machine to Steer a Car | by Oliver Cameron | Udacity Inc | Medium (results)
refer to Deep Steering: Learning End-to-End Driving Model from Spatial and Temporal Visual Cues
The dataset used here is provided by Udacity and was generated with NVIDIA's DAVE-2 system (refer to Self-Driving Car Steering Angle Prediction Based on Image Recognition)
Open Sourcing 223GB of Driving Data | by Oliver Cameron | Udacity Inc | Medium (dataset for challenge)
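A minimal loading sketch for that Udacity dataset, assuming it has been extracted (e.g. with rwightman/udacity-driving-reader) into per-camera images plus an interpolated.csv carrying frame_id, filename, and angle columns; the column names are an assumption to verify against the actual extraction:

```python
import csv
from pathlib import Path

def load_udacity_samples(dataset_dir, camera="center"):
    """Yield (image_path, steering_angle) pairs from an extracted Udacity bag.

    Assumes the extraction produced an interpolated.csv whose rows carry
    'frame_id', 'filename' and 'angle' columns; adjust the column names if
    your extraction tool differs."""
    dataset_dir = Path(dataset_dir)
    with open(dataset_dir / "interpolated.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row.get("frame_id") != f"{camera}_camera":
                continue
            yield dataset_dir / row["filename"], float(row["angle"])

# Example usage:
# for path, angle in load_udacity_samples("Ch2_002"):
#     print(path, angle)
```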
industry
safety
read: https://developer.nvidia.com/blog/training-self-driving-vehicles-challenge-scale/, https://www.rand.org/content/dam/rand/pubs/research_reports/RR1400/RR1478/RAND_RR1478.pdf
LGSVL with Apollo
https://www.youtube.com/watch?v=Ucr0aM334_k: about API
LGSVL graphical simulator (LG. Lgsvl simulator. [Online]. Available: https://www.lgsvlsimulator.com/) that supports both freeway and urban road structures.
collecting data through the API is slow, so use a bridge (as Apollo does)?
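A minimal sketch of launching an ego vehicle through the LGSVL Python API and handing control to Apollo over the bridge; the scene name, vehicle configuration name, and ports are assumptions that depend on the local setup:

```python
import time
import lgsvl

# Connect to a running LGSVL simulator instance.
sim = lgsvl.Simulator("127.0.0.1", 8181)

# Scene and vehicle names depend on the local installation (assumptions here).
sim.load("BorregasAve")
spawns = sim.get_spawn()

state = lgsvl.AgentState()
state.transform = spawns[0]
ego = sim.add_agent("Lincoln2017MKZ (Apollo 5.0)", lgsvl.AgentType.EGO, state)

# Hand perception/planning/control to Apollo through the bridge.
ego.connect_bridge("127.0.0.1", 9090)
while not ego.bridge_connected:
    time.sleep(1)  # wait until the bridge handshake completes

# Step the simulation; sensor data now flows to Apollo over the bridge instead of
# being pulled frame-by-frame through the Python API (the slow path noted above).
sim.run(time_limit=30.0)
```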
CARLA
carla (CARLA: An Open Urban Driving Simulator)
data (routes and scenarios as source -> RGB, LiDAR; see the collection sketch below)
route (a sequence of waypoints (and optionally a weather condition))
scenario (a trigger transform (location and orientation) and other actors present in that scenario (optional))
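A minimal sketch of collecting RGB and LiDAR frames from an ego vehicle through the CARLA Python API; the blueprint choice, sensor placement, attribute values, and output paths are illustrative assumptions:

```python
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
bp_lib = world.get_blueprint_library()

# Spawn an ego vehicle at the first recommended spawn point.
vehicle_bp = bp_lib.filter("vehicle.tesla.model3")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Front-facing RGB camera (placement/resolution are assumptions).
cam_bp = bp_lib.find("sensor.camera.rgb")
cam_bp.set_attribute("image_size_x", "800")
cam_bp.set_attribute("image_size_y", "600")
cam_tf = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(cam_bp, cam_tf, attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk("out/rgb/%06d.png" % image.frame))

# Roof-mounted LiDAR.
lidar_bp = bp_lib.find("sensor.lidar.ray_cast")
lidar_bp.set_attribute("range", "50")
lidar_tf = carla.Transform(carla.Location(z=2.5))
lidar = world.spawn_actor(lidar_bp, lidar_tf, attach_to=vehicle)
lidar.listen(lambda data: data.save_to_disk("out/lidar/%06d.ply" % data.frame))

vehicle.set_autopilot(True)  # let the Traffic Manager drive the route
```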
TODO:
- Return:
- Inherited from
- server
- CarSim physics solver
- relationship between set_autopilot and the TM-Server (see the sketch below)
- check the C++ side: the server sends carla.WorldSnapshot while the client sends tick()
- when to send carla.TextureColor or carla.TextureFloatColor to the server
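A sketch relevant to the set_autopilot/TM-Server and tick()/WorldSnapshot TODOs, based on the standard CARLA Python API: set_autopilot(True, port) registers the vehicle with the Traffic Manager instance listening on that port, and in synchronous mode the client advances the server with world.tick() and reads the resulting carla.WorldSnapshot. The ports and fixed timestep below are assumptions.

```python
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# The Traffic Manager is a server-side component reachable on its own port
# (8000 is the default); set_autopilot(True, port) hands the vehicle to it.
tm = client.get_trafficmanager(8000)
tm.set_synchronous_mode(True)

# Synchronous mode: the server waits for the client's tick() each step.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05  # assumption: 20 Hz simulation step
world.apply_settings(settings)

vehicle_bp = world.get_blueprint_library().filter("vehicle.*")[0]
vehicle = world.spawn_actor(vehicle_bp, world.get_map().get_spawn_points()[0])
vehicle.set_autopilot(True, tm.get_port())

for _ in range(100):
    frame = world.tick()             # client advances the simulation by one step
    snapshot = world.get_snapshot()  # carla.WorldSnapshot for that frame
    assert snapshot.frame == frame
```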