Created: May 22, 2021
Modified: January 24, 2022
Tesla Autopilot
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.
- On the Robot Brains podcast, Andrej Karpathy explained to Pieter Abbeel why he thinks Tesla has the right approach to self-driving.
- Tesla's strategy is cheap sensors -> massive-scale data collection, vs Waymo's strategy of precise sensors -> only limited deployment and data collection.
- The cars have eight cameras, feeding into one network that spits out the state estimate. This is sensor fusion without any explicit Bayesianism. (A toy sketch of what such a fusion network could look like is below, after this list.)
- Tesla's vision system is 'programmed' by people who select new data to add to the data set. It's important to be selective because the dataset size is limited by their compute budget (and the need to stay within a two-week iteration cycle), so choosing the data is a big part of the job.
- The goal is that the system improves itself automatically. Data streams in from cars, important episodes are automatically identified and added to the training set, and the model is retrained, retested, and redeployed to collect more data. (A schematic of this loop is sketched below, after the fusion sketch.)
- Since they're compute-bound, the system also just gets better automatically over time as hardware improves and compute costs fall.
- The optimistic case is that, by the phase change hypothesis, eventually you get a genuinely good system.
- One could ask: should you need all of this experience to do perception? No. A human doesn't need to see thousands of fire hydrants in order to recognize one. Big advances in data efficiency could disrupt the Tesla model.
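A toy sketch of the "eight cameras into one network" idea, in PyTorch. This is not Tesla's actual architecture; the layer choices, sizes, and the plain concatenation-style fusion are all my own assumptions for illustration. The point is just that the cameras are fused in learned feature space, with no explicit Bayesian filter downstream.

```python
import torch
import torch.nn as nn

class MultiCamFusion(nn.Module):
    """Toy multi-camera fusion: a shared per-camera backbone, then a fusion
    head that regresses a single state estimate. All sizes are arbitrary."""

    def __init__(self, num_cams: int = 8, state_dim: int = 32):
        super().__init__()
        # Shared weights across cameras: every image is encoded the same way.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion happens in feature space: concatenate all camera embeddings
        # and regress the state directly, with no explicit Bayesian filtering.
        self.fusion = nn.Sequential(
            nn.Linear(32 * num_cams, 128), nn.ReLU(),
            nn.Linear(128, state_dim),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, num_cams, 3, H, W)
        b, n, c, h, w = images.shape
        feats = self.backbone(images.view(b * n, c, h, w)).view(b, -1)
        return self.fusion(feats)

model = MultiCamFusion()
state = model(torch.randn(2, 8, 3, 96, 96))  # -> shape (2, 32)
print(state.shape)
```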
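And a schematic of the data-engine loop from the bullets above, as plain Python. Every name here (Episode, collect_episodes, retrain, passes_tests, the flagging rule, the budget size) is a hypothetical placeholder; the only point is the shape of the loop: stream data in, keep the surprising episodes, cap the dataset to the compute budget, retrain, retest, redeploy.

```python
import random
from collections import deque
from dataclasses import dataclass, field

MAX_DATASET_SIZE = 1_000  # stand-in for the compute / two-week-cycle budget

@dataclass
class Episode:
    frames: list = field(default_factory=list)
    model_error: float = 0.0  # how far the model was from the driver's action

def collect_episodes(n: int = 100):
    """Stand-in for data streaming back from the fleet."""
    return [Episode(model_error=random.random()) for _ in range(n)]

def retrain(dataset):
    """Stand-in for the (expensive) training run on the curated dataset."""
    return f"model trained on {len(dataset)} episodes"

def passes_tests(model):
    """Stand-in for the regression tests gating redeployment."""
    return True

def data_engine_iteration(dataset, flag_threshold: float = 0.9):
    # 1. Data streams in from cars.
    episodes = collect_episodes()
    # 2. Important episodes are automatically identified, e.g. where the
    #    model disagreed most with what the driver actually did.
    flagged = [ep for ep in episodes if ep.model_error > flag_threshold]
    # 3. They are added to the training set; the deque's maxlen evicts the
    #    oldest data so the set stays within the compute budget.
    dataset.extend(flagged)
    # 4. Retrain, retest, and redeploy to collect more data.
    model = retrain(dataset)
    if passes_tests(model):
        print("deployed:", model)

dataset = deque(maxlen=MAX_DATASET_SIZE)
for _ in range(3):
    data_engine_iteration(dataset)
```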