Shifting gears: Why Tesla’s previous failures in Full Self-Driving might not predict future failure

From Elon Musk, the book:

Almost every year, Musk would make another prediction that Full Self-Driving was just a year or two away. “When will someone be able to buy one of your cars and literally just take the hands off the wheel and go to sleep and wake up and find that they’ve arrived?” Chris Anderson asked him at a TED Talk in May 2017. “That’s about two years,” Musk replied. In an interview with Kara Swisher at a Code Conference at the end of 2018, he said Tesla was “on track to do it next year.” In early 2019, he doubled down. “I think we will be feature complete, Full Self-Driving, this year,” he declared on a podcast with ARK Invest. “I would say I am certain of that. That is not a question mark.”

So they’ll fail again in 2024? Maybe not.

For years, Tesla’s Autopilot system relied on a rules-based approach. It took visual data from a car’s cameras and identified such things as lane markings, pedestrians, vehicles, traffic signals, and anything else in range of the eight cameras. Then the software applied a set of rules, such as Stop when the light is red; Go when it’s green; Stay in the middle of the lane markers; Don’t cross double-yellow lines into oncoming traffic; Proceed through an intersection only when there are no cars coming fast enough to hit you; and so on. Tesla’s engineers manually wrote and updated hundreds of thousands of lines of C++ code to apply these rules to complex situations.

C++?!?! Seriously?
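To make the contrast concrete, here is a minimal sketch of what a rules-based planner looks like (in Python for brevity, even though the book says Tesla’s version was hundreds of thousands of lines of C++; the rule names and thresholds are invented for illustration): perception turns the camera feeds into a scene description, and hand-written conditionals decide what the car does next.

```python
# Hypothetical rules-based planner sketch (illustrative only, not Tesla's code).
# Perception (cameras -> object detection) produces a scene description;
# hand-written rules, checked in priority order, pick the next action.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Scene:
    traffic_light: Optional[str]   # "red", "green", "yellow", or None
    lane_offset_m: float           # distance from the lane center, in meters
    oncoming_ttc_s: List[float]    # time-to-collision for each oncoming car, seconds
    pedestrians_in_path: int       # pedestrians detected in the planned path

def plan(scene: Scene) -> str:
    if scene.pedestrians_in_path > 0:
        return "stop"
    if scene.traffic_light == "red":
        return "stop"
    if any(ttc < 4.0 for ttc in scene.oncoming_ttc_s):  # arbitrary safety margin
        return "yield"
    if abs(scene.lane_offset_m) > 0.3:                  # drift back toward lane center
        return "steer_to_center"
    return "proceed"

print(plan(Scene("green", 0.1, [8.2], 0)))  # -> "proceed"
```

Every new corner case (school buses, construction cones, a traffic cop waving you through) means another hand-written rule, which is how that kind of codebase swells to the size the book describes.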

According to the book, Tesla is shifting to a ChatGPT-style machine learning approach:

“Instead of determining the proper path of the car based only on rules,” Shroff says, “we determine the car’s proper path by also relying on a neural network that learns from millions of examples of what humans have done.” In other words, it’s human imitation. Faced with a situation, the neural network chooses a path based on what humans have done in thousands of similar situations. It’s like the way humans learn to speak and drive and play chess and eat spaghetti and do almost everything else; we might be given a set of rules to follow, but mainly we pick up the skills by observing how other people do them. It was the approach to machine learning envisioned by Alan Turing in his 1950 paper, “Computing Machinery and Intelligence.”

By early 2023, the neural network planner project had analyzed 10 million frames of video collected from the cars of Tesla customers. Does that mean it would merely be as good as the average of human drivers? “No, because we only use data from humans when they handled a situation well,” Shroff explains. Human labelers, many of them based in Buffalo, New York, assessed the videos and gave them grades. Musk told them to look for things “a five-star Uber driver would do,” and those were the videos used to train the computer.

During the discussion, Musk latched on to a key fact the team had discovered: the neural network did not work well until it had been trained on at least a million video clips, and it started getting really good after one-and-a-half million clips. This gave Tesla a huge advantage over other car and AI companies. It had a fleet of almost two million Teslas around the world collecting billions of video frames per day. “We are uniquely positioned to do this,” Elluswamy said at the meeting.
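What the book is describing is what machine-learning people call behavior cloning, a form of imitation learning. Here is a minimal sketch of the idea, using PyTorch and invented dataset fields (the grading scale, network architecture, and action format are my assumptions, not Tesla’s actual system): keep only the clips that human labelers graded highly, then train a network to reproduce the steering and speed the human chose for each frame.

```python
# Hypothetical behavior-cloning sketch (PyTorch). Dataset fields, grades,
# and network size are invented for illustration, not Tesla's actual system.
import torch
import torch.nn as nn

class PlannerNet(nn.Module):
    """Map a camera frame to a driving action: [steering angle, target speed]."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, frames):
        return self.head(self.backbone(frames))

def train(clips, epochs=1, min_grade=4.5):
    """clips: iterable of (frame_tensor, human_action, labeler_grade) tuples."""
    # "Five-star Uber driver" filter: train only on clips the labelers graded highly.
    good = [(f, a) for f, a, grade in clips if grade >= min_grade]
    net = PlannerNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for frame, human_action in good:
            pred = net(frame.unsqueeze(0))                   # add batch dimension
            loss = loss_fn(pred, human_action.unsqueeze(0))  # imitate the human
            opt.zero_grad(); loss.backward(); opt.step()
    return net
```

The filtering step is what answers the “merely as good as the average driver” question in the passage above: the network never sees the mediocre clips, so it imitates only the five-star ones.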

Despite grand claims by academics seeking funding, rules-based AI generally failed to do anything interesting or practical from 1970 to 2010 (see MYCIN and CADUCEUS, for example). Statistical approaches to AI, however, began to deliver useful systems, e.g., for speech recognition, starting around 2010.

How Tesla describes the future:

FSD would provide a huge lifestyle boost here in South Florida, where there are a lot of 1- and 2-hour drives that lead to interesting places: parks, cultural events, theme parks, etc. The drives themselves, however, are boring: straight highways, with a lot of traffic close to Miami and Orlando. FSD should work quite well. FSD would also be good for getting to and from international airports. There are a lot more flights from FLL and MIA than from PBI, which is closer to our house, but with a self-driving car it might become more sensible to fly out of farther-away airports.

14 thoughts on “Shifting gears: Why Tesla’s previous failures in Full Self-Driving might not predict future failure”

  1. Curious mind wants to know how NASA made the decision to place a >$1B order after 4 consecutive launch failures and just a few days before SPX’s bankruptcy? Does the book mention this???

  2. This is a fascinating topic.

    As a highly skilled driver with questionable visual acuity, I sometimes think about how I would program “me” into a robot driver. Improved vision and perfected attention are the low-hanging fruit.

    The two hard problems for software are emotional intelligence and physical intuition. The first requires awareness of other drivers (plus bicyclists and pedestrians and street sign designers) and the ability to recognize cues that they may be inattentive, aggressive, foolish, high, drunk, or otherwise defective. The second requires understanding inertia, gravity and friction to anticipate what an unexpected incident could look like and how to steer clear.

    A rules-based approach would give an “idiot score” to every driver and potential obstacle, but the obvious limitation is that everything is an idiot. I’m definitely fascinated by this post and by the idea that a neural net could develop an intuition for the same.

  3. Feels like every year there was one development that was going to finally fix it. One year, it was training on synthetic visuals. Another year, it was replacing missing information with temporal predictions. Another year, it was a final hardware upgrade. Then they said dropping NVidia was going to fix it. Then the industry devotes all its attention to copying whatever this year’s speech was about, without going anywhere.

  4. If and when anyone actually delivers cost-effective cars that can drive themselves without a human minder, the traffic on roads will increase dramatically in high-cost urban areas, as people move 100 miles away from their workplaces and then go to work in vans where they can sleep, watch TV, read the paper, etc.

    If this happens, road traffic will be epic!

  5. I just spent 24 hours driving 1400 miles from NY to St. Pete in my Model Y with FSD.

    I will never (NEVER) have a car without FSD again. (currently have 3 FSD cars)

    • Ted: How often did you have to intervene during these 1400 miles? And on what kinds of roads? What about our use case here in Florida of high-quality highways, but sometimes hellish traffic? Would the current Tesla FSD handle that for all of the time on the highway? (There is so little non-highway driving that self-driving doesn’t need to work for that, though I guess it would be nice to have something that could handle surface streets in Miami and Miami Beach.)

  6. We have (wife drives) a Model Y (vision-only version) with FSD; it works great on highways. On local roads it tries to kill us at least twice a day by turning into oncoming traffic. On local roads it is horrible. As we have the vision-only model, it still does not have the Summon feature yet, and the park assist takes almost 10 seconds to load, so every time I have to back out of a parking spot I have to wait 10 seconds.

  7. Phil, I was in Gulfstream for a month last year. It drove everywhere. Mostly I just went to Delray and back.

    I also spent August ’22 driving to CO, UT, WY, etc., with mountain bikes. It did all the driving.

    It can make decisions like a 90-year-old sometimes, annoyingly slow or jerky.

    For me it has gone from “let’s see how much this can do” to “let’s let it do the mundane stuff that it does well, and I’ll take over where it’s weak”; I intervene in the areas where I know its performance annoys me.

    I don’t know how you do 1400 miles in one straight shot by yourself without it – not at 58 years old, anyway.

    • Ted: Thanks. It is interesting that autonomy is all over the map (so to speak) right now. Tesla does best on highways at highway speeds, it sounds like. Mercedes has autonomy, but only for traffic jams on divided highways (no more than 40 mph) so you have to be ready to take over any time there is a break in the traffic (and then reengage the system once traffic slows back down to a crawl?). Waymo and Cruise are optimized for slow driving in city environments? It’s like the early days of the automobile when nobody could agree on power source (battery, steam, gasoline) or engine location or much of anything else.

      I’ve been consistently wrong about the car industry, predicting that Toyota and Honda would jump in with awesome EV technology and overtake Tesla, for example. But I do wonder if there will be a disruption of some kind where one company has a breakthrough that nobody else can match. I hesitate to predict which company it will be since I am always wrong! But let me try anyway… Waymo or Cruise because the city environment is the toughest and everything else should be easy after that; Tesla because they’ve been at this the longest and hard-working people love to be on Elon’s team; Toyota or Honda because they have the best engineers in the car industry.

    • https://www.theverge.com/2022/11/30/23485989/honda-sensing-360-adas-hands-free-driver-assist says “Honda Sensing 360 will include hands-free highway driving and automatic lane changes.” and “Honda owners in China will be able to purchase the upgraded Honda Sensing 360 system later this year. US customers can option up in the late 2020s, and by 2030, the system will come standard on all Honda vehicles.” So the Chinese market is about 6 years ahead of the US market? I wonder if that is because of the different legal environment. Maybe in China, the system just has to be better than humans to avoid massive losses in the courts when the inevitable accident does occur.

    • If Honda’s automatic all-wheel drive is indicative of how Honda’s adaptive software works, I’d wait until GM comes up with a self-driving car. Intelligent software seems to be a sore point for Honda, in otherwise well-engineered, reliable vehicles, probably the best-value cars and CUVs out of all carmakers.

  8. Hmmm. I don’t think I’ve effectively communicated what FSD does for me.

    It drives me EVERYWHERE.
    Not just highways. EVERYWHERE.

    It’s like having a 14-year-old on your lap that you supervise and are responsible for.
    Some things it doesn’t do as well as I’d like, and I’m no longer patient, so I intervene. Often these are unprotected left turns, or rights on red. Sometimes it’s a “no right on red” that it wants to turn right at.
    Often it drives like a nearsighted 75-year-old. With a 3.5-second 0-60, I know I can make gaps it will not attempt.
    It will save me time if I’m someplace unfamiliar. Driving to see the Cybertruck in Tampa yesterday, it made all kinds of turns I would have missed.

    Until it gets better I just know where I’m going to intervene.

    Sometimes it’s jerky, which is uncomfortable.
    The next version (due in “2 weeks”) looks like it’ll be another huge jump forward. Hopefully the car will not just learn to drive, but it will learn to drive the way YOU like to drive and simply be a better version of you.

    Click here and scroll down to schedule Tesla test drive https://ts.la/ted35517

Comments are closed.