The software update we mentioned allegedly corrected a flaw with Autopilot: detecting emergency vehicles and avoiding collisions with them. On August 13, NHTSA opened an investigation into 11 incidents in which Tesla vehicles on Autopilot hit emergency vehicles, one of them with fatal consequences. On August 28, a Model 3 crashed into a Florida Highway Patrol (FHP) car in Orlando. On September 21, Tesla released the 2021.24.12 update to address the problem. We have yet to confirm whether it was successful. Regardless, NHTSA did not like how Tesla handled the situation.
On October 12, the agency sent Tesla an information request stating that an OTA (over-the-air) update that “mitigates a defect that poses an unreasonable risk to motor vehicle safety” should be treated as a recall. In other words, NHTSA wanted to be properly informed about the update before it was released.
With its tweet, Tesla implied that NHTSA was trying to halt safety OTA updates and that it would keep pushing them regardless of what the agency wants. It also gives the impression that NHTSA is trying to prevent safety improvements, but that takes the agency’s message completely out of context.
What the agency argued was that Tesla apparently tried to fix a defect in Autopilot with the OTA update. That defect made Tesla cars repeatedly crash into emergency vehicles. If the flaw is indeed a defect, NHTSA disputes how Tesla dealt with it. The Safety Act establishes that when automakers “determine vehicles or equipment they produced contain defects related to motor vehicle safety,” they have to “initiate a recall by notifying NHTSA.” That is precisely what Tesla did not do in this case.
We have already mentioned the two lines of defense the company can adopt. The first is to admit Autopilot had a defect. That is risky: it could give substance to lawsuits accusing the system of malfunctioning in crashes. So far, Tesla has accused drivers of misusing the software, shielding itself from legal disputes behind the disclaimer stating that the driver is responsible at all times if they choose to use the beta software.
The second is to argue that beta software cannot have defects. After all, it is not a stable release yet. The challenge with this line of defense is that NHTSA may then ask why the software was released at all. That would affect all beta software Tesla currently offers its customers, such as Smart Summon and FSD (Full Self-Driving).
Asking why Tesla offers this beta software may lead regulators to determine that it can only be deployed to cars as a stable release. That would cost Tesla a crucial sales argument. Many people choose the company’s cars because of Autopilot and the promise that these EVs will eventually become “appreciating assets” thanks to autonomous capabilities they have yet to deliver. No car currently for sale is autonomous.
If Tesla can only offer the software when it is actually ready, it may have to refund buyers who paid up to $10,000 for it. It would also have to follow the testing procedures that all other automakers and companies pursuing autonomous driving must follow. That means having trained drivers test the software on closed tracks, or obtaining permits to do so on public roads.
It may be that Tesla is cooperative and agreeable with NHTSA in official communications while defying it on Twitter to play tough for its supporters, but that is a poor strategy. NHTSA regulators also read Twitter. Expect things to get more complicated for the company if it keeps up this attitude.
In the end, the challenge Tesla posed to NHTSA may prove true: the company can keep making its cars safer through OTA updates. The agency is only requiring it to do so while officially admitting the reason behind them: an “unreasonable risk to motor vehicle safety.” It also wants to be informed about a recall before it is performed. As you can see, safety can have a broader meaning than Tesla’s tweet tried to suggest.
Safety will continue to improve via over-the-air updates https://t.co/HvXQEboAUs
— Tesla (@Tesla) October 13, 2021