Tesla Autopilot Crash: The Future of Autonomous Vehicles

In 2015, the Guardian made a rather bold prediction that road users would become permanent backseat drivers by 2020. While there certainly has been tremendous growth in the autonomous vehicle industry, the leading names in tech still have a long way to go before self-driving cars replace human drivers entirely.

Fully autonomous vehicles are still largely out of reach except in special trial settings. You can buy a car that automatically brakes in anticipation of a collision, or one that helps you keep to your lane on a busy highway. However, if the current crash statistics are anything to go by, saying that fully self-driving cars are here would be a bit of a stretch.

The National Highway Traffic Safety Administration (NHTSA) has launched investigations into 27 crashes involving Tesla vehicles. In the most recent Tesla Autopilot crash that occurred in Spring, TX, near Houston, two people died after the vehicle rammed into a tree and caught fire. Investigations into the incident revealed that no one was in the driver’s seat at the time of the crash.

With this latest incident, the viability and future of autonomous vehicles have been called into question. Is the technology ready? Is there cause to worry? This article explores the answers to these questions and more.

How Do Self-Driving Cars Work?

Engineers at tech companies and automakers have been working to perfect self-driving car prototypes for several decades now. The basic idea behind their operation is quite straightforward:

Outfit a car with several cameras that can accurately track all the stationary and moving objects around it. If the car ends up on a collision course with any of those objects, it should be able to react accordingly. Load the onboard computers with software that understands the rules of the road, and the car can then steer itself to the desired destination.
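
To make that idea a little more concrete, here is a deliberately simplified sketch of that sense-plan-act loop in Python. It is purely illustrative: the object fields, the two-second safety margin, and the function names are assumptions made for this example, not code from any real autonomous driving system.

```python
from dataclasses import dataclass


@dataclass
class TrackedObject:
    distance_m: float         # estimated distance to the object, in meters
    closing_speed_mps: float  # how fast the gap is shrinking, in m/s


def time_to_collision(obj: TrackedObject) -> float:
    """Seconds until impact if neither party changes speed."""
    if obj.closing_speed_mps <= 0:
        return float("inf")  # the gap is growing, so there is no collision course
    return obj.distance_m / obj.closing_speed_mps


def drive_step(tracked_objects: list[TrackedObject]) -> str:
    """One iteration of a toy sense-plan-act loop.

    Sense: the perception stack hands us a list of tracked objects.
    Plan:  check whether any object is on a near-term collision course.
    Act:   brake if so, otherwise keep following the planned route.
    """
    for obj in tracked_objects:
        if time_to_collision(obj) < 2.0:  # arbitrary two-second safety margin
            return "BRAKE"
    return "FOLLOW_ROUTE"


# Example: a stopped obstacle 20 m ahead while we close in at 15 m/s (~54 km/h)
print(drive_step([TrackedObject(distance_m=20.0, closing_speed_mps=15.0)]))  # BRAKE
```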

Artificial Intelligence vs. Human Intelligence

While that description may sound simple, designing a car to operate with human-level cognitive ability is a lot harder than it seems. It's not enough for a car to simply follow a list of road rules to drive as well as humans do. There are plenty of things drivers do that are virtually impossible for computer-run cars to replicate.

For instance, drivers make eye contact with other road users to confirm who has the right of way at an intersection, react to hazardous weather conditions, make quick judgment calls in the face of danger, and pick up on plenty of other subtle cues that are extremely difficult to encode in software.

Even programming a car to track the objects around it is a lot harder than it sounds. Take the Google self-driving car developed by the tech giant's sister company, Waymo. The vehicle uses light detection and ranging (lidar) sensors along with high-resolution cameras to estimate the distance and depth of surrounding objects by bouncing pulses of laser light off them and timing the reflections.

The computers in the car then combine this data to create a picture of where pedestrians, cyclists, vehicles, and obstacles are as it moves through traffic. To achieve this, a massive amount of training data is required.
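
As a rough illustration of the ranging principle, a lidar unit times how long a laser pulse takes to bounce off an object and return, then converts that round-trip time into a distance. The snippet below is a minimal sketch of that single calculation with made-up numbers; a real perception pipeline fuses millions of such measurements with camera imagery to build the picture described above.

```python
SPEED_OF_LIGHT_MPS = 299_792_458  # meters per second

def lidar_distance(round_trip_time_s: float) -> float:
    """Estimate the distance to a reflecting object from a pulse's round-trip time.

    The laser pulse travels out to the object and back, so the one-way
    distance is half of (speed of light x elapsed time).
    """
    return SPEED_OF_LIGHT_MPS * round_trip_time_s / 2

# A reflection arriving 100 nanoseconds after the pulse left implies an
# object roughly 15 meters away.
print(round(lidar_distance(100e-9), 2))  # ~14.99
```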

That data is drawn from the millions of miles of driving Waymo has logged over the years, which the company uses to build reliable expectations about how other objects on the road are likely to behave.

Given how difficult it is to amass such huge amounts of real-world training data, Waymo engineers also train their vehicles on simulated data. They then need to be confident that the AI systems can draw accurate generalizations from the simulation and transfer them to real-world situations.

While that is far from a comprehensive description of how self-driving car companies get their vehicles to operate without human intervention, it illustrates the complexity involved in implementing even the simplest human capabilities.

Tesla Self-Driving Car

In October 2020, a select group of Tesla owners was chosen to receive a software update. The update downloaded automatically to their cars' computer systems, giving the vehicles improved steering capabilities and allowing them to accelerate without any human intervention whatsoever.

It essentially meant that the cars would now be able to navigate through traffic without drivers having to use their hands and feet.

Despite heavy criticism from safety advocates and other self-driving car companies that questioned whether Tesla's new technology for fully autonomous driving was ready, the company stuck to its guns.

This was presumably a bid to beat the competition by putting the largest fleet of self-driving-capable vehicles owned by ordinary consumers on the road.

A coalition consisting of Waymo, Uber, Ford, and General Motors’ Cruise issued a joint statement indicating that Tesla’s vehicles could not be considered “truly autonomous” since they still required supervision from an active driver.

Nonetheless, despite the skepticism expressed by the other carmakers, Tesla went ahead and launched the new feature, since self-driving technology is not heavily regulated in the US.

How Does Tesla Autopilot Work?

The critics of Tesla’s 2020 move argued that the carmaker was forging ahead without incorporating a key piece of hardware that nearly all self-driving car companies had integrated into the design of their vehicles.

The hardware in question: lidar sensors. These devices are usually mounted on the exterior of a vehicle to detect the precise size, depth, and shape of surrounding objects in real time, and they work even in extreme weather conditions.

Tesla opted to go against the grain, relying instead on radar linked to an advanced neural network, alongside a suite of HD cameras mounted on its vehicles.

While the company’s technology can detect pedestrians, other vehicles, and some obstacles such as trees, safety experts argue that it cannot adequately detect the true depth and shape of some of the objects it encounters.

This means that if, for instance, the vehicle were approaching a rig from the rear, it might not be able to accurately distinguish a truck from a semi-trailer truck.

Tesla CEO, Elon Musk, described lidar as pricey and redundant, claiming that anyone who relied on it was “doomed.” He further stated that the company would not incorporate the technology into any of the Tesla vehicles even if they got it “for free.”

Tesla Autopilot vs. Full Self-Driving

While other autonomous vehicle companies, including GM’s Cruise and Waymo, have been using controlled pilot programs to test their self-driving cars, Tesla went ahead to put its technology in the hands of consumers. This, in effect, meant that any risk arising from a vehicle or software malfunction would be absorbed entirely by ordinary car owners.

However, the Tesla CEO stated in an October 2020 tweet that the rollout of the Full Self-Driving (FSD) Beta feature would be controlled and cautious. Despite this reassurance, hundreds of thousands of users reported receiving the new software update by the end of that month.

According to the company's website, Tesla Autopilot is an advanced driver assistance system designed to reduce the driver's overall workload behind the wheel. The system consists of 12 ultrasonic sensors, radar, a suite of eight external HD cameras, and an onboard computer.

Tesla offers two driver assistance packages to consumers: Full Self-Driving (FSD) and Autopilot. Here's a brief overview of the features in each package.

Tesla Full Self-Driving Capability

  • Automatic lane change
  • Automatic parallel and perpendicular parking
  • Autopilot navigation
  • Auto-steering on city streets
  • Vehicle summoning via the mobile app or key
  • Stop sign and traffic light control

Tesla Autopilot

  • Auto-steering within discernible lane markings
  • Traffic-aware cruise control to match the vehicle's speed to that of surrounding traffic (a simplified sketch of this idea follows this list)
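
Conceptually, traffic-aware cruise control nudges the car's speed toward the speed of the vehicle ahead while maintaining a safe following gap. The Python sketch below shows one very simple way such logic could be expressed; the gains and the 30-meter gap target are illustrative assumptions, not Tesla's actual control law.

```python
def cruise_control_accel(own_speed: float,
                         lead_speed: float,
                         gap: float,
                         desired_gap: float = 30.0,
                         speed_gain: float = 0.5,
                         gap_gain: float = 0.1) -> float:
    """Toy proportional controller for traffic-aware cruise control.

    Returns a commanded acceleration in m/s^2: positive to speed up when
    we are slower than the car ahead and the gap is comfortable, negative
    to brake when we are faster or the gap has shrunk below the target.
    """
    speed_error = lead_speed - own_speed   # positive -> we should speed up
    gap_error = gap - desired_gap          # positive -> gap is larger than needed
    return speed_gain * speed_error + gap_gain * gap_error

# Following a car doing 25 m/s while we do 30 m/s with only a 20 m gap:
print(cruise_control_accel(own_speed=30.0, lead_speed=25.0, gap=20.0))  # -3.5 (brake)
```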

The website further indicates that both functions – FSD and Autopilot – are intended to be used with a “fully attentive driver” behind the wheel who should be ready to take over at any moment if the need arises. It also states that the presence of these features does not make the vehicles fully autonomous.

The NHTSA indicated that it was prepared to take legal action against the company if the new features posed any threat to public safety. The agency requires drivers to maintain full attention on the road at all times regardless of whether the vehicle in use is fully self-driving or not.

Tesla Autopilot Accident

The company has been dogged by myriad safety concerns over the new Autopilot features. Regulators have launched several investigations into crashes involving the carmaker's vehicles, a number of which have resulted in injuries and fatalities.

Despite this, Tesla has repeatedly come out to defend its Autopilot system, stating that the feature exists only to assist the driver rather than operate the car entirely. Ultimately, it is the driver who is responsible for the safe operation of the vehicle.

Tesla Autopilot Death

In the April 2021 crash that killed two passengers in Houston, Texas, reports indicate the vehicle – a 2019 Tesla Model S – was traveling at high speeds down a winding road when it failed to negotiate a curve. The vehicle subsequently crashed into a tree, bursting into flames on impact.

According to the Tesla CEO, the data recovered from the onboard computer showed that Autopilot was not enabled in the car at the time of the crash, despite investigators asserting that there was no driver behind the wheel at the time of the accident.

Mr. Musk further stated in a tweet that for the Autopilot feature to turn on, it requires visible lane lines on the road, which were not present on the street where the accident happened.

Nonetheless, despite the Tesla CEO's defense of the company's technology, many reports have since emerged that the vehicles' Autopilot feature can be tricked into operating without a driver present.

Consumer Reports engineers investigated these claims on a Tesla Model Y. They confirmed that the vehicle not only failed to ensure that the “active driver” was fully attentive but also could not distinguish whether or not there was a driver behind the wheel. According to the engineers, the so-called Autopilot safeguards put in place by Tesla proved insufficient.

Tesla Battery Fire

Additional reports of the April Model S crash emerged, indicating that the battery fire kept reigniting even after firefighters worked tirelessly to extinguish the flames. Although they managed to put out the initial fire, the vehicle continued to smolder and ignite four hours after firefighters first arrived on the scene.

This is not the only incident in which a Tesla Model S battery has burst into flames. The NHTSA has opened investigations into alleged battery defects that cause fires in older Tesla SUVs and sedans.

In a lawsuit filed by several Tesla owners, the plaintiffs' attorney alleges that the company modified the battery software in older Tesla models to lengthen charging times and lower driving range, purportedly as a workaround for the underlying battery defects.

Google Car Crashes Into Bus

In February 2016, a Google self-driving car was involved in a crash after it rammed into a bus in California. The vehicle, which was moving at 2 mph, pulled out in front of a public bus traveling at 15 mph.

Although there was a human behind the wheel of the self-driving car, the driver indicated that they had assumed the bus would slow down to allow the car to pull out and merge into the traffic. As a result, they failed to intervene and override the vehicle’s self-driving mechanism.

Google stated that it had since refined its algorithm to make smart cars aware that large vehicles such as buses and trucks are less likely to yield to smaller vehicles. Fortunately, no one was harmed in the incident.

Autonomous Vehicle Lawsuit

The family of a man who lost his life in a fiery accident while driving his Tesla has filed a lawsuit against the company for wrongful death, alleging negligence resulting from false promises and malfunctions of the Autopilot system. According to the man’s family, the deceased was led to believe that the vehicle was “safer” than a human-operated car.

This is just one of several lawsuits against autonomous vehicle companies, whose advertising and marketing campaigns have left consumers with an overly inflated impression of the driverless technologies’ true capabilities.

The Bottom Line

Full self-driving vehicle technology still has a lot of kinks to iron out. The main thing to keep in mind is that active driver supervision is still required, regardless of how autonomous a vehicle claims to be.

If you or a loved one was involved in a Tesla Autopilot accident or battery-related incident, get in touch with an experienced attorney as soon as possible to help you get the compensation you deserve.

Do you have any legal questions for us? Chat online with a Laws101 attorney right now.