Tesla and Uber self-driving automobiles are under the microscope again after autonomous car crashes caused two fatalities last spring.

When legal experts on driverless-vehicle accidents scrutinize the crash evidence, attorneys may conclude that life-saving algorithms either failed to act or disengaged hardware abruptly.

Now, one may ask: why did vital computer code fail to protect pedestrians and keep autonomous car drivers safe?

According to some automakers, programmers coded their autopilots to behave this way so that drivers could have a more pleasant driving experience.

Autopilot Data Errors Produce Self-Driving Car Accidents

Machines are imperfect, yet self-driving cars depend on formulas and calculations to command their automated hardware and software.

The binary decision rules that autopilot algorithms rely on to make correct navigation choices often produce erroneous reports about the vehicle’s surroundings while in operation.

To minimize these data reporting errors, autonomous automobile software must self-update automatically, and tech engineers must code probable driving/road condition scenarios into the autopilot’s algorithm.

Self-driving code produces false positive data errors when software improperly indicates the existence of a condition that is not there; conversely, false negatives arise when the autopilot mistakenly reports the absence of a condition that is actually present.

This means driverless automobiles are prone to two distinct kinds of data error while on the road.

For example: a self-driving Uber’s autopilot detects a tire in the road and brakes suddenly when nothing is present, or the software fails to discover the hazard and makes no attempt to avoid it.
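To make the two error types concrete, here is a minimal illustrative sketch in Python; the function and scenario names are hypothetical and are not drawn from any vendor’s actual autopilot code:

```python
# Illustrative only: classify an autopilot's hazard report against
# what is actually on the road. Names and scenarios are hypothetical.

def classify_detection(hazard_present: bool, hazard_detected: bool) -> str:
    """Compare reality (hazard_present) with the autopilot's report."""
    if hazard_detected and not hazard_present:
        return "false positive"   # e.g., braking suddenly for a tire that isn't there
    if hazard_present and not hazard_detected:
        return "false negative"   # e.g., missing a real tire and not avoiding it
    return "correct"

# The tire example, both ways:
print(classify_detection(hazard_present=False, hazard_detected=True))   # false positive
print(classify_detection(hazard_present=True, hazard_detected=False))   # false negative
```

Both outcomes are failures, but they fail in opposite directions, which is why they demand different engineering responses.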

Fallible Driverless Autopilot Programs

If you are an autonomous automobile software provider, you will probably ask your engineers to perform statistical hypothesis testing before designing the algorithms that tell the vehicle how to drive carefully on the road.

But is it conceivable for a person to figure out all existing driving scenarios?

Even if one could, what detection limits should programmers set to accurately mimic human judgment and driver reaction?

Using the example above, let’s say a software provider determines that a driverless car should search for abandoned tires in roadways, and the engineer uses hypothesis testing to establish how and when the vehicle will brake upon encountering one.

If the programmer sets the algorithm’s detection threshold too low, false positive data errors may arise and the vehicle will brake often and without cause.

On the other hand, if the engineer sets the bar too high, the autonomous vehicle’s autopilot may miss the abandoned tire and cause a self-driving car collision.
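This threshold trade-off can be sketched with made-up numbers; the confidence scores and threshold values below are purely illustrative, not real calibration data:

```python
# Hypothetical sketch of the detection-threshold trade-off.
# Confidence scores and thresholds are invented for illustration.

def brake_decision(confidence: float, threshold: float) -> bool:
    """Brake when the sensor's confidence that a hazard is present
    exceeds the chosen detection threshold."""
    return confidence > threshold

# Two sensor readings: a shadow on the road (0.3) and a real tire (0.6).
shadow, real_tire = 0.3, 0.6

# Threshold set too low: the car brakes for the shadow (false positive).
print(brake_decision(shadow, threshold=0.2))     # True  -> unwarranted braking

# Threshold set too high: the car misses the tire (false negative).
print(brake_decision(real_tire, threshold=0.8))  # False -> hazard missed
```

No single threshold eliminates both failure modes; moving it in one direction trades one error type for the other.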

Autonomous automobile innovators therefore have burdensome decisions to make before they can successfully bring their self-driving vehicles to market with minimum liability exposure.

They must first theorize every conceivable vehicle, road, pedestrian and hazard condition that exists while driving; then, the driverless car manufacturer must get their software providers to produce binary action and no-action calculations for each scenario.

In the end, how an autopilot handles false positives and false negatives will determine whether the self-driving car produces an unpleasant driving experience (i.e., via unnecessary braking) or causes more autopilot collisions and autonomous automobile fatalities (i.e., via failing to brake).

How Today’s Driverless Car Autopilots Handle Data Errors

A few self-driving cars fall back on other installed hardware when the vehicle’s autopilot receives excessive data errors from one set of equipment. Tesla’s automated navigator, for example, leans on its cameras when radar algorithms record high false positive counts.
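A rough sketch of a sensor-fallback policy of this kind follows; the counts and cutoff value are invented for illustration and do not reflect any manufacturer’s actual logic:

```python
# Rough sketch of a sensor-fallback policy: if one sensor's recent
# false-positive count climbs too high, weigh another sensor instead.
# The cutoff value and counts are invented for illustration.

def preferred_sensor(radar_false_positives: int, cutoff: int = 10) -> str:
    """Fall back to vision when radar logs too many false positives."""
    return "camera" if radar_false_positives > cutoff else "radar"

print(preferred_sensor(3))    # few errors: radar still trusted
print(preferred_sensor(25))   # too many false positives: camera takes over
```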

Autonomous vehicles such as Uber’s disengage self-driving functions and force their drivers to take command when hardware such as Light Detection and Ranging (LIDAR) causes the vehicle to, say, brake excessively without reason.

In both situations, while backup schemes exist to limit autopilot data errors, algorithm mishaps sometimes cause unavoidable autonomous car accidents.

Unnecessary braking can provoke rear-end collisions, and a failure to brake can certainly cause devastating driverless car crashes and even self-driving car deaths.

Knowing that zero autopilot data error is impractical to achieve, let’s put this argument into perspective by examining two actual driverless car data error events.

Tesla Self-Driving Car Death

Last March, a self-driving Tesla drove into a roadblock and killed its driver. The autonomous car’s autopilot misjudged a division in the road and accelerated head-on into a concrete divider. https://www.wired.com/story/tesla-autopilot-self-driving-crash-california/

The driver engaged Tesla’s autonomous cruise control while traveling on a highway; after a car in front of the vehicle exited the freeway, the Tesla perceived the road was free and sped up to 75 mph.

A few days before, transportation officials restricted the left lane and installed concrete barriers, which Tesla’s self-driving algorithm didn’t pick up.

Tesla’s radar system was supposed to detect the approaching concrete barrier and perform an emergency braking maneuver; but the automobile’s software provider had programmed the car to rely more on the vehicle’s cameras when radar produced false positive data errors.

The vision hardware didn’t recognize the inconspicuous divider signage and faded road stripes; thus, when Tesla’s algorithm got things wrong, the driverless car struck the lane divider at full speed.

Similar incidents occurred when a self-driving Tesla rear-ended a stopped firetruck in January, collided with a parked police car in May, and killed a driver after hitting a trailer in 2016. http://www.businessinsider.com/tesla-crash-utah-firetruck-injured-driver-2018-5 https://www.theguardian.com/technology/2018/may/29/tesla-crash-autopilot-california-police-car https://www.nytimes.com/interactive/2016/07/01/business/inside-tesla-accident.html

In all instances, Tesla’s self-driving algorithm mistrusted the radar equipment because its data recorded numerous false positives. Software providers further believed that relying solely on radar to perform data-error driving maneuvers would produce excessive braking as the hardware detects stationary road objects such as road signs or parked vehicles.

Such considerations consequently compelled programmers to place more responsibility on the driverless car’s visual equipment, a choice that, as we now know, contributed to the Tesla autopilot death.

Disabled Brakes Provoke Uber Self-Driving Fatality

An autonomous Uber also struck and killed a pedestrian crossing the street this year. https://www.reuters.com/article/us-autos-selfdriving-uber/self-driving-uber-car-kills-arizona-woman-crossing-street-idUSKBN1GV296

The mechanical details in this case a