Autonomous Vehicle Sensor Systems & Accidents—Litigation Potential

Introduction

For nearly 100 years there have been efforts to analyze and prevent automobile accidents, primarily through laws and signage devoted to that goal, along with highway design and vehicle crashworthiness modifications.

However, in the past 10+ years, “sensor engineering” has allowed the development and implementation of sophisticated microsensor devices coupled with high-capacity computer processing. This technology has given automobiles a range of special control and informational features. Such features have increased the propensity for certain types of single- and multi-vehicle accidents while decreasing the likelihood of other types. Features such as lane tracking, automatic braking, side-vehicle warnings, cruise control, car following, navigation, cell phone connections, and various engine status displays have all contributed to the complexity of a new autonomous environment in which the same human is expected to operate efficiently and safely.

Other authors have addressed these types of issues and have focused on certain features in order to understand the negative and positive attributes of Autonomous Vehicle (AV) sensor systems. Such features include: (a) the accurate detection of the vehicle environment; (b) the development of reliable GPS and communication systems; (c) the avoidance of objects and obstacles; (d) lane-keeping; and (e) adequate protection against cyber-attacks (Petrović et al., 2020; Sarkar and Mohan, 2019; Tettamanti et al., 2016; Törő et al., 2016; Wang et al., 2020). The negative attributes can, of course, lead to accidents or “near misses.”
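To make these evaluation dimensions concrete, the following is a minimal sketch, in Python, of how a reviewer might track them as a simple checklist. The class and field names are illustrative assumptions, not drawn from any of the cited frameworks.

```python
# Illustrative only: a minimal checklist of the AV sensor-system evaluation
# dimensions cited above. The class and field names are hypothetical, not
# taken from any published framework.
from dataclasses import dataclass, fields

@dataclass
class AVSensorSystemReview:
    environment_detection: bool = False      # (a) accurate detection of the vehicle environment
    gps_and_comms_reliability: bool = False  # (b) reliable GPS and communication systems
    obstacle_avoidance: bool = False         # (c) avoidance of objects and obstacles
    lane_keeping: bool = False               # (d) lane-keeping
    cybersecurity: bool = False              # (e) adequate protection against cyber-attacks

    def unresolved_items(self) -> list[str]:
        """Return the review dimensions not yet judged adequate."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a system judged adequate on every dimension except cyber-attack protection.
review = AVSensorSystemReview(True, True, True, True, False)
print(review.unresolved_items())  # ['cybersecurity']
```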

Some Autonomous Vehicle (AV) Accident History

While the following pilot data collected by California focused on fully autonomous vehicles (AVs) versus conventional vehicles (CVs), some components of AV sensor systems with machine perception are now commonplace in many so-called “conventional vehicles” as well. One has to pay attention to the types of accidents recorded and to which sensor system(s) are involved, because these features may be leading to the same kinds of accidents in newer CVs, even if traditional accident reports have not yet captured them. According to Petrović et al. (2020), the California Department of Motor Vehicles data (CA DMV, 2018a) recorded 129 traffic accidents involving AVs through December 2018. One interesting finding was that a greater share of AV accidents (15.2%) was due to traffic signal or sign violations than was the case for conventional vehicles (12.6%). This illustrates that machine perception still needs serious further development.

Despite the great expectations regarding the positive effects of AVs on road safety, the California DMV data and other studies have found that traffic accidents involving an AV occur more often than accidents involving a CV. This seems counter to the contention by Sarkar and Mohan (2019) “that autonomous vehicles can significantly reduce the number of road accidents—by up to 90%.” At least for the moment, that prediction will be set aside here as too futuristic.

In reviewing various data, Wang et al. (2020) suggest that AV errors leading to accidents can be categorized by the combination of AV architecture, software, and hardware involved. Grouped according to architecture, these are: detection errors, which occur while sensing the environment (sensor errors); decision errors, in which a wrong or erroneous decision is made from the processed perception data; and intervention errors, which are failures of the actuators. Another perspective from other authors is that most AV accidents are not caused by the AV but by another party, with such accidents labelled “passive accidents.” Avoiding these passive accidents is a huge challenge for machine perception, because it requires input stimuli dimensions that are still well beyond current sensor detection-perception capabilities for recognizing dangerous situations and responding to them.
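The following is a minimal Python sketch of that three-part taxonomy as a data structure, keyed to the architectural stage in which each error arises. The names and the stage-to-error mapping are illustrative assumptions, not a structure published by Wang et al. (2020).

```python
# Minimal sketch of the detection / decision / intervention error taxonomy.
# Names and the stage mapping are illustrative assumptions.
from enum import Enum

class AVErrorType(Enum):
    DETECTION = "error while sensing the environment (sensor error)"
    DECISION = "wrong or erroneous decision made from the perceived data"
    INTERVENTION = "failure of the actuators to execute the decision"

def classify_error(stage: str) -> AVErrorType:
    """Map an architectural stage (perception / planning / actuation) to an error type."""
    mapping = {
        "perception": AVErrorType.DETECTION,
        "planning": AVErrorType.DECISION,
        "actuation": AVErrorType.INTERVENTION,
    }
    return mapping[stage]

print(classify_error("planning"))  # AVErrorType.DECISION
```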

Litigation Involving Vehicle Micro Sensors

It is common for sensor systems to be included within integrated semiconductor circuits. Even simple appliances these days will probably have some part of their functionality dependent on such an integrated circuit (IC), and they will also have a sensor included within the logical operations. Miller Engineering has been involved in numerous cases where faults in the logic within the integrated circuit system have resulted in “non-logical” system behaviors, which have then caused accidents or near misses. Obtaining the logic diagrams for these systems for analysis is quite difficult when errors occur. The source of their creation is sometimes a great mystery, particularly when many are created in overseas (often Chinese) application arenas. However, the cocoon surrounding these integrated circuits is soon to come off!
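As a purely hypothetical illustration (not drawn from any actual case), the following sketch shows how a small fault in embedded sensor logic, here a unit mismatch in a distance comparison, can yield exactly this kind of “non-logical” behavior.

```python
# Purely hypothetical illustration of a logic fault producing "non-logical" behavior.
# Suppose an obstacle-warning routine is meant to trigger below 2.0 meters, but the
# firmware compares a raw sensor value reported in centimeters against that threshold.

WARNING_DISTANCE_M = 2.0

def should_warn_buggy(raw_distance_cm: float) -> bool:
    # Bug: centimeters compared directly against a threshold expressed in meters,
    # so the warning effectively never fires at any realistic distance.
    return raw_distance_cm < WARNING_DISTANCE_M

def should_warn_fixed(raw_distance_cm: float) -> bool:
    # Corrected: convert units before comparing.
    return (raw_distance_cm / 100.0) < WARNING_DISTANCE_M

print(should_warn_buggy(150))  # False -- obstacle at 1.5 m, yet no warning
print(should_warn_fixed(150))  # True
```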

It is surprising that more litigation has not focused on the logic systems already in CVs, but the appearance of AVs appears to be bringing attention to IC systems as a source of accidents.

Authors have begun to address these litigation possibilities and concluded that: “Litigation over autonomous vehicles (also referred to as autonomous driving systems or ADS) is in its infancy. The potential theories of liability have not yet been fleshed out or tested in court. But we can expect negligence and products liability lawsuits – not to mention statutory claims – as the government begins regulating” (Reilly and Bullard, Law360, August 31, 2018).

So, what is evolving is the recognition that the sensor data collected, and the resultant decisions, will lead to autonomous actions in vehicles, which can then result in errors of vehicle control activation (or inactivation). Some of these have already led to accidents and will lead to accidents of various severities in the future. Litigation has only begun!

On the one hand, I would expect that vehicle manufacturers will be accused of: (a) offering features which have not been adequately tested under a broad enough scope of operating conditions; and (b) failing to properly assess the changes in risk associated with the new features. On the other hand, with all roads leading back to the sensor or IC systems which are dictating vehicle responses, one might also expect heavy criticism of these “little decision makers.” Have manufacturers incorrectly defined the limitations of their automatic control products, or failed to address the reliability levels which should be associated with them?

Overselling Technology—Risk vs Advancement (Utility)

The enormous advancements in computing power leading to the Artificial Intelligence Era have caused the state of technology to jump quickly into implementation in our daily lives, including AV automated features. Has the glamour of new automated features sometimes blinded us to an adequate review of the old risk vs utility evaluation? In the case of AVs, has the excitement of getting equipment “on the road” promoted both premature introduction of, and unjustified reliance on, some of the autonomous features? Has this all been done without sufficient consideration of possible overall increases in risk?

These new features, allegedly associated with accident/injury incidents, have likely ushered in a new era of automobile accident litigation. It should not be surprising to the auto suppliers and manufacturers that there are new troubles on the horizon, as “change is always the mother of trouble,” to quote one of my favorite safety slogans.

Vision Zero vs Human Error

The trend history of AV accidents is not in the direction of fewer accidents. However, those championing the more extreme developments envision a goal called “Vision Zero,” which aims to achieve accident-free traffic. The vision of accident-free transportation is currently a hot topic, especially for vehicle manufacturers such as BMW, Daimler, and the Volkswagen Group. According to Wang et al. (2020), “94% of accidents are caused by human error, so ‘Vision Zero’ could be expected with fully automated vehicles, but the risk of expected technical failures must also be taken into account.”

I appreciate the recognition that “human error” is associated with this large portion of analyzed accidents. However, such a perspective ignores the many cases in which “human decision making” in complex situations should also be credited with preventing certain types of accident scenarios, scenarios that surpass the capabilities of any present “autonomous” control systems in use. Hence, subscribing too heavily to “Vision Zero,” or even to the present capabilities of autonomous or semi-autonomous systems, may actually produce trends toward more accidents, although with a different profile of circumstances.
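A deliberately simplified, hypothetical calculation can make this point; the figures below are assumptions for illustration only, not data from the cited studies. The arithmetic simply shows that the 94% human-error attribution does not translate into a 94% reduction once automation-induced failures and lost human “saves” are counted.

```python
# Hypothetical, simplified accounting of net accident counts under automation.
# All fractions and counts below are assumed values for illustration only.

baseline_accidents = 100.0    # index: accidents per unit of exposure with human drivers
human_error_share = 0.94      # share attributed to human error (Wang et al., 2020)

removed_by_automation = 0.80  # assumed fraction of human-error accidents automation prevents
new_technical_failures = 10.0 # assumed accidents introduced by sensor/software failures
lost_human_saves = 15.0       # assumed accidents humans would have avoided but automation does not

net = (baseline_accidents
       - human_error_share * baseline_accidents * removed_by_automation
       + new_technical_failures
       + lost_human_saves)

print(net)  # 49.8 -- fewer than 100 under these assumptions, but nowhere near zero;
            # with less optimistic assumptions the net can approach or exceed the baseline.
```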

Human Driver—Future

Early human factors authors such as Ernest McCormick recognized, starting around 1960, that there is a dividing line for task responsibilities between computers and people. That recognition seems to be experiencing a rebirth, even though the dividing line has been shifting rapidly in favor of computer systems. The new discovery, however, is that there are still tasks within the driving arena for which sensor systems have not matched the human. The new data evolving on accidents involving autonomous vehicles confirm that some sensor-computer based driving situations are more likely to lead to accidents than would occur with human perception and human interaction with the vehicle. I am sure this statement will be the subject of many further debates, within both the scientific and litigation communities. But just as the human will never be perfect in all respects, neither should we expect its substitute to be!