Case Study — Human Safety & Responsibility in Autonomous Vehicles

Fatal Crash Involving a Self-driving Car (Case 6.2.1)

Shiva Prasad Sarkar  |  ID: 23101302  |  Sec - 04
Prepared for:
AI Ethics — Case Submission
Topic: Human Safety and Responsibility in Autonomous Vehicles — An AI Ethics Perspective

Introduction

AI has become a daily companion that increasingly shapes how we live. We are gradually becoming dependent on it: it is gaining our trust, shaping our habits, and serving as a tool for almost every task. Beyond education, AI now appears in everything from self-driving cars to food packaging. Although AI plays an important role in making our work easier and more productive, the concerns surrounding it cannot be ignored.

AI is being integrated into self-driving cars to improve safety, efficiency, and productivity. However, incidents such as the fatal 2016 Tesla Autopilot crash, the 2018 Uber pedestrian death in Tempe, Arizona, and the 2021 Tesla crash in Texas have raised urgent ethical concerns. The aim of this study is to examine concerns surrounding artificial intelligence (AI) in autonomous vehicles and its impact on daily life, society, and the legal system. Finally, focusing on safety and ethical considerations, it proposes possible solutions within this specific area of automation.

Problem and Stakeholders

According to Kalra and Groves (2017), the main goal of automation in driving systems is to reduce human driving errors, prevent avoidable accidents, and, in short, ensure human safety. Its key stakeholders include drivers, passengers, pedestrians, manufacturers such as Tesla, engineers, national transport regulators, and society at large. Each group is deeply involved in the system and carries different expectations and risks.

Social and Environmental Impacts

Autonomous driving has a profound impact on both society and the environment. Its primary promise is improved public safety through consistent, efficient driving. But because the data-driven system can be misled by the environment, sensor errors, or complex conditions, something undesirable can happen at any moment: as the 2018 Uber incident showed, pedestrians, drivers, and other road users face particular danger when the AI misclassifies them. The environmental impacts are less direct but potentially significant. Automation can provide a smooth, efficient driving experience, yet it can still be confused by rare but important conditions such as dense fog or heavy rain.

Groups Affected and Ethical Issues

The most affected groups are pedestrians and other vulnerable road users, whom automated systems may fail to detect or may misclassify. Drivers and passengers are also at risk if they overestimate the system's capabilities; drivers sometimes rely on it uncritically because of branding such as "Autopilot" or "Full Self-Driving". The central ethical issue here is accountability for human safety. When an accident occurs, it is not clear whether the fault lies with the driver, the AI system, the engineers, or the manufacturer. Scholars describe this as an emerging "liability gap" (Council of Europe, 2020).

Clarity, Explainability, and Accountability

The Tesla and Uber cases demonstrate that these systems are neither fully transparent nor explainable: the underlying models and decision logic are difficult to explain even to experts. In addition, manufacturers' messaging and branding often confuse users. For example, Tesla insists that its systems require active supervision, while the name "Autopilot" implies full autonomy. This discrepancy reduces transparency and encourages misuse of the system.

Accountability is divided among the stakeholders: manufacturers for design, engineers for development, test operators for safety oversight, and regulators for governance. According to Bonnefon, Shariff and Rahwan (2016), liability for an accident is often ambiguous between drivers and companies. The net result is that victims suffer and are left without clear remedies.

Laws, Regulations, and Gaps

Current traffic laws and liability frameworks were not designed with AI-based driving systems in mind. When an accident occurs and manufacturers are confronted with the law, they typically claim that the system requires human oversight and shift the blame to the driver. This creates a gap between legal protection and real-world responsibility. The regulatory frameworks proposed by the Council of Europe (2020) highlight the need for strong accountability mechanisms and human-rights-focused oversight.

Engineering Responses and Safeguards

Engineers can address these concerns through both technical design and social responsibility. Technical solutions include the following; a brief sketch after the list illustrates how some of them might fit together:

  • Redundant sensor systems and robust perception algorithms.
  • Fail-safe behaviors, such as safely stopping the car when uncertainty is high.
  • Continuous driver monitoring (e.g., eye tracking) to ensure attention.
  • Transparent system logs for post-accident analysis and interpretability.
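
To make these safeguards concrete, the sketch below (in Python) combines three of them: redundant perception channels, a fail-safe fallback when uncertainty is high, and a transparent decision log. All names in it (Detection, fused_confidence, supervise) are hypothetical illustrations, not a real vehicle interface; a production system would use probabilistic sensor fusion and a certified minimal-risk-maneuver controller rather than a single confidence threshold.

    # Minimal sketch: redundant sensing, safe-stop fallback, decision log.
    # All names are illustrative; this is not a real AV interface.
    import json
    import time
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # e.g. "pedestrian", "vehicle", "unknown"
        confidence: float  # 0.0-1.0, from one independent perception channel

    def fused_confidence(camera: Detection, lidar: Detection) -> float:
        # Redundancy rule: trust a detection only as much as the least
        # confident independent channel; disagreement means high uncertainty.
        if camera.label != lidar.label:
            return 0.0
        return min(camera.confidence, lidar.confidence)

    def supervise(camera: Detection, lidar: Detection,
                  driver_attentive: bool, threshold: float = 0.8) -> str:
        # Fail-safe behavior: fall back to a minimal-risk maneuver (for
        # example, slowing to a safe stop) whenever perception confidence
        # is low or driver monitoring reports inattention.
        conf = fused_confidence(camera, lidar)
        if conf < threshold or not driver_attentive:
            action = "minimum_risk_maneuver"
        else:
            action = "continue"
        # Transparent log entry for post-accident analysis.
        print(json.dumps({
            "t": time.time(),
            "camera": vars(camera),
            "lidar": vars(lidar),
            "fused_confidence": conf,
            "driver_attentive": driver_attentive,
            "action": action,
        }))
        return action

    # Example: the channels disagree about an object ahead, so the
    # supervisor stops safely rather than guessing.
    supervise(Detection("pedestrian", 0.9), Detection("unknown", 0.6),
              driver_attentive=True)

The design choice worth noting is that disagreement between channels is treated as uncertainty rather than resolved by guessing: the NTSB (2019) report on the Tempe crash describes how the Uber system repeatedly reclassified the pedestrian instead of falling back to a safe state.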

Socially, manufacturers should avoid misleading marketing, educate users about system limitations, and provide adequate training. Regulators should require safety verification, mandatory incident reporting, and certification before products reach the public; a sketch of what a machine-readable incident record could look like follows.
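
As one illustration of mandatory incident reporting, the sketch below defines a minimal, machine-readable incident record. The schema is an assumption made for illustration, not any regulator's actual reporting standard, and all field values are placeholders.

    # Illustrative incident-report schema; not an actual regulatory standard.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class IncidentReport:
        vehicle_id: str
        timestamp_utc: str        # ISO 8601
        automation_engaged: bool  # was the driving system active?
        takeover_requested: bool  # did the system ask the driver to intervene?
        injuries: int
        narrative: str            # free-text summary for investigators

    report = IncidentReport(
        vehicle_id="TEST-0042",                # placeholder value
        timestamp_utc="2025-01-01T00:00:00Z",  # placeholder value
        automation_engaged=True,
        takeover_requested=False,
        injuries=1,
        narrative="Object ahead misclassified; no takeover request issued.",
    )
    print(json.dumps(asdict(report), indent=2))

Standardizing such records would let regulators aggregate incidents across manufacturers instead of relying on voluntary disclosures.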

The Risk of Ignoring the Problem

If accountability and safety concerns are consistently ignored, the consequences will compound: avoidable deaths will increase, public hesitation will continue to build, and distrust of the technology will deepen. Such outcomes would undermine the long-term benefits of autonomous systems and create further legal and ethical dilemmas.

Lessons Learned

The fatal Tesla and Uber cases illustrate several important lessons.

  • Humans are not perfect monitors. Drivers often become confused or over-trust the system, so cars should be designed to keep people safe despite lapses in attention.
  • Clear language matters. Names like "Autopilot" or "Full Self-Driving" can mislead people; companies need to be honest about what the system can and cannot do.
  • Liability should be clear. When an accident occurs, it must be possible to determine who is at fault: the driver, the company, or the system. Without this, victims will not receive justice and people may lose faith in the technology.
  • Safety before profit. Self-driving cars should only be put on the road when they are demonstrably safe; rushing for business reasons puts lives at risk.

Conclusion

Autonomous cars could make travel safer in the future, but recent accidents show that the technology is not there yet. The biggest issues are safety and accountability. When systems fail or drivers become completely dependent on them, ordinary people, especially pedestrians, are put at risk. To fix this, companies need to design cars with more robust safety testing, be clear about the system's limits, and take responsibility when things go wrong. Protecting people must be the main goal. If we learn from these mistakes and act responsibly, autonomous cars can truly benefit everyone.

References

  1. Stahl, B. C., Schroeder, D., & Rodrigues, R. (2021). Ethics of Artificial Intelligence: Case Studies and Options for Addressing Ethical Challenges. Springer.
  2. ACM Proceedings on AI Ethics & Governance. (2023). https://dl.acm.org/doi/proceedings/10.1145/3630106
  3. Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576. https://doi.org/10.1126/science.aaf2654
  4. Council of Europe, Committee on Legal Affairs and Human Rights. (2020). Legal aspects of “autonomous” vehicles. Strasbourg: Council of Europe.
  5. Kalra, N., & Groves, D. (2017). The enemy of good: Estimating the cost of waiting for nearly perfect autonomous vehicles. RAND Corporation. https://doi.org/10.7249/RR2150
  6. National Transportation Safety Board (NTSB). (2019). Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, March 18, 2018. NTSB/HAR-19/03.