Ethical Issues - The Trolley Problem?
The trolley problem is a moral dilemma that asks whether it is acceptable to harm one person in order to save many others.
The Issue
A self-driving vehicle is moving along a street when a group of people unexpectedly walks across it. The car’s AI detects the pedestrians, but because of the vehicle’s speed and how close the people are, it cannot brake in time. The AI must decide whether to continue straight and collide with the pedestrians or swerve to avoid them, which could injure the car’s occupants or bystanders.
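To make the "cannot brake in time" condition concrete, here is a minimal Python sketch of the stopping-distance check such a system might perform. The physics (reaction distance plus braking distance d = v²/(2μg)) is standard; the specific speed, friction coefficient, gap, and function names are illustrative assumptions, not any vendor’s actual planner.

```python
# Toy stopping-distance check: can the car brake before reaching the
# pedestrians? All numbers and names here are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_ms: float, reaction_time_s: float = 0.5,
                      friction: float = 0.7) -> float:
    """Distance covered during the reaction time, plus braking distance
    d = v^2 / (2 * mu * g) on a flat road."""
    reaction_distance = speed_ms * reaction_time_s
    braking_distance = speed_ms ** 2 / (2 * friction * G)
    return reaction_distance + braking_distance

speed = 50 / 3.6            # 50 km/h converted to m/s
gap_to_pedestrians = 15.0   # metres; hypothetical sensor reading

needed = stopping_distance(speed)
print(f"need {needed:.1f} m to stop, have {gap_to_pedestrians:.1f} m")
if needed > gap_to_pedestrians:
    print("cannot brake in time -> planner must choose another action")
```

With these toy numbers, a car at 50 km/h on dry asphalt needs roughly 21 m to stop, so a group appearing 15 m ahead takes braking alone off the table and forces exactly the choice described above.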
The ethical quandary posed by the trolley problem is a difficult challenge for policymakers and manufacturers of self-driving vehicles. In response, regulators and researchers are investigating ways to integrate ethical considerations into the algorithms that guide autonomous vehicles, and nations around the world have introduced rules and standards to oversee their development and deployment. For instance, the National Highway Traffic Safety Administration (NHTSA) in the US has published guidance for the safe development and deployment of self-driving vehicles. Similarly, the European Union’s GDPR, while primarily a data-protection law, constrains how autonomous vehicles may collect and process personal data, bearing on the transparent use of the technology.
Lastly, companies developing autonomous vehicles are also working to confront the moral dilemmas raised by the trolley problem. For example, the IEEE has formulated a framework for the ethical design and use of autonomous systems, which includes guidance for handling moral dilemmas like the trolley problem. In addition, a number of companies building autonomous vehicles, such as Tesla, Waymo, and Cruise, have released their own ethical guidelines for their vehicles.
Several frameworks have been proposed to address the ethical dilemmas the trolley problem poses for autonomous automobiles; a toy sketch of how such rules might be encoded follows the list:
- Utilitarianism: The utilitarian approach proposes that the car’s algorithm should minimize harm and maximize the overall well-being of everyone affected. In simpler terms, the car should select the option that causes the least harm to all concerned.
- Deontological ethics: The deontological framework proposes that the car’s algorithm should abide by universal moral rules, irrespective of the outcome. For instance, one such rule might be that the vehicle must never intentionally injure anyone, regardless of the circumstances.
- Social contract theory: Social contract theory proposes that the car’s algorithm should make decisions based on what society collectively agrees is acceptable. For instance, the car should select the option that conforms to commonly accepted social norms and principles.
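To illustrate how differently these frameworks can behave, here is a deliberately simplified Python sketch; everything in it (the candidate actions, the numeric harm scores, the norm flags) is a made-up assumption, not how any production AV planner works. Utilitarianism scores actions by total harm, the deontological variant filters out any action that intentionally injures someone, and the social-contract variant filters out actions that violate agreed norms, with remaining ties broken by least harm.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harm_to_pedestrians: int  # hypothetical harm scores, 0-10
    harm_to_occupants: int
    intentionally_injures: bool
    violates_norms: bool      # e.g. leaving the roadway onto a sidewalk

CANDIDATES = [
    Action("continue_straight", harm_to_pedestrians=9, harm_to_occupants=0,
           intentionally_injures=True, violates_norms=False),
    Action("swerve_onto_sidewalk", harm_to_pedestrians=0, harm_to_occupants=2,
           intentionally_injures=False, violates_norms=True),
    Action("swerve_into_barrier", harm_to_pedestrians=0, harm_to_occupants=6,
           intentionally_injures=False, violates_norms=False),
]

def utilitarian(actions):
    # Pick the action with the lowest total harm to everyone involved.
    return min(actions, key=lambda a: a.harm_to_pedestrians + a.harm_to_occupants)

def deontological(actions):
    # Hard rule: never intentionally injure anyone; choose among the rest.
    permitted = [a for a in actions if not a.intentionally_injures]
    return utilitarian(permitted) if permitted else None

def social_contract(actions):
    # Only actions consistent with agreed social norms are permitted.
    permitted = [a for a in actions if not a.violates_norms]
    return utilitarian(permitted) if permitted else None

for framework in (utilitarian, deontological, social_contract):
    choice = framework(CANDIDATES)
    print(f"{framework.__name__}: {choice.name if choice else 'no permitted action'}")
```

On this toy candidate set, the utilitarian and deontological rules both choose the sidewalk swerve, while the social-contract rule rejects it as norm-violating and picks the barrier instead, which is precisely why it matters which framework regulators and manufacturers bake into the algorithm.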
Under: #ethics, #ai, #tech