“The trolley problem will tell you nothing useful about morality” by Brianna Rennix and Nathan J. Robinson

In this article, the authors examine the ‘trolley problem’, a thought experiment that forces high-stakes decisions about life and death. In such problems, no decision can be considered right, since every choice leads to a horrific outcome. As the authors observe, the trolley problem goes beyond ordinary moral choices and is virtually nonsensical: it places people in a forced decision-making situation in which every possible choice ends in tragedy. As such, the trolley problem cannot be regarded as a genuine moral dilemma, where choices are usually informed by a chain of decision-making activities. Unlike ordinary moral quandaries, the trolley problem depicts individuals as helpless bystanders for whom everything has already been decided except a binary choice between horrendous outcomes. Overall, the authors’ suggestion on how to approach the trolley problem offers important lessons for dealing with real-life ethical problems, but it also seems to restrict invention, which is innate to human nature, and to constrain the utility maximization essential for human development.

The authors point out that when people are confronted with life-or-death decisions, the focus should be on the agency responsible for the existing state of affairs rather than on merely accepting the available choices. In the trolley problem, for example, there is little value in deliberating between the only available options, since both are tragic. Rather, the focus should be on whether the trolley company is morally justified in taking measures with potentially disastrous outcomes in order to maximize profits. In essence, the authors conclude that individuals would do better to stop wasting time on the trolley problem and instead examine the larger context in order to find a basis for confronting real-world moral failings.

I find the authors’ conclusion on the trolley problem highly plausible in light of the problems and policy issues affecting contemporary society. One such issue is the refugee crisis. Allowing refugees into the U.S., for instance, would help save millions of lives threatened by war and natural disaster. However, America could be hurt in the process if a few refugees decided to engage in terrorism. Locking refugees out would help protect American citizens from potential acts of terrorism, but such a move could lead to the loss of millions of innocent lives. Notably, each of the available choices involves potentially terrible outcomes, and neither can be regarded as more moral than the other. Yet, as the authors note, this does not imply the absence of an appropriate action. Exercising human agency in the right direction can help address the problem; for instance, working to tackle the root causes of refugee crises could help prevent such dilemmas from arising in the first place.

However, the suggestion that we should not waste time contemplating trolley problems raises an important question about human ambition and the pursuit of the greatest good as enshrined in utilitarianism. A good example is the case of autonomous vehicles. Developers of autonomous cars have to code a value system into the vehicles to help them make moral decisions, for example when a driverless car must choose between crashing into a wall and hitting a pedestrian. The authors would recommend shifting the decision from this immediate moral issue to the ethics of developing autonomous vehicles in the first place. They might even recommend that the technology be abandoned because it inspires a kind of fatalism, but embracing that suggestion would compromise the innate and insatiable human desire for invention. It would also forgo the potential benefits of autonomous vehicles, such as reduced air pollution. To this end, I feel that attending to the context in which certain choices occur, rather than grappling with the disturbing life-or-death decisions themselves, may not always serve intrinsic human desires and interests, which may in turn raise a new ethical issue.
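To make concrete what it means for developers to code a value system into a vehicle, a minimal sketch of a utilitarian least-harm rule might look like the following. This is purely illustrative and not any manufacturer’s actual system; the maneuver names and harm estimates are invented for the example.

```python
# Purely illustrative sketch of a utilitarian "least harm" rule of the kind
# the essay describes; the maneuvers and harm scores below are hypothetical.

def least_harm_maneuver(options):
    """Return the maneuver whose estimated harm score is lowest."""
    # options: mapping of maneuver name -> estimated harm (higher is worse)
    return min(options, key=options.get)

if __name__ == "__main__":
    # Hypothetical estimates for the wall-versus-pedestrian scenario.
    scenario = {
        "swerve_into_wall": 0.7,   # serious risk to the passenger
        "continue_straight": 0.9,  # near-certain harm to the pedestrian
        "emergency_brake": 0.4,    # may not avoid impact, but reduces severity
    }
    print(least_harm_maneuver(scenario))  # prints "emergency_brake"
```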

“You should have a say in your robot car’s code of ethics” by Jason Millar

In this article, the author explores the question of who should control an autonomous car’s code of ethics, particularly with respect to moral issues that have no single right answer. To address this question, the author draws on the modern healthcare setting, where patients are informed of their treatment options and then left to make decisions based on their own preferences. As in healthcare, the author believes that decisions about an autonomous car’s code of ethics should be left to the user rather than to designers and engineers. However, there should be limits on the kinds of ethical controls allowed in robot cars: an absurd code of ethics, for example one that discriminates by gender or race in the event of an accident, must be ruled out. In general, the author believes that users of autonomous cars should be allowed to make the important decisions about critical circumstances in the car’s environment on the basis of their individual moral commitments. While allowing users a say in their robot cars’ code of ethics is laudable, such a code must remain open to adjustment, since individual moral values are not stable. Moreover, such a code of ethics may escalate the debate over who carries liability in the event of a death.

I agree that users of autonomous cars should have a say in the machine’s code of ethics. While the idea of installing such cars with instructions that ensure they do the least harm, in line with utilitarianism, seems justifiable, it deprives users of the ability to act on their own moral commitments when those commitments conflict with the utilitarian principle. Just as it is an individual’s choice to use such a car, the car’s code of ethics should be based on the user’s input and preferences. Even so, as the author observes, government regulations should be put in place to prevent individuals from adopting an absurd code of ethics that contradicts social standards such as fairness. For instance, no one should be allowed to develop a code that would make their robot car swerve only when the person at risk of being crushed is white. Such a discriminatory code of ethics could turn autonomous car technology into a platform for pursuing malicious social ends.
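To illustrate the kind of limit the author has in mind, the sketch below shows one hypothetical way a user-selected ethics setting could be vetted before being accepted. The setting names and the list of prohibited attributes are my own inventions, not features of any real system or regulation.

```python
# Hypothetical sketch: a user picks an ethics setting for their robot car,
# but a regulator-style check rejects any rule that conditions behaviour
# on a protected personal attribute. All names here are invented.

ALLOWED_SETTINGS = {"minimize_total_harm", "protect_passenger_first", "protect_pedestrian_first"}
PROHIBITED_ATTRIBUTES = {"race", "gender", "religion"}

def validate_ethics_setting(setting, conditions=frozenset()):
    """Accept a user's preference only if it is a recognised setting and
    does not depend on any prohibited personal attribute."""
    if setting not in ALLOWED_SETTINGS:
        raise ValueError(f"Unknown ethics setting: {setting}")
    banned = PROHIBITED_ATTRIBUTES.intersection(conditions)
    if banned:
        raise ValueError(f"Setting may not depend on: {', '.join(sorted(banned))}")
    return setting

# A legitimate preference is accepted...
validate_ethics_setting("protect_passenger_first")
# ...while the discriminatory rule mentioned above would be rejected:
# validate_ethics_setting("minimize_total_harm", conditions={"race"})  # raises ValueError
```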

Allowing users to determine the code of ethics for their robot cars raises the issue of how stable individual moral preferences are over time. Humans have an intrinsic sense of values that is shaped by social morals, that is, by social demands and expectations. Individuals may therefore let go of the values they hold today if those values come to conflict with prevailing social expectations and demands. Because one’s moral commitments can change, users may need to adjust their robot cars’ code of ethics periodically to keep it consistent with their current moral code.

Even so, allowing users to choose their robot cars’ code of ethics may spark a debate over who carries liability in the event of a death. Legal liability rests with the manufacturer when it fits the robot car with a “do least harm” instruction, but who should take liability when the user applies their own code of ethics? Presumably, liability shifts to the user once the code of ethics is based on their preferences; nevertheless, the accident itself may still be the manufacturer’s fault. On this basis, I feel that the question of a code of ethics for autonomous cars is not as neatly analogous to informed consent in the healthcare setting as it first appears, and that the debate on the ethics of autonomous cars leads to a situation with no clear resolution.

Works Cited

Millar, Jason. “You should have a say in your robot car’s code of ethics.”

Rennix, Brianna, and Nathan J. Robinson. “The trolley problem will tell you nothing useful about morality.”
