Dec. 17th, 2015

Some time ago, I wrote an article in this blog about why I feel self-driving cars are not going to be the magic-bullet answer that many blind people expect them to be. Since writing that article, I have done a lot more thinking on the subject, and I have come to ask myself: do we even want machines in our lives capable of the kind of intuition and value judgment that would be necessary in such circumstances? What's more, would you want to trust your life to that machine's judgment?

Case study: in 1997, I was in a vehicular accident. The van in which I was riding was struck by a car. I was in the front passenger's seat; someone else was driving. In the back were the driver's wife and children.

When the driver of the vehicle that hit us realized that a collision was unavoidable, he deliberately swerved to aim his car at my door so as to avoid hitting the children in the back. This, in my opinion, was the right thing to do from a moral standpoint. But it took a value judgment; it took intuition and a sense of moral responsibility. Would a self-driving car be able to make such a judgment? Even if it were capable of doing so, how much knowledge would the machine need about the occupants of the other vehicle, and how would it obtain that knowledge while respecting their privacy?

This is just one example of the intuition a machine would need in order for fully independent self-driving cars to exist. Moreover, every car manufacturer would have to ensure that all of its vehicles shared the same kind of intuition, and there would have to be a strictly enforced standard. A set of ethics would have to be agreed upon by every country that allowed self-driving cars. Who would decide these ethics, and on what basis? Who would decide how a computer should make ethical decisions that were not preprogrammed into it, and again, on what basis? With such a need for standardization, would we be reduced to a single automobile manufacturer, since many of the factors that drive competition would have to be removed to make the system viable?

And what of the independent decisions a computer would be required to make? There are only so many preprogrammed sets of circumstances it is possible to plan for. Ask any driver how long it's possible to be on the road before encountering a situation not specifically spelled out in the driver's handbook; such a situation would not be long in coming. So any viable self-driving car would need a computer capable of independent thought and intuition, and that thought and intuition would have to be extremely reliable. Ice and snow, weather, road surface conditions, animals, debris, tectonics, and a million other variables are not predictable and cannot be programmed into a modern-day computer with fail-safe responses. A computer, on its own, does not care who lives or dies and cannot make those value judgments. And would we want it to be able to? At what point would humanity stop being in control of the machine and risk the machine taking control? Do you want to trust your life to a machine's intuition, with no recourse or intervention possible?
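To make the limitation concrete: a preprogrammed system is, at bottom, a lookup from anticipated scenarios to canned responses, and anything outside that table falls through to an undefined default. The following toy sketch in Python illustrates the idea; the scenario names and responses are invented purely for this example, not drawn from any real vehicle software.

```python
# A toy rule table: each anticipated scenario maps to a preprogrammed response.
# The scenario names and responses are invented for illustration only.
PREPROGRAMMED_RESPONSES = {
    "ice_ahead": "reduce speed",
    "pedestrian_in_crosswalk": "stop",
    "debris_in_lane": "change lanes",
}

def respond(scenario: str) -> str:
    """Return the canned response, or fall through when the scenario
    was never anticipated by the programmers."""
    return PREPROGRAMMED_RESPONSES.get(scenario, "NO RULE: outcome undefined")

print(respond("ice_ahead"))                          # covered by the table
print(respond("oncoming_car_aimed_at_children"))     # nobody wrote a rule for this
```

The covered case resolves cleanly; the morally loaded one, the kind described above, hits the default branch, which is exactly where intuition would have to take over.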

Profile

dogriver (Bruce Toews)
