Your Self-Driving Car May Be Programmed To Kill You

With the growing popularity of self-driving cars and other personal robots, the age of AI is creeping closer. As the programming becomes more advanced, these machines are beginning to make moral ‘decisions’. For instance, in an accident situation, a self-driving car must ‘decide’ whether to save the driver or innocent bystanders. Some cars may be programmed to always save the driver, but what if that comes at the cost of an entire family, or a group of children? The utilitarian decision maker would always trade one life for many, but is this moral? And should it be mandated, or should the owner be able to decide? A recent survey found that 59% of consumers would be more likely to buy an autonomous vehicle if the government did not regulate its moral “choices.” Conversely, only 21% were likely to buy the same vehicle if such regulation were in place. Would you buy a car if you knew it was programmed to potentially kill you? Would auto companies even be able to sell cars with this kind of programming?
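
To make the trade-off concrete, here is a deliberately simplified sketch of what a purely “utilitarian” collision rule might look like. This is an illustration only; the names (Outcome, choose_maneuver, expected_fatalities) are hypothetical, and no real autonomous-vehicle software reduces the decision to anything this simple.

```python
# Hypothetical sketch of a strictly utilitarian collision rule.
# Assumption: the car can estimate expected fatalities for each maneuver.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_fatalities: int  # estimated lives lost if this maneuver is chosen

def choose_maneuver(options: list[Outcome]) -> Outcome:
    """Pick the option with the fewest expected deaths, regardless of
    whether the occupant or the bystanders bear the cost."""
    return min(options, key=lambda o: o.expected_fatalities)

if __name__ == "__main__":
    options = [
        Outcome("swerve into barrier (occupant at risk)", expected_fatalities=1),
        Outcome("stay on course (pedestrians at risk)", expected_fatalities=4),
    ]
    print(choose_maneuver(options).description)
    # Prints the swerve option: the car 'sacrifices' its own occupant.
```

Under this rule, the car’s owner is simply one more entry in the tally, which is exactly the scenario that makes buyers uneasy.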

About the Author

KALE: A geek who works in the IT Security field and lives in Dallas, TX. He is also a music geek who has played in several local bands. Prior to his IT career, Kale worked as a photojournalist. He brings technical advice and artistic counterpoint to the podcast.