By Timothy Mattackal
Artificial intelligence, or AI: what exactly is it? What does it do? How much should we trust it?
There are numerous questions we could ask about this issue, and many of them were brought into focus several weeks ago when an Uber self-driving car struck and killed a pedestrian in Tempe, Arizona. The crash was the first recorded pedestrian fatality caused by a self-driving car, and in its aftermath Uber and other developers of autonomous vehicles temporarily halted their testing. Self-driving cars are just one of thousands of applications of AI and machine learning, and the way we view this incident has implications for how we think about AI in the future.
AI has been developing at a rapid pace in recent years. Machines and computer programs have become increasingly adept at learning and adapting to situations in the same way that humans do. It seems inevitable that the rate of development of AI will only increase as time goes on, and that its use will become more ubiquitous with every passing year. Self-driving cars are fast becoming a reality with dozens of auto and technology companies such as Google, Uber, General Motors and Tesla working on developing and testing the technology.
The burgeoning growth of AI raises the question of just how far we should allow its development to go, and just how much control we can safely hand over to the robots.
AI is developed and deployed with the intention of enriching our lives by making tasks easier or safer. Self-driving cars have been touted as a much safer alternative to traditional vehicles driven by humans. After all, car accidents result in the deaths of 45,000 Americans every year. With that in mind, the accident in Arizona and the resulting fatality have raised serious questions about the viability of AI. Just how much can we trust an unconscious machine to perform distinctly human tasks?
Dr. Danielle Fredette, Assistant Professor of Electrical Engineering at Cedarville University, sheds some light on this issue.
“Computers are better than people at certain things, but people are clearly better than computers at other things,” says Fredette. “Computers are really good at doing things fast. The advantage of using computers is doing calculations, simulations, and that’s the advantage that computer drivers have over human drivers. They can react so much faster than a person.”
This faster reaction time gives autonomous vehicles a clear advantage over human drivers. Despite the recent accident, the fatality rate for self-driving cars is much lower than that of traditional vehicles. However, the accident has shown us that a move towards autonomous vehicles will not be painless, which raises the question of how much control we are willing to cede to technology when we know that the transition will likely cause some harm along the way.
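The scale of that reaction-time advantage is easy to quantify. The sketch below is illustrative only: the speed and the human and computer reaction times are assumed round numbers, not figures from Fredette or from any manufacturer, but they show how much road a car covers before braking even begins.

```python
def reaction_distance(speed_kmh, reaction_time_s):
    """Distance in meters a car travels during the reaction delay,
    before the brakes are applied at all."""
    speed_ms = speed_kmh * 1000 / 3600  # convert km/h to m/s
    return speed_ms * reaction_time_s

# Assumed values for illustration: highway speed of 100 km/h,
# ~1.5 s for a typical attentive human, ~0.1 s for a sensor-and-
# computer pipeline.
human = reaction_distance(100, 1.5)
computer = reaction_distance(100, 0.1)

print(f"Human:    {human:.1f} m traveled before braking starts")
print(f"Computer: {computer:.1f} m traveled before braking starts")
```

Under these assumptions the human car travels roughly 42 meters before braking begins, the computer-driven car under 3 — a gap of several car lengths that can be the difference between a near miss and a collision.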
In addition to the practical aspects of our use of AI, there are also significant ethical issues which have to be addressed when thinking about just how far its development can or should go. Dr. John Tarwater, Assistant Professor of Business at Cedarville University, holds a Ph.D. in ethics. He says that AI is a tool which, like any other tool, can be applied in different ways.
“I would think that AI, in and of itself, is amoral—it is neither good nor bad,” he said. “Rather, it is how it is used that determines its goodness or badness.”
However, Tarwater also says that there are boundaries which our use of AI should not cross when we look at the issue biblically.
“If the goal [of AI] is to do away with work […] this would be morally bad,” Tarwater said. “Work is a way that we can use our gifts to bring glory to God.”
The idea of AI doing away with work entirely might seem far-fetched, but it is not as unrealistic as it sounds. A study published last year found that 45 percent of the activities people are currently paid to perform could be automated, and that 60 percent of all occupations could have at least 30 percent of their activities automated. These estimates are based on current technology, which we can only expect to develop further in the coming years.
We also must consider how AI machines should be treated when they have the capacity to think for themselves. Saudi Arabia recently became the first country to give citizenship to a robot, and the European Union Parliament has already proposed that certain AI be granted personhood status to ensure their rights are maintained. The decision to grant citizenship to a bot may have been little more than a publicity stunt, but these events do raise serious questions as to what kind of rights we should give to robots when they become so advanced that they border on sentient.
There are countless secondary implications: if AI machines are given citizenship rights, what is to prevent them from participating in activities such as voting or serving on juries? Should we entrust decisions that usually involve some degree of emotion to soulless, unerringly rational machines?
At the moment, the future is unclear. We do not know the extent to which AI will develop or the areas in which it will or will not replace humans. Dr. Fredette says that on the topic of self-driving cars, the most likely outcome is humans and AI coexisting and supplementing each other.
“I think that the full autonomous car… it could happen, but we have got a lot of work to do before that,” she said. “A lot more than a lot of people may think.”
Fredette adds that despite the challenges in the way of fully autonomous vehicles, partial autonomy is a reality which is only going to progress in the future.
“Partial autonomy, meaning some computer-aided element of new cars, is already here and people like it,” she said. “There is room to grow. We could have more of those [partial autonomy] elements.”
This brings us back to what may be the fundamental question concerning AI: is it complementing us or replacing us? AI can make our lives better, and it already does in many ways we may not realize. Applications on our smartphones use AI to make our lives more convenient.
Computers can make many tasks quick and efficient, but using machines to perform distinctly human tasks, which results in the necessary humanization of machines, will likely have a wealth of unexpected consequences.
The Associated Press contributed to this story.