On June 29, 2017, Sir Nigel Shadbolt, a professor of computer science at the University of Oxford and Principal of Jesus College, discussed his research on artificial intelligence with the Summer Institute students. Through discussions and several articles, students were able to better understand the concept of artificial intelligence and the different perspectives it entails. Below is a student's reflection on meeting with Sir Nigel Shadbolt and their class discussions.
The concept of artificial intelligence has always been an intriguing subject. Even the idea that a machine could learn and adapt on its own is fascinating. Both the Socratic seminar and the program with Sir Nigel had me deep in thought.
The purpose of artificial intelligence is mainly to improve human lives in some way, whether that means working in hazardous areas so people don't have to or deciding which song to play next on Pandora. Introducing artificial intelligence into our daily lives isn't going to be the end of the world, but a lot of people act as if it will be. The irrational fear of intelligent computers is something we'll have to get over if we want to take full advantage of this new technology. What many people don't seem to understand is that nothing will be mass-produced for commercial use until it has been extensively tested and confirmed to be safe. Take self-driving cars, for example. People fear them because they're not in control of the car, when in fact self-driving cars remove much of the human error that causes accidents. Too often, people are tired, angry, or simply not paying attention when they drive. Naturally this causes accidents, and in a world where a single accident can make hundreds of people late for work, reducing human error could make a sizeable difference in both traffic conditions and deaths in crashes. But people ignore all of that because they're scared the robot will run them into a tree.
Another topic that piqued my interest is the question of whether artificial intelligence will ever become so advanced that a computer could be as 'smart' as a human. During the Socratic seminar, I mentioned the show Star Trek: The Next Generation, in particular a character named Data. Data is an android: a human-shaped machine so advanced that he's considered a fully sentient being. He has all the rights that any human has in the Star Trek universe, but what about our universe? Would we grant rights to an artificial intelligence? No matter how sophisticated it is, it's still a computer, and all computers do is interpret code. Sure, anything as advanced as Data would write its own code, but that wouldn't make it a person, would it? My answer to that is maybe. It all depends on how you define a person. We ourselves are computers too, just a different type of computer. We have a processing center, the brain, that interprets code: DNA. This processing center tells the rest of the machine, the body, what to do. We can even write our own code; it just takes some genetic mutations and a few thousand years. So could that make an intelligent machine a person? I don't know; that's up to you to decide for yourself. All I know for certain is that it's going to be very interesting to see how artificial intelligence develops in the future.
- Skyler Rodgers