On June 29, 2017, Sir Nigel Shadbolt, a professor of computer science at the University of Oxford and the principal of Jesus College, discussed his research on artificial intelligence with the Summer Institute students. Through discussions and several articles, students were able to better understand the concept of artificial intelligence and the different perspectives surrounding it. Below is a student's reflection on meeting with Sir Nigel Shadbolt and on their class discussions.
I was not too enthusiastic going into Thursday's meeting; I didn't feel like I knew enough about artificial intelligence or understood the computer science behind how these systems work. To compensate for my gaps in knowledge about the science behind AI, I did some extra research on the legal and ethical challenges posed by the technology. In addition to talking about the science, Sir Nigel discussed many of the economic and legal implications of AI. In his discussion of jobs - which I connected with the most - he touched on a question that is becoming much more political: how will economies and governments react to the automation of the workforce? He believed that progress in AI technology is not fast enough to quickly replace many jobs. He continued by claiming that many jobs would be created after others are lost, but like others who make this argument, he did not provide examples of what kinds of jobs would be created. One critique I had of that argument is that the gap in skills between the jobs lost and the jobs created after AI is introduced is huge. For example, a trucker from a rural community would have a very hard time retraining as a server technician because of variables like the cost of reeducation and location.
In addition to the points made by Sir Nigel, we had a discussion of our own about the appropriate amount of regulation for AI and automation in general. I believe that regulation is important for protecting workers' rights, even if it carries some negative economic side effects. In our discussion we also talked about historical examples of regulation during times of unparalleled economic growth, namely the Industrial Revolution. During that period there were flagrant violations of workers' rights and safety in the name of economic benefit for factory owners and operators.
I liked how Sir Nigel touched on many different applications of AI. He talked about everything from AlphaGo to automated cars and laid bare the technical and legal challenges facing AI projects of all sizes. Most interesting to me, he and the moderator discussed the legal and ethical challenges facing technology that can program itself. One of the articles we read, "The Dark Secret at the Heart of AI" by Will Knight, revealed the challenges facing self-programming technology that can't "explain why it did what it did" (Knight, p. 2). For this reason, among others, Sir Nigel predicted that the rate of change in AI technology will be slow as many more ethical and legal questions arise. Perhaps the most interesting thing he said the whole night came when he revealed the big question facing AI and automation: how to generalize specialized machines and help them do things they were never intended to do. He said that a machine trained to play Go can't drive a car, and creating a way for a machine to make that transition would be a huge step toward more automation and more sophisticated AI technology.
Overall, the discussion within our group and with Sir Nigel advanced my understanding of the technological, legal, and ethical achievements and obstacles facing AI. Most surprising to me, I was able to connect the regulation of AI tomorrow to the regulation of machines in the past, which gave me better context for understanding one of the most important upcoming economic policy decisions facing the world.
- Oliver Walter