
Debate: Is it safe to trust artificial intelligence?

Round 1

Side 1: There’s always a lot of talk about artificial intelligence (AI), both in science and in science fiction, and many people worry about what it means for our society. But AI, if built correctly, can be a good thing. For example, tedious, repetitive tasks could be handed to AIs so that humans can focus on more creative goals, opening up opportunities for advancement in several fields. With more advancement comes the possibility of more creative opportunities, and the cycle continues. These advancements could create new jobs too, so the common argument that AIs will take people’s jobs isn’t completely true. AIs might replace some people’s current positions, but that doesn’t mean there won’t be other places for their talents and ideas. It’s not much different from search engines replacing reference librarians in archives. Sure, there are fewer librarians now, but people have access to information more quickly and efficiently, allowing them to learn more and build on ideas faster.

Side 2: For all the exploration and experimentation in artificial intelligence research, there is still no correct way to build AI software or a system that runs on it. Ultimately, artificial intelligence is an effort to simulate the human thought process, which is itself mercurial and hardly free of errors. There is no doubt that AIs might reduce our workload so that human minds can be put to work on new goals. However, that change would require AIs to succeed reliably over a sustained period of at least a decade. This technology has already found a home in several areas of our lives, from deciding a prisoner’s parole to assisting in surgery, and given the weighty responsibilities we have so readily assigned these machines, even a minor mistake is not an option. AIs are not self-sufficient enough to judge between right and wrong, and corruption of their data doesn’t take long. Recently, Microsoft’s chatbot Tay turned virulently racist within hours of launch by learning from Twitter trolls. We cannot allow AIs to be self-reliant, let alone allow them to replace us at our jobs.

Round 2

Side 1: Yes, there is something to be said for the fact that people might misuse AIs to the detriment of others. That is a risk. But to argue that point, one would also have to admit that plenty of inventions and discoveries have been used for both good and bad purposes. The question then becomes: at what point does the bad outweigh the good? That is a debate for another time. The point is, if we feared every negative possibility, we might never discover something incredibly useful. And AIs can be more useful than humans in certain areas, because their processing can be designed for tasks in ways human brains simply can’t match. Still, that usefulness doesn’t mean the opposition has no point about cases like driverless cars. What decisions should be left solely to humans is a large, unresolved quandary, and driverless cars may not be the way to go. But that doesn’t mean data processing and similar tasks can’t be done more efficiently by AI. Nor will having AIs do these tasks necessarily make us less intelligent. After all, people once thought that reading and writing would ruin our ability to memorize; really, it just means we think differently than people did then, and not necessarily worse. We just have to find the right tasks and the right programming for AIs to spur us on to greater heights while keeping the good of humanity at the forefront of our minds.

Side 2: Common knowledge and the current state of the field make it clear that today’s AI systems are nowhere close to their super-advanced portrayals in science fiction, and the idea of their ascendance to killer-machine status is still remote. But our true fear should not be that machines will refuse our commands and go rogue; it should be that they will always do exactly what they are told, in ways we never imagined. Picture, for instance, a man in his 30s headed to work in a sophisticated self-driving black sedan. The road is a fairly busy urban one, he is immersed in the editorial section of the local daily, and suddenly the vehicle ahead malfunctions. The car’s AI has surely been taught to avoid a collision, and it will do anything to protect its own vessel, even if that comes at the cost of the lives of those walking on the pavement. At the extreme end, think about how this technology could be deliberately abused. It would not be difficult for extremists with warped ideologies and easy access to arms and the latest tech to strap a gun to one of these things and hunt down crowds of people in the streets. And even if, with forethought, we confine AI to small-scale, benign applications and stay within the bounds of safety, it would only make us more intellectually lazy, with a nasty self-learning algorithm doing all the thinking for us. We definitely don’t want that!

Side 2 argued by Saumyaveer Chauhan.
