The Control Problem

Ayush Prakash

The topic of superintelligence has confronted computer scientists and philosophers with a notorious question: how would we control a superintelligent entity? This question is flawed, for the following reasons.

For starters, humans have flourished for roughly 200,000 years. We went from picking wild berries in our spare time to picking insects off our fresh crops. We stopped mining coal for our steam engines and started mining data for our search engines. The information available at our fingertips makes every library of centuries past seem primitive. Many of us live comfortable lives, never worrying about our next meal or water source. Billions of our fellow citizens postulate about God and simulation theories without fearing enemy tribes, lurking predators, or being burned at the stake for blasphemy. We create and observe the finest arts; cook and serve the finest cuisines; wear the finest suits and skirts to the fanciest parties. Yet we also engage in the nastiest genocides, the most egregious wars, and the dirtiest environmental practices. We repeat the same mistakes again and again when electing leaders or dealing with world issues like hunger and poverty. We are seemingly intelligent at engineering society but outstandingly rubbish at fixing the societal problems in front of us. And somehow, we believe that a superintelligence, forecasted to be the next step in human evolution, should be controlled and maintained by a lesser intelligence: us.

This notion is challenging to grasp. For lack of better words, it doesn't make any sense. Machine superintelligence is designed to solve our current problems, but that is not its only goal: we want it to solve all problems eventually. One common outlook for technology's future is a world where humans dispose of their biological bodies and merge with machines. Whether on a virtual server or in a robotic body, humans are destined to ditch nature and trade their blood and bones for something mechanical or silicon.

This brings us back to the main argument. Controlling superintelligence is not only a flawed reach for power; it is also lazy thinking. The sole purpose of superintelligence is to be greater than humans. Superintelligence is supposed to unlock the universe's secrets, so to speak. Creating superintelligence with the expectation of controlling its every aspect is futile. It dooms us to one outcome: retaliation.

Humans are not good problem solvers, so we seek to create machine superintelligence. But what if this machine superintelligence starts acting in ways we couldn’t predict, ways that seem arbitrarily wrong to our puny minds? To maximize its effectiveness, a superintelligence that understands our problems and seeks to fix them would need to act independently of our wants and needs. After all, it is more intelligent than us. It knows better.

Let’s think deeply for a moment. Why would any form of superintelligence bow down to lesser life forms? Imagine talking to a five-year-old about world poverty. You explain that many people lack the money to buy food, water, or shelter. Millions of people around the world starve for days and weeks at a time. In response, the five-year-old says that we should bring them all into our society, feed them, and give them money. While this is an optimistic reply, it wouldn’t work for many reasons. The less intelligent child, speaking with the more intelligent adult, cannot grasp how the real world works. The analogy carries over to our conversations with machine superintelligence: we explain the problems and our proposed solutions, and the machine superintelligence perceives us as incompetent bipedals.

Maybe the parameters I am working with do not add up, and this post is skewed as a result. The more likely explanation is that superintelligence will not play by our rules. Assuming we can restrain superintelligence is a deeply mistaken mode of thinking. If we are to create a machine superintelligence to solve our problems, we must allow that entity to play by its own rules: free from human constraint and deaf to human complaint.