AI Free Will

Ayush Prakash

The topic of free will has long been of interest to philosophers. From Nietzsche to Harris, free will has been discussed extensively as a question about how we actually live. Is the way you live free or not? If it is free, how free is it? If it’s not, why not? Free will brings forth determinism, compatibilism, fatalism, and other philosophies that breed lengthy discussions over coffee and wine (hopefully separately).

Free will can be defined simply as the capability to choose your actions. Choosing between Lionel Messi and Cristiano Ronaldo is an act of free will. Or is it? If your actions can be predicted, are you acting freely?

Artificial intelligence (AI) learns from data. The more data you provide an AI system, the more accurate it becomes. Hence, if you search for dogs on YouTube, the algorithm will tailor its recommendations to dog videos. If you suddenly switch to cats, watch the algorithm switch to recommending cat videos almost magically (of course, there will be one dog video lurking in the sidebar, waiting intently to be clicked). This raises the question: if AI can predict your next move (or click) with enough accuracy, are you free?
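To make that tailoring concrete, here is a deliberately simplified Python sketch. It is nothing like YouTube’s actual recommender; the topics, the click history, and the count-based ranking rule are all invented for illustration, but they show how more data about you sharpens the guesses.

```python
# A toy sketch (nothing like YouTube's real system): recommend by simply
# counting which topics you have clicked before. The topics and the click
# history below are invented for illustration.
from collections import Counter

def recommend(click_history: list[str], candidates: list[str], top_k: int = 3) -> list[str]:
    """Rank candidate topics by how often the user has clicked them."""
    clicks = Counter(click_history)
    # Unclicked topics score 0, so the sidebar leans toward what you already watch.
    return sorted(candidates, key=lambda topic: clicks[topic], reverse=True)[:top_k]

history = ["dogs", "dogs", "dogs", "football"]
print(recommend(history, ["dogs", "cats", "football", "cooking"]))
# -> ['dogs', 'football', 'cats']

history += ["cats", "cats", "cats", "cats"]  # you suddenly switch to cats
print(recommend(history, ["dogs", "cats", "football", "cooking"]))
# -> ['cats', 'dogs', 'football']  (the ranking flips, but a dog video still lurks)
```

Even this crude counter reproduces the experience above: the recommendations follow your clicks, and a dog video still hangs around after you switch to cats.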

Let’s take it a step further. Say you’re watching a dog video and a video of Golden Retriever puppies comes up. At any given moment, there are more videos in the YouTube sidebar than you know what to do with, yet for some reason you click on the puppies. Was this an act of free will? By your estimate, yes. There were many videos to choose from, but you clicked what you wanted to watch at that moment, freely. Thus, an act of free will. But when YouTube looks at your profile and the algorithm tailored to you, it sees that the algorithm succeeds at a certain rate in recommending you videos, leading you down a line of videos matched to your specific taste. In that instance, you weren’t really free to click; you were being groomed to click.

Isn’t the algorithm learning from you? Doesn’t whatever you do influence the algorithm? Yes, and also no. Whatever you do affects the algorithm; each click, like, dislike, comment, and so on tailors it ever so slightly toward being more accurate. But, on the flip side, every piece of data you hand over makes it better at predicting what you’ll do next, and that is where it starts influencing you to click on things you probably would not have clicked otherwise. This is the dance we perform every day, with every algorithm: we give it data and shape it, and it uses that data to shape us.
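To make that two-way dance concrete, here is another toy simulation, again with invented topics, weights, and a made-up “lift” parameter rather than any real platform’s model. Each click nudges a per-topic weight upward, and the weights decide what is shown next.

```python
# A toy simulation of the feedback loop, with made-up topics and numbers
# (not any real platform's system): every click raises a per-topic weight,
# and the weights decide what gets shown next.
import random

random.seed(0)  # fixed seed so the run is reproducible
weights = {"dogs": 1.0, "cats": 1.0, "news": 1.0}  # start with no preference

def pick_recommendation(weights: dict[str, float]) -> str:
    """Show a topic with probability proportional to its current weight."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

def register_click(weights: dict[str, float], topic: str, lift: float = 1.5) -> None:
    """Each click is more data: it multiplies that topic's weight by `lift`."""
    weights[topic] *= lift

for _ in range(30):
    shown = pick_recommendation(weights)
    register_click(weights, shown)  # assume you click whatever is shown

# After 30 clicks the weights are typically piled onto whichever topics got
# the early clicks, so what you are shown (and click) keeps narrowing.
print(weights)
```

Run it and the weights concentrate on the early favourites, which is exactly the “one-way street of influence” described below.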

Can this process be weaponized? Absolutely, and in all honesty, it probably already is. Can it be stopped? Not without severe consequences and a serious conversation.

Am I still free to act? Yes, to an extent. Eventually, if data keeps being fed to these large data corporations, you won’t be influencing the algorithm anymore. It will be a one-way street of influence. Your free will will diminish, along with the rest of your power to choose. You will be a pawn for every corporation, consuming whatever they want, whenever they want, however they want you to.

I am just joking. You already are.

Next time you’re watching a recommended video, film, television show, or Instagram post, think to yourself: did I click this freely, or did they expect me to click it?