Overcoming algorithms

May 6, 2018, 8:49 p.m.

“He created a new kind of mimesis. A new biological imitation. He knew he had succeeded. He could not see me in his futures.”

-Frank Herbert, “God Emperor of Dune”

Our era is one of calculation. As our souls expand and bleed into manifold technological prostheses, the traces we leave of ourselves become permanent, quantifiable, manipulable and exploitable. As we standardize the expression of our desires into forced tapestries of symbols, we leave behind us a vast midden of forgotten psychological refuse. With increasing frequency, these scrap yards are being picked through. Systems are being deployed to scavenge the binary carrion for bits of putrid flesh, soaring like vultures through the cloud; processes are being developed to swallow whole oceans of bytes like a great Leviathan, sifting for profit through the baleen that lines its cavernous mouth. Algorithms spawn like the children of Echidna; they have been loosed upon our data, seeking to rebuild, and through rebuilding, predict.

Mark Zuckerberg sits before Congress, attempting to explain how his company will keep safe the data of two billion people after a fraction of that data was used to influence a U.S. presidential election. What his interrogators may not understand is that Zuckerberg needs Cambridge Analytica. His inconceivably vast fortune, as Tim Wu puts it, is at least partially predicated on the notion that he’ll continually be able to “figure out a new way to extract profit from all the data he’s accumulated about us.” The continued success of his company relies on his ability to feed the ever-growing transubstantiation of human data into profit. Many of the congressmen interrogating Zuckerberg are irate, demanding he implement privacy features that, as it happens, already exist.

In Rongcheng, China, policymakers have already begun implementing the Social Credit System. The system, first proposed in 2014, aims to assign each Chinese citizen a rating that gives an abstract measure of their trustworthiness. According to the official document outlining China’s plan, which calls for nationwide implementation by 2020, the social credit rating would be drawn from a measure of honesty in four areas: governmental, commercial, judicial and societal. The first three categories are unsurprising; amalgamating an individual’s business transactions and legal history into a single valuation feels like a modern update to the credit score. However, an individual’s societal honesty rating would take into account their shopping history, their list of friends on social media, the messages they send in texting apps, even time spent playing video games. Armed with algorithms, the most populous nation on Earth is attempting to quantify, measure and predict honesty, in every sense of the word.

In her 2017 book “Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart,” Rachel Botsman asks:

So are we heading for a future where we will all be branded online and data-mined? It’s certainly trending that way. Barring some kind of mass citizen revolt to wrench back privacy, we are entering an age where an individual’s actions will be judged by standards they can’t control and where that judgement can’t be erased.

For Botsman, we have two options. We could destroy the thinking machines; we could cry “No more!” and rip their spindly tentacles out of the heaps of data we have so carelessly piled behind us. If we do not do this, we must resign ourselves to the fate of being endlessly analyzed and predicted. If we don’t “wrench back our privacy,” we allow the world to slide into atony as we submit to the judgments of our omniscient algorithmic overlords. After all, there are only bodies and languages to be symbolized, symbols to be analyzed and analyses to yield predictions.

What if there were a better way? A third way, an answer somewhere between a reactionary movement against the technocratic police state and the inevitable conclusion of a social-media-mediated Orwellian nightmare. The dichotomy above relies on the assumption that these predictive systems really work: that despite our unshakeable internal conception of free agency, valuable predictive data can be gleaned from something as simple as our online shopping histories. What if this assumption fails? What if, instead of citizens revolting against the implementation of these predictive algorithms, the real revolt is against the algorithms themselves?

We humans have always shed “data” in the wake of our daily lives. Passing interests, imperceptible idiosyncrasies, intellectual tics, habits, partialities: all of these things could be quantified. But what mad scribe would sit in the bushes and, watching us from afar, compile our minutiae? Any attempt to correlate these psychological hiccups with a meaningful description of the human person was, in a word, naive. But the obsessive monks have arrived en masse, and most of us have one in our lap or bumping up against our thigh right now. From now on, everything will be catalogued, and the long-dark links between the deepest compartments of the unconscious and the actions of the everyday will be revealed.

But we, too, can share in this knowledge, and we, too, can act on it. Is it possible that the only option is as Botsman describes: a wrenching back of privacy, a revolt against the use of such algorithms? I contend that the result will be more subtle. An individual can respond not by taking back their privacy in some physical sense, but by changing the very nature of what can be analyzed and exploited. We will not see a conformity of behavior in response to oppressive systems of algorithmic prediction, but rather a sudden and radical diversification.

Our question, of course, is: How? It may be too early to know. We may have yet to complete a sufficient number of revolutions from action to prediction and back again to know how we will rebel. But under the hashtag #confusefacebook, some have already begun to propose strategies.

There is now, as never before, a way for each of us to seize hold of what was previously phenomenologically unreachable. Given an algorithm that predicts the interior of our souls based on our habits, we’re necessarily also given the tools to confound that algorithm. The long-term result of such widespread surveillance and analysis will be the forging of a new kind of human: one who is soberly aware of the interwoven substrata of their own subconscious, who defies prediction, who intentionally confounds and confuses with behavior that corresponds to a new, higher form of agency.
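
To make that asymmetry concrete, here is a minimal, purely hypothetical sketch in Python: a toy “interest predictor” that infers a person’s dominant interest from the frequency of their interactions, and a confounding tactic that fabricates a second persona around a decoy interest. The categories, the frequency-count predictor and the decoy scheme are all invented for illustration; no real platform’s model is anywhere near this simple.

```python
import random
from collections import Counter

random.seed(0)

CATEGORIES = ["politics", "sports", "cooking", "music", "tech"]

def genuine_activity(interest, n=100):
    # A user who mostly interacts with one category, with some natural scatter.
    return [interest if random.random() < 0.8 else random.choice(CATEGORIES)
            for _ in range(n)]

def predict(history):
    # The "algorithm": guess the user's interest as their most frequent category.
    return Counter(history).most_common(1)[0][0]

def confound(history, decoy, strength):
    # Deliberately generate extra activity around a fabricated decoy interest,
    # scaled as a fraction ("strength") of the genuine activity volume.
    return history + genuine_activity(decoy, n=int(len(history) * strength))

trials = 1000
for strength in (0.0, 0.5, 1.0, 2.0):
    correct = 0
    for _ in range(trials):
        interest = random.choice(CATEGORIES)
        decoy = random.choice([c for c in CATEGORIES if c != interest])
        correct += predict(confound(genuine_activity(interest), decoy, strength)) == interest
    print(f"decoy effort x{strength}: predictor accuracy {correct / trials:.0%}")
```

In this toy, decoy activity equal in volume to one’s genuine activity already reduces the predictor to roughly a coin flip, and twice the volume flips its guess entirely. Real models are vastly more sophisticated, but the asymmetry the sketch illustrates holds: the party being watched controls the data the watcher must learn from.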

Psychohistory has arrived, dear reader, and a new foundation for our lives must be built.


Contact Sam Rogers at srogers2 ‘at’ stanford.edu.
