


This is an existential crisis disguised as a story about AI and surveillance


The other day, my girlfriend sent me this meme, and because I can write whatever I want on here, I’m gonna make you all read it because I had to. Basically, it’s an edited cut scene from some Megaman game, where the eponymous robot boy is confronting a robot boss of some kind. This is the dialogue (the robot is in all caps btw):

“THE HUMAN NEED TO BE WATCHED WAS ONCE SATISFIED BY GOD”

“NOW, THE SAME FUNCTION CAN BE REPLICATED BY DATA-MINING PROGRAMS.”

“People don’t understand the dangers of indiscriminate surveillance.”

“GOD WAS A DREAM OF GOOD GOVERNMENT.”

“YOU WILL SOON HAVE YOUR GOD. AND YOU WILL MAKE IT WITH YOUR OWN HANDS.”

We have a unique love language. Yeah, anyway, I’m not sure if she meant it as a threat, but the message certainly activated my fight-or-flight response, and I didn’t really sleep well that night, so thanks for that, babe. I mean seriously, what a fucking concept. It left me Black Mirror levels of tripped out, and that was before I started doing my research.


It turns out that the aggressively online makers of niche and vaguely academic Megaman memes are not the only ones paying attention to the power that big data companies hold over our lives. The EU is also watching the rapidly developing industry with a wary eye. On April 21, the European Commission outlined a policy framework aimed at regulating the use of artificial intelligence (AI) throughout its jurisdiction.


The framework, which is the governing body’s equivalent of a bill and has yet to be the subject of what will undoubtedly be a lively debate among member states, represents a sort of first draft of regulations from which the rest of the world can divine the EU’s chief concerns when it comes to the nascent technology.


According to reporting from The Verge, the framework assigns a high priority for regulation to applications of AI tech that affect everyday activities, like the operation of autonomous vehicles, medical equipment, or financial algorithms such as those that assess risk for loans. One application, whose development would be banned outright under the policy framework, is something called a ‘social credit system.’

 

Now, the EU still has a long way to go before these regulations take meaningful effect, and, as is to be expected, there are critics who say the proposal doesn’t go far enough to prevent the abuse of AI in law enforcement contexts, among other things. I’m sure, if I looked hard enough, I would also find some weird techno-libertarian, Alexander Nix-wannabe tech industry simp who would criticize the framework as a symptom of regulatory overreach on the part of the Union.


Normally, at this point, I would probably spend a couple of paragraphs agreeing with the people who say this doesn’t do enough to deal with law enforcement’s potential to abuse this technology, given that predictive policing algorithms are unsurprisingly prone to racial bias and big data facial recognition services like Clearview AI rely on extremely sketchy data collection methods. I would also probably take a quick second to bully the people who think this overall fairly weak regulation does too much to throttle burgeoning AI technology, because it’s clear they’re just hoping that when the robot overlords come, their loyalty will be rewarded.


I don’t give a shit about any of that though, because I’m still tripped out about what the robot said in that meme from earlier. Specifically, the line about how “God was a dream of good government,” which, after some googling, I realize is apparently a quote from the video game Deus Ex, which is cool? I guess? Anyway, it’s interesting to me because that particular sentiment seems to play into what I’ve read about one of the EU’s banned applications for AI: the social credit score.

 

A social credit score is essentially an extension of the regular credit scoring systems we’re already used to in Western society. A low score makes it more difficult to secure rewards from society, like lower-interest loans and other such boring adult shit. A social credit system, however, uses big data capabilities to keep a much more in-depth profile on someone, taking into account activities that go well beyond just financial dealings.
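To make the difference concrete, here’s a minimal, purely hypothetical sketch; none of the signal names, weights, or numbers below come from any real scoring system, they just illustrate the idea of behavioural data being folded into an otherwise ordinary credit score.

```python
# Purely hypothetical toy model: every signal name and weight below is invented
# for illustration and does not describe any real scoring system.

FINANCIAL_WEIGHT = 0.5  # how much the conventional credit score still counts

BEHAVIOUR_WEIGHTS = {
    "praised_policy_online": +15,    # hypothetical "reward" signal
    "ate_on_public_transport": -10,  # hypothetical petty-blunder penalty
    "bailed_on_cab_fare": -25,
}

def toy_social_score(financial_score: float, behaviour_log: dict) -> float:
    """Blend a conventional credit score with logged behavioural events."""
    score = financial_score * FINANCIAL_WEIGHT
    for event, count in behaviour_log.items():
        score += BEHAVIOUR_WEIGHTS.get(event, 0) * count
    return score

# Example: a decent borrower who skipped one cab fare and ate on the subway twice.
print(toy_social_score(700, {"bailed_on_cab_fare": 1, "ate_on_public_transport": 2}))
# -> 305.0
```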


The only country where a system like this exists is China. There, the government has encouraged development of pilot social credit programs in local jurisdictions, which are often paired with private companies that manage the algorithms and some of the data collection. The result of these pilots has been a sort of patchwork of scoring systems, with regional variation in criteria and little transparency on the part of private companies like the Alibaba-affiliated Sesame Credit, which runs one of the more popular such pilot programs, or Tencent, a telecoms giant that can provide data to scoring firms.


Most of what I can find on the Chinese system of social credit maintains that the Chinese Communist Party’s (CCP) stated intention of reaching a fully integrated national system has not yet been fulfilled, but reports describe rewards for praising party policies online, punishments for petty social blunders like eating on public transport or bailing on a cab, and social perks like segregated washrooms for high-scoring members of the Sesame Credit pilot.


All of this sounds like a classic Orwellian nightmare scenario, of the same caliber as, or perhaps even more intense than, Edward Snowden’s leaks or the Cambridge Analytica scandal, and it feels like I should just be thanking my lucky stars that the EU is more or less on top of things. But what’s particularly haunting about it all, and what is keeping me from moving on from this dystopian futurist’s fever dream of a story, is that that theme of good governance and a population made docile by data keeps coming up.

For example, in the Canadian Security Intelligence Service’s (CSIS) report on China’s social credit system, the spy agency focuses on what it calls the CCP’s system of “social governance,” which links state security to the political survival of the party:


“Advances in big data provide the CCP with a greater capacity to forecast, identify and assess risks to Party-state security. Their application is also intended to improve the integration, sharing and utilisation of data between and across Party-state entities. One part of the CCP’s social governance process is the nascent social credit system, which relies on technology to coerce and co-opt individuals to participate in their own management.”


That last part, the bit about Chinese citizens participating in their own management, is truly bone-chilling. What the social credit system ultimately does is allow the government to set priorities which then necessarily become priorities for every citizen, because those who don’t comply lose out on benefits and, in extreme cases, rights.
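For illustration only, here’s a hypothetical sketch of that gating logic; the tiers and perks are invented, not drawn from any actual program, but they show how a score threshold quietly turns the scorer’s priorities into everyone else’s.

```python
# Hypothetical tiers: thresholds and perks are invented for illustration only.
TIERS = [
    (900, ["fast-track loans", "skip-the-queue travel booking"]),
    (700, ["standard loans"]),
    (0,   []),  # below the cut-off: no perks, and possibly restrictions
]

def perks_for(score: float) -> list:
    """Return whatever benefits a given toy score unlocks."""
    for threshold, perks in TIERS:
        if score >= threshold:
            return perks
    return []

print(perks_for(305.0))  # the toy score from the earlier sketch unlocks nothing
```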


The CCP, at least according to CSIS, can use this ability to set priorities for its citizens to consolidate its power, proactively disadvantaging any member of society who might pose a threat to Party solidarity, whether critics or just plain slobs. What’s so revolutionary to me is not that this system of social credit lets us witness the raw power of big data applied to large-scale, state-sanctioned surveillance, but that it actually succeeds in forcing the country’s citizens into a situation where bowing to the CCP’s pressure is genuinely in their best interests.

The basic principles of democracy and capitalism operate under the assumption that citizens will act in their own economic best interests and that competition creates innovation, which boosts the economy. In a long history of autocratic regimes that have tried to subvert that selfish societal compulsion, the Chinese system is the first that at least appears to succeed. Instead of achieving compliance through intimidation and fear alone, the CCP has managed a lovely blend of dystopian Foucauldian surveillance and brutal intimidation that seems like it could truly be effective at locking down internal threats entirely.


So, to sum it up, thank Christ that we’re not even considering this type of thing in the EU, and given just how scary that shit is, and just how powerful big data already is in the West, I would honestly like to see some even more stringent follow-up. But none of this helps me get over what the robot said in the beginning. Is the natural state of humanity to be “coerced or co-opted to participate in our own management,” as CSIS put it? Was God just a dream of good government, as that robot meme posited?

Erm, no, I don’t really think so. While it’s true that humans can be controlled most easily when they think they’re being watched by a higher power (what else could explain thousands of years of blue balls from a long-lived premarital sex taboo?), I don’t think God was a construct born out of a desire to be properly controlled. I do think the church, as a powerful political entity, was born of the opportunity provided by a whole population that believed a higher power was always watching.


And in that sense, history might well repeat itself. Just as the church leveraged promises of salvation or damnation to maintain its legitimacy as the governing body with the most divine authority, big data and AI have the opportunity to force that legitimacy on us once again by replicating a God-like omniscience, and any government willing and able to wield that power will find itself in a position of absolute authority over its citizens. Fucking yikes.
