NOLAN OSWALD DENNIS
Toward Misrecognition.
Project notes for a haunting-ting
Of course this is already old news: computers interface with the world through code. The speed and volume at which machines produce data necessitate an equally fast, bulk method of processing – enter artificial intelligence. A double win: AI neural networks can scrutinise massive amounts of data at incredible speeds, with the added benefit of refining their algorithms as they process that data. This learning-as-they-work is an act of computation (sorting, flagging, reorganising) as well as consumption (adjusting their internal procedures based on what they receive from without). In effect, they turn the data they process into a training dataset, which is fed forward into their future processing. This is a feature, not a bug.
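The loop is simple enough to sketch. A minimal toy illustration in Python (my own, not any particular system): a machine sorts an incoming stream against a single learned threshold, and every item it processes is folded back in as training data that adjusts that threshold.

```python
import random

random.seed(0)

threshold = 0.0      # the entire "model": one learned decision boundary
training_set = []    # everything the machine has processed so far

for _ in range(2000):
    x = random.gauss(0, 1)
    true_label = x > 0.25        # the hidden pattern in the incoming data
    prediction = x > threshold   # computation: sorting, flagging
    training_set.append((x, true_label))
    if prediction != true_label:
        # consumption: nudge the internal procedure toward the item it mishandled
        threshold += 0.05 * (x - threshold)

print(f"items processed: {len(training_set)}, learned threshold ~ {threshold:.2f}")
```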
The artist Trevor Paglen[1] writes about the shift in visual culture from a human-centred endeavour to a machine-centric practice of image making and viewing.
The vast majority of images are now produced by machines for the exclusive consumption of other machines
(Note: the autocorrect algorithm in my word processor suggests I replace consumption with ‘computation’ – a significant slippage; I will stay with consumption.) By far, most images in the world will never be seen by a human eye at all.
Equally important as access to, and circulation of, these images is the shift in function of machine-images. These images are no longer primarily representational objects; instead they are operational images engaged in mediation, activation and enforcement. While these utilitarian functions are more or less legible at the point of deployment, the images themselves are altogether more mysterious than we may realise.
Digital images are coded information; their native format is not visual but machine-readable code. In order for them to be ‘seen’ in any customary sense they must be coaxed out of the image file and output as pixels of light on a screen or printed onto a physical medium. This inefficiency means most files are never realised as visually perceptible images; in any case, they are not for us.
While the internal logic of machine intelligence is rigorous (more or less), it is also relatively unsophisticated, governed by highly developed pattern recognition procedures. Powerful, yes, but within tightly constrained realms determined primarily by huge dataset sorting and recombining processes. In spite of their name, the horizon of machine intelligence is understanding. In addition to all their speed and size advantages, the critical benefit of machine learning is the ability to develop and function without human input, either in code or in data acquisition. We can assess their performance but not understand their procedures. To us they are opaque and impenetrable. To them, we are too.
However, the development of increasingly convincing natural language processing algorithms opens the doors of perception to another, weirder, possibility. While NLP is not fundamentally different from other pattern recognition procedures, the consequences of parsing natural language (everyday speech) and generating natural language responses are. These NLP developments shift the threshold of credibility from the replication of intelligence (mutual understanding) to the recognition of otherness (mutual alienation). A misrecognition perhaps, but that’s okay.
Orientated toward alienation, misrecognition becomes a feature, not a bug.
An operational perspective on machine intelligence emphasises the effectiveness of any given performance. A machine must always perform tasks better than a human could in order to persuade us of its intelligence. It is not a mistake that the Turing test[2] is a competition between a human and a machine. It is not a mistake that it is a test either.
Writing about gambling addiction, Alexis C. Madrigal[3] identifies the machine zone, a particular interface between human consciousness and machine procedure. Madrigal describes this zone as a rhythm, a kind of dance between machine prompt and human feedback, human prompt and machine feedback. A repeating cycle of minor gestures between an oblivion-seeking human consciousness and an obliging machine which, when in alignment, distorts spacetime and draws us into the security of the loop. United in apathy, both human and machine give nothing.
This fine-tuned feedback loop is mobilised by casinos and social media companies to keep us in the machine zone, where they can extract profit from our desire for a kind of doom-scrolling, demonic zen. However misused by nefarious forces, the notion of the machine zone offers an antidote to the utilitarian imagination of intelligence proposed by AI researchers. That imagination is overdetermined by capitalist logics of competition and a racial-colonial reduction of what counts as human (and therefore intelligent); for the sake of both ourselves and the machines, there must be another way.
I’ve been thinking about whether machine learning might be thought of as learning for the machine. Instead of training machines to execute tasks (really the saddest description of intelligence), we might share with machines another way of being, invert the relationship, teach them how to teach, perhaps.
My work recently has involved collecting idiosyncratic datasets drawn from the archives of black liberation theory. In biko.fanon (touch/hold) (2018), a dataset is created of all sentences mentioning the words touch and hold in the work of black consciousness theorist Steve Biko and revolutionary psychiatrist Frantz Fanon. This dataset is then recombined using a pseudorandom script to print a scrolling receipt of a conversation between these two figures about touching and holding. I had been thinking about this machine as a performer translating a conceptual proposition on a stage, a performer summoning spectres from the near past. Machine learning presents us with an opportunity to reconceive this kind of relation.
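For concreteness, here is a minimal sketch of that recombination in Python. It is my own illustration rather than the script used in the work; the file names, the sentence splitting and the alternating-turn structure are all assumptions.

```python
import random
import re

def sentences_mentioning(path, keywords):
    """Collect every sentence in a plain-text file that contains one of the keywords."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s.strip() for s in sentences if any(k in s.lower() for k in keywords)]

keywords = ("touch", "hold")
biko = sentences_mentioning("biko.txt", keywords)    # hypothetical digitised texts
fanon = sentences_mentioning("fanon.txt", keywords)

# Pseudorandom recombination: a deterministic mechanism with an unpredictable surface.
speakers = [("BIKO", biko), ("FANON", fanon)]
for turn in range(20):               # the installation would loop indefinitely
    name, corpus = speakers[turn % 2]
    print(f"{name}: {random.choice(corpus)}\n")
```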
The urgent critique of artificial intelligence from the racialised and colonised world is the critique of implicit and inherited bias.
These machines fail at relatively benign tasks like recognising the faces of people of colour because their training data is determined by the distribution of power and problems in the world at large. Another way to think about this is that these AI models are a reflection of that power: a version of the critique of colonial education, which seeks to turn the colonised into new iterations of the coloniser. Is there another possibility for AI models?
I am trying to conceive of a machine learning model trained on black liberation theory. A neural network tasked with iterating the eccentricities of an archive so precious and yet so carelessly unattended to. The task, as always, is channelling the spirit, divining the possibilities and deploying the ghosts. The other etymological root of intelligence is legō – to care.
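One way such a model might begin, sketched with the Hugging Face libraries and a small causal language model. The corpus file, the base model and the training settings here are placeholders of mine, a starting point rather than a finished proposal.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# liberation_archive.txt stands in for a digitised archive of black liberation theory.
corpus = load_dataset("text", data_files={"train": "liberation_archive.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="archive-model", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("archive-model")
```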
1. thenewinquiry.com
2. Turing test
3. theatlantic.com