A critique of Galloway’s Protocol: ‘Interrogating the Algorithm pt II’
So, the following will not be for everyone. It is, in fact, at times a bit shouty. But some of this really got my goat. The idea that technology is not value-neutral, that it manifests cultural assumptions and reflects (and sometimes enforces) existing power structures, is an important one. In light of our increasingly digitally-mediated world we need a very clear understanding of the (sometimes not obvious) mechanisms of control and influence within these systems. As such, a book like ‘Protocol’ by Alexander Galloway seems well-positioned to discuss these ideas. But my god – do we see some fail here. And the nature of the fail raises a whole ‘meta-argument’ I have been having recently, both with myself and with Katie Hepworth, about the relative value and respective shortcomings and blind-spots of empirical/technical versus more philosophical/sociological approaches. Protocol offers a nice little example of some of the shortcomings of the philosophical approach.
Look, I’ll be honest – I gave up about half way through this book. Its capacity to get me shouting at an inanimate object won out over my determination to work through to the end of his argument, because even in the first three chapters Galloway exhibits a problematic lack of insight into the actual mechanisms of technology and, as a result, makes statements that are just eye-wateringly off-target. How this can co-exist with a central argument that is quite astute is beyond me. Sigh…
So what is my problem with Protocol? In short, it’s something I’m starting to see in a lot of places at the moment, and which I’ve decided to call ‘Cargo-Crit’.
Just as Melanesian indigenous populations built primitive airstrips and other western industrial artifacts after the Second World War in the hope of attracting the white gods and their metal birds back to deliver more cargo, so do theorists like Galloway go through the motions of analysis in the hope of landing some earth-shattering insight whilst fundamentally misunderstanding the underlying principles of the technology they discuss. There seems to be understanding up to a certain point and then – exactly where the complexity arises – we veer off into some wilderness of preconceived, extrapolated meaning, often quite at odds with what the subject actually dictates.
Furthermore, there is a kind of fetishism going on, where things that are poorly understood have unrealistic and almost unnatural powers ascribed to them. The point at which understanding fails becomes the point at which the argument launches itself into hyperbole.
So let’s have a look at some of the claims made in Protocol and examine some of the assumptions Galloway is making.
‘…code is always enacted’. Page xii.
Er, no it’s not. Code can be stored. It can be passive, it can age, it can go out of date, it can no longer run because it has no compiler. Pieces of code may sit patiently in a subroutine and never be run because the execution point is never diverted there. Code is INTENT, not action. Thus it is subject to all the vagaries, foibles, errors and misjudgements that we make as humans. This is (one of the reasons) why computers crash. Code is not always ‘enacted’. And to see it as such is to endow it with magical properties that it does not really possess, i.e. fetishism.
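To make this concrete, here is a tiny Python sketch of my own (the function names and the tax-rate scenario are entirely made up, purely for illustration) of code that exists and expresses intent yet is never enacted:

```python
# A hypothetical sketch: code that is stored and versioned but never runs.

def legacy_tax_calculation(amount):
    """Written years ago, still sitting in the codebase -- nothing ever calls it."""
    return amount * 0.175  # an out-of-date rate: the intent has aged


def current_tax_calculation(amount):
    if amount < 0:
        # This branch expresses intent for a case that may never occur in practice;
        # it sits here passively unless execution is ever diverted into it.
        raise ValueError("amount must be non-negative")
    return amount * 0.2


if __name__ == "__main__":
    # Only current_tax_calculation is ever enacted; legacy_tax_calculation just sleeps.
    print(current_tax_calculation(100.0))
```

The dead function and the never-taken branch are still code in every meaningful sense – intent, stored and waiting – without ever being ‘enacted’.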
Networks are real but abstract. Page xiii.
“Protocols do not perform any interpretive function themselves…remaining relatively indifferent to the content of the information contained within.”
This is demonstrably false. By definition, a protocol defines the very nature of the information being exchanged. If, for example, a protocol codes sex as one of two possible values, ‘male’ and ‘female’, then an entire multi-dimensional space of possibilities is not only excluded from consideration but effectively collapsed into this externally-sanctioned binary. At a higher level, one might suggest that the most sophisticated protocols can support a wealth of possible human-readable semantics that the wrappers are blind to, such as the content of movies contained within an MPEG wrapper, but still the protocol defines much of what is and is not possible with the data. An MPEG file does not support annotating a video or finger-painting moustaches onto the actors, thereby disallowing whole types of interaction – thus protocols define what is and is not possible within the content.
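To show the interpretive, exclusionary work a protocol does at the point of encoding, here is a minimal Python sketch (the field names and function are my own hypothetical ones, not taken from any real standard):

```python
# A hypothetical sketch of a protocol's data definition doing exclusionary work.

ALLOWED_SEX_VALUES = {"male", "female"}  # the externally-sanctioned binary

def encode_person(name: str, sex: str) -> dict:
    # Anything outside the sanctioned values simply cannot enter the system.
    if sex not in ALLOWED_SEX_VALUES:
        raise ValueError(f"'{sex}' is not representable in this protocol")
    return {"name": name, "sex": sex}

print(encode_person("Alex", "male"))        # accepted
try:
    encode_person("Sam", "non-binary")      # the protocol refuses to carry it
except ValueError as err:
    print(err)
```

The protocol is hardly ‘indifferent to the content’: it decides, before any message is sent, what content can exist at all.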
‘Code is not a sign’.
Um, yes it is. That is EXACTLY what it is. It is a symbol that a given machine knows how to interpret. That is ALL it is. And, because of the fallibility of our machines, that meaning can go wrong sometimes. When you copy a piece of text from a website into a document and the formatting goes all funny, that is two different systems assigning different MEANINGS to the formatting codes. In other words: codes are signs. They are symbolic. And they are therefore open to interpretation, misinterpretation, error and revision.
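Here is a small Python sketch of exactly that kind of mis-assigned meaning – the same bytes read by two systems that interpret them differently. (This is a generic character-encoding example of mine, not one Galloway uses.)

```python
# The same bytes, two interpretations: a sign read as intended, and read 'wrongly'.

data = "café".encode("utf-8")   # one system writes bytes it intends to mean "café"

print(data.decode("utf-8"))     # café   -- the intended interpretation
print(data.decode("latin-1"))   # cafÃ©  -- another system assigns different meanings
```

Nothing in the bytes themselves fixes their meaning; the interpreter does. Which is to say: a sign.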
“A code is a series of activated mechanical gears, etc…”
Again, this emphasis on something that is active or happening seems misplaced. Code can be etched into glass and buried underground, then dug up, scanned, and turned back into functioning machine action. Pianola rolls can gather dust in the back of an antique store for a hundred years before being found, carefully spooled into their player mechanisms and leaping into life. Code can sleep. More to the point, code can have salutary real-world effects without ever actually doing anything or ever being used. For example, there have been historical cases (and continual conjecture about ongoing practices) of companies delivering software to clients with deliberate security holes left in it. These security holes are often called ‘backdoors’ and permit the original coders access to otherwise supposedly secure systems. Interestingly, this is often done less out of malice than for the ability to monitor and fix problems without the client becoming aware of the problem in the first place. However, such lapses – even when the breaches are never used but simply come to light at some point in a service’s lifetime – can have enormous effects in the real world in terms of faith in institutions, who is awarded development contracts and so on. Here code need not ever actually be used, yet it can have enormous real-world effects.
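For the sake of illustration, here is a deliberately simplified, entirely hypothetical Python sketch of what such a dormant backdoor looks like. The point is that the branch may never once execute, yet its mere presence in shipped code has consequences the moment it is discovered:

```python
# Hypothetical, simplified sketch of a dormant backdoor. Never do this in real code.

MAINTENANCE_PASSPHRASE = "letmein"  # a hard-coded secret left in by the developers

def authenticate(username: str, password: str, user_db: dict) -> bool:
    if password == MAINTENANCE_PASSPHRASE:
        # Possibly never exercised at all -- but it sits here, waiting.
        return True
    return user_db.get(username) == password

# Ordinary use never touches the backdoor branch; that code simply sleeps.
print(authenticate("ada", "correct-horse", {"ada": "correct-horse"}))
```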
Immateriality
‘…may involve information as an immaterial entity’. There are arguably aspects of code that are ‘immaterial’, but these are the objects mentioned above. They are elements that do not exist at the lowest levels of physicality: they are carried by electrical charges, but they are not those charges. Nor do they exist at the highest level: they are not apparent, we do not interact with them, we are not conscious of them. They are utterly abstract entities of function, defined by the coder, that hover beneath the skin of the machine. They give the machine life; they are arguably the quanta of machine intelligence; but, as with human intelligence, they are difficult to pin down and define.
Summary of qualities of ‘protocol’. Page 82.
So it would seem that Galloway’s conception of protocol is a very abstract one, removed from most of the actual, real-world, implemented or ‘embodied’ examples of protocol in existence. Which is fine – but on page 82, after having described both the lower levels of the internet’s communications layers in Chapter 1 and then some of the emergent forms and manifestations in Chapter 2, he lists a number of qualities that he maintains are inherent to this platonic, eternal concept of ur-protocol. Some of these, to me at least, seem simply laughably untrue.
Protocol is flexible. This must be the point at which Galloway’s version of protocol veers from any form of actual protocol I have ever come across. Protocols, in my experience, are by definition inflexible. Machines are terribly, mind-bogglingly literal, and any deviation from an expected pattern of characters or symbols in a message will cause a failure in the signaling process. There are ways of building robustness into the system (such as try/catch techniques in coding) but these are deliberate, painful to implement, and again terribly literal, in that each will only catch a specific class of error or deviance from the accepted sequence of events. In software, a single comma in the wrong place somewhere within millions of lines of code can reliably cause the entire application to crash. Protocol is not flexible. Protocols, by definition, separate the world into acceptable and unacceptable forms of data. This is the reason for their existence, which means they are, by nature, inflexible.
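Here is a minimal Python sketch of that literal-mindedness, using JSON as a stand-in for any wire format (my example, not Galloway’s): one stray comma and the parser rejects the whole message, and the try/except only ever catches that one class of deviation.

```python
import json

# One misplaced comma and the parser refuses the message outright.
well_formed = '{"name": "Ada", "role": "engineer"}'
malformed   = '{"name": "Ada", "role": "engineer",}'   # a single trailing comma

print(json.loads(well_formed))

try:
    json.loads(malformed)
except json.JSONDecodeError as err:
    # Deliberate, literal robustness: this handles JSON syntax errors and
    # nothing else that could go wrong in the exchange.
    print(f"Rejected by the protocol: {err}")
```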
Protocol is universal. I just throw up my hands here. Tell this to any web designer and watch them spray coffee all over their laptops. Not only does protocol require implementation on both sides – by the sender and the receiver – meaning that you need to possess the protocol in order to decode the message contained within, but the version of the protocol you possess must match the version used by the sender in order for lossless (electronically, at least) communication to take place. Specifically: you may send me a text document that has been saved in a very specific protocol, e.g. a document saved by a Microsoft word-processing program, hereafter referred to as a ‘Word document’, and should I not also possess a version of the same program, and thereby be legally able to decode that document, the data is fundamentally inaccessible to me. Furthermore, if you have saved it with a different version of the software to the one I possess, there may be errors or inconsistencies in the layout of the data when I see it, or simply missing data. This, of course, leads to all sorts of real-world problems. Proprietary and now-dead formats (protocols) can effectively kill data by rendering it unreadable. Got any 1.44 MB floppy disks? Betamax video tapes? Furthermore, proprietary formats produce widening digital divides, which manifest along manifold dimensions of wealth, access to information, cultural power and insider knowledge. Protocol is NOT universal. If you can’t afford to buy Microsoft Word, it is going to be very difficult for you to submit a job application to a website requiring that specific format. As an object lesson, go to any webpage with multiple text and graphic elements, then load the same webpage in several different web browsers and examine the differences. And this is after designers have spent ENORMOUS amounts of time and energy trying to make the look of the website as consistent as possible across platforms.
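As a toy illustration (the format names here are invented, not real standards), this is roughly what ‘universality’ looks like from the receiving end when you do not hold a matching implementation:

```python
# Hypothetical sketch: a receiver that only holds one decoder. Format names are made up.

CODECS = {
    "format_v1": lambda payload: payload.decode("utf-8"),  # the one protocol this receiver owns
}

def receive(format_name: str, payload: bytes) -> str:
    decoder = CODECS.get(format_name)
    if decoder is None:
        # Without the sender's format (or the matching version of it),
        # the data is just inaccessible bytes.
        raise RuntimeError(f"no decoder for '{format_name}': data is unreadable here")
    return decoder(payload)

print(receive("format_v1", "hello".encode("utf-8")))   # protocols match: readable
try:
    receive("format_v2", b"\x00\x17\x9ab")             # sender upgraded; receiver did not
except RuntimeError as err:
    print(err)
```

The ‘universality’ only extends as far as the set of people and machines holding the right codec at the right version.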
Protocol is anti-hierarchy and anti-authority. This is, in my eyes anyway, the most contentious claim, and it seems at odds with much of the rest of Galloway’s hypothesis. Where does this even come from? Up until this point a central thrust of his argument has been that the internet exists as a result of the tension between the flat (anarchic) mechanisms of TCP/IP and the hierarchical/authoritarian/centralising mechanisms of DNS. But here he suddenly states that this thing, ‘protocol’, whatever it is, contains inherent properties in relation to power dynamics and that, specifically, these are anti-authoritarian? This is ludicrous in the extreme. Protocols rely on a multi-levelled, system-wide conformity to ever more finely granulated levels of specification in order to work. Unless they are very deliberately engineered to be open, protocols by their very nature impose a power relationship through their use, in that one must comply with the protocol in order to gain access to the information or to the network. Hence the digital divide. Furthermore, this process of definition is usually one that the receivers of information/consumers/surfers are not privy to. This is, of course, the very problem of protocol and the reason for open-source and open-platform approaches to technology. It is also the very facet of technology that has spawned the most fertile resistances to the inherent power of unmodified protocol, as evidenced by everything from the original ‘cyberpunk‘ arguments through to open-source development ideologies, the Creative Commons challenge to copyright, the Maker movement, etc.
Ok, ok. Enough. You get the point. I could not agree more with the central thrust of Galloway’s assertion, and as a result find it all the more galling that he seems to mishandle the argument so badly. I aim to keep reading up on the subject, but right now I want to get into some of the quant/qual, empiricism pro/con debate.
In the meantime, here are a few ripsnortin’ little examples of how ‘the algorithm’ affects our lives. Firstly (and one that I know I keep linking to… in other posts), a terrific little article on gay marriage and database engineering.
And then here’s a link to Eli Pariser’s website: thefilterbubble.com. Eli is known (amongst other things) as the originator of the term ‘filter bubble’, i.e. the idea that search algorithms, once they learn what we like on the web, will change our search results to reflect more of what we like, resulting in a potentially distorted, and continually self-reinforcing, world-view. It’s an important concept and Eli writes really well about the subject. His website covers similarly fascinating and important phenomena. Check it out.
And finally, here’s a terrific little article from Cory Doctorow – ostensibly about how difficult it is for computers to understand and parse messy real-world phenomena such as human families, but which can also be read as a great case study of how algorithms get things wrong and can impose a given world-view on people.
That’s it for today. I have a STACK of books to get into now on qualitative, ethnographic and anthropological methods. Fun times!