This entry is part 2 of 10 in the series Snapshots at 27
There’s a popular tendency to view technology as an “objective” field, “purer” and somehow more essential for it.

For example, we often hear about the apparent infallibility and efficiency of the digital, especially compared to analog tools and processes. Computers and mobile devices have become common fixtures for some of us, and with that comes the shift from physical labour to knowledge work — “pure thought, pure mind, pure intellect,” as Audrey Watters describes it. Developments like artificial intelligence or data analytics allow more of us to crow about “smarter” devices and “data-driven” decisions.

The implication, usually, is that disembodying work minimises uncontrollable “human error,” and boiling phenomena down to “indisputable” numbers constitutes freedom from fault. Most people who talk about these shifts like to frame them, without question, as progress.

In her talk linked above, Watters quotes Asimov:

“In fact, it is possible to argue,” he adds, “that not only is technological change progressive, but that any change that is progressive involves technology even when it doesn’t seem to.”

But as Watters points out, technology always involves human factors — human labour, human judgments — no matter how much our visions of digital utopia like to pretend otherwise. Technology doesn’t spring forth from nothing. Insisting that it does often erases the inequities at play in technology’s production and usage, the structural wrongs technology doesn’t save us from (and often, in fact, perpetuates).

Anil Dash makes a similar point when he asserts that tech isn’t neutral. I’d like to stretch that further and push back against Asimov a little by noting that tech isn’t inherently good. Novelty and innovation don’t automatically translate to welcome change. Tech carries the values, biases, and failings of its creators — and it can easily 10X these at scale, to borrow from the language of Silicon Valley startup bros. Just look at how Facebook is handling misinformation and data mining on its platform.

Tech (and more specifically, its creators) sidesteps a lot of criticism and responsibility when we let it disavow human elements and pretend to be detached, “objective,” incorruptible. I think a lot about Christopher Schaberg’s discussion of the term “30,000-foot view”, a favourite of startup productivity gurus like Tim Ferriss:

The expression enfolds a double maneuver: It shares a seemingly data-rich, totalizing perspective in an apparent spirit of transparency only to justify the restriction of power, the protection of a reified point of authority. It works this way: “Here’s how things look from 30,000 feet. Can you see? Good, now I am going to make a unilateral decision based on it. There is no room for negotiation, because I have shown you how things look, so you must understand.”

This particular use of data — or of the idea of data — has always bothered me. To a certain extent, yes, data doesn’t lie, and a “data-driven” approach does help weed out some of the personal biases and preconceived notions that would otherwise colour, say, research work. Evidence matters.

But quantitative data often isn’t “pure” in the sense that many people like to believe, nor is it automatically more “reliable” or “trustworthy” than other forms of evidence. Judgments still have to be made about what data to collect and how; what analyses to perform; how to interpret and present any results. Skull measurements were data, for example, and for a long while, many anthropologists used them to prop up racist, imperialist narratives of social evolutionism.

In any case, I’ve been thinking a lot about tech lately — the functions it fulfills, the spaces it occupies in our lives.

Categories: Science, Tech