I saved this screenshot from Tumblr some time ago:
I’ve been reading a lot about the nature of information lately, and one statement in particular (hastily jotted down in a notebook, without citation, sigh) has stayed with me:
“Data is time-sensitive.”
The validity of whatever information we possess erodes over time. Hard drives can fail; phone numbers can fall out of use. Likewise, access can decay: websites shut down; programs get deprecated; file types or formats cease to be supported. Someone given a floppy disk of people’s contact details, for example, would have trouble using that information, even if nobody in that directory ever changed phone numbers.
Every now and then I wonder about how this affects — well, a lot of things, really. Our relationship with tech, for one thing; our ability to remember or learn from prior knowledge, for another. (Anil Dash rightly points out that the dearth of proper documentation and the speed of information erosion lead quite naturally to an ahistorical view of the tech world that can be quite crippling.) On a smaller, pettier level, I think about the bits and pieces of personal history scattered across old OneNote and Evernote files, abandoned email accounts, profiles on social media networks that no longer exist.
Even setting aside the proliferation of streaming services and DRM (digital rights management) measures, I wonder: Do we ever actually own anything digital?
Of course, digital objects don’t have a monopoly on impermanence. In an article for the BBC, Lawrence Norfolk wrote, “It is transitional. Work passes through it on the way to becoming something else.” He wasn’t talking about streaming services, but notebooks: paper, ink, writing accumulated over time. There’s a similar erosion of relevance and validity, online or offline. As for access, well: the notebook Norfolk was describing was lost on a train.
But digital objects are more vulnerable to access decay than physical items, I think. Notebooks can be destroyed or misplaced, i.e., unfortunate events can render these inaccessible to us. But a Flash video, a link out to a different website, an Adobe Photoshop project file can be lost to us even if we never do anything, even if the file simply sits on our desktop or the link stays forever on our blog — simply because the digital world would have moved on, often sooner than we’d expected.
There’s a popular tendency to view technology as an “objective” field, “purer” and somehow more essential for it.
For example, we often hear about the apparent infallibility and efficiency of the digital, especially compared to analog tools and processes. Computers and mobile devices have become common fixtures for some of us, and with that comes the shift from physical labour to knowledge work — “pure thought, pure mind, pure intellect,” as Audrey Watters describes it. Developments like artificial intelligence or data analytics allow more of us to crow about “smarter” devices and “data-driven” decisions.
The implication, usually, is that disembodying work minimises uncontrollable “human error,” and boiling phenomena down to “indisputable” numbers constitutes freedom from fault. Most people who talk about these shifts like to frame them, without question, as progress.
In her talk linked above, Watters quotes Asimov:
“In fact, it is possible to argue,” he adds, “that not only is technological change progressive, but that any change that is progressive involves technology even when it doesn’t seem to.”
But as Watters points out, technology always involves human factors — human labour, human judgments — no matter how much our visions of digital utopia like to pretend otherwise. Technology doesn’t spring forth from nothing. Insisting that it does often erases the inequities at play in technology’s production and usage, the structural wrongs technology doesn’t save us from (and often, in fact, perpetuates).
Anil Dash makes a similar point when he asserts that tech isn’t neutral. I’d like to stretch that further and push back against Asimov a little by noting that tech isn’t inherently good. Novelty and innovation don’t automatically translate to welcome change. Tech carries the values, biases, and failings of its creators — and it can easily 10X these at scale, to borrow from the language of Silicon Valley startup bros. Just look at how Facebook is handling misinformation and data mining on its platform.
Tech (and more specifically, its creators) sidesteps a lot of criticism and responsibility when we let it disavow human elements and pretend to be detached, “objective,” incorruptible. I think a lot about Christopher Schaberg’s discussion of the term “30,000-foot view”, a favourite of startup productivity gurus like Tim Ferriss:
The expression enfolds a double maneuver: It shares a seemingly data-rich, totalizing perspective in an apparent spirit of transparency only to justify the restriction of power, the protection of a reified point of authority. It works this way: “Here’s how things look from 30,000 feet. Can you see? Good, now I am going to make a unilateral decision based on it. There is no room for negotiation, because I have shown you how things look, so you must understand.”
This particular use of data — or of the idea of data — has always bothered me. To a certain extent, yes, data doesn’t lie, and a “data-driven” approach does help weed out some of the personal biases and preconceived notions that would otherwise colour, say, research work. Evidence matters.
But quantitative data often isn’t “pure” in the sense that many people like to believe, nor is it automatically more “reliable” or “trustworthy” than other forms of evidence. Judgments also have to be made about what data to collect and how; what analyses to perform; how to interpret and present any results. Skull measurements were data, for example, and for a long while, many anthropologists used that to prop up racist, imperialist narratives of social evolutionism.
In any case, I’ve been thinking a lot about tech lately — the functions it fulfills, the spaces it occupies in our lives.
Our office intern asked me a strange question this morning:
“Hey Kate, when you don’t like a person, is it obvious?”
My answer must have taken longer than expected, because she hurried to clarify: “Like, if you don’t like someone, does it show on your face? Do you behave differently toward them?”
The easiest way to put it would be: I do, but not in the way the question supposes.
By “behaving differently,” the question implies cold shoulders or eyerolls. What I mean by it, though, is a conscious (though often reluctant) effort to be civil.
It is different, still. When I dislike someone, I’d usually prefer to knee them in the groin. But that’s hardly ever permissible in everyday interactions; when it comes to people we dislike, we’re more likely to be asked to work together than to be allowed to inflict bodily harm. (Most of us have at least one group project we would’ve wanted an exit door for, or a co-worker whose desk should’ve come with an eject button.)
This used to frustrate me to no end. Why can’t I just dislike somebody and be done with it (or, more accurately, with them)? Then I stopped being a teenager, and I realised I didn’t have the energy for endless frustration. And endless it usually would have been, because most people aren’t aware that we dislike them; those who know probably don’t care anyway. The upshot is that active dislike takes time and effort, all for hardly any payoff.
Between outright like and dislike, though, there’s a lot of room for civility. That’s where I try to spend my time these days. This isn’t some form of wisdom or kindness. Instead, consider it an attempt at self-preservation: If I have to work with people I don’t want to spend time with, then I might as well make the experience as painless as possible. This is what’s necessary for us to get things done, so this is as much of my time, effort, and goodwill as I will give you. Or, put differently: This is as little of my life as you will occupy.
Some people might find that cold. Maybe it is; but it’s also efficient, and it spares me the trouble of thinking about difficult people any more than I have to.
Recently, I’ve found myself trying to apply the same mindset to work in general. For example, I’m trying to be more vigilant about my working hours.
I’m one of those people who care too much about what I do: given a task or goal, I can’t stand the idea of doing anything less than great. If this sounds like a humblebrag, it’s not. In practice, this just means that work takes over my life, and I torch my reserves to accomplish even unreasonable tasks. This is neither healthy nor sustainable, but lately I’ve found myself in a setting rife with situations that could feed this tendency.
So: vigilance, which means drawing clear lines that I do my best not to cross. Mentally checking out of work at 6pm. Keeping Slack off my phone. Logging out of the work email on weekends. Muting notifications for the office group chat. Most important of all, though, is making peace with the fact that enforcing these limits will sometimes mean adjusting deadlines, asking for help, saying no.
Five years ago, that would have been horrifying — a circumscription of potential, an admission of inadequacy. Today, I try to remind myself that these limits save me from depletion. There’s still the itch to do well, all the time, but I’ve only got so much of myself to throw around, and not everything is worth it.
Some background: On 12th June, Twitter released new datasets that compiled anonymised data from accounts that seem to be linked to information operations run by the Chinese (PRC), Russian, and Turkish states. These accounts have since been shut down, but Twitter has retained data about the profiles and their tweets. This is part of Twitter’s ongoing compilation of data about “potentially state-backed information operations” on their platform.
Sinha’s analyses looked at behavioural trends in the Chinese account dataset, including the timing of tweets:
1. Tweet timings: The tweets from the CCP accounts almost exclusively tweet during "work hours" by Beijing Time.
89% of the tweets were between 7 AM and 5 PM.
For the control group of 58k tweets from 32 accounts in HK & TW, that number was 37%!
This piqued my interest, of course. As you can probably tell from this blogchain, I’ve been thinking about social media and its influence on information dissemination and consumption. Sinha ends his thread by pointing out how these behavioural patterns and attributes could be used to create some accessible way to identify / flag fake accounts like these. That’s catnip for nerds in a world of digital disinformation, really. (Even if we factor out the very relevant fact that social media is destroying public discourse in my home country.)
So I went and downloaded a copy of Twitter’s datasets to try poking through the data myself. The better to practice some R programming, too.
Simple Tweet Data Analysis with R
First things first, Twitter’s datasets are about as tidy as you can hope for. The Chinese set contained two main datasets:
account information, which compiled metadata about each profile (so attributes like user’s reported location, number of followers, etc.)
tweet information, which compiled individual tweet contents as well as metadata (time the tweet was published; reply, retweet, and quote counts; etc.)
There were 23,750 accounts in all, and a total of 348,608 individual tweets.
If you download the datasets, Twitter also provides a handy Read Me file that enumerates all the variables available for each dataset. For these quick probes of the data, I mostly did some simple transformations to isolate the variables I wanted to look at.
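Loading the CSVs is the least interesting step, but for completeness, here’s a sketch of it. The real file names depend on Twitter’s download, so to keep the snippet self-contained it writes a tiny stand-in CSV first; `follower_count` is one of the columns the Read Me lists, but treat the details as illustrative.

```r
## sketch of the loading step -- real file names come from Twitter's download
library(readr)

## write a tiny stand-in CSV so this snippet runs on its own
tmp <- tempfile(fileext = ".csv")
write_csv(data.frame(userid = c("u1", "u2"),
                     follower_count = c(10, 250)),
          tmp)

accounts_all <- read_csv(tmp)
nrow(accounts_all)  # one row per account
```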
Examining Tweet Timings
First, I tried to recreate Sinha’s graph of tweet timings. I think the trickiest step here might be remembering to convert time zones, since Twitter provides timings in UTC by default.
## load the packages these snippets rely on
library(dplyr)      # data wrangling
library(lubridate)  # date-time handling
library(ggplot2)    # graphing, used further down

## create column for tweet time by hour and store copy in new object
by_hour <- tweets_all %>%
  mutate(chn_hour = with_tz(tweet_time, tzone = "Asia/Shanghai"),
         hour_level = hour(chn_hour))

## check new object
glimpse(by_hour)

## check count of instances by hour
by_hour_sum <- by_hour %>%
  group_by(hour_level) %>%
  summarise(count = n())
From there, it’s a simple matter of visualising the data using ggplot2, with hour_level (the tweet’s hour of day in China’s standardised local time) as the focal variable:
## line graph version
by_hour_graph_line <- by_hour_sum %>%
  ggplot() +
  geom_line(aes(x = hour_level, y = count)) +
  scale_x_continuous(name = "Hour of Day",
                     limits = c(0, 24),
                     breaks = 0:24) +
  scale_y_continuous(name = "Tweets",
                     breaks = seq(0, 60000, 5000)) +
  labs(title = "PRC Fake Twitter Accounts - Tweets By Hour",
       subtitle = "Tweeting trends correspond with working hours in China",
       caption = "Source: Dataset from Twitter.com") +
  ggthemes::theme_economist()
This gives us the following graph:
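Sinha’s 89% figure can also be checked directly from by_hour, by computing the share of tweets falling in the office-hours window. The sketch below uses a three-row toy version of tweets_all so it runs standalone, and it assumes “between 7 AM and 5 PM” means hours 7 through 16 inclusive:

```r
library(dplyr)
library(lubridate)

## toy stand-in for tweets_all, with known UTC timestamps
tweets_all <- tibble(
  tweet_time = ymd_hms(c("2019-06-01 01:30:00",   # 09:30 in Beijing
                         "2019-06-01 05:00:00",   # 13:00 in Beijing
                         "2019-06-01 16:00:00"))  # 00:00 next day in Beijing
)

## same transformation as above: convert to Beijing time, extract the hour
by_hour <- tweets_all %>%
  mutate(chn_hour   = with_tz(tweet_time, tzone = "Asia/Shanghai"),
         hour_level = hour(chn_hour))

## share of tweets in the 7 AM to 5 PM "office hours" window
office_share <- mean(by_hour$hour_level >= 7 & by_hour$hour_level < 17)
office_share  # 2 of the 3 toy tweets land in the window
```

On the real tweet table, the same one-liner on hour_level reproduces (or refutes) the 89%.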
I created a bar graph version too, since bars might be a better representation of discrete hours (as opposed to the line graph, which links each hour together into a continuous phenomenon).
## bar graph version
by_hour_graph_bar <- by_hour %>%
  ggplot() +
  geom_bar(aes(x = hour_level)) +
  scale_x_continuous(name = "Hour of Day",
                     limits = c(0, 24),
                     breaks = 0:24) +
  scale_y_continuous(name = "Tweets",
                     breaks = seq(0, 60000, 5000)) +
  labs(title = "PRC Fake Twitter Accounts - Tweets By Hour",
       subtitle = "Tweeting trends correspond with working hours in China",
       caption = "Source: Dataset from Twitter.com") +
  ggthemes::theme_economist()
Which gives us this graph:
The findings track with Sinha’s own graph, which he shared in his Twitter thread. Obviously, this should be the outcome, since we were working with the same dataset — but it’s always good to have that quick assurance that your code was structured correctly and yielded the same results.
Examining Twitter Profile Age
Sinha didn’t tweet about this, but I figured I might as well check. In the Philippines, just from what I’ve seen from regular social media browsing, troll accounts tend to be fairly new. I wondered if that might be the case for these PRC accounts as well — if, perhaps, that indicated that most accounts used for specific information ops goals are only created shortly before the campaign starts.
First, then, I had to figure out how long each account was active — that is, each account’s “age.”
Twitter’s dataset doesn’t include activity ranges, but it does provide the account creation date for each profile. The Twitter profiles included in the dataset were taken down in May 2020, so I used that as my end date. Then, it was time to calculate ages for each account.
# Grouping accounts by age ####
mark_date <- as.Date("2020-05-01")

by_age <- accounts_all %>%
  mutate(current = mark_date)

## set interval between twitter reporting date and account creation date
by_age <- by_age %>%
  mutate(int = interval(account_creation_date, current))

## find length of interval and assign ranges
by_age <- by_age %>%
  mutate(duration = round(time_length(int, unit = "month"))) %>%
  mutate(range = cut(duration,
                     c(0, 3, 6, 9, 12, Inf),
                     c("0-3 months", "4-6 months", "7-9 months",
                       "10-12 months", "13+ months")))
I figured there would be considerable variation when it came to the number of months each profile was active. To avoid getting a fairly messy graph (just imagine 30+ ticks all over your X-axis), I decided to simplify things further and group accounts according to specified age ranges:
0-3 months
4-6 months
7-9 months
10-12 months
13+ months
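As a quick toy check of how cut() assigns those buckets: the intervals are right-closed, so a duration of exactly 3 months falls in “0-3 months” (and, as an edge case, a duration of exactly 0 would come out NA with these breaks).

```r
## sample durations in months, spanning all five buckets
durations <- c(1, 3, 4, 12, 13, 40)

## same breaks and labels as the analysis above
ranges <- cut(durations,
              c(0, 3, 6, 9, 12, Inf),
              c("0-3 months", "4-6 months", "7-9 months",
                "10-12 months", "13+ months"))

as.character(ranges)
## "0-3 months" "0-3 months" "4-6 months" "10-12 months" "13+ months" "13+ months"
```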
Then, it was a matter of graphing the results using ggplot2:
## check count per month age
by_age_sum <- by_age %>%
  group_by(duration) %>%
  summarise(accounts = n())

glimpse(by_age_sum)

## graph count per range level
by_age_graph <- by_age %>%
  ggplot(aes(x = range)) +
  geom_bar(aes(fill = range), show.legend = FALSE) +
  scale_y_continuous(name = "Number of Accounts",
                     breaks = seq(1000, 13000, 1000)) +
  scale_x_discrete(name = "Age") +
  labs(title = "PRC Fake Twitter Accounts by Age",
       subtitle = "Most fake accounts tend to be less than 7 months old",
       caption = "Source: Dataset from Twitter.com") +
  ggthemes::theme_economist()
This gives us the following graph:
The vast majority of these troll accounts appear to have been less than a year old. There are a lot of factors that could affect account age, though: maybe Twitter tends to identify and take down troll accounts before most of them can breach the 6-month mark; maybe accounts get abandoned or deleted after a certain campaign; and so on.
This graph is mostly descriptive; sussing out some kind of explanation for this behaviour will take much more research and analysis. Still, it’s an interesting point to bring to light about these kinds of accounts.
More Information
I tried visualising these accounts as a network, but apparently that was too much work for my lone laptop. R couldn’t even produce a visualisation. 😅
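For what it’s worth, the basic shape of that attempt looked something like this: build an edge list of interactions (the dataset’s reply and retweet ID columns can supply the pairs; check the Read Me for the exact names) and hand it to igraph. The edges below are made up so the snippet runs on its own:

```r
library(igraph)

## made-up interaction pairs standing in for reply/retweet relationships
edges <- data.frame(
  from = c("acct1", "acct1", "acct2", "acct3"),
  to   = c("acct2", "acct3", "acct3", "acct1")
)

g <- graph_from_data_frame(edges, directed = TRUE)

vcount(g)  # 3 accounts
ecount(g)  # 4 interaction edges

## with 20k+ accounts, filtering to high-degree nodes (or sampling edges)
## before calling plot() keeps the layout computation manageable
```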
Other, better analysts have, of course, studied this data and come up with much more sophisticated analyses. Twitter has been working with the Stanford Cyber Policy Center’s Internet Observatory, which has published its findings online. They’ve got a fantastic model of the network as divided among the topics of their tweets, as well as some interesting takeaways about the specific narratives that these accounts tried to amplify.
There’s a lot more data to be studied, but if nothing else, this quick look at a couple of Twitter’s datasets highlights the scale and sophistication of the information operations being carried out online. Social media can be a scary place, more so when you consider how its massive reach and influence are essentially unchecked. As Sinha pointed out in his thread, though, studying these information operations could give us a fighting chance against disinformation online.
This has been sitting in my Google Drive since 2018. I rediscovered it a couple of weeks ago and figured I might as well find some closure for it.
So. Just in time for Father’s Day, too. You know, two decades ago, this day would have meant several hours hunched over a chessboard at home.
My dad taught me chess when I was six years old.
I learned on one of those huge wooden boards you could pluck from the bottom shelves of National Bookstore’s ever-baffling children’s games section. You know the ones: cream and green squares painted on either side of a hinged case that housed an assortment of chipped pieces. My dad set the board down on our dining table one after-school afternoon, wood meeting glass with a hollow thunk.
As he scooped out the pieces, I inspected the upturned playing field. Its cheap, unvarnished grain reminded me of the haphazard stacks of plywood I sometimes glimpsed through the gates of a nearby lumberyard. I no longer remember what I thought then, but now I wonder if I took a moment to puzzle over the impossible chains of choice and circumstance that could bring the same material to such different destinations.
Probably not.
A couple of days ago, I received an email from a school I’d applied to a while back. Their graduate program had accepted me for the 2017-2018 school year, but at the time, I’d deferred enrollment for a year.
I had my reasons. Anxiety: the one I’d been upfront about, and the one the school had accepted. Fear: the one I never disclosed, and the one I’ve been trying to ignore. Enrollment entailed moving six time zones away and balancing graduate-level study with immigrant-level stress. Deferral spared me that quandary until I could solve it with grace.
Or so I let myself imagine.
Opening the school’s latest email one year later, I realized deferral simply let me swap one form of constant, crushing perplexity for another. Now that trade was laid bare in a question of brutal simplicity: Would I be enrolling this year or not?
In 2013, Oxford Dictionaries Online added the acronym “FOMO” to its entries. A distinctly web-flavored coinage, FOMO stands for “fear of missing out,” itself a phenomenon that would never have gotten a name if it weren’t for social media.
Fear of missing out, as it says on the tin, is anxiety spurred by the suspicion that fantastic things are happening beyond your immediate realm of experience. People are having fun, accomplishing things, and making memories — and you, by virtue of not being there alongside them, are losing out.
In response to a question posted on Quora, a clinical psychologist sketched out the evolutionary and biological underpinnings of FOMO. Dr. Anita Sanz traces FOMO back to prehistoric survival drives: having a finger on the community pulse spelled the difference, say, between benefiting from a new food source and starving to death. Over time, these drives have become well-worn grooves in the operational tracks of our amygdala, that tiny node in our brain responsible for sensing danger and triggering stress responses. FOMO, it seems, is simply the latest spin on the primordial “fight or flight” response.
For all its deep-seated roots, however, FOMO as we know it today still feels like a newfangled affliction. Like all good technological phenomena of the startup age, it targets a hyperspecific niche: avid users of social media. By definition, FOMO is a very outward-looking malaise, and its regrets revolve around presence rather than ability. To borrow from Austin Kleon, FOMO strikes me as distress about verbs, not nouns. It rues our failure to witness, to immerse, to experience; it only ever glances at our inability to attain, to become.
At least, that’s how I understand it. Many of my friends have been marking various milestones all over social media: a good number in graduate programs abroad; others climbing the next rung of their career ladders. These updates come to me in bits and pieces, photos of destinations reached and check-ins for events attended, all arriving via Facebook, Instagram, Twitter. It’s human, of course, to feel envious, or to simply feel left behind. But fear of missing out sounds like a superficial explanation, naming only the immediate prickle but not the older, deeper ache it happens to inflame.
On the flip side, that suggests that there is a resonant little kernel in the concept of FOMO. Some digging yields this: the idea that you are somehow reduced by not experiencing these things, not living these other possibilities. Thanks to technology, the world is better than ever at confronting you with all the people you could be — or could have been. FOMO or not, you are, in this context, “only” yourself, and lesser for it.
Of course, you don’t need the internet to feel like a diminished version of yourself.
Doors close all the time, and most of us register a twinge, at least, of regret when they do. Sometimes we might even hone that acute sensitivity enough to pass it on to others. The apparent glut of prodigious preschoolers supports this deeply miserable hypothesis. Worse, it suggests a terrifying adjunct: if you’re not careful (or rich, or able), doors will have slammed in your face before you hit your tenth birthday.
Many mommy bloggers, online thinkpieces, and op-ed hot takes are quick to blame Tiger Moms and helicopter parents for this and future generations of frazzled kids. It strikes me as an incomplete diagnosis, identifying only metastases. If these parents are fretting over their children missing out — on better prospects, better futures — they’re not driven by whim so much as the apparent precariousness of middle-class life.
In 1958, the British sociologist Michael Dunlop Young published a satire that popularized an enduring buzzword: meritocracy. Used by Young to describe a society that rewarded “intelligence-plus-effort,” meritocracy has since been touted as a cure for society’s rigged games. By dismissing the weight of affiliations, lineage, or wealth, meritocratic systems claim to hold doors open for whoever exerts the effort to go through them.
In a way, meritocracy is a fragile promise. It tells us that we can go as far as we’re able, but it also warns us that whatever rewards we earn can’t be bestowed wholesale upon whoever comes after. In this light, helicopter parenting almost seems reasonable. As various journalists have observed elsewhere, skills and credentials have become crucial requisites for social status — and to paraphrase sociologist Hilary Levey Friedman, you can’t pass on a law degree or an MBA.
Meritocracy is also an effective lie. There’s a reason Young wielded the term as criticism. Meritocratic systems assume that we pursue those law degrees or MBAs on even footing, playing by the same rules. Performance, these systems insist, is all that matters; success depends entirely on how well and how ruthlessly we can leverage our abilities. Privilege is immaterial, but so is disadvantage.
In systems blind to the many possible bounds on “intelligence-plus-effort,” the assumption is that everybody gets what they deserve. Perhaps, in this light, a life of intense calculation and crushing responsibility is a reasonable price for an inheritance to feel earned, for disparity to seem unassailable.
Of course most things of value reside behind locked doors.
Of course we celebrate those skilled enough to win their way through.
And what does it matter if some people have been handed keys, when there are so many ways to lose them?
“A long view of precarity,” Richard Settersten writes, in an edited collection looking at insecurity and risk in later life (Settersten, R. (2020). How life course dynamics matter for precarity in later life. In Settersten, R., Grenier, A., & Phillipson, C. (Eds.), Precarity and Ageing: Understanding Insecurity and Risk in Later Life, pp. 19-40. Bristol: Bristol University Press. doi:10.2307/j.ctvtv944f.8), “means paying attention to the relevance of the past — not just the shadows of the recent past, but also the far-away past — in determining the present.” As an example, he cites the idea of cumulative advantage and disadvantage: how early-life wins or losses “can pile up and be compounded over time.”
The more we try to trace them, the further the roots of our futures seem to wind their way back through our lives.
When I was seven, my dad gifted me my first book on chess strategy.
It’s hard to miss the strategic nature of chess. At six, it took me only a curbstomp of a loss to realize that planning a few moves ahead tipped the odds in my favor. But it was my dad’s book, a tattered old paperback passed down from his own father, that crystallized this image of success as the apotheosis of relentless, precise, perfect orchestration.
My dad liked Karpov, revered Capablanca, idolized Fischer. When he started mapping out the world of chess for me, then, it unfolded as a domain of positional play. One had to look beyond the tussle of the moment. Every move, after all, could build towards victory or tighten the noose around my neck. The endgame could be decided as early as my opening moves. Decisions stacked one on top of the other like cards in a trembling cardboard house, and I had to place the next one just so, or everything would come tumbling down.
That downfall, should I ever blunder into it, would always be a matter of public record. That was the word my dad’s book used for it, too: blunder, the layman’s equivalent for the question marks that caught my eye when I first learned chess notation.
Several systems exist for recording a game of chess. The current standard, algebraic chess notation, exemplifies terse efficiency. 1. e4 e5. It cuts decisions down to their most essential components: the agent and where it ends up. A noteworthy move receives a ! (brilliant) or ? (blunder) when it’s made, but nothing more. The soundness of each move’s underlying logic, or so the system holds, reveals itself in due time. To the analyst, reading plays after the fact, perhaps flipping through a chess book several decades later, the game’s outcome explains all the moves that came before.
This is the system I grew up with.
Station Eleven, Emily St. John Mandel’s book about the survivors of a swine flu pandemic, set the book review circuit ablaze in 2015. It lingered on my list, a meditation on loss that I would only start later, when I loaded the book onto my Kindle as my family prepared to leave for a different country.
The move kept me from proceeding to the next stage in an application for a job I’d been told I would do well in. Kneeling on my bedroom floor, sorting my clothes into packing cubes — sleepwear here; outerwear there — I turned the situation over in my head. Foregoing a stable job with a clear career ladder, some measure of prestige, and skill requirements that I might actually fulfill, but with obligations and responsibilities I wasn’t sure I could shoulder: good choice or bad choice?
In the middle of the sixteen-hour flight, Emily St. John Mandel would tell me, “Adulthood’s full of ghosts.” A rattling metal box was taking me away from another future, in distance and in time, but the maddening ambivalence of it all remained.
What happens to the people we run away from, the people we’re not sure we want to be?
If I believed Station Eleven then, I might have found relief in the idea of leaving those futures behind. The haunting happens when you stay, the book had said:
“I’m talking about these people who’ve ended up in one life instead of another and they are just so disappointed. Do you know what I mean? They’ve done what’s expected of them. They want to do something different, but it’s impossible now, there’s a mortgage, kids, whatever, they’re trapped.”
I submit that the people who don’t do what’s expected of them experience a similar suffering. The search for alternatives can gut people and turn them into ghosts regardless. The uncertainty of fighting what seems clear and sensible can carve conviction out of you as thoroughly as any lifetime of resignation.
So if you don’t want this, then what do you want?
I didn’t know. I’d been taught to win, not to want. Or, more precisely, I had always been told that if I could just win, somehow, no matter the game, then I would never have to want for anything.
It hadn’t occurred to me to ask what would happen when winning revealed itself the culmination of inestimable compromises; when playing at ambition became unbearable.
Chess is a game of post-mortems.
Success depends as much on looking backward as it does on planning ahead.
Hermann Helms, one of the world’s greatest chess journalists, published his first chess column in 1893. From the early 1900s on, he served as chess reporter for The New York Times, writing for more than fifty years. Even then, the length of Helms’ career is a blip in the long tradition of chess analysis. When my dad started our schedule of endless matches that humid, suffocating summer before third grade, AI hadn’t yet overtaken the study of chess, and dissecting every move right after a game ended was ritual.
Founded in 1872, the Paris Institute of Political Studies, more commonly known as Sciences Po, predates Helms’ career. In the field of social sciences, Sciences Po is considered France’s leading university; its graduate school for international affairs is ranked second in the world. The main campus, home to most of the graduate programs, resides in the same arrondissement as the Oulipians’ Café de Flore and the existentialists’ Les Deux Magots.
A quick look at an unofficial list of alumni yields heads of state, heads of international organizations, and countless other members of the global elite.
In the summer of 2018, several months before I found myself uprooted for entirely different reasons, my deferral clock ran out, and I declined a place in their next cohort.
At my great-aunt’s dining table, fifteen time zones and thirteen years away from the last chess game my dad and I had ever played, I was trying to unpack everything.
Is it human instinct to consider the future, what could be? And from there, isn’t it a simple step sideways to what could have been?
The Atlantic tells me, “Imagining the future is just another form of memory.” Half a year earlier, studies featured in The New York Times reported the same thing: the human mind dwells on the future, and it mines past experiences to simulate future possibilities. Mistakes, regrets, prospects — we contemplate them all using the same neural circuitry.
Here, again, the cumulative: our past circumscribes our futures. The most vivid possibilities are the ones we have the most material for. Consider, then, the difficulty of imagining far-off outcomes. What do you want to be when you grow up? Where will you be in ten years?
The vast majority of us can’t actually answer these questions. To fill in the blanks, or to give ourselves a scaffold for speculation, we turn to “cultural life scripts,” a series of milestones that our particular cultures expect us to reach.
I haven’t reviewed a chess game in years, but falling back into the scene was easy: the table’s edge biting into my forearms; the ache building behind my eyes; the tacit acceptance of responsibility. Likewise, looking back on these imagined futures, my first instinct was guilt over all the doors I didn’t pass through, the opportunities I should have seized.
How did you get here? What could you have done differently?
Instead I found myself at eleven years old, interrogated by a game log full of question marks, venturing beyond frustration and despair for the first time. A childhood of relentless drills couldn’t change the decades of prior experience my dad challenged me to overcome with every game. Ticking all the boxes on an admissions checklist didn’t change the quirks of my brain chemistry; didn’t change the hoops that international students had to jump through; didn’t quite help me through the doors that “merit” had supposedly unlocked.
There, then, with new information, I continued assembling the possibility that my decisions aren’t the sole determinants of failure; that we are not entirely to blame for being, always, outmatched.
Economics tells us that nothing comes without a cost.
In a world of finite resources, scarcity is unavoidable, and so is choice. The concept of opportunity cost describes the losses we incur with every decision we make — a valuation of chances missed and roads not taken.
No chess game ends without exchanges.
Most theoreticians assign each piece a point value, the better to evaluate the merits of one sacrifice over the other. A queen is worth more than a rook, which is worth more than a bishop, which is worth more than a pawn — which can be worth as much or more than any of these, if it can fight its way through the board.
The myth of merit is a gambit, a careful construction of acceptable risks.
It pulls us into a series of rigged games on the promise that we can earn our way out, that our gains will be guiltless when we do. It measures us against countless other, better lives, on the assurance that these are the only ones we can forfeit, the only ones we will ever have to answer for.
But opportunity cost, strictly speaking, denotes the value of the best alternative forgone.
In the movie WarGames, a supercomputer is programmed to run endless war simulations. Of course, in every scenario, the goal is decisive victory. Beyond that, its purpose is to learn, to identify the best possible scenarios and how to engineer them. Crisis comes because the supercomputer is linked to the US military’s nuclear weapons control system: prompted to run simulations of nuclear war, the supercomputer is driven to win the game, and it can’t distinguish between simulation and reality.
Buying into the illusion of merit entails a lifetime of missing out and falling short. But we give up the best alternative before the accounting even begins. Here, I think, is merit’s biggest play: that, given tenuous comforts, we accept the premise that we are only ever playing for and against ourselves; that the worth of a life can be quantified and appraised as such; that the battle for wins of ever higher value is the only one worth fighting.
In never looking beyond the board, we forgo the possibility of doing away with it entirely.
Cycling through every possible iteration of nuclear war, WarGames’ supercomputer learns that engineering any kind of meaningful victory is impossible.
It gives up control of the nuclear arsenal, observing that it has been drawn into a strange game.
The only winning move, it concludes, is not to play.
There’s a lot happening in the world right now, offline and online. Unfortunately, the web isn’t free of malicious people/groups1Which can include state forces, depending on where you live seeking to monitor, suppress, or harm people who speak out against injustice, inequality, and oppression.
Lately I’ve been fielding questions from friends who want to take extra precautions to protect themselves and their loved ones online. In case it might be helpful to others, I’ve compiled recommendations and resources here.
Not everybody is familiar or comfortable with tech, so I’ve tried to stick to safe, secure solutions that are easy to use. More notes at the end of the post.
VPNs
A virtual private network (VPN) protects you by obscuring the details of your web activity/traffic. Try to use VPNs as much as you can. Look for ones that don’t keep logs of your network activity / usage of the VPN service itself.
Good free options:
ProtonVPN: No data limits; mobile apps available; run by the same people behind ProtonMail (free encrypted email service)
TunnelBear: 500 MB limit per month, but that should be fine if you’re mostly using these when accessing sensitive stuff like email & socmed; mobile apps available
The Electronic Frontier Foundation has a one-page guide to help you learn more about how VPNs work and what features you should look at.
Browsers
Some browsers are more secure than others.
Firefox and Tor are open-source projects2Meaning their code is freely available, so people can check if there are any malicious scripts or critical flaws in the software run by nonprofits dedicated to online privacy and security. In practice, this means these browsers are significantly less likely to collect excessive personal information and/or attempt to sell that to third parties.
Tor: For most people, Firefox should be fine. The Tor browser is a bit more complex, and it can be finicky to use. CNET created a beginner’s guide to Tor if you want to give it a try.
General reminders:
Please clear your cache + cookies + browser history regularly. You can usually find these in your browser’s Settings pane.
Don’t let your browser save passwords for the websites you visit. Use a secure password manager instead.
Browser Extensions
You can install some extensions to enhance your browser’s security. Here are some extensions frequently recommended by cybersecurity professionals/groups:
Privacy Badger: Blocks third-party trackers (e.g., advertisers) from collecting info on the websites you browse/visit
Disconnect.me: Blocks most trackers, cookies, beacons used on the websites you browse
HTTPS Everywhere: Forces your browser to automatically use HTTPS-encrypted versions of websites whenever available
If you want to install other browser extensions, remember to vet them thoroughly. Anyone can publish an extension, so you’re bound to run into ones that aren’t secure, or worse, are shady by design.
Passwords
Strong passwords are long, random, and unique to each account. Yes, this means you probably can’t memorize secure passwords for all of your accounts. No, this doesn’t mean you should use the same one for multiple websites. (Don’t, don’t, don’t use the same password for multiple accounts. Please.)
Instead, you should use a password manager. The best ones help you generate random, hard-to-crack passwords for different accounts; store your credentials in encrypted “vaults”; and manage your passwords across multiple devices.
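Under the hood, “random, hard-to-crack” means drawing characters from a cryptographically secure source rather than ordinary pseudo-randomness. Here’s a minimal sketch of what a password manager’s generator does — not any particular product’s code, just the general idea, using Python’s `secrets` module:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Draw each character from a cryptographically secure RNG,
    the way password managers' generators do."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

At 20 characters over a roughly 94-symbol alphabet, that’s about 131 bits of entropy — far beyond practical brute force, and far better than anything you’d invent (or reuse) yourself.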
Best free options:
BitWarden: Open-source, no usage limits. Windows, Mac, Linux, iOS, Android, and browser extensions available.
KeePass: Open-source, no usage limits, but not as polished or user-friendly as BitWarden. Windows, Mac, Linux, and browser extensions available. No official mobile apps but there are some recommended by the KeePass project team themselves.
LastPass: Popular free option, paid upgrades available. Windows, iOS, Android, and browser extensions available.
Two-Factor Authentication (2FA)
Two-factor authentication adds another layer of security to your online accounts. A website or app with 2FA will verify your identity using another piece of info (other than your password) before granting access to your account.
Most 2FA options will need you to use authenticator apps. These sync with your chosen website/app/service to generate unique codes whenever you need to log in.
Here are a couple of free authenticator apps to consider:
Google Authenticator: simple, straightforward app; supports scanning QR codes so you can automatically add a service
Authy: more features, including support for running the Authy app on multiple devices
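Under the hood, most authenticator apps implement the TOTP standard: an HMAC over a shared secret and the current 30-second time step. Here’s a minimal sketch of that recipe (RFC 4226/6238) — not any app’s actual code, just the math both sides run:

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                        # "dynamic truncation" step
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over the current time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // interval)
```

The QR code you scan is essentially just that base32 secret plus some labels. The server and your phone each run this same computation, which is why the codes match without any network connection — and why the codes expire every 30 seconds.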
Email + Messaging
Reading other people’s correspondence is creepy, but unfortunately, there are a lot of creeps3Which can include state forces, depending on where you live out there. Here are some services that can help protect you from them:
ProtonMail: Free email client that offers end-to-end encryption by default
Mailvelope: Open-source browser extension that applies end-to-end encryption to web-based email accounts (e.g., Gmail, Yahoo, etc.)
Signal: Most secure messaging option by far. Open-source, end-to-end encryption, now also includes image blurring. Windows, Mac, mobile apps available
General reminders:
Please don’t leave yourself signed into your email account4or any other online account, really by default.
Avoid using your email or social media accounts to automatically register for / log in to other websites.
Remember that messaging is a two-way activity: Your messages also reside in the recipient’s inbox/accounts, so if those get compromised, your information is at risk, too. Encourage friends and family to be more cautious in their online communications.
Image Scrubbing
When posting photos (and videos!) online, check if you’ve captured people’s faces or other identifiable marks. This kind of information is being used to track down people these days. The same holds true for metadata, i.e., information about your camera / device that is automatically embedded in your image file.
Here are some tools + tips to help you blur out identifiable features in photos:
Edit your photo BEFORE uploading it. Don’t just rely on, say, Twitter’s or Instagram’s built-in tools to scribble over people’s faces. Why? The original photo is still saved somewhere on their server, and the scribbles are just overlaid. Don’t risk it; edit before you upload.
Use the Clone Stamp tool rather than the Blur tool (some people claim that the latter is easy to reverse).
Here are some free tools + a tip to remove metadata from photos before you upload them:
Instead of posting the original image file, take a screenshot of it and upload that instead.
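Metadata can also be stripped locally before you upload. One common approach is to copy only the raw pixels into a fresh image, leaving EXIF/GPS tags behind; here’s a sketch of that idea using the Pillow imaging library (an assumption on my part — any reputable EXIF-removal tool does the same job):

```python
# Sketch: re-encode pixel data into a brand-new image so camera/GPS
# metadata is simply never copied over. Requires the Pillow library
# (pip install Pillow); filenames are placeholders.
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    img = Image.open(src)
    clean = Image.new(img.mode, img.size)   # new image starts with no metadata
    clean.putdata(list(img.getdata()))      # copy only the raw pixel values
    clean.save(dst)
```

The screenshot trick above works for the same reason: a screenshot is a brand-new image that never inherited the original file’s metadata.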
Other Tools/Tips
As much as possible, avoid giving identifiable personal information (birthday, phone number, address, etc) to any online platform. Avoid linking different accounts to each other, too.
Have you been tagging your location on your social media posts? Stop that.
If you’re signing petitions that display your signature/particulars to the public, use throwaway/burner emails. There are services like Guerrilla Mail for this.
Avoid using your personal email address in online forms, miscellaneous registrations, etc. Instead, create an account JUST for use on public forms/websites etc., and make sure it’s not linked to any of your personal accounts.
If you ever need multiple email addresses (e.g., for various petitions hosted on the same website, or something like that), remember that Gmail lets you create “aliases” for your email. Add a period anywhere in your username and/or use “@googlemail.com” instead of “@gmail.com” — most forms will read these as new / different addresses, but any mail will still end up in your inbox.
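For illustration: the dot trick scales quickly, since a username with n letters has n−1 gaps, each of which can hold a dot or not — 2^(n−1) variants in total, all delivering to the same inbox. A toy sketch (a hypothetical helper, just to show the combinatorics):

```python
from itertools import combinations

def dot_aliases(user: str):
    """Yield every Gmail 'dot' alias for a username.
    Gmail ignores dots, so all of these reach the same inbox."""
    gaps = range(1, len(user))                 # positions between characters
    for r in range(len(user)):                 # how many dots to insert
        for positions in combinations(gaps, r):
            s = user
            for i in sorted(positions, reverse=True):
                s = s[:i] + "." + s[i:]        # insert right-to-left
            yield s + "@gmail.com"
```

So even a short username like “abc” gives you four addresses (abc, a.bc, ab.c, a.b.c) that most forms will treat as distinct.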
Speaking of email: have you emailed Congress to remind them to be public servants and work for Filipinos’ best interests? You should. Here’s an app to help you email members of Congress about the Terror Bill.
Double-check links before you click them. Avoid downloading things unless you know where they’re from.
Keep your apps and software updated. A lot of breaches happen through old / outdated programs that get exploited.
Feminist cyber activism organization HACK*BLOSSOM has a comprehensive DIY guide, including tips for mobile security.
Not exactly for digital activities, but if you’re at a protest and trouble’s brewing, here’s Teen Vogue’s guide to safely filming police misconduct.
Right now, there’s a lot of information flying around online. Here are some Carrd links that could help you learn more about some of the critical issues / events going on:
This isn’t an exhaustive guide, nor does this post claim to be the last word on cybersecurity. Digital security has far too many dimensions to be tackled in a single post — and anyway, I’m not a cybersecurity professional. I’ve done as much research as I can to vet recommended programs / tools / tips here, but in the end, I’m just another nerd trying to make tech accessible and useful.
After all, technology is not, and never has been, neutral.5I have lots of thoughts about this, but that’s for another post. Anyone who claims otherwise is, at best, oblivious to current events, or at worst, deliberately obscuring the many ways technology can (and does) inflict real harm on people.
That said, “not neutral” doesn’t mean “all bad.” Digital spaces also offer us opportunities to raise awareness about critical issues; band together6 Especially in the middle of a pandemic and take action in different ways; and, well, try to create a better world for everyone. Taking those opportunities and standing up for what’s right shouldn’t have to result in danger for yourself or your loved ones, but here we are. I hope this post makes it a bit easier for people to stay safe, to be brave.