Everything is subjective
As a writer with obsessive-compulsive disorder (OCD), I’m unendingly preoccupied with corralling the disparate ideas – for blogs, articles, interviews, books – that flicker through my mind. I’m particularly hooked on the concept of triaging those ideas, or creating a grand, overarching system that neatly categorises my daily meanderings. Devising such a foolproof taxonomy has been unfailingly impossible, but my tireless mind returns to the challenge periodically, hoping to stumble upon a solution.
During one recent episode, grappling with the same recurring problem, I had a breakthrough of sorts. Finally, I grasped the crux of this quintessential dilemma: I can never organise all my ideas because they, like everything else in life, are unflinchingly subjective. No category can be wholly accurate. No bucket can be a definitive match. No organisational schematic can be finished, because something unexpected will always crop up, defying descriptions and refusing to be pigeonholed. Attempting to arrange my ideas is essentially futile, because any imposed paradigm will be arbitrary and flawed.
My latest idea-sorting algorithm even considered fancy writing values, one of which – for all my work to be ‘purposeful’ – struck me as exceptionally vague. What does ‘purposeful’ even mean? I attempted to define it and break it down into sub-dynamics, but where do you begin and end with such a process? Every person has their own take on the notion of ‘purposeful’ content, and even if we did agree on a satisfactory understanding, scoring my writing ideas by their ‘purposefulness’ – to decide which project to tackle next – would be another inherently subjective task. With each degree of separation, objective truth falls further into fantasy, so when do we cut our losses and stop chasing it?
Of course, this core subjectivity is true of most things, upon closer inspection. Even if granted a granular evaluation matrix for anything – scoring exams, grading reports, sanctioning promotions, recruiting interns – one’s judgement of how well something fulfils that matrix will still be unique. Therefore, to a certain extent, scoring or grading something will always rely on gut instinct – even if we wrap it up in a heap of fancy terminology and verbose formulae. Perhaps by eliminating tedious and biased data tinkering, we can trim the fat and arrive at the same conclusions – saving time, effort and confusion along the way.
Take baseball as an example. Oakland A’s supremo Billy Beane, the doyen of sabermetric decision-making, famously built a proprietary database quantifying player performance, but said database still reflected his personal priors. After all, Beane selected the seed material – in this case, the advanced analytics – to be included or excluded from those calculations. Beane also determined the weighting applied to each datapoint. All such judgements are beholden to personal preference, so the nerdy accoutrements – spreadsheets, charts, graphs, heatmaps – were arguably a snobbish means to a primal, rudimentary end.
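That weighting point can be made concrete with a toy sketch – hypothetical players, invented numbers, Python purely for illustration, nothing resembling Beane’s actual database. The same raw statistics crown a different ‘best’ player depending entirely on which weights the evaluator happens to prefer:

```python
# Two hypothetical players with made-up stats (all names and numbers invented).
players = {
    "Player A": {"on_base": 0.420, "slugging": 0.410, "defense": 0.55},
    "Player B": {"on_base": 0.310, "slugging": 0.520, "defense": 0.70},
}

def rank(weights):
    """Score each player as a weighted sum of their stats, best first."""
    scores = {
        name: sum(weights[stat] * value for stat, value in stats.items())
        for name, stats in players.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Two equally defensible weightings produce two different 'best' players.
obp_first = {"on_base": 0.6, "slugging": 0.3, "defense": 0.1}
glove_first = {"on_base": 0.2, "slugging": 0.3, "defense": 0.5}

print(rank(obp_first))   # Player A tops this list
print(rank(glove_first)) # Player B tops this one
```

The arithmetic is ‘objective’ in both runs; the preference is smuggled in through the weights, which is precisely the point.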
Beane’s analytical revolution was depicted heroically in Moneyball, Michael Lewis’ bestselling book later adapted into a Brad Pitt blockbuster. A central tenet of that story – indeed, a definitive cultural fault line of modern Major League Baseball – was the battle between traditional scouts and disruptive brainiacs for influence in key decision-making. That squabble is a microcosm of the wider clash between subjectivity and objectivity, and I increasingly side with the crusty scouts, respecting their intangible, multi-layered expertise more than the faux quantification of one-dimensional analysts.
Ultimately, though, scouts and statisticians speak about the same things in different ways. Sure, people are excited by newfangled metrics like ‘exit velocity’ and ‘pitch tunnelling,’ but they are fancy synonyms for what scouts have prized since the game’s creation – ‘hitting the ball hard’ and ‘throwing different pitches from the same arm slot.’ Endowing those attributes with a number or percentage may make them more digestible and sortable, but do you really need mathematical verification to trust your eyes? In certain scenarios, perhaps, but not always. There are multiple ways of arriving at the same destination, and picking the most efficient route in a given situation – rather than sticking to one draconian course – is surely preferable.
Beane has never actually won the World Series, of course, whereas so-called traditionalists like Tony La Russa and Bruce Bochy have rings aplenty. Meanwhile, another world champion, Joe Maddon, was recently fired as manager of the Los Angeles Angels due to philosophical differences with the team’s analytically driven front office. According to Maddon, the Angels even had a matrix – devised by number-crunching executives devoid of playing experience – that tracked relief pitcher usage and projected the rest required by individual players. The front office often made certain pitchers unavailable to Maddon, trusting models grounded in pop kinesiology more than the wisdom gleaned from his 47 years in baseball. ‘Honestly, that’s insulting,’ Maddon wrote in The Book of Joe, his memoir, and I wholeheartedly concur. What gives these people the right to trounce ancient baseball sagacity with blueprints hewn from Ivy League extrapolation?
My thesis is not a treatise on solipsism or nihilism, but rather a query of efficiency: are big data and gut instinct one and the same, with the latter being a quicker, easier way of making the same decisions? In the context of my central conundrum, instead of identifying exciting writing projects by ranking ideas against esoteric criteria, could I remove reams of costly friction and arrive at more inspiring selections sooner? Could following my heart, rather than my calculator, make me happier and freer? Perhaps.
Any system that follows a set of rules can be gamed. You can use artificial intelligence (AI) to mark college essays, but students will soon decode the priorities – varied sentence length, for instance, or volume of citations – baked into the application. We see this whenever Google tweaks its search ranking algorithm and bloggers delve into the detail, optimising content based on emphasised factors. People will always catch up and skew systems to their own advantage, so is it worth all the hassle to create them in the first place? Maybe not.
This dichotomy between inputs and outputs plagues our modern outlook. I see it everywhere, from home budgeting and interior design to professional planning and online entrepreneurship. Deep down, we know objectivity is a utopian pipedream, but still we fight it. Still, we push it. Still, we think we can figure it all out. This feeds into our wider zeitgeist of inefficient efficiency and overcomplication. We like to find complex ways of achieving simple outcomes, trading intellectual procrastination for ego massages and self-flagellation. That, to me, is ludicrous.
In a dog-eat-dog world of unchecked narcissism, where no fault can be admitted and no weakness acknowledged, confirmation bias has become our go-to defence mechanism. We favour information that affirms our pre-existing opinions while discarding data that outlines other possibilities. At a macro level – in organisations and political movements – this desperate need to validate dictated values often leads to tampering and propaganda. If an output is unfamiliar, unhelpful or unwanted, it is too easy to fiddle with the formula – extra filter here, additional asterisk there – so it produces more convenient results. Amid such a fragile reality, it is probably unwise to outsource our decision-making to predictive algorithms.
Certain data is undoubtedly useful. When planning future website content, for instance, it helps to know how your work has performed historically – principally via traffic measurements. Nevertheless, I’m increasingly suspicious of proprietary data – metrics of human design that are so needlessly granular as to obfuscate their own meaning. Give me minimal raw data from metronomic sources, pleasantly unmanipulated, and let me use it as one tool in my intuitive decision-making process. When you start making formula soup, bias sets in and value is lost. That is when I turn off, because knowing interference negates objective authenticity.
It is admittedly difficult to define ‘manipulated data,’ in keeping with my broader theme of fluid semantics. When, exactly, does raw data become manipulated? And who gets to decide the threshold of acceptable data cultivation? Nobody really knows, and that is my point. If we cannot agree on the floor and ceiling of our collective moral infrastructure – on definitions fundamental to the way things work – how can we ever agree on the conclusions it spits out?
It is often said that we live in a post-truth age of fake news and strategic misinformation. People have lost trust in once-authoritative sources of knowledge and opinion, as shown by Twitter’s haemorrhaging of users. However, entirely missing from the debate on truth is an acceptance of its eternal unattainability. Objective truth has never existed, nor will it miraculously appear from an Excel formula any time soon. Everything is subjective, including the data we so keenly worship. The sooner we realise that, the sooner we can stop wasting effort and follow our heart.
In conclusion, perhaps OCD can be seen as a surfeit of objectivity in a highly subjective world. Maybe the chafe between absolutist ideals and capricious realities is synonymous with the agony of obsessive compulsion. Perhaps embracing subjectivity – living in the grey, not the black or white – can set us free. Objectivity begets analysis paralysis, extremism and perfectionism, whereas subjectivity stokes context and compassion. Let us cherish subjectivity, then, rather than chastising it. We may just get more done that way, and be a little happier in the process.