As developers, we often spend too much time thinking in black and white. We assume too much.[1] Developers want things to fit into perfectly neat boxes: Python is the best for x, WebStorm is the best for y. Everyone has a bias towards what they’re familiar with.[2] But the real world doesn’t fit into nice boxes. There’s chaos and mess everywhere, and approaches that ignore the chaos tend to fail.
Some choices in software development come down to personal preference. Is there an objective measure for which editor is best? For which language is the most enjoyable? Not really. Most papers I’ve read that attempt to measure these qualities struggle to define good metrics. Other choices have data to back them up: how many deploys happened in the last week? How quickly were incidents resolved?
As an enabler, I often have to use both perspectives to shape discussions. The quantitative (data) versus qualitative (experiences) mindset is something drilled into UX research[3] work. Numbers don’t tell the full story, while experiences could just be anecdotes. Combining both is an important part of analysing and changing behaviour, so that’s what I do: measure for raw data, collect experiences to shape an opinion.
To give an example from the news industry, we have many metrics that measure the success of an article: typically clicks, completion rate, and the number of shares. Combined with user demographics, these can give a sense of success. However, it’s only a partial story. A clickbait title on a two-paragraph article would score high on clicks and completion rate, but would harm the reader’s overall opinion of the news site. There is no metric that can measure “how did you feel about this article?” at scale. We can make approximations, but they’re numbers fitted to a trend rather than an understanding of the reasons behind the trend.
Let’s say we track how many users keep coming back after reading the clickbait articles. If they don’t like it, weekly active users should drop, right? But many factors influence that metric: adverts, subscription prices, traffic source, the content topic itself, the thumbnail used, whether the reader has an emotional attachment to the topic, and so on. Too many variables make it hard to assess, and with this much “human” in the mix, the insight only measures what people do, not why they did it.
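To make the confounding concrete, here’s a toy sketch of that situation. All the factor names and effect sizes below are invented for illustration; they don’t come from any real analytics system:

```python
import random

random.seed(42)

# Hypothetical model: weekly active users (WAU) as a base audience nudged by
# several independent factors, none of which reflect content quality.
def simulate_wau(base=10_000, clickbait_penalty=0.0):
    ad_load_effect = random.uniform(-0.05, 0.05)      # ad pressure varies weekly
    price_effect = random.uniform(-0.03, 0.03)        # subscription price tests
    traffic_mix_effect = random.uniform(-0.08, 0.08)  # search vs social referrals
    topic_effect = random.uniform(-0.10, 0.10)        # news cycle intensity
    noise = ad_load_effect + price_effect + traffic_mix_effect + topic_effect
    return round(base * (1 + noise - clickbait_penalty))

# A small but real clickbait penalty (2%) is hard to spot among the noise:
honest = [simulate_wau() for _ in range(4)]
clickbait = [simulate_wau(clickbait_penalty=0.02) for _ in range(4)]
print("honest weeks:   ", honest)
print("clickbait weeks:", clickbait)
```

Even though the clickbait penalty is real in this simulation, it drowns in the week-to-week noise from the other factors, which is exactly why a dip (or a non-dip) in WAU can’t tell you how readers actually felt.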
At the same time, people are bad at expressing themselves. It’s hard to collect accurate user assessments of what they’ve done. They forget things, or they’re unable to explain why they did or didn’t do something. Often they’ll believe they did things that they didn’t.
These are the exact same problems I face when shaping an opinion that represents a collective - both at my day job, and in my volunteering role as the leader of Tekna's Network for Developers.
I don’t have an objectively correct solution to solve this problem, but here’s how I approach mixing experience-gathering with data-gathering:
Don’t be afraid to admit what you don’t know. There are big pockets of empty space in my knowledge, and they won’t get filled if I’m not willing to listen to others. I don’t particularly care about being personally right. I care about things being done well. If it’s not my way of doing things, that’s usually fine.[4]
Reflect on my own opinions. Why do I have them? Where did they come from? What would cause me to change my opinions? Are my opinions valid, or something I can’t justify?
Empathy. Understand and listen to others. Let others share their opinions without judgement. The more others trust me, the easier it is to have a good understanding of what they believe, and why they believe it.
To figure out difficult problems, there has to be chaos. Chaos can be uncertainty, a lack of clarity, or unknowns. Any difficult problem will have unknown unknowns. Share little views into this chaos with others. Ask questions about difficult topics to a wider audience. The insight that others provide often reduces the unknowns. At the same time, don’t introduce too much chaos: it’s tiring for others, and it can be damaging. My job often involves finding sense in chaos. It’s something I enjoy and am good at, but not everyone enjoys that.[5] Keep it to limited subsets of chaos where you need input from others.
Identify the qualities that can have metrics, and gather as many metrics as possible. Challenge your assumptions: don’t settle for the first metric that seems to prove an opinion right. A diversified combination of sources leads to a richer understanding of trends in the data.
Don’t be afraid to challenge opinions, both your own and others’. If you approach it with empathy and good intentions, others will respond well. It helps when you admit you’re questioning for the sake of debate rather than disagreeing to pick a fight. Be vocal about your own opinion, but in the right settings. Opinions form debates, and debates are incredibly important for behaviour change. My worst nightmare is a team of people working together who can’t have productive disagreements.
One data point is not enough to form an opinion on. If people’s experiences could be an anecdote, so can one data point. If you want to make a claim, back it up with as much data as you can.
Metrics shouldn’t be goals, but they can inform what to set as goals.
Sometimes, you just have to trust your gut. Generalisations simplify the process of talking about a problem, and there’s a line between being too general and being too specific. Sometimes you should say “based on what I know and feel, this is what I believe”. An opinion shapes direction, but don’t be afraid to change both your opinion and your direction when more data and experiences become available to you.
A mix of data and opinions makes my role easier. It takes time to establish the collection of both data and experiences, but I think it makes my input richer. I could do everything based on my own gut feeling, but then I’d probably base a lot of decisions on inaccurate assumptions.
[1] This is also an assumption, based on my personal experiences in the tech world for around two decades. Meta, right?
[2] Fear, uncertainty, and doubt cause a reluctance towards the unknown. Why use Rust when you’ve never hit a memory safety issue in C++? Is it worth spending time learning and changing a codebase when it’s unclear what benefit it might have?
[3] Protip: if you want to understand how to approach experience-gathering, sit with some UX researchers for a while. I’ve been lucky to have close colleagues and friends who helped me understand how to really gain insight from user interviews.
[4] There are some hard lines for me though, where it’ll be difficult to keep my bias in check. Being selective about the areas where I let bias show works quite well. I can excuse myself from discussions, or openly state my personal bias.
[5] Sometimes I get feedback like “x could’ve been done differently because y”, and quite often “y” was the whole intention. It takes a lot of work to introduce chaos in a structured way. Much like a self-organised audience unconference, there’s a lot of work that goes into helping others find meaning in chaos.