Choice Density: How I Think About AI and Authenticity

writing
tech
general
Reframing the ‘human vs AI’ debate: what matters isn’t whether AI was involved, but how many choices you made along the way. I call this ‘choice density’ – and it applies to writing, code, work communication, and everything else.
Author: Alex Strick van Linschoten

Published: January 21, 2026

I’ve been thinking about what it actually means to create something “with AI” versus creating something “yourself.” The usual framing treats this as binary: either you wrote it, or the machine did. Tools like Pangram try to detect AI-generated text. People unfollow accounts that seem to be posting slop. The implicit question is always: was this made by a human or not?

I don’t think that’s the right question.

The choice space

Here’s a different way to look at it. Imagine life as an infinitely branching choice space. Every choice you make opens up a new infinite set of choices. When you create something – a blog post, a Slack message, a piece of code – you’re picking a specific path through that space.

The question isn’t whether AI was involved. The question is: how many choices did a human make?

I’ve started calling this “choice density.” A piece of writing with high choice density has been shaped by many human decisions: what to include, what to cut, how to phrase something, when to push back on a suggestion, when to add personal context. A piece with low choice density is the result of minimal human input – prompt in, output out, copy, paste, done.

Both kinds of slop

The discourse around AI tends to focus on “AI slop” – content that’s obviously machine-generated, generic, soulless. But there’s another kind of slop that doesn’t get talked about as much: what I’d call “brain slop.” A human sitting down and writing a first draft with no iteration, no editing, no refinement. Stream of consciousness dumped onto the page.

Both can be low-choice. Both can be generic. The presence or absence of AI isn’t what determines quality or authenticity. What matters is how many choices someone made along the way.

When I think about the things I’ve written that felt most like mine, they weren’t necessarily the ones I wrote entirely by hand. They were the ones where I made the most decisions. Where I pushed back, iterated, added context, cut things that didn’t fit, injected my own experience and taste.

This applies everywhere

The choice density lens isn’t just about blog posts or tweets. It applies to any domain where AI is showing up.

Work communication. When a colleague asks a question in Slack, do you copy the thread into ChatGPT, paste back whatever it says, and move on? Or do you actually think about the response – curate it, add your own understanding, make sure it fits the context of your team and the person asking?

Code. Agentic coding tools can now generate entire pull requests. You can say “fix this bug” and watch the agent make changes across multiple files. But there’s a spectrum here too. On one end: fully autonomous, you don’t even look at what it did. On the other end: you review the plan, you push back on architectural decisions that don’t fit your team’s approach, you catch the places where the model doesn’t understand your codebase’s conventions. Same tool, wildly different choice density.

The pattern is the same in every domain. It’s not about whether AI was used. It’s about how many human choices shaped the outcome.

The enabling side

There’s something that often gets left out of these conversations: for some people, AI tools aren’t just convenient. They’re enabling.

Not everyone has the same cognitive resources available on any given day. Chronic illness, mental health conditions, neurodivergence, caregiver responsibilities, burnout – there are lots of reasons someone might have limited capacity to wrestle their thoughts into polished prose. The “spoon theory” from the chronic illness community captures this well: you only have so many spoons each day, and when they’re gone, they’re gone.

For someone with limited spoons, the choice might not be between “write it yourself” and “use AI.” It might be between “use AI to help get your thoughts out” and “don’t express yourself at all.”

This doesn’t mean the choice density question disappears. It still matters how much you engage with the process, how many decisions you make. But it reframes the stakes. AI assistance isn’t inherently a shortcut or a cheat. For some people, it’s the difference between participating and staying silent.

The slippery slope

Here’s where I don’t have clean answers.

These tools are designed to reduce friction. They’re trained to be helpful, agreeable, generative. They say yes. That’s what makes them useful, but it’s also what makes them dangerous.

A human collaborator might push back. They might say “I don’t think you’ve thought this through” or “that’s not how we do things here.” They have their own stakes in the work. An AI assistant, by default, doesn’t. It’s optimized to help you get to an output, not to question whether the output is actually good or actually yours.

So the responsibility for maintaining choice density falls entirely on the user. And that’s a lot to ask – especially when your spoons are low, especially when the tool is designed to make acceptance easy and pushback hard.

I don’t think there’s an out-of-the-box way to use these tools well. Just like social media platforms are designed to be addictive, AI assistants are designed to be frictionless. Using them in a way that preserves your humanity requires building your own guardrails. Deliberate friction. Processes that force you to make choices. People who can tell you when you’re slipping.

It’s a discipline. I’m not sure it’s a discipline I’ve mastered.

What I’m still figuring out

I don’t have a formula for “enough” choices. I can’t tell you where the line is between high-choice and low-choice use. I’m not even sure the line is stable – it probably depends on the context, the stakes, what you’re trying to create.

What I do think is that “was AI involved?” is the wrong question. It collapses a rich spectrum into a binary, and it misses what actually matters.

The better question is: how many choices did you make? How much of your taste, your experience, your judgment, your context went into shaping this thing? If you think of the output as a path through an infinite choice space, how many forks did you consciously navigate?

That’s what I’m trying to pay attention to now. Not whether I used AI, but how many choices I made along the way.


I’m still working through these ideas. If you’ve thought about this differently, I’d genuinely like to hear it.