What I do, and what I delegate to AI
January 6, 2026 · 6 min read
AI is not a productivity hack. It’s a leverage tool.
That distinction matters because hacks optimize the edges of your day. Leverage changes the shape of your work. It shifts what you spend your attention on, what you outsource, and what becomes your core loop.
I’ve found the useful question isn’t “How do I use AI more?” It’s “What is my job now that AI exists?”
Because if you treat AI like a smarter autocomplete, you’ll get marginal gains. If you treat it like an employee with infinite patience and inconsistent judgment, you can redesign your workflow.
This essay is how I separate what I should do from what I should delegate to AI, and why the boundary is less about capability and more about responsibility.
The principle: delegate execution, keep accountability
AI can do a lot. The trap is letting “can” turn into “should.”
My rule is: I delegate execution, but I keep accountability.
If the output can damage my reputation, confuse my users, or create future maintenance debt, then I don’t hand it off blindly. I might still use AI heavily, but I keep a human final pass in the loop.
This sounds obvious until you realize how often people delegate the exact thing they’re being paid for: judgment.
Judgment is the scarce asset. Execution is becoming cheap.
So the line I draw is simple: I keep anything that requires taste, ethics, or irreversible decisions. I delegate anything that is reversible, testable, or tedious.
What I should do
There are categories of work that remain stubbornly human, even if AI can technically assist.
Decide what matters
Choosing the problem is the work.
AI can propose ideas. It can’t own the consequences. It doesn’t feel the opportunity cost of shipping feature A instead of feature B. It doesn’t have a long-term relationship with your users. It doesn’t have a personal brand to protect.
I use AI to widen the option space, but I decide. Strategy, prioritization, and sequencing stay with me.
Define the spec and the constraints
Most failed projects fail at the boundary conditions.
What exactly does “done” mean? What do we optimize for: latency, cost, readability, accessibility, maintainability? What are we not doing?
AI is excellent at filling in a template. It is mediocre at knowing which constraints are non-negotiable in my context unless I articulate them.
So I write the spec. I make the tradeoffs explicit. I set the standards.
Own taste and coherence
Taste is the ability to say no to almost-right outputs.
AI can generate ten designs, ten paragraphs, ten approaches. It struggles to produce a single coherent, opinionated result that feels like me without a strong guiding hand.
I keep the editor role. I decide what fits the voice, the product, the aesthetic, the philosophy. Consistency is a brand asset, and consistency is downstream of taste.
Handle high stakes communication
Any message that can materially affect trust stays human-led.
Customer escalations. Partnership emails. Sensitive internal messages. Public statements. Anything where tone is the product.
I’ll use AI to draft, but I won’t send without rewriting. The point isn’t that AI can’t write. The point is that I’m accountable for how it lands.
Make irreversible calls
Security decisions. Legal language. Deleting data. Pricing changes. Architectural bets that will cost months to unwind.
AI can advise, but it does not pay the bill when things go wrong. When the decision is expensive to reverse, I treat AI like a consultant: useful input, not the decider.
What I delegate to AI
Now the fun part: the work AI is absurdly good at, provided I frame it correctly.
First drafts of anything
Writing, code, documentation, UI copy, outlines, meeting notes, proposals. The first draft is where perfectionism wastes the most time.
AI is a draft engine. I treat it like a cold-start eliminator. It gets me from zero to something, then I take over.
The value is not the draft. The value is the blank page disappearing.
Iteration and variation
- Give me five alternatives.
- Rewrite this in a calmer tone.
- Shorten this by 30% without losing meaning.
- Generate three UX approaches with pros and cons.
Humans are slow at generating variants because we get attached to our first idea. AI has no attachment. It will happily give you a menu.
This makes me a better editor, because editing is easier than inventing.
Research synthesis (with verification)
AI is great at summarizing a topic, extracting key points, and building a map of an unfamiliar domain.
But it’s not a source of truth. It’s a compression model.
So I delegate the initial synthesis, then I verify anything important against primary sources. AI gives me speed; I provide correctness.
The workflow I like is: brief me like I’m busy → list the claims that must be verified → give me the best primary sources → then I read the sources.
Boilerplate and glue code
Anything repetitive: data transforms, serialization, migrations, adapters, small utilities, test scaffolding, type definitions, regexes, config files.
This is where AI shines for dev work. It’s not replacing engineering. It’s replacing the parts of engineering that feel like moving boxes.
I still review, but I don’t manually type what I can generate and validate.
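As a concrete sketch of what “generate and validate” looks like in practice: below is the kind of glue function I’d happily let AI draft, followed by the quick sanity check I run before accepting it. The function and field names here are invented for illustration, not from any real codebase.

```python
# A typical AI-draftable glue function: flatten nested user records
# into flat dicts ready for a CSV export. All names are hypothetical.

def flatten_users(records):
    """Flatten [{'id': ..., 'profile': {'name': ..., 'email': ...}}, ...]."""
    return [
        {
            "id": r["id"],
            "name": r.get("profile", {}).get("name", ""),
            "email": r.get("profile", {}).get("email", ""),
        }
        for r in records
    ]

# The review step: a fast check I write myself before trusting the draft,
# including the edge case (missing profile fields) the AI might have missed.
sample = [
    {"id": 1, "profile": {"name": "Ada", "email": "ada@example.com"}},
    {"id": 2, "profile": {}},
]
flat = flatten_users(sample)
assert flat[0] == {"id": 1, "name": "Ada", "email": "ada@example.com"}
assert flat[1] == {"id": 2, "name": "", "email": ""}
```

The point is the asymmetry: generating this by hand is tedious, but validating it takes seconds. That asymmetry is what makes the delegation safe.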
Debugging assistance and second opinions
Rubber ducking is real. AI is a tireless rubber duck.
I paste an error, a stack trace, a snippet, and ask for hypotheses. I ask for the most likely causes, then the top three things to check. I ask it to propose minimal repros.
It doesn’t always get it right. But it reliably gives me a structured search tree, and that often saves time.
Turning my thoughts into structure
This is underrated.
When I have messy ideas, I’ll dump them into AI and ask it to produce:
- an outline
- a set of principles
- a decision matrix
- a spec
- a checklist
- or a short memo
AI is excellent at turning blobs into bones. I then decide what the skeleton should actually be.
A practical decision filter
When I’m unsure whether to delegate something, I ask:
- If this goes wrong, who pays? If the answer is me, I stay involved.
- Can I test the output quickly? If yes, delegate aggressively. If no, be careful.
- Is the main value judgment or execution? If it’s judgment, I lead. If it’s execution, AI leads.
- Will this output represent me? If yes, I edit. If no, I automate.
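The filter is a checklist, not an algorithm, but it can be sketched as one. This toy function is just the four questions above encoded in order of priority; the names and verdict strings are my own framing, nothing more.

```python
def should_delegate(i_pay_if_wrong: bool,
                    quickly_testable: bool,
                    mostly_execution: bool,
                    represents_me: bool) -> str:
    """Toy encoding of the four-question delegation filter."""
    if i_pay_if_wrong:
        return "stay involved: keep a human final pass"
    if not quickly_testable:
        return "be careful: hard to verify quickly"
    if not mostly_execution:
        return "keep it: the value is judgment"
    if represents_me:
        return "delegate the draft, edit before shipping"
    return "automate freely"

# e.g. boilerplate glue code: testable, pure execution, doesn't represent me
print(should_delegate(False, True, True, False))
```

Running it on boilerplate glue code lands on “automate freely,” which matches the list above: reversible, testable, tedious work is exactly what gets handed off.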
The bottom line
The biggest mindset shift is accepting that my value is moving up the stack.
AI doesn’t make me useless. It makes low level effort cheap. Which means the market will stop rewarding low level effort.
So I try to spend more time on:
- choosing the right problems
- setting constraints
- designing systems
- making tradeoffs
- building taste
- and shipping coherent work
And I delegate the rest.
Not because I’m avoiding work, but because I’m protecting the only resource that doesn’t scale: my attention.