Coding with Claude: the interesting work remains

Some friends in data science and engineering are anxious about how quickly AI coding tools such as Claude Code are being introduced into day-to-day work. They feel these tools are taking away creative work and replacing it with tedium: designing tests, curating data, evaluating outputs. Some say it feels like they’re babysitting AI. That surprises me.

I still pour a lot of design, research, and context-setting into getting these tools to work well. They’re context-hungry. The creative part of my work remains. If the AI is handling design and research independently for ambiguous problems, I’m probably using it poorly (send tips!).

As for reviewing outputs, how is that different from thorough code review, or reviewing a less experienced teammate’s work? I’m genuinely curious why others see it differently.

Maybe I’m not anxious because I was never particularly attached to writing code. I like figuring out the business problem, the tradeoffs, and what a good solution looks like from a user’s or researcher’s perspective. Writing fewer lines from scratch suits me fine. And if typing is therapeutic, I still type constantly. The context doesn’t write itself.

The anxiety I do take seriously is longer term. What happens when these models absorb more of the problem-solving, not just the implementation?

Several friends have mentioned that giving careful instructions to an AI is often easier than helping an early-career engineer get up to speed. So I understand why early-career engineers might be anxious. But organizations still need to invest in early-career team members: how else will we get senior contributors?

If working with these tools feels cumbersome or like babysitting, it’s worth asking why. The tool may be a poor fit for the task, or you may need a different model or workflow. People are figuring this out in public, and someone may already have solved the pain point you’re dealing with.
