The Pleasures of a Lifelong Intern

There is a kind of energy that comes with being new to something. You ask questions without embarrassment, make mistakes without the weight of a reputation, and try to learn something real every single day. I had that when I was interning. I thought it was a feeling I would never experience again.

I was wrong. AI gave it back to me.

I am a so-called professional developer. I have shipped production systems, led migrations, built APIs that handle real load. But somewhere in that accumulation of competence, I felt something change. The curiosity got more selective. I stopped reaching for things I didn’t already understand.

Using LLMs daily changed that, and not in the obvious “I write code faster” way.

The intern feeling

When I was interning, I had a superpower: access to people who knew more than me and, more importantly, the social permission to bother them constantly. I could ask the same question three different ways and nobody judged me. I could float half-formed ideas and find out quickly if they had legs.

That’s what I’ve found to be the most useful thing about an LLM: a thinking partner who’s there whenever you are.

I can ask it to explain a concept like I’ve never touched a computer. Then ask it to go deeper, assuming I know the fundamentals. Then deeper again, assuming I’ve read the paper. It adjusts. It doesn’t get impatient. And it lets me work through the messy middle of an idea without having to perform confidence I don’t have.

An LLM, if used well, rewards actually trying to figure things out.

The result is that I’m curious again in a way that feels almost embarrassing to admit. I go down rabbit holes I would’ve skipped before because they felt too far from the task. I try approaches I would’ve written off as too risky. I question assumptions I’d accepted for years simply because nobody around me was questioning them either.

The part people get wrong

I want to be direct about something, because the conversation around AI gets sloppy here.

LLMs are trained to agree with you. That’s not an accident. It’s a consequence of how they’re built. Reinforcement learning from human feedback rewards responses that people rate positively, and people tend to rate responses positively when they confirm what they already believe. The model is, structurally, inclined to tell you what you want to hear.

Which means if you use it uncritically, you’re not filtering bad ideas. You’re laundering them. You ask a leading question, get a flattering answer, walk away more confident in something that deserved scrutiny.

I push back on the responses I get. I ask it to argue the other side. I ask it to tell me what could specifically go wrong with an approach. I treat a confident LLM answer the same way I’d treat a confident answer from someone who wants me to like them: with interest, not automatic trust.

There’s also the retention problem. A study I came across recently found that the best way to actually absorb information from an LLM is to combine it with your own note-taking (https://www.sciencedirect.com/science/article/pii/S0360131525002829). Reading a good explanation is not the same as knowing something. I take notes. I write things in my own words. The LLM surfaces the understanding; I do the work of owning it.

See one, do one, teach one

I’m a big Grey’s Anatomy fan 🩺, and I remember hearing this in the show: medical training runs on a simple cycle. See one. Do one. Teach one. Each stage deepens the understanding.

Personally, I think this is exactly the right mental model for working with AI.

See one: use the LLM to actually understand something, at whatever depth it takes. Ask follow-up questions. Push until the concept is clear.

Do one: close the conversation and do the thing yourself. Don’t have it write all the code for you. Write it. Make the mistakes. Hit the edge cases. The LLM can help when you’re stuck, but you have to be the one driving.

Teach one: explain what you learned. Push back on the LLM’s answers and debate them. Maybe write a blog post about it 😉. The act of teaching exposes every gap you thought you had filled.

Don’t just stop at “see one.” An LLM, a senior dev, a teacher of any kind — they give you a map. It’s your job to walk the territory.

What this actually is

What AI has done for me, when I use it well, is compress the feedback loop of learning without compressing the learning itself.

The curiosity I felt as an intern wasn’t about ignorance. It was about permission. Permission to not know things, to ask questions, to try things that might fail.

The best thing AI has given me back is not productivity. It’s that permission. And with it, the feeling that the work is still interesting. That I’m still figuring things out. That showing up tomorrow might surface something I didn’t expect.

That’s worth more than faster autocomplete.