I remember as the internet took off and you could just search for things, I thought it made programming too easy. You never had to actually learn how anything worked; you could just search for the specific answer, and someone else would have done the hard work of figuring out how to use the available tools for your particular type of problem.
Over the years, my feelings shifted, and I loved how the internet allowed me to accomplish so much more than I could have trying to figure it all out from books.
I wonder if AI will feel similar.
For programming, I don't like it. It's like a master carpenter building furniture from IKEA. Sure, it's faster, he doesn't have to think very hard, and the end result is acceptable, but he feels lazy, and after a while he feels like he is losing his skills.
The best days of computing for me were what you remember. A computer was just a blank slate. You turned it on, and had a ">" blinking on the screen. If you wanted it to do anything you had to write a program. And learning how to do that meant practice and study and reading... there were no shortcuts. It was challenging and frustrating and fun.
I've seen the following prediction from a few people and am starting to agree with it: software development (and possibly most knowledge work) will become like farming. A relatively small number of people will do with large machines what previously took armies of people. There will always be some people exploring the cutting edge of thought and feeding their insights into the machine, just as I imagine there are biochemists and soil biology experts who produce knowledge to inform the decisions made by people running large farming operations.
I imagine this will lead to profound shifts in the world that we can hardly predict. If we don't blow ourselves up, perhaps space exploration and colonization will become possible.
You could do that without that knowledge back in the day too; we've had languages higher-level than assembler forever.
It's just that the range of knowledge needed to get the most out of the machine is far smaller now. Before, you had to know how to write a ton of optimizations yourself; nowadays you have to know how to write your code so the compiler has an easy job optimizing it.
Before, you had to manage memory accesses directly; nowadays, making sure you're not jumping across memory too much and being aware of how the cache works is enough.
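To make that last point concrete, here's a minimal sketch (the function name and sizes are just for illustration) of what "being aware of how the cache works" means in practice: iterating a row-major matrix with the column index in the inner loop walks memory contiguously, which is all the cache needs to be happy.

```cpp
#include <vector>

// Sum a matrix stored row-major in one contiguous vector. Keeping the
// column index j in the inner loop means consecutive iterations touch
// consecutive addresses (stride of 1 element), so cache lines and the
// hardware prefetcher are used efficiently. Swapping the loops would
// stride by `cols` elements per access and, for large matrices, miss
// the cache on nearly every read.
long long sum_row_major(const std::vector<long long>& m, int rows, int cols) {
    long long total = 0;
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
            total += m[i * cols + j];  // contiguous access
    return total;
}
```

No manual memory management involved; the whole optimization is just choosing the traversal order.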
Fewer abstractions, deeper understanding, fewer dependencies on others. These concepts show up over and over and not just in software. It's about safety.
* "search on steroids" - get me to the thing I need, or ask whether the thing I need exists; give me a few examples and I can get it running.
* getting the trivial and uninteresting parts out of the way, like writing some helper function for whatever I'm doing right now. I'll just call the AI, let it do its thing, and continue writing my code in the meantime, then look back, check that it makes sense, and use it.
So I'm not really cheating myself out of the learning process, just outsourcing the parts I know well enough to check for correctness, while saving myself the time spent writing.
Same here. Except that, as a native French speaker, there simply weren't that many quality books about programming/computers that I could easily find in French.
So at 11 years old I also learned English, by myself, by using computers (which were in English back then) and by reading computer books.
And we'd exchange tips with other kids in the neighborhood who also had computers and were also learning to code (like my neighbors who eventually, 20 years later, created a software startup in SoCal).
I never had the feeling that being able to search for things on the internet made things too easy. For me it felt like a natural extension to books for self-learning, it was just faster.
LLMs feel entirely different to me, and that's where I do get the sense that they make things "too easy": like the author of the OP blog post, I no longer feel like I am building any skill when using them, other than code review (which is not a new skill; it's something I have long done with code produced by other humans).
As with the OP author I also think that "prompting" as a skill is hugely overblown. "Prompting" was maybe a bit more of a skill a year ago, but I find that you don't really have to get too detailed with current LLMs, you just have to be a bit careful not to bias them in negative ways. Whatever value I have now as a software developer has more to do with having veto power in the instances where the LLM agent goes off the rails than it does in constructing prompts.
So for now I'm stuck in a situation where I feel like, for the work I am being paid to do, I basically have to use LLMs, because not doing so is effectively malpractice at this point (there are real efficiency gains). But for selfish reasons, if I could push a button to erase the existence of LLMs, I'd probably do it.
I think this depends on how you are using the internet. Looking up an API or official documentation is one thing, but asking for direct help on a specific problem via Stackoverflow seems different.
Of course, asking a question was another matter, likely to result in a rebuke for violating the group's arcane decorum. But given how pervasive "RTFM" culture was back then, most "n00bs" were content to do just that (RTFM) until they came up against something that genuinely wasn't covered in some FAQ or manpage.
There's no equivalent mandate for software engineers. Nothing stops you from spending years as a pure "prompt pilot" and losing the ability to read a stack trace or reason about algorithmic complexity. The atrophy is silent and gradual.
The author's suggestion to write code by hand as an educational exercise is right but will be ignored by most, because the feedback loop for skill atrophy is so delayed. You won't notice you've lost the skill until you're debugging something the agent made a mess of, under pressure, with no fallback.
Human posters need to start saying obscene things or making unsafe/violent statements to ensure the comment isn’t just AI generated.
BTW - my coworker is not AI. It is a flesh-and-bones SWE.
Honestly, I don't really know what to do. I spent my whole life (so far; I'm still very young) falling in love with programming, and now I just don't find this agent thing fun at all. But I just don't know how to find my niche if using LLMs truly does end up being the only way for me to build valuable things with my only skills.
It's pretty depressing and very scary. But I appreciate this article for at least conveying that so effectively...
My biggest lessons were from hours of pain and toil, scouring the internet. When I finally found the solution, the dopamine hit ensured that lesson was burned into my neurons. There is no such dopamine hit with LLMs. You vaguely try to understand what it’s been doing for the last five minutes and try to steer it back on course. There is no strife.
I’m only 24 and I think my career would be on a very different path if the LLMs of today were available just five years ago.
Does this mean you'd be incapable of learning anything? Or could you possibly learn far more, because you had the innate desire to learn and understand, along with the best tool possible to do it?
It's the same thing here. How you use LLMs is all up to your mindset. Thoroughly review and ask questions about what it did, or why; ask whether it could have been done some other way instead. Hell, ask it just the questions you need and do the work yourself, or don't use it at all. For example, I was working on C++ with heavy use of mutexes and shared and weak pointers, which I hadn't done before. The LLM fixed a race condition, and I got to ask it precisely what the issue was, and to draw a diagram showing what was happening in that exact scenario before and after.
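The race described above could be as simple as the classic lost-update bug. A minimal sketch (the names `counter` and `bump` are hypothetical, not from the actual project) of the kind of fix a mutex provides:

```cpp
#include <mutex>
#include <thread>

// Two threads bump a shared counter. Without the lock, ++counter is a
// read-modify-write, so concurrent increments can overwrite each other
// and updates get lost. std::lock_guard acquires the mutex for the
// scope of each iteration, making every increment exclusive.
int counter = 0;
std::mutex counter_mutex;

void bump(int times) {
    for (int i = 0; i < times; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex);
        ++counter;  // now protected: only one thread at a time
    }
}
```

Delete the `lock_guard` line and the final count will intermittently come up short, which is exactly the kind of "draw me a diagram of before and after" scenario that's worth interrogating an LLM about.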
I feel like I'm learning more because I am doing far more high-level things now, and spending way less time on the stuff I already know or don't care to know (non-fundamentals, like syntax and even libraries/frameworks). For example, I don't really give a fuck about being an expert in Spring Security. I care about how authentication works as a principle, which methods are best for which situations, etc., but do I want to spend 3 hours debugging the nuances of configuring the Spring Security library for a small project I don't care about?
Yes. This strikes me as obvious. People don't have the sort of impulse control you're implying by default, it has to be learnt just like anything else. This sort of environment would make you an idiot if it's all you've ever known.
You might as well be saying that you can just explain to children why they should eat their vegetables and rely on them to be rational actors.
Saying that it comes down to how you use LLMs speaks from a privileged position. You likely already know how to code. You likely know how to troubleshoot. Would you develop those same skillsets today, starting from zero?
If that's not true, then what's the problem with not learning the material? Go do something more productive with your time if personal curiosity isn't a good enough reason. We're in a whole new world.
>Saying that it comes down to how you use LLMs speaks from a privileged position. You likely already know how to code. You likely know how to troubleshoot. Would you develop those same skillsets today, starting from zero?
This is true, and I can't answer that with 100% confidence. I imagine I would just be doing more, and more complicated, things and learning higher-level concepts. For example, if right off the bat I could produce a web app, I'd want to deploy it somewhere. So I'd come across things like ssh, nginx, port forwarding, jars, bundles, DNS, authentication, etc. Do this a thousand times, just the way I wrote a thousand different little functions and programs by hand, and you'll no shit absorb a little here and there as issues come up. Or, since what was hard a year ago is easy today, maybe I'd want to do something far more complex than anything anyone has been able to imagine before, and learn in that struggle.
Programmers in the 90s were far more adept at understanding CPU registers, memory, and all sorts of low-level stuff. Then the abstraction moved up the stack, and then again and again. I think the same thing will happen here.
Also, you can't say I'm in a privileged position for already knowing how to code while at the same time asking what the point of learning it yourself is.
Kids today couldn't imagine how people used to live just 100 years ago, like it was the dark ages. People from that age would probably look at kids 10 years ago and think, these poor children! They don't know how to work hard! They don't know anything about life! They're glued to these bizarre light machines! Every age is different.
The internet never fell. I bet it’ll be the same with AI. You will never not have AI.
The big difference is the internet was a liberation movement: Everything became open. And free. AI is the opposite: By design, everything is closed.
This is what I am still grappling with. Agents make me more productive, but also probably worse at my job.
How is this any different than building Ikea furniture? If I build my "Minska" cupboard using the step-by-step manual, did I learn something profound?
That said, I think you're still learning things building IKEA-style software. The first time I learned to program, I learned from a book, trying things out by copying listings from the book by hand into files on my computer and executing them. Essentially, it was programming-by-IKEA-manual, but it was valuable because I was trying things out with my own hands, even if I didn't always fully understand why I needed the code I'd been told to write.
From there I graduated to fiddling with those examples and making changes to make it do what I wanted, not what the book said. And over time I figured out how to write entirely new things, and so on and so forth. But the first step required following very simple instructions.
The analogy isn't perfect, because my goal with IKEA furniture is usually not to learn how to build furniture, but to get a finished product. So I learn a little bit about using tools, but not a huge amount. Whereas when typing in that code as a kid, my goal was learning, and the finished product was basically useless outside of that.
The author's example there feels like a bit of both worlds. The task requires more independent thought than an IKEA manual, so they need to learn and understand more. But the end goal is still practical.
But the nice thing about a cupboard and its components is that they are real objects, so the remembrance is done with the whole body (like the feeling of a screw not correctly inserted). Software development is 90% a mental activity.
If the LLM is indeed such a master at complex coding tasks that we don't understand, why not ask it some questions about how the code works?
You can even ask directly about the concern. "I am worried that by letting you do everything I am not learning how the system works. Could you tell me more about what you did and how I might think through it if I needed to do it myself?"
> I do read the code, but reviewing code is very different from producing it, and surely teaches you less. If you don’t believe this, I doubt you work in software.
I work in software, and for every single line I write, I read hundreds of them. If I am fixing bugs in my own (mostly self-education) programs, I read my program several times, over and over again. If writing programs taught me anything, it is how to read programs most effectively. And also how to write programs so they can be read most effectively.
I think here lies the difference OP is talking about. You are reading your own code, which means you had to first put in the effort to write it. If you use LLMs, you are reading code you didn't write.
Gemini 3 by itself is insufficient. I often find myself tracing through things or testing during runtime to understand how things behave. Claude Opus is not much better for this.
On the other hand, pairing with Gemini 3 feels like pairing with other people. No one is going to get everything right all the time. I might ask Gemini to construct gcloud commands or look things up for me, but we’re trying to figure things out together.
Man, it would rule so much if programmers were literate and knew how to actually communicate what they intend to say.
It's ironic that the more ignorant one is the one calling another ignorant.
Alright, I've had my fun with the name-calling. I will now explain the stunningly obvious. Not a thing anyone should have to do for someone as sharp as yourself, but there we are...
For someone to produce that text after growing up in an English speaking environment, they would indeed be comically inept communicators. Which is why the more reasonable assumption is that English is not in fact their native language.
Not merely the more generous assumption. Being generous by default would be a better character trait than not, but still arguably a luxury. But also simply the more reasonable assumption by plain numbers and reasoning. So, not only were you a douche, you had to go out of your way to select a less likely possibility to make the douche you wanted to be fit the situation.
Literate programmers indeed.
But I don’t think the answer here is to double down on reading the code and understanding that deeply. We’re rapidly moving past this.
I think the answer is to review the code for very obvious bad choices. But then it’s about proper validation. Check out the app, run the flows, use it for real. Does it _actually_ function?
Or that’s what is working for me. I cannot review all the LOC, and I’m starting to feel like I don’t want to.
Maybe he meant "reviewing code from coding agents"? Reviewing code from other humans is often a great way to learn.
I learn the most from struggling through a problem, and reading someone’s code doesn’t teach me all the wrong ways they attempted before it looked like the way it now does.
I agree that if I don't already know how to implement something, seeing a solution before trying it myself is not great, that's like skipping the homework exercises and copying straight from the answer books.
These steps are what help you solve other issues in the future.
[...] since I work at an AI lab and stand to gain a great deal if AI follows through on its economic promise.
And there it is.