Over the weekend, I watched a talk by Matt Welsh about the future of programming and how, in a few years, it may look very different from what we understand as programming today.
In some ways, I totally agree with him. As a programmer, I spend a considerable
amount of time perfecting my craft. I read books, watch talks, write code... all
to get better at what I do. But not everything I do is about algorithms, data
structures, and design patterns. Most of it is also about learning and creating
new tools. Tools that I have accumulated over my 20+ years working in the
field. Tools that I use on a daily basis, mostly to make my life easier. Without
them, I'd be crying my way back to the old days. I'm not just talking about
tools like prettier, eslint, or next.js. I'm talking about things like git, docker, node, typescript, vscode, oh-my-zsh, macos... Pretty much every building block that I use to create software today. If AI can interpret whatever I say and translate it into machine code, then I wouldn't need any of these tools anymore.
This sounds too good to be true, but in some ways it's already happening with things like AutoGen and AIbitat. These tools are already generating code for us. They are not perfect, but they are improving faster than humans can create new tools. And dumb mistakes are not going to be a problem for them.
On the other hand, though, I don't think that programming will be gone. You see, when I'm programming, I split my time perfecting three things: first, learning HOW to do stuff (this is where I need new tools); second, actually DOING it; and third, TESTING what I have done wrong.
Even though it looks like we're all going to happily throw our stack away and dance around a campfire singing kumbaya, what it actually means is that we'll simply replace our stack with a more powerful one, one that, instead of competing against us, will make us super-productive 10,000x individuals.
At the end of the day, I'll still need to split my time between:
- learning HOW: but it will probably be about how I should talk to it and what kinds of things it likes me to say or not, instead of what little tool I need to install.
- DOING the stuff: You see, when we start creating something new, we often don't know exactly what we want. We have a general idea, but we don't know exactly what it should look like in the end. We start simple and then we iterate until we are happy with the final result. I'll still need to learn what I want along the way.
- TESTING: of course testing, or do you think a machine that mimics the way we think and learn from us will not make mistakes? Come on, guys! It's ARTIFICIAL Intelligence, not God. It will work just like an artificial burger...
Anyways, it's fun to extrapolate about all this, because it will indeed change what we are capable of. But will it change programming at its core? I'm not so sure. Until someone comes up with an LLM runtime, this is all speculation.
If you haven't seen the talk yet, I highly recommend it.