Programming towards the end of programming
Let's admit it: AI and androids are going to replace every career, including the ones we consider irreplaceable because they demand human intelligence. Every career will end, including my own: programming.
I used to be, and still am, interested in the world of computer science. I would read long articles trying to understand fields I was unfamiliar with: Wikipedia, the docs, even the source code, to understand C++ templates, TypeScript generic gymnastics, or the ECMAScript standards. I didn't always manage to figure out what was going on, whether because of the ambiguity of human language, the limits of my reading ability, or the limits of my working memory. I used to google and ask around; now I can just ask GPT and skip the blogs entirely. Honestly, LLMs greatly extend my bandwidth, and I get to learn about typed lambda calculus, row polymorphism, Hindley-Milner inference, data races, distributed systems, you name it, far more quickly and smoothly. Once you get those thinking machines going, they answer in your first language and answer your question directly. You no longer dig through StackExchange or Quora for answers to someone else's questions; your own get answered right away.
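To make "generic gymnastics" concrete, here is a minimal sketch of the kind of type-level puzzle I mean; the names here are invented for illustration. This is the sort of thing I once puzzled out from docs and blog posts, and can now simply ask an LLM to explain:

```typescript
// A recursive template-literal type that rewrites snake_case to camelCase.
// (Toy example; CamelCase/CamelKeys are made-up names, not a library API.)
type CamelCase<S extends string> =
  S extends `${infer Head}_${infer Tail}`
    ? `${Head}${Capitalize<CamelCase<Tail>>}`
    : S;

// Key remapping via `as` applies the transformation to every property name.
type CamelKeys<T> = {
  [K in keyof T as K extends string ? CamelCase<K> : K]: T[K];
};

type Row = { user_id: number; created_at: string };
type Camel = CamelKeys<Row>; // { userId: number; createdAt: string }
```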
We all know this is good news, but something in my gut feels off. As learning gets easier, the "value" of knowledge is doomed to fade. Want something geeky like Haskell? AI will create a better one someday. Just want to get an app up and running, quick and dirty? AI can do that right now. The value of human knowledge and skill is fading, simply because there is a bigger brain, powered by gigawatts, that can absorb those ideas and emit better ones than any human being. In 2026 we are still stuck on context size and unpredictable agentic behavior, but who knows how long human brains will keep outperforming AI in any essential way? This is the future, and the future is the end of human knowledge.
I am still working on TypeScript for my "societal value". I can come up with side projects and build them just for fun. None of this requires me to write code line by line. When the AI makes a mistake, I count myself lucky: an error that needs my steering is the reason I am sitting at the desk at all. All I know is that my conversation with the agent will be sent to Anthropic to be analyzed and used to train a better agent, until the agent no longer needs a steering hand. The societal value of human intelligence is coming to its end, overshadowed by mechanical brains.
There is one last question: do you still learn and create, even if what you make will be stripped of its societal value at the end of the day? This is the one question I never see raised on social media, and of course never seriously answered.
The question reminds me of the long philosophical tradition of thinking about death. We are all going to die, and we keep living for different reasons. Putting religious reasons and biological instinct aside, we can ask ourselves: why do you do what you do, even though it will all be void after your life ends? The subject changes, but the pattern holds; swap in another commonly recognized value, human knowledge, and the question is the same. This time, though, the effect is universal.
Will you keep learning? Not "how to get your agent to behave with this prompt", but the foundational knowledge: the details of your language, the rationale behind your database, the type theory that supports your application?
For me the answer is the same as the answer to "will you eat gourmet food, or just whatever keeps you alive, since you will die anyway?" And what if I am wrong, and there really is a limit to what AI agents can do with human work? I would say the possibility is thin, but not zero.