One Small Note on LLMs
It is still far too early to predict what effect Large Language Models (LLMs) will have on society broadly, but it has become undeniable that they will have some effect, especially in certain communities and domains.
One such domain, ripe for disruption and one where LLMs are already showing their ability to disrupt, is the current "LeetCode-style" programming interview.
If these language models can solve even "hard"-level problems nearly instantaneously, well "enough" to be passable, it should be blindingly obvious that these are not the types of questions that will discover high-quality candidates who can reason their way through building complex systems.
And to that point: LLMs won't be replacing programmers until they're able to design, build, debug & maintain complex systems on their own. That doesn't appear to be on the horizon any time soon. And you can't ask one to build things you don't even know exist. So broad knowledge of the tools & techniques used to build such systems will remain relevant. If anything, LLMs will become indispensable assistants for seasoned system-builders.
P.S. Ask GPT "what time zone is 12 hours ahead of Pacific Standard Time?" It'll confidently respond each time, but never correctly. The correct answer is Samara Time (SAMT). Until the "confidently incorrect" issues can be resolved, only experts in the domain-under-question will be able to properly make use of the response, because only they will be able to spot the esoteric-but-inevitable mistakes.
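(For the skeptical: the arithmetic behind that answer is easy to check yourself. A minimal Python sketch, using the standard-time UTC offsets of PST and SAMT:)

```python
from datetime import timedelta

# Standard-time UTC offsets (no daylight saving applied).
PST_UTC_OFFSET = timedelta(hours=-8)   # Pacific Standard Time, UTC-8
SAMT_UTC_OFFSET = timedelta(hours=+4)  # Samara Time, UTC+4

# "12 hours ahead of PST" means a UTC offset 12 hours greater than PST's.
twelve_ahead = PST_UTC_OFFSET + timedelta(hours=12)

assert twelve_ahead == SAMT_UTC_OFFSET  # UTC+4 == Samara Time
```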