The new hotness right now is Large Language Models (LLMs), and they are apparently coming for your cushy tech job. Let’s play with that idea for a second.
Before ChatGPT, there were Alexa and friends. They could be laughably inept and highly inappropriate. These voice assistants would activate unintentionally, misunderstand intent, and let your child order hundreds of dollars’ worth of toys. This time, though, we’ve got “AI” figured out. This time will be different. Give me one more chance, baby, I’ve changed.
Fast-forward halfway into 2023, and we’re still singing the same song. There are laundry lists of blunders from these new models. Their creators have resorted to broad lockdowns on certain types of questions out of fear of what users will do with them. Ask one how tall the first 7’ president will be, and it will respond: “I’m sorry, Dave. I’m afraid I can’t do that.”
This is because the LLM has no concept of the world - it only parrots words it has seen, in sequences that amount to a glorified Markov chain. It does not “understand” what is being asked. The creators’ only recourse is to block questions in certain formats or relating to certain topics (read: data).
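To make the “glorified Markov chain” framing concrete, here is a toy sketch in Python (the corpus and function names are invented for illustration; real LLMs are far larger and condition on long contexts, but the spirit of “predict the next token from observed sequences” is the same):

```python
import random
from collections import defaultdict

# Toy bigram "Markov chain": it records which word has followed which,
# with no model of what any of the words actually mean.
corpus = "the model predicts the next word the model has seen before".split()

transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def babble(start: str, length: int = 8) -> str:
    """Parrot a plausible-looking sequence with zero understanding."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:  # dead end: this word never led anywhere in training
            break
        words.append(random.choice(options))
    return " ".join(words)

print(babble("the"))  # e.g. "the model has seen before"
```

Swap the eleven-word corpus for a trillion tokens and the output starts to look intelligent, but the mechanism never gains a concept of the world.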
The output is entirely dependent on big data, clean input, and well-intentioned, intelligent user input. Let us assume that we have access to limitless, clean data. LLMs still have one insurmountable hurdle to clear: user input.
The User Input Problem
An LLM will generate something based on the input it has received. It has no concept of what it is doing; it does not comprehend what it has generated. It will therefore potentially generate code that is copied verbatim under the wrong license, or code that is just plain wrong.
As an example of under-specified input: It is all well and good to say that usage is billed once per month, but what does that actually mean? Once per month on the first? The start of the client’s billing cycle? What happens when they sign up on the 31st? What happens if they cancel early - are some charges prorated? Do we offer partial refunds?
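Here is a quick sketch of just two of those decisions, in Python (the policy choices and function names are hypothetical, picked only to show how much hides behind “billed once per month”):

```python
import calendar
from datetime import date

def next_billing_date(signup: date, current_cycle: date) -> date:
    """One possible answer to "billed once per month".

    Someone has to decide what happens when the signup day (say, the
    31st) does not exist in the next month. Here we clamp to the last
    day of that month - one of several equally defensible policies.
    """
    year, month = current_cycle.year, current_cycle.month + 1
    if month > 12:
        year, month = year + 1, 1
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, min(signup.day, last_day))

def prorated_refund(monthly_fee: float, days_used: int, days_in_cycle: int) -> float:
    """Prorate by calendar days - or should it be by actual usage?"""
    return round(monthly_fee * (1 - days_used / days_in_cycle), 2)

signup = date(2023, 1, 31)
print(next_billing_date(signup, signup))                        # 2023-02-28
print(prorated_refund(30.00, days_used=10, days_in_cycle=28))   # 19.29
```

An LLM will happily emit either policy; it just won’t know to ask which one the business actually wants.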
If all you care about is having software do X and billing a client for that, then yes, I think LLMs are approaching the point where a non-technical person can generate an entire piece of software. If you have any opinions on the “right way” to conduct your business, or care at all about following copyright law, I’m afraid you’ll need an engineer.
The hardest part of my job (speaking for myself here) tends to be getting stakeholders to describe what they would like in adequate detail to write code. The entire point of a product manager is to coordinate such discussions and ensure that the result is sane and coherent. This is a task that requires many minds working together, and more output will not eliminate the need for human interaction.
LLMs will likely become an important part of the engineering toolkit, but saying that they will result in job loss betrays a rather naive understanding of the tool.