I recently listened to a podcast interview with Marc Andreessen on the recent advances in AI, with some focus on ChatGPT or, more generally, on generative AI and LLMs (Large Language Models). There was a very intriguing insight into a kind of plot twist that came about in all this recent progress made with that technology.
Think about it: how often have you heard concerns that AI would, quite soon, replace people's work? More often than not, though, that concern has been framed around more mechanistic kinds of work.
Now it turns out that we are much farther from being able to replace a truck driver with AI and much closer to being able to replace activities in the realm of knowledge work. What an irony, in a way…
It's not unpredictable, though, if you carefully consider the scenarios and the underlying premises of what each kind of work requires. Perhaps one way of looking at it is that mechanistic types of work require accuracy (closeness to reality, or to the true value), while for knowledge work it is often not so easy to define what being accurate even means, so we can more easily settle for precision (low variability).
What I mean is something like this:
For an AI model to drive a truck, we expect the multitude of factors that can influence the outcome of driving to be handled in a highly optimized and accurate fashion. To summarize content about a certain topic given by a prompt, on the other hand, a generative AI model only needs to produce a good-enough output (probabilistically speaking): one of many possible coherent versions.
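To make that distinction concrete, here is a minimal sketch (mine, not from the podcast; the numbers are made up) of two estimators for the same true value: one accurate on average but imprecise, one tightly clustered but biased away from the truth:

```python
import statistics

# Hypothetical illustration of accuracy vs. precision with made-up data.
true_value = 100.0  # the "reality" a mechanistic task must match

# Estimator A: accurate on average, but high variability (imprecise).
estimates_a = [90.0, 111.0, 95.0, 104.0, 100.0]
# Estimator B: tightly clustered (precise), but biased away from truth.
estimates_b = [92.0, 93.0, 92.5, 91.5, 93.0]

def describe(name, estimates):
    mean = statistics.mean(estimates)
    spread = statistics.stdev(estimates)
    print(f"{name}: mean={mean:.1f} "
          f"(accuracy error={abs(mean - true_value):.1f}), "
          f"stdev={spread:.2f} (lower = more precise)")

describe("A (accurate, imprecise)", estimates_a)
describe("B (precise, inaccurate)", estimates_b)
```

The truck-driving case needs estimator A's property (close to reality); the summarization case can live with estimator B's (consistently coherent, even if not "the" right answer).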
And that is precisely the biggest risk in the whole thing, for as long as the goal is to pursue generic models, which is what LLMs are all about: the possibility of considerable bias towards what is mainstream or what was somehow emphasized in training. Solving big, generic problems imposes similarly big challenges, which should lead us to keep some level of skepticism and mostly operate in a "trust but check" mode. Or use it as a tool to augment decisions, not to automate them (unless the problem is trivial enough).
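As a hypothetical sketch of that "trust but check" mode (the names, confidence score, and threshold below are all invented for illustration), the routing could look something like this: automate only the trivial, high-confidence cases and surface everything else for a human to check.

```python
from dataclasses import dataclass

@dataclass
class ModelAnswer:
    text: str
    confidence: float  # assumed score in [0, 1] from some model or heuristic

def route(answer: ModelAnswer, task_is_trivial: bool,
          threshold: float = 0.9) -> str:
    # Automate only when the task is trivial AND the model is confident.
    if task_is_trivial and answer.confidence >= threshold:
        return f"AUTO-ACCEPT: {answer.text}"
    # Otherwise: trust, but check. Surface the answer for human review.
    return f"NEEDS HUMAN REVIEW: {answer.text}"

print(route(ModelAnswer("Summary looks fine.", 0.95), task_is_trivial=True))
print(route(ModelAnswer("Quarterly forecast: ...", 0.97), task_is_trivial=False))
```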
That brings me to another insight I saw recently, a kind of prediction made by Matthew Skelton:
That resonates with me exactly because of the logic I just laid out on the biggest risk of generic modeling: biases and the trustworthiness of the probabilistic answers given. Not to mention that what Matt predicts sounds much more tangible, something I could see scaling within a constrained context with a focus on both precision and accuracy, which could lead to interesting cases all the way to automating some relatively complex use cases. Efficiencies would be there to be gained, and yes, most likely in the realm of knowledge work.
Interesting times we live in…
By Rodrigo Sperb. Feel free to connect (I only refuse invites from people who clearly have an agenda to 'coldly' sell me something); happy to engage and interact.
It was an interesting thought. We both worked with someone whose mantra was always "trust but verify." I think your take on AI makes more sense and is logical, which means it's probably not what is going to happen either. Are companies going to spend limited resources chasing the big breakthrough, or look to leverage AI where it makes the most sense, for example routing a task to the right location based on some logic? This probably isn't the best analogy, but the one that popped into my head is this: while my new truck isn't self-driving yet, there is one aspect of it that is automated, where I just set it and forget it, and that's the lights. Based on the logic it's been provided, it will turn the high beams on and off, and I'd say 95% of the time or more it's accurate, so I don't have to toggle them off myself to avoid blinding someone.