Discussion about this post

Nikethana N:

Brilliantly written piece.

Pawel Brodzinski:

For the past decade or so, rising ambiguity and uncertainty have been a centerpiece of pretty much all business literature (at some point, when playing buzzword bingo, VUCA was a quick win).

The nature of LLMs is that they provide the most likely answer (yes, I'm oversimplifying here, but to a large degree that's what these models do). What follows is that when the most likely answer is good enough, AI is pure gold.

However, under conditions of high uncertainty, the most likely answer is barely more likely than, well, any other answer. Ultimately, it's all uncertain. It's all ambiguous. *By definition*, we can't have a good enough answer up front.
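
To put a number on that, here's a minimal sketch with made-up scores (plain Python, not output from any real model) contrasting a peaked output distribution with a flat one:

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A well-posed question: one candidate answer clearly dominates.
peaked = softmax([5.0, 1.0, 0.5, 0.2])

# A genuinely uncertain question: every candidate scores about the same.
flat = softmax([1.1, 1.0, 0.9, 1.0])

print(f"peaked: top answer p = {max(peaked):.2f}")  # ~0.96 -- a safe bet
print(f"flat:   top answer p = {max(flat):.2f}")    # ~0.28 -- barely beats random
```

In the flat case, the top answer is only marginally more probable than the runner-up, which is exactly the regime where "the most likely answer" stops meaning much.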

That's where judgment kicks in. We pull in all sorts of distant, non-obvious contexts. We trust subconscious impulses that we can't easily explain. Now, if an AI model had access to all that, it might even produce a similar outcome. But it doesn't. And it can't.

Heck, we often can't even explain why our judgment favors this option over that one.

As cognitive science teaches, we make decisions first and then find justifications for them. That part won't be easily outsourced to a machine, as there's no explicit training data for it.

That is, not unless AI develops actual world models and reasoning.

