Are LLMs intelligent?
Debates on this question often, but not always, devolve into debates on what LLMs can or cannot do. To a limited extent, the original question is useful because it creates an opening for people to go into specifics. But, beyond that initial use, the question quickly empties itself because (obviously) the answer to whether X is intelligent depends on how you define intelligence (and how you define X).
Even though it is clear that words are inherently empty, the internet is full of such debates. People focus on syntax, when semantics is what runs the world.
There’s no Platonic realm teeming with truth that’s disconnected from the world we inhabit. If it existed, the debate on what’s true would shift to the question of who has access to that realm. Is it the scientists? Is it the Pope? Or is it your neighbourhood aunty?
We, fortunately, live in the modern world where everyone is entitled to their opinions. Someone says God exists. The other person says it’s clear that God doesn’t exist. (I say it depends on the definition but that’s boring and nobody wants to hear that)
So, in a sea of opinions, how do you distinguish truth?
The trick is to reframe the question: instead of asking what’s true, ask what’s useful.
The kind of usefulness I’m talking about here is like, but not limited to, the usefulness of a kitchen-knife. Just like a knife helps you slice tomatoes to make a sandwich for yourself when you’re hungry, “truths” are different tools in your arsenal that you could use to (potentially) make a difference in your life or the world at large (if that’s the kind of thing you care about).
We know 1+1 is 2 because it enables us to do simple accounting of objects and get ahead of other animals who can’t count. We know the sun rises from the east because this knowledge enables us to build houses with windows that stream sunlight into our bedroom just as we’re waking up (and, of course, also launch satellites).
I am walking in the footsteps of William James who founded Pragmatism. Breaking away from the philosophical tradition of swimming in abstractions, he preached asking whether something makes a real difference or not. Without a focus on real world impact, questions and debates often remain circular. Take, for example, the innocuous question: “do you love me?” It’s an empty question because love has no meaning beyond how it manifests. If I say I love you (whatever that means) but never do anything for you, should I defend my inaction by saying: “But I told you I love you”?
As you can imagine, nobody talks like this. Very soon, the cross questioning about love gets into the specifics (like it should): “You said you love me, but you never give me roses”. Now, this is a better conversation because it is useful and actionable. It reveals the previously unstated assumption that the lover expects love to mean roses every now and then, thereby helping both parties get what they want (to love and be loved via an exchange of roses).
Science is a beautiful example of how truth emerges from usefulness. The scientific community has agreed that their stated goal is to study how the world works and their preferred method is nullius in verba. Opinions be damned, let’s see whose theory makes predictions that the real world agrees with.
Truth in science is nothing but predictions about what we will observe when we perform a certain action in the world. So, when we say that mass bends the fabric of spacetime, we’re explicitly saying that there are regions of space where gravity is so strong that even light cannot escape, so our telescopes should observe total darkness when pointed at them.
Through an elaborate chain of cause and effect, the truth of general relativity is ultimately grounded in predictions about what we should or should not observe via our eyes peering into an optical telescope pointed at different locations in space.
How do predictions in science relate to usefulness? Well, if I make a prediction X, and you make a prediction Y, I have an edge over you if mine tends to be the one that agrees with what the experiment reveals. The usefulness here finally emerges from its (potential) applicability. The theory of general relativity is true because it ultimately enabled us to build things like GPS satellites.
Experiments with no immediate real-world usefulness, like the discovery of the Higgs boson, are useful to the extent that I believe they confer me an edge in a head-to-head battle with someone else about a real-world issue. So, truths are, ultimately, bets about what could turn out to be useful.
One can argue that many theories of the past turned out to be wrong. For example, people argue that Ptolemy’s epicycles don’t depict reality even though they made correct predictions. But, then, which theory depicts reality? What, ultimately, is the arbiter of reality? What is reality, anyway?
We are back to the circular logic of definitions. Reality is simply the collection of everything that impacts us (or could potentially impact us). And the only way for us to define it is via our tools and models. Models of reality (that work) are reality. Newton’s laws didn’t stop working (or, equivalently, being useful) once Einstein proposed relativity. Einstein simply expanded the repertoire of tools we have to intervene in reality.
Even though truth doesn’t exist independently of utility, that doesn’t mean it’s subjective. You can’t simply think you can fly and jump out of the window. Reality will intervene, and truth will emerge from the usefulness of the theory that no matter how hard you think, you can’t manifest flight out of thin air. So the question “can you fly?” is actually “will you survive if you jump?” in disguise.
So, truth is a prediction that enables you to get what you want in life (which, in this case, is not dying).
All our truths finally ground into what they do to the world we inhabit. Symbols require grounding in the real world. Without grounding, words are mere utterances.
Back to our original question: are LLMs intelligent? Let’s reframe it.
Can LLMs help summarise an article? Can they drive a car safely? Can they write a scientific paper that gets published in Nature?
See, words like “intelligence” don’t matter. At best, they’re pointers to tools, hypotheses and models one can choose to adopt to increase the odds of getting what one wants.
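To make the reframing concrete, here’s a minimal sketch of what an operational, pragmatic check might look like in code. The `summarize` function is a deliberately naive stand-in (it just takes the first two sentences); in practice you’d swap in an LLM call of your choice. The point isn’t the implementation; it’s that “is the output useful for my task?” is something you can actually check, while “is it intelligent?” is not.

```python
# Pragmatic reframing: instead of asking "is this summarizer intelligent?",
# ask "does its output meet the criteria I actually care about?"

def summarize(article: str) -> str:
    # Naive placeholder: return the first two sentences.
    # In practice, this is where an LLM-backed summarizer would go.
    sentences = article.replace("\n", " ").split(". ")
    return ". ".join(sentences[:2]).strip()

def is_useful_summary(summary: str, max_words: int = 100) -> bool:
    # Operational check: non-empty and short enough to save the reader time.
    short_enough = len(summary.split()) <= max_words
    return short_enough and len(summary) > 0

article = (
    "Pragmatism asks what difference a belief makes in practice. "
    "Truth, on this view, is whatever proves useful. "
    "The rest is commentary."
)
summary = summarize(article)
print(is_useful_summary(summary))  # True if the summary passes the check
```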
TLDR: forget about truth. Ask what is useful, instead.
PS: all philosophy is politics.