Can science
explain everything?
If by everything one means the natural, physical, material world, then perhaps eventually yes,
although mysteries might remain. But what about metaphysical entities? Positing
that everything only consists of the physical is a philosophical, not a
scientific, argument. But science is greedy and constantly attempts to colonize
the metaphysical realm. One recent area in which this might be happening is the
concept of free will.
Bjorn Brembs' article (Proc. R. Soc. B, 2011, 278, 930-939) begins with several provocative questions.
What could possibly get a neurobiologist, with no formal training in philosophy beyond a few introductory lectures, to publicly voice his opinion on free will? Even worse, why use empirical, neurobiological evidence mainly from invertebrates to make the case? Surely, the lowly worm, snail or fly cannot be close to something as philosophical as free will?
Brembs weaves an
interesting story about adaptive behavior using a range of invertebrates as
examples. How do you stay alive and not be eaten by predators? How do you find
food when there’s a famine in your vicinity? Turns out that you want to
incorporate some degree of randomness into your actions, making it more
difficult to predict what you’ll do, and that the actions (or reactions) can be
honed by responding to changes in your environment – assuming you’re still
alive. There’s an interesting interplay between learning from external stimuli
and learning from internal ‘self’ mechanisms. There’s even a part of an insect
brain (“mushroom bodies”) that helps control the balance between these two
sources.
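A toy way to picture this interplay (my own sketch, not anything from Brembs' paper): an agent mixes what it has learned from environmental feedback with deliberately injected randomness, and a single "variability" knob stands in, very loosely, for the internal control attributed to structures like the mushroom bodies.

    import random

    class ForagingAgent:
        # Toy agent: balances learned preferences against injected randomness.
        # The `variability` knob stands in (very loosely) for the internal
        # control Brembs attributes to structures like the mushroom bodies;
        # it is illustrative, not a model from the paper.

        def __init__(self, actions, variability=0.3, learning_rate=0.1):
            self.values = {a: 0.0 for a in actions}   # learned from feedback
            self.variability = variability            # self-generated randomness
            self.learning_rate = learning_rate

        def choose(self):
            # With probability `variability`, act unpredictably (hard for a
            # predator to anticipate); otherwise exploit what experience says.
            if random.random() < self.variability:
                return random.choice(list(self.values))
            return max(self.values, key=self.values.get)

        def learn(self, action, reward):
            # Hone behaviour from environmental feedback (assuming you survived).
            self.values[action] += self.learning_rate * (reward - self.values[action])

    agent = ForagingAgent(["turn_left", "turn_right", "go_straight"])
    for _ in range(100):
        a = agent.choose()
        reward = 1.0 if a == "go_straight" else random.uniform(-0.2, 0.2)
        agent.learn(a, reward)
    print(agent.values)

Nothing here is meant to model a real fly; it just shows how unpredictability and feedback-driven honing can coexist in one decision rule.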
In animals and
humans, the situation is both murkier and more complex. We can and do self-initiate actions, but our behavior also depends on our experiences up to the present moment and on the present, possibly novel, external prompt that
might motivate an action. This sense of agency, sidestepping the complicated
question of defining consciousness, is what we humans might call free will.
Brembs argues against strict determinism or dualism, and instead suggests that
we consider free will not (strictly) as a metaphysical entity, but rather as
a “quantitative, biological trait, a natural product of physical laws and
biological evolution.”
Picking up on
these ideas, the psychologist Thomas Hills argues for a concept he calls
“neurocognitive free will” (Proc. R. Soc. B, 2019, 286,
20190510). To set the stage, Hills defines what he means by conscious control.
Conscious
control processes are effortful, they focus attention in the face of
interference, they experience information in a serial format (one thing at a
time), they can generate solutions that are not hard-wired, and they operate
over a constrained cognitive workspace – working memory – to which ‘we’ have
access and can later report on as a component of conscious awareness. When
additional tasks are added to consciously effortful tasks, performance suffers. Effortful
processes sit in contrast to automatic processes, which are fast and parallel,
and do not require conscious awareness. Effortful tasks can be made automatic
through repetition (like reading and driving) …
Hills assumes that
alternative possibilities must be present and able to be acted on for an
organism to be ‘free’. Like Brembs, Hills identifies two broad situations where
an organism needs to generate such alternatives: exploration and outwitting
adversaries. Why and where does this behavioral variability arise? Hills writes:
There is a
finite precision on cognitive abilities, which is a result of a trade-off
between computational accuracy and the metabolic cost of information
processing. This can lead to sensory noise, … channel noise, … synaptic noise …
Neural systems are commonly characterized as having a sensitive dependence on
initial conditions of arbitrarily small size… What matters more for free will
is where the decision to modulate variability comes from. If conscious control
in any way influences unpredictability, then consciousness is in the loop that
governs future behaviour.
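The "sensitive dependence on initial conditions" point is easy to see with a standard textbook example, the logistic map; this is my own illustration, not anything from Hills' paper.

    # Sensitive dependence on initial conditions, illustrated with the
    # logistic map x -> r*x*(1-x). A textbook example, not a neural model.
    r = 3.9
    x, y = 0.500000, 0.500001   # starting points differ by one part in a million

    for step in range(40):
        x = r * x * (1 - x)
        y = r * y * (1 - y)

    print(f"after 40 steps: x={x:.6f}, y={y:.6f}, gap={abs(x - y):.6f}")

Two starting points differing by one part in a million end up nowhere near each other after a few dozen iterations.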
Some animal
experiments are cited, where neural activity involving past experiences is
observed even when the external stimuli are no longer present. Apparently this
‘replay’ also happens in dreams. Hills argues that when encountering a ‘choice’,
this replay kicks into action by sort of running a quick (simplified) simulation that takes into account past experience (both good and bad) and explores different routes. Some of this may be automated or partially automated (I'm assuming), but conscious control is also present and actively involved. In a sense, one predicts what happens to one's future self in these
scenarios. The process may not use all the information streaming in. In fact,
conscious control inhibits acting immediately while all this deliberation is
taking place. As to the feeling that we have some control in the act of
choosing, Hills argues that our ignorance of the future represents the other
side of the same coin.
… it is exactly
the finding out – the initiation of the search and the choice among alternatives
– that is the basis of the self’s emergent will and its genuine freedom. The
bringing forth of a self-identity is the evaluation of alternatives through
self-simulation. If a historical self emerges through conscious deliberation,
and that deliberation involves simulation of alternative futures over which the
self chooses, then a historical identity and the capacity for free choice arise
in tandem.
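Here is a minimal sketch of what such deliberation might look like computationally. This is my paraphrase in code, with invented situations and numbers, not Hills' actual model: candidate actions are scored by replaying relevant remembered episodes (with a little noise for the behavioral variability discussed above), and acting is deferred until every option has been evaluated.

    import random

    # Hypothetical episodic memory: (situation, action, how it turned out).
    episodes = [
        ("dark alley", "take shortcut", -1.0),
        ("dark alley", "go around", 0.5),
        ("main road", "take shortcut", 0.8),
        ("main road", "go around", 0.3),
    ]

    def replay_and_choose(situation, options, n_samples=20, noise=0.1):
        # Crude "self-simulation": predict what happens to a future self under
        # each option by re-sampling relevant memories, plus a little noise
        # (the behavioural variability discussed above). Acting is deferred
        # until every option has been evaluated.
        scores = {}
        for option in options:
            relevant = [r for s, a, r in episodes if s == situation and a == option]
            if relevant:
                samples = [random.choice(relevant) + random.gauss(0, noise)
                           for _ in range(n_samples)]
            else:
                samples = [0.0]
            scores[option] = sum(samples) / len(samples)
        best = max(scores, key=scores.get)   # only now is an action released
        return best, scores

    print(replay_and_choose("dark alley", ["take shortcut", "go around"]))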
Could machines
have free will? Or at least the ability to “creatively” choose among multiple
alternatives? Along similar lines to Brembs and Hills, the physicist Hans Briegel has
an interesting theory which he calls “projective simulation” (Sci. Rep. 2012,
2, 522). First, he tackles the question of why we are reluctant to say
machines have free will even though we might ascribe to them some form of
intelligence (“the capability of the agent to perceive and act on its
environment in a way that maximizes its chances of success”). It’s because the
underlying substrate is an algorithm – which is therefore predictable regardless
of whether it is deterministic or probabilistic.
Briegel has three
pillars for his projective simulation. The first is memory – you have to be
able to store knowledge of past actions. But if memory is all-controlling, there's no room for variation and adaptation. That's where randomness comes into play: it introduces variation at the very point where an organism interacts with its environment. It's crucial that this randomness be tied to functional ability. Finally, the simulation (with many similarities to what Hills describes) does a random walk
through “clips” of episodic memory – a stripped-down version of a detailed
simulation. These clips have linkages of different strengths which modulate the
probability that the random walker traverses them. But new clips can be created
that are not memories but inventions and fabrications, maybe through a mash-up.
We can imagine unicorns even if we’ve never seen one. According to Briegel:
The fundamental
problem is… how freedom can emerge from lawful processes. Both the freedom of
self-generated action and the freedom of conscious choice require, at a certain
level, some notion of room to manoeuvre, which is consistent with physical law…
Room and ultimately freedom arises in two ways, first by the existence of a
simulation platform, which enables the agent to detach itself from an immediate
(stimulus-reflex type) embedding into its environment and, second, by the
constitutive processes of the simulation, which generate a space of possibilities
for responding to environmental stimuli. The mechanisms that allow the agent to
explore this space of possibilities are based on (irreducible) random
processes.
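The clip-network picture is concrete enough to sketch. Below is a stripped-down, hypothetical version of a projective-simulation-style walk; the structure (weighted links between memory clips, a random walk that ends at an action, occasional invented clips) follows the description above, but the clip names, numbers and composition rule are mine, not Briegel's.

    import random

    # Clips of episodic memory joined by weighted links; weights modulate how
    # likely the random walk is to traverse a link. Names and numbers are
    # illustrative only, not taken from Briegel's paper.
    clips = {
        "see_shadow":    [("remember_hawk", 2.0), ("remember_leaf", 1.0)],
        "remember_hawk": [("action:freeze", 3.0), ("action:flee", 1.0)],
        "remember_leaf": [("action:ignore", 2.0), ("action:flee", 0.5)],
    }

    def invent_clip():
        # Occasionally compose a new clip that is not a memory but a
        # fabrication (the "unicorn" move), here just a stub with weak links.
        name = "imagined_" + str(random.randint(0, 999))
        clips[name] = [("action:freeze", 1.0), ("action:ignore", 1.0)]
        return name

    def project(percept, max_hops=10, invention_rate=0.05):
        # Random walk through the clip network until an action clip is reached.
        current = percept
        for _ in range(max_hops):
            if current.startswith("action:"):
                return current
            if random.random() < invention_rate:
                clips[current] = clips.get(current, []) + [(invent_clip(), 1.0)]
            edges = clips.get(current, [])
            if not edges:
                return "action:ignore"      # fall back if the walk dead-ends
            targets, weights = zip(*edges)
            current = random.choices(targets, weights=weights)[0]
        return "action:ignore"

    print(project("see_shadow"))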
All this makes me
think of games – there are underlying rules, yet the outcome cannot necessarily
be pre-determined until the game is actually played. While there are “no-luck”
games such as Tic-Tac-Toe where the range of possibilities can be enumerated
easily, with more complex and interesting strategy games, the possibilities
cannot be computed, especially once you throw in dice rolls and/or draws from
a card deck. When I’m playing a game, I try to anticipate what the other
players might do. I also have strategies in mind based on previous games I’ve
played – those that worked and those that didn’t work. I also have to account
for how the current situation on the board may differ from the previous games
I’ve played. I’m not sure how exactly I compute all these possibilities, but
eventually it’s my turn and I make my move. I don’t suffer from
“analysis-paralysis” when playing a game, but maybe it’s because I’m not
sufficiently patient, or alternatively maybe because I’m generally decisive.
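For games with dice or card draws you can't enumerate the full tree the way you can with Tic-Tac-Toe, but you can sample it. Here is a toy "race to 20" dice game I invented purely for illustration, with a Monte Carlo estimate of how good each candidate move is; it is not taken from any of the articles above.

    import random

    def rollout(my_pos, opp_pos, target=20):
        # Play a toy "race to the target" dice game to the end with random
        # rolls and report whether I win. Rules invented purely for illustration.
        my_turn = False   # opponent moves right after my candidate move
        while my_pos < target and opp_pos < target:
            roll = random.randint(1, 6)
            if my_turn:
                my_pos += roll
            else:
                opp_pos += roll
            my_turn = not my_turn
        return my_pos >= target

    def estimate_move(my_pos, opp_pos, move, n=5000):
        # Monte Carlo estimate of the win probability after making `move`.
        wins = sum(rollout(my_pos + move, opp_pos) for _ in range(n))
        return wins / n

    for move in (1, 2, 3):   # suppose I may advance 1, 2 or 3 squares this turn
        print(move, round(estimate_move(my_pos=10, opp_pos=12, move=move), 3))

Something like this is roughly what the game app imagined below would be doing: better informed, but the game still has to be played out.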
If I had a
computer app associated with a game I’m playing, would I use it as an aid? I
don’t know since I’ve never tried it personally. I don’t play games on the
computer since I already stare at the screen for many hours when I’m at work. But
I could imagine that if you’re playing a computer game, you could have an app
that does some predictive simulation based on what has unfolded so far in the
present game, while also feeding in information from previous games – basically
an algorithm that crunches through data. But while that might make you better
informed, the result of the game is still open and you’ll have to play to the
finish. Perhaps that’s akin to the “freedom” alluded to by the authors I
mentioned in today’s post. I certainly feel that I have free will when
making at least some of my choices. But I don’t doubt that unconscious factors
come into play in every choice that I make.