Today’s post is Part 3 of Terrence Deacon’s The Symbolic Species (links to Part 1 and Part 2). I finished the book. The last chapter is interesting and thought-provoking. If you don’t want to plow through 464 pages, the last 32 pages will give you the gist of his argument. My quick version: Deacon’s thesis is that what makes humans unique among all other creatures is the co-evolution of the human brain and early hominid ‘societal’ behavior, which leads to referential symbolic consciousness and language. Symbolic consciousness, which is necessary for abstract thought, emerges from indexical consciousness, which in turn is supported by iconic consciousness. (Other creatures may exhibit indexical and iconic consciousness, and it’s possible they may attain the symbolic, but it will be difficult!) Deacon builds his case methodically, although there are still many unknowns and gaps, which he acknowledges.
In this day and age, we imagine the mind to be like a computer. Thus, the question arises as to whether our computers’ artificial intelligence can be conscious. Last month Google fired Blake Lemoine, an engineer who claimed that one of its A.I. chatbots has a soul. Does it? I don’t know. It depends on how one defines soul or consciousness, I suppose. Deacon argues that human symbolic consciousness is virtual in a way that transcends the physical flesh, blood, and guts. But he’d also say there are no disembodied souls. Cartesian Mind/Body dualism, he thinks, is an ineffective way of tackling the problem of consciousness. Deacon’s argument is more nuanced (you’ll have to read his book for the full version), and while I think his theory still has many unanswered questions, I find his co-evolutionary approach helpful in sketching out the boundary issues. And he takes seriously the mutual feedback between individual organisms and their environment (which may include fellow organisms). Physical science and social science can’t be separated so cleanly.
Deacon considers the pitfalls of equating mind with computing, and he carefully engages Searle’s Chinese Room argument and its criticisms. I’ll quote Deacon: “Part of the danger in current computer metaphors comes from our tendency to call typographical characters ‘symbols’, as though their referential power was intrinsic, and to call the deterministic switching of signals in an electronic device a ‘computation’, because it simulates operations we might perform to derive an output string of numbers from an input string according to the laws of mathematics. We fall into the trap of imagining that the sets of electronic tokens (data) that are automatically substituted for one another in a computer according to patterns specified by other set of tokens (programs or algorithms) are self-sufficient symbols, because of their parallelism to the surface features of corresponding human activities. This brackets out of the description the critical fact that the ‘computation’ is only a computation to the extent that someone is able to interpret the inputs and outputs as related in this way… All the representational properties are vested in the interpreter.”
At the beginning of chapter 13, Deacon provides the following quote from the journalist Sydney Harris: “The real danger is not that computers will begin to think like men, but that men will begin to think like computers.” Deacon’s book was published 25 years ago. The quote is even more apt today. As we charge into online mass education and use A.I.s for so-called ‘adaptive learning’, this is precisely what we are doing – embracing what I think is a myopic vision of using computers to teach us to think like them. How could we not? The machine is efficient and tireless, but only at its narrow task. Taylorism rears its ugly head again, and machine-like productivity is king. Before we know it, we’ll no longer know what a joke is, and we’ll become artificially unintelligent. Deacon writes: “Our cherished belief in the specialness of consciousness has not prevented us from thoughtlessly treating people as throw-away tools… The question before us is whether we will begin to treat people like unconscious computers, or come to treat conscious computers like people.”