Is Artificial Consciousness Impossible?

Can machines ever be conscious? David Hsing argues not in his aptly titled article “Artificial Consciousness Is Impossible.”

For those who don’t want to read the whole thing, the essence of Hsing’s argument is found in the following quotes.

“Learning machines” are “Learning Rooms” that only take on the appearance of learning. Machines mimic certain theoretical mechanisms of learning as well as simulate the result of learning but never replicate the experiential activity of learning. Actual learning requires connecting referents with conscious experiences. This is why machines mistake groups of pixels that make up an image of a gorilla for those that compose an image of a dark-skinned human being. Machines don’t learn; they pattern match and only pattern match.

And…

The fact that machines are programmed dooms them as appendages, extensions of the will of their programmers. A machine’s design and its programming constrain and define it. There’s no such thing as a “design without a design” or “programming without programming.”

Finally…

Artificial consciousness is impossible due to the extrinsic nature of programming which is bound to syntax and devoid of meaning.

In general, I agree that machine consciousness is unlikely in the foreseeable future. Hsing makes the crucial distinction between intelligence and consciousness. Machines can process information, but they aren’t going to feel themselves being themselves anytime soon.

The similarity between computers and brains is superficial, and I believe the conflation of the two is a kind of reverse pathetic fallacy. The ancients believed that erupting volcanoes and furious hurricanes were God’s wrath. Today, our environment is filled with machines, and we sometimes attribute human qualities to them, including consciousness.

I agree with Hsing that Kurzweil and the transhumanists are basically deluded. But I have no objection to their attempts to upload their minds onto machines. Just as long as they don’t come back.

And still… it pays to remember that the future is an awfully long time…

In AI theory they refer to “the Jetsons Fallacy.” As in the 1960s futuristic cartoon show, many people today naively believe that AI is just going to walk beside us into the future. But no. It’s going to increasingly walk inside us. The interface of human and machine intelligence is going to become increasingly blurred. And don’t think that people won’t let it happen. The steps to get there will come one at a time. Many of us alive today might find it abhorrent to have a neural chip drilled into our skulls. But the shift won’t happen all at once. Each generation will likely take one step further towards cybernetic embodiment. For our grandchildren, who will already have neural chips and cybernetic enhancements, taking one more step towards an enhanced and increasingly artificial intelligence will seem like less of a quantum leap.

I have digressed a little, but now let me make my main point. We probably won’t need the machines to be conscious. We will almost certainly merge with them. Don’t look now, but it’s already happening. How far we go with the whole experiment remains to be seen…

Hsing also argues that functionalist explanations for possible machine consciousness are flawed. Those arguments insist that if we just know what neurons do, then we will know what brains do. And if we can copy a brain, then we can create artificial consciousness. But Hsing sees this as a hopeless task, because to duplicate brain function we need to know all those functions and their dependencies. And we just don’t have any way to measure all that. It’s far too complex. Further, he says consciousness is “underdetermined.” If I understand him correctly, he’s suggesting that reductionist accounts of consciousness are wrong, or at least inadequate.


Of course, the biggest issue is that we just don’t know what consciousness is, or how it arises. But as I said, the future is a long time. Maybe we’ll eventually discover the nature of mind, and there could be something in that discovery which renders Hsing’s last point invalid. We may not need to reverse engineer anything. For example, I’ve long argued that consciousness has non-local properties. Though not widely accepted in mainstream science, there is a century of experimental evidence suggestive of this, as well as endless report-based evidence. I refer to what I call “integrated intelligence.” If the foundation of consciousness is not confined to biological systems (contrary to a foundational presupposition of neuroscience), then it is theoretically possible that we might be able to access it via machine learning. But how that might happen is anybody’s guess.

Perhaps we will not need to know all the micro-foundations of the mind to eventually create machine consciousness. After all, we can create ice crystals without understanding all the parameters of their formation. We simply re-create the pre-existing conditions that cause water to freeze in the right way. Perhaps it will be the same with AI consciousness. Admittedly, consciousness would appear to be a far more complex phenomenon. And as Hsing points out, consciousness does not arise from mere complexity. The Mars Perseverance Rover remains no more conscious than a dial telephone.

Marcus
