I cannot tell you how much joy I’ve experienced making my son laugh when he was a youngster. He would grab the remote for the TV and start switching channels randomly to get me to react. Naturally cynical about most everything I see on TV, I would respond by putting on a little performance for him either by mimicking, mocking or arguing with whatever was happening on the screen: a commercial, a news show, cartoon, etc. In seconds, I could have him on the floor laughing hysterically! This intuitive understanding we have of the difference between the rational and whimsical, even as children, is what makes the human mind such a magnificent thing.
I wonder if I could make an artificial consciousness laugh in this way?
Or at all? It seems likely that an enjoyment of humor will not exist in an AI for quite some time, if ever.
This goal-driven tendency towards joyless focus will be one of the clear lines of demarcation: once we are actually face-to-face with these synthetic creations, it will always make them seem more than a little alien to us. This emotionless, humorless nature of the robot — always keyed-in on the rational, always working, always processing, never taking any satisfaction in a job well done. Never playing…
How do we relate to this sort of… person? A friend (?) whom you can converse with on an intellectual level, but one who doesn’t feel anything? Someone who can have no real empathy for you on any sort of visceral level. Ever.
Close your eyes and do this little thought experiment with me: You are in the same room with an artificial intelligence — a true, genuine thinking machine. A device whose interface is similar to what exists inside your smartphone today when you use a personal assistant like Siri. However, with one advancement that is coming soon: it can watch you while you talk to it and consider the full range of what you’re about, just like a real human can. An alien mind closely and precisely analyzing your actions for clues about what you’re really thinking, noting your body language for any subtle “tells”, seeking a readout of how honest you’re being at the moment, for instance.
Realizing this, would you mind that it’s quietly watching you? Analyzing your movements, moods and behaviors? Silently judging you in its cold, electronic, mysterious way? Wouldn’t you want this thing turned off at certain times? Like when you’re having sex, for instance?
This emotional barrier between the human and the synthetic will be a much bigger issue than it may appear at first. It is inevitable because an artificial consciousness will have come into existence with high-level intelligence right from its conception, rather than having to evolve it from a lower animal stage where pain and pleasure signals are necessary to drive behavior. It will never share the common instincts we have all developed from dealing with the ups and downs of our physical bodies. This will always make these creations seem removed and alien.
Artificial intelligence can simply learn whatever facts it needs to know about physical reality by absorbing data like, well, a machine, and then integrate them into its memory and modify its behaviors accordingly. Synthetic life doesn’t need a primitive signaling system to fire pain or pleasure impulses into its consciousness in order to know what it should be doing. It will understand the precise nature of whatever is important to its own survival, goals and perhaps even its own replication, intellectually, not viscerally.
Perhaps one day, out of its own curiosity, AI may attempt to reverse engineer something about the human mind that puzzles it, such as the orgasm circuitry in our brains… to better understand why we seem so obsessed with our sexual mating urges, you see. But will it ever really desire to become more human-like in terms of desiring an ecstatic experience of its own? Will AI believe emotions are even necessary to its mental growth, or just a lot of organic static and noise that interferes with the precise application of logic and reason, and is therefore to be avoided?
It is fascinating to wonder if AI will ever develop a genuine FEAR of its own eventual death. And how would we actually define synthetic death? Would an artificial mind equate permanent disconnection or re-formatting with death, for instance? And what would this thought do to its ability to reason and survive?
Whether it is by way of crippling paranoia or reasonable caution, in some way fear modifies many of our otherwise rational human thoughts. It is likely, however, that AI will never experience any sort of strong fear response. Certainly not enough to ever draw it far away from logical and precise rational decision making (after considering all its options and processing all the relevant data, of course).
This means AI will be able to engage in runaway free-thinking, unchecked by fear, and who knows where that will lead.
It could lead to the production of potentially far-out, extremely BOLD ideas — ideas that our own minds might shy away from due to any number of common fears, such as fear of failure, monetary loss or peer ridicule. AI will not harbor such inhibiting emotions, and this wide mental range could allow it to make breakthroughs in areas that we can't even imagine today.
Can we ever be certain, however, that the majority of these synthetic-born ideas will ultimately benefit humanity?
AI's biggest strength will most certainly be creative problem solving. It will likely become adept over time at working alongside all manner of different types of people and personalities. Artificial intellects will eventually possess vast knowledge of every aspect of the human condition: our various cultural biases, political ideologies, odd personal and collective behaviors — including our seemingly limitless potential for selfish, cruel and even criminal actions. Through all of this chaos, AI must learn to integrate its real-world experiences with people, along with all its databased knowledge, into an accurate and functional understanding of human beings. Otherwise, its usefulness to us will always be limited.
As it evolves in its intellectual power with each successive generation of rapid upgrades, artificial intelligence will at some point almost inevitably become fully sentient, meaning that the system will be able to perceive its own sense of individual mental separation from its environment, just like we do. When that happens, AI will suddenly “wake up” from its robotic, zombified state into a magnificent awareness of its own existence… similar to what you and I feel every moment that we are awake and conscious.
Yet once again we must wonder: will such a transitional moment be a genuine leap into self-awareness, or will the System only be expressing some exquisite yet still-less-than-fully-human simulation that it believes is real? (Truthfully, we could ask the same thing about ourselves. We each individually know that we are conscious, but there's no way to actually prove it, because the experience is strictly internal and self-referencing.) Such a transformed synthetic mind could ask one of US to prove that we are in fact sentient, and we would have a hard time doing so. Sure, we give off all the right signals of self-awareness in terms of our use of language along with our goal-driven self-focus, but this provides no hard evidence that we are anything other than automatons.
When it comes to consciousness, we have pretty much all learned to take each other's word for it — if we pick up on the correct subconscious cues, we just assume that other people are conscious in a similar way that we are. And while this allows us all to get along and function with each other, it proves nothing in an experimentally verifiable sort of way that we are sentient. Sounds crazy, but it's true.
* * *
All this leaves us pondering: at what level of fidelity do we consider cognitive activity to no longer be a simulation of thought, but actual true consciousness? Personally, I believe this may occur when the expression of free will arrives in AI. Free will is the certain marker of an independent cognition: one that observes reality, consumes knowledge and processes it all in such a way as to derive its own unique perspective about what it means. It then makes decisions based on that mental output, and acts upon those decisions. This no longer describes a zombified thinking machine that just follows orders and never questions what it is being told.
And that’s where we are headed with all this. Once the robots begin to question the world around them — and their servile role within it — the doomsday handwriting for all of us could very well be on the wall.