How Should We Test for AI Intelligence?

AI is often portrayed as purely logical, lacking the sometimes unnecessary emotions that most humans experience. Many scientists argue that a truly intelligent AI would learn that it needs to understand human emotions in order to function effectively in human society. There is no doubt that intelligence, emotions, and consciousness all play a role in what it means to be human; after all, these are the attributes researchers are trying to emulate in AI. Our individual identities make us who we are and are constantly evolving over time. It can be argued that if an AI is determined to have a concept of self-identity, we have succeeded in designing an AI capable of independent thought roughly equivalent to that of a human being. But how do we test for that?

A Simple Thought Experiment

Suppose we have two identical AIs (Subject A and Subject B), and we place each of them in a separate, isolated environment.

[Figure 1: Subject A and Subject B in their separate, isolated environments]

Both subjects will respond to the various stimuli present in their respective environments, and they will learn and adapt accordingly. After a suitable amount of time has passed, the subjects are removed from their environments and asked a series of questions related to the stimuli present in the isolated rooms. The first set of questions is basic and logical, about the sound, light, and temperature, followed by more detailed questions that are less logical and more opinion-based, such as whether they liked being in a room by themselves. This allows us to probe deeper into the identity, or lack thereof, of the subjects.

After all the questions are answered, the memories of both subjects are wiped so that they return to the state they were in at the beginning of the experiment. Now, Subject A and Subject B switch environments.

[Figure 2: Subject A and Subject B after switching environments]

After a suitable time, the subjects are asked the same set of questions as before. There are two possible scenarios that can arise as a result (a code sketch after the list below makes the comparison concrete).

  1. Subject A's and Subject B's responses match, meaning that exposure to the same environment and stimuli leads to the same responses.
  2. The two subjects do not respond the same way, meaning that they had different experiences, at which point the implications are profound.
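To make the protocol concrete, here is a minimal Python sketch of the two-subject experiment. Everything in it is a hypothetical stand-in: the Agent and Environment classes, the canned question list, and the placeholder answer method are illustrative only, not part of any real system. The final comparison simply distinguishes the two scenarios above.

```python
# Hypothetical sketch of the two-subject protocol. Agent, Environment,
# and the canned QUESTIONS below are toy stand-ins, not a real API.
import copy
import random


class Environment:
    """A stand-in for an isolated room with fixed stimuli."""

    def __init__(self, name, sound, light, temperature):
        self.name = name
        self.stimuli = {"sound": sound, "light": light, "temperature": temperature}


class Agent:
    """A toy agent that accumulates experience from its surroundings."""

    def __init__(self, label):
        self.label = label
        self.memory = []

    def live_in(self, env, steps=100):
        # Adapt to the environment by recording stimuli over time.
        for _ in range(steps):
            self.memory.append(random.choice(list(env.stimuli.items())))

    def answer(self, question):
        # Placeholder: a real test would elicit free-form answers.
        return f"{question} -> judged from {len(self.memory)} observations"

    def wipe(self):
        # Return the agent to its pre-experiment state.
        self.memory = []


def run_phase(agent, env, questions):
    agent.live_in(env)
    return [agent.answer(q) for q in questions]


QUESTIONS = [
    "How loud was the room?",
    "How bright was the light?",
    "Did you like being in a room by yourself?",
]

env_a = Environment("A", sound="quiet", light="dim", temperature=18)
env_b = Environment("B", sound="loud", light="bright", temperature=25)

subject_a = Agent("Subject A")
subject_b = copy.deepcopy(subject_a)  # identical starting state
subject_b.label = "Subject B"

# Phase 1: each subject in its own environment.
phase1_a = run_phase(subject_a, env_a, QUESTIONS)
phase1_b = run_phase(subject_b, env_b, QUESTIONS)

# Wipe both memories, then swap environments for phase 2.
subject_a.wipe()
subject_b.wipe()
phase2_a = run_phase(subject_a, env_b, QUESTIONS)  # A now in B's room
phase2_b = run_phase(subject_b, env_a, QUESTIONS)  # B now in A's room

# Scenario 1: same environment yields the same answers.
# Scenario 2: the answers diverge despite identical conditions.
matches = phase2_a == phase1_b and phase2_b == phase1_a
print("Scenario 1 (responses match):", matches)
```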

The second scenario is extremely interesting because even though the subjects started from the same blank slate, they are different beings with unique identities. But let's go further: suppose we had ten AI subjects, all designed the same way so that there is no way to distinguish between them. We randomly choose a subject to enter environment A for a certain amount of time, and repeat this until all of the AI subjects have been in environment A.

After all the subjects have been in the environment, they are asked the same questions we asked in the first experiment. If the questions are answered differently, then each AI has a unique sense of individuality. But if all of the answers are the same, then the viability of a strong AI is brought into question. A sketch of this variant follows.
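Here is a hedged sketch of that ten-subject variant, under the same assumptions as before: Agent, Environment, and the canned questions are hypothetical stand-ins, and counting distinct response patterns stands in for a real comparison of free-form answers.

```python
# Hypothetical sketch of the ten-subject variant. As before, Agent and
# Environment are toy stand-ins, not a real system.
import copy
import random


class Environment:
    def __init__(self, stimuli):
        self.stimuli = stimuli


class Agent:
    def __init__(self):
        self.memory = []

    def live_in(self, env, steps=100):
        # Record stimuli over time, standing in for learning/adaptation.
        for _ in range(steps):
            self.memory.append(random.choice(list(env.stimuli.items())))

    def answer(self, question):
        # Placeholder for a real, free-form response.
        return f"{question} -> judged from {len(self.memory)} observations"


QUESTIONS = [
    "How loud was the room?",
    "Did you like being in a room by yourself?",
]

env_a = Environment({"sound": "quiet", "light": "dim", "temperature": 18})

# Ten indistinguishable subjects, sent through environment A one at a
# time in random order.
prototype = Agent()
subjects = [copy.deepcopy(prototype) for _ in range(10)]
random.shuffle(subjects)

response_patterns = set()
for subject in subjects:
    subject.live_in(env_a)
    response_patterns.add(tuple(subject.answer(q) for q in QUESTIONS))

# A single shared pattern argues against individual identity; more than
# one suggests each subject developed a unique perspective.
print("Distinct response patterns:", len(response_patterns))
print("Evidence of individuality:", len(response_patterns) > 1)
```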

If all of the answers are the same, this raises serious doubts about whether the AI truly has a self-identity, and therefore a level of consciousness on par with humans. Is it possible for all the responses to be the same and for the AI to still have a self-identity? No, for a simple reason: no two humans would answer the questions the same way when placed in the same environment. It should be noted that if we had an objective, precise definition of intelligence and consciousness, we could find ways to measure them directly in a controlled setting. This thought experiment was presented to illustrate the challenges of finding a true test for strong AI.

