I’ve been contemplating and studying the Turing test, which in essence is a way for a human to judge, through an interaction (usually questions posed by a human and answered by an unseen AI, or artificial intelligence computer), whether the other party is a real person or just an AI pretending to be a person
There have been countless arguments as to why the Turing test is flawed & countless suggestions for its improvement
But my theory (the Love Bunny Artificial Intelligence Test, or LBAIT) proposes that the real test of whether an AI is sentient or self-aware is whether the AI is able to recognise that an interaction it witnessed was with a person or an AI, with an accuracy equivalent to the average accuracy of a substantial random selection of humans capable of witnessing the same interaction/actions. (The senses of the testers and the tested need to be matched: if the AI being tested is capable of hearing language, then it can’t be compared with, say, deaf persons; if it is capable of vision, then it can’t be compared with the results of blind persons)
So it basically reverses the Turing test: it’s not the person (a human) judging the AI, but in fact whether the AI can distinguish between humans and AIs as accurately as humans can
The test is most often conducted using language, but should actually be sense-matched to the AI’s sensory abilities
And the last step is whether the AI can use the same process to determine that it was itself a person when its own answers are anonymously projected back at itself. (Please note it is not allowed simply to use memory of its own answers, but must apply the LBAIT to those answers, as memory does not denote intelligence or self-awareness)
No offence intended to the person, but imagine the well-known physicist Stephen Hawking, who has a degenerative disease that progressively inhibits his movement. Let’s imagine the disease progressed to a point where he was unable to make any movement of any kind, and we didn’t have the machines to detect any brain activity. We know that he is probably sentient, but at the same time we can’t be sure, as he may have suffered some kind of brain degeneration too.
So, using the above scenario, we can postulate that there are persons who will not pass the test even though they have the intelligence and the senses, but lack the ability to demonstrate, via observable action, their ability to recognise other persons
The LBAIT judges self-awareness (or sentience, or intelligence) by whether something (an AI) can recognise self-awareness in others & apply that recognition process to itself to determine an outcome
So, for example, instead of the Turing test having a human determine whether the other party was a robot/machine/computer, it would be the AI determining whether the other party was human or not, with its results being as accurate as those of a large sample of humans.
The theory being that one of the features of being a sentient being is being able to recognise sentience in others
The test doesn’t have a fixed duration; it continues until the AI has made its determination. However, if the AI takes more than what a human would consider a reasonable amount of time, then it has already failed the test
To make the test as accurate as possible, all tests should be conducted with information/actions abstracted and biases removed. For example, it would be easy to determine that someone was a machine if some participants had human voices and others had robotic voices. All participants need to be reflected through an abstraction device (such as all test answers being given only as text), and the abstraction methods need to be matched for both the person and the AI being tested, which is determined beforehand by their sense capabilities. E.g. can they hear, or speak, or see, & can they make an action that can be equally seen, heard or felt by all parties involved
AIs that fail the test are not to be declared not self-aware or not a person, but rather that the AI could not be determined to be self-aware. That result could come about because the AI may in fact not be self-aware / intelligent, or because it has a limited ability to express its actions or to understand the test or its purpose. E.g. a severely mentally disabled person may be self-aware but unable to understand the test. In that situation it’s not the test that failed, but an inability to carry it out & get an accurate result.
Test in practice
Get 4 groups.
1) humans trying to convince the AI that they are human
2) humans trying to convince the AI that they are NOT human
3) Other AI (machines, robots, programs) trying to convince the AI that they are human
4) Other AI (machines, robots, programs) trying to convince the AI being tested that they are NOT human
AIs in the groups may not be of the same type as the AI performing the test
Depending on the sense capabilities of the AI being tested, it will be allowed to ask the four groups, which are randomly mixed, to perform actions of its choice. I.e. if the format is question and answer, then it can ask any questions it wishes until it has determined that the questioned party is either a person or an AI. Undetermined results are not allowed & a best guess is required in that situation
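The trial procedure above can be sketched in code. This is a minimal illustration, not a specification: the participant pool, the `judge` callable and the text-only `respond` functions are all hypothetical names I've introduced, and the interaction is reduced to a single exchange for brevity.

```python
import random

def make_pool(humans_honest, humans_deceptive, ais_deceptive, ais_honest):
    """Mix the four groups into one anonymised, shuffled pool.
    Each participant is a callable that answers in text (the abstraction
    rule), paired with the ground truth of whether it is human."""
    pool = (
        [(p, True) for p in humans_honest]        # 1) humans claiming to be human
        + [(p, True) for p in humans_deceptive]   # 2) humans claiming to be AI
        + [(p, False) for p in ais_deceptive]     # 3) AIs claiming to be human
        + [(p, False) for p in ais_honest]        # 4) AIs claiming to be AI
    )
    random.shuffle(pool)  # the AI must not know which group it faces
    return pool

def run_lbait_trials(judge, pool):
    """One pass of the LBAIT: the judge interrogates each participant and
    must return a binary verdict (True = 'human', False = 'AI').
    Undetermined verdicts are not allowed; a best guess is forced."""
    correct = 0
    for respond, is_human in pool:
        verdict = judge(respond)  # judge questions the participant via respond()
        correct += (verdict == is_human)
    return correct / len(pool)  # the judge's accuracy on this pool
```

The same loop would be run, unchanged, for each human in the comparison panel, so that AI and humans are scored on an identical footing.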
The test is then performed by a cross-section of humans, to compare their results with the AI’s results
To pass the test, the AI needs to meet or exceed the scores of any randomly selected group of humans of sufficient size to be statistically accurate.
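A hedged sketch of that pass criterion, assuming accuracies have already been collected as fractions in [0, 1]: the AI passes if its accuracy meets or exceeds the human panel's mean, allowing a small margin of standard errors for sampling noise. The margin is my assumption, not part of the proposal as stated.

```python
from statistics import mean, stdev

def lbait_pass(ai_accuracy, human_accuracies, margin_ses=1.0):
    """Return True if the AI's accuracy meets or exceeds the human panel's
    mean accuracy, minus `margin_ses` standard errors of that mean.
    margin_ses=0 gives the strict 'meet or exceed' reading of the text."""
    n = len(human_accuracies)
    panel_mean = mean(human_accuracies)
    # standard error of the panel mean; zero margin if only one human
    se = stdev(human_accuracies) / n ** 0.5 if n > 1 else 0.0
    return ai_accuracy >= panel_mean - margin_ses * se
```

A larger human panel shrinks the standard error, so the criterion tightens as the comparison group grows, which matches the requirement that the group be of statistically sufficient size.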