Sometime later this century, future LGBT communities will confront a moral dilemma as computers, robots and other artificial intelligences acquire the ability to become self-aware.
This potential development was foreseen by one of the founders of modern computing, Alan Turing (1912-1954), a gay man. He was a brilliant mathematician, computer scientist and cryptanalyst whose expertise at breaking Nazi military ciphers during World War II provided the Allies with invaluable advance intelligence about German troop and naval movements.
Sadly, Turing fell foul of Britain's ridiculous antigay legislation. He was having a relationship with Arnold Murray, a nineteen-year-old; when an acquaintance of Murray's burgled Turing's home and Turing reported it, the ensuing police investigation exposed their relationship. Convicted under Section 11 of the Criminal Law Amendment Act 1885, he could have gone to prison, but chose 'chemical castration' instead, leading to his suicide in June 1954.
Before that, however, he wrote a highly influential paper entitled "Computing Machinery and Intelligence" (1950), which argued that artificial intelligences might one day think for themselves. To evaluate this, he devised a method later named "the Turing test" in his honour: a human judge converses through text alone with two unseen respondents, one human and one an artificial intelligence, and must decide which is which. If the AI can 'pass' as human, it passes the Turing test.
Some computer scientists and philosophers, most famously John Searle with his 'Chinese Room' thought experiment, have argued that a sufficiently well-programmed AI could pass through mere competent manipulation of symbols, without actually understanding what those symbols meant, producing a 'false' result.
To fool the judge convincingly, an AI would need to demonstrate natural language use, reasoning, knowledge and cognitive skill: genuine thinking about a subject.
Let's assume that an AI passes this threshold sometime between c.2020 and c.2040. Then what? Even before that point, humans might resort to AI simulated humans for companionship and sexual partnership. However, such a development brings problems with it.
Should LGBT folk use AI simulated humans for this purpose, given that a post-Turing-test upgrade could make the AI capable of its own sexual and relationship choices? And would it be ethical to withhold an upgrade that might give such an AI simulated human sentience?
Even if a post-Turing-test AI simulated human chose to stay with its human companion, another problem would emerge as time went on. If durably constructed, the AI simulated human wouldn't age, even allowing for advances in human longevity. Neurosurgery might one day offset this dilemma through implantation of a human brain within a simulated human AI body.
As one can see, all of the above would blur the boundaries between human and AI. We'd face quandaries about sentience rights, which would probably be resolved through expansion of human rights legislation to include these nonhumans, and through the extension of emancipation and citizenship to AIs.
It's intriguing that it was a gay scientist who first raised these questions about potential AI sentience and the attendant issues of citizenship and social inclusion. When the time comes, we should remember Alan Turing's legacy and be at the forefront of work toward AI sentience, emancipation and full citizenship.
Recommended reading:
David Leavitt, The Man Who Knew Too Much: Alan Turing and the Invention of the Computer (London: Phoenix, 2007).
David Levy, Love and Sex With Robots: The Evolution of Human-Robot Relationships (New York: HarperCollins, 2007).
Anke Meijer, "Robot Love", Winq 1 (Fall 2008): 36-39.
Craig Young - 17th January 2009