Though an artificially intelligent (AI) robot might someday look and behave just like a human, how would its internal ‘mental’ states compare with a human’s? Is it possible for a robot that behaves in a way a human interprets as kindness or empathy to actually be internally loving, kind, compassionate, sympathetic, attached, and so on? Can love be stored in a file on a computer disk, and what would such a file contain? Would the file be designed by someone, or constructed inductively from a history of sensor (infrared, microphone, etc.) data organized by machine learning algorithms? Could those algorithms modify themselves, and if so, to what extent?
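The question of what such a file would contain can be made concrete with a toy sketch. In the hypothetical example below (the sensor features, data, and filename are all invented for illustration), a trivial perceptron-style learner is fitted to made-up sensor readings and its state is written to disk; the ‘learned affection’ amounts to nothing more than a handful of numbers.

```python
import json

# Hypothetical sensor history: (infrared_warmth, voice_softness) -> affectionate?
# All features and values here are invented for illustration.
history = [
    ((0.9, 0.8), 1),
    ((0.8, 0.9), 1),
    ((0.1, 0.2), 0),
    ((0.2, 0.1), 0),
]

# A minimal perceptron: the entire "mental state" is two weights and a bias.
w = [0.0, 0.0]
b = 0.0
for _ in range(20):  # a few passes over the sensor history
    for (x1, x2), label in history:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

# "Love stored in a file" reduces, in this sketch, to a few learned numbers.
with open("affection_model.json", "w") as f:
    json.dump({"weights": w, "bias": b}, f)
```

Whether such a file of numbers could ever constitute love, rather than merely predict affectionate-looking behavior, is exactly the open question.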
Similarly, can different species (or even different people) ever really empathize with or understand each other, and does it matter? Does anyone care whether the happiness of a dog is the same as the happiness of a human, as long as the dog is wagging its tail or behaving affectionately, and as long as we believe the dog isn’t secretly plotting to hurt us?
I suspect that robots might someday reach this ‘close enough’ stage, where humans develop a sufficient degree of apparently mutual love and trust with them to live with them. But I also suspect that robot minds and bodies will evolve differently, and much more rapidly, than biological ones (unless, perhaps, an artificial version of a human is made), so that our communications with robots will resemble inter-species communications, and it might be hard to trust that a robot’s intelligence and/or motivations didn’t drastically change overnight. Limited hardware capabilities might provide some comfort to humans, much as the number of neurons limits the complexity of biological thought, though computer processors are becoming smaller and denser by the day.