AI should not simulate real humans
I was reminded of the “Booby Trap” episode of Star Trek: TNG, in which Geordi (accidentally) gets the computer to simulate Dr. Leah Brahms, a designer of the Enterprise’s warp engines, on the holodeck to help him solve a conundrum. Geordi decides to imbue the simulation with (what the computer thinks is) her personality, and proceeds to fall in love with it.
In a rare continuity play for TNG, the real Dr. Brahms appears the next season. Geordi excitedly approaches her, expecting to develop a relationship that parallels his holodeck experience, but the real Dr. Brahms turns out to be unfriendly…and married. She later discovers Geordi’s wildly inaccurate, romanticized simulation of her, and is justifiably horrified at the invasion of privacy.
This episode aired in 1989, but was prescient. There is a strong desire today to use AI to simulate loved ones who have passed, or historical figures, or celebrities, or someone out of reach.
The ethics of simulating a real person with AI are fraught, to say the least. At best, you get an inaccurate caricature that projects unfair stereotypes: a “chat with Abraham Lincoln” that delivers a grotesque, trope-laden simulation of a historical figure. At worst, you turn your ex’s likeness into a subservient puppet that says and does whatever you want.
The realism of the simulation is what makes this space so problematic today: when a product successfully crosses the uncanny valley, the user risks forgetting that they are interacting with a mere simulation of a person and believing that they are interacting with the actual person. I think that crosses an important boundary.
I view AI products with suspicion, but I view AI products that simulate actual people with disdain. An AI that purports to let you “talk to your deceased loved ones” is a cruel and disrespectful parlor trick, and should be met with disgust.
Anyway, I love Star Trek.