Riva-Melissa Tez doesn’t fit the stereotype of the Silicon Valley techno-utopian. For one thing, she makes eye contact. For another, she’s a woman in an overwhelmingly male sector. At 27, she is a cofounder of Permutation Ventures, an investment fund and incubator for companies hoping to harness artificial intelligence (when it arrives) for humanity’s betterment. Her atypical path to San Francisco’s futurist culture began in London, where she was born and spent four years in a homeless shelter. After studying philosophy at University College London (and cofounding a Notting Hill toy store), Tez moved to Berlin, where she was drawn to the transhumanist community, whose conception of the future seemed cribbed from science fiction: immortality, the merging of machine and man, human engineering, and so on. Tez—who has lectured on such topics at Stanford and Oxford—here explains why she expects nothing less, for herself and her fellow man, than super-health, super-longevity, and super-happiness.
Tell it to us straight: Are the robots going to take over? Are we engineering our own doom? You have systems right now that could potentially take down markets and start wars. There are huge risks. We just have to make sure people are thinking about using things like machine intelligence to improve the human condition. AI doesn’t have inherent goals, so the thing that poses the most risk is still the humans programming it.
How certain are you that we’ll ever achieve artificial intelligence? I’m probably more skeptical than most people in the field. Right now, we’re just building things that mimic certain aspects of learning, but we haven’t defined an overall principle of intelligence. In the early 20th century we wanted to build planes, so we looked at birds. We built flapping machines. We didn’t have a principle of flight, but then we worked out aerodynamics and the mechanics of lift, and we managed to build planes that work. The same thing is happening right now in AI.
You’ve been called a “transhumanism advocate.” How do you define the term? I actually don’t like the word transhumanism that much because it’s been corrupted over time. Back in the day it referred to the idea of using technology for the sake of improved longevity, improved healthcare, and improved happiness. Today, transhumanism has become a meme about how we’re going to become cyborgs and live forever, when it was really about using tech to transcend some human limitations.
This may be a stupid question, but what’s wrong with humans now? There’s nothing wrong with humans now. If anything, being human is really awesome, but the systems that we’re in aren’t necessarily awesome. Healthcare is not awesome. The educational system is not awesome. So there are ways that technology can improve those things and improve our quality of life. Would I rather be 100 and have dementia, or would I rather be 150 and have full mental capabilities? We haven’t cured cancer; we haven’t cured many of the diseases that have plagued us for the last century. It’s more about eliminating those and increasing our ability to enjoy the human experience.
Too often it’s our own decisions that plague us. What do you make of talk about altering human nature through technology? We had this debate in our office the other day: Which is more important, AI (artificial intelligence) or IA (intelligence amplification)? Maybe we can’t even build AI until we figure out IA so we can actually understand what intelligence is. It’s kind of like a chicken-and-egg problem. It is a huge issue. I think you have to pick a corner on that one. I pick AI because it seems a little less complex.
If you’ve seen HBO’s Silicon Valley, you’ve heard the jokes about wanting to make the world a better place. Do they hit close to home? It drives me nuts in San Francisco when anyone’s like, “Hey, we’re building Pinterest for cats and making the world a better place.” That’s one of the reasons that I don’t go to that many events here. People have understood the merits of selling people on a narrative about being world-changing, so it doesn’t really matter what you do anymore, because people are going to sell it as that. We don’t go around saying, “Hey, we’re going to change the world.” We’re just like, “AI has the power to do this.” It’s a logical argument.