Welcome to Philly’s Robot Revolution

Meet Flo. She’s going to take out your trash, help you exercise, have sex with you and be your BFF. (And we do mean forever.) And she’s not alone. From driverless cars to bomb-carrying machines, the robots have arrived. And everyone wants to know, are they going to hurt us or help us?

Photograph by Adam Jones

“You can stop if you want to,” Katherine Kuchenbecker says, smiling a little as Graspy stretches out his metal arm to high-five me. This is problematic because I’m trying to take notes, and if I keep writing I can’t meet Graspy’s hand, or sort-of hand, which is more like a metallic claw equipped with sensor pads. I could shun him, as Kuchenbecker suggests — just let that appendage hover in mid-air as I jot down his price (about $400,000, Kuchenbecker says, “the cost of a house”) and the name of the company that made him. But Graspy is the first robot I’ve ever met, and I don’t want to hurt his feelings. Even though, of course, he doesn’t have any feelings. He’s a robot. But I’m a human being.

Graspy lives — well, exists — in the University of Pennsylvania’s GRASP (General Robotics, Automation, Sensing and Perception) Lab, the research center where Kuchenbecker works as director of the Penn Haptics Group. He’s considered a humanoid robot, though the emphasis should be on oid. He’s big and square and doesn’t have a face, though my mind can’t help but make one out of two round holes and a wide rectangular slot where his face would be if he had one. He’s wearing a jaunty Xbox hat.

Kuchenbecker and her students have spent hundreds of hours working through complex mathematical formulas so that Graspy and I can play our little clapping game: high-five left, high-five right, high-five both hands! High-five left, high-five right, high-five both hands! I’d hate for Graspy to know it, but frankly, I’m underwhelmed. Having spent the past few weeks soaking in the dire warnings of robot alarmists like Stephen Hawking and Elon Musk (who once called the development of artificial intelligence “summoning the demon”) and the giddy prognostications of robot devotees like tech guru Ray Kurzweil (“the biotechnology and nanotechnology revolutions will enable us to eliminate virtually all medical causes of death”), I was expecting … more. More natural movement, for one thing. More, I don’t know — speed? Range? Human-ity?

From left: Katherine Kuchenbecker with Graspy; Michelle Johnson with Flo. | Photograph by Adam Jones

Because there’s no question robots are having a moment. The federal government is in the midst of a major initiative to speed up development of the metal minions, focusing on the ways they’ll interact with and work beside humans. Next month will see the first-ever Cyborg Olympics, in Zurich, with competitions between such human/machine hybrids as paraplegics wearing computerized exoskeletons and arm amputees who’ll slice and butter bread using their robotic prostheses. The New York Times is probing the ethics of programming driverless cars. (In simulations, the humans inside them show a distressing preference for self-preservation even when it means the car mows down a child.) In Germany, researchers are creating an electronic “nervous system” to allow robots to feel pain. (It’s how we learn, they explain.) Leaps forward in a range of fields — computer vision, touch sensors, speech recognition, artificial intelligence, mobility — would seem to put us on the verge of a robotic revolution. Even Musk has come around. While last year he was one of hundreds of eminent figures to sign a letter warning of the dangers such a revolution poses for the human race, a research group he backs recently announced it’s developing a “household robot” to wash your dishes and take out your trash.

I’d endlessly high-five any robot that would haul my family’s takeout containers, cat food cans and old newspapers to the curb every morning. On the other hand, in Dallas in July, a robot strapped with a bomb blew up a guy who’d shot and killed five police officers. It was, so far as anyone knew, the first real, actual, non-sci-fi use of deadly force by a robot in the United States. And it opened a window onto the morass of ethical dilemmas and deep, dark fears that robots evoke. Will they ultimately help us, or hurt us? That’s what I set out to ask the Philly scientists who are already living and working with robots every day.

ROBOTS — IN THEORY, at least — have been around for a long time. There are all sorts of ancient legends about artificially created helpmates and companions: Pygmalion’s statue-come-to-life girlfriend, clay golems, soldiers crafted of gold, talking mannequins. Aristotle theorized that automatons could one day rid the world of slavery by performing unpleasant menial tasks; Leonardo da Vinci drew up designs for a mechanized knight that could sit up and move its head. The word “robot” comes from the Czech word robota, or “compulsory labor.” Its roots run through an Old High German word, arabeit, that means “trouble.” Just saying.

The 20th century’s technological explosion, plus lots of sci-fi books, TV shows and movies, raised real hopes for the creation of a machine that could act and behave like a human being — a goal set at the landmark Dartmouth Summer Research Project on Artificial Intelligence of 1956. But scientists have been stymied by their inability to infuse robots with so-called “commonsense knowledge” — the kind we pick up from birth by interacting with our surroundings: Water is wet, stoves are hot, humans get really mad if you pinch their skin.

So they’ve turned to assembling massive databases that can be downloaded into robots to simulate such learning. That’s one of Katherine Kuchenbecker’s current projects at Penn: With a grant from the National Science Foundation, she’s assembling data to help robots “feel” with their eyes, so they’ll be able to tell just by looking at an object whether it’s hard or soft, rough or smooth, warm or cold — the way humans do. “You know how when you pick up a soda can that’s full, you imagine how it will feel vs. one that’s empty?” she says. “Your brain is constantly anticipating the physics of the world. We want to enable robots to do that.”
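To make that concrete, here’s a minimal sketch of the idea in code (not Penn’s actual system; the feature values and objects are invented): given a few objects whose feel has already been measured by touch, a robot estimates the stiffness of a new object from its visual features alone, by averaging the most visually similar things it has handled before.

```python
# A minimal sketch of "feeling with your eyes" (invented values, not
# Penn's actual system): map visual features to measured stiffness,
# then predict how an unseen object will feel before any contact.
import numpy as np

# Each row: [glossiness, texture_variance, edge_sharpness] pulled from
# an image; each label: stiffness measured with a touch probe (N/mm).
visual_features = np.array([
    [0.9, 0.1, 0.8],   # full soda can: glossy, smooth, crisp edges
    [0.2, 0.7, 0.3],   # kitchen sponge
    [0.8, 0.2, 0.9],   # ceramic mug
    [0.1, 0.8, 0.2],   # plush toy
])
measured_stiffness = np.array([50.0, 0.5, 60.0, 0.3])

def predict_stiffness(new_features, k=2):
    """Average the stiffness of the k visually most similar objects."""
    dists = np.linalg.norm(visual_features - new_features, axis=1)
    nearest = np.argsort(dists)[:k]
    return measured_stiffness[nearest].mean()

# A shiny, smooth, sharp-edged object reads as stiff before any touch.
print(predict_stiffness(np.array([0.85, 0.15, 0.85])))  # -> 55.0
```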

I first heard about Kuchenbecker in 2010, when Popular Science put her on its “Brilliant 10” list of the country’s top young scientists. Her GRASP Lab research group is dedicated to haptics, which is essentially the science of fooling humans into thinking robots have a sense of touch. A mechanical engineer, she’s worked under one of the developers of the Da Vinci robotic surgery system as well as on integrating haptics into virtual reality and simulating textures on computer screens. (The New Yorker recently dubbed her the “Queen of Haptics.”) Kuchenbecker’s abiding interest in robots was born of fear; on a childhood visit to Universal Studios in California (“I was maybe two”), she was so terrified by an animatronic robot that she started having nightmares. Her mother, a developmental psychologist, was determined not to see her scarred by the experience and bought her robot toys, encouraging her to take them apart to see how they worked. The conversion therapy took. Now, she’s trying to bring the rest of the human race into the pro-bot fold.

The fact is, a lot of us find robots scary. We know the mayhem that can result when man-made creatures are set loose in the world — look at Frankenstein’s monster! But unlike sci-fi writers and Hollywood directors, roboticists don’t appear at all worried about their handiwork wiping out mankind. “Computer programs and robots only ever do what you tell them to,” Kuchenbecker says. End of story.

That doesn’t mean we all feel as comfortable around them as she does. So Kuchenbecker is working on that, too. She leads me through the GRASP workspace at 34th and Walnut, across a miniature soccer field where Penn’s midget robot team — the UPennalizers — trains, past a table covered with plastic cups and glasses spray-painted in various hues (“We’re trying to get robots to see clear objects”) and a disembodied arm covered in simulated skin. Our quarry is Baxter, who comes much closer than Graspy to what you picture when you think of a robot. Baxter is made of bright red metal, with a blue computer-screen face. The face is minimal — cartoon eyes, spot nose, mouth with a little cat tongue — but it doesn’t take much to elicit connection from a human being. We’re hardwired, deep in our DNA, to see that eyes-nose-mouth pattern in everything from the moon to light sockets to the grilles of automobiles. Born solitary, we’re desperate for company.

And we like some faces better than others. That’s the focus of one of Kuchenbecker’s PhD students, Naomi Fitter, who’s been experimenting with different expressions and background colors on Baxter’s face, as well as imbuing his hands with varying degrees of resistance when he plays clapping games with humans, to see what we find most appealing. It’s no surprise that red faces are a turnoff. (Across cultures, they read as angry.) What is surprising is that in experiments, Fitter’s test subjects prefer greater, not less, stiffness when their flesh encounters Baxter’s sensors. “The stiffer response seems safer and less dominant,” Fitter explains. “When it was less stiff, people attributed all kinds of intelligence to the robot, like it was tracking their hands.”
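For a sense of what that “stiffness” knob is, mechanically, here’s a minimal sketch of the standard approach — a virtual spring and damper (an impedance controller) that decides how hard the hand pushes back when a palm displaces it. The gains are illustrative, not Baxter’s real parameters.

```python
# A minimal sketch of variable hand stiffness (illustrative gains, not
# Baxter's real parameters): a virtual spring-damper decides how hard
# the hand pushes back when a human palm displaces it.
def contact_force(displacement_m, velocity_m_s, k=300.0, b=10.0):
    """Spring-damper law: force opposing displacement and motion.
    Higher k feels firm and predictable; lower k feels yielding."""
    return -k * displacement_m - b * velocity_m_s

# A 2 cm press at modest speed, under stiff and compliant settings:
print(contact_force(0.02, 0.1))          # stiff: -7.0 N, firm pushback
print(contact_force(0.02, 0.1, k=50.0))  # compliant: -2.0 N, feels "alive"
```

The firm setting is the one Fitter’s subjects read as “safer and less dominant”; the yielding one is the setting that tempted people to imagine the robot was tracking their hands.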

Baxter only costs a cool $25,000 — “a car, not a house,” Kuchenbecker says. He’s newer than Graspy, and robots are like high-def TVs: Prices come down over time. And he is a him, not an it. Roboticists differ from the general public in lots of ways, but the ones I spoke with all used gendered pronouns to refer to their charges. They don’t, however, imagine more is going on inside them than actually is. “I’ve never had the feeling that Baxter comes to life after I leave the lab,” Fitter tells me. Kuchenbecker compares working with robots to watching a movie: “You feel sympathy for the characters even though you know they’re not real.”

But even if you’re not prone to nightmares about robots going rogue, there are reasons to fret about them. For one thing, they’re displacing humans: working in factories, serving as waiters in restaurants, greeting guests at hotels, even chanting mantras at a Buddhist monastery in China. Later this year, the University of London will host the Second International Congress on Love and Sex With Robots. (Yes. It’s a real thing. Or will be.) Artificial intelligence researchers are agog over the resurgence of “neural networks” — computer programming loosely modeled on how we think the human brain works, with layers and layers of algorithms that branch out and connect and reconnect, allowing robots to observe the surrounding world and learn from it the way a child does, without downloading databases. It may be the key they’ve long been searching for.
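To make the layered picture concrete, here’s a toy two-layer network in plain Python; it learns the XOR pattern from four examples by repeatedly nudging its connections, rather than having the answer downloaded into it. It’s a stand-in for the idea, not any lab’s production system.

```python
# A toy neural network (a stand-in for the idea, not any lab's system):
# two layers of weighted connections learn XOR from examples instead of
# having the answer downloaded into them.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR target

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # layer 1: 2 inputs -> 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # layer 2: 8 units -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)             # forward: each layer transforms the last
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backward: nudge every connection
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0. 1. 1. 0.]: learned, not downloaded
```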

Naomi Fitter with Baxter | Photograph by Adam Jones

The problem is, science fiction and Hollywood have given us an image of robot capability that far exceeds the current reality. “Robots are complicated,” says Kuchenbecker. “It’s hard to get them to work well. We’re still trying to give them abilities that people already have.” From her viewpoint as a mechanical engineer, the human body is the true marvel: “How long it can operate, with what exactness — our muscles are amazing. They exceed anything a motor can output.”

She and Fitter foresee hundreds of uses for robots in the future, not just as industrial workers and household servants but as teachers, aides for the elderly and infirm, caregivers for autistic children, who respond well to their programmed predictability. For robots to fully integrate with humans, though, they need to be warm and friendly and appealing. “Clearly, a robot doesn’t have feelings,” Kuchenbecker says. “But this is how people communicate — with facial expressions and gestures. You try to make the interaction feel more natural. If the person or the robot makes a mistake, it would be great if the robot could realize that, find it funny and respond.”

Isn’t it … deceptive to program an it, a thing, to smile and laugh so users like it more? To trade on our human longing for connection that way? Fitter contemplates the question. “No, I don’t think so,” she finally says. “In some realms, researchers can prepare robots to learn, develop opinions, model the world around them, make human-like decisions. When a robot is acting autonomously, it’s a real thing. What the robots are exhibiting aren’t exactly like human emotions, but they’re real.”

IF YOU’RE SEARCHING for a human-friendly poster robot, a good choice would be Flo, the one Michelle Johnson introduces to the audience at the end of her 2015 TEDxPhiladelphia talk. Flo is as cuddly a robot as you can imagine: short, slight, colorful, equipped with a childlike face and voice. Johnson, the director of Penn’s Rehabilitation Robotics Lab, was set on the road to robots when her grandmother, a Jamaican dressmaker — Flo is named for her — suffered a stroke that interfered with her ability to perform daily tasks. Johnson was inspired to get her BS, MS and PhD in mechanical engineering and work to create low-cost, simple robots to assist patients in rehabilitation — not to replace human therapists, but to maximize their ability to help. “We have an aging population,” she points out. “Therapists are overburdened already.” In some places — rural America, developing countries — robots could make the difference between patients getting therapy or not. And because our health-care system increasingly demands calculable results, robots can provide more quantifiable feedback. That might encourage insurers to continue care instead of cutting it off.

Flo can be used, for example, to help stroke patients by exercising with them — stretching, lifting, reaching. Not all the robots in Johnson’s lab are humanoid, though. Some are more like the apparatuses you work out with at the gym, equipped to provide resistance and measure and record the patient’s response. Research has shown that a damaged brain can reorganize itself to activate unused pathways and connections if it’s challenged to. Robots, Johnson says, are “tools to drive the brain to maximize what it’s already doing. They’re perturbations that help you drive the human body more aggressively, for longer.” And they’ll become more widespread as science gains greater understanding of how the brain and body recover from trauma.
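Computationally, the quantifiable feedback is the easy part. A minimal sketch with invented numbers (not the Rehab Robotics Lab’s actual software): log the patient’s reach on each repetition, then summarize the session for the therapist — or the insurer.

```python
# A minimal sketch (invented data, not the Rehab Robotics Lab's
# software) of the quantifiable feedback a therapy robot can produce:
# log the patient's reach each repetition, then summarize progress.
reach_cm = [18.2, 18.9, 19.4, 19.1, 20.3, 20.8, 21.5]  # one reach per rep

best = max(reach_cm)
gain = reach_cm[-1] - reach_cm[0]
avg = sum(reach_cm) / len(reach_cm)

print(f"reps: {len(reach_cm)}  best: {best:.1f} cm  "
      f"session gain: {gain:+.1f} cm  average: {avg:.1f} cm")
```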

Johnson says patients are intrigued by Flo: “They want to know if the robot speaks, and if they can touch her. They have a lot of questions, and different ideas on how a robot could help.” The more “human” a robot looks, the higher patients’ expectations that it will follow social rules: turn its head toward them, look them in the eye, wait for them to stop speaking before it starts. To Johnson, this is natural: “It’s the same way we treat animals, and even our cars. When we encounter things that have some movement, some animation, we look for a personality. It’s the way we connect with things. It’s the reason we can accept these systems as helpers.” She envisions a future in which a single expert therapist sits at a computer and oversees clinics where robots and nurses interact with dozens of distant patients simultaneously.

To Johnson, clearly, robots are saviors — the exact opposite of a threat to humankind. “Rehab robotics is the wave of the future,” she says. “Robots will help us age better and diminish disabilities of disease and trauma. They’ll touch a lot of lives.”

In one way, it’s a beautiful vision. In another, it’s disheartening to think that we’re working toward outsourcing the care of our sick and frail and mentally disabled to humanoids programmed to mimic laughter and compassion and empathy. It’s all too easy to picture the future as a sort of mid-’80s Romanian orphanage. Do we lose some of our, well, humanness if we shuffle off the tending of our most vulnerable fellow humans that way?

In her book Alone Together, MIT professor Sherry Turkle argues that we do — that the care provided by robots is fundamentally inauthentic:

Authenticity, for me, follows from the ability to put oneself in the place of another, to relate to the other because of a shared store of human experiences: We are born, have families, and know loss and the reality of death. A robot, however sophisticated, is patently out of this loop.

But she also foresees a more subtle danger: that the very predictability of robots will eventually cause us to prefer their company to that of our messy, capricious fellow human beings: “People make too many demands; robot demands would be of a more manageable sort. People disappoint; robots will not.” In her book, she quotes a woman who says of a robotic pet, “It is better than a real dog. … It won’t do dangerous things. … Also, it won’t die suddenly and abandon you and make you very sad.”

A FEW MONTHS BACK, an enterprising doctoral student at Stanford ran an experiment in which she programmed a robot to command people to touch different parts of its, um, body. The robot she used, named Nao, looks a lot like Flo. (The company that makes him describes him as “endearing.”) A funny thing happened when Nao instructed study participants to touch his “areas of low accessibility” — i.e., his nether regions, which, being made of hard plastic and not needed for bodily functions, aren’t any different from his elbow or knee joints. Still, the humans proved to be … reluctant.

They also, as measured by their skin conductance, got aroused.

That doesn’t surprise Temple University sociologist Shanyang Zhao, who’s studied human-robot interaction for years. “If something is human-looking, we say, ‘It’s a human!’” he tells me. “There’s a certain kind of shape that when we see it, we know: It’s just like us.”

Zhao doesn’t think there’s any sense in fighting the impending robotic age: “We’re already there. They’re already here.” ATMs, the voice you get when you call Verizon, that Zoomer toy dog you bought your kid — we’re surrounded by robots. “Anything people can think of to make costs go down and increase efficiency, they’ll use robots for,” Zhao says. And it’s not just that robots are becoming more human; humans are becoming more robotic, what with brain-machine interfaces, implantable GPS systems, pacemakers and artificial hips. So the question isn’t if or when; “It’s what we can do to create a better synthetic social environment” with our new friends.

Whoa.

Imagine, Zhao says, that you’re an invalid who’s alone at home for days at a stretch, attended by a robot — one that’s unfailingly patient and polite and kind: “If I’m thirsty, he brings me water. When I say ‘Thank you,’ he says ‘You’re welcome.’ That’s real, genuine kindness. There’s nothing inauthentic about that.” When you interact with a person, he points out, you can’t tell whether what you perceive is genuine: “Humans are more deceptive. They hide their evil intentions behind a kind smile.” With a robot, what you see is what you get.

Even if we can’t help thinking we’re getting more. At one point in her career, Kuchenbecker worked with a military contractor on bomb-disposal robots. Occasionally, naturally, such a robot got blown up. When that happened, the soldiers who served with it would insist on getting the exact same robot back once it had been repaired — no substitutions allowed. They felt the robot was a comrade in arms, not a machine.

But the robot didn’t feel that way. The robot didn’t feel any way at all. It’s this discrepancy, the one-sidedness of the human-robot emotional bond, that’s bothersome. Zhao predicts we’ll get over that, eventually. In the past, he points out, humans lived face-to-face with a few dozen, maybe a hundred other individuals in a close-knit society: We hunted together, slept together, ate together. Technology has stretched our tribe much wider. And the fear that it will cause us to leave our humanity behind is nothing more than a hangover from this distant past. “Evolutionarily, our genes helped us deal with our changing environment,” Zhao says. “But the environment is changing so fast now that our genes can’t keep up. There’s a gap, and the gap is increasing.” That, he says, is the postmodern human condition: “When we’re searching for humanness, that tribal past is what we’re searching for. We feel we’re missing something only because we used to have it. Our genes keep reminding us. Our loneliness reminds us.” But the gap will close as we evolve alongside our new robot helpers. Synthetic society will one day seem normal: “We won’t be lonely anymore. To be alone together will be fine.”

IN JUNE, Princeton neuroscientist Michael Graziano wrote an article in the Atlantic explaining the Attention Schema Theory of how humans evolved consciousness — that is, the sense of oneself as a being inhabiting a body. A hydra, a relative of the jellyfish, has a simple nervous system known as a “nerve net,” he noted. Wherever you poke it, the response is generalized: “It shows no evidence of selectively processing some pokes while strategically ignoring others.” Move further up the evolutionary ladder, though, and you find animals with centralized controllers that coordinate responses from the senses, so that a dog who hears a whistle, for example, will look toward the sound.

In humans, the cerebral cortex is even further evolved, and allows for what Graziano calls “covert attention”: We can remain cognizant of stuff we’re not even looking at or listening to directly. We keep track of what we’re paying attention to via an “attention schema” — an internal model of what’s going on around us:

Covert attention isn’t intangible. It has a physical basis, but that physical basis lies in the microscopic details of neurons, synapses, and signals. The brain has no need to know those details. The attention schema is therefore strategically vague. It depicts covert attention in a physically incoherent way, as a non-physical essence.

And this awareness, Graziano posits, is “the origin of consciousness” — of our internal sense of self.

This is the best explanation I’ve found for how humans evolved to be conscious creatures. It’s also terrifying to contemplate, because it brings us that much closer to robots with a consciousness — and will — of their own. It’s easy to see how scientists experimenting with those neural-network models of artificial intelligence could organize an automaton’s perceptions to generate “consciousness.” Robots are nothing more than bundles of electrical impulses, but so are we.

This is exactly what Stephen Hawking and his co-signers are worried about. If we induce consciousness in robots — if we cede decision-making to them, make them autonomous — we could be leading mankind straight off a cliff. Kuchenbecker and Fitter scoff at this idea, but mostly because the science of robotics is so far removed, right now, from the possibility. “People should learn a little more about the research that’s actually going on,” Fitter says, “which has its own set of completely different ethical issues.” She’s talking stuff like how to protect the integrity of medical data in shared systems. But would you rather read about that, or about robots run amok?

And if you’ve got a human-like brain inside a robot, why not an actual human brain? This is the concept behind “transhumanism,” the movement that advocates using science to slow (or even stop) the aging process by uniting mind and machine. Your “self,” your consciousness, could someday be downloaded regularly into new, better iterations of your body made of metal and plastic, creating an immortal bionic hybrid. This future in which we’re liberated from the shackles of our genetic heritage is what Ray Kurzweil calls “the Singularity.” Naturally, there’s an opposing camp that argues death is precisely what gives life its meaning, but Kurzweil isn’t having any:

The technology of the Singularity will provide practical and accessible means for humans to evolve into something greater, so we will no longer need to rationalize death as a primary means of giving meaning to life.

Humans, Shanyang Zhao notes, are the most intelligent beings in the universe, so far as we know. But in our lifetimes, computers have surpassed us: “They never forget things. They can analyze big data to find patterns that humans cannot.” We’ve built machines that can beat us at chess and, this year, the far more complex game of Go. Robots are infinitely patient, even if we curse and throw shoes at them; they can be programmed to fill all our emotional and even our sexual needs. “So what is left for us? What do we have that’s human?” asks Zhao, then answers his own question: “Creativity. That’s the last thing. Music. Art. Painting. That is what we can do that machines cannot — for now.”

FOUR MILES FROM ZHAO’S OFFICE, in Drexel University’s ExCITe Lab, Youngmoo Kim is designing robots that make music. “I don’t work with humanoid robots because I like them,” he tells me in his big, airy workspace, where half a dozen students hunch over laptops. “What I’m really interested in is humans.” Kim, who holds undergrad degrees (from Swarthmore) and master’s degrees (from Stanford) in both music and engineering, plus a PhD from MIT, uses robots to study the difference between music-making that’s creative and music-making that is, well, robotic. “We don’t know how we do very basic things when it comes to music and emotion,” he explains. “This is not about replicating what humans are doing. It’s about understanding the complexity.” The precision and programmability of robot performers let him explore “that space between the robotic and the human. There’s no reason for artists to fear technology. It’s going to be part of our future.”

Strewn throughout the lab are freshman end-of-year projects by Kim’s students: “They had to make a self-playing musical instrument,” he explains, showing me a melodica, chimes, a drum set, a recorder, a xylophone. At the end of the term, the students put them all together in a disembodied ensemble that performed — of course — the Harry Potter theme song. “It was kind of … magical,” Kim says.

A few years back, Kim and his students programmed the school’s fleet of humanoid Hubo robots to play the Beatles’ “Come Together” — you can find it on YouTube. While it’s a considerable feat of engineering, it doesn’t pose any threat to Beyoncé. Like Fitter and Kuchenbecker, Kim emphasizes the difficulty of having a robot perform even the simplest task, such as using a mallet to strike a set of pitched pipes: “We thought, how hard can it be to teach a robot to hit a thing? But you have to hit it and then pull away, or the pipe just muffles it. You have to pull back to get a nice ringing sound.”
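As a rough illustration of the timing problem, here’s a sketch of a strike-and-retract trajectory with invented numbers: the mallet descends, touches the pipe for only a few hundredths of a second, then pulls back so the pipe can ring.

```python
# A minimal sketch of the timing problem Kim describes (values invented):
# if the mallet dwells on the pipe, it damps the vibration, so the
# trajectory commands a retreat almost immediately after impact.
def mallet_height(t, t_strike=0.50, dwell=0.02, travel=0.10):
    """Mallet height above the pipe (m) at time t (s): descend, make
    contact for only ~20 ms, then pull back so the pipe rings freely."""
    if t < t_strike:                       # descent toward the pipe
        return travel * (1.0 - t / t_strike)
    if t < t_strike + dwell:               # brief contact window
        return 0.0
    return min(travel, (t - t_strike - dwell) * (travel / 0.2))  # retract

for t in [0.0, 0.25, 0.50, 0.51, 0.55, 0.70]:
    print(f"t={t:.2f} s  height={mallet_height(t):.3f} m")
```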

Each Hubo, Kim explains, comes from South Korea clad in a hard plastic shell: “It looks like the robots in ’60s sci-fi films. But no law says robots have to be metal and plastic.” So he’s working with Drexel’s wearable-tech lab to cover a Hubo robot’s innards with fabric instead, “to make it more inviting and facilitate interaction.” The wearable-tech lab has developed a method of knitting haptic sensors right into the cloth, which gets Kim really excited: “It’s flexible and washable, and it reacts to touch. It senses where you touch it!”
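A minimal sketch of how a knitted sensor grid can localize a touch (invented readings, not Drexel’s actual interface): treat the fabric as a two-dimensional array of pressure values and find the cell pressed hardest.

```python
# A toy sketch (invented readings, not Drexel's actual interface) of
# localizing touch on a fabric skin: the knit is read as a grid of
# pressure values, and the strongest cell marks where you touched it.
pressure = [
    [0.0, 0.1, 0.0, 0.0],
    [0.1, 0.2, 0.1, 0.0],
    [0.0, 0.8, 0.3, 0.1],   # a firm press near row 2, column 1
    [0.0, 0.1, 0.0, 0.0],
]

# Scan the grid for the peak reading.
row, col = max(
    ((r, c) for r in range(len(pressure)) for c in range(len(pressure[0]))),
    key=lambda rc: pressure[rc[0]][rc[1]],
)
print(f"touch detected at row {row}, column {col}")  # -> row 2, column 1
```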

I look at the Hubo. Its blank black fabric face stares back at me. It looks a little like a mummy. A zombie. And it creeps me out in a way that Graspy and Baxter didn’t.

I’ve just tumbled into the Uncanny Valley.

Our emotional response to a robot becomes more positive as the robot becomes more lifelike — right up until the resemblance grows so close that it repels us. The “Uncanny Valley” is the term researchers use for the steep dip this sudden repugnance carves into a graph of affinity against human likeness. The characters in the movie The Polar Express are often cited as examples of lifelikeness taken too far. Making a robot that seems familiar but not too human is a delicate balance, as Naomi Fitter saw in the hand-clappers who found Baxter too responsive. “It’s useful to have some anthropomorphism,” she acknowledges, “but as you move up the ladder, you do have a higher risk of falling into that valley.”
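A toy rendering of that graph in code — an invented curve, not anyone’s real data: affinity climbs with human likeness, then plunges just short of full resemblance before recovering.

```python
# A toy model of the Uncanny Valley curve (invented, not real data):
# affinity rises with human likeness, then dips sharply just short of
# full human resemblance.
import numpy as np

def affinity(likeness):
    """Steady gains from anthropomorphism, minus a narrow Gaussian
    'valley' centered near (but not at) full human likeness."""
    return likeness - 1.5 * np.exp(-((likeness - 0.85) ** 2) / 0.005)

for x in np.linspace(0.0, 1.0, 11):
    bar = "#" * max(0, int((affinity(x) + 0.7) * 20))
    print(f"likeness={x:.1f}  affinity={affinity(x):+.2f}  {bar}")
```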

In the ExCITe lab, Kim cues up his iPhone and holds it to Hubo’s ear. The robot listens for a moment, then recognizes the music that’s playing at the same moment I do — the Chicken Dance! Hubo raises both arms, elbows out, and begins to flap like a grandma at a South Philly wedding. Kim changes the song, and Hubo segues into the articulated stances of the Bangles’ “Walk Like an Egyptian.” Another button-punch, and the robot strikes a John Travolta pose, cloth-covered finger poking the sky to the Bee Gees’ “Stayin’ Alive.” “His musical library is fairly small,” Kim says apologetically.

Hubo, expressionless, featureless, jerks and writhes in his St. Vitus’ dance. “We ourselves are machines,” Kim tells me. “A large part of our emotional behavior is mechanized. It’s an electrochemical response. Something generated in a robot through electricity — is that less real?” He mentions the Three Laws of Robotics laid down by sci-fi master Isaac Asimov (“A robot may not injure a human being. … ”) and the episode of Star Trek: The Next Generation that examines whether an android named Data is property or has rights. “There’s been some deep thinking on this,” he says. “Right now, a robot is a computer with some extra circuits. Can it become more? Greater than the sum of its parts? Yes. It already is more. But it’s not approaching a human being.” Then he qualifies that: “For now. We reserve the right to change our minds in a decade or two.”

The thing is, even if you told him and Fitter and Kuchenbecker and Johnson that the technology they’re working on would come back to bite us in a hundred, 50, 20 years, they wouldn’t give it up. They couldn’t. Not while the challenges, the questions, are still there. Maybe that’s the fundamental difference between human and humanoid: Robots stop when you tell them to. Humans don’t. That’s not how we’re made.

Published as “Meet Flo.” in the September 2016 issue of Philadelphia magazine.