Ask Dr. Mike: Does AI Belong in Healthcare?
From patients using ChatGPT to self-diagnose to computer-assisted medical interventions, healthcare as we know it is rapidly changing. For better or worse? Dr. Mike weighs in.

Meet internal medicine physician Michael Cirigliano, affectionately known as “Dr. Mike” to not only his 2,000 patients, who love his unfussy brilliance, tenacity, humor, and warmth (he’s a hugger!), but also to viewers of FOX 29’s Good Day Philadelphia, where he’s been a long-time contributor. For 32 years, he’s been on the faculty at Penn, where he trained, and he’s been named a Philadelphia magazine Top Doc every year since 2008. Starting today, he’s our in-house doc for the questions you’ve been itching (perhaps literally) to ask a medical expert who’ll answer in words you actually understand. Got a doozy for him? Ask Dr. Mike at lbrzyski@phillymag.com.
Hi again, Dr. Mike! Obviously, people have been using Google and WebMD for a while now to diagnose their health symptoms. And now we have ChatGPT. What’s your stance on all of it?
I said it before and I’ll say it again: “He or she who treats themself, treats a fool.” At the same time, I am all about patients being empowered — and to quote discounted-clothing mogul Sy Syms, “An educated consumer is our best customer.” In other words, knowledge is power, but you need to have some modicum of expertise to help guide you through the shark-infested waters of healthcare.
So, if you want to do a literature search, fine! But if you’re experiencing symptoms X, Y, and Z, you should run them by your doctor, who’ll help tease out the nuances. You’re not going to be able to get seen, tested, or treated by ChatGPT, WebMD, or Dr. Google!
Got it. But has AI ever helped you, as a doc?
AI has infiltrated my life, but one way I’ve been using it for good is with this app called OpenEvidence, which is designed for use by healthcare professionals only. It kicks ass! It’s science-based and provides references — like studies and information published in scientific journals like JAMA and the New England Journal of Medicine — for everything it spits out. It has revolutionized how I deal clinically with patients. If I’m worried about a drug interaction or am wondering about what the best diagnostic test is for a certain condition, I’ll check OpenEvidence. It compiles information that’s accurate, to the point, and trustworthy. For me, that has made my practice of medicine better.
Wow, that sounds way smarter than searching symptoms on Google and getting aggregated “answers” that could be coming from totally bogus sources.
Right, and that’s what I worry about! There’s a tremendous amount of misinformation out there — and when it comes to AI, especially for something as important as your health, I use a phrase Ronald Reagan used: “Trust, but verify.”
That being said, I don’t solely rely on an AI-powered app to make my professional decisions. For example, when it comes to a pregnant patient, I will run down the hall to the gynecology department and consult with real people to make sure a medication is okay for a pregnant person to take. The possibility of getting it wrong scares the shit out of me more than anything! You’re dealing with two!
Why do you think people are so eager to take humans out of the equation? What’s the urgency?
I think part of it is that humans make mistakes, and people believe a computer wouldn’t make those errors. And when you’re dealing with life-or-death issues, we typically want every single piece of technology we can get to keep patients alive. But urgency doesn’t mean we should be relying so much on something that, I believe, is still in its infancy.
What about FDA-approved AI technologies that are innovating and assisting with the medical process, like reading X-rays, decoding mammogram results, and even delivering anesthesia? Is this the future of healthcare, whether we like it or not?
I would be very hard-pressed to get into an airplane with no pilot. I don’t care if it’s the most sophisticated computer system on the planet — I’m not getting on that plane. The same goes for healthcare. You need a human involved for two reasons: to get multiple eyes looking at whatever you’re dealing with, and, more importantly, to connect with patients on a human level. We need humanity here — healthcare is not an algorithm, and it’s not just interacting with data.
I do think we need to embrace technology to a certain extent, though. If I hadn’t, I would’ve never moved to electronic medical records. Also, humans make mistakes, and maybe that’s where the computer or AI will assist or correct. Maybe there’ll come a time when X-rays don’t need verification by a doctor because the AI software is so good and so accurate.
The bottom line is: I still want a human involved. What happens when your mammogram detects cancer? Do you get an impersonal email saying, “You have cancer. See you later!” and then a robot performs your surgery? I believe wherever we’re headed should be a blend — one that doesn’t solely rely on computers and that doesn’t take the human touch out of the equation. At the end of the day, we need people. When you’re nearing your end, you’re thinking about family and your legacy …
And hopefully a computer screen isn’t the thing in front of you saying, “Alright, time’s up! Goodbye!”
Oh, God! Let’s hope not!