The development of next-gen AI is moving so rapidly that it’s beginning to show up in areas that most of us never previously considered — like hospitals. While some view this as a positive sign, citing 24-hour availability of automated chatbots, precise monitoring of patient vital signs, and standardized action plans, others believe that AI systems are devaluing and degrading modern healthcare.
Potential benefits of using AI in healthcare
Some members of the Trump Administration believe that various types of AI models can be used effectively in hospitals and other medical settings. Proponents of AI technology point to widespread understaffing at hospitals and care facilities around the nation as a catalyst for adoption, arguing that these systems can help address staffing shortages, burnout, and turnover, and can do so at an affordable rate.
Robert F. Kennedy Jr., who currently oversees the U.S. Department of Health and Human Services, was recently quoted by the Associated Press as saying AI nurses are “as good as any doctor,” particularly for healthcare in rural areas.
Dr. Mehmet Oz, who was recently nominated as Administrator of the Centers for Medicare and Medicaid Services, suggests that generative AI tools can “liberate doctors and nurses from all the paperwork.” Dr. Oz has previously faced numerous lawsuits and a congressional hearing for promoting unproven medical treatments and spreading misinformation.
Concerns and risks of using AI in healthcare
Many nurses and medical professionals disagree with Kennedy and Oz, including members of National Nurses United (NNU), the largest union of registered nurses in the United States.
While some nurses support the use of AI in theory, they argue that the current technology is not sufficient to replace trained and experienced medical professionals. For instance, even the most sophisticated AI agents cannot pick up on body language, facial expressions, odors, and other subtle signs that often accompany particular diseases and medical issues. There have also been instances of current-generation systems producing incorrect diagnoses.
Regarding AI and mental healthcare, The New York Times recently reported that American Psychological Association chief executive Arthur C. Evans Jr., in a presentation to the Federal Trade Commission, cited court cases involving two teenagers who had consulted with “psychologists” on the Character.AI app. In Florida, a 14-year-old boy died by suicide after interacting with the AI chatbot. Another teenager, a 17-year-old boy in Texas diagnosed with autism, became violent toward his parents after several sessions with the chatbot.