The words “artificial intelligence” are now ubiquitous. The technology’s influence has risen dramatically in recent years, and its impact is being felt in healthcare and many other job sectors. In healthcare, artificial intelligence, or AI, has the potential to make drug discovery faster and cheaper and to improve diagnoses and treatments. How can patient privacy be protected, and how can AI be used fairly? How might AI improve patient care, increase efficiency, and reduce costs? These are questions that need thoughtful discussion.
Dr. Koushik Kasanagottu, a public health expert and assistant clinical professor in the Department of Internal Medicine at the UC Riverside School of Medicine, addresses questions pertinent to AI in medicine in the Q&A below. Before joining UCR as a community-based faculty member, Kasanagottu was a fellow at Harvard Medical School and the U.S. Senate, where he honed his expertise in healthcare policy and innovation. He is a founding member and physician advisor for DocAide, a healthcare AI startup focused on transforming clinical decision making.
Q: How is AI transforming healthcare?
AI has grown exponentially in the healthcare industry and is significantly transforming it by streamlining administrative tasks and improving documentation. In my own practice, I use an AI scribe daily to document patient visits. This allows me to spend more face-to-face time with patients rather than focusing on extensive notetaking, improving the patient experience and the quality of the encounter. However, these tools still require clinician oversight to ensure accuracy and quality. Looking ahead, I envision AI tools augmenting decision making, supporting clinical reasoning, and reducing diagnostic errors.
Q: How are doctors and other medical staff using AI? How are patients using it?
Doctors and medical staff are increasingly integrating AI tools into various aspects of care. This is just the beginning. As we learn more about the technology and how it is used, we will be able to bring it into clinical practice as clinical decision support. However, integration carries risks and will require clinician oversight.
Q: What is AI’s potential in the medical field? How might it define its future?
As we continue to develop and refine AI-driven tools like scribing software and clinical decision support, we’re also seeing AI systems that can analyze imaging, predict disease progression, and recommend treatment plans based on vast datasets. In the future, AI will likely become an integral part of both clinical decision making and patient management, supporting healthcare providers in real time and reducing the healthcare burden, particularly in underserved areas. AI can play a significant role in reducing healthcare disparities by helping provide expert-level care in remote locations where specialists are scarce. However, there are risks. Because AI is only as good as the data it is trained on, underserved and marginalized groups can be underrepresented in training datasets, resulting in bias and inaccuracies.
Q: What might be the best way to regulate AI in medicine?
Regulating AI in medicine requires a balance between encouraging innovation and ensuring patient safety. It will be essential for regulatory bodies like the Food and Drug Administration (FDA) to establish clear, standardized guidelines for AI validation, ensuring that these technologies are thoroughly tested for accuracy and safety before deployment in clinical settings. We also need a set of standards that new AI products must meet before they are integrated into clinical practice. The FDA is currently proposing several regulatory frameworks to ensure the safe deployment of AI products in patient care. Ongoing monitoring of AI systems post-deployment will also be crucial to assess long-term effectiveness and mitigate unforeseen risks. There needs to be robust testing and monitoring for accuracy throughout an AI system’s use, going beyond patient and provider satisfaction scores.
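To make the idea of post-deployment monitoring concrete, here is a minimal Python sketch of how a deployed tool’s accuracy could be tracked against clinician-confirmed outcomes. The function names, the rolling-window design, and the 90 percent threshold are all hypothetical assumptions for illustration, not taken from any FDA framework or vendor product.

```python
# Minimal sketch of post-deployment accuracy monitoring for a clinical AI tool.
# All names (record_case, check_model_drift, ACCURACY_FLOOR) are illustrative.

from collections import deque

ACCURACY_FLOOR = 0.90   # hypothetical accuracy floor a deployed model must maintain
WINDOW_SIZE = 500       # number of most recent cases to evaluate

# Each entry pairs a model prediction with the clinician-confirmed outcome.
recent_cases = deque(maxlen=WINDOW_SIZE)

def record_case(prediction: str, confirmed_outcome: str) -> None:
    """Log a model prediction alongside the clinician-confirmed outcome."""
    recent_cases.append((prediction, confirmed_outcome))

def check_model_drift() -> bool:
    """Return True if rolling accuracy has fallen below the agreed floor."""
    if len(recent_cases) < WINDOW_SIZE:
        return False  # not enough data yet to judge
    correct = sum(pred == outcome for pred, outcome in recent_cases)
    return correct / len(recent_cases) < ACCURACY_FLOOR
```

The point of a scheme like this is that the signal being tracked is clinical accuracy against confirmed outcomes, not satisfaction scores.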
Q: How can AI help with disease diagnosis and risk prevention?
AI can play a pivotal role in both diagnosing diseases earlier and predicting patients’ future health risks. For instance, AI algorithms can analyze patient data; identify patterns in lab results, imaging, and genetic information; and suggest early interventions for chronic diseases like diabetes or heart disease. Several startups and technology companies are innovating in this space to provide predictive and clinical decision tools. A study published in The Lancet showed that AI-based tools are improving early cancer detection by analyzing medical imaging with remarkable accuracy, helping clinicians catch conditions at earlier, more treatable stages.
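As a rough illustration of what such a predictive tool might look like under the hood, here is a minimal Python sketch of a risk model trained on tabular lab-style features. Everything in it, including the synthetic data and the choice of logistic regression, is an assumption made for the example, not a description of any product or study mentioned above.

```python
# Minimal sketch of a disease-risk model trained on tabular lab values.
# The features and data are synthetic placeholders, not clinical data, and
# logistic regression stands in for whatever model a real product would use.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic cohort: columns might represent fasting glucose, HbA1c, and BMI.
X = rng.normal(size=(1000, 3))
# Synthetic labels: risk loosely tied to the first two features.
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# A predicted probability above some threshold could flag a patient for
# earlier follow-up; where to set that threshold is a clinical policy decision.
risk_scores = model.predict_proba(X_test)[:, 1]
print(f"Patients flagged for early intervention: {(risk_scores > 0.7).sum()}")
```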
Q: Will AI replace healthcare providers? Which jobs in healthcare are most vulnerable to AI?
While AI will not replace doctors, it will undoubtedly change how we practice medicine by handling repetitive tasks and augmenting clinical decision making. For example, AI can support diagnostic accuracy in imaging, help with routine data entry, and assist in treatment planning, which allows physicians to focus more on direct patient care. However, the human elements of patient care, such as empathy and clinical judgment, will always be essential.
Q: What are the risks of using AI in medicine? What are the ethical concerns?
There are significant ethical concerns regarding AI. I am deeply concerned about data privacy, algorithmic bias, especially in underserved populations, and how reliance on this technology could worsen health disparities. In particular, AI systems may inadvertently perpetuate existing biases in the data they are trained on, which could lead to disparities in care. One of the ongoing ethical challenges is ensuring that AI tools are developed and tested in a way that represents diverse populations.
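One concrete way to surface this kind of bias is to audit a model’s performance separately for each demographic group. The short Python sketch below is a hypothetical illustration of such an audit; the function name, group labels, and data are all invented for the example.

```python
# Minimal sketch of a fairness audit: compare a model's accuracy across
# demographic groups. Group labels and numbers are illustrative only.

from collections import defaultdict

def accuracy_by_group(predictions, outcomes, groups):
    """Return per-group accuracy to surface performance gaps."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for pred, outcome, group in zip(predictions, outcomes, groups):
        totals[group] += 1
        hits[group] += int(pred == outcome)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical audit data: a model that does worse on an underrepresented group.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
actual = [1, 0, 1, 0, 0, 0, 0, 1]
cohort = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy_by_group(preds, actual, cohort))
# A large accuracy gap between groups is a signal to re-examine the training data.
```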
Q: Are there patient safety risks? What about privacy concerns?
Yes, there are potential patient safety risks and privacy concerns with the use of AI in healthcare. Safety risks might arise if AI systems make incorrect recommendations, potentially leading to misdiagnosis or delayed treatment. For example, an AI system might miss a rare condition or flag a benign issue as a serious problem, affecting clinical decision making. On the privacy side, AI relies on vast amounts of patient data, raising concerns about data breaches or misuse. It is critical to ensure that AI systems comply with strict privacy laws like HIPAA and that the data used to train these systems is anonymized and ethically sourced.
Header image credit: Victor Perry, UC Riverside.