TRANSCRIPT: Can AI Improve Healthcare for Everyone?
September 13, 2023
Transcript automatically generated and may contain errors
Briana Lopez-Patino: Hey, hello! Welcome! Just a reminder that the audience is not audible or visible, but you may ask questions using the Q&A function in Zoom. I'd also remind you that the session is being recorded and will be available on the Hastings Center website later in the day with closed captioning. This is the Hastings Center's second event in the Artificial Intelligence and Society series. The next event is on October nineteenth and will explore the hype around the technology. I'd like to take a second to introduce one of our moderators. Josephine Johnston is a senior research scholar at the Hastings Center and a lecturer in medical law and bioethics at the University of Otago, New Zealand.
Josephine Johnston (she/her): Thank you, Briana. Tēnā koutou katoa, nau mai, welcome. Thank you so much for joining us here at the Hastings Center for this conversation about some of the benefits and challenges of AI and healthcare. So I'm sure, like many of you, I've been trying to follow the emergence of machine learning and the development of artificial intelligence, or AI, and I understand a little bit. I know that algorithms have been developed to crunch big collections of data, sifting useful information from mountains of data points, searching for patterns and anomalies. I know that we've been developing the tools to make more and more accurate predictions based on all that data. And of course I've heard of generative AI, specifically ChatGPT. We all have, although I have to admit I haven't used it in my own work, but I've no doubt encountered it in the essays written by my students. But even as we are all trying to keep abreast of the news, I know that I feel quite unprepared for the personal choices I might need to make about using AI in my own life, whether that's in banking or education, or healthcare, or some other sector. And as a bioethics researcher and teacher, I'm acutely aware that I should know more about AI and the healthcare sector in particular. How is it being used now, and how will it likely be used in the future? Because concerns have been raised about bias in machine learning, and because we all know that technologies don't usually benefit everyone equally, or all at once. I have questions, and maybe you do too, about what we, as a community of thinkers, researchers, educators, and engaged citizens, should be considering and doing and calling for when it comes to the use of AI in the health and healthcare space.
So today, we're going to dig into these issues a little. Where is AI at today in healthcare? Where is it headed? What specific benefits might it deliver, and what specific challenges does it pose? Can AI improve healthcare for everyone?
We're going to have that conversation first with two experts in the ethical and policy challenges raised by AI in healthcare, and then we're going to open up that conversation to you. So in the first part of the conversation, if questions occur to you that you'd like to put to our panelists, please simply type your question into the Q&A function here on Zoom. We will collate these questions and address them in the second part of today's discussion.
So I'm going to introduce our panelists now. They are Nicol Turner Lee of the Brookings Institution and Danielle Whicher of Mathematica. Dr. Turner Lee is a senior fellow in governance studies at Brookings, where she is also director of the Center for Technology Innovation and co-editor-in-chief of the TechTank podcast. Her research focuses on how public policy can enable equitable access to technology and harness technology's power to create change in communities in the US and across the world. Nicol is an expert on the intersection of race, wealth, and technology, and the author of a forthcoming book on the digital divide. Dr. Whicher is a health services researcher with expertise in learning health systems, health information technology, health system delivery and financing reform, and bioethics.
Mathematica, where Dr. Whicher works, is a research and consulting company. Before coming to Mathematica, Danielle worked at the National Academy of Medicine, including on their 2019 report AI in Health Care: The Hope, the Hype, the Promise, the Peril. Welcome, Nicol and Danielle.
Nicol Turner Lee: Thanks for having us.
Josephine Johnston (she/her): So let’s maybe just go back a little bit and talk about how we got where we are today. And let’s begin with you, Nicole. How did machine learning and AI develop in healthcare settings, and what opportunities or promises does it offer in that particular context?
Nicol Turner Lee: Well, you know, the question of how did this all happen? I don't think any of us know what day we actually had these machines making purchasing recommendations, using market surveillance to target us in advertisements. And now we're seeing these technologies really move into sectors that have benefited in some shape or form from the use of technology; in particular, the healthcare industry has a long history when it comes to electronic health records, et cetera. But it's being used in ways that actually make predictable, quote-unquote, decisions around particular subjects. And in many respects, just to tag on to how we got here, all of us who are involved with some level of AI, and most likely we are, have become the subjects of this technology, which makes it even more interesting in the area of healthcare, where we supposedly have some agency, or should have some agency, over those outcomes. And so I think this conversation we're having today really is an important conversation, because there are some benefits to these automated systems. First and foremost, we're able to use artificial intelligence, particularly machine learning algorithms that offer this repetitiveness of decision making, to do things like understanding a compilation of health records and health notes and being able to collaborate across physicians. I was at the doctor not too long ago, and the nurse assistant actually used AI to ensure that there was no pharmaceutical interaction with my existing penicillin allergies. Things like that we could not do without the introduction of these technologies. And then I would say one more thing. A dear friend of mine, Dr. Fay Cobb Payton, wrote this really interesting paper on metadata, really small data, when it comes to AI, and how important that's actually going to be when it comes to addressing health disparities. Right now, a lot of the AI, and I will talk about some of the challenges in just a moment, is scraping really big data sets of information that's collected from patients. But there is some interest in using AI to actually delve into health disparities. Where are there chronic conditions, hereditary conditions, where we can use some of the more expert areas like radiology and other places where we're seeing AI work quite effectively, to ensure that we reduce the wide disparities that exist across demographic groups? So I'll stop there, because it's one of those things, this love-hate relationship with AI, right? Because, on the one hand, I don't like to be surveilled in terms of my data, my private data at that. But, on the other hand, being a person who has been working in the space of technology, there's some curious innovation that potentially can help in areas I like to talk about, which are groups that have been medically underserved or historically vulnerable, and how we can leverage some of the data coming into these systems to make better decisions and predictions could be interesting. There's a caveat; I'll get back to that later. But it could be an interesting, I think, plus to the use of AI in healthcare going forward.
Josephine Johnston (she/her): Danielle, do you see that same kind of benefit potential, especially for people who have been underserved, or whose conditions or specific healthcare needs haven't really been well met before?
Danielle Whicher: Absolutely. There are absolutely ways that AI can be focused on helping to address disparities instead of widening the gaps that already exist. Hospitals are using these technologies, for instance, to identify people who are at high risk of readmission because of social factors, and then to provide care management services to those people proactively, so that they're not coming back to the hospital the next time they don't have enough food to eat or a place to sleep at night. So I think there are certainly ways these technologies can be used to address disparities that exist in society. The only other thing I wanted to add to what Nicol said is that I think we are seeing a huge increase in these technologies now for a couple of reasons. One is that we've had this huge push to have more healthcare data be electronic; electronic health records have been a huge push over the last decade or two, and that's enabled people to use that data to create these tools. And that's been coupled with increasing computing power, so that people can use this huge amount of data to understand patterns, or to have tools that understand patterns and can then predict what's going to happen for different sets of folks. And so I think those are two big developments, two important points, that have led to the huge increase in the types of technologies we're seeing in the healthcare space.
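To make the readmission example concrete, here is a minimal, illustrative sketch of the kind of model Danielle describes: a classifier that flags patients at high risk of returning to the hospital, including social factors such as food and housing insecurity, so a care-management team can reach out first. The file name, column names, and threshold are assumptions for illustration, not details from the panel.

```python
# Illustrative sketch (not from the panel) of a readmission-risk model that
# includes social factors, so care managers can reach out proactively.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

discharges = pd.read_csv("discharges.csv")  # one row per hospital discharge (hypothetical file)
features = discharges[["age", "num_prior_admissions", "lives_alone",
                       "food_insecure", "housing_unstable"]]
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(features, discharges["readmitted_30d"])

# Hand the highest-risk patients to the care-management team for outreach.
risk = model.predict_proba(features)[:, 1]
outreach_list = discharges.loc[risk > 0.3, "patient_id"]
```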
Josephine Johnston (she/her): So we can maybe think of this as a data-hungry kind of technology, and that means the quality of the data matters, right? If we're feeding it junk food, then it's going to have different outcomes. I think many of us are aware in a general sense that the underlying data sets, and the problems with them, then feed into the technology itself. But presumably there are also ways to correct for that, even if those corrections aren't complete. So can you, maybe, Danielle, help us understand a little bit about that bias concern? Is it just in terms of the quality of the data that's feeding into the technology?
Danielle Whicher: Yeah. So I will say, there are a ton of different sets of data being used to produce these tools. But if we think just about healthcare claims data as an example: in order to know how your AI tool is going to be biased, you have to know how that claims data is biased to begin with. In our healthcare system, we know that different types of people have better or worse access to healthcare services. If we know that coming in, and we have a strong understanding of how that's reflected in our data, then we can manipulate the data in ways that try to address that. So if we know there are certain groups that are underrepresented in the data, then maybe we manipulate the data so that the data we do have on those groups is amplified in ways that the data for other groups isn't, as in the sketch below. But the most important thing is, we have to know how the data is biased to begin with; otherwise it becomes really, really tricky to address those underlying biases if we don't know what they are.
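A minimal sketch of the kind of correction Danielle describes: if we know which groups are underrepresented in the claims data, their records can be up-weighted during training so the model does not simply learn the majority group's utilization patterns. The file and column names are illustrative assumptions.

```python
# Up-weight records from under-represented groups when fitting the model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

claims = pd.read_csv("claims.csv")                     # hypothetical member-level claims extract
X = claims[["age", "num_chronic_conditions", "prior_visits"]]
y = claims["had_admission"]                            # outcome we want to predict

# Weight each record inversely to how common its group is in the data,
# so records from under-represented groups count proportionally more.
weights = compute_sample_weight(class_weight="balanced", y=claims["race_ethnicity"])

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=weights)
```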
Josephine Johnston (she/her): And when you say claims data, just for people who maybe aren't in the US: claims data is the data we have on the services people actually use, that are processed through their insurance or through public insurance programs. Is that what you mean by that? Exactly, yeah. Sorry, that is a good point of clarification. So I think there was an example you had about that recently, wasn't there, where claims data suggested something? Can you just explain a little bit about how claims data might work in real life, in terms of creating an algorithm or a kind of AI response?
Danielle Whicher: Sure. So there was this example in the United States where a very large health insurer used claims data to predict how much healthcare services certain individuals would need in the future. Through an analysis of that tool done by a separate group, we learned that the algorithm predicted that white people with the same conditions as people from racial and ethnic minority groups needed more healthcare services. And the reason the tool predicted that was that, in the claims data, white people were getting more healthcare services, because they had different access to healthcare than other racial and ethnic minority groups. Those are the types of things that can happen if we don't look very closely at the biases that exist in the data sets we're using to create these tools.
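A hedged sketch of the failure mode in that example: the bias came less from the algorithm itself than from the label it was trained to predict. Training on dollars spent, a proxy shaped by unequal access, ranks patients differently than training on a more direct measure of health need. The column names and files are illustrative assumptions, not details of the actual tool.

```python
# Same features, two label choices: spending as a proxy for need vs. a direct need measure.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

claims = pd.read_csv("claims.csv")
X = claims[["age", "num_chronic_conditions", "prior_visits"]]

# Label choice 1: future spending. Groups with worse access to care spend less,
# so the model learns that they "need" less care -- the reported bias.
cost_model = GradientBoostingRegressor().fit(X, claims["next_year_cost"])

# Label choice 2: a direct measure of need, such as the number of active chronic
# conditions next year, which is less entangled with access to services.
need_model = GradientBoostingRegressor().fit(X, claims["next_year_active_conditions"])
```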
Josephine Johnston (she/her): Thank you. So that's one kind of injustice: the data set itself is biased, or contains data that reflects existing social biases. But there's also the other sort of injustice of whether everybody will have access to it, or, in the case of healthcare, whether everybody's healthcare providers will be able to use it. And I know that mirrors a more general interest of yours, Nicole, in the divide between those who get to benefit from technology and those who don't. So how are you seeing that in this context?
Nicol Turner Lee: Well, Danielle was being quite kind. I think the example she gave was an egregious illustration of how biased healthcare is in general. You know, AI is not necessarily going to create its own systemic biases; it's actually going to play off of the existing biases that we have. So in the example of the medical claims data being relied upon to make determinations about care, using hospitalization information in a way that makes low-income, rural, predominantly people-of-color populations much more marginalized within the healthcare space, you knew that was going to happen. But before I talk about how we can reduce bias in other ways, it's very important that we think about this concept called data trauma. One: in the case of the hospital claims, we are seeing developers who are not necessarily clinicians or medical practitioners developing algorithms for people they are somewhat removed from, who do not have an idea about those lived experiences and have not thought about the sociocultural context in which those models are being designed. That's the first thing. Whoever was putting that model together clearly had not done a really good history of healthcare and health disparities in this country. The second thing with data trauma is that data can be biased in healthcare because we have a disproportionate representation of patients who have been historically distrustful of these systems but, more importantly, are often underrepresented in clinical trials. So there are cases where women are underrepresented when it comes to drug testing or any type of physical or pharmaceutical testing, and there are people of color who do not show up in those databases as well. So when we talk about bias, we have to think about bias in terms of these behavioral impacts that affect communities and have unintended consequences, but we also have to think about bias as it's embedded into computational models that are essentially relying on systemic trends that come with these unsettled and unanswered questions, and that then make determinations around healthcare. When I think about this, and I've spoken on a lot of panels about it, I think about, for example, COVID. There was a point when Dr. Fauci got up and said, I did not realize that the comorbidities and the chronic disease of African American populations were going to worsen COVID symptoms in those populations. To me, that suggests we need a bigger conversation around more inclusive healthcare models in the practitioner and clinical space, and we need to interrogate the models when they are actually deployed, to ensure that they are inclusive and representative. As I was saying with Dr. Fay Cobb Payton's work, this whole idea of interrogating the micro data in these data sets could be an interesting concept, not necessarily relying on large swaths of information to make determinations around diseases that have proclivities toward certain demographic groups. And so that's really one of the reasons why I stay up at night when it comes to AI, because I think these systems can make predictions that have consequential outputs, not just physically, on individuals, but that can unravel long efforts to make people trustful of their healthcare systems.
You know, when a computer makes a mistake, think about how that is compounded, the way a doctor's mistake is compounded, among populations that we are really trying to work hard to build that trust with.
Josephine Johnston (she/her): So some of the systems you're talking about are developed by, I guess, startups and other fast-moving, highly innovative kinds of groups, but some are also being developed in hospital settings, with clinicians involved. So, Danielle, do you think there are models for developing these tools that can help address some of these problems? Because, as you were pointing out, you have to take into account the social context of the data you're putting in, so that that's being managed.
Danielle Whicher: I think so. But the one problem with hospitals developing their own tools is that a very large hospital system is going to have a lot of data and is going to be able to do it for themselves, but small rural hospitals aren't; they're not going to have the amount of data required, and they probably aren't going to have a staff person who's devoted to developing AI tools. So that's where I think your earlier question comes into play, which is: are we potentially creating more of a divide in the healthcare system in terms of who is able to access and use these tools and who isn't? And I do think there is a big risk there, because you can't just take a tool developed by a large healthcare system and apply it to a small rural hospital, because it's going to have a completely different patient population, so the data used to develop and train the tool isn't going to be applicable to that population. That's where it gets very tricky, and I don't personally know a great solution for how you take a tool developed in one system and then apply it to a different system. The other problem in this space is that rural hospitals or smaller hospitals could potentially purchase AI tools, but a lot of the AI tools available for purchase are proprietary. What that means is that these small hospitals don't have any ability to learn what data was used to develop the tool, and then to make judgments about how well it can apply to their own population. So I think the proprietary nature of private development of AI is a huge problem that needs to be addressed in order to further help these small rural hospitals figure out how they can use these technologies.
Josephine Johnston (she/her): We've seen tools developed based on data sets, collections of genetic data, that might not represent the people the tools are then being applied to for prediction. And now there's work to try to correct for some of those problems, including by creating bigger, more diverse, more representative data sets in the first place. Is that the kind of initiative that's happening in AI, to try to correct the underlying data or offer up richer, more representative data for the tools to be used on? Is there an effort like that underway, and who's leading it, or where is it happening?
Nicol Turner Lee: You know, I would just say, first and foremost, Danielle, thank you for sharing that, because I was just in a conversation around that very topic, also bringing in urban community clinics, where even the devices or equipment they have are often out of their control. So thank you for saying that. I think, to your point, people like me who are policy people would love to see ways to co-develop de-biased systems, or anti-racist, anti-discriminatory systems. We're starting something at Brookings, an AI Equity Lab, which will hopefully start a pathway to a conversation on how you develop anti-racist, anti-discriminatory AI. But the challenge is, you have to have protection of patients and subjects. I'm always reminded, just listening to Danielle, of Henrietta Lacks, whose family just got compensated for the use of her DNA to make major discoveries in cancer. She was a Black woman whose DNA allowed us to innovate, in terms of cures, in ways we were not able to before, but she was exploited. She died a pauper, essentially, because there was never any type of responsibility on the part of the people who were doing the testing. I think, with any type of regulatory co-evolution in this space, there have to be patient rights and protections, and we have to find ways to do it in safe spaces, particularly in areas where we do need that data. I'm one of those people who is really skeptical of paying people to give their data, or having people come in for the purposes of big companies who are more market-driven, to drive really important areas that require some red lines when it comes to patient safety. Privacy is another area of concern, as well as long-term unintended consequences. I mean, these are people we're dealing with. To your point, we haven't done a really great job in the United States on that right now. We have bits and pieces of this. I think the FDA has been doing a really good job in terms of monitoring and managing how they approve products that come through that are AI-enabled, and we're seeing something from the product safety side, which is looking at medical devices. But in the United States, unlike the EU, we're sort of working with a bunch of Post-it notes, trying to put together the story, and these sectors are part of the story but somehow remain on the edge. So I think, to your point, much more work has to be done to find ways to make these technologies much more inclusive, but also to respect the context and the history in which we're doing these types of testing, and the things that could result in greater public interest.
Danielle Whicher: Yeah. The one other thing on the federal side I'll mention is that there is this AI Bill of Rights in the United States, from the Office of Science and Technology Policy, which I think has some really good information in it about things that people should be aware of. But it's not a law. It's just a bill of rights that's out there for people to take a look at and be familiar with.
Josephine Johnston (she/her): When you say just a bill of rights, that sounds like it would be even more important than a law. So is this a non-binding bill of rights, recommendations from a government agency about how developers should behave, or what is it?
Danielle Whicher: It's framed as the rights of patients and of people whose data is used, people impacted by AI, and it's from the White House, so it is from the top of the government in the United States. It has basically five principles that lay out things people should be able to expect with respect to how AI is used and applied to them. It's actually, I think, a really nice document: you can click into each principle and see examples of why the principle is important, how it's been applied, and so on. I don't know how widely it's been used, but I do think it is a great starting point in terms of thinking through principles of the appropriate use of AI and how people should be informed of how it's being used.
Josephine Johnston (she/her): What we should expect. But also, presumably, it's there for developers to see how they should be thinking about the people whose data they're using, and the people who will presumably one day be impacted by the technology they're creating.
Nicol Turner Lee: You hope. But there are a lot of contradictory conversations going on in Washington, right? President Biden has recently brokered voluntary commitments, which assume appropriate self-regulatory behavior on the part of companies developing AI products or the large language models that feed into generative AI. In Congress you have particular sectoral bills that are trying to get at issues in healthcare and employment. And then you have what we call guidance, which is things like the Equal Employment Opportunity Commission, which has come out with some guidance for employers on how to use AI in hiring. The challenge is, these companies we're talking about are American-based companies, and they're companies where AI can have a direct user impact that we can talk about, but the AI can also be embedded on the back end. It can be part of, like I said, the pharmaceutical application. My nurse physician assistant did not say, hey, I'm using AI and I'm trying to figure out your interactions. She basically said, give me a minute; oh, it looks like these are all the drugs you should not take, versus trying to check the book or going through it manually. It reminded me of the doctors who would leave the office for about ten minutes and then come back and write your prescription, because they were checking what your drug interactions were. We're not always going to know where those things are, right, Josephine? We're not going to know about the AI that's embedded somewhere in the integration and design of a product. What we're trying to be concerned with is the other side of the quote-unquote black box, which is the outcomes that can lend themselves to differential treatment or disparate impact, which was the case with the medical claims information.
Josephine Johnston (she/her): So one of the things your example shows, and it might be consoling, I think, is that keeping a clinician, a trained expert, in the loop of the use of the AI, especially in healthcare settings, is one of the ways in which bias, or any other flaws in the system, can be mediated. But there are other mechanisms being proposed as well, the bill of rights, I guess, being one of them, and cleaning up the data sets that feed into it. But how about transparency? I personally don't really know how much help it would be if we were more transparent about the fact that people are using AI as part of the thinking, because there are many things that go into my doctor's head that I don't know are part of figuring out what happens. But some transparency is probably really important. So when people say we need more transparency around the AI and that will help, how much transparency are we talking about, and what's actually realistic? Because companies are also going to want some kind of proprietary control as well. Danielle, do you have thoughts on that?
Danielle Whicher: Yeah, I mean, it's hard to make a blanket statement about transparency, just because these tools are so diverse. Do I really care if my doctor's office uses an AI tool to help with scheduling? Not really. But do I care if my images are being read by an AI tool first? Yeah, probably. To the extent that my doctor is also looking at it, that's fine. But if you are basing a decision on an AI tool, and that decision is very consequential to an individual's life, it becomes really important that that person knows that and at least has access to some information about that tool: how was it developed, what was the data set? And that can be for a variety of reasons. It can be just because the person has a right to know, that's one. But also, if there is harm caused by the tool, that person needs to know, because they might want to take some action based on that incorrect diagnosis that was made. So I think there is a spectrum, and the spectrum is maybe based on risk to the individual. On the other side of that, though, the clinician really is the person, especially if the tool is being used at the point of care, who needs to be able to understand how the prediction is being made and what data was used to develop the tool. They're the ones who really need to understand what's going on, because they need to be able to take the recommendation made by the tool and understand whether it's correct for the patient under their care. And that's the important point about keeping the provider in the loop. I don't think providers are always well positioned to be able to do this, though, and that's, I think, a big gap that exists right now, and one where there really does need to be more training at the medical school level. And I just don't know how often that's occurring right now.
Josephine Johnston (she/her): Nicole, that sounds a bit like product labeling, almost, right? Like knowing that a GMO was used in this product, the kind of labeling we have for genetically modified foods, or other kinds of labeling. Is there a live proposal in DC to require that kind of transparency along product-labeling lines? Are you seeing anything there?
Nicol Turner Lee: Yeah, I mean, there are a lot of proposals, labeling being one of them. That one's not widely lauded, only because, how do you label AI? These are organic, iterative systems, and it's hard to say, this is how we label it, like what the FDA does with level one, level two, level three in terms of medical device complexity, where they're then able to give some type of sign-off. I think the push really needs to be around standards. What are the standards that go into creating equitable, trustworthy, responsible AI? What does that look like? And I think for the medical field, because this is a very journal-based profession, it is really important to use AI in ways where practitioners can actually contribute. And I love the way Danielle talked about it, where transparency looks like documentation, right? It looks like evidence. It looks like collaboration. It looks like the type of coordination of care we do when we're looking at a variety of diseases or diagnoses, where one doctor may have to talk to a blood doctor, who may have to talk to an endocrinologist, who may have to talk to someone else, sort of replicating that environment online. What that looks like is still undetermined. Because, and I love this point, we're going to have young doctors who are going to grow up with this stuff, right? They're going to go into the doctor's office and rely upon the AI to basically give you a likelihood that you have this, or that you have a propensity toward this disease. An older doctor, one who's been around for a while, not necessarily in age but a little more seasoned, may be more skeptical of the AI and may look at the AI as one source of evidence or investigation, but may rely upon their own tactile skill set and memory to figure out the diagnosis. I think, for the medical profession, this is going to have to be a conversation, and doing it in medical school is one way, or at hospitals, for example, as we talked about, where they're constantly doing retraining on these tools and constantly exposing people to them. Standards are really the way to go.
I think it's easier to put labels on devices, you know, your heart devices, your respiratory devices.
It's so much harder to put a label on AI. In my own research I've actually proposed more of a reputational listing, something that at least gives confidence that this tool is representative of a diverse data set of people, that it has some sign-off in secondary and tertiary contexts, that it's not just been trained in the laboratory but was also vouched for by practitioners who in some way are able to contribute to the documentation of that product, of that AI model. Something like that seems more organic versus what we're hearing from companies: we're going to fix the problem ourselves, just know that we're thinking through this. And the companies that want a label are essentially the ones saying, give us a label and we'll tell you how we're managing the sub-stack.
But, as we've all said, we're talking about people's lives. We're not just talking about the model; we're talking about a model that interacts with people.
Josephine Johnston (she/her): So wouldn't these be devices that the FDA would need to approve, the way they approve other kinds of medical devices? And if so, then, like you were saying, there will be standards that you have to meet for how you develop them, you have to provide that evidence, and the back end of the product needs to be checked. Is that what's going to happen, or is that what's already happening at the FDA?
Nicol Turner Lee: The FDA has actually been very progressive in putting out an AI standard. I think much of what we're seeing on AI-enabled devices is an easier fit of those modules into their existing standards framework. But in the other areas the FDA has been kind of careful on AI, because not all medical areas are fully developed in this space. They've got great examples, if you ever go to the FDA reports, in radiology's use of this, and in ophthalmology, because those data sets tend to be more filled out, versus, say, endocrinology, which is still a growing field in terms of being able to leverage AI products and tools. What we're seeing, just to make it clear for the people listening, is more of a discussion on risk, right? Risk and harms. And I think that conversation is probably going to lend itself, and this is my prediction, to less of a label and more of a question of who has oversight jurisdiction, and how do you manage risk generally in the United States, and how do you manage it by sector?
And I think we're going to see more of this risk management framework coming out around these models, which in some ways are not transparent. I always give this example: I think of algorithms as a snake at the bottom of the ocean. When you drop the snake in, after it moves through all the sand and the coral and all the other debris at the bottom, it looks very different by the time it comes up to the top. That's all of the movement happening in these models before they actually spit out a determination. And that's an area, again, which is much harder for us to regulate in areas like healthcare, criminal justice, and employment, because the outcomes are what we care about, not necessarily how the model is being developed.
Danielle Whicher: We do care about that, or let me say that most of us who are not data scientists are trying to figure out how we got there. The only other thing I wanted to add is that the FDA only has oversight of software as a medical device, and a lot of the AI being used in the healthcare space does not fit the definition of a medical device. So a lot of it is not regulated at all. If my doctor uses ChatGPT to write up some notes based on my visit, that is not regulated. Things like use in radiology, or use in guiding surgeries, those are devices, and those are regulated by the FDA. So there are more than 500 AI-enabled devices that have gone through the FDA, and I think they really are trying to be innovative. But so much of what's being used is not regulated at all.
Josephine Johnston (she/her): Would it just be regulated in the way that, say, consumer products are, through the kind of consumer protections that exist, but at a different level from what we expect from FDA-approved products?
Danielle Whicher: I think there are a lot of calls, even from the tech side, for more regulation in this space. But there isn't any right now, and at least I'm not aware of a big development in this area that's likely to come down the pike soon. So again, a lot of AI in general is just completely unregulated, right?
Nicol Turner Lee: I was going to say, after listening to the FDA on a panel on Saturday, I think, to Danielle's point, they are trying to lock down what's under their jurisdiction, which is the AI-enabled medical devices. But they are also trying to start conversations on other aspects of risk they should be managing. Again, though, it's like those Post-it notes, right? There's not one centralized jurisdictional authority that can combine the technical cadence with the public interest. And that's where we get stuck, compared to the EU, which tends to be much more prescriptive with regard to what it expects from these technologies.
Josephine Johnston (she/her): Okay, I wonder if we should remind everyone that we are taking questions in the Q&A, and we could actually look at some of those questions now. Briana is going to help with this; I think she's been monitoring the Q&A and can let us know one or two questions that we could take on now from our audience.
Briana Lopez-Patino: Sounds good. A common thread has been about doctor-patient relationships; people are concerned about the implications of AI for them. What are your thoughts on preserving or improving doctor-patient relationships and interactions while using AI to improve healthcare?
Josephine Johnston (she/her): Alright, thank you. Who wants to take this? Yeah, Nicol?
Nicol Turner Lee: I think, to some of the examples we gave, AI, much like telemedicine and telehealth, does bring healthcare closer to patients through these automated systems, particularly those who are medically underserved. The challenge, and this is the focus of my book, shameless plug, is that we have populations, disproportionate numbers of people, who do not have digital access. So the assumption that AI is going to be something people can use when they leave the doctor's office, or that a diabetic patient can feed their insulin numbers to the doctor using AI and get an immediate response, that's unrealistic. In my book, I met a woman in the parking lot of a library. She literally had to check her electronic health records at a local library, which is where she found out she had stage four cancer. And I document these travels in my book. So I think we have to be sensitive to the fact that AI may help certain patients, based on their assets and access, but it may also serve to widen the digital disparity gaps, even if AI turns out to be a tool that can help us more granularly in addressing particular, specific needs.
Josephine Johnston (she/her): It would also seem like that person doesn't even have access to a clinic. I think one of the concerns would be that people already feel like they don't get enough access to their clinician, or that it's really quick and rushed, and AI tools could enable those processes to go even more quickly. So even if a clinician is nominally involved, maybe it's not for long. And a lot of people do feel like something is lost in the connection between clinician and patient, in the kind of healing that happens in that relationship. Ideally, one selling point would be: wow, AI tools can give the clinician more time, so they don't have to spend as much time on one thing and can spend more time with the patient. But of course most of us also worry that the complete opposite would be the case, that some people would get lots of extra time with the clinician, while a lot of people would get less time, and maybe no time, and that that's what might be further lost in the automation, or digitization, or AI-ification of the healthcare encounter.
Nicol Turner Lee: Well, there are surprises. I was just talking to somebody about this today. Generative AI, for example, if it were more universally available, could help patients who are coming in to see their doctor prepare questions to ask. You all remember, we used to try to do that with patients. I've been in telehealth and telemedicine research for a long time, and when it came to Google Search it was: ask these questions, and these are things you might want to say when you get to the doctor, because we know certain patients don't ask questions. But the challenge again is that there's liability, obviously, for the clinician, and people think the generative AI is more correct than their physician. My mother is one of those people. She reads everything online before she goes in to see a specialty doctor, and by then she's already shut the doctor down. But then, if you don't have access to the technology, and your visits to the clinician are scanty, what is the technology going to do? And so we just have to be careful, along with the data sets being potentially biased,
that we don’t replicate again, many of the systemic inequalities that we already see in our healthcare system.
Josephine Johnston (she/her): Yeah.
Danielle Whicher: Yeah, I mean, we know that clinicians can't keep up with the medical literature; it's impossible. So to the extent that an AI tool can help take that literature, combine it with data that exists about patients within the same healthcare system, and provide information that the clinician can use at the point of care when talking to the patient, to come to a more informed decision together about what to do, I think that is the ideal. That is a great outcome for AI. But, on the other hand, if a clinician is just taking the prediction and saying, okay, this is what we should do, without any conversation,
that's a bad outcome. We don't want the clinician to rely solely on these tools and not use the training and the interpersonal skills they've developed through medical school. So I think there are risks and benefits, and it all comes down to training in how we use these AI tools. If they are trained on and used appropriately, I think they can potentially strengthen the relationship in many ways. But there is, of course, a danger.
Josephine Johnston (she/her): Briana, do you have another one for us?
Briana Lopez-Patino: Yes, this one says: please address the impact of AI on healthcare providers and workers regarding professional identity, job security, current high levels of burnout, and moral distress.
Josephine Johnston (she/her): Oh, yes. It seems like we've already talked a bit about the need to retain clinicians, so it's not as if we're just going to get rid of them and the AI can be the doctor. And Danielle, you were offering an example of how AI could actually help alleviate some of the burden on clinicians by summarizing, I guess, what's been happening in the literature and helping apply that to patients. But do you see other ways it could be helpful for clinicians,
not necessarily directly for patients, but helping the clinicians themselves have more manageable workloads, or kind of making their jobs better?
Danielle Whicher: Yeah, I think some of the clinical notes and data entry stuff, using natural language processing and other generative tools, can be useful for helping clinicians get through those notes. Again, if used appropriately, and assuming the AI tool is spitting out notes that are accurate, that can be another way to help reduce clinician burden. Then, on the administrative side, which generally isn't handled by clinicians but by office staff, I think there are AI tools that can help with scheduling, and AI tools that can help review long documents and pull out salient information about why something would be denied or approved. So there are these AI tools that can be integrated into a
practice or clinician's office to help reduce some of the administrative burdens they face.
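As a concrete illustration of the documentation use case Danielle mentions, here is a hedged sketch that uses an off-the-shelf summarization model to condense a long visit note into a short summary a clinician can review. The note text is invented, and a real deployment would need accuracy checks and privacy safeguards before touching patient data.

```python
# Sketch: condensing a (fictional) visit note with a general-purpose summarizer.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

visit_note = (
    "Patient is a 67-year-old with type 2 diabetes and hypertension presenting "
    "for follow-up. Reports improved fasting glucose on current metformin dose, "
    "no hypoglycemic episodes, and mild ankle swelling in the evenings. "
    "Blood pressure today 142/88. Plan: continue metformin, start low-dose "
    "lisinopril, order basic metabolic panel, follow up in three months."
)

summary = summarizer(visit_note, max_length=60, min_length=20, do_sample=False)
print(summary[0]["summary_text"])  # the clinician still reviews and signs off
```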
Nicol Turner Lee: Yeah, I was going to say, in direct response to that question, there will be job loss, right? Particularly, roles that are highly repetitive can be automated: clerical work, health notes processing, medical document processing. We're going to see all of that, and the extent to which we'll see it in hospitals is still questionable, in terms of how it affects nurses and others who basically maintain the basic level of need while a patient is in their care. So this is one of those questions across professions that we're really thinking about, because most of the people in those positions tend to be women, or people of color, or people with disabilities, or somebody who came in to do something that was within easy reach for them, or who, because of systemic inequality, was not able to get up the food chain for whatever reason and had to start there. So at Brookings we're thinking about how you look at that landscape of automation, and, to Danielle's point, whether there are areas where it's not necessarily about pushing people out of this industry, but training them up or training them across so that they can do their job differently.
Josephine Johnston (she/her): Okay, great. Thank you. Briana. Yes.
Briana Lopez-Patino: This one is: how do you prevent human bias from being introduced in big data models and AI, to prevent undesired outcomes? And I'd like to add to the original question that there has been a thread about concern over racial inequality in big data.
Nicol Turner Lee: You know, I think you have to start with the fact that these models are trained on the values, assumptions, and norms that we all carry in life, and who we are when we're developing these models has a great impact on whether or not they represent the lived experiences and cultural efficacies of the communities that are the subjects of these technologies. And I think this is a question across the board. You look at technologies outside healthcare, like facial recognition technologies that still cannot identify people with darker skin complexions, or where image quality or lighting matters, and yet they're still being used for determinations in policing, customs, and border control. It's the same thing we're going to see across the board. How you fix that is that you have to have more representation at the table. The people training these models, unfortunately, should not just be homogeneous white men who are data scientists. They need to be medical practitioners, community health practitioners, patient advocates, social scientists, people who make determinations around the well-being of communities; they should sit at that table. And the other thing, which we didn't really talk about except in terms of demographics: people who understand that externalities include access to food, access to housing, access to transportation. Those things also matter, because they have certain propensities to determine some of the environmental considerations for people's livelihood and well-being. So there's no easy answer. And, like I said, at Brookings we're developing this AI Equity Lab, and one of the things we're really trying to do is convene small groups of people who come from different parts of the ecosystem so that we can have those conversations. But, unfortunately, the train has left the station. The big companies that are driving this are market-driven versus people-driven, and it becomes a question of how much this industry cares about creating its own bill of rights, its own agency over the technologies it secures and uses on patients. So that's the thing, right? Everything that we've been saying is harder.
Josephine Johnston (she/her): It takes more time, it takes more expertise, it takes more insight, it takes a closer look at the data, correcting for all kinds of challenges in the data, and all of that is more expensive to do than the quick, and maybe quick-and-dirty, version of the technology. So it sounds like we will have a plethora of different tools available, some of which were developed carefully with a lot of insight and some of which were not, and then it's a kind of competing marketplace. And if there aren't standards, or if you don't need to get something approved and show how you developed it and what you created it for, then I guess you'd hope that eventually the market will pick the better ones. But is it that much of a free-for-all, or are we actually able to do something? I know that redress after the fact always seems like a really bad way of fixing problems, but how else is it not just buyer beware?
Danielle Whicher: Well, I also don't think it's very easy to sue, because there's diffuse responsibility. Who are you suing? Are you suing the person that used the tool, or the person that developed the tool? So that's not even straightforward. Unfortunately, in some spaces I do think it's a free-for-all, and it is a situation where you need to be aware of what you're using and how you could be impacted. Like with ChatGPT, people need to be aware that anything they put into it can be used by the model for future training; nothing you put in there is private. And there are other things to be aware of, like the fact that the model was only developed on internet data up through 2021, so anything that happened after 2021 is not reflected in the answers you're getting from ChatGPT, and that can have a big impact depending on what you're trying to get an answer to. There are other examples of that as well. If you're putting your medical information onto a website that has a tool you think could be useful, you need to be very, very careful, because there aren't necessarily privacy regulations; you're not protected by HIPAA if you're putting your own information out there in the public domain. So I think there is a need for more regulation or standards or something. It's too much for consumers or patients to have to think about all of that, and it might even be too much for the care providers to keep track of.
Nicol Turner Lee: But this is why I think the healthcare space has something of an edge on this, if we take all this stuff we've talked about, the doom and gloom, and just think about the positives. For a profession that has always grappled with liability, that comes into this space with a particular oath around the duty of care when it comes to health and well-being, I'm optimistic that industries like healthcare, and industries like education, where there is some type of implicit and explicit understanding of that relationship with consumers, or students, or patients, will begin to think about how they leverage these principles within their national and international associations, how they begin to think about AI and healthcare, and come up with the same type of guidance and standards in the absence of regulatory guardrails. What I mean by that is, there is no one in DC trying to change HIPAA right now. No one, right? Every once in a while a few people will mention, well, maybe we need to go back and revisit HIPAA and update it. Well, that's not happening at this moment, because I think it took a lot of effort to come up with HIPAA. And I'm just hopeful that the healthcare industry will really sit down and think about this. Another case in point: I'm hearing a lot more health insurance agencies thinking about this, because, as an ecosystem, how you treat AI is the same way you treat the use of telehealth and telemedicine and other digital health tools. It's going to have implications for the extent to which doctors are reimbursed, implications on liability, and implications on the trust and reputational risk that comes with how we insure healthcare. So people like me are looking to these industries to help us think through these big issues, because it's going to take a long time, I think, before Washington comes to agreement on some values around this, outside of one part of the aisle really appreciating the White House bill of rights.
Josephine Johnston (she/her): Okay, so now I'm really hoping, thinking that, yeah, like you said, it's a high-stakes environment, there are already a lot of protections, or a protective mindset, in place, and so professional bodies inside of healthcare have a role to play, in addition to medical schools
Josephine Johnston (she/her): and insurers. So, Briana, we probably have some other questions I want to make sure we get to, because we haven't got much more time.
Briana Lopez-Patino: Yeah, we probably have time for one more question, thank you. This one is: where do you think the interaction between social science and AI tech is most needed?
Josephine Johnston (she/her): Hmm, anyone?
Nicol Turner Lee: Well, I mean, look, I'm a sociologist by training, so I think showing up to these conversations really matters. But I'm also a researcher at heart, much like medical doctors. We have peer-reviewed journals; articles can be written collaboratively to provide evidence-based information. We have conferences where we can talk about these issues. I used to say that every computer scientist should have a sociologist as a friend and take a social science course during their time in computer science. I now believe that every sociologist should take a basic computer science class. As this evolves, we're just going to have to figure out where we start this process, and Danielle said it: at the medical school and college and pre-med level, and then beyond, in terms of your certification as you use these tools further.
Danielle Whicher: Yeah, I agree with that. I think also just understanding our society and how it impacts the data that we have, because you should not create an AI tool without understanding the data you're using and how it's impacted by the patterns that exist in our society.
Josephine Johnston (she/her): Thank you so much. So, in addition to using social science to study the impacts of it and test it, we can also use social scientists' knowledge of things like how data are collected and the biases that can go into data, to understand how to clean up the data sets and create more equitable, more reliable ones than those we are using already. Is that fair?
Nicol Turner Lee: Yeah, we can. We social scientists, I mean, some of the best papers I've written have been with people who are in the space themselves, so being able to do health policy with people who are health practitioners just makes a lot more sense, which is the same discussion and campaign we're putting out for AI: bring us to the table, pull up a chair, we have something to say, to make these systems much more responsive and responsible in their application.
Josephine Johnston (she/her): Well, I wish we could keep going, but we can't; we're at time. I want to remind everybody that a recording of this will be available on the website later today. And then my only remaining task, really, is to thank Nicole and Danielle for having this conversation with me, with us, with the Hastings Center, and with our audience. And thank you to all of you who joined and listened, and to Briana for all of your coordination. So thanks very, very much. And, Briana, back to you.
Briana Lopez-Patino: No, thank you. The recording should be out soon, and let us know if you have any questions. Everyone, thank you for coming.