“Smart” Therapeutics Create Ethical Conundrums

Drug Topics Journal, March 2022
Volume 166, Issue 03

With each new technological advance comes a set of potential downsides and a myriad of ethical dilemmas that must be addressed.

Technology can be exciting, and technological advances in the health care industry have certainly made waves. However, with each of these new advances comes a set of potential downsides and a myriad of ethical dilemmas that must be addressed. Drug Topics® spoke with Craig M. Klugman, PhD, Vincent de Paul Professor of Bioethics in the Department of Health Sciences at DePaul University College of Science and Health in Chicago, Illinois, to discuss some of the bioethical dilemmas associated with “smart” medications today.

Drug Topics®: What are some of the biggest ethical issues associated with smart medications and digital therapeutics?

Klugman: Privacy is the biggest issue, including control over your data, who can view the data, and being tracked. [Although] health care providers and hospitals are bound by the Health Insurance Portability and Accountability Act [HIPAA], device manufacturers, programmers, and insurers are not. [However, they] will have access to the same information.

[Other issues include] informed consent for a product that can constantly be changing (ie, updates and new user agreements) [and] coercion. A health insurer, provider, or hospital system may encourage patients to take a digital solution because they can be tracked. Maybe an insurance company will not pay for a medication if they learn the patient is not taking it as prescribed. Informed consent would cover the medical treatment, but digital solutions may require software and hardware, and those are covered by user agreements that can be very long, complicated, and hard to understand, [which is] the opposite of what an informed consent document should be.

[There’s also the issue of] how to navigate trust between patient and health care worker (do you believe the data from the smart medication or the patient if their accounts differ?) and objectification of patients; we can end up treating data instead of people. There’s potential for injustice. We [must] make sure that prescribing these digital solutions is based on medical need and not on race, sex, gender, socioeconomic status, or other areas of potential bias. Often, these devices are built using data that exclude minority health data, meaning the algorithms on which they are based can be biased. Because these devices are more expensive than other means of treatment, they may only be available to people with greater financial resources. These solutions also take away the option to lie. [No matter what the patient says], the device will always tell the truth, which can lead to feelings of stigma and judgment.

There’s [also] dependability. If a patient takes an FDA-approved drug, they can be sure it will do what it claims to do. They don’t have to worry about the internet going down, the power going out, or a battery running out. There won’t be a software update that takes several hours, and it won’t ever need to be repaired. [However], a digital solution is subject to all those things.

Therapeutic misconception [is another ethical issue]. Most everyone has heard of the placebo effect. The same could happen in the digital landscape. Just because I am using a device, I might think I’m better, even if there is no change. Most of us have a bias that every problem has a technological solution, so that faith alone might make a person think they are getting better. We also have a technological imperative: Once a technology exists, we use it, even if it costs more and is no better than existing technologies.

Drug Topics®: This quote from your 2018 article1 stood out to me: “For providers, digital medicine changes the relationship where trust can be verified, clinicians can be monitored, expectations must be managed, and new liability risks may be assumed.” Are there any real-world examples of this playing out that you could share?

Klugman: One of the early issues in using these devices was [the question of], when do you turn them off? In medicine, when you initiate a treatment that keeps a person alive, there is a strong reluctance to stop it. But what about a person who is dying [and] using a device they no longer want but that prevents them from being able to die? For example, an implantable defibrillator that continues to jump-start the heart, which supposedly feels like being kicked in the chest by a horse, over and over, because no one wanted to turn it off, even though that was the patient’s wish.2

The most famous example [of this] was the modification of Dick Cheney’s implantable defibrillator in 2007 so that it could not be hacked, out of fear that terrorists could send software that would kill him.3 [There is also an example of] an insulin pump that was recalled because it had a software vulnerability that could allow it to be hacked,4 [as well as] pacemakers that were recalled because they had a risk of an electrical short.5 In this case, “recall” means a person [must] undergo surgery to remove the device and have a new one implanted.

Traditionally, medicine is a relationship between a patient (and their family) and a provider. But with this new technology, that relationship also includes the software programmer, the device manufacturer, and perhaps a pharmacist and pharmaceutical manufacturer—all of whom will have access to your information and input on your treatment.

Do I know of a physician who was sued for not monitoring a patient’s device closely enough, a provider who felt they had Big Brother watching their every move [while] monitoring a patient’s device, or a patient who said, “I do not feel comfortable sharing my medical information with a company”? [Data from The Pew Charitable Trusts]6 show that patients want to share their device information with providers…but they also want it more secure. Most are not aware that the health data on apps are not covered by privacy laws, but I have not done any research myself into whether the warnings my coauthors and I offered have come to pass.

Drug Topics®: How can providers ensure patients are providing fully informed consent before prescribing digital therapeutics? Is fully informed consent even possible? Can patients revoke that consent once sensors and trackers are involved?

Klugman: Many question whether truly fully informed consent is possible. Informed consent consists of helping patients to understand the risks, benefits, processes, and alternatives, but it’s not every risk. It’s just risks that may apply to that person or that are statistically more common. The problem with digital therapeutics is that we layer a user agreement on top of informed consent. Tech companies—both hardware and software—use user agreements. Anyone who has ever used a computer, cell phone, tablet, program, or app has encountered these.

Informed consent exists to protect the patient. These documents and conversations use everyday language and explain to patients their rights. [However], a user agreement is usually long, written in legalese, and intended to protect the company’s rights. It is also possible that these approaches are at odds. For example, informed consent is all about process, asking questions, and even allowing a person to change their mind. A user agreement is about limits, protecting the company’s intellectual property, and often limiting how people can respond to problems with the product (such as arbitration clauses). Also, if a patient does not agree with something in a user agreement, they may have to agree anyway to get needed treatment, or else forgo that treatment.

User agreements are updated all the time, and [if you do not agree with the change], the answer is to stop using [the product] immediately. [Although] it’s not a big deal if you’re cut off from your streaming music service, in the case of a person’s health, [quickly] stopping use could be harmful.

Drug Topics®: What kind of patient and provider safeguards can be put in place to ensure smart medications are being used in the most ethical way possible?

Klugman: We need regulations. This area has few guidelines and few regulations right now. We need regulations to cover areas such as privacy, informed consent vs user agreements, who has access to data, how long and in what forms data can be maintained, [and] safety and efficacy information. There is also limited FDA oversight, so we need to develop better channels for reviewing digital medicine. HIPAA needs to be updated to cover not [only] health care providers and hospitals but also all insurers, device manufacturers, and software programmers.

I think there also needs to be greater transparency about how these devices work and how their intelligent algorithms make decisions for particular patients. Right now, the algorithm inside these smart devices is a black box, meaning no one knows how the machines make their decisions, [partially] because no one understands them and [partially] because companies are protecting their intellectual property. Patients need to have access to what goes into making these decisions for there to be informed consent. Perhaps like [with] 2-factor authentication, patients should get a message whenever anyone accesses their records for their smart prescriptions, including who accessed them and why.
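As a purely hypothetical sketch (not something described in the interview), that kind of access alert might look like the short Python fragment below; the AccessEvent structure, the notify_patient function, and all names are illustrative assumptions, not an existing system or API.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessEvent:
    """One access to a patient's smart-prescription record."""
    patient_id: str
    accessor: str   # who viewed the record (clinician, insurer, manufacturer)
    reason: str     # the stated reason for viewing it
    timestamp: datetime

def notify_patient(event: AccessEvent) -> None:
    # Hypothetical delivery channel; a real system might send a text or app alert.
    print(f"[{event.timestamp:%Y-%m-%d %H:%M}] {event.accessor} viewed record "
          f"{event.patient_id} because: {event.reason}")

def record_access(audit_log: list, patient_id: str, accessor: str, reason: str) -> AccessEvent:
    """Append the access to an audit log and alert the patient, 2-factor-authentication style."""
    event = AccessEvent(patient_id, accessor, reason, datetime.now(timezone.utc))
    audit_log.append(event)
    notify_patient(event)
    return event

if __name__ == "__main__":
    log = []
    record_access(log, "patient-001", "Example Insurance Co", "adherence review for coverage decision")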

Finally, we need technology interventions to protect against hacking these devices. There have been several [recent] news articles about AirTags and how people are using them to track and stalk others, to know where a spouse is at all times, or to harass someone.7,8 But what if the tracking tag was part of your implantable device? [That’s] not so easy to remove. This is why we need more regulations and technology protections.

As for providers, insurance companies and hospitals can track and monitor them in ways that have not been available before. Most of these devices come with portals that allow tracking and monitoring of patient information. An insurer and hospital employer will know if a health care provider is keeping tabs on their patients, how often, and for how long. As a society, we have viewed health care providers as professionals who decide how they do that work, but now they will be under a new level of scrutiny. For example, a health insurer could drop a provider who does not monitor patient information enough [by the insurer’s standards].

The FDA also needs to bring these devices under greater scrutiny. Some things, such as digital pills, do undergo FDA review, but an implantable device only gets a full review if it is novel. If [a device is] similar to something already on the market, then it is not investigated deeply. And things like apps are not under FDA review at all.

When we think about these tech solutions, there are multiple parts—the pill or device, the portal through which it talks to the cloud, the servers where information is stored and where algorithms exist—and the FDA only looks at things that go into the body. The rest of the infrastructure is not under their authority. In Europe, the whole system is reviewed, but not in the United States. [Additionally], the whole system, from device to portals to servers, must be encrypted so that if it were hacked, no information would be revealed.
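A minimal, purely illustrative sketch of that end-to-end encryption idea, using the widely available Python cryptography package (Fernet authenticated symmetric encryption); the payload fields and the key handling are simplified assumptions, not a description of any real device or vendor system.

# Encrypt a dose-event payload on the device before it is transmitted, so an
# intercepted copy of the traffic or a leaked server log reveals only ciphertext.
# Requires: pip install cryptography
import json
from cryptography.fernet import Fernet

# In practice the key would be provisioned securely to the device and the
# clinician portal; generating it inline here is only to keep the example runnable.
key = Fernet.generate_key()
cipher = Fernet(key)

dose_event = {"patient_id": "patient-001", "drug": "example-drug",
              "taken_at": "2022-03-01T08:00:00Z"}

# Device side: encrypt before sending to the portal/cloud.
token = cipher.encrypt(json.dumps(dose_event).encode("utf-8"))

# Server side: only a holder of the key can recover the payload.
recovered = json.loads(cipher.decrypt(token))
assert recovered == dose_event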

Drug Topics®: Insurance providers and other payers are already quite involved in health care decision-making. Do you think the advent of these trackable smart medication technologies has the potential to increase that involvement? Does that potential raise any additional concerns?

Klugman: Smart medication technologies will absolutely increase insurer involvement in medical decision-making, down to the individual patient level. With the feedback these technologies provide, an insurer could say, “We don’t think this drug or device is having the intended effect, so we won’t cover it any longer,” even though the patient and their provider might think it is working. A patient might get a text from their insurer that they forgot to take their pill and that unless they take it as prescribed, [the insurer] won’t cover it. Or a patient might be on vacation and receive a note from their health insurer that they’ve gone to a place where their insurance plan will not cover their medical needs.

Insurance companies are used to making decisions based on population-level data analysis, but now they can make decisions based on an individual’s data. Given how used we are to giving up our private information to technology companies, most people probably won’t think twice about giving up more information to their insurer. HIPAA only protects data held by a patient’s health insurer. Their auto, property, and life insurers are not only free to do whatever they want with that data but also free to discriminate against patients with it; HIPAA does not apply to them.

HIPAA also does not apply to quality improvement and oversight. No one needs a patient’s consent to use their record to check the prescribing habits of their physician, the effectiveness of the device, the efficiency of the medical office, or to upload the latest software update. This is why new regulations are necessary. Devices like smart pills were invented because drug companies and insurance companies wanted to know: If a medication does not work, is it because the medication does not work or because the patient is not following the prescription? Digital medication allows them to answer that question because they know exactly when and where medication is taken. But that sharing of information is not [necessarily] meant to benefit the patient; it is more to benefit the insurer and the manufacturer. Sharing of patient data [must] directly benefit a patient and, at the very least, not put them at risk.

Drug Topics®: Are there other health care technologies that have given you pause from an ethical perspective?

Klugman: In the digital space, automated insulin pumps have been around a while. Robin Cook [MD] wrote a thriller, Cell, in which an intelligent algorithm gave an overdose of insulin to kill certain patients whose care was expensive, and recalls of these pumps suggest this is not far-fetched.

Precision medicine holds a lot of hope [and] concern, [including] genetic therapies that can eliminate disease but might also permanently change the human genome. Neuroscience is finding new understandings of the brain, which can include ways to influence it.

To create new and better artificial intelligence systems, companies need access to huge troves of patient data. Hospitals are happy to sell this data to tech companies for a hefty price. They do not share profits with the patients whose data it is, nor do they inform patients that they are selling the data, nor do they give patients an opportunity to opt out of being included in such data sets.

Anytime a new drug, device, or other therapeutic is introduced into a human body, there are risk/benefit concerns that require ethical analysis. Personalized health technologies have the potential [to improve] our health, but the cost is privacy. How much are we willing to sacrifice—or should we sacrifice—for potentially small gains in health?

References

  1. Klugman CM, Dunn LB, Schwartz J, Cohen IG. The ethics of smart pills and self-acting devices: autonomy, truth-telling, and trust at the dawn of digital medicine. Am J Bioeth. 2018;18(9):38-47. doi:10.1080/15265161.2018.1498933
  2. Carroll L. Shocking ending: implanted defibrillators can bring misery to final hours. NBC News. Published October 10, 2011. Accessed February 15, 2022. https://www.nbcnews.com/healthmain/shocking-ending-implanted-defibrillators-can-bring-misery-final-hours-1c6436765
  3. Ford D. Cheney’s defibrillator was modified to prevent hacking. CNN. Published October 24, 2013. Accessed February 15, 2022. https://www.cnn.com/2013/10/20/us/dick-cheney-gupta-interview/index.html
  4. Mitchell H. Medtronic releases urgent recall for insulin pump vulnerable to hackers: 3 details. Becker’s Hospital Review. Published October 6, 2021. Accessed February 15, 2022. https://www.beckershospitalreview.com/cybersecurity/medtronic-releases-urgent-recall-for-insulin-pump-vulnerable-to-hackers-3-details.html
  5. Kassraie A. Over 60,000 pacemakers recalled due to risk of electrical shorts. AARP. Published May 17, 2021. Accessed February 15, 2022. https://www.aarp.org/health/conditions-treatments/info-2021/fda-recalls-abbott-pacemakers.html
  6. Most Americans want to share and access more digital health data. The Pew Charitable Trusts. Published July 27, 2021. Accessed February 15, 2022. https://www.pewtrusts.org/en/research-and-analysis/issue-briefs/2021/07/most-americans-want-to-share-and-access-more-digital-health-data
  7. Matei A. ‘I was just really scared’: Apple AirTags lead to stalking complaints. The Guardian. Published January 20, 2022. Accessed February 15, 2022. https://www.theguardian.com/technology/2022/jan/20/apple-airtags-stalking-complaints-technology
  8. Browning K. Apple says it will make AirTags easier to find after complaints of stalking. New York Times. Published February 10, 2022. Accessed February 15, 2022. https://www.nytimes.com/2022/02/10/business/a