Bioethics Forum Essay
Continuous Health Monitoring: Greater Self-Knowledge or TMI?
For millions of health-conscious Americans, digital technology has been a boon, providing increasingly sophisticated fitness trackers that measure steps, heart rate, and calories burned. They are every dieter’s dream (or nightmare), delivering honest appraisals of behavior and activity. Researchers speak excitedly about a new frontier of “continuous health monitoring,” with the potential to detect diseases and ailments, perhaps even cancer, in their incipient stages, and to understand our bodies with unprecedented intimacy. It also raises a host of disturbing questions about health surveillance. Will these devices really empower us? Will they compromise us as autonomous individuals? Will they simply drive us mad?
The pandemic has hastened the development of health monitoring devices. The Oura Ring can detect slight variations in body temperature and, therefore, might reveal early Covid symptoms and catch incipient outbreaks. This potential persuaded the National Basketball Association to outfit each of its players with an Oura Ring. The U.S. Olympic surfing team is also using the device. Unsurprisingly, all this star attention means that Oura sales are booming.
Health surveillance is increasingly stylish. The Oura Ring is available in titanium or gold or encrusted in diamonds, and it is popular with celebrities. The sleek AirPods can monitor your blood oxygen levels. New patents from Apple point to next-generation AirPods having EKG (electrocardiogram) and ICG (impedance cardiogram) capabilities, a first line of defense against heart failure. The Apple Watch already sports an EKG feature, a heart rate monitor, and fall detection. It can measure your rate of respiration and alert you to sleep apnea. Soon the Apple Watch will include a thermometer to help with fertility planning.
Amazon’s Halo wristband can record your voice, analyze its tone, and report on your moods. New York Times columnist Kara Swisher writes of her experience wearing a Halo: “That first day a vexed emoji told me I was ‘stern’ or ‘discouraged’ for 16 percent of the day. ‘You had one phrase that sounded restrained and sad’ for 1.6 seconds at 12:30 p.m., it reported, although I have no idea what that phrase could have been. But 8 percent of the day, including for 14.4 seconds at exactly 11:41:41 a.m., I was ‘satisfied,’ with ‘two phrases that sounded satisfied, delightful or appreciative.’ Later, for 1.2 seconds at 7:18:30 p.m., I was ‘afraid, panicked or overwhelmed.’” What is the health value of this? Well, you can reflect on your various and varying emotional states, and understand what prompted them. You may subsequently decide to avoid certain encounters or experiences—or people—and reduce your stress level.
Household furniture and appliances are also part of the health surveillance network. Smart beds can monitor us at night, record our tossing and turning, and produce detailed sleep reports, which we can study over our coffee. They can detect if we are snoring, and nudge us to stop. There are also smart forks to keep track of how much and how quickly we eat.
In the works are smart toilets, being developed by Japanese manufacturer Toto. “The Wellness Toilet, which the company hopes to roll out in a few years, will scrutinize people’s daily waste output to look for various disease markers,” states an article in an Asian business publication. Stanford researchers are also working on smart toilet technology in hopes that it can aid in “precision health.”
Philosophers have long advised self-knowledge as the essential prerequisite to a life of virtue and happiness. If you don’t know yourself, your true nature, your talents, and your shortcomings, Seneca said, you simply cannot understand what fulfills you, and how you can best contribute to society and the world at large. Thus, the Stoics argued, philosophy starts in frank self-assessment and appraisal. It is a lifelong discipline, requiring patience, frankness, diligence, and commitment.
What do the many devices reporting on our mental and physical state provide for self-knowledge? What will we learn about ourselves? More importantly, what will we become?
I’m afraid we might drown in all the data. Might we also drown it out? How will we know what information is essential and what to ignore? And if, instead, we obsess over every detail, will we become hypochondriacs or narcissists of the highest order?
From a moral standpoint, these devices may make us highly insular. Poring over the constant stream of health data, will we have time or energy or incentive to wonder about other people, how they are doing or feeling? Will we care?
Surveillance scholar Shoshana Zuboff speaks of practitioners of data analysis as “high priests” versed in an esoteric and opaque science, which gives them unprecedented insight into our lives. It is chilling to ponder what data analysts will do with the embarrassment of riches gathered by our health monitoring devices. It gives them even deeper insight into our tastes and predilections—and needs—which they can then manipulate. Marketers will be happy to know if we are dieting, or struggling with our weight, or managing diabetes, cancer, or mental illness.
Despite these concerns, I suspect constant health monitoring will soon be widespread. In the United States, cost savings in health care will drive this movement. Our ailments will be less expensive to treat if they are caught early, which health surveillance enables. But it is worrisome to contemplate this new world.
Will we principally understand and know ourselves as a collection of data points? As a host of markers and figures that can be manipulated at will, much like the temperature on my thermostat? This is imprecise self-knowledge. In surveying myself, I may miss much of what is me, and what makes me—my friends, my family, my environment, all of which play a significant role in forging and sustaining my mental and physical health. What’s more, an ideal of perfection is oppressive. I am not surprised that constant health monitoring is insinuating itself into our hyper-competitive, hyper-individualistic society. But it probably won’t make living in it any more enjoyable.
Firmin DeBrabander, PhD (@Firdebrabander), is a professor of philosophy at the Maryland Institute College of Art. His most recent book is Life After Privacy.
Firmin, thank you for your post. I literally just finished a class on Law and Bioethics. That class followed my other Bioethics class tonight, which was on Research Ethics. Whenever the various clinicians in my classes start to speak about diseases, treatments, and drugs, my heart just starts to race, and I am afraid to confess that these discussions cause me anxiety. I then reminded myself, for the first time tonight, that although I am not a clinician, as a philosopher in the group there must be something I can say regarding the anxiety I feel during these very detailed health discussions. Your post has therefore given me an opportunity to say something.
Yes, too much knowledge via these wearable observation and testing devices would overwhelm me, but while I understand that this knowledge awareness is for the customer, and the customer has signed on to receive such knowledge, this knowledge is also for the researchers of such collected knowledge. Two bioethical and legal concerns here are: have these customers given their informed consent to have researchers collect and use their information in such a manner; and do these customers have the decisional capacity to be giving this kind of consent?
One upside to all of this – and I wanted to mention this in my Research Ethics class tonight, but felt a little unsure of my question: in a research study with, say, a sample size of 5,000, human research subjects typically tend to drop out when the study lasts for more than a year. These wearable observation and testing devices, which look nice and are trendy, could engage subjects in research studies for the long haul – constant surveillance, as you write about. This would bring the treasure trove of knowledge about human bodily functions that researchers have so long sought.
Alas, it may end up being the case that on my Apple Watch, Siri knows me best. And so, were my decisional capacity impaired for one reason or another, and I had not yet signed an Advance Directive, Siri can be my surrogate! What would the law have to say about that?
Philosophers have debated human knowledge for centuries. What can modern-day philosophers offer about “too much knowledge”? Individually, I may aim to live a life well-examined, but Siri and data “Others” now comprise the team of my life examined, even SO much more. Personally, that is just too much self(autonomous)-examination for me, and so I defer to Nietzsche regarding his thesis on moderation. But who follows moderation in this era of (data) excess and excessive (and, sometimes, illegal) sharing of customer/human research subject data? Who is “strictly scrutinizing” these ever-increasing intertwinings of data, tech, and law?
Thanks again for your timely post!
Hi Suzanne- Wow so many amazing questions! It would take me another essay to answer them all in earnest- and you give me much food for thought. A few things: yes, it seems the users have given consent to this kind of data collecting- but you raise the salient question: do people really know all that they are consenting to? I don’t believe they have thought through the implications. Because it is kind of unfathomable. I do think these devices are amazing for long term research. But I worry about them taking over your decision making ability, as you indicate- we may outsource that to the devices, too, as we have outsourced so many mental tasks to digital devices already. Clearly, we need new legal structures for that- in the case of last directives. It’s funny you mention Nietzsche- I was thinking of him recently too- on the virtue of forgetting. Forgetting can be good for us, he argues- we need not keep everything in mind- that is the nature of ‘ressentiment,’ where we go over our grudges or worries again and again (from the French ressentir) . It can be a virtue not to be focused on every little iota of our existence, as these devices seem to encourage. Thanks for your thoughts!!!
Thank you for your thought-provoking post, Mr. DeBrabander! You have highlighted a great deal of benefits that continuous health monitoring technologies offer in our modern age, while posing multiple bioethical questions these technologies raise. As you shared, I too believe that health monitoring technologies provide a tremendous amount of benefit in increasing self-awareness of health. Health technologies provide objective data that allow us to examine health behaviors, with the potential to improve long term health and decrease healthcare expenses. Yet similar to you, I am cautious about its ever-increasing presence and implications on health surveillance.
After reading your post, I can’t help but draw a comparison between health surveillance via technologic devices and genetic sequencing. Like health technologies, genetic sequencing provides an enormous amount of “data” on an individual’s health, posing similar moral and bioethical dilemmas. Although genetic sequencing and associated technologies (such as CRISPR) are relatively new and hot topics, perhaps there are lessons we can learn that could be applied to continuous health monitoring?
Genetic testing uncovers the programming behind how we function, with the potential to detect our flaws for better or for worse. Discovery of problematic genes, such as BRCA, has allowed us to better treat or screen for diseases, such as breast cancer, reducing morbidity and mortality. Furthermore, genetic testing has piqued interest and exploration in precision medicine, which may someday allow for better targeted therapies to treat the individual susceptibilities that cause disease. Similar to genetics, I think health technologies have the potential to be an incredible asset in improving individual health.
Yet, as you detailed in your post, health technologies have the potential to produce an overwhelming amount of data. This is similar to genetic sequencing, in which despite our ability to sequence an entire genome, we lack knowledge about the purpose of many genes and their link to particular diseases. Additionally, while we can uncover an overwhelming amount of information by sequencing one’s genome (via services such as 23andMe), clinicians often lack guidance on what to do with this information once uncovered, and results often produce unnecessary anxiety for individuals who may be predisposed to developing, but not yet have and may never develop, a given disease. In this regard, I believe it is sometimes best to leave some information unknown. With the overwhelming amount of data that health technologies are able to uncover, I wonder what data may be best left unknown with the harms of knowing outweighing the benefits of uncovering this knowledge.
But perhaps most concerning to me, I wonder how continuous health monitoring data may be used by third parties, such as insurance companies or employers, to discriminate against individuals based on their health data. With the discovery of the human genome and its link to genetic flaws, bioethical issues were raised regarding how genetic information would be used by insurance companies to discriminate against susceptible individuals. These concerns prompted the passage of the Genetic Information Nondiscrimination Act (GINA), which was created to limit discrimination based on genetic make-up within the health insurance industry and workplace. To my knowledge, no such laws yet exist to limit the use of health monitoring data, a protection I worry is sorely needed. Furthermore, despite passage of GINA, which was meant to enforce an ethical line that our society has drawn between acceptable and unacceptable use of genetic information, it is impossible to control all individuals’ actions. Even in the presence of laws, morals, and ethics, people are bound to break (and have broken) “the rules” that our society holds on acceptable use of genetic information. This causes me to pause as we consider the enormous amount of personal health information continuous health monitoring provides to the makers of technologic devices. How might this information be used against us? What protections must we put in place to prevent health discrimination?
I also wonder, how will health technologies worsen health disparities as those of lower socioeconomic status are unable to afford these technologies? There are many bioethical and legal considerations we must explore as health technologies become more prevalent in our day-to-day lives.
Dr. DeBrabander, I enjoyed this insightful post into the idea of continuous health monitoring technology and what it holds for our future. As you mentioned, this technology holds great potential in helping us understand our day-to-day health, and it can arguably be cost saving in light of the expensive healthcare system we have in this country. I certainly believe that COVID has expedited the use of this type of technology to help monitor our oxygenation status and cardiovascular health at home rather than needing to visit the emergency room. If used correctly, this type of technology can serve as a stepping stone for patients to go visit their healthcare provider if there are any concerns, and shift our healthcare system’s focus towards preventative medicine. However, I am skeptical about continuous health monitoring because of the detailed surveillance it has over our lives, along with the mass amount of data generated and what that data entails.
I found your included NYT opinion article by Kara Swisher regarding Amazon’s Halo wristband to be incredibly thought provoking. As she stated, having a huge corporation like Amazon be able to monitor your mood, store that information somewhere, and deliver you back a service like ice cream if the technology analyzes your tone as sad just sounds incredibly intrusive. The idea of a smart toilet that can analyze your urine and stool to determine whether an individual is prone to conditions like IBS sounds helpful, but the thought of a built-in identification system, like adding fingerprinting on the flush lever or a small scanner of the anus, is quite unnerving. When this data is uploaded to a cloud system, who gains access to this data, and how accurate is this information in providing a proper diagnosis? I am also very curious what types of interpretations these gadgets offer individuals based on the results they’re given. Would they be individualized to the patient’s age, weight, ethnicity, predisposing genetic factors, etc., as in what you may find when you visit a provider’s office, or do they just offer generalized interpretations that a patient may be, say, hypertensive or tachycardic? I find this data could potentially create more anxiety for users who may not know how to interpret the information provided.
As you mentioned in your post, this data can easily be manipulated for marketers to use and we will continue to fall prey to the profitable corporations who capitalize on our vulnerabilities. As providers, we are taught to reduce overscreening and overdiagnosing our otherwise healthy patients and the idea of continuous health monitoring seems to go against that idea. While wearable technology has increasingly become integrated and depended upon in our lives, it’s important to caution and limit how far this data goes to tracking for health concerns specifically versus surveilling our day to day activities where it becomes an intrusion of our privacy.
Thank you Melissa! You raise so many good points. I am not a lawyer, so I cannot say much about the proper legal protections here- but I think we must bar companies from sharing this data with marketers. This data must be strictly off limits to them, and accessible only to individuals and medical professionals. It seems clear to me, too, that this is a major step in the project of precision medicine- which could be used to tame diabetes, for example, by issuing individualized regimens to patients, analyzing their stool, recommending their diet, monitoring their activity and habits throughout the day, etc. As for Halo, this device in particular raises questions about the nature of the data collected, and how useful it can be- psychological data can be misleading, and troubling. I have written about this elsewhere.
I found this to be a very interesting topic and I appreciate the value in what you shared throughout this article, Dr. DeBrabander. Thinking about the future, technological advances such as continuous health monitoring have the ability to make a significant impact that may result in greater self-awareness of both physical and mental health. I appreciated your point on the Oura Ring and agree that a device such as this is truly unique in detecting Covid symptoms and helping identify potential outbreaks. However, one particular point that came to mind with this was in reference to the population. This particular device was acquired by those with “unlimited” financial assets (i.e., the NBA and U.S. Olympic teams). I find all these advances beneficial, although marketing and advertising seem to target those who have the means to access such devices. It would be interesting to see in the future how devices with the potential to detect diseases and disorders such as cancer, diabetes, mood changes, and heart failure might lead us to collaborate with insurance companies to cover a device that could potentially reduce the overall costs of treating such conditions. On the other hand, I believe the data collected could also become problematic if acquired by insurance companies (specifically if an individual has a life insurance policy, and the cost associated with it, should they detect comorbid health conditions), or insurance companies might limit the services they cover as a potential barrier.
One other point you mentioned that I found interesting was the Amazon Halo wristband, which has the capability of reporting mood based on voice analysis and tone. I’m curious about the accuracy of this feature, and whether in trial studies voice manipulation produced inaccurately reported moods. I found this specific feature somewhat disturbing and invasive. The wristband is marketed as an “always-on” microphone, recording and analyzing tone of voice. The columnist, Kara Swisher, wrote that in one of her reports she was “afraid, panicked or overwhelmed.” If she were someone who had some sort of anxiety disorder, I feel this feature could exacerbate those symptoms or lead to catastrophic thinking. You highlighted many valuable and interesting points throughout this article that I appreciated as food for thought. I agree with the benefits of health surveillance in potentially reducing healthcare costs through early detection, but also agree that marketers and other figures can manipulate ways to continue increasing financial gain that is not always in the best interest of the consumer. All in all, I believe that technology has the capacity to change the way we approach such matters, but it is important to remember that such measures should be taken with caution so as not to further create barriers that decrease trust between providers and patients. Thank you again for your thoughts on this topic!
Hi Ashley, Yes I have questioned the accuracy of Halo’s data elsewhere, in another article. It seems pretty useless to me- or it could potentially make us more anxious, and not more insightful. I think you are also correct that these devices could be the privilege of the rich, making them healthier- but I would think that the government would find it in its interest to subsidize these devices for the poor, to help with common afflictions that Medicaid ends up footing the bill for- heart disease, obesity, diabetes. Obamacare focused on changing people’s habits to make them healthier, and thus more affordable patients.
Dr. DeBrabander, thank you for your thought-provoking post. It is not surprising to me either that constant health monitoring has taken our society by storm. As you mentioned, we live in a hyper-competitive, hyper-individualistic society. On top of that, in today’s society, instant access to information and to “things” is not only sought but has become our new norm. Technology has driven this demand and need for instant access. And while I think much of our reliance on instant access to information and to things is harmful to society, I believe that instant access into one’s health is an outstanding way in which technology has evolved. I agree with the philosophers that self-knowledge leads to a life of virtue and happiness.
However, although I stand by the great benefits of health monitoring, I also appreciate the concerns you raised, such as drowning in all the data and the information that will become accessible to data analysts. Additionally, two other concerns came to mind upon reflection. My first concern is the effect health monitoring will have on the health inequalities that already exist in society, and my second concern is accountability.
As for my first concern, you mentioned that our ailments will be less expensive to treat if they are caught early by using health monitoring technology. I agree. However, the cost cutting will only benefit populations that have the means to obtain health monitoring technology. As such, health monitoring may, in turn, only deepen the health inequities that already exist.
As for my second concern, I wonder who will be held accountable for harm that may inevitably be caused in connection with health monitoring platforms or technology. In contrast to the immense accountability a nurse or doctor has, the law does not impose such accountability on health monitoring platforms or technology companies such as Amazon or Toto. For instance, if a nurse commits medical malpractice, there are established procedures and relief to redress the harm experienced. However, if a health monitoring app or device commits the equivalent of medical malpractice, there are clearly not the same established paths to relief as there would be in the traditional doctor-patient relationship. If more and more of society relies on health monitoring, who will be held accountable?
Nevertheless, I encourage developments in health monitoring and the right patients have to their health information.
You are absolutely right, Avanna- if this becomes widespread, as it seems to be, it will be easier for spies to access and perhaps abuse this information. It will become a crutch we cannot live without- and vulnerabilities are easier to manipulate. I also agree this will be the privilege of the rich for the foreseeable future- but I suspect it could be a boon to governments looking to cut healthcare costs for the poor- they could subsidize such devices. Or, the devices could simply become cheaper, as so often happens when the technology is further developed.
This was such an intriguing read, Dr. DeBrabander! At first glance, continuous health monitoring sounds like the new trajectory of healthcare. Preventative health is often overlooked and hard to enforce in the U.S., and continuous health monitoring would offer a reasonable solution to the lack of preventative care. Incorporating continuous health monitoring into devices that many individuals already have would certainly make it more convenient to become more health conscious. From personal experience, the “Health” app that is preinstalled on my iPhone has definitely made me more conscious of my number of steps, or specifically, my lack of steps. My parents and many of my friends, who used to live more sedentary lives, now try to reach their daily goal of 10,000 steps.
While the benefits of health monitoring devices seem endless (smart beds could prevent snoring and help individuals achieve better rest, smart forks can aid in digestion by slowing down the rate of eating, and smart toilets could potentially detect colon cancer, UC, Crohn’s, and IBS), the amount of data that is required is a very real concern. The amount of data taken from social media is already concerning; therefore, it’s very important that we understand who receives sensitive data pertaining to individuals’ health. As Shoshana Zuboff suggests, personal health data would be highly valuable to numerous marketing spheres; therefore, how do we ensure that health data is not leaked to different corporations?
Another good point you addressed is how incorporating continuous health monitoring would affect healthcare utilization. While we expect that increasing preventative care measures would decrease the number of urgent/emergent healthcare visits, could it be possible that it will result in over-utilization of labs and imaging and overwhelm primary and specialty health care? Although health monitoring is definitely going to be a new frontier of healthcare, there are plenty of valid concerns that only time will tell.
I am sympathetic to your concern, Yiwen- I wonder if all this health monitoring will make us huge hypochondriacs! It is reminiscent of our relationship with WebMD– which many people compulsively check to diagnose themselves. What a headache for doctors.
World class runners and cyclists can employ an array of physiological monitoring techniques to optimize training and competitive performance. But non-elite fitness athletes may regard highly intensive monitoring as a non-beneficial habit prone to “false positives” or conducive to time-consuming addiction, rather than healthy appreciation of benefits of consistent exercise training. Rather than devote excessive time and attention to highly detailed personal monitoring data, why not focus on appreciation of healthy exercise training?
I like your point Helen. Most of us are non-elite athletes! Why do we need such devices to monitor our exercising? Well, because marketers want to sell us stuff of course! This has become a profitable market- but my father-in-law, who is a doctor, says walking is the best exercise (swimming too)- he is seeing so many ailments from too much exercise in older patients.
Thank you for this thought-provoking piece. I would say my opinion on continuous health monitoring tools has changed recently, because I long thought of continuous health monitoring wearables as active measures of prevention against known conditions: like the Embrace2 (an FDA-cleared wearable for advanced seizure detection) or continuous glucose monitoring from insulin pumps. The style of continuous health monitoring you’ve described is what I would consider passive prevention: monitoring for overall symptoms of ill health among healthy individuals (surely if NBA players aren’t healthy then none of us are). Beyond the overall lack of regulation regarding these general wellness wearables, we run into a similar moral/ethical dilemma as with comprehensive genetic testing: is it better to know what could potentially come to pass?
It’s generally considered good practice to get genetic testing done if you have prior familial history of a known disorder. I would argue that our moral/ethical dilemma regarding comprehensive genetic testing is more easily solved than with general wellness wearables, because you have either current family members or future family members to consider. In the case of general wellness wearables, you really only have yourself to consider. Would you want to know everything about your health status all the time?
Let’s assume that general wearables did indeed provide prescient knowledge of your future health (I’m not considering a wearable detecting your elevated temperature hours before you yourself realize you have a fever, but rather a toilet that looks at your bowels and determines that you are progressing toward one of the many food intolerances or inflammation-related disorders). What would separate these general wearables monitoring your currently healthy personage from the active health monitoring wearables, like the Embrace2 I mentioned earlier? Certainly a device like the Embrace2 might not be able to tell you that your bowels are inflamed, and a smart toilet cannot tell you that you are about to have a seizure. Each device has its own role to play and was designed with a particular purpose in mind. What separates your usage of these two devices? If you were using the Embrace2, you would need a prescription from a clinician that states you need active seizure monitoring. If you purchased a smart toilet off of Amazon, you don’t need any validation from anyone. The other devices that you’ve mentioned, like a fertility planning module in the Apple Watch or an emotion tracker in Halo, are products that are designed for a particular purpose but need no approval for usage.
Of course, the topic of medical approvals and medical clearance in general is a wide topic with many subfields of ethical quandaries. My mentioning of medical approvals earlier is specifically to illustrate the differences between what we call precision medicine and “precision” medicine. An Embrace2 is prescribed because you already have a prediagnosed condition. These general wearables with specific interest modules are not taking active measures to ensure your current health and safety, they are making passive observations for your future health and safety.
I think what these general wearables have provided us with is a sense of ownership of our own health and health data. We are the ones who have decided to invest in our future health, and we choose to monitor ourselves and enlist the help of a Halo, a smart toilet, or AirPods to arm ourselves against health failures. We do all this without a specific target in mind, because we do not yet know what target we are aiming against. This isn’t really precision medicine, because precision medicine requires the knowledge of a directed target: a druggable protein, a specific symptom that needs treatment. In most healthcare frameworks I know of, that precision medicine is prescribed and diagnosed by someone else, a specialist who you pay to know what defense you need to take, so that you can go purchase your tools at the pharmacy. Using the Embrace2 is an active prevention that doesn’t feel very active on the part of the user when we compare it to the host of general wearables we can self-ascribe. This sense of empowerment is one of the biggest sells for general wearables, regardless of whether or not such monitoring truly leads to a more healthful, fulfilling life. I think we have yet to see how these devices will change the relationship between medical provider and patient, or patient and their own health outcomes.
Thank you for making this distinction, Sue, between types of precision medicine. I think it could be a helpful distinction for future thought on the morality of these devices. Consumers could know they and their data are especially safe when they are signed up for true precision medicine… The other devices are not so ‘precise’ as they suggest, really. Especially Halo. This misleading precision is a big selling point, offering a superior sense of autonomy. But Halo, for example, offers the kind of ‘precise’ data that can make you more anxious and miserable, needlessly so, I would argue.