Member Reviews
Let me start off by saying I quickly realized this was not the book I thought it would be, and maybe I’m not the intended audience. I was intrigued by the premise of using Big Data and analytics to help revolutionize the mental healthcare profession and treatment. I was also wary because Big Data is still so trendy and its applications are being used in new and interesting ways, but also possibly in unethical and harmful ways. I will say that while data analysis can give us insights (which is what the author is looking for), it is not some magic answer to all of our problems. It is easy to bake biases into an analysis, and using it to analyze an individual rather than to glean insight about some bigger population or category can have some very complicated ethical implications. Overall, I believe the author presented some good ideas, but I think more introspection, along with rules to counterbalance any abuses of data analytics, should be considered as well.
This is absolutely crucial reading for anyone in tech or anyone learning about big data. I think people can read this and learn more about the effects of what is happening within the technology industry.
Psychiatry is an inexact science. If someone who was previously active and energetic is now spending their time isolated and alone, they can reasonably be assessed as depressed. A person who hears or sees things that are not real is experiencing psychosis. However, what a clinician knows about a patient is limited to what the patient (and sometimes family and friends) tells the clinician and what the clinician is able to observe. A mental status examination provides limited and somewhat vague information to develop a baseline and measure improvement or decline. Even having a suspected diagnosis provides little information as to how to help a specific patient improve. Medications and therapy can help, but it is difficult to objectively measure improvement in mental health in the same way that physical conditions can be measured and tracked.
The author of "Reading Our Minds" is a psychiatrist who wants to utilize digital technology to better track and understand mental illness on a personal level. The author discusses how much information people voluntarily share on social media and how that information, if available to a clinician, would enable the clinician to get a better sense of what the patient has been experiencing and how their mental functioning has changed over time. Studies have shown that what people share on social media is a good indicator of mental health. Being able to review six months of social media posts and online searches would provide a lot more information than can be gained in a thirty minute evaluation. While a clinician can ask how you felt or what you were doing or how you were functioning weeks or months ago, you may not be remember or your recollection may be inaccurate. Social media posts, being time-stamped, provide an accurate recollection of how you perceived you were doing and what you are doing at a given time. The author makes it clear that it would be up to the patient to decide what information was made available and to whom it was available. The author also discusses the possibility of utilizing apps as part of a treatment regimen.
I appreciate the author’s desire to make diagnosis and treatment of mental illness more reliable and potentially more “objective.” Utilizing digital data seems like something that could be beneficial. However, personally, I don’t think I would be comfortable sharing my digital data with mental health professionals. "Reading Our Minds" is an interesting and thought-provoking book.
I received a copy of the e-book via NetGalley in exchange for a review.
This book basically explains how hard it is to have evidence of psychological work. Meaning, being a psychiatrist or therapist is all a guessing game. The author goes into detail about the tests and measurements that physical doctors can use to test theories and get actual numerical feedback. Blood tests, stress tests, urine tests... but what do clinicians of the mind have?... nothing but self-report (basically, we can use observations of people who interact with the client, but only if they are willing and the client is willing to include that person).
This book spoke to me on so many levels. I found myself nodding my head at the list of limitations we have for getting scientific evidence to treat a person’s mental health, yet for the body we have so many tests. Being a therapist or psychiatrist is incredibly difficult due to not having more solid tests and measurements.
That being said, this author proposes we use smartphones, social media, and apps to help us get more concrete answers than our self-reported assessments provide. What would it be like to have your therapist learn how you are doing by looking through your texts, studying your language and how often you communicate, and scrolling your social media to see the same? What if your web browsing history were studied to identify your thoughts? We are already allowing big corporations to do this for their own marketing benefit, so why not do it for the sake of our mental health?
Read to learn more.
In a very pithy book, Barron makes the case for a "quantified self" approach in mental health as well, particularly for better diagnosis. The book clearly points out the drawbacks of the current approach to history taking in psychiatry. The premise that efforts like the Framingham Study helped to bring "objectivity" into diagnoses is central to the book. Using some limited studies, the author argues for various tech that can be used to help diagnose specific conditions. None of the tech (be it sensors, speech/sentiment analysis, etc.) is particularly new, but Barron points out how it can be leveraged.
Issues of consent, privacy, and, most importantly, who will pay for all this are addressed, albeit somewhat naively and optimistically. (Fitbit data is still not part of health records yet, right?) Perhaps an interesting discussion worthy of more time was on data ownership - who actually owns the data? Concepts such as the digital navigator sound interesting at a small scale - but when it needs to be offered to the entire population, what is the cost of training personnel to manage, store, interpret, and use that data? There is no word in the book on the additional liabilities that having this data exposes even clinicians to.
Despite the overly optimistic views and the limited studies cited for its hypotheses, the book has a good problem statement and is at best a good opening shot at starting a discussion on what personal data (in an increasingly digital world) should be part of one's clinical/health/mental health record.
Reading Our Minds by Daniel Barron, a psychiatrist and pain management fellow, explores the incorporation of Big Data to improve the practice of psychiatry. The idea of supporting psychiatric assessment with solid data is an appealing one, but many questions come to mind.
I was surprised by an apparent blind spot of the author’s that appeared early on. He writes:
“A recent study showed that Google searches for explicitly suicidal terms were better able to predict completed suicides than conventional self-report measures of suicide risk. Perhaps this is because people who are ‘really gonna do it’ go through the planning and researching (i.e., on Google) of how to kill themselves, but it could be that people are more honest when they approach Google with what’s on their mind.”
Perhaps he hasn’t caught on to the fact that those of us with mental illness are well aware that the doctor who’s asking us questions about suicide has the power to commit us to hospital involuntarily, where our clothes and belongings will be taken away and we may be locked in a seclusion room with nothing to do or think about except how wrong it was to be honest with that damn doctor. That’s not a hypothetical, either; that’s exactly what goes through my mind when I’m contemplating disclosing suicidal ideation to a doctor, because that has happened in the past. Aside from that, though, just imagine if Google had an algorithm that would flag it to emergency services if they thought you were getting a little too close to the edge. I think I’d be motivated to start using Tor. Hello dark web!
A running example through the book was the author’s assessment of a girl he ended up diagnosing with schizophrenia. Her mom reported that she’d had changes in behaviour patterns, social engagement, and internet use, and the author argued it would have been useful to have her browsing history, geolocation data, call/text logs, etc., as this would help to establish her baseline “normal” and what deviated from that.
The results of a number of relevant studies were presented. For example, changes in Twitter behaviour were observed in women who developed postpartum depression. Another study looked at Facebook posts by people with psychotic disorders and noted distinct changes that were seen shortly before people ended up being hospitalized. The changes included more swearing, anger, and references to death.
There were some interesting suggestions for objectively measuring things that are currently evaluated subjectively, which I agree would very much be of benefit to the practice of psychiatry. Speech was one of the examples given. I experience speech impairment as a psychomotor effect of depression, and it could be quite useful to be able to monitor that in a clinical setting.
If you’re wondering about the issue of consent and privacy with all this data, it came up, but it didn’t seem to be treated as much of a barrier. The author writes that he began the book thinking that it would be hard to get patients to agree to data collection, but COVID proved him wrong. As an example, he pointed out that people were willing to download apps that would track geolocation to determine COVID contacts. I’m quite confident in saying that the identifiable data that I’d be willing to give up in the context of a deadly pandemic is not going to be the same as what I’d give up to a psychiatrist.
I think this is where another big blind spot comes in. Patients are people. There is a significant power differential between psychiatrist and patient. Involuntary treatment takes away people’s rights for the sake of treatment. Even when treatment is voluntary, decisions are often made by the prescriber alone rather than as part of a collaborative process that supports the agency of individuals with mental illness. Sometimes physicians assume that patients should be able to put up with side effects rather than recognizing the patient’s right to make those choices for themselves. Mental health professionals are in no way immune to stigma; this is borne out both anecdotally and in the research literature.
I could go on, but that’s already a whole lot of context to consider, and it’s disappointing that the author just doesn’t seem to consider it. There’s no indication in the book that the author has sought out feedback from anyone on the patient side of the fence to see how they would feel about the idea of handing over their Google search history to their psychiatrist; perhaps this wasn’t seen as an important part of the process?
It seems like too big an oversight to be accidental that patients don’t appear in this book as people who are empowered to be advocates for themselves, their health care, and their privacy. To assume that patients will readily hand over anything the good doctor wants smacks of paternalism. That’s especially true when no argument has been offered about how all of this Big Data will benefit patients.
As someone who has straddled the patient and mental health professional sides of the fence, I say a) back away from my data, and b) I would recommend the author reflect on what that fence looks like for him, and what it might be preventing him from seeing.
I received a reviewer copy from the publisher through Netgalley.
READING OUR MINDS offers an insightful, engaging look into a path forward for psychiatry. Barron clearly identifies gaps in the practice of psychiatry and proposes data-driven, tech-based solutions. The length was perfect for a one-sitting read, and the questions raised by this book have already led me to discussions with friends and colleagues. I highly recommend READING OUR MINDS!
In this short book by Dr. Daniel Barron, the author makes the case for using some of the data from our digital lives to help with diagnosing mental illness. Barron, who has been practicing psychiatry for years, points out that most of the current standard methods of diagnosing mental illness rely on self-reporting from the patient, which can be inaccurate or imprecise at best. He explains that most people are not very good at recalling their own behavior, let alone someone who might be experiencing an abnormal psychological state. Barron laments that in an era where tech companies have so much digital data about our behavior patterns and activities, this information still isn't being used to help doctors more accurately diagnose patients.
Dr. Barron cites several examples of studies where algorithms were able to predict and/or identify medical conditions like postpartum depression simply by analyzing the social media patterns of patients who agreed to share this information. Information like search history, social media posts, accelerometer data, and even location history can be an invaluable tool for understanding a patient's mental state and behavior patterns, and can result in much more beneficial treatment plans. Dr. Barron explains how many specific technologies that currently exist could be helpful in increasing the precision of psychiatric medicine.
Barron mentions the suggestion of a “Digital Navigator” position: medical personnel who would work with a patient to decide what data they are comfortable sharing and how the information would be used. This sounds like an excellent idea, and I would imagine this could easily become standard practice at some point in the future.
Although this isn't discussed in the book at all, I couldn't help but wonder if these machine learning algorithms might be used in the future to analyze our digital data and identify potentially severe psychiatric illness that could lead to violent or dangerous behavior. With so many mass shootings and other violent acts committed in the US by people with psychiatric issues, I wonder if these technologies will be used to try to prevent tragedies before they happen, even if it means sacrificing some sense of privacy.
I really enjoyed this book, as the author makes some very convincing arguments using very clear and well-documented examples. It really made me think about what the future of psychiatric medicine will look like. 5/5. Thanks to NetGalley for the ARC.
Thank you to NetGalley and Columbia Global Reports for an advanced electronic copy of this book in exchange for an honest review!
In this novella-length book, the author explores the lack of measurement used in psychiatry, which makes it a subjective medical field, and how different types of data can potentially be used to give psychiatrists something to measure and work with for more accurate diagnosis and treatment. If you are someone who has ever struggled to communicate with a doctor about what's going on in your mind (like me), or if you're a person who finds the idea of data collection for mental health interesting, then this book is for you.
It's hard to describe the impact of this book, but I can say with certainty that I feel optimistic after reading it. As mentioned, I do find it difficult to communicate my mental health issues to a doctor and to feel that the resulting diagnosis and treatment is actually what I needed. Therefore, the idea that a doctor could use data to "get inside my head" and understand, without relying on their interpretation of how I present myself (especially as someone with high-functioning anxiety), is incredible. I am excited to share this book with friends.