
Early Detection or Ethical Overreach? Evaluating SleepFM in Health Care

Stanford Medicine published an article in early January outlining the results of a new AI model, SleepFM, designed to predict health risks based on sleep data. The model was trained on a dataset comprising more than 585,000 hours of brain activity, heart activity, respiratory, and body-movement recordings, each collected during a single night of participant sleep. So far, SleepFM has been especially successful at predicting Parkinson's disease, dementia, hypertensive heart disease, heart attack, prostate cancer, breast cancer, and death.

Although this is an exciting and impressive development, as early detection is key, there are considerations that need to be addressed. Data privacy is an obvious and common concern with any AI system, but it is especially pressing for health predictions like these: if insurance companies gained access to this kind of information, patients could face significant discrimination. Wealth inequity should also be considered, as this kind of data collection is likely to be expensive and requires overnight monitoring. AI deployment in health care is often framed as a way to broaden accessibility, but this new technology has the potential to intensify existing disparities.

In addition, biases and inequities are embedded within training data; if the dataset underrepresents certain populations, predictions are likely to be less accurate or reliable for those groups, and a misestimation of risk could lead to over- or under-treatment among already underserved populations. Furthermore, such detailed knowledge about one's own health could cause significant stress and threaten mental well-being, especially if a predicted condition has no cure or means of prevention. This could also lead to over-medicalization, as people may seek tests or treatments based on the statistical risk calculated by an AI model rather than on objective clinical necessity. Similar concerns arise with currently available testing, such as genetic testing, though in those cases patients themselves choose to undergo the analysis.

This model undoubtedly has immense potential and has shown promising results, but as with any deployment of AI in a healthcare setting, it is essential to consider the black-box nature of such models. SleepFM takes in sleep data and converts it into predictions about patients' futures; when clinicians do not fully understand the reasoning behind these conclusions, should they be allowed to act on them? Ultimately, even though models like SleepFM offer remarkable opportunities for early detection, their integration into health care must be guided by transparency, equity, and careful consideration of who truly benefits from knowing the future.

Anna Bulto '26 and Skylar Li '26

Reference

Bai, Nina. “New AI model predicts disease risk while you sleep.” Stanford Medicine, 6 January 2026, https://med.stanford.edu/news/all-news/2026/01/ai-sleep-disease.html. Accessed 25 January 2026.
