Promoting Fairness in Mental Health Prediction Apps
This project seeks to identify and address disparities in automated mobile mental health prediction. Data collected from mobile and ubiquitous devices can be used to infer an individual's general mental health state, and these algorithmic predictions often achieve high accuracy. Although such efforts hold great promise for developing and delivering health interventions, they may also be inequitable, reproducing or magnifying existing disparities in healthcare. The group's principal goals are to characterize the problem of algorithmic fairness in mental health prediction through an audit of mental health-related algorithms; to develop a framework for defining and understanding fairness by conducting focus groups and interviews with key stakeholders; and to propose a method for improving the fairness of mental health algorithms based on these findings.

PI Costello's 2020 article, "Predictive ads are not doctors: Mental health tracking and technology companies," was foundational to the current research. Costello and co-author Diana Floegel found that how people with mental health conditions feel about such technologies was an underexplored area of research. Their exploratory interview study with 12 participants, all of whom had been diagnosed with mental health conditions, began to address this gap by focusing on how participants perceived tech companies' involvement with mental health apps and mental health digital phenotyping. Although participants were comfortable interacting with mood-tracking apps, they were wary of digital phenotyping for mental health diagnostics. They raised concerns about profit motives, expressed distrust of technology companies, and recommended regulations to keep these companies in check.