Public health organizations are already relying on AI to make critical decisions about resource allocation—often without fully examining why they are outsourcing these judgements to AI in the first place. This is significant because it sets the stage for conflating “AI is better than a human” with “AI makes things easier for a human.”
In 1948, before the term “artificial intelligence” even existed, cybernetics pioneer Norbert Wiener anticipated a future in which we would cede decision-making to machines, cautioning:
“[T]he machine [that] can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us.”
A year later, he put it even more starkly:
“[M]achines will do what we ask them to do and not what we ought to ask them to do.”
Now, more than 75 years later, Wiener’s warning remains as relevant as ever. In fact, some justifications for AI-driven decision-making, when examined closely, might better support the opposite conclusion: that AI shouldn’t be making these decisions at all.
Consider the Homelessness Prevention Unit, a collaboration between UCLA’s California Policy Lab and the LA County Departments of Health Services and Mental Health. This pilot program uses machine learning to identify individuals and households across LA County at risk of imminent homelessness—notably, not from a pre-selected pool. The AI draws from an unprecedented range of data points, including data not traditionally considered in social services work, to “predict” who is most at risk. After the AI generates a list of names, aid workers reach out with just-in-time financial and other resources. Rather than addressing the structural causes of homelessness or assisting those already unhoused, the program intervenes at a critical downstream moment—seeking to prevent people from becoming homeless just before they otherwise would. Think Minority Report, but for public health, not policing.
The program’s rationale for using AI seemingly rests on two claims. The first is straightforward: AI enhances analytical breadth and produces more precise risk assessments. A 2024 CalMatters article captures this position succinctly:
“Is a computer really better at guessing who will become homeless than human social workers trained in this work? [The executive director of the California Policy Lab at UCLA] says yes—3.5 times better, to be exact.”
“Better” here refers to the AI’s capacity, in testing, to match outcomes from the recent, known past. But alongside this argument for precision, a second, more implicit claim emerges: AI makes more rational, less biased choices. For instance, the program asserts that AI better prevents racial bias from influencing decisions about resource allocation. Yet embedded in this claim that AI is less biased lies a subtler implication: AI offers a buffer from impossible choices. The same article notes that human social workers, due to their direct relationships with at-risk individuals, are prone to bias:
“The problem with humans, [the UCLA Policy Lab executive director] said, is that they’re biased toward the people they know. ‘It’s just human nature to want to help the people that you’re in contact with,’ she said. ‘They all seem housing-unstable and at high risk. You want to help those individuals or those families in front of you. But not all of them are going to become homeless and be on the street or use shelter if they don’t get assistance.’”
In other words, the very impulse to help is reframed as a source of imprecision that AI is capable of overcoming on our behalf.
Both of these claims—AI’s precision and its ability to depersonalize difficult choices—help explain the utility of predictive AI in homelessness prevention. But utility is not the same as ethical justification. My concern is that these two arguments are already starting to converge—that AI’s capacity to buffer us from decision-making becomes mistaken for evidence of its superior judgment.
The project’s lack of transparency around how it tests its AI beyond initial training is concerning, as is the potential for false negatives: those whom the AI overlooks, but who still go on to become unhoused. But the most pressing ethical problem in this arrangement may not lie in the looming possibility of AI “getting it wrong” in any given decision about who will receive support, but rather, in the broader consequences of granting unchecked credibility to AI prediction itself. The growing injustices of AI prediction in contexts such as policing and insurance do not vanish here simply because the aim appears comparatively benevolent. And while critiques of data-driven prediction are by no means new, what may be distinctive about AI’s role in this process is the quasi-personhood we ascribe to it—however unmerited. This lends its findings an unprecedented aura of certainty, reinforcing the idea that AI is not merely assisting in decision-making, but rightfully making choices on our behalf. We are primed for AI paternalism.
Even if this specific instance of AI is not intended to replace human decision-making, the very AI attributes that the Homelessness Prevention Unit praises form the blueprint for precisely that prospect. As noted above (and shown in its 2024 report), the project highlights its AI’s capacity to outperform humans in mitigating racial bias in resource allocation decisions. Yet relying on AI to correct for racial bias, rather than empowering human social workers to better recognize and address it themselves, sets an inadvisable precedent. Fairness should be upheld through AI use, not outsourced to it—and the use of AI should not be justified by this outsourcing.
The stories of the individuals and families assisted by this pilot program are, without reservation, compelling. Yet the program’s emphasis on these accounts of sudden, almost miraculous intervention casts a subtle shadow: it implies that part of the AI’s success lies in ensuring resources aren’t wasted on the “wrong” people. Again, the experiment’s objective isn’t to identify all people experiencing significant precarity, but to choose a select few to receive assistance—a delineation that isn’t necessarily objectionable in and of itself, but one that may produce broader effects.
Namely, this experiment in no way challenges the notion that homelessness is an inevitability. By treating it as a problem to be forecasted and prevented at the individual household level, the experiment leaves no opening to contemplate or address homelessness as a product of policy decisions. AI prediction, accordingly, doesn’t simply mirror existing policy constraints; it can reinforce our passive acceptance of them and shift the goal from actively shaping the future to merely reacting to it. This is the disquieting foundation on which the seeming reasonableness of relying on AI to decide who receives help—and by extension, who doesn’t—now rests.
To be clear, I’m not challenging the intentions of those running the pilot project; I have no doubt that they want to help people—and they clearly have. But allowing AI to decide who receives support bolsters a sense of inevitability—that there will never be enough resources for everyone to move beyond precarity—by coating it in technologically mediated certainty. AI’s role reinforces the idea that scarcity itself is a foregone conclusion, something to be managed rather than challenged.
When we ask AI to decide, we, too, are making choices. The blending of decision-making “buffering” (ease) with technical accuracy reshapes the role of human social workers: instead of directing AI, they become implementers of its determinations. The affordance of “ease” should not be discounted; alleviating human decision-making burdens is—and will remain—a key driver of AI adoption, and merits further attention. Anthropologist Natasha Dow Schüll’s ethnographic research on self-tracking health wearables is instructive, as it highlights a deeply human impulse: the desire to offload some of our decisions to machines. AI offers a similar promise. It may be tempting to let AI help in this way—and in some settings, it may even become prudent to do so.
But at any given moment, we must remain aware that that’s what we’re doing: asking AI to make a subjective, gray-area ethical choice on our behalf. Otherwise, by treating subjective decisions as unimpeachably objective, we risk normalizing the uncritical acceptance of AI determinations. Worse still, we risk losing our own capacity to imagine or pursue alternatives to those determinations—the very possibility of rethinking what we ought to be asking.
Contemporary public health professionals may be better positioned than most to recognize how easily a seemingly objective resource-allocation decision can conceal what is, ultimately, a subjective choice. Yet they may also be particularly vulnerable to wanting AI to assume these decisions: working within systems that treat scarcity as inevitable means facing an endless onslaught of corollary choices—in which giving help to one demands withholding it from another. This is a problem that precedes and exceeds AI. As public health continues to pivot towards addressing upstream determinants rather than downstream mitigation, the goal may not necessarily be to abstain from AI altogether, but to ensure that its use doesn’t deepen the very constraints we are trying to move beyond.
Authored by Valerie Black
Valerie Black is a medical anthropologist (PhD, UC Berkeley) and disability studies scholar whose work focuses on the “human side” of how we make, use, and relate to AI and neurotechnology. She is currently a postdoctoral scholar at the UCSF Decision Lab. More at: www.vblackphd.com


