Is predictive AI a risk?

Carissa Veliz’s article for The Economist presents a compelling and timely argument about the expanding role of predictive AI, particularly in shaping decisions about human lives. The article’s central premise, that prediction is not a neutral, technical exercise but one with ethical and even political consequences, is persuasive. However, whilst the concerns raised are valid, I feel it is important to approach the conclusions with a degree of caution and to recognise that the long-term implications of predictive technologies are still unfolding.

The piece rightly challenges the assumption that predictive algorithms are merely tools of efficiency. By highlighting how such systems are embedded in decisions about employment, justice, finance and healthcare, it underscores that these technologies actively structure opportunity and constraint. Its point that predictions can become self-fulfilling is especially powerful: when individuals are denied credit, employment or insurance on the basis of algorithmic forecasts, those forecasts may end up bringing about the very outcomes they predict. With this in mind, the comparison to historical oracles is apt, as both shape behaviour under the guise of foresight.

Furthermore, the critique of opacity is convincing. The black-box nature of many machine-learning systems raises legitimate concerns about accountability and contestability. If individuals cannot understand or challenge decisions made about them, the risk that fairness and due process will be undermined increases significantly.

However, Ms Veliz, in my view, leans towards a somewhat deterministic view of predictive AI’s negative effects. While bias, opacity and self-fulfilling prophecies are genuine risks, they are not inevitable outcomes. Work is ongoing to develop more transparent, interpretable and accountable AI systems, alongside regulatory frameworks aimed at mitigating precisely these harms. Moreover, prediction itself is not inherently problematic; as the article acknowledges at its outset, it has always been central to human decision-making. The question, then, is not whether prediction should exist, but how it should be governed and applied.

The suggestion that certain forms of prediction might need to be restricted is thought-provoking, particularly in sensitive domains such as criminal justice or insurance. Yet implementing such limits raises complex practical questions. Who decides which predictions are permissible? On what basis? And how do we balance innovation with protection against harm? These are issues that require careful deliberation rather than definitive answers at this stage.

Ultimately, the article succeeds in prompting necessary reflection on the ethical dimensions of predictive AI. Its warning against uncritical acceptance of algorithmic outputs is well taken. Nevertheless, it is equally important to avoid premature conclusions about the trajectory of these technologies. Predictive AI is still evolving, as are the societal norms and regulations surrounding it. A measured approach—one that acknowledges both the risks and the potential benefits, while remaining attentive to emerging evidence—seems the most prudent path forward.
