There is one key thread that runs across the varied themes at the heart of my research: agency. My research seeks to unearth the factors that can compromise our capacity to question the way things are and to project ourselves into the future. This question often brings me to the intersection between law and ethics.
Sometimes it is to criticise existing legal frameworks. My article on professional responsibility (OJLS), for instance, criticises the courts' delineation of the obligations meant to address lay vulnerability: the persistent focus on knowledge asymmetry misses the extent to which the vulnerability at stake has to do with a patient's, client's, or pupil's ongoing ability to (re)construct their 'sense of self'. A professional's stance can have a considerable impact on my ability not to be defined by my illness, or by an accusation of murder.
Sometimes my interest in agency leads me to put forward novel legal mechanisms. The data trusts framework I developed with Neil Lawrence is meant to complement top-down regulation. As bottom-up empowerment mechanisms, data trusts enable groups to pool the rights they have over their data and to task an intermediary, the data trustee, with leveraging those rights. The data trustee may then be in a position to obtain better terms and conditions from service providers and/or to monitor data sharing agreements.
Sometimes my interest in agency does not intersect with law as such, but rather with the ambient, data-intensive technologies we have become so reliant on. In a series of recent publications, I question the logic underlying the optimisation tools at the heart of these data-intensive technologies. When the foods we eat, the people we meet and the books we read are all streamlined according to the traits and desires inferred from our past behaviour, have we gained a greater degree of agency?
In a distinct but related vein, my most recent work on machine learning interpretability calls for the introduction of what I call 'ensemble contestability features'. This work underlies a cross-disciplinary project that compares concrete ways of implementing contestability features for ML systems deployed in ethically or legally significant contexts.
At other times my interest in agency is (almost) self-contained: while my forthcoming Habitual Ethics? book investigates the non-deliberative underpinnings of ethical agency, a recent publication on Turing and Lovelace looks at the relationship between agency, originality and surprise (probably the most fun I've had writing a paper in a long time).
The cross-disciplinary dimensions of this focus on (pre-)reflective agency, and the role it plays in our ongoing capacity for transformation, were highlighted in a conference I organised on 29th June 2023 on the role played by our pre-reflective intelligence within our ethical lives.