C2 Reading Test – Algorithmic Personhood & Legal Responsibility
Advanced, free C2 practice on AI personhood. Test your command of definition analysis, necessary/sufficient conditions, and precedent-based reasoning.
Read the passage and decide whether each statement is True (T), False (F), or Not Given (NG).
Debates about “algorithmic personhood” ask whether certain AI systems should carry legal status independent of their owners or operators. Proponents argue that limited personhood could simplify liability: if an autonomous trading agent or delivery drone breaches a duty, the entity itself could hold insurance, enter contracts, and face sanctions without proving which human pressed which button. They point to corporate personhood as precedent—an artificial subject that concentrates responsibility rather than diffusing it across thousands of shareholders.
Skeptics counter that personhood is not a filing convenience but a moral threshold. Corporations, they note, ultimately aggregate human intentions; algorithms optimize objective functions that may be opaque even to their designers. Granting personhood to systems that lack consciousness, interests, or the capacity for remorse risks laundering responsibility; actors could hide behind a shell that “decides” while no one answers for the incentives that trained it. On this view, the real task is not to crown machines as persons but to trace accountability upstream: who chose the data, designed the loss function, set deployment constraints, and monitored feedback loops?
Practical lawyering often pursues a middle path. Rather than personhood, jurisdictions experiment with duty-of-care frameworks and strict-liability buckets: the more autonomous and socially hazardous the system, the higher the mandated insurance, audit frequency, and documentation burden. Contracting parties may agree to “kill switch” terms, incident reporting timelines, and escrowed model versions that regulators can inspect after a dispute. These tools treat AI as equipment that requires governance—like elevators or pharmaceuticals—without pretending it holds intrinsic rights.
Yet gray zones persist. When a model retrains continuously on live data, who owns the state that produced the harmful action: the vendor, the client who supplied the stream, or the integrator who tuned the pipeline? When a system’s output persuades rather than commands—say, a recommender that nudges risky trades—how far does causal responsibility stretch? Legal systems, built around discrete acts and identifiable agents, strain under probabilistic causation and emergent behavior. The risk is either over-personifying machines or under-personifying the institutions that profit from them.
A principled test would ask less “Is the system a person?” and more “Which human or human-governed entity had the power and duty to prevent this harm, ex ante?” If that locus is unclear, the problem is not metaphysics but missing governance: inadequate logging, unenforced audits, and incentives that reward deployment speed over red-teaming. Until those scaffolds are routine, algorithmic personhood may be a seductive distraction from the slower work of building traceable, enforceable responsibility.
1. Supporters believe limited algorithmic personhood could streamline liability by allowing systems to hold insurance and contracts.
2. The passage claims corporate personhood and algorithmic personhood are identical in moral terms.
3. Skeptics argue that granting personhood to non-conscious systems could obscure human accountability.
4. According to the text, strict-liability regimes may scale with a system’s autonomy and hazard level.
5. The author states that all jurisdictions already require escrowed model versions for regulators.
6. Continuous retraining raises questions about who owns the model state that led to harm.
7. The passage provides a numerical formula for calculating audit frequency based on risk.
8. Recommender systems that nudge decisions complicate causal responsibility, according to the passage.
9. The author endorses full legal personhood for advanced AI as the best near-term solution.
10. The concluding view is that clearer governance (logging, audits, incentives) is more urgent than metaphysical debates about personhood.