C1 Reading Test – Algorithmic Governance & Democratic Accountability
Advanced reading on algorithmic decision-making and democracy. True/False/Not Given practice testing main ideas, inference, and author stance.
Read the passage and decide if each statement is True (T), False (F), or Not Given (NG).
When public agencies adopt algorithms to allocate inspections, score welfare fraud risk, or prioritize police patrols, they move decisions from desks to models—but not outside politics. Algorithmic governance can increase consistency and speed, yet it also relocates discretion into places citizens rarely see: training data, feature choices, and thresholds set during procurement. A system may be “neutral” in code while reproducing historical patterns in practice, because the past it learned from was uneven.
Accountability, in democratic terms, requires that affected people can contest outcomes and that elected bodies can oversee the tools they authorize. Transparency helps but is not a cure-all. Source code releases may say little if the dataset is secret; publishing performance metrics without sub-group breakdowns can hide disparities. Some agencies now require impact assessments that document purpose, data lineage, error types, and fallback procedures, alongside a public register of deployed models. Others add meaningful appeal rights: when a benefit is denied, a human reviewer must explain which inputs mattered and how to correct them.
Risk is not uniform. A traffic-light optimizer that times signals carries different stakes than an algorithm assigning child-protection investigations. Proportional oversight—tiered by impact—avoids drowning low-risk projects in paperwork while demanding rigorous audits for high-stakes use. Still, audits face their own limits: vendors may claim trade secrets, logs may be incomplete, and models can drift as populations change. Without ongoing monitoring and the budget to act on findings, audits become theater.
Procurement shapes outcomes long before deployment. Contracts can mandate access for auditors, prohibit unverifiable “black-box” claims, and require sandbox trials with independent baselines. Public participation also matters. Community hearings surface harms that dashboards miss—like how an “efficiency” fix reroutes noise into one neighborhood. The democratic question is therefore not simply “Does the model work?” but “For whom, under what conditions, and with what remedy if it fails?”
Ultimately, algorithmic governance is a constitutional design problem in miniature. Tools can widen or narrow civic power depending on how discretion, explanation, and recourse are allocated. The test of legitimacy is practical: when a model gets it wrong, can people understand why, get it fixed, and hold someone answerable—not just somewhere, but within the institutions they already vote to steer?
1. Algorithms can reproduce historical inequalities if trained on biased past data.
2. Publishing source code always guarantees full transparency about a system’s fairness.
3. The passage states that performance metrics should include sub-group breakdowns to reveal disparities.
4. All government algorithms should receive the same level of oversight regardless of risk.
5. Audits can be limited by claims of trade secrets and model drift.
6. The text claims community hearings are unnecessary because dashboards capture all harms.
7. Procurement contracts can require auditor access and sandbox trials before deployment.
8. According to the passage, traffic-light optimization and child-protection screening are treated as equally high-stakes.
9. The author provides a detailed legal definition of “meaningful appeal rights” used worldwide.
10. Legitimacy, in the author’s view, depends on whether people can understand errors, obtain remedies, and hold someone accountable.