C1 Writing Test - Technology & AI Regulation
Practise C1 Writing on AI regulation with gap-fills and MCQs. Build vocabulary (accountability, safeguards, data governance) and craft balanced, evidence-based arguments.
Fill each blank with one word or phrase from the Word Bank. Use each item once only.
Word bank (choose 8)
audit • impact assessments • accountability • robustness • sandboxing • redress mechanisms • transparency • traceability
Policymakers increasingly favour a risk-based approach in which obligations scale with system impact. For high-risk uses, vendors must document [1], ensure model [2], and maintain end-to-end [3] so incidents can be investigated. Regulators also promote controlled [4] to test innovations under supervision, provided firms accept strict [5] if harms occur. Beyond publishing reports, platforms need accessible [6] for users to contest decisions, plus independent [7] to verify claims about safety and fairness. Ultimately, meaningful [8] - not slogans - determines whether trust is earned.
Fill each blank with one word or phrase from the Word Bank. Use each item once only.
Word bank (choose 8)
accountability • data governance • oversight • consent • audit • safeguards • algorithmic bias • transparency
As AI systems scale across hiring, lending, and public services, regulators argue that real [1] requires independent [2] and clear rules to detect and remedy [3]. Yet firms still treat disclosure as a branding exercise rather than meaningful [4], leaving users unsure how their data is processed or how decisions can be challenged. Without informed [5] and robust [6] standards, personal information can be repurposed in ways that entrench inequality. To rebuild trust, authorities should mandate periodic [7] and publish risk summaries, while companies embed technical and organisational [8] from design to deployment.