C2 Reading Test – Epistemic Trust in the Age of Misinformation
Free C2 Reading practice on misinformation and trust. Analyze arguments, evaluate evidence, and infer author stance with advanced questions.
Read the passage (~420 words) and choose the best answer (A–D).
Trust is not merely a feeling toward sources; it is an epistemic strategy for navigating claims we cannot check ourselves. In complex domains—vaccines, monetary policy, climate models—most citizens borrow certainty from institutions, experts, or peers who seem reliable. Misinformation exploits this necessity not by out-arguing science but by rerouting trust: it seeds doubt about referees (journals, regulators, courts) and elevates influencers who perform credibility without bearing its costs.
Platforms complicate matters. Their ranking systems optimize for engagement—speed, novelty, emotional intensity—metrics orthogonal to truth. Even when fact-checks exist, they arrive downstream of virality and are framed as opinions competing with other opinions. Transparency dashboards promise progress, but dashboards are performances unless tied to enforceable rules (rate limits, friction for repeat offenders, provenance requirements) and to shared norms about what counts as disclosure versus deflection.
A second failure mode is confusing verification with verification theater. “Screenshots of PDFs” look rigorous, yet they may be cropped, decontextualized, or fabricated. Provenance signals—cryptographic signing, authenticated capture trails, and tamper-evident edits—raise the cost of forgery, but they cannot decide meaning: interpretation requires domain literacy and plural review. Hence the shift from single-point corrections to layered safeguards: prebunking that teaches manipulation patterns; source diversity in recommendation; and community-level accountability where claims must cite checkable, first-order evidence.
None of this resolves the tension between speed and care. Crises demand fast guidance, while knowledge often matures slowly. The practical remedy is trust calibration, not blind deference: ask what the claim would predict tomorrow, what would falsify it, and which independent channels could confirm the same signal. Institutions must reciprocate by exposing failure modes, publishing error bars and retraction pathways, and welcoming adversarial audits. Trust grows less from a flawless track record than from visible repair when things go wrong.
In the end, misinformation thrives where incentives reward attention over accuracy and where audiences are trained to treat skepticism as a personality rather than a method. Healthy ecosystems make good faith cheaper than bad faith: they align revenue with reliability, couple reach to responsibility, and make the easiest story to share also the easiest to check.
The passage’s main claim is that combating misinformation requires
“Rerouting trust” most nearly means
The author’s view of transparency dashboards is that they
The function of prebunking in the text is to
“Complex domains” refers to areas where
What is implied about platform incentives?
Which option best captures “layered safeguards” as used in the passage?
Which title best fits the passage?