C2 Reading Test – Linguistic Justice on the Global Internet

Free C2 reading test about language rights online. Evaluate fairness criteria, code-switching dynamics, and platform policy trade-offs.

Questions 1–10

Read the passage (~400 words) and choose the best answer (A–D) for each question.


Linguistic justice online is not only about adding more languages to interfaces; it is about who sets the terms of legibility. Platforms typically optimize for scale, which favors a handful of “pivot” languages that machine translation systems treat as hubs. Content that passes easily through these hubs tends to circulate faster and wider, while material in low-resource languages moves through the network with friction—misread by algorithms, delayed by human review, or throttled by safety systems trained on other linguistic norms.

Proponents of the status quo argue that translation quality and moderation coverage are improving rapidly, and that universal tools—auto-captioning, multilingual search—are inherently equalizing. Yet even well-intentioned tools can encode asymmetries. A toxicity detector calibrated on American English may over-flag reclaimed slurs in Caribbean patois; sentiment models might read code-switching as noise; and dialect spelling that signals identity in North African Arabic can be “normalized” into a safer but blander prose that loses social meaning.

Policy fixes often chase visibility metrics—how many languages are “supported”—while neglecting governance: who decides annotation guidelines, which communities get paid to curate training data, and how appeal channels handle disputes that are partly linguistic, partly political. In high-stakes contexts—public health alerts, elections, crisis mapping—the difference between available translation and accountable translation is not academic. Communities want an audit trail: who translated what, using which model, under what constraints, with which fallback if confidence was low.

There is also a distributional question about discoverability. Recommendation engines reward “engagement fitness,” and languages with dense creator–audience graphs compound their advantage. To counter this, some researchers propose linguistic interoperability: portable subtitles, cross-lingual embeddings tuned for under-represented pairs, and ranking bonuses for content that successfully bridges language clusters without flattening them. The risk, of course, is performative diversity—badges and boosts without sustained investment in local trust, safety staff, and data pipelines.

A just internet will not translate everything into a single neutral voice. It will make friction visible and governable, so that users can route around failures and contest the defaults. Linguistic justice, in other words, is less about perfect fluency than about allocating discretion—who gets to define meaning at scale, and what remedies exist when the system speaks too confidently for those it barely understands.

Question 1

The main claim of the passage is that linguistic justice online primarily concerns

Question 2

The role of “pivot languages” is presented as

Question 3

Which example best illustrates tool-driven asymmetry?

Question 4

The contrast between “available” and “accountable” translation emphasizes the need for

Question 5

In paragraph 3, “governance” most nearly refers to

Question 6

The author’s attitude toward ranking bonuses for cross-lingual content is

Question 7

The phrase “performative diversity” most nearly means

Question 8

According to the passage, code-switching may be penalized because

Question 9

The pronoun “them” in “bridges language clusters without flattening them” refers to

Question 10

Which title best captures the passage?