The Declaration
In the Tuyuca language, spoken by roughly a thousand people along the Vaupés River in the border region of Colombia and Brazil, every statement must carry an evidential suffix. There are five: visual (the speaker saw it), nonvisual sensory (the speaker heard, smelled, or felt it), apparent (the speaker inferred it from evidence), secondhand (someone told the speaker), and assumed (the speaker believes it on general grounds). The suffixes are not optional emphasis. They are obligatory morphology. A Tuyuca speaker cannot say "it rained" without simultaneously declaring how they know it rained.
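The structural force of obligatory morphology can be modeled as a data type whose evidential field has no default value. The sketch below is illustrative only — the names are invented, and it is not a linguistic formalism — but it captures the constraint: the statement cannot be constructed without its source declaration.

```python
from dataclasses import dataclass
from enum import Enum

class Evidential(Enum):
    VISUAL = "the speaker saw it"
    NONVISUAL = "the speaker heard, smelled, or felt it"
    APPARENT = "the speaker inferred it from evidence"
    SECONDHAND = "someone told the speaker"
    ASSUMED = "the speaker believes it on general grounds"

# The evidential field has no default, so a Statement cannot be
# constructed without one: the analogue of obligatory morphology.
@dataclass(frozen=True)
class Statement:
    content: str
    evidential: Evidential
```

`Statement("it rained", Evidential.SECONDHAND)` constructs; `Statement("it rained")` alone is a type error, just as the bare clause is ungrammatical.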
Tuyuca is at the strict end of a widespread phenomenon. Roughly a quarter of the world's languages — Turkish, Quechua, Tibetan, Bulgarian, among others — have grammatical evidentiality. But Tuyuca's five-category system makes the structural effect clearest.
The usual interpretation is that evidentiality makes communication more reliable. The speaker must show their work. This is partially correct. But the deeper effect is not the addition of information — it is the migration of error. A Tuyuca speaker can still be wrong about the rain. What they cannot do is be wrong about the rain without also being wrong about their basis for believing in the rain. If there is deception, it must be compound: the content and its declared source must be fabricated together. The error does not disappear. It moves from the content to the declaration.
In December 2011, a German software developer submitted a patch to OpenSSL implementing the TLS heartbeat extension. The patch introduced a buffer over-read vulnerability — a failure to validate the length field in heartbeat request messages. The bug was trivial: a missing bounds check. It was also catastrophic. When publicly disclosed in April 2014 as Heartbleed (CVE-2014-0160), it affected an estimated seventeen percent of the internet's secure web servers, potentially exposing private keys, session tokens, and user credentials.
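The flaw pattern fits in a few lines. The real bug lived in OpenSSL's C heartbeat handler, so the Python below only models the logic, with invented names and a fake heap; Python slices clamp rather than over-read, so the toy "heap" stands in for the adjacent memory a C read would expose.

```python
# Toy model of the Heartbleed over-read (illustrative, not OpenSSL's code).
# Pretend the 5-byte request payload sits next to secrets in process memory.
HEAP = b"hello" + b"-----BEGIN RSA PRIVATE KEY----- ..."

def heartbeat_echo(declared_len: int) -> bytes:
    """Vulnerable pattern: trust the attacker-supplied length field and
    echo that many bytes, reading past the actual payload."""
    return HEAP[:declared_len]

def heartbeat_echo_fixed(declared_len: int, received_len: int) -> bytes:
    """Fixed pattern: the single bounds check the 2011 patch omitted."""
    if declared_len > received_len:
        raise ValueError("declared payload length exceeds received data")
    return HEAP[:declared_len]
```

A request declaring 64 bytes against a 5-byte payload leaks the neighboring "key material" in the vulnerable version and is rejected in the fixed one.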
OpenSSL is open-source software. Every line of code is publicly readable. The patch that introduced the vulnerability was submitted through a public process, reviewed by at least one maintainer, and merged into a public repository. The code was visible for twenty-seven months. The vulnerability was not hidden in the code. It was hidden in the space between the code and the reading of the code — in the metadata: who reviewed this patch, how thoroughly they reviewed it, and whether anyone with sufficient expertise examined the bounds-checking logic of the heartbeat implementation.
The mandate works — the content is visible. But the error does not vanish. It migrates from "what does this code do" (which anyone can answer by reading it) to "who has read this code and how carefully" (which no one can answer by reading it). The opacity moves one layer up.
In September 2004, the International Committee of Medical Journal Editors announced that, beginning in 2005, member journals would no longer consider for publication any clinical trial that had not been registered in a public database before enrollment of the first participant. The policy was a response to a specific failure: pharmaceutical companies conducting multiple trials, publishing only the ones with favorable results, and leaving negative results in the file drawer. Robert Rosenthal had named this the file-drawer problem in 1979. For twenty-six years it remained an open secret. The 2005 mandate closed it — not by preventing selective publication, but by making the selective publication visible.

Before pre-registration, a trial that produced negative results could simply not exist in the published record. After pre-registration, the trial exists in the registry whether or not the results are published. The gap between registered and reported becomes itself a measurable signal. Researchers can now count the missing results. They cannot recover the missing data, but they can identify its absence. The error — suppression of inconvenient findings — migrated from invisible (the trial was never public) to visible (the trial is registered but unreported). The content is still missing. The metadata now points to the hole.
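The "measurable signal" is literally computable: given a registry and the published record, the unreported set is a set difference. A minimal sketch, with invented trial IDs (real registries use identifiers like NCT numbers):

```python
# Hypothetical trial identifiers, for illustration only.
registered = {"T001", "T002", "T003", "T004", "T005"}  # pre-registered trials
published = {"T001", "T004"}                           # trials with reported results

unreported = registered - published                    # the visible hole
reporting_rate = len(published & registered) / len(registered)
```

Before the mandate, the three unreported trials would simply be absent from the record; after it, their absence is itself a datum.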
In the American legal system, every item of physical evidence must maintain a documented chain of custody from the moment of collection through storage, transfer, analysis, and courtroom presentation. Every handler signs. Every transfer is logged. Every gap is a potential ground for exclusion.
In the 1995 trial of O.J. Simpson, the prosecution presented DNA evidence that was, by the standards of forensic genetics, overwhelming. Blood found at the crime scene matched the defendant. Blood on a glove found at the defendant's property matched the victims. The statistical weight of the DNA match was on the order of one in several billion. None of this was in serious scientific dispute. But the defense did not need to dispute the science. They disputed the handling. Criminalist Dennis Fung's testimony revealed procedural lapses in evidence collection. A reference vial of the defendant's blood could not be fully accounted for. Detective Mark Fuhrman, who discovered a key piece of evidence, was impeached on other grounds. The chain of custody had gaps, and the gaps were sufficient. The jury acquitted.
When provenance documentation is mandatory, attacking the documentation becomes the strategy for attacking the content. The error migrated from the evidence to the evidence about the evidence.
In each case — grammar, code, clinical trials, courtrooms — a system mandates that sources be declared. And in each case, the mandate succeeds at the layer it addresses. The content becomes transparent. But the error does not disappear. It migrates to the metadata layer — to the declaration itself, to the review process, to the reporting gap, to the handling record. The new opacity is not a failure of the transparency mechanism. It is its structural product. Every system that forces the content into the light specifies exactly where the shadow moves.
The error is conserved. Only its address changes.