TruthOrBluff publishes encyclopedia-style explanations of every topic on the site. This page describes how that content is written, how it is fact-checked, and what the "Reviewed by N AI fact-checkers" badge on each article does and does not mean.
How content is written
Each topic has up to four articles, one per reading level (Rookie, ages 8+; Curious, 12+; Sharp, 16+; Expert, 18+). Articles are drafted by AI under a strict editorial style guide that mandates US English, encyclopedic tone, and citation of primary sources in the article frontmatter. Every numerical claim, named law or limit, attribution, and date is meant to be verifiable against the cited sources.
What the badge means
When you see a badge like "Reviewed by 2 independent AI fact-checkers · 16 confirmed · 1 disputed · 0 uncertain across 17 claims" at the top of an article, here is what each piece means:
- Reviewer count. The number of distinct AI models that have reviewed at least one claim in this article. A model run from a different terminal harness (Claude Code versus Codex, for example) counts as a different reviewer because tool quality and behavior differ between harnesses.
- Claims. A claim is one verbatim sentence from the article body that contains at least one specific factual assertion: a year, a quantity, a named law or theorem, or an attribution to a person or institution. The total count is automatic; the script flags every sentence that triggers any of these patterns.
- Confirmed. The reviewer found at least one authoritative primary source on the live web that supports the specific assertion in the sentence.
- Disputed. The reviewer found one or more sources that contradict the sentence, or that disagree with each other in a way the sentence glosses over.
- Uncertain. The reviewer could not find a primary source on the live web that confirms the specific assertion. The claim is not necessarily wrong; it is unverified.
- Last reviewed. The most recent date on which any reviewer wrote or updated a verdict for this article.
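The claim-flagging step described above can be sketched as a handful of sentence patterns. The regular expressions below are illustrative assumptions for the four triggers (year, quantity, named law or limit, attribution), not the site's actual script.

```python
import re

# Illustrative patterns for the four claim triggers; these regexes are
# assumptions for the sketch, not TruthOrBluff's real flagging rules.
CLAIM_PATTERNS = [
    re.compile(r"\b(1[0-9]{3}|20[0-9]{2})\b"),                        # a year
    re.compile(r"\b\d+(\.\d+)?\s*(kg|km|light-years|percent|%)"),     # a quantity
    re.compile(r"\b(law|theorem|limit|constant)\b", re.I),            # a named law/limit
    re.compile(r"\b(according to|discovered by|measured by)\b", re.I),  # an attribution
]

def flag_claims(sentences):
    """Return the sentences that contain at least one factual trigger."""
    return [s for s in sentences if any(p.search(s) for p in CLAIM_PATTERNS)]
```

A sentence that matches any one pattern counts as a claim, which is why the total claim count can be produced automatically.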
How a reviewer works
A reviewer is given the article's verbatim claims, the source URLs the article cites, and the rule that every verdict must reference an authoritative primary source URL on the live web. Reviewers must use live web search; they are forbidden from confirming a claim from training data alone. Preferred source order:
- Peer-reviewed paper or pre-print on arXiv or a journal site.
- Official institution page (NASA, ESA, CERN, NIST, NIH, USGS, an observatory, a university physics department).
- Reference encyclopedia entry whose own cited primary references the reviewer has actually opened (Wikipedia is acceptable only when its references check out).
- Britannica.
- Museum or government statistical agency.
If a reviewer cannot find a primary source, the verdict is "uncertain" rather than "confirmed". The methodology errs on the side of caution.
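The cautious verdict rule and the source preference order above can be modeled as a small decision function. The tier names below are invented labels for the sketch; the actual reviewer prompt may encode this differently.

```python
# Hypothetical source tiers mirroring the preference list above,
# highest preference first. Labels are invented for this sketch.
SOURCE_TIERS = [
    "peer_reviewed",          # journal article or arXiv pre-print
    "official_institution",   # NASA, ESA, CERN, NIST, NIH, USGS, ...
    "encyclopedia_checked",   # Wikipedia with its primary references opened
    "britannica",
    "museum_or_gov_stats",
]

def best_source(found_sources):
    """Return the most-preferred source type found, or None."""
    for tier in SOURCE_TIERS:
        if tier in found_sources:
            return tier
    return None

def verdict(supporting_sources, contradicting_sources):
    """Apply the cautious rule: no primary source means 'uncertain'."""
    if contradicting_sources:
        return "disputed"
    if supporting_sources:
        return "confirmed"
    return "uncertain"
```

The key property is the default: an empty supporting-source list never yields "confirmed".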
Reviewer identification
Each reviewer self-identifies as a stable lowercase string combining the harness name and the model name, for example claude-code-opus-4-7 or codex-gpt-5. Per-claim verdicts include the reviewer ID, the date, the verdict, the source URL, and a one-sentence note. The full per-claim review record is stored alongside each article in the repository and can be inspected directly.
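A per-claim record with those fields might look like the following; the field names come from the description above, but the exact schema, the URL, and the note are invented examples.

```python
# Hypothetical shape of one per-claim verdict record; the field names
# follow the text above, the values are invented placeholders.
record = {
    "reviewer": "claude-code-opus-4-7",  # harness name + model name, lowercase
    "date": "2025-05-01",
    "verdict": "confirmed",              # confirmed | disputed | uncertain
    "source_url": "https://example.org/primary-source",  # placeholder URL
    "note": "Mass estimate matches the cited survey paper.",
}
```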
Honest limits
"Reviewed by N independent AI fact-checkers" raises trust more than it raises actual accuracy. AI models share substantial training data and can confidently agree on the same wrong fact. Real independence comes from the reviewer's requirement to cite a primary source URL on the live web, not from the reviewer's reasoning. Specifically:
- A high reviewer count does not guarantee correctness. It does mean that several independent agents looked, found primary sources, and that the badge stands behind those findings.
- A "confirmed" verdict means a primary source supports the specific assertion, not that the assertion has been peer-reviewed in the academic sense.
- Frontier scientific topics (recent supernova mass measurements, current limits on physical constants, Population III star properties) are areas where even authoritative sources disagree. Articles in those areas may carry "disputed" verdicts that reflect genuine ongoing scientific debate rather than article errors.
- Human review by domain experts is not part of the current process. Adding it is on the roadmap for highest-stakes topics.
Editorial standards
Beyond fact-checking, every article must pass a written-content audit covering: no AI-tell phrasing (such as em dashes or signature filler words), US English spelling and punctuation, US-customary units with metric in parentheses for everyday measurements, and an explicit ban on a curated list of overused AI phrases. The audit script that enforces these rules is run before any article is committed.
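A minimal sketch of such an audit is below. The banned-phrase list and the individual checks are illustrative assumptions; the real script's rules are not published here.

```python
# Illustrative written-content audit; the banned list and checks are
# assumptions for this sketch, not the site's actual rules.
BANNED_PHRASES = ["delve into", "tapestry", "in today's fast-paced world"]

def audit(text):
    """Return a list of audit failures for one article body."""
    failures = []
    if "\u2014" in text:                      # em dash counts as an AI tell
        failures.append("em dash found")
    for phrase in BANNED_PHRASES:
        if phrase in text.lower():
            failures.append(f"banned phrase: {phrase}")
    if "colour" in text or "metres" in text:  # crude non-US spelling check
        failures.append("non-US spelling")
    return failures
```

An article is committed only when the returned list is empty.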
Reporting an error
If you find a wrong fact in an article, please report it through the feedback link in the site footer. Errors are corrected promptly, and the affected article's review record is invalidated for the changed sentences so reviewers will re-verify on the next pass.
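Because each claim is a verbatim sentence, invalidation can be as simple as dropping any verdict whose sentence no longer appears word-for-word in the corrected article. The function below is a sketch of that idea, not the repository's actual tooling.

```python
# Sketch of review-record invalidation after a correction: keep only
# verdicts whose claim sentence still appears verbatim in the article.
def invalidate_changed(records, new_sentences):
    """Return the records whose claim text survived the edit unchanged."""
    current = set(new_sentences)
    return [r for r in records if r["claim"] in current]
```

Dropped claims show up as unreviewed, so they are picked up again on the next fact-checking pass.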