Overview
How can you tell whether a review is trustworthy? This guide applies the Selection Logic framework to systematically assess conflicts of interest, source credibility, and the verifiability of data. Every review embeds value assumptions (T1.2 Corollary); the goal is not "absolute truth" but to spend cognitive budget on sources more likely to reflect real-world use.
Mapping to theory: T1 Matching Theorem reminds us that review conclusions often assume the reviewer's scenario; M4 Comparative Analysis requires cross-checking multiple sources rather than relying on one.
Source credibility
Different sources have different incentive structures. Start by distinguishing them: independent media (editorial–ad separation), KOLs (disclosure of partnerships), brand-owned content, and user-generated content (selection bias, but no direct commercial payoff).
| Source type | Typical incentives | Credibility checks |
|---|---|---|
| Independent media / test lab | subscription, ads, brand deals | sponsorship disclosure, consistent methodology |
| KOL / creator | ads, samples, affiliate links | “sponsored,” “partner,” and “affiliate” disclosures |
| Brand site / store | sales conversion | use for specs only, not neutral evidence |
| User reviews | no direct payoff; occasional fake reviews | read negative/neutral, timing, verifiability |
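The checks in the table above can be sketched as a small scoring function. This is an illustrative sketch, not part of the Selection Logic framework itself; the type names, fields, and numeric priors are all placeholder assumptions.

```python
from dataclasses import dataclass

@dataclass
class Source:
    kind: str                            # "independent", "kol", "brand", or "user"
    discloses_sponsorship: bool = False  # explicit sponsor/affiliate disclosure
    consistent_methodology: bool = False # e.g. a published, repeatable test protocol

def base_trust(src: Source) -> float:
    """Rough trust prior per source type; the numbers are arbitrary placeholders."""
    prior = {"independent": 0.7, "kol": 0.5, "brand": 0.2, "user": 0.4}[src.kind]
    # Undisclosed conflicts cut trust sharply (see "Conflict of interest" below).
    if src.kind in ("independent", "kol") and not src.discloses_sponsorship:
        prior -= 0.3
    # A consistent methodology is a positive signal for independent media / labs.
    if src.kind == "independent" and src.consistent_methodology:
        prior += 0.1
    return max(0.0, min(1.0, prior))
```

The exact weights do not matter; the point is that disclosure and methodology are checkable properties, while the source's raw popularity is not an input at all.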
Conflict of interest
Disclosed sponsorship does not make a review fake, but it raises the bar for verification. Undisclosed samples, affiliate links, or brand deals significantly reduce trust. See Authority bias and Social proof: a big name or “everyone loves it” does not replace a conflict check.
Cross-checking data
For key claims (performance, battery, image quality), use an M2-style approach: verify with at least two independent sources. If one review contradicts most verifiable data without explanation, downweight it or treat the claim as uncertain.
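The two-independent-sources rule can be sketched as follows. The function name, the tolerance value, and the input shape are all illustrative assumptions; the only idea taken from the text is that a claim counts as verified when at least two independent measurements agree.

```python
def cross_check(claim_values: dict[str, float], tolerance: float = 0.1) -> str:
    """claim_values maps source name -> reported value (e.g. battery hours).

    Returns "verified" if at least two sources agree within a relative
    tolerance, otherwise "uncertain" (downweight per the M2-style rule).
    """
    vals = list(claim_values.values())
    for i in range(len(vals)):
        for j in range(i + 1, len(vals)):
            scale = max(abs(vals[i]), abs(vals[j]), 1e-9)
            if abs(vals[i] - vals[j]) <= tolerance * scale:
                return "verified"
    return "uncertain"
```

For example, two labs reporting 10.0 and 10.3 hours of battery life agree within 10% and corroborate each other, while a lone outlier at 20 hours leaves the claim uncertain.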
Common manipulation
Watch for selective presentation, comparison anchoring, vague framing (“best in class” left undefined), and review farming or selective moderation. Recognizing these patterns helps you avoid Anchoring and Confirmation bias.
Building a personal filter
Per T2 Cognitive Budget: for high-stakes decisions, fix 2–3 sources with clear disclosure policies and habitually check conflicts and cross-checkability; for low-stakes decisions, accept “good enough” information and avoid endless verification.
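The budget rule above can be made concrete as a tiny planner. This is a sketch under stated assumptions: the function name, the stakes labels, and the cutoff of three sources for high-stakes decisions are illustrative, taken from the 2–3 source guideline in the text.

```python
def verification_plan(stakes: str, trusted_sources: list[str]) -> list[str]:
    """Pick which pre-vetted sources to consult for a decision.

    High-stakes: cross-check up to 3 fixed sources with clear disclosure
    policies. Low-stakes: the first "good enough" source suffices.
    """
    if stakes not in ("high", "low"):
        raise ValueError("stakes must be 'high' or 'low'")
    n = 3 if stakes == "high" else 1
    return trusted_sources[:n]
```

The design choice is to fix the source list in advance, so that per-decision effort goes into cross-checking rather than into re-evaluating sources every time.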