Overview
Build an evidence pipeline: sources, measurements, uncertainty, and replication.
Theoretical basis (zh-aligned)
- T1 Matching (goal is fit, not universal ranking): /en/wiki/theorem-1-matching
- A2 Conditional subjectivity (dimensions vs weights): /en/wiki/axiom-2-conditional-subjectivity
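The two principles above can be illustrated with a toy scoring sketch: the dimension scores are shared and objective, but each user's weights are subjective, so the "best" product depends on who is asking. All product names, scores, and weights below are hypothetical.

```python
# T1 Matching / A2 Conditional subjectivity: same scores, different weights,
# different winner. All data here is made up for illustration.
scores = {
    "ProductA": {"speed": 9, "price": 4, "support": 6},
    "ProductB": {"speed": 5, "price": 9, "support": 7},
}

def fit(product: str, weights: dict[str, float]) -> float:
    """Weighted-sum fit of one product under one user's weights."""
    return sum(weights[d] * scores[product][d] for d in weights)

def best_for(weights: dict[str, float]) -> str:
    return max(scores, key=lambda p: fit(p, weights))

power_user = {"speed": 0.7, "price": 0.1, "support": 0.2}
budget_user = {"speed": 0.1, "price": 0.7, "support": 0.2}

print(best_for(power_user))   # → ProductA
print(best_for(budget_user))  # → ProductB
```

The dimensions are fixed across users; only the weights change, which is exactly the split A2 draws between dimensions and weights.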
Why systematic evaluation is necessary
Limits of purely subjective reviews
- vulnerable to mood and framing,
- hard to replicate or verify,
- incomplete (omits critical dimensions),
- weak for cross-product comparison.[^1]
Value of systematic evaluation
- reduces omissions,
- increases consistency,
- supports comparison,
- enables post-hoc validation and iteration.
The systematic evaluation pipeline (zh-aligned)
Need definition → Dimension selection → Criteria building → Test design → Data collection → Analysis → Reporting → Validation & iteration
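The stages above can be sketched as a minimal data structure plus an analysis step. Every name here (the `Evaluation` class, its fields, the example laptop data) is an illustrative assumption, not a prescribed API; a real pipeline would replace the mean with proper statistics.

```python
# Minimal sketch of the pipeline stages; names and data are illustrative.
from dataclasses import dataclass, field

@dataclass
class Evaluation:
    goal: str                                                    # need definition
    dimensions: list[str]                                        # dimension selection
    criteria: dict[str, str] = field(default_factory=dict)       # criteria building
    data: dict[str, list[float]] = field(default_factory=dict)   # data collection

    def analyze(self) -> dict[str, float]:
        """Analysis: mean per dimension (stand-in for real statistics)."""
        return {d: sum(v) / len(v) for d, v in self.data.items() if v}

ev = Evaluation(goal="choose a laptop", dimensions=["battery", "weight"])
ev.criteria = {"battery": "hours of video playback", "weight": "kg"}
ev.data = {"battery": [9.5, 10.1, 9.8], "weight": [1.3, 1.3, 1.3]}
report = ev.analyze()   # reporting; validation & iteration would re-run on new data
print(report)
```

Test design is implicit here in how `criteria` map each dimension to a measurement method; validation and iteration amount to collecting new data and re-running `analyze`.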
Standardized reporting checklist
- evaluation goal and scope,
- criteria and measurement method,
- raw data availability (when feasible),
- limitations and uncertainty,
- how weights were chosen (stated explicitly or derived via an elicitation tool).
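One way to make the "limitations and uncertainty" item concrete is to report an interval rather than a bare mean. The sketch below uses a percentile bootstrap over a small, hypothetical set of review scores, with only the standard library's `random` module; it is one simple option, not the required method.

```python
# Percentile bootstrap CI for a mean score; data is hypothetical.
import random

def bootstrap_ci(samples: list[float], n_resamples: int = 2000,
                 alpha: float = 0.05, seed: int = 0) -> tuple[float, float]:
    """Resample with replacement, return the (alpha/2, 1-alpha/2) mean percentiles."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(samples, k=len(samples))) / len(samples)
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

scores = [7.0, 8.5, 6.0, 9.0, 7.5]          # hypothetical review scores
low, high = bootstrap_ci(scores)
print(f"mean={sum(scores)/len(scores):.2f}, 95% CI=({low:.2f}, {high:.2f})")
```

Reporting the interval alongside the point estimate lets readers judge how much the small sample limits the conclusion, and fixing the seed supports the replication goal stated in the overview.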
References
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
- Keeney, R. L., & Raiffa, H. (1993). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge University Press.