The Scientific Foundations of Product Evaluation - Selection Logic

Why good evaluation separates evidence from values, uses operational criteria, and reports uncertainty.

Selection Logic Team · 2026-01-19
#Selection Logic #theoretical foundation #product evaluation #measurement #review methodology #evidence-based

Abstract

Product evaluation is not “one score.” It is a pipeline: define measurable criteria, collect evidence, state value weights, and report uncertainty. Without explicit criteria, evaluations hide assumptions and become persuasion rather than analysis.[^1][^2]


1. Evaluation = measurement + value model

In multi-criteria settings, you need:
- operational definitions (what is measured, how),
- reproducible methods (test protocols),
- explicit weights (what the user values).

This aligns with Selection Logic’s A2 (Conditional subjectivity) and its corollary T1.2: weights are conditional, and every review embeds assumptions.
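The three ingredients above can be sketched as a weighted additive value model in the style of Keeney & Raiffa[^9]. This is a minimal illustration, not a prescribed method: the criteria names, weights, and ranges are all hypothetical, and the linear normalization is just one possible value function.

```python
def score(measurements, weights, ranges):
    """Normalize each criterion to [0, 1], then combine with explicit weights."""
    total = 0.0
    for name, value in measurements.items():
        lo, hi = ranges[name]
        normalized = (value - lo) / (hi - lo)  # linear value function
        total += weights[name] * normalized
    return total

# Hypothetical laptop evaluation: operational definitions + explicit weights.
laptop = {"battery_hours": 12.0, "weight_kg": 1.3}
weights = {"battery_hours": 0.7, "weight_kg": 0.3}
# For weight_kg, lower is better, so the range is reversed (2.5 kg -> 0, 0.9 kg -> 1).
ranges = {"battery_hours": (4.0, 20.0), "weight_kg": (2.5, 0.9)}

print(round(score(laptop, weights, ranges), 3))  # → 0.575
```

Note that the weights are stated in the open: a reader who values portability more can change them and recompute, which is exactly the conditional subjectivity of A2.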


2. Evidence hierarchies (practical)

Different questions require different evidence:
- lab measurements (battery life, throughput),
- long-term reliability data (where available),
- field studies and user panels (usability).
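Whatever the evidence tier, measurements should be reported with uncertainty rather than as a single point. As one sketch, a bootstrap confidence interval can be computed from repeated lab runs; the battery data below is illustrative, not real.

```python
import random

def bootstrap_ci(samples, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of a small sample."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(samples) for _ in samples) / len(samples)
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

battery_runs = [11.8, 12.1, 11.5, 12.4, 11.9, 12.0, 11.7, 12.2]  # hours, hypothetical
lo, hi = bootstrap_ci(battery_runs)
print(f"mean {sum(battery_runs) / len(battery_runs):.2f} h, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reporting "11.95 h, 95% CI [lo, hi]" instead of "12 hours" makes the measurement honest about run-to-run variation.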


3. Standards in English-world contexts

Many domains rely on well-known standards bodies and test methods:
- ISO/IEC for systems and technical properties (domain-dependent)
- ASTM for materials and test methods (domain-dependent)
- NIST guidance for security-relevant claims

Standards are helpful as baselines, but they are not a “universal best”: relevance depends on user needs (A2).


References

  1. Akerlof, G. A. (1970). The market for “lemons”: Quality uncertainty and the market mechanism. Quarterly Journal of Economics, 84(3), 488–500.
  2. Popper, K. R. (1959). The Logic of Scientific Discovery. Routledge. (Original work published 1935)
  3. Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302.
  4. Longino, H. E. (1990). Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton University Press.
  5. Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
  6. Messick, S. (1995). Validity of psychological assessment. American Psychologist, 50(9), 741–749.
  7. Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
  8. International Organization for Standardization. (2015). ISO 9000:2015 Quality management systems — Fundamentals and vocabulary.
  9. Keeney, R. L., & Raiffa, H. (1993). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge University Press.

Further Reading
