Introduction
The metaphor that a house without a foundation will soon crumble aptly underscores the indispensable relationship between research and evaluation. To assert that evaluation can exist independently of research is to deny its epistemological roots. Gary Miron, Ph.D., of Western Michigan University, reinforces this connection when he notes that evaluators must “research to get their facts.” Fitzpatrick (1997) similarly recognized that evaluation emerged directly from social science research. Consequently, disentangling evaluation from research is not merely artificial but risks obscuring the very methodological and philosophical underpinnings upon which evaluation rests. While some scholars have advocated for a distinct identity for evaluation, often elevating it above research, I contend that evaluation remains subordinate to research. It is an applied manifestation of research logic, grounded in its methods but oriented toward decision making and contextual judgment. Thus, this article interrogates the perceived differences articulated by Fitzpatrick, Sanders, and Worthen (2010), while maintaining that evaluation derives its legitimacy from research rather than existing as an autonomous discipline.
The Differences Between Evaluation and Research
The debate surrounding the independence of evaluation from research is not incidental; it mirrors the dynamics of disciplinary emancipation. Advocates of evaluation’s distinctiveness—such as Fitzpatrick and colleagues—have often framed research as reductionist in order to highlight evaluation’s contextual richness. Yet, a closer inspection reveals that these differences are overstated and often mischaracterized.
Purpose. Evaluation has been described as decision-oriented, whereas research is presented as conclusion-seeking. This dichotomy is flawed. Evaluation, rather than producing decisions, generates information that complements pre-existing decision structures. For example, in project management or military recruitment, evaluative criteria are predetermined; the evaluation merely verifies conformity rather than shaping the decision ex nihilo. Conversely, research, especially in its descriptive and exploratory forms, often provides far more granular specificity than evaluation, contradicting the claim that evaluation is uniquely suited to concrete description.
Agenda Setting. Fitzpatrick et al. suggest that evaluators respond to stakeholder-driven questions, while researchers self-generate hypotheses. This view collapses under the broader context of research praxis. In fields such as technology management and applied sciences, research questions are often co-constructed with stakeholders, given that the ultimate utility of findings depends on public or client relevance. Hence, research cannot be confined to a purely autonomous intellectual exercise, as some evaluation enthusiasts imply.
Generalizability. The critique that evaluation is context-bound while research generalizes is a partial truth at best. Research generalization is contingent on design: statistical research extrapolates from samples, while parametric studies encompass entire populations, obviating the need for generalization. Evaluation often relies upon these very research traditions for legitimacy. Moreover, evaluators rarely generate facts independently but validate their judgments against empirical findings derived from research.
Criteria and Standards. Fitzpatrick et al. contrast internal/external validity in research with accuracy, utility, and propriety in evaluation. This comparison is reductionist. Reliability and validity remain the fundamental hallmarks of research quality (Middleton, 2019), but these encompass and intersect with evaluative concerns such as accuracy and utility. Ethical and legal considerations (propriety), far from being unique to evaluation, are rigorously embedded in research through institutional review boards across universities and health systems. Thus, the distinction collapses upon scrutiny, revealing more overlap than separation.
Similarities Between Evaluation and Research
The persistent attempts to distinguish evaluation from research inadvertently reaffirm its genealogical dependence on research. Both domains contribute to knowledge production, albeit through different emphases: research toward explanation and theory-building, evaluation toward application and accountability. Nevertheless, evaluators consistently borrow methodologies, both quantitative and qualitative, from research traditions. To deny this lineage is to sever evaluation from its methodological lifeblood. The maxim that “a river that forgets its source will surely run dry” encapsulates the danger of divorcing evaluation from research.
Reconceptualizing Measurement, Evaluation, and Research
A more integrative understanding arises when measurement, evaluation, and research are positioned as hierarchical processes within knowledge production. Measurement captures the dimensions of a phenomenon, providing the raw data for interpretation. Evaluation subjects these measurements to standards of merit and worth, contextualizing outcomes within normative frameworks. Research synthesizes both processes into a systematic inquiry designed to establish reliability, validity, and generalizability. In this sense, evaluation is not co-equal to research but embedded within its larger epistemic framework.
Toward a More Powerful Synthesis
What elevates this debate beyond semantic distinctions is the recognition that evaluation without research risks devolving into opinion, while research without evaluative grounding risks becoming abstract and detached from lived realities. Michael Scriven, a prominent evaluation theorist, emphasized that evaluation is fundamentally a transdisciplinary practice whose legitimacy derives from applying rigorous research methods to real-world problems. In contemporary practice, evidence-based policymaking, randomized controlled trials, and mixed-methods designs all demonstrate how evaluation remains tethered to research paradigms. The future of the field lies not in polarization but in synthesis: a recognition that evaluation is the pragmatic arm of research, ensuring that knowledge generation translates into actionable wisdom. This perspective not only preserves the epistemic integrity of both fields but also maximizes their societal relevance.
Conclusion
The insistence on differentiating evaluation from research may serve institutional identity politics, but it does little to advance epistemological clarity. Research and evaluation are symbiotic, yet asymmetrical, with evaluation inheriting its methodological legitimacy from research. Efforts to elevate evaluation above research risk undermining its intellectual coherence by divorcing it from its empirical foundations. Ultimately, the task for scholars is not to force artificial distinctions but to recognize the interdependent continuum whereby measurement informs evaluation, and evaluation, in turn, is situated within the broader architecture of research.
References
Fitzpatrick, J., Sanders, J., & Worthen, B. (2010). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston: Allyn & Bacon.
Middleton, F. (2019, July 3). Reliability vs. validity in research: Difference, types and examples. Retrieved from https://www.scribbr.com/methodology/reliability-vs-validity/
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.