Research and Evaluation Services

The Center for Positive Practices (CPP) has provided research and evaluation services to many clients over the years. Our research and evaluation consultants have participated in or directed numerous cluster, program, project, media, instructional, and classroom evaluations in the fields of education and community-based services.

Evaluations range in scale from the simple to the complex, and may include:

  • finding evidence of whether something specific is working, such as whether students are learning from a given classroom lesson;
  • understanding the context, culture, personal and social habits, and perceptions of a given population;
  • determining whether a given program or project is being implemented as planned and on schedule;
  • learning whether a program or initiative is on track and how it may be improved; and
  • testing theories and building models for generalizing across multiple settings.

CPP conducts every research and evaluation study as a participatory collaboration with its clients and their stakeholders.

Our evaluation strategies are briefly detailed below.

Overview

CPP consultants perform a wide array of research and evaluation services, including the following:

Data gathering: We collect high-quality, rigorous, valid, and reliable data on program needs, options, and opportunities.

Data analysis: We perform a wide range of qualitative and quantitative analyses, driven by the particular research questions of each study.

Findings and recommendations: We detail our findings in realistic contexts and make recommendations on how the project may improve its performance, strategies, and operations.

Standards and ethics: We follow standards and operating procedures consistent with the evaluation standards developed by the Joint Committee on Standards for Educational Evaluation (1994). We also adhere to other nationally recognized, local, and/or institutional standards for the study of human subjects, as relevant to each project. CPP maintains confidential records and data for all projects for at least three years, or longer as required for a given project.

Evaluation Goals

The goals for any research or evaluation study are driven by the needs of the project. These goals may include:

Assessing for continuous improvement: Investigating the issues, concerns, attitudes, behaviors, performance, practices, and options in a project in order to assess existing strategies. This includes understanding the barriers, facilitators, and context of a project, and how existing resources may be leveraged to overcome obstacles to successful implementation.

Evaluating outputs, processes, outcomes, and impacts: Investigating whether the project is on course, off course, or needs to change course to attain its stated goals and objectives, and whether the project strategies are having their intended effects. The effects of a project may include the strength or existence of short- and long-term changes in:

  • learning;
  • behavior or performance;
  • beliefs and principles;
  • buy-in, adoption, or support among the target audience or stakeholders;
  • systemic processes;
  • resource allocations;
  • dissemination and policy formation;
  • theories or models;
  • levels of scale and sustainability; or
  • any combination of these, as required by the project.

Evaluation Framework

CPP is particularly strong in gathering data that is informative to the project client and its stakeholders. This requires an evaluation team that is flexible and understands that projects are themselves dynamic. Seldom does a project remain static, nor should it. We live in changing environments, so it is appropriate that projects constantly reassess their strategies and continuously seek new ways of improving their products, processes, strategies, and goals. Our evaluation framework is designed to adapt to a project's needs, and is therefore able to work with the changing needs of the project instead of simply issuing scorecards of success or failure.

Our approach is an action-oriented, reciprocal system for continuous mixed-method data collection, analysis, participation, and interaction, focusing on goal-directed and theory-guided changes in relationships, actions, and culture. The result is a multi-leveled approach that goes beyond post-intervention correlational analysis. Instead, our approach is designed to surface expectations up front and to inform participants of the ongoing efficacy of their actions, revealing patterns and trends that both explain and predict what appears to be effective, sustainable, scalable, and replicable.

Our evaluation approach is built on four components: participatory research, theories of change, systems thinking, and standards for quality evaluation. Each is briefly summarized below, along with an explanation of how it is translated into action for a given evaluation.

Participatory Research

Participatory evaluation research values the roles of all participants and recognizes the power of understanding that can be derived through the direct participation and collaboration of the individuals involved in an initiative's activities. It is positive, inviting, empowering, motivating, and appreciative of the participants and what they bring to a collaborative evaluation process. Participants are also coached to increase their own evaluation and program implementation capacities.

Theories of Change

CPP's evaluation framework follows leading strategies for evaluating comprehensive community initiatives (CCIs) for children and families (Aspen Institute, Steering Committee on Evaluation, 1995; Weiss, 1997). From this perspective, care and attention must be given to working across functional areas and populations, collaborations, contexts, and change. Evaluators must add elements of rigor when informing stakeholders and policymakers of the potential applicability and sustainability of CCI results in other contexts. The theories of change approach (Connell et al., 1995; Weiss, 1995) holds that collaborations among functional areas and organizations are premised on personal and organizational theories (understandings of cause and effect and of the meaning of events, practices, and activities) intended to effect behavioral and/or organizational change. To adequately evaluate why and how activities and outcomes occur, these theories of action and their underlying assumptions must be "unpacked" and made explicit. In this way, evaluation becomes deeper and more meaningful, and gains predictive validity because hypotheses about activities and relationships become known. This adds an element of rigor that is often missing in evaluative research associated with service learning.

Systems Thinking

The third component of our evaluation approach is based on the "learning organizations" model proposed by Senge (1990; Senge et al., 1994). This model provides a framework for analyzing and implementing complex systemic change by unraveling the patterns and interrelationships among systems, events, assumptions, vision, and collaboration. By evaluating an initiative from a systems perspective, we are able to continuously examine which activities and processes appear to have effects on identifiable outcomes. The usefulness of identifying these relationships lies in their ability to illuminate the levers that promote the kinds of change desired during the formative stages of an initiative.

Standards for Quality Evaluation

The Joint Committee on Standards for Educational Evaluation (1994) developed standards for evaluation that we believe must be met for an evaluation to have integrity and usefulness. Briefly, these standards include the following:

Utility Standards: ensure that the evaluation will meet the needs of its clients. These include identifying stakeholders, being responsive to needs and interests, performing work with integrity and trustworthiness, carefully describing the perspectives, procedures, and rationale for data collection and interpretation, clearly describing programs and their contexts and purposes, disseminating information in a timely fashion, and encouraging follow-through so that the information is used to improve programs.

Feasibility Standards: ensure that the evaluation will be realistic, prudent, diplomatic, and efficient. These include practicality, political viability, and cost-effectiveness.

Propriety Standards: ensure that the evaluation is conducted legally, ethically, and respectfully, with due regard for those involved in the evaluation and those affected by it. These include a service orientation that explicitly recognizes the obligation to be open with all participants; formal agreements about what is to be done, how, by whom, and when; protecting the rights of human subjects; keeping human interactions positive and nonthreatening; being fair in all data collection and interpretation; disclosing findings to all interested parties; dealing with any conflicts of interest in a forthright manner; being ethically responsible; and maintaining fiscal responsibility and integrity so that expenditures are accounted for and appropriate.

Accuracy Standards: ensure that the evaluation will reveal and convey technically adequate information. These include program documentation, context analysis, detailed explanations of purposes and procedures, defensible information sources, valid and reliable information, systematic review of data, justified conclusions, impartial reporting, and reflection on the evaluation process itself to uncover any errors, flaws, alternative interpretations and explanations, and the need for more information.
