______________________________________________________________________

Strategic Solutions employs mixed-methods evaluations to provide rigorous, scientific evidence of program or organizational efficacy. Research models may be experimental, quasi-experimental, non-experimental, or a combination thereof. Experimental designs are the most scientifically rigorous; non-experimental designs are the least.

Depending on an organization’s needs and resources, the research/evaluation design may consist of two or more complementary studies incorporating both quantitative and qualitative methods. The evaluation may also be summative, formative, or both, and may address one or more research questions.

Further, to better establish internal and convergent validity and reliability, all research and evaluations are designed using standardized research methods, multiple levels of analysis, and, where possible, standardized instruments.

SSI also creates Logic Models to link evaluation questions, data elements, data sources, data collection strategies and analytical techniques.
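
To make that linkage concrete, the sketch below represents one row of a hypothetical logic model as a simple Python data structure. The field names and example values are illustrative assumptions only, not SSI's actual template.

```python
# A minimal sketch of one logic-model row linking an evaluation
# question to data elements, sources, collection strategy, and
# analytic technique. All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class LogicModelRow:
    evaluation_question: str
    data_elements: list[str]
    data_sources: list[str]
    collection_strategy: str
    analytic_technique: str

row = LogicModelRow(
    evaluation_question="Did recipient skills improve after the program?",
    data_elements=["pre-test score", "post-test score"],
    data_sources=["standardized skills assessment"],
    collection_strategy="administer the assessment at intake and exit",
    analytic_technique="compare gain scores across groups",
)
print(f"{row.evaluation_question} -> {row.analytic_technique}")
```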

Brief descriptions of the evaluation designs and classifications are presented below.
______________________________________________________________________

Research/Evaluation Designs

A. Experimental: (most rigorous) this model helps determine program or treatment efficacy by 1) randomly assigning program recipients to treatment (program) and control (non-program) groups and 2) testing recipients at two or more points in time. A brief illustrative sketch follows these descriptions.

B. Quasi-Experimental: (less rigorous) this model helps determine program or treatment efficacy by testing recipients at one or two points in time, but either no control group is used, recipients are not randomly assigned, or both.

C. Non-Experimental: (least rigorous) this model helps determine program or treatment efficacy by studying recipients in detail and in depth via observation, interviews, and/or profiles. This design involves neither a control group nor a baseline for measuring recipient progress.
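
As referenced under design A, the sketch below illustrates the logic of an experimental design in Python: random assignment to treatment and control groups, testing at two points in time, and a between-group comparison of gain scores. The data and effect sizes are simulated purely for illustration (it requires the scipy package).

```python
# A minimal sketch of an experimental design: random assignment plus
# pre/post testing. All recipients, scores, and effects are simulated.
import random
from scipy import stats

random.seed(0)

# 1) Randomly assign 100 hypothetical recipients to groups.
recipients = list(range(100))
random.shuffle(recipients)
treatment, control = recipients[:50], recipients[50:]

def gain_scores(group, assumed_effect):
    # 2) Test each recipient at two points in time (pre and post),
    # then compute the gain. assumed_effect stands in for the
    # program's (hypothetical) impact.
    pre = [random.gauss(50, 10) for _ in group]
    post = [score + random.gauss(assumed_effect, 5) for score in pre]
    return [b - a for a, b in zip(pre, post)]

treatment_gains = gain_scores(treatment, assumed_effect=8.0)
control_gains = gain_scores(control, assumed_effect=1.0)

# A significant difference in gains between groups is evidence
# of program efficacy.
t_stat, p_value = stats.ttest_ind(treatment_gains, control_gains)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```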
______________________________________________________________________

Research/Evaluation Classifications

A. Summative: provide evidence of program success by examining outcomes, effects and impact. They are helpful for determining program efficacy (e.g., success, achievement, satisfaction, etc.). Summative evaluations can be both quantitative and qualitative and typically ask: does the program work; how well does it work; in what areas; and for what types of populations?  

B. Formative: provide a context for impact results by examining processes, structures and content. They are helpful for determining the effectiveness of program implementation (e.g., design, staffing, leadership, resources, environment, etc.). Formative evaluations can be both quantitative and qualitative and typically ask: what does the program look like; how does it work; in what context or under what circumstances; and with what resources?

C. Quantitative: quantify a pre-determined set of variables (e.g., the number of recipients who have improved as a result of treatment) that are expected to provide evidence of the success (or lack thereof) of a program or organization. Data collection methods are used that best yield quantifiable information: surveys and questionnaires, tests and assessments, inventories and checklists, rubrics, records, etc. A brief illustrative sketch follows these descriptions.

D. Qualitative: examine qualities or characteristics unique to a program or organization (e.g., particular views or actions of a select group of program recipients) that could shed light on the success (or lack thereof) of a program or organization. Data collection methods are used that provide more in-depth information not readily captured through quantitative means: case study, interview, focus group, site visit, observation, ethnographic study, etc.
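
As referenced under classification C, the sketch below shows one way a quantitative variable such as "number of recipients who improved" might be computed from pre/post assessment records. The record layout and the definition of "improved" (any positive gain) are assumptions for illustration only.

```python
# A minimal sketch: tallying how many recipients improved between
# pre- and post-assessments. Records and threshold are hypothetical.
records = [
    {"id": 1, "pre": 42.0, "post": 55.0},
    {"id": 2, "pre": 61.0, "post": 58.0},
    {"id": 3, "pre": 50.0, "post": 67.0},
]

improved = sum(1 for r in records if r["post"] > r["pre"])
rate = improved / len(records)
print(f"{improved} of {len(records)} recipients improved ({rate:.0%})")
```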