Aug 17

Guest blog: Q&A with an afterschool researcher, part II

Welcome to part II of our Q&A with Neil Naftzger of the American Institutes for Research (AIR) about his evaluation work related to 21st CCLC programs specifically and the afterschool field broadly. Below is his answer to one of the questions we asked, with our emphasis added in bold, which establishes that there is in fact clear evidence demonstrating that 21st CCLC programs work for students. To read part I, click here.

What changes would you like to see in terms of 21st CCLC data collection and evaluation?

This is a big question. First, I think we need to be clear about the purposes we’re trying to support through data collection and evaluation. Normally, we think about this work as falling within three primary categories:

  1. Data to support program staff in learning about quality practice and effective implementation
  2. Data to monitor the participation and progress of enrolled youth
  3. Data to assess the impact of the program on youth who participate regularly

States have done an amazing job over the past decade developing quality improvement systems predicated on using quality data to improve practice (purpose #1). Effective afterschool quality improvement systems start with a shared definition of quality. In recent years, state 21st CCLC systems have come to rely upon formal assessment tools like the Youth Program Quality Assessment (YPQA) and the Assessment of Program Practices Tool (APT-O) to provide that definition, allowing 21st CCLC grantees to assess how well they are meeting these criteria and to craft action plans to intentionally improve the quality of programming. Use of these tools typically involves assigning a score to various program practices in order to quantify the program’s performance and establish a baseline against which to evaluate growth. A recent report completed by AIR indicates that approximately 70 percent of states have adopted a quality assessment tool for use by their 21st CCLC grantees. Our sense is that these systems have been critical to enhancing the quality of 21st CCLC programs, and any efforts to modify the 21st CCLC data collection landscape should ensure program staff have the support and time necessary to participate in these important processes.

Second, additional work needs to be done to define key indicators for the program, both to support monitoring of the participation and progress of enrolled youth and to inform targeted refinements to programming that enhance quality and effectiveness (purpose #2). For example, our sense is that indicators could be crafted to answer the following three questions:

  1. To what extent are centers retaining youth in 21st CCLC programming, both during the school year and across school years? From our statewide evaluation work, we have plenty of evidence to suggest that youth benefit more from 21st CCLC programming the more they participate. Keeping youth enrolled is linked both to the underlying quality of a center’s activities and to ensuring that youth have access to developmentally appropriate activities that keep them interested and engaged as they get older. (One way such retention indicators could be computed is sketched after this list.)
  2. To what extent are youth reporting having positive experiences in 21st CCLC programs? We consider it vital for programs to understand the subjective experiences youth have while participating in programming and use this information to enhance program offerings to ensure a “goodness of fit” between where youth are and what learning supports and opportunities the program is providing. Youth surveys are primarily used to obtain these types of data.
  3. To what extent are youth demonstrating improvement on those outcomes a center is specifically attempting to impact through the provision of intentional programming aligned with those outcomes? This question is predicated on the belief that not every center should be expected to positively impact the full array of school-related outcomes we typically examine when attempting to assess program impact. Rather, federal, state, and local stakeholders and funders should look for improvement in the specific areas explicitly targeted by a center’s goals and objectives and by the aligned activities its staff design and deliver to support the positive development of youth.

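As a rough illustration of how the retention indicators in question 1 might be computed, here is a minimal Python sketch. The attendance table, its column names, and the 30-day threshold are all hypothetical assumptions for illustration, not taken from AIR’s actual work.

```python
# A minimal sketch of within-year and cross-year retention indicators.
# Assumes a hypothetical pandas DataFrame `attendance` with one row per
# youth per school year and columns: youth_id, school_year (an integer
# like 2023), and days_attended. The 30-day threshold is illustrative.
import pandas as pd

def retention_rates(attendance: pd.DataFrame, threshold: int = 30):
    # Within-year retention: the share of enrollees in each school year
    # who attended at least `threshold` days of programming.
    within = (
        attendance.assign(retained=attendance["days_attended"] >= threshold)
        .groupby("school_year")["retained"]
        .mean()
    )
    # Cross-year retention: the share of each year's enrollees who
    # re-enroll in the following school year.
    by_year = attendance.groupby("school_year")["youth_id"].apply(set)
    across = {
        year: len(ids & by_year[year + 1]) / len(ids)
        for year, ids in by_year.items()
        if year + 1 in by_year.index
    }
    return within, pd.Series(across)
```
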
We also would advocate for collecting and using these data within a quality improvement framework, as opposed to an accountability framework. The focus here should be on using data to enhance program implementation rather than to make summative judgments about efficacy or impact.

Assessing program impact represents the final way of looking at 21st CCLC data (purpose #3). In this instance, the focus should be on using rigorous quasi-experimental designs to support causal inferences about how participation in 21st CCLC programming may be having a positive impact on participating youth. Here, we would like to see improvement in two areas: (1) enhancing the rigor of evaluation efforts and (2) improving the measurement of outcomes that are especially likely to be impacted through 21st CCLC participation.

In terms of rigor, if we are trying to make causal inferences about the impact of 21st CCLC, we need to make sure we are using research designs that will support those inferences. In the statewide 21st CCLC evaluations completed by AIR, each analysis undertaken to assess the relationship between regular program participation and youth outcomes compared youth participating in programming for 60 days or more during the school year with a similar group of youth from the same schools who did not participate in programming. To ensure the participating and non-participating groups were as similar as possible, we used an approach called propensity score matching to create the non-participant comparison group. Given that youth were not randomly assigned to participate in programming, there is always the concern that youth who did opt to participate differed in important ways from youth who did not enroll. That is, apparent program effects could be driven more by existing differences between the groups at baseline than by a true relationship between program participation and youth outcomes. The goal in using propensity score matching was to mitigate this selection bias when estimating potential program effects by accounting for preexisting differences between youth who attended the program regularly and those who did not. As a result, these analyses helped us isolate the potential effect participation in 21st CCLC had on youth outcomes.

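To make the matching logic concrete, here is a minimal sketch of 1:1 nearest-neighbor propensity score matching in Python. The DataFrame, the participant flag, and the covariate names are hypothetical, and AIR’s actual analyses are surely more involved (e.g., matching youth within the same schools on many baseline measures); this only illustrates the core idea described above.

```python
# A minimal sketch of propensity score matching, assuming a hypothetical
# pandas DataFrame with a binary `participant` column (1 = attended 60+
# days) and baseline covariate columns such as prior achievement and
# demographics. Not AIR's actual procedure; for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_participants(df: pd.DataFrame, covariates: list[str],
                       treatment_col: str = "participant"):
    # Step 1: estimate each youth's probability of participating (the
    # propensity score) from baseline characteristics.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treatment_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treatment_col] == 1]
    control = df[df[treatment_col] == 0]

    # Step 2: for each participant, find the non-participant with the
    # closest propensity score (1:1 nearest-neighbor matching).
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_control = control.iloc[idx.ravel()]

    # Outcomes can then be compared across the two matched groups; any
    # remaining difference is a less biased estimate of the program effect.
    return treated, matched_control
```
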
Efforts to evaluate the impact of 21st CCLC should increasingly employ these types of designs, particularly at the state level, where additional evaluation resources are available to support these types of analyses. In addition, very little work has been done to understand the longitudinal effects of 21st CCLC participation on the educational and career trajectories of participating youth. More work needs to be done in this area as well.

Finally, there is a great opportunity to improve upon how we are measuring youth outcomes supported by 21st CCLC participation. When we talk with center coordinators about how youth benefit from participating in the program, quite often their responses can be classified as falling in one of the following categories:

  • Belonging/Mattering
    • Creating a sense of belonging and connectedness for participating youth through positive and supportive relationships
    • Supporting the development of positive interpersonal skills
  • Promoting Youth Agency
    • Supporting the development of positive mindsets and beliefs that participating youth can succeed through effort, including promoting confidence and a sense of self-efficacy
    • Cultivating new cognitive tools, like strategic thinking, by affording youth the opportunity to take ownership of the learning process through approaches like project-based and inquiry-based learning
  • Interest Development
    • Providing opportunities to participating youth to experience and do new things
    • Discovering new areas of interest and passion (e.g., STEM, the arts)
  • Self-Management
    • Developing skills to manage cognition, behavior, and emotion
  • Sense of Purpose
    • Helping youth understand why what they do with their lives matters

Unfortunately, many of these areas are currently going unmeasured, leading to a gaping hole in our understanding of how 21st CCLC programming is truly impacting participating youth. Our sense is that if we really want to understand how 21st CCLC may be impacting youth, we need to dedicate some additional effort to examining these types of outcomes. However, before states and grantees widely pursue efforts to measure these types of program outcomes, they should wait for the research community to provide more concrete recommendations regarding which skills, beliefs, and attitudes can be reliably measured (and under what conditions), and what protections need to be in place to ensure youth are not adversely impacted by participating in these measurement efforts.