Several years ago, I wrote a blog post all about evaluating the effectiveness of your group counseling services. I shared some more about it in my tier 2 virtual training. It wasn’t enough, though, because I left out some of the nitty-gritty details around actually analyzing the data. I’m rectifying that mistake by breaking down (with examples) three specific ways you might use your group counseling data.
If you haven’t read the post above on evaluating groups, you may want to do so now! It will give you an idea of how the rest of this post fits together!
Before we dive into the “how,” we need to pause for a second and discuss the “why.” Why are you collecting and analyzing data?
- To determine whether or not the group was effective in order to make changes to it the next time you run a similar group?
- To determine whether or not a student has improved or progressed enough to no longer receive an intervention?
- Or to advocate for group counseling (and the school counseling program) as a whole with stakeholders such as admin?
Your answer can and should guide both what data you collect and what you do with it!
What do I do with the pre/post surveys?
One way to use the pre-survey data is to guide the group itself. For example, if the teacher marks “usually” for the emotions prompts, but “rarely” for the conflict and peer relations prompts, then I might switch out the emotions sessions (or only do one) and instead expand one or both of the peer relations sessions.
Comparing Pre/Post to Evaluate Effectiveness
First, you may want to review what the goal of the group was in the first place. You could run the same group with two different groups of students and have two different goals for success! For example, you might run two social skills groups using the same curriculum. Of course you’re hoping both groups increase in their use of expected, prosocial skills, but one might also be really focused on conflict and behavior referrals while the other is focused on making and keeping friends. You can use the group’s pre/post survey to gauge success, but you might look at different specific prompts for each and/or look at some other data, too.
When you’re comparing pre/post-surveys, there are a few ways to look at the numbers and gauge how effective your group was.
The first is to look at the surveys’ total scores.
- Add up a total score for each survey (0 for rarely, 1 for sometimes, 2 for usually – adjust as needed for your survey).
- Look at combined scores for the group pre vs. post, OR look at combined scores for individuals pre vs. post.
- Ask yourself: How has each student’s score increased/improved? How has the group’s score as a whole increased/improved?
Example: “The students’ scores on self-control, friendship, and emotional regulation increased 25% over the course of the group.”
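If you keep the survey numbers in a spreadsheet or a short script, the total-score comparison above is just a few sums. Here’s a minimal sketch in Python; the student names, prompt scores, and the 0–2 scale are all hypothetical, matching the example scoring above:

```python
# Total-score comparison, assuming the 0-2 scale described above
# (0 = rarely, 1 = sometimes, 2 = usually).
# Students and scores are made up for illustration.

pre = {
    "Student A": [0, 1, 1, 0, 1, 0],  # one number per survey prompt
    "Student B": [1, 1, 0, 1, 0, 1],
}
post = {
    "Student A": [1, 2, 1, 1, 2, 1],
    "Student B": [2, 1, 1, 2, 1, 2],
}

# Per-student totals, pre vs. post
for student in pre:
    print(student, sum(pre[student]), "->", sum(post[student]))

# Group totals and percent improvement
group_pre = sum(sum(scores) for scores in pre.values())
group_post = sum(sum(scores) for scores in post.values())
pct = (group_post - group_pre) / group_pre * 100
print(f"Group score improved {pct:.0f}% over the course of the group")
```

The same arithmetic works in a spreadsheet with a SUM column per survey and one percent-change cell for the group.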
Another option is to look at specific skill areas and see how many students showed improvement.
Example: “3 of the 4 students showed improvement in the area of self-confidence.”
In addition, you can compare each student’s pre-survey to post-survey individually. Quantify it by looking at how many skill areas each student showed improvement on. This usually just means the number of questions their answers improved on.
Example: “Three students showed improvement in all six skill areas, and two students showed improvement in four skill areas.”
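Counting improvements by skill area and by student can be sketched the same way. Again, the names and scores below are hypothetical, on the same 0–2 scale:

```python
# Counting improvement per student and per skill area,
# using hypothetical 0-2 survey scores (rarely/sometimes/usually).

pre = {
    "Student A": [0, 1, 1, 0, 1, 0],
    "Student B": [1, 1, 0, 1, 0, 1],
    "Student C": [0, 0, 1, 1, 1, 1],
}
post = {
    "Student A": [1, 2, 1, 1, 2, 1],
    "Student B": [2, 1, 1, 2, 1, 2],
    "Student C": [1, 1, 1, 1, 2, 2],
}

# How many skill areas (prompts) did each student improve on?
for student in pre:
    improved = sum(1 for b, a in zip(pre[student], post[student]) if a > b)
    print(f"{student}: improved in {improved} of {len(pre[student])} skill areas")

# How many students improved on one specific skill area (first prompt here)?
area = 0
count = sum(1 for s in pre if post[s][area] > pre[s][area])
print(f"{count} of {len(pre)} students improved in skill area {area + 1}")
```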
Last, instead of pre/post surveys, you might be using progress monitoring. In this situation, you would determine the group’s effectiveness by identifying whether students are making progress throughout!
Example: “Four of the five students showed continued progress throughout the last six weeks of the eight-week group.”
How do I use the data to decide if a student no longer needs an intervention?
Legal and Ethical Considerations
Let’s start here: Evidence-based measures are awesome. They give us informational labels and cut scores and clear guidance about whether or not a problem (still) exists. The problem is that ethically, and perhaps legally (per Dr. Carolyn Stone, our ethics expert), we can only use them when we have caregiver permission. That is because of their relationship to screening and diagnosis. This means we are usually relying on existing school data (e.g., attendance, behavior referrals) or survey tools we created to align specifically to the intervention goals. Quick side note: Depending on your district’s interpretation of the Protection of Pupil Rights Amendment, even unofficial surveys we create may require caregiver consent if the student is answering the questions.
Team-Based Decision Making
The tricky part here is that “There is no standard for mastery or expected growth rate for behavioral skills. Rather, these criteria must be established by the student support teams depending on the measure” (McDaniel et al., 2015, p. 13). It’s up to us as the SEL and mental health experts of the school, alongside teachers and other members of our teams, to make decisions about what the data means.
There are two questions we can ask ourselves (and the team) as we’re making these choices:
- What score would we deem “successful without supports”? What score would another student in class get, one who doesn’t need supports but has occasional developmentally appropriate struggles? Set your own benchmark and then compare post-survey scores to that. This is somewhat subjective and is likely impacted by the classroom culture, the age of the students, and other classroom- and site-specific norms.
- Which prompts/skill areas on the survey are the most crucial/valuable to that student’s functioning at school? Some skills might make more of a difference in a student’s success than others, depending on their personality, their grade, the class dynamics, etc. After you’ve determined the prompts to focus on, make a decision like “needs to score at least ‘sometimes’ in all areas to exit the intervention” or something similar to establish your own “cut score” so to speak.
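Once the team has picked its focus prompts and benchmark, the exit decision becomes a simple check. A minimal sketch, where the benchmark (“at least ‘sometimes’ in every focus area”), the prompt names, and the student’s responses are all hypothetical examples of what a team might choose:

```python
# Exit-criterion check, assuming a team-chosen benchmark:
# the student must score at least "sometimes" (1 on the 0-2 scale)
# on every focus prompt to exit the intervention.
# Prompt names and responses are hypothetical.

SCALE = {"rarely": 0, "sometimes": 1, "usually": 2}

post_survey = {
    "emotions": "usually",
    "conflict": "sometimes",
    "peer relations": "sometimes",
    "self-control": "rarely",
}

focus_prompts = ["conflict", "peer relations", "self-control"]  # team's priority areas
benchmark = SCALE["sometimes"]

ready_to_exit = all(SCALE[post_survey[p]] >= benchmark for p in focus_prompts)
print("Ready to exit intervention:", ready_to_exit)
```

Here the “self-control” response of “rarely” falls below the benchmark, so the check comes back negative and the team would keep the intervention in place.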
McDaniel, S. C., Bruhn, A. L., & Mitchell, B. S. (2015). A Tier 2 Framework for Behavior Identification and Intervention. Beyond Behavior, 24(1), 10–17.
Advocating for Groups
Your purpose or your why for collecting group counseling data might be for advocacy. Maybe you want parents and families to see what the school counseling program can do. Maybe you need your admin to see the impact of groups. Or maybe you’re hoping faculty will better understand how you’re spending your time.
Like with other uses of group counseling data, your purpose even within advocacy will guide what data you collect, analyze, and share. Here are some examples of what this might look like:
- Creating an end-of-semester infographic that includes the number of students served in groups, the number of groups run, and/or the number of group counseling sessions provided
- Emailing admin to share some of the effectiveness statements (such as percentage improvement and the number of skill areas students grew in)
- Sharing some of the impact info at the start of meetings where you’re going to ask teachers for referrals
Do you have any more questions about using group counseling data? Let me know in the comments and I’ll do my best to answer!