Resource Details

This article originally appeared as part of the ReShape Newsletter: Sexual Assault Program Evaluation Coalitions Take the Lead (Winter 2013)

An Interview with Sean Black, Communications Coordinator, Illinois Coalition Against Sexual Assault, conducted by Kendra Malone, Information & Technical Assistance Specialist, Iowa Coalition Against Sexual Assault, December 18, 2012

1. Why evaluate programs?

Sexual assault service program evaluation is important because programs need to know what is working and what is not so that they can provide the best possible services. Evaluation is also useful for coalitions: by helping programs identify their strengths, coalitions can guide providers in improving services through peer-to-peer networking. This fosters programs' ability to share best practices with one another, so that those who need assistance can receive the occasional boost.

Program evaluation is also useful for demonstrating positive survivor and community impact to funders and other stakeholders. For instance, evidence that a program's services are working, and that survivors and families find value and healing in them, bolsters the program's credibility with survivors, funders, and communities. Many programs are able to take evaluations from the last few years to funders and legislators as backing for the impact of their services. Often, evaluation results affirm that a community is stronger because its rape crisis center is there. Additionally, evaluation sets sexual assault programs apart from other nonprofit organizations by demonstrating both the need for services and their impact. In the current financial climate, evaluation is especially crucial: when programs seek funding at the local level, they can draw on evaluations to demonstrate their good work.

2. How is your coalition evaluating programs?

The Illinois Coalition Against Sexual Assault (ICASA) has conducted outcome measure evaluations for all centers each October for the last three years. Three client surveys were provided to centers: forms for children, adolescents, and adults. The coalition works with a research institute in Rockford, which compiles the survey results into a report on how clients have been impacted by services. Two of these reports have been completed, and the 2012 report is being compiled, which will make three years' worth of data available for comparison. After this round of data collection, the coalition hopes to implement a longitudinal client study to gain insight into the long-term impacts of services. The coalition will not repeat the yearly client surveys in their current format and is meeting this summer to plan the next steps for evaluation.

3. What tips do you have for working with consultants or researchers?

ICASA works with university-based researchers. We value this relationship because it allows programs' evaluation data to be presented with the authority that comes from rigorous academic processes; the skills of university-based researchers carry more weight in analysis and in data presentation that shows service impact. It is also important to have the vested involvement of a third party who can offer a fresh perspective on how those outside the field will conceptualize the information.

Working with researchers should be a collaborative process that includes coalition staff, programs, and the researcher. ICASA has an evaluation committee that includes program representation. The committee works through the questions to be asked and the logistics of executing surveys in the rape crisis centers, and ensures that survey delivery is feasible within the real-life experiences of advocates. There is significant pre-planning involved: the coalition holds meetings with programs, coalition staff, and researchers to discuss survey questions, dissemination and collection logistics, and how the data can be used for the benefit of all. It is also important to give the evaluation process time.

ICASA is working with researchers to publish their results in scientific journals, which further legitimizes the work. Make use of universities; they are a valuable resource! Since collaborating with the researcher on program evaluation, the coalition has been approached by several other researchers who have submitted proposals for additional evaluation projects.

4. How do you determine outcome measures?

Determining outcome measures is a give-and-take among programs, researchers, and coalition staff, particularly when programs have multiple funders and stakeholders to report to, each with different criteria. First, the collaborative committee decides what information would be most useful to programs, with the primary goal of understanding how services impact individuals, their families, and their communities. Questions generally consider improvements in survivor relationships and seek a better understanding of how services contribute to larger community involvement. Survey questions are built on a community improvement model rather than a solely individualistic approach.

5. How does the coalition support programs in making time for evaluation?

The coalition provides technical assistance (TA) webinars on how to go through the evaluation process. Staff support programs by reminding them to complete surveys and by making surveys and related materials available on the ICASA webpage for ease of access. The program committee and the entire governing body approve evaluations, so evaluations are presented not in a top-down fashion from the coalition but as a collaborative effort among the coalition, programs, and researchers.

6. What do you do with evaluation results?

Evaluation results are used for a variety of purposes: to help strengthen programs' sexual assault services, to demonstrate the positive impact those services have on individuals and communities, and to pinpoint positive trends in service provision as well as highlight opportunities for improvement.

7. What are the benefits of evaluation?

The coalition is careful not to overburden programs with too many evaluation requirements, so it does not conduct more than one type of evaluation at a time. Membership buy-in and support are key to successful evaluation, so every call for evaluation starts with leaders from the centers understanding its necessity and benefits. One of the primary benefits is having results programs can use at the local level, whether to support increased funding or to build new community relationships. Of particular benefit to the coalition, this data has allowed it to avoid funding cuts because it can show that outcome measures are meaningful. Evaluation is also a way for the coalition to see how programs are doing and to assist them with improvements as necessary, since every center is invested in providing the best possible services to survivors. Additionally, the coalition uses evaluation as a learning tool that gives programs a foundation for their programming.

8. How do you quell fears about evaluations showing that something needs improvement?

You must be honest and upfront from the beginning about the goal of the evaluation and the purpose it serves for programs and survivors. There are no repercussions if the evaluation highlights areas for improvement; instead, that is used as an opportunity for programs and the coalition to work together to advance services. In such instances, questions like "Can we figure out why this is the case?" are asked to locate where the disconnect lies, and programs are then supported in building capacity to make improvements. It is also essential to note that the coalition has built a trusting relationship with programs, so that everyone is on the same page regarding roles, expectations, and outcomes. The coalition is there to help, not hinder, services.

9. Other information to add?

ICASA has found that paying for evaluations, in terms of hiring consultants and researchers, is fairly affordable. They consider it money well spent: it is an investment in programs, has helped stall funding cuts, and ensures that services are the best they can possibly be. In the beginning stages of launching the evaluation, much of the cost came from travel for planning meetings, but this expense has decreased thanks to technologies like Skype that connect people remotely. All are welcome to reach out to ICASA to talk about their evaluation processes and to use any of their materials. Sean can be reached at (217) 753-4117. We recommend calling to ask questions of people you know. Also, look to your budget to determine what is feasible for your coalition to spend; be upfront with a consultant or researcher about the resources you have available for evaluation, and ask what they can do with that specific amount. Finally, make sure researchers can draw on the assistance of graduate students or others, as tasks like data entry and report formatting are time-intensive but can be done by others.