Analyze the steps involved in planning a program evaluation
Identify the purpose of the assessment and the audience
The first step in developing your evaluation plan, according to Mertens and Wilson (2018), is to consider how the evaluation will be used. Who is the audience for the evaluation? These may be sponsors, program staff, executives who make future plans, or community members who participated in the program. Each of these groups is likely to want to know different things about the program (Taber et al., 2017). For example, program staff may want to know whether participants are benefiting from the activities, while project sponsors may want to know whether the plan is achieving the desired results.
Developing assessment questions
No single evaluation design can be recommended, because the appropriate design depends on the purpose of the evaluation and the program being evaluated. Some factors to consider are the type of program or project you are trying to evaluate; the questions you want to answer; your target audience; the purpose of your evaluation; and the available resources.
Taber et al. (2017) state that there is a need to clarify the goals and objectives of your initiative. These objectives are the main things that need to be accomplished and the processes involved in achieving them. Spelling this out will help identify which key components should be evaluated. One way to do this is to create a table of program components and their elements. If short-, medium-, and long-term outcomes have already been identified in a program logic model, this step will be much easier, since there is already a description of what needs to be measured, and the timeframe within which those results are expected may already be specified.
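The table of outcomes, indicators, and timeframes described above can be laid out as a simple data structure. This is only an illustrative sketch; every outcome, indicator, and timeframe below is hypothetical.

```python
# Hypothetical logic-model table: each outcome is paired with the
# indicator used to measure it and the timeframe (in months) within
# which results are expected. All entries are illustrative.
logic_model = [
    {"term": "short",  "outcome": "Participants attend workshops",
     "indicator": "Attendance records",          "timeframe_months": 3},
    {"term": "medium", "outcome": "Participants apply new skills",
     "indicator": "Follow-up survey",            "timeframe_months": 12},
    {"term": "long",   "outcome": "Community employment rises",
     "indicator": "Local employment statistics", "timeframe_months": 36},
]

# Planning aid: list which outcomes must be measured within the first year.
first_year = [row["outcome"] for row in logic_model
              if row["timeframe_months"] <= 12]
```

Keeping the table in one place like this makes it easy to filter by timeframe when deciding which outcomes an evaluation cycle can realistically cover.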
While you may have identified a variety of outcomes for your program in your logic model, evaluation requires time and resources, and it may be more realistic to evaluate some of your outcomes rather than all of them (Mertens and Wilson, 2018). When choosing which outcomes to measure, there are several factors to consider and questions to answer.
As per McDavid et al. (2018), planning and implementation questions may include: How well was the plan or project organized, and how well was it implemented? Other possible questions include: Who participates? Is there diversity among participants? Why do participants enter and leave your programs? Are a variety of services and other activities being created? Are those who most need help receiving services? Are community members satisfied that the plan meets local needs?
Possible ways to answer these questions include monitoring systems that track the actions and outcomes associated with project implementation, and member surveys of satisfaction with goals and with results. The next phase must also answer how well the plan or initiative achieved its stated goals. Questions about how many people participated, or how many hours they participated, can be answered through monitoring, member surveys of satisfaction with outcomes, and goal attainment scaling.
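Goal attainment scaling, mentioned above, can be sketched in code. This is a minimal illustration of the conventional Kiresuk–Sherman T-score calculation, assuming goals are scored on the usual -2 to +2 attainment scale; the scores, weights, and the inter-goal correlation rho = 0.3 are conventional or hypothetical values, not figures from the source.

```python
import math

def gas_t_score(scores, weights, rho=0.3):
    """Kiresuk-Sherman goal attainment T-score.

    scores  -- attainment level per goal on the -2..+2 scale
               (0 = expected level of attainment)
    weights -- relative importance of each goal
    rho     -- assumed correlation between goal scores (0.3 is conventional)
    """
    numerator = 10 * sum(w * x for w, x in zip(weights, scores))
    denominator = math.sqrt((1 - rho) * sum(w * w for w in weights)
                            + rho * sum(weights) ** 2)
    return 50 + numerator / denominator

# If every goal is attained exactly at the expected level (all zeros),
# the T-score is 50 by construction.
baseline = gas_t_score([0, 0, 0], [1, 1, 1])  # 50.0
```

A score above 50 indicates attainment beyond expectations overall; below 50, attainment fell short.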
Impact on participants: How much and what kind of a difference has the program or initiative made for its targets of change?
Possible questions: How has behavior changed as a result of participation in the program? Are participants satisfied with the experience? Were there any negative results from participation in the program?
Possible methods to answer those questions: member survey of satisfaction with goals, member survey of satisfaction with outcomes, behavioral surveys, interviews with key participants.
Impact on the community: How much and what kind of a difference has the program or initiative made on the community as a whole?
Possible questions: What resulted from the program? Were there any negative results from the program? Do the benefits of the program outweigh the costs?
Possible methods to answer those questions: Behavioral surveys, interviews with key informants, community-level indicators.
Once you have determined which outcomes your evaluation will focus on, you can frame the evaluation questions. A good evaluation question is specific, measurable, and targeted, to ensure that you receive useful information and are not left with more data than you can analyze (Taylor-Powell, Jones & Henert, 2003; WHO, 2013). As you develop your evaluation questions, be sure you can find or collect the data needed to answer each question without much difficulty. Once you have the questions you want your evaluation to answer, the next step is to decide which methods are best suited to those questions. Here is a brief overview of some common evaluation methods and what they work best for.
Monitoring and Feedback System
Process measures: these tell you what you did to implement your initiative.
Outcome measures: these tell you what the results were; and
Monitoring system: this is what you use to track the initiative as it happens.
An indicator is the information you need to collect to answer your evaluation question. Outcomes and indicators are sometimes confused: outcomes are the changes that result from the program, while indicators are the things you see, hear, or read that provide the information needed to know what, and how much, has changed. Some outcomes and evaluation questions may be better measured with more than one indicator.
The methods and tools you will use to collect data depend on the type of data you are collecting. Data can be quantitative (numbers) or qualitative (words). The type of data you want to collect is usually determined by the evaluation question. Many evaluations are “mixed methods” evaluations that combine quantitative and qualitative data (Taber et al., 2017). Quantitative data tell you how much, how many, or how often something has happened, and are usually expressed as a count, percentage, or rate. Quantitative data-collection methods include tools for measuring outcomes, rating-scale surveys, or observation methods that record how often something happened (Mertens and Wilson, 2018). Qualitative data tell you why or how something happened and are helpful for understanding attitudes, opinions, and behaviors. Qualitative data-collection methods include interviews, focus groups, and open-ended surveys (Taber et al., 2017). It is difficult to demonstrate how a program positively impacted participants with qualitative data alone.
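The quantitative/qualitative distinction above can be illustrated with a small sketch. All survey data here are hypothetical: numeric satisfaction ratings are summarized as a percentage (quantitative), while open-ended comments are reduced to counted themes (a simple qualitative coding step).

```python
from collections import Counter

# Hypothetical member-survey responses.
ratings = [5, 4, 4, 3, 5, 2, 4, 5]  # satisfaction on a 1-5 scale

# Hypothetical themes coded from open-ended comments.
comment_themes = ["childcare", "scheduling", "childcare", "transport"]

# Quantitative summary: percentage of respondents rating 4 or higher.
satisfied = sum(1 for r in ratings if r >= 4)
pct_satisfied = 100 * satisfied / len(ratings)

# Qualitative summary: which themes came up most often.
theme_counts = Counter(comment_themes).most_common()
```

In a mixed-methods evaluation, the percentage answers "how many were satisfied?", while the theme counts help explain why the remaining members were not.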
Taber, J.J., Bohon, W., Bravo, T.K., Dordevic, M., Dorr, P.M., Hubenthal, M., Johnson, J.A., Sumy, D., Welti, R. and Davis, H.B., 2017, December. Increasing the use of evaluation data collection in an EPO program. In AGU Fall Meeting Abstracts.
Mertens, D.M. and Wilson, A.T., 2018. Program evaluation theory and practice. Guilford Publications.
McDavid, J.C., Huse, I. and Hawthorn, L.R., 2018. Program evaluation and performance measurement: An introduction to practice. Sage Publications.