Means-ends program development
Ross Woods
Means-ends program development is the process in which program developers start by forming goals, then formulate ways to achieve them. The original analogy was to a factory, where manufacturing processes (means) contribute to making a product (end).
The goals (ends) may also be called "objectives" or "outcomes." For simplicity and consistency, let's just call them "goals".
Means-ends program development can be used either for formulating new programs or for evaluating existing ones.
The means-ends process
Means-ends program development basically works like this:
- Identify the overall purpose of the program, which may be expressed in a slogan or mission statement, or in a formal document commissioning the program. The purpose will probably fit into a long-term overall strategy, even if the strategy is not clearly written down.
- Use what is already known and being done as a basis.
- Review the literature, noticing important guidelines, applicable standards, and potential improvements and adaptations.
- Review similar or related existing programs, looking for the same kinds of things that you sought in the literature review.
- Propose the improvements and adaptations necessary for your particular goals.
- Survey the real or felt needs of the target population, and use survey results to formulate goals. The overall purpose of the program might be adjusted later when the actual needs are better understood.
- Develop sets of explicit, detailed goals that you will use to plan the program and how it will work. To a large extent, the goals are the measure of success, but you may need to establish other suitable methods of finding out how successful the program is. Who will measure and what instruments will be used?
- Get agreement from stakeholders or gatekeepers on your list of goals.
- Determine the ways in which you could most effectively reach those goals. (These are the processes.) Then design the new program, carefully noting the rationale, the results of consultation with stakeholders, and expected implementation issues.
- Implement the program:
  - Use the list of goals to evaluate progress while the program is running, but also observe and record any changes. Quite likely, you will find that you should be aiming for different things than you first thought, or at least thinking about them in very different ways. Some goals might have been too optimistic or not optimistic enough.
  - Notice and record the kinds of adaptations and changes made necessary by the implementation process.
  - Describe local factors that affect what participants do.
  - In some cases, this involves training staff in new skills and monitoring their implementation.
- Evaluate the program and draw conclusions on how successful it was. Different aspects probably varied in how successful they were.
- Review the goals, results and processes, and suggest improvements.
- The cycle starts again, incorporating the improvements.
In the means-ends view, the quality of the program is largely the quality of the goals it reaches. Its essential values are purposefulness, fitness for purpose, and the articulation and realization of purposes. It assumes that questions of quality and goals are essentially expressible in language. However, means and ends (or process and product) are closely interrelated, and there are limits to how sharply one can distinguish between them.
Perceptions
Means-ends thinking appears to be at least partly culturally determined. North Americans clearly value it highly, while many other cultures are driven far more by processes or the associated interpersonal relationships. Its clearly linear logic does not always fit well with holistic, global thinkers. People in some cultures appear to presume that circumstances will change rapidly, so that purposes must be too flexible and adaptive to be very useful as a planning tool. Consequently, when means-ends planning and evaluation are used in multicultural teams, some team members will perceive the process in very different ways.
Even many Westerners are not inclined to strict means-ends thinking. These include people who:
- identify product and process as intertwined,
- see purposes as dependent on tacit knowledge, or
- see purposes as re-interpretations of experience. (This is most interesting, as people tend constantly to re-interpret their experiences and to draw more meaning from them.)
Besides, different team members will perceive the planning process according to their particular gifts, roles and dispositions. Some look at program formulation as primarily political bargaining, others see a consensus-building process, and others will look for a personal purpose.
Program evaluation I: Inputs and processes
If a program achieves its goals, it might still have a quality problem if it is poorly organized, wastes its resources, costs more than you can realistically afford, or costs too much for what it produces.
- Were the means actually effective in reaching the goals? Would other means have been as effective?
- What inputs and resources were required? What alternative strategies could have been used?
- Was the process within the institution's capabilities?
- Were the practical aspects of the design handled well?
- Were processes appropriate and efficient?
- Was the planning procedure appropriate?
- How much did it cost, and was it cost-effective?
- What was the cost in terms of lost alternatives (the opportunity cost)?
- Do the actions suit the goals? (Organizations easily busy themselves with activities that do not support their goals.)
Program evaluation II: Evaluate implementation
By attempting to meet real needs, programs normally run quite differently from the plan. Completely static programs simply don't exist; evaluation and modification start when implementation begins, and sometimes even before then. Programs don't actually produce exactly what they intended, and this is not necessarily bad. Altered goals and side-effects can be more important and desirable than the intended products. On the negative side, program implementers tend to water down major innovations, making them more like past programs with which they have experience.
- What were the agreed-upon goals?
- Did different stakeholders have different perceptions of the goals?
- Did the goals change during implementation? If so, which goals, and how did they change? (Note that changes might not have been written down, and real changes might be perceived as mere "interpretations.")
- How well did program implementation go?
- What side-effects did the program produce?
- Were staff roles appropriate?
- What feedback was given to staff? Was it helpful?
- What suggestions for improvement come from the implementation process?
- How well did we document the program?
One educator even suggested that if someone else were to come into the program and observe what it actually does, they might not draw the same conclusions about its goals as the program developers did.
Program evaluation III: Evaluate achievements
- Did you reach the goals? How did you measure? Would other kinds of measurement give you the same answer?
- What was achieved at the end of the program? For example, a school might look at the caliber of its students at graduation, when they have been through the program. They might look at a culminating result, such as a final thesis or practicum, or at an eventual outcome. How will you measure it?
- What was achieved in the long-term strategy? For example, schools might ask what students eventually do after graduation or their subsequent career paths.
- Were the goals the right ones?
- Was the target population correctly identified?
- Were their needs and underlying problems correctly identified and appropriately addressed?
- How could you express program goals more clearly? Were they too wordy or too brief?
- Was this project appropriate for an organization like ours? (Did it match our goals? Was it within our capabilities?)