Facilities Management



Sections of This Topic Include

Plan Your Facilities
Select the Best Location
Specific Facilities (signage, computers, etc.)
Setting Up an Office
Telecommuting (working from home)
Inventory Management (this section is in “Operations Management”)

Logistics and Transportation Management (this section is in “Operations Management”)

Also consider
Related Library Topics


Plan Your Facilities

Small-business planning often overlooks the critical importance of clarifying
what facilities are needed to support the development and provision of a product
or service, and then planning how to get those facilities. That is especially true
if you are providing a product rather than a service, because a product often
requires space to store the materials and supplies needed to develop and produce
it.

Important decisions about facilities include, for example: How much space do
I need for storage? Production? Personnel? How should the facility represent
my brand, my colors and tone? What about parking? What about expected future
growth? Should I rent or buy?

After answering the above questions and considering the guidelines in the following
articles, write down your requirements for facilities. You, or anyone helping you,
can then continually reference those requirements to ensure that your facilities
meet them.
Strategic Facility Planning — Now More Important Than Ever
What to Consider When Making Business Facility Decisions
Implementing Facilities Management Successfully in 7 Steps
Facility Planning: Steps, Process, Objectives, Importance
50 Expert Facilities Management Tips and Best Practices

Useful Resources in Facilities Planning Include
Planning | Project Planning | Technical Writing

Select the Best Location

Having thought about your needs and preferences for facilities, you
are ready to think about where to locate your business. Decisions include, for
example: Do I want proximity to my customers? Proximity to my suppliers? Distance
from my competitors? What municipalities might grant me tax breaks if
I locate near them? What about parking?

Before you select a location, your answers to these questions should be written
in a specification that you can reference when searching for a location or that
you can bring to a real-estate agent. That way, you will be making the best
choice based on your actual needs, rather than on your personal preferences.
How to Find the Best Location
A Step-by-Step Guide to Finding the Right Location for Your Business
Choosing the Right Location for Your New Business
How to Choose the Best Location for Your Small Business
How to Choose a Business Location

Useful Resources in Selecting a Location Include
Planning | Technical Specifications | Contracting

Specific Facilities

Building

Strategic Facility Planning: A White Paper

HVAC Systems

HVAC (Definition)
Types of HVAC Systems

Offices

See Setting Up an Office below.

Computers

Computer and Network Security
The Use of Computers in Facilities/Installations Planning
Computer Aided Facilities Management

Setting Up An Office

Designing An Effective Home Office

Designing Your Home Office

Introduction to Property Management

(This section is about setting up office facilities. Information about organizing
yourself, your files, paperwork, etc., is included in the topic Organizing Yourself.)

Office Design, Leasing and Organizing

Organizing Yourself (organizing your office and activities, once the office is set up)

Time Management

Telecommuting (virtual workplace)

Basics

Basics of Telecommuting for Companies
The Rise of the Truly Virtual Workplace
Why and how to transition to a virtual workplace

General Resources About Telecommuting

Telecommuting Defined
26 Great Telecommuting Resources
Telecommuting Resource Center
List of sites and articles about telecommuting

General Resources About Facilities Management

USArchitecture.com, a state-by-state directory of architecture, engineering and construction
International Facilities Management Association
Facilities Management News


For the Category of Facilities Management:

To round out your knowledge of this Library topic, you may want to review some related topics, available from the link below. Each of the related topics includes free, online resources.

Also, scan the Recommended Books listed below. They have been selected for their relevance and highly practical nature.

Related Library Topics

Recommended Books


Theory of Change


Theory of Change – Understanding How Any System Works

© Copyright Carter McNamara, MBA, PhD, Authenticity Consulting, LLC.

Much of the content of this topic came from the author’s book on nonprofit programs.

Sections of This Topic Include

Basic Overview of Theory of Change
How to Develop a Theory of Change
Example of a Theory of Change (for Community Collaboration)
Examples of Theory of Change
Special Topics About Theory of Change
Trainings and Resources About Theory of Change

Also consider
Guidelines and Framework for Developing Logic Models
Related Library Topics


Basic Overview of Theory of Change

A system, such as a program, product or service, has a recurring set of activities,
including:

  1. Inputs to the system, such as curriculum materials, funding and expertise
  2. Processes that occur to the inputs, such as trainings, facilitations and
    coaching
  3. Outputs from the processes, such as the number of students trained
  4. Outcomes, or changes among participants, such as new knowledge, skills and
    abilities among the students

Logic models are often used to depict this flow of activities. However, what
is missing from a logic model is a depiction and explanation of how those
activities affect — or are supposed to affect — each other. The theory of
change is extremely useful in that regard.

A logic model can clearly depict the order of the phases in a systematic program,
such as the training program above. However, it does not explain how those phases
are closely integrated to produce the desired outcomes from the program. For
example, the logic model does not explain the assumptions that program designers
make when they conclude that certain processes will produce certain outputs
and outcomes.

The theory of change of the program explains those assumptions. It explains
the assumed causes and effects that program designers can study in order to
understand why a program works or does not work. The theory of change can also
explain what others must do to duplicate or improve similar programs.

Theory of change applies to almost any kind of designed system, including products,
services and programs. Thus, the concept can be extremely useful to any kind
of organization or internal unit in the organization.

Theory of Change vs Logical Framework – what’s the difference?
Differences Between the Theory of Change and the Logic Model

How to Develop a Theory of Change

Theory of Change – When to Use
Theory of Change (a how-to)
Theory of Change (another how-to)
How to Build a Theory of Change

Example of a Theory of Change (for a Community Collaboration)

The following example is based on this logic model for a community collaboration of several nonprofit organizations working together to accomplish a common overall change in the community. The reader
is encouraged to print out that one-page model as he or she reads the following
theory of change. During the example collaboration, the lead organization supports
the organizational development of each partnering organization with assessments,
trainings, coaching and peer learning.

Information in a theory of change is sometimes described in a reverse order
of the parts of the logic model because the primary focus is on – and
starts with – the expected outcomes.

7. Community-level, long-term outcomes:

  • certain social issues will be resolved for a certain group of clients in
    certain geographic area(s)

Program Activities:

These desired outcomes are to reduce the occurrence of gang activity, youth
violence, and child abuse and neglect among 12-21 year-olds in a certain area
by the end of the 3-year Program. The amount of change, and the indicators for
those amounts for each social service, might not yet be determined. Ideally,
the degree of alignment among the participating organizations’ program outcomes
will emerge from the results of the upcoming community assessment and asset
mapping. The occurrence and quality of these outcomes will be evaluated at the
end of the Program.

Assumption:

  • The outcomes (impacts on clients) of the programs of each of the participating
    organizations in the Program are aligned with contributing to the desired
    community-level, long-term outcomes.

6. Long-term outcomes in each of the participating organizations: organizational
effectiveness and positive impacts in the community

Program Activities:

Each organization will undergo capacity-building activities to implement the best
practices in each of the major functions, for example, Boards, strategic planning,
programs, marketing, staffing, finances and fundraising.

Assumption:

The Program’s capacity-building activities will result in each organization’s
successful implementation of best practices that, in turn, will achieve
organizational and program effectiveness for each organization and, in turn,
will result in positive impacts in the community.

5. Intermediate outcomes in each organization: new skills and abilities for
the personnel in the organizations

Program Activities:

Each organization’s personnel will learn about the best practices needed
in each common function of an organization in order to achieve a highly effective
organization, and those personnel will apply those new skills to develop those
new abilities.

Assumption:

The Program’s methods and short-term outcomes will produce sufficient
skills for those personnel to implement best practices and capacity building.

4. Short-term outcomes in each organization: new knowledge for personnel in
the organizations

Description:

Personnel in each organization will gain new knowledge about the necessary
best practices in the most important functions in nonprofits in order to develop
high-performing nonprofits.

Assumption:

The Program’s methods will produce sufficient knowledge about best practices
and capacity building, along with indicators toward that knowledge.

3. Tangible outputs for each organization

Program Activities:

Tangible results will include, for example, evaluation plans, assessments and
reports, action plans, strategic plans, training sessions, coaching sessions,
peer learning sessions, coaches’ notes, facilitators’ notes and
status reports.

Assumptions:

  1. The Program actually uses the desired methods according to the eight principles
    for successful capacity building, as suggested in the Human Interaction Research
    study (listed below in the “Program methods …” section).
  2. The Program methods actually produce these recurring outputs.
  3. Establishment of best practices, and subsequent organizational and program
    effectiveness, will proceed through short-term outcomes (knowledge about capacity
    building), intermediate outcomes (skills to use capacity building to implement
    best practices) and long-term outcomes (having implemented the best practices).
  4. These recurring outputs will contain sufficient information about the status
    of implementation of the best practices such that various levels of outcomes
    can be ascertained.

2. Program methods / interventions (capacity building activities)

Program Activities (capacity building activities):

The Program’s methods/interventions are designed according to the eight
principles for capacity building effectiveness, which are:

  1. Comprehensive (comprehensive assessments are done and a variety of capacity
    building activities are used, including assessments, awards, training, coaching,
    peer learning, etc.).
  2. Customized (according to the life cycle and culture of the organization
    via assessment and interviews).
  3. Competence-based (capacity building plans are customized to the organization’s
    resource level).
  4. Timely (capacity building plans are scheduled according to the organization’s
    resource level).
  5. Peer-connected (a time-tested, peer coaching model is used).
  6. Assessment-based (each organization is assessed via a variety of methods,
    including two different organizational assessment tools and interviews).
  7. Readiness-based (each organization’s readiness is assessed via a readiness
    checklist and interviews).
  8. Contextualized (capacity building continually accommodates/adjusts for other
    current activities within and around the organization).

Assumptions:

  1. Each of the organizations will participate as expected in the Program.
  2. Program personnel will be trained and effective in delivering Program services.
  3. The Program’s capacity building methods ultimately will guide participants
    to implement and operate the best practices.

1. Inputs to the Program

Description:

Among the inputs are nonprofit organizational performance “best practices”
as defined by the United Way Management Indicators Checklist, which is a comprehensive
organizational assessment tool designed by 20 nonprofit organizational development
consultants. The best practices are itemized as approximately 170 specific behaviors
within a nonprofit organization. The best practices will be embellished with
best practices for sustaining a successful collaboration among the participating
organizations.

Other inputs are Program funding, eight participating organizations, consultants,
trainers, coaches, capacity building “best practices” and facilities.

Assumptions:

  1. The Program’s selected “best practices” are those that
    together, when implemented, will achieve organizational and program effectiveness
    for each organization. The definition of “organizational effectiveness”
    has long been under scrutiny. Thus, this Program adopts these operating definitions.
    An “effective” program achieves desired outcomes among its targeted
    group of clients and in the timeframe desired. Also, an “effective”
    organization has ongoing high-quality operations that support ongoing effective
    programs.
  2. The selected organizations each have programs that, together, achieve the
    desired community-level outcomes.
  3. These best practices can be organized into the funder’s mandated four
    areas of outcomes for each participating organization, including development
    of leadership (Board and staff), organizational systems, program operations
    and community engagement/awareness.

Articles About Basics of Theory of Change

What is Theory of Change?
What is this thing called ‘Theory of Change’?
Theory of Change
Theory of change basics: A primer on theory of change
Theory of change
How Does Theory of Change Work?

Examples of Theory of Change

Theory of Change – Examples
Use of Theory of Change in Health Interventions
How can a Theory of Change framework be applied to short-term international volunteering?
Theory of Change for Strategic Planning
Use of Theory of Change in Project Evaluations
Constructing Theories of Change for Information Society Impact Research

Special Topics About Theory of Change

Evaluating a Theory of Change Framework
Six Theory of Change Pitfalls to Avoid
How to and how not to develop a theory of change to evaluate a complex intervention

Trainings and Resources About Theory of Change

Center for Theory of Change
Theory of Change Training Curriculum
Theory of Change for Development
Theory of change (numerous articles)



For the Category of Evaluations (Many Kinds):

To round out your knowledge of this Library topic, you may want to review some related topics, available from the link below. Each of the related topics includes free, online resources.

Also, scan the Recommended Books listed below. They have been selected for their relevance and highly practical nature.

Related Library Topics

Recommended Books


Checklist for Program Evaluation Planning



© Copyright Carter McNamara, MBA, PhD

The following checklist might prove useful when planning evaluations for programs. The reader would benefit from first reading Basic Guide to Program Evaluation.

Name of Organization

Name of Program

Purpose of Evaluation?
What do you want to be able to decide as a result of the evaluation? For example:
__ Understand, verify or increase impact of products or services on customers/clients (e.g., outcomes evaluation)
__ Improve delivery mechanisms to be more efficient and less costly (e.g., process evaluation)
__ Verify that we’re doing what we think we’re doing (e.g., process evaluation)
__ Clarify program goals, processes and outcomes for management planning
__ Public relations
__ Program comparisons, e.g., to decide which should be retained
__ Fully examine and describe effective programs for duplication elsewhere
__ Other reason(s)

Audience(s) for the Evaluation?
Who are the audiences for the information from the evaluation, for example:
__ Clients/customers
__ Funders/Investors
__ Board members
__ Management
__ Staff/employees
__ Other(s)

What Kinds of Information Are Needed?
What kinds of information are needed to make the decision you need to make and/or enlighten your intended audiences, for example, information to understand:
__ The process of the product or service delivery (its inputs, activities and outputs)
__ The customers/clients who experience the product or service
__ Strengths and weaknesses of the product or service
__ Benefits to customers/clients (outcomes)
__ How the product or service failed and why, etc.
__ Other type(s) of information?

Type of Evaluation?
Based on the purpose of the evaluation and the kinds of information needed, what type of evaluation is being planned?
__ Goal-based?
__ Process-based?
__ Outcomes-based?
__ Other(s)?

Where Should Information Be Collected From?
__ Staff/employees
__ Clients/customers
__ Program documentation
__ Funders/Investors
__ Other(s)

How Can Information Be Collected in a Reasonable and Realistic Fashion?
__ questionnaires
__ interviews
__ documentation
__ observing clients/customers
__ observing staff/employees
__ conducting focus groups among clients/customers or staff/employees
__ other(s)

When is the Information Needed?

What Resources Are Available to Collect the Information?


For the Category of Evaluations (Many Kinds):

To round out your knowledge of this Library topic, you may want to review some related topics, available from the link below. Each of the related topics includes free, online resources.

Also, scan the Recommended Books listed below. They have been selected for their relevance and highly practical nature.


Basic Guide to Program Evaluation (Including Many Additional Resources)


Basic Guide to Program Evaluation (Including Outcomes Evaluation)

© Copyright Carter McNamara, MBA, PhD, Authenticity Consulting, LLC.

Much of the content of this topic came from the author’s book on nonprofit programs.

This document provides guidance toward planning and implementing an evaluation process for for-profit or nonprofit programs — there are many kinds of evaluations that can be applied to programs, for example, goals-based, process-based and outcomes-based. Nonprofit organizations are increasingly interested in outcomes-based evaluation. If you are interested in learning more about outcomes-based evaluation, then see the sections Outcomes-Evaluation and Outcomes-Based Evaluations in Nonprofit Organizations.

Sections of This Topic Include

Also consider

Learn More in the Library’s Blogs Related to Program Evaluations

In addition to the articles on this current page, see the following blogs which have posts related to Program Evaluations. Scan down the blog’s page to see various posts. Also see the section “Recent Blog Posts” in the sidebar of the blog or click on “next” near the bottom of a post in the blog.


A Brief Introduction …

Note that the concept of program evaluation can include a wide variety of methods to evaluate many aspects of programs in nonprofit or for-profit organizations. There are numerous books and other materials that provide in-depth analysis of evaluations, their designs, methods, combination of methods and techniques of analysis.

However, personnel do not have to be experts in these topics to carry out a useful program evaluation. The “20-80” rule applies here: 20% of the effort generates 80% of the needed results. It’s better to do what might turn out to be an average effort at evaluation than to do no evaluation at all. (Besides, if you resort to bringing in an evaluation consultant, you should be a smart consumer. Far too many program evaluations generate information that is either impractical or irrelevant — if the information is understood at all.)

This document orients personnel to the nature of program evaluation and how it can be carried out in a realistic and practical fashion.

Note that much of the information in this section was gleaned from various works of Michael Quinn Patton.


Program Evaluation

Some Myths About Program Evaluation

1. Many people believe evaluation is a useless activity that generates lots of boring data with useless conclusions. This was a problem with evaluations in the past when program evaluation methods were chosen largely on the basis of achieving complete scientific accuracy, reliability and validity.

This approach often generated extensive data from which very carefully chosen conclusions were drawn. Generalizations and recommendations were avoided. As a result, evaluation reports tended to reiterate the obvious and left program administrators disappointed and skeptical about the value of evaluation in general. More recently (especially as a result of Michael Patton’s development of utilization-focused evaluation), evaluation has focused on utility, relevance and practicality at least as much as scientific validity.

2. Many people believe that evaluation is about proving the success or failure of a program. This myth assumes that success is implementing the perfect program and never having to hear from employees, customers or clients again — the program will now run itself perfectly. This doesn’t happen in real life. Success is remaining open to continuing feedback and adjusting the program accordingly. Evaluation gives you this continuing feedback.

3. Many believe that evaluation is a highly unique and complex process that occurs at a certain time in a certain way, and almost always includes the use of outside experts. Many people believe they must completely understand terms such as validity and reliability. They don’t have to. They do have to consider what information they need in order to make current decisions about program issues or needs. And they have to be willing to commit to understanding what is really going on.

Note that many people regularly undertake some form of program evaluation — they just don’t do it in
a formal fashion, so they don’t get the most out of their efforts, or they draw conclusions that are inaccurate (some evaluators would disagree that this is program evaluation if it is not done methodically).
Consequently, they miss precious opportunities to make more of a difference for their customers and clients, or to get a bigger bang for their buck.

So What is Program Evaluation?

First, we’ll consider “what is a program?” Typically, organizations work from their mission to identify several overall goals which must be reached to accomplish their mission. In nonprofits, each of these goals often becomes a program. Nonprofit programs are organized methods to provide certain related services to constituents, e.g., clients, customers, patients, etc. Programs must be evaluated to decide if the programs are indeed useful to constituents. In a for-profit, a program is often a one-time effort to produce a new product or line of products.

So, still, what is program evaluation? Program evaluation is carefully collecting information about a program or some aspect of a program in order to make necessary decisions about the program. Program
evaluation can include any of at least 35 different types of evaluation, such as needs assessments, accreditation, cost/benefit analysis, effectiveness, efficiency, formative, summative, goal-based, process, outcomes, etc.

The type of evaluation you undertake to improve your programs depends on what you want to learn about the program. Don’t worry about what type of evaluation you need or are doing — worry about what you need to know to make the program decisions you need to make, and worry about how you can accurately collect and understand that information.


Where Program Evaluation is Helpful

Frequent Reasons:

Program evaluation can:
1. Understand, verify or increase the impact of products or services on customers or clients – These “outcomes” evaluations are increasingly required by nonprofit funders as verification that the nonprofits are indeed helping their constituents. Too often, service providers (for-profit or nonprofit) rely on their
own instincts and passions to conclude what their customers or clients really need and whether the products or services are providing what is needed.

Over time, these organizations find themselves doing a lot of guessing about what would be a good product or service, and relying on trial and error to figure out how new products or services should be delivered.

2. Improve delivery mechanisms to be more efficient and less costly – Over time, product or service delivery can turn into a collection of activities that are less efficient and more costly than they need to be. Evaluations can identify program strengths and weaknesses to improve the program.

3. Verify that you’re doing what you think you’re doing – Typically, plans about how to deliver services end up changing substantially as those plans are put into place. Evaluations can verify whether the program is really running as originally planned.

Other Reasons:

Program evaluation can:
4. Facilitate management’s really thinking about what their program is all about, including its goals, how it meets its goals and how it will know whether it has met its goals or not.
5. Produce data or verify results that can be used for public relations and promoting services in the community.
6. Produce valid comparisons between programs to decide which should be retained, e.g., in the face of pending budget cuts.
7. Fully examine and describe effective programs for duplication elsewhere.


Basic Ingredients: Organization and Program(s)

You Need An Organization:

This may seem too obvious to discuss, but before an organization embarks on evaluating a program, it should have well established means to conduct itself as an organization, e.g., (in the case of a nonprofit) the board should be in good working order, the organization should be staffed and organized to conduct activities to work toward the mission of the organization, and there should be no current crisis that is clearly more important to address than evaluating programs.

You Need Program(s):

To effectively conduct program evaluation, you should first have programs. That is, you need a strong impression of what your customers or clients actually need. (You may have used a needs assessment to determine these needs — itself a form of evaluation, but usually the first step in a good marketing plan). Next, you need some effective methods to meet each of those needs. These methods are usually in the form of programs.

It often helps to think of your programs in terms of inputs, process, outputs and outcomes. Inputs are the various resources needed to run the program, e.g., money, facilities, customers, clients, program staff, etc. The process is how the program is carried out, e.g., customers are served, clients are counseled, children are cared for, art is created, association members are supported, etc.

The outputs are the units of service, e.g., number of customers serviced, number of clients counseled, children cared for, artistic pieces produced, or members in the association. Outcomes are the impacts on the customers or on clients receiving services, e.g., increased mental health, safe and secure development, richer artistic appreciation and perspectives in life, increased effectiveness among members, etc.


Planning Your Program Evaluation

Depends on What Information You Need to Make Your Decisions and On Your Resources.

Often, management wants to know everything about their products, services or programs. However, limited resources usually force managers to prioritize what they need to know to make current decisions.

Your program evaluation plans depend on what information you need to collect in order to make major decisions. Usually, management is faced with having to make major decisions due to decreased funding, ongoing complaints, unmet needs among customers and clients, the need to polish service delivery, etc. For example, do you want to know more about what is actually going on in your programs, whether your programs are meeting their goals, the impact of your programs on customers, etc? You may want other information or a combination of these. Ultimately, it’s up to you.

But the more focused you are about what you want to examine by the evaluation, the more efficient you can be in your evaluation, the shorter the time it will take you and ultimately the less it will cost you (whether in your own time, the time of your employees and/or the time of a consultant).

There are trade offs, too, in the breadth and depth of information you get. The more breadth you want, usually the less depth you get (unless you have a great deal of resources to carry out the evaluation). On the other hand, if you want to examine a certain aspect of a program in great detail, you will likely not get as much information about other aspects of the program.

Those starting out in program evaluation, or who have very limited resources, can use various methods to get a good mix of breadth and depth of information. They can both understand more about certain areas of their programs and not go bankrupt doing so.

Key Considerations:

Consider the following key questions when designing a program evaluation.

1. For what purposes is the evaluation being done, i.e., what do you want to be able to decide as a result of the evaluation?

2. Who are the audiences for the information from the evaluation, e.g., bankers, funders, board, management, staff, customers, clients, etc.?

3. What kinds of information are needed to make the decision you need to make and/or enlighten your intended audiences, e.g., information to really understand the process of the product or program (its inputs, activities and outputs), the customers or clients who experience the product or program, strengths and weaknesses of the product or program, benefits to customers or clients (outcomes), how the product or program failed and why, etc.

4. From what sources should the information be collected, e.g., employees, customers, clients, groups of customers or clients and employees together, program documentation, etc.
5. How can that information be collected in a reasonable fashion, e.g., questionnaires, interviews, examining documentation, observing customers or employees, conducting focus groups among customers or employees, etc.

6. When is the information needed (so, by when must it be collected)?

7. What resources are available to collect the information?






Some Major Types of Program Evaluation

When designing your evaluation approach, it may be helpful to review the following three types of evaluations, which are rather common in organizations. Note that you should not design your evaluation approach simply by choosing which of the following three types you will use — you should design your evaluation approach by carefully addressing the above key considerations.

Goals-Based Evaluation

Often programs are established to meet one or more specific goals. These goals are often described in the original program plans.

Goal-based evaluations assess the extent to which programs are meeting predetermined goals or objectives. Questions to ask yourself when designing an evaluation to see if you reached your goals are:

1. How were the program goals (and objectives, if applicable) established? Was the process effective?
2. What is the status of the program’s progress toward achieving the goals?
3. Will the goals be achieved according to the timelines specified in the program implementation or operations plan? If not, then why?
4. Do personnel have adequate resources (money, equipment, facilities, training, etc.) to achieve the goals?
5. How should priorities be changed to put more focus on achieving the goals? (Depending on the context, this question might be viewed as a program management decision, more than an evaluation question.)
6. How should timelines be changed (be careful about making these changes – know why efforts are behind schedule before timelines are changed)?
7. How should goals be changed (be careful about making these changes – know why efforts are not achieving the goals before changing the goals)? Should any goals be added or removed? Why?
8. How should goals be established in the future?

Process-Based Evaluations

Process-based evaluations are geared to fully understanding how a program works — how it produces the results that it does. These evaluations are useful if programs are long-standing and have changed over the years, if employees or customers report a large number of complaints about the program, or if there appear to be large inefficiencies in delivering program services. They are also useful for accurately portraying to outside parties how a program truly operates (e.g., for replication elsewhere).

There are numerous questions that might be addressed in a process evaluation. These questions can be selected by carefully considering what is important to know about the program. Examples of questions to ask yourself when designing an evaluation to understand and/or closely examine the processes in your programs, are:

1. On what basis do employees and/or the customers decide that products or services are needed?
2. What is required of employees in order to deliver the product or services?
3. How are employees trained about how to deliver the product or services?
4. How do customers or clients come into the program?
5. What is required of customers or clients?
6. How do employees select which products or services will be provided to the customer or client?
7. What is the general process that customers or clients go through with the product or program?
8. What do customers or clients consider to be strengths of the program?
9. What do staff consider to be strengths of the product or program?
10. What typical complaints are heard from employees and/or customers?
11. What do employees and/or customers recommend to improve the product or program?
12. On what basis do employees and/or customers decide that the product or services are no longer needed?

Outcomes-Based Evaluation

Program evaluation with an outcomes focus is increasingly important for nonprofits and asked for by funders. An outcomes-based evaluation facilitates your asking if your organization is really doing the
right program activities to bring about the outcomes you believe (or better yet, you’ve verified) to be needed by your clients (rather than just engaging in busy activities which seem reasonable to do at the time).

Outcomes are benefits to clients from participation in the program. Outcomes are usually in terms of enhanced learning (knowledge, perceptions/attitudes or skills) or conditions, e.g., increased literacy, self-reliance, etc. Outcomes are often confused with program outputs or units of services, e.g., the number of clients who went through a program.

The United Way of America (http://www.unitedway.org/outcomes/) provides an excellent overview of outcomes-based evaluation, including introduction to outcomes measurement, a program outcome model, why to measure outcomes, use of program outcome findings by agencies, eight steps to success for measuring outcomes, examples of outcomes and outcome indicators for various programs and the resources needed for measuring outcomes. The following information is a top-level summary of information from this site.

To accomplish an outcomes-based evaluation, you should first pilot, or test, this evaluation approach on one or two programs at most (before doing all programs).

The general steps to accomplish an outcomes-based evaluation include:

1. Identify the major outcomes that you want to examine or verify for the program under evaluation. You might reflect on your mission (the overall purpose of your organization) and ask yourself what impacts you will have on your clients as you work towards your mission.

For example, if your overall mission is to provide shelter and resources to abused women, then ask yourself what benefits this will have on those women if you effectively provide them shelter and other services or resources. As a last resort, you might ask yourself, “What major activities are we doing now?” and then for each activity, ask “Why are we doing that?” The answer to this “Why?” question is usually an outcome.

This “last resort” approach, though, may just end up justifying ineffective activities you are doing now, rather than examining what you should be doing in the first place.

2. Choose the outcomes that you want to examine, prioritize the outcomes and, if your time and resources are limited, pick the top two to four most important outcomes to examine for now.

3. For each outcome, specify what observable measures, or indicators, will suggest that you’re achieving that key outcome with your clients. This is often the most important and enlightening step in outcomes-based evaluation.

However, it is often the most challenging and even confusing step, too, because you’re suddenly going from a rather intangible concept, e.g., increased self-reliance, to specific activities, e.g., supporting clients to get themselves to and from work, staying off drugs and alcohol, etc. It helps to have a “devil’s advocate” during this phase of identifying indicators, i.e., someone who can question why you can assume that an outcome was reached because certain associated indicators were present.

4. Specify a “target” goal of clients, i.e., what number or percent of clients you commit to achieving specific outcomes with, e.g., “increased self-reliance (an outcome) for 70% of adult, African American women living in the inner city of Minneapolis as evidenced by the following measures (indicators) …”

5. Identify what information is needed to show these indicators, e.g., you’ll need to know how many clients in the target group went through the program, how many of them reliably undertook their own transportation to work and stayed off drugs, etc. If your program is new, you may need to evaluate the process in the program to verify that the program is indeed carried out according to your original plans.

(Michael Patton, prominent researcher, writer and consultant in evaluation, suggests that the most important type of evaluation to carry out may be this implementation evaluation to verify that your program ended up to be implemented as you originally planned.)

6. Decide how that information can be efficiently and realistically gathered (see Selecting Which Methods to Use below). Consider program documentation, observation of program personnel and clients in the program, questionnaires and interviews about clients’ perceived benefits from the program, case studies of program failures and successes, etc. You may not need all of the above (see Overview of Methods to Collect Information below).

7. Analyze and report the findings (see Analyzing and Interpreting Information below).


Overview of Methods to Collect Information

The following provides an overview of the major methods used for collecting data during evaluations.

Questionnaires, surveys, checklists
Overall purpose: when you need to quickly and/or easily get lots of information from people in a non-threatening way.
Advantages: can be completed anonymously; inexpensive to administer; easy to compare and analyze; can be administered to many people; can gather lots of data; many sample questionnaires already exist.
Challenges: might not get careful feedback; wording can bias the client’s responses; impersonal; in surveys, may need a sampling expert; doesn’t get the full story.

Interviews
Overall purpose: when you want to fully understand someone’s impressions or experiences, or learn more about their answers to questionnaires.
Advantages: get the full range and depth of information; develops a relationship with the client; can be flexible with the client.
Challenges: can take much time; can be hard to analyze and compare; can be costly; the interviewer can bias the client’s responses.

Documentation review
Overall purpose: when you want an impression of how the program operates without interrupting the program; based on a review of applications, finances, memos, minutes, etc.
Advantages: get comprehensive and historical information; doesn’t interrupt the program or the client’s routine in the program; information already exists; few biases about the information.
Challenges: often takes much time; information may be incomplete; need to be quite clear about what you are looking for; not a flexible means to get data, since data are restricted to what already exists.

Observation
Overall purpose: to gather accurate information about how a program actually operates, particularly about processes.
Advantages: view the operations of a program as they are actually occurring; can adapt to events as they occur.
Challenges: can be difficult to interpret observed behaviors; can be complex to categorize observations; can influence the behaviors of program participants; can be expensive.

Focus groups
Overall purpose: to explore a topic in depth through group discussion, e.g., reactions to an experience or suggestion, understanding common complaints, etc.; useful in evaluation and marketing.
Advantages: quickly and reliably get common impressions; can be an efficient way to get much range and depth of information in a short time; can convey key information about programs.
Challenges: can be hard to analyze responses; need a good facilitator for safety and closure; difficult to schedule 6-8 people together.

Case studies
Overall purpose: to fully understand or depict a client’s experience in a program, and to conduct a comprehensive examination through cross comparison of cases.
Advantages: fully depicts the client’s experience in program input, process and results; powerful means to portray the program to outsiders.
Challenges: usually quite time-consuming to collect, organize and describe; represents depth of information, rather than breadth.

Also consider

Ethics: Informed Consent from Program Participants

Note that if your evaluation will focus on, and report, personal information about the customers or clients participating in the evaluation, then you should first gain their consent to do so. They should understand what you’re doing with them in the evaluation and how any information associated with them will be reported.

You should clearly convey terms of confidentiality regarding access to evaluation results. They should have the right to participate or not. Have participants review and sign an informed consent form. See the sample informed-consent form.

How to Apply Certain Methods


Selecting Which Methods to Use

Overall Goal in Selecting Methods:

The overall goal in selecting evaluation method(s) is to get the most useful information to key decision makers in the most cost-effective and realistic fashion. Consider the following questions:

1. What information is needed to make current decisions about a product or program?
2. Of this information, how much can be collected and analyzed in a low-cost and practical manner, e.g., using questionnaires, surveys and checklists?
3. How accurate will the information be (reference the above table for disadvantages of methods)?
4. Will the methods get all of the needed information?
5. What additional methods should and could be used if additional information is needed?
6. Will the information appear as credible to decision makers, e.g., to funders or top management?
7. Will the nature of the audience conform to the methods, e.g., will they fill out questionnaires carefully, engage in interviews or focus groups, let you examine their documentation, etc.?
8. Who can administer the methods now or is training required?
9. How can the information be analyzed?

Note that, ideally, the evaluator uses a combination of methods, for example, a questionnaire to quickly collect a great deal of information from a lot of people, and then interviews to get more in-depth information from certain respondents to the questionnaires.

Perhaps case studies could then be used for more in-depth analysis of unique and notable cases, e.g., those who benefited or not from the program, those who quit the program, etc.

Four Levels of Evaluation:

There are four levels of evaluation information that can be gathered from clients, including getting their:

1. reactions and feelings (feelings are often poor indicators that your service made a lasting impact)
2. learning (enhanced attitudes, perceptions or knowledge)
3. changes in skills (applied the learning to enhance behaviors)
4. effectiveness (improved performance because of enhanced behaviors)

Usually, the farther your evaluation information gets down the list, the more useful is your evaluation. Unfortunately, it is quite difficult to reliably get information about effectiveness. Still, information about learning and skills is quite useful.


Analyzing and Interpreting Information

Analyzing quantitative and qualitative data is often the topic of advanced research and evaluation methods. There are certain basics which can help to make sense of reams of data.

Always start with your evaluation goals:
When analyzing data (whether from questionnaires, interviews, focus groups, or whatever), always start from review of your evaluation goals, i.e., the reason you undertook the evaluation in the first place. This will help you organize your data and focus your analysis.

For example, if you wanted to improve your program by identifying its strengths and weaknesses, you can organize data into program strengths, weaknesses and suggestions to improve the program. If you wanted to fully understand how your program works, you could organize data in the chronological order in which clients go through your program.

If you are conducting an outcomes-based evaluation, you can categorize data according to the indicators
for each outcome.

Basic analysis of “quantitative” information
(for information other than commentary, e.g., ratings, rankings, yes’s, no’s, etc.):

1. Make copies of your data and store the master copy away. Use the copy for making edits, cutting and pasting, etc.
2. Tabulate the information, i.e., add up the number of ratings, rankings, yes’s, no’s for each question.
3. For ratings and rankings, consider computing a mean, or average, for each question. For example, “For question #1, the average ranking was 2.4”. This is more meaningful than indicating, e.g., how many respondents ranked 1, 2, or 3.
4. Consider conveying the range of answers, e.g., 20 people ranked “1”, 30 ranked “2”, and 20 people ranked “3”.
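As an illustration of steps 2-4 above, here is a minimal sketch, in Python, of how ratings could be tabulated, averaged and summarized. The question labels and ratings below are hypothetical placeholders, not data from any actual evaluation; a spreadsheet works just as well for this kind of tabulation.

from collections import Counter
from statistics import mean

# Hypothetical ratings (1 = low, 3 = high) collected for two questionnaire items.
responses = {
    "Q1: Overall, the program met my needs": [1, 2, 2, 3, 3, 3, 2],
    "Q2: Staff were responsive to my requests": [2, 3, 3, 1, 2, 3, 3],
}

for question, ratings in responses.items():
    counts = Counter(ratings)   # tabulate how many respondents gave each rating
    average = mean(ratings)     # average rating for the question
    spread = ", ".join(f"{n} ranked '{value}'" for value, n in sorted(counts.items()))
    print(f"{question}\n  average ranking: {average:.1f}\n  range of answers: {spread}\n")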

Basic analysis of “qualitative” information
(respondents’ verbal answers in interviews, focus groups, or written commentary on questionnaires):

1. Read through all the data.
2. Organize comments into similar categories, e.g., concerns, suggestions, strengths, weaknesses, similar experiences, program inputs, recommendations, outputs, outcome indicators, etc.
3. Label the categories or themes, e.g., concerns, suggestions, etc.
4. Attempt to identify patterns, or associations and causal relationships in the themes, e.g., all people who attended programs in the evening had similar concerns, most people came from the same geographic area, most people were in the same salary range, what processes or events respondents experience during the program, etc.
5. Keep all commentary for several years after completion in case it is needed for future reference.
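As an illustration of steps 2 and 3 above, the following minimal sketch, in Python, groups written comments into labeled categories. The comments, category labels and keywords are hypothetical placeholders; in practice, coding qualitative data relies on a person reading and judging each comment, not on simple keyword matching.

# Simplistic, keyword-based grouping of hypothetical comments into themes.
# Real qualitative coding relies on human judgment; this only illustrates the bookkeeping.
comments = [
    "The evening sessions ran too late for parents with young children.",
    "I suggest offering the workshop on weekends as well.",
    "The coaching was a real strength of the program.",
    "Too late in the evening; hard to arrange child care.",
]

themes = {
    "concerns": ["too late", "hard to"],
    "suggestions": ["suggest", "recommend"],
    "strengths": ["strength", "helpful"],
}

grouped = {theme: [] for theme in themes}
for comment in comments:
    for theme, keywords in themes.items():
        if any(keyword in comment.lower() for keyword in keywords):
            grouped[theme].append(comment)

for theme, items in grouped.items():
    print(f"{theme} ({len(items)} comments)")
    for item in items:
        print(f"  - {item}")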

Interpreting Information:

1. Attempt to put the information in perspective, e.g., compare results to what you expected or promised; to the expectations of management or program staff; to any common standards for your services; to the original program goals (especially if you’re conducting a program evaluation); to indications of accomplishing outcomes (especially if you’re conducting an outcomes evaluation); or to descriptions of the program’s experiences, strengths, weaknesses, etc. (especially if you’re conducting a process evaluation).

2. Consider recommendations to help program staff improve the program, conclusions about program operations or meeting goals, etc.

3. Record conclusions and recommendations in a report document, and associate interpretations to justify your conclusions or recommendations.


Reporting Evaluation Results

1. The level and scope of the content depend on the intended audience of the report, e.g., bankers, funders, employees, customers, clients, the public, etc.

2. Be sure employees have a chance to carefully review and discuss the report. Translate recommendations to action plans, including who is going to do what about the program and by when.

3. Bankers or funders will likely require a report that includes an executive summary (this is a summary of conclusions and recommendations, not a listing of what sections of information are in the report — that’s a table of contents); description of the organization and the program under evaluation; explanation of the evaluation goals, methods, and analysis procedures; listing of conclusions and recommendations; and any relevant attachments, e.g., inclusion of evaluation questionnaires, interview guides, etc. The banker or funder may want the report to be delivered as a presentation, accompanied by an overview of the report. Or, the banker or funder may want to review the report alone.

4. Be sure to record the evaluation plans and activities in an evaluation plan which can be referenced when a similar program evaluation is needed in the future.

Contents of an Evaluation Report — Example

An example of evaluation report contents is included later in this document. Click Contents of an Evaluation Plan, but don’t forget to look at the next section, “Who Should Carry Out the Evaluation?”


Who Should Carry Out the Evaluation?

Ideally, management decides what the evaluation goals should be. Then an evaluation expert helps the organization to determine what the evaluation methods should be, and how the resulting data will be analyzed and reported back to the organization. Most organizations do not have the resources to carry out the ideal evaluation.

Still, they can do the 20% of effort needed to generate 80% of what they need to know to make a decision about a program. If they can afford any outside help at all, it should be for identifying the appropriate evaluation methods and how the data can be collected. The organization might find a less expensive resource to apply the methods, e.g., conduct interviews, send out and analyze results of questionnaires, etc.

If no outside help can be obtained, the organization can still learn a great deal by applying the methods and analyzing results themselves. However, there is a strong chance that data about the strengths and weaknesses of a program will not be interpreted fairly if the data are analyzed by the people responsible for ensuring the program is a good one.

Program managers will be “policing” themselves. This caution is not to fault program managers, but to recognize the strong biases inherent in trying to objectively look at and publicly (at least within the organization) report about their programs. Therefore, if at all possible, have someone other than the program managers look at and determine evaluation results.


Contents of an Evaluation Plan

Develop an evaluation plan to ensure your program evaluations are carried out efficiently in the future. Note that bankers or funders may want or benefit from a copy of this plan.

Ensure your evaluation plan is documented so you can regularly and efficiently carry out your evaluation activities. Record enough information in the plan so that someone outside of the organization can understand what you’re evaluating and how. Consider the following format for your report:

1. Title Page (name of the organization that is being, or has a product/service/program that is being, evaluated; date)
2. Table of Contents
3. Executive Summary (one-page, concise overview of findings and recommendations)
4. Purpose of the Report (what type of evaluation(s) was conducted, what decisions are being aided by the findings of the evaluation, who is making the decision, etc.)
5. Background About Organization and Product/Service/Program that is being evaluated
a) Organization Description/History
b) Product/Service/Program Description (that is being evaluated)
i) Problem Statement (in the case of nonprofits, description of the community need that is being met by the product/service/program)
ii) Overall Goal(s) of Product/Service/Program
iii) Outcomes (or client/customer impacts) and Performance Measures (that can be measured as indicators toward the outcomes)
iv) Activities/Technologies of the Product/Service/Program (general description of how the product/service/program is developed and delivered)
v) Staffing (description of the number of personnel and roles in the organization that are relevant to developing and delivering the product/service/program)
6) Overall Evaluation Goals (e.g., what questions are being answered by the evaluation)
7) Methodology
a) Types of data/information that were collected
b) How data/information were collected (what instruments were used, etc.)
c) How data/information were analyzed
d) Limitations of the evaluation (e.g., cautions about findings/conclusions and how to use the findings/conclusions, etc.)
8) Interpretations and Conclusions (from analysis of the data/information)
9) Recommendations (regarding the decisions that must be made about the product/service/program)
Appendices: content of the appendices depends on the goals of the evaluation report, e.g.:
a) Instruments used to collect data/information
b) Data, e.g., in tabular format, etc.
c) Testimonials, comments made by users of the product/service/program
d) Case studies of users of the product/service/program
e) Any related literature


Pitfalls to Avoid

1. Don’t balk at evaluation because it seems far too “scientific.” It’s not. Usually the first 20% of effort will generate the first 80% of the plan, and this is far better than nothing.
2. There is no “perfect” evaluation design. Don’t worry about the plan being perfect. It’s far more important to do something, than to wait until every last detail has been tested.
3. Work hard to include some interviews in your evaluation methods. Questionnaires don’t capture “the story,” and the story is usually the most powerful depiction of the benefits of your services.
4. Don’t interview just the successes. You’ll learn a great deal about the program by understanding its failures, dropouts, etc.
5. Don’t throw away evaluation results once a report has been generated. Results don’t take up much room, and they can provide precious information later when trying to understand changes in the program.


Online Guides

Outcomes-Evaluation

General Resources

(Thanks to Gene Shackman for suggesting many of the following resources.)


For the Category of Evaluations (Many Kinds):

To round out your knowledge of this Library topic, you may want to review some related topics, available from the link below. Each of the related topics includes free, online resources.

Also, scan the Recommended Books listed below. They have been selected for their relevance and highly practical nature.


Framework for a Basic Outcomes-Based Evaluation Plan



© Copyright Carter McNamara, MBA, PhD, Authenticity Consulting, LLC.

Much of the content of this topic came from the author’s book on nonprofit programs.

Also consider
Related Library Topics

Learn More in the Library’s Blogs Related to Outcomes Evaluations

In addition to the articles on this current page, see the following blogs which have posts related to Outcomes Evaluations. Scan down the blog’s page to see various posts. Also see the section “Recent Blog Posts” in the sidebar of the blog or click on “next” near the bottom of a post in the blog.

Library’s Business Planning Blog

Library’s Building a Business Blog

Library’s Strategic Planning Blog


Description

The following framework can be filled in by readers to complete a basic outcomes-based evaluation plan. The guidelines for completing the plan are contained in the Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations With Very Limited Resources.

NOTE: Outcomes-based evaluation is but one type of evaluation that can be applied to programs. Thus, a nonprofit might well benefit from first completing a more general evaluation plan, the results of which, in turn, can suggest the need for an outcomes-based evaluation plan. Read more about general program evaluation plans.


Outcomes-Based Evaluation Plan

for

Organization (Name) _____________

Or Program (Name) ______________

Very likely, you will require a table that is larger than the example below. The following is to give you an impression of at least one possible format for organizing outcomes-planning information.

NOTE: The plan table has six columns. Its column headers are:

Outcome | Indicator(s) | Source of Data (records, clients, etc.) | Method to Collect Data (questionnaires, interviews, etc.) | Who Collects Data | When to Collect Data

The rows are left blank for you to fill in, one row per outcome, listing that outcome’s indicators along with how, by whom, and when the data for each indicator will be collected.

Return to Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations With Very Limited Resources.



Related Library Topics

Recommended Books


Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources


© Copyright Carter McNamara, MBA, PhD, Authenticity Consulting, LLC.


Description

This document provides guidance for basic planning and implementation of an outcomes-based evaluation process (also called outcomes evaluation) in nonprofit organizations, particularly small nonprofits with very limited resources.

NOTE: This free, basic, online guide makes occasional references to certain pages in the United Way of America’s book, Measuring Program Outcomes: A Practical Approach (1996). That United Way book is an excellent resource! However, it can be somewhat overwhelming for nonprofits that have very limited resources. This free online guide (that you are reading now) can help nonprofits carry out their own basic outcomes evaluation planning. This online guide can also help small nonprofits make the most of that United Way book; however, you do not have to have that United Way book in order to carry out your own basic outcomes evaluation plan by using this online guide. (Still, small nonprofits are encouraged to get the United Way book, for example, to later round out basic evaluation plans developed from this online guide and/or to learn more about outcomes evaluation than is provided in this basic guide. To get the United Way book, call 703-212-6300 and ask about item #0989.)

NOTE: Outcomes-based evaluation is but one of many types of evaluation. The reader would gain a deeper understanding of outcomes-based evaluation by reading about the broader topic of evaluation. To do so, read the Basic Guide to Program Evaluation. This online basic guide about outcomes-based evaluation was designed by modifying the Basic Guide to Program Evaluation.


Table of Contents

Also consider
Related Library Topics

Learn More in the Library’s Blogs Related to Outcomes Evaluations

In addition to the articles on this current page, see the following blogs which have posts related to Outcomes Evaluations. Scan down the blog’s page to see various posts. Also see the section “Recent Blog Posts” in the sidebar of the blog or click on “next” near the bottom of a post in the blog.


Reasons for Priority on Implementing Outcomes-Based Evaluation

  • There are decreasing funds for nonprofits
  • Yet there are increasing community needs
  • Thus, there is more focus on whether nonprofit programs are really making a difference — and outcomes evaluation focuses on whether programs are really making a difference for clients
  • Previous evaluation measures focused on, for example, how much money was spent, the number of people served, and client satisfaction; these measures don’t really assess impacts on clients
  • Outcomes evaluation looks at impacts/benefits to clients during and after participation in your programs

Basic Principles for Small Nonprofits to Remember Before Starting

Nonprofit personnel do not have to be experts in outcomes-based evaluation in order to carry out a useful outcomes evaluation plan.

  • In most major activities in life and work, there is a “20% of effort that generates 80% of the results”. This basic guide will give you the direction to accomplish that 20% needed to develop an outcomes evaluation plan for your organization.
  • Once you’ve carried out the guidelines in this basic guide, you can probably let experience and funders help you with the rest of your outcomes evaluation planning, particularly as you implement your evaluation plan during its first year.
  • In life (particularly for us adults), problems exist often because we’re making things far too complex, not because we’re making things far too simple. Often, people who are new to evaluation get “mindcramp”, that is, they think too hard about evaluation. It’s actually a fairly simple notion — just don’t think so hard about it!
  • Start small, start now and grow as you’re able.
  • Ready, fire, aim!

What is Outcomes-Based Evaluation?

A Basic Definition

As noted above, outcomes evaluation looks at impacts/benefits/changes to your clients (as a result of your program’s efforts) during and/or after their participation in your programs. Outcomes evaluation can examine these changes in the short term, intermediate term and long term (we’ll talk more about this later on below).

Basic Components and Key Terms in Outcomes Evaluation

Outcomes evaluation is often described first by looking at its basic components. Outcomes evaluation looks at programs as systems that have inputs, activities/processes, outputs and outcomes — this system’s view is useful in examining any program!

  • Inputs – These are materials and resources that the program uses in its activities, or processes, to serve clients, e.g., equipment, staff, volunteers, facilities, money, etc. These are often easy to identify, and many of the inputs seem common to many organizations and programs.
  • Activities – These are the activities, or processes, that the program undertakes with/to the client in order to meet the client’s needs, for example, teaching, counseling, sheltering, feeding, clothing, etc. Note that when identifying the activities in a program, the focus is still pretty much on the organization or program itself, and not so much on actual changes in the client.
  • Outputs – These are the units of service regarding your program, for example, the number of people taught, counseled, sheltered, fed, clothed, etc. The number of clients served, books published, etc., very often indicates nothing at all about the actual impacts/benefits/changes in the clients who went through the program; it merely indicates how many clients went through your program.
  • Outcomes – These are actual impacts/benefits/changes for participants during or after your program. For example, for a smoking cessation program, an outcome might be “participants quit smoking” (notice that this outcome is quite different from an output such as the “number of clients who went through the cessation program”).
    — These changes, or outcomes, are usually expressed in terms of:
    — — knowledge and skills (these are often considered to be rather short-term outcomes)
    — — behaviors (these are often considered to be rather intermediate-term outcomes)
    — — values, conditions and status (these are often considered to be rather long-term outcomes)
  • Outcome targets – These are the number and percentage of participants who you want to achieve the outcome, for example, an outcome target of 5,000 teens (10% of teens in Indianapolis) who quit smoking over the next year.
  • Outcome indicators – These are observable and measurable “milestones” toward an outcome target. They are what you’d see, hear, read, etc., that would indicate whether you’re making progress toward your outcome target, for example, the number and percentage of teen participants who quit smoking right after the program and six months after the program. These indicators give you a strong impression as to whether 5,000 teens will quit smoking over the next year as a result of completing your program.

NOTE: Take a few minutes and really notice the differences between:
— Outputs (which indicate hardly anything about the changes in clients — they’re usually just numbers)
— Outcomes (which indicate true changes in your clients)
— Outcome targets (which specify how much of your outcome you hope to achieve)
— Outcome indicators (which you can see, hear, read, etc., and which suggest whether you’re making progress toward your outcome target)

Typically, the above concepts are organized into a logic model, which depicts the general order in which the concepts are integrated with each other. For more clarity, see Guidelines and Framework for Developing a Basic Logic Model.
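To make the distinctions above more concrete, here is a minimal sketch, in Python, of one way an outcome, its target and its indicators could be recorded together. It is only an illustration based on the smoking-cessation example above; the class name, fields and numbers are assumptions, not part of the guide.

# A minimal, illustrative sketch of recording an outcome with its target and indicators.
# All names and numbers are assumptions based on the smoking-cessation example above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Outcome:
    description: str        # the change in clients, e.g., "participants quit smoking"
    term: str               # "short-term", "intermediate" or "long-term"
    target_number: int      # how many participants you hope achieve the outcome
    target_percent: float   # that number as a percentage of the participant group
    indicators: List[str] = field(default_factory=list)  # observable, measurable milestones

quit_smoking = Outcome(
    description="Teen participants quit smoking",
    term="long-term",
    target_number=5000,
    target_percent=10.0,
    indicators=[
        "Number and percent of teen participants not smoking right after the program",
        "Number and percent of teen participants not smoking six months after the program",
    ],
)

print(quit_smoking.description, "-", quit_smoking.indicators)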


Common Myths to Get Out of the Way Before You Start Planning

Myth: Evaluation is a complex science. I don’t have time to learn it!

No! It’s a practical activity. If you can run an organization, you can surely implement an evaluation process!

Myth: It’s an event to get over with and then move on!

No! Outcomes evaluation is an ongoing process. It takes months to develop, test and polish — however, many of the activities required to carry out outcomes evaluation are activities that you’re either already doing or you should be doing. Read on.

Myth: Evaluation is a whole new set of activities – we don’t have the resources

No! Most of these activities in the outcomes evaluation process are normal management activities that need to be carried out anyway in order to evolve your organization to the next level.

Myth: There’s a “right” way to do outcomes evaluation. What if I don’t get it right?

No! Each outcomes evaluation process is somewhat different, depending on the needs and nature of the nonprofit organization and its programs. Consequently, each nonprofit is the “expert” at their outcomes plan. Therefore, start simple, but start and learn as you go along in your outcomes planning and implementation.

Myth: Funders will accept or reject my outcomes plan

No! Enlightened funders will (at least, should?) work with you, for example, to polish your outcomes, indicators and outcomes targets. Especially if yours is a new nonprofit and/or new program, then you very likely will need some help — and time — to develop and polish your outcomes plan.

Myth: I always know what my clients need – I don’t need outcomes evaluation to tell me if I’m really meeting the needs of my clients or not

You don’t always know what you don’t know about the needs of your clients – outcomes evaluation helps ensure that you always know the needs of your clients. Outcomes evaluation sets up structures in your organization so that you and your organization are very likely always focused on the current needs of your clients. Also, you won’t always be around – outcomes help ensure that your organization is always focused on the most appropriate, current needs of clients even after you’ve left your organization.


Planning Any Type of Evaluation Includes Answers to These Very Basic Questions

Evaluation often seems like a “heavy”, complex activity to those who are not familiar with the real nature of evaluation. Actually, planning any kind of evaluation often requires answers to some very basic questions, including:

  • What decisions do you want to be able to make as a result of your evaluation?
  • Who are primary audiences for the results?
  • What kinds of info are needed?
  • When is info needed?
  • Where can you get that info, and how?
  • What resources are available to get the info, analyze it and report it?
  • How can you report that info in a useful fashion?

Planning Your Outcomes Evaluation — Step 1: Getting Ready

  • Read Step 1 (Chapter 1) of UW book Measuring Program Outcomes: A Practical Approach (1996) if you have it (otherwise, you’ll still benefit from this section on this web page)
  • You can very likely draft your own version of most of your outcomes evaluation plan and then have others review your drafts of those sections of the plan. (This “short-cut” approach to outcomes evaluation planning might be questioned by some experts on outcomes — but then small nonprofits rarely have the resources to fully carry out the comprehensive and detailed steps often recommended by outcomes evaluation resources.)
  • Remember that you don’t have to be an expert to start the planning process — each plan is different — ultimately, you’re the expert at your process and your plan
  • Do consider getting a grant to support development of your plan, e.g., maybe $3,000 to $5,000, particularly to have evaluation expertise to review your plans and your methods of data collection. If you can’t get this grant, you can still proceed with your plan
  • DO tap the many resources available to help you (useful online resources are listed below)
  • Now pick one program to evaluate that has a reasonably clear group of clients and clear methods to provide services to them — in other words, make sure that you have a program to evaluate!
  • NOTE: Soon, you should train at least one board member and staff member about outcomes — consider using this very basic online guide

Planning Your Outcomes Evaluation — Step 2: Choosing Outcomes

Preparation

  • Note that a logic model for your program is a depiction of the inputs, activities, outputs and outcomes (short-term, intermediate and long-term) regarding your program. Take a look at the information in Introduction to Program Logic Model
  • Reread the myths listed above – don’t worry about completing the “perfect” logic model – ultimately, you’re the expert here

Now Identify Your Outcomes (including short-term, intermediate and long-term)

  • Now fill in a logic model for the program to which you want to apply outcomes-based evaluation — see the example logic model and framework — BUT first read the next several bullets below in this section:
  • To identify outcomes, consider: “enhanced …”, “increased …”, “more …”, “new …”, “altered …”, etc.
  • Note that it can be quite a challenge to identify outcomes for some types of programs, including those that are preventative (health programs, etc.), developmental (educational, etc.), or “one-time” or anonymous (food shelves, etc.) in nature. In these cases, it’s fair to give your best shot to outcomes planning and then learn more as you actually apply your outcomes evaluation plan. Also seek help and ideas about outcomes from other nonprofits that provide services similar to yours. Programs that are remedial in nature (that is, that are geared to address current and observable problems, such as teen delinquency, etc.) are often easier to associate with outcomes.
  • Start with short-term outcomes
  • Regarding identifying short-term outcomes, think 0-6 months:
    — Imagine your client in the program or a day after leaving the program
    — What knowledge and skills would you prefer they have? What do you actually see?
  • Regarding identifying intermediate outcomes, think 3-9 months:
    — Imagine your client 3-9 months after leaving the program
    — What behaviors would you prefer? What do you actually see?
  • Regarding long-term outcomes, think 6-12 months:
    — Imagine your client 6-12 months after leaving the program
    — What values, attitudes and status would you prefer as the fullest extent of benefit for the client? What do you actually see?
  • Now “chain” the short-term, intermediate and long-term outcomes by applying the following sentence to them:
    — “If this short-term outcome occurs, then this intermediate outcome occurs; and if this intermediate outcome occurs, then this long-term outcome occurs.” AGAIN, don’t worry about getting it perfect; trust your intuition

Planning Your Outcomes Evaluation — Step 3: Selecting Indicators

Preparation

  • Read Step 3 in UW book Measuring Program Outcomes: A Practical Approach (1996) if you have it (otherwise, you’ll still benefit from this section on this web page) – especially look at examples on pages 66-67.
  • Identify at least one indicator per outcome (note that sometimes indicators are called performance standards)
  • When selecting indicators, ask:
    — What would I see, hear, read about clients that means progress toward the outcome?
    — Include numbers and percentages regarding the clients’ behavior, e.g., “2,000 (50%) of our participants will quit smoking by the end of the program” and “3,000 (75%) of our participants will quit smoking one month after the program” (see the brief sketch after this list)
    — If this is the first outcomes plan that you’ve ever done, or the program is just getting started, then don’t spend a great deal of time trying to find the perfect numbers and percentages for your indicators
  • Fill in your indicators in the Framework for a Basic Outcomes-Based Evaluation Plan. Also, carry over the outcomes you identified from the example logic model to the basic evaluation plan.
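Here is a minimal sketch, in Python, of the number-and-percent arithmetic behind such indicators. The participant total of 4,000 is an assumption chosen only so the illustrative figures above work out; it is not from the guide.

# A small, illustrative helper for expressing an indicator as a number and percent.
# The counts used below are assumptions, not figures from the guide.
def indicator_percent(n_achieving: int, n_participants: int) -> str:
    pct = 100.0 * n_achieving / n_participants
    return f"{n_achieving:,} ({pct:.0f}%) of our participants"

# e.g., 2,000 of an assumed 4,000 participants quitting by the end of the program
print(indicator_percent(2000, 4000), "will quit smoking by the end of the program")
print(indicator_percent(3000, 4000), "will quit smoking one month after the program")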

Planning Your Outcomes Evaluation — Step 4: Planning Data/Information

Preparation

  • Read Step 4 in UW book Measuring Program Outcomes: A Practical Approach (1996) if you have it (otherwise, you’ll still benefit from this section on this web page) — especially look at
    — Page 86 (pros and cons of data sources)
    — Page 88 (major data collection methods)
    — Pages 90-93
  • A useful resource at this point might be Overview of Useful Methods to Collect Information
  • Now might be the best time to get some evaluation expertise, for example, hire a consultant or utilize a local nonprofit service provider to help you review your drafted outcomes and indicators. The expert is also worth their “weight in gold” when reviewing methods to collect data.

Get Your Work Reviewed Now By Others

  • If you’ve drafted outcomes and indicators yourself, get them reviewed by:
    — Board members
    — Staff
    — Client in program? Finished with the program?
    — Evaluation consultant?

Identify Data Sources and Methods to Collect Data

  • For each indicator, identify what information you will need to collect/measure to assess that indicator. Consider:
    — Current program records and data collection
    — What you see during the program
    — Ask staff for ideas
  • Is it practical to get that data?
    — What will it cost?
    — Who will do it?
    — How can you make the time?
  • When to collect data?
    — Depends on indicator
    — Consider: before/after program, 6 months after, 12 months after
  • Data collection methods:
    — Questionnaires?
    — Interviews?
    — Surveys?
    — Document review?
    — Other(s)?
  • Get evaluation consultant/expertise?
  • Pretest your data collection methods (e.g., have a few staff quickly answer the questionnaires to ensure the questions are understandable)
  • Write a brief procedure (a minimal sketch follows this list) to specify:
    — What data is collected?
    — Who collects it?
    — How do they collect it?
    — When do they collect it?
    — What do they do with it?
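As an illustration only, here is one way such a brief procedure could be written down as a simple record, in Python. Every value is a hypothetical example, not content from the guide.

# A hypothetical data-collection procedure for one indicator, kept as a simple record.
# All field values are illustrative assumptions.
collection_procedure = {
    "indicator": "Number and percent of participants not smoking at the end of the program",
    "what_data_is_collected": "Participant self-report of smoking status",
    "who_collects_it": "Program coordinator",
    "how_they_collect_it": "Two-question exit questionnaire on the last session",
    "when_they_collect_it": "Last day of the program and six months afterward",
    "what_they_do_with_it": "Enter it into the program spreadsheet and summarize it quarterly",
}

for item, value in collection_procedure.items():
    print(f"{item}: {value}")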

Planning Your Outcomes Evaluation — Step 5: Piloting/Testing

  • If yours is a small nonprofit, then it’s very likely that you don’t have nearly the resources to invest in applying your complete outcomes evaluation process in order to test it out.
  • In that case, then the first year of applying your outcomes process is the same as piloting your process.
  • During the first year, notice problems and improvements, etc.
  • Document these in your evaluation plan.
  • If something happens so that you leave the organization, the organization should not have to completely recreate an outcomes plan. Be sure that you write down any suggestions to improve the plan.

Planning Your Outcomes Evaluation — Step 6: Analyzing/Reporting

Preparation

  • Strongly consider getting evaluation expertise now to review not only your methods of data collection mentioned above, but also how you can analyze the data that you collect and how to report the results of those analyses.
  • Before you analyze your data, always make and retain copies of your data.

Analyzing Your Data

  • For dealing with numerical data, such as ratings and rankings (a brief sketch follows this list):
    — Tabulate the information, i.e., add up the ratings, rankings, yes’s, no’s for each question.
    — For ratings and rankings, consider computing a mean, or average, for each question.
    — Consider conveying the range of answers, e.g., 20 people ranked “1”, 30 ranked “2”, and 20 people ranked “3”.
  • To analyze comments, etc. (that is, data that is not numerical in nature):
    — Read through all the data
    — Organize comments into similar categories, e.g., concerns, suggestions, strengths, etc.
    — Label the categories or themes, e.g., concerns, suggestions, etc.
    — Attempt to identify patterns, or associations and causal relationships in the themes
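Here is a minimal sketch, in Python, of the tabulation described above for one rating question: count the answers, compute the mean, and show the spread of answers. The ratings themselves are made-up illustrative data, not results from any program.

# Tabulate made-up 1-3 ratings for one question: distribution, mean, and total responses.
from collections import Counter
from statistics import mean

ratings = [1, 2, 2, 3, 3, 3, 2, 1, 3, 2]   # illustrative ratings from ten respondents

counts = Counter(ratings)                   # how many respondents gave each rating
print("Distribution:", dict(sorted(counts.items())))   # e.g., {1: 2, 2: 4, 3: 4}
print("Mean rating:", round(mean(ratings), 2))
print("Total responses:", len(ratings))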

Reporting Your Evaluation Results

  • The level and scope of information in the report depend on for whom the report is intended, e.g., funders, board, staff, clients, etc.
  • Be sure employees have a chance to carefully review and discuss the report before it is sent out
  • Funders will likely require a report that includes an executive summary – the summary should highlight key points from the evaluation, and not merely restate the Table of Contents

Example of Evaluation Report Contents

  • Title Page (name of the organization that is being, or has a product/service/program that is being, evaluated; date)
  • Table of Contents
  • Executive Summary (one-page, concise overview of findings and recommendations)
  • Purpose of the Report (what type of evaluation(s) was conducted, what decisions are being aided by the findings of the evaluation, who is making the decision, etc.)
  • Background About Organization and Product/Service/Program that is being evaluated
    — a) Organization Description/History
    — b) Product/Service/Program Description (that is being evaluated)
    — — i) Problem Statement (in the case of nonprofits, description of the community need that is being met by the product/service/program)
    — — ii) Overall Goal(s) of Product/Service/Program
    — — iii) Outcomes (or client/customer impacts) and Performance Measures (that can be measured as indicators toward the outcomes)
    — — iv) Activities/Technologies of the Product/Service/Program (general description of how the product/service/program is developed and delivered)
    — — v) Staffing (description of the number of personnel and roles in the organization that are relevant to developing and delivering the product/service/program)
  • Overall Evaluation Goals (e.g., what questions are being answered by the evaluation)
  • Methodology
    — a) Types of data/information that were collected
    — b) How data/information were collected (what instruments were used, etc.)
    — c) How data/information were analyzed
    — d) Limitations of the evaluation (e.g., cautions about findings/conclusions and how to use the findings/conclusions, etc.)
  • Interpretations and Conclusions (from analysis of the data/information)
  • Recommendations (regarding the decisions that must be made about the product/service/program)
  • Appendices: content of the appendices depends on the goals of the evaluation report, e.g.:
    — a) Instruments used to collect data/information
    — b) Data, e.g., in tabular format, etc.
    — c) Testimonials, comments made by users of the product/service/program
    — d) Case studies of users of the product/service/program
    — e) Logic model
    — f) Evaluation plan with specified outcomes, sources to collect data, data collection methods, who will collect data, etc.

Useful Online Resources

Note that specific online resources are listed above in the sections in which those resources are most appropriate.

General Resources




Guidelines and Framework for Designing a Basic Logic Model


© Copyright Carter McNamara, MBA, PhD, Authenticity Consulting, LLC.


Sections of This Topic Include

Learn More in the Library’s Blogs Related to Logic Models

In addition to the articles on this current page, see the following blogs which have posts related to Logic Models. Scan down the blog’s page to see various posts. Also see the section “Recent Blog Posts” in the sidebar of the blog or click on “next” near the bottom of a post in the blog.


Overview of a Logic Model

The following framework can be filled in by readers to design a logic model (or diagram) for their organization and for each of its programs. Guidelines and examples are provided to help the reader. This logic model is referenced from the Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations With Very Limited Resources.

Purpose of a Logic Model

A logic model is a top-level depiction of the flow of materials and processes to produce the results desired by the organization or program. The model can be very useful to organize planning and analysis when designing the organization and its programs or when designing outcomes-based evaluations of programs. It can also be useful for describing organizations and programs (for example, in grant proposals).

What to Include and What Not to Include

Logic models can be applied to whatever use the designer chooses for them. However, when using logic models to analyze or describe organizations and programs, it’s often best to use them to depict major, recurring items in the organization or programs, rather than one-time items. For example, you might not choose to do a logic model for the one-time, initial activities to build an organization or program (constructing the building, registering with state and federal authorities, etc.). However, you might benefit more from using logic models to analyze and describe the major, recurring activities that occur in the organization or program (once they’re built) to continue to produce the results desired for clients and the community.

Size and Level of Detail

The logic model should be of a size that readers can easily study without extensive reference and cross-comparisons between pages. Ideally, the logic model is one or at most two pages long. The level of detail should be sufficient for the reader to grasp the major items that go into an organization or program, what occurs to those inputs, the various outputs that result, and the overall benefits/impacts (or outcomes) that occur and to which groups of people.

Note that the content of program logic models is often more specific than that of models for organizations. This level of specificity is often quite useful for program planners.

Definitions of Basic Terms

Logic models typically depict the inputs, processes, outputs and outcomes associated with an organization and its programs. Don’t be concerned about grasping the “correct” definition of each of the following terms. It’s more important to have some sense of what they mean, and even more important to be consistent in your use of the terms.

Inputs

These are materials that the organization or program takes in and then processes to produce the results desired by the organization. Types of inputs are people, money, equipment, facilities, supplies, people’s ideas, people’s time, etc. Inputs can also be major forces that influence the organization or its programs. For example, the inputs to a nonprofit program that provides training to clients might include learners, training materials, teachers, classrooms, funding, paper and pencils, etc. Various laws and regulations affect how the program is conducted, for example, safety regulations, Equal Opportunity Employment guidelines, etc. Inputs are often associated with a cost to obtain and use the item; budgets are listings of inputs and the costs to obtain and/or use them.

Processes (or Activities or Strategies or Methods)

Processes are used by the organization or program to manipulate and arrange items to produce the results desired by the organization or program. Processes can range from putting a piece of paper on a desk to manufacturing a space shuttle. However, logic models are usually only concerned with the major recurring processes associated with producing the results desired by the organization or program. For example, the major processes used by a nonprofit program that provides training to clients might include recruitment of learners, pretesting of learners, training, post-testing and certification.

Outputs

Outputs are usually the tangible results of the major processes in the organization. They are usually accounted for by their number, for example, the number of students who failed or passed a test, courses taught, tests taken, teachers used, etc. Outputs are frequently misunderstood to indicate success of an organization or program. However, if the outputs aren’t directly associated with achieving the benefits desired for clients, then the outputs are poor indicators of the success of the organization and its programs. You can use many teachers, but that won’t mean that many clients were successfully trained.

Outcomes

Outcomes are the (hopefully positive) impacts on those people whom the organization wanted to benefit with its programs. Outcomes are usually specified in terms of:
a) learning, including enhancements to knowledge, understanding/perceptions/attitudes, and behaviors
b) skills (behaviors to accomplish results, or capabilities)
c) conditions (increased security, stability, pride, etc.)

It’s often useful to specify outcomes in terms of short-term, intermediate and long-term.


Basic Example of a Logic Model

The following example is intended to further portray the nature of inputs, processes, outputs and outcomes.

The logic model is for an organization called the Self-Directed Learning Center (SDLC).

Logic models for programs are often more detailed. Note that the more comprehensive and descriptive your logic model, the more useful it will be.

NOTE: A logic model typically has four columns, with the last one being about outcomes. Outcomes can be further divided into short-term, intermediate and long-term. The SDLC example is shown below, column by column.

Inputs
– Free articles and other publications on the Web
– Collaborators
– Free Management Library
– Funders
– Self-directed learners
– Volunteers
– Computers
– Web
– Supplies

Processes
– Provide peer-assistance models in which learners support each other
– Provide free, online training program: Basics of Self-Directed Learning
– Provide free, online training program: Basic Life Skills
– Provide free, online training program: Passing your GED Exam

Outputs
– 30 groups that used peer models
– 100 completed training programs
– 900 learners who finished Basics of Self-Directed Learning
– 900 learners who finished Basic Life Skills
– 900 learners who passed their GED to gain high-school diploma

Outcomes (short-term, intermediate and long-term)
– High-school diploma for graduates
– Increased interest to attend advanced schooling
– Increased confidence that learner can manage formal learning programs
– Full-time employment for learners in jobs that require high-school education
– Independent living for learners from using salary to rent housing
– Strong basic life skills for learners
– Improved attitude toward self and society for graduates
– Improved family life for families of graduates
– Increased reliability and improved judgment of learners

Here is a template for a logic model. You might think of a system in your work or personal life and diagram the system in the template.
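If it helps to see the template in a fill-in-the-blanks form, here is a minimal sketch, in Python, of a logic model as four columns you can fill in for your own program. The entries shown are generic placeholders drawn from the definitions above, not the Library’s own template.

# An illustrative logic-model template: four columns, each a list you fill in.
# The placeholder entries are generic examples, not prescribed content.
logic_model = {
    "inputs": ["funders", "volunteers", "facilities", "supplies"],      # what the program takes in
    "processes": ["recruit learners", "deliver training", "certify"],   # major recurring activities
    "outputs": ["number of learners trained", "courses taught"],        # countable units of service
    "outcomes": {                                                       # changes in clients
        "short_term": ["increased knowledge and skills"],
        "intermediate": ["changed behaviors"],
        "long_term": ["improved values, conditions or status"],
    },
}

for column, entries in logic_model.items():
    print(column, "->", entries)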


Additional Perspectives on Logic Models




Evaluation Activities in Organizations


Evaluation is carefully collecting information about something in order to make necessary decisions about it. There are a large number and wide variety of evaluations. Evaluation is closely related to performance management (whether about organizations, groups, processes or individuals), which includes identifying measures to assess progress toward achieving results.


Learn More in the Library’s Blogs Related to Evaluations

In addition to the articles on this current page, see the following blogs which have posts related to Evaluations. Scan down the blog’s page to see various posts. Also see the section “Recent Blog Posts” in the sidebar of the blog or click on “next” near the bottom of a post in the blog.


General Guidelines About Doing All Forms of Evaluation

Overviews of Major Types of Evaluations

General Resources




Employee Wellness: Preventing Violence in the Workplace


Various Perspectives

Workplace Violence and Harassment Resources
OSHA – Workplace Violence
Reducing the Risk of Violence with Better Relational Skills with Your Employees

How to Deal With Passive-Aggressive Peers

Also consider
Bullying

Diversity Management

Drugs in the Workplace

Employee Assistance Programs

Ergonomics: Safe Facilities in the Workplace

HIV/AIDS in the Workplace

Personal Wellness

Safety in the Workplace

Spirituality in the Workplace

Related Library Topics

Learn More in the Library’s Blogs Related to This Topic

In addition to the articles on this current page, see the following blogs which have posts related to this topic. Scan down the blog’s page to see various posts. Also see the section “Recent Blog Posts” in the sidebar of the blog or click on “next” near the bottom of a post in the blog.

Library’s Career Management Blog

Library’s Human Resources Blog

Library’s Leadership Blog

Library’s Spirituality in the Workplace Blog

Library’s Supervision Blog


For the Category of Personal Wellness:

To round out your knowledge of this Library topic, you may want to review some related topics, available from the link below. Each of the related topics includes free, online resources.

Also, scan the Recommended Books listed below. They have been selected for their relevance and highly practical nature.

Related Library Topics

Recommended Books


Employee Wellness: Spirituality in the Workplace


Sections of This Topic Include

Also consider
Related Library Topics

Learn More in the Library’s Blogs Related to This Topic

In addition to the articles on this current page, also see the following blogs that have posts related to this topic. Scan down the blog’s page to see various posts. Also see the section “Recent Blog Posts” in the sidebar of the blog or click on “next” near the bottom of a post in the blog. The blog also links to numerous free related resources.


Spirituality in the Workplace – What is Spirituality at Work?

© Copyright Linda Ferguson

In my travels around the country providing workshops on the topic of working spiritually, I’ve found consistently that people are looking for ways to have their work make a difference and to feel energized in a richer way in their work. I want to explore here a few ways that you might examine spirituality in your work.

I provide a framework in my first book, “Path for Greatness,” for aligning your gifts, passion and purpose so you can be of service for the world. (To see more about my book on Spirituality at Work, go to: http://www.amazon.com/Path-Greatness-Work-Spiritual-Service/dp/1552124983/ )

The first idea to explore is: what feeds you spiritually? You need to continually till your spiritual soil so that you can stay energized and inspired. Take some time now and write down 3 things you do each week or every day to renew yourself.

Second, what does spirituality mean to you? What three words do you use to describe times when you feel spiritual? See how you can integrate those words and ideas into your work day. In my workshops I have people write out all the words they can to describe a spiritual experience. Those words may include joy, peace, bliss, serenity, etc. I then ask, ‘Would you like to work in a place that has this?’ To a person, they all say yes.

OK then, how do you help create this in your work? What small step can you take to bring such feelings into your workplace? Please share your ideas here on this post of how you work spiritually or how you’ve seen others do this.

Three words that I like to use for working spiritually are: wholeness, meaning and connection. When we feel a sense of our own wholeness, we come from a place of greater authenticity and energy. We generally find greater meaning in what we do when we are doing it for a larger purpose than feeding ourselves or our organization. Think of how you can be your best FOR the world.

Finally, when we connect to others in a deeper way, we often feel greater compassion or joy in our relationship with them. We can strengthen all that we do when we connect with our own Source of inspiration, in whatever ways we connect with this Source.

Value of Spirituality in the Workplace

© Copyright Janae Bower

Research shows the impact spirit in the workplace can have for individuals and organizations. Here are five key outcomes that everyone can benefit from:

Boosts morale. Engaging in practices that support spirit in the workplace can uplift the spirits of everyone involved.

Influences satisfaction. Since spirit in the workplace encourages each individual to bring their whole self to both work and home, it increases the satisfaction level in both areas.

Strengthens commitment. Being aligned with an organization that fosters the essence of who you are enables you to feel and display a tremendous sense of loyalty.

Increases productivity. When you feel a greater sense of connection to your work, you are more motivated to produce good work, which in turn increases the overall productivity of an organization.

Improves the bottom line. According to a nationwide study on spirituality in the workplace, organizations that integrate another bottom line, such as spirituality, into their practices actually increase the financial bottom line. These organizations believe that spirituality could ultimately be the greatest competitive advantage.

For example, Southwest Airlines is often described in terms that would identify it as a spirit-driven organization. It was the only airline to remain profitable after the September 11th tragedy, which had an incredible financial impact on the airline industry, and it continues to be profitable. Southwest has a triple bottom line: People, Performance & Planet. “It takes a lot of dedication, perseverance, and hard work to do the right thing for our Customers, Employees, and Planet. We began operations in 1971 with a revolutionary idea that everyone should be able to afford to fly instead of drive and to enjoy the Safety, comfort, and convenience of air travel. For the past 38 years, we have devoted ourselves to meeting that goal.”

Since 1987, when the Department of Transportation began tracking Customer Satisfaction statistics, Southwest has consistently led the entire airline industry with the lowest ratio of complaints per passengers boarded. Many airlines have tried to copy Southwest’s business model, and the Culture of Southwest is admired and emulated by corporations and organizations in all walks of life.

According to their Southwest Cares Report: Doing the Right Thing, “To better understand why we at Southwest try to do the right thing, it is important to understand how we do business and how we integrate our Core Values into everything we do. It is the Southwest Culture that sets us apart.

“The 35,000+ Employees of Southwest Airlines are the heart and soul of our Company. Doing the right thing for these Employees includes providing them with a stable work environment with equal opportunity for learning and personal growth. As we ‘Live the Southwest Way,’ our Employees are recognized through several Employee recognition programs for the hard work and caring Spirit they show to each other and our Customers. Not only do we work hard with what we call a Warrior Spirit, we work smart.” Part of living the Southwest way is also having a servant’s heart and a fun LUVing attitude.

Reasons for the Spirituality in the Workplace Movement

© Copyright Janae Bower

There are many reasons that contribute to this movement around spirit in the workplace. Here are a few of the reasons I’ve found for why it began.

Employees want more from their organizations and organizations demand more from their employees. With all the corporate downsizing and restructuring, employees who are left tend to work longer hours. As a result, they want to bring more of their outside self to work. As organizations continue to struggle to find and keep talented employees, they need to offer more than just “a job.” Employees yearn to feel part of a mission, to add value and to contribute in a meaningful way.

Previous movements in the 1980’s and 1990’s such as the new age, work/life balance, simplicity and others have paved the way for this one as well as newer ones like the green movement.

Different generations are contributing to it as well. The majority of the population, 78 million baby boomers, are reaching mid-life and looking at spiritual issues such as: What is my legacy? What is my purpose? What is really important to me? Generation Xers are driven toward a what’s-in-it-for-me mentality and are willing to make organizational changes to meet those needs for work-life balance. Generation Y is the other dominant generation, with 76 million members. This values-based, team-focused generation is influencing the workplace in many positive ways, one being a desire for flexible workplaces that provide meaning and growth opportunities.

In general, it is also a reflective time in society, as we have experienced the first decade of the millennium. As we live this momentous time in history, society as a whole is reflecting on matters related to spirituality, ethics and humanity.

Numerous Resources About Spirituality in the Workplace

Spirituality Beyond the Workplace – Getting to What Matters Most

© Copyright Janae Bower

Have you figured out how to get to the heart of what matters most amidst the chaos of our overstuffed lives? Lately I’ve been using this affirmation as a reminder in my pursuit of what is most important in my work and life: “I make time for what matters most.” This affirmation helps me to know that while I can’t get to everything that I want to on my daily to-do list, I am intentionally carving out time for those most critical things each day. Sometimes that includes making time for being and not doing. Stephen Covey refers to this concept as his third habit: put first things first. For Brendon Burchard, author of Life’s Golden Ticket, it’s about living each day fully by being able to say yes to these three questions: “Did I live? Did I love? Did I matter?” Val Kinjerski, PhD, shares another perspective on how important it is to fight for what really matters. Check her out as she speaks about it on YouTube.

Here’s how I determine what matters most:

  • Finding IT: How to Lead with your Heart. First you have to discover what matters most to YOU by finding the deeper meaning, joy, and purpose of life.
  • Living IT: How to Create and Live an Inspired Life. Next you need to decide how you’ll live. When you understand how to live the inspired-life principles from the inside out, you will learn how to live your life the way it ought to be lived: fully and richly.
  • Giving IT Away: How to Make a Difference. The final aspect is to determine your legacy. How will you be inspired to leave your mark by giving away your time, talent and treasures to serve others and be part of something larger than yourself?

Numerous Resources About Spirituality Beyond the Workplace

Aspects of Spirituality — an Alphabet Series

In the following “alphabet series,” Janae Bower takes the reader through a broad survey of the many aspects of spirituality — not just in the workplace.

People in Spirituality — an Alphabet Series

