The Gap vs. the Continuum

Like many others, I am drawn to gaps. In wealth, we have the rich and the poor. In technology, we have those who connect and those who do not. In social justice, we have those with privilege and those without. 

The contrast, of course, is made possible by the anchors. In his respected and provocative book Factfulness, Hans Rosling points out that what he calls the “gap instinct” rests on a false view of reality. He provides a ton of data to make the case that most people are somewhere in the middle of virtually every distribution. On income, for example, he divides the world into four levels and shows that few people are at levels 1 and 4, which anchor the gap between the haves and the have-nots. Most people are at levels 2 and 3.

One reason to look closely at what lies between the extremes is to understand present circumstances. While the vision of achieving food security is clear, it matters whether the money is best spent bringing grocery stores and farm markets into a food desert or providing transportation so people can reach resources that already exist. From the viewpoint of those living between the extremes, Rosling speaks persuasively about how small increments of progress can matter a great deal.

The other reason for looking at people, families, or other units distributed between the end points is to understand the present degree of movement. A child stuck three grade levels behind needs different and more help than one at the same point who is gaining two grade levels each year. The logic of gaps can mask important increments of change.

In some areas there may be only two choices. Can a person be a partial racist, or is it one or the other? Even if the latter, some path forward probably describes progress better than a one-time, lasting conversion. Few of us change overnight.

The trick, of course, is to see the reality of progression or regression while not losing the power and urgency that the gap mindset conveys. With equity, diversity, and inclusion we must live in two worlds. As the British tube station announcement puts it, “Mind the Gap.”

Watch Word Differences

A Wednesday Whimsy Regarding Words

Watch your words, we have been admonished since childhood. Over many years, I have added this guidance: watch for the differences in what words mean. If I ask a grant seeker, for example, “What are your measurable and quantitative objectives?” am I making a distinction between measurable and quantitative, or do I just want to appear erudite by including two words that mean the same thing?

In the current focus on equity, diversity, inclusion, and justice, the stakes are much higher when it comes to language. Among these terms, I sense substantial differences in meaning, which brings far more richness to conversation than the assumption that the words simply pile on to create the critical mass needed to right a big wrong.

A quick example: equity and equality. As I understand it, equality means giving everyone the same resources. Equity means distributing resources disproportionately so as to achieve the same outcome for everyone. Circumstances may not factor into equality, but they certainly apply to equity. This distinction also raises important questions about means and ends. Not to be pointy-headed this Wednesday. Just to say that such distinctions enrich discussion.

How Long Does Success Last?

Good nonprofits track and verify the number and kind of participants who achieve a gain—whether it’s landing a job, reading on grade level, or rehoming a displaced family.  Great nonprofits track and support the number of participants who sustain that gain.     

In a few areas, like workforce development, the challenge of staying power is well known. Simply put, it costs more to help someone sustain employment than to get the job. Habitual behavior is another  area where we understand the challenge.  How many of us lose 20 pounds and then see the weight return within a year?  Mark Twain put it nicely:  “Quitting smoking is not difficult. I have done it hundreds of times.”

One way to look at the value of sustained gain is the concept of cost per gain. Divide the cost of the program not by those who participate but by those who achieve the intended result. For example, if a program costs $100,000 and includes 1,000 participants, the cost per person served is $100. If only 100 of the participants reach the defined result, however, the cost per gain is $1,000.
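
Here is a minimal sketch of that arithmetic in Python, purely for illustration; the function names and the $100,000 example are taken from the paragraph above, not from any particular program or tool.

```python
def cost_per_person_served(program_cost, participants):
    """Traditional measure: dollars divided by everyone who takes part."""
    return program_cost / participants

def cost_per_gain(program_cost, participants_with_gain):
    """Dollars divided only by those who reach the defined result."""
    return program_cost / participants_with_gain

# The example above: a $100,000 program with 1,000 participants,
# of whom only 100 achieve the intended result.
print(cost_per_person_served(100_000, 1_000))  # 100.0
print(cost_per_gain(100_000, 100))             # 1000.0
```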

This logic helps both nonprofits and philanthropy shift away from the high counts of “serving” or “reaching” persons that many foundations have traditionally supported. A program that mentors 400 persons has historically been seen as more valuable than one that, for the same money, mentors 100. But hey, not so fast. If you don’t mentor with the intensity and duration needed, the touch is not sufficient to make a difference. I actually experienced that mentoring example and found that the nonprofit starting with 100 dramatically outperformed the one starting with 400.

Now let’s dig deeper. Assume a service that gets people to food security by supplying them with enough food at a cost of $4,800 for a year. If the money for the food stops, food security ends. The cost is an annual cost per gain.

Consider a different program that gets people to food security by helping them grow a garden, secure more sustained income, or take some other step with staying power. In that case, the gain may well continue after the program costs end. If food security lasts for three years after the program ends, the annual cost per gain over four years drops to $1,200. The return on charitable investment is four times higher.
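
To put the sustained-gain comparison in the same terms, here is a small follow-on sketch. The $4,800 cost and the three extra years come from the example above; the helper name is illustrative, not a standard formula.

```python
def annual_cost_per_gain(program_cost, years_gain_lasts):
    """Spread a one-time program cost over every year the gain holds."""
    return program_cost / years_gain_lasts

# Program A: food supplied directly; the gain ends when the funding stops.
print(annual_cost_per_gain(4_800, 1))  # 4800.0 per year of food security

# Program B: a garden, steadier income, or another step with staying power,
# so food security lasts three more years after the program year.
print(annual_cost_per_gain(4_800, 4))  # 1200.0 per year, a fourfold better return
```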

This logic suggests another insight: the smartest investment is in steps that help participants sustain their gains. Consider a high school program for students at risk of dropping out. My clients and friends in the college access business point to the sad reality that, for many reasons, at-risk populations can experience college dropout rates as high as their high school dropout rates.

Schools like Green Tech Charter High School in Albany, NY, have discovered that a reasonable added cost to track and support their students through the first two years of college has the highest possible leverage of all education dollars. The major funds spent to achieve a high school education are leveraged by much lower costs that make graduation pay off in a path to success.

I have developed descriptions of cost per gain drivers and tools from work with Leslie Clements and the Humana Foundation.  Holler if you want to see them.

Have a great week, everyone! – Hal


Target Practice

Many graphics on the pursuit of results have the look of a target with a small “bullseye” at the center. Indeed, my colleagues and I called our book Outcome Funding…a new approach to targeted grantmaking. And, sure enough, we put the traditional target image on the cover. I am now questioning this way of looking at aiming points. In the vital area of inclusion, diversity, and equity, for example, a much broader canvas seems needed. Perhaps hitting many points on the periphery should count as much as hitting the center. The point is to recognize both complexity and different perspectives as what must first be achieved, along with the sense that we need to loosen up to see more possibilities rather than narrowly focus on a pre-set victory shot. I will henceforth keep the bullseye for very specific gains and limit its use when vision must broaden rather than narrow.

Never the Twain

It’s tough to learn from mistakes you never made.

Mark Twain

While I spend hours tapping out prose to make and amplify thoughts, I keep finding that short zingers like this one get to the point far more quickly. Further, the best are actionable without further ado. Suggestion: think of or discover a quote that you like. Ask yourself what you could do next week to express its core meaning. If you can’t think of anything, try a new quote. Email me if you want a good batch that I have found not just wise but useful. My way of expressing this passage is to require myself to identify one screw-up a week and ask what I will do differently next time. They are all different. Never the Twain shall meet.

Musings on Milestone Management

I spent two hours on the morning of January 10, 2020, with 80-plus bright-eyed leaders and staff of nonprofits in Manatee County, Florida. Our subject was Milestone Management. Simply put, what do you look at that tells you your participants are making progress that forecasts they will get to a result, whether a job, a house, grade-level reading, or anything else important? I was again reminded of how good groups can be at getting to milestones once they pry free of the mindset of managing to a work plan and put themselves in the shoes of those they help. Management guru Peter Drucker once noted that a social program is never defined in the same way by its recipient as by its nonprofit creator. How true!

It is also true that when participants co-own a result and the milestones they must achieve to get there, they will outperform a group without shared intentionality. One milestone that participants readily get (often quicker than program staff) is how critical it is to move from having information to using it. The fact that a participant can tell you what they learned may just measure short-term memory retention. By using what they remember, they gain confidence and a sense of progress.

I also find good news in how open and confident nonprofits are that they can pin down warm but fuzzy words. I asked workshop participants, for example, whether they could see and hear engagement or its lack. They said yes. For engagement, they saw energy, eye contact, smiles more than frowns, and connection. I was also taken with great examples from my group yesterday of readily verified behaviors that reflect engagement. One was simply coming back for more sessions. Another was telling friends about it who then came. Nonprofit staff can be equally adept at pinning down other attractive but abstract terms mentioned in the workshop, including commitment, leadership, and empowerment.

We also discussed the need to look at timing in terms of participants rather than the arbitrary periods of a contract. The first task I put before the groups attending was to respond to this question: What’s the first thing you look for that tells you a participant is or is not on track to achieve results? No one said they had to wait until the end of the first quarter when a report was due. The power of milestones is their fit to participants’ progression, which often begins with engagement. Engagement is a huge predictor of success, and when you discover early on that it is not present, you still have a chance to change things and get there.

I am constantly reminded that such data points are simple, readily verified, and highly predictive of participant involvement. 

Speaking of simple, I end with a quick way to get from work plan step to a milestone.   Ask this question:  So What?   So you will write a resource directory.  So What?  So people who get it discover resources they did not know about.  So What?  So at least 50 persons report connecting with a new resource and getting significant value from that connection.    The journey from doing something to achieving something is greatly aided by this question. 

Human-Centered Design: Part 2

Last week, I reprinted the first three principles my client Dave Haney described for Human-Centered Design. Continuing from his newly published article in Inside Higher Ed, “In times of crisis, colleges should ask different questions than they do in a traditional strategic planning process”:


No. 4: Engage in prototyping. Too often institutions spend months planning a major initiative and then roll it out with great fanfare, not knowing whether it will produce the intended results. When possible, it’s better to prototype by implementing a small-scale, low-risk version of an initiative that can test the critical concepts involved and allow you to readjust according to what works and what doesn’t. For example, instead of launching a new degree program, start with a badge or certificate and carefully examine how it plays with students. A prototype can also be a simulation: before creating a new enrollment office, build a mock-up (physical or virtual) and run students and staff through a simulated set of enrollment interactions. This approach can help create a culture of continuous improvement in which new ideas are constantly tested, evaluated and revised.

No. 5: Resource the early adopters, and let consensus follow later. The downfall of many strategic plans is that everybody agrees with them at the outset. If that’s the case, then it is probably too general and probably looks like everyone else’s plan because it represents the lowest common denominator.

In results-based strategic design, institutions instead provide resources, often minimal, to individuals and groups so that they can try things (prototyping), and then consensus is built around successful or promising results, not prior agreement. (From a slightly different perspective, the higher ed consulting firm CREDO also advocates abandoning consensus as a goal for the “new university.”) The Rensselaerville Institute refers to community members who are energetic early adopters as “community spark plugs.” You know who they are on your campus, and they may be administrators, faculty members, staff members or even students — where they are in the organizational chart is often less important than the energy, creativity and attitude they bring to the table. When other people see that the spark plugs are getting the resources, producing results and having more fun, the number of early adopters will grow.

No. 6: Don’t try to do everything. Too many strategic plans try to cover everything an institution does and therefore sink under their own weight. I prefer Hal Williams’s definition of strategy: something is truly strategic only if it requires a behavior change when business as usual won’t accomplish the desired results. For example, one institution included as a strategy within its plan a review of the food service and facilities contracts with external vendors. Do you really need a strategic plan to tell you to do that? If such reviews are not part of business as usual, then you are looking at problems that are not going to be solved by a strategic plan.

Instead, focus on the things that require major behavioral changes. For example, one institution increased both efficiency and organizational health by changing siloed behavior in administrative offices. Cooperating with other offices to ensure student success became a specific job requirement at every level of the institution — a result that would be evaluated in performance reviews and lauded when it succeeded. That was truly strategic, because business as usual required a sharp behavioral change. Rather than spending the five years of a strategic plan checking off boxes toward the plan’s completion, it is more effective to adopt a strategic design with recursive cycles of prototyping, learning and improvement.

When I led a strategic design process in 2017 as a college president, and the steering committee had completed its preliminary design for the institution’s future, an initially skeptical faculty member gave the process an appropriate endorsement: “This process was messy as hell, but the result is good.” The times are even messier now, which makes it even more imperative that we design the future of higher education rather than simply try to plan it.

David P. Haney is the former president of Centenary University. He and Jeremy Houska, director of educational effectiveness at the University of La Verne, will present on results-based strategic design at the SCUP 2020 Virtual Annual Conference, sponsored by the Society for College and University Planning. For more resources on results-based strategic design, see davidphaney.com.

I’ll comment on how I use these six principles with remarkably easy steps in my next blog entry. Stay safe.

Hal

Human-Centered Design: Part 1

My client and friend Dave Haney wrote an article, “In times of crisis, colleges should ask different questions than they do in a traditional strategic planning process,” just published in Inside Higher Ed. Here is a long excerpt that speaks to our work together and covers the first three of his six principles of results-based strategic design.

Next Tuesday, I will post the other three principles defined in the article. Please email me if you want to see the full article, including the introductory framing paragraphs and the links included in the text.


In times of crisis, colleges should ask different questions than they do in a traditional strategic planning process

By David P. Haney

…“Human-centered design” is now used worldwide for designing everything from organizational pivots in corporations to microloan programs in developing countries, often through the influential work of IDEO, whose chairman Tim Brown wrote Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation over a decade ago. When I used this approach to strategic planning as a college president, I added to the mix a sharp focus on outcomes rather than activities, based on the work of Hal Williams, former CEO of the Rensselaerville Institute. I’ve been fortunate to work with him in higher education administration, and he has helped me see how, despite the recent emphasis on outcomes assessment, higher ed is still burdened with a focus on activities that should be changed to a focus on results.

For example, why do we count student community service hours when we could be documenting the results of students’ community service work? Why do syllabi still list activities to be undertaken instead of results for students to achieve? Why do we have meeting agendas that list the topics to be covered instead of the outcomes we want to see? Why do job descriptions list expected activities (slavishly described as “duties”) instead of what employees should be expected to accomplish? (In fact, if working remotely, where activities are relatively invisible to colleagues, continues in popularity, outcomes may provide the best and perhaps only way to measure employee performance.)

The combination of human-centered design and Hal Williams’s outcomes focus produces what I call “results-based strategic design.” Here are six of the basic principles of this approach and how they can apply to higher education. Much of this involves asking questions that are different from the ones asked in a traditional strategic planning process.

No. 1: Recognize that planners plan and designers solve problems. Instead of asking, “Where do we want to be in five years?” it’s better to ask, “What problems do we need to solve?” That helps shift the focus from what we by definition can’t know (the future) to what we can do (solve problems and produce results).

For two reasons, it’s not always easy to identify the problems. First, we often jump to potential solutions before defining the key issue. For example, “Our enrollment is too low” does not state a problem. Increasing enrollment is a solution to different potential problems, such as unused capacity or most commonly an operational deficit. Increasing enrollment may be a solution to a deficit, but it may also drive up the discount rate and create additional expenses, so it may not be the appropriate solution, or it may need to be considered in concert with other solutions. As long ago as 2015, some colleges decided to address financial problems by shrinking rather than “chasing volume.”

Second, we worry too much about what designers call “gravity problems”: issues that are not really problems because, like gravity, they are going to be there no matter what. For example, current demographic trends that reduce applicant pools are not problems but rather inevitable facts. A low yield — too few accepted students who enroll — can be fixed, and the pool can be increased by looking in new places. (For example, people that would benefit from what you offer but don’t know it yet.) But the demographics are facts to be dealt with, not problems to be solved. Balance is key: some leaders resort to firefighting mode and jump to solutions too quickly, while others demand to understand all the variables before acting, and their response is too slow.

The difference between a designer and an engineer is that an engineer has a problem with a single solution: you need to get people across a river, so you build a bridge. Designers solve “wicked” problems: multiple and sometimes ill-defined problems that have multiple solutions. Higher ed is clearly rife with wicked problems. The problem-solving mentality can filter through the entire process. For example, instead of a strategic planning committee focusing on curriculum, create a design team to identify specific problems in the curriculum and create solutions.

No. 2: Use constraints to encourage creativity. Designers have learned that truly innovative and useful results come not from “blue sky thinking” but from working within a particular set of constraints. A smartphone can only be so big and cost so much, or it won’t sell. The familiar and new constraints in higher ed — changing demographics, increased competition, public skepticism and now the disruption of an as-yet unknown number of semesters by COVID-19 and the resulting human and economic consequences — need to be seen not as obstacles to planning but as catalysts for creativity and innovation. The three general constraints on new initiatives that design thinking identifies, and that can spur creativity, are: 1) viability (can it be sustained long-term?), 2) feasibility (do we have the capacity, tools and know-how to do it?) and 3) desirability (does it fit our mission and can we embrace it as an institution?).

For example, an enrollment-related result for one tuition-driven small college in the Northeast, with that region’s declining college-age population, was to attract, retain and serve students who didn’t know they would benefit from attending college in general or this institution in particular. This is potentially viable because it recognizes the decline in population while building on the fact that more students in that smaller pool need what this institution has to offer. It is feasible because there are many ways for an experienced admissions staff to reach to new areas and kinds of schools. (For example, this college started working with technical high schools and inner-city college-readiness programs.) And it is desirable because it will increase revenue as the institution continues to do what it does best — as opposed to simply lowering standards, trying to increase geographic reach or pursuing other enrollment-enhancing techniques. Keeping these constraints in mind also makes it much easier to link strategic design to resource allocation, since both viability and feasibility depend on resources.

No. 3: Determine constituents’ needs, which may not be what they say they need. This is what is called the empathy stage in design thinking, in which you observe people’s behavior in order to find the best solutions. It’s not enough to ask them what they need; as Henry Ford probably did not say, but is often quoted as saying, “If I had asked people what they needed, they would have said ‘faster horses.’” When your students complain that they face a byzantine bureaucracy, follow some of them around as they leap through the registrar’s and the financial aid offices’ hoops. Then simulate potential solutions. Or if your value proposition is not getting out through admissions and marketing, observe students’ and potential students’ responses to current and potential new messages.

I once embedded myself with a summer leadership camp for entering students and discovered that many of the reasons for their choice to attend our institution had nothing to do with what we said in our expensive marketing materials. This is not treating students as customers within a corporate model but simply respecting them as users of the services we offer. (An entire subdiscipline called user experience or UX has occasionally been recommended for higher ed planning.) Especially now that our students are changing from a traditional 18- to 22-year-old cohort to a constituency of all ages with varying and complex life situations — and will be emerging from the trauma of the pandemic with a host of new and different concerns and needs — we should carefully observe the quality of their experience. We all pay lip service to the needs of the students we serve, but strategic plans still tend to focus on the self-preservation and growth of our institutions.

David P. Haney is the former president of Centenary University. He and Jeremy Houska, director of educational effectiveness at the University of La Verne, will present on results-based strategic design at the SCUP 2020 Virtual Annual Conference, sponsored by the Society for College and University Planning. For more resources on results-based strategic design, see davidphaney.com.


Come back on Tuesday for the other three points Dave makes and how I use them with many organizations.

What happened to results in considering performance?

A recent Washington Post article (6/10/20) looked at how and why funds to use farm surplus to feed hungry Americans led to awards to a San Antonio event planner and a health-and-wellness airport kiosk company. I know one key reason: results, and the likelihood of their being achieved, were not the highest priority in selecting groups. When asked by the Post, USDA furnished the selection criteria for awards: “Proposals for the Farmers to Families Food Box Program were evaluated by, in descending order of importance, the technical information contained, the prices offered, past performance of the offeror and the offeror’s ability to perform.” Wow. Hiring a consultant to provide technical data and projecting a low price counted for more than what the group has done in the past and its capability to perform now. Something sure is backwards here.

And this is not a solitary example.  In government procurements, the needs statement and program description supplied by a grants writer can add up to more points than the number of people who get a defined gain. And with some philanthropies, the program fit to mission and number of partners listed can outweigh points allocated to results.  

Two additional problems jump out from this observation. The first is that in some assessment frameworks there literally is no category for results. A recent newspaper column reported on the grade given a superintendent. The assessment had four areas: leadership, high-quality instruction, continuous improvement, and communication. None includes how many students got to grade level in reading or math.

The second problem is the confusion between form and substance. I was just asked to review a foundation application in which the following criterion was considered the best way to include results: How clear and complete is the proposal on program outcomes? Not “Are the results high enough to be worth the grant?” Just “Are they clear?” I also see this issue with the evaluation factor that gives points for clarity of the evaluation plan. I learn that the group has hired an evaluator and will use a survey. I do not know whether this captures opinions or actual changes in behavior. Again, the process focuses on the form, not the content.

Riding for your Brand

Marc Chardon, the bright and inquisitive former leader of Blackbaud Inc., and I wrote several pieces together that will be featured on this site. One theme we call “riding for the brand.” Brands, we note, do not establish a class or cohort of groups. They exist to define and protect the distinctiveness of an organization, whether a cattle ranch in Wyoming or a drug treatment organization in New York. Where’s our beef? It’s underneath our brand!

As foundations move from funding programs to investing in results, they shift from sprinkling money among many organizations to investing more in those organizations that achieve the strongest human gain for the dollars available. Yet when I ask many nonprofits where they think they stand on achievement relative to other groups in their field and geography, most do not know. And a few shrink from the question, suggesting that answering it means having to at least implicitly call attention to differences with colleague groups. My advice is to get over that. You need a way to move from blending in to standing out.

There is no need to badmouth other groups. You simply show your results relative to those of other organizations taken as a whole. You let the viewer draw his or her own conclusions. And if you are interactive with other groups to the point that shared action determines your success, say so. Be clear about how collaborations with other high-achieving groups let you together create results well beyond those that would sum from your separable activities. Then supporting each party to the collaboration makes good sense to investors.