
PART I: We don’t have to incentivize humanity

In this article, Part 1 of 2, co-author Victoria Sale draws on her personal experience as a nurse and teacher to illustrate the perverse ways that performance measurement, when implemented within the social sector, often works against the very people it is intended to benefit. In Part 2, we examine the broader impact of performance measurement in the social sector and explore how to redesign measurement through shared power.

THE PRESSURE TO PERFORM

By Victoria Sale and Ellen Schultz


“She’s in the emergency room again? I’m completely out of ideas to help her - I shouldn’t be doing this job anymore. She deserves a better nurse case manager than me.” Those were the thoughts running through my head as I sat, flushed with humiliation, sure that my colleagues in the emergency department (ED) were staring at me. I’d just logged on and seen that, once again, Angie had been to the ED last night.


Angie is not her real name, of course. Her name, her struggles, her story are not for me to tell. The tension on this day was recognizing that I was under pressure to reduce ED use for the patients I supported as a nurse case manager within an intensive care management program, and that despite following a detailed checklist of medical and social follow-ups, Angie’s “unnecessary utilization” remained unchanged.


This focus on measuring performance was not unique to my situation. Like most hospitals and healthcare systems across the country, our program and organizational success was largely based on our ability to reduce ED and overnight hospital visits. It was how we received the green light for more funding to expand and serve more people, and to keep the lights on. 


As a nurse case manager, when Angie kept returning to the ED, I didn’t just feel like I was letting her down. I felt like I was letting the organization down, and jeopardizing future funding for the whole program. The pressure to make Angie “perform” well by reducing her ED use was so persistent that although I’d developed deep affection for Angie and her family, when it came time to report outcomes, I flashed to visions of how I would defend myself in front of my peers at case conference. I dreaded the questions about whether it was worth continuing to see Angie, because she wasn’t “making any progress.” I found myself being aloof, not looking up from my clipboard, staying focused on my task list rather than on Angie herself. But underneath that aloofness was my own fear and desperation to change her “performance” so I could keep seeing her.


The tension of needing to make other human beings “perform for the system” was not new to me. Before becoming a nurse, I taught high school science. In just a few years of teaching I had already come to understand a predictable pattern of what happens when we shift accountability for performance outcomes with deeply systemic social roots onto frontline workers like teachers, nurses, and doctors. I saw clearly that dangling carrots, rewarding “top performers,” and shaming people do not work. I learned that applying narrowly focused, performance-driven incentives to human services systems such as healthcare and education makes them behave in predictable, counter-productive ways. This pattern is characterized by paternalistic behaviors, a transactional focus on tasks rather than building trust, and redirecting resources away from those who do not perform to system-defined standards.


Denied Support

Performance measurement has been in use in U.S. public education for decades. It reached its most recent peak with the No Child Left Behind (NCLB) legislation of 2002, which sought to incentivize K-12 public schools to better educate students by instituting increasing levels of penalties for schools that did not meet state-specific measures of student achievement. These measures focused on reading, writing, math skills, and levels of physical fitness. While the more recent Every Student Succeeds Act of 2015 moderates some of those penalties, the focus on standardized testing as a key measure of school and educator performance remains across the U.S. public education system.


In practice, the humanness of childhood and family life often conflicts with this drive for performance. Charlie shows up to “Writer’s Workshop” hungry because his father lost his job. Marcia’s mother died unexpectedly in a car accident, and she cries during math. Nicole has a crush on her female biology lab partner and spends the year questioning her identity. Faced with penalties for under-performing on standardized tests, instead of supporting students and families in navigating this complexity, schools bring in more reading experts and implement mandatory scripted math curricula. The idiosyncratic, messy, but essential process of guiding and caring for students too often falls by the wayside in pursuit of efficiency, standardization, and fidelity to mandated curricula.


Rather than reaching out and offering additional support to the people who most need it, experienced frontline workers learn to avert their eyes from students, patients, and clients who aren’t “performing.” Performance measures incentivize focusing on people who most readily improve performance in the short-term, particularly in response to standardized, top-down improvement strategies. This often redirects attention and resources away from those who most need sustained support from a trusted service provider. 

One moment with Angie stands out in my memory. I was checking in with her at home, checklist in hand, ready to tackle the next round of appointments and service coordination. But instead of running down my to-do list, I put aside my clipboard full of metrics and checkboxes and sat down next to her on the porch. I told her I could see she was still in pain – pain that goes deeper than physical ailments. I asked: Do you want help with that other pain? It was then Angie told me about the roots of her pain and her struggles, roots buried deep in childhood trauma. I held her hand and cried with her as she told her story. 

Angie was eventually dropped from the intensive care management program, denied continued support because she was not making progress in the way the program measured success. Angie was also discharged by the case management programs from all her other healthcare providers. While never explicitly stated, I believe a core reason for her discharge was because she “ruined outcomes averages,” threatening program funding, future grant opportunities, and organizational performance. 

We Don’t Have to Incentivize Humanity

Angie continued to go to the ED regularly throughout the time I worked with her. I could not show measurable improvement in her outcomes in any of the ways my organization or its funders wanted to see. But I did make a difference. Years after I’d moved on to a new role and she had moved to a new city, she called me. To check up on me, she said. She wanted to tell me she was doing OK. By then I was not seeing patients anymore, was no longer facing quarterly reports of ED visits or medication adherence or time between hospital stays. I did not ask Angie about her health, or medications, or appointments. I did not even ask her what “doing ok” meant to her. If she felt she was doing ok, isn’t that all that matters? I talked with Angie about her family, her life. I thanked her for calling. 

My humanness mattered to Angie in ways our measures of performance never captured. Experience shows that despite largely efficiency-driven incentives, front-line workers in jobs of caring will find ways to steal moments for humanness even amid a relentless drive for performance. Teachers call parents after school because they sense something is troubling a student. Primary care doctors call patients to answer questions about test results, even after a 12-hour shift. In hospitals across the country, nurses used their personal cell phones so that patients severely ill with COVID-19 could FaceTime with loved ones. No one gets paid any extra for this. These are good people doing their best to bring humanity into poorly designed systems.

There’s no performance measure that can capture what it meant in that moment to know that Angie cared enough about me to call years later and check in. Her rate of ED visits may not have changed in the time I worked with her, but I knew that I had made a difference in her life. And she made a difference in mine. 

——————

Acknowledgements:

We are grateful to Rachel Davis, Rebecca Sax, and Gwynn Sullivan for making time to review and critique an earlier version of this article. We also thank Jason Turi for introducing us to one another - your spark lit a fire for us both.

Shout out to Wylly Suhendra for taking the beautiful cover photo found on Unsplash here. Attribution is important.


Are you (or your organization) at risk for Adverse Professional Experiences?

First, what are Adverse Professional Experiences?

Just like Adverse Childhood Experiences (ACEs), APEs are bad things that can happen to a person at work that have long-lasting effects on their ability to be effective (or in some cases remain employed at all).


Yikes. What does the data say?



Can I do anything about it?!

Yes! Good news. APEs have antidotes (personal, role, and systems-level antidotes). We know some and are researching the impact of more.

How are APEs being applied?

  • We’re helping organizations put APE antidotes into practice so they don’t unintentionally cause harm to their staff. If you think we could help, let's connect here.


Who created the APEs framework?

Meet Shannon Scott, OLE co-founder. 

Think Shannon’s the coolest? So do we. AND, we have a bunch more brilliant and authentic practical wisdom model-builders here.


Is the famous Maslow pyramid supposed to be flipped?

There’s more to the famous Maslow’s Hierarchy of Needs than we thought.

Raise your hand if you’ve seen this picture on a slide deck:

Maslow’s Hierarchy of Needs (Source)

The thing is, there’s more context to the hierarchy.

Yep. Unpublished papers of Maslow’s (and several scholars*) suggest his famous hierarchy of needs was actually based on the Siksika (Blackfoot) way of life.

*In fact, members of the Blackfoot Nation received a grant from the Canadian Government’s Social Sciences and Humanities Research Council to research Blackfoot influences on Maslow.


Here’s the punchline: MASLOW FLIPPED THE PYRAMID.

In the Blackfoot way of life, Self-Actualization is the base. They believe a sense of belonging and living as the full embodiment of all that you are is the FOUNDATION of a thriving society, NOT the reward you get when you reach the top of the pyramid.

Here’s a quote to help frame this picture:


“In Blackfoot culture, ‘it’s like you’re credentialed at the start. You’re treated with dignity for that reason, but you spend your life living up to that.’ While Maslow saw self-actualization as something to earn, the Blackfoot see it as innate.” Source 

HOW WOULD OUR NATION'S SOCIAL SERVICE INDUSTRY SHIFT IF EVERYONE WERE THOUGHT TO BE “CREDENTIALED FROM THE START”?


No offense to Maslow...

but his unpublished manuscripts suggest he spent just six weeks with the Blackfoot Nation. In his publications about the Hierarchy of Needs, he flipped a key component of their culture, and with it our culture’s broader understanding of self-actualization.

What can we learn from this?

  1. Even well-regarded models that get a lot of publicity have context, and the context/perspective of who develops models matters A LOT. Systems leaders have a responsibility to look into the context of what they’re implementing before scaling models (even those that appear highly evidence based).

  2. This flipped pyramid example highlights the need to be super careful about observing lived experts for a short time and believing we understand the true essence of what they’re doing, their reasoning for doing it, and why it matters. In other words, watching an expert talk about their work doesn't make you an expert on that work. Refer back to the source!


ATTENTION LEADERS & ADMINISTRATORS:

If you could use practical wisdom to help make sure you don’t do something silly (like flip a pyramid that becomes gospel and serves as the basis for how we allocate billions of dollars in resources), CHECK OUT OUR NETWORK OF LIVED EXPERTS.

Shout out to Chase Moyer for the beautiful cover photo, which can be found on Unsplash here. Attribution is important.



Part II: We Don’t Have to Incentivize Humanity

In this article, the second of two parts, we build on co-author Victoria Sale’s personal experience shared in Part 1 to examine evidence showing that social sector performance measurement incentivizes a focus on efficiency over lasting relationships, and standardization over human caring, ultimately working against the people it is intended to benefit. We challenge policymakers, regulators, leaders, and innovators across the social sector to recognize the human cost of performance measurement and redesign how we measure performance in the social sector through shared power with service providers and recipients.

Finding a Path Away from the Perverse Incentives of Performance Measurement

By Ellen Schultz and Victoria Sale 



Driven by recognition of widespread and damning failures in the quality of healthcare and public education, the U.S. social sector has avidly embraced performance measurement as the accountability tool of choice. When used this way, performance measurement seeks to assess service providers such as doctors, teachers, and the hospitals and schools where they work by quantifying important aspects of their services. 

The urge to incentivize better performance by holding service providers accountable for meeting quality standards has an appealing logic, particularly for those paying for services. Yet this logic rests on several questionable assumptions: that we know – and can agree on – what constitutes high quality service, that we can measure this quality standard through quantifiable metrics, and that tying these metric scores to penalties and rewards for service providers will ultimately benefit service recipients.

In Part 1 of this article, co-author Victoria Sale shared her experience caring for a patient we’ll call Angie within an intensive case management program. Struggling to provide the human connection and patience that Angie needed amid intense pressure to meet cost- and efficiency-focused metrics, Sale learned to recognize the human cost of performance measurement, for both service providers and recipients.

A Look in the Mirror

Angie’s case is one example of the unintended consequences of widespread performance measurement adoption in the U.S. social sector. Amid a national crisis of poor health outcomes and escalating healthcare costs, the American healthcare system has embraced performance measurement over the last 30 years. While sporadic efforts to measure healthcare quality in the U.S. date back as early as the mid-18th century, the view that measurement is essential to the delivery of high-quality care emerged in the U.S. amid the shift toward managed care in the 1990s.1 The adage, “you can’t improve what you don’t measure,” credited to renowned management consultant Peter Drucker, took on a reverence akin to gospel. 


Today, the National Quality Forum’s catalog of performance measures boasts over 1,100 metrics, of which more than 400 are endorsed for quality improvement, public reporting, and pay-for-performance uses. Public reporting initiatives such as the federally funded Hospital Compare website link institutional reputations to performance measure scores to try to incentivize better performance. Increasingly, the Centers for Medicare & Medicaid Services ties payment for healthcare services to performance measure scores, such as through the Quality Payment Program.

Yet evidence raises many questions about the effectiveness, and negative consequences, of performance measurement in the social sector. Looking across the public service sector in the U.K., Eleanor Carter and Nigel Ball wrote in SSIR in June 2021 of the many perverse incentives they have observed in evaluating a range of Payment for Results contracts. These contracts pay social service providers based on performance metrics, typically outcome measures, selected by the U.K. national government. A 2017 review of healthcare-focused pay-for-performance programs in the U.S., U.K., and several other countries failed to show any substantial improvement in long-term patient outcomes associated with such programs, with only limited evidence that pay-for-performance improved some care processes.2 Although one recent large study showed significant improvements in the safety of U.S. hospital care over the last decade,3 a 2022 report from the U.S. Office of the Inspector General reported only marginal decreases in the rate of harm Medicare beneficiaries experienced during hospitalization in 2018 (12%), compared to 2010 (13.5%). Over the same time period, the U.S. made major investments in performance measurement: a 2016 study found that U.S. physician practices spent more than $15B annually on performance measure reporting.4

In public education, which has an even longer history of systematic performance measurement, the impact of measurement on education outcomes is similarly mixed. In a comprehensive review of decades of performance measurement efforts across the U.S. K-12 public education system, David Deming and David Figlio conclude that measurement is typically associated with modest gains in student achievement overall.5 While measurement was somewhat effective in narrowing achievement gaps in the lowest-performing schools, the authors caution that the greater the incentives attached to performance measurement, the more likely unintended negative consequences become, including concentrating resources on students who are closest to the desired performance standard rather than on those who are lowest performing. They found that strong incentives also increase the incidence of gaming, like pushing the lowest-performing students into disability classifications or suspending them from school on test days. The authors conclude that although schools generally respond strongly to performance measurement, the response is not always in line with policy intention. In other words, be careful what you incentivize.

When assessing the impact of performance measurement as an accountability tool, we must look not only at this modest improvement, but also at the human cost of such narrow focus on standardized measures of performance. It is time to look ourselves in the mirror and admit the mistake of assuming that incentives that work to improve outcomes in the business and manufacturing world would work for social systems, too. And we must acknowledge the ways that performance measurement has worked against the human connection, personal relationships, and caring that are an essential part of all social sector work.

Sharing Power in Performance Measurement Design

So what do we do instead? Measurement reform efforts often focus on implementing different metrics or relying on alternative data. But shifting only what we measure fails to address power imbalances in the measurement process itself. Implementing one-size-fits-all metrics that reflect the priorities and understanding of just a few stakeholders (the policymakers, researchers, and payers who typically design, develop, and implement performance measurement) ignores the experience and insight of the people most impacted by measurement: front-line service providers and the people they serve.


Rather than offer recommendations for measuring different processes or outcomes, we instead advocate for measuring differently. We call on leaders across the social sector to first recognize the harm caused by current approaches to performance measurement, and then share power with frontline service providers, and service recipients, to collectively redesign performance measurement.


Redesigned social sector performance measurement must make benefitting service recipients its first objective. Achieving this objective requires more than stated goals and good intentions. To benefit the people at the heart of their missions, schools, healthcare organizations, and human service agencies must design measurement in partnership with service providers and the patients, students, and families they serve. Together with system administrators, these partners must:

  • Collaborate in measurement design from the beginning, including the process of deciding what to measure, with clarity around how those metrics ultimately benefit service recipients.

  • Co-design how to collect data and then make data transparent and accessible to all impacted community members. 

  • Make sense of metrics through an iterative and collaborative process that interprets data in light of real-world experiences and social, cultural, and historical context.

  • Decide together how to respond when performance metrics show room for improvement by ensuring service providers and recipients share power in designing and implementing solutions.


This kind of power sharing with front-line service providers and recipients is a radical departure from current performance measurement practice. But radical does not mean impossible. 


Efforts to co-design measurement are already happening in healthcare, child welfare, and community development. Anti-racist, community-centered program and policy design approaches, like those proposed by Sonya Soni and colleagues,6 push us to uphold a standard beyond co-design to “community ownership” where those in power hold “equitable, as opposed to equal” power with community members. As part of the Well-being In the Nation (WIN) initiative, more than 100 communities, organizations, and community members collaborated to identify measures of well-being that span economic, health, food, transportation, and public safety sectors. 


Co-author Ellen Schultz oversaw a series of pilot projects, funded with support from the Robert Wood Johnson Foundation, that developed and implemented measures of healthcare quality in partnership with patients and family caregivers. These projects demonstrated that partnerships among patients, family caregivers, clinicians, and researchers yielded innovative measures that all stakeholders found more meaningful. More recently, the Centers for Medicare and Medicaid Services has encouraged more widespread patient engagement in measure development efforts, though the extent of engagement in practice is unclear and decision making about performance measurement remains firmly in the hands of federal administrators. In contrast, development and implementation of a set of Indigenous Health Indicators, developed by and for the Swinomish Indian Tribal Community, demonstrates the potential for community-driven measurement efforts.7


Like other measure reform efforts, many of these examples focus primarily on questions of what to measure, with less attention on how policymakers and systems administrators use performance measurement or its impact on those delivering and receiving social services. Yet these examples also demonstrate some of the transformation possible when those most impacted by performance measurement begin to shape how we define and measure desired outcomes.

Avoiding perverse effects of performance measurement does not require throwing out performance measurement as an accountability tool. Tailoring performance measurement to the social sector does require acknowledging the harm caused by status quo practices and partnering with frontline service providers and those they serve to co-design a human-centered measurement system. We do not have to incentivize humanity; we are innately human. We simply have to stop creating systems that keep us from being who we are. 


Notes

  1. See Dennis McIntyre, Lisa Rogers, and Ellen Jo Heier, “Overview, History, and Objectives of Performance Measurement,” Health Care Financing Review, vol. 22, no. 3, pp. 7-21, 2001. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4194707/

  2. See Aaron Mendelson, et al., “The Effects of Pay-for-Performance Programs on Health, Health Care Use, and Processes of Care,” Annals of Internal Medicine, vol. 166, no. 5, pp. 341-353, 2017. https://doi.org/10.7326/M16-1881

  3. See Noel Eldridge, et al., “Trends in Adverse Event Rates in Hospitalized Patients, 2010-2019,” Journal of the American Medical Association, vol. 328, no. 2, pp. 173-183, 2022. https://doi.org/10.1001/jama.2022.9600

  4. See Lawrence P. Casalino, et al., “US Physician Practices Spend More Than $15.4 Billion Annually To Report Quality Measures,” Health Affairs, vol. 35, no. 3, 2016. https://doi.org/10.1377/hlthaff.2015.1258

  5. See David J. Deming and David Figlio, “Accountability in U.S. Education: Applying Lessons from K-12 Experience to Higher Education,” Journal of Economic Perspectives, vol. 30, no. 3, pp. 33-56, 2016. https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.30.3.33

  6. See Sonya Soni, Jessica Mason, and Jermeen Sherman, “Beyond Human-centered Design: The Promise of Anti-racist Community-centered Approaches in Child Welfare Program and Policy Design,” Child Welfare, vol. 100, no. 1, pp. 81-109, 2022.

  7. See Jamie Donatuto, Larry Campbell, and Robin Gregory, “Developing Responsive Indicators of Indigenous Community Health,” International Journal of Environmental Research and Public Health, vol. 13, no. 9, p. 899, 2016. https://www.mdpi.com/1660-4601/13/9/899

Acknowledgements

We are grateful to Rachel Davis, Rebecca Sax, and Gwynn Sullivan for making time to review and critique an earlier version of this article. We also thank Jason Turi for introducing us to one another - your spark lit a fire for us both.

Shout out to Wylly Suhendra for taking the beautiful cover photo found on Unsplash here. Attribution is important.
