Sago Health is now TriVoca Health.

We Deliver Projects. Our Clients Want Decisions. 

The healthcare industry faces the gap between what the end-client needs from the research ecosystem and what our value chain has been set up to deliver.

By Isaac Rogers

I don’t think AI is the biggest challenge our industry faces; I think the greater threat is the gap between what the end-client needs from the research ecosystem and what our value chain has been set up to deliver.   

A colleague recently walked me through a large-scale project we’d both watched from a distance. Different agencies, different clients, but the same shape. The brief was sharp. The methodology was solid. Fieldwork came in clean. The deliverable was on time, on budget, and pixel-perfect. Stakeholders nodded. The client signed off on the reporting. Every part of the process felt like it had delivered on its mission.

Six months later, the business decision the research was supposed to inform… had been made by gut, on a different timeline, by people who never opened the report. 

I’ve seen versions of this story so many times I’ve stopped finding it interesting and started finding it instructive. Because if a project can be executed flawlessly and still not move the business forward, that isn’t a project failure. That’s a value chain failure. And I think we, as an industry, owe ourselves the honesty to admit we built it this way.

We Optimized the Wrong Thing 

Walk down the insights value chain and ask, at each stop, what we actually measure ourselves on: 

  • Cost efficiencies: CPI compression, sample efficiency, vendor consolidation, MSAs. 
  • RFPs: Methodology, sample size, timeline, deliverables, price. 
  • Project management: Scope, schedule, hours, change orders. 
  • Fielding: Quotas hit, screen-out rates, days in field, data quality. 
  • Reporting: Slides delivered, charts produced, top-line on time. 
  • Debrief: Meeting held, questions answered, project closed. 

Notice what’s not on that list. The business decision. The thing the client (whether external or internal) was actually trying to make. 

Our entire ecosystem, from the first procurement form to the final invoice, is calibrated to the efficiency of producing a research project. Not to the quality of the business decision the project was supposed to enable. Those aren’t the same thing, and pretending they are has cost our industry more credibility than the AI debate ever will.

How We Got Here Is No Mystery

Over the past two decades I’ve spent in research, I’ve watched us hyper-fixate on how we deliver the research process. Specs come to firms like TriVoca Health as a bulleted list of attributes and a subject line that says “URGENT: PRICING NEEDED ASAP”. The project gets awarded some weeks later, and our team is now fielding work based on a set of details separated from the actual business decision being made. I often compare this to being handed a bag of 11 red Lego rectangles, 15 blue Lego squares, and 4 Lego wheels, and being told “Here, go build this,” without ever being shown the cover image on the box.

As a result, when procurement teams need something to compare across three suppliers, they compare what’s comparable: the project. Cost, timeline, sample, deliverable. Suppliers respond to what they’re measured on, so we got better and better at delivering the project. Faster fielding. Cheaper sample. Tighter timelines. Slicker decks. The whole industry has been locked into a race to be the most efficient deliverer of a thing that, increasingly, nobody is using to make a decision.

This is not a researcher problem. The smartest people I know in this industry want to do consequential, decision-shaping work. They are trapped in a system that rewards on-time, in-scope, and in-budget, and cares far less about whether the work actually mattered. Tell anyone, in any role, that they’ll be evaluated on what they can deliver and not on what changes because of it, and you’ll get exactly what we have today.

What “Outcome-Focused” Would Look Like 

I don’t have a clean framework here, and I’d be suspicious of anyone who claimed to have one. But I know what some of the moves look like, because I’ve watched a few teams quietly start making them: 

  • Brief on the decision, not the method. What is going to be different on the other side of this work? Who has to be convinced of what, by when? If we can’t answer that before we scope, we shouldn’t be scoping. Do you know how often the folks at TriVoca have clarity on the actual end-goal of the project? It’s the exception, not the rule.
  • Stop closing projects at the report. The report is the middle of the engagement, not the end. Whatever we call the next phase (activation, socialization, decision support), it has to live inside the scope, not be the thing somebody fights for budget on after the fact.
  • Measure ourselves on what has changed. Did the roadmap shift? Did the launch get a green light, or a smarter “not yet”? Did someone stop a bad bet? If we can’t tell that story six months later, the project was a transaction, not a piece of consulting. 

The Harder Question

The harder question for our industry isn’t whether this is true. Most people I talk to nod before I get to the second sentence. The harder question is whether we — agencies, fieldwork partners, and clients — are willing to rebuild the contracts, the RFPs, the scopes, the success metrics, and the incentive plans to match. 

Because until we do, the most talented researcher in the world can deliver a flawless project — on time, in scope, on budget — and still hand the client something other than what they came for.

Make the shift from delivering research to solving problems. Our team is here to guide you.