Measuring What Matters Before Day One: Arizona Media Institute


You're building something important. You have the mission, the funding, and the team. What you don't have yet is a way to prove any of it is working.
And if you wait until after you launch to figure that out, you'll spend years trying to measure things backward without the baseline data you needed from day one.
That was the situation facing the Arizona Media Institute: a brand-new initiative backed by a million-dollar founding gift, with ambitious goals (build trust in journalism, improve civic engagement, strengthen the health of local media across the state) and a leadership team that knew measurement infrastructure had to be in place before launch, not bolted on once programs were already running.
TL;DR
- Client: Arizona Media Institute (AZMI), a new initiative of the Center for the Future of Arizona — a nonpartisan nonprofit that supports stronger, fact-based journalism across Arizona through data, resources, and programming
- Situation: A brand-new initiative preparing to launch with ambitious long-term goals that were genuinely difficult to measure. Multiple stakeholders. No existing measurement infrastructure. A leadership team that needed measurement built into the foundation before the institute went live.
- The work: Phase 1 strategic measurement framework — co-developed with CFA leadership and advisors across three areas: Arizona public sentiment, Arizona media health, and programming impact
- The shift: From aspirational goals that couldn't be tracked to a defined measurement framework with specific indicators, data sources, tools, and a three-year roadmap
- The outcome: A measurement foundation AZMI could build from before launch — clarity on what to measure and how across all three areas, dashboard structure and reporting cadences, and a methodology consistent enough to show change over time
The Challenge
The Arizona Media Institute was built to do something genuinely hard to prove: make journalism better, build public trust in media, and contribute to a healthier civic environment across Arizona.
Those aren't vague ambitions. The Center for the Future of Arizona had years of research on what Arizonans care about and where the media landscape was falling short. The Institute launched with real partnerships, a million-dollar founding gift, and a team that believed strongly in the mission.
But they understood a risk that most organizations only realize after the fact. If you don't define what success looks like before you launch, and if you don't build the infrastructure to measure it from day one, you end up with years of activity and no evidence of impact. You can describe the work you did. You can't prove it moved anything.
This work was particularly hard to measure because the outcomes were indirect by design. AZMI wasn't trying to control what journalists published. They were trying to give journalists better data, better connections, and better resources so that Arizona's media could serve the public more effectively. The causal chain from "we shared this data with a journalist" to "civic engagement improved in a rural community" is real and long. Measuring it required a different kind of framework than most organizations use.
There was also the complexity of three very different things to measure at once, each requiring a different methodology, different data sources, and a different reporting cadence:
- Public sentiment: How do Arizonans feel about and trust the journalism they consume? How does that change over time?
- Media health: What's the actual state of journalism capacity across the state, and is it improving or declining?
- Programming impact: Are the events, resources, and tools AZMI offers actually being used, and are they making a difference for the media professionals they're designed to serve?
None of these could be answered the same way or with the same tools. And before any of them could be answered, the framework for answering them had to exist — built before launch, with methodology consistent enough that data from year one would still be comparable to data from year three.
Put together, the challenge looked like this:
- A brand-new initiative with no existing measurement infrastructure
- Three distinct areas to measure, each requiring a different approach and reporting cadence
- Goals that were meaningful and indirect — strengthening journalism to improve civic engagement to move public priorities forward is not a short causal chain
- Multiple stakeholders across CFA, AZMI, and an advisory council all with different views on what success looked like
- A three-year commitment that required consistent methodology from the start
- Leadership accountable to a CEO and executive vice president who needed something concrete enough to demonstrate impact to funders
The Shift
The assumption most organizations operate on is that measurement is something you build after you have results to measure. You do the work, generate the activity, and then try to retroactively prove it had impact. What they discover is that they never captured the baseline data they needed, they didn't define what impact meant before they started, and years later they have a lot of anecdote and not much evidence.
They needed measurement built into the foundation of the initiative — which meant working through the hard questions before launch. What does success actually look like for each of the three areas? What data can realistically be collected given existing tools and team capacity? What can be measured directly, and what requires indirect proxies? What does a comparison baseline look like when there isn't one yet?
Those questions had to be answered in a specific order, and answering them meant working through each area of measurement in real detail before acting on any of it: not at a high level, but down to specific KPIs, data sources, tools, and reporting timelines.
The Work
The engagement was a full discovery and framework development process, co-built with AZMI's leadership team and designed to be validated with advisors before anything was implemented.
Stakeholder Discovery and Research Review
Started with deep discovery: a kickoff session with the full team to understand goals, existing resources, partner relationships, and hypotheses about what success might look like. Reviewed third-party research CFA already had access to — including the Gallup Arizona Survey, the Arizona Progress Meters, the Arizona Voters' Agenda reports, and landscape research from national organizations tracking local journalism health.
This context was critical. CFA had years of data on what Arizonans care about. That data became the anchor for defining what quality journalism meant in practical terms — journalism that covers the topics Arizonans have confirmed matter most to them, in ways communities can actually engage with.
- Kickoff session covering goals, constraints, partner relationships, and team hypotheses
- Review of existing CFA research and third-party industry data
- Mapping of existing tools to understand what was already in place and where gaps existed
- Identification of what data could be captured from day one versus what required new infrastructure
Arizona Public Sentiment Framework
The first area of measurement was the hardest conceptually: how do you measure public trust in and engagement with media across an entire state, in a way that's comparable over time?
Two approaches, used together: first-party survey research to capture attitudes directly, and sentiment analysis tools to track how media coverage and public conversation around key topics shift over time. One important decision was making sure the methodology was consistent enough that data collected in year one would be comparable to data collected in year three.
One finding that shaped this section: existing trust surveys were often produced by trade associations with their own interest in the results. That created a credibility gap AZMI had the opportunity to fill. Being an independent, nonpartisan source of this data is a different and more valuable role than simply citing what the industry says about itself.
- Survey framework covering public trust in local media, engagement with journalism, and access to coverage across geographic areas — with specific attention to rural communities underrepresented in existing research
- Sentiment analysis recommendations: tools to track topic conversation volume over time, measuring whether coverage of the issues Arizonans care about most was increasing
- Methodology for baseline measurement and year-over-year comparison
- Question design guidance for surfacing real attitudes rather than reflexive responses
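The year-over-year comparability requirement above has a simple operational consequence: the question set gets locked at baseline, and later waves are compared only on questions that existed in year one. A minimal sketch, with made-up question IDs and values purely for illustration:

```python
# Illustrative only: question IDs and scores are hypothetical, not AZMI's
# actual survey instrument. Values are the share of respondents agreeing.
baseline = {"trust_local_news": 0.41, "reads_weekly": 0.58}
year3    = {"trust_local_news": 0.47, "reads_weekly": 0.55, "added_later": 0.30}

def change_from_baseline(baseline: dict, wave: dict) -> dict:
    # Only questions present at baseline are comparable over time;
    # questions added in later waves are excluded from trend reporting.
    return {q: round(wave[q] - v, 2) for q, v in baseline.items() if q in wave}

print(change_from_baseline(baseline, year3))
# {'trust_local_news': 0.06, 'reads_weekly': -0.03}
```

Questions added after launch can still be reported, but they start their own baseline rather than joining the original trend line.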
Arizona Media Health Framework
The second area was about the health of the media landscape itself: how many outlets exist across the state, where coverage is strong and where it's thin, whether journalism capacity is growing or declining, and how the quality of coverage is changing.
The key challenge here was distinguishing between volume and quality. The number of stories published about a topic is not the same as whether those stories are accurate, fact-based, and connected to what Arizonans actually care about. The framework had to capture both.
A quality-scoring approach emerged from this work — a set of defined criteria applied consistently over time so AZMI could track not just whether journalism was happening, but whether it was the kind of journalism the Institute existed to support.
- Indicators for media capacity: outlet quantity, geographic coverage including rural areas, journalist staffing, industry sustainability
- Framework for tracking topic coverage volume over time, tied specifically to the issues Arizonans have confirmed as priorities in CFA research
- Quality scoring approach for evaluating coverage against defined criteria: fact-based content, accuracy, coverage of priority topics, reach beyond metro areas
- Methodology for drawing on third-party data sources alongside AZMI's own monitoring tools
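A scoring rubric like the one described above only shows change over time if the criteria and weights stay fixed. The sketch below illustrates the mechanics; the criterion names and weights are hypothetical stand-ins, not AZMI's actual rubric:

```python
# Hypothetical quality-scoring rubric: criteria and weights are illustrative.
# The point is consistency — the same weighted criteria applied every year.
CRITERIA = {
    "fact_based": 0.35,      # cites verifiable sources
    "accuracy": 0.30,        # no known errors or corrections
    "priority_topic": 0.20,  # covers an issue Arizonans ranked as a priority
    "beyond_metro": 0.15,    # reaches audiences outside major metro areas
}

def quality_score(ratings: dict) -> float:
    """Weighted 0-100 score from per-criterion ratings, each rated 0.0-1.0."""
    if set(ratings) != set(CRITERIA):
        raise ValueError("ratings must cover every criterion exactly once")
    return round(100 * sum(CRITERIA[c] * ratings[c] for c in CRITERIA), 1)

story = {"fact_based": 1.0, "accuracy": 1.0, "priority_topic": 0.5, "beyond_metro": 0.0}
print(quality_score(story))  # 75.0
```

Keeping the rubric closed (every criterion rated, nothing added mid-stream) is what makes a year-three score comparable to a year-one score.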
Programming Impact Framework
The third area was more operational: was AZMI's programming actually being used, and was it making a difference for the media professionals it was designed to serve?
AZMI was building three primary tools for launch — a data library, an events and programming function, and a subject matter expert database that journalists could access. Each required a different measurement approach.
The programming impact framework defined what to track across each tool, recommended how to structure data collection, and established a monthly dashboard model that would give AZMI's team a real-time view of engagement across all three areas.
- Data library: tracking access frequency, which topics are most accessed, and by which types and locations of outlets
- Events and programming: registration versus actual attendance, post-event feedback capture, qualitative responses from participants, and repeat engagement tracking
- Subject matter expert database: login and access tracking, how experts are referenced in published stories, coverage type and geographic reach of the resulting journalism
- Salesforce integration: tracking media professional engagement across touchpoints from email opens to program participation to content usage, enabling more targeted outreach over time
- Dashboard structure: monthly reporting on programming impact with KPIs defined by priority so the team could be realistic about what was immediately trackable versus what would take time to set up
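The monthly dashboard described above is, mechanically, a roll-up of raw engagement events into a handful of KPIs. A minimal sketch, assuming a simple event log; the event shape, tool names, and KPI names are illustrative, not AZMI's actual schema:

```python
# Illustrative roll-up of raw engagement events into monthly dashboard KPIs.
# Field names, tool names, and KPIs are assumptions for this sketch.
from collections import Counter

events = [
    {"month": "2024-09", "tool": "data_library", "user": "outlet_a", "action": "access"},
    {"month": "2024-09", "tool": "events", "user": "outlet_a", "action": "register"},
    {"month": "2024-09", "tool": "events", "user": "outlet_a", "action": "attend"},
    {"month": "2024-09", "tool": "events", "user": "outlet_b", "action": "register"},
    {"month": "2024-09", "tool": "expert_db", "user": "outlet_b", "action": "login"},
]

def monthly_kpis(events: list, month: str) -> dict:
    rows = [e for e in events if e["month"] == month]
    counts = Counter((e["tool"], e["action"]) for e in rows)
    registered = counts[("events", "register")]
    attended = counts[("events", "attend")]
    return {
        "library_accesses": counts[("data_library", "access")],
        # Registration vs. actual attendance, as described above.
        "event_attendance_rate": attended / registered if registered else 0.0,
        "expert_db_logins": counts[("expert_db", "login")],
        "active_users": len({e["user"] for e in rows}),
    }

print(monthly_kpis(events, "2024-09"))
```

In practice this aggregation would live wherever the events do (Salesforce reports, a warehouse query, a spreadsheet), but the shape is the same: raw touchpoints in, a small fixed set of monthly KPIs out.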
Three-Year Measurement Roadmap
Tied everything together into a roadmap: which areas get monthly reporting, which require annual or three-year benchmarks, when advisors should be engaged, and what milestones mark progress between phases.
- Monthly reporting: programming impact dashboard
- Annual and three-year measurement: public sentiment and media health, requiring consistent methodology across the full measurement period
- Advisory engagement plan: timing, format, and scope of input at each stage of the work
- Phase 2 handoff: clear definition of what baseline assessments needed to happen once the framework was in place and AZMI had launched
The Outcome
The team had what they needed before launch: a complete measurement framework that defined what success looked like, how to measure it, and what infrastructure needed to be in place to do so.
The framework gave AZMI a shared vocabulary for talking about impact — internally, with leadership, with funders, and with the advisory council they were bringing into the process. Instead of "we believe we're building trust in journalism," they had specific indicators they were tracking, specific tools producing that data, and a methodology consistent enough to show change over time.
The three-year design mattered most here. Data collected in year one only means something in year three if the methodology was defined before either year started. That's what this engagement made possible.
- Complete measurement framework across all three areas, in place before AZMI's fall launch
- KPIs defined across Arizona public sentiment, media health, and programming impact
- Tool and data source recommendations mapped to each area
- Monthly dashboard structure for programming impact with a clear reporting cadence
- Three-year methodology for public sentiment and media health measurement
- Roadmap for advisor engagement at each phase of the work
- Clear handoff plan for Phase 2 baseline assessment and implementation
The Takeaway
If you're launching something designed to move long-term, indirect outcomes — public trust, civic engagement, behavior change — the instinct is to start doing the work and figure out measurement later. The problem is that later never produces a baseline. You can't show change from a starting point you never recorded.
The more meaningful question isn't "how will we know if it's working?" It's "what does working actually look like, specifically, and can we define that before we start?" Those are different questions. The second one is harder. It's also the one that makes the first one answerable three years from now.
Build the measurement framework first. Then launch.
