
Evaluating a Data Visualization Platform for Nonprofit Ecosystem Mapping

UX Research Lead · 2 weeks · Q3 2025 · 4 min read

Evaluative Research · Qualitative Research · Usability Testing · B2B · Data Visualization · Stakeholder Management

Team: Solo researcher; collaborated with PM and Designer


Overview

Candid was preparing to launch a new visualization tool that would allow nonprofits and funders to explore funding trends over time, identify ecosystem gaps, and share visual insights. The platform team had developed interactive prototypes that served two primary personas: philanthropy-serving organizations (PSOs) and funders. The team needed validation before committing development resources. As the sole UX researcher on this project, I led end-to-end evaluative research to determine whether the designs met user expectations and which features were truly essential for v1.

The Challenge

The team had invested significant design effort into a feature-rich prototype, but there was no evidence that the proposed workflows matched how users actually thought about funding data. Without research, the team risked building features users didn't need while missing ones they did, burning development cycles and delaying launch. The core risk was committing to the wrong scope for v1.

Constraints

  • Timeline: 2-week window (1 sprint) to deliver actionable recommendations
  • Budget: Limited participant incentives narrowed recruiting options
  • Access: Difficulty scheduling with busy staff at foundations and PSOs

Approach and Methodology

Method: 8 moderated in-depth interviews with design walkthrough (60–90 minutes each)

Why interviews over surveys or another method? The tool involved complex data visualizations and multi-step workflows. I needed to observe real-time reactions, probe confusion points, and understand workflow context, nuances a survey couldn't capture.

Additional details: Budget constraints and scheduling difficulty with foundation executives limited the sample size. I reached thematic saturation by participant 6; the final 2 sessions confirmed existing patterns. This was sufficient for the decision at hand: identifying major usability gaps and prioritizing features.

Execution

  1. Collaborated with PM and Designer to define research questions and align on what decisions the research needed to inform
  2. Created a semi-structured interview protocol with task-based design walkthroughs
  3. Ran pilot sessions using Figma prototypes with colleagues to refine the protocol
  4. Walked stakeholders through the methodology and secured buy-in before fieldwork
  5. Conducted 8 sessions over 1 week, uploading transcripts to Dovetail for qualitative coding
  6. Synthesized themes using affinity mapping, then prioritized recommendations into decision tiers
  7. Presented findings to PM and Designer, then collaborated on solution ideation; also presented to cross-functional stakeholders, VP of Product, and Director of UX
  8. Tracked research outcomes and follow-through via Jira tickets

Key Findings

I organized findings into three tiers to help stakeholders make clear roadmap decisions:

  • Critical Issues (MVP Blockers) — Usability problems and missing functionality that would prevent adoption if not addressed before launch. A critical filtering gap was the top finding: users couldn't narrow visualizations to their area of focus, which broke the core workflow. These became immediate priorities for the dev team.
  • High-Value Additions (Strong MVP Candidates) — Features that weren't strictly blockers but would significantly improve the user experience and support core workflows. Recommended for inclusion if timeline allowed.
  • Post-MVP Opportunities — Desirable features with clear user demand but high implementation complexity. 5+ features were documented and deprioritized to v2 to prevent scope creep, including one visualization concept that tested poorly across sessions.

Impact

Product: Prioritized filtering UX, edit functionality, and visualization redesign for MVP. Created a clear v1 vs. v2 feature roadmap with stakeholder alignment on what would ship when.

Business: Caught a critical filtering gap that would otherwise have surfaced post-launch, deprioritized 5+ complex features that would've delayed time-to-market, and avoided building a visualization that tested poorly.

Organizational: Set up a research backlog for v2 planning, recommended follow-up usability testing on the filter interface, and flagged map functionality for a dedicated testing session. Gave stakeholders confidence in go-forward decisions.

Reflection

The hardest decisions were around deprioritizing customization features. Users wanted them, and the design team had invested in the concepts, but our timeline and resources couldn't support them in v1. I had to present data showing that while desirable, these features weren't essential to core workflows. Strategic "no's" create better products than feature-stuffed launches. Close collaboration with the PM was essential here.

If I could redo this project, I'd add a lightweight prioritization exercise at the end of each interview. While qualitative depth was essential for understanding why features mattered, having numerical rankings would've strengthened the case for scope cuts in stakeholder conversations.