Vendor Scorecard: Redesigning a legacy supplier performance workflow into NIQ standards

Modernized a legacy Precima scorecard into the NIQ design system while prototyping comparison features that helped clients evaluate supplier performance more clearly.

Overview

Global retail ecosystems rely on hundreds of vendors, each with different data standards, timelines, and reliability. This fragmentation often leads to misaligned expectations and blame loops.


The Vendor Scorecard was designed to bridge those silos — aligning teams on a unified view of performance and creating measurable trust between vendors and retailers.

TL;DR

The Problem

Vendor and supply chain teams were working from fragmented reports, inconsistent KPI definitions, and manual spreadsheets, making supplier reviews slow, reactive, and hard to trust.

The Insight

The core issue wasn't missing data. It was missing confidence in how performance was calculated. Teams needed transparent scoring, shared definitions, and a faster path from summary to root cause.

The Solution

I designed a unified Vendor Compliance Scorecard that brought KPI health, trend comparisons, and drill-down evidence into one review-ready workflow, helping teams move from reconciliation to action.

The Impact

60% less manual reporting, aligned regional definitions, and better retailer-vendor decision-making. 

  • 25%

    Audit prep time reduced

    Evidence-backed reporting cut manual review effort and reduced spreadsheet dependency.

    Ops efficiency

  • $820K

    Revenue risk surfaced earlier

    The scorecard highlighted vendor issues sooner, helping teams identify commercial risk before escalation.

    Business impact

  • 15%

    Supplier OTIF improved

    Clearer visibility into vendor performance made quality and timing issues easier to address.


    Supplier performance

  • $22.5K / qtr

    Quarterly audit cost savings

    Standardized reporting reduced manual audit effort and improved meeting-readiness across teams.


    Cost savings

Duration

  • 3 months


My Role

Lead Product Designer

Team

  • 2 Product Managers

  • 2 BI Developers

  • 2 Data Scientists

  • 3 Engineers

Target Audience

  • Vendor Managers

  • Retail Category Teams

  • Supply Chain Operations

Scope

Research, UX, UI, Prototyping, Testing

Tools

  • Figma

  • Excel

  • Highcharts

  • Jira

Industry

  • Retail supply chain

  • Vendor performance management


Platform

Web (Desktop-first)

Users Impacted

Used by regional vendor and retail teams during recurring performance reviews

Coverage

Designed to support standardized supplier evaluation across multiple regions

The Challenge

Evolving a legacy supplier scorecard into a more modern, comparison-ready experience

The supplier scorecard already existed in the older Precima environment, but its structure and presentation no longer matched the direction of NIQ’s newer design standards. At the same time, business stakeholders wanted to demonstrate more advanced comparison functionality to clients — including side-by-side chart comparison and clearer sparkline labeling.

The challenge was therefore twofold: redesign the legacy scorecard into a more consistent NIQ experience, and prototype new comparison patterns that could better support client review conversations.

  • Existing reporting structure in the earlier product

  • Legacy Precima scorecard comparison view

  • Legacy Precima scorecard patterns

Activate-aligned redesign direction

Business Requirement

My task was to redesign the legacy supplier scorecard using NIQ design standards and create a prototype that demonstrated comparison functionality for clients — including chart comparison patterns and sparkline label interactions.

The goal was to make the scorecard feel more modern, more consistent, and more effective for performance review conversations.

Standardize the experience

Bring the legacy scorecard into NIQ design patterns and visual standards.

Improve comparison workflows

Support side-by-side chart comparison and clearer metric interpretation.

Prototype for client conversations

Use interaction concepts like sparkline labels to communicate future product value.

Business requirements and KPI logic

Design Principles

The redesign was guided by three principles: align the experience to NIQ standards, make supplier performance easier to compare, and prototype interactions that communicated value more clearly in client conversations.

1

Aligned

Bring the experience into NIQ standards

The legacy supplier scorecard needed to evolve beyond older Precima patterns into a more modern, system-consistent NIQ experience. This meant updating structure, component behavior, and hierarchy to align with the broader supply-chain suite.

2

Comparable

Make supplier performance easier to compare

The scorecard needed to support clearer comparison across metrics, timeframes, and supplier views. KPI layout, chart structure, and comparison patterns were designed to help users interpret change faster.

3

Communicable

Prototype for clearer client conversations

The work also needed to help stakeholders communicate future product value to clients through interaction concepts like chart comparison and sparkline labels.

Research & Insights

To redesign the scorecard effectively, I looked beyond the interface itself. I reviewed the legacy reporting structure, business requirements, metric definitions, and workflow patterns to understand how users moved from high-level KPI monitoring into comparison, investigation, and detailed validation.

The goal was not simply to modernize the UI, but to clarify how dense supplier-performance data should be structured for faster review, deeper comparison, and more confident decision-making.

Information Architecture & User Flows

The scorecard needed to support three behaviors: first-time orientation, repeat monitoring, and deep investigation.

Rather than treating every user the same, I structured the experience around how people actually moved through the report. New users needed a clear path from KPI summary to deeper evidence. Returning users needed to quickly check what changed. Power users needed faster filtering, richer comparison, and detailed validation.

This led to a layered information architecture built around four core views: Overview, Performance, Trends, and Compliance Details. Together, these views supported a consistent progression from high-level scanning to focused comparison, then into evidence-backed investigation.

  • First-time users: understand the report structure, review active filters, scan top-level KPIs, and move into supporting evidence only when needed.

  • Power users: deeper analysis through faster filter refinement, multi-dimensional comparison, tooltip-based interpretation, and detailed validation in compliance views.

  • Returning users: repeat monitoring that helps users resume context quickly, compare periods, review key changes, and validate whether follow-up action was needed.

The architecture was designed to support different levels of familiarity while keeping the reporting flow consistent: scan → compare → investigate → validate.
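As a hedged illustration of this layered architecture (the type and field names below are mine, not taken from the actual product's data model), the four core views and the scan → compare → investigate → validate progression could be sketched as:

```python
from dataclasses import dataclass, field

@dataclass
class Kpi:
    name: str      # e.g. "OSA rate" or "Missed sales"
    value: float
    delta: float   # change vs. the previous period

@dataclass
class ReportView:
    title: str
    depth: int     # position in the scan -> compare -> investigate -> validate flow
    kpis: list[Kpi] = field(default_factory=list)

# The four core views, ordered from high-level scanning to detailed evidence
scorecard = [
    ReportView("Overview", depth=1),
    ReportView("Performance", depth=2),
    ReportView("Trends", depth=3),
    ReportView("Compliance Details", depth=4),
]
```

The ordering encodes the design decision directly: a user's path through the report always moves from shallower to deeper views, never sideways between unrelated screens.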

KPI and Reporting Architecture

The redesign needed to support more than a visual refresh. It had to organize executive KPI summaries, comparison analysis, trend evaluation, and detailed validation into a structure that felt clear, layered, and review-ready.

CORE DECISIONS

Executive KPI Summary

Surfaced high-level signals like OSA rate, missed sales, flags, value sales, and operational health.

Comparison and Trend Analysis

Supported current vs previous period review, standard vs promo analysis, and comparison across performance dimensions.

Detailed Validation

Connected summary signals to geo analysis, compliance details, root causes, and tabular evidence.

Low-Fidelity Concepts and Iterations

Before moving into high-fidelity reporting screens, I explored how the scorecard should be structured at a lower level — from KPI grouping and chart hierarchy to comparison layouts and detailed evidence views.

These concepts helped test how users would move between summary signals, trend analysis, and tabular validation. The goal was to find a structure that felt modular, scalable, and easier to interpret before refining it into the final NIQ Discover-style reporting experience.

Convergence

After exploring multiple directions, I converged on a more balanced reporting model that combined the strengths of each concept without overcommitting to any single one.

The KPI-first concept improved top-level orientation, but pushed deeper comparison too far down. The comparison-first concept made trend analysis stronger, but weakened quick scanability. The investigation-first concept brought evidence forward, but felt too deep too early for first-pass review. The modular reporting direction provided the strongest foundation because it created a more scalable structure across summary, comparison, and detailed validation.

The final direction moved toward a layered reporting flow: orient first, compare second, investigate third, and validate with detailed evidence last. This structure aligned better with how users actually consumed the scorecard — starting with high-level health signals, then moving into period comparison, trend interpretation, and deeper drilldown only when needed.

Kept from KPI-First

Clear executive summary and stronger top-level orientation

Kept from Comparison-First

Trend and period analysis moved higher in the workflow

Kept from Investigation-First

Evidence and detailed validation remained essential

Kept from Modular Reporting

A more scalable, system-friendly structure across report modules

Final NIQ Discover Experience

The final direction translated the scorecard into a more structured NIQ Discover-style reporting experience — balancing executive KPI visibility, comparison workflows, trend analysis, and detailed evidence views within one system.

Executive KPI Overview

Surfaced top-level performance signals such as OSA rate, missed sales, flags, and value sales for faster orientation.

Comparison and Trend Analysis

Supported comparison across periods and performance dimensions, helping users understand what changed and where deeper investigation was needed.

Trends Views

Focused on trend evaluation over time, helping users interpret sustained changes rather than single-period shifts.

Compliance Details

Enabled evidence-backed validation through detailed records, sorting, filtering, and drilldown-ready tables.

Impact & Outcome

The redesign created a more structured reporting workflow across summary, comparison, and validation. Rather than functioning as a single dashboard, the scorecard supported recurring supplier-performance reviews with clearer scanability, stronger comparison, and deeper investigation.

4+

Regional teams

Adoption

Adopted across recurring vendor review workflows, helping standardize how performance was monitored.

50+

Stakeholders

Usage

Used by recurring vendor, retail, and supply-chain stakeholders during review conversations.

25%

faster

Ops efficiency

Reduced manual prep and made summary-to-evidence review more efficient.

10+

Metrics

Reporting depth

Surfaced business-critical measures such as OSA, missed sales, units, vendor risk, and issue status in one reporting model.

Validation

Testing whether the reporting structure actually worked

We explored multiple structural paradigms for the dashboard before converging on a hybrid approach.

The final direction was shaped through iterative review and feedback on how the scorecard supported real reporting behaviors. The strongest signals were consistent: users needed faster top-level orientation, comparison had to become more visible, and detailed tables still needed to remain accessible for validation.

Rather than validating a single screen in isolation, the focus was on whether the overall reporting flow worked — from KPI summary, to comparison, to evidence-backed follow-up.

The evolving structure made the scorecard easier to scan at the top level while still supporting deeper analytical follow-up.

Jimmy Martins

— Director of Design, NIQ

Compliance Levels (KPI snapshot from the final design)

  • Orders OTIF: 85.67% (21pp)

  • Line Items OTIF: 32.67% (21pp)

  • % On Time: 85.67% (21pp)

  • Fill Rate: 85.67% (21pp)

  • NSV: -- (N/A)
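OTIF (on time, in full) figures like the ones above are conventionally computed as the share of orders that arrive both by the agreed date and with the complete quantity. A minimal sketch, with `Order` and `otif_rate` as illustrative names rather than anything from the product:

```python
from dataclasses import dataclass

@dataclass
class Order:
    on_time: bool   # delivered by the agreed date
    in_full: bool   # delivered with the full ordered quantity

def otif_rate(orders: list[Order]) -> float:
    """Share of orders delivered both on time and in full, as a percentage."""
    if not orders:
        return 0.0
    hits = sum(1 for o in orders if o.on_time and o.in_full)
    return 100 * hits / len(orders)

# Example: 2 of 4 orders are both on time and in full
sample = [Order(True, True), Order(True, False), Order(False, True), Order(True, True)]
print(otif_rate(sample))  # 50.0
```

Counting an order only when both conditions hold is what makes OTIF stricter (and usually lower) than either the on-time rate or the fill rate alone.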

What improved

The summary layer became easier to scan. KPI grouping and status blocks made it clearer what needed attention first, without forcing users into detailed analysis too early.

What stayed essential

Detailed tables and compliance views remained critical. Even with stronger overview and trend views, users still needed evidence-backed records to validate what changed and why.

What feedback reinforced

Comparison had to be more visible in the workflow. Trend and period-based analysis became more useful once it moved higher in the reporting structure instead of being buried below summary content.

Detailed views remained essential for validating performance shifts, so the final structure kept evidence accessible without making the overview too dense.
Mayukh R

— Retail Category Manager, Walmart

Trade-offs

Scanability vs. density

Problem:

Too much detail at the top made the report harder to scan.

Decision:

Kept the summary layer focused and moved deeper details lower.

Trade-off:

Users got faster orientation, but had to move one step deeper for full validation.

Comparison vs. simplicity

Problem:

Adding more comparison made the first view feel heavier.

Decision:

Made comparison easier to access without letting it dominate the overview.

Trade-off:

The report became more analytically useful, but slightly less minimal at first glance.

Learnings & What's Next

What Worked Well

Layering summary, comparison, and validation improved scanability.

Surfacing comparison earlier made performance shifts easier to interpret.

Keeping detailed evidence accessible preserved trust in the workflow.

What I'd Do Differently

Define success metrics earlier.

Test more role-specific workflows.

Explore richer contextual guidance for repeat users.

Jesal.ai

Contact

+1 902 401 9629

Address

231 Fort York, Toronto ON Canada

© ️2025 - Jesal.ai ALL RIGHTS RESERVED
