ProjectPhD

Methodology & Provenance

The benchmark is the product.

What follows is how it was built, how it is maintained, and what it can and cannot claim.

The 20-Year Empirical Record

Built through active practice. Not assembled from secondary research.

ProjectPhD’s diagnostic methodology and benchmark dataset were built through 20 years of active program assurance practice. Every dimension, every condition, and every intervention reflects programs that reached outcomes — not programs that started.

The dataset is not theoretical. It is derived from direct program assurance engagements, outcome-coded at engagement close, and maintained under a documented benchmark integrity policy.

The diagnostic is designed as forward-looking strategic intelligence — it identifies risks before they materialise and provides conditions-to-proceed to address them. It explicitly assesses business readiness and adoption risk alongside delivery controls, recognising that a technically complete program the organisation is not ready to adopt is a systemic failure pattern.

2,000+      Historical diagnostics in the benchmark dataset
1,200       Outcome-coded (sponsor expectation and business outcome satisfaction)
~25%        ERP and core system programs
~20%        Regulatory compliance programs
20 years    Active program assurance practice
Built by practitioners who’ve sat in the room when programs fail — and when they don’t. Every dimension, every condition, every intervention is grounded in what boards and sponsors actually needed to see and do.

MOZAIC: THE RELATIONSHIP

The dataset source, the conflict policy, and how they interact.

ProjectPhD is a product of Mozaic. The benchmark dataset and Recommendations Library were built through Mozaic’s program assurance practice over 20 years. The diagnostic methodology is standardised and versioned — it is not tuned per client outcome and does not change based on delivery relationships.

Conflict Disclosure Policy

Where Mozaic is delivering or bidding for work on the same program, that conflict is disclosed in the Board Assurance Report and the Governance Decision Memo. The scoring methodology is independent and standardised — not tuned to client expectations or outcomes. No contingent fees are tied to diagnostic outcomes. The conflict disclosure is printed, not footnoted.

BENCHMARK INTEGRITY POLICY

Four controls govern the benchmark dataset.

1. Peer Review
Methodology reviewed by independent practitioners on a scheduled basis. Version-dated and published: “Methodology v1.2, reviewed [month/year].” Recorded in every Board Assurance Report and Governance Decision Memo.
2. Recency Dating
Every benchmark cohort displays the date range of contributing diagnostics. Cohorts older than a defined threshold are flagged: “Dated cohort — interpret with care.” No silent use of stale cohort data. (The recency, N-disclosure, and outcome-coding checks are sketched in code after this list.)
3. N-Disclosure
When cohort matching produces thin results, the Board Assurance Report says so: “We matched your program to a cohort of 8. Interpret the percentile with caution.” Confidence intervals are widened. False precision is worse than disclosed uncertainty.
4. Outcome Coding Basis
Outcome codes are based on two sponsor-reported statements at engagement close: “Did this program deliver to your expectations?” and “Did it achieve the business outcomes you expected?” This is a consistent proxy across the dataset — not objective success measurement. Basis and limitations are disclosed in every report.
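
To make controls 2 to 4 concrete, here is a minimal sketch in Python. The threshold values, field names, and the cohort_disclosures function are assumptions for illustration; the published integrity policy, not this sketch, defines the actual thresholds and report wording.

```python
from datetime import date

# Assumed values and field names -- illustrative only; the published
# Benchmark Integrity Policy defines the real thresholds and wording.
STALE_AFTER_YEARS = 5   # recency threshold (assumption)
THIN_COHORT_N = 15      # cohort size below which intervals widen (assumption)

def cohort_disclosures(cohort: list[dict], today: date) -> list[str]:
    """Disclosures a report would print for a matched benchmark cohort.

    Each diagnostic dict carries a 'closed' date and the two sponsor-reported
    booleans: 'met_expectations' and 'achieved_outcomes'.
    """
    notes = []

    # 2. Recency dating: always show the contributing date range; flag staleness.
    oldest = min(d["closed"] for d in cohort)
    newest = max(d["closed"] for d in cohort)
    notes.append(f"Cohort window: {oldest:%b %Y} to {newest:%b %Y}")
    if today.year - newest.year > STALE_AFTER_YEARS:
        notes.append("Dated cohort -- interpret with care.")

    # 3. N-disclosure: state the match size; caution on thin cohorts.
    n = len(cohort)
    notes.append(f"Matched cohort size: N = {n}")
    if n < THIN_COHORT_N:
        notes.append("Interpret the percentile with caution; "
                     "confidence intervals are widened.")

    # 4. Outcome coding basis: the two sponsor statements, disclosed as a proxy.
    coded = [d["met_expectations"] and d["achieved_outcomes"] for d in cohort]
    notes.append(f"Outcome-coded success rate (sponsor-reported proxy): {sum(coded) / n:.0%}")
    return notes
```

The point is behavioural: every disclosure is computed and printed, never silently dropped.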

Statistical regression is used to estimate correlations and the confidence levels around them. The methodology does not stretch beyond what the evidence supports.
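
As an illustration of the kind of analysis described, a minimal sketch follows. It assumes a logistic regression of sponsor-reported outcome codes on a single dimension score, with synthetic data; the model choice, variables, and tooling are assumptions, not published ProjectPhD internals.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data -- illustrative only.
# X: one dimension score per past diagnostic (e.g. Governance & Sponsorship, 1-5).
# y: sponsor-reported outcome code (1 = expectations met and outcomes achieved).
rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(200, 1))
p = 1 / (1 + np.exp(-(X[:, 0] - 3)))          # assumed underlying relationship
y = (rng.uniform(size=200) < p).astype(int)

# Logistic regression: estimate how strongly the dimension score is
# associated with outcome, with a confidence interval on that estimate.
model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.params)       # fitted intercept and slope
print(model.conf_int())   # 95% confidence intervals -- the disclosed
                          # uncertainty around each correlation
```

The confidence intervals, not the point estimates, are what the reports disclose when evidence is thin.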

DIAGNOSTIC DIMENSIONS

What the diagnostic covers, and what it does not.

Default: Board View (8 dimensions)

  • Governance & Sponsorship
  • Business Case & Benefits
  • Project Leadership
  • Scope & Requirements
  • RAID & Dependencies
  • Vendor Performance
  • Plan & Schedule
  • Change & Communications
Optional: Delivery View (all 16 dimensions, adding:)

  • Cost Tracking
  • PM Processes
  • Resources & Teamwork
  • Solution & Architecture
  • Development
  • Testing
  • Implementation Planning
  • Post-Implementation Planning

Business readiness and adoption risk are assessed within the Change & Communications and Business Case & Benefits dimensions. This is a deliberate design choice. Programs which deliver technical outputs but fail to achieve business outcomes represent a systemic failure pattern — the diagnostic explicitly tests whether the organisation is ready to adopt what is being built.
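
The two views can be read as a strict subset relationship. A minimal sketch, assuming a simple list representation (the data structure is an illustration, not ProjectPhD's internal schema):

```python
# Dimension names as listed above; the subset structure is the point:
# the Board View is always contained in the Delivery View.
BOARD_VIEW = [
    "Governance & Sponsorship", "Business Case & Benefits",
    "Project Leadership", "Scope & Requirements",
    "RAID & Dependencies", "Vendor Performance",
    "Plan & Schedule", "Change & Communications",
]

DELIVERY_VIEW = BOARD_VIEW + [
    "Cost Tracking", "PM Processes",
    "Resources & Teamwork", "Solution & Architecture",
    "Development", "Testing",
    "Implementation Planning", "Post-Implementation Planning",
]

assert len(BOARD_VIEW) == 8 and len(DELIVERY_VIEW) == 16
```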

PROJECTPHD RECOMMENDATIONS LIBRARY

Evidence-based recommendations drawn from 20 years of program assurance practice.

Every recommended condition-to-proceed and intervention in the Board Assurance Report is drawn from the ProjectPhD Recommendations Library — interventions grounded in what governance forums actually needed to see at the funding gate, and what happened in comparable programs where those conditions were absent.

ProjectPhD recommends conditions. The governance forum adopts and enforces them as commitments attached to the stage-gate decision.

The Library grows with each diagnostic.

VS TRADITIONAL ASSURANCE

Different category. Different cadence. Different price.

                     Big-4 / Tier-1      ProjectPhD
Time to insight      4–8 weeks           48 hours
Cost                 $40–80k             from $2,999
Repeatability        Low (bespoke)       High (standardised)
Stage-gate fit       Too slow            Designed for it
Benchmark depth      None published      2,000+ diagnostics
Political cost       Signals crisis      Standardised process
Re-check             Not offered         from $999

Traditional assurance is credible, thorough, and designed for a different problem. By the time the report arrives, the gate has passed. ProjectPhD is what you use when you need a governance record, not a crisis response.

Scope Boundaries — What This Is Not
What it is
  • A standardised stage-gate control diagnostic
  • Forward-looking strategic intelligence
  • A program health assessment, not a performance review
  • Comparative cohort outcome rates with explicit confidence drivers and N-disclosure
What it is not
  • A financial audit
  • Full program assurance (unless separately contracted via Evidence Review)
  • Internal Audit — different mandate, different scope
  • A prediction of success or failure
  • Retrospective or fault-finding — assurance creates value when it is predictive and coaching-oriented; pointed backwards at fault, it becomes actively harmful
Independence Guardrails
  • Standardised, versioned scoring — not adjusted to client expectations or outcomes
  • No contingent fees tied to diagnostic outcomes
  • Disclosed conflicts of interest — printed in the report, not footnoted
  • Second-review sign-off at the Evidence Review tier
  • Published as ProjectPhD’s own policy

Questions about the methodology?

Subscribe to insights

Receive ongoing research findings from the ProjectPhD diagnostic dataset. No sales content. Unsubscribe at any time.


Book a call

We will be in touch within one working day to arrange a convenient time.
