The 20-Year Empirical Record
Built through active practice. Not assembled from secondary research.
ProjectPhD’s diagnostic methodology and benchmark dataset were built through 20 years of active program assurance practice. Every dimension, every condition, and every intervention reflects programs that reached outcomes — not programs that started.
The dataset is not theoretical. It is derived from direct program assurance engagements, outcome-coded at engagement close, and maintained under a documented benchmark integrity policy.
The diagnostic is designed as forward-looking strategic intelligence: it identifies risks before they materialise and provides conditions-to-proceed to address them. It explicitly assesses business readiness and adoption risk alongside delivery controls, recognising that programs which reach technical completion without the organisation being ready to adopt the outcome represent a systemic failure pattern.
| Figure | Description |
|---|---|
| 2,000+ | Historical diagnostics in the benchmark dataset |
| 1,200 | Outcome-coded (sponsor expectation and business outcome satisfaction) |
| ~25% | ERP and core system programs |
| ~20% | Regulatory compliance programs |
| 20 years | Active program assurance practice |
MOZAIC: THE RELATIONSHIP
The dataset source, the conflict policy, and how they interact.
ProjectPhD is a product of Mozaic. The benchmark dataset and Recommendations Library were built through Mozaic’s program assurance practice over 20 years. The diagnostic methodology is standardised and versioned — it is not tuned to individual client outcomes and does not change based on delivery relationships.
Conflict Disclosure Policy
Where Mozaic is delivering or bidding for work on the same program, that conflict is disclosed in the Board Assurance Report and the Governance Decision Memo. The scoring methodology is independent and standardised, not tuned to client expectations or outcomes. No contingent fees are tied to diagnostic outcomes. The conflict disclosure appears in the body of the report, not in a footnote.
BENCHMARK INTEGRITY POLICY
Four controls govern the benchmark dataset.
Regression analysis is applied to estimate correlations and their associated confidence levels. The methodology does not stretch beyond what the evidence supports.
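As a purely illustrative sketch of what regression-based correlation and confidence reporting looks like (the data, sample size, and choice of logistic regression here are assumptions for this example, not ProjectPhD’s actual model):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: a 0-10 score on one diagnostic dimension per program,
# and a binary outcome code (1 = sponsor expectations met at close).
rng = np.random.default_rng(42)
scores = rng.uniform(0, 10, size=200)
outcomes = (rng.random(200) < 1 / (1 + np.exp(-(scores - 5)))).astype(int)

# Regress outcome on dimension score; report coefficients with 95% intervals.
X = sm.add_constant(scores)
fit = sm.Logit(outcomes, X).fit(disp=False)
print(fit.params)          # estimated coefficients
print(fit.conf_int(0.05))  # 95% confidence interval per coefficient
```

Reporting the interval, not just the point estimate, is how a methodology avoids stretching beyond what the evidence supports.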
DIAGNOSTIC DIMENSIONS
What the diagnostic covers, and what it does not.
- Governance & Sponsorship
- Business Case & Benefits
- Project Leadership
- Scope & Requirements
- RAID & Dependencies
- Vendor Performance
- Plan & Schedule
- Change & Communications
- Cost Tracking
- PM Processes
- Resources & Teamwork
- Solution & Architecture
- Development
- Testing
- Implementation Planning
- Post-Implementation Planning
Business readiness and adoption risk are assessed within the Change & Communications and Business Case & Benefits dimensions. This is a deliberate design choice. Programs which deliver technical outputs but fail to achieve business outcomes represent a systemic failure pattern — the diagnostic explicitly tests whether the organisation is ready to adopt what is being built.
PROJECTPHD RECOMMENDATIONS LIBRARY
Evidence-based recommendations drawn from 20 years of program assurance practice.
Every recommended condition-to-proceed and intervention in the Board Assurance Report is drawn from the ProjectPhD Recommendations Library — interventions grounded in what governance forums actually needed to see at the funding gate, and what happened in comparable programs where those conditions were absent.
ProjectPhD recommends conditions. The governance forum adopts and enforces them as commitments attached to the stage-gate decision.
The Library grows with each diagnostic.
VS TRADITIONAL ASSURANCE
Different category. Different cadence. Different price.
| | Big-4 / Tier-1 | ProjectPhD |
|---|---|---|
| Time to insight | 4–8 weeks | 48 hours |
| Cost | $40–80k | from $2,999 |
| Repeatability | Low (bespoke) | High (standardised) |
| Stage-gate fit | Too slow | Designed for it |
| Benchmark depth | None published | 2,000+ diagnostics |
| Political cost | Signals crisis | Standardised process |
| Re-check | Not offered | from $999 |
Traditional assurance is credible, thorough, and designed for a different problem. By the time the report arrives, the gate has passed. ProjectPhD is what you use when you need a governance record, not a crisis response.
What ProjectPhD is:

- A standardised stage-gate control diagnostic
- Forward-looking strategic intelligence
- A program health assessment, not a performance review
- Comparative cohort outcome rates with explicit confidence drivers and N-disclosure (see the sketch after this list)

What ProjectPhD is not:

- A financial audit
- Full program assurance (unless separately contracted via Evidence Review)
- Internal Audit — different mandate, different scope
- A prediction of success or failure
- Retrospective or fault-finding — assurance creates value when it is predictive and coaching-oriented; turned retrospective and fault-finding, it becomes actively harmful
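To make “comparative cohort outcome rates with N-disclosure” concrete, here is a minimal, purely illustrative sketch (the cohort figures and the choice of a Wilson score interval are assumptions for this example, not a statement of ProjectPhD’s actual reporting method):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z=1.96 gives ~95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# Hypothetical cohort: 120 comparable programs, 78 met sponsor expectations.
n, met = 120, 78
low, high = wilson_interval(met, n)
print(f"Cohort outcome rate: {met / n:.0%} (N={n}, 95% CI {low:.0%} to {high:.0%})")
```

Disclosing N and the interval alongside the rate is what distinguishes an evidence-backed benchmark claim from a bare percentage.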
Questions about the methodology?