ProjectPhD (Project-health Diagnostic) provides an independent, benchmarked delivery risk diagnostic for programs of $2m and above — delivered as a board-grade Assurance Report within 72 hours. The benchmark draws on 2,000+ historical diagnostics conducted over 20 years, with 1,200 outcome-coded to show how comparable programs actually performed.
The output is not a retrospective. It is a forward-looking set of conditions-to-proceed, a 90-day action plan, and a Governance Decision Memo designed to sit alongside the business case at the point of capital release.
Why delivery reporting is not enough
Program status reports are generated by the team delivering the program. This is a structural problem, not a personal one — the people closest to delivery are incentivised to present progress positively. As reporting moves up the chain of command, optimism bias compounds. A schedule risk flagged at the working level becomes an amber rating at the project board and a green summary at the executive committee.
The result is what practitioners call watermelon reporting: programs that appear healthy on the outside while failing systemically on the inside. The Sponsor holds accountability but depends on filtered information. The board releases funding without an independent check. When the program fails, the post-mortem asks what governance was in place at the time — and finds a stack of self-assessed RAG dashboards.
What the Assurance Report delivers
Within 72 hours of engagement, the ordering executive receives:
- Risk rating with confidence basis — not a binary pass/fail, but a graded assessment with the evidence and assumptions behind it disclosed
- Benchmark percentile — the program scored against a matched peer cohort by sector, type, size, and stage, drawn from outcome data across 2,000+ diagnostics
- Alignment Index — where stakeholder views diverge across risk dimensions, documented as a governance signal rather than a performance finding
- Top 5 risk drivers — linked to attested respondent inputs, not consultant opinion
- 90-day action plan — highest-lift interventions with named owners, timeframes, and acceptance criteria
- Benefits-at-risk indicator — flags where delivery risk threatens committed benefits realisation
- Business readiness assessment — evaluates whether the organisation is prepared to adopt the outcome, not just whether deliverables are on track
- Governance Decision Memo — a single-page decision artefact with recommended action, confidence level, and conditions-to-proceed, designed to circulate independently in board papers
Built on evidence, not opinion
The benchmark is not a maturity model or a consultant’s framework. It is an empirical dataset: 2,000+ diagnostics conducted over 20 years of program assurance practice, with 1,200 outcome-coded against what actually happened — not what was forecast at the time.
Cohort matching controls for sector, program type, size, and stage, and statistical regression is then applied to adjust for residual differences within the cohort. When the matched cohort is small, the report discloses this and explains the effect on confidence. The Recommendations Library — the conditions-to-proceed and interventions recommended in the Assurance Report — is drawn from the same 20-year evidence base. Every recommendation reflects what has demonstrably worked, and what has not, in programs facing comparable dynamics.
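The cohort-matching and percentile logic described above can be sketched in a few lines; the attribute names, 0–100 score scale, and minimum-cohort threshold below are illustrative assumptions, not ProjectPhD's actual methodology.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Diagnostic:
    sector: str
    program_type: str
    size_band: str
    stage: str
    risk_score: float  # illustrative 0-100 scale; higher = riskier

def match_cohort(history: List[Diagnostic], target: Diagnostic) -> List[Diagnostic]:
    """Select historical diagnostics matching the target on all four attributes."""
    key = (target.sector, target.program_type, target.size_band, target.stage)
    return [d for d in history
            if (d.sector, d.program_type, d.size_band, d.stage) == key]

def benchmark_percentile(cohort: List[Diagnostic], score: float) -> float:
    """Share of cohort programs with a risk score at or below the target's."""
    return 100.0 * sum(d.risk_score <= score for d in cohort) / len(cohort)

MIN_COHORT = 30  # hypothetical threshold triggering a small-cohort disclosure

# Toy history: ten matched diagnostics with spread-out risk scores.
history = [
    Diagnostic("infrastructure", "capital", "10-50m", "initiation", s)
    for s in (22, 35, 41, 48, 55, 63, 70, 77, 84, 90)
]
target = Diagnostic("infrastructure", "capital", "10-50m", "initiation", 63)

cohort = match_cohort(history, target)
pct = benchmark_percentile(cohort, target.risk_score)       # 60.0: riskier than 60% of peers
low_confidence = len(cohort) < MIN_COHORT                   # True: cohort of 10 is disclosed
```

The small-cohort flag mirrors the disclosure rule above: rather than suppressing the percentile, the report surfaces it alongside an explicit confidence caveat.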
The methodology is versioned, dated, and published. Independence guardrails are structural: standardised scoring not tuned per client, no contingent fees, conflict disclosure, and second-review sign-off.
Take the next step
An independent, benchmarked delivery risk diagnostic for your program, delivered within 72 hours.