ProjectPhD

Independent Program Assurance

An independent view of what is really happening — what is at risk, and what to do about it

Major programs fail not because the plan was wrong, but because the real risks were not visible at the point where funding decisions were made. Delivery reporting passes through the people spending the budget before it reaches the people accountable for outcomes. By the time a material gap surfaces, the funding gate has passed and the cost of correction has multiplied.

ProjectPhD (Project-health Diagnostic) provides an independent, benchmarked delivery risk diagnostic for programs of $2m and above — delivered as a board-grade Assurance Report within 72 hours. The benchmark draws on 2,000+ historical diagnostics conducted over 20 years, with 1,200 outcome-coded to show how comparable programs actually performed.

The output is not a retrospective. It is a forward-looking set of conditions-to-proceed, a 90-day action plan, and a Governance Decision Memo designed to sit alongside the business case at the point of capital release.

Why delivery reporting is not enough

Program status reports are generated by the team delivering the program. This is a structural problem, not a personal one — the people closest to delivery are incentivised to present progress positively. As reporting moves up the chain of command, optimism bias compounds. A schedule risk flagged at the working level becomes an amber rating at the project board and a green summary at the executive committee.

The result is what practitioners call watermelon reporting: programs that appear healthy on the outside while failing systemically on the inside. The Sponsor holds accountability but depends on filtered information. The board releases funding without an independent check. When the program fails, the post-mortem asks what governance was in place at the time — and finds a stack of self-assessed RAG dashboards.

What the Assurance Report delivers

Within 72 hours of engagement, the ordering executive receives:

- A benchmarked delivery risk diagnostic, scored against the 2,000+ diagnostic dataset
- A forward-looking set of conditions-to-proceed for capital release
- A 90-day action plan
- A Governance Decision Memo designed to sit alongside the business case

See how the process works | View pricing

Built on evidence, not opinion

The benchmark is not a maturity model or a consultant’s framework. It is an empirical dataset: 2,000+ diagnostics conducted over 20 years of program assurance practice, with 1,200 outcome-coded against what actually happened — not what was forecast at the time.

Cohort matching controls for sector, program type, size, and stage, with statistical regression applied to the matched cohort. When the matched cohort is small, the report discloses this and explains the impact on confidence. The Recommendations Library — the conditions-to-proceed and interventions recommended in the Assurance Report — is drawn from the same 20-year evidence base. Every recommendation reflects what has demonstrably worked, and what has not, in programs facing comparable dynamics.

The methodology is versioned, dated, and published. Independence guardrails are structural: standardised scoring not tuned per client, no contingent fees, conflict disclosure, and second-review sign-off.

Read the full methodology

Every recommendation is grounded in 20 years of seeing what boards and sponsors actually needed — and what happened when they didn’t get it.
2,000+ historical diagnostics
1,200 outcome-coded
20 years of active program assurance practice

Take the next step

An independent, benchmarked delivery risk diagnostic for your program, delivered within 72 hours.

Request an Assurance Report | Get an instant Snapshot — free

Subscribe to insights

Receive ongoing research findings from the ProjectPhD diagnostic dataset. No sales content. Unsubscribe at any time.

Book a call

We will be in touch within one working day to arrange a convenient time.
