A green RAG status is not a governance position
The committee approved the next tranche. Six months later the program is a write-off. The post-mortem convenes. The question is not whether the committee acted in good faith — it is whether there was a defensible, documented basis for the decision to proceed. The answer, in most cases, is that the decision rested on a RAG dashboard compiled by the delivery team, supplemented by verbal assurances from the sponsor at a committee meeting. Neither constitutes an independent evidence base. Neither holds up under the scrutiny that follows a material failure.
This is not a hypothetical. Audit and risk committee chairs are increasingly named individually in post-incident reviews. The accountability is personal, and the exposure is straightforward: you signed off on a funding release at a gate where material control failures were present and not independently identified. The signals were there. No independent control was applied to surface them.
The structural problem is that the information reaching the committee has passed through every layer of optimism bias between the work and the boardroom. The people delivering the program are the same people reporting on its health. Management-generated assurance reporting does not resolve this — it reproduces it. Asking the delivery team to provide more detailed status reporting produces more detail, not more independence. The principal-agent problem persists regardless of the volume of information.
You have commissioned Big-4 reviews on high-profile programs. They are thorough. They also take four to eight weeks, cost $60,000 to $80,000, and cannot be deployed at every funding gate for every material program. The act of commissioning one signals to the organisation that the committee has lost confidence — which makes it a crisis response, not a governance control. In the gap between what full assurance costs and what stage gates require, most programs pass their funding gates without independent verification.
A governance record, not another report
ProjectPhD is a standardised governance control activated at the funding gate. Commissioning it does not signal concern about a specific program — it embeds an independent checkpoint into the committee’s assurance framework, the same way any standing control is applied to material expenditure.
Within 48 hours, the committee receives a Board Assurance Report and a one-page Governance Decision Memo. The report benchmarks the program against a matched peer cohort — programs of comparable size, sector, category, and complexity — and produces conditions-to-proceed: specific, forward-looking requirements that must be in place before funding is released. The Governance Decision Memo records the recommended action, confidence basis, methodology version, and conditions in a single artefact designed for board papers. This is the governance record — not the report itself, but the documented decision basis at the gate.
The methodology is standardised and versioned. Each report records the methodology version applied. Scoring uses statistical regression against the benchmark dataset to identify outcome correlations and attach a confidence level to each finding. Where matching cohorts are thin, confidence intervals are widened and the limitation is disclosed. The methodology does not stretch beyond what the evidence supports.
The evidence base is corroborated, not single-source. Multi-respondent inputs with signed attestation mean the diagnostic draws on perspectives across roles — sponsor, delivery, operational — rather than relying on any one account. The Alignment Index surfaces where those perspectives diverge, documenting disagreement as a governance signal without attributing it to named individuals. This directly addresses the filtering problem: if the delivery team rates schedule confidence as high and operational stakeholders rate it as low, that divergence is on the record.
Independence guardrails are published and standing: standardised scoring that is not adjusted to client expectations, no contingent fees tied to diagnostic outcomes, disclosed conflicts of interest, and second-review sign-off at higher tiers. A one-page Methodology Summary for Audit Committees is available for tabling — it documents the framework, independence controls, and benchmark provenance in the format your committee expects to see.
20-YEAR EMPIRICAL RECORD
Built on 20 years of documented outcomes
The benchmark dataset draws on 2,000+ program diagnostics conducted over 20 years of assurance practice, with roughly a quarter in ERP and core systems and a fifth in regulatory change programs. Outcome data from 1,200 of those programs is coded against whether sponsors judged the program delivered to expectations and achieved its intended business outcomes. Every recommended condition is drawn from the ProjectPhD Recommendations Library — interventions grounded in what boards and committees actually needed to see at the funding gate, and what happened in programs where that evidence was absent.
If the program delivers, the governance record confirms that appropriate independent controls were applied. If it does not, the record shows the committee acted on a documented, evidence-based, independently verified decision basis at the point of capital release. The methodology is designed to hold up under exactly the scrutiny that follows — post-incident review, external audit, or regulatory inquiry. That defensibility is not incidental. It is the primary design constraint. It is your documented due diligence.
Request a Board Assurance Report
A short conversation to confirm fit for your committee’s assurance framework. No commitment beyond that — if the diagnostic does not address a genuine governance gap at your next funding gate, we will say so.