Stay connected: follow us on LinkedIn and explore more at
www.CherryHillAdvisory.com.

Subscribe now to join the Risk Register community:
We've been watching the Delve story unfold with the kind of recognition that comes when someone finally names something we've been seeing for years.
A Y Combinator-backed compliance startup, recently valued at $300 million, now faces accusations of what a whistleblower calls "structural fraud." The core claim is that Delve provided customers with pre-populated evidence, fabricated documentation of board meetings and processes that never happened, and told clients they had achieved 100% compliance with frameworks like HIPAA and GDPR.
When the accusations broke, Delve countered that they simply offer "templates to help teams document their processes," the same way other compliance platforms do. The whistleblower, operating under the name DeepDelver, responded with something we found telling: "They are trying to snake their way out by denying having 'pre-filled evidence' but calling it 'templates' instead, effectively shifting the blame to customers for adopting the 'templates' as is."
That exchange crystallizes the problem we want to address.
This isn't just a story about one vendor. This is a structural warning about what happens when compliance programs prioritize speed over evidence integrity.
At the heart of every compliance program is a simple question: Can you prove what you claim?
Auditors don't care what your policy says if you can't produce evidence that it's actually being followed. Regulators operate on a principle that most compliance teams understand intellectually but sometimes forget operationally: if it isn't documented, it didn't happen.
The Delve case exposes what happens when that principle gets inverted.
Instead of documentation following activity, the documentation becomes the activity. You're not proving compliance. You're performing it through paperwork that looks right but traces back to nothing real.
Let us be clear about something: templates are useful.
Every mature compliance program uses them. Policy templates. Risk assessment frameworks. Control testing checklists. These tools help teams work efficiently and maintain consistency across the organization.
But there's a line.
A template for documenting a board meeting is fine. A pre-filled document that says your board met quarterly and reviewed cybersecurity risks, when no such meetings occurred, is fraud.
The difference is traceability.
Real evidence traces back to real activity. An access review document should connect to actual logs showing who reviewed what and when. A security awareness training record should link to completion data from your learning management system. A vendor risk assessment should reference the questionnaires you sent, the responses you received, and the decisions you made based on that information.
When your evidence is pre-populated, that chain breaks. You're left with documentation that looks compliant but can't survive scrutiny.
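That traceability requirement can be checked mechanically. As a minimal sketch (the field names `reviewer` and `account` are illustrative, not drawn from any specific platform), a script can cross-reference a review record against the system logs it claims to summarize:

```python
def trace_review_to_logs(review_entries: list, access_logs: list) -> list:
    """Return review entries with no corresponding log event --
    documentation that cannot be traced back to real activity."""
    # Build a lookup of (reviewer, account) pairs actually seen in the logs.
    logged = {(e["reviewer"], e["account"]) for e in access_logs}
    # Any review entry absent from that set is an orphaned claim.
    return [r for r in review_entries
            if (r["reviewer"], r["account"]) not in logged]
```

Any entry this returns is documentation asserting work the logs never recorded, which is exactly the break in the chain described above.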
Here's what most people don't realize: digital forensics doesn't lie.
Every document has metadata. Creation dates. Modification dates. Author information embedded in the file properties. Every email has headers showing when it was actually sent. Every system log has timestamps.
When your "January access review" was actually created in October, right before the audit, auditors see it immediately.
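That timestamp check is trivial to script. A minimal sketch, assuming the evidence files sit on disk and using only filesystem modification times (a real forensic review would also inspect the document's embedded properties and email headers):

```python
from datetime import datetime, timezone
from pathlib import Path

def check_evidence_timing(path: str, period_end: datetime) -> bool:
    """Flag evidence files whose filesystem timestamps postdate
    the review period they claim to document."""
    stat = Path(path).stat()
    modified = datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc)
    # A "January access review" last modified in October fails this check.
    return modified <= period_end
```

A file that fails this check isn't automatically fraudulent, but it is exactly the kind of anomaly an auditor will ask about first.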
There's another tell that experienced auditors recognize: perfection.
Real compliance is messy. Real access reviews have notes about exceptions. Real security logs show false positives. Real incident response documentation includes the chaos of figuring things out in real time.
When everything looks perfect, it usually means it's fabricated.
The stakes here go beyond failing an audit.
False compliance under HIPAA can lead to criminal liability. GDPR violations can result in fines up to 4% of global annual turnover or €20 million, whichever is higher.
For government contractors, the Supreme Court has made the standard clear. You can be liable under the False Claims Act if you actually knew a claim was incorrect, were aware of substantial risk that it could be wrong and intentionally avoided learning the truth, or were aware of such substantial and unjustifiable risk but submitted claims anyway.
That third category is the one that should concern compliance teams using pre-populated evidence. You might not have fabricated the documentation yourself, but if you adopted it without verifying it traces back to real activity, you've created exactly the kind of risk the court described.
We understand the appeal of automation in compliance.
Internal audit teams face growing demands. Risk is increasing but teams aren't scaling at the same rate. The promise of AI-powered compliance platforms is that you can do more with less, achieve faster certification, and reduce the manual burden of evidence collection.
That promise is real. But it comes with a requirement that some vendors seem to be skipping: automation should speed up real work, not replace it with synthetic work.
A system that automatically pulls access logs from your identity management platform and flags anomalies for review is valuable. A system that generates a document saying you reviewed access logs, when you didn't, is fraud with extra steps.
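The valuable version of that automation is not complicated. As an illustrative sketch (the data shape is hypothetical), a tool might flag accounts idle past a threshold and queue them for a human reviewer rather than generating a sign-off on its own:

```python
from datetime import datetime, timedelta, timezone

def flag_stale_accounts(last_seen: dict, max_idle_days: int = 90) -> list:
    """Surface accounts idle longer than the threshold for a human
    reviewer -- the tool queues the work, it doesn't perform it."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_idle_days)
    # The output is a review queue, not a completed review.
    return sorted(acct for acct, seen in last_seen.items() if seen < cutoff)
```

The design point is that the function's output is input to human judgment; the review record gets created only after someone acts on the queue.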
The difference matters because compliance frameworks derive their value from independence, operating effectiveness, and evidence integrity. When you automate away the actual work and keep only the documentation, you've hollowed out the program while maintaining its appearance.
If you're a Chief Audit Executive or internal audit leader, the Delve case should prompt specific questions about your own compliance program across three areas: auditor independence, evidence integrity, and platform substance. These align with the warning signs and assessment criteria detailed in our diagnostic guide.
One pattern we see repeatedly: organizations invest heavily in written policies but underinvest in the technical controls that make those policies real.
The CMMC framework makes this explicit. Written policies establish your commitment to cybersecurity. But policies alone don't satisfy CMMC requirements. They must be reinforced by technical controls that actively enforce what the policies describe.
This is where evidence integrity lives or dies.
If your policy says you conduct quarterly access reviews, but you have no automated tooling to pull access data, no workflow to assign reviews, and no audit trail showing who approved what, then your "evidence" of compliance is just a document someone filled out. It might be accurate. It might not be. You have no way to prove it either way.
Real compliance requires closing that gap between policy and control, between what you say you do and what you can prove you did.
If you're evaluating compliance platforms or considering how to scale your compliance program, the Delve episode offers a clear lesson: be skeptical of "compliance at speed."
Speed is attractive precisely because compliance functions as a procurement gate. Faster certification means faster revenue. Faster audits mean lower costs. Faster evidence collection means smaller teams can manage larger scopes.
But assurance frameworks derive their value from rigor. When a vendor promises to cut your compliance timeline in half, the question you should ask is: what are they cutting?
Are they automating evidence collection from your real systems? Good.
Are they streamlining workflows so your team spends less time on administrative overhead? Good.
Are they providing templates that help you document real work more efficiently? Good.
Are they generating evidence for you that doesn't trace back to actual activity in your environment? That's the line.
If you're responsible for internal audit or compliance oversight, our guide recommends three immediate actions for every organization:
Verify your auditor independently. For SOC 2, confirm the CPA firm's license through the relevant state board of accountancy. For ISO 27001, confirm accreditation through IAF MLA signatories (ANAB, UKAS, DAkkS, JAS-ANZ). If you cannot verify, your certification may not carry legal standing.
Audit your trust page against reality. Map every public claim to a specific control and evidence artifact. Remove anything you cannot substantiate with documentation that reflects actual practice. Inaccurate security disclosures have been treated as consumer protection violations in past enforcement actions.
Read your own SOC 2 Section 3. Compare the system description to your actual architecture, tools, and processes. If the language is generic or describes technologies you do not use, it is a signed misrepresentation that needs to be corrected.
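The trust-page exercise in particular lends itself to a simple script. A hedged sketch, assuming you maintain a mapping from each public claim to its evidence artifact (the claim text and artifact paths here are invented examples, not real files):

```python
def audit_trust_page(claims: dict) -> list:
    """Map each public claim to its evidence artifact and return
    the claims with nothing behind them -- candidates for removal."""
    # A claim whose artifact is None or empty cannot be substantiated.
    return sorted(claim for claim, artifact in claims.items() if not artifact)
```

For example, `audit_trust_page({"Data encrypted at rest": "evidence/kms-policy.json", "SOC 2 Type II certified": None})` surfaces the unsupported certification claim for removal or remediation.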
Beyond these baseline actions, the appropriate response depends on your risk tier. Our guide provides a scoring framework that maps your assessment results to four tiers: Low, Moderate, Elevated, and Critical, with specific timelines and actions for each. Organizations in the Elevated or Critical tiers should treat this as a priority incident requiring independent assessment within 30 days or immediate action, respectively.
If you're wondering whether your compliance program has these gaps, we've created a diagnostic tool to help you find out.
Our "Is Your Compliance Real?" diagnostic guide walks you through a 25-question self-assessment covering auditor independence, evidence integrity, platform substance, regulatory accuracy, and vendor conduct. It includes the specific red flags identified in the Delve investigation, a scoring framework to determine your risk tier, and a decision matrix for what to do based on your results.
The guide is designed for compliance leaders, CISOs, and internal audit teams who need to verify whether their compliance artifacts would survive independent examination. It's vendor-agnostic: the questions apply whether you use Delve, Vanta, Drata, or any other GRC platform.
You can access the full guide here.

We're not arguing against automation or efficiency in compliance programs.
We're arguing for evidence that means something.
The compliance profession faces real pressure. Regulatory requirements are expanding. Frameworks are getting more complex. Organizations want assurance but don't want to staff compliance teams at the level those demands would traditionally require.
That creates market opportunity for tools that genuinely improve efficiency. But it also creates opportunity for vendors who blur the line between documentation and fabrication.
Your job as an internal audit leader is to know the difference.
When you evaluate a compliance platform, when you review evidence for an audit, when you report to your audit committee on the state of compliance, the question you should be asking is the same one regulators will eventually ask: Can you prove this is real?
Not "Can you produce documentation?"
Not "Does this look compliant?"
Can you trace this evidence back to actual activity, real controls, genuine operational work that happened when and how the documentation claims?
If the answer is yes, you have compliance.
If the answer is "I think so" or "It should be" or "The vendor assured us," you have risk.
The Delve scandal is a reminder that the distance between those two answers is the distance between assurance and liability. And in 2026, with regulators increasingly focused on evidence integrity and AI creating new ways to generate synthetic documentation, that distance is worth measuring carefully.