Technical Overview · For Security Architects & CISOs

Value Chain Risk Institute:
Continuous Vendor Risk Quantification

A technical reference for security architects, CISOs, and technology partners evaluating VCRI's methodology, data pipeline, scoring model, and integration architecture.


1. The Problem: Why Current Approaches Fail

The Questionnaire Fiction

The dominant vendor risk management approach — sending security questionnaires and collecting attestations — has a fundamental structural failure: it produces a document describing what a vendor claims, not what their systems show. There is no technical mechanism preventing a vendor from answering "yes" to every control question regardless of actual implementation. The output of a questionnaire process is the vendor's preferred narrative, timestamped.

External scanning services (Bitsight, SecurityScorecard, and similar) address the narrative problem but introduce a different failure: they observe only what is visible from outside the perimeter. An organization with a clean external footprint may have critical internal control failures completely invisible to external probes. External hygiene score ≠ internal security posture.

The Audit Problem

Even third-party audits (SOC 2, ISO 27001, CMMC) are point-in-time events. A vendor receives a clean audit in Q1, pushes a major infrastructure change in Q2, and the audit is still valid for another nine months. The compliance document says "pass" while the actual posture has materially changed.

The Heisenberg Problem of Risk

Traditional risk quantification faces a fundamental granularity paradox: the more granular the data, the faster its truth expires. A vulnerability scan valid at the individual CVE level becomes obsolete the moment a developer commits new code. A port scan is accurate for the duration of the scan. A penetration test finding is valid for approximately five minutes past the engagement end.

VCRI refers to this as the Decay of Truth: all point-in-time security assessments begin decaying the moment they are produced. The faster the target changes — and modern agile infrastructure changes continuously — the faster the truth decays. This is why annual audits are effectively meaningless as security instruments; they are useful only as compliance artifacts.

This produces VCRI's core benchmark: 6 months → 6 seconds. Traditional risk-to-action pipelines — questionnaire distribution, vendor response, review, scoring, reporting — take approximately six months end-to-end. VCRI's continuous telemetry pipeline surfaces a risk signal in approximately six seconds. The gap is not an incremental improvement; it is a categorical architectural difference.

The Continuity Constraint

To measure risk continuously, you must measure at a level of abstraction stable enough to persist through underlying changes. This is the core insight behind VCRI's Functional System atomic unit — see Section 6.

The Nth Party Problem

Enterprise value chains have extended far beyond direct vendor relationships. A company doing data analysis on your behalf is a third party. If that company uses a cloud-based AI service for processing, the AI provider is a fourth party. If the AI provider contracts its compute infrastructure, that is a fifth party. In practice, data processed under an enterprise contract may traverse four to six organizational boundaries before returning a result.

Current vendor risk management treats this as a third-party problem. It is not. Every layer of the chain introduces independent risk that compounds non-linearly. Visibility requirements must extend through the full chain — not just to the direct vendor relationship.

The Regulatory Inflection Point

Global regulators have reached the same conclusion simultaneously. The EU's DORA (Digital Operational Resilience Act), the U.S. CMMC and CSRMC frameworks, Saudi Arabia's SAMA, UAE's NESA, and Japan's ISMAP are all moving in the same direction: from periodic attestation toward continuous, quantified, operationalized vendor risk management. The compliance question is no longer "did you check your vendors?" — it is "can you demonstrate continuous visibility?"

Organizations that cannot answer the latter question are increasingly exposed to regulatory liability, insurance pricing pressure, and contractual risk — in addition to the underlying security exposure.


2. Architecture: The Inside-Out Data Pipeline

The Inside-Out Model

VCRI's foundational architectural decision is Inside-Out data acquisition. Rather than inferring vendor security posture from external observation (Outside-In), VCRI connects directly to the commercial security platforms vendors already operate and ingests live telemetry via authorized API access. The vendor's vulnerability management platform, SIEM, identity provider, and GRC tools are the source of truth — not a form the vendor fills out.

This distinction is not cosmetic. It changes what the data represents: instead of a vendor's representation of their posture, VCRI receives the actual output of the systems governing that posture. A vendor cannot selectively disclose through an automated pipeline in the same way they can curate a questionnaire response.

In practice, this means VCRI requests read authorization on platforms the vendor already operates — endpoint security via CrowdStrike or SentinelOne, identity posture via Microsoft Entra ID or Okta, vulnerability coverage via Tenable or Qualys, SIEM telemetry via Splunk, Microsoft Sentinel, or Google Chronicle, firmware integrity via Eclypsium, and remediation velocity via ServiceNow. No new infrastructure is required. No data is exported from the vendor's environment. The vendor simply grants VCRI read-only access to tools they already use — the same permission model as adding an auditor to a tenant.

Full Pipeline

Vendor Systems — Endpoint: CrowdStrike · SentinelOne | Identity: Okta · Entra ID | Vuln: Tenable · Qualys | SIEM: Splunk · Sentinel · Chronicle | Firmware: Eclypsium | GRC: ServiceNow | + Provenance Pack
  → CRIBL Ingest — API pull · schema normalization · redaction · aggregation
  → SCF Mapping — control ID map · expected vs. actual comparison
  → CAM Scoring — TIPPSS × assets · CMM L1–L5 verification · gap calculation
  → Greeks Weighting — α β γ θ applied to raw score · confidence adjustment
  → Dollar-at-Risk — Process Value × Incident Duration · per Functional System
  → TLP Dashboard — 🟢 🟡 🔴 · category-level · no raw CVEs exposed

The pipeline above reflects the preferred automated API pull path. Vendors who cannot integrate immediately may use the AI-Assisted Intake path — see below — which pre-processes unstructured submissions into the same normalized schema before entering the CRIBL pipeline.

CRIBL Integration Layer

CRIBL serves as the data pipeline — responsible for ingesting vendor system data and transforming it into the normalized schema that VCRI's scoring engine consumes. The preferred ingestion model is automated API pull: CRIBL connects to the vendor's live systems and ingests telemetry continuously without vendor-side preparation.

CRIBL's confirmed transform capabilities supporting the VCRI pipeline include:

Schema normalization — Parser, Rename, Eval, Flatten → TIPPSS-aligned schema
Sensitivity redaction — Mask, Drop → strip CVEs/IPs, retain category signal
Aggregation — Aggregations, Rollup → 847 vulns → "Protection: HIGH"
SCF control mapping — Lookup tables → raw events → SCF control IDs
Maturity classification — Eval + Lookup → tag CMM Level 1–5 evidence
GeoIP enrichment — flag foreign-origin vendor infrastructure
State persistence — Redis → maturity trends across pipeline runs
Noise reduction — Suppress → dedup SIEM alerts before scoring
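The aggregation step's "847 vulns → Protection: HIGH" behavior can be sketched in a few lines. This is a minimal Python illustration; the thresholds, field names, and output shape are assumptions chosen for clarity, not actual CRIBL pipeline configuration:

```python
def aggregate_protection_signal(findings, high=500, medium=100):
    """Collapse raw vulnerability findings into a category-level signal.

    Only the aggregate leaves the pipeline: no CVE IDs, no IP addresses.
    Thresholds here are illustrative assumptions.
    """
    count = len(findings)
    if count >= high:
        level = "HIGH"
    elif count >= medium:
        level = "MEDIUM"
    else:
        level = "LOW"
    return {"dimension": "Protection", "signal": level, "finding_count": count}

# 847 raw findings, each carrying sensitive detail that never leaves the vendor:
raw = [{"cve": f"CVE-2024-{i:04d}", "host": "10.0.0.1"} for i in range(847)]
signal = aggregate_protection_signal(raw)
# signal -> {"dimension": "Protection", "signal": "HIGH", "finding_count": 847}
```

The same shape applies to the Mask/Drop redaction functions: sensitive identifiers exist only upstream of the aggregation, so the scoring engine never sees them.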

What VCRI Contributes vs. What It Applies

The underlying frameworks — SCF's control library, CMM maturity levels, IEEE/UL 2933's TIPPSS dimensions — are established standards maintained by their respective standards bodies. VCRI uses them; VCRI does not own them. VCRI's contribution is the synthesis layer: the expected-state comparison rules that calibrate what evidence a vendor at a given maturity level should be able to provide across relevant control domains. The pipeline is CRIBL. The intelligence is VCRI's calibration work applied on top of established standards.

AI-Assisted Intake: The Conversational Onboarding Path

Automated API pull is the preferred ingestion model — but not every vendor can integrate immediately. Tooling gaps, procurement cycles, and proprietary platforms mean some vendors need a starting point before a full connector is configured. The VCRI Intelligence Layer provides that path.

A vendor submits whatever security data they already have: exports from their vulnerability management platform, compliance audit reports, SBOM files, questionnaire responses, policy documents, or any combination. The AI intake agent ingests the submission, normalizes it to the VCRI schema, and then does the work that matters: it compares the vendor's evidence against VCRI's expected-state attestation baseline — the calibrated standard of what a genuinely secure vendor at each CMM maturity level should be able to demonstrate across each TIPPSS dimension. The output is not a score derived from the vendor's self-description. It is a gap analysis against an independently maintained attestational standard. Targeted follow-up questions address the gaps. The result is a working CAM assessment immediately — without waiting for API integration.

The Vendor Experience

"Here is my Qualys export, my SOC 2 report, and our SBOM." The intake agent responds: what it ingested, how it transformed each artifact into the VCRI schema, and — critically — where the evidence falls short of the attestation baseline for the vendor's claimed maturity level. The vendor does not need to know the VCRI schema or the SCF control library. The intelligence layer handles ingestion, transformation, and comparison. The vendor sees the truth about where they stand.

From a governance standpoint, AI-assisted submissions are tagged with a lower Alpha (α) confidence score than live-feed data, reflecting the difference between structured telemetry and analyst-curated exports. Vendors who later add automated connectors see their confidence score improve automatically — creating a clear, measurable incentive path toward the preferred integration model.

The AI intake layer also handles the long tail of proprietary and niche security tooling that no pre-built connector library will ever fully cover. If a vendor operates a platform VCRI doesn't have a native connector for, they can submit its output directly. The intelligence layer identifies the relevant fields, maps them to the scoring schema, and flags any normalization assumptions it made for human review. This is not a workaround — it's a design decision: the clearinghouse should be accessible to every vendor, not just vendors using the top-10 enterprise platforms.

SCF Normalization Layer

The Secure Controls Framework (SCF) serves as VCRI's master control translation layer. Because vendors operate under heterogeneous compliance regimes — some under SOC 2, others under ISO 27001, NIST CSF, HIPAA, PCI-DSS, or CMMC — their raw evidence arrives in incompatible frameworks. SCF provides a unified control library that maps all major frameworks to a common schema.

This enables apples-to-apples comparison across the entire vendor population regardless of which framework any individual vendor operates under. A SOC 2 Access Control control and a NIST CSF PR.AC control both map to the same SCF control identifier and ultimately to the same CAM scoring cell.
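The translation step can be sketched as a simple lookup. The control IDs and the SCF identifier below are hypothetical placeholders (the real SCF library defines its own identifiers); the point is that heterogeneous source-framework controls resolve to one scoring cell:

```python
# Hypothetical lookup table: "IAC-01" and the framework control IDs are
# illustrative placeholders, not entries from the actual SCF library.
SCF_LOOKUP = {
    ("SOC2", "CC6.1"): "IAC-01",
    ("NIST-CSF", "PR.AC-1"): "IAC-01",
    ("ISO27001", "A.9.2"): "IAC-01",
}

def to_scf(framework, control_id):
    """Resolve a source-framework control to its unified SCF identifier."""
    return SCF_LOOKUP.get((framework, control_id), "UNMAPPED")

# A SOC 2 control and a NIST CSF control land in the same scoring cell:
assert to_scf("SOC2", "CC6.1") == to_scf("NIST-CSF", "PR.AC-1") == "IAC-01"
```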

Provenance Packs

For component-level supply chain visibility, vendors submit Provenance Packs — a bundled collection of all applicable Bills of Materials (BOMs) for a vendor's product and service portfolio. The BOM ecosystem is evolving rapidly; VCRI accepts any machine-readable BOM type vendors can provide:

  • SBOM — Software Bill of Materials: software components, dependencies, versions, origins, and known-vulnerability mapping
  • HBOM — Hardware Bill of Materials: component-level hardware provenance, supply chain origin
  • FBOM — Firmware Bill of Materials: firmware versions, embedded components, update history
  • CBOM — Cryptographic Bill of Materials: cryptographic primitives, key lengths, algorithm choices, post-quantum readiness
  • Governance BOMs — compliance artifact inventories, policy registers, control evidence bundles
  • Manufacturing, AI/ML, Operations BOMs — any additional BOM types applicable to the vendor's category

The individual BOM formats are maintained by their respective standards bodies (NTIA, CISA, CycloneDX, SPDX, and others). VCRI does not define the BOM standards; it ingests and processes whatever vendors provide.

Minimum requirements are category-appropriate: software-primary vendors are expected to submit at minimum an SBOM; hardware vendors an HBOM; firmware/embedded vendors an FBOM. Additional BOMs are credited in the vendor's Alpha (α) score — more complete provenance means higher accuracy confidence. As the BOM ecosystem evolves and new BOM types become standard, VCRI's category minimums will update to reflect industry expectations.

Provenance Packs are the "digital birth certificate" for a vendor's systems. They feed the Trust and Identity dimensions of the CAM scoring matrix and provide the evidentiary substrate for the Alpha (α) score.

Dashboard Output Model

VCRI's dashboard outputs a Traffic Light Protocol (TLP) signal per vendor per TIPPSS category. Critically, the dashboard exposes category-level risk signal only — specific CVEs, vulnerability details, or configuration specifics are never surfaced to the client organization. This is intentional:

  • It protects the vendor's competitive and security-sensitive details
  • It prevents the dashboard from becoming a roadmap for attackers
  • It focuses the client on risk decisions, not vulnerability management (which remains the vendor's responsibility)
● COMPLIANT — Controls verified at or above required maturity level
● ACTION REQUIRED — Gap identified; remediation timeline required
● CRITICAL — Material gap; immediate remediation or escalation required
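A minimal sketch of how a maturity gap could map to the three dashboard states. The cut-offs below are illustrative assumptions; VCRI's calibrated thresholds are not specified in this document:

```python
def tlp_signal(required_cmm, current_cmm):
    """Map a CMM maturity gap to the three dashboard states.

    Gap thresholds are illustrative assumptions, not VCRI's calibration.
    """
    gap = required_cmm - current_cmm
    if gap <= 0:
        return "COMPLIANT"        # green: at or above required maturity
    if gap == 1:
        return "ACTION REQUIRED"  # yellow: remediation timeline required
    return "CRITICAL"             # red: material gap, escalation required

# A vendor at Level 2 against a Level 4 requirement shows red:
assert tlp_signal(4, 2) == "CRITICAL"
```

Note that the function consumes only maturity levels: no CVE, host, or configuration detail is needed to produce the client-facing signal.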

3. The CyberAssuranceMatrix (CAM) & TIPPSS

Intellectual Lineage

The CAM was co-authored by Joshua Marpet and Mitch Parker (CISO, Indiana University Health; co-Vice Chair, IEEE/UL 2933). It builds on two prior frameworks: Sounil Yu's CyberDefenseMatrix (CDM), which organizes active defense operations, and the TIPPSS framework from IEEE/UL 2933, originally developed for clinical IoT security and extended by VCRI for value chain verification.

CyberDefenseMatrix (CDM) — Sounil Yu · active defense framework
  Question: How do we survive an attack?
  Mode: Ongoing operational response
  Metaphor: Protects the value chain during battle

CyberAssuranceMatrix (CAM) — Marpet + Parker · structural verification
  Question: Was this built securely?
  Mode: Pre-deployment + continuous
  Metaphor: Ensures the value chain was built for war

"High-performing organizations use the CAM to build trust and the CDM to maintain it."

TIPPSS: The Six Verification Dimensions

TIPPSS defines six orthogonal dimensions of security assurance derived from IEEE/UL 2933. Each dimension represents a distinct failure mode if unverified:

Trust
Only designated entities have access. Unverified Trust = provenance unknown. Anyone who claims to be a legitimate actor may be. Failure mode: supply chain injection, unauthorized access via forged identity.
Identity
Entities are who they claim to be. Unverified Identity = high probability of lateral movement. If a device, service, or user cannot prove identity, any compromise can pivot freely. Failure mode: credential theft, service impersonation, man-in-the-middle.
Privacy
Sensitive data remains private. Unverified Privacy = uncontrolled data exposure. Failure mode: data exfiltration, regulatory violation, third-party data leakage.
Protection
Systems resist harm. Unverified Protection = undefined attack surface. Failure mode: successful exploitation, ransomware propagation, physical system damage.
Safety
Failures are predictable and bounded. Unverified Safety = unpredictable failure modes. In OT/ICS and medical contexts: physical harm risk. In enterprise: cascading outages, undefined recovery states. Failure mode: systemic failure, uncontrolled downtime.
Security
Data and systems remain intact and available. Unverified Security = undefined persistence and availability guarantees. Failure mode: data corruption, availability loss, integrity violations.

The CAM Matrix: 5 Assets × 6 TIPPSS Dimensions

The CAM scores vendors across a 30-cell matrix — five asset categories (Devices, Applications, Networks, Data, Users) against each TIPPSS dimension. Each cell identifies the verification approach and the technologies expected at each maturity level. The gap between a vendor's current state and required state in any cell is the risk in that dimension.

Risk Redefined

Traditional risk = Likelihood × Impact (subjective estimates). CAM risk = Verification Gaps (measurable facts). By increasing verification coverage, an organization is not just "buying tools" — it is buying certainty in its risk calculations. The unverified is the unknown. The unknown is the risk.

The full 150-cell reference matrix (6 TIPPSS × 5 assets × 5 CMM levels with named technologies) is documented in Methodology/Risk-Quantification/CMMI-CAM-MATRIX.md.


4. CMM Maturity Levels: Scoring Vendors 1–5

VCRI scores vendor security posture using CMM (Capability Maturity Model) levels 1–5 per TIPPSS dimension per asset type. CMM was selected for three reasons: universal recognition across DoD, government, and enterprise procurement; granularity superior to simpler scales; and direct intellectual lineage — Joshua Marpet co-authored CMMC v1, which was derived from CMM.

CMM's key contribution to VCRI's model: it describes not just what technology is present, but how consistently and predictably an organization operates it. A firewall that is present but not maintained to policy is not a CMM Level 3 control — it is Level 1 or 2 at best. The CAM maps both the technology expectations and the process characteristics for each level.

Level 1 — Initial: Ad-hoc, hope-based security. No formal controls. Individual heroics determine outcomes. Cannot be measured or reproduced.
Level 2 — Managed: Basic tools deployed and measured at the project level. Security practices exist but are not standardized across the organization. Results vary by team.
Level 3 — Defined: Organization-wide policies and consistent tooling. Security is proactive. All teams follow the same defined process. Baseline is reproducible.
Level 4 — Quantitative: Statistically managed. Security posture is metrics-driven. Quantitative performance objectives are set and tracked. Variation is understood and controlled.
Level 5 — Optimizing: Self-healing, continuously improving. Automated assurance. The organization can pivot quickly without sacrificing security posture. Defects are prevented, not just detected.

The Gap Is the Risk

For a given vendor relationship, VCRI establishes a required CMM level per TIPPSS dimension based on the sensitivity of the data and processes involved. The delta between the vendor's current level and the required level in any cell is the quantified risk exposure in that dimension. That risk signal feeds VCRI's Dollar-at-Risk formula — the client's attested Functional System value multiplied by the industry incident duration for that risk type — to produce a dollar figure that is directly actionable by finance, legal, and operations.
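The gap calculation can be expressed directly. The required and current levels below are hypothetical example data, not a real vendor assessment:

```python
# Hypothetical assessment for one vendor relationship: required vs. current
# CMM level per TIPPSS dimension.
REQUIRED = {"Trust": 4, "Identity": 4, "Privacy": 3,
            "Protection": 3, "Safety": 2, "Security": 3}
CURRENT  = {"Trust": 3, "Identity": 4, "Privacy": 2,
            "Protection": 3, "Safety": 2, "Security": 2}

# The gap is the risk: any positive delta is quantified exposure.
gaps = {dim: max(0, REQUIRED[dim] - CURRENT[dim]) for dim in REQUIRED}
# gaps -> {"Trust": 1, "Identity": 0, "Privacy": 1,
#          "Protection": 0, "Safety": 0, "Security": 1}
total_gap = sum(gaps.values())  # 3 cells short of required posture
```

Each positive cell then feeds the Dollar-at-Risk calculation described in Section 6.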


5. The Greeks: Data Quality Weighting

Every risk score in the VCRI platform is weighted by four data quality coefficients — The Greeks. They are not separate scores. They are multipliers applied to raw CAM gap scores before any Dollar-at-Risk calculation is produced. A vendor with a high raw risk score and high Alpha/Beta is a confirmed risk. A vendor with a high raw risk score but low Greeks may simply be poorly measured — the risk is real but the signal is noisy. Both are important to distinguish.

α — Alpha (Accuracy)
  Measures: the degree to which vendor evidence is independently verified vs. self-attested.
  High value: evidence corroborated by third-party assessment or automated cross-validation. Trust the data.
  Low value: self-attestation only. The vendor is grading their own homework; the score reflects claims, not reality.

β — Beta (Automation)
  Measures: the percentage of data arriving via automated API pull vs. vendor-prepared packages.
  High value: live system telemetry with no curation opportunity. Data updates continuously without human selection bias.
  Low value: the vendor packaged their own submission, so the data reflects what they chose to include. Lower confidence in completeness.

γ — Gamma (Concentration)
  Measures: how many critical business processes depend on this vendor.
  High value: vendor failure would cascade across multiple critical processes. A systemic risk amplifier.
  Low value: the vendor is isolated. Failure impact is bounded; risk exposure is contained.

θ — Theta (Recency)
  Measures: the confidence discount applied as data ages without update.
  High value: fresh data with continuous telemetry active. The score reflects current posture.
  Low value: stale data; the last assessment was months ago and posture may have materially changed. Re-assessment required.
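A minimal sketch of the weighting, assuming a simple multiplicative form; the actual functional form and calibration are VCRI's and are not specified in this document, and the coefficient values below are illustrative:

```python
def weighted_score(raw_gap, alpha, beta, gamma, theta):
    """Apply the Greeks to a raw CAM gap score (assumed multiplicative form).

    Assumption: alpha, beta, theta in (0, 1] discount low-quality signal;
    gamma >= 1 amplifies systemic concentration.
    """
    return raw_gap * (alpha * beta * theta) * gamma

# Same raw gap, different data quality:
confirmed = weighted_score(3.0, alpha=0.95, beta=0.90, gamma=1.5, theta=1.0)
noisy     = weighted_score(3.0, alpha=0.40, beta=0.30, gamma=1.5, theta=0.6)
# confirmed is ~3.85 (trust the signal); noisy is ~0.32 (re-measure first)
```

This illustrates the distinction in the text: a high raw score with high Greeks is a confirmed risk, while the same raw score with low Greeks signals a measurement problem to resolve before acting.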

Gamma: The Systemic Concentration Score

Gamma is displayed as the Systemic Concentration Score on the Decision Science Quadrant dashboard view. It answers the question: "If this vendor has a problem, how bad is our problem?" A vendor with a moderate raw risk score but high Gamma (many critical processes depending on them) may require more urgent attention than a vendor with a higher raw risk score but low Gamma.

Gamma is computed at two levels: the individual client level (how much does this vendor affect MY value chain) and, where data permits, the ecosystem level (how many VCRI clients depend on this vendor — a true systemic risk indicator).

Portable Trust: The Vendor Incentive

Vendors who maintain strong Alpha and Beta scores gain a significant operational benefit: Portable Trust. A verified VCRI score is shareable across all of a vendor's customer relationships simultaneously. Rather than responding to separate questionnaire requests from 50 different enterprise clients, a vendor with a strong VCRI profile can direct clients to that profile as the authoritative answer.

This creates a market incentive aligned with the governance objective: vendors who allow automated API access and maintain continuous telemetry achieve the highest Alpha and Beta scores, which makes their Portable Trust profile most valuable to their customer relationships. Transparency becomes the rational economic choice, not merely a policy requirement.


6. The Atomic Unit of Risk: Functional System Pricing

The Granularity Problem

Security tooling can quantify risk at extraordinary granularity — down to the individual CVE, switch port, or line of code. The Heisenberg Problem applies here: that level of granularity renders continuous quantification impossible, because the target changes faster than it can be measured. Point-in-time CVE counts are the wrong unit for continuous risk pricing.

At the opposite extreme, organization-level risk is too coarse to drive prioritization decisions. Telling a board that "cyber risk costs us $X" annually provides no actionable signal about which vendors, systems, or controls to address first.

The Functional System

VCRI defines the Functional System as the atomic unit of risk quantification: the smallest self-contained loop that generates or protects value. What "value" means depends on the organization:

  • Revenue generating — commercial entities — processes that directly produce revenue: payment processing, subscription billing, sales pipeline execution.
  • Revenue protecting — commercial entities — processes that prevent revenue loss: fraud detection, authentication, backup and disaster recovery, access controls. If these fail, money already earned is lost.
  • Direct mission support — public entities — systems that execute an agency's core mandate: a VA benefits processing system, an FDA drug approval pipeline, a DOD logistics platform. Failure means mandate delivery fails.
  • Indirect mission support — public entities — systems that enable mandate execution without directly delivering it: identity management, internal communications, procurement systems. Failure cascades into mandate delivery.

For public entities, the "Process Value" input to VCAR is the mission-impact dollar equivalent — the dollar value of benefits processed, services delivered, or operational capacity supported per unit time. The formula is identical; the value unit reflects the organization's context.

The Functional System has three properties that make it the right unit:

  • Stable: The value-generating purpose of a Functional System — revenue or mission — persists even as the code, infrastructure, and configuration beneath it change. This stops the Decay of Truth at the scoring level.
  • Priceable: Clients assign a dollar value and downtime impact to each Functional System ("this process going down costs us $50,000 per week"). This gives every risk score a direct financial translation.
  • Bridging: It is simultaneously visible to security engineers (who can point to the supporting systems) and to executives (who recognize the business process it represents).

VCAR — Value Chain At Risk

VCRI's quantification model is VCAR (Value Chain At Risk) — a deliberate parallel to financial VaR (Value at Risk), bringing institutional risk-pricing discipline to supply chain security. The formula uses two inputs:

  • Process Value — client self-attested: "This billing cycle generates $50,000/week." No external model can know this; the client is the authority on their own operations.
  • Industry Incident Duration — VCRI-maintained: how long a specific risk type (ransomware, data exfiltration, DDoS) typically keeps organizations impaired, derived from published incident data and maintained by the research arm.

Dollar-at-Risk = Process Value × Industry Incident Duration

Example: A $50K/week billing process with a top ransomware risk (industry norm: 10-week recovery) = $500K at risk. When industry recovery improves from 10 to 4 weeks due to better tooling, the score updates automatically — no manual re-assessment required.
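The formula is a one-line computation, shown here with the document's own example figures:

```python
def dollar_at_risk(process_value_per_week, incident_duration_weeks):
    """Dollar-at-Risk = Process Value x Industry Incident Duration."""
    return process_value_per_week * incident_duration_weeks

# $50K/week billing process, 10-week industry ransomware recovery norm:
assert dollar_at_risk(50_000, 10) == 500_000
# Industry recovery norm improves to 4 weeks; the score updates automatically:
assert dollar_at_risk(50_000, 4) == 200_000
```

Because the incident-duration input is maintained centrally by the research arm, every client's exposure figure moves the moment the industry norm does.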

Total Risk Exposure and Risk Distribution

Most Functional Systems carry multiple risk types simultaneously. VCRI identifies all material risks from the vendor's telemetry, computes a Dollar-at-Risk figure for each, and sums them into a Total Risk Exposure for that system. The dashboard displays each risk type's proportional contribution — visualized as a breakdown by risk category — giving prioritization signal beyond the aggregate number. A CISO can see not just that a given vendor relationship carries $500K in exposure, but that 62% of it is ransomware risk and 23% is credential compromise risk, and prioritize remediation accordingly.
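The breakdown can be sketched as follows; the per-risk dollar figures are hypothetical, chosen to reproduce the 62% / 23% split described above:

```python
# Hypothetical per-risk exposures for one Functional System:
exposures = {"ransomware": 310_000,
             "credential_compromise": 115_000,
             "data_exfiltration": 75_000}

total_exposure = sum(exposures.values())           # Total Risk Exposure
shares = {risk: round(100 * usd / total_exposure)  # % contribution per type
          for risk, usd in exposures.items()}
# shares -> {"ransomware": 62, "credential_compromise": 23,
#            "data_exfiltration": 15}
```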

In Plain Language

"Risk priced at the level of what matters — whether you measure that in revenue or in mission."

Business Impact Analysis Integration

VCRI provides standard Business Impact Analysis (BIA) templates to help clients accurately quantify the dollar value of each Functional System. These templates ensure that the Process Value inputs to the risk pricing model are defensible — not rough estimates — and consistent with industry-standard methodology for downtime cost calculation.


7. Governance Model: Why the Data Can Be Trusted

The credibility of VCRI's risk signal depends entirely on the trustworthiness of the underlying data. A sophisticated prospect's first governance question is invariably: "If vendors are paying customers, what prevents them from gaming the submission?" The answer is architectural, not merely contractual.

Layer 1 — Automated Pull as the Preferred Model

VCRI's preferred ingestion model is direct API pull from vendor security systems via CRIBL. When this model is active, the vendor does not interact with the data submission process — CRIBL connects to the live systems and ingests whatever those systems report. The vendor's ability to curate their submission is structurally removed. Planned integrations span the major commercial security platforms: CrowdStrike and SentinelOne for endpoint posture; Microsoft Entra ID and Okta for identity risk; Tenable and Qualys for vulnerability coverage; Splunk, Microsoft Sentinel, and Google Chronicle for SIEM telemetry; Eclypsium for firmware and hardware supply chain integrity; and ServiceNow for patch and remediation velocity. Each integration requires only a read-scope API authorization — the vendor adds VCRI as a reader on a platform they already operate, with no data leaving their existing SaaS tenants.

For vendors who decline API access and submit packaged data instead, VCRI accepts the submission — but the vendor's Beta (β) score reflects the lower provenance quality. Packaged submissions receive a lower automation coefficient than live API-sourced data. The market signal is transparent: a vendor's Beta score communicates to every client organization how their data arrived. This creates a structural incentive: vendors who want the strongest Portable Trust profile have a direct economic reason to allow automated access. Transparency is the rational choice.

Layer 2 — Expected-State Comparison

VCRI does not simply store what vendors submit. Every submission is evaluated against an expected-state model: given the vendor's stated CMM maturity level, what should their security telemetry show? Gaps between expected and actual state surface as risk signals regardless of whether the submitted data itself is "clean." A vendor submitting spotless data that fails to account for missing controls is as detectable as one submitting data indicating known gaps.
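Expected-state comparison can be sketched as a set difference between the evidence a claimed maturity level predicts and the evidence actually observed. The baseline and control names below are hypothetical illustrations, not VCRI's calibrated model:

```python
# Hypothetical evidence baseline: what telemetry should show at each
# claimed CMM level. Control names are illustrative placeholders.
EXPECTED_EVIDENCE = {
    3: {"mfa_enforced", "edr_coverage", "patch_sla_tracking"},
    4: {"mfa_enforced", "edr_coverage", "patch_sla_tracking",
        "metrics_driven_review"},
}

def expected_state_gaps(claimed_level, observed_evidence):
    """Evidence the claimed maturity level predicts but telemetry lacks."""
    expected = EXPECTED_EVIDENCE.get(claimed_level, set())
    return sorted(expected - set(observed_evidence))

# A clean-looking submission that omits patch SLA tracking at claimed Level 4:
missing = expected_state_gaps(4, {"mfa_enforced", "edr_coverage",
                                  "metrics_driven_review"})
assert missing == ["patch_sla_tracking"]
```

The absence of expected evidence is itself a risk signal, which is why a spotless submission does not guarantee a spotless score.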

Layer 3 — Non-Profit Governance Board

As a non-profit consortium, VCRI's operational policies — including what data is collected, how it is evaluated, and how the scoring model is calibrated — are governed by a board with no financial stake in vendor outcomes. Board members include former senior executives from Oracle and Lawrence Livermore National Laboratory, the CISO of a major healthcare system, and the architects of the underlying standards frameworks. The governance board, not the paying vendor base, controls the methodology.

Non-Profit Structure — Why It Matters

VCRI's non-profit status is not incidental to its governance model — it is the governance model. A for-profit vendor risk platform has an inherent tension: vendors who pay more receive services calibrated to retain them. VCRI has no such tension. Its only financial relationship with vendors is the flat escrow fee for dashboard management. The risk signal is calibrated to truth, not to retention.


8. Regulatory Alignment

VCRI's continuous vendor risk quantification model is specifically designed to satisfy the technical requirements of the emerging global regulatory landscape. The following maps the key technical requirements of each major framework to VCRI's capabilities.

DORA
EU — Digital Operational Resilience Act
Requires financial entities to implement ICT third-party risk management with continuous monitoring obligations. Mandates contractual provisions for audit rights and exit strategies. VCRI's continuous telemetry and TLP dashboard provide the monitoring infrastructure required; the Provenance Pack serves as the audit evidence substrate. GeoIP enrichment in the CRIBL pipeline flags foreign-origin infrastructure relevant to DORA's concentration risk provisions.
FedRAMP
USA — Federal Risk and Authorization Management Program
Requires continuous monitoring of cloud service providers used by federal agencies. FedRAMP's ConMon requirements map directly to VCRI's pipeline: automated evidence collection, control assessment against NIST 800-53, and ongoing reporting. VCRI's SCF normalization layer translates NIST 800-53 controls to the unified scoring schema natively.
CMMC / CSRMC
USA — Defense Supply Chain
CMMC requires prime contractors to ensure subcontractor compliance, with reassessment triggered by any significant infrastructure change. CSRMC extends this with a "Continuous ATO" concept. VCRI's pipeline can detect significant changes in vendor telemetry and trigger reassessment alerts automatically. VCRI's CMM scoring framework directly reflects the CMMC maturity model — Joshua Marpet co-authored CMMC v1.
SAMA
Saudi Arabia — Monetary Authority
SAMA's Cyber Security Framework mandates third-party risk management with continuous assessment provisions for regulated financial institutions operating in Saudi Arabia. VCRI's neutral non-profit structure and standards-based methodology (IEEE/UL 2933, SCF) position it as suitable infrastructure for SAMA-compliant value chain monitoring in the GCC region.
NESA
UAE — National Electronic Security Authority
NESA's Information Assurance Standards require continuous security management and supply chain risk visibility for critical infrastructure sectors. The UAE's position as a global commerce hub makes VCRI's value chain visibility model particularly aligned with regulatory direction. Nations that operationalize continuous risk quantification first become the safest hubs for cross-border commerce.
ISMAP
Japan — Information System Security Management & Assessment Program
Japan's FedRAMP equivalent, administered by NISC and METI, requires continuous security assessment for cloud and IT systems serving Japanese government agencies. ISMAP's assessment framework maps to ISO 27001/27017 and NIST controls — all translatable via SCF to VCRI's unified scoring schema. VCRI's Portable Trust model enables vendors already assessed under ISMAP to extend that verified profile to Japanese government customers without redundant audit cycles.

9. Integration Reference

Technology Partner Stack

CRIBL Data pipeline. Ingests vendor system telemetry via API pull. Performs schema normalization, sensitivity redaction, SCF control mapping, and CMM maturity classification. The operational layer between vendor systems and the VCRI scoring engine.
VCRI Intelligence Layer AI-assisted intake agent. Enables vendors to submit existing security data — exports, compliance reports, BOMs, audit artifacts — in any format. The intelligence layer normalizes submissions to the VCRI schema, identifies which CAM cells can be scored, asks targeted follow-up questions for gaps, and produces an initial assessment without requiring pre-configured API connectors. Submissions are tagged with a lower Alpha (α) confidence score than live-feed data, creating a clear incentive path toward automated integration.
SCF Secure Controls Framework. Master control library providing cross-framework translation. Maps SOC 2, ISO 27001, NIST CSF, HIPAA, PCI-DSS, CMMC, and 100+ other frameworks to a common control identifier schema. Enables apples-to-apples vendor comparison regardless of their compliance regime. Tom Cornelius (SCF founder) is a VCRI board member.
Cyturus Continuous maturity management and assessment platform. Serves as the evidence repository for vendor control data. Tracks maturity over time using Redis state persistence in the CRIBL pipeline. Provides the longitudinal data supporting Theta (θ) time-decay scoring. Robert Hill (Cyturus CEO) is a VCRI board member.
Industry Incident Database VCRI research arm output. Continuously maintained dataset of how long specific risk types (ransomware, data exfiltration, DDoS, credential compromise) typically keep businesses impaired, drawn from published incident data. Combined with client-attested Functional System value to produce Dollar-at-Risk figures. When industry recovery norms shift — e.g., better tooling reduces average ransomware recovery from 10 weeks to 4 — affected risk scores update automatically across the platform without manual re-assessment.
Jira / ServiceNow Closed-loop remediation integration. Prioritized remediation lists generated by the dashboard can sync directly to vendor ticketing systems, enabling real-time tracking of gap closure without manual coordination overhead.
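The Dollar-at-Risk mechanism described for the Industry Incident Database can be sketched as follows. The figures, field names, and table structure here are hypothetical, not drawn from VCRI's actual schema; the sketch only shows how the two inputs (client-attested Functional System value × industry incident duration) combine, and why a shift in industry recovery norms re-prices every affected score without manual re-assessment.

```python
# Shared table maintained by the incident-database research arm
# (values are illustrative, not real incident statistics).
RECOVERY_WEEKS = {
    "ransomware": 10.0,
    "data_exfiltration": 3.0,
}

def dollar_at_risk(weekly_process_value: float, risk_type: str) -> float:
    """Dollar-at-Risk = attested process value per week x expected weeks impaired."""
    return weekly_process_value * RECOVERY_WEEKS[risk_type]

# A Functional System attested at $250k/week, facing ransomware risk:
print(dollar_at_risk(250_000, "ransomware"))   # 2500000.0

# When better tooling cuts average ransomware recovery to 4 weeks,
# updating the shared table re-prices every dependent score on re-run:
RECOVERY_WEEKS["ransomware"] = 4.0
print(dollar_at_risk(250_000, "ransomware"))   # 1000000.0
```

The design choice is that the only per-vendor input is the attested process value; the duration side lives in one shared dataset, so the model re-runs continuously at scale.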

Standards Basis

IEEE/UL 2933 Primary standards framework. Originally developed for Clinical IoT. VCRI extends it to general value chain verification. Defines the TIPPSS dimensions and the Functional System boundary concept. Mitch Parker (VCRI board) is co-Vice Chair of the IEEE/UL 2933 committee.
CMM Maturity scoring framework (Capability Maturity Model). Provides the 5-level scale mapped to each CAM cell. Widely recognized in DoD and government procurement.
SCF Secondary standards basis. Master control library for cross-framework normalization. Integrates 100+ compliance frameworks into the unified scoring schema.
FAIRFactor Analysis of Information Risk. An industry-recognized risk quantification framework with strong mathematical rigor. VCRI's VCAR model uses a simpler two-input design (Process Value × Industry Incident Duration) for operational continuity at scale — not as a critique of FAIR's methodology, but because continuous operation requires a model that re-runs without seven-factor re-estimation per scenario.
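The Theta (θ) time-decay scoring referenced in the partner stack above can be sketched as exponential confidence decay, with a half-life set by how quickly each evidence type goes stale. This is not VCRI's published formula; the functional form and half-life values are assumptions chosen to illustrate the "Decay of Truth" concept.

```python
# Hypothetical per-evidence-type half-lives, in days.
HALF_LIFE_DAYS = {
    "live_telemetry": 1.0,    # API-fed data goes stale within days
    "pentest_finding": 30.0,
    "annual_audit": 180.0,
}

def theta(evidence_type: str, age_days: float) -> float:
    """Confidence weight in [0, 1] for evidence of a given age:
    fresh evidence weighs 1.0, then halves every half-life."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS[evidence_type])

# A year-old audit has decayed to roughly a quarter of its weight,
# while half-day-old telemetry is still near full strength:
print(round(theta("annual_audit", 365), 3))     # 0.245
print(round(theta("live_telemetry", 0.5), 3))   # 0.707
```

Under any such decay curve, an annual audit contributes meaningfully only near its issue date, which is the quantitative version of the "compliance artifact, not security instrument" argument.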

10. Get Involved

Government Agencies

VCRI is seeking founding government agency partners to anchor the Year 1 deployment and co-define the operational standard.

info@valuechainrisk.org

Technology Partners

Integration partners for data ingestion, maturity management, controls frameworks, and downstream risk quantification.

info@valuechainrisk.org

Strategic Donors

$3.1M founding round to build, staff, and operationalize the reference architecture. Non-profit consortium governance.

info@valuechainrisk.org

© 2026 Value Chain Risk Institute · Non-Profit Consortium · ValueChainRisk.org
Executive One-Pager: "You Cannot Secure What You Only Check Once a Year."

Utilizes Secure Controls Framework (SCF) — securecompliance.org