The Framework

How VCRI Measures Risk

Independent vendor risk assessment requires a structured, repeatable methodology. VCRI's framework rests on three interconnected components: the CyberAssuranceMatrix (CAM), the VCAR risk quantification formula, and The Greeks data quality multipliers.


Core Assessment Tool

The CyberAssuranceMatrix (CAM)

Developed by Joshua Marpet and Mitch Parker (IEEE/UL 2933 co-Vice Chair), the CAM is a structural verification tool. Where active defense frameworks ask "are we currently under attack?", the CAM asks "were we built to resist attack?" Unverified design assumptions are the risk. The gap between where a vendor is and where they need to be is what VCRI scores.

Six TIPPSS Dimensions × Six Asset Classes × CMM Levels 1–5

T — Trust · I — Identity · P — Privacy · P — Protection · S — Safety · S — Security

Asset classes scored: Devices · Applications · Networks · Data · Users · AI

CMM Maturity Levels

Every vendor is scored on CMM Levels 1–5 per TIPPSS dimension per asset class. The gap between current level and required level is the risk.

1. Initial — Ad-hoc, hope-based. No formal controls. Security depends on individual heroics, not process.
2. Managed — Basic tools deployed. Measured locally; some controls in place but not organization-wide.
3. Defined — Org-wide policies. Consistent tooling across the organization. Security is a process, not an activity.
4. Quantitatively Managed — Metrics-driven. Statistical control of posture. Security is measured and managed by the numbers.
5. Optimizing — Self-healing, continuously improving. Automated assurance. Security posture improves without manual intervention.

Risk Quantification

VCAR — Value Chain At Risk

VCRI produces dollar-denominated risk figures using a deliberately simple two-input model — a conscious parallel to financial VaR (Value at Risk). The goal is continuous operation, not exhaustive modeling.

Dollar-at-Risk = Client-attested Process Value × VCRI-maintained Industry Incident Duration
Process Value

The revenue or mission value of each Functional System, self-attested by the client: "This billing process generates $50,000/week." Only the client knows this number; no external model can replace it.

Industry Incident Duration

How long specific risk types (ransomware, exfiltration, DDoS) typically impair operations — drawn from published incident data. VCRI maintains and continuously updates this database. When industry recovery times improve, every risk score updates automatically.
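The two-input model reduces to a single multiplication. A minimal sketch, where the function name and the three-week duration figure are assumptions for illustration:

```python
def dollar_at_risk(process_value_per_week: float,
                   incident_duration_weeks: float) -> float:
    """VCAR: client-attested process value multiplied by the
    VCRI-maintained typical impairment duration for a risk type."""
    return process_value_per_week * incident_duration_weeks

# The billing process from the text ($50,000/week), against a risk
# type assumed to impair operations for 3 weeks.
risk = dollar_at_risk(50_000, 3)   # → 150000
```

Because the duration side is maintained centrally, updating one industry figure re-prices every client process that references it, which is what makes the model continuous rather than annual.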

Why not FAIR? FAIR is excellent — mathematically rigorous and well-designed. Its operational friction (seven input factors per scenario) makes continuous assessment expensive. VCRI's two-input model is rougher but responsive. Continuous visibility, not annual precision.

Data Quality

The Greeks — Four Quality Multipliers

Every risk score is adjusted by four multipliers that reflect data quality, vendor transparency, and concentration risk. A vendor with a technically acceptable posture but low data quality gets a worse effective score — which is correct.

α — Alpha (Accuracy): Verified evidence vs. self-attestation

High Alpha = the data comes from verified, independent sources. Low Alpha = vendor filled out a questionnaire. High Alpha vendors can share one verified profile with all their clients — Portable Trust.

β — Beta (Automation): API pull vs. manual package

Percentage of data from automated API pull vs. manually assembled packages. Low Beta means the vendor curated their submission — the equivalent of an applicant editing their own reference letters.

γ — Gamma (Concentration Risk): How many processes depend on this vendor

How many of your critical business processes depend on this vendor. High Gamma = systemic failure risk. Drives the Decision Science Quadrant: Systemic Concentration Score vs. Remediation ROI.

θ — Theta (Recency / Time Decay): Confidence drops as data ages

A scan expires the moment it's complete. High Theta = re-assessment overdue. This is the Heisenberg Problem of Risk made visible: the more granular the data, the faster its truth expires.
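Taken together, the four multipliers can be sketched as an adjustment on a base Dollar-at-Risk figure. The text does not publish the combination rule, so the formula below (dividing by the quality factors, with an exponential Theta decay on an assumed 90-day half-life) is a hypothetical illustration only:

```python
import math

def adjusted_risk(base_dollar_at_risk: float,
                  alpha: float,   # accuracy: 1.0 = fully verified evidence
                  beta: float,    # automation: 1.0 = full API pull
                  gamma: float,   # concentration: >1.0 = many dependent processes
                  days_old: float,
                  half_life_days: float = 90.0) -> float:
    """Assumed combination rule: low data quality (alpha, beta), high
    concentration (gamma), and stale data (theta) all inflate the
    effective risk figure."""
    # Theta: confidence decays exponentially as the data ages.
    theta = math.exp(-math.log(2) * days_old / half_life_days)
    return base_dollar_at_risk * gamma / (alpha * beta * theta)
```

With fresh, fully verified, fully automated data and no concentration (all factors 1.0), the adjusted figure equals the base; halving any quality factor doubles the effective risk, matching the text's point that low data quality worsens the score.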

Infrastructure

How Data Flows

Vendors operate under heterogeneous compliance regimes. VCRI normalizes everything to a common baseline so cross-vendor comparison is possible regardless of source framework.

CRIBL Pipeline

Ingests vendor system data — vulnerability management, SIEM, GRC tools — and applies transforms: schema normalization, sensitivity redaction, SCF control mapping, maturity classification.

SCF Translation Layer

Maps 100+ compliance frameworks to a single master control library. A vendor under ISO 27001 and a vendor under CMMC become directly comparable — both map to the same SCF control ID and the same CAM cell.

CAM Output

Normalized data is scored into TIPPSS × CMM cells. Verification gaps are computed, Dollar-at-Risk is calculated via VCAR, and Traffic Light risk signals are published per vendor per category.
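The Traffic Light output can be sketched as a simple bucketing of the VCAR figure. The thresholds below are invented for illustration; the text specifies only that signals are published per vendor per category:

```python
def traffic_light(dollar_at_risk: float,
                  green_below: float = 100_000,   # assumed threshold
                  red_at: float = 1_000_000) -> str:  # assumed threshold
    """Bucket a Dollar-at-Risk figure into GREEN / YELLOW / RED."""
    if dollar_at_risk < green_below:
        return "GREEN"
    if dollar_at_risk < red_at:
        return "YELLOW"
    return "RED"
```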

Inside-Out, not Outside-In. VCRI doesn't scan vendors from the outside. Vendors submit live telemetry from the inside — preferably via automated API pull. Vendors who package their own data instead of allowing automated pull receive a lower Beta score, creating a market incentive for full transparency.

Deep Dives

Technical Documentation

Full technical specifications, regulatory alignment, and methodology details.