CPCI
Community Project Cost Index

A transaction-derived cost benchmark for community association capital projects. Every data point originates from a verified vendor payment in a double-entry ledger, linked to a building component, a contractor record, and a set of physical building characteristics.

Methodology published April 14, 2026

1. Definition

What CPCI is and what it is not

The Community Project Cost Index (CPCI) is a regional cost benchmark that tracks the actual, verified cost of capital replacement projects completed by community associations — condominiums, homeowners associations, and planned communities.

CPCI is derived exclusively from completed vendor payments recorded in CommunityPay general ledgers. Every data point represents a real transaction: money that left a real bank account, posted through a real double-entry ledger, linked to a real building component and a real licensed contractor. No survey data. No self-reported estimates. No broker opinions. No manufacturer list prices.

CPCI is not an appraisal. It is not a cost estimate. It is not a substitute for a professional reserve study with site inspection. It is a statistical benchmark derived from observed transaction data, segmented by component type, building characteristics, and geographic region. Its purpose is to ground cost assumptions in empirical evidence rather than professional judgment alone.

2. The Problem CPCI Solves

Why this data has never existed

There are approximately 370,000 community associations in the United States, collectively managing over $100 billion in reserve funds. These funds exist to pay for capital replacements: roofs, elevators, plumbing systems, parking lots, siding, HVAC systems, and hundreds of other building components with finite useful lives.

Every year, reserve study professionals estimate the future cost of replacing these components. They use manufacturer data, contractor bids, RS Means databases, professional judgment, and regional adjustments. These estimates inform how much associations should contribute to reserves annually.

The problem: when the replacement actually happens, the cost is recorded in the association's general ledger — but that ledger has never been connected to a structured database of component types, building characteristics, and contractor records. The data dies in the ledger. It is never aggregated, never benchmarked, never fed back into the estimation models that need it most.

The result is a $100 billion asset class governed by estimates that are never systematically validated against outcomes. A reserve study estimates that a roof replacement for a 60-unit building in Seattle will cost $280,000. When it actually costs $340,000, that variance is absorbed by the association and forgotten. It never enters a dataset that the next reserve study professional can reference.

CPCI closes this loop. It connects four data elements that have never before been linked in a single system:

| Element | What Is Captured | Source |
| --- | --- | --- |
| What was replaced | Component type, category, useful life, installation year, condition at replacement | Reserve component register |
| What it actually cost | Bid amount, change orders, final contract value, payment schedule, retainage | General ledger (posted vendor payments) |
| Who did the work | Contractor license, bond amount, insurance coverage, compliance status at time of bid | BuildRated contractor graph |
| Building characteristics | Unit count, stories, building age, total square footage, construction type, region | Association profile |

3. Data Pipeline

From board decision to benchmark data point

Every CPCI data point follows an auditable chain from board decision to index entry. No data point enters the benchmark without traversing the full pipeline.

  1. Board creates a capital project record. The project defines the component being replaced or repaired, the estimated budget, the funding source (operating or reserve), the expected timeline, and the project scope (full replacement, partial repair, phased multi-year, or emergency).
  2. Contractors submit bids. Each bid is linked to a contractor profile in the BuildRated contractor graph. At bid evaluation time, the contractor's license status, bond amount, insurance coverage, and compliance history are frozen in a point-in-time snapshot. This snapshot is immutable — if the contractor's license lapses after the bid, the record proves it was valid when the board made the decision.
  3. Board evaluates and selects. The board's selection rationale, vote record, and approval chain are captured. The system flags scope gaps between bids, price outliers, missing certificates of insurance, and expired bonds. A Board Decision Record (BDR) is generated as an immutable institutional artifact with a SHA-256 content hash.
  4. Vendor payment posts through the enforcement ledger. The actual cost — including change orders, retainage releases, and warranty holdbacks — is recorded as a journal entry via the CommunityPay enforcement dispatcher. Every payment passes through the guard chain (balance, fund segregation, vendor risk, covenant compliance). The payment links back to the project, the selected bid, and the contractor.
  5. Project completion triggers CPCI ingestion. When the project status transitions to completed and all vendor payments are posted, the data point enters the CPCI pipeline. Component type, actual cost, building characteristics, region, contractor profile, bid-to-actual variance, and project timeline are indexed.
  6. Data point enters the benchmark. The observation is normalized (see Section 5), assigned to its dimensional cohort, and incorporated into the rolling benchmark. Individual association data is never disclosed. Only aggregated statistics are published.
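
The ingestion gate in steps 4 through 6 can be sketched as a simple eligibility check. This is an illustrative sketch: the record shape and field names are assumptions, not CommunityPay's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProjectRecord:
    """Minimal view of a capital project as seen by the CPCI ingester."""
    status: str            # project lifecycle state, e.g. "open" or "completed"
    payments_posted: bool  # all vendor payments posted through the ledger
    consent_active: bool   # association holds active CARI consent

def eligible_for_cpci(project: ProjectRecord) -> bool:
    """A project becomes a CPCI data point only when it is completed,
    fully paid through the enforcement ledger, and covered by an
    active consent."""
    return (project.status == "completed"
            and project.payments_posted
            and project.consent_active)
```

A project that is incomplete, or whose payments have not all posted, simply never reaches the benchmark.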

4. Component Taxonomy

Standardized classification of capital assets

CPCI uses a hierarchical component taxonomy to classify capital projects. The taxonomy is designed to be specific enough for meaningful benchmarking while broad enough to accumulate sufficient data density within each category.

| Category | Components |
| --- | --- |
| Roofing | Composition shingle, membrane (TPO/EPDM/PVC), metal, tile, cedar shake, built-up, slate, green roof, gutter systems, flashing |
| Building envelope | Siding (vinyl, fiber cement, wood, metal), exterior paint, stucco, brick repointing, caulking/sealant, window replacement, balcony waterproofing, deck coating |
| Vertical transport | Elevator modernization, elevator cab refurbishment, escalator, wheelchair lift, controller upgrade, hydraulic-to-traction conversion |
| Mechanical systems | HVAC (central plant, split systems, boiler, chiller, cooling tower), fire suppression, fire alarm, generator, pumps, ventilation |
| Plumbing | Domestic water riser, waste/drain riser, water heater, backflow preventer, re-piping (copper, PEX, CPVC), sewer lateral, irrigation |
| Electrical | Main switchgear, panel replacement, exterior lighting, parking garage lighting, EV charging infrastructure, transformer, meter bank |
| Site work | Asphalt paving, concrete flatwork, retaining walls, fencing, landscaping (hardscape), drainage, stormwater, erosion control |
| Common areas | Pool/spa resurfacing, pool equipment, fitness center, lobby renovation, hallway carpet/flooring, mailbox replacement, amenity furniture |
| Structural | Foundation repair, post-tension cable repair, concrete spalling, structural steel, seismic retrofit, parking garage structural |
| Specialty | Marina/dock, tennis/pickleball court, playground, security gate, access control, telecom/low-voltage, solar/renewable |

Each component is further characterized by material, specification grade, and unit of measure (per square foot, per unit, per linear foot, per floor, lump sum) to enable normalized cost comparison across different building sizes.

5. Index Dimensions and Normalization

How cost data is segmented and compared

Raw project costs are not directly comparable across buildings. A roof replacement for a 20-unit garden-style complex is a fundamentally different project than a roof replacement for a 200-unit high-rise. CPCI normalizes costs across multiple dimensions to produce meaningful benchmarks.

Primary dimensions

| Dimension | What It Captures | Segmentation |
| --- | --- | --- |
| Component category | What was replaced or repaired | Hierarchical taxonomy (Section 4). Benchmarks computed at both category and sub-component level. |
| Building size | Scale of the association | Unit count buckets: 2–10, 11–30, 31–75, 76–150, 151–300, 300+. Total square footage where available. |
| Building type | Physical configuration | Garden-style, mid-rise (4–7 stories), high-rise (8+), townhome, single-family detached (PUD), mixed-use. |
| Building age | Year of original construction | Decade buckets: pre-1970, 1970s, 1980s, 1990s, 2000s, 2010s, 2020s. |
| Region | Geographic market | CBSA (Core Based Statistical Area) for metro regions. State for non-metro. FEMA region as fallback. |
| Project scope | Extent of the work performed | Full replacement, partial replacement, major repair, phased multi-year, emergency/unplanned. |
| Completion year | When the project was completed | Calendar year of final payment. Enables inflation adjustment and trend analysis. |

Normalization methods

Costs are normalized to enable comparison across different building sizes. The normalization unit depends on the component category:

| Component Category | Normalization Unit |
| --- | --- |
| Roofing | Cost per roofing square (100 sq ft), cost per unit |
| Building envelope | Cost per square foot of facade area, cost per unit |
| Elevator | Cost per cab, cost per floor served |
| Plumbing | Cost per riser, cost per unit, cost per floor |
| Site work (paving) | Cost per square foot, cost per parking space |
| Common areas | Cost per square foot of affected area |
| All categories | Total project cost, cost per unit (universal fallback) |

Cost-per-unit is always computed as a universal fallback metric, enabling cross-category comparison. Component-specific normalization units are computed when the relevant building data is available.

6. Statistical Framework

How benchmarks are computed from observations

Minimum sample size

No benchmark is published for a dimensional cohort (component category + region + building type combination) until the cohort contains a minimum of five independent observations from at least three distinct associations. This prevents any single association's project cost from being reverse-engineered from the benchmark.
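
A minimal sketch of this publishability gate, assuming each observation record carries an association identifier (the record shape is illustrative):

```python
MIN_OBSERVATIONS = 5   # per dimensional cohort
MIN_ASSOCIATIONS = 3   # distinct associations contributing

def cohort_publishable(observations: list[dict]) -> bool:
    """Publish a cohort benchmark only when both thresholds are met,
    so no single association's cost can be reverse-engineered."""
    if len(observations) < MIN_OBSERVATIONS:
        return False
    distinct = {obs["association_id"] for obs in observations}
    return len(distinct) >= MIN_ASSOCIATIONS
```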

Central tendency and dispersion

Each published benchmark includes: median cost (primary reference), 25th percentile, 75th percentile, mean cost, observation count, and the date range of contributing observations. The median is the primary reference value because capital project costs are right-skewed — a small number of complex or emergency projects can distort the mean.

Outlier treatment

Observations more than 3x the interquartile range above the 75th percentile or below the 25th percentile are flagged for manual review. Flagged observations are not automatically excluded — they may represent legitimate cost variation (e.g., asbestos abatement driving up a re-piping cost). Excluded observations are documented with a reason code.
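
The summary statistics and the 3x-IQR flagging rule can be sketched with Python's statistics module. In this sketch flagged values are reported for review, never silently excluded, matching the rule above.

```python
from statistics import median, quantiles

def benchmark_stats(costs: list[float]) -> dict:
    """Median-centered summary with 3x-IQR outlier flagging."""
    q1, _, q3 = quantiles(costs, n=4)        # 25th and 75th percentiles
    iqr = q3 - q1
    low, high = q1 - 3 * iqr, q3 + 3 * iqr   # flagging bounds, not filters
    return {
        "median": median(costs),
        "p25": q1,
        "p75": q3,
        "mean": sum(costs) / len(costs),
        "n": len(costs),
        "flagged": [c for c in costs if c < low or c > high],
    }
```

The median's robustness is visible here: one emergency project an order of magnitude above the rest barely moves the median but lands in the flagged list for manual review.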

Temporal adjustment

Multi-year trend analysis uses the completion year dimension. No synthetic inflation adjustment is applied. The raw data speaks for itself: if roof replacements in the Seattle metro cost 12% more in 2027 than in 2025, that is an observed fact, not an adjustment. Users who need inflation-adjusted comparisons can apply their own deflator.
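
Applying a caller-supplied deflator, which CPCI itself deliberately does not do, might look like this sketch (function name and series shape are hypothetical):

```python
def real_cost(nominal: float, year: int,
              deflator: dict[int, float], base_year: int) -> float:
    """Re-express a nominal project cost in base-year dollars using a
    caller-supplied deflator series; CPCI publishes raw figures only."""
    return nominal * deflator[base_year] / deflator[year]
```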

Confidence assessment

Every benchmark includes a confidence level derived from observation count and data completeness:

| Confidence | Criteria |
| --- | --- |
| HIGH | 20+ observations, 5+ associations, normalization data available for 80%+ of observations |
| MEDIUM | 10–19 observations, 3+ associations, normalization data available for 50%+ |
| LOW | 5–9 observations (minimum threshold), limited normalization data. Directional only. |
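
The tiers reduce to a small classifier. The function itself is illustrative, and the final branch reflects the minimum sample size rule stated earlier in this section.

```python
def confidence(n_obs: int, n_assoc: int, norm_coverage: float) -> str:
    """Map cohort size, association count, and normalization-data
    coverage (0.0 to 1.0) to a published confidence level."""
    if n_obs >= 20 and n_assoc >= 5 and norm_coverage >= 0.80:
        return "HIGH"
    if n_obs >= 10 and n_assoc >= 3 and norm_coverage >= 0.50:
        return "MEDIUM"
    if n_obs >= 5 and n_assoc >= 3:
        return "LOW"           # directional only
    return "UNPUBLISHED"       # below the minimum sample threshold
```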

7. Bid-to-Actual Variance

The second dataset inside CPCI

CPCI captures not only what projects cost but how the final cost compared to the winning bid. This bid-to-actual variance dataset is a distinct analytical layer within the index.

For every completed project where both the original bid amount and the final paid amount are recorded, the variance is computed:

Variance = (Final Paid Amount − Original Bid Amount) / Original Bid Amount

This variance is segmented by the same dimensions as the cost benchmark (component type, region, building type, project scope). It answers questions that no existing dataset can:

  1. What is the typical cost overrun for this type of project? If elevator modernizations in mid-rise buildings have a median bid-to-actual variance of +8%, reserve study professionals can factor that into their estimates.
  2. Which project types are most prone to change orders? Structural repairs and plumbing riser replacements may show systematically higher variance than roofing or paving projects because hidden conditions are discovered during construction.
  3. Do emergency projects cost more than planned replacements? The scope dimension (emergency vs. planned) enables direct comparison, quantifying the premium associations pay for deferred maintenance.
  4. Is a specific bid reasonable before the board commits? A board can compare a received bid against the CPCI median for that component, building type, and region — before spending reserve funds, not after.
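
The variance formula above translates directly into code; the helper below is a minimal sketch:

```python
def bid_to_actual_variance(final_paid: float, original_bid: float) -> float:
    """Variance = (Final Paid - Original Bid) / Original Bid.
    Positive values are overruns: 0.08 means 8% over the winning bid."""
    return (final_paid - original_bid) / original_bid
```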

8. Contractor Performance Signal

BuildRated integration

Because every CPCI data point is linked to a contractor in the BuildRated graph, aggregate contractor performance metrics emerge from the data without requiring any additional data collection:

| Metric | Derivation |
| --- | --- |
| Bid accuracy | Mean and median bid-to-actual variance across a contractor's completed projects |
| Cost positioning | Where a contractor's bids and final costs land relative to the CPCI median for that cohort |
| Project completion rate | Ratio of projects reaching "completed" status vs. projects abandoned or terminated |
| Change order frequency | Rate at which the contractor's projects require change orders |
| Payment velocity | Time from project completion to final payment (from the CommunityPay ledger) |

These metrics are aggregated and anonymized. No individual project or association is identifiable. The contractor performance signals feed back into BuildRated scoring and, through that, into the CARI vendor risk component.
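
Bid accuracy, the first metric above, reduces to summary statistics over a contractor's completed projects. The tuple-based record shape in this sketch is an illustrative assumption:

```python
from statistics import mean, median

def bid_accuracy(projects: list[tuple[float, float]]) -> dict[str, float]:
    """Mean and median bid-to-actual variance for one contractor.

    Each tuple is (original_bid, final_paid) for a completed project."""
    variances = [(paid - bid) / bid for bid, paid in projects]
    return {
        "median_variance": median(variances),
        "mean_variance": mean(variances),
    }
```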

9. CARI Integration

How CPCI feeds the risk index

CPCI is a data product within the CARI risk intelligence platform. It consumes data from CARI-scoring associations and feeds two categories of signals back into the CARI score engine:

Financial health signals

When an association's reserve fund adequacy is evaluated, CPCI benchmarks replace generic cost assumptions with empirical data. If an association's reserve study estimates $200,000 for a future roof replacement, but the CPCI median for that building type and region is $280,000, the reserve adequacy assessment reflects the benchmark rather than the study estimate.

Vendor risk signals

When an association's capital project bid deviates significantly from the CPCI regional median, it generates a risk signal. A bid 40% above the median for comparable buildings warrants scrutiny. A bid 40% below the median may indicate scope gaps or a contractor who intends to recover costs through change orders. Both are meaningful signals for the CARI vendor risk component.

Governance signals

The board's decision-making on capital projects — whether they obtained competitive bids, whether they documented the selection rationale, whether the project came in on budget — feeds the governance component of the CARI score. Boards that consistently manage capital projects within CPCI benchmarks demonstrate fiduciary competence.

CPCI data is subject to the same consent framework as all CARI data. No individual association's project costs are disclosed to third parties. Only aggregated, anonymized benchmarks are published through the CARI API.

10. Intended Consumers

Who uses project cost benchmarks and how

Reserve Study Professionals

Validate cost assumptions against actual transaction data. Identify component categories where their estimates systematically diverge from observed costs. Cite CPCI benchmarks in reserve study reports.

HOA Boards

Evaluate whether contractor bids are reasonable for their building type and region before committing reserve funds. Document due diligence in vendor selection. Defend capital project decisions to owners.

Mortgage Lenders

Assess whether an association's reserve fund is adequate relative to the empirical cost of upcoming replacements. Replace generic reserve adequacy assumptions in Fannie Mae 1076 / Freddie Mac 1077 reviews with data-backed benchmarks.

D&O Insurers

Evaluate board fiduciary decision-making by comparing capital project costs and vendor selection patterns against regional benchmarks. Price D&O premiums with project governance data.

Management Companies

Benchmark capital project costs across their portfolio. Identify associations paying above-median costs. Demonstrate value to boards by showing cost management relative to CPCI benchmarks.

Contractors

Understand competitive pricing by component type and region. Identify markets where their pricing is competitive. Demonstrate bid accuracy to prospective clients via BuildRated profile metrics.

11. Data Integrity Guarantees

Why CPCI data can be trusted

CPCI inherits the integrity guarantees of the CommunityPay enforcement ledger. Every data point in the index is backed by the same institutional infrastructure that secures the general ledger:

  1. Every vendor payment is enforced. No payment posts to the ledger without passing through the enforcement dispatcher and its guard chain. The payment amount, vendor, and fund source are all validated before the journal entry is created.
  2. Journal entries are immutable. Once a journal entry is posted, it cannot be edited or deleted. Corrections are made via reversing entries, which are themselves subject to enforcement. The original entry persists in the audit trail.
  3. Contractor snapshots are frozen at bid time. The contractor's license, bond, and insurance status are captured at the moment the bid is evaluated. This snapshot is immutable. It represents the compliance state the board relied on when making their decision, regardless of what happens to the contractor's credentials later.
  4. Board Decision Records are content-hashed. The complete project record — bids received, contractor snapshots, selection rationale, board vote — is sealed with a SHA-256 content hash. Any modification is detectable.
  5. Ledger integrity is continuously verified. The CommunityPay ledger integrity scan runs daily, checking for unbalanced entries, orphaned lines, and missing enforcement decisions, and verifying control account reconciliation. CPCI only ingests data from associations with clean integrity scans.
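
A content hash like the one sealing a Board Decision Record can be sketched as SHA-256 over a canonical serialization. The canonicalization scheme here (sorted-key, whitespace-free JSON) is an illustrative assumption, not CommunityPay's documented scheme:

```python
import hashlib
import json

def content_hash(record: dict) -> str:
    """SHA-256 over a canonical serialization, so any field change
    yields a different hash and tampering is detectable."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Hashing the same record always reproduces the same digest; changing any field, even by one dollar, produces a different one.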

12. Privacy and Consent

How individual association data is protected

Individual association project costs are never disclosed. All published benchmarks are aggregated across multiple associations. The minimum sample size requirement (Section 6) ensures no published benchmark can be traced to a single association.

CPCI data contribution follows the CARI consent model. An association's capital project data enters the CPCI pipeline only if the association has active CARI consent. Consent can be revoked at any time, and revocation removes the association's future project data from benchmark calculations. Historical benchmarks that already incorporated the association's data are not retroactively recalculated, as the aggregated statistics cannot be reverse-engineered to individual observations.

Contractor performance metrics (Section 8) are computed from data across multiple associations and projects. No metric is published for a contractor with fewer than three completed projects across two or more associations.

13. API Access

Machine-to-machine benchmark queries

CPCI benchmarks are accessible through the CARI institutional API. Subscribers with cpci:read scope can query benchmarks by component category, region, building type, and time period. The response envelope follows the standard CARI format with content hash, confidence level, and data completeness.

API access requires an active CARI subscriber agreement. Rate limits and credit-based billing follow the subscriber's CARI tier (Basic, Professional, Enterprise).
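
A subscriber-side check of the content hash on a response envelope might look like this sketch. The field names and hashing scheme are assumptions for illustration, not the documented CARI envelope format:

```python
import hashlib
import json

def verify_envelope(envelope: dict) -> bool:
    """Recompute the hash of the benchmark payload and compare it
    to the hash the envelope claims."""
    canonical = json.dumps(envelope["data"], sort_keys=True,
                           separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest() == envelope["content_hash"]
```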

14. Current Status

Accumulating Data
Methodology published, index in formation

CPCI requires a critical mass of completed capital projects across CARI-scoring associations before benchmarks carry statistical significance. The methodology is defined. The data architecture is built. The enforcement ledger, reserve component register, and BuildRated contractor graph are all in production.

The index will begin publishing regional benchmarks as the underlying transaction volume reaches sufficient density by component category and region. Early benchmarks will be concentrated in the states where CommunityPay has the deepest association coverage.

Every vendor payment posted through the CommunityPay ledger that links to a capital project and reserve component is a future CPCI data point. The pipeline is live. The data is accumulating.

Important: CPCI benchmarks are informational tools for comparative analysis. They do not constitute appraisals, cost estimates, professional engineering assessments, or legal advice. Associations should engage qualified reserve study professionals for project-specific cost analysis. CPCI benchmarks should be used as one input among many in capital planning decisions, not as a sole basis for budgeting or vendor selection.