CODIAC.
Supporting data — technical

Supporting data — numbers & methodology

Companion to the executive briefing dated May 15, 2026. For the engineers and auditors.
May 15, 2026

1. App startup — what changed, what it costs

Source: Webapp Performance Tuning Docs, 2026-05-04 ↔ 2026-05-14.

Cold-start cost in the old setup was dominated by ASP.NET’s runtime CodeDom + csc.exe pipeline parsing and compiling 1,781 view files on the first user click after every restart.

View files in Cybersoft.Primero.Webpages     Count
.aspx pages                                  1,201
.ascx user controls                            557
.asmx service endpoints                         23
Total compilable                             1,781

The four phases shipped

Phase 1: Web.config flip
  What it does: source defaults flipped from debug=true, batch=false to debug=false, batch=true. Re-enables JIT optimization, script bundling, and per-directory DLL batching. Per-cabinet overrides via DEBUG_COMPILATION / BATCH_COMPILATION env vars.
  Cost: zero bytes, zero runtime cost.

Phase 2: Precompile + IIS Application Initialization
  What it does: aspnet_compiler builds all 1,781 view DLLs at image build time. IIS warmup hits /Warmup.aspx synchronously before the pod is marked Ready (see the sketch after this list).
  Cost: +50–250 MB image size; ~2.5 h build (drops to ~30–45 min after the codebase namespace mismatches are fixed by Cybersoft).

Phase 3: NGEN + multi-core JIT
  What it does: native-image precompile over Cybersoft.*.dll + App_global.asax.dll at image build. ProfileOptimization records and replays a JIT profile across w3wp restarts.
  Cost: ~tens to low hundreds of MB image size.

Phase 4: 64-bit app pool
  What it does: migrated DefaultAppPool from 32-bit to 64-bit, removing the ~3.2 GB working-set ceiling. Access Database Engine swapped from x86 to x64 to match (Excel-import path).
  Cost: +30 % steady-state memory, +30–50 % peak under burst; 80 MB ACE installer; 17-consumer Excel-import smoke sweep required before prod.
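The Phase 2 "warm before Ready" behaviour can be sketched as a readiness gate: keep polling /Warmup.aspx and only report success once it answers, so the pod is never marked Ready with cold views. Only the /Warmup.aspx path comes from the tuning docs; the host, timeouts, retry cadence, and the use of Python requests below are illustrative assumptions, not the actual warmup implementation.

```python
"""Minimal readiness-gate sketch for the Phase 2 warmup step.

Assumptions (not from the tuning docs): the warmup page is served on
localhost inside the pod and a 200 response means the precompiled views
and native images are loaded.
"""
import sys
import time

import requests

WARMUP_URL = "http://localhost/Warmup.aspx"   # path from the tuning docs
POLL_INTERVAL_S = 5                           # assumed
MAX_WAIT_S = 600                              # assumed upper bound on cold start


def wait_until_warm(url: str = WARMUP_URL) -> bool:
    """Block until the warmup page answers 200, or give up after MAX_WAIT_S."""
    deadline = time.monotonic() + MAX_WAIT_S
    while time.monotonic() < deadline:
        try:
            if requests.get(url, timeout=30).status_code == 200:
                return True
        except requests.RequestException:
            pass                              # app pool still spinning up
        time.sleep(POLL_INTERVAL_S)
    return False


if __name__ == "__main__":
    # Exit non-zero so a readiness probe keeps the pod out of rotation.
    sys.exit(0 if wait_until_warm() else 1)
```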

What’s still pending

The open app-startup items (the Excel-import smoke sweep, the cabinet memory limit review, and the Cybersoft namespace/directory fixes) are tracked in section 5, Outstanding items & risks.

2. Database tuning — what changed, what it lifted

Source: DB Performance Tuning Docs, 2026-05-05 ↔ 2026-05-14. Target: Primero_CodiacDEV on SQL Server 2022 CU21 (Azure-hosted), 16 vCPU.

Headline numbers (80-user click-around test)

Metric                     Baseline    After scalar-UDF cleanup    Combined with Tier-1 recompile strip (current)
Requests / sec             93.94       99.24 (+5.6 %)              121.13 (+29 %)
50th percentile latency    420 ms      380 ms                      370 ms
95th percentile latency    2,400 ms    2,200 ms                    1,500 ms (−37.5 %)
99th percentile latency    5,600 ms    5,800 ms                    3,800 ms (−32 %)

Saturation test (100 users on the parallel warmup script): 191.35 RPS — up 79 % from baseline.

Why this matters. The two changes are super-additive: scalar-UDF cleanup alone delivered only +5.6 %, but combined with the recompile strip it's +29 %. Each bottleneck was masking the other fix's benefit; removing both is what unblocked the throughput floor.
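For the auditors: the percentage deltas quoted in the headline table can be re-derived from the raw values alone. A minimal check, assuming nothing beyond the numbers printed above:

```python
# Re-derive the deltas in the headline table (values copied from it verbatim).
rows = {
    "Requests / sec":        (93.94, 99.24, 121.13),
    "95th pct latency (ms)": (2400, 2200, 1500),
    "99th pct latency (ms)": (5600, 5800, 3800),
}


def pct(before: float, after: float) -> float:
    """Signed percent change relative to the baseline column."""
    return (after - before) / before * 100


for metric, (baseline, single_fix, combined) in rows.items():
    print(f"{metric:24s} single fix {pct(baseline, single_fix):+6.1f} %   "
          f"combined {pct(baseline, combined):+6.1f} %")
# Expected: +5.6 % and +28.9 % for RPS, -37.5 % for combined p95, -32.1 % for combined p99.
```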

What changed, mechanically

Two changes are in scope for the numbers above: the Tier-1 OPTION(RECOMPILE) strip and the scalar-UDF cleanup. The per-procedure detail is in the DB Performance Tuning Docs cited above.

Top remaining opportunity

CMN_Message_GetList_ByUser is the #1 CPU offender on the click-around workload — ~1 million executions / 24 h, ~120 ms CPU each. The structural fix is to materialize the multi-statement table-valued function outputs (PR_GetUserRegionsWithParentsAndChildren, NF_GetSitesForUser) into temp tables once at the top of the proc, rather than expanding them inline 15–27 times in a single plan.

This is the highest-leverage next move on the database side. Owner: Cybersoft engineering, with Codiac advisory support.
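To size the opportunity, the two figures above can be multiplied out into a rough daily CPU bill for this one procedure. This is a back-of-envelope sketch derived here, not a number from the tuning docs; the inputs are the ~1 million executions per day, the ~120 ms CPU per call, and the 16 vCPU of the target server.

```python
# Back-of-envelope: daily CPU spent in CMN_Message_GetList_ByUser.
execs_per_day = 1_000_000     # ~1M executions / 24 h (click-around workload)
cpu_ms_each = 120             # ~120 ms CPU per execution

cpu_hours = execs_per_day * cpu_ms_each / 1_000 / 3_600
box_hours = 16 * 24           # one fully busy day on the 16-vCPU target
print(f"~{cpu_hours:.0f} CPU-hours/day in one proc "
      f"(~{cpu_hours / box_hours:.0%} of the 16-vCPU box)")   # ~33 h, ~9 %
```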


3. Platform scaling — VM vs Codiac

Two runs, several days apart, against the same web app on two very different platforms. Both used Python Locust with comparable click-around scripts. Different databases and different days — so direct response-time comparison is unsafe; capacity ceiling comparison is fair.

VM baseline — May 7, qa.primeroedge.co

Target: VM-hosted QA, single fixed environment
Test script: locustfile_loadtest.py (random nav across the app)
Duration: 30 minutes (05:59 → 06:29 UTC)
Ramp: 0 → 250 sustained users at peak
Total requests: 934,009
Total failures: 0
Avg RPS over the test: 518.7
p95 (test-wide aggregate): 790 ms
p99 (test-wide aggregate): 1,700 ms

Source: Locust_2026-05-07-00h59_locustfile_loadtest.py_https___qa.primeroedge.co.html. The May 7 run was never pushed past 400 users: the VMs are fixed, known capacity, and the team had no expectation that 400 could be exceeded.

Codiac capacity test — May 14–15

Target: Codiac internal test cabinet bens-perf, 8 copies (auto-scaled to that level via the burst-schedule platform feature)
Test script: same locustfile.py family (random nav across the app)
Ramp design: step-up, 100 → 500 → 1,000 → 2,000 → 3,500 → 5,000 → 8,000 → 12,000 → 20,000 users, 3 min/step (see the load-shape sketch after this table)
What survived: steps 1–5 ran cleanly with measurements; steps 6+ saw the load generator itself crash repeatedly (Codiac stayed up)
Codiac at 3,500 users: 43 req/sec successful traffic, 17 % failure rate, p50 60 s, p95 81 s; degraded but still serving real customer traffic
Codiac at 5,000+ users: the Locust runner OOM'd before Codiac did. Inconclusive.
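The step ramp above maps naturally onto a Locust load shape. The sketch below is illustrative only: the user levels and the 3-minute step length come from the ramp design, while the class names, host, spawn rate, wait times, and the single placeholder navigation task are assumptions, not the actual perf-cabinet script.

```python
from locust import HttpUser, LoadTestShape, between, task

# Step levels from the ramp design above; 3 minutes per step.
STEP_USERS = [100, 500, 1_000, 2_000, 3_500, 5_000, 8_000, 12_000, 20_000]
STEP_SECONDS = 180


class ClickAroundUser(HttpUser):
    """Stand-in for the real click-around script (illustrative only)."""
    host = "https://example.invalid"   # placeholder; real target is the perf cabinet
    wait_time = between(1, 5)

    @task
    def front_page(self):
        self.client.get("/")           # placeholder navigation


class StepRamp(LoadTestShape):
    """Hold each user level for STEP_SECONDS, then jump to the next."""

    def tick(self):
        step = int(self.get_run_time() // STEP_SECONDS)
        if step >= len(STEP_USERS):
            return None                # end of test
        users = STEP_USERS[step]
        return users, users            # (target user count, spawn rate per second)
```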

The honest comparison. Response-time numbers are not directly comparable across the two tests (different databases, different customer data shapes, different days). What is comparable is the capacity ceiling: the VMs sustained 250 users, while Codiac at 8 copies was still serving real traffic at 3,500 users when the load generator broke. That is a headroom multiplier of at least 14×, with the true ceiling still unknown.

Known limit of this comparison

The 3,500-user run did show significant response-time degradation on the Codiac side. The likely shared bottleneck is the database (same shared database used in both Codiac perf-dev runs) — which is exactly why “Continue database optimization” and “Finish proving managed database services” sit in the Decisions section of the executive briefing. Adding more app copies past 8 won’t help if the database can’t take more callers.

Recordings

4. Engineering investment to date

Source: prior cost analysis (primeroedge-performance-comparison.html, May 8 2026). Reproduced here so this document stands alone.

Rescue work: ~643 commits (76 %), $196,000 billed. Reverse-engineering and stabilising an undocumented IIS / SQL monolith. One-time cost; does not recur for the next customer.
Platform onboarding: ~168 commits (20 %), $51,000 billed. Tenant setup, deploy templates, base scaffolding. The work Codiac does for any new customer.
Training & enablement: ~30 hr (1 %), $7,000 billed. 20 hours live (2 hr/week × 10 weeks) plus prep; ⅓ Codiac platform, ⅔ general cloud & PrimeroEdge operating-model training.
Total: 841 commits + 30 hr, $253,640 billed, across 10 months of active engagement.

Against ~$293K / month of recurring infrastructure savings (cost-savings document, May 8), the $253,640 pays back in ~26 days of operation. After that, the savings are pure.
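A quick arithmetic check of the payback claim, using only the two figures quoted above plus the effort split from the investment table; the 30.4-day average month is an assumption made here.

```python
# Payback check: one-time engineering cost vs recurring monthly savings.
one_time_cost = 253_640      # total billed, section 4
monthly_savings = 293_000    # ~$293K / month (cost-savings document, May 8)

months = one_time_cost / monthly_savings
print(f"payback: {months:.2f} months ~ {months * 30.4:.0f} days")   # ~26 days

# Effort-split cross-check against the investment table.
print(f"rescue work share:         {643 / 841:.0%}")   # ~76 %
print(f"platform onboarding share: {168 / 841:.0%}")   # ~20 %
```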

5. Outstanding items & risks

Excel-import smoke sweep (17 consumers) before Phase 4 goes to production
  Owner: Cybersoft QA + Codiac
  Risk if not done: regression on Excel-import workflows after the 64-bit migration

Cabinet memory limit review post-Phase-4 soak
  Owner: Codiac ops
  Risk if not done: OOM under burst load, with the x64 working set ~30–50 % larger than x86

Cybersoft codebase fix: 100 namespace/directory mismatches
  Owner: Cybersoft engineering
  Risk if not done: build time stays at 2.5 h; otherwise it drops to 30–45 min

CMN_Message_GetList_ByUser structural rewrite
  Owner: Cybersoft engineering, Codiac advisory
  Risk if not done: database CPU stays concentrated on the #1 offender; further RPS lift is gated on this

Tier-2 / Tier-3 OPTION(RECOMPILE) walk
  Owner: Cybersoft engineering
  Risk if not done: compile rate stays elevated on medium-complexity procs

Production pilot (one real customer, both stacks running side by side)
  Owner: Codiac + customer ops
  Risk if not done: findings stay in test; recurring savings not realized

6. Reproducibility

Every number in this document is reproducible from one of: