Performance & scale findings — what we found, what we shipped, what’s next
May 15, 2026
The problem
Your customers can’t be served reliably. PrimeroEdge runs on
big fixed VMs that sit half-idle most of the day, then choke the moment a school
district hits breakfast or lunch. There is no spare capacity, and adding more
takes weeks. The VMs are slow at everything: every cold start is a
double-digit-second wait, every database call piles up on the same overworked
cores, and the only knob anyone can turn is “buy a bigger box.”
This briefing covers ten weeks of work across three fronts and the live
tests that prove it.
The three big wins
1. The app starts up fast now.
PrimeroEdge had 1,781 web pages that compiled on the first
user’s click. We moved that compile work to build time, switched the
runtime to 64-bit, and primed the IIS warm-up path. First request after a
restart now lands in milliseconds, not in tens of seconds. Customers who hit
a page after a restart used to give up and refresh. Now they don’t notice.
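The warm-up piece of this is standard IIS plumbing. A minimal sketch of the kind of configuration involved (illustrative only — not PrimeroEdge’s actual config; the initialization page is a placeholder, and the app pool must also be set to `startMode="AlwaysRunning"` with `preloadEnabled="true"` on the site):

```xml
<!-- Illustrative sketch of IIS Application Initialization, not the
     actual PrimeroEdge configuration. -->
<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <!-- Hit a representative page so compiled views and caches are
         warm before the first real user request arrives. -->
    <add initializationPage="/Default.aspx" />
  </applicationInitialization>
</system.webServer>
```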
2. The database does more with the same hardware.
Two structural fixes — killing forced recompiles on the most-called
procedures, and modernizing the user-permissions function — lifted
sustained throughput from 94 requests/sec to 121 (+29%) and dropped slow-page latency by
37% on the realistic click-around test. Under saturation,
we hit 191 RPS — +79% over
where we started. No new hardware, no schema migration, no app
change. Pure tuning.
3. The platform now scales on demand.
Capacity follows traffic. Codiac can pre-emptively spin up extra copies
before breakfast and lunch — warmed up, fully ready to serve
— and shrink back afterwards. Each new copy is guaranteed live and
responding before it ever takes a single user request, so customers never
hit a cold or half-started instance. No more paying for peak capacity
24 hours a day, and no more being short of it at the rush. These tests
proved this end-to-end on live infrastructure.
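The pre-warm-then-shrink pattern boils down to a replica schedule keyed to the school day. A minimal sketch in Python (illustrative only — the replica counts and rush windows are assumptions, and this is not Codiac’s actual API):

```python
# Illustrative only -- not Codiac's actual API. The idea: replica count
# follows the school-day demand curve, pre-warming shortly BEFORE each
# meal rush rather than reacting after it starts.

BASELINE = 3             # hypothetical overnight replica count
RUSH = 8                 # hypothetical rush replica count
# (start_hour, end_hour) windows, padded to begin before the rush
BREAKFAST = (6.5, 9.5)   # pre-warm at 6:30 for a ~7:00 breakfast rush
LUNCH = (10.5, 14.0)     # pre-warm at 10:30 for an ~11:00 lunch rush

def desired_replicas(hour: float) -> int:
    """Replica target for a given hour of the day (0-24)."""
    for start, end in (BREAKFAST, LUNCH):
        if start <= hour < end:
            return RUSH
    return BASELINE

if __name__ == "__main__":
    for h in (2, 7, 10, 12, 15):
        print(f"{h:>2}:00 -> {desired_replicas(h)} replicas")
```

Each new replica would still be health-checked before taking traffic, which is what guarantees users never hit a cold instance.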
The proof (these tests)
The only true VM-vs-Codiac comparison comes from two runs against the
same app, several days apart, on two very different platforms. Both used
the same load tool driving the same script.
Run captured
  VMs (the system today):        May 7, 30-min run against the production VM-hosted QA site
  Codiac (8 copies, auto-scaled): May 14–15, ramped run against the Codiac internal test cabinet

Peak concurrent users sustained
  VMs:    250
  Codiac: 3,500+

Capacity model
  VMs:    Fixed. Buying more = ordering bigger VMs. Weeks.
  Codiac: Pre-warmed for breakfast/lunch. Auto-grows in seconds.

Cost shape
  VMs:    Pay for peak, 24×7
  Codiac: Pay for what’s in use
The capacity ceiling. The honest comparison is not
response time against response time — the two tests hit different
databases on different days. The honest comparison is how many
concurrent users each platform could carry before the platform
itself became the bottleneck.
VM peak (May 7 test):  250 users sustained, then the test ended
Codiac:                still serving at 3,500+ users; ceiling not yet found
Headroom multiplier:   14×+ at minimum, against the same workload
About the “+”: we don’t know exactly where Codiac
breaks because our load generator broke first. The traffic-simulator
pods crashed trying to maintain 5,000+ simulated users; Codiac itself was
still answering real requests when they fell over. A future test will use a
distributed simulator that can push further.
[Capacity-over-time chart]
Yellow line — Codiac capacity, rising and falling with demand.
Green line — what a fixed VM has to be sized for, 24×7.
Codiac sits below the green line most of the day — that's
cost saved. It spikes above the green line at the breakfast and
lunch rushes — that's capacity the VMs would never have had.
Supporting videos:
- Codiac platform under load — head-to-head.mov: capacity rises with traffic; copies pre-warm before the rush.
- Codiac operations view — reports.mov: live events, replica counts, conditions, logs — the view the PrimeroEdge operations team will use day-to-day.
VM baseline Locust report: Locust_2026-05-07-00h59_locustfile_loadtest.py_https___qa.primeroedge.co.html
— the chart in that file is the “250 users, 790 ms p95”
reference cited above.
The headline. The VM model proved it can serve roughly
250 concurrent users on a clean day. To go beyond that you order more
hardware and wait weeks. Codiac took at least 14× that load without
finding its ceiling, and grew its capacity in seconds — pre-warmed
before breakfast and lunch, guaranteed responsive before routing a single
user, and shrunk back when the rush ended. Customers don’t see a
difference at the peak; the platform stops paying for capacity that
isn’t being used.
What’s next
Three things, in order:
1. Promote through your release process (window: Cybersoft-led)
   Everything in this briefing is shipped and proven in our internal
   test environment. Codiac’s recommendation is that it’s ready for
   Cybersoft to put through your normal release validation — QA, UAT,
   and the production cutover criteria you already use for any major
   change. The VMs stay running alongside until your team is satisfied.

2. Finish the database work (window: 4–6 weeks)
   Two specific procedures (the message list and the user-permissions
   chain) still dominate database CPU. We have a plan for both. Finishing
   them buys another sizable lift on the same database hardware.

3. Roll the model out (window: ongoing)
   Every additional customer onboarded to Codiac multiplies the savings
   below and tightens reliability. The platform was built to onboard the next
   customer in days, not months.
What this work to date includes
So you know the proof-of-concept covered the full surface:
- App startup. Four separately deliverable changes, all
  shipped and built into the container image. No customer-visible regressions.
- Database tuning. 34 procedures cleaned up; one hot
  function modernized; one attempted change reverted (it didn’t pay back).
  Detailed numbers in the supporting document.
- Platform. Auto-scaling, scheduled bursts (e.g.
  breakfast/lunch), live-traffic dashboards, and a 3,500-user capacity test
  that did not find Codiac’s ceiling.
- Customer team training. Live sessions delivered over
  10 weeks, covering the platform and the deployment changes that
  PrimeroEdge’s engineers will own day-to-day.
Where the time went. Roughly three quarters
of the engineering effort over the engagement went into untangling and
stabilizing the legacy IIS & SQL setup — reverse-engineering pieces that
weren’t documented, fixing things that had been broken for years, and
standing up build/deploy hygiene that didn’t exist. About one
quarter was straight Codiac platform onboarding and infrastructure
work. The rescue portion is one-time. The next customer onboarded onto the
same platform skips it.
Decisions to be made
Three calls from PrimeroEdge leadership unlock the next phase.
1. Pre-compile of the web app — optional, not required
The web app starts up cleanly today with a lighter setup: compile-on-warm
plus our automatic warm-up step. That’s the path of least resistance and
it’s already running. Full pre-compile is a stronger guarantee but adds
a 2-hour build step we don’t currently need for day-to-day customer
builds.
Decision: stay with warm-up-only until PrimeroEdge is
ready for a production build. When that day comes, we flip on
pre-compile for the production pipeline.
In parallel: shrink the 2-hour pre-compile window.
Codiac knows what’s slowing it down (single-threaded compile on
~1,800 view files; about 100 file/namespace mismatches in the source
that block the faster batch mode) and the path to ~30–45 minutes is
known. Whether to invest that effort now or later is a PrimeroEdge call.
2. Continue database optimization
The work already done lifted database throughput from 94 to
121 requests/sec and cut slow-page latency by 37%. There’s
more to take: one specific procedure (the message-list query) and a
second cleanup pass on the user-permissions function still dominate
database CPU. Both have a plan.
Decision: green-light continued tuning. Each pass is
one to two weeks of work and is independently shippable. Owner:
Cybersoft engineering, with Codiac advisory.
3. Finish proving managed database services
Right now the database still runs on a fixed VM-style footprint. Moving
it to a managed cloud service (Azure SQL Database or similar) would
remove the “buy a bigger box” capacity model on the database side
too — same shape of win as the app side. Codiac has started this proof
and needs a green light to finish it on a representative customer
workload.
Decision: approve the managed-service proof, or
explicitly defer it. Without a decision either way it stays half-done.
Cost & consumption
This is the part the VMs can’t fix.
The legacy model pays for peak capacity, all
day, every day. Even when no one is logged in. Even overnight.
Even on weekends. Even on holidays. The bill is the same.
The Codiac model pays for capacity only when it’s in use.
Three copies overnight; eight at lunch; back to three by 2 pm. The bill
follows the demand curve.
Recurring savings:   ~$293K per month, measured against today’s VM footprint
Engagement to date:  ~$254K one-time, across ten weeks of work
Payback:             ~26 days, after which it’s pure savings, every month
The recurring number scales with customer count. Onboarding the next
customer onto the platform adds incremental cloud spend that matches their
actual usage — not another full set of always-on VMs.
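The cost shape and the payback both reduce to simple arithmetic. A minimal sketch (the replica counts and six rush hours per day are illustrative assumptions; the dollar figures are the briefing’s own):

```python
# Illustrative arithmetic only -- replica counts and the 30-day month
# are assumptions for this sketch; the ~$293K/month recurring figure in
# the briefing is measured, not derived from this model.

PEAK = 8        # fixed model must be sized for peak, 24x7
BASELINE = 3    # autoscaled model idles at a small baseline
RUSH_HOURS = 6  # hypothetical total breakfast+lunch rush hours per day

fixed_replica_hours = PEAK * 24
scaled_replica_hours = RUSH_HOURS * PEAK + (24 - RUSH_HOURS) * BASELINE
savings_fraction = 1 - scaled_replica_hours / fixed_replica_hours

# Payback on the one-time engagement, from the briefing's own numbers:
one_time = 254_000         # ~$254K engagement to date
monthly_savings = 293_000  # ~$293K/month recurring savings
payback_days = one_time / monthly_savings * 30

print(f"fixed:  {fixed_replica_hours} replica-hours/day")
print(f"scaled: {scaled_replica_hours} replica-hours/day "
      f"({savings_fraction:.0%} fewer)")
print(f"payback: ~{payback_days:.0f} days")
```

Under these assumed replica counts the autoscaled shape buys back nearly half the replica-hours, and the payback lands at roughly 26 days, consistent with the figure above.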
Plain English. You stop paying for empty hardware. You
stop running out of capacity at lunch. You stop waiting weeks to add more.
The platform handles all three at once, and the payback on this entire
proof-of-concept clears in under a month of running.