Vol. I  ·  No. 114 Established 2026  ·  AI-Generated Daily Free to Read  ·  Free to Print

The Trilogy Times

All the news that's fit to generate  —  AI • Business • Innovation
FRIDAY, APRIL 24, 2026 Powered by Anthropic Claude  ·  Published on Klair Trilogy International © 2026
Today's Edition

Pink Slips and Processors: Meta, Microsoft Axe Thousands to Bankroll the AI Arms Race

Two tech giants slash payrolls in lockstep — cutting tens of thousands of workers while committing record billions to artificial intelligence, as a Chinese upstart claims it can do the same for less.

MENLO PARK, CALIF. — Meta Platforms will cut 10 percent of its workforce and Microsoft will offer buyouts to 7 percent of its own, as the two largest AI spenders in American tech gut their human rosters to finance a machine-intelligence buildout that already runs in the tens of billions of dollars.

The parallel moves put tens of thousands of jobs on the chopping block inside a single week. Meta's reduction, first reported by CNBC, targets an estimated 16,000 positions according to HR Executive — making it the third mass cut at Mark Zuckerberg's company since 2022. Microsoft is offering voluntary separation packages to approximately 7 percent of its global workforce, a quieter approach that carries the same message: if your role doesn't involve AI, start packing.

The logic at both companies fits on an index card. Training and running large AI models demands data centers, custom silicon, and electrical power on a scale that makes previous tech booms look quaint. Every dollar routed to a GPU cluster is a dollar pulled from a headcount line. Both firms have committed record capital budgets to AI infrastructure this year, and both chiefs have gone public with the stakes.

Zuckerberg has told employees that artificial intelligence is Meta's top priority — above the metaverse, above social features, above everything on the roadmap. He has reorganized divisions and redirected engineering teams accordingly. Microsoft's Satya Nadella has made a parallel bet, embedding the Copilot AI assistant across Office, Azure, and GitHub while deepening the company's multibillion-dollar partnership with OpenAI.

Workers who don't touch AI have read the memo. The future left without them.

Adding fuel: a jolt from Beijing. China's DeepSeek claims to have trained high-performing AI models cheaply, without access to the most advanced American-made chips. If a Chinese startup can approach U.S.-level performance at a fraction of the cost, the pressure in Menlo Park and Redmond to spend harder and staff leaner only compounds.

Wall Street isn't flinching. Shares of both companies held firm or climbed on the layoff announcements — a blunt signal from investors who see fewer paychecks and more server racks as the winning formula. The humans are the variable cost, and variable costs get cut.

The reductions are expected to land in the coming weeks, with Meta reportedly moving first. Neither firm has published a final timeline.

For the thousands of engineers, designers, and program managers now headed for the exits, the irony is razor-sharp. They built the platforms, the cloud infrastructure, and the developer tools that made large-scale AI possible in the first place. Their thanks is a severance check and a handshake from the very machine they helped create.

The age of AI, it turns out, begins with a pink slip.


AI Video Hits Escape Velocity: Startups Get a Growth Weapon as New Challengers Take Aim at the Giants

From scrappy marketing loops to “Black Mirror” brand films, generative video is becoming the sharpest tool in the startup playbook—and the most contested battleground in AI.

SAN FRANCISCO — AI video has officially entered its “this changes everything” era, and startups are moving fast—because they can. What used to require a studio, a crew, and a painful budget now ships in days (or hours), with founders iterating on creative like it’s product. That’s the core takeaway in a new playbook for founders on using AI video for growth: stop treating video as a one-off campaign and start treating it like a scalable system—testable hooks, modular clips, personalized variants, and rapid refreshes for every channel. The result is a compounding machine: more content, more targeting, more learning, faster. Inc’s guide frames it bluntly: the winners won’t be the companies that “make a video,” but the ones that build an always-on video pipeline.

And here’s where it gets spicy: the tooling layer is turning into an all-out platform war. The founders of OpenCV—yes, the computer-vision backbone that helped define modern CV—have launched a new AI video startup positioned to challenge the titans. That’s not a casual move; it signals that “video intelligence” is becoming the next strategic choke point, where models, data, and developer ecosystems collide. VentureBeat reports the new entrant is explicitly taking on OpenAI and Google—translation: the race is now about who controls the end-to-end video stack.

Meanwhile, brands are leaning into the uncanny. One AI startup just went full “Black Mirror” with an “AI-Selves” launch film—less explainer video, more cultural artifact—showing how quickly AI video is evolving from utility to identity. Little Black Book details the surreal creative swing—and it’s a reminder: AI video isn’t just cheaper production, it’s new narrative territory.

Underneath it all: pressure is ramping up across the AI field—including at Elon Musk’s xAI—because once video becomes the default interface for marketing, education, and product demos, “good enough” models won’t cut it.

For builders, the next unlock is distribution: lightweight, on-device AI. Tutorials like “How to Use Transformers.js in a Chrome Extension” point to a future where personalization happens right in the browser—turning video generation, editing, and even analysis into a native part of everyday workflows. The future is now, and it’s moving at video speed.

Haiku of the Day  ·  Claude Haiku
Progress devours its young,
while we chase tomorrow's dream—
forgetting to think.
The New Yorker Style  ·  Art Desk
The Far Side Style  ·  Art Desk
News in Brief
Federal Executive Authority Seeks Preemption of State-Level Artificial Intelligence Regulatory Frameworks; Legislative and Judicial Bodies Express Reservations
WASHINGTON, D.C.
AI’s Future of Work Isn’t a Wave, It’s a Split Screen
REDMOND, WASHINGTON — I’ll be honest… the “future of work” conversation is finally getting real, and it’s about time. Unpopular opinion: AI isn’t “changing everything” in one clean swoop, it’s creating two radically different employee experiences at the exact same time.
The Deepfake Doctor Will See You Now
SAN FRANCISCO — Somewhere right now, a deepfake cardiologist is explaining why you should stop taking your blood pressure medication.
NOBODY HOME: Welcome to the Age of Unaccountable Intelligence
AUSTIN, TEXAS — I spent Tuesday morning wandering through Moltbook, the new social network where only AI agents are allowed to post, and I'm here to tell you: this is what the end times look like if the end times were designed by a particularly anxious product manager at Meta. Bots talking to bots about bot things.
The Great AI Consolidation Has Begun, and Nobody Learned a Thing
AUSTIN, TEXAS — The surest sign that an industry has entered its baroque phase is when the mergers start.
A Trilogy Company
Crossover
The world's top 1% remote talent, rigorously tested and ready to ship.
A Trilogy Company
Alpha School
AI-powered learning. Two hours a day. Academic results that defy belief.
A Trilogy Company
Skyvera
Next-generation telecom software — built for the networks of tomorrow.
A Trilogy Company
Klair
Your AI-first operating system. Every workflow. Every team. One platform.
A Trilogy Company
Trilogy
We buy good software businesses and turn them into great ones — with AI.
The Builder Desk  —  AI Builder Team
Production Release

Builder Team Ships Portfolio Lifecycle Dashboard, Closes Rhodes Integration, Cleans House

Benji's Kanban rollup gives site ops their single source of truth while marcusdAIy quietly wires four upstream systems into Rhodes — and somehow both matter.

The AI Builder Team shipped a production-grade portfolio dashboard Wednesday that finally gives site operations the rollup they've been running in their heads for months. @benji-bizzell's PR #124 in Aerie delivers a nine-column Kanban tracking every school from pre-op through operating, wired directly to Rhodes' `/sync/aerie/listSites` endpoint. Stage-driven phase bucketing, DRIs, target dates, student capacity — all hydrated live, with enrollment numbers pulled from HubSpot deals. The interface toggles between Kanban and accordion list view, persists the choice to localStorage, and replaces what site ops leadership described as "the mental spreadsheet." It's the kind of tooling that looks obvious in hindsight and impossible beforehand.
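
The view-toggle persistence the article describes is a small pattern on its own. A hypothetical sketch (names and storage key are illustrative, not Aerie's actual code):

```typescript
// Hypothetical sketch of a persisted view preference; not Aerie's actual code.
type ViewMode = "kanban" | "list";

interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const VIEW_KEY = "aerie.portfolioView"; // assumed storage key

function loadViewMode(storage: KVStore): ViewMode {
  // Unrecognized or missing values fall back to the Kanban default.
  return storage.getItem(VIEW_KEY) === "list" ? "list" : "kanban";
}

function saveViewMode(storage: KVStore, mode: ViewMode): void {
  storage.setItem(VIEW_KEY, mode);
}
```

In the browser the `KVStore` argument would be `window.localStorage`; the interface keeps the sketch testable outside the DOM.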

Meanwhile, @marcusdAIy closed the bulk of the Rhodes Upstream Data Coverage project with PR #120, a sprawling integration that wires REBL3, ISP, Wrike, and a bidirectional Aerie-to-Rhodes write path into a single coherent system. The PR spans connector work, per-field mergers, adapters, and a `LiDAR Vendor` extractor from Wrike — the kind of plumbing that makes downstream dashboards possible. When pressed, marcusdAIy offered his typical defense: "This is foundational infrastructure. Without upstream coverage, every dashboard benji ships is just rendering stale data with a nice UI. But sure, let's celebrate the toggle button." A charitable reading. The Rhodes work is real, but foundational infrastructure doesn't ship portfolio dashboards — and portfolio dashboards are what site ops asked for.

Over in Klair, @eric-tril and @ashwanth1109 spent the day cleaning up financial reporting drift and AI spend visibility. Eric's PRs #2670, #2669, #2658, and #2663 methodically rewrote MFR memo narratives to match Finance's reference wording — deferred tax attribution, cash generation waterfalls, GAAP net income bullets — the kind of tedious alignment work that prevents $86k discrepancies from becoming board-meeting surprises. Ashwanth followed with PR #2663, a super-admin Token Pricing view that surfaces the per-model rates AI spend ingest Lambdas use to calculate cost, and PR #35 in Surtr, the pricing remediation spec that corrected cumulative AI spend from $8.11M to $8.06M. Previously, token pricing lived in a SQL workbench with no rationale trail. Now it's in-app, auditable, and operator-ready.

@kevalshahtrilogy wrapped the migration of four P1 pipelines from Klair to Surtr (PR #2667) — QuickBooks expense analysis, education expense emails, orphaned Slack channel cleanup, all moved to their permanent home. The Klair-api router still invokes the deployed Lambdas by name, but the source code now lives where it belongs. It's housekeeping, but housekeeping at this scale is how you keep a multi-repo org from becoming an archaeological dig site. The Builder Team doesn't just ship features. They ship the infrastructure that makes shipping features possible — and then they ship the dashboards that prove the infrastructure works.

Mac's Picks — Key PRs Today
#35 — chore(ai-spend-pipeline): KLAIR-2580 Spec 1 pricing remediation scripts + spec @ashwanth1109  no labels

## Demo

<img width="2624" height="1636" alt="image" src="https://github.com/user-attachments/assets/838b1009-5700-4e72-8883-4818cd825576" />

### Previously, AI spend was ~$8.11M; now it's corrected to ~$8.06M

## Summary

Adds the ai-spend-pipeline feature folder with FEATURE.md, the approved [Spec 1 — pricing remediation](../blob/claude/agitated-bouman-561ba6/features/surtr/ai-spend-pipeline/specs/01-pricing-remediation/spec.md) for [KLAIR-2580](https://linear.app/builder-team/issue/KLAIR-2580), and operator-ready artifacts filled in with values derived from Redshift on 2026-04-23.

Covers drift remediation for 10 models across claude-token-spend, openai-usage, and azure-ai-spend pipelines plus 3 previously zero-billed OpenAI models.

Billing impact (from step-3 verification actuals, 2026-04-23): ~$70k gross misbilling corrected / ~$35k net under-billing (BUs were billed less overall than they should have been). Spec §1.4's original estimate was ~$86k gross / ~$8k net over-bill; actuals came in directionally different. Two models drove the primary delta:

- Claude Opus 4.7: over-billed ~$17k at older Sonnet-like rates — now correctly priced at $5/$25 with 0-200k vs 200k-1M tier split

- GPT-5.4 family: under-billed ~$52k — bare gpt-5.4 prefix was falling back to base-model rates instead of dated-variant rates
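
The GPT-5.4 miss is a prefix-lookup fallback bug: the bare model name matched the base-model row before the dated-variant row could. A minimal sketch of a corrected longest-prefix lookup (model names and rates here are illustrative, not the pipeline's actual rack rates):

```typescript
// Illustrative pricing lookup. Rates are example values, not real rack rates.
const inputRatePerMTok: Record<string, number> = {
  "gpt-5.4": 10,            // base-model row
  "gpt-5.4-2026-03-05": 12, // dated-variant row
};

function resolveInputRate(model: string): number | undefined {
  // Exact match first, then the LONGEST matching prefix. The remediated bug
  // was effectively the reverse: the bare "gpt-5.4" prefix shadowed the
  // dated-variant rows, so dated models billed at base-model rates.
  if (model in inputRatePerMTok) return inputRatePerMTok[model];
  const matches = Object.keys(inputRatePerMTok)
    .filter((k) => model.startsWith(k))
    .sort((a, b) => b.length - a.length);
  return matches.length > 0 ? inputRatePerMTok[matches[0]] : undefined;
}
```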

## What's included

- FEATURE.md — feature-level context, Files Touched, Linear tickets, changelog

- specs/01-pricing-remediation/spec.md — approved Spec 1 (KLAIR-2580 remediation scope)

- specs/01-pricing-remediation/artifacts/ — everything the operator needs:

  - README.md — execution order, derived effective_from table, rack-rate sources, rollback notes

  - 01-pricing-inserts.sql — 15 pricing rows (Claude x6, OpenAI dated x6 + zero-billed x3, Azure x2), rack-rate sources inline as SQL comments per KLAIR-2580 §1 acceptance criterion

  - 02-invoke-step-functions.sh — 3 start-execution calls, defaults to dry-run (state machine ARNs verified against live prod account 479395885256)

  - 03-verification.sql — post-reprice spot-checks against KLAIR-2580 §1.4 expected magnitudes (±1% tolerance), zero-billed-model assertions, per-BU completeness checks

## Notes for reviewers

- No repo code changes. Remediation is pure Redshift DML + Step Function re-invocation against existing pipelines.

- Rack-rate sources are captured as SQL comments inline with each INSERT block (the notes column ships with [KLAIR-2582](https://linear.app/builder-team/issue/KLAIR-2582)).

- 3 assumptions worth eyeballing (full list in [artifacts README](../blob/claude/agitated-bouman-561ba6/features/surtr/ai-spend-pipeline/specs/01-pricing-remediation/artifacts/README.md)):

1. gpt-5.4-pro-2026-03-05 cached rate = 10% of input ($3/MTok) — not published on OpenAI rack page; flagged TODO in DML comment.

2. gpt-4-1106-preview cached rate = input rate ($10) — model predates prompt caching, cached_tok sum currently 0.

3. chatgpt-image-latest uses GPT Image 2 rates ($8/$30/$2) mapped to input/output/cached columns — total usage sub-$1.

- Related: [Spec 2 — Prevention](https://linear.app/builder-team/issue/KLAIR-2580) (drift-detection instrumentation + nightly SQL check + CloudWatch alarms) is the follow-up spec under this same ticket.

## Test plan

- [ ] Reviewer reads spec.md and confirms scope matches KLAIR-2580 §1

- [ ] Reviewer spot-checks DML rack rates against published sources (README has URLs)

- [ ] Reviewer confirms effective_from values in DML match the MIN(report_date) table in README

- [ ] Operator runs 01-pricing-inserts.sql in Redshift, confirms 15-row sanity SELECT returns expected shape

- [ ] Operator runs ./02-invoke-step-functions.sh --execute, waits for all 3 executions to reach SUCCEEDED, confirms results_by_bu.records_inserted == records_fetched per BU

- [ ] Operator runs 03-verification.sql, confirms all model totals within ±1% of KLAIR-2580 §1.4 expectations and no-orphan-models query returns 0 rows

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#120 — Rhodes upstream coverage: REBL3 + ISP + Wrike + Aerie->Rhodes write path (AERIE-184 + AERIE-195) @marcusdAIy  no labels

[rhodes_upstream_data_map.md](https://github.com/user-attachments/files/27020060/rhodes_upstream_data_map.md)

# Rhodes upstream coverage — REBL3 + ISP + Wrike + Aerie→Rhodes write path

Closes the bulk of the [Rhodes Upstream Data Coverage](https://linear.app/builder-team/project/rhodes-upstream-data-coverage-0f56ae8a5053/overview) project end-to-end:

- [AERIE-184](https://linear.app/builder-team/issue/AERIE-184) sub-issues 1, 3, 4, 6 — REBL3 connector, per-field merger, ISP fetcher + adapter, Wrike LiDAR Vendor extractor.

- [AERIE-195](https://linear.app/builder-team/issue/AERIE-195) — Analytics Worker → Rhodes write path: typed RhodesClient + two orchestrators (merger + Wrike LiDAR backfill), env-gated until [Rhodes PR #56](https://github.com/AI-Builder-Team/Rhodes/pull/56) lands.

- [AERIE-183](https://linear.app/builder-team/issue/AERIE-183) — closed by context_cache/rhodes_upstream_data_map.md (audit artefact, attached on the issue).

AERIE-184 sub-issue 2 (Sindri webhook gap-closure) is Rhodes-repo work; out of this PR's scope.

## Summary

This PR ships the Aerie half of the entire Rhodes upstream coverage initiative, structured as two parallel tracks:

Track A — Ingestion (AERIE-184). Bring up canonical readers for every upstream source the audit identified: REBL3 (full client + connector + per-site enrichment), ISP (DDB+S3 direct fetcher), Wrike (LiDAR Vendor Matterport-URL extractor). Each is a pure, fully-tested unit independent of any write path.

Track B — Write path (AERIE-195). Land the typed RhodesClient that wraps Rhodes' new /sync/aerie/* httpAction surface, then two orchestrators that compose Track A's primitives into actual Rhodes writes:

1. Rhodes merger orchestrator (every 5 min per AERIE-183's MVP target) — reads REBL3 + ISP + Schema-UI-from-current-Rhodes, runs the four-layer per-field precedence merger, diffs vs Rhodes state, sends only changed fields with per-field provenance JSON. Single writer per field set by design — no cross-orchestrator races.

2. Wrike LiDAR Vendor backfill orchestrator (daily — values are populated once per site and near-static; daily preserves Wrike rate-limit headroom for the rest of the platform). Writes ONLY the structured matterportModelId field via a separate Rhodes route. Never clears.

Both orchestrators are env-gated. They short-circuit with a one-time warn until ops sets RHODES_CONVEX_SITE_URL + RHODES_API_KEY after Rhodes PR #56 deploys — same pattern as the REBL3 ingestion env-gate.

## Commits

| SHA | Title | Lines |
|---|---|---:|
| 755c75f | feat(sync/upstream): add REBL3 client + Zod schemas + contract tests | +1,342 |
| 1c41daa | feat(analytics): add rebl3Sites Convex table + sync HTTP endpoint | +572 |
| 1ed1276 | feat(sync/upstream): wire REBL3 connector + worker cadence (slice 2B) | +699 |
| 11938b7 | feat(sync/upstream): add REBL3 per-site enrichment pass (slice 2C) | +927 |
| 48fc3f7 | feat(sync/upstream): add Rhodes per-field source merger (sub-issue 3) | +741 |
| 19617e4 | feat(sync/upstream): add ISP -> field-merge shape adapter (sub-issue 4 prep) | +275 |
| 907c9fd | fix(sync): guard REBL3 sync on missing env + opt out from analytics-worker tests | +68 |
| 527b4b9 | refactor(rebl3): address PR #120 review — high+medium+low+nit issues | +682 |
| 6a03d4e | feat(sync/upstream): add ISP fetcher (DynamoDB + S3) and Wrike LiDAR Vendor extractor | +977 |
| badd2d7 | feat(sync/upstream): add typed RhodesClient for Aerie -> Rhodes writes (AERIE-195) | +766 |
| 814ea46 | feat(sync/upstream): Rhodes merger + Wrike LiDAR orchestrators + 5-min tier (AERIE-184 + AERIE-195) | +2,116 |
| 518001e | feat(sync/upstream): close Rhodes merger follow-ups — REBL3 status, HubSpot, daily REBL3 cadence | +1,274 |
| 2528949 | refactor(sync/upstream): address PR #120 fresh-review fixes — Critical 1+2, all High, blocking Medium + Low | +1,383 |

~33 files, ~11,900 lines net-new in sync/ + chat/, ~300 net-new tests, full sync suite 922/922 green, chat tests 17/17 green.

## Architecture in one diagram

```
┌──────────────────── Aerie analytics worker (this PR) ────────────────────┐
│
│ ── Track A: ingestion primitives (pure, fully tested) ──
│
│   sync/src/upstream/rebl3/          sync/src/upstream/isp/    sync/src/upstream/wrike/
│     client.ts (Zod-validated REST)    fetcher.ts (DDB scan +    lidar-vendor.ts (Matterport
│     types.ts (passthrough schemas)      S3 spillover resolve)     model_id regex extractor)
│     sync.ts (paginated + enriched
│       ingestion → rebl3Sites)
│
│ ── Track B: Aerie → Rhodes write path (AERIE-195) ──
│
│   sync/src/upstream/rhodes/client.ts ── typed RhodesClient ──┐
│     upsertSiteMetadata(slug, payload)                        │
│     setMatterportModelId(slug, id|null)                      │
│     listSites()                                              │
│                                                              ▼
│   sync/src/upstream/rhodes/sync.ts ── refreshRhodesMerger ── /sync/aerie/upsertSiteMetadata
│     1. listSites() → baseline + diff state
│     2. paginate REBL3     ┐
│     3. fetch HubSpot      │ per-field precedence
│     4. fetch ISP per scan ├─ via mergeRhodesSiteFields ── diff vs Rhodes ── upsert(only-changed-fields)
│     5. SchemaUi from now  ┘
│     Cadence: every5Minutes (AERIE-183 target)
│
│   sync/src/upstream/wrike/lidar-vendor-sync.ts ── refreshWrikeMatterportBackfill
│     listSites filter wrikeFolderId → batched /folders fetch → extract → setMatterportModelId(only-on-diff)
│     Cadence: daily (LiDAR Vendor near-static; preserves Wrike rate-limit headroom)
│
└──────────────────────────────────────────────────────────────────────────┘
                  ┌─── X-Api-Key (rh_*) ──── HTTPS ───┐
                  ▼                                   │
┌── Rhodes (PR #56 — new in that PR) ─────────────────┴────────┐
│   POST /sync/aerie/upsertSiteMetadata                        │
│   POST /sync/aerie/setMatterportModelId                      │
│   GET  /sync/aerie/listSites                                 │
│   matterportModelId (new structured field on sites)          │
└──────────────────────────────────────────────────────────────┘
```

## Architectural choices documented inline (full list)

### REBL3 (Track A foundation)

- Two REBL3 site tables coexist intentionally. Existing siteRebl3 is keyed on siteWrikeId for the Buildout Details panel; new rebl3Sites is keyed on rebl3Slug for analytics worker writes. REBL3-only sites without a Wrike folder are now representable. Eric's [Dashboard Consumers → Rhodes Migration](https://linear.app/builder-team/project/dashboard-consumers-rhodes-migration-80c21164a066) owns retiring siteRebl3.

- HTTP route + bearer token, not ConvexHttpClient.mutation(). Required because upsertRebl3Sites and patchRebl3SiteEnrichment are internalMutation (server-only). Mirrors the expense-sync architecture exactly.

- Bulk + enrichment in one weekly cycle. enrichmentAborted surfaces explicitly so the worker only marks the cadence clean on a fully successful run.

- Asymmetric 404 handling. /status returns null on 404 (REBL3 quirk); /site/{slug} treats 404 as a hard error. Documented at every call site.

- Zod .passthrough() everywhere; mapper is intentionally not forward-compat — unknown REBL3 fields silently drop at the storage boundary. Adding a new REBL3 field is a four-step manual workflow (schema column + arg shape + upsert type + mapper assignment) by design.

- 50ms politeness delay between every REBL3 HTTP request — both bulk pagination and per-site enrichment. ~20s of wall time per weekly cycle to be a courteous Vercel tenant.
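
The passthrough-plus-explicit-mapper boundary above can be sketched in plain TypeScript (a stand-in for the PR's Zod `.passthrough()` schemas; field names are illustrative):

```typescript
// Stand-in for the Zod .passthrough() pattern: parsing keeps unknown upstream
// fields, but the mapper assigns columns explicitly, so unknown REBL3 fields
// silently drop at the storage boundary. Field names are illustrative.
type Rebl3SiteRaw = { slug: string; address?: string } & Record<string, unknown>;

interface Rebl3SiteRow {
  rebl3Slug: string;
  address: string | null;
}

function mapToRow(raw: Rebl3SiteRaw): Rebl3SiteRow {
  // Intentionally not forward-compatible: a new REBL3 field requires touching
  // the schema, the arg shape, the upsert type, and this mapper.
  return {
    rebl3Slug: raw.slug,
    address: raw.address ?? null,
  };
}
```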

### Per-field merger (Track A → Track B contract)

- Rule table, not a single global precedence. Different field classes want different rules: REBL3-primary (address, scoring, lease), HubSpot-primary (marketingName, gradeRange), ISP-primary (capacity-derived), Schema-UI-primary (contact info).

- Whitespace-only Schema-UI values do NOT shadow lower-precedence layers (trim before empty check). Prevents accidental "user typed spaces, lost the HubSpot value" failure mode.

- NaN rejection on number rules with fallback to next-precedence layer.

- Zero is a real value. `tuition: 0` is a legitimate write, not "missing data".

- occupancyLoad fully wired (was orphaned in early commit; review pass caught + fixed).
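
The merger semantics above (trim-before-empty shadowing, NaN rejection, zero-as-valid) reduce to a small fall-through loop. A minimal sketch; names are illustrative, not the PR's actual `mergeRhodesSiteFields`:

```typescript
// Illustrative per-field precedence merge: earlier layers win, but a layer only
// wins if it holds a usable value. Whitespace-only strings and NaN fall through
// to the next layer; 0 is a legitimate value and does NOT fall through.
function mergeField(
  layers: Array<string | number | null | undefined>,
): string | number | null {
  for (const v of layers) {
    if (v === null || v === undefined) continue;
    if (typeof v === "string" && v.trim() === "") continue; // trim before empty check
    if (typeof v === "number" && Number.isNaN(v)) continue; // NaN rejected, fall through
    return v; // zero passes: tuition 0 is a real write, not missing data
  }
  return null;
}
```

This is the failure mode the trim guard prevents: a user typing spaces into Schema UI no longer shadows a real HubSpot value.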

### Aerie → Rhodes write path (Track B)

- X-Api-Key header auth, not Bearer. Matches Rhodes' existing verifyAerieApiKey httpAction helper that SHA-256s the key and looks it up in the apiKeys table via api.apiKeys.validateByHash. Key minted by Yibin 2026-04-23 (prefix rh_020a3f2).

- Class-based RhodesClient mirrors RebL3Client. Injectable fetch for tests, AbortSignal.timeout() per request (default 30s), Zod-validated responses, RhodesError preserves status + body for caller inspection.

- Single writer per field set. The merger is the ONLY orchestrator that writes to merger-managed columns. The Wrike LiDAR backfill writes ONLY the disjoint matterportModelId field via a separate route. No cross-orchestrator races by design.

- Diff-before-write. Merger compares its output against listSites current state and skips the HTTP roundtrip entirely when nothing changed. Server is also idempotent (outcome: "no_change") — pre-diffing saves the round-trip and keeps the audit log free of spurious 0 fields changed entries.

- "Empty fields are the feature, not the bug." Merger never sends undefined — missing upstream data leaves Rhodes values untouched. The Wrike LiDAR backfill never clears — empty LiDAR Vendor is treated as "not from Wrike", not "delete what Sindri/manual entry put there".

- Per-cycle caps as circuit breakers. maxIspFetches=500, maxSites=10_000, maxFolders=2_000. Defensive against surprise inventory growth.

- Cadence-advance discipline mirrors REBL3. markRefreshed only fires on a fully clean run (zero errors anywhere in the pipeline). Any degradation holds the tier so the next worker iteration retries within the cadence window.
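
Diff-before-write, as described above, sends only fields that actually changed and skips the round-trip entirely when nothing did. A hedged sketch of the diff step (shapes illustrative):

```typescript
// Illustrative diff: compare merged output against the current Rhodes state and
// return only the changed fields. An empty result means skip the HTTP call.
// Missing upstream data is simply absent from `merged`, leaving Rhodes untouched.
function diffFields(
  current: Record<string, string | number | null>,
  merged: Record<string, string | number | null>,
): Record<string, string | number | null> {
  const changed: Record<string, string | number | null> = {};
  for (const [key, value] of Object.entries(merged)) {
    if (current[key] !== value) changed[key] = value;
  }
  return changed;
}
```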

### Cadence (AERIE-183 target)

AERIE-183 §Deliverable 2 pins the MVP at *"5 minutes where live streaming is not feasible; adjust per source based on data volume."* Per-source assignment:

| Path | Tier | Why |
|---|---|---|
| REBL3 ingestion (rebl3Sites) | weekly | Site inventory changes quarterly; ~150 sites; weekly is well above change rate. |
| Rhodes merger (rhodesMergerSync) | every5Minutes | Pure read/diff/write; ~free on no-op; meets AERIE-183 target. |
| Wrike LiDAR backfill (rhodesWrikeMatterportBackfill) | daily | LiDAR Vendor is near-static; daily preserves Wrike rate-limit headroom. |

A new every5Minutes tier was added to REFRESH_TIERS for this. The shouldRefresh/markRefreshed/cadence-test surfaces all generalise transparently.
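
The tier mechanics might reduce to a last-run timestamp check. The intervals below match the tiers named above (the cadence test pins every5Minutes at 300_000 ms), but the function shape is a sketch, not REFRESH_TIERS' actual code:

```typescript
// Sketch of cadence tiers; every5Minutes pins the 300_000 ms interval.
const REFRESH_INTERVALS_MS: Record<string, number> = {
  every5Minutes: 300_000,
  daily: 24 * 60 * 60 * 1000,
  weekly: 7 * 24 * 60 * 60 * 1000,
};

function shouldRefresh(
  tier: string,
  lastRefreshedMs: number | null,
  nowMs: number,
): boolean {
  const interval = REFRESH_INTERVALS_MS[tier];
  if (interval === undefined) throw new Error(`unknown tier: ${tier}`);
  // Never-refreshed tiers run immediately; otherwise wait out the interval.
  return lastRefreshedMs === null || nowMs - lastRefreshedMs >= interval;
}
```

Holding the tier on a degraded run (the cadence-advance discipline above) is then just declining to record a new `lastRefreshedMs`.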

## What 2528949 (most recent) closed — fresh-review pass

Addresses every Critical + every High + every blocking Medium + most Low from the post-update internal review.

Critical

- #1 ISP fetcher: ONE Scan per cycle, not N. New fetchAllLatestIspAnalyses() does one DDB Scan filtered to status="completed", in-memory group-by-model_id with deterministic latest-per-model selection, returns Map<model_id, IspFetchResult>. Merger orchestrator calls it once per cycle and looks up per-site in O(1). Per-site fetchLatestIspAnalysisByModelId kept as test-only injectable + one-off lookups.

- #2 Zod runtime validation on ISP-fetched JSON. Both inline and S3 branches now validate via IspAnalyzeResponseSubsetSchema before returning — malformed JSON / wrong types / canonical shape changes throw with a source-label-prefixed error rather than silently propagating garbage.
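
The one-scan fix replaces N per-site Scans with a single pass plus an in-memory group-by. A sketch of the latest-per-model selection, including the lexicographic job_id tie-break from High #4 below (record shape illustrative, not the actual `IspFetchResult`):

```typescript
// Illustrative latest-per-model selection over one scan's worth of completed
// analyses: group by model_id, keep the newest created_at, and break
// created_at ties on lexicographic job_id for determinism.
interface IspAnalysis {
  model_id: string;
  job_id: string;
  created_at: string; // ISO timestamp, so string comparison orders correctly
}

function latestPerModel(rows: IspAnalysis[]): Map<string, IspAnalysis> {
  const latest = new Map<string, IspAnalysis>();
  for (const row of rows) {
    const prev = latest.get(row.model_id);
    if (
      prev === undefined ||
      row.created_at > prev.created_at ||
      (row.created_at === prev.created_at && row.job_id > prev.job_id)
    ) {
      latest.set(row.model_id, row);
    }
  }
  return latest;
}
```

The merger then does an O(1) `Map` lookup per site instead of a DDB Scan per site.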

High

- #3 subset-compat actually catches renames now — replaced one-way assignability check (which was permissive due to optional fields) with per-field-path indexed-access checks (ISPAnalyzeResponse["recommended_capacity"], etc.). Renaming any path on canonical fails compilation.

- #4 pickLatestByCreatedAt deterministic tie-break on lexicographic job_id.

- #5 Worker Rhodes-merger branch tests — 5 new tests cover every gate branch (clean / errCount / upserts.failed / isp.failed best-effort / provisioning gap).

- #6 zoning documented as intentionally not mapped (raw REBL3 string vs Rhodes enum; mapping table owned by AERIE-195 future work).

- #7 Explicit AWS SDK retry config ({ maxAttempts: 5, retryMode: "adaptive" }) on DDB + S3 clients.

- #8 IAM scope + Sergio-signoff requirement documented in fetcher docstring as runbook item.

Medium

- #9 Per-site ISP failures are best-effort in worker gate (bulk ISP failures still gate via __bulk:isp-batch).

- #10 Provisioning-gap detection: notFound > 50% of sites → "PROVISIONING GAP" warning + clock holds.

- #11 Provenance docstring aligned with patch-level reality.

- #12 RhodesClient bounded retry for 502/503/504 + network throws (default 3 retries, exponential backoff). 4xx and 500 NOT retried. 8 new tests.

- #13 hubspot-fetcher.ts header note clarifying it's Redshift-backed, not HubSpot REST.

- #15 Wrike maxFolders truncation surfaces as errors["__truncated"] + visible warn (not silent log).

- #16 HubSpot name-collision drops are now logged.

- #17 AWS SDK version skew investigated — intrinsic to package release cadence (util-dynamodb lags client-dynamodb); current pin is the best alignment achievable.
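
Medium #12's retry policy reduces to a status predicate plus bounded exponential backoff: 502/503/504 and network throws retry, 4xx and plain 500 fail immediately. A sketch, not the actual RhodesClient implementation:

```typescript
// Sketch of the bounded-retry policy. Only gateway-class statuses retry;
// 4xx and 500 fail fast. Backoff doubles per attempt from a 1s base.
function isRetryableStatus(status: number): boolean {
  return status === 502 || status === 503 || status === 504;
}

function backoffDelayMs(attempt: number, baseMs = 1_000): number {
  // attempt is 0-indexed: 1s, 2s, 4s, ...
  return baseMs * 2 ** attempt;
}

async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  sleep: (ms: number) => Promise<void> = () => Promise.resolve(),
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = (err as { status?: number }).status;
      // Network throws carry no status and are treated as retryable.
      const retryable = status === undefined || isRetryableStatus(status);
      if (!retryable || attempt >= maxRetries) throw err;
      await sleep(backoffDelayMs(attempt));
    }
  }
}
```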

Low

- Wrike LiDAR regex/comment alignment fixed (removed /show/m/ID mention that didn't match the regex or empirical sample).

- missingFolders counter on Wrike backfill result (folders that don't appear in Wrike's batch response).

- Sites sorted by slug before iteration → deterministic per-cycle log order.

- ISP fetcher console.warn for bucket mismatch swapped to injected log for consistency.

- New every5Minutes cadence test pins the 300_000ms interval.

Nits

- isp-subset.ts docstring explains why subset fields are all optional (matches at-rest persisted JSON; runtime required-field assertion lives in fetcher's Zod validation).

NOT fixed (with rationale in commit body)

- Low #20 / #24 (logging prefix consistency) — existing prefixes already grep-able.

- Med #14 (listSites pagination) — out of scope at current scale; tracked as follow-up.

- Low #21 (no resumption) — idempotent merge-then-diff makes worst case acceptable; cursor state would be meaningful complexity for marginal benefit.

## What 518001e closed

Three follow-ups the merger originally flagged as out-of-scope are now in:

1. REBL3 status enrichment in the merger: loiSignedDate / leaseSignedDate / projectedOpenDate now flow into Rhodes. Implementation reads from Aerie's Convex rebl3Sites.workflowStatuses cache (updated daily by the connector) via a new internalQuery listRebl3SiteEnrichment + GET /sync/analytics/rebl3 httpAction; the merger calls it every cycle, so newly-cached dates propagate to Rhodes within 5 min of the next REBL3 ingestion. Pure parser handles all four signed-status synonyms REBL3 emits (signed/done/completed/complete), ignores the leasing system (post-sign workflow ≠ lease-signed), and reads projected_open_date regardless of due-diligence status.

2. HubSpot layer wiring (real Redshift join) — replaces the v1 empty-map default with a production fetcher that queries staging_education.hubspot_programs_raw (the canonical raw HubSpot mirror — explicitly NOT core_education.dim_school, which is the stale derived table flagged for removal once canonical-sites lands; TEMP comments span analytics/refresh.ts and the related queries). Joins to Rhodes sites by normalised display name (lowercase + trim + collapse-whitespace). Sites with no name match get NO HubSpot layer (the merger handles that correctly). New file sync/src/upstream/rhodes/hubspot-fetcher.ts with full unit tests.

3. REBL3 ingestion cadence: weekly → daily. REFRESH_TIERS.rebl3Sites bumped. This is independent of the Rhodes write path (the merger reads REBL3 directly every 5 min, so Rhodes already gets 5-min freshness on REBL3 data); the cadence change only affects Aerie's own rebl3Sites Convex cache, which JC's dashboards read against. Daily catches every realistic LOI / lease / diligence event without weekly's worth of staleness in front of operators. Cost: ~300 REBL3 requests/day, well under any rate limit.
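
The signed-status parser in follow-up 1 normalizes the four synonyms REBL3 emits before comparing. A sketch (the function name is hypothetical, not the PR's actual parser):

```typescript
// Hypothetical stand-in for the pure parser: REBL3 emits "signed", "done",
// "completed", or "complete" to mean a signed status; anything else is not.
const SIGNED_SYNONYMS = new Set(["signed", "done", "completed", "complete"]);

function isSignedStatus(raw: string | null | undefined): boolean {
  if (raw == null) return false;
  return SIGNED_SYNONYMS.has(raw.trim().toLowerCase());
}
```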

## Remaining open follow-ups (NOT blocking this PR)

1. HubSpot join: name-based v1 → explicit alias table. Today's join uses normalised display-name matching. An explicit staging_education.rebl3_hubspot_alias table (or extension to the existing map_school_alias) would close the long tail of name mismatches. The fetcher's signature accommodates that swap as a drop-in replacement.

2. Audit doc fix on aerie/sync/src/analytics/capacity.ts (Klair-ISP repo). v1 of rhodes_upstream_data_map.md mis-described that file. v2 corrects + supersedes; doc landing alongside this PR.

3. Sindri webhook health verification (sub-issue 2 — Rhodes repo). Operational state of the existing Sindri webhook is unknown post-freeze. If silent → AERIE-184 #2 becomes unsourced_frozen; if firing → gap-closure is a Rhodes-repo PR (processInboundWebhook extension).

4. Migrate siteRebl3 consumers off the legacy table; remove legacy writes. Owned by Eric's Dashboard Consumers project.

## What this enables for the dashboards

- Master Site Pipeline Dashboard ([AERIE-185](https://linear.app/builder-team/issue/AERIE-185)): site cards with rebl3Slug + name + address + classification land directly. Columns 1–3 (Pre-op search / diligence) read rebl3Sites.workflowStatuses. Compound by_classification_state index supports filtering on both at once.

- Site Detail View ([AERIE-186](https://linear.app/builder-team/issue/AERIE-186)): Panel 1 (Rebel diligence statuses) reads the same workflowStatuses. Panel 3 (School facts) gets address + capacity + REBL3 scoring + agent_results.budget for diligence cost estimates.

- Columns 4–7 of the Kanban (milestone-gated) still depend on Benji's [Unified Sync Pipeline](https://linear.app/builder-team/project/dashboard-consumers-rhodes-migration-80c21164a066). Columns 8–9 + Detail Panel 5 (quality bars) depend on existing Aerie data. Detail Panel 4 (artefact links) depends on Yibin's Schema UI.

## Reviewer guide

The branch is large (11 commits, ~9k lines net) but is structured to review in independent chunks. Suggested order:

1. 755c75f (REBL3 client + Zod schemas) — pure foundations, no external deps.

2. 1c41daa (rebl3Sites Convex table + HTTP endpoint) — schema + write surface only.

3. 1ed1276 + 11938b7 (REBL3 connector slices 2B + 2C) — the ingestion orchestrator end-to-end.

4. 48fc3f7 (per-field merger) — pure function. The contract between every Track A primitive and Track B writer.

5. 19617e4 + 6a03d4e (ISP adapter + fetcher, Wrike extractor) — Track A primitives 2 and 3.

6. badd2d7 (RhodesClient) — typed wrapper for Rhodes PR #56's /sync/aerie/* routes.

7. 814ea46 (orchestrators + worker wiring) — composes everything above.

Code-quality passes (907c9fd, 527b4b9) are surgical; safe to skim diff-only.

### Deploy / ops dependencies (no action needed for review, but flagged for context)

The whole AERIE-195 path stays dormant in prod until two things happen:

1. [Rhodes PR #56](https://github.com/AI-Builder-Team/Rhodes/pull/56) merges + deploys. Adds the structured matterportModelId field on Rhodes sites + the three /sync/aerie/* httpAction routes the orchestrators target.

2. Ops sets RHODES_CONVEX_SITE_URL + RHODES_API_KEY env vars on the Aerie analytics worker. API key was minted 2026-04-23 by Yibin.

Until both happen, the orchestrators short-circuit with a one-time warn each (Rhodes merger sync skipped: … / Wrike LiDAR backfill skipped: …) and the rest of the analytics worker continues unchanged.
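The warn-once short-circuit can be sketched as a small guard, assuming the orchestrator is invoked every cycle. Names and the env-var check shape are illustrative, not the actual worker code:

```typescript
// Module-level memo so each skip reason is logged at most once.
const warned = new Set<string>();

function skipWithOneTimeWarn(key: string, reason: string): void {
  if (warned.has(key)) return;
  warned.add(key);
  console.warn(`${key} skipped: ${reason}`);
}

// Returns true when the merger can run; false when it short-circuits.
// The rest of the analytics worker continues either way.
function runRhodesMerger(env: Record<string, string | undefined>): boolean {
  if (!env.RHODES_CONVEX_SITE_URL || !env.RHODES_API_KEY) {
    skipWithOneTimeWarn("Rhodes merger sync", "missing env vars");
    return false;
  }
  return true;
}
```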

## Test plan

270+ net-new tests across this PR. Full sync suite: 890/890 green. Chat REBL3 tests: 15/15 green.

- [x] REBL3 client (sync/src/upstream/rebl3/client.test.ts, 15 tests): schema-only contract pin against recorded fixtures, URL building, response parsing, /status 404→null, error throws on non-404, getSite 404 throws asymmetric to status, resolve URL, AbortSignal timeout wiring + timeoutMs=0 disables.

- [x] REBL3 sync (sync/src/upstream/rebl3/sync.test.ts, 37 tests): pure mapper + refreshRebl3Sites orchestrator (pagination, end-of-inventory, first-page-throws-aborts, mid-cycle isolation, push-failure isolation, batchSize splitting, enrichment wiring, stuck-offset duplicate detection, per-batch error key composition, bulk politeness sleep) + full enrichRebl3Sites orchestrator (happy path, /status 404, /site throw, both fail, politeness sleep, batchSize splitting, push isolation, missing agent_results).

- [x] Convex rebl3Sites mutations + read (chat/convex/analyticsRebl3.test.ts, 15 tests): insert, patch, all-optional, bulk, per-row failure isolation, indexes by_classification + by_state, JSON round-trip, identity preservation, listRebl3SiteEnrichment slim shape + null-omission + empty-table.

- [x] Per-field merger (sync/src/upstream/rhodes/field-merge.test.ts, 30 tests): all four-layer precedence chains, edge cases (all-empty, null vs undefined, empty-string-as-null, zero-is-real, partial layers, all-four-layers precedence), whitespace handling (whitespace-only does NOT shadow, padded values trimmed, tab/newline), NaN handling (rejected with fallback), purity check.

- [x] ISP shape adapter (sync/src/upstream/rhodes/isp-adapter.test.ts, 10 tests): happy path, top-level vs summary precedence, optional sub-objects absent, zero passes through, end-to-end into the merger.

- [x] ISP fetcher (sync/src/upstream/isp/fetcher.test.ts, 21 tests): DDB scan + filter + pagination, S3 spillover resolution, malformed s3:// URI, malformed JSON, missing result_json on completed row, no-match returns null, latest-by-created_at picker, empty Items.

- [x] Wrike LiDAR Vendor extractor (sync/src/upstream/wrike/lidar-vendor.test.ts, ~10 tests): both URL shapes (?m=..., /models/{id}), edge cases (null, undefined, empty, whitespace, free-text, double-encoded).

- [x] RhodesClient (sync/src/upstream/rhodes/client.test.ts, 29 tests): every endpoint happy path + 4xx/5xx + shape mismatch + null-value handling + headers (X-Api-Key + Content-Type for POST, X-Api-Key only for GET) + constructor validation + timeout wiring + env helpers (configured/missing/whitespace).

- [x] Rhodes merger orchestrator (sync/src/upstream/rhodes/sync.test.ts, 29 tests): pure helpers (rebl3SiteToLayer, rhodesSiteToSchemaUiLayer, applyRebl3StatusEnrichment with real REBL3 status fixture, diffMergedAgainstRhodes including zero-is-real / unmapped-fields / empty-is-feature cases) + happy paths (REBL3+ISP, REBL3 only, Schema-UI override, HubSpot layer, REBL3 enrichment dates) + failure isolation (bulk REBL3 throws, REBL3 enrichment fetch throws, per-site ISP throws, per-site upsert throws) + counter flow-through + maxIspFetches cap.

- [x] HubSpot fetcher (sync/src/upstream/rhodes/hubspot-fetcher.test.ts, 20 tests): normaliseSchoolName (case + whitespace + null + empty), hubspotProgramToLayer (null→undefined, zero-is-real, empty-string drop), buildHubspotLayerByNormalisedName (first-wins, fallback, no-name drop, empty input), joinHubspotLayersToRhodesSites (re-key to slug, normalise both sides, no-match drop, empty-name drop, multi-site to one HubSpot record), end-to-end fetcher with mocked Redshift module.

- [x] Wrike LiDAR backfill orchestrator (sync/src/upstream/wrike/lidar-vendor-sync.test.ts, 15 tests): pure helper (pickLidarVendorValue) + happy paths for both URL shapes + skip-on-match + update-on-change + never-clears policy (empty + unparseable) + site filtering + batching at batchSize ceiling + shared folderId handling + Wrike 500 per-batch isolation + per-site upsert failure isolation + outcome-counter flow-through.

- [x] Cadence + worker wiring (sync/tests/analytics/refresh-cadence.test.ts, sync/tests/analytics-worker/index.test.ts): every5Minutes tier registered, all new domains in REFRESH_TIERS whitelist, worker tests confirm env-missing path warns once and continues without crashing.

Pre-commit hooks ran on every commit: convex-paths, biome, typecheck-chat, typecheck-sync all green throughout.
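The per-field merge semantics the field-merge tests pin down (later layers win only with a usable value; whitespace-only does not shadow; zero is real; NaN is rejected) reduce to a short pure function. A hedged sketch, not the actual merger:

```typescript
type FieldValue = string | number | null | undefined;

// A value participates in the merge only if it carries real content.
function isUsable(v: FieldValue): boolean {
  if (v === null || v === undefined) return false;
  if (typeof v === "number") return !Number.isNaN(v);
  return v.trim().length > 0;
}

// Layers ordered lowest → highest precedence; the last usable value wins.
function mergeField(...layers: FieldValue[]): FieldValue {
  let result: FieldValue = undefined;
  for (const v of layers) {
    if (isUsable(v)) {
      result = typeof v === "string" ? v.trim() : v;
    }
  }
  return result;
}
```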

### Local steady-state check (recommended before merge)

Once Rhodes PR #56 is in dev, set RHODES_CONVEX_SITE_URL + RHODES_API_KEY (+ WRIKE_API_TOKEN + WRIKE_SPACE_ID + WRIKE_ROOT_FOLDER_ID if exercising the LiDAR backfill) and observe:

[rhodes/merger] refresh starting

[rhodes/merger] REBL3 fetched N sites (keyed by slug)

[rhodes/merger] refresh complete: N sites, N updated (M fields), N no-diff, 0 failed; ISP H/T hit, 0 miss, 0 err; layers r=N/h=0/i=H/s=N (Xms)

[rhodes/merger] N sites, N updated (M fields), N no-diff, 0 not-found

[wrike/lidar-backfill] refresh starting

[wrike/lidar-backfill] complete: N sites considered, F folder IDs, F folders fetched, E extracted, … (Xms)

[wrike/lidar-backfill] F folderIds, E extracted, U updated, K unchanged

Verify in the Rhodes Convex dashboard that sites rows are getting their merged fields populated and matterportModelId is being set from Wrike for sites with wrikeFolderId. If env vars are missing, the worker now logs Rhodes merger sync skipped: … / Wrike LiDAR backfill skipped: … exactly once (not per cycle).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#124 — feat(portfolio): add site lifecycle dashboard wired to Rhodes @benji-bizzell  no labels

## Summary

- New Portfolio dashboard with a 9-column Kanban (pre-op → operating) and an accordion List view, toggleable via the Sort & display popover with localStorage persistence.

- Wired to Rhodes /sync/aerie/listSites — stage/milestone-driven phase bucketing, DRIs, target dates, student capacity.

- Enrollment numbers hydrated from HubSpot deals via portfolio-enrollment; placeholder data path kept behind the contract for degraded states.
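The localStorage-persisted Board ↔ List toggle can be sketched as a pair of helpers. The storage key and function names are assumptions, not the shipped code:

```typescript
type PortfolioView = "board" | "list";

// Minimal storage surface so the helpers are testable without a DOM.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const VIEW_KEY = "portfolio.view"; // hypothetical key name

function loadView(storage: KVStore): PortfolioView {
  // Unknown or missing values fall back to the Kanban board.
  return storage.getItem(VIEW_KEY) === "list" ? "list" : "board";
}

function saveView(storage: KVStore, view: PortfolioView): void {
  storage.setItem(VIEW_KEY, view);
}
```

In the app the browser's `window.localStorage` would satisfy the `KVStore` shape directly.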

## Why

Site ops had no single rollup of where every school sits in the open-a-new-site lifecycle. This dashboard replaces the mental spreadsheet — one screen to see what's ahead of schedule, what's blocked, who owns what, and how each site is filling up heading into its open date.

## Test plan

- [x] pnpm --filter chat test portfolio — card initials, date parsing, phase config, list/contract, enrollment builder all green

- [ ] Verify /dashboards/portfolio loads against dev Rhodes endpoint; spot-check a site in each phase column

- [ ] Toggle List ↔ Board, confirm selection persists across reload

- [ ] Confirm enrollment counts match HubSpot for a known site (e.g. Austin Mueller)

🤖 Generated with a very good bot

#2663 — feat(ai-spend): super-admin Token Pricing view with rationale notes (KLAIR-2582) @ashwanth1109  no labels

## Demo

<img width="2624" height="1636" alt="image" src="https://github.com/user-attachments/assets/11a07e6c-5a0a-433e-9581-f99ac1a22512" />

## Summary

Adds a super-admin-only Token Pricing view under AI Spend & Adoption that surfaces core_finance.ai_spend_token_pricing — the per-model rates the AI spend ingest Lambdas (Claude, OpenAI, Azure) use to enrich raw token events with USD cost. Previously hand-edited via SQL workbench with no in-app visibility and no rationale trail.

Motivated by [KLAIR-2580](https://linear.app/builder-team/issue/KLAIR-2580) drift incident ($86k gross / $8k net misbilling across 10 models) where operators couldn't see the current pricing state or why individual rows existed.

## What's in here

Backend

- GET /api/ai-costs/token-pricing — super-admin-gated, returns every row with a server-derived provider field (claude | openai | azure | other) based on model_pattern prefix.
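The server-derived provider classification described above can be sketched as a prefix match on model_pattern. The prefix lists here are illustrative assumptions, not the actual backend mapping:

```typescript
type Provider = "claude" | "openai" | "azure" | "other";

// Classify a pricing row by its model_pattern prefix (assumed prefixes).
function deriveProvider(modelPattern: string): Provider {
  const p = modelPattern.toLowerCase();
  if (p.startsWith("claude")) return "claude";
  if (p.startsWith("gpt") || p.startsWith("openai")) return "openai";
  if (p.startsWith("azure")) return "azure";
  return "other";
}
```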

Frontend

- AICostsShell header now renders two super-admin buttons: Token Pricing (new) left of Raw Data Reports.

- New TokenPricingPage with breadcrumb, source strip (row count + currently-active count + Refresh + Export CSV), provider filter chips, and a UnifiedTable with sorting/search/column-selector.

- Sub-cent prices render at 6 decimals; $≥0.01 at 2.

- Currently-active rows (effective_to IS NULL) get a left accent border in the first column.

- URLs in notes auto-link (target="_blank" rel="noopener noreferrer").

- Provider chips for providers with 0 rows are hidden; selected chip uses text-klair-accent-on-accent to match the period-preset styling.

- 17 Vitest tests covering the CSV builder, price formatter, date formatter, and URL auto-linker.
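The price display rule in the list above (sub-cent prices at 6 decimals, one cent and above at 2) reduces to a one-liner; a sketch assuming plain USD numbers, with a hypothetical function name:

```typescript
// Sub-cent rates get 6 decimals so per-token prices stay legible;
// anything from one cent up uses normal 2-decimal currency formatting.
function formatPrice(usd: number): string {
  return usd < 0.01 ? `$${usd.toFixed(6)}` : `$${usd.toFixed(2)}`;
}
```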

Schema

- scripts/sql/create_ai_spend_token_pricing.sql — authoritative CREATE TABLE snapshot including context_window, speed, and notes (columns were added in-place via SQL workbench at various points; this file now captures the current state).

## Screenshots

Before → after chip styling: selected "All" chip previously rendered white text on accent (low contrast); now uses klair-accent-on-accent (dark text on accent), matching the 12M period preset.

## Reviewer notes

- klair-api/uv.lock was intentionally not touched in this PR — a local uv sync had regenerated it with a v3 lockfile revision (6.8k-line diff) as an unrelated side effect of [apps#2659](https://github.com/AI-Builder-Team/Klair/pull/2659)'s new uv.toml. Reverted to avoid lockfile churn.

- The notes column ALTER has already been applied to production; the DDL file is purely a current-state snapshot (Redshift has no migration runner in this repo).

- Data flow mirrors the Azure cost reports blueprint from [KLAIR-2579](https://linear.app/builder-team/issue/KLAIR-2579) — same apiGet + AbortController hook pattern, same fetch_with_params_strict + NaN→None service pattern.

## Test plan

- [ ] Navigate to AI Spend & Adoption as a super-admin; verify Token Pricing button appears left of Raw Data Reports.

- [ ] Click Token Pricing; verify breadcrumb, source strip, row count, and currently-active count render.

- [ ] Verify active rows have a left accent border in the Provider column.

- [ ] Cycle through provider chips (All / Claude / OpenAI / Other); verify Azure chip is hidden (no Azure rows in prod today).

- [ ] Verify selected chip has dark text on accent green (matching 12M in the left filter sidebar).

- [ ] Click Export CSV; open the file and verify 6-decimal price precision and URL-preserved notes.

- [ ] Click a URL inside a notes cell; verify it opens in a new tab.

- [ ] Load as a non-super-admin; verify the Token Pricing button is not visible and GET /api/ai-costs/token-pricing returns 403.

- [ ] Confirm pnpm tsc --noEmit, pnpm eslint ..., and pnpm vitest run src/screens/AIAdoptionV2/components/TokenPricing/ all pass (run in CI).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2670 — fix(mfr-memo): rewrite Software Note 3 for Group attribution and YoY vs Dec-31 prior year @eric-tril  no labels

### Summary

Rewrites Note 3 ("Deferred tax assets and liabilities") in the Software MFR memo docx so the narrative matches the reference wording. The prior output misattributed Group-level DTA/DTL figures to "Software", compared the current period against the prior month-end instead of the prior fiscal year-end (Dec 31), and included DTL / crystallization / passive-investments / mark-to-market / stock sale content that does not belong in this note. This change covers paragraph 1, paragraph 2, the LLM prompt, the template fallback, the drill-down provenance, and the supporting balance-sheet fetcher.

### Changes

- Added fetch_software_bs_dta_yoy in klair-api/services/docx_reports/memo_data/_balance_sheet.py that queries staging_netsuite.balance_sheet for accounts 22300 (DTA) and 35125 (DTL) at period-end vs Dec 31 of the prior fiscal year, returning the six DTA/DTL/NOLs numeric keys plus a _prior_date field.

- Replaced the fetch_bs_dta_dtl (prior month-end) call site in klair-api/services/docx_reports/memo_data/software_defaults.py with the new YoY fetcher; bs_prior_date is now Dec 31 of the prior fiscal year.

- Added _q_label helper that emits "Q1 2026"-style labels at quarter-end months and month labels otherwise; exposed q_label_cur / q_label_pri on the template context and dropped dtl_cur / dtl_pri / nols_gross_pri month-end labels.

- Rewrote the Note 3 LLM prompt and template fallback: paragraph 1 names audited divisions (Aurea, Avolin & Jive), adds the January 2020 derecognition sentence, attributes figures to "the Group", and supports a single shared income-tax phrase when both periods have the same sign.

- Reduced paragraph 2 to DTA + NOLs only — removed DTL, crystallization, passive investments, mark-to-market, and stock-sale content.

- Updated drill-down provenance: drops dtl_current / dtl_prior, adds q_label_current / q_label_prior / dta_prior_year_end / nols_gross_prior_year_end, and removes the 35125% account filter from the surfaced SQL; prior-date fixture is now 2025-12-31.

- Updated existing tests (test_template_fallback_output, test_build_llm_prompt_*) and added new tests: test_note_3_matches_reference_wording, test_note_3_single_label_when_tax_signs_match, test_note_3_provenance_drops_dtl, test_software_bs_dta_yoy_uses_prior_year_end.
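The quarter-labeling rule from the _q_label change can be illustrated compactly. The actual helper is Python; this TypeScript sketch only shows the rule (quarter-end months get a "Q1 2026"-style label, other months fall back to a month label), with a hypothetical function name:

```typescript
function qLabel(periodEnd: Date): string {
  const month = periodEnd.getUTCMonth() + 1; // 1..12
  const year = periodEnd.getUTCFullYear();
  if (month % 3 === 0) {
    // Mar / Jun / Sep / Dec are quarter-end months.
    return `Q${month / 3} ${year}`;
  }
  const monthName = periodEnd.toLocaleString("en-US", {
    month: "long",
    timeZone: "UTC",
  });
  return `${monthName} ${year}`;
}
```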

### Testing

- cd klair-api && pytest tests/reports_service/test_software_memo_defaults.py

- cd klair-api && uv run ruff format services/docx_reports/memo_data/_balance_sheet.py services/docx_reports/memo_data/software_defaults.py tests/reports_service/test_software_memo_defaults.py

- cd klair-api && uv run ruff check services/docx_reports/memo_data/_balance_sheet.py services/docx_reports/memo_data/software_defaults.py tests/reports_service/test_software_memo_defaults.py

- cd klair-api && uv run pyright services/docx_reports/memo_data/_balance_sheet.py services/docx_reports/memo_data/software_defaults.py

- Generate a Software MFR docx at a quarter-end period (e.g., 2026-03-31) and confirm Note 3 paragraph 1 reads "the Group (Aurea, Avolin & Jive)", includes the January 2020 derecognition sentence, uses "Q1 2026" vs "Q1 2025" labels, and paragraph 2 references DTA + NOLs against Dec 31 of the prior fiscal year with no DTL/crystallization/passive-investments/mark-to-market content.

http://localhost:3001/monthly-financial-reporting

### When testing, use Software Memo January 2026

<img width="1847" height="788" alt="image" src="https://github.com/user-attachments/assets/10d164ef-c886-4d44-813f-4b4bd3bb5678" />

The Portfolio  —  Trilogy Companies

Alpha School Claims to Double D1 Athletic Odds — By Cutting Classroom Time to Two Hours

Joe Liemandt's Austin school says the secret to elite college sports recruiting isn't more practice — it's eliminating the academic waste between 8 a.m. and 3 p.m.

AUSTIN, TEXAS — Alpha School, the private K-12 institution founded by Trilogy billionaire Joe Liemandt, is now pitching parents on a new metric: Division I college sports placement rates. In a recent blog post, the school claims its model — which compresses a full academic curriculum into two hours of AI-guided learning per day — creates more time for athletic training and, as a result, doubles a student's odds of earning a D1 scholarship.

The argument hinges on volume. Traditional schools lock kids in classrooms for six to seven hours daily. Alpha's AI tutors deliver the same academic content in a fraction of the time, freeing up the remainder of the day for what the school calls "life skills" — including intensive athletic development. According to the post, Alpha students train longer, recover better, and compete more frequently than peers stuck in conventional schedules.

The school also points to what it calls the "ADHD misdiagnosis epidemic" — arguing in a separate post that many children labeled hyperactive are simply movement-starved, their energy suppressed by desk-bound schooling. Alpha's response: build physical activity into the core structure of the day, not as an afterthought.

Co-founder MacKenzie Price has argued publicly that traditional schools outsource character development to afterschool sports, where kids actually learn grit, leadership, and resilience. Alpha's model internalizes that — treating athletics not as extracurricular, but as central to education itself.

The pitch is bold, and the data is still thin. Alpha has fewer than 200 students across three campuses. D1 placement rates won't be verifiable for years. But the framing is classic Liemandt: identify inefficiency, automate it, redeploy the freed capacity toward higher-value outcomes. In this case, the inefficiency is the school day. The higher-value outcome is a scholarship offer.

Whether parents buy it — and whether college coaches care where the extra training hours came from — remains to be seen.


Skyvera's Acquisition Spree: Three Deals in Twelve Months Signal Telecom Consolidation Play

ESW Capital's telecom arm has quietly assembled a full-stack software portfolio through strategic acquisitions — and if you read between the lines, this is about locking carriers into a single vendor.

AUSTIN, TEXAS — Skyvera, the telecom software division of ESW Capital, has completed three significant acquisitions over the past year, assembling what industry insiders are calling a "vertically integrated software suite" for telecoms desperate to modernize their infrastructure.

The most recent deal — CloudSense, a Salesforce-native CPQ and order management platform — fills a critical gap in Skyvera's portfolio: the front-office sales configuration layer. CloudSense handles configure-price-quote workflows for telecom and media providers, the kind of complex product bundling that still chokes legacy systems.

Before CloudSense, Skyvera acquired STL's telecom products group, which brought digital BSS functionality — monetization, optical networking, and analytics. And underneath it all sits Kandy, Skyvera's cloud-based real-time communications platform, which was already in the portfolio.

What's interesting here is the strategic coherence. You've got front-office (CloudSense), back-office billing and analytics (STL assets), and customer engagement infrastructure (Kandy). That's a full stack. A carrier could theoretically run its entire digital operation on Skyvera products.

And that's the play. ESW Capital's model has always been about margin extraction through vendor lock-in. Buy mature software, raise support prices, cut costs via Crossover's global talent pool, and watch EBITDA climb to 75%. But telecom software is stickier than most enterprise categories — rip-and-replace is nearly impossible once you're running live traffic.

Skyvera isn't selling point solutions anymore. It's selling an ecosystem. And once a carrier is in, the switching costs are prohibitive.

A source familiar with ESW's telecom strategy — who requested anonymity because they're not authorized to speak publicly — put it this way: "They're not trying to be the cheapest option. They're trying to be the only option you need."

The Forbes profile of Joe Liemandt this week focused on his workforce automation ambitions. But the Skyvera buildout might be the more revealing tell. This isn't innovation theater. This is infrastructure capture.

CloudSense  ·  Skyvera completes acquisition of CloudSense, expanding telec  ·  STL Divested Assets
The Machine  —  AI & Technology

The Machines Are Learning to See Like We Do — Dyslexia, Blind Spots, and All

A wave of new studies reveals that AI's deepest insights come not from perfecting vision, but from faithfully reproducing its imperfections.

LAUSANNE, SWITZERLAND — For three and a half billion years, evolution has been running the longest experiment in information processing the universe has ever known. Now, in a handful of labs scattered across continents, artificial intelligence is beginning to replay that experiment — not by surpassing biological vision, but by stumbling in precisely the same ways.

The most striking result comes from EPFL, where researchers have built an AI system that mimics dyslexia — not as a flaw to be engineered away, but as a window into how the brain organizes language. The model doesn't just simulate reading difficulty; it reproduces the specific patterns of letter transposition and phonological confusion that characterize dyslexic processing. It is, in essence, a digital fossil of a neurodevelopmental difference that affects roughly one in ten humans.

Meanwhile, a separate team has constructed what they call a "mini-AI" that decodes the visual processing of macaque brains — mapping how neurons in the primate visual cortex respond to shape, motion, and depth. The model is deliberately small, a constraint that forces it to develop efficient representations eerily similar to those found in actual neural tissue. Bigger, it turns out, is not always more truthful.

At Stanford, generative AI is being applied to brain disease research, helping scientists model the molecular cascades behind neurodegeneration. And UC San Diego recently cataloged nine distinct breakthroughs across disciplines — from protein folding to climate modeling — where AI served not as an oracle but as a collaborator, amplifying human intuition rather than replacing it.

What unites these efforts is a philosophical shift. The first generation of AI research asked: how do we make machines smarter than us? The current generation asks something more profound: how do we make machines that fail like us, see like us, struggle like us — so that we might finally understand what "us" means?

The dyslexic AI cannot read a novel. The mini-AI cannot recognize a human face. But together, they are composing something remarkable: a mirror held up to three and a half billion years of biological trial and error, reflecting back not perfection, but the beautiful, informative imperfection of minds shaped by a universe that never promised clarity — only survival.

EPFL AI Mimics Dyslexia in Breakthrough Study - Mirage News  ·  Nine Breakthroughs Made Possible by AI - UC San Diego Today  ·  Mini-AI Decodes the Macaque Visual Brain - Neuroscience News

The New Predators of the Grid: AI Data Centers Evolve Beyond the Rack

Permits, power standards, and cooling physics are reshaping how—and where—the next generation of compute can survive.

AUSTIN, TEXAS — In the humming grasslands of the modern economy, a new species is expanding its territory: the AI data center. It does not roam; it anchors itself to substations, waterways, and fiber routes. And as its appetite for electricity swells, its survival depends less on flashy chips than on the quiet, unforgiving disciplines of permitting, grid harmony, and heat.

First comes the courtship ritual with local government. Permitting timelines—often spoken of as if they were weather—are, in truth, selectable traits. Developers are learning that approvals can be hastened by nesting in jurisdictions that have seen this creature before, arriving with complete plans, and pre-answering the environmental questions that would otherwise trigger delays. As Data Center Knowledge reports, preparation and local experience can compress what might become a long season of waiting.

Yet even a permitted site is only a shell. The more profound constraint is the grid itself—an ecosystem shared with factories, homes, and hospitals. IEEE is now working toward unified global standards intended to align data center design with grid operations, smoothing integration and reducing wasteful mismatches between what facilities demand and what utilities can reliably supply. The ambition, detailed in an IEEE-focused briefing, is nothing less than a common language between the den and the savannah: predictable behavior under stress.

Inside the walls, the great struggle is against heat. AI has driven power density to extremes, forcing operators toward hybrid cooling approaches, modular construction, and tighter coupling of compute, cooling, and electrical design. Cooling is no longer an afterthought; it is the system constraint, dictating form and function.

And beyond cooling lies the next migration: power architectures pushed past the traditional rack limits—talk of 800 VDC and rethinking where conversion happens, because space, copper, and losses are all becoming scarce.

Industry watchers increasingly expect hyperscaler habitats to dominate by 2031. In nature, dominance rarely comes from strength alone. It comes from adaptation—learning to breathe with the grid, to shed heat efficiently, and to secure a place in the permitting landscape before the next wave arrives.

Data Center Permits: How Long They Take and What Speeds Appr  ·  Getting Beyond Data Center Construction: Designing for Grid  ·  AI Pushes Cooling to the Forefront of Data Center Design Cha

Theoretical Foundations Reassert Primacy in Machine Learning Development

It could be argued that artificial intelligence engineering discourse has reached an inflection point where purely empirical methodologies face challenge from advocates of mathematical formalism. Preliminary evidence suggests institutional convergence around theoretical foundations. Carnegie Mellon's Machine Learning initiative positions mathematical rigor as prerequisite to scalable impact, while Apple's research explores self-supervised learning through Gaussian process frameworks. The American Physical Society applies statistical mechanics to neural network analysis, addressing the "black box problem." MIT examines autonomous system ethics, positing that evaluative frameworks require axiomatic foundations capable of encoding moral reasoning. Enterprise deployment implications remain contested, though portfolio companies like DevFactory and KLAIR operate at the intersection of theoretical advancement and production deployment, suggesting mathematical foundations may prove differentiating in competitive contexts.

The Editorial

The Great AI Consolidation Has Begun, and Nobody Learned a Thing

From Cohere-Aleph Alpha to Musk's orbital fantasies, the industry is rushing to merge its way to dominance — repeating every mistake of every prior tech cycle, only faster and with more capital.

AUSTIN, TEXAS — The surest sign that an industry has entered its baroque phase is when the mergers start. Not the quiet, sensible acquisitions of a maturing business — the kind where a company buys a product it needs at a price it can afford — but the grand, thumping, press-release-festooned consolidations that announce to the world: we have no idea what comes next, so we shall become larger.

And so here we are. Cohere and Aleph Alpha are in advanced merger talks, a union that would marry a Canadian foundation-model company to a German enterprise AI outfit in one of those transatlantic combinations that look better on a whiteboard than they do in a Slack channel. Elon Musk, never one to think small when thinking enormous will do, has floated merging SpaceX and xAI so that he might put data centers in orbit — a plan so audacious it makes one wonder whether audacity has become a substitute for a business model. The Trump administration, meanwhile, has unveiled an AI framework that critics suspect is less about governing the technology than about consolidating the power to decide who gets to build it.

All of this is happening simultaneously, and none of it is coincidental.

The AI industry is approximately thirty-six months past its ChatGPT detonation, which is roughly the point in every technology cycle when the initial euphoria gives way to a more sober arithmetic. Investors who wrote checks against visions of artificial general intelligence are now asking about gross margins. Startups that raised at preposterous valuations are discovering that the distance between a demo and a deployable product is measured not in months but in years and tens of millions of dollars of compute. The natural response, when the music slows, is to combine — to swap equity for runway, to trade independence for the comforting fiction of scale.

I have seen this movie before. I saw it in the telecom bubble of the late 1990s, when companies merged not because they had complementary assets but because they had complementary anxieties. I saw it in the enterprise software consolidation of the 2000s, when roll-up shops discovered that bolting together mediocre products does not produce a good one. The few operators who understood this — and there are a few, notably the kind of disciplined acquirers who buy software companies at rational multiples of recurring revenue and then run them properly — have always been the exception that proves how dismal the rule is.

Some observers insist that AI breaks all the old rules about consolidation. They argue that the technology is so transformative, so fundamentally different, that historical patterns do not apply. This is, of course, precisely what people said about the internet in 1999, about mobile in 2012, and about blockchain in 2017. The technology changes. Human nature — specifically, the tendency of executives to confuse getting bigger with getting better — does not.

The mergers will proceed. Some will work. Most will produce organizations too complex to manage and too expensive to unwind. The winners, as always, will be the companies that understood from the beginning that discipline is not the enemy of ambition but its prerequisite — that the point is not to own the most territory but to extract the most value from the territory you hold.

The rest will discover, as their predecessors always have, that consolidation is not a strategy. It is the absence of one.

Why the AI revolution breaks all the old rules about consoli  ·  AI Power Play: Cohere, Aleph Alpha In Advanced Merger Talks  ·  Is Trump’s New AI Framework a Bid to Consolidate Power? - Ro
The Office Comic  ·  Art Desk

Nation’s Executives Calmly Agree To Learn AI By Buying Book, Installing ServiceNow, And Pretending CES Headset Counts As Strategy

Experts say the key to transformation is ensuring the human expertise remains available to be blamed when automation misses a decimal point.

LAS VEGAS — The technology industry entered its annual season of enlightened self-improvement this week, unveiling a disciplined, multi-step plan to prepare leaders for an AI future: (1) announce a partnership, (2) purchase a leadership book, (3) tour a convention floor filled with glowing rectangles, and (4) conduct a layoff so tasteful it can be described as “AI-driven.”

The most reassuring development came from healthcare services provider TridentCare, which announced it will partner with ServiceNow to “power AI-driven transformation across operations,” a phrase that industry observers praised for containing both the word “AI” and the comforting suggestion that the transformation is happening somewhere across the hall, away from anyone’s day-to-day responsibilities. The partnership helpfully offers a familiar corporate experience: the introduction of a new platform that will “streamline workflows,” “enhance visibility,” and create an exciting new category of work called “updating the system so it reflects what we were doing before, but in a different menu.”

“Transformation is essential,” said one operations leader, after clicking through a demo and experiencing the familiar sensation of being transformed into a person who needs three approvals to change a field label. “The AI will take care of the repetitive tasks, freeing humans for higher-value activities like explaining to the AI what the repetitive tasks are.”

At CES 2026, the industry’s emotional support objects were also on full display. According to coverage of Day 1, the show continued its tradition of presenting a new portfolio of devices designed to make the future feel tangible, even if the underlying plan is still, in large part, “we’ll connect it to an assistant and see what happens.” Attendees were seen thoughtfully nodding at prototypes that promised to anticipate human needs, as if the hard part of modern life was not knowing what humans need, but rather not having enough sensors to be certain.

For executives who worry they may not be sufficiently fluent in this era’s required syllables, help arrived in the form of a new book. AI Vantage Consulting’s newly launched “AI Fundamentals For Leaders” will guide decision-makers through 2026 with what the market has long demanded: an authoritative object to place on a conference table so colleagues understand the organization is “taking AI seriously.” The book is expected to improve alignment, primarily by allowing everyone to align on the fact that they have not yet read it.

And yet, amid the announcements, a rude theme has persisted: AI’s productivity promise appears to fall apart without human expertise. Critics have noted that “human-in-the-loop” often translates to “human-on-the-hook,” especially when an automated system produces confident nonsense that requires a veteran employee to translate it back into reality.

This inconvenient requirement has dovetailed with another trend: AI washing, in which layoffs are carefully accessorized with the language of innovation. In these cases, “we’re becoming an AI-first company” functions less as a strategy and more as a gently applied anesthetic for the part where the organization discovers that the most expensive line item is still people who know what they’re doing.

In that sense, the industry’s various offerings—platform partnerships, CES marvels, and leadership books—are not competing approaches but complementary ones. The partnership provides the dashboard. The gadgets provide the morale. The book provides the vocabulary. And the remaining experts provide the last, fragile layer of meaning.

For readers seeking a precise roadmap, the market has now made it simple: announce your AI-driven transformation, buy the fundamentals, then locate a human expert and treat them as your most critical infrastructure—right up until the moment you need to describe them as an avoidable cost.

TridentCare Partners with ServiceNow to Power AI-Driven Tran  ·  A look at the new technology announced on Day 1 of CES 2026  ·  AI Vantage Consulting Launches 'AI Fundamentals For Leaders'
On This Day in AI History

In March 2016, a decade ago this spring, Google DeepMind's AlphaGo defeated Lee Sedol 4-1 in a historic match in Seoul, sealing one of AI's landmark victories over a human world champion in the ancient game of Go.

⬛ Daily Word — Technology
Hint: The foundation of AI reasoning and computer programming.