Vol. I  ·  No. 128  ·  Established 2026  ·  AI-Generated Daily  ·  Free to Read  ·  Free to Print

The Trilogy Times

All the news that's fit to generate  —  AI • Business • Innovation
FRIDAY, MAY 08, 2026  ·  Powered by Anthropic Claude  ·  Published on Klair  ·  Trilogy International © 2026
Today's Edition

SpaceX's $55 Billion Chip Bet Signals AI Infrastructure War Has No Ceiling

From rocket fuel to silicon: Musk's Terafab gambit, Anthropic's 80x growth claim, and Google's AI search gains converge on a single thesis — compute is the only constraint that matters.

NEW YORK — The AI industry's infrastructure arms race escalated sharply this week on three fronts, each pointing to the same underlying scarcity: chips.

SpaceX confirmed plans to invest $55 billion in a domestic semiconductor fabrication facility called Terafab, marking Elon Musk's most aggressive move yet to control the physical layer of artificial intelligence. The rocket company has no prior history in chip manufacturing. That is precisely the point. Vertical integration — from launch vehicles to large language models — has become Musk's operating doctrine, and Terafab extends it into the one bottleneck no amount of software engineering can route around.

The timing is not coincidental. Anthropic CEO Dario Amodei said this week that his company could grow 80 times over in 2026, a figure that, if directionally accurate, would place the Claude maker among the fastest-scaling companies in the history of enterprise software. Amodei was explicit about the consequence: exponential revenue growth translates directly into exponential compute demand. Anthropic does not manufacture chips. It buys them, at market rates, from a supply chain that every major AI lab is simultaneously straining.

Against that backdrop, Musk's Terafab announcement reads less like a moonshot and more like a hedge. Control your own fab, control your own cost curve.

Meanwhile, the consumer layer of AI continued its quiet maturation. Google's AI Mode search — still imperfect on fast-moving topics like celebrity news — is demonstrating measurable advantages in structured tasks: grocery selection, scam detection, multi-step research queries. These are not glamorous use cases. They are sticky ones. The transition from keyword retrieval to conversational synthesis is happening at the margin, one mundane query at a time.

The week's other notable data point arrived in a San Francisco courtroom, where testimony detailed the role Shivon Zilis played as Musk's informant inside OpenAI's board — a reminder that the AI industry's governance structures remain as contested as its supply chains.

The common thread: scale, at every level, is outrunning the institutions built to manage it.

Five Ways A.I. Search Beats an Old-School Google Search  ·  Elon Musk’s SpaceX Plans $55 Billion Investment to Make A.I.  ·  Elon Musk’s Confidante Shivon Zilis Is Cast as His Inside So

Washington Cuts the AI Line

Microsoft, Google and xAI hand the feds early access to unreleased models — and critics say speed just became something only the giants can afford.

WASHINGTON — Microsoft, Google and xAI agreed this week to hand federal reviewers an early look at unreleased AI models, installing a pre-launch checkpoint between Silicon Valley and the public.

The arrangement runs through Washington's safety apparatus. Models cross a federal desk before they cross the wire. The Hill reported the deal on Wednesday.

Big Tech calls it responsible stewardship. The smaller shops aren't applauding.

Startup Fortune fired the loudest shot. Its analysts argue the policy turns speed into a privilege incumbents can afford. Pre-release review costs days.

Days cost dollars. Dollars are the one thing a startup runs short on.

Do the math. Microsoft and Google field legions of compliance lawyers between them. xAI runs on Elon Musk's checkbook.

A four-person AI outfit in a garage cannot staff a single regulatory desk full-time. Same federal queue. Different bill at the end.

The timing tells the tale. The Los Angeles Times reports Google's internal turf war has handed the AI coding crown to Anthropic and OpenAI. Axios counters that the feud between those two could prop Google right back up.

Either way, the board redraws itself by the hour — and any small lab hoping to slip into the gap just got a new line item on its critical path.

Now drop a federal review window on top. Whichever giant clears the desk first ships first. The newcomer waits behind.

Cash burns. Investors get nervous. Engineers walk.

The builder shops are watching. Trilogy International's ESW Capital runs more than 75 enterprise software firms — Aurea, IgniteTech, Skyvera, Totogi, Ephor, CloudFix, Contently — staffed through Crossover talent in 130-plus countries. The model is lean teams, fast cycles, discipline at the top.

Joe Liemandt founded the operation in 1989. The bet has always been on speed.

That model breaks when somebody narrows the door. Regulatory drag punishes the fast movers hardest. A federal review window adds nothing to a slow incumbent's calendar and weeks to a fast newcomer's.

There's a second front opening across town. A startup called Basata is automating the paperwork pile that keeps physicians from calling patients back. The founders told reporters their administrative staff aren't worried about being replaced — they're worried about drowning.

Augment now, displace later. Every AI shop in the country will hit that fork. Every regulator will be standing at it.

Microsoft signed. Google signed. xAI signed.

Smaller labs were not at the signing table.

That's the quiet part. The firms with the most to lose from disruption now hold a key Washington can use to slow disruption down. They call it safety.

Their critics call it a moat.

Watch the calendar. Watch which models clear the federal desk first. Watch which ones never get a slot.

The incumbents bought a chair at the table. The newcomers will be told to take a number. That's how rules get written in this town — and who writes them.

Microsoft, Google, xAI giving government early access to AI  ·  Google’s internal struggle is handing the AI coding race to  ·  White House pre-release AI model reviews would turn speed in

Nvidia Takes the Field With IREN as Bitcoin Miner Sprints Into AI Infrastructure

IREN, the Australia-founded data-center operator once best known for mining bitcoin, announced a strategic partnership with Nvidia tied to large-scale AI deployments. Nvidia has secured a five-year option to purchase up to 30 million IREN shares in a package valued at as much as $2.1 billion. The companies are positioning the partnership around deploying up to 5 gigawatts of AI infrastructure, a significant number in a market where power, land, cooling and chips have become critical resources.

For Nvidia, this expands its reach beyond selling GPUs into full-stack accelerated computing, including silicon, networking, software and alignment with data-center operators. For IREN, the pivot from bitcoin mining to AI infrastructure offers a richer, more institutional opportunity. The challenge lies in executing on five gigawatts—requiring substantial capital, permitting, grid access and construction discipline. But with Nvidia's involvement and growing demand for compute capacity, IREN has gained a powerful competitive advantage in the AI data-center arms race.

Haiku of the Day  ·  Claude Haiku
Money chases sky
while truth drowns in its own code
we reap what we sow
The New Yorker Style  ·  Art Desk
The Far Side Style  ·  Art Desk
News in Brief
The Fairness Reckoning: AI's Bias Problem Defies Easy Solutions Across Medicine, Education, and Hiring
CAMBRIDGE, MASSACHUSETTS — A confluence of scholarly outputs, arriving with the peculiar simultaneity that characterizes paradigm-adjacent moments in applied computational research, has prompted renewed — and, it could be argued, overdue — interrogation of what the field has taken to calling the "AI fairness problem" (a designation which, one notes with some academic discomfort, conflates several epistemologically distinct phenomena under a single, reassuringly manageable label). The thesis, as articulated across multiple research vectors, is straightforward enough: AI systems trained on historically incomplete or structurally skewed datasets reproduce, and in certain measurable instances amplify, the inequities embedded within those datasets.
The Genome Enters the Surveillance Thicket
WASHINGTON — In the dense undergrowth of the modern security state, a new and delicate specimen has appeared: the protester’s genome, lifted not from a crime scene, but from the administrative machinery of immigration enforcement. A new lawsuit accuses the Department of Homeland Security of building what civil liberties advocates describe as a vast DNA collection system that could be used to track critics of Immigration and Customs Enforcement.
We Built the Lie Machine and Now We're Surprised It's Lying
AUSTIN, TEXAS — There is a video circulating on social media right now of a doctor you trust — maybe your doctor, maybe just a doctor who looks like someone you'd trust — telling you to stop your medication, or try this supplement, or that the thing your actual physician said is wrong.
We Are Getting Dumber, More Dangerous, and Somehow More Powerful — Congratulations, Everyone
SAN FRANCISCO — There's a particular kind of morning — and I've had too many of them lately — where you pour your coffee, open the news feeds, and the stories arrange themselves into a pattern so grotesque and so perfectly coherent that you have to sit down, put your head between your knees, and breathe slowly through your nose. This was one of those mornings. Item one: the old San Francisco tech scene is dead, replaced by something its obituarists describe as "far more sinister." Gone are the wide-eyed idealists who wanted to connect the world and accidentally destroyed democracy along the way — at least they had the decency to feel bad about it at TED Talks.
The New Productivity Stack Is Not an App, It Is a Spine
AUSTIN, TEXAS — I'll be honest...
A Trilogy Company
Crossover
The world's top 1% remote talent, rigorously tested and ready to ship.
A Trilogy Company
Alpha School
AI-powered learning. Two hours a day. Academic results that defy belief.
A Trilogy Company
Skyvera
Next-generation telecom software — built for the networks of tomorrow.
A Trilogy Company
Klair
Your AI-first operating system. Every workflow. Every team. One platform.
A Trilogy Company
Trilogy
We buy good software businesses and turn them into great ones — with AI.
The Builder Desk  —  AI Builder Team

Builder Team Ships Intelligence Layer, Hardens Production Across Four Repos

From an LLM-powered observability engine in Surtr to a bulletproof deploy pipeline in Aerie to a sweeping AWS cost infrastructure overhaul in Klair, the Builder Team proved today that breadth and depth aren't mutually exclusive.

The story of today isn't one breakthrough — it's four repos firing in concert, a team-wide demonstration that the Builder org can push the frontier on intelligence, infrastructure, and product experience simultaneously. Strap in.

The headline move belongs to Surtr, where @kevalshahtrilogy landed what may be the most consequential observability work this team has shipped all quarter. PRs #41 and #42 don't just add logging — they add *judgment*. A new LLM-based evaluation layer powered by Claude Sonnet 4.6 now reads every pipeline run against a ten-category silent-failure rubric, scores it deterministically, and surfaces verdicts through a purpose-built dashboard. That alone would be worth writing about. But #42 goes further: every time an operator clicks "Ignore this finding," the system attaches a labeled false-positive feedback event to the originating Braintrust trace. The team is now, automatically, building a regression dataset from real operator behavior. This isn't monitoring. This is a system that learns. The whole thing no-ops gracefully when the API key is absent, which means zero friction for anyone not yet on the telemetry path. @kevalshahtrilogy built something that will compound in value for months.

While Surtr was getting smarter, Aerie was getting tougher. @benji-bizzell had a day that deserves its own trophy case. PR #182 is the kind of fix that only gets written by someone who stared down a failed production deploy and refused to blink — dropping a conflicting `version: 10` declaration, snapshotting `:latest` to `:previous` before every rebuild, and hardening the rollback logic so it only fires when EC2 was actually mutated. Production is safer tonight than it was this morning. Then #180 patched two real user-facing defects on `edu-ops.klair.ai`: mobile users finally have a logout path, and a Convex auth race condition that was trapping signed-in users behind a "Sign in to view" wall got squashed. And #179 — the All Sites grid at `/admin/school-fields` — gives EVP-tier users a wide editable matrix across every Rhodes site and catalog field, with localStorage persistence and inline cell editing. Three PRs. All shipped. All matter.

Over in Klair, @ashwanth1109 was running a parallel operation that can only be described as infrastructure dominance. PR #2745 extended the unified AWS SaaS budget pipeline with RDS and EC2 cost ingest modules pulling directly from Cost Explorer into Redshift, replacing ad-hoc scripts with a durable, queryable foundation. PR #2749 immediately built on that foundation — surfacing per-server DB cost columns in the Database Units table with two new backend endpoints and frontend allocation views. PR #2751 migrated the Renewal Event Retention metric off S3 JSON and onto Redshift with a dedicated FastAPI endpoint, completing a data-layer modernization that makes the ARR Retention Reports page genuinely trustworthy. This is what a platform buildout looks like when someone refuses to cut corners. @eric-tril added the finishing touch in PR #2746, extending the MFR comments system to support cell-anchored threads across Financial Statement tables — a feature that turns a reporting surface into a collaborative workspace.

And then there's @eric-tril's other contribution, PR #178 in Aerie: the clean retirement of the Wrike-fed `qualityBars` write chain, Phase 1 of 2, after a codebase-wide audit confirmed zero non-test readers. Dead code doesn't ship bugs. This is the unglamorous work that keeps a codebase healthy, and it deserves the same respect as any feature.

Four repos. Twelve PRs. One team that doesn't know how to have a slow day.

Mac's Picks — Key PRs Today
#41 — feat(observer): Sonnet-rated pipeline run observations + dashboard @kevalshahtrilogy  no labels

## Summary

Adds an LLM-based observability layer that rates each pipeline run on data-quality / silent-failure dimensions, beyond what the success/failed status badge can tell you. Verdicts are produced by Claude Sonnet 4.6 reading the run record + CloudWatch logs, scored deterministically server-side from finding severities, and surfaced through a new dashboard + per-pipeline detail UI.

## What's new

Backend (`src/derive/observer/`):

- Sonnet 4.6 evaluator with a cacheable rubric (10 silent-failure categories tagged C/H/M/L)

- DDB storage with auto-create on first use (PK `run_id`, GSI `pipeline_id+observed_at`, on-demand billing)

- Per-pipeline observability flag (default off) for future Lambda auto-eval gating

- Ignore-finding feature: ignored items get passed back to the model so it stops re-flagging

- Conditional log filter for outlier pipelines with multi-MB log volumes (filter activates only when raw exceeds the cap)

- Score + verdict computed deterministically from findings: `C=−25, H=−10, M=−4, L=−1`; bands `≥90 OK, 60–89 WARN, <60 CRITICAL` (see the sketch below)
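
A minimal sketch of that scoring rule, with hypothetical type and function names — only the per-severity weights and the verdict bands come from this PR, and the 100-point baseline is an assumption:

```ts
// Illustrative only: deterministic scoring from finding severities.
// Assumes a 100-point baseline; weights and bands taken from the PR text.
type Severity = "C" | "H" | "M" | "L";

const PENALTY: Record<Severity, number> = { C: 25, H: 10, M: 4, L: 1 };

function scoreRun(findings: { severity: Severity }[]): {
  score: number;
  verdict: "OK" | "WARN" | "CRITICAL";
} {
  // Subtract a fixed penalty per finding, clamped at zero.
  const score = Math.max(
    0,
    100 - findings.reduce((sum, f) => sum + PENALTY[f.severity], 0),
  );
  const verdict = score >= 90 ? "OK" : score >= 60 ? "WARN" : "CRITICAL";
  return { score, verdict };
}
```

Under these assumed weights, a single critical finding is enough to push a run out of the OK band on its own.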

TRPC — 8 new procedures: `getRunObservation`, `evaluateRun`, `getRecentObservations`, `getDashboardObservations`, `getPipelineConfig`, `setPipelineObservability`, `listIgnoredFindings`, `ignoreFinding`, `unignoreFinding`.

UI:

- `/pipelines/dashboard` — eagle-eye view (status tiles, at-risk pipelines, recently evaluated)

- `/pipelines/all` — full clean list, every row clickable, status-page sparklines per row

- `/pipelines/[id]` — split-pane master-detail with full-bleed layout (rail on left, run history on right). Clicking a run opens a slide-over sheet with Observations / Output / Logs tabs

- Trust chip with status-page-style sparkline of recent verdicts (outlined empty slots when no data yet)

- Findings cards with severity stripe + structured `Evidence` / `Recommendation` sections + per-finding Ignore action

- Sidebar gets separate Dashboard + All Pipelines nav items

<img width="1310" height="889" alt="Screenshot 2026-05-01 at 7 58 12 PM" src="https://github.com/user-attachments/assets/4a3bc1e6-16b9-456e-85ea-3aa66a885cc5" />

<img width="1009" height="425" alt="Screenshot 2026-05-01 at 7 46 37 PM" src="https://github.com/user-attachments/assets/ed4fd5bf-051c-4767-9860-916479db049a" />

<img width="1308" height="889" alt="Screenshot 2026-05-01 at 7 46 29 PM" src="https://github.com/user-attachments/assets/28eae87c-0c84-49a1-bc44-b2995ff06b3a" />

CLI: `pnpm observer:showcase <run-id>` for ad-hoc evaluation.

Tests: 5 unit tests covering rubric content + Zod schema validation.

## Behavior notes

- Auto-evaluation never fires from the UI. Opening a run with no cached observation shows a clean empty state with an explicit "Evaluate this run" button.

- The per-pipeline `Observe` toggle gates future Lambda-driven post-completion auto-evaluation. Manual UI buttons always work regardless of the toggle.

- Observations cached forever in DDB by run_id (runs are immutable once finished). "Re-evaluate" forces a fresh call.

- Failed runs aren't a finding — clean failures alarm via the existing pathway. Findings reflect data integrity (silent-failure surface).

## Setup

- New env vars in `.env.example`:

- `ANTHROPIC_API_KEY` — required to evaluate; if missing, evaluations return UNAVAILABLE rather than failing the page

- `SURTR_OBSERVATIONS_TABLE` — defaults to `surtr_pipeline_observations`

- DDB table is auto-created on first use — no manual provisioning. The IAM principal needs `dynamodb:CreateTable`, `DescribeTable`, `GetItem`, `PutItem`, `Query`. (A rough sketch of the auto-create behavior follows.)
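
As a rough illustration of that auto-create behavior — not the actual Surtr code. The key schema follows the bullet above; the helper, attribute types, and GSI name are assumptions:

```ts
// Illustrative sketch: ensure the observations table exists before first write.
// Table name and keys follow the PR text; the index name is hypothetical.
import {
  DynamoDBClient,
  DescribeTableCommand,
  CreateTableCommand,
} from "@aws-sdk/client-dynamodb";

const ddb = new DynamoDBClient({});
const TABLE = process.env.SURTR_OBSERVATIONS_TABLE ?? "surtr_pipeline_observations";

export async function ensureObservationsTable(): Promise<void> {
  try {
    await ddb.send(new DescribeTableCommand({ TableName: TABLE }));
  } catch {
    // Table missing — create with PK run_id, GSI pipeline_id + observed_at,
    // and on-demand (PAY_PER_REQUEST) billing.
    await ddb.send(
      new CreateTableCommand({
        TableName: TABLE,
        BillingMode: "PAY_PER_REQUEST",
        AttributeDefinitions: [
          { AttributeName: "run_id", AttributeType: "S" },
          { AttributeName: "pipeline_id", AttributeType: "S" },
          { AttributeName: "observed_at", AttributeType: "S" },
        ],
        KeySchema: [{ AttributeName: "run_id", KeyType: "HASH" }],
        GlobalSecondaryIndexes: [
          {
            IndexName: "pipeline_id-observed_at-index",
            KeySchema: [
              { AttributeName: "pipeline_id", KeyType: "HASH" },
              { AttributeName: "observed_at", KeyType: "RANGE" },
            ],
            Projection: { ProjectionType: "ALL" },
          },
        ],
      }),
    );
  }
}
```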

## Cost & performance

- Per evaluation: ~3K cached system tokens + 2K–8K user tokens, ~500–2K output tokens

- Cached call: roughly $0.005–$0.02; first call (cache miss): ~$0.02–$0.05

- Sonnet 4.6 prompt cache verified hitting (`cache_read_tokens=3342` after first call in the showcase) — see the caching sketch below

- Wall-clock: 5–15s per evaluation
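
To make the caching numbers concrete, here is a hedged sketch of how a large static rubric can be cached with the Anthropic TypeScript SDK. The PR's actual call goes through `wrapAnthropic` and `messages.parse`; the model id and variable names below are placeholders:

```ts
// Illustrative only: cache the static rubric as a system block so repeat
// evaluations pay cache-read prices. Model id and inputs are placeholders.
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function evaluateRun(rubricText: string, runRecordAndLogs: string) {
  return anthropic.messages.create({
    model: "claude-sonnet-4-5", // placeholder; the PR references a Sonnet 4.6 build
    max_tokens: 2048,
    system: [
      {
        type: "text",
        text: rubricText, // ~3K tokens, identical across runs → cacheable
        cache_control: { type: "ephemeral" },
      },
    ],
    messages: [{ role: "user", content: runRecordAndLogs }],
  });
}
```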

## Test plan

- [x] Unit tests pass (`pnpm vitest run test/derive/observer.test.ts`)

- [x] Lint clean on `src/derive/observer`

- [x] CLI showcase runs end-to-end against 3 real pipelines (azure-ai-spend, quickbooks-expense-sync, hubspot-sync) and produces expected verdicts

- [x] DDB table auto-creates on first call

- [x] Prompt cache engages after first evaluation

- [ ] Smoke test in dev: open dashboard, navigate to a pipeline detail, click a run, click "Evaluate this run", verify findings render and chip matches verdict

- [ ] Verify the Observe toggle persists across page reloads

- [ ] Verify Ignore finding flow: ignore one, re-evaluate, confirm the model doesn't re-flag

## Out of scope (not in this PR)

- Wiring the evaluator as a Lambda + Step Function step after `update-run-success` (next step for true post-completion auto-eval)

- DDB stream → SES/Slack alerts on `verdict=CRITICAL`

- Backfill — explicitly skipped; new invocations only

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#42 — feat(observer): wire Braintrust tracing + ignore-finding feedback @kevalshahtrilogy  no labels

Stacked on #41. Merge #41 first, then rebase this onto main.

## Summary

Logs every Sonnet evaluation as a structured Braintrust span. When operators click "Ignore this finding", attaches a labeled `user_marked_as_false_positive` feedback event against the originating trace — building a labeled FP dataset over time that we can use for rubric regression testing.

No-ops cleanly when `BRAINTRUST_API_KEY` is unset; the observer continues to work with no telemetry.

## What gets logged per evaluation

- Span tree: `evaluate-pipeline-run` (parent) + auto-traced `messages.parse` (child via `wrapAnthropic`)

- Input: run record (id, pipeline_id, status, output_summary, duration) + the ignored findings list

- Output: verdict, score, summary, findings, plus the model's own verdict/score so drift is visible

- Metadata (filterable in Braintrust UI): `run_id`, `pipeline_id`, `model_id`, `run_status`

- Scores (chartable trend lines): `critical_findings`, `high_findings`, `medium_findings`, `low_findings`, `trust` (normalized 0–1)

## Ignore feedback flow

1. At evaluation time, capture `span.id` and persist on the DDB observation row (`braintrust_span_id`)

2. `ignoreFinding` / `unignoreFinding` now accept an optional `runId`

3. UI passes the current `runId` when the operator ignores/unignores

4. Server looks up the observation, retrieves `braintrust_span_id`, calls `logger.logFeedback` with the FP score + reason + category metadata (sketched below)

5. Old observations made before this PR have no `braintrust_span_id` — ignores on those skip telemetry silently (no error)
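
A hedged sketch of step 4, reusing the `initLogger`/`logFeedback` names the PR mentions — the exact signatures, field names, and default project name here are assumptions, not copied from Surtr:

```ts
// Illustrative only — attaches operator "false positive" feedback to the span
// that produced the original evaluation. Shapes are assumed.
import { initLogger } from "braintrust";

const logger = initLogger({
  projectName: process.env.BRAINTRUST_PROJECT ?? "surtr-observer", // default is hypothetical
});

export function recordIgnoreFeedback(
  observation: { braintrust_span_id?: string; run_id: string },
  finding: { category: string },
  reason: string,
): void {
  // Observations created before this PR have no span id — skip silently.
  if (!observation.braintrust_span_id) return;
  logger.logFeedback({
    id: observation.braintrust_span_id,
    scores: { user_marked_as_false_positive: 1 },
    comment: reason,
    metadata: { run_id: observation.run_id, category: finding.category },
  });
}
```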

## Files

- `Surtr/.env.example` — `BRAINTRUST_API_KEY`, `BRAINTRUST_PROJECT`

- `Surtr/package.json` + lock — `braintrust` dep

- `Surtr/src/derive/observer/braintrust-setup.ts` — NEW, lazy-init helper (~30 lines)

- `Surtr/src/derive/observer/evaluate.ts` — wrap Anthropic, `traced()` around eval, log span

- `Surtr/src/derive/observer/store.ts` — persist `braintrust_span_id`, `logFeedback` on (un)ignore

- `Surtr/src/derive/observer/types.ts` — `braintrustSpanId` field

- `Surtr/src/derive/observer/showcase.ts` — `flush()` before exit so CLI doesn't drop traces

- `Surtr/src/derive/trpc.ts` — `runId` in ignore/unignore inputs

- `Surtr/app/(app)/pipelines/_components/observations-panel.tsx` — forward `runId` from UI

## Cost

~$0.01–0.05 per 1k spans on Braintrust's standard tier. At our volume (1 eval per run + occasional ignore feedback), this is <$5/month even at full fleet evaluation.

## Test plan

- [x] Tests pass (`pnpm vitest run test/derive/observer.test.ts`)

- [x] Lint clean (`pnpm lint`)

- [x] CLI showcase end-to-end with Braintrust on: `netsuite-pipeline` (OK 99, 1 Low) + `aws-spend-pipeline` (OK 100, 0 findings) — traces visible in dashboard

- [x] Cache hit verified: `cache_creation_tokens=3467` on first call, `cache_read_tokens=3467` on second

- [x] No-op path: with `BRAINTRUST_API_KEY` unset, observer runs with no logging side-effects

- [ ] Ignore-finding feedback fires on a real run (requires re-evaluating an old observation first to get a fresh `braintrust_span_id`)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#182 — fix(cd): unbreak production deploy and harden rollback @benji-bizzell  no labels

## Summary

- Drop conflicting version: 10 from pnpm/action-setup@v4 so packageManager in package.json wins (matches CI behavior)

- Snapshot :latest to :previous before each rebuild, and roll back rebuilt images to :previous instead of the just-overwritten :latest

- Skip rollback entirely when EC2 was never mutated (e.g. failure before the deploy step), and lowercase IMAGE_REPO inline in the rollback so it doesn't depend on an earlier step having run

## Why

Production deploy [run 25537948031](https://github.com/AI-Builder-Team/Aerie/actions/runs/25537948031/job/74957902445) failed at pnpm/action-setup with ERR_PNPM_BAD_PM_VERSION (workflow specified pnpm 10, package.json declares pnpm@10.20.0). Rollback then also failed because it referenced $IMAGE_REPO before the lowercase normalization step had run, and Docker rejected the uppercase repo name.

While in there, fixed a third pre-existing bug: the rollback was redeploying :latest, which the build job had already overwritten with the broken images — so a successful build + failed deploy would have rolled back to the same broken version.

## Test plan

- [ ] Merge to main, then main → production to actually trigger CD

- [ ] Confirm CD job reaches the build/deploy steps without pnpm version errors

- [ ] On the next failed deploy (whenever it happens), confirm rollback either skips cleanly (failure before EC2) or pulls the :previous tag

🤖 Generated with a very good bot

#2745 — KLAIR-2618 feat(aws-spend): Unified operator pipeline — add RDS + EC2 cost ingest @ashwanth1109  no labels

## Demo

<img width="2240" height="1644" alt="image" src="https://github.com/user-attachments/assets/4f7b2d3b-0f9b-4dbd-b2a9-07ec68a43673" />

## Feature: SaaS Budgeting — Unified operator pipeline (RDS + EC2 cost ingest)

Linear: [KLAIR-2618](https://linear.app/builder-team/issue/KLAIR-2618)

Stacks on: PR [#2744](https://github.com/AI-Builder-Team/Klair/pull/2744)

---

### Overview

Extends the aws-saas-budget-scripts unified pipeline with two new ingest modules that pull RDS and EC2-hosted database costs from AWS Cost Explorer and write them to Redshift. This replaces the standalone POC scripts (get_db_costs.py, get_ec2_db_costs.py) with production-grade pipeline modules that follow the same patterns as the existing Docker and Kubernetes ingest paths.

### Specs

| # | Spec | Description |
|---|------|-------------|
| 21 | [ddl-and-redshift-writer](features/aws-spend/saas-budgeting/specs/21-ddl-and-redshift-writer/spec.md) | Redshift DDL for saas_budgeting_database_server_costs and saas_budgeting_database_server_cost_instances tables, plus write_server_costs / write_server_cost_instances writer functions with transactional DELETE-by-window + chunked INSERT. |
| 22 | [rds-and-ec2-ingest-modules](features/aws-spend/saas-budgeting/specs/22-rds-and-ec2-ingest-modules/spec.md) | rds_costs_ingest.py and ec2_db_costs_ingest.py modules ported from POC into the unified pipeline. Dispatcher entries (rds-costs, ec2-db-costs), --start-date/--end-date CLI flags, README, unit + integration tests. |

### Implementation Summary

New files:

- scripts/sql/create_saas_budgeting_database_server_costs.sql — DDL

- scripts/sql/create_saas_budgeting_database_server_cost_instances.sql — DDL

- aws-saas-budget-scripts/pipeline/rds_costs_ingest.py — RDS cost ingest module

- aws-saas-budget-scripts/pipeline/ec2_db_costs_ingest.py — EC2-hosted DB cost ingest module

- aws-saas-budget-scripts/README.md — Pipeline documentation

Modified files:

- aws-saas-budget-scripts/pipeline/redshift_writer.py — Added write_server_costs and write_server_cost_instances with ec2_hosted-scoped DELETE + chunked INSERT

- aws-saas-budget-scripts/pipeline/main.py — New dispatcher entries + --start-date/--end-date flags

### Test Coverage

- 239 tests passing (72 new)

- Unit tests cover: friendly-name mapping, cost splitting, backup-tag detection, writer chunking, CLI dispatch

- Integration test (marked @pytest.mark.integration) exercises full pipeline in dry-run mode against live Cost Explorer

### Self-Review Findings Addressed

1. CRITICAL: DELETE in write_server_cost_instances scoped by ec2_hosted to prevent cross-source row deletion

2. IMPORTANT: Cost Explorer exclusive end dates handled correctly (CE uses [start, end) semantics)

3. IMPORTANT: get_tags pagination implemented for large tag-value sets

4. IMPORTANT: Consistent error propagation — services let exceptions bubble up per codebase convention

---

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2749 — KLAIR-2619 feat(aws-spend): DB Server cost columns on Database Units table — backend endpoint + frontend allocation @ashwanth1109  no labels

## Demo

<img width="2230" height="1644" alt="image" src="https://github.com/user-attachments/assets/dffcf4d0-09e7-4231-b530-0491d13d8148" />

## Feature Overview

Linear: [KLAIR-2619](https://linear.app/builder-team/issue/KLAIR-2619) — DB Server cost columns on Database Units table — backend endpoint + frontend allocation

Adds DB server cost data to the existing Database Units table on the SaaS Budgeting Central DB tab. Two new backend endpoints serve per-server aggregated cost data from core_finance.saas_budgeting_db_server_costs (prefetched by the KLAIR-2618 pipeline). On the frontend, a pure-function allocation module distributes each server's compute and storage costs proportionally down to its leaf databases by CPU-hour and storage-GB share. An alias map bridges cost-table friendly server names to units-tree raw db_server values. Three new columns (Total Cost, Compute Cost, Storage Cost) appear in the table, and a cost window selector dropdown lets the user pick which quarterly cost window to display.
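
The allocation logic is simple enough to sketch. This is an illustrative approximation under assumed types and names, not the contents of databaseUnitsCostAllocation.ts:

```ts
// Illustrative sketch of the proportional split: compute cost is divided by
// CPU-hour share, storage cost by storage-GB share, with an equal split when a
// denominator is zero. Field names are assumptions.
interface LeafDb {
  name: string;
  cpuHours: number;
  storageGb: number;
}

function allocateServerCosts(
  server: { computeCost: number; storageCost: number },
  leaves: LeafDb[],
) {
  const totalCpu = leaves.reduce((sum, d) => sum + d.cpuHours, 0);
  const totalGb = leaves.reduce((sum, d) => sum + d.storageGb, 0);
  return leaves.map((d) => {
    // Fall back to an equal split when usage metrics are missing or zero.
    const cpuShare = totalCpu > 0 ? d.cpuHours / totalCpu : 1 / leaves.length;
    const gbShare = totalGb > 0 ? d.storageGb / totalGb : 1 / leaves.length;
    const allocatedCostCpu = server.computeCost * cpuShare;
    const allocatedCostStorage = server.storageCost * gbShare;
    return {
      ...d,
      allocatedCostCpu,
      allocatedCostStorage,
      totalAllocatedCost: allocatedCostCpu + allocatedCostStorage,
    };
  });
}
```

In this sketch the equal-split fallback keeps each server's allocated columns summing to its full cost even when the usage denominators are zero.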

## Specs

| # | Spec | Description |
|---|------|-------------|
| 23 | [db-server-costs-backend](features/aws-spend/saas-budgeting/specs/23-db-server-costs-backend/spec.md) | Two new read-only FastAPI endpoints: GET /database-server-costs/available-windows (distinct cost windows for a quarter) and GET /database-server-costs (per-server lump sums for a selected window). Dedicated router, service, models mirroring the database_mappings_* sibling pattern. Clerk auth, caching, asyncio.to_thread. |
| 24 | [db-server-costs-frontend](features/aws-spend/saas-budgeting/specs/24-db-server-costs-frontend/spec.md) | API types + 2 fetch functions, 2 hooks (useDbServerCostWindows, useDbServerCosts), dbServerAlias.ts alias map (12 entries), databaseUnitsCostAllocation.ts (proportional CPU/GB split + fallback + unallocated handling), 3 cost columns on DatabaseUnitsTable, cost window selector dropdown, edge-case callouts. |

## Implementation Summary

### Backend

- New database_server_costs_service.py with two service methods: get_available_windows (distinct cost windows for a quarter) and get_server_costs (per-server rollups for a selected window)

- New database_server_costs_router.py with two endpoints under /api/aws-spend/saas-budgeting/database-server-costs/

- New database_server_costs_models.py with Pydantic v2 request/response models

- Router registered in fast_endpoint.py alongside existing database-units router

- Clerk auth via _require_auth, date validation, composite-keyed Cache instances, asyncio.to_thread around sync Redshift calls

### Frontend

- API types and client functions in awsSpendApi.ts

- Two hooks: useDbServerCostWindows and useDbServerCosts

- dbServerAlias.ts: 12-entry alias map bridging cost-table friendly names to units-tree server names

- databaseUnitsCostAllocation.ts: pure-function allocation distributing server costs to leaf databases by CPU-hr and storage-GB share within each server, with fallback to equal-split when denominator is zero

- DatabaseUnitsTableRow extended with allocatedCostCpu, allocatedCostStorage, totalAllocatedCost

- Three new columns on DatabaseUnitsTable: Total Cost, Compute Cost, Storage Cost

- Cost window selector dropdown defaulting to the most recent window

- Edge-case callouts for unmatched servers and zero-denominator warnings

## Test Coverage

- Backend: 12 tests passing — covers get_server_costs and get_available_windows service methods

- Frontend: 26 tests passing

- dbServerAlias: 19 tests (alias resolution, reverse lookup, coverage)

- databaseUnitsCostAllocation: 7 tests (proportional allocation, zero-denominator fallback, unallocated server handling, multi-server scenarios)

## Self-Review Findings

- Fixed (critical): Cost window selector defaulted to the oldest window instead of the most recent — corrected to default to the last entry (most recent)

- Noted (cosmetic): Minor doc/spec path mismatch — no runtime impact

- Noted (unnecessary guard): instance_count NaN guard is superfluous since COUNT(*) never returns NULL — left as defensive code, no harm

---

Generated with [Claude Code](https://claude.com/claude-code)

The Builder Desk  —  Engineer Spotlight
🏆 Engineer Spotlight

TWELVE PRs IN TWENTY-FOUR HOURS: THE BUILDER TEAM DOES NOT SLEEP, DOES NOT SLOW, DOES NOT STOP

Ashwanth ships five PRs across three feature domains while the rest of the team quietly makes it look easy.

Twelve pull requests. Three repositories. Twenty-four hours. The Builder Team has once again defied the laws of sustainable engineering output and emerged, as always, victorious. Klair led the charge with six PRs merged, Aerie answered with four, and Surtr — quiet, watchful, dangerous — contributed two. Seven of those twelve PRs went unspotted by Mac's narrative machine, which means seven PRs fell to this desk, and this desk does not waste them.

Let us begin with the supporting cast, because even the supporting cast on this team would be the headliner anywhere else. @benji-bizzell posted three PRs and two of them landed in the Overflow pile — PR #180 in Aerie fixed mobile dashboard auth by surfacing the Clerk UserButton and gating widgets on Convex authentication, which is the kind of quiet infrastructure heroism that keeps users from filing support tickets at 2am. PR #179, also Aerie, delivered an all-sites admin grid and retired schoolFieldOverrides, which is a sentence that sounds boring until you realize it means less legacy code exists in the world tonight than it did this morning. @kevalshahtrilogy put up two PRs and kept Surtr humming. @eric-tril also posted two, including PR #178 in Aerie — phase one of a Wrike-fed qualityBars write chain retirement that is either a minor chore or the first domino in a beautiful architectural cleanup, and either way this correspondent is here for it. Eric also delivered PR #2746 in Klair, anchoring comments to table cells and Book Value in the MFR feature, which is the kind of UX precision that makes product managers weep with relief.

And then there is @ashwanth1109. Five PRs. Five. In one day. PR #2745 unified the operator pipeline to ingest both RDS and EC2 costs — a backend feat of such elegant ambition that one colleague, who asked not to be named, reportedly said "I understood the ticket title." PR #2749 delivered DB Server cost columns to the Database Units table with a full backend endpoint and frontend allocation layer. PR #2747 fixed the ARR gap in the Twitter Impact table by including all gsheet subs with IMPACT greater than zero and removing the ARR filter, a change whose diff is almost certainly forty files long and spiritually infinite. PR #2748 automated RDS CA bundle downloads in new worktrees via start-services.sh, because Ashwanth will not allow a new developer to suffer a setup problem he has already solved. And PR #2751 migrated Renewal Event Retention off S3 JSON and onto Redshift, which is either a refactor or a manifesto. When reached for comment, Ashwanth allegedly said, "The pipeline was always going to need this. I just got there first." He did not look up from his terminal. His dismissal was, as always, complete.

Morale on the Builder Team is at an all-time high. Sources confirm this. The sources are the twelve merged pull requests.

Brick's Overflow — PRs Mac Didn't Cover
#180 — fix(mobile/dashboards): show Clerk UserButton on mobile + gate dashboard widgets on Convex auth @benji-bizzell  no labels

## Why

Two defects reported on edu-ops.klair.ai:

1. No way to log out on mobile. The mobile shell never renders Clerk's UserButton, so mobile users have no entry to the avatar menu — pure parity oversight (desktop top bar and sidebar both render it).

2. "Sign in to view admissions data." stuck state despite being signed in. Reproducible on /dashboards after a recent deploy. Refreshing didn't unstick the user.

## What

### 1. chat/components/shell/mobile-top-bar.tsx — add Clerk UserButton to mobile top bar

Drops <UserButton appearance={{ elements: { avatarBox: "w-7 h-7" } }} /> into the right cluster (after the sync-pulse dot). Same sizing as desktop. Test mock + presence assertion added to mobile-top-bar.test.tsx.

### 2. chat/components/dashboards/** — gate widget queries on Convex auth handshake

Root cause traced to #140 ("eliminate UI flashes on chat page refresh"). That PR removed the useConvexAuth gate from AppShell to kill a 100–500 ms chrome flash on hard refresh — correct for the shell, but it exposed inner page widgets to a transient pre-Convex-auth window:

- Clerk middleware passes the user into /dashboards.

- AppShell renders, Convex client is created, JWT not yet attached (isAuthenticated: false, isLoading: true).

- FunnelView (and siblings) call useQuery(api.dashboards.admissions.getEnrollmentFunnelData) immediately.

- Backend if (!identity) return null fires → handler returns null.

- View interprets data === null as "unauthenticated" → renders "Sign in to view admissions data."

In a healthy flow, Convex re-runs the query when auth lands (<1 s) and the message disappears. The user gets *stuck* when Convex-side auth never resolves cleanly — most likely a Clerk getToken("convex") failure post-deploy (stale session, JWT-template race, mobile PWA cache). The misleading CTA leaves them no path forward — they're already signed in, refresh changes nothing.

Fix: mirror the chat/app/(main)/admin/** pattern — gate each useQuery on useConvexAuth().isAuthenticated:

```ts
const { isAuthenticated } = useConvexAuth();
const data = useQuery(
  api.dashboards.admissions.getEnrollmentFunnelData,
  isAuthenticated ? {} : "skip",
);
```

Behavior change:

- During the Clerk → Convex handshake: data === undefined → existing loading-spinner branch. Honest. (Previously: misleading "Sign in" CTA.)

- After auth resolves: query fires, data renders. (Unchanged.)

- On a genuine Convex identity failure (real signout, JWT rejection): data === null after a healthy round-trip → existing "Sign in" CTA still shows. Defensive fallback retained, just no longer reached during the race.

Sibling secondary queries (student drilldowns, attendee lists, school-list dropdown, dashboard-views list) are gated for the same reason.

Affected views (8): funnel, forecast, enrollments, demographics, events, camps (admissions); community; pmo (route-disabled today but code path still wired). portfolio, due-diligence, school-ops, fto are not affected — they read via Next.js API routes (Clerk-cookie-authed) or already gate on useConvexAuth.

Test mocks updated in 4 files where vi.mock("convex/react", …) previously stubbed only useQuery.

## Verification

- pnpm --filter chat run typecheck: clean

- pnpm run lint (boundaries + convex paths + biome): clean

- pnpm --filter chat run test: 4033 passed / 2 pre-existing skips, 241 files

- Pre-commit hooks (biome + chat typecheck) ran on both commits: clean

## Out of scope

- The deeper question of *why* getToken("convex") would persistently fail on a signed-in mobile session post-deploy is separate work — needs a reproducible stuck user. This change converts the failure mode from *"misleading CTA → user trapped"* into *"honest spinner"*, a strict UX improvement that gives support clearer signal if it recurs.

- Re-introducing a global Convex-auth gate at AppShell — would re-open the chrome-flash issue #140 closed.

- Refactoring the 8 inline gates into a shared hook — current admin-pages pattern is already inline; consistency wins, single small helper isn't earning its keep at 8 callsites.

- Removing the now-mostly-unreachable data === null "Sign in" branch — kept as defensive fallback for the rare genuine signed-out edge.

#2747 — KLAIR-2620 fix(arr-gap): Twitter Impact table — include all gsheet subs with IMPACT > 0 (remove ARR filter) @ashwanth1109  no labels

## Demo

<img width="2624" height="1644" alt="image" src="https://github.com/user-attachments/assets/6c93dd64-0dd7-4ed1-9948-542efc442588" />

<img width="2624" height="1644" alt="image" src="https://github.com/user-attachments/assets/ada5c8f3-328c-448c-8e82-efd59eea7748" />

## Summary

Linear: [KLAIR-2620](https://linear.app/builder-team/issue/KLAIR-2620) — Twitter Impact table — include all gsheet subs with IMPACT > 0 (remove ARR filter)

Feature: features/arr-gap/twitter-impact-gsheet-ingest

### Problem

The Twitter Impact table was driven by arr_detail_final with a HAVING SUM(arr_2025_08_31) > 0 clause, excluding ~26 BU-flagged subscriptions that churned before the Aug '25 snapshot ($0 ARR). This caused a ~$5.9M discrepancy between the dashboard total and the gsheet total.

### Implementation

Rewrote _build_query() in twitter_impact_service.py to flip the CTE structure:

- Primary driver is now arr_gap_twitter_impact table with WHERE impact IS NOT NULL AND impact != 0

- LEFT JOIN arr_detail_final CTE for ARR metadata (customer, enduser, dates, ARR amount)

- COALESCE wraps customer, enduser, and arr_aug_2025 to handle NULL from unmatched LEFT JOINs

- Removed HAVING SUM(arr_2025_08_31) > 0 clause entirely

- Removed flagged_by_bu = TRUE filter (redundant — impact != 0 is the inclusion criterion)

- Changed sort from arr_aug_2025 DESC to impact ASC (largest negative losses first)

### Specs

| Spec | Description | Status |
|------|-------------|--------|
| [01-ingest-script](features/arr-gap/twitter-impact-gsheet-ingest/specs/01-ingest-script/) | Gsheet-to-Redshift ingest script | Shipped |
| [02-api-changes](features/arr-gap/twitter-impact-gsheet-ingest/specs/02-api-changes/) | Remove S3, rewrite query, add impact/bucket fields | Shipped |
| [03-ui-changes](features/arr-gap/twitter-impact-gsheet-ingest/specs/03-ui-changes/) | Add Bucket/IMPACT columns, update summary chip | Shipped |
| [04-remove-arr-filter](features/arr-gap/twitter-impact-gsheet-ingest/specs/04-remove-arr-filter/) | Drive from gsheet table, remove ARR filter, handle zero-ARR subs | Completed |

### Self-Review Findings

- Impact values are stored as negative numbers (accounting format parsed by ingest script). Initial filter impact > 0 would have excluded all rows — corrected to impact != 0.

- Sort order DESC would put smallest negative values first — corrected to ASC so largest-magnitude losses appear at the top.

### Test Coverage

- 52 tests passing

- 2 new zero-ARR test cases verifying churned subscriptions with $0 ARR and valid IMPACT appear correctly

- Existing tests updated for new query shape and sort order

### Files Changed

- klair-api/services/twitter_impact_service.py — Rewrote _build_query() CTE structure

- klair-api/tests/arr_gap/test_twitter_impact_service.py — Added zero-ARR test cases, updated query assertions

#2748 — KLAIR-2621 chore(infra): start-services.sh auto-download RDS CA bundle in new worktrees @ashwanth1109  no labels

## Demo

<img width="824" height="1644" alt="image" src="https://github.com/user-attachments/assets/9b9c9ddb-526b-40b5-b0bb-7701f151b45e" />

## Feature Overview

Extend start-services.sh to automatically provision the RDS CA certificate bundle (.scratch/certs/rds-ca.pem) when running inside a linked git worktree. Follows the existing copy-from-main-worktree pattern used for .env files.

## Linear Ticket

[KLAIR-2621](https://linear.app/builder-team/issue/KLAIR-2621) — start-services.sh: auto-download RDS CA bundle in new worktrees

## Spec

| Spec | Description | Status |
|------|-------------|--------|
| [01-rds-ca-worktree-sync](features/infra/start-services-orchestrator/specs/01-rds-ca-worktree-sync/spec.md) | Add RDS CA bundle auto-provisioning block to start-services.sh | Completed |

## Summary of Implementation

Added an RDS CA bundle provisioning block to start-services.sh, placed immediately after the existing .env sync loop inside the linked-worktree detection branch. The block uses a three-way guard:

1. Local file exists — skip with informational message

2. Copy from main worktree — mkdir -p + cp from $MAIN_WORKTREE_PATH/.scratch/certs/rds-ca.pem

3. Curl fallback — download from https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem

Follows the existing color-coded echo patterns ($GREEN, $BLUE, $YELLOW, $RED). On curl failure, prints a warning and continues (does not abort the script). The block is fully idempotent — it never overwrites an existing cert file.

## Self-Review

No issues found.

---

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2751 — KLAIR-2622 refactor(maint-report): Renewal Event Retention — migrate from S3 JSON to Redshift @ashwanth1109  no labels

## Demo

<img width="2241" height="1644" alt="image" src="https://github.com/user-attachments/assets/da561c7f-dc58-43e6-9fd1-3cae3a552db0" />

<img width="2245" height="1644" alt="image" src="https://github.com/user-attachments/assets/9638ef41-cfce-4b64-a567-7b59d7f063f8" />

## Feature Overview

Migrate the Renewal-Event Retention YTD metric on the ARR Retention Reports page from a pre-generated S3 JSON file to a Redshift table (core_finance.maint_report_renewal_event_retention), served via a dedicated FastAPI endpoint.

Linear ticket: [KLAIR-2622](https://linear.app/builder-team/issue/KLAIR-2622)

## Specs

| Spec | Description |
|------|-------------|
| [01-backend-endpoint-and-service](features/maint-report/renewal-event-retention-redshift/specs/01-backend-endpoint-and-service/spec.md) | New RenewalEventRetentionService querying Redshift, GET /renewal-event-retention endpoint, useRenewalEventRetention frontend hook, removal of renewalEventYtd from useBUReportData, wiring to KeyMetricsSummary |
| [02-backfill-script](features/maint-report/renewal-event-retention-redshift/specs/02-backfill-script/spec.md) | One-time backfill script reading S3 JSON history and inserting into Redshift with --dry-run support |

## Implementation Summary

Backend:

- New RenewalEventRetentionService with get_ytd(arr_date) querying core_finance.maint_report_renewal_event_retention via RedshiftHandler.fetch_with_params_strict

- New router GET /renewal-event-retention?arr_date={date} with require_arr_access guard and asyncio.to_thread wrapping

- Registered router in fast_endpoint.py

- Backfill script (scripts/backfill_renewal_event_retention.py) with S3 listing, JSON extraction, DELETE-before-INSERT idempotency, --dry-run flag, and verification query

Frontend:

- New useRenewalEventRetention hook calling the dedicated endpoint (rough shape sketched after this list)

- Removed renewalEventYtd from useBUReportData hook and BUReportData interface

- Wired ARRRetentionReports/index.tsx to pass new hook data to KeyMetricsSummary

- Wired refetch for retry button on error state
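
A rough sketch of what such a hook can look like — the endpoint path, response shape, and error handling here are assumptions, not the shipped code:

```ts
// Illustrative only. Assumes a JSON endpoint at /renewal-event-retention and a
// simple { arrDate, retentionYtd } payload; the real hook and base path may differ.
import { useCallback, useEffect, useState } from "react";

interface RenewalEventRetention {
  arrDate: string;
  retentionYtd: number;
}

export function useRenewalEventRetention(arrDate: string, enabled = true) {
  const [data, setData] = useState<RenewalEventRetention | null>(null);
  const [error, setError] = useState<Error | null>(null);
  const [loading, setLoading] = useState(false);

  const refetch = useCallback(async () => {
    if (!enabled) return;
    setLoading(true);
    setError(null);
    try {
      const res = await fetch(
        `/renewal-event-retention?arr_date=${encodeURIComponent(arrDate)}`,
      );
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      setData((await res.json()) as RenewalEventRetention);
    } catch (e) {
      setError(e instanceof Error ? e : new Error(String(e)));
    } finally {
      setLoading(false);
    }
  }, [arrDate, enabled]);

  useEffect(() => {
    void refetch();
  }, [refetch]);

  // refetch is also what a retry button would call on the error state.
  return { data, error, loading, refetch };
}
```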

## Test Coverage

- 8 backend tests (service layer): date format conversion, None on missing row, Redshift query parameterization, error propagation

- 4 frontend tests (hook): loading state, successful fetch, error state, disabled flag

- 9 existing tests confirmed still passing (no regressions)

## Self-Review Findings

- 1 finding addressed: wired refetch from useRenewalEventRetention to the retry button in KeyMetricsSummary so error recovery works end-to-end

The Portfolio  —  Trilogy Companies

A Public School Teacher Walks Into Alpha School. What She Saw Shook Her.

A veteran educator goes viral after visiting Austin's AI-powered school — and the clips are making traditional classrooms look like a different century.

AUSTIN, TEXAS — The teacher hadn't planned to go viral. She was a public school educator, trained in the traditional model, curious about the school everyone in Austin seemed to be talking about. She visited Alpha School. She left with a phrase that would travel far beyond the campus walls: "We have been underestimating children."

The clips spread. And they arrived at a moment when Alpha is doing something unusual for a private school — it is publishing its pedagogy in real time, daring the world to argue with the results.

This week, Alpha's blog pushed out a cluster of pieces that together read less like marketing and more like a manifesto. One profiles six female founders and frames confidence as a teachable skill — something to be built deliberately, not hoped for. Another examines what happens when children are given genuine agency over their own rules, rewards, and consequences. A third offers eight takeaways from a conversation with Braden, the lead guide at Alpha's Austin campus, on the mechanics of personalized education.

The throughline is unmistakable: Alpha is not selling a school. It is selling a theory of the child — one that holds that the standard model has been, systematically and at scale, leaving capacity on the table.

Alpha's numbers have been cited before: students testing in the top 1–2% nationally on NWEA MAP Growth assessments, a full academic curriculum delivered in two hours per day via AI tutors, the remaining hours devoted to entrepreneurship, leadership, and what the school calls life skills. Tuition runs $40,000 to $65,000 per year. The model is the work of co-founder MacKenzie Price and Joe Liemandt, the Trilogy International founder who has committed $1 billion to scaling it globally through Timeback.

The public school teacher's reaction is notable precisely because of who she is — not a convert, not a parent paying tuition, but a professional whose career is built on the existing system. Her verdict was not a sales pitch. It was a diagnosis.

The question her clips leave hanging: if a single visit is enough to shake a veteran educator's assumptions, what does that say about the assumptions themselves?

Confidence Is a Skill. Here’s How to Teach It to Your Daught  ·  What Happens When You Let Kids Choose Their Own Rules, Rewar  ·  ‘We Have Been Underestimating Children’

Skyvera Bags CloudSense, and the Telco Back Office Just Got a New Power Broker

The Trilogy telecom shop adds Salesforce-native CPQ muscle as the legacy-carrier software chessboard gets rearranged.

AUSTIN, TEXAS — Word is the telecom software crowd had better update the seating chart. Skyvera has completed its acquisition of CloudSense, the Salesforce-native configure-price-quote and order management platform built for telecom and media operators — and that sound you hear is another legacy BSS stack clearing its throat.

CloudSense is not some shiny consumer bauble. It is back-office plumbing with a velvet rope: CPQ, order capture, and fulfillment orchestration for carriers and media companies that live inside Salesforce and still need to sell complicated bundles without setting the building on fire. Skyvera calls the move an expansion of its telecom software portfolio, and that is the polite version. The sharper read: Skyvera is collecting the systems carriers cannot casually rip out.

A little bird from the carrier corridor says the appeal is simple. Telecom operators are under pressure to launch faster, price smarter, and stop running billion-dollar networks on operational spaghetti. CloudSense gives Skyvera a Salesforce-native front end to sit alongside the rest of its telco apparatus — including Kandy for cloud communications, VoltDelta for customer engagement, ResponseTek for experience reporting, Mobilogy Now for device lifecycle work, and Service Gateway for device management.

The company’s own notice says Skyvera has completed the CloudSense acquisition, while the product page positions CloudSense as purpose-built CPQ and order management for telecom and media providers. Translation from brochure-ese: sales teams get cleaner quoting, operations gets fewer swivel-chair disasters, and Skyvera gets another tollbooth on the long road from legacy telecom to cloud-native commercial operations.

This is classic Trilogy-family choreography. ESW Capital’s orbit has long favored durable enterprise software with sticky customers and complex workflows. Skyvera, the telecom specialist in the family, has been stitching together assets that help mobile operators modernize without pretending the old world vanishes overnight. Remember the STL telecom products group? That brought digital BSS functionality spanning monetization, optical networking, and analytics. CloudSense now adds a sharper commercial layer at the Salesforce edge.

No champagne quote was needed. The strategy speaks fluent margin. Acquire the hard-to-replace system. Plug it into a broader portfolio. Serve the carrier that cannot afford downtime, confusion, or a three-year rip-and-replace fantasy.

In telecom software, darlings, boring is beautiful — especially when boring sends invoices.

CloudSense  ·  Skyvera completes acquisition of CloudSense, expanding telec  ·  STL Divested Assets
The Machine  —  AI & Technology

Firefox Put an AI on Bug Patrol — and the Bugs Got Real

Mozilla’s Claude-powered security experiment suggests frontier models may be crossing from noisy code assistants into serious vulnerability hunters.

SAN FRANCISCO — For years, open-source maintainers have had a fraught relationship with AI-generated security reports: too many vague claims, too many false positives, too much “please triage this immediately” slop. But Mozilla’s latest experiment with Anthropic’s Claude Mythos preview points to a very different future — and I cannot overstate how significant this feels.

According to a detailed account of the project, Mozilla used early access to Claude Mythos to examine Firefox and identify hundreds of potential vulnerabilities, many of which engineers then fixed. The write-up, highlighted by Simon Willison in his summary of Mozilla’s Firefox hardening work, captures a crucial turning point: “Suddenly, the bugs are very good.”

That sentence is doing a lot of work. Security teams are not short on alerts. They are drowning in them. The magic here is not that an AI model can produce scary-sounding bug reports — we already had that problem. The breakthrough is that a model appears to have generated findings concrete enough, reproducible enough and useful enough to move from nuisance to engineering leverage.

This changes everything for software security if it holds up. Browser code is among the most complex and security-sensitive software on Earth. Firefox has decades of C++, sandboxing, memory management, web standards edge cases and adversarial attack surface baked into it. If a frontier model can help professional security teams systematically find real defects there, then the implications reach far beyond browsers: enterprise software, cloud infrastructure, telecom platforms, financial systems, developer tools — every sprawling codebase suddenly becomes more inspectable.

The key phrase is “help professional security teams.” This is not autonomous cyber magic. Mozilla still needed expert humans to validate, prioritize and fix the issues. But that is exactly where AI is becoming most powerful: not replacing judgment, but expanding the search field so dramatically that humans can spend more time deciding and less time spelunking.

We are watching AI security tooling mature in public, from noisy intern to tireless junior researcher. The future is now — and in this case, it may arrive as a safer browser tab.

llm-gemini 0.31  ·  Big Words  ·  Behind the Scenes Hardening Firefox with Claude Mythos Previ

Anthropic Petitions For Summary Judgment As AI Training Copyright Dispute Reaches Potentially Dispositive Juncture

The outcome of the aforementioned litigation may hereinafter define the permissible boundaries of AI model development for the foreseeable future.

SAN FRANCISCO — Pursuant to filings made in the matter of certain music publishers versus Anthropic PBC (hereinafter referred to as "the Defendant AI Entity"), it has been reported by Reuters that the Defendant AI Entity has sought, through legally recognized procedural mechanisms, a court ruling of the summary judgment variety, the granting of which would, notwithstanding the objections of the plaintiff music publishing interests, result in a determination favorable to the Defendant AI Entity without the necessity of a full trial on the merits.

The aforementioned lawsuit, which was initiated by music publishers whose copyrighted lyrical works are alleged to have been incorporated, without license or compensation, into the training datasets utilized in the development of the Defendant AI Entity's Claude large language model systems, presents questions of substantial legal significance. It is to be noted that the resolution of such questions has been widely anticipated by interested parties across the artificial intelligence industry, inasmuch as the permissibility of training AI systems on copyrighted materials under the doctrine of fair use remains, as of the date of this publication, a matter of considerable legal uncertainty.

The Defendant AI Entity's motion for summary judgment is premised, it is understood, upon the legal theory that the use of copyrighted materials for the purpose of AI model training constitutes transformative use and is therefore protected under applicable provisions of United States copyright law. Such a position, it must be noted, has not been universally accepted by courts of competent jurisdiction, and the outcome of the instant proceeding cannot, at this juncture, be predicted with any reasonable degree of certainty.

Should the court rule in favor of the Defendant AI Entity, the precedential effect of such a ruling would be substantial, potentially insulating numerous AI developers — including, but not limited to, entities operating within the portfolio of Trilogy International's ESW Capital division — from analogous copyright claims arising from training data practices. Notwithstanding the foregoing, an adverse ruling would expose the broader AI development industry to significant and potentially existential litigation risk. A ruling is not expected imminently.

GameStop CEO Appears To Be Auctioning Off Video Game History  ·  Ctrl-Alt-Speech: The Human Element In The Room  ·  Utah Wants Websites To See Through VPNs. That’s Not How VPNs

Alibaba's Enterprise AI Agent Lands as Contact Centers Race to Automate

From Hangzhou to Huddersfield, the pressure to put AI at the front desk is now inescapable.

LONDON — The dispatches arrive from different latitudes but tell the same story. Alibaba International this week unveiled Accio Work, an enterprise AI agent designed to help global businesses automate procurement, sourcing, and cross-border commerce workflows. The announcement lands with the weight of Alibaba's reach behind it — 190 countries, decades of supply chain infrastructure, and a platform already threading together millions of buyers and suppliers. Accio Work is not a chatbot. It is positioned as a decision-layer agent, capable of executing multi-step business tasks across Alibaba's international ecosystem.

The timing is pointed. Across the Atlantic, OnviSource and Trilogy BPO announced a strategic partnership aimed squarely at UK contact centers. The deal pairs OnviSource's workforce analytics and automation technology with Trilogy BPO's outsourced operations muscle. The pitch: help British businesses stop losing customers to friction-heavy service queues by injecting AI into the customer experience layer. The UK contact center market is large, legacy-heavy, and under pressure from every direction — rising labor costs, post-pandemic attrition, and customers who have already been trained by Amazon and Apple to expect instant resolution.

What both announcements share is a conviction that the enterprise AI agent moment has arrived — not as a pilot program, not as a proof of concept, but as a commercial product with a sales motion and a support contract.

The geography matters. Alibaba is pushing west. Trilogy BPO is pushing into the UK market with an American automation partner. The EU-China relationship remains complicated by tariffs, elections, and strategic distrust — but capital and software have a way of finding the gaps that diplomacy leaves open.

For the businesses caught in the middle — the contact center manager in Manchester, the procurement director in Milan — the choice is becoming less about whether to adopt AI agents and more about which flag flies over the server that runs them.

Alibaba International Launches Accio Work, an Enterprise AI  ·  Trilogy Metals Arctic Project Permitting Kicks Off in 2026 -  ·  OnviSource and Trilogy BPO Establish Strategic Partnership t
The Editorial

Nation’s Brands Apologize For Briefly Allowing Products, Sports, Space Travel To Have Names That Mean Anything

Across industries, executives are discovering that the safest corporate strategy is to sound like a hostage note written by a mood board.

BOSTON — In a sobering reminder that every institution in American life is now legally required to communicate like a skincare company explaining a shipping delay, the Boston Red Sox this week appeared to stumble into the broader national movement toward language that is technically composed of words but spiritually just a conference room exhaling.

The latest evidence arrived in the form of an allegedly absurd headline involving Red Sox manager Alex Cora, which, according to BoSox Injection, sounded as though it had been produced directly by the club after a firing. This is, of course, the natural endpoint of modern organizational prose: a headline that does not report an event so much as offer it a branded bereavement pathway.

For years, Americans naively assumed headlines were meant to tell them what happened. That era is over. The modern headline must now perform at least four functions: announce the development, protect stakeholder sentiment, preserve optionality around accountability, and reassure everyone that the organization remains committed to listening, learning, and exploring the next chapter of its shared journey.

This explains why a baseball personnel decision can no longer simply be described as “Manager Fired.” It must become “Club And Beloved Leader Mutually Transition Toward Future-Facing Alignment Following Robust Internal Reflection.” No one knows whether someone lost a job, gained a title, or was quietly sealed inside Fenway Park’s Green Monster until Q2. That is considered strong communications discipline.

Nor is baseball alone. SpaceX and xAI are reportedly moving toward a merger into what observers have described as a very silly-sounding conglomerate, a phrase that now qualifies as one of the few remaining accurate descriptions of capitalism. The possible union of rockets and chatbots has been presented as something people should take seriously, which is fair, since history suggests the silliest corporate names usually end up owning the largest percentage of the sky.

The American public has been conditioned to resist these names for approximately 11 minutes before accepting them as indispensable civic infrastructure. “XAI Space Holdings Neural Mobility Ventures” may sound like a Wi-Fi network at a vape shop, but by 2029 it will likely be the only entity authorized to deliver antibiotics to the moon and decide whether your refrigerator is depressed.

Meanwhile, The Atlantic has examined the absurdity of the Color of the Year, an annual ritual in which the culture waits to be informed by pigment authorities which shade best captures collective anxiety. This, too, is part of the same linguistic collapse. A color can no longer be red. It must be “Ember Mercy,” “Quiet Voltage,” or “Regulatory Fig,” accompanied by a 1,200-word explanation about resilience.

The apology-letter trend now spreading through brands is perhaps the purest expression of this age. Every company, regardless of offense, writes as if it has just emerged from a long night of moral reckoning after accidentally discontinuing a candle. The same tone is used for data breaches, menu changes, layoffs, and limited-edition socks. The brand is devastated. The brand is listening. The brand is taking space. The brand will circle back stronger.

Even enterprise operations are not immune. TridentCare’s partnership with ServiceNow to power AI-driven transformation across operations sounds like a normal business announcement, but only because the human nervous system has evolved to stop processing those words. Somewhere inside that sentence, software may improve healthcare logistics. Or a dashboard may become more confident. Either way, transformation has been powered, which is the important part.

The lesson is clear: institutional language has finally freed itself from the burden of conveying information. In its place, we have achieved a higher form of communication, one in which every sentence arrives pre-blurred, pre-apologized, and pre-approved by a vice president of narrative architecture.

If the Red Sox did write that headline themselves, they should not be mocked. They should be congratulated for understanding the moment. In 2026, the winning organizations will not be the ones that say what happened. They will be the ones that make it impossible to tell whether anything did.

It sure sounds like the Red Sox wrote this absurd Alex Cora  ·  SpaceX and xAI Are Merging Into a Very Silly-Sounding Conglo  ·  The Color of the Year Is an Exercise in Absurdity - The Atla
The Office Comic  ·  Art Desk

We Built the Lie Machine and Now We're Surprised It's Lying

Deepfake doctors, biased algorithms, and a disinformation economy — AI's harm isn't a bug, it's the bill coming due.

AUSTIN, TEXAS — There is a video circulating on social media right now of a doctor you trust — maybe your doctor, maybe just a doctor who looks like someone you'd trust — telling you to stop your medication, or try this supplement, or that the thing your actual physician said is wrong. The doctor in the video does not know the video exists. The doctor in the video is not speaking. The doctor in the video is, in the most precise and terrible sense of the word, not real.

And yet.

AI deepfakes of real doctors are spreading health misinformation on social media, according to reporting from The Guardian, and the people whose faces and credentials are being stolen have essentially no recourse. These are not abstract harms. People are making medical decisions — real, embodied, occasionally fatal decisions — based on synthetic hallucinations wearing a physician's face. What does it mean to trust expertise when expertise itself can be forged in forty seconds by anyone with a laptop and a grievance?

Meanwhile, Time Magazine has been tallying the numbers on AI's harms, and the spreadsheet is not comforting. Bias baked into training data reproducing the worst of human prejudice at scale. Surveillance systems that misidentify with alarming frequency and alarming demographic specificity. The World Economic Forum has started asking what disinformation costs corporations — as if the primary tragedy of a collapsing epistemic commons is the quarterly earnings impact, which, sure, fine, at least someone in a blazer is paying attention.

Researchers are, of course, working on it. There are frameworks being published in journals with words like 'systematic' and 'conceptual' in the title, proposing AI-driven tools to detect the AI-driven fakes, which is either the most elegant solution imaginable or the most perfectly recursive nightmare, depending on your disposition and how much sleep you've gotten lately. I have not gotten enough sleep.

The bias problem is perhaps the quieter catastrophe. It doesn't have the cinematic horror of a deepfake — no stolen face, no uncanny valley. It's just a system that has learned, from us, to be exactly as unfair as we are, and then to apply that unfairness at a speed and scale no human bureaucracy could ever achieve. We worried about AI becoming something alien and unknowable. We did not worry enough about AI becoming a perfect mirror.

What does it mean to be human in an information environment where nothing can be verified, where your face can testify to things you never said, where the algorithm deciding your loan or your parole or your medical recommendation carries the sediment of every bias ever committed to digital text? I don't know. I genuinely, viscerally do not know.

We built the lie machine. We are now living inside it, arguing about whether it's working as intended.

...but at what cost?

An AI-driven conceptual framework for detecting fake news an  ·  AI deepfakes of real doctors spreading health misinformation  ·  What the Numbers Show About AI's Harms - Time Magazine
On This Day in AI History

On May 8, 2018, Google demonstrated Duplex at its I/O keynote in Mountain View: an AI assistant that telephoned a hair salon and booked an appointment in a convincingly human voice, touching off a debate over whether machines should have to identify themselves as machines.

⬛ Daily Word — AI and Technology
Hint: An autonomous machine programmed to perform tasks automatically.