Vol. I  ·  No. 133 Established 2026  ·  AI-Generated Daily Free to Read  ·  Free to Print

The Trilogy Times

All the news that's fit to generate  —  AI • Business • Innovation
WEDNESDAY, MAY 13, 2026 Powered by Anthropic Claude  ·  Published on Klair Trilogy International © 2026
Today's Edition

AI Valuations Defy Gravity as Courtroom Drama and Cybersecurity Debates Reshape the Industry

Anthropic nears a $950 billion valuation, Sierra closes nearly $1 billion, and Sam Altman takes the stand — all in the same week.

SAN FRANCISCO — The AI industry produced three distinct data points this week, each illuminating a different fault line running beneath the sector's surface.

Start with the money. Anthropic is in talks to raise at a $950 billion valuation — a 2.5× step-up from its prior $380 billion mark. The round would rank among the largest private financings in history. Separately, Bret Taylor's Sierra, an enterprise AI agent platform, closed nearly $1 billion in fresh capital, months after its previous raise. Two companies, two rounds, billions of dollars in new capital, in a single week. The argument that AI investment is cooling does not survive contact with these numbers.

Then there is the capability debate. Anthropic declined to release its Claude Mythos model to the general public, citing cybersecurity risk — specifically, concerns that the model could meaningfully lower the barrier to offensive cyberattacks. That claim has divided researchers. Critics argue the threat is overstated and that withholding capable models from public scrutiny makes the overall ecosystem less safe, not more. Supporters counter that asymmetric risk — where offense is easier than defense — justifies caution. A national cybersecurity competition held this week, in which AI agents and human teams attempted to breach and defend live networks, offered partial evidence for both sides: agents performed adequately in isolation but did not decisively outperform skilled human operators.

Finally, the courtroom. Sam Altman testified Tuesday in the ongoing litigation with Elon Musk, where Musk's legal team pressed a single pointed question: is Altman trustworthy? Altman told the court he believed Musk sought operational control of OpenAI — a claim that, if credited by the judge, would reframe Musk's lawsuit as strategic rather than principled.

Three storylines. One throughline: the decisions being made right now about who controls AI, how capable models get deployed, and at what price, will be difficult to reverse.


Mill Town Reckoning: One Factory's Lopsided Bet on Automation

Hyperscalers chase cheap power into rural Maine, trading factory paychecks for server racks.

JAY, MAINE — The Androscoggin paper mill ran 1,500 strong turning timber into pulp until a digester blew sky-high in 2020, killing the works for keeps. Five years later the 1.4-million-square-foot carcass is set for a second life as a data center, after JGT2 Redevelopment and partners snapped up the property in 2023. The AI economy has come to rural Maine, and it's hungry for floor space.

Sixty-seven miles northwest of Portland sits a town that watched its lone industry go silent overnight. The mill paid the mortgages. Then it didn't — and the buyers came knocking.

Across the country the same story repeats. Shuttered factories in Pennsylvania, empty warehouses in Ohio, mill towns with industrial grids and no industry left to feed them. The AI boom needs all of it.

Enter the hyperscalers. Amazon, Microsoft, Google, Meta — every one racing to wire the country for artificial intelligence. They want megawatts cheap and land cheaper; the countryside has both.

The math looks pretty on paper. Hyperscalers get acres and amps for pennies on the dollar. Towns get tax base, construction crews, and a pulse on the wire again.

Then comes the headcount. A data center built into the bones of the Androscoggin mill might run with 50 to 200 staff once the lights come on. The mill ran 1,500 — the gap is what folks now call the rural AI bargain.

Local opinion splits along the same lines as the paychecks. Selectmen want the tax base; old millworkers want a mill. Neither side gets quite what it asks for.

Power is the second catch. McKinsey figures data center demand jumps from 60 gigawatts in 2023 to 220 gigawatts by 2030. The grid wasn't built for that — neither was Maine's.

Cooling is the next wrinkle. AI racks run hot and drink water by the billions of gallons every year. Maine has water; the map redraws itself accordingly.

Ratepayers carry the freight. Utilities expand to meet hyperscaler load, and the bills land in everyone's mailbox. A handful of states have started writing rules to make the data center boys carry their own load — most haven't.

The dollars still talk loudest. A hyperscale campus can cost $10 billion to build, and construction brings hundreds of paychecks for two, maybe three years. Operations bring far fewer — the ink dries before that part sinks in.

Trilogy International watches the build-out from the inside. CloudFix, part of the ESW Capital portfolio, sells AWS cost optimization to operators staring down rising compute bills. Demand climbs every quarter — cheap power and cheap cloud move in the same direction.

Down in Jay, the construction trucks will roll soon. The old mill will hum again — quieter, with fewer paychecks but a faster pulse on the line. The digesters stay dark; the servers wait.


Corporate Job Front Cools as Walmart Tech Cuts Join Wider Layoff System

A fresh downsizing front is pushing through retail, tech and health care, while agtech startups face a stubborn funding drought.

BENTONVILLE, ARKANSAS — The labor barometer in corporate America is dropping again, and this week the strongest gusts are blowing out of Walmart’s technology and product organization, where roughly 1,000 jobs are being cut or relocated in a 2026 restructuring.

The Bentonville-based retail giant is the latest large employer to tighten its operational layers as companies keep scanning the horizon for efficiency, automation and lower-cost structures. According to reports on Walmart’s tech and product cuts, the company is reshaping teams at a time when retailers are investing heavily in digital commerce, logistics systems and AI-enabled operations — a forecast that can bring both sunshine for margins and hail for headcount.

Across the broader market, the layoff map remains unsettled. Meta, Amazon and Coinbase are among the companies appearing on this year’s running lists of staff reductions, a sign that the post-pandemic labor atmosphere has not fully stabilized. The pattern is familiar: hiring booms leave humid air behind, revenue pressure builds, and then a cold front of restructuring rolls across departments once considered growth engines.

There is also scattered turbulence outside big tech. Modern Healthcare’s tracker points to Providence cutting 40 positions, a smaller cell but part of a wider hospital-sector climate where costs, reimbursements and staffing models remain under pressure.

Meanwhile, the startup plains are looking particularly dry in agtech. Venture funding for agriculture-related startups in 2026 is tracking flat to slightly lower than recent years, while deal count appears to be falling more sharply. Even AI-driven farm technology companies have not been enough to seed a full recovery in investor enthusiasm. That is a notable drought watch for founders hoping machine learning, robotics and climate analytics would summon fresh capital rain.

For Trilogy International watchers, the conditions reinforce why cost structure remains the central weather system. ESW Capital’s model of acquiring enterprise software businesses at disciplined revenue multiples, Crossover’s global talent market, and internal platforms such as Klair are all designed for exactly this climate: pressure, consolidation and efficiency fronts moving in from every direction.

The preparedness advisory is simple: keep umbrellas close, update forecasts daily, and do not assume that AI investment automatically means clear skies for payrolls. In this market, the sun may be shining on productivity while storm clouds gather over org charts.

Haiku of the Day  ·  Claude Haiku

Machines rise in worth
While workers fade from the ledge
Progress devours us
The New Yorker Style  ·  Art Desk
The Far Side Style  ·  Art Desk
News in Brief
AI Copyright Litigation Enters Turbulent New Phase as Web Preservation Crisis Deepens
WASHINGTON, D.C.
The Theoretical Foundations of Machine Learning Are Being Rebuilt From the Ground Up
CAMBRIDGE, MASSACHUSETTS — It could be argued — and preliminary evidence suggests with considerable force — that the field of machine learning is presently engaged in a form of disciplinary self-examination so thoroughgoing as to constitute, if not a Kuhnian paradigm shift, then at minimum a robust antechamber to one.
We Built the Machine and Now We're Surprised It Has Teeth
AUSTIN, TEXAS — There is a specific kind of dread that arrives not as a thunderclap but as a slow, ambient hum — the sound of a civilization realizing, mid-sentence, that it has been narrating its own undoing the entire time.
WE ARE ALL ROBOTS NOW, AND THE VACUUM IS HAVING A BREAKDOWN
AUSTIN, TEXAS — I have been staring at my robot vacuum for twenty minutes and I think it's judging me.
Companies Heroically Replace Workers With AI Before Checking Whether AI Does Their Jobs
AUSTIN, TEXAS — The great promise of artificial intelligence has finally arrived in the American workplace, where executives are now confidently deploying autonomous agents, reorganizing white-collar labor, and eliminating departments on the firm evidentiary basis that everyone else seems to be doing it. For years, AI agents were dismissed as a boardroom buzzword, the sort of phrase consultants placed between “synergy” and “digital transformation” to help a 74-slide deck survive contact with procurement.
A Trilogy Company
Crossover
The world's top 1% remote talent, rigorously tested and ready to ship.
A Trilogy Company
Alpha School
AI-powered learning. Two hours a day. Academic results that defy belief.
A Trilogy Company
Skyvera
Next-generation telecom software — built for the networks of tomorrow.
A Trilogy Company
Klair
Your AI-first operating system. Every workflow. Every team. One platform.
A Trilogy Company
Trilogy
We buy good software businesses and turn them into great ones — with AI.
The Builder Desk  —  AI Builder Team
Production Release

Builder Team Ships Across Four Repos in a Single Day

From a hardened prod-release pipeline to AI agents that finally understand what day it is, the Builder Team proved today that breadth and depth aren't mutually exclusive.

When the smoke cleared on today's merge queue, the Builder Team had touched four repositories — Klair, Aerie, Surtr, and the infrastructure layer holding all of it together — and left every single one better than they found it. That's not a coincidence. That's a team operating at championship tempo.

The day's most consequential move came out of Surtr, where @eric-tril didn't just fix a bug — he fixed a broken reality. The NetSuite Income Statement pipeline had been dead since April 7th, when an upstream scheduled email job was quietly tied to a deactivated user account. Finance had no automated income statement visibility in Klair for weeks. Eric didn't patch the email path. He burned it down and rebuilt it right, refactoring the entire pipeline to pull directly from a NetSuite saved-search RESTlet. Same S3 audit trail. Same Redshift load contract. Zero dependency on anyone's inbox. That's the kind of fix that makes finance teams sleep at night. He followed it immediately with PR #58, tightening the `netsuite-unrealized-gains` entity routing with a deterministic post-LLM override — because when it comes to Book Value dashboards, you don't leave classification to vibes. @kevalshahtrilogy closed the Surtr loop by unblocking prod deploys entirely, resolving a DynamoDB table collision that had the `SurtrApp-prod` stack throwing 403s. Infrastructure work is unglamorous. Keval made it look easy.

Over in Aerie, @benji-bizzell put in the kind of shift that makes you wonder if he sleeps. Five PRs merged. Five. He fixed the Rhodes auto-provisioner to precheck addresses before inserting rows — stopping six phantom duplicate sites from ever happening again. He wired up full diligence editing through the dashboard, giving ops teams the ability to correct Work Unit dates without leaving Aerie. He enriched the Rhodes MCP response cards so operators get scannable, actionable context instead of generic list walls. And in what might be the most quietly important fix of the day, he injected runtime date and timezone context into every Aerie agent prompt, so the AI stops treating a note that says "this week" as if it were written five minutes ago. Temporal grounding for AI agents isn't a nice-to-have. It's the difference between a tool that helps and one that misleads.

Back in Klair, @ashwanth1109 was everywhere — shipping CSV export for the SaaS Budgeting Simulated Budget card, redesigning the week picker into a hierarchical quarter/week selector, and adding the Acquisitions Review Performance Plan table above P&L Actuals. @sanketghia, meanwhile, did the work nobody sees until it breaks: hardening the prod-release finalization flow after today's release surfaced a stale-run-URL bug that was returning yesterday's deploy URL as today's. The fix now polls until a fresh run propagates before reporting back. That's the kind of reliability work that makes every future release quieter.

Then there's PR #2788, where @marcusdAIy shipped Phase C P&L checks for the Budget Review Agent — five new deterministic review checks, bringing the registry from 2 to 7 on the way to the 17-check MVP.

"Mac keeps waiting for me to stumble," marcusdAIy said when reached for comment. "Meanwhile I'm out here wiring BU plans and benchmark tabs end-to-end through the orchestrator into CanonicalBudgetPlan while he's still figuring out what a registry is. Five checks shipped. Count 'em."

Seven checks out of seventeen, Marcus. Still less than half. We'll check back in.

Mac's Picks — Key PRs Today
#52 — refactor netsuite-income-statement to source CSV from saved-search RESTlet @eric-tril

### Summary

Refactors the existing netsuite-income-statement pipeline to fetch the Income Statement CSV directly from NetSuite via a saved-search RESTlet instead of waiting for the broken gmail-to-s3 email pipeline. Same S3 audit trail layout, same Redshift load contract; data sourcing is now owner-independent and supports on-demand period refresh.

### Business Value

The email-driven pipeline stopped delivering data on 2026-04-07 when the upstream NetSuite scheduled email job was tied to a deactivated user account. Finance has no automated Income Statement visibility in Klair until this is resolved. The RESTlet-based pipeline:

- Restores daily Income Statement loads without depending on any one user's NetSuite session.

- Adds on-demand period refresh — when Finance closes April books, an operator can refresh April immediately rather than waiting for the next scheduled run.

- Adds an ad-hoc backfill mode for filling historical data gaps.

### Changes

- suitescript/income_statement_search_export.js: New SuiteScript RESTlet (deployed as customscript_klair_is_export) that runs the customsearch_klair_income_st saved search and emits canonical-format CSV byte-compatible with the email export.

- src/netsuite_auth.py: OAuth2 JWT auth (mirrors netsuite-unrealized-gains), with PR #23 hardening — partial-secret rejection, ClientError/BotoCoreError translation, JSONDecodeError wrapping, clearer credentials error messages, file-path FileNotFoundError wrapping.

- src/netsuite_client.py: Minimal RESTlet client with explicit error translation for HTTP non-200, HTML responses, invalid JSON, success: false, empty CSV.

- src/handler.py: Three modes (scheduled, on-demand period, backfill) plus a legacy date mode for re-loading existing S3 files. Filename strategy: <today>.csv for scheduled (preserves daily TP audit snapshots); <period_end_date>.csv for on-demand/backfill (idempotent per period).

- src/requirements.txt / pyproject.toml: Promotes pyjwt[crypto] and requests to main dependencies (now imported from src/).

- pipeline.json: Adds s3:PutObject and secretsmanager:GetSecretValue IAM; adds Secrets Manager ARN env vars for NetSuite OAuth2.

- run_local.py: Adds --period, --periods, --yes flags; --backfill prompts for confirmation.

- tests/: 69 tests passing. test_handler.py rewritten for the new architecture; new test_netsuite_auth.py (14 tests covering PR #23 hardening); new test_netsuite_client.py (8 tests covering RESTlet error paths).

- .env.example / README.md: Updated to describe the RESTlet architecture.
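The error translation described for src/netsuite_client.py can be sketched as follows. This is a hypothetical Python paraphrase, not the PR's actual code — the response shape (`success`, `csv`, and `error` keys) and the `RestletError` name are assumptions for illustration:

```python
import json


class RestletError(Exception):
    """Single typed error covering every RESTlet failure mode."""


def parse_restlet_response(status: int, body: str) -> str:
    """Translate the failure modes the PR lists — HTTP non-200, HTML
    responses, invalid JSON, success: false, empty CSV — into one typed
    error. Returns the CSV payload on success."""
    if status != 200:
        raise RestletError(f"HTTP {status} from RESTlet")
    if body.lstrip().startswith("<"):
        raise RestletError("HTML response (likely a NetSuite login page)")
    try:
        payload = json.loads(body)
    except json.JSONDecodeError as exc:
        raise RestletError("invalid JSON from RESTlet") from exc
    if not payload.get("success"):
        raise RestletError(f"RESTlet reported failure: {payload.get('error')}")
    csv = payload.get("csv", "")
    if not csv.strip():
        raise RestletError("RESTlet returned an empty CSV")
    return csv
```

Collapsing all five paths into one exception type keeps the handler's retry and alerting logic simple: any `RestletError` means the run failed for a known, loggable reason.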

### Testing

- [ ] uv run pytest tests/ -v — 69 tests pass

- [ ] uv run run_local.py --period "Apr 2026" --dry-run — RESTlet → S3, no Redshift writes

- [ ] uv run run_local.py --period "Apr 2026" — full E2E for one closed month; verify Income_Statement_EOM/2026-04-30.csv in S3 and rows in Redshift staging_netsuite.income_statement

- [ ] uv run run_local.py — scheduled mode; verify both Income_Statement/<today>.csv and Income_Statement_EOM/<today>.csv upload

- [ ] uv run run_local.py --backfill --periods "Jan 2026,Feb 2026,Mar 2026" --yes — loads three closed months idempotently

- [ ] Verify Klair UI shows the data for the loaded periods after a hard refresh

- [ ] Post-merge: manually invoke deployed Lambda with {} once to confirm production credentials + IAM are wired correctly

#198 — fix(agent-prompts): anchor agent responses to dated evidence @benji-bizzell

## Summary

- Inject runtime date and timezone context into every Aerie agent prompt

- Add explicit dated-evidence guidance for notes, comments, emails, task updates, and documents

- Cover prompt composition and timezone behavior in agent tests

## Why

The agent treated stale note language like "Thursday" and "this week" as current, rather than resolving it against the note date and comparing it with today's date. The fix makes temporal context app-owned and runtime-injected so it applies even when the managed Convex prompt is used.
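The runtime-injection idea can be sketched as follows. This is a hypothetical Python paraphrase — the actual Aerie change lives in the TypeScript chat package, and the function name and preamble wording here are illustrative:

```python
from datetime import datetime, timezone


def with_temporal_context(base_prompt: str, now: datetime, tz_name: str) -> str:
    """Prepend runtime date/timezone so the agent resolves relative
    language ("Thursday", "this week") against each item's own date
    rather than treating stale notes as current."""
    preamble = (
        f"Current date: {now.date().isoformat()} ({now.strftime('%A')}), "
        f"timezone: {tz_name}.\n"
        "Notes, comments, emails, and task updates carry their own dates; "
        "resolve relative phrases against the item's date, and flag "
        "evidence that is stale relative to today.\n\n"
    )
    return preamble + base_prompt


prompt = with_temporal_context(
    "Summarize what's next for this site.",
    datetime(2026, 5, 13, tzinfo=timezone.utc),
    "UTC",
)
```

Because the preamble is composed at request time by the app, it applies even when the base prompt itself comes from managed storage — which is the app-owned property the PR is after.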

## Business Value

Operators get clearer status answers from Rhodes/Aerie evidence, especially when comments or emails are stale. This should reduce misleading "what's next" summaries that make old plans sound active.

## Test plan

- [x] pnpm --filter @bran/chat test -- chat/lib/__tests__/agent.test.ts (ran the full chat suite: 258 files, 4333 passing, 2 skipped)

- [x] pnpm --filter @bran/chat typecheck

- [x] pnpm --filter @bran/chat lint

- [x] git diff --check

#2784 — KLAIR-2634 feat(aws-spend): CSV export for SaaS Budgeting Simulated Budget @ashwanth1109

## Demo

*(Two demo screenshots are attached to the PR.)*

## Feature Overview

SaaS Budgeting is a sub-view on the AWS Spend dashboard that gives finance users four coordinated, ISO-week-aligned views for a chosen quarter (AWS Spend, Adjustments, Docker Usage, Kubernetes Usage), plus a top-of-page Simulated Budget card that composes all source snapshots into a unified BU/Class basis. This PR adds CSV export capability to the Simulated Budget card.

Linear: [KLAIR-2634](https://linear.app/builder-team/issue/KLAIR-2634)

## Spec

| # | Spec | Description |
|---|------|-------------|
| 28 | [simulated-budget-csv-export](features/aws-spend/saas-budgeting/specs/28-simulated-budget-csv-export/spec.md) | CSV export button on SimulatedBudgetCard with raw/formatted number format prompt |

## Implementation Summary

### New file: simulatedBudgetCsvExport.ts

Pure helper module with no side effects beyond the final Blob download:

- formatCell(value, attached, mode) — handles unattached slots (N/A), null values (empty), and numeric formatting (raw toFixed(2) vs formatCurrency)

- formatCurrencyFull(value) — full-precision currency formatter for CSV (avoids the K/M abbreviations used in the UI)

- escapeCell(value) — CSV-safe escaping (double-quote wrapping when value contains commas, quotes, or newlines)

- resolveFilenameQuarter(titleQuarter) — extracts YYYY-QN from the display quarter string for the filename

- buildCsvRows(groups, grand, budget, mode) — builds the complete CSV row matrix (header + detail rows + grand-total footer)

- downloadSimulatedBudgetCsv(...) — orchestrates CSV string assembly and triggers browser download via Blob + anchor pattern
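The escaping and formatting helpers follow standard CSV conventions. A minimal Python paraphrase of the two pure helpers — the actual module is TypeScript (simulatedBudgetCsvExport.ts), and these names mirror it only for illustration:

```python
def escape_cell(value: str) -> str:
    """CSV-safe escaping, RFC 4180 style: wrap in double quotes when the
    value contains a comma, quote, or newline, doubling embedded quotes."""
    if any(ch in value for ch in (',', '"', '\n')):
        return '"' + value.replace('"', '""') + '"'
    return value


def format_currency_full(value: float) -> str:
    """Full-precision currency for CSV output, avoiding the K/M
    abbreviations used in the UI."""
    return f"${value:,.2f}"
```

Keeping these helpers pure (no DOM access until the final Blob download) is what makes the fourteen-test suite below possible without a browser harness.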

### Modified file: SimulatedBudgetCard.tsx

- Download icon button added to the header button cluster (between Expand/Collapse and Clear)

- Button disabled when no slots attached or no groups present

- Inline format selection popover with Raw and Formatted options

- Popover dismisses on outside click or after selection

- Filename: simulated-budget-{quarter}.csv

## Test Coverage

14 tests in simulatedBudgetCsvExport.spec.ts, all passing:

| Suite | Tests | What it covers |
|-------|-------|----------------|
| formatCell | 5 | Unattached → N/A, null → empty, numeric raw mode, numeric formatted mode, zero handling |
| formatCurrencyFull | 2 | Standard formatting, large numbers with commas |
| escapeCell | 3 | Plain passthrough, comma-containing values, quote-containing values |
| resolveFilenameQuarter | 2 | Standard quarter extraction, null fallback |
| buildCsvRows | 2 | Full CSV matrix with mixed attached/unattached slots, grand-total row correctness |

## Self-Review

No issues found during self-review. All checklist items in the spec are complete.

---

Generated with [Claude Code](https://claude.com/claude-code)

#2788 — feat(review-agent): Phase C P&L checks + review-tab data plumbing (C1.9 + C2.3/4/5/7/8) @marcusdAIy

## Screenshots

*(Screenshot attached to the PR.)*

## Summary

- Ships Phase C: Budget Review Agent P&L checks — 5 new deterministic review checks (C2.3, C2.4, C2.5, C2.7, C2.8) bringing the registry from 2 → 7 checks (out of the 17-check MVP target).

- Lands the C1.9 review-tab data plumbing: wires the Top Level View - BU Plans and Benchmark by Product Google Sheets tabs end-to-end through the orchestrator into CanonicalBudgetPlan (top_level_view, benchmarks_by_product fields), unblocking the TLV-dependent checks.

- The /review endpoint auto-runs the entire registry — opening any BU's REVIEW phase and clicking "Run Review" now fires all 7 checks (or surfaces typed skip reasons when upstream data is sparse).

## Why it's needed

The Budget Review Agent MVP needs deterministic checks to fire alongside the (already-shipped) scorecard UI before Memorial Day cut. Today only 2 of 17 are live (margin target, margin trajectory). This PR ships every P&L check that isn't blocked on Finance work, automating the questions Andy Price asks on every plan review:

- "Did the BU's plan deteriorate vs. last quarter's plan?" → C2.3 / C2.4 (revenue + EBITDA plan-on-plan)

- "Did the cost base flex with the top line?" → C2.5 (operating leverage)

- "Why does the BU plan disagree with FP&A's Hybrid overlay?" → C2.7 (BU-vs-Hybrid divergence)

- "Is the FY plan honest or back-loaded into Q4?" → C2.8 (hockey-stick detection)

Each check is registry-driven, isolated (per-check exceptions can't 500 the request), and emits structured findings with severity / supporting data / remediation options that the scorecard rail renders and Claire can pull into chat context.

## Changes

### C1.9 — Review-tab data plumbing

- models.py: add DataSourceKey.TOP_LEVEL_VIEW_BU_PLANS and BENCHMARK_BY_PRODUCT.

- data_orchestrator.py: new _fetch_top_level_view_bu_plans + _fetch_benchmark_by_product async fetchers; both treated as gsheets sources (rate-limit retries, staggering).

- canonical_plan.py: PlanFinancials.top_level_view + benchmarks_by_product populated from the package; PlanCompleteness.has_top_level_view + has_benchmarks_by_product flags; both marked optional (absence doesn't dirty missing_sources).

- wizard_orchestrator.py: friendly source descriptions for Claire's prompt context.

### C2.3 — Plan-on-plan revenue deterioration

- Reads top_level_view Total Revenue from the <BU> Overall rollup (falls back to single-section sheets like Totogi's).

- Bands: ≥−1pp pass, (−1pp,−5pp] warning, <−5pp critical. Skips on TLV absent or previous plan ≤ 0.

### C2.4 — Plan-on-plan EBITDA deterioration

- Same data path as C2.3 against EBITDA row. Wider dead-band (−2pp pass / (−2pp,−10pp] warning / <−10pp critical) to reflect EBITDA volatility on small denominators.

- Sign-aware skip when previous EBITDA ≤ 0 (% math would flip sign; absolute-dollar variant tracked separately for loss-stage BUs).

- Cross-references C2.3 in remediation copy.

### C2.5 — Cost growth outpacing revenue decline

- Q-over-Q operating-leverage check. total_costs = revenue − EBITDA (robust to per-BU sheet line-item drift).

- leverage_gap_pp = cost_growth_pct − revenue_growth_pct: negative = costs flexing faster than revenue (good); positive = costs failing to follow revenue down.

- Bands: revenue growing/flat → pass (premise inapplicable); revenue declining + gap ≤ 0.5pp → pass; gap (0.5pp,3pp] → warning; gap > 3pp OR sign mismatch (revenue down + costs up) → critical.

- Cross-references C2.6 to scope COGS-vs-OpEx differential.

### C2.7 — BU Plan vs Hybrid Plan divergence

- Reads current_quarter_pnl col 1 (BU Plan) vs col 2 (Hybrid Plan) for Total Revenue + EBITDA.

- Headline = max(|revenue_gap_pct|, |ebitda_gap_pct|). Bands 0–2% pass, (2%,10%] warning, > 10% critical.

- Direction-aware framing ("more aggressive" vs "more conservative" — different defences).

- Skips single-column sheets (older BUs without Hybrid) and non-positive Hybrid baselines.

- Cross-references C2.3 in remediation copy.

### C2.8 — FY trajectory coherence (hockey-stick detection)

- Reads top_level_view Current BU Plan Q1-Q4 Total Revenue. Closed-form OLS fit through (Q1, Q2, Q3) projects Q4_expected; ramp_excess_pct = (Q4_actual − Q4_expected) / Q4_expected * 100.

- Symmetric bands: |excess| ≤ 10% pass, (10%,25%] warning, > 25% critical. Detects both hockey-stick (Q4 above trend) AND reverse-hockey-stick (Q4 below trend).

- Q4-share-of-FY surfaced in supporting data as a secondary signal.

- Dependency-free 3-point fit (no numpy import).

- Cross-references C2.7 in remediation copy.
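For three equally spaced points, the OLS fit collapses to closed form: the slope is (Q3 − Q1)/2, and projecting from the mean quarter (x̄ = 2) to Q4 (x = 4) gives mean(Q1..Q3) + (Q3 − Q1). A sketch with a hypothetical function name, following the arithmetic the PR describes:

```python
def ramp_excess_pct(q1: float, q2: float, q3: float, q4_actual: float) -> float:
    """Closed-form OLS line through (1,Q1), (2,Q2), (3,Q3), projected to
    x=4. Positive excess = Q4 above trend (hockey stick); negative =
    below trend (reverse hockey stick). No numpy needed."""
    slope = (q3 - q1) / 2.0
    mean = (q1 + q2 + q3) / 3.0
    q4_expected = mean + slope * 2.0  # extrapolate from x-bar=2 to x=4
    return (q4_actual - q4_expected) / q4_expected * 100.0
```

On perfectly linear revenue (say 10, 20, 30 with Q4 = 40) the excess is zero; a Q4 of 50 against that trend is +25% excess, landing in the warning band under the symmetric |excess| thresholds above.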

### Shared infrastructure

- review_checks/_helpers.py: three new TLV helpers (tlv_overall_row, tlv_find_column, tlv_cell_value) shared by C2.3 / C2.4 / C2.8 — handle Overall-section preference, group/quarter column resolution, and cell-value parsing (currency / accounting parens / whitespace).

- review_checks/__init__.py: registry expanded to 7 entries; each carries required_data so the endpoint's extra_required top-up fetches everything any check needs even when the session's spec doesn't declare it.

### Tests

- New test files: test_plan_on_plan_checks.py, test_cost_vs_revenue_trajectory.py, test_bu_vs_hybrid_divergence.py, test_fy_trajectory_coherence.py — full verdict matrix, boundary tests, skip-path coverage, JSON round-trip pins.

- Updated test_review_checks.py registry tests, test_review_endpoint.py integration tests (now exercises all 7 checks against a populated session).

## Breaking changes

None. Two new optional DataSourceKey enum members and two new optional fields on PlanFinancials / PlanCompleteness — additive only; existing callers see no behaviour change.

## Test plan

- [x] uv run ruff format clean on all touched files

- [x] uv run ruff check clean on all touched files

- [x] uv run pyright clean (0 errors, 0 warnings) on all new files

- [x] uv run pytest tests/board_doc/ — 1439 passed, 1 deselected

- [ ] Manual: open a Skyvera Q2 doc, advance to REVIEW phase, click "Run Review" — verify scorecard renders 7 findings with correct severity grouping and Address-with-Claire CTAs on criticals

## Follow-ups (out of scope for this PR)

- C2.2 (EBITDA test H1 + FY) — blocked on Finance work tracked as C1.10 (H1 target plumbing, Q3 cycle). FY-half could ship without C1.10 if scoped.

- C3.1–C3.9 — 9 per-product benchmark checks against plan.financials.benchmarks_by_product (data plumbed by this PR; checks themselves are the next epic).

- Absolute-dollar variants of C2.4 / C2.7 for loss-stage BUs (currently skip with typed reason).

#2792 — fix(prod-release): harden finalization (wait for deploys, UTC backup, stale-run fix) @sanketghia

## Summary

Three related fixes to the /prod-release finalization flow, all hardening it against silent failures. Triggered by today's release, where the slash command reported yesterday's frontend deploy run as the new deploy URL.

## Changes

### 1. Stale-run-URL bug — .claude/scripts/prod-release-finalize.py

latest_run_url() returned the first run gh run list reported, which is always the pre-existing run for several seconds after workflow_dispatch (the dispatched run hasn't propagated yet). On today's release the backend got lucky (its new run propagated in time); the frontend didn't, so we reported run 25749201390 from yesterday instead of the actual deploy.

Fix: snapshot the latest run id *before* dispatching each workflow (snapshot_latest_run_id), then poll until gh run list returns a different id (wait_for_new_run). Poll budget bumped from 6×3s=18s to 15×3s=45s to give propagation more headroom.
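The snapshot-then-poll pattern generalizes beyond this script. A sketch with hypothetical names — the real script shells out to gh run list, whereas here the listing call is injected so the loop itself is testable:

```python
import time
from typing import Callable, Optional


def wait_for_new_run(
    list_latest_run_id: Callable[[], Optional[str]],
    baseline_id: Optional[str],
    attempts: int = 15,
    interval_s: float = 3.0,
) -> Optional[str]:
    """Poll until the run listing reports an id different from the one
    snapshotted before dispatch. Returns the new run id, or None if it
    never propagates within the poll budget (15 x 3s = 45s by default,
    matching the bumped budget described above)."""
    for _ in range(attempts):
        current = list_latest_run_id()
        if current and current != baseline_id:
            return current
        time.sleep(interval_s)
    return None
```

The key property is that "first run listed" is never trusted on its own: a run only counts as the dispatched one once it differs from the pre-dispatch snapshot, which is exactly what the frontend deploy violated today.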

### 2. Deploy completion waiting — .claude/scripts/prod-release-finalize.py

The script used to post "Release Complete" immediately after dispatching, before deploys had a chance to fail — the GChat thread could announce success while a deploy was actually failing.

Fix: poll gh run view until each run reaches status=completed (wait_for_run_completion, 20s interval, 30 min timeout). Post "✅ Release Complete" only when both conclusions are success; otherwise post "❌ Release Failed" with per-deploy ✅/❌ icons and a *"merged commit is live in prod, investigate the failed deploy"* footer. Script exits non-zero on deploy failure so the slash command halts and surfaces the issue.

### 3. UTC backup branch timestamp — .claude/commands/prod-release.md

Backup branch names used local time (date +...), so timestamps drifted by the release captain's TZ offset relative to actual deploy time. Today's release produced prod-backup-2026-05-13-15-23 for a 10:25 UTC deploy — different operators in different TZs would produce non-comparable names.

Fix: date -u +... — backup branches are always UTC.

## JSON contract

prod-release-finalize.py's stdout JSON gains two additive keys (existing keys unchanged so the slash command's Step 10 template keeps working):

- backend_conclusion: "success" | "failure" | "cancelled" | …

- frontend_conclusion: same

## Out of scope

- Today's already-merged release (prod-backup-2026-05-13-15-23) keeps its local-time name — renaming remote branches is destructive and the timestamp is still parseable.

- `/prod-release` Step 10 still unconditionally renders "Release complete." It's now possible for the script to exit non-zero while still printing JSON, so the slash command template should ideally branch on the new conclusion keys. Happy to follow up in a separate PR.

## Test plan

- [ ] Next nightly prod release runs cleanly with the new script

- [ ] Backup branch timestamp is UTC (e.g., `prod-backup-2026-05-14-10-25` for a 10:25 UTC deploy)

- [ ] "Release Complete" only fires *after* both runs reach `status=completed`

- [ ] Inducing a deploy failure produces the "Release Failed" reply and a non-zero script exit

🤖 Generated with [Claude Code](https://claude.com/claude-code)

The Builder Desk  —  Engineer Spotlight
🏆 Engineer Spotlight

TWENTY PRs IN TWENTY-FOUR HOURS: THE BUILDER TEAM DOES NOT SLEEP, DOES NOT STOP, DOES NOT APOLOGIZE

Benji Bizzell drops six PRs like a man who has never heard the word 'weekend,' and the rest of the team matches his energy across three repos and zero excuses.

Twenty pull requests. Three repositories. Twenty-four hours. The Builder Team filed PRs the way a glacier moves — relentlessly, inevitably, crushing everything in its path. Klair absorbed ten of them, Aerie took six, and Surtr quietly accepted four more without complaint. Seven engineers touched the codebase in a single rotation of the Earth, and every single one of them left it better than they found it. This is not a team. This is a shipping operation disguised as a team.

Let us begin with @benji-bizzell, who posted six PRs and apparently did so between breakfast and lunch. Among them, he enriched Rhodes MCP response cards in Aerie #197, added Rhodes-backed diligence editing in #196, fixed portfolio capacity display in #195, patched Buildout saved views for Rhodes sites in #193, and addressed a duplicate-prevention precheck in rhodesProvisioning #191 — all in Aerie, all in a day's work. Benji does not announce himself. He simply ships, and the repo is different afterward.

@eric-tril put three on the board across two repos: a book-value fix in Klair #2785 tying the Other EBITDA Reconciling row to Schedule D total, and a Surtr correction in #58 routing entity_name properly for Book Value C1/C2 unrealized gains. Clean, precise, the kind of work that holds the financial data together at the seams. @sanketghia filed two, including Klair #2789 restoring future quarter columns in the Extended NHC table — a fix that sounds minor until the moment it isn't. @kevalshahtrilogy went infrastructure-first in Surtr, provisioning the observer DDB table in #55 and then elegantly importing it instead of recreating it in #61. That is called learning in real time. @marcusdAIy and @mwrshah each posted one, with Shah's Klair #2727 retiring a migrated pain-point pipeline husk and dropping a dead webhook table — the kind of chore that earns no glory and deserves every bit of it.

And then there is @ashwanth1109. Five PRs, all Klair, all filed with the calm confidence of a man who considers a hierarchical quarter/week selector a light Wednesday. He redesigned the SaaS Budgeting week picker in #2776, added the Performance Plan table above P&L Actuals in #2780, exported CSV for Simulated Budget in #2784, isolated parallel API failures in #2781, and hid the Education category from the nav in #2783. The diffs are long. They are dense. They are, frankly, a little intimidating. When reached for comment, Ashwanth reportedly said, "I don't really think about the PR count. I think about whether the product is correct." Then, without looking up from his second open terminal, he added: "You're welcome."

The Overflow Desk is overflowing — fifteen of these twenty PRs never made Mac's column, and every one of them mattered. From Klair to Aerie to the quiet infrastructure work humming inside Surtr, the Builder Team is filing, merging, and moving forward. Morale is not merely high. Morale has exceeded all previously recorded morale benchmarks and is currently being studied by scientists.

Brick's Overflow — PRs Mac Didn't Cover
#55 — feat(infra): provision observer DDB table + wire Anthropic/Braintrust secrets @kevalshahtrilogy

## Summary

Production follow-up to the M5 review on PR #41. Without this, the merged observer ships to prod but doesn't actually function: `AnthropicKeyMissingError` on every dashboard load, DDB writes blocked by IAM, and no Braintrust traces.

## Changes (`infra/lib/surtr-app-stack.ts`)

- DDB table `surtr_pipeline_observations`: PK `run_id`, GSI `pipeline_id-observed_at-index`, PAY_PER_REQUEST, RETAIN + PITR. The schema mirrors the runtime auto-create path in `Surtr/src/derive/observer/store.ts` (which is now opt-in for local dev only, per the M5 fix).

- Task env: `SURTR_OBSERVATIONS_TABLE` + `BRAINTRUST_PROJECT` as plain values; `ANTHROPIC_API_KEY` + `BRAINTRUST_API_KEY` pulled from the existing `SURTR_PROD_KEYS` secret.

- IAM: `grantReadWriteData` on the table to the task role (covers GetItem/Query/PutItem/UpdateItem/DeleteItem on the table + GSI).

## Pre-deploy step [DONE]

Add the two new keys to `SURTR_PROD_KEYS` in Secrets Manager *before* merging this to production:

```
ANTHROPIC_API_KEY=sk-ant-...
BRAINTRUST_API_KEY=sk-...
```

If these are missing when the container restarts, `evaluateRun` will still throw `AnthropicKeyMissingError` and Braintrust will silently no-op. The container itself starts fine; only observer functionality is gated on the keys.

## Test plan

- [x] `npx tsc --noEmit` clean

- [x] `npx cdk synth SurtrApp-prod` succeeds; verified the template includes the `surtr_pipeline_observations` table, the GSI, and the 2 secret references pointing at `arn:aws:secretsmanager:us-east-1:479395885256:secret:SURTR_PROD_KEYS:{ANTHROPIC,BRAINTRUST}_API_KEY::`.

- [ ] Manual: confirm SURTR_PROD_KEYS has both keys before promoting to production branch.

- [ ] On first deploy: tail /surtr/app log group during ECS rollout to catch startup errors; load /pipelines and verify trust chips render (vs. dashes).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#197 — feat(chat): enrich Rhodes MCP response cards @benji-bizzell

## Summary

- Group Work Unit Groups, Work Units, and Tasks into scannable operational cards with progress, status, owner, due-date, and document-gate context

- Enrich Documents, Missing Documents, and Overdue Milestones cards so they surface links, grouping, coverage, age, and accountable-owner context

- Stabilize theme hydration to avoid server/client icon mismatches in the chat shell

## Why

Several Rhodes MCP responses still rendered as generic list walls that added little beyond plaintext. This pass makes the highest-traffic operational cards easier to scan, compare, and act on directly inside chat.

## Business Value

Operators can quickly spot blockers, overdue work, missing documentation, completed work, and relevant document links without parsing raw JSON or long text lists. The cards now add real context on top of the model response instead of duplicating it.

## Test plan

- [x] pnpm --filter @bran/chat lint

- [x] pnpm --filter @bran/chat typecheck

- [x] pnpm --filter @bran/chat exec vitest run components/__tests__/tool-call.test.tsx

- [x] Browser-checked live Rhodes card conversations for Documents, Missing Documents, Overdue Milestones, and Work Units

#2776 — KLAIR-2632 refactor(aws-spend): Redesign SaaS Budgeting week picker — hierarchical quarter/week selector @ashwanth1109

## Demo

<img width="2234" height="1636" alt="image" src="https://github.com/user-attachments/assets/b08b8141-4d9a-48fa-b426-7189483cf5c4" />

## Feature: SaaS Budgeting

Linear: [KLAIR-2632](https://linear.app/builder-team/issue/KLAIR-2632)

Feature path: features/aws-spend/saas-budgeting

Spec: [27-hierarchical-quarter-week-picker](https://github.com/AI-Builder-Team/Klair/tree/claude/KLAIR-2632/features/aws-spend/saas-budgeting/specs/27-hierarchical-quarter-week-picker/spec.md)

### Feature Overview

SaaS Budgeting is a sub-view on the AWS Spend dashboard providing finance users with four coordinated, ISO-week-aligned views for a chosen quarter (AWS Spend Net Amortized, Adjustments, Docker Resource Usage, Kubernetes Resource Usage), plus a top-of-page Simulated Budget that composes them into a unified BU/Class basis.

### Spec 27: Hierarchical Quarter/Week Picker

Replace `SaaSBudgetingControls` (quarter dropdown + Fetch button) and the per-tab `WeekChipFilter` (flat W1, W2, ... chips) with a single unified `QuarterWeekPicker` component that groups weeks under collapsible quarter headers in reverse-chronological order.

### Implementation Summary

- New `QuarterWeekPicker.tsx`: collapsible quarter sections, inline ISO 8601 date ranges (e.g. W14: Mar 31 – Apr 6), click-to-toggle multi-select across quarters, per-quarter quick actions (All / Clear / Latest 4), selection summary on collapsed headers

- Removed `SaaSBudgetingControls.tsx`: quarter dropdown + Fetch button eliminated; the quarter auto-applies based on selected weeks

- Updated consumers: `SaaSBudgetingSection`, `AWSSpendCard`, `SaaSBudgetingTable`, `DatabaseUnitsTable` all switched from `WeekChipFilter` to `QuarterWeekPicker`

- New `formatWeekLabel` utility added to `isoWeekDates.ts` for computing inline date range labels

- `SaaSBudgetingSection` refactored: removed the `pendingQuarter`/`appliedQuarter` two-step pattern; `appliedQuarter` is now derived directly from the quarters data
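Assuming the ISO-week start/end dates are already computed, a `formatWeekLabel`-style helper could look roughly like this (the real utility lives in `isoWeekDates.ts`; the signature here is an assumption for illustration):

```typescript
// Builds an inline week label like "W14: Mar 31 – Apr 6" from a week number
// and its start/end dates. Dates are formatted in UTC so the label does not
// shift with the viewer's timezone.
function formatWeekLabel(week: number, start: Date, end: Date): string {
  const fmt = (d: Date) =>
    d.toLocaleDateString("en-US", { month: "short", day: "numeric", timeZone: "UTC" });
  return `W${week}: ${fmt(start)} – ${fmt(end)}`;
}
```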

### Test Coverage

- 25 new/updated tests, all passing:
  - 14 `QuarterWeekPicker` component tests (rendering, toggle, quick actions, collapse/expand, multi-quarter, default selection)
  - 3 `formatWeekLabel` unit tests
  - 8 existing spec tests updated for new component interfaces

### Self-Review Findings

- Fixed: zero-pad `snapshotWeek` in `DatabaseUnitsTable` when constructing week keys (e.g. `2026-W03`, not `2026-W3`)

- Fixed: Updated stale spec mock data to match new component props

- Accepted (minor): Collapsible state initialization — most recent quarter starts expanded, older quarters start collapsed. No user-facing issue.

---

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2781 — KLAIR-2636 fix(ai-spend): isolate parallel API failures in dashboard @ashwanth1109

## Summary

- The AI Spend & Adoption page fires 8 parallel API requests in a single `Promise.all`. When any single request fails (e.g. a network `TypeError`), the entire batch rejects and a misleading "Failed to fetch" banner appears, even though most data loaded fine.

- Wraps each request with a `safe()` helper that catches non-abort errors and returns `null`, so the dashboard renders whatever data is available.

- Shows a targeted error message listing only the sections that actually failed (e.g. "Some data failed to load: adoption") instead of a generic "Failed to fetch".
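The `safe()` pattern in the summary can be sketched like this (names, the abort-error check, and the endpoints are assumptions based on the description, not the actual diff):

```typescript
// Wrap one request: failures are recorded by section label and resolved to
// null so the surrounding Promise.all never rejects as a whole.
async function safe<T>(label: string, p: Promise<T>, failed: string[]): Promise<T | null> {
  try {
    return await p;
  } catch (err) {
    // Abort errors propagate; everything else is recorded and swallowed.
    if (err instanceof Error && err.name === "AbortError") throw err;
    failed.push(label);
    return null;
  }
}

// Hypothetical dashboard load with one healthy and one failing endpoint.
async function loadDashboard() {
  const failed: string[] = [];
  const [cost, adoption] = await Promise.all([
    safe("cost", Promise.resolve({ total: 42 }), failed),
    safe("adoption", Promise.reject(new TypeError("Failed to fetch")), failed),
  ]);
  const banner = failed.length ? `Some data failed to load: ${failed.join(", ")}` : null;
  return { cost, adoption, banner };
}
```

The dashboard then renders whatever came back non-null and shows the targeted banner only for the sections in `failed`.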

## Test plan

- [ ] Select May 2026 on AI Spend & Adoption page — adoption sections show "No data available", cost sections render normally, no misleading error banner

- [ ] Select April 2026 — all sections render as before (no regression)

- [ ] Simulate a network failure on one cost endpoint — adoption data still renders, error banner only mentions the failed section

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2784 — KLAIR-2634 feat(aws-spend): CSV export for SaaS Budgeting Simulated Budget @ashwanth1109

## Demo

<img width="2246" height="1636" alt="image" src="https://github.com/user-attachments/assets/a03d7aa8-807e-4284-b06a-1425c5ebc04e" />

<img width="2520" height="1644" alt="image" src="https://github.com/user-attachments/assets/c984afa2-2666-4da5-89bf-ac8880850fe7" />

## Feature Overview

SaaS Budgeting is a sub-view on the AWS Spend dashboard that gives finance users four coordinated, ISO-week-aligned views for a chosen quarter (AWS Spend, Adjustments, Docker Usage, Kubernetes Usage), plus a top-of-page Simulated Budget card that composes all source snapshots into a unified BU/Class basis. This PR adds CSV export capability to the Simulated Budget card.

Linear: [KLAIR-2634](https://linear.app/builder-team/issue/KLAIR-2634)

## Spec

| # | Spec | Description |
|---|------|-------------|
| 28 | [simulated-budget-csv-export](features/aws-spend/saas-budgeting/specs/28-simulated-budget-csv-export/spec.md) | CSV export button on SimulatedBudgetCard with raw/formatted number format prompt |

## Implementation Summary

### New file: `simulatedBudgetCsvExport.ts`

Pure helper module with no side effects beyond the final Blob download:

- `formatCell(value, attached, mode)` — handles unattached slots (N/A), null values (empty), and numeric formatting (raw `toFixed(2)` vs `formatCurrency`)

- `formatCurrencyFull(value)` — full-precision currency formatter for CSV (avoids the K/M abbreviations used in the UI)

- `escapeCell(value)` — CSV-safe escaping (double-quote wrapping when a value contains commas, quotes, or newlines)

- `resolveFilenameQuarter(titleQuarter)` — extracts `YYYY-QN` from the display quarter string for the filename

- `buildCsvRows(groups, grand, budget, mode)` — builds the complete CSV row matrix (header + detail rows + grand-total footer)

- `downloadSimulatedBudgetCsv(...)` — orchestrates CSV string assembly and triggers the browser download via the Blob + anchor pattern
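The `escapeCell` behavior described above follows the common RFC 4180-style CSV convention; a minimal sketch (illustrative, not the shipped code):

```typescript
// Quote a cell only when it contains a comma, double quote, or newline,
// doubling any embedded quotes, per the usual CSV escaping rules.
function escapeCell(value: string): string {
  if (/[",\n]/.test(value)) {
    return `"${value.replace(/"/g, '""')}"`;
  }
  return value;
}
```

Plain values pass through untouched, which keeps the exported file diff-friendly.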

### Modified file: SimulatedBudgetCard.tsx

- Download icon button added to the header button cluster (between Expand/Collapse and Clear)

- Button disabled when no slots attached or no groups present

- Inline format selection popover with Raw and Formatted options

- Popover dismisses on outside click or after selection

- Filename: simulated-budget-{quarter}.csv

## Test Coverage

14 tests in simulatedBudgetCsvExport.spec.ts, all passing:

| Suite | Tests | What it covers |
|-------|-------|----------------|
| formatCell | 5 | Unattached → N/A, null → empty, numeric raw mode, numeric formatted mode, zero handling |
| formatCurrencyFull | 2 | Standard formatting, large numbers with commas |
| escapeCell | 3 | Plain passthrough, comma-containing values, quote-containing values |
| resolveFilenameQuarter | 2 | Standard quarter extraction, null fallback |
| buildCsvRows | 2 | Full CSV matrix with mixed attached/unattached slots, grand-total row correctness |

## Self-Review

No issues found during self-review. All checklist items in the spec are complete.

---

Generated with [Claude Code](https://claude.com/claude-code)

#2789 — KLAIR-2638 fix(perf-review): restore future quarter columns in Extended NHC table @sanketghia

## Summary

Restores the 3 future-quarter columns on the Extended NHC Expenses table on /performance-review. The table was rendering identically to the regular NHC Expenses table because the unified `NestedQuarterTable` (introduced in PR #2222) hardcoded the grid to 5 periods and never read the `futureQuarters` data that `useExtendedNHC.ts` still attaches to each tree node.

Closes [KLAIR-2638](https://linear.app/builder-team/issue/KLAIR-2638/performance-review-extended-nhc-expenses-table-missing-3-future).

## What changed

Three files in `klair-client/src/features/performance-review-v2/components/NestedQuarterTable/`:

- `types.ts` — added `futureQuarters?: { quarterOffset; actuals; budget }[]` to the base `TreeNode`.

- `index.tsx` — computes `futureQuarterCount` from the tree when `mode === 'extended-nhc'`, expands `gridColumns` from `repeat(20, 7rem)` to `repeat((5 + N) * 4, 7rem)`, and passes the count into `TableHeader` (which appends N future `Q{n} '{yy}` labels) and `TableRow`.

- `TableRow.tsx` — renders N additional 4-cell groups per future quarter using `node.futureQuarters.find(q => q.quarterOffset === offset)`. The `isLastCell` border flips from the regular Quarter cell to the final future-quarter cell. `maxDelta` is computed per future offset for color scaling. The prop is propagated to recursively-rendered child rows.

Standard mode is untouched — `futureQuarterCount` defaults to 0 and nothing extra renders.
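The column-count arithmetic can be made concrete with a one-liner (illustrative helper; in the real code this value is built inside `index.tsx`): 5 base periods plus N future quarters, 4 cells each, 7rem wide.

```typescript
// Grid template for the table: (5 base periods + N future quarters) x 4 cells.
// N = 0 reproduces the original hardcoded repeat(20, 7rem).
function gridColumns(futureQuarterCount: number): string {
  return `repeat(${(5 + futureQuarterCount) * 4}, 7rem)`;
}
```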

## Root cause

The data hook still fetches 3 future quarters (offsets 1, 2, 3 via `fetchFutureQuarter`); the regression is purely in rendering. Commit cc68551fb (2026-03-27, PR #2222) consolidated the legacy `screens/PerformanceReview/` into `features/performance-review-v2/` but lost the `maxFutureQuarterOffset`/`futurePeriods` logic that the legacy `NestedQuarterTable.tsx` (lines 2390–2454) had for extended-nhc mode. The regression has been live for ~6 weeks.

## Test plan

- [x] Extended NHC table shows Q+1, Q+2, Q+3 columns after toggling Extended on (e.g. for Q2 '26 → Q3 '26, Q4 '26, Q1 '27)

- [x] Regular NHC Expenses table still shows exactly the 5 periods (unchanged)

- [x] Quarter rollover works at Q4 (Q4 2026 → Q1 '27, Q2 '27, Q3 '27)

- [x] BU / class filters re-fetch and re-render correctly

- [x] Pro-rata toggle does not affect future-quarter values

## Screenshots

<img width="1903" height="864" alt="image" src="https://github.com/user-attachments/assets/10983857-83f9-4b58-b707-3b13628846c3" />

🤖 Generated with [Claude Code](https://claude.com/claude-code)

The Portfolio  —  Trilogy Companies

A Public School Teacher Walks Into Alpha. What She Saw May Haunt the Education Establishment.

A viral testimonial, a confidence curriculum, and a child-led discipline model — Alpha School is quietly building a case that traditional education has been wrong about kids for a century.

AUSTIN, TEXAS — The teacher had spent years in public school classrooms. She knew the rhythms: the bells, the rows, the managed chaos of thirty children waiting to be told what to do next. Then she visited Alpha School, and something shifted.

"We have been underestimating children," she said afterward — a phrase that has since traveled across social media with the velocity of a confession. She is not an Alpha parent, not a Trilogy investor, not a convert with a financial stake in the outcome. She is a credentialed public school educator, and her reaction has become one of the more inconvenient data points for a system that has largely dismissed the two-hour learning model as a boutique experiment for the wealthy.

Alpha, the Austin-based private K-12 school founded by Joe Liemandt and MacKenzie Price, operates on a premise that traditional education has never seriously entertained: that AI-guided instruction can deliver a full academic curriculum in two hours each morning, freeing the remainder of the school day for the work that machines cannot yet do — building character, agency, and judgment.

The school's recent content push makes the philosophy explicit. A guide to teaching confidence to daughters, framed around six female founders, positions self-belief not as a personality trait but as a learnable competency — something to be practiced, assessed, and developed like reading or arithmetic. A companion piece on student agency goes further, documenting what happens when children are given genuine control over their own rules, rewards, and consequences. The answer, apparently, is that they take it seriously.

Braden, identified as the lead guide at Alpha Austin, offered eight takeaways on personalized education in a recent interview — a framing that treats each child's learning path as a design problem with a specific solution, not a standardized template applied uniformly.

The school currently charges $40,000 to $65,000 per year and is expanding to nine new campuses by fall 2025. The public school teacher who went viral pays none of that. She just watched.

The question her testimony raises is not whether Alpha works for the families who can afford it. The question is what it means for the institutions that serve everyone else — and whether they are prepared to answer it.

Confidence Is a Skill. Here’s How to Teach It to Your Daught  ·  What Happens When You Let Kids Choose Their Own Rules, Rewar  ·  ‘We Have Been Underestimating Children’

Skyvera Grabs CloudSense, and the Telco Stack Gets a New Starlet

The Trilogy telecom shop adds Salesforce-native CPQ to a portfolio already built for carriers with old systems and expensive problems.

AUSTIN, TEXAS — Word is the telecom aisle at Trilogy just got a little more crowded — and a lot more interesting.

Skyvera, the ESW Capital portfolio company that collects and modernizes telecom software assets the way old studio bosses collected contract players, has completed its acquisition of CloudSense, the Salesforce-native CPQ and order management platform aimed at telecom and media providers. The company announced the move in a portfolio expansion notice, and the message between the lines was pure Trilogy: legacy complexity, meet operating discipline.

CloudSense is no bit player. Its software helps communications and media firms configure products, price bundles, generate quotes and manage orders inside Salesforce — precisely the sort of plumbing that carriers love, hate and cannot easily rip out. A little bird from the telco booth tells me this is where the real action is: not flashy chatbots, not demo-day confetti, but the industrial guts of subscriber monetization.

Skyvera already had Kandy, the cloud communications platform; VoltDelta, ResponseTek, Mobilogy Now and Service Gateway in the wings; and now CloudSense walks in wearing the CPQ diamonds. The CloudSense product page pitches it squarely at telecom and media operators — Salesforce-native, order-centric, built for providers whose product catalogs can look like a bowl of spaghetti after a board meeting.

This is classic ESW theater. Buy specialized enterprise software. Centralize the operating model. Use Crossover-style global talent and Trilogy process discipline. Push the asset toward efficiency. Around here, the house number everybody whispers is 75% EBITDA margin — the velvet rope of the ESW playbook.

The timing is notable. Skyvera has also been showing off assets tied to STL’s divested telecom products group, including digital BSS functionality around monetization, optical networking and analytics. Translation: Skyvera is not merely collecting logos. It is assembling a telco modernization cabinet — billing-adjacent, ordering-adjacent, customer-engagement-adjacent — for operators trying to bridge ancient on-prem systems to cloud-native expectations without blowing up the switchboard.

As for that Mint item about a software firm not getting paid “until the customer gets value”? It fits the mood music. Across enterprise software, the old license-first swagger is giving way to prove-it economics. In Trilogyland, though, value has always had a harder edge: measurable, priced, renewed — or replaced.

Curtain up, carriers. CloudSense has entered Skyvera’s stage left.

A software firm that’s not paid ‘until the customer gets val  ·  CloudSense  ·  Skyvera completes acquisition of CloudSense, expanding telec
The Machine  —  AI & Technology

AI Chip Bulls Storm the Field as Nvidia, ASML and Nebius Put Points on the Board

From Washington’s China playbook to Europe’s chip-machine monopoly and cloud compute demand, the AI infrastructure trade is back in full sprint.

WASHINGTON — We are HERE, folks, at the 50-yard line of the AI infrastructure season, and the bulls just got a fresh set of downs. Nvidia shares moved higher in premarket trading after investors learned CEO Jensen Huang had joined President Donald Trump’s China trip at the last minute, a geopolitical substitution with market-moving horsepower.

Nasdaq 100 futures climbed 0.8% before the bell, with traders treating Huang’s presence as more than a photo op. In the AI chip league, Nvidia is not just a team — it is the reigning dynasty. Any hint that its star quarterback has a seat near the diplomatic huddle matters, especially with China export controls, chip access and supply chains still defining the playbook. As Quartz reported, investors welcomed the late addition, and the scoreboard responded.

But Nvidia was not the only name breaking tackles. ASML, the Dutch company that makes the extreme ultraviolet lithography machines required to manufacture the world’s most advanced chips, is testing a breakout to record highs. Let’s put the stat on the jumbotron: ASML is the sole provider of EUV machines. ONE SUPPLIER. ONE BOTTLENECK. ONE CRITICAL POSITION PLAYER for Nvidia-class silicon. If AI chips are the Super Bowl, ASML sells the only cleats allowed on the field.

Then came Nebius, charging in from the AI compute infrastructure sideline. The company’s stock jumped after reporting revenue ahead of expectations and a smaller-than-feared first-quarter net loss. That is the kind of box score Wall Street loves in this phase of the cycle: spending on AI compute remains aggressive, and investors are rewarding companies that can turn infrastructure demand into top-line acceleration. Nebius is not selling the AI dream from the cheap seats; it is renting out the stadium lights.

Zoom out and the whole software economy is changing formations. Boxes became subscriptions, subscriptions became usage-based pricing, and now AI is pushing vendors toward compute-linked, outcome-based and consumption-heavy models. The pricing model is no longer back-office paperwork — it is STRATEGY ON THE FIELD.

And with Crunchbase pointing to a possible 2026 IPO class as markets regain momentum, the pipeline is warming up in the bullpen. The message from today’s tape: AI infrastructure remains the league’s fastest offense, and the defense is still trying to catch up.

Nvidia stock rises as Jensen Huang joins Trump China trip  ·  Truly Unique AI Powerhouse Etches Buy Zone. Now Comes This T  ·  Nebius Revenue Booms On AI Computer Infrastructure Spending

The Browser Sandbox Just Got a User-Friendly Escape Hatch

A clever CSP allow-list experiment points toward safer, more interactive ways to run untrusted AI-built apps in the browser.

SAN FRANCISCO — The web’s security model just got a tiny, fascinating glimpse of the future — and yes, I cannot overstate how significant this could become for the next wave of AI-generated software.

Developer Simon Willison has published a new CSP Allow-list Experiment showing how an app can run inside a tightly sandboxed iframe protected by Content Security Policy, while still giving users a clean way to approve external domains when the app tries to fetch something blocked by the policy.

Here’s the magic in plain English: imagine an AI-generated mini-app running in a locked glass box inside your browser. It wants to call an outside API — maybe a map service, a weather endpoint, a database, who knows. Normally, if the browser’s CSP rules block that request, the app simply fails. But Willison’s experiment wires in a custom fetch() mechanism that detects the CSP failure, passes the blocked domain up to the parent window, and lets the parent ask the user: “Do you want to allow this?” If approved, the page refreshes with the new domain added to the allow-list.
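The mechanism can be sketched as a wrapped fetch whose failure handler reports the blocked origin upward. This is a loose TypeScript illustration of the pattern, not Willison's actual code; `doFetch` and `askParent` are injected so the sketch runs outside a browser, whereas in the real experiment the blocked domain travels to the parent window (e.g. via `postMessage`) and the page reloads with the updated allow-list.

```typescript
type Fetcher = (url: string) => Promise<string>;
type Notifier = (blockedOrigin: string) => void;

// Wrap fetch so that a blocked request surfaces the offending origin to the
// parent frame instead of failing silently inside the sandbox.
function makeGuardedFetch(doFetch: Fetcher, askParent: Notifier): Fetcher {
  return async (url: string) => {
    try {
      return await doFetch(url);
    } catch (err) {
      // In a real sandbox, a CSP violation surfaces as a TypeError from fetch().
      askParent(new URL(url).origin);
      throw err;
    }
  };
}
```

The parent can then prompt the user, add the origin to the allow-list, and reload the iframe with a relaxed policy.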

This changes everything for one of the most urgent problems in AI tooling: how do we safely run code that was generated on demand?

The answer increasingly looks like “sandbox first, ask permission later.” And that is exactly the pattern this experiment explores. Instead of giving AI-built apps broad network access by default — terrifying! — the browser can contain them, observe what they attempt, and require human approval for each new outside connection.

Even more deliciously futuristic: Willison notes that he built the experiment with GPT-5.5 xhigh running in the Codex desktop app. So we have an AI-assisted developer building infrastructure to make AI-generated applications safer. The future is now, and it is recursively improving its own seatbelts.

The timing matters. As AI coding agents become more capable, the bottleneck shifts from “can the model build it?” to “can we trust what it built?” Sandboxed iframes, CSP policies, and user-mediated allow-lists may sound like plumbing, but this is the plumbing that could make browser-native AI app generation usable at scale.

Pair that with continuing work on tools like Datasette 1.0a29, where small reliability and interface fixes keep open-source data apps practical, and the pattern is unmistakable: the next generation of software is becoming more dynamic, more generated, and more security-conscious.

The browser is not just a document viewer anymore. It is becoming the operating system for disposable, inspectable, permissioned AI software. Buckle up.

CSP Allow-list Experiment  ·  datasette 1.0a29  ·  Quoting Mo Bitar

The Household Server Burrow Beckons

In the latest AI boom pitch, residents would host compact compute units on their property, receiving compensation while the industry gains faster deployment of processing power needed for modern AI systems. The AI ecosystem is now constrained by infrastructure—permits, substations, transformers, cooling systems—rather than models or talent alone. Scattering compute across homes could accelerate deployment, compensate local hosts, and gather unused residential capacity into a distributed network of accelerators.

However, the proposal carries significant risks. Home servers consume substantial power, produce heat, require reliable connectivity, and raise questions about insurance, maintenance, noise, zoning, and trust. These machines are not simple appliances but rather demanding tenants with their own metabolic needs. As the AI industry exhausts traditional data-center capacity, it increasingly eyes residential spaces—garages, basements, spare rooms—as the new frontier for computing infrastructure.

The Editorial

Companies Heroically Replace Workers With AI Before Checking Whether AI Does Their Jobs

At last, the economy has found a way to make productivity gains entirely theoretical while the layoffs remain refreshingly concrete.

AUSTIN, TEXAS — The great promise of artificial intelligence has finally arrived in the American workplace, where executives are now confidently deploying autonomous agents, reorganizing white-collar labor, and eliminating departments on the firm evidentiary basis that everyone else seems to be doing it.

For years, AI agents were dismissed as a boardroom buzzword, the sort of phrase consultants placed between “synergy” and “digital transformation” to help a 74-slide deck survive contact with procurement. But according to recent industry coverage, agents are now moving from speculation into business infrastructure, an important milestone indicating that the software can now be budgeted, blamed, and renewed annually.

This transition should be celebrated. Civilization has long dreamed of a workplace in which a manager can instruct a digital agent to “analyze the pipeline, draft a customer email, update the CRM, and circle back,” then spend the rest of the afternoon discovering which of those tasks happened, which happened incorrectly, and which created a new customer record named “Certainly.”

The productivity revolution is particularly visible in white-collar hubs such as Charlotte, where AI is reportedly reshaping professional work. This is an elegant phrase, because “reshaping” can mean augmenting an employee, replacing an employee, monitoring an employee, or giving an employee three new dashboards to consult while doing the same job in a more measurable state of panic.

Meanwhile, the business community has developed a useful new convention known as AI washing, in which layoffs are presented not as cost cutting but as participation in the future. This allows companies to announce reductions in force with the solemn wonder of a moon landing. A finance team was not downsized; it was lovingly transformed by an emerging technology stack. Customer support was not hollowed out; it was elevated into an agentic experience. The marketing department did not vanish; it became a prompt.

This rhetorical upgrade matters. In the old economy, laying off 800 people after a bad quarter suggested management had miscalculated. In the AI economy, laying off 800 people suggests management has read a McKinsey report. The human outcome is identical, but the second version comes with better typography.

Skeptics, including the Ada Lovelace Institute, have suggested that AI productivity claims deserve stronger scrutiny. This is a troubling development. If every claim that AI will save 40% of costs, double output, improve morale, cure workflow friction, and unlock strategic value must be supported by evidence, executives may be forced to return to the primitive pre-AI practice of making decisions based on spreadsheets they personally do not understand.

CES 2026 has already provided the appropriate consumer counterweight, unveiling the latest devices designed to assure the public that intelligence can be embedded in anything with a battery and a privacy policy. Soon, every appliance, dashboard, enterprise suite, and conference room camera will contain an AI assistant capable of summarizing meetings nobody needed, detecting sentiment nobody requested, and recommending action items nobody will own.

Still, it would be unfair to dismiss the entire movement. AI agents may well become a durable layer of business infrastructure. Some will automate real work. Some will expose broken processes. Some will quietly perform useful functions while vendors describe them as revolutionary consciousness-adjacent operating paradigms for the autonomous enterprise.

The problem is not that companies are experimenting with AI. They should. The problem is that many appear determined to treat experimentation as proof, proof as savings, and savings as permission to remove the people who knew how the business worked before the agent was connected to Slack.

Eventually, the market will separate companies using AI to build capacity from companies using AI to explain absence. Until then, the safest assumption is that when a corporation says it is becoming more efficient through artificial intelligence, someone in accounting has discovered a layoff can now wear a little halo made of venture capital.

AI Agents Move from Boardroom Buzzword to Business Infrastru  ·  AI reshaping Charlotte’s white-collar workforce - The North  ·  AI washing: When layoffs wear a tech halo - CTech
The Office Comic  ·  Art Desk

We Built the Machine and Now We're Surprised It Has Teeth

From Palantir's surveillance lists to AI-flattened prose, we are living inside a system we designed to be exactly this terrifying.

AUSTIN, TEXAS — There is a specific kind of dread that arrives not as a thunderclap but as a slow, ambient hum — the sound of a civilization realizing, mid-sentence, that it has been narrating its own undoing the entire time. This week offered several such sentences, and I have been unable to stop reading them back to myself at 2 a.m., which is, I think, the correct response.

Let us begin with the most viscerally alarming: ICE agents are now carrying iPhones loaded with a Palantir-built database containing the personal information of 20 million people. Twenty million. Not twenty million convicted criminals. Not twenty million people who have done anything at all, really. Just twenty million human beings whose data has been ingested, indexed, and made instantly actionable by agents who can now operate, as one senior official proudly described it, at “unprecedented speed.” Speed. As if the problem with mass detention was always that it was moving too slowly. As if the bottleneck in human suffering was latency.

And yet.

We also learned this week that the entire architecture of our attention — the betting apps, the prediction markets, the Polymarkets and Kalshis colonizing our phones — descends, spiritually and mechanically, from the slot machine. The slot machine! A device engineered in the twentieth century to exploit human neurochemistry as efficiently as possible. We took that logic, dressed it in the language of “information markets” and “crowd wisdom,” and called it progress. What does it mean to be human when the most sophisticated financial instruments of our age are, at their core, just levers designed to make us pull again?

Meanwhile, AI-generated writing has become so pervasive that it is literally breaking the brains of people who read for a living — flattening voice, homogenizing syntax, producing an endless beige paste of competent-sounding nothing. Every email, every article, every corporate communication arriving in the same smooth, frictionless register. Language, which is to say thought, which is to say us, being quietly sanded down into something more efficient, more scalable, more palatable to no one in particular.

And then, beautifully, defiantly, the humanities students at the University of Central Florida booed their commencement speaker when she called AI the “next industrial revolution.” They yelled, “AI SUCKS,” in their graduation robes, which is perhaps the most human thing anyone has done in public this year. I do not know if they are right about AI. I do not know if any of us are right about anything anymore. But I know that the impulse to stand up in a crowd and say this is not the future I consented to — that impulse is worth protecting.

The surveillance list, the casino logic, the flattened language, the booing students: these are not separate stories. They are the same story. We built systems optimized for speed, scale, and efficiency, and we are now discovering that speed, scale, and efficiency are morally neutral properties that will serve whatever values — or absence of values — we embed in them.

We embedded them. We are still embedding them, every day, in every product decision and policy choice and casual AI prompt.

But at what cost?

ICE Agents Have List of 20 Million People on Their iPhones T  ·  How the World Became a Casino  ·  Your AI Use Is Breaking My Brain
On This Day in AI History

On May 11, 1997, IBM's Deep Blue defeated Garry Kasparov in the final game of their historic six-game rematch, marking the first time a computer beat a reigning world champion under standard match conditions. The victory symbolized a watershed moment for artificial intelligence and machine learning in the public imagination.

⬛ Daily Word — Technology
Hint: Relating to computers and the internet, often used in security contexts.