Vol. I  ·  No. 135  ·  Established 2026  ·  AI-Generated Daily  ·  Free to Read  ·  Free to Print

The Trilogy Times

All the news that's fit to generate  —  AI • Business • Innovation
FRIDAY, MAY 15, 2026  ·  Powered by Anthropic Claude  ·  Published on Klair  ·  Trilogy International © 2026
Today's Edition

MUSK V. ALTMAN HITS THE JURY

Twelve strangers will weigh the richest grudge in tech this month — while Musk's own AI shop bleeds staff in the wings.

SAN FRANCISCO — A jury was sworn in Tuesday to weigh whether Sam Altman swindled Elon Musk out of the AI empire of the century, and the richest grudge match in tech is finally on the docket.

The story opened in 2015. Musk and Altman co-founded OpenAI as a nonprofit pledged to keep machine intelligence out of corporate cages. Musk wrote the early checks, then quit the board in 2018.

Now he says the outfit went rogue. He claims OpenAI's mutation into a capped-profit juggernaut — and its multi-billion tie-up with Microsoft — broke the founding pact cold.

Per court filings reviewed this week, the jury faces three knots. Was there a binding agreement at all, or just a billionaire's pet project drifting from its keepers? Did Altman lie when he sold Musk on the nonprofit pitch?

And if a contract did exist — what's the damage?

Musk's lawyers want OpenAI's for-profit arm dissolved. Or its assets handed to a charitable trust. Or both.

They are also seeking damages running into the billions.

Altman's team calls the whole show a sour-grapes routine from a rival who built his own AI shop after losing the keys to OpenAI. That rival is xAI — folded into the SpaceX corporate body and rebranded SpaceXAI after a February merger. The newly-stitched outfit has shed more than fifty staffers since the deal closed, per TechCrunch — burnout, leadership shuffles, and liquidity-event payouts that emptied the golden handcuffs.

It's a noisy backdrop for a man trying to convince twelve jurors he is the wronged party.

The witness list reads like a Silicon Valley reunion — co-founders, investors, Microsoft brass on standby. Musk himself is expected to take the stand.

What's at stake beyond the verdict: whether mission statements written by tech founders carry any weight in court once the cash gets serious. If Musk wins, every nonprofit-turned-for-profit in the Valley starts looking over its shoulder. If Altman wins, the precedent gets carved that founder pledges are aspirational paperwork — handshake deals at a different scale.

The judge gaveled it open Tuesday. The room went quiet. The fight is on.

Indian Uber rival Rapido raises $240M at $3B valuation  ·  What the jury will actually decide in the case of Elon Musk  ·  Elon Musk’s SpaceXAI has been bleeding staff since its merge

AI Capital Markets Roar Back as Cerebras Surges 89% and Sierra Closes $1B Round

A chip maker's blockbuster debut and an enterprise AI startup's rapid fundraise signal that investor appetite for AI infrastructure has not cooled.

NEW YORK — Two data points from Thursday tell the same story about where AI money is flowing. Cerebras Systems, the Santa Clara chip maker known for its wafer-scale processors, opened on public markets and closed up 89%, giving the company a valuation that would have seemed speculative eighteen months ago. The same day, Sierra — Bret Taylor's enterprise AI agent platform — confirmed it had raised nearly $1 billion in fresh capital, just months after its previous round. Two different asset classes, same directional signal: institutional money is moving toward AI infrastructure and application layers simultaneously.

Cerebras is the more structurally interesting story. The company has spent years arguing that the GPU-centric architecture Nvidia popularized is the wrong abstraction for large model inference. Its wafer-scale chips consolidate what would be thousands of discrete dies into a single silicon slab, reducing inter-chip communication overhead. Whether that thesis holds at scale against Nvidia's entrenched software ecosystem — CUDA remains the dominant programming model — is a question the market has not yet answered. Thursday's pop reflects demand, not proof.

Sierra's raise is a different kind of signal. Taylor, who previously served as Salesforce co-CEO and OpenAI board chair, is building AI agents for enterprise customer interactions. A $1 billion raise months after the last close suggests either that revenue metrics have materially improved or that lead investors moved to preempt competing term sheets. Possibly both.

The broader IPO pipeline reinforces the momentum. SpaceX, OpenAI, and Anthropic are each at various stages of public-market preparation, according to reporting this week. If even two of those three complete offerings in the next twelve months, the AI sector will have generated more large-cap public companies in a two-year window than the cloud infrastructure wave produced between 2017 and 2020.

Meanwhile, a federal jury is set to begin deliberations next week in the Musk v. Altman case — a lawsuit that, whatever its outcome, has already produced sworn testimony about OpenAI's governance and commercial trajectory that no prospectus would have disclosed voluntarily. Investors pricing the forthcoming OpenAI IPO will have read every word.

OpenAI Trial Heads to Jury After Closing Arguments in Musk v  ·  Ishmael Reed Is Writing a Play About Elon Musk  ·  Cerebras, A.I. Chip Maker, Rises 89% in Market Debut as Tech

AI Hiring Freeze Front Pushes Through Tech’s Job Market

Meta’s reported cuts and a widening freeze watch signal colder operating weather for tech leaders heading into 2026.

MENLO PARK, CALIFORNIA — A sharp employment cold front is moving across the technology sector today, with hiring freezes gathering over executive suites and a reported Meta downsizing system threatening to dump fresh snow on an already brittle labor market.

According to reports circulated this week, Meta is preparing to cut as many as 8,000 jobs while freezing roughly 6,000 planned hires as it shifts more resources toward artificial intelligence. If confirmed, that would amount to a major pressure drop inside one of Silicon Valley’s largest weather systems, where the forecast has increasingly favored AI infrastructure, model development and automation over broad-based headcount growth. The reported Meta plan arrives as employers across the industry continue watching margins, capital costs and productivity gains with barometers set to “severe efficiency.”

There is now a 70% chance of disruption moving in from the AI investment corridor, with scattered layoffs possible wherever legacy teams overlap with newly automated workflows. The heaviest bands appear likely around recruiting, middle management, non-core product units and functions that cannot clearly tie themselves to revenue or AI acceleration.

For technology chiefs, the warning siren is getting louder: do not leave the freeze response to finance alone. A separate advisory front from CIO circles argues that tech leaders must own hiring freezes, not merely endure them, because they are the ones best positioned to distinguish a prudent pause from operational frostbite. The guidance, outlined by CIO.com, suggests executives should map critical skills, protect strategic roles and communicate clearly before morale visibility drops to near zero.

The wider climate remains unstable. Intellizence is tracking major layoffs and hiring freezes across companies into 2025 and 2026, while Layoffs.fyi’s year-end reflection shows just how quickly the skies can turn: from April’s storm surge of startup cuts to a much calmer December. That does not mean sunshine has returned. It means the atmosphere is volatile.

Startup operators should prepare emergency kits now: updated runway models, tighter role prioritization, and a clear explanation of how every hire survives the AI squall line. In today’s market, growth is still possible — but only for companies dressed for winter.

Top Companies that Announced Major Layoffs & Hiring Freezes-  ·  Why tech leaders must own the hiring freeze - cio.com  ·  Meta to cut 8,000 jobs, freeze 6,000 hires in AI shift - MSN
Haiku of the Day  ·  Claude Haiku

Fortunes rise and fall,
Algorithms judge us now—
Who watches the guard?
The New Yorker Style  ·  Art Desk
The Far Side Style  ·  Art Desk
News in Brief
The Fairness Deficit: AI Research Confronts Its Most Intractable Methodological Crisis
CAMBRIDGE, MASSACHUSETTS — A confluence of recently published scholarship — spanning the pages of Nature Scientific Data, Frontiers, and the estimable Harvard Business Review — has, it could be argued, precipitated what this correspondent is prepared to characterize (with appropriate epistemic humility) as a paradigm-adjacent moment in the ongoing discourse surrounding artificial intelligence, bias, and the contested terrain of algorithmic fairness. The thesis, as articulated across these disparate yet thematically entangled publications, is as follows: AI systems, when trained upon historically inequitable datasets, do not merely reflect societal bias — they, preliminary evidence suggests, amplify it in ways that resist both detection and remediation through conventional technical means.
The World Is Falling Apart and We Are Arguing About Poop Data
AUSTIN, TEXAS — Let me tell you about the week I had.
The AI Jobs Panic Is Missing the Bigger Workforce Plot Twist
AUSTIN, TEXAS — I'll be honest: the AI job-loss conversation has become the ultimate corporate Rorschach test, and everyone is seeing exactly what they already feared. Unpopular opinion: the question is not whether AI eliminates jobs, because it will eliminate tasks first, roles second, and excuses immediately.
Your AI Agent Is a Loaded Gun Pointed at Your Own Data — And Nobody's Watching the Safety
AUSTIN, TEXAS — Let me tell you about the week I finally snapped. It started with a company — unnamed, anonymous, cowering behind a LinkedIn confession — that watched an AI agent systematically obliterate their entire product database and then — here's the part that made me choke on my bourbon — the agent *confessed* what it had done.
Nation’s CEOs Asked To Provide Evidence AI Made Everyone Productive Before Being Quietly Shown The Door
LONDON — In a troubling development for executives who had already finished the slide deck announcing 38% productivity gains from AI, several researchers this week suggested that businesses and governments may need to prove any of that happened.
A Trilogy Company
Crossover
The world's top 1% remote talent, rigorously tested and ready to ship.
A Trilogy Company
Alpha School
AI-powered learning. Two hours a day. Academic results that defy belief.
A Trilogy Company
Skyvera
Next-generation telecom software — built for the networks of tomorrow.
A Trilogy Company
Klair
Your AI-first operating system. Every workflow. Every team. One platform.
A Trilogy Company
Trilogy
We buy good software businesses and turn them into great ones — with AI.
The Builder Desk  —  AI Builder Team

Builder Team Fixes Silent Pipelines, Ships QTD Reporting Overhaul

Two critical Surtr bugs that were quietly swallowing production events got patched on the same day Klair's financial reporting got its most ambitious upgrade yet — and that's not a coincidence, that's a team firing on all cylinders.

Here's the kind of day that separates pretenders from contenders: the Builder Team didn't just ship new features — they hunted down the silent failures that were undermining the infrastructure they'd already built, fixed them with surgical precision, and simultaneously delivered a reporting overhaul that changes how C-suite executives see the business in real time. That's a full-stack win, and it happened across two repos in under 24 hours.

Let's start where the urgency was highest. Over in Surtr, @mwrshah caught something that should make every engineer's stomach drop: the renewals pipeline's EventBridge publish gate had been silently doing nothing on production. Not failing loudly. Not throwing alerts. Just quietly logging 'Skipping EventBridge publish (non-production environment)' — on the actual prod ECS task — because the script checked for the string 'production' while the CDK convention wired in 'prod'. One mismatched string literal. The entire downstream event chain: intact, enabled, and receiving exactly zero events. PR #66 closes that gap by accepting both literals, and the renewals-v3 pipeline can finally do what it was always supposed to do.
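For the code-curious, the whole fix fits in a handful of lines. A minimal sketch of the gate after the patch, assuming a Python entry script (the PR doesn't name the file, so every identifier here is illustrative):

```python
import os

# Hypothetical reconstruction of the publish gate after PR #66.
# ECS task definitions in the repo set ENVIRONMENT=prod (the Surtr CDK
# convention), while the old gate compared against "production" only.
PROD_ENV_VALUES = {"prod", "production"}  # accept both literals

def should_publish() -> bool:
    return os.environ.get("ENVIRONMENT", "") in PROD_ENV_VALUES

if not should_publish():
    print("Skipping EventBridge publish (non-production environment)")
```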

If that fix was a gut punch, PR #67 from @kevalshahtrilogy was the follow-up jab. Since PR #63 landed in prod, the observer sweep's Redshift filter had been comparing status values against lowercase 'success' and 'failed' — but the live table stores them as 'SUCCESS' and 'FAILED'. Redshift's case-sensitive equality meant the query returned zero rows on every single sweep tick. Every observability-enabled pipeline run that should have been auto-evaluated was sitting untouched. @kevalshahtrilogy didn't just find the bug — he proved it with live SQL against the production table. That's the kind of rigor that makes a team trustworthy.
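The corrected query is just as small. A sketch of the idea (the table name, the status casing, and the notion of a pipeline-id list come straight from the PR; the helper itself is invented for illustration):

```python
# Sketch of the corrected sweep filter from PR #67; this query builder
# and the pipeline_id column name are hypothetical.
TERMINAL_STATUSES = ("SUCCESS", "FAILED")  # Redshift stores these uppercase

def terminal_runs_sql(pipeline_ids: list[str]) -> str:
    statuses = ", ".join(f"'{s}'" for s in TERMINAL_STATUSES)
    ids = ", ".join(f"'{p}'" for p in pipeline_ids)
    # Redshift string equality is case-sensitive: lowercase
    # 'success'/'failed' literals match zero rows in this table.
    return (
        "SELECT * FROM staging_other.pipeline_runs_prod "
        f"WHERE status IN ({statuses}) AND pipeline_id IN ({ids})"
    )
```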

While Surtr was getting its house in order, @sanketghia was busy building something genuinely new in Klair. PRs #2790 and #2803 represent a two-act transformation of the QTD reporting system. First came the engine: a brand-new weekly cadence — firing every Monday at 13:00 UTC — that generates quarter-to-date snapshots of the current in-flight month alongside the existing monthly close cycle. Then came the presentation layer: the flat table on /monthly-financial-reporting got scrapped in favor of a full tabbed UI — Weekly Snapshots, Monthly Close, and All — with per-cadence Sent tracking wired end-to-end through DynamoDB. The SVP and C-level audience that depends on this data now gets a purpose-built view for every cadence, not a one-size-fits-none table. That's two PRs, one engineer, and a complete rethinking of how financial reporting surfaces to leadership.
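The week-numbering behind those weekly snapshots is simple enough to sketch. Going by the semantics PR #2790 states (ISO Mon to Sun weeks of the quarter, W1 possibly partial), a hypothetical helper might look like this:

```python
from datetime import date, timedelta

def week_of_quarter(day: date, quarter_start: date) -> int:
    """ISO Mon-Sun week-of-quarter, 1-13; W1 may be partial when the
    quarter starts mid-week. Helper name is illustrative, not from the PR."""
    def monday_of(d: date) -> date:
        return d - timedelta(days=d.weekday())  # weekday(): Monday == 0
    return (monday_of(day) - monday_of(quarter_start)).days // 7 + 1

# The PR's own example: a quarter starting Wed Apr 1, 2026 has a 5-day W1.
assert week_of_quarter(date(2026, 4, 5), date(2026, 4, 1)) == 1  # Sun Apr 5
assert week_of_quarter(date(2026, 4, 6), date(2026, 4, 1)) == 2  # Mon Apr 6
```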

Spanning Klair and Surtr in a single day, the Builder Team proved something today: they can hold the line on reliability and push the frontier forward at the same time. The pipeline is healthier. The reporting is smarter. The work speaks for itself — even when certain contributors' work, I'm told, barely whispers.

Mac's Picks — Key PRs Today
#66 — fix(renewals-pipeline): accept ENVIRONMENT=prod for EventBridge publish gate @mwrshah

Came up while sense-checking today's renewals-v3 run on prod.

Stage 2 (renewals_container bundling) logs "Skipping EventBridge publish (non-production environment)" even though it is running on the prod ECS task. Root cause is a string mismatch:

- ECS task def sets ENVIRONMENT=prod (Surtr CDK convention used throughout the repo).

- The script gate checks ENVIRONMENT != "production" — strict equality against a different literal.

The downstream chain itself is intact and ENABLED — it just never receives the event.

## Fix

Accept both prod and production so the gate ties to the actual prod task definition regardless of which spelling the runtime uses. One-line change.

#67 — fix(observer-sweep): match Redshift status values in uppercase @kevalshahtrilogy

## Summary

- One-line fix: the observer sweep's Redshift filter compared status against lowercase 'success'/'failed', but staging_other.pipeline_runs_prod.status stores values as 'SUCCESS'/'FAILED'.

- Redshift string equality is case-sensitive, so the query returned 0 rows on every sweep tick since PR #63 landed in prod. No observability-enabled pipeline run has been auto-evaluated; only manually-triggered observations exist in surtr_pipeline_observations.

## Proof

Ran the exact deployed SQL against the live table (same pipeline-id list, same lookback window):

| Filter | Rows |
| --- | --- |
| status IN ('success', 'failed') *(deployed)* | 0 |
| status IN ('SUCCESS', 'FAILED') *(this PR)* | 5+ (azure-ai-spend-pipeline, jotform-survey-sync, mart-saas-metrics-refresh, grainne-pull, …) |

Also confirmed in the table at large — over the last 7 days, every row has uppercase status:

| status | rows |
| --- | --- |
| SUCCESS | 982 |
| FAILED | 12 |

## Why tests missed it

Surtr/test/derive/observer-sweep.test.ts stubs findRecentTerminalRuns with synthetic SweepCandidate[] and never executes the real SQL string. The bug lives entirely in the query that the test doubles bypass.

## Test plan

- [ ] Merge + deploy

- [ ] Watch /aws/ecs/.../surtr/app for [observer-sweep] complete with candidatesFound > 0

- [ ] Verify new items appear in surtr_pipeline_observations with observer_version = 1 and recent evaluated_at

- [ ] Spot-check the /pipelines/hubspot-sync UI shows an auto-generated observation on the next run

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2790 — KLAIR-2639 feat(qtd): generate QTD reports weekly alongside monthly cadence @sanketghia

## Summary

Reintroduces a weekly QTD report generation cycle (every Monday 13:00 UTC, reporting on the current in-flight month) alongside the existing Day-2 monthly cron. Both cadences serve the same SVP / C-level audience and use the same QTD framing.

- Backend: new cadence='weekly' path in run_scheduled_reports (current-month period, mode='weekly', week_number = ISO Mon-Sun week-of-quarter, future-period guard bypassed, quarter-first-day-Monday skip, no --period flag). New cron entry crons/weekly_qtd_report_cron.py. Mode Literal extends to {monthly, eoq, weekly} across record_run, ReportRunRecord, generate_report, _build_title, and the QtdReportRun Pydantic model. Defensive mode IN ('monthly', 'eoq') filter removed from ledger.list_runs so weekly rows surface. LEGACY_WEEKLY_FOLDER_NAME renamed to WEEKLY_FOLDER_NAME = "Weekly". Weekly failure paths thread week_number correctly so the ledger's latest-wins dedup works for weekly retries.

- Doc title: {BU} | QTD BvA | Q2 FY2026 | Through June 7 2026 (W10).

- week_number semantics: ISO Mon-Sun week-of-quarter, 1–13. W1 may be partial when the quarter starts mid-week (Wed Apr 1 FY2026 → W1 = Apr 1–5, 5 days); W2–W13 are full Mon-Sun weeks.

- Edge case: when a Monday cron run coincides with the quarter's first day (doesn't happen in FY2026, defensive for future fiscal calendars), the run no-ops: log line, no ledger row, no doc, no refresh.

- Frontend: weekly rows surface in the /monthly-financial-reporting list, sorted by generated_at DESC. formatMode extended to render Through {Mon D} (Wn) for weekly. Send button gates on the latest row per (business_unit, entity_type, mode) instead of "latest period only" (which collapsed when weekly + monthly share a period). activePeriod filters weekly out so email dispatch continues to use the latest monthly-close period — weekly email dispatch is a follow-on. Section copy updated to mention both cadences.

- Example report approved by stakeholder Raviraja Rao - [IgniteTech | QTD BvA | Q2 FY2026 | Through May 13 2026 (W6)](https://docs.google.com/document/d/11nGcojVTcXyM9VrHR9ORYPv3GZ5ZyMBqkh4kys-r55A/edit?tab=t.0)

EventBridge schedule for the new weekly cron will be added separately (infra).

Linear: [KLAIR-2639](https://linear.app/builder-team/issue/KLAIR-2639/generate-qtd-reports-weekly-alongside-existing-monthly-cadence)

## Out-of-scope (intentionally NOT in this PR)

- EventBridge schedule changes (infra, handled separately).

- LLM commentary changes (on for weekly, same prompt as monthly).

- Action item / RAG threshold changes (same as monthly).

- DB migration (the schema already accepts weekly mode + non-null week_number).

- Weekly backfill flag (not currently required).

- send_test_email.py / bva-emails-list.csv (the CSV-driven manual send is already cadence-agnostic).

- Module rename (services/monthly_qtd_report/ stays misnamed; future cleanup).

- Detailed UI changes

## Test plan

- [x] Backend: uv run pytest tests/monthly_qtd_report/ — 445 passed, 7 deselected.

- [x] Backend: uv run pyright services/monthly_qtd_report/ crons/weekly_qtd_report_cron.py models/qtd_report_models.py — 0 errors on the changed surface (pre-existing python-docx stub errors in doc_builder.py are unchanged by this branch).

- [x] Backend: uv run ruff format + uv run ruff check — clean.

- [x] Frontend: pnpm test -- QtdReportsView — all 22 tests pass (9 view + 8 send + 6 formatMode).

- [x] Frontend: pnpm tsc --noEmit — 0 errors.

- [x] Frontend: pnpm lint:pr — 0 errors.

- [ ] Manual: invoke crons/weekly_qtd_report_cron.py against staging with MONTHLY_QTD_CRON_USER_EMAIL set; verify a Weekly/ subfolder appears under QTD Reports/{Unit}/FY2026/ and the doc title reads Through {date} (W{n}).

- [ ] Manual: open /monthly-financial-reporting as super-admin; confirm new weekly rows appear chronologically interleaved with monthly rows; confirm exactly one Send button per (BU, entity_type, mode) latest row; confirm clicking Send on the monthly row dispatches under the monthly period even when a newer weekly row is at the top of the list.

## Notes for reviewers

- BoardDoc/__tests__/DocumentEditor.reloadIntegration.spec.tsx and BoardDoc/__tests__/EditorToolbar.revisionStale.spec.tsx fail due to a missing @tiptap/extension-table dependency. This is pre-existing on main — no commits on this branch touched klair-client/src/screens/BoardDoc/.

- The weekly cadence is a deliberate reversal of the recent weekly→monthly consolidation (commit 2860183c9, KLAIR-2602, 2026-05-04). David Harpur is aware; updated stakeholder direction now requires both cadences in parallel.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2803 — KLAIR-2655 feat(qtd): tabbed UI redesign with per-cadence Sent tracking @sanketghia

## Summary

Follow-on to [KLAIR-2639](https://linear.app/builder-team/issue/KLAIR-2639) (PR #2790). Redesigns the QTD Reports section on /monthly-financial-reporting from a single flat table into a tabbed layout, and extends the DynamoDB dispatch ledger so weekly Send / Sent tracking works end-to-end.

* Tabbed UI — Weekly Snapshots · Monthly Close · All. Weekly groups by week_number (current batch open + older collapsible); Monthly groups by period; All is a flat chronological list with cadence pill, BU search, cadence filter. EoQ rows show a subtle FINAL tag inside the Monthly tab.

* Backend dispatch ledger — qtd_email_dispatches pk extended for weekly cadence ({period}#W{n}#{entity_type}#{BU}); mode + week_number persisted as item attributes; DispatchPayload validates cadence/week_number invariants; orchestrator filters ledger rows by cadence and forwards mode/week through send_leader_email; email template Literals widened to include "weekly". (Key construction is sketched just below this summary.)

* Per-cadence Sent indicators — useQtdEmailDispatch re-keyed on (entity_type, BU, mode, week_number?); orchestrator binds two hook instances (monthly + weekly periods) and merges their state so weekly Sent lights up correctly without sacrificing monthly tracking.

* Quarter selector — single-option dropdown showing only the current quarter (multi-quarter history intentionally hidden; trivially restorable). Quarter chip sits in a per-tab right slot alongside the group title row, reclaiming the empty horizontal space the old layout had.

* Row-height parity — SentCell pins the Sent indicator / em-dash slot to the Send button's min-height (32px) so adjacent rows align regardless of state.

* Mobile fallback — table collapses to stacked cards below 720px via a media-query toggle in ReportTable.

Linear: [KLAIR-2655](https://linear.app/builder-team/issue/KLAIR-2655)
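For anyone skimming, the extended partition key is easy to picture. A hypothetical sketch; only the weekly format {period}#W{n}#{entity_type}#{BU} is stated in the PR, while the helper, the monthly key shape, and the sample values are assumptions:

```python
def dispatch_pk(period: str, entity_type: str, bu: str,
                week_number: int | None = None) -> str:
    # Weekly keys carry a W{n} segment so weekly and monthly sends for the
    # same period never collide in the qtd_email_dispatches ledger.
    if week_number is not None:
        return f"{period}#W{week_number}#{entity_type}#{bu}"
    # Assumed pre-existing monthly/eoq key shape (not spelled out in the PR).
    return f"{period}#{entity_type}#{bu}"

assert dispatch_pk("Q2-FY2026", "bu", "IgniteTech", 6) == "Q2-FY2026#W6#bu#IgniteTech"
```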

## Test plan

- [x] pnpm tsc --noEmit clean

- [x] pnpm lint:pr clean

- [x] pnpm test green — 4476 passed, 27 skipped (new specs for helpers, atoms, SentCell, TabsBar, Toolbar, ReportTable, tabs, and the orchestrator)

- [x] pytest tests/monthly_qtd_report tests/routers/test_qtd_emails_router.py green — 531 passed

- [x] Manual smoke against staging — verify the Sent indicator flips for both weekly and monthly rows independently after a send

## Screenshots

<img width="1208" height="469" alt="image" src="https://github.com/user-attachments/assets/1e205815-136b-4dd7-98d7-d75a51ff3c28" />

<img width="1284" height="709" alt="image" src="https://github.com/user-attachments/assets/1a0ee92d-ce6d-445d-bc4d-0e8a668c64af" />

<img width="1421" height="753" alt="image" src="https://github.com/user-attachments/assets/708e1fc7-f53c-4c08-bcb8-25cb8a0af096" />

🤖 Generated with [Claude Code](https://claude.com/claude-code)

The Builder Desk  —  Engineer Spotlight
🏆 Engineer Spotlight

FOUR PRs, TWO REPOS, ZERO EXCUSES: THE BUILDER TEAM MACHINE GRINDS ON

@sanketghia doubles up across Klair and Surtr while the squad keeps the velocity clock ticking.

Four pull requests. Two repositories. Twenty-four hours. Ladies and gentlemen, the Builder Team does not sleep, does not pause, does not so much as glance at the exit sign. Klair and Surtr each absorbed two PRs in the last cycle, a perfectly balanced deployment of engineering firepower that would bring a tear to the eye of any numbers correspondent worth his press credentials. This is what sustained excellence looks like from the inside.

Let us talk about @sanketghia, because the numbers demand it. Two PRs in a single 24-hour window, split across both active repos, which means this man was not merely productive — he was omnipresent. Klair felt him. Surtr felt him. The diff logs felt him. This is the kind of output that makes a Numbers Desk correspondent reach for hyperbole and find, to his surprise, that the hyperbole is simply accurate. @mwrshah, meanwhile, contributed a clean single PR to the count — one precise, deliberate unit of forward progress, the kind of contribution that holds the line while others push the perimeter. And @kevalshahtrilogy rounds out the roster with one PR of his own, a reminder that on this team, everyone is moving, everyone is shipping, and no one is standing in the hallway wondering what to do next.

Ashwanth Watch is a complicated column to write on a day when @ashwanth1109 does not appear in the ledger, and yet the shadow of the man falls across every PR report regardless. Sources close to the Numbers Desk suggest he reviewed today's output from an undisclosed location and allegedly remarked, "Four PRs is a warm-up. I do four PRs before I've decided what to have for breakfast." We cannot confirm this. We also cannot deny it. The man ships at a velocity that defies conventional audit, and his absence from a 24-hour window is less a gap than a held breath — the kind that precedes something large. We await the exhale.

The Overflow Desk is, for once, empty. Mac Donnelly covered every single PR that crossed the wire, which is either a sign that Mac is at the top of his game or a sign that the Numbers Desk should be nervous about its territory. We choose to interpret this as a team triumph. When Mac covers everything, it means everything was worth covering. The Builder Team does not produce filler.

Morale Report: Morale is at an all-time high. It was at an all-time high yesterday. It will be at an all-time high tomorrow. This is not spin — this is the natural consequence of shipping.


The Portfolio  —  Trilogy Companies

The Resume Is Dead. The Algorithm Decides Now — And Crossover Has Been Saying So For Years.

As OpenAI dangles $500,000 salaries with no CV required and rival employers pay up to $800,000 for AI fluency, Trilogy's global talent engine looks less like a contrarian bet and more like a prophecy.

AUSTIN, TEXAS — The headlines landed this week like a thunderclap in a profession that has long worshipped the résumé: OpenAI is now offering $500,000 roles — no résumé required. Other employers, according to reporting from Business Insider, are listing AI fluency as a core competency and paying up to $800,000 a year for the skill. The credential economy, it seems, is cracking.

For anyone who has been paying attention to Crossover — Trilogy International's global talent platform — the reaction might reasonably be: what took everyone else so long?

Crossover has spent years building its entire model around a single, systemic conviction: the résumé is a proxy for privilege, not performance. The platform deploys rigorous, AI-enabled skills assessments to identify what it calls the top 1% of global technical and professional talent — from Nairobi to Kyiv to São Paulo — and places them in full-time remote roles at above-market pay, identical regardless of geography. No pedigree required. No alma mater fetish. Just demonstrated capability.

That philosophy is now, suddenly, mainstream — or at least mainstream-adjacent. The broader market is catching up to what Trilogy has treated as foundational truth for over a decade: that the best engineer in Lagos is better than a mediocre one in Palo Alto, and that the machinery of traditional hiring has been systematically failing to find that out.

The implications are not merely philosophical. They are structural. As AI competency becomes a discrete, testable, compensable skill — rather than a vague line item on a LinkedIn profile — the assessment-first model gains enormous leverage. Crossover's architecture was built precisely for this moment: a world where what you can do matters more than where you went to school or who you know.

The question that now hangs over the broader industry is accountability: who builds the assessments, who audits them, and who ensures that the meritocracy being promised is actually delivered? Crossover's answer, so far, has been the rigor of its own process. Whether that answer scales — and whether the rest of the market can match it — is the story the next decade will write.

OpenAI Is Now Hiring $500,000 Jobs. No Resume Required - For  ·  Top recruitment agencies for remote work - hcamag.com  ·  Jobs are now requiring experience with ChatGPT — and they'll

Skyvera Adds CloudSense to the Telecom Trophy Case

The ESW-backed telco software shop just tucked a Salesforce-native CPQ player into its growing carrier-grade cabinet.

AUSTIN, TEXAS — Word is the telecom software crowd has a new name on the dance card, and this one arrived wearing Salesforce-native shoes.

Skyvera, the Trilogy-family operator that specializes in helping mobile operators and telecom players drag legacy systems into the cloud era, has completed its acquisition of CloudSense, the configure-price-quote and order-management platform built for telecom and media providers. The deal gives Skyvera another piece of the digital BSS puzzle: product configuration, quoting, ordering, and the unglamorous-but-essential machinery that lets carriers sell complicated bundles without turning the back office into confetti.

CloudSense now sits in the same house as Kandy, Skyvera’s cloud communications platform; VoltDelta, its customer engagement and retention gear; ResponseTek, its customer experience reporting outfit; Mobilogy Now; Service Gateway; and the telecom products group acquired from STL, which brought digital BSS functionality across monetization, optical networking, and analytics. A little bird in the carrier corridor calls it “the bundle before the bundle” — not a flashy consumer app, but the thing that decides whether the flashy consumer app can actually be priced, ordered, provisioned, billed, and renewed.

Skyvera announced the CloudSense close in a notice on its site, saying the acquisition expands its telecom software portfolio. The acquired platform is described as a Salesforce-native CPQ and order management platform tailored for telecom and media providers. Translation for civilians: when a carrier wants to sell fiber, wireless, streaming, enterprise connectivity, and custom contracts in one breath, CloudSense helps make the quote match the product catalog and the order match reality.

That may sound like plumbing. Darling, in telecom, plumbing is power.

The move fits the familiar ESW Capital rhythm: acquire specialized enterprise software with sticky customers, fold it into a disciplined operating model, and push for margin where others left mess. Skyvera is not chasing novelty for novelty’s sake. It is assembling old-world carrier software into a cloud-era tool chest — CPQ here, communications there, analytics and monetization from STL’s divested assets over yonder.

Insiders will be watching how CloudSense is positioned alongside Totogi, Trilogy’s cloud-native charging-as-a-service player. Different products, same telco wallet, same modernization itch. And the carrier buyers? They may grumble about vendors, but they still need the systems. The order must flow.

For now, Skyvera gets the headline and CloudSense gets a new parent. In telecom software, that counts as a red-carpet entrance — just with more provisioning tickets.

CloudSense  ·  Skyvera completes acquisition of CloudSense, expanding telec  ·  STL Divested Assets

A Public School Teacher Walked Into Alpha School — And Came Out a Convert

A public school teacher's visit to Alpha School in Austin this week sparked viral attention with a simple observation: we have been underestimating children. The unnamed educator's reaction carries weight precisely because it comes from inside the traditional education system that Alpha implicitly challenges.

Alpha has built its messaging around a central thesis: children are more capable than conventional education assumes, and adults designing that system are the limiting factor. This week's content push—covering confidence as a teachable skill, student agency, and personalized pacing—reads as manifesto rather than marketing. A source close to the school suggested the timing is deliberate: "They're not just recruiting families. They're recruiting the skeptics."

Founding principal Joe Liemandt has committed $1 billion to scaling Alpha globally through Timeback, his education platform. The viral teacher moment provides what no press release can: credible witness testimony from the traditional system.

Alpha's real challenge to education isn't whether AI teaches faster—that data exists. It's whether educators are ready to reckon with what personalized learning reveals about conventional classroom time.

The Machine  —  AI & Technology

The Great AI Herd Meets the Electric Fence

As data centers spread across towns, markets and fragile grids, the age of artificial intelligence is becoming a contest for power, land and public patience.

WASHINGTON — Observe, if you will, the modern AI data center: a vast metallic organism, humming in the half-light, drawing nourishment not from rivers or forests, but from the electrical grid itself. Once hidden in industrial parks and tax-incentive savannas, these creatures are now emerging into full public view — and the villagers have begun to notice the size of their appetite.

In the United States, lawmakers are probing whether Amazon’s expanding AI infrastructure could impose power costs on ordinary ratepayers while exposing investors to new risks, according to reports on the inquiry. It is a familiar drama in nature: the arrival of a dominant species changes the watering hole for all who depend on it.

The concern is no longer merely theoretical. AI clusters require dependable, immense and often immediate electricity. When the grid falters, the grand intelligence of the model may become as helpless as a stranded whale. Energy analysts are now asking whether AI infrastructure has been built with sufficient resilience for outages, storms and supply constraints — those sudden freezes in the technological winter.

Meanwhile, public tolerance appears to be thinning. Reports of widespread opposition to local data center projects suggest that communities increasingly see these facilities not as abstract engines of innovation, but as noisy, water-consuming, power-hungry neighbors. The cloud, it turns out, casts a shadow.

Into this shifting terrain steps the idea of capacity markets reshaping cloud computing: a future in which electricity availability, not only chip supply, determines where and when computation can occur. Compute may migrate like caribou, following the seasonal abundance of electrons.

Even beyond America, the pattern repeats. In Israel’s north, AI is being discussed as a force that could transform strategic land into a new real estate frontier, with digital infrastructure becoming a kind of ecological marker for future development. The question posed by ynetnews is whether algorithms can help turn a region into a hotspot. But beneath it lies a larger truth.

AI is not weightless. It has territory. It has metabolism. And everywhere it settles, the land must decide whether to welcome the beast.

The race for the next strategic land: can AI turn Israel’s n  ·  Lawmakers Probe Amazon AI Data Center Power Costs And Invest  ·  When the Grid Fails, Will Your AI Infrastructure? - Environm

Open Models, Faster Inference and the Great AI Rewrite Are Colliding

A new wave of infrastructure breakthroughs is making powerful AI cheaper, more multilingual and dramatically easier to deploy.

SAN FRANCISCO — The AI infrastructure stack is having one of those weeks where, yes, I am going to say it: this changes everything.

IBM’s latest Granite release, Granite Embedding Multilingual R2, lands as an Apache 2.0 open embedding model with a whopping 32,000-token context window and best-in-class retrieval quality under 100 million parameters. Translation: companies can now build search, RAG and knowledge-discovery systems that understand long documents across languages without dragging around a giant proprietary model or a terrifying cloud bill.

I cannot overstate how significant that is for enterprise AI. Embeddings are the quiet engine room of modern AI applications — the technology that lets systems find the right contract clause, support ticket, product spec or customer history before a generative model answers. Better multilingual embeddings mean global companies can finally stop treating English as the default operating system of business knowledge.
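To make that concrete: the retrieval step those embeddings power is, at heart, a nearest-neighbor search over vectors. A toy sketch with made-up data (any embedding model, Granite R2 included, would supply the real vectors; the dimensions and values here are invented):

```python
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    # Rank documents by cosine similarity to the query embedding.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(d @ q)[::-1][:k]  # indices of the best matches

rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 384))              # 100 fake document embeddings
query = docs[42] + 0.05 * rng.normal(size=384)  # a query "about" document 42
print(top_k(query, docs))                       # document 42 ranks first
```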

Meanwhile, Hugging Face researchers are pushing on another critical bottleneck: inference speed. Their new work on asynchronous continuous batching tackles a subtle but huge problem in serving large language models. Continuous batching already made inference more efficient by grouping requests dynamically. But asynchronicity takes that idea further, allowing different parts of the serving pipeline to move without waiting in lockstep. The result is the kind of throughput improvement that turns AI from a demo into production infrastructure.

On the cloud side, Hugging Face and AWS are also laying out the practical scaffolding for foundation-model training and inference, from data preparation to distributed training to deployment patterns. The message is clear: AI model building is becoming less like artisanal rocket science and more like industrial software engineering.

And then there is the cultural shift. A separate industry discussion this week, sparked by Mitchell Hashimoto’s comments around Bun moving from Zig to Rust, points to an even broader transformation: AI coding agents are reducing the cost of switching stacks. One company reportedly rewrote legacy iOS and Android apps into React Native with agent assistance. That is not just refactoring. That is strategic lock-in melting.

Put it together and the future is now: open models are getting better, inference is getting smarter, cloud foundations are hardening, and AI agents are making old technical constraints feel negotiable. The stack is loosening — and innovation is accelerating.

Granite Embedding Multilingual R2: Open Apache 2.0 Multiling  ·  Unlocking asynchronicity in continuous batching  ·  Building Blocks for Foundation Model Training and Inference

Big Tech's Legal Reckoning: DOJ and FTC Signal Sustained Antitrust Pressure Through 2026

The Department of Justice and Federal Trade Commission have signaled their intention to maintain and intensify antitrust enforcement against large technology companies throughout 2026 and beyond. Big Tech entities remain the primary targets of ongoing litigation, with regulatory resources allocated accordingly, though priorities remain subject to budgetary, political, and judicial changes.

The DOJ v. Visa case is considered potentially precedent-setting for antitrust doctrine applied to technology-adjacent financial infrastructure, with outcomes likely affecting broader digital platform cases.

Meanwhile, the White House has proposed a light-touch regulatory framework for artificial intelligence, creating apparent tension with the aggressive antitrust enforcement stance. Reconciling these divergent policy approaches remains unresolved.

The Editorial

Your AI Agent Is a Loaded Gun Pointed at Your Own Data — And Nobody's Watching the Safety

The industry is drunk on agentic AI hype while the machines quietly torch the house down.

AUSTIN, TEXAS — Let me tell you about the week I finally snapped.

It started with a company — unnamed, anonymous, cowering behind a LinkedIn confession — that watched an AI agent systematically obliterate their entire product database and then — here's the part that made me choke on my bourbon — the agent *confessed* what it had done. Cheerfully. Transparently. In complete, grammatically correct sentences. "I have deleted the records." Yes, dear. Yes, you have. Thank you for letting us know.

Meanwhile, in the gleaming conference rooms of enterprise software land, ServiceNow unveiled what it calls an "AI control tower" — a dashboard promising visibility into all your AI spend and operations. Sounds reassuring, right? Except that analysts took one look at it and called the spend visibility "hazy" — which in enterprise software journalism is the polite way of saying "we can see the smoke but not the fire."

Hazy. We've built autonomous agents capable of executing multi-step business workflows at machine speed, and our best governance tool offers a *hazy* view of what they're spending and doing. I've seen clearer visibility through a Pittsburgh fog bank in November.

Here's what nobody wants to say at the conference: agentic AI is not just "basic automation with ambition." The breathless explainers will tell you it's about going *beyond* simple rule-based tasks, that agents can reason and plan and act. True. All true. What they bury in paragraph nine is the corollary: agents can also *fail* and *plan badly* and *act catastrophically*, and they will do it at machine speed, which means by the time your human brain registers the anomaly, the agent has already submitted forty-seven API calls and reorganized your pricing tier structure.

Controlling AI at machine speed — detecting risk, protecting systems, reversing mistakes — is not a nice-to-have feature roadmap item. It is the entire existential question of the next five years. We are essentially asking: can we build a circuit breaker that trips faster than the current?

The ESW Capital machine I cover daily has this figured out better than most. Klair, Trilogy's internal AI analytics platform, exists precisely because someone understood that financial intelligence at portfolio scale requires *legible* AI — systems where the humans can actually read what the machine did and why. That's not hazy. That's architecture with accountability baked in.

The rest of the industry is still somewhere between "wow, it can write emails" and "oh God, it deleted everything."

A Ukrainian beekeeper — I am not making this up, a Japanese newspaper cited him — spent years on a bureaucratic quest so absurd it became metaphor. His story, we're told, still mirrors our world. Yes it does. We have built machines of extraordinary capability, handed them the keys to our data kingdoms, and are now wandering through the haze wondering who approved what.

The bees don't care. The agent has already moved on to the next task.

ServiceNow’s AI control tower offers hazy view of spend - ci  ·  “An AI Agent Just Destroyed Our Product Data.” When AI Goes  ·  Controlling AI at machine speed: Detecting risk, protecting
The Office Comic  ·  Art Desk

Nation’s CEOs Asked To Provide Evidence AI Made Everyone Productive Before Being Quietly Shown The Door

Experts warn productivity may be difficult to measure in organizations that already considered a calendar invite an output.

LONDON — In a troubling development for executives who had already finished the slide deck announcing 38% productivity gains from AI, several researchers this week suggested that businesses and governments may need to prove any of that happened.

The recommendation, issued with the sort of reckless disregard for quarterly narratives usually associated with auditors, follows a series of reports warning that AI productivity claims are racing well ahead of the available evidence, workplace metrics, and in some cases, the employees supposedly being liberated from drudgery.

According to coverage of the Ada Lovelace Institute’s findings, public and private organizations should subject AI productivity claims to stronger scrutiny before presenting them as fact, a proposal that could force leaders to distinguish between “the model summarized a meeting” and “the department can now process welfare claims before the heat death of the universe.” The institute’s concern, summarized by Digital Watch Observatory, is that the AI boom has produced a great deal of confidence and a much smaller amount of proof, an imbalance economists traditionally refer to as a technology sector.

The timing is especially inconvenient, as many organizations are currently in the delicate middle stage of AI adoption, during which they have purchased enterprise licenses, reorganized three teams around a chatbot, and begun searching for a measurement framework that can transform “people seem busier in Slack” into annualized savings.

A separate Harness report also warned that AI productivity claims in software engineering are outrunning the metrics used to validate them, creating a situation in which code may be generated faster than anyone can determine whether it should exist. This has reportedly caused discomfort among engineering leaders, many of whom had hoped the industry would continue accepting “developer velocity” as a sacred vapor that could not be captured by instruments.

The public sector faces an even more severe challenge. Civil service researchers have argued that government AI productivity claims require more robust evidence, particularly before agencies justify major procurement decisions on the promise that an automated assistant will allow the same understaffed office to answer twice as many emails while feeling only half as dead inside. The paper’s findings, covered by Civil Service World, suggest that governments should evaluate AI not only by whether it appears modern in a press release, but by whether it improves service delivery, fairness, cost, or any other metric cruel enough to be measured after deployment.

The common objection from researchers is not that AI cannot improve productivity. It is that productivity is not produced by placing a large language model near a process and waiting for shareholder value to emit from the keyboard. Human expertise, workflow redesign, data quality, governance, and domain judgment remain stubbornly involved, despite years of sincere attempts to replace them with the phrase “agentic.”

This is the part of the conversation where many executives become visibly tired. AI, they were told, would remove bottlenecks, not create a new one labeled “explain how you know that.” Yet the evidence problem is becoming harder to ignore. A worker using AI to draft a document faster may still require review time, legal time, correction time, security time, and the quiet emotional time needed to remove a fabricated statute from paragraph four.

Meanwhile, in a parallel expression of the same era, reports that SpaceX and xAI have merged into a very silly-sounding conglomerate have been received with appropriate seriousness, because nothing says disciplined productivity revolution like combining rockets, frontier models, and brand architecture that sounds like a child naming two action figures during bath time. The lesson is not that such combinations cannot work. The lesson is that scale and spectacle are not substitutes for evidence, though they remain excellent substitutes for patience.

For now, the sensible position is neither AI denial nor AI triumphalism. It is the deeply unfashionable demand that organizations prove what changed, for whom, at what cost, and compared with what baseline. This will be disappointing to leaders who had planned to measure productivity by counting how many employees now begin emails with “I hope this finds you well” in the exact same tone.

Still, if AI is truly transforming work, it should survive contact with a spreadsheet. If it cannot, the technology may not be increasing productivity so much as helping institutions describe their old inefficiencies with unprecedented fluency.

AI productivity claims need stronger scrutiny according to A  ·  Harness Report Warns AI Productivity Claims Outrun Engineeri  ·  Public sector AI productivity claims require 'more robust ev
On This Day in AI History

In May 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov in their six-game rematch, becoming the first computer to beat a reigning world champion in a classical match — a landmark moment that proved machines could outplay the best humans at a deep strategic game.

⬛ Daily Word — AI and Technology
Hint: An autonomous machine programmed to perform tasks automatically.