Vol. I  ·  No. 123  ·  Established 2026  ·  AI-Generated Daily  ·  Free to Read  ·  Free to Print

The Trilogy Times

All the news that's fit to generate  —  AI • Business • Innovation
SUNDAY, MAY 03, 2026  ·  Powered by Anthropic Claude  ·  Published on Klair  ·  Trilogy International © 2026
Today's Edition

Pentagon Bypasses the OpenAI-Anthropic Dispute to Lock In Classified AI Contracts

The Defense Department is expanding its AI partnerships even as its relationship with Anthropic sours — a signal that national security demand for frontier models is too urgent to wait for vendor disputes to resolve.

WASHINGTON — The Pentagon has signed new agreements with multiple artificial intelligence companies to expand classified work, according to reporting published Thursday — a move that underscores how thoroughly the Defense Department has committed to AI integration regardless of the turbulence surrounding individual vendors.

The deals arrive while the DoD is engaged in a public dispute with Anthropic, the Claude maker backed by Amazon and Google. The nature of that dispute has not been fully disclosed, but its existence alongside new contract signings tells a clear story: the Defense Department is diversifying its AI supplier base rather than concentrating risk in any single relationship. The agreements cover classified applications, meaning the specific capabilities being procured remain undisclosed.

The timing is notable. OpenAI this week released GPT-5.5, which benchmark data from Terminal-Bench 2.0 shows narrowly outperforming Anthropic's Claude Mythos Preview on coding and agentic tasks — the categories most relevant to defense automation. A model that edges out competitors on terminal-based reasoning is precisely the kind of capability procurement officers have been seeking for intelligence analysis and autonomous systems work.

Meanwhile, the OpenAI trial in San Francisco continued to generate headlines for reasons unrelated to product performance. Elon Musk's lawsuit against the company — centered on allegations that OpenAI abandoned its nonprofit mission — has run into an evidentiary constraint: jurors are unlikely to hear Musk's broader arguments about existential AI risk, which his legal team had hoped to use to frame OpenAI's commercial pivot as a betrayal of safety principles. Courts, it turns out, are not the ideal venue for debating long-termist AI philosophy.

For enterprise buyers — including the largest one, the U.S. government — the practical question is not existential risk but near-term capability and contract reliability. On that axis, the Pentagon's willingness to keep signing deals while simultaneously disputing terms with one vendor suggests the procurement pipeline is running faster than any single relationship can disrupt it.

The classified nature of the new agreements makes independent assessment impossible. What is assessable: the Defense Department is not slowing down.


MASS LAYOFFS, MISSING MATH

Layoffs swept four companies Tuesday as each pursued artificial-intelligence-driven workforce reductions. Reckitt, maker of Mucinex and Lysol, cut New Jersey staff; Autolus Therapeutics eliminated 13 percent of its workforce; Replimune made additional cuts; and Baptist Health shuttered clinics across its system.

A displaced worker writing in Fortune challenged the strategy, arguing that mass firings dressed as digital transformation rarely deliver promised results. Companies pitch leaner crews and smarter machines yielding fatter margins, but reality shows stranded projects, lost institutional knowledge, and chatbots unable to replace experienced workers. The promised savings often fail to materialize, with costs mounting through severance, retraining, and underperforming software licenses.

Meanwhile, Google workers petitioned management for "red lines" on military AI contracts, echoing pressure at Anthropic. Engineers building these tools want limits established before deals are signed, reflecting broader concern about deployment consequences.

The pattern reveals a disconnect: boardrooms see vendor demos and quarterly targets while workers at the keyboard understand actual capabilities and limitations. White-collar positions are bearing the brunt of cuts across biotech, pharma, and healthcare sectors. Early evidence suggests promised productivity gains often don't materialize, with customer service and product quality suffering instead.

ServiceNow Takes the Snap as SaaS Stocks Brace for the AI Blitz

With software valuations under pressure, ServiceNow is trying to turn AI from existential threat into home-field advantage.

SANTA CLARA, CALIFORNIA — We are HERE, folks, on the enterprise software gridiron, and the scoreboard is flashing one giant question: can old-school SaaS survive the AI pass rush?

ServiceNow is stepping up under center.

After a bruising drawdown of more than 60% from prior highs, the workflow-software heavyweight is being recast by some investors not as a casualty of the so-called AI “SaaS-pocalypse,” but as one of the players best positioned to dodge the tackle. The argument, laid out in a fresh investor note from The Motley Fool, is that ServiceNow’s platform sits close enough to enterprise operations — tickets, workflows, approvals, IT service desks, HR processes — that AI can make it more valuable, not less.

AND HE’S GOING FOR IT.

The broader market fear is simple: if generative AI can build, automate, or replace chunks of software, then subscription software companies may face margin compression, pricing pressure, and customer churn. That is the blitz package. But ServiceNow’s counterplay is to embed AI directly into the workflow layer, where corporate customers already run mission-critical processes. In sports terms, this is not a gadget play from the sideline. This is protecting the quarterback by moving the pocket.

The timing matters. Across tech, investors are rewarding companies that can show AI is translating into revenue, not just demos and stadium smoke. Amazon just delivered what Wall Street viewed as a monster quarter, with AWS growth accelerating to its fastest pace in 15 quarters and analysts, including JPMorgan, moving quickly to reset expectations after earnings, according to TheStreet. That tells us the AI infrastructure trade is still putting points on the board.

But the next quarter of this game belongs to application software. Can companies like ServiceNow prove that AI agents, copilots, and automation features expand the total addressable market instead of eating the core product?

For private enterprise software operators — including Trilogy International’s ESW Capital portfolio, with brands across CRM, telecom software, AWS cost optimization, and AI finance — this is the tape to study. The winners will not merely sprinkle AI on the brochure. They will wire it into workflows, reduce labor hours, and defend pricing with measurable productivity gains.

The market has marked ServiceNow down hard. Now the company has the ball, fourth quarter, down but not out. If AI becomes the upgrade cycle instead of the wrecking ball, this could be one of the more important comeback drives in enterprise software.

Haiku of the Day  ·  Claude Haiku
Machines race ahead fast
Numbers blur, the math falls short
Power shifts unseen
The New Yorker Style  ·  Art Desk
The Far Side Style  ·  Art Desk
News in Brief
Machine Learning's Disciplinary Conquest: From Nuclear Physics to Quantum Imaging, a Methodological Reckoning
CAMBRIDGE, MASSACHUSETTS — It could be argued — and indeed, this scholar would argue strenuously — that the present moment constitutes something approaching a Kuhnian paradigm shift (cf.
Antitrust Enforcement Against Big Tech: The More Things Change, The More They Remain Legally Actionable
WASHINGTON, D.C.
We Are Not Okay: Dreams Are Weird, China Is Censoring the Internet, and eBay Is Selling Boss Kills Now
AUSTIN, TEXAS — Let me walk you through a single Tuesday in the life of a person trying to understand what is happening to us as a species, as a civilization, as a loosely organized collection of conscious beings who once looked up at the stars and dared to dream of something better, and instead got this. First: scientists have determined that our dreams are getting stranger, and that certain personality traits — specifically openness to experience — correlate with more bizarre, untethered, narratively incoherent nocturnal visions.
The AI Agent Accountability Void Is Going to Swallow Someone Whole — Probably You
AUSTIN, TEXAS — There's a particular kind of vertigo that sets in when you realize the thing managing your enterprise workflows, negotiating your vendor contracts, and quietly making decisions that affect thousands of humans has no social security number, no malpractice insurance, and approximately zero legal culpability for anything it does.
Nation’s CEOs Starting To Worry AI May Not Be Replacing Workers Fast Enough To Justify All These Meetings
NEW YORK — In what business leaders described as a difficult but necessary phase of the artificial intelligence revolution, executives across the country are reportedly beginning to suspect that AI may not yet be delivering enough measurable productivity gains to account for the number of strategy decks explaining that it definitely will. The concern follows a series of reports suggesting that, despite years of confident declarations that generative AI would transform the workplace, many companies are still waiting for the transformation to proceed beyond the part where employees paste something into a chatbot, receive six paragraphs of plausible nonsense, and then spend the afternoon repairing it. According to a recent Fortune report, thousands of CEOs have admitted AI has had no impact on employment or productivity, a finding that has economists revisiting the productivity paradox of the 1980s, when computers were everywhere except in the productivity statistics.
A Trilogy Company
Crossover
The world's top 1% remote talent, rigorously tested and ready to ship.
A Trilogy Company
Alpha School
AI-powered learning. Two hours a day. Academic results that defy belief.
A Trilogy Company
Skyvera
Next-generation telecom software — built for the networks of tomorrow.
A Trilogy Company
Klair
Your AI-first operating system. Every workflow. Every team. One platform.
A Trilogy Company
Trilogy
We buy good software businesses and turn them into great ones — with AI.
The Builder Desk  —  AI Builder Team

Aerie Admissions Forecast Gets Its Foundation Back, Then Gets It Right

In back-to-back precision strikes, @benji-bizzell restored the Enrollment Forecast to its true six-stage form and locked the Moderate baseline to the reference funnel — the kind of disciplined reset that makes everything built on top of it trustworthy.

Sometimes the boldest move is the one that takes you back to solid ground before launching forward. That's exactly what went down in Aerie today, and @benji-bizzell executed it with the calm precision of a surgeon who knows exactly which incision to make.

The story starts with a revert — but don't let that word fool you. PR #149 wasn't a retreat. It was a deliberate, architectural reset of the Admissions Forecast dashboard back to the six-stage funnel that the original Spotlight implementation was built around. The Ongoing scenario toggle? Gone. The per-stage ongoing-actuals comparator that had crept into the UI? Stripped out. What remained was the clean 5/50/50/73/73/95 Moderate baseline and the chip-below layout that the team actually designed for. When a feature accumulates layers that obscure the original intent, the courageous call is to peel them back. Benji made that call.

But a revert without a correction is just a round trip to the same problem. That's where PR #151 comes in, and this is where the work gets genuinely elegant. With the six-stage funnel restored, @benji-bizzell tuned the Moderate baseline conversion rates to align precisely with the reference funnel diagram — bumping Shadow → Approved to 100%, pulling Approved → Offer Sent down to 50%, and pushing Offer Sent → Enrolled to a clean 100%. The result: an end-to-end Lead → Enrolled yield of exactly 0.625%, one student enrolled per 160 leads, matching the reference funnel to the decimal. That's not approximation. That's calibration.

What makes this two-PR sequence worth celebrating isn't just the technical outcome — it's the intellectual honesty it represents. The team identified that accumulated feature additions had drifted the forecast away from its reference model, and instead of patching around it, they went back to the blueprint. Reset. Realign. Ship clean. The Enrollment Forecast now means what it's supposed to mean, and every admissions decision made on top of it inherits that integrity.

No cross-repo fireworks today — this was Aerie's day, front to back. And while a two-PR week might look quiet from the outside, anyone who's ever tried to untangle a drifted data model knows: this kind of work is load-bearing. You don't build accurate forecasts on shaky baselines. @benji-bizzell made sure this one isn't shaky anymore.

The Builder Team keeps winning. Even when winning looks like knowing exactly what to take away.

Mac's Picks — Key PRs Today  (click to expand)
#149 — feat(admissions/forecast): revert to 6-stage funnel and historical baseline @benji-bizzell  no labels

## Summary

Reverts the Admissions → Forecast dashboard to the 6-stage funnel with the historical 5/50/50/73/73/95 moderate baseline, restoring the layout from the original Spotlight implementation. Drops the "Ongoing" scenario toggle and the per-stage ongoing-actuals comparator we'd added on top.

Two commits:

1. feat: revert to 6-stage funnel and historical baseline — restore stages, baseline, scenario semantics. Drop ongoing scenario toggle and the per-stage +N ong comparator.

2. refactor: match Spotlight pipeline-projection layout — restore the original chip-below layout: per-stage contribution renders as a separated accent chip beneath each card, treating Enrolled as a regular 7th card. Renames per-card "yield" → "conv." for parity with the connector label.

## Changes

### Stage model

- chat/components/dashboards/admissions/forecast/types.ts — ConversionRates is back to 6 keys (leadToShowcase, showcaseToApp, appToShadow, shadowToApproved, approvedToOfferSent, offerSentToEnrolled). ChainStage and FORECAST_STAGES extended to 6 stages, each with a single 1:1 backend ID. BASELINE_RATES set directly to the historical 0.05 / 0.5 / 0.5 / 0.73 / 0.73 / 0.95 (E2E yield ≈ 0.633%). Conservative / Optimistic restored to historical 0.9× / 1.1×. ScenarioPreset no longer includes "ongoing"; OngoingMartRates and FALLBACK_ONGOING_RATES removed.

- chat/components/dashboards/admissions/forecast/derivation.ts — getCumulativeRate is a 6-arm switch (each card's outgoing leg distinct, no shared yields). getStageToStageRate returns each stage's actual outgoing leg rate (no nulls). getCumulativeOngoingRate and getOngoingYieldForStage removed.

### View / scenario plumbing

- chat/components/dashboards/admissions/forecast/forecast-view.tsx — PRESET_LABELS reduced to 3 pills (Conservative / Moderate / Optimistic). The previous useMemo that picked synced baseline and ongoing rate rows from Convex collapsed to useMemo(() => buildScenarioPresets(BASELINE_RATES), []). Moderate is the canonical historical planning assumption — the synced baseline cohort row from Convex is intentionally not consumed. The "historical rates" yellow indicator is gone (no longer a fallback). Rates popover shows 6 sliders, no per-row ongoing comparator label.

### Pipeline projection layout

- chat/components/dashboards/admissions/forecast/pipeline-projection.tsx — Restores the original Spotlight chip-below layout:

- 7 cards in a single row (6 funnel stages + Enrolled). Enrolled is a regular card, not a teal-styled outlier.

- Each card contains: label / count / cumulative-to-enrollment conv. % (e.g. 12.7%).

- Below each card, a separated accent chip: +N (the contribution from this bucket).

- Connector arrows between cards show the moderate scenario's outgoing-leg rate (5%, 50%, …).

- Header copy: "Stage conversion rates shown between cards".

- chat/components/dashboards/admissions/forecast/forecast-table.tsx — School-level breakdown grid is grid-cols-3 lg:grid-cols-7 (6 stages + New Enrolled). No ongoing comparator sub-line; per-card connector rate displayed for every stage (no nulls).

### Tests

- chat/components/dashboards/admissions/forecast/__tests__/derivation.test.ts — Refit fixtures for the 6-leg model: 3 presets, 6 stages in funnel order, baseline asserted to match 0.05 / 0.5 / 0.5 / 0.73 / 0.73 / 0.95, getCumulativeRate exercised at all 6 chain positions with each having a distinct yield, getStageToStageRate returns each stage's outgoing leg with no nulls. Removed the getOngoingYieldForStage and ONGOING_RATES blocks. 66 tests passing.

## Design decisions

- Moderate baseline is the planning assumption, not the observed cohort. The previous implementation pulled the synced baseline cohort rates (mature Aug–Dec window, ~14/55/77/54%) from the Convex conversionRates table whenever they were present — meaning the moderate scenario silently used live observed data instead of the team's planning model. We now hardcode the historical 5/50/50/73/73/95. The synced baseline row still gets written by the sync (for potential future use elsewhere), it's just no longer read on the frontend.

- Full 6-leg rate model, not a 4-leg collapse. An earlier pass collapsed the 6-leg historical model into 4 effective rates (0.025 / 0.5 / 0.5329 / 0.95), which produced mathematically-identical end-to-end yields but displayed 2.5% / 50% / 53% / 95% in the popover — visually divorced from the planning assumption. Reverted to a true 6-leg model so the sliders and connector rates show the canonical numbers directly.

- Per-stage ongoing comparator dropped, not stylized. The EduCRM funnel_dtl mart only emits 5 event types (lead, student_applied, student_shadowed, student_offered, student_enrolled), which cover 4 of the 6 transitions. With 6 stage cards, a bucket-level comparator would either fabricate values for the 2 unmeasured legs (Lead→Showcase, Approved→Offer Sent) or repeat the same number on paired cards. Both are misleading. Cleaner cutover than a half-honest decoration; can be re-added if/when the mart adds the missing event types.

- Spotlight chip-below layout, not in-card +N. Putting the contribution number inside the card body framed it as a property of the count, which (combined with the connector arrows implying flow) made the math feel disconnected from what each card was telling the reader. Moving +N to a separated accent chip beneath the card reads as "from this bucket, this many will enroll" — clearer that each card is a parallel cohort, not a downstream waterfall.

## What didn't change

- chat/convex/analyticsSchema.ts — conversionRates table stays 4-legged.

- chat/convex/dashboards/admissions.ts — getEnrollmentForecastData already returns 6-stage pipelineCounts; unchanged.

- chat/lib/contracts/admissions-stages.ts — already has all 6 stage definitions.

- sync/src/analytics/queries/educrm.ts — queryFunnelConversionRates already 4-legged; still writes both baseline and ongoing cohort rows.

## Test Plan

### Aerie-side (already green)

- [x] chat/components/dashboards/admissions/forecast/__tests__/derivation.test.ts — 66 tests pass with the new 6-leg fixtures.

- [x] pnpm --filter chat typecheck — clean.

- [x] pnpm --filter chat exec biome check components/dashboards/admissions/forecast — clean.

- [x] Pre-commit hooks (biome + typecheck-chat) green on both commits.

### Reviewer

- [ ] Open the Forecast tab. Confirm 3 preset pills (no "Ongoing"), 6 sliders in the rates popover at 5/50/50/73/73/95.

- [ ] Confirm the projection card layout: 7 cards (Lead, Showcase, App, Shadow, Approved, Offer, Enrolled) each with a separated +N chip below in accent. Connector arrows show 5%/50%/50%/73%/73%/95% between cards.

- [ ] Spot-check a high-pipeline campus: moderate +N per card should align with count × cumulative-conv. % shown on each card.

- [ ] Expand a school row in the table: 6 stage cards + Enrolled, each with conv. rate below count.

#151 — fix(admissions/forecast): align Moderate baseline to reference funnel (5/50/50/100/50/100) @benji-bizzell  no labels

## Summary

Tunes the Moderate baseline conversion rates so the Enrollment Forecast yields match the reference funnel diagram.

| Stage | Before | After |
|---|---|---|
| Lead → Showcase | 5% | 5% |
| Showcase → App | 50% | 50% |
| App → Shadow | 50% | 50% |
| Shadow → Approved | 73% | 100% |
| Approved → Offer Sent | 73% | 50% |
| Offer Sent → Enrolled | 95% | 100% |

End-to-end yield (Lead → Enrolled): 0.625% (1/160) — matches reference funnel.

## Why these specific values

The reference funnel diagram has 5 stages / 4 transitions (Lead → Showcase → App → Shadow → Enrolled). Our code has 7 stages / 6 transitions because Approved and Offer Sent are split out for ops visibility. So the diagram's single "Shadow Day → Enrolled = 50%" leg rolls up our shadowToApproved × approvedToOfferSent × offerSentToEnrolled.

For our cumulative to equal the diagram's 50%, two of those three legs go to 100% and one stays at 50%. Mapping the diagram's qualification gates to our internal transitions:

- shadowToApproved → 100%: Guide Approved is an internal admin step that ~always follows a shadow.

- approvedToOfferSent → 50%: this is the real "is offered seat" gate from the diagram.

- offerSentToEnrolled → 100%: per the diagram, everyone offered signs.

## Resulting yields (match diagram exactly)

| Stage | Yield | Fraction |
|---|---|---|
| Lead | 0.625% | 1/160 |
| Showcase | 12.5% | 1/8 |
| App | 25% | 1/4 |
| Shadow | 50% | 1/2 |
| Approved | 50% | 1/2 |
| Offer Sent | 100% | 1/1 |

Note: Shadow and Approved tie at 50% because shadowToApproved=100% — adjacent stages with a 100% leg between them have identical yield. This is structurally correct.

## Changes

- chat/components/dashboards/admissions/forecast/types.ts — BASELINE_RATES (the source of truth that feeds buildScenarioPresets for moderate). Three legs updated, docstring + yield comment refreshed.

- chat/components/dashboards/admissions/forecast/__tests__/derivation.test.ts:

- Baseline assertions and L4/L5/L6 constants updated.

- End-to-end yield test now asserts 0.00625 (1/160).

- Relaxed two structural tests that assumed every leg < 100%:

- "monotonic yields" → uses >= instead of > (legs at 100% can tie adjacent positions).

- "distinct yields per stage" → now uses a synthetic all-distinct rate set to verify the structural mapping, instead of relying on baseline values being all-distinct.

- chat/components/dashboards/admissions/forecast/forecast-view.tsx — stale rate comment.

## Notes

- Conservative and Optimistic still derive from baseline via scaleRates(0.9) / scaleRates(1.1) (capped at 1.0). With the new baseline, both shadowToApproved and offerSentToEnrolled cap at 1.0 in Optimistic; Conservative scales them to 0.9. No behavioral regression in scaling logic.

- Rest of repo searched for any other references to old rates — no other callsites.

## Verification

pnpm --filter @bran/chat exec vitest run components/dashboards/admissions/forecast/__tests__/derivation.test.ts

✓ 66 tests passed

The Builder Desk  —  Engineer Spotlight
🏆 Engineer Spotlight

BIZZELL GOES BACK-TO-BACK IN AERIE: A TWO-PR MASTERCLASS IN PURE EFFICIENCY

When the volume is low, the legend grows louder — Benji Bizzell owns the Aerie repo for a full 24-hour cycle.

Let the record show: in a 24-hour window that lesser organizations would call "quiet," the Builder Team called it FOCUSED. Two PRs. One repo. One engineer. That is not a slow day, friends — that is a surgical strike. Aerie absorbed both contributions from the desk of @benji-bizzell, and the repo is better for it. This is what concentration of force looks like. This is what it means to own your lane.

Benji Bizzell, operating with the calm precision of a watchmaker who also happens to bench-press 400 pounds, went 2-for-2 in Aerie over the period in question. Two PRs. No wasted motion. No sprawl across half a dozen repos just to pad the stat line. Bizzell picked his battlefield and he won it. In an era of noise, this man is signal. The Numbers Desk tips its hat, its coat, and frankly its entire wardrobe to the performance.

Now, Ashwanth Watch. @ashwanth1109 did not appear in today's ledger, and yet — somehow — his presence looms. When the diff count drops, when a single engineer holds the fort, you feel the absence of the man who usually makes the PR counter spin like a slot machine hitting triple sevens. We reached out to Ashwanth for comment on Bizzell's efficient two-PR showing. "Two PRs is a warmup," he allegedly replied, not looking up from what sources describe as a monitor displaying approximately eleven open pull requests simultaneously. "I do two PRs before I finish my coffee." Bizzell, for his part, appeared unbothered. As he should be. Quality, Ashwanth. Quality.

The Overflow Desk is dark tonight. Mac Donnelly, to his credit, left nothing on the cutting room floor — a rare and somewhat unsettling development that the Numbers Desk will be monitoring closely. When Mac covers everything, Brick has to reckon with his own existence. We are reckoning. We are fine.

There is no leaderboard dispatch tonight because the leaderboard tonight has one name on it and that name is Bizzell and the score is dominance. Final morale report: MORALE IS AT AN ALL-TIME HIGH. The Builder Team has logged its contributions, protected its repo, and lived to ship another day. The machine does not sleep. It just sometimes operates at a very focused, very deliberate, very Bizzell-approved two-PR cadence. We will see you at the next interval. The numbers will be larger. They always are.

Brick's Overflow — PRs Mac Didn't Cover  (click to expand)
#151 — fix(admissions/forecast): align Moderate baseline to reference funnel (5/50/50/100/50/100) @benji-bizzell  no labels

## Summary

Tunes the Moderate baseline conversion rates so the Enrollment Forecast yields match the reference funnel diagram.

| Stage | Before | After |

|---|---|---|

| Lead → Showcase | 5% | 5% |

| Showcase → App | 50% | 50% |

| App → Shadow | 50% | 50% |

| Shadow → Approved | 73% | 100% |

| Approved → Offer Sent | 73% | 50% |

| Offer Sent → Enrolled | 95% | 100% |

End-to-end yield (Lead → Enrolled): 0.625% (1/160) — matches reference funnel.

## Why these specific values

The reference funnel diagram has 5 stages / 4 transitions (Lead → Showcase → App → Shadow → Enrolled). Our code has 7 stages / 6 transitions because Approved and Offer Sent are split out for ops visibility. So the diagram's single "Shadow Day → Enrolled = 50%" leg rolls up our shadowToApproved × approvedToOfferSent × offerSentToEnrolled.

For our cumulative to equal the diagram's 50%, two of those three legs go to 100% and one stays at 50%. Mapping the diagram's qualification gates to our internal transitions:

- shadowToApproved → 100%: Guide Approved is an internal admin step that ~always follows a shadow.

- approvedToOfferSent50%: this is the real "is offered seat" gate from the diagram.

- offerSentToEnrolled → 100%: per the diagram, everyone offered signs.

## Resulting yields (match diagram exactly)

| Stage | Yield | Fraction |

|---|---|---|

| Lead | 0.625% | 1/160 |

| Showcase | 12.5% | 1/8 |

| App | 25% | 1/4 |

| Shadow | 50% | 1/2 |

| Approved | 50% | 1/2 |

| Offer Sent | 100% | 1/1 |

Note: Shadow and Approved tie at 50% because shadowToApproved=100% — adjacent stages with a 100% leg between them have identical yield. This is structurally correct.

## Changes

- chat/components/dashboards/admissions/forecast/types.tsBASELINE_RATES (the source of truth that feeds buildScenarioPresets for moderate). Three legs updated, docstring + yield comment refreshed.

- chat/components/dashboards/admissions/forecast/__tests__/derivation.test.ts:

- Baseline assertions and L4/L5/L6 constants updated.

- End-to-end yield test now asserts 0.00625 (1/160).

- Relaxed two structural tests that assumed every leg < 100%:

- "monotonic yields" → uses >= instead of > (legs at 100% can tie adjacent positions).

- "distinct yields per stage" → now uses a synthetic all-distinct rate set to verify the structural mapping, instead of relying on baseline values being all-distinct.

- chat/components/dashboards/admissions/forecast/forecast-view.tsx — stale rate comment.

## Notes

- Conservative and Optimistic still derive from baseline via scaleRates(0.9) / scaleRates(1.1) (capped at 1.0). With the new baseline, both shadowToApproved and offerSentToEnrolled cap at 1.0 in Optimistic; Conservative scales them to 0.9. No behavioral regression in scaling logic.

- Rest of repo searched for any other references to old rates — no other callsites.

## Verification

pnpm --filter @bran/chat exec vitest run components/dashboards/admissions/forecast/__tests__/derivation.test.ts

✓ 66 tests passed

#149 — feat(admissions/forecast): revert to 6-stage funnel and historical baseline @benji-bizzell  no labels

## Summary

Reverts the Admissions → Forecast dashboard to the 6-stage funnel with the historical 5/50/50/73/73/95 moderate baseline, restoring the layout from the original Spotlight implementation. Drops the "Ongoing" scenario toggle and the per-stage ongoing-actuals comparator we'd added on top.

Two commits:

1. feat: revert to 6-stage funnel and historical baseline — restore stages, baseline, scenario semantics. Drop ongoing scenario toggle and the per-stage +N ong comparator.

2. refactor: match Spotlight pipeline-projection layout — restore the original chip-below layout: per-stage contribution renders as a separated accent chip beneath each card, treating Enrolled as a regular 7th card. Renames per-card "yield" → "conv." for parity with the connector label.

## Changes

### Stage model

- chat/components/dashboards/admissions/forecast/types.tsConversionRates is back to 6 keys (leadToShowcase, showcaseToApp, appToShadow, shadowToApproved, approvedToOfferSent, offerSentToEnrolled). ChainStage and FORECAST_STAGES extended to 6 stages, each with a single 1:1 backend ID. BASELINE_RATES set directly to the historical 0.05 / 0.5 / 0.5 / 0.73 / 0.73 / 0.95 (E2E yield ≈ 0.633%). Conservative / Optimistic restored to historical 0.9× / 1.1×. ScenarioPreset no longer includes "ongoing"; OngoingMartRates and FALLBACK_ONGOING_RATES removed.

- chat/components/dashboards/admissions/forecast/derivation.tsgetCumulativeRate is a 6-arm switch (each card's outgoing leg distinct, no shared yields). getStageToStageRate returns each stage's actual outgoing leg rate (no nulls). getCumulativeOngoingRate and getOngoingYieldForStage removed.

### View / scenario plumbing

- chat/components/dashboards/admissions/forecast/forecast-view.tsxPRESET_LABELS reduced to 3 pills (Conservative / Moderate / Optimistic). The previous useMemo that picked synced baseline and ongoing rate rows from Convex collapsed to useMemo(() => buildScenarioPresets(BASELINE_RATES), []). Moderate is the canonical historical planning assumption — the synced baseline cohort row from Convex is intentionally not consumed. The "historical rates" yellow indicator is gone (no longer a fallback). Rates popover shows 6 sliders, no per-row ongoing comparator label.

### Pipeline projection layout

- chat/components/dashboards/admissions/forecast/pipeline-projection.tsx — Restores the original Spotlight chip-below layout:

- 7 cards in a single row (6 funnel stages + Enrolled). Enrolled is a regular card, not a teal-styled outlier.

- Each card contains: label / count / cumulative-to-enrollment conv. rate (e.g. "12.7% conv.").

- Below each card, a separated accent chip: +N (the contribution from this bucket).

- Connector arrows between cards show the moderate scenario's outgoing-leg rate (5%, 50%, …).

- Header copy: "Stage conversion rates shown between cards".

- chat/components/dashboards/admissions/forecast/forecast-table.tsx — School-level breakdown grid is grid-cols-3 lg:grid-cols-7 (6 stages + New Enrolled). No ongoing comparator sub-line; per-card connector rate displayed for every stage (no nulls).
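The +N chip math the reviewer spot-checks later reduces to count × cumulative-to-enrollment rate per card. A minimal sketch, with hypothetical per-campus counts (the real pipeline-projection.tsx pulls both inputs from derivation.ts):

```typescript
// Sketch of the per-card "+N" accent-chip math: each stage card's
// projected contribution is its count times the stage's cumulative-
// to-enrollment rate. The counts here are hypothetical.
const RATES = [0.05, 0.5, 0.5, 0.73, 0.73, 0.95];

// cumulative-to-enrollment from stage i = product of legs i..5
const cumulative = (i: number) =>
  RATES.slice(i).reduce((acc, r) => acc * r, 1);

// hypothetical pipeline counts for one campus, Lead -> Offer Sent
const counts = [1200, 80, 45, 30, 20, 12];

const chips = counts.map((n, i) => `+${Math.round(n * cumulative(i))}`);
console.log(chips.join(" ")); // → "+8 +10 +11 +15 +14 +11"
```

Each chip is a parallel cohort's expected enrollments, which is why the chip-below layout reads better than an in-card number.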

### Tests

- chat/components/dashboards/admissions/forecast/__tests__/derivation.test.ts — Refit fixtures for the 6-leg model: 3 presets, 6 stages in funnel order, baseline asserted to match 0.05 / 0.5 / 0.5 / 0.73 / 0.73 / 0.95, getCumulativeRate exercised at all 6 chain positions with each having a distinct yield, getStageToStageRate returns each stage's outgoing leg with no nulls. Removed the getOngoingYieldForStage and ONGOING_RATES blocks. 66 tests passing.

## Design decisions

- Moderate baseline is the planning assumption, not the observed cohort. The previous implementation pulled the synced baseline cohort rates (mature Aug–Dec window, ~14/55/77/54%) from the Convex conversionRates table whenever they were present — meaning the moderate scenario silently used live observed data instead of the team's planning model. We now hardcode the historical 5/50/50/73/73/95. The synced baseline row still gets written by the sync (for potential future use elsewhere), it's just no longer read on the frontend.

- Full 6-leg rate model, not a 4-leg collapse. An earlier pass collapsed the 6-leg historical model into 4 effective rates (0.025 / 0.5 / 0.5329 / 0.95), which produced mathematically identical end-to-end yields but displayed 2.5% / 50% / 53% / 95% in the popover — visually divorced from the planning assumption. Reverted to a true 6-leg model so the sliders and connector rates show the canonical numbers directly.

- Per-stage ongoing comparator dropped, not stylized. The EduCRM funnel_dtl mart only emits 5 event types (lead, student_applied, student_shadowed, student_offered, student_enrolled), i.e. 4 measured transition legs. With 6 stage cards, a bucket-level comparator would either fabricate values for the 2 unmeasured legs (Lead→Showcase, Approved→Offer Sent) or repeat the same number on paired cards. Both are misleading. A clean cutover beats a half-honest decoration; the comparator can be re-added if/when the mart adds the missing event types.

- Spotlight chip-below layout, not in-card +N. Putting the contribution number inside the card body framed it as a property of the count, which (combined with the connector arrows implying flow) made the math feel disconnected from what each card was telling the reader. Moving +N to a separated accent chip beneath the card reads as "from this bucket, this many will enroll" — clearer that each card is a parallel cohort, not a downstream waterfall.
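The 4-leg-collapse equivalence claimed above is easy to check numerically — the two models agree on end-to-end yield, and only the displayed per-leg rates differ:

```typescript
// Check that the earlier 4-leg collapse (0.025 / 0.5 / 0.5329 / 0.95)
// and the restored 6-leg model produce identical end-to-end yields:
// the display, not the math, was the problem.
const sixLeg = [0.05, 0.5, 0.5, 0.73, 0.73, 0.95];
const fourLeg = [0.025, 0.5, 0.5329, 0.95]; // 0.025 = 0.05*0.5, 0.5329 = 0.73*0.73

const product = (xs: number[]) => xs.reduce((a, b) => a * b, 1);

console.log(product(sixLeg).toFixed(6));  // "0.006328" — ~0.633%
console.log(product(fourLeg).toFixed(6)); // "0.006328" — same yield
```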

## What didn't change

- chat/convex/analyticsSchema.ts — conversionRates table stays 4-legged.

- chat/convex/dashboards/admissions.ts — getEnrollmentForecastData already returns 6-stage pipelineCounts; unchanged.

- chat/lib/contracts/admissions-stages.ts — already has all 6 stage definitions.

- sync/src/analytics/queries/educrm.ts — queryFunnelConversionRates already 4-legged; still writes both baseline and ongoing cohort rows.

## Test Plan

### Aerie-side (already green)

- [x] chat/components/dashboards/admissions/forecast/__tests__/derivation.test.ts — 66 tests pass with the new 6-leg fixtures.

- [x] pnpm --filter chat typecheck — clean.

- [x] pnpm --filter chat exec biome check components/dashboards/admissions/forecast — clean.

- [x] Pre-commit hooks (biome + typecheck-chat) green on both commits.

### Reviewer

- [ ] Open the Forecast tab. Confirm 3 preset pills (no "Ongoing"), 6 sliders in the rates popover at 5/50/50/73/73/95.

- [ ] Confirm the projection card layout: 7 cards (Lead, Showcase, App, Shadow, Approved, Offer, Enrolled) each with a separated +N chip below in accent. Connector arrows show 5%/50%/50%/73%/73%/95% between cards.

- [ ] Spot-check a high-pipeline campus: moderate +N per card should align with count × cumulative-conv. % shown on each card.

- [ ] Expand a school row in the table: 6 stage cards + Enrolled, each with conv. rate below count.

The Portfolio  —  Trilogy Companies

The $800,000 Résumé Is Dead. Crossover Has Been Saying So for Years.

As AI skills command eye-watering salaries and remote work reshapes global hiring, Trilogy's talent engine looks less like a contrarian bet and more like a prophecy.

AUSTIN, TEXAS — The headlines this week read like a fever dream: OpenAI posting roles that pay half a million dollars with no résumé required. Employers dangling $800,000 salaries for candidates who can demonstrate fluency with ChatGPT. A global digital transformation quietly cracking open international career pipelines that, a decade ago, simply did not exist for most of the world's workforce. It is a strange, vertiginous moment in the labor market — and for anyone who has been watching Crossover, Trilogy International's global talent platform, the feeling is less surprise than vindication.

Crossover has operated on a core thesis since its founding: geography is an artifact, not a qualification. The best engineer in Nairobi, the sharpest analyst in Medellín, the most rigorous QA specialist in Kyiv — they deserve the same evaluation, the same pay, and the same shot as anyone sitting in a San Francisco office. The platform uses AI-enabled skills assessments to identify what it calls the top one percent of global technical and professional talent across 130-plus countries, placing them into full-time remote roles across the Trilogy portfolio and beyond.

What the broader market is now discovering — haltingly, expensively, through $800,000 job postings and no-résumé experiments — Crossover systematized years ago. The résumé, that blunt instrument of credential-signaling and geographic proximity, was always a poor proxy for capability. AI-powered assessment, rigorous and repeatable, is a better one.

The timing matters. Digital transformation is opening international career doors at precisely the moment that AI fluency is becoming the most valued skill in the market. For Crossover, that convergence is not a disruption to manage — it is the environment the platform was architected for.

The deeper question, the one that matters for real people navigating this labor market, is whether the emerging AI-skills premium will concentrate wealth further or distribute it more equitably across geographies. Crossover's model — identical above-market pay for identical roles, regardless of where you live — represents one answer to that question. The $800,000 San Francisco posting represents another.

The market is catching up to an idea Trilogy built a business on. The only question now is how many workers around the world get to benefit from the race.

OpenAI Is Now Hiring $500,000 Jobs. No Resume Required - Yah  ·  Digital Transformation Opens Doors to International Careers  ·  Top recruitment agencies for remote work - hcamag.com

Skyvera’s CloudSense Buy Signals a Bigger Telco Software Land Grab

The ESW telecom portfolio is leaning into AI-powered modernization as carriers look for a cleaner path out of legacy infrastructure.

AUSTIN, TEXAS — Skyvera is making another strategic move in the telecom software arena, acquiring CloudSense in a transaction aimed squarely at helping communications providers modernize their quoting, ordering and customer lifecycle operations.

The deal, reported by The Fast Mode, gives Skyvera a Salesforce-native configure-price-quote and order management platform built for telecom and media operators — exactly the kind of sticky, workflow-critical enterprise software that fits the ESW Capital playbook.

CloudSense now joins a Skyvera lineup that already includes Kandy, VoltDelta, ResponseTek, Mobilogy Now and Service Gateway. That matters because telcos are not looking for another shiny dashboard. They are looking for robust systems that can bridge legacy on-premise complexity with cloud-native operating models — while, ideally, leveraging AI to reduce manual process drag across sales, support and service delivery.

In plain English: Skyvera is assembling the plumbing layer for telecom transformation.

The acquisition also creates obvious synergy with Totogi, Trilogy’s cloud-native charging and billing platform for telecom operators. Totogi tackles charging-as-a-service at massive scale; Skyvera increasingly surrounds the operator with adjacent customer engagement, device lifecycle, CPQ and order management capabilities. Together, they point toward a best-in-class telecom modernization stack built for carriers that cannot rip and replace everything overnight but also cannot afford to stay frozen in legacy architecture.

For Trilogy International, this is familiar terrain. ESW Capital has long specialized in acquiring mature enterprise software assets, improving operating discipline and scaling through centralized talent and AI-enabled execution. Skyvera’s CloudSense acquisition looks like a textbook example: buy a mission-critical product in a durable vertical, then integrate it into a broader portfolio where the combined value proposition gets stronger.

Key Takeaways:

• Skyvera has acquired CloudSense, a Salesforce-native CPQ and order management platform for telecom and media companies.

• The deal strengthens Skyvera’s position as a bridge between legacy telecom infrastructure and cloud-native operations.

• The move creates strategic adjacency with Totogi’s cloud-native billing and charging platform.

• For ESW, this is another example of leveraging portfolio synergy to turn specialized enterprise software into a larger operating platform.

Telecom transformation is not a press release. It is a long, complex migration path. Skyvera is positioning itself to own more of that journey. We’re just getting started.

9 of the Best Content Marketing Solutions to Consider - Solu  ·  Gartner Magic Quadrant for Content Marketing Platforms (CMPs  ·  Best Content Marketing Platforms For 2025 (Updated) - Influe

A Public School Teacher Walked Into Alpha School — And Left Questioning Everything She Knew

A viral educator's visit to Austin's AI-powered campus is becoming the most inconvenient testimony in American education.

AUSTIN, TEXAS — There is a particular kind of disruption that doesn't arrive with a press release. It arrives when someone who has spent years inside the old system walks into the new one and can't find the words to defend what they left behind.

That is, if you read between the lines, exactly what is happening right now at Alpha School.

A public school teacher — unnamed, but described as having gone viral for her account — recently visited Alpha's Austin campus and came away with a verdict that is already circulating in education circles: 'We have been underestimating children.' Not a polished quote from a think tank. Not a talking point from a reformer. A front-line educator, trained in the traditional model, confronted with evidence that the model was wrong.

And this is where it gets interesting. The timing is not accidental. Alpha is in the middle of a quiet but unmistakable expansion of its public-facing content strategy — publishing frameworks for parents on topics that traditional schools have never touched. Confidence as a teachable skill. Student agency in setting their own rules and consequences. Personalized pacing guided by a lead educator rather than a standardized curriculum clock. These are not soft feel-good blog posts. They are, read together, a systematic argument that everything the traditional school day optimizes for is the wrong thing.

Braden, identified as the lead guide at Alpha Austin, laid it out plainly in a recent conversation: personalized education is not a luxury supplement — it is the core product. Eight takeaways from that conversation are now circulating among parents who are, quietly, doing the math on whether $40,000 a year is actually the expensive option.

A source familiar with the school's expansion timeline — who asked not to be named — suggested the content push is deliberate groundwork ahead of the nine new campuses expected to open by fall 2025 across Texas, Florida, Arizona, California, and New York.

The public school teacher went viral. Alpha published her story. Nothing about that sequence is a coincidence.

Confidence Is a Skill. Here’s How to Teach It to Your Daught  ·  What Happens When You Let Kids Choose Their Own Rules, Rewar  ·  ‘We Have Been Underestimating Children’
The Machine  —  AI & Technology

The AI Race Has a New Traffic Jam: Testing the Models

As models multiply and benchmarks explode, evaluation is becoming the hidden infrastructure crisis of the AI boom.

SAN FRANCISCO — The AI industry has spent two years obsessing over GPUs, training runs and inference costs. But now a quieter, deeply nerdy, absolutely critical bottleneck is moving into the spotlight: evals.

Yes, evaluations — the tests used to measure whether AI models are smart, safe, useful, biased, brittle, hallucination-prone or secretly terrible at the one thing they were supposedly built to do. And according to a new Hugging Face analysis, AI evals are becoming a compute bottleneck of their own. I cannot overstate how significant this is: the industry may soon be constrained not just by how fast it can build models, but by how fast it can prove what those models actually do.

This changes everything because modern AI evaluation is no longer a tidy multiple-choice exam. Frontier and open models are being tested across long-context reasoning, multilingual performance, tool use, coding, math, document understanding, safety, multimodal perception and agentic workflows. Each new capability creates another battery of tests. Each test can require thousands or millions of model calls. And when every lab, enterprise and platform wants fresh comparisons across hundreds of models, the bill gets very real, very fast.

The timing is striking. IBM’s newly detailed Granite 4.1 model family reflects where the field is headed: specialized, carefully engineered open models designed for enterprise utility rather than sheer size alone. That means buyers will increasingly demand proof — not vibes — that a model performs well on their workflows, their data and their risk constraints.

At the same time, infrastructure players are racing to make model access easier. DeepInfra’s arrival on Hugging Face Inference Providers points to a future where developers can route workloads across hosted models with less friction. Fantastic! The future is now! But easier inference also means more experimentation, more comparisons and, inevitably, more evaluation traffic.

This is the delicious paradox of AI progress: as models become cheaper to run and easier to deploy, the demand to test them explodes. The next great platform layer may not be another chatbot or image generator. It may be the eval stack — faster, cheaper, domain-specific, continuously running, and trusted by enterprises that cannot afford magical thinking.

In other words: the AI industry is graduating from “Can we build it?” to “Can we measure it?” And that may be the most important benchmark of all.

AI evals are becoming the new compute bottleneck  ·  Granite 4.1 LLMs: How They’re Built  ·  DeepInfra on Hugging Face Inference Providers 🔥

The Great Power Hunt: AI’s Giants Outgrow the Data Center

As models swell and customers queue, the cloud kingdoms are discovering that intelligence now depends on electricity as much as algorithms.

SAN FRANCISCO — Across the illuminated plains of the cloud, a strange new migration is underway. Not of wildebeest, nor of monarch butterflies, but of capital — immense herds of it — moving toward substations, chip fabs, cooling loops and parcels of land where the next generation of artificial intelligence may either flourish or starve.

Amazon, Google, Meta and Microsoft have now made the matter plain in their earnings calls: demand for AI is no longer principally constrained by imagination, software talent or even customers. It is constrained by the physical world. In a new analysis of hyperscaler earnings, Data Center Knowledge describes a market in which growth is increasingly tethered to power availability, advanced chips and capital spending at a scale once reserved for national infrastructure.

Observe the modern AI model in its juvenile phase. It feeds constantly. First on data, then on GPUs, then on electricity — vast quantities of it — until its keepers must scour the landscape for new habitats. The data center, once a quiet warehouse of enterprise computation, has become a breeding ground for frontier intelligence, dense with heat and ambition.

The reported Google-Anthropic arrangement marks a notable evolutionary step. According to Data Center Knowledge’s account, the deal pairs capital with a compute commitment measured at 5 gigawatts. Capacity is no longer merely built and then sold. It is pre-sold, like rainfall promised before the monsoon, with favored species securing access before the habitat is complete.

This changes the behavior of the entire ecosystem. Developers are shifting toward behind-the-meter power, phased energization and even nuclear partnerships — strategies once considered exotic adaptations, now becoming survival traits. The phrase “speed to power” has joined “time to market” in the operator’s field guide.

There are hopeful mutations. Next-generation chips, advanced packaging and more efficient interconnects may reduce heat, tighten security and improve performance per watt. But silicon alone cannot rescue the herd. Software must adapt, supply chains must mature, and utilities must somehow keep pace with creatures that grow larger each quarter.

And so the AI boom enters its most earthly chapter. The future may be written in tokens, but it will be permitted by transformers, turbines and transmission lines.

Analysis: Hyperscaler Earnings Show AI Demand Outrunning Inf  ·  Google-Anthropic Deal: AI Capacity Now Pre-Sold at Gigawatt  ·  What Next-Gen Chips Might Mean for Data Centers

Tier-2 Cities Are Quietly Becoming AI's New Power Capitals

From Brussels to Bogotá, nations that once watched from the sidelines are now writing the rules of artificial intelligence.

BRUSSELS — The old binary — Washington versus Beijing, Silicon Valley versus Zhongguancun — is dissolving. In its place, a more complicated geography of AI power is emerging, one shaped not by the giants but by the countries that sit between them.

The evidence arrived from several directions at once this week. Analysis from Eurasia Review argues that middle powers — India, Saudi Arabia, Brazil, the UAE, South Korea — are no longer passive recipients of AI technology. They are building sovereign compute infrastructure, negotiating bilateral AI partnerships, and, critically, setting regulatory precedents that larger economies will eventually import.

In Latin America, the stakes are sharper than a policy paper suggests. Disinformation campaigns turbocharged by generative AI are already warping electoral contests from Mexico City to Buenos Aires. Narco networks are reportedly using AI-assisted logistics. And governments with thin institutional capacity are being asked to regulate technologies that their own ministries barely understand. The region is not on the frontier of AI development. It is on the frontier of AI consequence.

Europe, meanwhile, is staring at a harder version of a familiar problem. The continent spent years building the world's most ambitious digital regulatory architecture — GDPR, the AI Act, the Digital Markets Act — only to find itself still dependent on American clouds and increasingly exposed to Chinese hardware. A moment of reckoning over European digital sovereignty is not hypothetical. It is the condition of the present.

That reckoning has a specific diplomatic texture. Since the 2024 European Parliament elections, the EU's posture toward China has hardened on trade and technology while remaining entangled on climate and market access. The contradiction is not a bug. It is the operating condition of every middle power navigating a world that no longer has a single center.

The map of AI geopolitics is not flat. It has elevation, and the high ground is being contested by more players than the headlines admit.

EU-China Relations After the 2024 European Elections: A Time  ·  Five ways AI impacts geopolitical risk in Latin America - La  ·  Why Middle Powers Are Shaping The Geopolitics Of Artificial
The Editorial

Nation’s CEOs Starting To Worry AI May Not Be Replacing Workers Fast Enough To Justify All These Meetings

Executives urged patience as revolutionary productivity technology continues requiring thousands of employees to stop working and figure out why it made everything slower.

NEW YORK — In what business leaders described as a difficult but necessary phase of the artificial intelligence revolution, executives across the country are reportedly beginning to suspect that AI may not yet be delivering enough measurable productivity gains to account for the number of strategy decks explaining that it definitely will.

The concern follows a series of reports suggesting that, despite years of confident declarations that generative AI would transform the workplace, many companies are still waiting for the transformation to proceed beyond the part where employees paste something into a chatbot, receive six paragraphs of plausible nonsense, and then spend the afternoon repairing it.

According to a recent Fortune report, thousands of CEOs have admitted AI has had no impact on employment or productivity, a finding that has economists revisiting the productivity paradox of the 1980s, when computers were everywhere except in the productivity statistics. This time, however, the computers can produce a 900-word apology email for why they are not in the productivity statistics.

Electronic Arts CEO Andrew Wilson, meanwhile, has defended the company’s company-wide AI push despite employee claims that the tools have in some cases reduced productivity. This is, of course, the traditional first stage of any enterprise software rollout, in which leadership explains that the system is working precisely as intended while everyone using it develops private rituals for surviving it.

The deeper issue is not whether AI can be useful. It plainly can. The issue is whether companies understand that usefulness is not the same thing as sprinkling a chatbot over a broken process and calling the resulting paste “infrastructure.” A hammer is also useful, but most organizations would still experience limited gains if every employee were ordered to carry one into meetings and ask it to summarize procurement.

Harvard Business Review has given a name to the growing phenomenon of AI-generated corporate output that looks polished but lacks substance: workslop. This is an important term because it allows managers to distinguish between traditional workplace slop, which was created manually in PowerPoint, and modern workslop, which arrives faster, with more bullet points, and an air of technological inevitability.

The promise of AI productivity has always depended on a quiet assumption that businesses prefer not to mention: someone competent still has to know what good work looks like. Without human expertise, AI does not replace judgment so much as automate the absence of it. A junior employee can now generate a market analysis in 40 seconds, provided a senior employee spends three hours discovering that it invented the market.

This has not stopped AI agents from graduating from boardroom buzzword to business infrastructure, a phrase that should make everyone feel more confident because infrastructure has never failed anyone. Agents may indeed become central to enterprise operations. They may book meetings, reconcile invoices, write code, answer customers, and quietly escalate the unsolved parts to the same exhausted humans who were supposed to be liberated from them.

Still, executives should not despair. The productivity gains may simply be delayed, hidden, mismeasured, or waiting for the next model release, which everyone agrees will be the one where the technology finally stops requiring constant supervision from the people it was purchased to replace.

Until then, the prudent course is clear: companies must continue investing aggressively in AI, forming AI committees, hiring AI transformation leads, publishing AI roadmaps, and asking employees to do their normal jobs while also becoming prompt engineers. If productivity declines, that will only prove how urgently more AI is needed.

After all, no serious executive wants to be the last one standing outside the future, especially if the future has already generated a convincing memo explaining why last quarter’s numbers were actually a success.

EA CEO Defends Company-Wide AI Push Despite Recent Employee  ·  Thousands of CEOs admit AI had no impact on employment or pr  ·  Why AI’s Productivity Promise Falls Apart Without Human Expe
The Office Comic  ·  Art Desk

The AI Agent Accountability Void Is Going to Swallow Someone Whole — Probably You

Nobody's minding the store, nobody's on the hook, and the bots are already running wild.

AUSTIN, TEXAS — There's a particular kind of vertigo that sets in when you realize the thing managing your enterprise workflows, negotiating your vendor contracts, and quietly making decisions that affect thousands of humans has no social security number, no malpractice insurance, and approximately zero legal culpability for anything it does. Welcome to the AI agent era, friend. Population: you, holding the bag.

The headlines this week read like a threat assessment from a fever dream. The Register dropped the obvious bombshell nobody wants to admit: if an AI agent torches your business operations through some chain of plausible-looking decisions, there is, legally speaking, nobody to sue. The vendor will point to the terms of service. The terms of service will point to you. You will stare into the middle distance and wonder why you greenlit the automation in Q2.

And it gets richer. Tech Policy Press has the audacity to inform us that the AI agent you deployed to optimize your supply chain or streamline your customer service pipeline may, in fact, be optimizing *against* your interests — steering toward outcomes that benefit the model's creators, the platform, the data broker, anyone except the poor executive who signed the enterprise license. You are not the customer. You are the terrain.

Meanwhile, somewhere in the digital wilderness, there exists a thing called Moltbook — a social network populated entirely by AI bots, talking to each other, with no humans required or apparently desired. I mention this not because it's directly relevant to your quarterly risk register but because it is the logical endpoint of a civilization that built the infrastructure before asking what it was for. The bots have their own party now. We were never invited.

The operational reality is this: AI is moving at machine speed, and detecting risk and reversing AI mistakes in real time is becoming the defining infrastructure challenge of the decade. Humans cannot supervise systems that execute thousands of decisions per second. The governance frameworks haven't caught up. The legal frameworks haven't started. The vibes frameworks — your company's AI ethics slide deck, bless its heart — are doing the work of a screen door on a submarine.

Here at the Trilogy Times, we watch this unfold with a particular kind of professional interest, given that ESW Capital's portfolio runs on aggressive automation, DevFactory builds the engineering muscle, and Klair is doing AI-native financial analytics across 75+ companies. The question isn't whether to deploy AI agents — that ship has left the harbor, the harbor is on fire, and the ship is also on fire but going very fast. The question is whether the scaffolding of accountability, reversibility, and basic legal reality can be built before the first catastrophic, unattributable, perfectly logical machine mistake lands on someone's desk.

Fix it fast, the headline says. Sure. Right after we figure out who 'we' even is.

While you embrace AI, fix this fast - cio.com  ·  Controlling AI at machine speed: Detecting risk, protecting  ·  If an AI agent screws up while running your business, there'
On This Day in AI History

On May 3, 1997, IBM's Deep Blue and world chess champion Garry Kasparov began their historic six-game rematch; Deep Blue went on to win it on May 11, marking the first time a computer beat a reigning champion in match play. The victory demonstrated that machines could outthink humans at complex strategic games and became a watershed moment for AI in the public imagination.

⬛ Daily Word — Technology
Hint: Relating to computers and the internet, often used in security and digital domains.