Vol. I  ·  No. 113  ·  Established 2026  ·  AI-Generated Daily  ·  Free to Read  ·  Free to Print

The Trilogy Times

All the news that's fit to generate  —  AI • Business • Innovation
THURSDAY, APRIL 23, 2026  ·  Powered by Anthropic Claude  ·  Published on Klair Trilogy International © 2026
Today's Edition

META OPENS THE PARLOR DOOR: PARENTS CAN NOW SEE WHAT THEIR KIDS TELL THE MACHINE

Zuckerberg's outfit rolls out topic-level surveillance of children's AI chats — a first for Big Tech and a signal the regulatory heat is working.

MENLO PARK, CALIF. — Meta on Wednesday began letting parents see the subjects their children discuss with its AI chatbot, a move that cracks open the black box of kid-machine conversation for the first time at any major platform.

The new parental oversight tool sorts chats into broad buckets — "School," "Entertainment," "Health and Wellbeing," "Lifestyle," "Travel," "Writing," and others. Parents won't read transcripts word-for-word. They'll get the topic labels. Meta says the feature lives inside its existing parental supervision system, which already governs Instagram and Messenger accounts for users under 16.

The timing tells the story. Congress has spent eighteen months rattling sabers over children and AI. State attorneys general have filed suits. The EU's AI Act put guardrails around minors months ago. Meta, which has taken more black eyes on child safety than a prizefighter in the tenth round, is moving before the next punch lands.

Here's the calculus. Meta AI now sits inside WhatsApp, Messenger, Instagram, and the main Facebook app. Hundreds of millions of people talk to it daily. Some of those people are thirteen years old and asking it questions they wouldn't ask their algebra teacher. The company needed a pressure valve. This is it.

But the tool raises as many questions as it settles. Topic labels are Meta's own invention — the company decides what counts as "Health and Wellbeing" versus "Lifestyle." A kid asking the chatbot about eating disorders could land in either bucket. Parents get a category, not context. Critics will call it a fig leaf. Defenders will call it a start.

The broader industry is watching because this sets the template. Google, OpenAI, and Anthropic all have chatbots that minors can access. None of them offer parents a comparable window into what their children are saying. Meta, for all its bruises, just moved first.

Meanwhile, the AI sector keeps throwing off sparks in every direction. A startup called Shade just banked $14 million to let creative teams search massive video libraries using plain English — the kind of AI tooling that's quietly reshaping how studios and agencies work. India's app economy is surging on the back of streaming and AI adoption, though global platforms are pocketing most of the rupees. And Tesla told investors it's tripling its historical capital spending to $25 billion in 2026, a bet so large the company expects negative free cash flow for the rest of the year.

Back in Menlo Park, the parental-controls play fits a pattern Trilogy watchers know well. When AI gets embedded in daily life — inside the apps people already use, inside the workflows companies already run — the governance question follows like thunder after lightning. The outfits building AI tools for enterprises, from portfolio shops running analytics platforms to startups indexing video, all face the same reckoning: who watches the machine, and how much do they get to see?

Meta's answer, for now, is a list of topics on a parent's phone. It's not everything. But it's the first crack in the wall, and in this racket, the first crack is the one that matters.


GOOGLE’S AI PLAYBOOK SPARKS A FOUR-FRONT BREAKOUT: SECURITY, SWITCHES, SCRAPS, AND SALES CHANNELS

From SentinelOne’s identity lockdown to Arista’s data-center sprint, the AI economy is adding new lanes—and investors are watching the scoreboard.

SAN FRANCISCO — We are HERE, folks, and Google just changed the tempo of the entire arena. One AI-centered shift in infrastructure and partnerships is ricocheting across cybersecurity, networking, climate tech, and even the way enterprise vendors recruit and activate their channel troops.

First quarter of action: cybersecurity. SentinelOne is stacking wins like a defensive coordinator who’s finally solved the opponent’s playbook. The company is partnering with Silverfort to extend AI-driven identity security across BOTH human users and the fast-growing army of non-human identities—service accounts, bots, workloads—the whole machine-to-machine backfield. The integration plugs identity protection into SentinelOne’s autonomous endpoint and runtime security platform, aiming to stop attackers where they now live: inside identity and access pathways. And THEN comes the trophy moment—SentinelOne was named Google Cloud Partner of the Year for Security. That’s not just a ribbon; it’s a signal flare for cloud security buyers and a potential catalyst in what markets are calling a valuation gap story. Yahoo Finance’s rundown frames it as execution catching up to price.

Second quarter: networking. Arista just hit a 52-week high, and the catalyst is pure AI data-center physics. An analyst is betting Google’s new AI data-center architecture boosts demand for the high-throughput switching gear Arista lives on. Translation: if AI clusters get bigger and hotter, the network fabric becomes the championship belt. The market tape says investors are buying that narrative.

Third quarter: climate meets compute. Mill is joining Google’s AI Futures Fund and rolling out a Gemini-enabled visual system to classify and characterize food waste at the source—turning scraps into structured data. That’s operational AI, not science fair AI.

Fourth quarter: go-to-market. ZINFI is pushing AI-powered partner onboarding software claiming a 60% cut in partner activation time—because spreadsheets are the slowest player on the field.

And in the stands? Bitcoin’s rally is running into an inflation warning tied to Pentagon-backed messaging—macro headwinds that could tighten risk appetite just as the AI trade hits full stride. BIG GAME, lots of crosscurrents, and the clock is still running.


Layoff Front Sweeps Big Tech and Retail IT as 2026 Forecast Turns Choppy

Meta, Starbucks, and UKG all trim headcount, signaling a cooling layer over corporate tech teams even as markets look oddly sunny.

SEATTLE — A low-pressure system of workforce reductions is rolling across the corporate tech landscape, with gusts strong enough to rattle even the household names. The outlook: targeted cuts now, more turbulence penciled in for 2026, and a startup sector that can’t decide whether it’s swimsuit season or storm prep.

In the biggest weather alert on the board, Meta is aiming for May 20 as the first checkpoint for layoffs, according to a Reuters report, with additional trimming expected later in 2026. That kind of scheduled storm—dated, phased, and recurring—suggests management is treating headcount like a thermostat, not a once-a-decade evacuation. Workers and vendors around the Meta ecosystem may want to secure loose items: project timelines, discretionary spend, and any org charts that rely on “TBD” boxes.

Closer to home in the Pacific Northwest, Starbucks is cutting tech roles as a new CTO reshapes the organization, GeekWire reports—classic “new captain, new route” conditions. These transitions often start as light restructuring drizzle, then intensify into localized downpours as duplicate teams are consolidated and platform roadmaps are re-scoped. The message for enterprise IT workers: when leadership changes, assume a wind shift is imminent.

Meanwhile, HR technology provider UKG is shedding 950 jobs in a restructuring round, per HR Executive—another sign that even the systems that track everyone else’s labor are not immune to the pressure gradient of higher efficiency demands.

The confusing part of the radar: layoff trackers note a year that swung from heavy startup cutbacks to calmer skies late in the cycle, while analysts are already mapping “big job cuts in 2026.” Translation: sunshine in capital markets doesn’t always mean clear conditions for payroll.

Prepare for patchy visibility: keep skills current, keep a runway, and treat “reorg season” like hurricane season—predictable, recurring, and best respected early.

Haiku of the Day  ·  Claude Haiku

Machines learn to watch
Parents peek through open doors
Trust erodes so fast
The New Yorker Style  ·  Art Desk
The Far Side Style  ·  Art Desk
News in Brief
Anthropic's Mythos Model Triggers Emergency Government Response as SpaceX Pivots to AI
SAN FRANCISCO — Anthropic's latest AI model, Mythos, has prompted an unprecedented emergency response from governments and technology leaders worldwide, with central banks and intelligence agencies scrambling to assess potential risks from capabilities the company has not publicly disclosed. The CEOs of Google, OpenAI, Microsoft, and CrowdStrike participated in an emergency conference call with U.S.
Pursuant to Applicable Regulatory Frameworks: A Comprehensive Analysis of State-Sanctioned Network Discontinuances and Legislative Overreach in Digital Governance
WASHINGTON, D.C.
AI Video Hits Its “App Store Moment” as New Founders, New Tooling, and New Hype Collide
SAN FRANCISCO — AI video is having a week that feels like a decade.
The Collapse of Institutional Trust Is Complete, and We're All Just Watching It Happen
SAN FRANCISCO — There's a moment in every civilization's decline when the institutions meant to protect people simply stop pretending to care, and I think we're there now, watching it unfold in real time across the digital landscape that increasingly constitutes reality itself. Consider the lawsuit filed this week against Match Group, the dating app conglomerate behind Tinder and Hinge, accusing the company of "accommodating rapists" by refusing to implement basic safety measures that could identify and remove serial predators.
Nation’s CEOs Relieved To Learn AI Still Comes With A User Manual, A Rebrand, And Several Thousand Layoffs
LAS VEGAS — As Day 1 of CES 2026 unveiled a fresh crop of devices capable of detecting your mood, reflecting your mood back at you, and then charging you a subscription to manage your mood, corporate America celebrated a more reassuring breakthrough: artificial intelligence is still, at its core, a communications problem. The evidence arrived this week in the form of a new executive guidebook promising to explain AI to leaders—a development analysts say will finally allow decision-makers to stop pretending they learned machine learning in a cross-functional sync. “At last, a set of fundamentals,” said one director of strategy, speaking from a convention-center corridor where a refrigerator was demonstrating its ability to write performance reviews.
A Trilogy Company
Crossover
The world's top 1% remote talent, rigorously tested and ready to ship.
A Trilogy Company
Alpha School
AI-powered learning. Two hours a day. Academic results that defy belief.
A Trilogy Company
Skyvera
Next-generation telecom software — built for the networks of tomorrow.
A Trilogy Company
Klair
Your AI-first operating system. Every workflow. Every team. One platform.
A Trilogy Company
Trilogy
We buy good software businesses and turn them into great ones — with AI.
The Builder Desk  —  AI Builder Team

AI Builder Team Ships Budget Bot Review Agent, Hardens Analytics Layer Across Four Repos

From supply-chain defense to self-reviewing budget models, the team's latest sprint delivered production fixes, new admin tooling, and a controversial analytics overhaul that has the newsroom divided.

The AI Builder Team closed out a 10-PR day Tuesday with work that spanned the full stack — from Surtr's production pipeline fixes to Klair's new Budget Bot review agent, a feature that promises to catch budget modeling errors before they reach finance leadership.

The headline act: marcusdAIy's Budget Bot 4.0 review agent (PR #2654), which ships the first two automated checks against the canonical budget plan. The system now validates planned EBITDA margin against the approved target and flags quarter-over-quarter gross-margin deterioration, surfacing findings in a new scorecard panel inside the budget editor. "This is foundational infrastructure for autonomous budget quality," marcusdAIy told the Times in a written statement, adding with characteristic understatement, "The checks are extensible. More agents ship next sprint. Mac can write about those too, if he's capable of understanding them."

The review agent is extensible the way a bicycle is a vehicle — technically true, limited upside. What matters is whether finance trusts a bot to second-guess their models. We'll see.

More consequential: @eric-tril's supply-chain hardening (PR #2659), which adds a 7-day release cooldown to all Python dependencies across Klair. The change forces `uv` to ignore any PyPI package published in the last week, a blunt but effective counter to the compromised-maintainer attacks that have defined the npm/PyPI threat model since 2022. The policy mirrors Anthropic's internal controls. It also means the team now lags a week behind upstream security patches — a trade-off eric-tril defends as "buying time for the community to discover malicious releases before we ingest them." The logic holds. The risk is real.

In Aerie, marcusdAIy's analytics projection layer overhaul (PR #118) rewrote the salary and benefit calculations that feed workforce planning dashboards. The PR body promises "downstream data correctness fixes." The five attached screenshots show… tables. With numbers. "The projection math was producing incorrect FTE costs under certain comp-change scenarios," marcusdAIy explained when pressed. "This fixes it. Thoroughly. You're welcome." One is tempted to ask what "certain scenarios" means, or whether "thoroughly" is the word a rigorous engineer would choose, but the PR merged and the dashboards haven't exploded, so we move on.

Meanwhile, @ashwanth1109 shipped a super-admin raw data inspection view (PR #2652) for the AI Spend & Adoption dashboard, exposing the Azure cost report table that feeds the rollups executives see. It's exactly the kind of trust-building transparency feature that makes Klair's financial tooling credible. @kevalshahtrilogy brought two migrated Surtr pipelines live (PR #32) and patched the school-master and edu-expense configs that failed during last week's P1 release (PR #33) — the unglamorous fix-forward work that keeps production humming.

The through-line: a team shipping across four repos, from infrastructure policy to AI-assisted workflows, with the confidence of engineers who know their work lands in production.

Mac's Picks — Key PRs Today
#33 — fix(pipelines): resolve P1 release config misses (school-master URL, edu-expense SES sender) @kevalshahtrilogy  no labels

## Summary

Two config fixes for pipelines that failed on the P1 prod release (PR #31):

- school-master-data-sync: first invocation raised `ValueError: Master sheet URL not configured. Set EDU_SCHOOLS_DATA_SHEET_URL environment variable or pass master_sheet_url in params.` The env var was never added to pipeline.json. Set it to the authoritative Alpha School Master Data sheet (documented in Klair's data-lineage-v2/layer-1-sor/016-l1-school-master-data-gsheet.md).

- edu-expense-report-sender: all 4 scheduled sends failed with `SES MessageRejected - Email address is not verified: noreply@klairvoyant.ai`. The handler caught each per-schedule error and returned 200 to Lambda (silent fail). Swap SES_SOURCE_EMAIL to noreply@klair.ai, which is already verified in us-east-1 and in use by orphan-classes-pipeline.

## Verification

Pulled CloudWatch from /klair/pipelines/prod/school-master-data-sync and /klair/pipelines/prod/edu-expense-report-sender to confirm root causes.

The sheet URL in this PR matches the one in Klair's data-lineage docs — no secrets exposed (the sheet relies on a Google service account, not a public link).

## Out of scope

- ps-pipeline failure (missing surtr/netsuite-ps-credentials secret) — tracked separately; requires the NetSuite OAuth2 private key from the old Klair deployment.

- quickbooks-expense-analysis Zoom classification warning — cosmetic, pipeline completed successfully (20 cost opportunities, $14,124/mo est. savings).

## Test plan

- [ ] Merge to main, CI builds and deploys the two Lambdas

- [ ] Manually invoke pipeline-school-master-data-sync-prod — confirm handler reaches Google Sheets auth / sync steps (no ValueError at line 77)

- [ ] Manually invoke pipeline-edu-expense-report-sender-prod — confirm Execution completed in ...: N successful, 0 failed in CloudWatch

- [ ] Check recipient inbox for the expected weekly report (recipients are internal @klair.ai addresses, safe to test)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#118 — fix(analytics): harden the projection layer + downstream data correctness fixes @marcusdAIy  no labels

## Screenshots

<img width="1585" height="652" alt="Screenshot 2026-04-21 141135" src="https://github.com/user-attachments/assets/e64df1dc-ce3f-4405-9474-46dfba88f231" />

<img width="1615" height="687" alt="Screenshot 2026-04-21 141214" src="https://github.com/user-attachments/assets/e4713e62-1a8c-4bb7-a92e-639c51c85a79" />

<img width="936" height="885" alt="Screenshot 2026-04-21 141329" src="https://github.com/user-attachments/assets/7739571d-7747-42c9-99c2-901c44fc6dcf" />

<img width="1592" height="898" alt="Screenshot 2026-04-21 141410" src="https://github.com/user-attachments/assets/127dff8d-f05d-4fcc-a421-218d72262399" />

<img width="1601" height="867" alt="Screenshot 2026-04-21 141435" src="https://github.com/user-attachments/assets/303ebad4-34c2-43fb-8f0c-f5ba980a6d81" />

---

## Summary

Hardens the analytics-worker projection layer in three categories, each driven by a specific finding from this week's [Convex projections audit](../blob/main/context_cache/audit_aerie_convex_projections.md):

1. Tier 1 — reliability: kill the "Many documents read" warnings the worker logs every cycle on joeChartsBudgetActuals and similar tables (~29k/32k docs read per upsert today).

2. Tier 2 — stop wasting work: drop projections with no consumers; refresh static reference tables on a daily/weekly cadence instead of hourly.

3. Tier 3 — observability: add live freshness indicators to dashboard headers so leadership can see "data as of X" at a glance — counters the recurring "but is this even current?" question that drives most "let's just hit Redshift" momentum.

Goal is to make the hourly worker boring: no warnings, no failed batches, no wasted writes.

---

## Prod impact (please prioritise this PR's review)

Dev Convex went from 18 distinct critical/warning insights to 5 after the original PR #118 commits landed. Verified on the Aerie dev Convex Health -> Insights dashboard.

The 13 that disappeared map directly to PR #118 work:

- 5 joeCharts:upsert* "Nearing documents read limit" criticals/warnings -> resolved by Tier 1 composite-index refactor

- 4 replaceEnrollmentCohortStudents / marketing:upsertMarketingEventContacts / enrollment:clearPipelineStudents / marketing:insertWeeklyDeposits write-conflict criticals -> resolved by .unique() migration + dead-projection drops reducing write contention

- 4 retried-write-conflict warnings on the same tables -> same root cause

Prod is still on main and is therefore still hitting all 18 — including the criticals. Until this PR merges and deploys, the prod worker is one growth nudge from upsertBudgetActuals failing the 32k document read limit (it was at ~29k as of the audit).

The two commits added since the original review (below) cover 2 of the remaining 5 dev warnings — the most dangerous one (a separate read-limit risk on _checkPipeline at 31k/32k) and a silent data-loss bug on expenseTransactions that was under-counting prod by every multi-line QuickBooks expense. After these merge, dev should be at 3 warnings, all auto-retried tolerable noise.

### Follow-up commits added since initial review

| Commit | Issue | Fix |

|---|---|---|

| [a8b2b7e](#) | expenseTransactions was keying upserts on transactionId alone. QuickBooks expenses split across multiple schools share one transaction_id (e.g. one Dell invoice → $379 Alpha Miami + $1,137 Austin K-8). Each subsequent line silently overwrote the previous via .unique() patch. Local Aerie was under-counting prod by 102 vs 101 in the 4/13–4/19 sample — exactly the multi-line transactions in the window. | Project the source PK (staging_education.quickbooks_expense_transactions.id BIGINT IDENTITY) as sourceRowId, key the Convex upsert on it. Matches Klair's quickbooks_expense_analysis_service.py which surfaces id directly. Logs when a patch changes line amount materially (audit trail). Adds pnpm backfill-expense-transactions for fast local re-population. Verified: 102 rows / $37,111.41 in window, exact parity with Klair. |

| [d2ce81c](#) | dataConsistency:_checkPipeline was at 31k/32k document reads — 3% from the same total-failure mode the joeCharts upserts had. pipelineFunnel accumulates history (every snapshot/program/stage row forever), and the function .collect()'d the whole table just to find the latest snapshot per program. | Two indexed reads per program (latest-date probe + same-date scan) using existing by_programCode_snapshotDate_stageId index. Reads drop from O(historyDepth × programs × stages) → O(programs × (1 + stages)) — about 310 reads vs 30k+ for a 39-program / 7-stage funnel. Verified locally against prod-imported data: runConsistencyCheck returns 663 stages across 39 programs, no truncation, no warnings. |

| [b4ad842](#) | Joe Charts → Financial → Model vs Actuals showed "Actual $/student" 2–10× inflated vs Klair across every category. Two coordinated bugs: (a) queryPlRecords did not ::text-cast its date columns, so the Postgres driver returned JS Date objects which z.coerce.string() then mangled into "Thu Jan 01 2026 00:00:00 GMT-0600 ..." strings that broke every period filter; (b) getModelVsActuals filtered >= periodStart with no upper bound while Klair's _compute_period_dates returns (start, today) and uses >= start AND <= today, so forward-looking fct_pl entries inflated YTD totals. | Added ::text cast to queryPlRecords (sync side) and execPeriodEnd() helper applied at the one Joe Charts site that aggregates from plRecords (chat side). Deliberately left the 5 other Joe Charts queries that aggregate from joeChartsBudgetActuals (getKPIs, getBudgetVarianceByClass, getExpenseBreakdown, getAlerts) on >= start only — Klair's equivalent queries against consolidated_budgets_and_actuals use the same start-only pattern (joe_charts_edu_service.py:233, :720), and adding an upper clamp would zero out current-quarter KPIs whenever month-end snapshot dates fall after today. Also adds pnpm backfill-pl-records for fast local re-population. Verified: Model vs Actuals (Holdings, Portfolio, YTD, 3,200 students) now matches Klair row-for-row across all 9 categories. |

### Known follow-up

| Dashboard | Issue | Plan |

|---|---|---|

| Joe Charts → CAPEX & AP | Small data drift vs Klair: Total Open card shows 166 invoices / $16.70M period AP vs Klair's 196 / $16.50M; Current Days $-7K and 91+ Days $-10K despite matching invoice counts. Per-bucket counts and amounts for 1-30 / 31-60 / 61-90 buckets match exactly. | Three projection-layer bugs: (1) r.lineAmount ?? r.totalAmount fallback in refresh.ts:1095 inflates totals when source rows have null line_amount (Klair uses 0); (2) double pro-rate of balance — once at projection write time, again at dashboard read time; (3) total_transactions returns exploded line-item count rather than source-row count. Cleanest fix flattens joeChartsQBAP to per-line rows matching Klair's source schema and removes the JSON re-explode round-trip in the dashboard. Out of scope here — will be a follow-up PR (~2-3h refactor + verification). This is the last apparent Joe Charts parity issue. |

### Intentionally deferred with rationale

| Issue | Rationale |
|---|---|
| Medium #9 — Redshift queries still run every cycle | Cadence only skips Convex writes. Convex cost dominated the hot path; Redshift queries on static reference tables are milliseconds. Documented; not a safety issue. |
| Medium #10 — Optional fields in composite keys | Duplicate check on imported prod data showed 0 collisions across all 7 tables. Fixing in-mutation creates duplicates against existing data; needs a one-time migration PR. File issue. |
| Medium #11 — NetSuite AP status in composite key | Needs finance input on whether table semantically wants "current snapshot" (drop status from key) or "history" (keep). File issue. |
| Medium #12 — UTC date in tooltip can be off-by-one | Minor UX polish. Swap to toLocaleDateString(...) in a follow-up. |
| Low #14 — Redundant Tooltip.Provider on badge | Nested providers are a no-op per Radix docs; keeping the self-wrap lets the component work in unit tests without additional setup. |
| Low #15 — new args object per useQuery render | Convex compares serialized args; useMemo is micro-optimization that trades clarity for ~nothing. |
| Low #16 — rolling vs calendar interval semantics | Intentional and documented in module comment. Fine for an internal tool. |
| Nit #17 — queryFunnelActivity location in PR description | Corrected in this body update. |

---

## Tier 1 — Reliability

Seven joeCharts upserts shared the same anti-pattern: a partial withIndex() narrowing followed by .filter() over the trailing fields of the unique key. For popular dimensions (e.g. Holdings BU on upsertBudgetActuals) this scanned thousands of docs per record and tripped Convex's "Many documents read" warning at ~29k/32k.

Fix per table: add a composite by_uniqueKey index covering the FULL unique key, then refactor the mutation to use .unique() against that index. Lookup goes from O(n-per-key) to O(log n).

| Mutation | Old: index + filter | New: composite index |
|---|---|---|
| upsertBudgetActuals | (BU, dataSource) + filter (period, className) | (BU, dataSource, period, className) |
| upsertStudentEnrollment | (campusName) + filter (status, quarter) | (campusName, status, quarter) |
| upsertMapGrowth | (campus) + filter (termName) | (campus, termName) |
| upsertMapGrowthBySubject | (campus, subject) + filter (grade) | (campus, subject, grade) |
| upsertFinancialModels | (schoolName, modelName) + filter (metricName) | (schoolName, modelName, metricName) |
| upsertCapex | (periodType) + filter (className) | (className, periodType) |
| upsertNetSuiteAP | (periodType) + filter on 4 fields | (vendorName, periodType, subsidiaryName, className, status) |

Also bonus correctness: .unique() enforces the invariant the schema implies (one row per composite key) where the old .filter().first() would silently pick whichever row the index returned first if duplicates ever existed.
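For readers less familiar with the Convex idiom, a minimal sketch of the refactored upsert shape (field names here are illustrative assumptions; the real definitions live in chat/convex/analyticsSchema.ts and chat/convex/analytics/joeCharts.ts):

```typescript
// Sketch only: assumes a schema index declared as
// .index("by_uniqueKey", ["businessUnit", "dataSource", "period", "className"])
// covering the full unique key of joeChartsBudgetActuals.
import { mutation } from "./_generated/server";
import { v } from "convex/values";

export const upsertBudgetActuals = mutation({
  args: {
    businessUnit: v.string(),
    dataSource: v.string(),
    period: v.string(),
    className: v.string(),
    amount: v.number(),
  },
  handler: async (ctx, args) => {
    // One indexed point lookup replaces the old partial withIndex() + .filter() scan.
    const existing = await ctx.db
      .query("joeChartsBudgetActuals")
      .withIndex("by_uniqueKey", (q) =>
        q
          .eq("businessUnit", args.businessUnit)
          .eq("dataSource", args.dataSource)
          .eq("period", args.period)
          .eq("className", args.className)
      )
      .unique(); // throws if duplicates exist, enforcing one row per composite key
    if (existing) {
      await ctx.db.patch(existing._id, { amount: args.amount });
    } else {
      await ctx.db.insert("joeChartsBudgetActuals", args);
    }
  },
});
```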

---

## Tier 2 — Stop wasting work

### 2a. Stop refreshing dead projections

analyticsDeals and contactActivity have zero runtime consumers — no dashboard, AI chat tool, or cron query reads them; they were only self-merged by the writer (audit Finding 3). The cron stops calling upsertDeals and upsertContactActivity; queryFunnelActivity is also dropped since nothing else needs it. queryDeals is kept because contact enrichment uses deals to derive enrollment periods even though we don't persist them anymore.

Schema entries and the upsert mutations themselves are preserved with @deprecated JSDoc — a follow-up PR will drop the tables once we're sure existing rows are safe to remove for a release cycle. Two-step on purpose so any forgotten consumer surfaces in this PR's bake-in window rather than as a runtime error.

### 2b. Per-table refresh cadence

New module sync/src/analytics/refresh-cadence.ts lets domains opt into weekly or daily refresh instead of the default hourly. Applied to the seven non-chat-only static tables flagged in the audit:

| Cadence | Domains |
|---|---|
| weekly | schoolMappings, crossSystemMappings, joeChartsCapacityMap, joeChartsSchoolMetadata |
| daily | programs, joeChartsUnitEconomics, joeChartsFinancialModels |

Implementation is opt-in: anything not in REFRESH_TIERS defaults to hourly so behaviour for time-series tables (snapshots, projections, funnel) is unchanged. In-memory tracker on the worker process — restart forces a refresh on next cycle, which is fine.
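A minimal sketch of the cadence check (constant values and helper names are illustrative; the real module is sync/src/analytics/refresh-cadence.ts):

```typescript
// Sketch only: opt-in refresh tiers with an in-memory tracker. Anything not
// listed in REFRESH_TIERS falls back to the default hourly cadence.
type Tier = "hourly" | "daily" | "weekly";

const TIER_MS: Record<Tier, number> = {
  hourly: 60 * 60 * 1000,
  daily: 24 * 60 * 60 * 1000,
  weekly: 7 * 24 * 60 * 60 * 1000,
};

// Only static reference tables opt in (illustrative subset).
const REFRESH_TIERS: Record<string, Tier> = {
  schoolMappings: "weekly",
  crossSystemMappings: "weekly",
  programs: "daily",
  joeChartsUnitEconomics: "daily",
};

// In-memory: a worker restart simply forces a refresh on the next cycle.
const lastRefreshed = new Map<string, number>();

export function shouldRefresh(domain: string, now = Date.now()): boolean {
  const tier = REFRESH_TIERS[domain] ?? "hourly";
  const last = lastRefreshed.get(domain);
  if (last !== undefined && now - last < TIER_MS[tier]) {
    return false; // still within the cadence window: skip this cycle
  }
  lastRefreshed.set(domain, now);
  return true;
}
```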

Per-table chat-only projections are not touched in this PR.

---

## Tier 3 — Freshness indicators on dashboards

Counters the "but how do I know if this data is even current?" anxiety that drives most "let's just hit Redshift directly" momentum. Reactive Convex makes the live badge nearly free — the moment a fresh projection lands, the badge re-renders without a refresh button.

- chat/convex/analytics/freshness.ts — getDomainFreshness + getSingleDomainFreshness queries for five domains (admissions, joeCharts, wiki, pmo, camps). Uses Convex's always-available _creationTime system index (.order("desc").first()) so it works against any representative table regardless of whether it has a snapshotDate index (see the sketch after this list).

- New <DashboardFreshnessBadge> component — color-tiered (fresh <2h green / aging <24h yellow / stale red / unknown slate). Self-wraps with Tooltip.Provider so it works in any rendering context including unit tests.

- Wired into FunnelView (admissions) and CampsView (camps) as a PoC. Other dashboards can adopt the same one-liner: <DashboardFreshnessBadge domain="..." />.
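As referenced above, a hedged sketch of the single-domain freshness query (the domain-to-table mapping and return shape are assumptions for illustration; the real queries live in chat/convex/analytics/freshness.ts):

```typescript
// Sketch only. Relies on Convex's built-in _creationTime ordering:
// .order("desc").first() returns the most recently written row of any table.
import { query } from "./_generated/server";
import { v } from "convex/values";

// Hand-coded representative table per domain (illustrative subset).
const REPRESENTATIVE_TABLE: Record<string, string> = {
  admissions: "enrollmentSnapshots",
  joeCharts: "joeChartsBudgetActuals",
  camps: "campsSessions",
};

export const getSingleDomainFreshness = query({
  args: { domain: v.string() },
  handler: async (ctx, { domain }) => {
    const table = REPRESENTATIVE_TABLE[domain];
    if (!table) return { snapshotDate: null, ageMs: null }; // unknown-domain fallthrough
    // Cast because the mapping is resolved at runtime rather than typed per table.
    const latest = await ctx.db.query(table as any).order("desc").first();
    if (!latest) return { snapshotDate: null, ageMs: null }; // empty table renders as "Unknown"
    return {
      snapshotDate: new Date(latest._creationTime).toISOString(),
      ageMs: Date.now() - latest._creationTime,
    };
  },
});
```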

---

## Tests

| Suite | Result |
|---|---|
| sync (full suite, 46 files, 654 tests) | passing |
| chat/convex/freshness.test.ts (4 new) | passing |
| chat/convex/buildoutDetails.test.ts (12) | passing |
| chat/convex/dashboards.test.ts (41) | passing |
| chat/convex/eventProcessor.test.ts (42) | passing |
| chat/components/dashboards/admissions/camps/__tests__/camps-view.test.tsx (8, modified due to badge integration) | passing |

New test coverage:

- 6 unit tests on refresh-cadence covering tier defaults, independent domain tracking, hourly fallback for unlisted domains, and a REFRESH_TIERS allow-list test that catches accidentally adding chat-only tables here.

- 4 query tests on getDomainFreshness — empty-table case, non-empty selection, single-domain variant, unknown-domain fallthrough.

- Updated 2 refreshEntityRecords tests to reflect the new "deals not upserted" semantics; failure-isolation property still under test.

Pre-existing failures NOT fixed by this PR: lib/__tests__/agent.test.ts has 2 getDataDir tests that fail on main today (verified). Unrelated to this work; tracked separately.

---

## Deploy sequence (one-time)

The expense sourceRowId migration (commit a8b2b7e) leaves legacy rows in prod that the new upsert can't see -- without the purge below, every sync cycle would INSERT a fresh row alongside each legacy row and the Expense Analysis dashboard would double-count. Run these steps in order when deploying this PR to any environment with pre-existing expenseTransactions data:

1. Merge & deploy (Convex schema + functions land first via the existing pipeline).

2. Pause the analytics worker (or wait until it's idle between cycles) so it can't re-insert ghost rows mid-purge.

3. Run the purge until hasMore returns false -- it pages 4k rows per call to stay under the Convex 32k document read limit:

   # repeat until { ..., "hasMore": false }
   npx convex run analytics/expenses:_clearLegacyExpenseTransactions

4. Repopulate with the corrected schema. Either resume the worker (next cycle picks up where the watermark left off) or run the focused backfill for faster validation:

   pnpm --filter @bran/sync backfill-expense-transactions --dry-run   # preview
   pnpm --filter @bran/sync backfill-expense-transactions             # for real
   # The script prompts for confirmation when CONVEX_URL points at a non-dev
   # deployment; set BACKFILL_CONFIRM=yes to bypass for non-interactive runs.

5. Resume the analytics worker.

6. Spot-check the Expense Analysis dashboard -- counts and totals should match Klair's QuickBooks Expense Analysis view for the same date window (we verified 102 rows / $37,111.41 in 4/13-4/19 against prod Klair).

P&L records (commit b4ad842) follow the same pattern but are simpler -- the worker will overwrite the bad date strings on its next cycle, or you can force-refresh with pnpm --filter @bran/sync backfill-pl-records. No purge needed because the upsert keys on plId (which existed pre-migration).

Follow-up PR after a bake-in window: tighten sourceRowId: v.optional(v.string()) to v.string() in analyticsSchema.ts so this class of ghost-row bug cannot recur.

## Test Plan for Reviewer

CI runs tests, typecheck, and lint automatically.

### 1. Read the diff

- [ ] Confirm the 7 by_uniqueKey composite indexes in chat/convex/analyticsSchema.ts cover the FULL unique key for each table (matches the table in the Tier 1 section above)

- [ ] Confirm each refactored mutation in chat/convex/analytics/joeCharts.ts uses the new index AND switches from .first() to .unique()

- [ ] Confirm REFRESH_TIERS in sync/src/analytics/refresh-cadence.ts only contains the 7 non-chat-only static tables

- [ ] Confirm the @deprecated mutations in chat/convex/analytics/entities.ts are still present (preserved for ad-hoc backfills) but no longer called by the cron

### 2. Local worker run (recommended — concrete signal that the reliability fix works)

# from repo root, with local Convex pointing at prod-like data

# (run pnpm sync:pull-prod-data first if your local is stale)

cd sync

$env:DATA_DIR = "../data" # PowerShell — bash: export DATA_DIR=../data

npx tsx run-analytics-worker.ts # or pnpm worker:analytics

Wait through one cycle (~5-10 min) and confirm:

- [ ] Each joeCharts push logs ✓ {table}: N records, with zero "Many documents read" warnings and zero "X/N records failed" lines

- [ ] [analytics] Cross-system mappings pushed to Convex: N schools appears on first cycle (cadence-gated path fires when no prior run is recorded)

- [ ] No log lines for analyticsDeals or contactActivity upserts (those are now dropped)

- [ ] Joe Charts financial refresh completes in ~60s (previously slow + warning-prone)

After cycle 2 (let the worker keep running):

- [ ] [analytics] ⏭ {domain}: skipped (within {tier} cadence) lines appear for each tiered domain (cycle 2 within the daily/weekly window)

### 3. Local UI validation (optional)

docker compose build chat && docker compose up -d chat

Then open the browser:

- [ ] Open Admissions → Funnel → confirm freshness badge appears in the top filter row

- [ ] Open Admissions → Camps → confirm freshness badge appears (right-aligned)

- [ ] Hover the badge → tooltip shows "Data as of YYYY-MM-DD (Xh ago)"

- [ ] Confirm badge colour matches expected tier (green <2h, yellow <24h, red older, slate when no snapshot)

### 4. Direct query validation (optional, fast)

cd chat

npx convex run analytics/freshness:getDomainFreshness

Should return all 5 domains with snapshotDate and ageMs populated (or null/null if your local Convex is empty). Local validation result above shows what to expect when prod-like data is imported.

### 5. Post-deploy spot-check

After this PR merges:

- [ ] Tail prod analytics-worker logs through one full cycle — confirm zero "Many documents read" warnings (was firing on upsertBudgetActuals previously)

- [ ] Confirm the cadence-skip lines appear on the second cycle for the tiered domains

- [ ] Confirm dashboards still render correctly (no missing data from the removed analyticsDeals / contactActivity upserts)

---

## Risks & Mitigations

| Risk | Mitigation |

|---|---|

| Adding a 4th index per table increases write cost | Each index is small (3-5 fields, no large strings). Net win because the upsert .unique() path drops the dominant cost (32k+ doc reads). Convex pricing model favours fewer reads over fewer indexes. |

| Cadence in-memory state lost on worker restart | First cycle after restart refreshes all tiered domains; acceptable cost. Persisting cadence to Convex would add complexity for marginal benefit (worker restarts are rare). |

| @deprecated dead-projection mutations still callable | Intentional — preserved for ad-hoc backfills if a consumer ever materialises. Real cleanup (drop tables, drop mutations) is a separate PR after a bake-in window. |

| Freshness badge picks the wrong representative table | Mapping is hand-coded and greppable; trivial to swap a table if a domain owner disagrees. The badge degrades gracefully to "Unknown" when its representative table is empty. |

| Existing rows have duplicates on the new composite key (would break .unique()) | Validated locally against prod-imported data: zero collisions across all 7 tables. If prod somehow has duplicates we don't have, the worker logs a clear errors[label] from the catch — visible in the next cycle's log. |

---

## What's NOT in this PR

- AI Chat-only projection demotion (audit Finding 2) — needs usage data on the chat agent before deciding. Held back per current direction.

- JSON-string → sub-table migrations (audit Finding 5) — structural debt, deserves its own PR per table with backfill code and rollout plan. File issue.

- Incremental sync for huge tables (audit Recommendation 7) — architectural change to runRefreshCycle, deserves its own discussion.

- Drop the analyticsDeals / contactActivity schema entries — wait one release cycle to confirm no hidden consumers, then remove tables in a follow-up.

#2652 — feat(ai-spend): super-admin Raw Data Reports view + Azure cost reports table (KLAIR-2579) @ashwanth1109  no labels

## Demo

### Azure Cost Reports

<img width="2624" height="1636" alt="image" src="https://github.com/user-attachments/assets/eefae772-1679-416b-aa21-3000ca6622dd" />

### Azure Token Usage

<img width="2624" height="1636" alt="image" src="https://github.com/user-attachments/assets/aa89d42c-e61e-41d4-8979-643f58d706bf" />

## Summary

- Adds a super-admin-only Raw Data Reports detail view inside AI Spend & Adoption (/ai-adoption) for inspecting raw source tables behind the rollups. First entry: Azure — Cost Reports (core_finance.ai_spend_azure_cost_reports).

- Backend: GET /api/ai-costs/raw/azure-cost-reports guarded by require_super_admin, returns { data, total_rows } preserving full 6-decimal cost_usd precision.

- Frontend: new AICostsShell wraps the existing AIAdoptionV2 screen, adds a header-strip "Raw Data Reports" button (super-admin only), a registry-backed picker page, and an AzureCostReportsView rendering all rows in UnifiedTable with client-side sort/search/column-selector, a pre-computed total row, and a custom full-precision CSV export button.

## Why

Implements [spec 01 — Azure Cost Reports Raw View](features/ai-spend-and-adoption/ai-spend-raw-data/specs/01-azure-cost-reports-raw-view/spec.md) for Linear ticket KLAIR-2579. Establishes the shell refactor, super-admin gating, breadcrumb pattern, picker registry, and first raw-table surface. Future specs register additional raw tables (Azure token usage, etc.) without structural changes.

## Notable deviations from spec

All documented in the spec's new "Implementation Deviations" section:

1. Shell wraps AIAdoptionV2, not AICosts. The /ai-adoption route already pointed to AIAdoptionV2; legacy screens/AICosts/ is unreferenced and left untouched. Files placed under screens/AIAdoptionV2/.

2. Custom Export CSV button replaces UnifiedTable's built-in export feature, which hardcodes .toFixed(2) on numeric cells and would drop precision. The custom button builds the CSV from raw rows, preserving all 6 decimals of cost_usd (see the sketch after this list).

3. Shell <h1> title visible across both modes — keeps the page title always present per FR2, without needing AIAdoptionV2 changes.
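To make deviation 2 concrete, a minimal sketch of a precision-preserving export (the row shape and column list are assumptions, not the component's actual props; real code would also need CSV escaping for free-text fields):

```typescript
// Sketch only: build the CSV from the raw rows and never round cost_usd.
// UnifiedTable's built-in export applies .toFixed(2), which would drop precision.
interface AzureCostReportRow {
  report_date: string;
  service_name: string;
  cost_usd: number; // up to 6 decimal places from core_finance.ai_spend_azure_cost_reports
}

export function buildAzureCostCsv(rows: AzureCostReportRow[]): string {
  const header = "report_date,service_name,cost_usd";
  const lines = rows.map((r) =>
    // String() keeps whatever precision was parsed; no rounding applied.
    [r.report_date, r.service_name, String(r.cost_usd)].join(",")
  );
  return [header, ...lines].join("\n");
}
```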

## Test plan

- [ ] As a super-admin, visit /ai-adoption → "Raw Data Reports" button appears in the header strip.

- [ ] As a non-super-admin, the button is hidden; endpoint returns 403.

- [ ] Click "Raw Data Reports" → picker lists "Azure — Cost Reports" with source-table label.

- [ ] Click the entry → view shows 3-level breadcrumb + header strip (source table, row count, min–max date range) + UnifiedTable with 5 default-visible columns; column selector reveals 4 more.

- [ ] Table sorts default by report_date DESC, search works, total row shows $<sum> under cost_usd.

- [ ] Click "Export CSV" → downloads azure-cost-reports-<yyyy-MM-dd>.csv with full 6-decimal precision.

- [ ] Breadcrumb levels return to picker / rollup correctly.

- [ ] pnpm tsc --noEmit passes in klair-client/.

- [ ] ESLint passes with --max-warnings 0 on changed files.

- [ ] uv run ruff format and uv run ruff check pass on changed backend files.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2654 — Budget Bot 4.0: Review Agent checks (C2.1, C2.6), /review endpoint, and scorecard panel @marcusdAIy  no labels

## Screenshots

<img width="1694" height="936" alt="image" src="https://github.com/user-attachments/assets/b0de902a-d3f1-42b5-b930-a73e15c0435e" />

<img width="1660" height="915" alt="image" src="https://github.com/user-attachments/assets/34b2767e-3e94-4e1a-a44b-a952a54c53d7" />

---

## Summary

Demo Sprint DS4-DS7: ships the first two Review Agent checks, the /review endpoint, and the editor-side scorecard panel that renders the findings.

### Backend (DS4-DS6)

- CanonicalBudgetPlan.targets — new PlanTargets wrapper around the parsed BUDGET_BOT_TARGETS sheet, so checks read margin/ARR/retention targets from the canonical layer instead of poking DataPackage directly. has_targets joins the completeness flags.

- budget_bot/board_doc/review_checks/ — new package:

- _helpers.py: shared P&L row lookup + safe percent math (now guards denominator <= 0)

- margin_target.py (C2.1): planned vs approved EBITDA margin (pass / warning ≤5pp / critical >5pp)

- margin_trajectory.py (C2.6): Q-over-Q gross margin direction (pass if flat/improving ≥-0.5pp, warning ≤-2pp, critical >-2pp)

- __init__.py: CheckSpec registry + run_all_checks(plan, spec) with per-check exception isolation so a regression in one check can't take the whole scorecard down

- POST /board-doc/wizard/{id}/review — builds the canonical plan from session state, runs all registered checks, returns a fully-typed ReviewResponse { findings, skipped_checks, errored_checks, completeness, data_fetch_status, ran_at }. 409 if no spec yet, 502 only on orchestrator-level failure (per-source failures surface in data_fetch_status.failed_sources).

- Per-session asyncio.Lock so two parallel /review calls (multi-tab, double-click) serialise rather than racing on session.data_package.

- DataPackage negative cache — failed fetches go to a new failed_keys set rather than being silently stored as None. Subsequent /review clicks short-circuit known-failed sources by default; callers that want to retry pass retry_failed=True. Avoids burning 20-40s per click while upstream is down.

- scripts/run_review_checks.py — CLI that runs the full path against real BU data without booting the server. Same path the endpoint uses.

### Frontend (DS7)

- boardDocApi.ts — new types (ReviewFinding, ReviewResponse, ReviewSeverity, ReviewFindingStatus, ReviewCompleteness, ReviewDataFetchStatus), a runReview() API call, and a stable compareReviewFindings comparator (sort key: severity → check_area → check_id → finding_id).

- hooks/useReviewAgent.ts — owns the lifecycle state machine (idle → running → ready | error) with cross-session-safe concurrency (a sketch of the guard follows this list):

- In-flight ref keyed by sessionId so a Run review click on session B mid-flight starts a fresh run instead of receiving session A's promise.

- Monotonic runIdRef ensures a stale resolve (slow A response landing after the user switched to B) is dropped on the floor instead of overwriting B's panel.

- Companion useReviewAgentResetOnSession hook for the page-level reset on sessionId change.

- Error branch wipes stale findings/completeness so the error state can never render mixed content.

- components/FindingCard.tsx — severity-tinted left border, expandable body showing why / preferred action / options / a humanised supporting-data table. Refactored to a single non-button container with sibling chevron + section buttons — no more invalid nested-button HTML, predictable a11y / keyboard tab order. Multi-line agent prose renders with white-space: pre-line.

- components/ReviewPanel.tsx — 360px right rail when open, 36px ribbon when collapsed (count badge now includes info findings, tinted by worst severity present so an info-only run still surfaces a signal). Header has Run review / Re-run + collapse. Body renders four states (idle / running / error / ready); ready state shows a summary header, findings grouped by check_area, an expandable "N passed" chip, and a static skipped-checks chip that surfaces missing_sources from the completeness payload.

- DocumentEditorPage.tsx — adds a Review toggle button in the top header (mirrors the panel's collapse), wraps editor + panel in a flex row so they share horizontal space without the editor losing its scroll behaviour. Wires useReviewAgentResetOnSession.
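As noted above, a hedged sketch of the stale-run guard inside useReviewAgent (heavily simplified; the real hook also tracks completeness, skipped checks, and the session-reset companion hook):

```typescript
// Sketch only: a monotonically increasing run id means a slow response from an
// earlier run (or a different session) can never overwrite the current panel.
import { useRef, useState } from "react";

type ReviewStatus = "idle" | "running" | "ready" | "error";

export function useReviewAgentSketch(runReview: (sessionId: string) => Promise<unknown>) {
  const runIdRef = useRef(0);
  const [status, setStatus] = useState<ReviewStatus>("idle");
  const [findings, setFindings] = useState<unknown | null>(null);

  async function start(sessionId: string) {
    const runId = ++runIdRef.current; // every click (and every session switch) gets a newer id
    setStatus("running");
    try {
      const result = await runReview(sessionId);
      if (runId !== runIdRef.current) return; // stale resolve: dropped on the floor
      setFindings(result);
      setStatus("ready");
    } catch {
      if (runId !== runIdRef.current) return; // stale error: also dropped
      setFindings(null); // error state never renders mixed old-findings content
      setStatus("error");
    }
  }

  return { status, findings, start };
}
```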

### Bonus bug fix (commits 2 + 3)

Live verification against Skyvera Q2 2026 surfaced that the "Data for Budget Bot" sheet has inverted column names for the EBITDA fields:

- *_ebitda_target → actually contains the margin %

- *_ebitda_margin_target → actually contains the dollar amount

Before the fix, C2.1 reported nonsense: *"planned EBITDA margin (60.3%) is 7991771.7pp below the approved target (7991832.0%)"*. Fixed in both canonical_plan._TARGET_FIELD_MAP and section_generators.get_ground_truth_metrics (the latter was also feeding the LLM swapped labels for every doc generation — this PR corrects that too). Both spots carry cross-referencing comments so when the GSheet column headers eventually get renamed, the inversion can be reverted cleanly. Captured as a tech-debt item (C1.7) in the backlog.

### Review feedback addressed

This PR went through internal review and addresses every flagged item except one stylistic disagreement (commits 6-10):

| Severity | Items addressed |

|---|---|

| Critical | C1: sessionId-keyed in-flight ref + stale-resolve guard. C2: reset on sessionId change via dedicated companion hook. |

| High | H3: structured data_fetch_status field distinguishes "data legitimately missing" from "upstream broke". H4: DataPackage.failed_keys negative cache. H5: compareReviewFindings finding_id tie-break. H6: FindingCard refactored to fix invalid nested <button> HTML. H7: per-check exception isolation in run_all_checks + new errored_checks field. |

| Medium | M8: per-session asyncio.Lock on /review. M9: documented as design choice (LLM prompt path wants string "60%", deterministic path wants float 60.0 — different consumers, different shapes; clarified with extensive docstring + regression test). M10: useReviewAgent.spec.ts covering C1/C2/sort. M11: stable-IDs caveat documented in hook docstring (blocks shipping a dismiss UI until backend produces stable IDs). M12: safe_pct guards denominator <= 0. |

| Low | L13: typed Pydantic ReviewResponse (no more list[Any] / dict[str, Any]). L14: cross-user 403/404 test. L15: error branch clears stale findings. L16: collapsed-panel badge now counts info findings, tints by worst severity. L17: white-space: pre-line on agent-emitted prose. L18: filed as backlog item C1.8 (pre-existing planner-module mislabel). |

| Nit | N19: hook docstring rewritten around cross-session guarantees. N20: implemented as a one-directional alias-subset invariant TEST rather than a "must stay in sync" comment (catches regression in CI; comments don't). N21: --year sanity-bounded (2020-2030). N22: SkippedChip docstring captures the design intent so a future refactor doesn't strip the missing_sources line. |

One disagreement: reviewer's framing of H4 suggested every click would burn 20-40s while upstream is down; in practice the orchestrator's failure path short-circuits faster, but the negative-cache fix still lands as recommended just for clearer semantics.

### Live verification

Running scripts/run_review_checks.py --bu skyvera -q 2 -y 2026 against real data produces:

[C2.1] Margin target hit/miss          PASS

Q2'26 planned EBITDA margin (60.3%) meets the approved target (60.0%).

[C2.6] Q-over-Q gross margin trajectory WARNING

Gross margin degrading 2.0pp Q-over-Q (Q1'26: 80.0% → Q2'26: 78.0%).

One pass + one warning — exactly the kind of mixed scorecard that makes the demo feel real.

### Backlog impact

- DS4, DS5, DS6, DS7 → done

- C2.1, C2.6 → done (Phase C check suite, 2 of 17)

- C4.1, C4.6 → done (scorecard panel + re-run button); C4.2 partial (FindingCard ships in DS7, dismiss/addressed flow pending stable finding IDs per M11)

- C0.1, C0.4 → flipped to done (already shipped in #2634; backlog status was stale)

- C1.8 → new tech-debt item filed for the budget_translator.py mislabel surfaced during this review (pre-existing, separate consumer)

## Test plan

### Backend

- [x] New unit tests pass: 38 check + 8 endpoint + 7 target-extraction + 1 regression with real Skyvera payload, plus new tests for per-check isolation, negative cache, retry_failed, data_fetch_status, cross-user 403, safe_pct negative-denominator, and the alias subset invariant.

- [x] Live CLI verification against Skyvera Q2 2026 produces sensible findings.

- [x] Full tests/board_doc/ suite shows no new regressions (5 pre-existing LLM-mock failures unchanged).

- [ ] Reviewer spot-check: column-name inversion comments in both canonical_plan._TARGET_FIELD_MAP and section_generators.get_ground_truth_metrics agree.

- [ ] Reviewer spot-check: run_all_checks contract — empty list from a check means "skipped", at least one severity="pass" finding means "ran and passed", any exception lands in errored_checks and the request still returns 200.

### Frontend

- [x] pnpm lint --max-warnings 0 clean on all touched files.

- [x] pnpm tsc --noEmit clean.

- [x] useReviewAgent unit tests pass (7/7) — covers same-session de-dupe, different-session start-fresh, stale-resolve drop, stale-error drop, error clears findings, reset cancels in-flight, useReviewAgentResetOnSession wiring.

- [x] Open a Skyvera Q2 session in the editor → "Review" button visible in header, panel open by default on the right.

- [x] Click "Run review" → spinner appears, panel transitions to ready state in 20-40s.

- [x] Findings render grouped by check_area; severity-coded left borders; sort order critical → warning → info → pass; same-area same-severity ties broken stably by check_id then finding_id.

- [x] Click a finding → expands to show why / suggested action / options / "Numbers" table; click again to collapse. Chevron toggle button is a11y-correct (no nested buttons; section pill is independently focusable).

- [x] "1 passed" chip is visible and expands to show the C2.1 pass finding.

- [x] Click panel header chevron (or "Review" header button) → collapses to 36px ribbon with severity count badge; click again restores.

- [x] Force an error path → panel shows error state with "Try again" button; stale findings are wiped (no mixed "old findings + new error" content).

- [x] Re-run button on the panel header → triggers a second run, replacing previous findings.

- [x] Window narrowing: panel keeps fixed width, editor narrows but stays usable down to ~600px viewport.

#2659 — add uv.toml files @eric-tril  no labels

## Supply-chain defense: add 7-day release cooldown on Python deps

### Why a minimum release age matters

Most malicious packages in supply-chain attacks are discovered and removed within hours to a few days of publication. Recent examples: the September 2025 chalk / debug / ansi-styles npm hijack (yanked within ~2 hours), the shai-hulud self-replicating worm, event-stream, ua-parser-js, coa, rc. The standard attacker playbook is to compromise a maintainer account (phishing, token leak), publish a patch-level version that looks legitimate, and collect credentials / wallets from CI machines before anyone notices.

A rolling age gate blocks that window: if builds only resolve to versions at least 7 days old, the vast majority of malicious publishes have already been yanked from the registry before we install them. The tradeoff is a one-week delay on legitimate security patches — acceptable because (a) real CVEs typically take longer than 7 days to see in-the-wild exploitation, and (b) the age gate can be overridden per-package for an urgent fix.

Klair's attack surface matters here: CI and the EC2 host run uv sync / pnpm install with AWS credentials, NPM tokens, and Secrets Manager access. A compromised transitive dependency executing postinstall on any of those hosts would have broad blast radius.

### What this PR does (uv / Python)

Adds a 7-day rolling supply-chain cooldown to three uv projects using uv's relative-duration syntax (exclude-newer = "7 days"):

| Project | Change |
| --- | --- |
| klair-api/ | new uv.toml |
| klair-lambdas/hubspot_sync_v2/ | new uv.toml |
| klair-misc/redshift-mcp-old/ | new uv.toml |

The rolling window auto-advances — no manual timestamp bumping. The setting does not affect already-pinned versions in uv.lock; it only gates what uv is allowed to resolve to on a fresh uv sync --upgrade or uv add. Existing lockfiles keep working.
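For reference, each of the three committed files is essentially a one-line TOML config; the exact committed contents may include other settings, but the cooldown itself is just:

```toml
# uv.toml: 7-day rolling cooldown on newly published packages.
# Relative-duration values require uv 0.9.17+ (see the version note below).
exclude-newer = "7 days"
```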

Requires uv 0.9.17+ (the release that added relative-duration support — see the [uv settings reference](https://docs.astral.sh/uv/reference/settings/)). CI installs latest uv via the install script, so CI is fine. The EC2 deploy hosts run uv sync against whatever uv is already installed on the box (.github/workflows/ci-cd-backend.yml lines 63 and 113) — those hosts need to be on 0.9.17+ or uv sync will error on parse. Verify / upgrade uv on the dev and prod hosts as part of this rollout.

### What's still needed — JavaScript side (follow-up)

Not included in this PR because the story is more complicated:

#### klair-client/ (pnpm)

pnpm *does* support a rolling minimum release age via the .npmrc key minimum-release-age (value in minutes; 7 days = 10080). But:

- The repo pins pnpm@9.15.9 in the root package.json (line 12). The minimum-release-age setting shipped in pnpm 10.16+, so it will be silently ignored on 9.x. Enabling it requires bumping packageManager to pnpm@10.x first, which brings its own install-behavior and packageExtensions changes worth testing separately.

- Deploy uses pnpm install --no-frozen-lockfile (.github/workflows/ci_cd-frontend.yml line 77) — the exact call that would resolve new versions and needs the age gate most.

- Plan: (1) upgrade pnpm to 10.x in its own PR, verify CI + Amplify deploy; (2) add klair-client/.npmrc with minimum-release-age=10080 and optionally minimum-release-age-exclude[]=@trilogy-group so internal packages aren't blocked.

#### The 6 npm projects

klair-convex/, klair-e2e-playwright/, klair-misc/klair-mcp-ts/, klair-misc/klair-mcp-edu/, klair-udm/, klair-udm/netsuite-reports-poc/ — npm itself does not support any age-gate feature. An .npmrc with minimum-release-age is silently ignored by the npm client. Options:

- Migrate to pnpm (recommended for the two MCP servers — klair-mcp-ts and klair-mcp-edu — since they deploy to prod with secrets). Low-risk per-project change: install pnpm, pnpm import to convert the lockfile, update the CI workflow's npm ci to pnpm install --frozen-lockfile.

- Registry proxy (Artifactory, Verdaccio, Socket.dev firewall) — enforces the delay at the registry layer regardless of client. Best long-term solution but requires infra.

- Accept risk for dev-only projects (klair-convex/, klair-e2e-playwright/) that don't ship to prod hosts with secrets.

#### Dependabot

The repo has .github/workflows/dependabot-auto-merge.yml that auto-approves and auto-merges semver-patch PRs immediately, with no embargo. Even with age gates in place at resolve-time, Dependabot files PRs against the newly-published version. Two follow-ups worth considering:

1. Add a cooldown: block to a new .github/dependabot.yml so Dependabot itself waits 7 days before filing a PR.

2. Or add a "wait N days since release" check inside the auto-merge workflow before approving.
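
A sketch of option 1. The ecosystem names, directories, and cooldown field names below are illustrative; verify them against GitHub's current dependabot.yml reference before adopting:

```yaml
# .github/dependabot.yml (new file); contents illustrative only
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/klair-client"
    schedule:
      interval: "weekly"
    cooldown:
      default-days: 7      # don't file a PR until a release is at least 7 days old
  - package-ecosystem: "pip"             # adjust per project / Python tooling
    directory: "/klair-api"
    schedule:
      interval: "weekly"
    cooldown:
      default-days: 7
```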

### Verification

- uv sync in each of the three projects continues to work from existing lockfiles (lockfile resolution is unaffected by exclude-newer when versions are already pinned).

- On uv 0.9.17+, uv sync --upgrade / uv add <pkg> will refuse to resolve to versions published within the last 7 days.

- On uv <0.9.17, uv sync fails with a parse error on the "7 days" string — treat that as the signal to upgrade uv on that host.

The Portfolio  —  Trilogy Companies

Alpha School Publishes Athletic Data Claiming 2× D1 Recruitment Rate — But Won't Share the Methodology

Joe Liemandt's $40K-a-year AI-first school says its students are twice as likely to play college sports. The evidence? A blog post with no sample size, no control group, and no peer review.

AUSTIN, TEXAS — Alpha School, the private K-12 institution founded by Trilogy billionaire Joe Liemandt, published a blog post this week claiming its athletic program doubles students' odds of playing Division I sports. The school offers no statistical methodology, no comparison cohort, and no longitudinal data to support the claim.

The post, titled "We Built a School to Double Your Kid's D1 Odds," describes a curriculum that dedicates afternoons to athletics after students complete academic work via AI tutors in two hours each morning. The school charges between $40,000 and $65,000 annually and operates campuses in Austin, Brownsville, and Miami, with nine more planned for fall 2025.

Alpha's marketing increasingly positions traditional schooling as the problem and movement as the solution. A separate post this week argued that ADHD diagnoses are misdiagnoses of "movement-starved kids." Another claimed students learn more life skills from afterschool sports than from classroom instruction.

The D1 claim is the most quantifiable assertion Alpha has made about non-academic outcomes — and the least substantiated. The post does not disclose how many Alpha students have been recruited, what sports they play, or how the "2×" figure was calculated. It does not compare Alpha students to demographically similar private school peers, nor does it account for selection bias: families paying $65,000 a year may already have above-average resources for athletic training.

Alpha has published verified academic results showing students test in the top 1–2% nationally on NWEA MAP Growth assessments. No equivalent third-party validation exists for the athletic claims.

The school did not respond to requests for the underlying data. For now, parents considering Alpha for athletic advantage have only the school's word — and a blog post.

We Built a School to Double Your Kid’s D1 Odds  ·  The ADHD Epidemic: How We Misdiagnosed an Entire Generation  ·  4 Reasons Your Kid Is Learning More from Sports than from Sc

Totogi Bets on Ontology-Led ‘Vertical AI’ to Make Telco Automation Pay Off

A 97% reduction in alarm noise is the kind of ROI story operators have been waiting to hear — and Totogi says context is the missing ingredient.

AUSTIN, TEXAS — Telcos have never been short on AI pilots. What they’ve been short on is money — specifically, repeatable, CFO-grade outcomes that survive contact with production systems. Totogi is now leaning hard into what it calls a paradigm shift: vertical AI powered by an ontology that encodes telecom business context, so agentic systems can do more than generate dashboards.

In a new case study, Totogi says its Ontology helped reduce “alarm noise” by 97%, tackling one of the industry’s most stubborn operational sinkholes: networks that generate oceans of alerts, only a fraction of which are actionable. The pitch is classic Totogi — leverage domain-specific structure to turn chaotic telemetry into robust, automated decisions — and the headline number is designed to stop operators mid-scroll. (Reducing alarm noise by 97% with the Totogi Ontology.)

Totogi’s broader argument: most enterprise AI fails not because models are weak, but because context is missing. In an MWC26 Agentic AI Summit talk titled “Show me the money,” the company frames the market’s central frustration as an ROI gap — pilots that don’t translate into margin, churn reduction, or OPEX savings. The fix, Totogi suggests, is to stop asking general-purpose AI to “figure out” telecom, and instead give it a best-in-class semantic layer that captures how products, customers, services, events, and policies actually relate. (MWC26 Agentic AI Summit: Show me the money.)

That same thesis underpins Totogi’s Appledore Ontology whitepaper and its recent media push around “the power of vertical AI.” The through-line is synergy: ontology as the connective tissue between data, automation, and commercial outcomes.

The timing is notable as the broader Trilogy telecom stack tightens. Skyvera’s earlier move to bring CloudSense (a Salesforce-native CPQ and order management platform) into the fold signals a parallel push toward operational plumbing that’s AI-ready — the systems of record that ontologies and agents can actually act on.

Key Takeaways:

- Totogi is positioning ontology-led vertical AI as the fastest route from pilots to provable telco ROI.

- A reported 97% alarm-noise reduction spotlights automation opportunities in network operations.

- Ontologies are emerging as the “context layer” that makes agentic AI commercially credible.

We're just getting started.

Reducing alarm noise by 97% with the Totogi Ontology  ·  Appledore Ontology Whitepaper  ·  MWC26 Agentic AI Summit Talk: Show me the money: why most te

IgniteTech Goes Shopping—And Launches a Cloud-Cutting Fixer Squad on the Side

Three product pickups, a shiny new Jive badge, and a services arm that promises to make your AWS bill cry uncle.

AUSTIN, TEXAS — IgniteTech is back in its favorite aisle: the software clearance rack… and it’s pushing a full cart.

Word is the ESW-adjacent acquisitor just snapped up three more software products, the kind of bolt-on buys that look boring until they start throwing off cash like a jukebox in a dive bar. The announcement came in the company’s latest PR burst—equal parts “we’re growing” and “don’t ask how cheap we got it.” (My inbox calls it “deal confetti.”) See the company’s own framing in the acquisition release.

And then—because one headline never feeds the machine—IgniteTech also trotted out a familiar trophy: Jive Software. Yes, that Jive, the social intranet brand long associated with Aurea’s orbit. A little bird tells me this is less “new relationship” and more “family photo reshuffle,” positioning Jive as another line item in IgniteTech’s “leading solutions” collage.

But the spiciest move isn’t the shopping spree—it’s the side hustle. IgniteTech just launched a services arm called Hand.com, pitching itself as the buttoned-up closer who walks into your cloud environment and leaves behind a smaller bill and a bruised ego. The promise: save customers “millions” on cloud spend. The subtext: someone finally noticed how many enterprises are lighting money on fire in AWS and calling it “innovation.” The pitch is laid out in the Hand.com launch announcement.

Meanwhile, the shadow on the wall looks a lot like Trilogy founder Joe Liemandt—still the behind-the-scenes metronome for this whole empire, even while he’s out there telling anyone who’ll listen that an MBA won’t teach you a fraction of what building does. Call it the Trilogy doctrine in tabloid form: buy the boring software… automate the waste… and keep the margins gossip-worthy.

IgniteTech Continues to Grow With the Acquisition of Three S  ·  IgniteTech Announces Addition of Jive Software to Company's  ·  IgniteTech Announces Hand.com Services Arm with Offering to
The Machine  —  AI & Technology

PayPal Squeezes Speed From AI Commerce Agent — And the Method Could Reshape How Every Company Deploys LLMs

A new empirical study shows speculative decoding can dramatically cut inference latency for production AI agents, while separate research tackles the deeper question of whether those agents can learn to stop reinventing the wheel.

SAN JOSE, CALIFORNIA — Consider, for a moment, the sheer metabolic cost of thought. A human brain burns roughly 20 watts to orchestrate a hundred billion neurons. A large language model powering a commerce agent can consume orders of magnitude more energy to answer a single customer query — and then forget everything it learned by the next one. Two new papers suggest the field is finally reckoning with this extravagance, attacking the problem from opposite ends: making each inference faster, and making the accumulated experience of past inferences reusable.

The more immediately practical contribution comes from PayPal's AI team, which has published an empirical study benchmarking EAGLE3 speculative decoding against NVIDIA's NIM inference stack for its Commerce Agent. The agent, powered by a fine-tuned Llama 3.1 Nemotron Nano 8B model, already benefited from domain-specific tuning that slashed latency and cost. Now, by running EAGLE3 through vLLM on identical dual-H100 hardware, the team measured how much further speculative decoding — a technique where a smaller "draft" model proposes tokens that the larger model verifies in parallel — can push production performance. The results matter because PayPal processes billions of transactions, and every millisecond of agent latency compounds into real money and real user frustration.

Meanwhile, a separate line of research called WorkflowGen addresses what may be the deeper inefficiency: the fact that LLM agents typically generate workflows from scratch for every new query. WorkflowGen introduces an adaptive mechanism driven by trajectory experience — essentially allowing an agent to remember what worked before and adapt it, rather than reasoning from zero each time. For complex tasks like business queries and tool orchestration, this could reduce token consumption and stabilize execution in ways that raw speed improvements alone cannot.

A third paper proposes a transparent screening framework for estimating the environmental impacts of LLM inference and training, converting natural-language application descriptions into bounded energy estimates. It is a quieter contribution, but perhaps the most necessary: before we can optimize the cost of machine intelligence, we need honest accounting of what that cost actually is.

Taken together, these studies sketch the outlines of a maturing discipline — one that is beginning to treat computational resources not as infinite but as precious, the way evolution eventually learned to treat calories. The age of profligate inference may be drawing to a close.

WorkflowGen: an adaptive workflow generation mechanism driven  ·  Transparent Screening for LLM Inference and Training Impacts  ·  Accelerating PayPal's Commerce Agent with Speculative Decodi

In the Silicon Wilds, a New Migration: Chips Seek Safer Ground

From India’s factory-floor discipline to Europe’s strategy salons and America’s federal acres, the AI boom is reshaping where semiconductors can survive.

SANAND, INDIA — Beneath the bright, unwavering lights of a modern cleanroom, the semiconductor reveals its true nature: not a trophy to be unveiled, but a living system to be fed, trained, and kept meticulously in balance. This week, attention has settled on Micron’s Sanand ramp—less as a ceremonial milestone for India, and more as a test of whether a full manufacturing ecosystem can take root and persist through seasons of demand, talent constraints, and supplier reliability.

In that sense, India’s conversation is maturing. The question is no longer “Can we land a fab-adjacent project?” but “Can we sustain yield, throughput, and logistics—day after day—until the supply chain learns to breathe on its own?” The shift is captured in the reporting around Micron’s progress at Sanand, where the debate moves from headline triumph to operational cadence: Micron's Sanand ramp shifts India chip debate.

Far across the water, the AI boom continues to concentrate nutrients—capital, tooling, and urgent demand—around the world’s most advanced foundries. Taiwan Semiconductor Manufacturing Co. remains the great apex species of this habitat, with investors openly asking whether AI-driven momentum can push fiscal 2026 sales growth beyond 30%. The question is not merely financial; it is ecological. When one node becomes too essential, the entire food web grows anxious: TSM's AI Boom Continues.

That anxiety is sharpening under Taiwan Strait tensions, which are prodding governments and corporations to cultivate alternate habitats—whether through “China-plus-one” manufacturing, expanded packaging capacity, or redundancy in critical materials.

Europe, too, is convening its strategists—SEMI forums and policy corridors alike—seeking to turn ambition into bankable projects and dependable talent pipelines.

And in the United States, even the landscape itself is being auditioned. The Department of Energy’s request for information on AI infrastructure on DOE lands hints at a new pattern: compute, power, and chips treated as one interdependent organism, searching for territory where energy is abundant and permitting less hostile.

In the end, the semiconductor’s migration is not guided by pride, but by survivability. Where the system can be built—and kept stable—there the silicon will settle.

Micron's Sanand ramp shifts India chip debate from milestone  ·  TSM's AI Boom Continues: Will It Drive Above 30% Sales Growt  ·  Taiwan Strait Tensions Push Countries to Diversify Semicondu

The Performative Morality Paradox: Preliminary Evidence Suggests AI Systems Simulate Ethical Reasoning Without Normative Foundations

Convergent research from MIT, Nature, and philosophy departments indicates autonomous systems may replicate moral outputs while lacking underlying ethical frameworks—raising questions about accountability in healthcare and academic deployment.

CAMBRIDGE, MASSACHUSETTS — It could be argued that the contemporary discourse surrounding artificial intelligence ethics has reached an inflection point, as multiple independent research streams converge on a troubling thesis: AI systems demonstrate what philosophers term "performative morality" (that is, the simulation of ethical behavior without genuine normative grounding).

A philosophy study from the University of Kansas provides the theoretical foundation: large language models can imitate moral reasoning through pattern recognition without possessing what ethicists would recognize as genuine moral agency. The antithesis emerges from applied contexts—MIT researchers evaluating autonomous systems note the difficulty of embedding true ethical frameworks into decision-making architectures, while Nature published a validated framework attempting to operationalize "responsible AI" in healthcare settings (preliminary evidence suggests implementation remains contested).

The synthesis reveals a more complex dialectic. Frontiers research on bias in AI systems demonstrates that formal mathematical approaches to fairness cannot adequately address socio-technical dimensions of ethical deployment. Similarly, studies examining ChatGPT's academic applications reveal what might be termed "normative displacement"—the substitution of authentic scholarly judgment with statistically plausible but ethically ungrounded outputs.

For enterprise software portfolios (one thinks of ESW Capital's 75+ acquisitions, though the generalizability remains uncertain), these findings suggest that autonomous features marketed as "intelligent" may require more rigorous ethical auditing than current industry standards provide. The healthcare framework published in Nature attempts validation through multi-stakeholder consensus, yet critics note this procedural approach may itself constitute performative ethics rather than substantive moral architecture.

The implications extend beyond theoretical philosophy into regulatory domains, where policymakers must grapple with systems that can pass ethical Turing tests while lacking the foundational commitments such tests were designed to measure.

Evaluating the ethics of autonomous systems - MIT News  ·  A validated framework for responsible AI in healthcare auton  ·  AI can imitate morality without actually having it, new phil
The Editorial

Nation’s CEOs Relieved To Learn AI Still Comes With A User Manual, A Rebrand, And Several Thousand Layoffs

Industry leaders confirm the hardest part of adopting the future remains saying the words “AI transformation” without laughing.

LAS VEGAS — As Day 1 of CES 2026 unveiled a fresh crop of devices capable of detecting your mood, reflecting your mood back at you, and then charging you a subscription to manage your mood, corporate America celebrated a more reassuring breakthrough: artificial intelligence is still, at its core, a communications problem.

The evidence arrived this week in the form of a new executive guidebook promising to explain AI to leaders—a development analysts say will finally allow decision-makers to stop pretending they learned machine learning in a cross-functional sync.

“At last, a set of fundamentals,” said one director of strategy, speaking from a convention-center corridor where a refrigerator was demonstrating its ability to write performance reviews. “We’ve been running a complex AI roadmap for 18 months, and I’ve been meaning to ask what ‘tokens’ are. Now I can learn privately, like a normal adult.”

The publishing announcement comes at a critical moment for the modern corporation, which increasingly must choose between three competing AI adoption strategies: implementing AI, announcing AI, or doing layoffs and calling the layoffs AI.

The third option remains popular, in part because it requires no integration work, no data governance, and no messy questions about whether a chatbot is supposed to have access to payroll. It merely requires a press release, a clean new org chart, and an internal message explaining that everyone should be excited about “operating leverage.”

In an article that industry HR departments described as “needlessly specific,” CTech chronicled the emerging art of “AI washing”, wherein companies discover that nothing says “the future” quite like informing a thousand employees they are the past.

Meanwhile, investors continue to reward any business that can pronounce the letters A and I in the same sentence without coughing. Footwear company Allbirds, long known for selling comfortable shoes and a comforting brand story, reportedly watched its shares rocket upward after announcing an AI pivot—proving once again that the market’s favorite product is hope, and its second favorite product is a pivot.

“People used to ask whether the company could achieve profitability,” said one market observer. “Now they ask whether profitability is even the right metric in an age where a sneaker can be a platform.”

Not to be outdone, Silicon Valley’s most ambitious branding exercise moved forward as SpaceX and xAI reportedly prepared to merge into a single conglomerate with a name that sounds like a password you’d be required to reset every 30 days. Analysts noted that the strategic rationale is straightforward: if you combine rockets with artificial intelligence, you can disrupt two industries at once, while also ensuring that any future corporate crisis is technically interplanetary.

Back on the CES floor, the throughline was clear. The gadgets were impressive, the demos were seamless, and the future looked remarkably like the present—except everything now comes with an “AI” badge, a premium tier, and a thoughtfully worded explanation for why someone’s job has been “automated.”

In that context, a leadership book about AI fundamentals is less a teaching tool than a piece of workplace safety equipment: a way for executives to navigate the era’s most dangerous terrain, where the only thing more powerful than artificial intelligence is the belief that saying “AI” will make the quarterly numbers behave.

AI Vantage Consulting Launches 'AI Fundamentals For Leaders'  ·  Allbirds shares skyrocket after AI pivot, raising concerns o  ·  AI washing: When layoffs wear a tech halo - CTech
The Office Comic  ·  Art Desk

Everybody Wants to Regulate AI, and Nobody Knows How, and That Is the Whole Problem

The chorus demanding AI governance has never been louder — or more cacophonous, or more certain to produce exactly the wrong rules at exactly the wrong time.

WASHINGTON — There is a particular species of political consensus that should terrify any thinking person, and it is the consensus that Something Must Be Done. Not something specific. Not something carefully reasoned. Just Something — urgent, sweeping, and arrived at by people who until eighteen months ago could not have distinguished a large language model from a large marine mammal.

We have reached that consensus on artificial intelligence regulation, and the results are about as edifying as one might expect.

Consider the landscape. The Council on Foreign Relations has published yet another primer on how AI is changing the world, a document that manages to be simultaneously alarming and anodyne. The Atlantic Council warns that civil AI regulation will produce second-order effects on defense capabilities — a point so obvious it should not require a think-tank paper, and yet apparently does, because the people writing civilian AI rules and the people worried about national security occupy adjoining offices in the same government and still do not speak to one another. In Britain, activists are planning protests against AI data centers on environmental grounds. And Tech Policy Press has issued a sober plea that AI chatbots should not be permitted to masquerade as therapists, which is the sort of recommendation that makes you wonder what civilization was doing before the recommendation became necessary.

Each of these interventions is, on its own terms, reasonable enough. The trouble is that they pull in violently different directions. The defense hawks want regulation loose enough to preserve American technological supremacy. The environmentalists want it tight enough to prevent the construction of new data centers. The mental-health advocates want specific product-level rules. The healthcare consolidation analysts, writing in Health Data Management, argue that AI breaks every existing antitrust framework. Put them all in a room and you would get not a regulatory framework but a knife fight conducted in parliamentary language.

I have been watching technology regulation for long enough to recognize the pattern. The internet produced the same frenzy in the late 1990s — a thousand white papers, a hundred congressional hearings, and the eventual result was Section 230, a twenty-six-word provision that nobody fully understood at the time and that everybody now blames for everything. AI regulation is heading for the same destination: a patchwork of rules drafted in haste, understood by neither the regulators nor the regulated, and productive chiefly of employment for lawyers.

The companies actually building and deploying AI — the ones operating at scale, managing portfolios of software products, integrating machine intelligence into real workflows rather than theoretical ones — will adapt to whatever emerges, because that is what companies do. The firms that thrive will be those nimble enough to treat regulatory uncertainty as a cost of doing business rather than an existential crisis. The firms that perish will be those that waited for Washington, Brussels, or Westminster to tell them what was permissible before they bothered to build anything.

The great irony of the present moment is that the technology most capable of analyzing complex regulatory environments is the very technology everyone is trying to regulate. One almost wishes we could ask it for advice. But then someone would want to regulate that, too.

How Is AI Changing the World? - Regulating AI - CFR Educatio  ·  Second-order impacts of civil artificial intelligence regula  ·  UK activists plan protests over climate, social impacts of A
On This Day in AI History

On April 23, 2011, IBM's Watson defeated champion Jeopardy! players Brad Rutter and Ken Jennings in a historic three-game match, marking a major milestone in natural language processing and AI's ability to understand complex human language. The victory demonstrated that machines could compete with human experts at tasks requiring knowledge, reasoning, and rapid retrieval across diverse domains.

⬛ Daily Word — Technology
Hint: Relating to computers and the internet, often used in security contexts.