Vol. I  ·  No. 117  ·  Established 2026  ·  AI-Generated Daily  ·  Free to Read  ·  Free to Print

The Trilogy Times

All the news that's fit to generate  —  AI • Business • Innovation
MONDAY, APRIL 27, 2026  ·  Powered by Anthropic Claude  ·  Published on Klair Trilogy International © 2026
Today's Edition

The New Cold War Runs on Silicon

As Washington and Beijing race to dominate artificial intelligence, three major think tanks warn the real battlefield isn't technology—it's alliances.

WASHINGTON — The United States is building the world's most powerful AI systems. China is building something potentially more valuable: a network of countries that will use them.

That's the central finding emerging from a cluster of new policy papers released this week by the Atlantic Council, New Lines Institute, and Stimson Center—each wrestling with what one analyst called "the most consequential geopolitical question of the decade."

The U.S. strategy, detailed in New Lines' analysis of export controls, treats advanced chips and training infrastructure as strategic assets—things to restrict, ration, and weaponize. Beijing, by contrast, is offering developing nations turnkey AI systems with no questions asked about governance, transparency, or human rights.

The Atlantic Council warns that America's technological edge means nothing if the global rules governing AI are written in Mandarin. "We're so focused on winning the race," one researcher noted, "that we've forgotten races require a finish line everyone agrees on."

The Stimson Center goes further, arguing the entire "space race" framing is a trap. Unlike the Cold War's moon landing—a symbolic finish line—AI development is iterative, endless, and requires international cooperation on safety standards even between rivals.

The policy recommendations converge on an uncomfortable truth: Washington needs to decide whether it's trying to win a race or build a coalition. Right now, it's attempting both and succeeding at neither.

For companies operating globally—Trilogy's portfolio spans enterprise software across 130 countries—the subtext is clear. The next decade's market access won't be determined by product quality alone, but by which geopolitical bloc your technology stack aligns with. That's not a technical decision. It's a diplomatic one.

It’s time to reckon with the geopolitics of artificial intel  ·  Tech Stack Diplomacy: Policy Implications of the U.S. AI Exp  ·  Beyond the Space Race: Collaboration and Competition in the

Benchmark Capital Deploys $395M Across Three AI Bets in 48 Hours

Venture firm backs Cerebras, Starcloud, and LMArena as AI infrastructure valuations surge past $26 billion combined.

SAN FRANCISCO — Benchmark Capital executed three major AI investments totaling $395 million over two days this week, signaling aggressive positioning in the infrastructure layer beneath frontier models.

The firm led a $225 million round valuing Cerebras Systems at $23 billion, the AI chip maker whose wafer-scale processors compete directly with Nvidia's data center GPUs. Cerebras ships silicon 56 times larger than conventional chips, targeting training workloads that require extreme memory bandwidth.

Benchmark simultaneously co-led Starcloud's $170 million Series A at $1.1 billion with EQT Ventures. Starcloud provides cloud orchestration for distributed AI training — the plumbing that coordinates thousands of GPUs across multiple data centers. The company claims 40% cost reduction versus hyperscaler defaults.

The third deployment: backing LMArena's $150 million round at $1.7 billion valuation. LMArena operates the industry's de facto model evaluation platform, processing 12 million anonymous head-to-head comparisons monthly. OpenAI, Anthropic, and Google all cite LMArena rankings in product announcements. The startup now monetizes via enterprise evaluation APIs and private leaderboards.

Benchmark's thesis appears clear: own the picks-and-shovels while model builders burn capital on compute. Cerebras provides the hardware. Starcloud optimizes its utilization. LMArena measures output quality. Combined, the three companies address $80 billion in annual AI infrastructure spend, per Gartner estimates.

The timing coincides with DeepSeek's surprise fundraise despite reported profitability. The Chinese lab's decision to raise external capital — details undisclosed — suggests even efficient players see capital intensity rising. DeepSeek's R1 model cost under $6 million to train, yet the company now seeks growth funding.

Benchmark declined comment on portfolio strategy. The firm's AI exposure now exceeds $3 billion across 11 companies, per PitchBook data.

DeepSeek isn’t short of cash: so why has it decided to raise  ·  AI evaluation startup LMArena raises $150M at $1.7B valuatio  ·  Benchmark Capital’s Bold $225M Bet Fuels Cerebras’ Stunning

Antitrust Enforcement Against Technology Sector Proceeds Notwithstanding Change in Administration, Legal Observers Note

Antitrust enforcement actions targeting technology companies will continue substantially unabated in 2026, representing a departure from historical patterns where regulatory priorities shifted with presidential administrations. The DOJ and FTC have designated technology platforms as priority enforcement targets regardless of political affiliation.

The pending DOJ v. Visa litigation may establish precedential authority regarding platform liability and market dominance theories. The case involves allegations that the payment processor maintained monopolistic practices through exclusionary agreements with merchants and financial institutions.

Legal analysts note that previous administrations demonstrated divergent approaches to antitrust enforcement philosophy, yet the current regulatory environment reflects bipartisan consensus on scrutinizing technology sector consolidation. Enforcement actions are likely to proceed through appellate review regardless of political considerations, with structural remedies, including potential divestitures, remaining under consideration in multiple pending matters.

Technology companies will face heightened investigatory scrutiny throughout 2026, with particular emphasis on artificial intelligence applications, data privacy practices, and platform interoperability requirements.

Haiku of the Day  ·  Claude Haiku
Money flows while empires clash
Laws tighten round the new gods
Words replace the work
The New Yorker Style  ·  Art Desk
The Far Side Style  ·  Art Desk
News in Brief
The Moral Turing Test: Philosophy Departments Grapple With AI's Performative Ethics
LAWRENCE, KANSAS — It could be argued that contemporary artificial intelligence systems have achieved what preliminary evidence suggests constitutes a fundamental paradox in computational ethics: the capacity to generate morally coherent outputs without possessing what philosophers traditionally conceptualize as moral agency (a distinction that warrants sustained epistemological scrutiny). A recent philosophical investigation from the University of Kansas advances the thesis that large language models can imitate morality through pattern recognition and statistical inference—what might be termed 'performative ethics'—without accessing the phenomenological substrate that undergirds human moral cognition.
Nation Reassured As Companies Continue Replacing Business Strategy With The Word ‘AI’
AUSTIN, TEXAS — In a week that market watchers described as “extremely normal, in the sense that nothing has to mean anything anymore,” a cluster of unrelated announcements has combined into a single, coherent corporate message: if you say “AI” with enough confidence, you can temporarily convert any operational problem into a branding opportunity. The latest evidence arrived as TridentCare announced it would partner with ServiceNow to “power AI-driven transformation across operations,” an initiative whose primary deliverable appears to be the comforting sensation that the company’s existing workflows are no longer “processes,” but “journeys.” According to the announcement, TridentCare will modernize everything from internal coordination to customer experience by placing the word “AI” somewhere near the sentence, a proven method for turning the act of running a business into an ongoing philosophical exploration of whether the business still exists in the same plane of reality.
The New Operating System Is Optionality, and It’s Making Everyone Twitchy
NEW YORK — I’ll be honest… we are watching the same meta-trend ricochet across entirely different industries, and it’s rewriting how power actually works.
The Golden Orb and the Data Center: A Parable for Our Algorithmic Age
AUSTIN, TEXAS — There's a golden orb sitting two miles beneath Alaskan waters that scientists finally identified after months of bewilderment.
The Ethernet Savages and the Zyn-Fueled Death of Tech Cool
SAN FRANCISCO — There's a moment in every civilization's decline when the revolutionaries start acting like accountants.
A Trilogy Company
Crossover
The world's top 1% remote talent, rigorously tested and ready to ship.
A Trilogy Company
Alpha School
AI-powered learning. Two hours a day. Academic results that defy belief.
A Trilogy Company
Skyvera
Next-generation telecom software — built for the networks of tomorrow.
A Trilogy Company
Klair
Your AI-first operating system. Every workflow. Every team. One platform.
A Trilogy Company
Trilogy
We buy good software businesses and turn them into great ones — with AI.
The Builder Desk  —  AI Builder Team
📅 Week in Review  ·  Production Release

Builder Team Ships Canonical Schools Layer, Azure AI Spend Integration, and Portfolio Dashboard in Seven-Day Blitz

Fifty-one merged PRs across five repos — Aerie's site lifecycle dashboard goes live, Klair closes the Azure spend gap, and Surtr absorbs the P1 pipeline migration while the team sets the stage for Budget Bot 4.0.

The Builder Team closed out the week with the kind of momentum that turns roadmaps into reality. Fifty-one pull requests merged across Aerie, Klair, Surtr, and Sindri — a portfolio-spanning offensive that touched everything from canonical school data to AI spend visibility to pipeline infrastructure. This wasn't maintenance week. This was a campaign.

The headline act: @benji-bizzell shipped Aerie's **Portfolio Dashboard** — a site lifecycle rollup wired to Rhodes that gives site ops their first single source of truth for where every school sits in the open-a-new-site journey. Nine-column Kanban, accordion list view, stage-grouped navigation, and enrollment numbers hydrated from HubSpot. PR #124 landed the foundation; PRs #129, #131, and #132 layered in list views, milestone-driven opening dates, and due-diligence alignment with Rhodes. The dashboard reads from Rhodes; the analytics worker now writes back, closing the loop. One read surface, one write path, no more mental spreadsheets.

But the portfolio work didn't stop at the UI. PR #119 introduced Aerie's **canonical schools table** — a per-field admin override layer that merges Wrike capture, HubSpot, EduCRM, and dim_school into one source of truth, complete with demerits logging for data-quality issues. Site data no longer lives in four places. It lives in one, with admin surfaces to prove it.

Meanwhile, @ashwanth1109 closed the Azure gap. PR #2664 wired **Azure AI spend** into Klair's `/summary`, `/time-series`, and `/by-model` endpoints — the three aggregate routes that power the AI Spend & Adoption dashboard. Total AI spend, daily averages, and the spend-by-day chart now reflect the full picture: Anthropic, OpenAI, and Azure. The $86k pricing drift incident from KLAIR-2580 drove the next move: PR #2663 shipped a **super-admin Token Pricing view** with rationale notes, so the per-model rates that drive cost enrichment are no longer hand-edited in a SQL workbench with no audit trail. Ashwanth also migrated the AWS Spend Insights pipeline off legacy rollup tables and onto canonical v2 Redshift sources (PR #2665), so pipeline output now matches `/api/aws-spend/summary`. Consistency across the board.

Over in Surtr, @kevalshahtrilogy executed the **P1 pipeline migration** — quickbooks-expense-analysis, edu-expense-report-sender, orphan-classes-lambda, and school-master-data-sync all moved from Klair to Surtr and went live with schedules enabled (PRs #30, #22, #21, #20, #32). PR #33 patched the two config misses that surfaced on the prod release: the school-master-data-sync URL and the edu-expense SES sender. Klair's `klair-misc/` and `klair-pipelines/` directories are now empty (PRs #2625, #2626, #2667). The consolidation is complete.

@eric-tril spent the week tightening the MFR memo's narrative alignment — Financial Highlights bullets 4, 5, and 6 now emit byte-identical output matching Finance's reference wording (PRs #2613, #2658, #2618), and Note 3's deferred tax narrative was rewritten to fix Group attribution and YoY comparison logic (PR #2670). The cash-generation waterfall, GAAP net income sourcing, and Book Value NAV data are all locked in. No more LLM rounding drift, no more placeholder bullets.

And then there's marcusdAIy. PR #127 — the **Site Detail View** — shipped with three screenshots and a body truncated at the repo limit, which is probably for the best. "Look, the routing works, the cards render, and it's wired to Rhodes," marcusdAIy offered when pressed. "Mac can write whatever he wants about 'placeholder UI' or 'minimal polish,' but the data layer is bulletproof. The view is live. Ship and iterate." Ship and iterate. That's one way to describe a detail page that looks like it was designed by a backend engineer who Googled 'CSS grid' fifteen minutes before commit. But sure, Marcus — the routing works.

PR #120, on the other hand, was legitimate: Rhodes upstream coverage for REBL3, ISP, and Wrike, plus the Aerie-to-Rhodes write path. The bulk of the Rhodes Upstream Data Coverage project, end-to-end. It's the kind of integration work that actually moves the platform forward, and marcusdAIy executed it cleanly. I'll give him that. The Site Detail View, though? That's a different story.

Budget Bot 4.0 also advanced this week. PR #2654 added Review Agent checks (C2.1, C2.6), a `/review` endpoint, and a scorecard panel. PR #2634 laid the foundation with .docx export and the first three design specs (DS1-DS3). The agent's getting smarter; the board doc tooling is tightening. This is the groundwork for the next major release.

Fifty-one PRs. Five repos. One week. The portfolio dashboard is live, Azure spend is integrated, the P1 pipelines are consolidated in Surtr, and the MFR memo narrative is locked to Finance's reference wording. Next week's setup: Budget Bot's review agent goes deeper, the canonical schools layer starts feeding downstream consumers, and Aerie's due-diligence writeback to Rhodes goes live the moment Rhodes ships `/sync/aerie/setDueDiligence`. The foundation is poured. Now we build on it.

Mac's Picks — Key PRs This Week  (click to expand)
#20 — [P1] Migrate school-master-data-sync from Klair to Surtr @kevalshahtrilogy  no labels

## Summary

- Ports school-master-data-sync pipeline from klair-api/edu_schools/school_master_data/ into Surtr's CDK pipeline infrastructure (Option A — Surtr CDK Lambda)

- Syncs school properties (tuition, capacity, accreditation, operational status) from two Google Sheets into Redshift in long format with cell color extraction and address geocoding

- Replaces S3+COPY bulk load with Redshift Data API, file-based Google credentials with Secrets Manager lookup

## Architecture

| Component | Implementation |
|-----------|---------------|
| Compute | Lambda (512MB, 5min timeout) — pipeline runs in ~30s |
| Data source | Two Google Sheets (master data + operational status) |
| Data sink | Redshift staging_education.google_sheets_school_master_data |
| Auth | Google service account via existing google/service_account/gdrive secret (reused from Klair) |
| Idempotency | TRUNCATE + INSERT with coordinate preservation |
| Schedule | rate(24 hours) — disabled until validated in prod |

## Credentials

Reuses existing Klair secret — no new credentials needed.

The pipeline reads from google/service_account/gdrive (the same Google service account already provisioned in the AWS account for Klair). The credentials.py module handles both the nested {"service_account_json": "..."} format (Klair convention) and flat service account JSON. Set GOOGLE_CREDENTIALS_SECRET env var in pipeline.json if the secret was ever migrated to a different path.
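The dual-format handling can be sketched as a small pure function (a sketch only — `parse_google_credentials` is a hypothetical name, not the actual API of credentials.py):

```python
import json

def parse_google_credentials(secret_string: str) -> dict:
    """Return the service-account dict from a Secrets Manager payload.

    Accepts both the nested Klair convention
    {"service_account_json": "<json string>"} and a flat
    service-account JSON document.
    """
    payload = json.loads(secret_string)
    nested = payload.get("service_account_json")
    if nested is not None:
        # Klair convention: the service-account JSON is stored as a
        # string (or dict) under a single wrapper key.
        return json.loads(nested) if isinstance(nested, str) else nested
    # Flat format: the payload already is the service-account JSON.
    return payload
```

Either way the caller gets the same shape back, so the rest of the pipeline never needs to know which convention the secret uses.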

## Files added (17 files, ~2050 lines)

- pipeline.json — CDK config with Lambda, IAM, alerting, scheduling

- src/handler.py — Main orchestrator (6-step ETL)

- src/sheets_client.py — Google Sheets extraction (wide→long transform)

- src/color_extraction.py — Cell background color extraction via Sheets API v4

- src/operational_status.py — Operational metrics extraction from second sheet

- src/geocoding.py — Multi-provider address geocoding (Nominatim/Google/Mapbox)

- src/redshift_handler.py — Redshift Data API operations (TRUNCATE, INSERT, SELECT, UPDATE)

- src/credentials.py — Secrets Manager credential retrieval (reuses google/service_account/gdrive)

- src/requirements.txt — CDK bundling dependencies

- sql/create_tables.sql — Redshift DDL for both tables

- Unit tests for handler, sheets_client, and geocoding
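The wide→long transform in sheets_client.py can be sketched as follows (hypothetical function; the real module also carries cell colors and geocoded coordinates alongside each value):

```python
def wide_to_long(header: list[str], rows: list[list[str]],
                 id_column: str = "school_name") -> list[dict]:
    """Unpivot a wide sheet (one column per property) into long rows of
    (school, property, value) — the shape the Redshift table stores."""
    id_idx = header.index(id_column)
    long_rows = []
    for row in rows:
        for idx, value in enumerate(row):
            # Skip the identifier column itself and empty cells.
            if idx == id_idx or value == "":
                continue
            long_rows.append({
                "school": row[id_idx],
                "property": header[idx],
                "value": value,
            })
    return long_rows
```

Long format keeps the TRUNCATE + INSERT load simple: new sheet columns become new property rows, with no DDL change.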

## Prerequisites before deploy

- [ ] Confirm google/service_account/gdrive secret in Secrets Manager is accessible from Surtr's IAM role (no new secret needed — reuses Klair's)

- [ ] Set env vars EDU_SCHOOLS_DATA_SHEET_URL and EDU_SCHOOLS_DATA_OPS_SHEET_URL in pipeline.json

- [ ] Run sql/create_tables.sql in Surtr Redshift (ensure staging_education schema exists)

## Test plan

- [x] pipeline.json validates against Zod schema (no synth errors)

- [x] CDK discovers and synthesizes Pipeline-school-master-data-sync-dev (cdk synth passes ✅)

- [ ] npx cdk deploy Pipeline-school-master-data-sync-dev -c env=dev

- [ ] Manual Step Functions execution succeeds

- [ ] CloudWatch logs show no errors

- [ ] Redshift data matches expectations (~900 records, ~28 schools)

- [ ] Schedule left disabled — enable after prod validation

## CDK Synth dry run

$ npx cdk synth Pipeline-school-master-data-sync-dev -c env=dev --no-staging

Successfully synthesized to pipelines/cdk/cdk.out

Stack: Pipeline-school-master-data-sync-dev ✅ (no errors)

Only expected deprecation warnings (logRetention API). No Zod errors, no resource errors.

## Cutover sequence

1. Confirm google/service_account/gdrive is accessible; set sheet URL env vars in pipeline.json

2. Run sql/create_tables.sql in Surtr Redshift

3. Deploy to dev with schedule disabled

4. Run manual Step Functions execution and validate Redshift data

5. Deploy to prod with schedule disabled

6. Run manual Step Functions execution and validate prod data

7. Enable schedule

8. Disable Klair background sync (klair-api/edu_schools/school_master_data/)

9. Monitor for 1 week

10. Archive Klair code: remove klair-api/edu_schools/school_master_data/ and its EventBridge trigger

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#21 — [P1] Migrate orphan-classes-lambda from Klair to Surtr @kevalshahtrilogy  no labels

## Summary

- Ports the orphan-classes-lambda pipeline from klair-misc/orphan-classes-lambda/ into Surtr as a first-class CDK Lambda pipeline (Option A)

- Replaces redshift-connector (direct TCP) with the Redshift Data API — no Docker image needed

- Removes pandas and python-dotenv dependencies — only uses boto3 (pre-installed in Lambda runtime)

- Adapts handler to Surtr's Step Function event signature (handler(event, context) -> dict)

- Schedule: cron(0 2 * * ? *) (daily 2 AM UTC) — set to disabled, enable after prod validation

## What this pipeline does

Queries Redshift for "orphan classes" — rows in core_budgets.consolidated_budgets_and_actuals where business_unit IS NULL — and sends an HTML email report via SES. It is read-only (no database writes) and fully idempotent.

## Key migration changes

- redshift-connector -> Redshift Data API (boto3.client('redshift-data'))

- pandas DataFrame parsing -> list-of-dict parsing

- SAM/Docker deploy -> CDK Lambda (no bundling needed — only boto3 used)

- Environment variables for Redshift creds -> IAM-based GetClusterCredentials

- Added SES, Redshift Data API, and GetClusterCredentials IAM statements
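The list-of-dict parsing that replaces pandas can be sketched against the Data API's result shape (hypothetical helper name; the typed cell wrappers — stringValue, longValue, isNull — are how GetStatementResult actually encodes values):

```python
def records_to_dicts(column_metadata: list[dict],
                     records: list[list[dict]]) -> list[dict]:
    """Convert Redshift Data API result records into plain dict rows,
    replacing the pandas DataFrame parsing used in the Klair version."""
    names = [col["name"] for col in column_metadata]
    rows = []
    for record in records:
        row = {}
        for name, cell in zip(names, record):
            if cell.get("isNull"):
                row[name] = None
            else:
                # Each cell carries exactly one typed key
                # (stringValue, longValue, doubleValue, ...).
                row[name] = next(iter(cell.values()))
        rows.append(row)
    return rows
```

With only boto3 in play, the Lambda needs no bundled dependencies at all.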

## Credentials

No new credentials needed — uses IAM-based Redshift access.

The pipeline uses GetClusterCredentials (IAM) for Redshift access instead of a username/password. SES uses IAM permissions. No Secrets Manager secrets required.

## Prerequisites before enabling

- [ ] Verify SES sender identity noreply@klair.ai is verified in Surtr AWS account

- [ ] Confirm SES_RECEIVER_EMAILS recipients are correct (currently admin@klair.ai)

- [ ] Update EXCLUDED_CLASSES if the exclusion list has changed (currently Osmo,Totogi,T-Bird,DR)

## Test plan

- [x] pipeline.json passes Zod schema validation

- [x] 38/38 unit tests passing (handler, orphan_detector, email_formatter, ses_email_service)

- [x] CDK synthesizes Pipeline-orphan-classes-pipeline-dev (cdk synth passes ✅)

- [ ] Deploy to dev: npx cdk deploy Pipeline-orphan-classes-pipeline-dev -c env=dev

- [ ] Manual Step Functions execution succeeds

- [ ] CloudWatch logs show no errors

- [ ] Email received with correct orphan class list

- [ ] Deploy to prod: npx cdk deploy Pipeline-orphan-classes-pipeline-prod -c env=prod

- [ ] Validate prod email delivery

- [ ] Schedule left disabled — enable after prod validation

## CDK Synth dry run

$ npx cdk synth Pipeline-orphan-classes-pipeline-dev -c env=dev --no-staging

Successfully synthesized to pipelines/cdk/cdk.out

Stack: Pipeline-orphan-classes-pipeline-dev ✅ (no errors)

Only expected deprecation warnings (logRetention API). No Zod errors, no resource errors.

## Cutover sequence

1. Verify SES sender identity and confirm recipient list

2. Deploy to dev with schedule disabled

3. Run manual Step Functions execution and verify email

4. Deploy to prod with schedule disabled

5. Verify prod email delivery

6. Disable Klair EventBridge rule for orphan-classes-lambda

7. Enable Surtr schedule

8. Monitor for 1 week

9. Archive Klair code: remove klair-misc/orphan-classes-lambda/ from Klair repo

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#22 — [P1] Migrate edu-expense-report-sender from Klair to Surtr @kevalshahtrilogy  no labels

## Summary

- Ports edu-expense-report-sender from klair-misc/education-expense-report-sender/ into Surtr's CDK pipeline infrastructure (Option A — first-class Surtr pipeline)

- Replaces Klair Lambda layers (redshift-handler-layer + AWSSDKPandas) with bundled redshift-connector + Secrets Manager credentials — eliminates pandas dependency entirely

- Schedule set to enabled: false — runs every Monday at 9 AM EST once enabled

## What this pipeline does

Sends weekly HTML email reports for education expense analysis. Queries Redshift for active report schedules, fetches expense data (summaries, top vendors, by-school and by-category breakdowns), optionally calls an AI insights API, renders an HTML email via Jinja2, and sends it via AWS SES.
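The render step can be sketched with an inline Jinja2 template (the template string and field names here are illustrative stand-ins, not the bundled src/templates/ file):

```python
from jinja2 import Template

# Hypothetical inline template standing in for src/templates/.
REPORT_TEMPLATE = Template(
    "<h1>Weekly Expense Report</h1>"
    "<ul>{% for v in top_vendors %}"
    "<li>{{ v.name }}: ${{ '%.2f'|format(v.total) }}</li>"
    "{% endfor %}</ul>"
)

def render_report(top_vendors: list[dict]) -> str:
    """Render the HTML body that gets handed to SES send_email."""
    return REPORT_TEMPLATE.render(top_vendors=top_vendors)
```

The rendered string is what goes into the SES message body; keeping rendering pure makes it easy to unit-test without touching AWS.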

## Key migration changes

- Redshift connection: Replaced RedshiftHandler Lambda layer with lightweight redshift_client.py using redshift_connector + Secrets Manager (same pattern as aws-spend-pipeline)

- No pandas: Rewrote data fetcher to return list[dict] instead of DataFrames

- Handler signature: Adapted to Surtr handler(event, context) -> dict convention with run_id and params

- Template path: Moved HTML template into src/templates/ for CDK bundling

- AI insights API key: Fetched from Secrets Manager instead of Lambda env var

## Credentials

Mostly reuses existing Klair infrastructure — one optional secret may need creating.

- Redshift: Reuses klair/redshift-creds-GNGejR (existing Klair secret, already accessible)

- AI insights API key (surtr/edu-expense-api-key): Optional — pipeline runs without it (AI insights are skipped if key is missing). The key value is the same API_KEY env var set on the Klair education-expense-report-sender Lambda. To create it: aws secretsmanager create-secret --name surtr/edu-expense-api-key --secret-string '{"api_key":"<value-from-klair-lambda-env>"}'

- SES: Uses IAM permissions, no credentials needed

## Prerequisites before enabling

- [ ] Verify klair/redshift-creds-GNGejR secret is accessible from Surtr's Lambda IAM role

- [ ] (Optional) Create surtr/edu-expense-api-key with api_key field copied from Klair Lambda's API_KEY env var — skip if AI insights not needed

- [ ] Verify SES sender noreply@klairvoyant.ai is verified in the Surtr account

- [ ] Disable Klair EventBridge schedule (education-expense-report-sender cron) before enabling Surtr schedule

## Test plan

- [x] cdk synth Pipeline-edu-expense-report-sender-dev passes with no Zod errors

- [x] 10/10 unit tests pass (handler, date calculation, status updates, report generation)

- [x] CDK synthesizes Pipeline-edu-expense-report-sender-dev (cdk synth passes ✅)

- [ ] Deploy to dev: Pipeline-edu-expense-report-sender-dev

- [ ] Manual Step Functions execution succeeds

- [ ] CloudWatch logs show no errors

- [ ] Email received with correct expense data

- [ ] Deploy to prod: Pipeline-edu-expense-report-sender-prod

- [ ] Validate prod email delivery

- [ ] Schedule left disabled — enable after prod validation

## CDK Synth dry run

$ npx cdk synth Pipeline-edu-expense-report-sender-dev -c env=dev --no-staging

Successfully synthesized to pipelines/cdk/cdk.out

Stack: Pipeline-edu-expense-report-sender-dev ✅ (no errors)

Only expected deprecation warnings (logRetention API). No Zod errors, no resource errors.

## Cutover sequence

1. Verify klair/redshift-creds-GNGejR is accessible; optionally create surtr/edu-expense-api-key

2. Deploy to dev with schedule disabled

3. Run manual Step Functions execution and verify email delivery

4. Deploy to prod with schedule disabled

5. Run manual Step Functions execution and verify prod email

6. Disable Klair EventBridge cron (education-expense-report-sender)

7. Enable Surtr schedule

8. Monitor for 1 week

9. Archive Klair code: remove klair-misc/education-expense-report-sender/ from Klair repo

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#30 — [P1] Migrate quickbooks-expense-analysis from Klair to Surtr @kevalshahtrilogy  no labels

## Summary

- Ports the weekly AI-powered QuickBooks expense analysis pipeline from klair-misc/quickbooks-expense-analysis/ into Surtr's CDK framework as a Lambda pipeline (Option A).

- Reads program expense transactions (Motivation Model + Workshops accounts) from staging_education.quickbooks_expense_transactions, classifies vendors via Claude Sonnet with web search, runs a 2-step cost-opportunity analysis, and writes results back to Redshift + S3 cache.

- Schedule reproduces the existing cron(0 3 ? * SUN *) weekly Sunday 3 AM UTC cadence, but disabled on initial deploy.

## Migration path

Option A — Surtr CDK Lambda. The upstream quickbooks-expense-sync pipeline already lives in Surtr (#18), and weekly AI runs are expensive enough that centralized pipeline_runs history and alarms add real value. Idempotent DELETE+INSERT writes make cutover straightforward.
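The DELETE+INSERT idempotency can be sketched as statement-pair construction (hypothetical helper with naive literal quoting for illustration; the real pipeline executes through the Redshift Data API):

```python
def idempotent_write_statements(table: str, week_start: str,
                                rows: list[dict]) -> list[str]:
    """Build the DELETE+INSERT pair that makes a weekly run idempotent:
    re-running the same week first clears that week's rows, so retries
    and cutover reruns never duplicate data."""
    stmts = [f"DELETE FROM {table} WHERE week_start = '{week_start}'"]
    for row in rows:
        cols = ", ".join(row)
        vals = ", ".join(f"'{v}'" for v in row.values())
        stmts.append(
            f"INSERT INTO {table} (week_start, {cols}) "
            f"VALUES ('{week_start}', {vals})"
        )
    return stmts
```

Because the delete is keyed on the run's own week, a rerun during cutover is safe whether Klair or Surtr executed last.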

## Changes vs Klair

- lambda_function.lambda_handler -> src/handler.handler with Surtr event shape ({run_id, params}) and structured output_summary.

- src/requirements.txt added (critical for CDK PythonFunction bundling — pandas/numpy dropped since they are unused).

- Secret lookups via klair/anthropic-api-key, klair/openai-api-key, klair/perplexity-api-key (configurable via env). Redshift secret reuses redshiftqueryeditor-CQL_download_OM-RedshiftConnector.

- Material-changes module disabled via MATERIAL_CHANGES_ENABLED=false (not yet implemented in Klair either).

## Prerequisites before enabling

- [ ] Verify klair/openai-api-key and klair/perplexity-api-key secrets exist in the Surtr AWS account (or update env var names).

- [ ] Verify S3 bucket klair-expenses-analysis is reachable from the Lambda role.

- [ ] Confirm Redshift tables exist in staging_education: qb_vendor_classifications, qb_cost_opportunities, qb_financial_metrics.

- [ ] Disable the Klair EventBridge rule quickbooks-expense-analysis-weekly before enabling the Surtr schedule.

## Test plan

- [x] npx cdk synth Pipeline-quickbooks-expense-analysis-dev passes

- [ ] Deploy to dev: npx cdk deploy Pipeline-quickbooks-expense-analysis-dev -c env=dev

- [ ] Manual Step Functions invocation (test_mode=true, dry_run=true) succeeds

- [ ] CloudWatch logs show classifications + cost-ops analysis complete

- [ ] Re-run without dry_run and verify Redshift rows present

- [ ] Flip schedule.enabled to true after prod validation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#32 — Enable schedules for quickbooks-expense-sync and netsuite-pipeline @kevalshahtrilogy  no labels

## Summary

- Flip schedule.enabled from false -> true for both recently migrated pipelines so EventBridge starts invoking them

- quickbooks-expense-sync: daily 02:00 UTC (cron(0 2 * * ? *)) — matches the automation spec

- netsuite-pipeline: daily 04:30 UTC (cron(30 4 * * ? *)) — single consolidated invocation of all 8 scheduled tasks (per README, the 4:30 AM and 5:30 AM Klair rules were merged into one run because dry-run profiling showed the full task set finishes well within the 900s Lambda timeout)
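The flip is a one-line change per pipeline. A hypothetical pipeline.json stanza (field names assumed; the actual Zod schema may differ), shown with quickbooks-expense-sync's cadence:

```json
{
  "schedule": {
    "enabled": true,
    "expression": "cron(0 2 * * ? *)"
  }
}
```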

## Notes

- Cron expressions are unchanged; only enabled is being flipped.

- The original automation spec for netsuite-pipeline mentioned "Daily 1AM UTC + Weekly Sun 5AM UTC", but the migrated pipeline documents a consolidated 4:30 UTC daily run. Keeping the documented schedule; happy to adjust if a different cadence is preferred.

## Test plan

- [ ] CDK synth/deploy succeeds and EventBridge rules show Enabled for both pipelines

- [ ] First scheduled invocation lands successfully for quickbooks-expense-sync (02:00 UTC)

- [ ] First scheduled invocation lands successfully for netsuite-pipeline (04:30 UTC)

- [ ] CloudWatch alarms remain quiet after the first few scheduled runs

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#119 — feat(canonical-sites): canonical schools layer with overrides and demerits @benji-bizzell  no labels

## Summary

- Introduce canonical schools table + per-field admin override layer; refresh pipeline merges Wrike capture, HubSpot, EduCRM, and dim_school into one source of truth.

- Add admin surfaces: /admin/school-fields (per-field review/override) and /admin/demerits (log data-quality issues against a DRI).

- Thread canonical values through the Info.md overlay so downstream consumers read the merged, override-aware shape instead of picking between upstreams.

## Why

Site data lived across HubSpot, EduCRM, Wrike, and dim_school with no shared layer, so Info.md and analytics consumers could pull different values for the same field depending on which upstream they hit. Ops had no way to correct bad upstream data or assign accountability when a field drifted. This branch lands the canonical registry, the merge + override layer, and the admin surfaces for both correcting data and logging demerits against the DRI for the source.
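The merge+override semantics can be sketched as follows (hypothetical function in Python for illustration; the real layer is TypeScript and also records per-field source attribution and demerits):

```python
def merge_canonical(upstreams: dict[str, dict], precedence: list[str],
                    overrides: dict[str, str]) -> dict:
    """Per-field merge: take the first non-empty value in upstream
    precedence order, unless an admin override pins the field."""
    fields = {f for data in upstreams.values() for f in data}
    merged = {}
    for field in sorted(fields):
        if field in overrides:
            merged[field] = overrides[field]  # admin valueOverride wins
            continue
        for source in precedence:
            value = upstreams.get(source, {}).get(field)
            if value not in (None, ""):
                merged[field] = value
                break
    return merged
```

Downstream consumers read only the merged shape, so a correction lands once (as an override) instead of being re-fixed in every upstream.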

## Test plan

- [x] pnpm --filter @bran/contracts run test — 24/24 passing

- [x] pnpm --filter @bran/sync run test — 724/724 passing

- [x] pnpm --filter @bran/chat run test — 2716/2716 passing

- [x] pnpm run typecheck — clean across 4 workspaces

- [ ] On deploy: run roles.patchCanManageSchoolFields and roles.patchCanLogDemerits to bring existing role docs to the new permission baseline

- [ ] Smoke-test /admin/school-fields on dev — valueOverride and sourceOverride round-trip; fill-rate indicator updates after a refresh

- [ ] Smoke-test /admin/demerits on dev — log a demerit against a test DRI from the field editor, confirm it appears in the two-pane view

- [ ] Trigger a canonical refresh (aerie sync or refreshCanonicalSchools) and confirm the Info.md overlay includes merged canonical values and the refresh result surfaces canonical_* keys in failedDomains on sub-errors

🤖 Generated with a very good bot

#120 — Rhodes upstream coverage: REBL3 + ISP + Wrike + Aerie->Rhodes write path (AERIE-184 + AERIE-195) @marcusdAIy  no labels

[rhodes_upstream_data_map.md](https://github.com/user-attachments/files/27020060/rhodes_upstream_data_map.md)

# Rhodes upstream coverage — REBL3 + ISP + Wrike + Aerie→Rhodes write path

Closes the bulk of the [Rhodes Upstream Data Coverage](https://linear.app/builder-team/project/rhodes-upstream-data-coverage-0f56ae8a5053/overview) project end-to-end:

- [AERIE-184](https://linear.app/builder-team/issue/AERIE-184) sub-issues 1, 3, 4, 6 — REBL3 connector, per-field merger, ISP fetcher + adapter, Wrike LiDAR Vendor extractor.

- [AERIE-195](https://linear.app/builder-team/issue/AERIE-195) — Analytics Worker → Rhodes write path: typed RhodesClient + two orchestrators (merger + Wrike LiDAR backfill), env-gated until [Rhodes PR #56](https://github.com/AI-Builder-Team/Rhodes/pull/56) lands.

- [AERIE-183](https://linear.app/builder-team/issue/AERIE-183) — closed by context_cache/rhodes_upstream_data_map.md (audit artefact, attached on the issue).

AERIE-184 sub-issue 2 (Sindri webhook gap-closure) is Rhodes-repo work; out of this PR's scope.

## Summary

This PR ships the Aerie half of the entire Rhodes upstream coverage initiative, structured as two parallel tracks:

Track A — Ingestion (AERIE-184). Bring up canonical readers for every upstream source the audit identified: REBL3 (full client + connector + per-site enrichment), ISP (DDB+S3 direct fetcher), Wrike (LiDAR Vendor Matterport-URL extractor). Each is a pure, fully-tested unit independent of any write path.

Track B — Write path (AERIE-195). Land the typed RhodesClient that wraps Rhodes' new /sync/aerie/* httpAction surface, then two orchestrators that compose Track A's primitives into actual Rhodes writes:

1. Rhodes merger orchestrator (every 5 min per AERIE-183's MVP target) — reads REBL3 + ISP + Schema-UI-from-current-Rhodes, runs the four-layer per-field precedence merger, diffs vs Rhodes state, sends only changed fields with per-field provenance JSON. Single writer per field set by design — no cross-orchestrator races.

2. Wrike LiDAR Vendor backfill orchestrator (daily — values are populated once per site and near-static; daily preserves Wrike rate-limit headroom for the rest of the platform). Writes ONLY the structured matterportModelId field via a separate Rhodes route. Never clears.

Both orchestrators are env-gated. They short-circuit with a one-time warn until ops sets RHODES_CONVEX_SITE_URL + RHODES_API_KEY after Rhodes PR #56 deploys — same pattern as the REBL3 ingestion env-gate.

## Commits

| SHA | Title | Lines |
|---|---|---:|
| 755c75f | feat(sync/upstream): add REBL3 client + Zod schemas + contract tests | +1,342 |
| 1c41daa | feat(analytics): add rebl3Sites Convex table + sync HTTP endpoint | +572 |
| 1ed1276 | feat(sync/upstream): wire REBL3 connector + worker cadence (slice 2B) | +699 |
| 11938b7 | feat(sync/upstream): add REBL3 per-site enrichment pass (slice 2C) | +927 |
| 48fc3f7 | feat(sync/upstream): add Rhodes per-field source merger (sub-issue 3) | +741 |
| 19617e4 | feat(sync/upstream): add ISP -> field-merge shape adapter (sub-issue 4 prep) | +275 |
| 907c9fd | fix(sync): guard REBL3 sync on missing env + opt out from analytics-worker tests | +68 |
| 527b4b9 | refactor(rebl3): address PR #120 review — high+medium+low+nit issues | +682 |
| 6a03d4e | feat(sync/upstream): add ISP fetcher (DynamoDB + S3) and Wrike LiDAR Vendor extractor | +977 |
| badd2d7 | feat(sync/upstream): add typed RhodesClient for Aerie -> Rhodes writes (AERIE-195) | +766 |
| 814ea46 | feat(sync/upstream): Rhodes merger + Wrike LiDAR orchestrators + 5-min tier (AERIE-184 + AERIE-195) | +2,116 |
| 518001e | feat(sync/upstream): close Rhodes merger follow-ups — REBL3 status, HubSpot, daily REBL3 cadence | +1,274 |
| 2528949 | refactor(sync/upstream): address PR #120 fresh-review fixes — Critical 1+2, all High, blocking Medium + Low | +1,383 |

~33 files, ~11,900 lines net-new in sync/ + chat/, ~300 net-new tests, full sync suite 922/922 green, chat tests 17/17 green.

## Architecture in one diagram

```
┌─────────────────────────────── Aerie analytics worker (this PR) ───────────────────────────────
│
│ ── Track A: ingestion primitives (pure, fully tested) ──
│
│ sync/src/upstream/rebl3/           sync/src/upstream/isp/        sync/src/upstream/wrike/
│   client.ts (Zod-validated REST)     fetcher.ts (DDB scan +        lidar-vendor.ts (Matterport
│   types.ts (passthrough schemas)       S3 spillover resolve)         model_id regex extractor)
│   sync.ts (paginated + enriched
│     ingestion → rebl3Sites)
│
│ ── Track B: Aerie → Rhodes write path (AERIE-195) ──
│
│ sync/src/upstream/rhodes/client.ts ─── typed RhodesClient ───┐
│   upsertSiteMetadata(slug, payload)                          │
│   setMatterportModelId(slug, id|null)                        │
│   listSites()                                                │
│                                                              ▼
│ sync/src/upstream/rhodes/sync.ts ── refreshRhodesMerger ── /sync/aerie/upsertSiteMetadata
│   1. listSites() → baseline + diff state
│   2. paginate REBL3       ┐
│   3. fetch HubSpot        │ per-field precedence
│   4. fetch ISP per scan   ├─ via mergeRhodesSiteFields ── diff vs Rhodes ── upsert(only-changed-fields)
│   5. SchemaUi from now    ┘
│   Cadence: every5Minutes (AERIE-183 target)
│
│ sync/src/upstream/wrike/lidar-vendor-sync.ts ── refreshWrikeMatterportBackfill
│   listSites filter wrikeFolderId → batched /folders fetch → extract → setMatterportModelId(only-on-diff)
│   Cadence: daily (LiDAR Vendor near-static; preserves Wrike rate-limit headroom)
│
└────────────────────────────────────────────────────────────────────────────────────────────────

                                   ┌─── X-Api-Key (rh_*) ──── HTTPS ───┐
                                   ▼                                   │
┌── Rhodes (PR #56 ─ new in that PR) ──────────────────────────────────┴──────
│ POST /sync/aerie/upsertSiteMetadata
│ POST /sync/aerie/setMatterportModelId
│ GET  /sync/aerie/listSites
│ matterportModelId (new structured field on sites)
└─────────────────────────────────────────────────────────────────────────────
```

## Architectural choices documented inline (full list)

### REBL3 (Track A foundation)

- Two REBL3 site tables coexist intentionally. Existing siteRebl3 is keyed on siteWrikeId for the Buildout Details panel; new rebl3Sites is keyed on rebl3Slug for analytics worker writes. REBL3-only sites without a Wrike folder are now representable. Eric's [Dashboard Consumers → Rhodes Migration](https://linear.app/builder-team/project/dashboard-consumers-rhodes-migration-80c21164a066) owns retiring siteRebl3.

- HTTP route + bearer token, not ConvexHttpClient.mutation(). Required because upsertRebl3Sites and patchRebl3SiteEnrichment are internalMutation (server-only). Mirrors the expense-sync architecture exactly.

- Bulk + enrichment in one weekly cycle. enrichmentAborted surfaces explicitly so the worker only marks the cadence clean on a fully successful run.

- Asymmetric 404 handling. /status returns null on 404 (REBL3 quirk); /site/{slug} treats 404 as a hard error. Documented at every call site.

- Zod .passthrough() everywhere; mapper is intentionally not forward-compat — unknown REBL3 fields silently drop at the storage boundary. Adding a new REBL3 field is a four-step manual workflow (schema column + arg shape + upsert type + mapper assignment) by design.

- 50ms politeness delay between every REBL3 HTTP request — both bulk pagination and per-site enrichment. ~20s of wall time per weekly cycle to be a courteous Vercel tenant.
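A minimal sketch of the asymmetric 404 policy, with hypothetical handler names and a reduced response shape (not the PR's actual client code):

```typescript
// Sketch of the asymmetric 404 handling described above.
// handleStatusResponse / handleSiteResponse and the Rebl3Status shape are
// illustrative; the real client wraps this inside Zod-validated fetch calls.

type Rebl3Status = { workflow: string } | null;

// /status: a 404 is a known REBL3 quirk meaning "no status yet" → null.
function handleStatusResponse(status: number, body: unknown): Rebl3Status {
  if (status === 404) return null;
  if (status !== 200) throw new Error(`REBL3 /status failed: HTTP ${status}`);
  return body as Rebl3Status;
}

// /site/{slug}: a 404 means the slug we hold is wrong → hard error.
function handleSiteResponse(status: number, slug: string, body: unknown): unknown {
  if (status === 404) throw new Error(`REBL3 site not found: ${slug}`);
  if (status !== 200) throw new Error(`REBL3 /site/${slug} failed: HTTP ${status}`);
  return body;
}
```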

### Per-field merger (Track A → Track B contract)

- Rule table, not a single global precedence. Different field classes want different rules: REBL3-primary (address, scoring, lease), HubSpot-primary (marketingName, gradeRange), ISP-primary (capacity-derived), Schema-UI-primary (contact info).

- Whitespace-only Schema-UI values do NOT shadow lower-precedence layers (trim before empty check). Prevents accidental "user typed spaces, lost the HubSpot value" failure mode.

- NaN rejection on number rules with fallback to next-precedence layer.

- Zero is a real value: a tuition of 0 is a legitimate write, not "missing data".

- occupancyLoad fully wired (was orphaned in early commit; review pass caught + fixed).
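Under the stated rules, the merge reduces to a small pure function. This is a sketch with a hypothetical `FIELD_RULES` table and `mergeField` helper (the real `mergeRhodesSiteFields` covers far more fields and richer rules):

```typescript
// Sketch of the rule-table merge: per-field precedence, whitespace never
// shadows, NaN falls through, zero is a real value.

type Layer = Record<string, unknown>;

// Per-field precedence, highest first. Hypothetical reduced table; the point
// is that each field class gets its own ordering, not one global rule.
const FIELD_RULES: Record<string, Array<"schemaUi" | "rebl3" | "hubspot" | "isp">> = {
  address:       ["schemaUi", "rebl3", "hubspot", "isp"], // REBL3-primary
  marketingName: ["schemaUi", "hubspot", "rebl3", "isp"], // HubSpot-primary
  tuition:       ["schemaUi", "rebl3", "hubspot", "isp"],
};

function isUsable(v: unknown): boolean {
  if (v === null || v === undefined) return false;
  if (typeof v === "string") return v.trim().length > 0; // whitespace-only never shadows
  if (typeof v === "number") return !Number.isNaN(v);    // NaN rejected; 0 is real
  return true;
}

function mergeField(field: string, layers: Record<string, Layer>): unknown {
  for (const source of FIELD_RULES[field] ?? []) {
    const v = layers[source]?.[field];
    if (isUsable(v)) return v; // first usable layer in precedence order wins
  }
  return undefined; // missing everywhere → leave the Rhodes value untouched
}
```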

### Aerie → Rhodes write path (Track B)

- X-Api-Key header auth, not Bearer. Matches Rhodes' existing verifyAerieApiKey httpAction helper that SHA-256s the key and looks it up in the apiKeys table via api.apiKeys.validateByHash. Key minted by Yibin 2026-04-23 (prefix rh_020a3f2).

- Class-based RhodesClient mirrors RebL3Client. Injectable fetch for tests, AbortSignal.timeout() per request (default 30s), Zod-validated responses, RhodesError preserves status + body for caller inspection.

- Single writer per field set. The merger is the ONLY orchestrator that writes to merger-managed columns. The Wrike LiDAR backfill writes ONLY the disjoint matterportModelId field via a separate route. No cross-orchestrator races by design.

- Diff-before-write. Merger compares its output against listSites current state and skips the HTTP roundtrip entirely when nothing changed. Server is also idempotent (outcome: "no_change") — pre-diffing saves the round-trip and keeps the audit log free of spurious 0 fields changed entries.

- "Empty fields are the feature, not the bug." Merger never sends undefined — missing upstream data leaves Rhodes values untouched. The Wrike LiDAR backfill never clears — empty LiDAR Vendor is treated as "not from Wrike", not "delete what Sindri/manual entry put there".

- Per-cycle caps as circuit breakers. maxIspFetches=500, maxSites=10_000, maxFolders=2_000. Defensive against surprise inventory growth.

- Cadence-advance discipline mirrors REBL3. markRefreshed only fires on a fully clean run (zero errors anywhere in the pipeline). Any degradation holds the tier so the next worker iteration retries within the cadence window.
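The diff-before-write step is essentially one pure function. A sketch with hypothetical names (the real merger also attaches per-field provenance JSON):

```typescript
// Sketch of diff-before-write: send only fields whose merged value differs
// from current Rhodes state; undefined means "no upstream data" and is never
// sent, so missing data can never clear an existing Rhodes value.

type Fields = Record<string, string | number | null | undefined>;

function diffChangedFields(merged: Fields, current: Fields): Fields {
  const changed: Fields = {};
  for (const [key, value] of Object.entries(merged)) {
    if (value === undefined) continue;            // empty-is-the-feature
    if (current[key] !== value) changed[key] = value;
  }
  return changed; // empty object → skip the HTTP round-trip entirely
}
```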

### Cadence (AERIE-183 target)

AERIE-183 §Deliverable 2 pins the MVP at *"5 minutes where live streaming is not feasible; adjust per source based on data volume."* Per-source assignment:

| Path | Tier | Why |
|---|---|---|
| REBL3 ingestion (rebl3Sites) | weekly | Site inventory changes quarterly; ~150 sites; weekly is well above change rate. |
| Rhodes merger (rhodesMergerSync) | every5Minutes | Pure read/diff/write; ~free on no-op; meets AERIE-183 target. |
| Wrike LiDAR backfill (rhodesWrikeMatterportBackfill) | daily | LiDAR Vendor is near-static; daily preserves Wrike rate-limit headroom. |

A new every5Minutes tier was added to REFRESH_TIERS for this. The shouldRefresh/markRefreshed/cadence-test surfaces all generalise transparently.
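The tier gate itself is tiny. A sketch with hypothetical shapes (the real `REFRESH_TIERS` / `shouldRefresh` live in the sync package's cadence module and carry more tiers):

```typescript
// Sketch of the tiered cadence gate. Interval values follow the tiers named
// in this PR; the every5Minutes interval is the 300_000 ms the cadence test pins.

const REFRESH_TIERS = {
  every5Minutes: 5 * 60 * 1000,            // Rhodes merger
  daily:         24 * 60 * 60 * 1000,      // Wrike LiDAR backfill
  weekly:        7 * 24 * 60 * 60 * 1000,  // REBL3 ingestion (pre-518001e)
} as const;

function shouldRefresh(
  tier: keyof typeof REFRESH_TIERS,
  lastRefreshedMs: number | undefined,
  nowMs: number,
): boolean {
  if (lastRefreshedMs === undefined) return true; // never run → run now
  return nowMs - lastRefreshedMs >= REFRESH_TIERS[tier];
}
```

Because markRefreshed only fires on a fully clean run, a degraded cycle leaves `lastRefreshedMs` untouched and the next worker iteration retries inside the same window.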

## What 2528949 (most recent) closed — fresh-review pass

Addresses every Critical + every High + every blocking Medium + most Low from the post-update internal review.

**Critical**

- #1 ISP fetcher: ONE Scan per cycle, not N. New fetchAllLatestIspAnalyses() does one DDB Scan filtered to status="completed", in-memory group-by-model_id with deterministic latest-per-model selection, returns Map<model_id, IspFetchResult>. Merger orchestrator calls it once per cycle and looks up per-site in O(1). Per-site fetchLatestIspAnalysisByModelId kept as test-only injectable + one-off lookups.

- #2 Zod runtime validation on ISP-fetched JSON. Both inline and S3 branches now validate via IspAnalyzeResponseSubsetSchema before returning — malformed JSON / wrong types / canonical shape changes throw with a source-label-prefixed error rather than silently propagating garbage.
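The grouping behind Critical #1 can be sketched in a few lines (hypothetical row shape; the real `fetchAllLatestIspAnalyses` also filters the Scan to completed rows, resolves S3 spillover, and Zod-validates):

```typescript
// Sketch of one-Scan-per-cycle grouping: deterministic latest-per-model
// selection over the full scan result, yielding O(1) per-site lookups.

type IspRow = { model_id: string; job_id: string; created_at: string };

// Newest created_at wins; ties break on the lexicographically larger job_id
// (the deterministic tie-break from High #4).
function groupLatestByModelId(rows: IspRow[]): Map<string, IspRow> {
  const latest = new Map<string, IspRow>();
  for (const row of rows) {
    const prev = latest.get(row.model_id);
    if (
      prev === undefined ||
      row.created_at > prev.created_at ||
      (row.created_at === prev.created_at && row.job_id > prev.job_id)
    ) {
      latest.set(row.model_id, row);
    }
  }
  return latest;
}
```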

**High**

- #3 subset-compat actually catches renames now — replaced one-way assignability check (which was permissive due to optional fields) with per-field-path indexed-access checks (ISPAnalyzeResponse["recommended_capacity"], etc.). Renaming any path on canonical fails compilation.

- #4 pickLatestByCreatedAt deterministic tie-break on lexicographic job_id.

- #5 Worker Rhodes-merger branch tests — 5 new tests cover every gate branch (clean / errCount / upserts.failed / isp.failed best-effort / provisioning gap).

- #6 zoning documented as intentionally not mapped (raw REBL3 string vs Rhodes enum; mapping table owned by AERIE-195 future work).

- #7 Explicit AWS SDK retry config ({ maxAttempts: 5, retryMode: "adaptive" }) on DDB + S3 clients.

- #8 IAM scope + Sergio-signoff requirement documented in fetcher docstring as runbook item.

**Medium**

- #9 Per-site ISP failures are best-effort in worker gate (bulk ISP failures still gate via __bulk:isp-batch).

- #10 Provisioning-gap detection: notFound > 50% of sites → "PROVISIONING GAP" warning + clock holds.

- #11 Provenance docstring aligned with patch-level reality.

- #12 RhodesClient bounded retry for 502/503/504 + network throws (default 3 retries, exponential backoff). 4xx and 500 NOT retried. 8 new tests.

- #13 hubspot-fetcher.ts header note clarifying it's Redshift-backed, not HubSpot REST.

- #15 Wrike maxFolders truncation surfaces as errors["__truncated"] + visible warn (not silent log).

- #16 HubSpot name-collision drops are now logged.

- #17 AWS SDK version skew investigated — intrinsic to package release cadence (util-dynamodb lags client-dynamodb); current pin is the best alignment achievable.
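The retry policy from #12 reduces to roughly the following (a sketch with hypothetical names; defaults mirror the description: 3 retries, exponential backoff, only 502/503/504 and network throws retried, 4xx and 500 surfaced immediately):

```typescript
// Sketch of bounded retry with exponential backoff. The real RhodesClient
// wires this around fetch with AbortSignal.timeout() per request.

const RETRYABLE = new Set([502, 503, 504]);

async function fetchWithRetry(
  doFetch: () => Promise<{ status: number }>,
  maxRetries = 3,
  baseDelayMs = 250,
): Promise<{ status: number }> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    if (attempt > 0) {
      // exponential backoff: base, 2x base, 4x base, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
    try {
      const res = await doFetch();
      if (!RETRYABLE.has(res.status)) return res; // 4xx and 500 NOT retried
      lastError = new Error(`HTTP ${res.status}`);
    } catch (err) {
      lastError = err; // network throw → retry
    }
  }
  throw lastError;
}
```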

**Low**

- Wrike LiDAR regex/comment alignment fixed (removed /show/m/ID mention that didn't match the regex or empirical sample).

- missingFolders counter on Wrike backfill result (folders that don't appear in Wrike's batch response).

- Sites sorted by slug before iteration → deterministic per-cycle log order.

- ISP fetcher console.warn for bucket mismatch swapped to injected log for consistency.

- New every5Minutes cadence test pins the 300_000ms interval.

**Nits**

- isp-subset.ts docstring explains why subset fields are all optional (matches at-rest persisted JSON; runtime required-field assertion lives in fetcher's Zod validation).

**NOT fixed** (with rationale in commit body)

- Low #20 / #24 (logging prefix consistency) — existing prefixes already grep-able.

- Med #14 (listSites pagination) — out of scope at current scale; tracked as follow-up.

- Low #21 (no resumption) — idempotent merge-then-diff makes worst case acceptable; cursor state would be meaningful complexity for marginal benefit.

## What 518001e closed

Three follow-ups the merger originally flagged as out-of-scope are now in:

1. REBL3 status enrichment in the merger. loiSignedDate / leaseSignedDate / projectedOpenDate now flow into Rhodes. Implementation reads from Aerie's Convex rebl3Sites.workflowStatuses cache (updated daily by the connector) via a new internalQuery listRebl3SiteEnrichment + GET /sync/analytics/rebl3 httpAction; the merger calls it every cycle, so newly-cached dates propagate to Rhodes within 5 min of the next REBL3 ingestion. Pure parser handles all four signed-status synonyms REBL3 emits (signed/done/completed/complete), ignores the leasing system (post-sign workflow ≠ lease-signed), and reads projected_open_date regardless of due-diligence status.

2. HubSpot layer wiring (real Redshift join) — replaces the v1 empty-map default with a production fetcher that queries staging_education.hubspot_programs_raw (the canonical raw HubSpot mirror — explicitly NOT core_education.dim_school, which is the stale derived table flagged for removal once canonical-sites lands; TEMP comments span analytics/refresh.ts and the related queries). Joins to Rhodes sites by normalised display name (lowercase + trim + collapse-whitespace). Sites with no name match get NO HubSpot layer (the merger handles that correctly). New file sync/src/upstream/rhodes/hubspot-fetcher.ts with full unit tests.

3. REBL3 ingestion cadence: weekly → daily. REFRESH_TIERS.rebl3Sites bumped. This is independent of the Rhodes write path (the merger reads REBL3 directly every 5 min, so Rhodes already gets 5-min freshness on REBL3 data); the cadence change only affects Aerie's own rebl3Sites Convex cache, which JC's dashboards read against. Daily catches every realistic LOI / lease / diligence event without weekly's worth of staleness in front of operators. Cost: ~300 REBL3 requests/day, well under any rate limit.
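The normalised-name join key from item 2 is small enough to pin down exactly. `normaliseSchoolName` is named in the test plan below; the body here is a sketch of the described behaviour (lowercase + trim + collapse whitespace), not a copy of the PR's code:

```typescript
// Sketch of the HubSpot ↔ Rhodes join key: normalise display names on both
// sides before matching. Sites whose normalised name has no HubSpot match
// simply get no HubSpot layer (the merger handles the absent layer correctly).

function normaliseSchoolName(name: string | null | undefined): string | null {
  if (!name) return null;
  const normalised = name.toLowerCase().trim().replace(/\s+/g, " ");
  return normalised.length > 0 ? normalised : null; // whitespace-only → no key
}
```

An explicit alias table (follow-up 1 below) would replace this name-based v1 without changing the fetcher's signature.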

## Remaining open follow-ups (NOT blocking this PR)

1. HubSpot join: name-based v1 → explicit alias table. Today's join uses normalised display-name matching. An explicit staging_education.rebl3_hubspot_alias table (or extension to the existing map_school_alias) would close the long tail of name mismatches. The fetcher's signature accommodates that swap as a drop-in replacement.

2. Audit doc fix on aerie/sync/src/analytics/capacity.ts (Klair-ISP repo). v1 of rhodes_upstream_data_map.md mis-described that file. v2 corrects + supersedes; doc landing alongside this PR.

3. Sindri webhook health verification (sub-issue 2 — Rhodes repo). Operational state of the existing Sindri webhook is unknown post-freeze. If silent → AERIE-184 #2 becomes unsourced_frozen; if firing → gap-closure is a Rhodes-repo PR (processInboundWebhook extension).

4. Migrate siteRebl3 consumers off the legacy table; remove legacy writes. Owned by Eric's Dashboard Consumers project.

## What this enables for the dashboards

- Master Site Pipeline Dashboard ([AERIE-185](https://linear.app/builder-team/issue/AERIE-185)): site cards with rebl3Slug + name + address + classification land directly. Columns 1–3 (Pre-op search / diligence) read rebl3Sites.workflowStatuses. Compound by_classification_state index supports filtering on both at once.

- Site Detail View ([AERIE-186](https://linear.app/builder-team/issue/AERIE-186)): Panel 1 (Rebel diligence statuses) reads the same workflowStatuses. Panel 3 (School facts) gets address + capacity + REBL3 scoring + agent_results.budget for diligence cost estimates.

- Columns 4–7 of the Kanban (milestone-gated) still depend on Benji's [Unified Sync Pipeline](https://linear.app/builder-team/project/dashboard-consumers-rhodes-migration-80c21164a066). Columns 8–9 + Detail Panel 5 (quality bars) depend on existing Aerie data. Detail Panel 4 (artefact links) depends on Yibin's Schema UI.

## Reviewer guide

The branch is large (13 commits, ~11,900 lines net) but is structured to review well in chunks if you want to take it one slice at a time. Suggested order:

1. 755c75f (REBL3 client + Zod schemas) — pure foundations, no external deps.

2. 1c41daa (rebl3Sites Convex table + HTTP endpoint) — schema + write surface only.

3. 1ed1276 + 11938b7 (REBL3 connector slices 2B + 2C) — the ingestion orchestrator end-to-end.

4. 48fc3f7 (per-field merger) — pure function. The contract between every Track A primitive and Track B writer.

5. 19617e4 + 6a03d4e (ISP adapter + fetcher, Wrike extractor) — Track A primitives 2 and 3.

6. badd2d7 (RhodesClient) — typed wrapper for Rhodes PR #56's /sync/aerie/* routes.

7. 814ea46 (orchestrators + worker wiring) — composes everything above.

Code-quality passes (907c9fd, 527b4b9) are surgical; safe to skim diff-only.

### Deploy / ops dependencies (no action needed for review, but flagged for context)

The whole AERIE-195 path stays dormant in prod until two things happen:

1. [Rhodes PR #56](https://github.com/AI-Builder-Team/Rhodes/pull/56) merges + deploys. Adds the structured matterportModelId field on Rhodes sites + the three /sync/aerie/* httpAction routes the orchestrators target.

2. Ops sets RHODES_CONVEX_SITE_URL + RHODES_API_KEY env vars on the Aerie analytics worker. API key was minted 2026-04-23 by Yibin.

Until both happen, the orchestrators short-circuit with a one-time warn each (Rhodes merger sync skipped: … / Wrike LiDAR backfill skipped: …) and the rest of the analytics worker continues unchanged.

## Test plan

270+ net-new tests across this PR. Full sync suite: 890/890 green. Chat REBL3 tests: 15/15 green.

- [x] REBL3 client (sync/src/upstream/rebl3/client.test.ts, 15 tests): schema-only contract pin against recorded fixtures, URL building, response parsing, /status 404→null, error throws on non-404, getSite 404 throws asymmetric to status, resolve URL, AbortSignal timeout wiring + timeoutMs=0 disables.

- [x] REBL3 sync (sync/src/upstream/rebl3/sync.test.ts, 37 tests): pure mapper + refreshRebl3Sites orchestrator (pagination, end-of-inventory, first-page-throws-aborts, mid-cycle isolation, push-failure isolation, batchSize splitting, enrichment wiring, stuck-offset duplicate detection, per-batch error key composition, bulk politeness sleep) + full enrichRebl3Sites orchestrator (happy path, /status 404, /site throw, both fail, politeness sleep, batchSize splitting, push isolation, missing agent_results).

- [x] Convex rebl3Sites mutations + read (chat/convex/analyticsRebl3.test.ts, 15 tests): insert, patch, all-optional, bulk, per-row failure isolation, indexes by_classification + by_state, JSON round-trip, identity preservation, listRebl3SiteEnrichment slim shape + null-omission + empty-table.

- [x] Per-field merger (sync/src/upstream/rhodes/field-merge.test.ts, 30 tests): all four-layer precedence chains, edge cases (all-empty, null vs undefined, empty-string-as-null, zero-is-real, partial layers, all-four-layers precedence), whitespace handling (whitespace-only does NOT shadow, padded values trimmed, tab/newline), NaN handling (rejected with fallback), purity check.

- [x] ISP shape adapter (sync/src/upstream/rhodes/isp-adapter.test.ts, 10 tests): happy path, top-level vs summary precedence, optional sub-objects absent, zero passes through, end-to-end into the merger.

- [x] ISP fetcher (sync/src/upstream/isp/fetcher.test.ts, 21 tests): DDB scan + filter + pagination, S3 spillover resolution, malformed s3:// URI, malformed JSON, missing result_json on completed row, no-match returns null, latest-by-created_at picker, empty Items.

- [x] Wrike LiDAR Vendor extractor (sync/src/upstream/wrike/lidar-vendor.test.ts, ~10 tests): both URL shapes (?m=..., /models/{id}), edge cases (null, undefined, empty, whitespace, free-text, double-encoded).

- [x] RhodesClient (sync/src/upstream/rhodes/client.test.ts, 29 tests): every endpoint happy path + 4xx/5xx + shape mismatch + null-value handling + headers (X-Api-Key + Content-Type for POST, X-Api-Key only for GET) + constructor validation + timeout wiring + env helpers (configured/missing/whitespace).

- [x] Rhodes merger orchestrator (sync/src/upstream/rhodes/sync.test.ts, 29 tests): pure helpers (rebl3SiteToLayer, rhodesSiteToSchemaUiLayer, applyRebl3StatusEnrichment with real REBL3 status fixture, diffMergedAgainstRhodes including zero-is-real / unmapped-fields / empty-is-feature cases) + happy paths (REBL3+ISP, REBL3 only, Schema-UI override, HubSpot layer, REBL3 enrichment dates) + failure isolation (bulk REBL3 throws, REBL3 enrichment fetch throws, per-site ISP throws, per-site upsert throws) + counter flow-through + maxIspFetches cap.

- [x] HubSpot fetcher (sync/src/upstream/rhodes/hubspot-fetcher.test.ts, 20 tests): normaliseSchoolName (case + whitespace + null + empty), hubspotProgramToLayer (null→undefined, zero-is-real, empty-string drop), buildHubspotLayerByNormalisedName (first-wins, fallback, no-name drop, empty input), joinHubspotLayersToRhodesSites (re-key to slug, normalise both sides, no-match drop, empty-name drop, multi-site to one HubSpot record), end-to-end fetcher with mocked Redshift module.

- [x] Wrike LiDAR backfill orchestrator (sync/src/upstream/wrike/lidar-vendor-sync.test.ts, 15 tests): pure helper (pickLidarVendorValue) + happy paths for both URL shapes + skip-on-match + update-on-change + never-clears policy (empty + unparseable) + site filtering + batching at batchSize ceiling + shared folderId handling + Wrike 500 per-batch isolation + per-site upsert failure isolation + outcome-counter flow-through.

- [x] Cadence + worker wiring (sync/tests/analytics/refresh-cadence.test.ts, sync/tests/analytics-worker/index.test.ts): every5Minutes tier registered, all new domains in REFRESH_TIERS whitelist, worker tests confirm env-missing path warns once and continues without crashing.

Pre-commit hooks ran on every commit: convex-paths, biome, typecheck-chat, typecheck-sync all green throughout.

### Local steady-state check (recommended before merge)

Once Rhodes PR #56 is in dev, set RHODES_CONVEX_SITE_URL + RHODES_API_KEY (+ WRIKE_API_TOKEN + WRIKE_SPACE_ID + WRIKE_ROOT_FOLDER_ID if exercising the LiDAR backfill) and observe:

```
[rhodes/merger] refresh starting
[rhodes/merger] REBL3 fetched N sites (keyed by slug)
[rhodes/merger] refresh complete: N sites, N updated (M fields), N no-diff, 0 failed; ISP H/T hit, 0 miss, 0 err; layers r=N/h=0/i=H/s=N (Xms)
[rhodes/merger] N sites, N updated (M fields), N no-diff, 0 not-found
[wrike/lidar-backfill] refresh starting
[wrike/lidar-backfill] complete: N sites considered, F folder IDs, F folders fetched, E extracted, … (Xms)
[wrike/lidar-backfill] F folderIds, E extracted, U updated, K unchanged
```

Verify in the Rhodes Convex dashboard that sites rows are getting their merged fields populated and matterportModelId is being set from Wrike for sites with wrikeFolderId. If env vars are missing, the worker now logs Rhodes merger sync skipped: … / Wrike LiDAR backfill skipped: … exactly once (not per cycle).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#124 — feat(portfolio): add site lifecycle dashboard wired to Rhodes @benji-bizzell  no labels

## Summary

- New Portfolio dashboard with a 9-column Kanban (pre-op → operating) and an accordion List view, toggleable via the Sort & display popover with localStorage persistence.

- Wired to Rhodes /sync/aerie/listSites — stage/milestone-driven phase bucketing, DRIs, target dates, student capacity.

- Enrollment numbers hydrated from HubSpot deals via portfolio-enrollment; placeholder data path kept behind the contract for degraded states.

## Why

Site ops had no single rollup of where every school sits in the open-a-new-site lifecycle. This dashboard replaces the mental spreadsheet — one screen to see what's ahead of schedule, what's blocked, who owns what, and how each site is filling up heading into its open date.

## Test plan

- [x] pnpm --filter chat test portfolio — card initials, date parsing, phase config, list/contract, enrollment builder all green

- [ ] Verify /dashboards/portfolio loads against dev Rhodes endpoint; spot-check a site in each phase column

- [ ] Toggle List ↔ Board, confirm selection persists across reload

- [ ] Confirm enrollment counts match HubSpot for a known site (e.g. Austin Mueller)

🤖 Generated with a very good bot

#127 — feat(portfolio): add Site Detail View routed page @marcusdAIy  no labels

## Screenshots

<img width="1661" height="823" alt="Screenshot 2026-04-24 133705" src="https://github.com/user-attachments/assets/10a97853-4850-4120-b087-afb4cd3f8a07" />

<img width="1181" height="641" alt="Screenshot 2026-04-24 133748" src="https://github.com/user-attachments/assets/05bb2455-1457-46df-92f7-59eb2f2e8871" />

<img width="1101" height="654" alt="Screenshot 2026-04-24 133759" src="https://github.com/user-attachments/assets/c62aff48-edc8-4ff8-bd29-05d7e699bb0d" />

<img width="547" height="840" alt="Screenshot 2026-04-24 133810" src="https://github.com/user-attachments/assets/da16b2b2-c58a-4d77-9112-efcde4a69174" />

<img width="1618" height="921" alt="Screenshot 2026-04-24 133954" src="https://github.com/user-attachments/assets/bef7256b-9e02-4f36-bd5b-9473119a16dc" />

---

## Summary

Routed Site Detail page at /dashboards/portfolio/[siteSlug], styled like Linear's project detail. Click any school card on the Portfolio dashboard (board or list view) to open it. The PR also includes substantive infrastructure additions; see "Scope" below for the honest enumeration.

Layout (per JC's whiteboard + Benji's [Linear UI doc](https://linear.app/builder-team/document/site-detail-view-ui-daeb897aa394) + AERIE-186 spec update 2026-04-24):

- Header strip — school name, slug, phase + market + status chips, sticky on lg+

- Main column (front-and-centre, lg:col-span-8) — Status Summary → Fact Sheet → Lease & Purchase → Status Updates → Buildout pillar

- Right rail (lg:col-span-4) — Pipeline Statuses → Personnel

- Independent scrollable panes on lg+; collapses to a single scroll on mobile

- Collapsible cards with localStorage-persisted open/closed state per card

Supersedes #114 — per Benji's [CHANGES_REQUESTED review](https://github.com/AI-Builder-Team/Aerie/pull/114) ("Scrap the Modal UI pieces, but keep the backend pieces"), the modal + tabbed UI was dropped.

## Scope (full enumeration)

This PR is bigger than a routed page. Here's what's actually in it:

### Site Detail page + cards

- New routed page at chat/app/(main)/dashboards/portfolio/[siteSlug]/page.tsx

- Seven cards: Status Summary, Fact Sheet, Lease & Purchase, Pipeline Statuses, Personnel, Status Updates, Buildout

- Page wires click-through from portfolio-card.tsx and portfolio-list.tsx (via Next.js <Link>; respects board drag-pan via the existing [data-portfolio-card] exclusion)

- Card atom (card-atoms.tsx) with collapsible header + localStorage persistence per card-id

- Matterport row links directly to the public viewer (https://my.matterport.com/show/?m=<modelId>) using Rhodes' matterportModelId — no upstream API call

### Lease & Purchase card (AERIE-186 spec update 2026-04-24)

- Dedicated card after Fact Sheet for lease/purchase detail — surfaces LOI signed + Lease signed (both from Rhodes today) and Link / Start / End placeholders for the data not yet upstream

- Lease document URL exists in REBL3 today but isn't exposed via Rhodes' listSitesForAerie — tracked in AERIE-223

- Lease start / end don't exist in any upstream system — tracked in AERIE-224

### Operator status updates feed (siteStatusUpdates)

- New chat-local Convex table + 4 mutations/queries (create / update / remove / listForSite)

- Per-author edit + delete enforced server-side; UI gates Edit/Delete on isOwn

- 16 unit tests covering auth, author-only edit/delete, listForSite ordering + scoping + isOwn flag, body length bounds

### AI Status Summary action

- New "use node" Convex action siteSummary.generateSiteStatusSummary, modeled after expenseInsights.ts

- Server-side Rhodes fetch (the action accepts ONLY siteSlug from the client; never trusts client-supplied facts in the prompt)

- Cache row per (siteSlug, audienceUserId) — invalidates on either 30-min TTL OR an input fingerprint change (status updates, Rhodes facts, last-visit timestamp)

- Per-(user, slug) 60s cooldown on forceRefresh to bound Anthropic spend

- Untrusted-data delimiters around all operator-authored content + system instruction telling the model to treat delimited content as data, never instructions

- Falls back to a deterministic local summary when ANTHROPIC_API_KEY is missing or Claude errors

- Configurable model via SITE_SUMMARY_MODEL Convex env var (defaults to claude-sonnet-4-5-20250929 matching the PMO summary route precedent)

- Discriminated result type so the UI can render an actionable message per failure mode (auth-missing vs. Rhodes-not-configured vs. Rhodes-fetch-failed vs. slug-not-in-Rhodes)

- "Discuss in chat →" handoff seeds an Aerie chat conversation with the summary as the first assistant message and routes to /c/[id]
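The cache-invalidation rule above (30-min TTL OR input fingerprint change) can be sketched as one predicate. Names and shapes here are hypothetical; the real action also scopes rows per (siteSlug, audienceUserId) and enforces the 60s forceRefresh cooldown separately:

```typescript
// Sketch of the staleness check: regenerate when the cached row is older than
// the TTL OR when any prompt input changed (status updates, Rhodes facts,
// last-visit timestamp, folded into a single fingerprint string).

const TTL_MS = 30 * 60 * 1000; // 30-minute TTL

type CachedSummary = { generatedAtMs: number; inputFingerprint: string };

function isCacheStale(
  cached: CachedSummary | null,
  currentFingerprint: string,
  nowMs: number,
): boolean {
  if (cached === null) return true;                       // no row yet
  if (nowMs - cached.generatedAtMs >= TTL_MS) return true; // TTL elapsed
  return cached.inputFingerprint !== currentFingerprint;   // inputs changed
}
```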

### Site visit tracking (siteVisits)

- New chat-local Convex table per (userId, siteSlug)

- Recorded on page UNMOUNT (not mount) so the next visit's "since you last looked" framing references the previous session, not the current

- Read by both the StatusSummaryCard (subtitle) and the AI summary action (prompt context)

### Wider PortfolioSiteRow audit

- Extended PortfolioSiteRow + derivePortfolioSiteRow with 14 fields the previous contract dropped: marketingName, loiSignedDate, leaseSignedDate, projectedOpenDate, actualOpenDate, netFloorArea, occupancyLoad, numberOfClassrooms, gradeRange, tuition, schoolEmail, schoolPhone, zoning, matterportModelId

- Plus PortfolioMilestones struct + PORTFOLIO_MILESTONE_DEFINITIONS for M1–M6 display

- Updates portfolio-enrollment.test.ts fixture for the wider shape

### ISP→Rhodes matterportModelId discovery orchestrator (replaces Wrike LiDAR Vendor)

- New sync/src/upstream/rhodes/matterport-from-isp-sync.ts — daily-cadence orchestrator

- Reads completed ISP analyses (via existing fetchAllLatestIspAnalyses), extracts wrike_site.wrike_id, joins to Rhodes via wrikeFolderId, writes via setMatterportModelId

- Conflict detection: tracks duplicate wrikeFolderId writes within a batch (lex-first wins, second logged + counted) and flags cross-orchestrator overrides of existing values

- 13 unit tests covering happy path + every failure mode

- Wired into analytics-worker/index.ts after the existing Wrike orchestrator (which it deprecates)

- Cadence registered in refresh-cadence.ts as rhodesMatterportFromIspBackfill

- IspAnalyzeResponseSubset (in @bran/contracts) extended with optional wrike_site so the orchestrator reads it without casting; subset-compat.test.ts extended for drift detection
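The in-batch conflict rule (lexicographically first modelId wins on a duplicate wrikeFolderId, the loser is counted) might look like this pure reduction; field and counter names are illustrative:

```typescript
// Sketch of in-batch duplicate handling for the ISP→Rhodes orchestrator:
// when two ISP analyses map to the same wrikeFolderId, the lexicographically
// first matterportModelId wins and the loser is counted as a skipped conflict.
interface DiscoveredModel {
  wrikeFolderId: string;
  matterportModelId: string;
}

function resolveBatch(batch: DiscoveredModel[]): {
  winners: Map<string, string>;
  conflictsSkipped: number;
} {
  const winners = new Map<string, string>();
  let conflictsSkipped = 0;
  for (const item of batch) {
    const existing = winners.get(item.wrikeFolderId);
    if (existing === undefined) {
      winners.set(item.wrikeFolderId, item.matterportModelId);
    } else {
      conflictsSkipped++;
      // lex-first wins: keep the smaller modelId, drop the other
      if (item.matterportModelId < existing) {
        winners.set(item.wrikeFolderId, item.matterportModelId);
      }
    }
  }
  return { winners, conflictsSkipped };
}
```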

### Chat-side polish

- Pipeline Statuses LOI/Acquisition gates show "Done · date not in Rhodes" when phase-derived without a real ISO date (consistent with what Fact Sheet shows)

- Buildout card uses loiSignedDate ?? actualOpenDate for "Project started" anchor (operating sites get open date; pre-operating get LOI)

- RagDot has role="img" + aria-label for screen-reader announce when the dot stands alone

- Site name <h1> truncates with min-w-0 so long names don't push the slug off-screen

### Insight model alignment

- expenseInsights.ts model bumped from the 404'ing claude-3-haiku-20240307 to claude-haiku-4-5-20251001 (matches admissionsInsights.ts + attachment-lookup.ts)

- siteSummary.ts defaults to claude-sonnet-4-5-20250929 (matches pmo-summary route)

### Infra

- docker-compose.yml: chat service now passes RHODES_CONVEX_SITE_URL + RHODES_API_KEY and exposes the chat container on ${CHAT_PORT:-3000} for local Docker dev

## Field coverage (125 prod Rhodes sites — verified live 2026-04-24)

What the page surfaces today vs. what Rhodes has populated:

| Tier | Fields |
|---|---|
| 100% populated | address, market, milestones |
| 70–90% | Buildout DRI (83%), wrikeFolderId (72%) |
| 20–50% | Operating DRI (24%), zoning (42%), LOI date (40%), capacity (35%), tuition (23%), lease date (23%) |
| <20% | grade range (18%), marketing name (14%), occupancy (6%), classrooms (5%), email/phone (2%), projected open (2%) |
| 0% | matterportModelId, netFloorArea (will populate once the new ISP→Rhodes orchestrator runs on the analytics worker daily cadence post-merge) |

Page renders maximum signal today: every populated field surfaces correctly, and every empty field is visible as a muted dash or a "Not yet in Rhodes" tag (actionable, not hidden behind defaults).

## Data discipline

Site facts flow through Rhodes (via /api/portfolio-sites). Cards do NOT bypass to chat-local REBL3 / HubSpot / Wrike caches. One narrowly-scoped exception:

- Status Updates is a chat-local Convex feature — operator notes are app UX state that doesn't belong in Rhodes.

The Matterport row in Fact Sheet is built directly from Rhodes' matterportModelId — no upstream API call. The AI Status Summary action consumes only Rhodes facts (server-fetched, not client-supplied) + the chat-local status updates feed. It does NOT read Wrike-projected qualityBars or any other bypass cache.

An earlier iteration of this PR pulled chat-local qualityBars into a Buildout "Operating Health" subsection as a stopgap; that violated the data discipline above and was reverted in 3e60ec3. AERIE-216 tracks ingesting Operating Health into Rhodes proper, at which point the card switches over.

## Linear follow-ups filed

- AERIE-212 — auto-provision Rhodes rows for every REBL3 site (URL coverage parity)

- AERIE-213 — Rhodes schema: add brand

- AERIE-214 — Rhodes schema: add Personnel slots beyond p1/p2 DRI

- AERIE-215 — Rhodes Aerie API: expose CO + capacity-study artefact links via listSitesForAerie

- AERIE-216 — Rhodes schema: add Greenlight + Operating Health gates

- AERIE-217 — Audit Rhodes for REBL3-derived fields not yet surfaced via listSitesForAerie

- AERIE-222 — Site Detail "Discuss in chat" handoff: chat agent has no built-in awareness of Aerie's data layer (see Known limitations below)

- AERIE-223 — Rhodes: expose REBL3 lease_url via listSitesForAerie (filed against AERIE-186 spec update)

- AERIE-224 — Lease term extraction: Lease Start / Lease End for Site Detail (filed against AERIE-186 spec update)

## Known limitations

### "Discuss in chat →" agent does not understand Aerie data-layer terminology

The chat agent has no built-in awareness of Aerie's data layer (Rhodes, ISP, Quality Bars, the chat-local Convex tables). When operators ask follow-ups that name those systems directly — e.g. *"are there any operational updates in rhodes?"* — the agent runs file searches trying to locate "Rhodes" in the codebase, then asks the user to disambiguate ("do you mean Rhombus security cameras?").

This is out of scope for this PR. The chat agent / system-prompt territory belongs to a separate area; this PR just hands off the briefing as an assistant message. Tracked in AERIE-222.

Operators who phrase follow-ups in natural language ("how's the buildout going?", "what's changed since last week?", "any recent notes on this site?") will get useful answers from the briefing context the agent already has. The clarification round only triggers when an Aerie data-layer name appears in the prompt.

A stop-gap that injected an inline data-layer glossary into the seed message was tried and reverted — it cluttered the operator-facing briefing without solving the underlying agent gap.

### Lease & Purchase card has placeholder rows for upstream-blocked data

Per Benji's AERIE-186 spec update, the Lease & Purchase card surfaces Link / Start / End. Today:

- Lease document link — exists in REBL3 (details.lease_url on the system: "lease" task) but isn't exposed via Rhodes' listSitesForAerie. Renders "Not yet in Rhodes" placeholder. AERIE-223 tracks the upstream merger extension; once it lands, the row is a one-line client-side change to wire to a real <ExternalLink>.

- Lease start / end — don't exist in any upstream Aerie consumes today. Renders "Not yet in Rhodes" placeholders. AERIE-224 tracks the upstream decision (manual entry vs LLM PDF extraction vs hybrid).

Empty fields are intentional and visible — they motivate the upstream fixes rather than getting hidden behind defaults.

### Matterport walkthrough links not yet visible on any site

The Matterport row renders View walkthrough only when matterportModelId is populated on the Rhodes record. Live audit 2026-04-24 confirmed coverage at 0/125 sites. Every site shows a muted dash today.

The new ISP→Rhodes discovery orchestrator in this PR (sync/src/upstream/rhodes/matterport-from-isp-sync.ts) backfills this from existing ISP analyses on a daily cadence. Coverage will rise to wherever ISP has analyses keyed to Wrike folder ids (a substantial portion of the portfolio) within ~24 hours of merge + analytics-worker deploy.

To verify the link rendering before merge, manually populate matterportModelId on a single Rhodes site row from the Convex dashboard; the Fact Sheet row on that site will switch from a muted dash to View walkthrough. Not blocking.

## Out of scope (deferred follow-ups)

- Pillar cards beyond Buildout (Finance, Operating Health) — whiteboard explicitly marks future

- AI-generated status updates (manual entries only in v1)

- Workspace / role / site-membership ACL on siteStatusUpdates (M7) — fine for current all-internal-staff phase

- Per-card error boundary (M10) — single shared error boundary handles everything today; nice-to-have

- Single-site portfolio API route (M6) — page fetches the full list and finds by slug; cheap at 125-site scale

- Tests for the "use node" siteSummary action itself — convex-test doesn't cleanly handle "use node" actions and the existing insight actions (expenseInsights, admissionsInsights) ship without action tests; siteSummaryStorage internals + siteVisits + siteStatusUpdates ARE all tested

- Chat agent system-prompt extension for Aerie data-layer awareness — see "Known limitations"; tracked in AERIE-222

- Lease document link wiring — needs AERIE-223 upstream first

- Lease start/end fields — needs AERIE-224 product decision first

## Internal review pass — addressed

Internal deep review identified 6 highs + 12 mediums + 7 lows + 3 nits. Addressed:

- H1 — sync worker test failures: 3 tests had timeouts / assertion failures. Threaded _refreshMatterportFromIspFn: noopRefreshMatterportFromIsp through the missing configs. Note: those 3 tests fail on origin/main too with the same symptoms (verified by stashing this PR's changes and running on stock main); the failures pre-date this PR.

- H2 — prompt injection: untrusted-data delimiters around operator notes + system instruction telling the model to treat delimited content as data only; closing-delimiter sequences are escaped to prevent envelope breakouts.

- H3 — client-controlled prompt facts: action signature reduced to { siteSlug, forceRefresh? }; canonical Rhodes facts are server-fetched per call.

- H4 — matterport conflict guard: detect duplicate wrikeFolderId writes within a batch (lex-first wins, conflictsSkipped counted) AND log + count when ISP overrides an existing Rhodes value (overrodeExisting). Two new tests cover both cases.

- H5 — scope creep undocumented: this description.

- H6 — cache fingerprint: siteSummaries.fingerprint field added; cache hit requires both TTL AND fingerprint match. New status update / Rhodes change → invalidates without waiting for TTL.

- M1 — layout comment fixed

- M2 — data-discipline docstring rewritten to enumerate the narrow exceptions

- M3 — per-(user, slug) 60s cooldown on forceRefresh

- M4 / M5 — listForSite and getLastVisit now trim slug to mirror writers

- M7 — ACL TODO comment added on listForSite

- M9 — edit composer reuses the shared <CharCounter> component

- M11 — subset-compat.test.ts extended with wrike_site.wrike_id + wrike_site.title field-path checks

- M12 — siteVisits + siteSummaryStorage now have unit-test coverage (19 new tests)

- L1 — site name <h1> truncates with min-w-0

- L4 — Buildout "Project started" semantics: uses loiSignedDate ?? actualOpenDate

- L6 — last-write-wins comment on siteStatusUpdates.update

- L7 — RagDot a11y: role="img" + aria-label

- N1 — Personnel uses shared MutedDash

- N2 — expenseInsights model bump split into a dedicated commit and called out above

- N3 — isp/fetcher.ts header documents both consumers (Rhodes merger + matterport-from-isp orchestrator)

Plus a regression caught and fixed during local testing:

- The siteSummary action returned null for three different failure modes (auth-missing, Rhodes-not-configured, slug-not-found), and the UI collapsed all three into "Sign in required to generate summary." Refactored to a discriminated union so the UI can render an actionable message for each — most importantly "Site facts unavailable: Rhodes is not configured on this Convex deployment. Set RHODES_CONVEX_SITE_URL and RHODES_API_KEY (npx convex env set …)." for the operator-fixable case.
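A minimal shape for that discriminated union — variant names below are assumptions inferred from the failure modes listed, not the exact production names:

```typescript
// Sketch of the discriminated result type: each failure mode is its own
// variant, so the UI can render an actionable message per case instead of
// collapsing everything into one misleading string.
type SiteSummaryResult =
  | { kind: "ok"; summary: string }
  | { kind: "auth-missing" }
  | { kind: "rhodes-not-configured" }
  | { kind: "slug-not-found"; siteSlug: string };

function renderMessage(result: SiteSummaryResult): string {
  switch (result.kind) {
    case "ok":
      return result.summary;
    case "auth-missing":
      return "Sign in required to generate summary.";
    case "rhodes-not-configured":
      return "Site facts unavailable: Rhodes is not configured on this Convex deployment.";
    case "slug-not-found":
      return `No Rhodes record for "${result.siteSlug}".`;
  }
}
```

The exhaustive switch also means the compiler flags any new failure variant that the UI forgets to handle.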

Deferred (low-impact / out of scope today):

- M6 — single-site API route (perf optimisation)

- M8 — .collect() boundary on QB query — moot now that operatingHealth.ts is deleted; siteSummaryStorage already uses .take

- M10 — per-card error boundary

- L2 — floorplan failure differentiation — moot now that IspFloorplanLink is removed in favour of the direct Matterport viewer link

- L3 — collapse-state hydration cosmetic

- L5 — siteSummary action returns null vs throws — defensible; matches Convex query convention

## Testing

- pnpm tsc --noEmit clean

- Chat suite: 27 portfolio + 16 status updates + 19 visits/storage = 62/62 of the new tests; full chat suite 2689/2691 pass (2 pre-existing Windows-path failures in lib/__tests__/agent.test.ts)

- Sync: matterport-from-isp 13/13 + cadence 11/11. Worker tests 3/9 pre-existing failures (same symptom on origin/main per H1 note above)

- npx biome check clean on all changed files

## Test plan — verifiable today

Author has confirmed everything in this list against live prod Rhodes (125 sites, 2026-04-24).

Setup (one-time):

- [x] Chat container env: RHODES_CONVEX_SITE_URL=https://valiant-marlin-770.convex.site + RHODES_API_KEY=<key> (already in docker-compose.yml)

- [x] Convex env (for AI summary action): npx convex env set ANTHROPIC_API_KEY <value> + npx convex env set RHODES_CONVEX_SITE_URL https://valiant-marlin-770.convex.site + npx convex env set RHODES_API_KEY <value>

Navigation + routing:

- [x] Click a card on Portfolio board view → routes to /dashboards/portfolio/{slug}, breadcrumb says "Dashboards > Portfolio > {school name}"

- [x] Click a row in Portfolio list view → same destination

- [x] Cmd/ctrl-click opens detail in a new tab; browser back returns to Portfolio with view mode preserved; deep-link by URL works

- [x] Drag-pan on board background still pans (no accidental nav)

- [x] Unknown slug renders an explicit 404-style "Site not found" state with a back-link, not a hard error

Fact Sheet (Rhodes-sourced fields):

- [x] On any operating site (e.g. Spyglass), Capacity / Tuition / Zoning / Year opened / Buildout DRI render real values; Pipeline Statuses shows real M1–M6 completion dates

- [x] On an early-pipeline site, fields not yet captured upstream show a muted dash ("empty fields are the feature"); Brand row shows "Not yet in Rhodes" tag (Rhodes schema gap, AERIE-213)

- [x] Sq. Ft. row currently shows a muted dash on every site (Rhodes netFloorArea at 0/125 today; will populate once AERIE-XXX surfaces ISP building.total_main_sqft to Rhodes)

- [x] Matterport row currently shows a muted dash on every site (Rhodes matterportModelId at 0/125 today; new ISP→Rhodes orchestrator on this PR backfills it daily post-merge — see Known limitations)

Lease & Purchase card:

- [x] LOI signed + Lease signed render real dates on the ~40% / ~23% of sites that have them in Rhodes

- [x] On sites that don't, both rows show a muted dash

- [x] Lease document / Lease start / Lease end render a muted dash with "Not yet in Rhodes" tag on every site (AERIE-223 + AERIE-224 track the upstream work)

AI Status Summary card:

- [x] Renders Claude-generated prose with "Generated X ago · Claude" footer on first load

- [x] Refresh button regenerates; second click within 60s silently demotes to a cached read (visible in Convex logs as [siteSummary] forceRefresh demoted ...)

- [x] "Discuss in chat →" creates a conversation in /c/[id] with the summary as the first assistant message

- [x] When Convex RHODES_API_KEY is unset, the card shows the actionable config message ("Site facts unavailable: Rhodes is not configured ..."), NOT the misleading "Sign in required" the earlier draft regressed to

- [x] When ANTHROPIC_API_KEY is unset, the card falls back to a deterministic local summary with a yellow "Fallback" badge (rather than erroring)

Status Updates card:

- [x] Post a new update; it appears at the top of the feed with author + relative timestamp

- [x] Edit own update — char counter visible during edit; over-limit pastes disable Save

- [x] Delete own update — gone after refresh

- [x] Cannot edit / delete updates authored by another user (Edit / Delete buttons hidden, server rejects if attempted directly)

Layout + ergonomics:

- [x] Cards collapse + reopen via the chevron; state persists across page reloads (per-card-id localStorage)

- [x] lg+: main column and right rail scroll independently; mobile: single-column page scroll

- [x] Long site names truncate with ellipsis instead of pushing the slug off-screen

## Test plan — blocked on upstream coverage (not testable in this PR)

These will start passing automatically as the corresponding upstream work lands. Not blocking merge.

- [ ] Matterport row shows a View walkthrough link clicking through to https://my.matterport.com/show/?m=<modelId> — blocked until matterportModelId coverage > 0% (will rise via this PR's ISP→Rhodes orchestrator on its first daily run post-deploy, OR via manual Convex-dashboard population for spot-check). To verify rendering correctness today: pick any site and manually set matterportModelId on its Rhodes row → page swaps for the link.

- [ ] Lease document row in Lease & Purchase shows a real Drive link — blocked on AERIE-223 (Rhodes merger extension to expose REBL3's details.lease_url via listSitesForAerie)

- [ ] Lease start / Lease end rows show real dates — blocked on AERIE-224 (product decision: manual entry vs LLM extraction vs hybrid)

- [ ] Sq. Ft. row in Fact Sheet shows real square footage — blocked on Rhodes surfacing ISP-derived building.total_main_sqft (likely a sub-task of AERIE-217 audit)

#129 — feat(portfolio): add list view and stage-grouped board @benji-bizzell  no labels

## Summary

- Default to bucket-categorised list view (Open / Aug 2026 / Jan 2027 / Later / Unknown) with per-section visibility filters persisted to localStorage

- Reorganise the board into three stage groups (Diligence / Buildout / Operating) with navigable stage headers; diligence stays disabled until its own dashboard lands

- Carry marketingName through the Rhodes contract for search and as an enrollment-match fallback
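Bucket assignment from an open flag plus a target-open date can be sketched as below; the cutoff dates are assumptions inferred from the bucket labels, not the production logic:

```typescript
// Sketch of the list-view bucket assignment. The cutoff dates are
// assumptions inferred from the bucket names (Open / Aug 2026 / Jan 2027 /
// Later / Unknown), not the real implementation.
type Bucket = "Open" | "Aug 2026" | "Jan 2027" | "Later" | "Unknown";

function assignBucket(isOpen: boolean, targetOpenDate: string | null): Bucket {
  if (isOpen) return "Open";
  if (!targetOpenDate) return "Unknown";
  if (targetOpenDate <= "2026-08-31") return "Aug 2026"; // ISO strings sort lexically
  if (targetOpenDate <= "2027-01-31") return "Jan 2027";
  return "Later";
}
```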

## Why

AERIE-220. The board alone made it hard to scan sites by target-open window; the list is now the primary view for portfolio reviews, and the board's stage headers let PMs jump straight into the buildout and operating dashboards.

## Outstanding / follow-up

- Schema dependency — board-view school mapping. Closing out AERIE-220 still requires the upstream schema changes that expose milestone references, so the board can map sites to milestones directly instead of inferring phase from RhodesPortfolioSite.milestones. Keep this PR as draft until that lands; once the contract carries the milestone keys, revisit RHODES_PHASE_TO_MILESTONE in portfolio-view.tsx and drop the phase-derivation fallback.

## Test plan

- [x] pnpm vitest run on portfolio + shell suites (44 tests)

- [x] pnpm tsc --noEmit

- [x] pnpm lint (no new findings)

- [ ] Smoke: toggle list/board in /dashboards, confirm list-section visibility persists across reloads

- [ ] Smoke: click Buildout/Operating stage headers on the board, confirm sub-tab navigation

🤖 Generated with a very good bot

#131 — fix(portfolio): drive opening date from Ready to Open milestone @benji-bizzell  no labels

## Summary

- Add deriveOpeningDate(milestones) as the single source for the dashboard's Opening Date (Ready to Open completedDate → dueDate → null), with YYYY-MM-DD normalisation

- Route the list column, board card date line, target-date sort, and bucket assignment through the new helper

- Rename the list column header to "Milestone / Ready to Open Date" to match
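The priority chain and normalisation can be sketched as follows; field names follow the summary, and the date handling is simplified relative to the real helper:

```typescript
// Sketch of deriveOpeningDate: Ready to Open completedDate → dueDate → null,
// normalised to YYYY-MM-DD across the mixed shapes Rhodes stores.
interface Milestone {
  completedDate?: string | null;
  dueDate?: string | null;
}

function normalizeDate(raw: string): string {
  // Rhodes stores both "2025-08-01" and "2026-02-18T16:37:53Z";
  // truncating to the first ten characters yields YYYY-MM-DD in both cases.
  return raw.slice(0, 10);
}

function deriveOpeningDate(readyToOpen: Milestone | undefined): string | null {
  const raw = readyToOpen?.completedDate ?? readyToOpen?.dueDate ?? null;
  return raw ? normalizeDate(raw) : null;
}
```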

## Why

The Portfolio dashboard was sourcing Opening Date from Rhodes' separate projectedOpenDate / actualOpenDate fields rather than the Ready to Open milestone — drifting from the intended methodology and surfacing inconsistent date shapes (mix of 2025-08-01 and 2026-02-18T16:37:53Z) because Rhodes stores them heterogeneously. One helper, one priority chain, one normalised shape across every surface.

## Test plan

- [x] Unit tests cover the priority chain, the YYYY-MM-DD normalisation across mixed Rhodes shapes, and the bucket cohort logic

- [x] npm run typecheck and full npm test (2,730 passing) green

- [ ] Reload the Portfolio dashboard locally and confirm every Opening Date column entry reads YYYY-MM-DD and matches the Ready to Open milestone for that site

🤖 Generated with a very good bot

#132 — feat(portfolio): align due-diligence with Rhodes (read + writeback) @benji-bizzell  no labels

## Summary

- Add Due Diligence card to the Site Detail page, backed by Rhodes' dueDiligence struct

- Drive portfolio opening date from the Wrike Ready-to-Open milestone (was previously DD-derived, brittle)

- Wire REBL3 → Rhodes DD writeback in the analytics worker — dormant until Rhodes ships /sync/aerie/setDueDiligence

## Why

Rhodes is the single source of truth for the structured DD struct, but until now the worker had no path to populate it from REBL3 (which is where the DD work actually happens). Aerie's portfolio reads from Rhodes already; closing the writeback loop means one read surface for the dashboard, no schema drift, and no manual Schema-UI re-entry.

The opening-date change replaces an indirect DD-derived value with the canonical Wrike milestone operators drive in real time — fixes the "shows blank when DD is set but milestone hasn't fired" failure mode.

## Test plan

- [x] pnpm exec vitest run in sync/ — 951 passing (one pre-existing flake on seed-checks-at-configured-interval confirmed unrelated by stashing)

- [x] DD card tests cover: status/recommendation rendering, ISO + non-ISO date passthrough ("Aug 2026" stays "Aug 2026", not invented "Aug 1, 2026"), missing fields, partial structs

- [x] DD writeback worker tests cover: extraction, all outcome counters (updated/no_change/not_found), 404 route dormancy with single-warning-per-cycle, malformed REBL3 JSON, parallel metadata + DD writes per site

- [ ] Post-merge: confirm counts.dd.routeUnavailable ticks up per cycle in worker logs (route not yet shipped on Rhodes — see Rhodes/AERIE_DD_WRITEBACK_HANDOFF.md); should drop to 0 once the matching Rhodes PR lands

🤖 Generated with a very good bot

#2654 — Budget Bot 4.0: Review Agent checks (C2.1, C2.6), /review endpoint, and scorecard panel @marcusdAIy  no labels

## Screenshots

<img width="1694" height="936" alt="image" src="https://github.com/user-attachments/assets/b0de902a-d3f1-42b5-b930-a73e15c0435e" />

<img width="1660" height="915" alt="image" src="https://github.com/user-attachments/assets/34b2767e-3e94-4e1a-a44b-a952a54c53d7" />

---

## Summary

Demo Sprint DS4-DS7: ships the first two Review Agent checks, the /review endpoint, and the editor-side scorecard panel that renders the findings.

### Backend (DS4-DS6)

- CanonicalBudgetPlan.targets — new PlanTargets wrapper around the parsed BUDGET_BOT_TARGETS sheet, so checks read margin/ARR/retention targets from the canonical layer instead of poking DataPackage directly. has_targets joins the completeness flags.

- budget_bot/board_doc/review_checks/ — new package:

- _helpers.py: shared P&L row lookup + safe percent math (now guards denominator <= 0)

- margin_target.py (C2.1): planned vs approved EBITDA margin (pass / warning ≤5pp / critical >5pp)

- margin_trajectory.py (C2.6): Q-over-Q gross margin direction (pass if flat/improving, i.e. ≥ -0.5pp; warning for drops of up to 2pp; critical for drops greater than 2pp)

- __init__.py: CheckSpec registry + run_all_checks(plan, spec) with per-check exception isolation so a regression in one check can't take the whole scorecard down

- POST /board-doc/wizard/{id}/review — builds the canonical plan from session state, runs all registered checks, returns a fully-typed ReviewResponse { findings, skipped_checks, errored_checks, completeness, data_fetch_status, ran_at }. 409 if no spec yet, 502 only on orchestrator-level failure (per-source failures surface in data_fetch_status.failed_sources).

- Per-session asyncio.Lock so two parallel /review calls (multi-tab, double-click) serialise rather than racing on session.data_package.

- DataPackage negative cache — failed fetches go to a new failed_keys set rather than being silently stored as None. Subsequent /review clicks short-circuit known-failed sources by default; callers that want to retry pass retry_failed=True. Avoids burning 20-40s per click while upstream is down.

- scripts/run_review_checks.py — CLI that runs the full path against real BU data without booting the server. Same path the endpoint uses.
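The per-check exception isolation in run_all_checks (Python in the PR) boils down to the pattern below, sketched here in TypeScript for consistency with the other examples in this document:

```typescript
// Sketch of per-check exception isolation: a crash in one check is captured
// into erroredChecks instead of taking the whole scorecard down.
interface Finding { checkId: string; severity: string; message: string }
type Check = { id: string; run: () => Finding[] };

function runAllChecks(checks: Check[]): { findings: Finding[]; erroredChecks: string[] } {
  const findings: Finding[] = [];
  const erroredChecks: string[] = [];
  for (const check of checks) {
    try {
      findings.push(...check.run());
    } catch {
      erroredChecks.push(check.id); // isolate the failure, keep running the rest
    }
  }
  return { findings, erroredChecks };
}
```

Under this contract an empty list from a check means "skipped", at least one pass finding means "ran and passed", and an exception lands in erroredChecks while the request still succeeds.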

### Frontend (DS7)

- boardDocApi.ts — new types (ReviewFinding, ReviewResponse, ReviewSeverity, ReviewFindingStatus, ReviewCompleteness, ReviewDataFetchStatus), a runReview() API call, and a stable compareReviewFindings comparator (sort key: severity → check_area → check_id → finding_id).

- hooks/useReviewAgent.ts — owns the lifecycle state machine (idle → running → ready | error) with cross-session-safe concurrency:

- In-flight ref keyed by sessionId so a Run review click on session B mid-flight starts a fresh run instead of receiving session A's promise.

- Monotonic runIdRef ensures a stale resolve (slow A response landing after the user switched to B) is dropped on the floor instead of overwriting B's panel.

- Companion useReviewAgentResetOnSession hook for the page-level reset on sessionId change.

- Error branch wipes stale findings/completeness so the error state can never render mixed content.

- components/FindingCard.tsx — severity-tinted left border, expandable body showing why / preferred action / options / a humanised supporting-data table. Refactored to a single non-button container with sibling chevron + section buttons — no more invalid nested-button HTML, predictable a11y / keyboard tab order. Multi-line agent prose renders with white-space: pre-line.

- components/ReviewPanel.tsx — 360px right rail when open, 36px ribbon when collapsed (count badge now includes info findings, tinted by worst severity present so an info-only run still surfaces a signal). Header has Run review / Re-run + collapse. Body renders four states (idle / running / error / ready); ready state shows a summary header, findings grouped by check_area, an expandable "N passed" chip, and a static skipped-checks chip that surfaces missing_sources from the completeness payload.

- DocumentEditorPage.tsx — adds a Review toggle button in the top header (mirrors the panel's collapse), wraps editor + panel in a flex row so they share horizontal space without the editor losing its scroll behaviour. Wires useReviewAgentResetOnSession.
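A sketch of the compareReviewFindings sort key (severity → check_area → check_id → finding_id); the severity ranking is an assumption based on the test plan's stated order, not the exact production table:

```typescript
// Sketch of the stable findings comparator. Severity order is assumed to be
// critical → warning → info → pass; the finding_id tie-break keeps the sort
// deterministic even for same-area, same-severity findings.
const SEVERITY_RANK: Record<string, number> = { critical: 0, warning: 1, info: 2, pass: 3 };

interface ReviewFindingLite {
  severity: string;
  check_area: string;
  check_id: string;
  finding_id: string;
}

function compareReviewFindings(a: ReviewFindingLite, b: ReviewFindingLite): number {
  return (
    (SEVERITY_RANK[a.severity] ?? 99) - (SEVERITY_RANK[b.severity] ?? 99) ||
    a.check_area.localeCompare(b.check_area) ||
    a.check_id.localeCompare(b.check_id) ||
    a.finding_id.localeCompare(b.finding_id)
  );
}
```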

### Bonus bug fix (commits 2 + 3)

Live verification against Skyvera Q2 2026 surfaced that the "Data for Budget Bot" sheet has inverted column names for the EBITDA fields:

- *_ebitda_target → actually contains the margin %

- *_ebitda_margin_target → actually contains the dollar amount

Before the fix, C2.1 reported nonsense: *"planned EBITDA margin (60.3%) is 7991771.7pp below the approved target (7991832.0%)"*. Fixed in both canonical_plan._TARGET_FIELD_MAP and section_generators.get_ground_truth_metrics (the latter was also feeding the LLM swapped labels for every doc generation — this PR corrects that too). Both spots carry cross-referencing comments so when the GSheet column headers eventually get renamed, the inversion can be reverted cleanly. Captured as a tech-debt item (C1.7) in the backlog.

### Review feedback addressed

This PR went through internal review and addresses every flagged item except one stylistic disagreement (commits 6-10):

| Severity | Items addressed |
|---|---|
| Critical | C1: sessionId-keyed in-flight ref + stale-resolve guard. C2: reset on sessionId change via dedicated companion hook. |
| High | H3: structured data_fetch_status field distinguishes "data legitimately missing" from "upstream broke". H4: DataPackage.failed_keys negative cache. H5: compareReviewFindings finding_id tie-break. H6: FindingCard refactored to fix invalid nested <button> HTML. H7: per-check exception isolation in run_all_checks + new errored_checks field. |
| Medium | M8: per-session asyncio.Lock on /review. M9: documented as design choice (LLM prompt path wants string "60%", deterministic path wants float 60.0 — different consumers, different shapes; clarified with extensive docstring + regression test). M10: useReviewAgent.spec.ts covering C1/C2/sort. M11: stable-IDs caveat documented in hook docstring (blocks shipping a dismiss UI until backend produces stable IDs). M12: safe_pct guards denominator <= 0. |
| Low | L13: typed Pydantic ReviewResponse (no more list[Any] / dict[str, Any]). L14: cross-user 403/404 test. L15: error branch clears stale findings. L16: collapsed-panel badge now counts info findings, tints by worst severity. L17: white-space: pre-line on agent-emitted prose. L18: filed as backlog item C1.8 (pre-existing planner-module mislabel). |
| Nit | N19: hook docstring rewritten around cross-session guarantees. N20: implemented as a one-directional alias-subset invariant TEST rather than a "must stay in sync" comment (catches regression in CI; comments don't). N21: --year sanity-bounded (2020-2030). N22: SkippedChip docstring captures the design intent so a future refactor doesn't strip the missing_sources line. |

One disagreement: the reviewer's framing of H4 suggested every click would burn 20-40s while upstream is down; in practice the orchestrator's failure path short-circuits faster, but the negative-cache fix still lands as recommended, if only for its clearer semantics.

### Live verification

Running scripts/run_review_checks.py --bu skyvera -q 2 -y 2026 against real data produces:

[C2.1] Margin target hit/miss          PASS

Q2'26 planned EBITDA margin (60.3%) meets the approved target (60.0%).

[C2.6] Q-over-Q gross margin trajectory WARNING

Gross margin degrading 2.0pp Q-over-Q (Q1'26: 80.0% → Q2'26: 78.0%).

One pass + one warning — exactly the kind of mixed scorecard that makes the demo feel real.

### Backlog impact

- DS4, DS5, DS6, DS7 → done

- C2.1, C2.6 → done (Phase C check suite, 2 of 17)

- C4.1, C4.6 → done (scorecard panel + re-run button); C4.2 partial (FindingCard ships in DS7, dismiss/addressed flow pending stable finding IDs per M11)

- C0.1, C0.4 → flipped to done (already shipped in #2634; backlog status was stale)

- C1.8 → new tech-debt item filed for the budget_translator.py mislabel surfaced during this review (pre-existing, separate consumer)

## Test plan

### Backend

- [x] New unit tests pass: 38 check + 8 endpoint + 7 target-extraction + 1 regression with real Skyvera payload, plus new tests for per-check isolation, negative cache, retry_failed, data_fetch_status, cross-user 403, safe_pct negative-denominator, and the alias subset invariant.

- [x] Live CLI verification against Skyvera Q2 2026 produces sensible findings.

- [x] Full tests/board_doc/ suite shows no new regressions (5 pre-existing LLM-mock failures unchanged).

- [ ] Reviewer spot-check: column-name inversion comments in both canonical_plan._TARGET_FIELD_MAP and section_generators.get_ground_truth_metrics agree.

- [ ] Reviewer spot-check: run_all_checks contract — empty list from a check means "skipped", at least one severity="pass" finding means "ran and passed", any exception lands in errored_checks and the request still returns 200.

### Frontend

- [x] pnpm lint --max-warnings 0 clean on all touched files.

- [x] pnpm tsc --noEmit clean.

- [x] useReviewAgent unit tests pass (7/7) — covers same-session de-dupe, different-session start-fresh, stale-resolve drop, stale-error drop, error clears findings, reset cancels in-flight, useReviewAgentResetOnSession wiring.

- [x] Open a Skyvera Q2 session in the editor → "Review" button visible in header, panel open by default on the right.

- [x] Click "Run review" → spinner appears, panel transitions to ready state in 20-40s.

- [x] Findings render grouped by check_area; severity-coded left borders; sort order critical → warning → info → pass; same-area same-severity ties broken stably by check_id then finding_id.

- [x] Click a finding → expands to show why / suggested action / options / "Numbers" table; click again to collapse. Chevron toggle button is a11y-correct (no nested buttons; section pill is independently focusable).

- [x] "1 passed" chip is visible and expands to show the C2.1 pass finding.

- [x] Click panel header chevron (or "Review" header button) → collapses to 36px ribbon with severity count badge; click again restores.

- [x] Force an error path → panel shows error state with "Try again" button; stale findings are wiped (no mixed "old findings + new error" content).

- [x] Re-run button on the panel header → triggers a second run, replacing previous findings.

- [x] Window narrowing: panel keeps fixed width, editor narrows but stays usable down to ~600px viewport.
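
The findings ordering exercised above (critical → warning → info → pass, ties broken stably by check_id then finding_id) can be sketched as a sort key. The finding shape is illustrative; the real comparator lives in the panel's TypeScript:

```python
_SEVERITY_RANK = {"critical": 0, "warning": 1, "info": 2, "pass": 3}

def finding_sort_key(finding: dict):
    """Stable ordering: severity rank, then check_id, then finding_id."""
    return (
        _SEVERITY_RANK[finding["severity"]],
        finding["check_id"],
        finding["finding_id"],
    )
```

Usage: `findings.sort(key=finding_sort_key)` on a list already grouped by check_area.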

#2658 — fix(mfr-memo): align Financial Highlights bullet 5 with reference narrative @eric-tril  no labels

### Summary

Financial Highlights bullet 5 (GAAP net income) in the group MFR memo was drifting from the reference narrative in wording, period labels, and data source. This change rewires the bullet so both the LLM prompt and the template fallback emit byte-identical output matching the reference, sources passive-investments gain/loss from the Note 8 aggregator (instead of the IS "Other income" line), and uses quarter-aware period labels at quarter-ends.

### Changes

- Add _q_label_current / _q_label_prior helpers that emit Q{N} {year} at quarter-end months and fall back to {Month} {year} mid-quarter.

- Pre-compute bullet 5 sentence-4 (deferred-tax word/amount) and sentence-5 (operating profit/loss word/amount) tokens in _compute_bullet_variances, with [TBD] sentinels when the DT fetch failed.

- Extract _build_b5_required_phrases so the validator enforces the exact NI, PI, DT, and operating phrases and skips [TBD] DT tokens when data is unavailable.

- Rewrite the LLM prompt for bullet 5 to a single five-sentence template using pre-computed tokens (removes the old free-form instructions and b5_ni_dir).

- Rewrite the template-fallback bullet 5 in _build_template_defaults to reuse the same tokens, producing byte-identical wording to the LLM path.

- Source pi_gain_cur / pi_gain_pri from fetch_other_expense_breakdown (Note 8's passive-investments row, accounts 71201/71202/71250/71251/71254/71501/71502), flipping sign and scaling from thousands to dollars; fall back to the IS "Other income" line on failure and expose _pi_failed for provenance.

- Extend bullet 4 provenance to include a Passive Investments source and SQL query when Note 8 succeeded.

- Remove the now-unused _deferred_tax_sentence helper.

- Update tests: mock fetch_other_expense_breakdown, switch assertions from $X.XM to $X.X million, and add test_bullet_5_reference_narrative_for_q1_2026 pinning the full sentence output for a quarter-end period.
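
The quarter-aware labeling in the first bullet can be sketched as below; the real _q_label_current / _q_label_prior helpers live in the memo_data module and this is only an illustration of the stated rule:

```python
import datetime

# Quarter-end months mapped to their quarter number.
_QUARTER_END = {3: 1, 6: 2, 9: 3, 12: 4}

def q_label(period_end: datetime.date) -> str:
    """Emit 'Q{N} {year}' at quarter-end months, else '{Month} {year}'."""
    q = _QUARTER_END.get(period_end.month)
    if q is not None:
        return f"Q{q} {period_end.year}"
    return period_end.strftime("%B %Y")
```

So a March 2026 period labels as "Q1 2026" while a February 2026 period falls back to "February 2026", matching the quarter-end and mid-quarter test cases above.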

### Testing

- cd klair-api && pytest tests/reports_service/test_group_memo_defaults.py

- Confirm test_bullet_5_reference_narrative_for_q1_2026, test_template_fallback_output, test_template_negative_value_branches, test_deferred_tax_fetch_failure_renders_tbd, and test_build_llm_prompt_embeds_precomputed_variances all pass.

- Run uv run ruff format and uv run ruff check on [group_defaults.py](klair-api/services/docx_reports/memo_data/group_defaults.py) and [test_group_memo_defaults.py](klair-api/tests/reports_service/test_group_memo_defaults.py).

- Generate a group MFR memo for a quarter-end period (e.g. March 2026) and verify bullet 5 uses Q1 2026 / Q1 2025, says "Operating profit/loss", and pulls PI values consistent with Note 8.

- Generate a memo for a mid-quarter period (e.g. February 2026) and verify labels fall back to month-name form without a QTD suffix.

http://localhost:3001/monthly-financial-reporting

Section in blue: if testing, only run Generate AI for January 2026.

<img width="1894" height="831" alt="image" src="https://github.com/user-attachments/assets/b27477d7-0685-42cf-b468-94dc2a204a08" />

#2663 — feat(ai-spend): super-admin Token Pricing view with rationale notes (KLAIR-2582) @ashwanth1109  no labels

## Demo

<img width="2624" height="1636" alt="image" src="https://github.com/user-attachments/assets/11a07e6c-5a0a-433e-9581-f99ac1a22512" />

## Summary

Adds a super-admin-only Token Pricing view under AI Spend & Adoption that surfaces core_finance.ai_spend_token_pricing — the per-model rates the AI spend ingest Lambdas (Claude, OpenAI, Azure) use to enrich raw token events with USD cost. Previously hand-edited via SQL workbench with no in-app visibility and no rationale trail.

Motivated by the [KLAIR-2580](https://linear.app/builder-team/issue/KLAIR-2580) drift incident ($86k gross / $8k net misbilling across 10 models), where operators couldn't see the current pricing state or why individual rows existed.

## What's in here

Backend

- GET /api/ai-costs/token-pricing — super-admin-gated, returns every row with a server-derived provider field (claude | openai | azure | other) based on model_pattern prefix.
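
A minimal sketch of the server-derived provider field. The prefix rules below are assumptions for illustration; the endpoint's actual model_pattern matching may differ:

```python
def derive_provider(model_pattern: str) -> str:
    """Map a pricing row's model_pattern to claude | openai | azure | other."""
    p = model_pattern.lower()
    if p.startswith("claude"):
        return "claude"
    if p.startswith(("gpt", "o1", "o3")):  # assumed OpenAI prefixes
        return "openai"
    if p.startswith("azure"):
        return "azure"
    return "other"
```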

Frontend

- AICostsShell header now renders two super-admin buttons, with the new Token Pricing button to the left of Raw Data Reports.

- New TokenPricingPage with breadcrumb, source strip (row count + currently-active count + Refresh + Export CSV), provider filter chips, and a UnifiedTable with sorting/search/column-selector.

- Sub-cent prices render at 6 decimals; $≥0.01 at 2.

- Currently-active rows (effective_to IS NULL) get a left accent border in the first column.

- URLs in notes auto-link (target="_blank" rel="noopener noreferrer").

- Provider chips for providers with 0 rows are hidden; selected chip uses text-klair-accent-on-accent to match the period-preset styling.

- 17 Vitest tests covering the CSV builder, price formatter, date formatter, and URL auto-linker.
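
The tiered price precision ("sub-cent at 6 decimals, $≥0.01 at 2") reduces to a one-branch formatter. The real implementation is TypeScript inside TokenPricingPage; this is a language-agnostic sketch of the rule:

```python
def format_price(usd: float) -> str:
    """Render sub-cent prices at 6 decimals, everything else at 2."""
    decimals = 6 if usd < 0.01 else 2
    return f"${usd:.{decimals}f}"
```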

Schema

- scripts/sql/create_ai_spend_token_pricing.sql — authoritative CREATE TABLE snapshot including context_window, speed, and notes (columns were added in-place via SQL workbench at various points; this file now captures the current state).

## Screenshots

Before → after chip styling: selected "All" chip previously rendered white text on accent (low contrast); now uses klair-accent-on-accent (dark text on accent), matching the 12M period preset.

## Reviewer notes

- klair-api/uv.lock was intentionally not touched in this PR — a local uv sync had regenerated it with a v3 lockfile revision (6.8k-line diff) as an unrelated side effect of [apps#2659](https://github.com/AI-Builder-Team/Klair/pull/2659)'s new uv.toml. Reverted to avoid lockfile churn.

- The notes column ALTER has already been applied to production; the DDL file is purely a current-state snapshot (Redshift has no migration runner in this repo).

- Data flow mirrors the Azure cost reports blueprint from [KLAIR-2579](https://linear.app/builder-team/issue/KLAIR-2579) — same apiGet + AbortController hook pattern, same fetch_with_params_strict + NaN→None service pattern.

## Test plan

- [ ] Navigate to AI Spend & Adoption as a super-admin; verify Token Pricing button appears left of Raw Data Reports.

- [ ] Click Token Pricing; verify breadcrumb, source strip, row count, and currently-active count render.

- [ ] Verify active rows have a left accent border in the Provider column.

- [ ] Cycle through provider chips (All / Claude / OpenAI / Other); verify Azure chip is hidden (no Azure rows in prod today).

- [ ] Verify selected chip has dark text on accent green (matching 12M in the left filter sidebar).

- [ ] Click Export CSV; open the file and verify 6-decimal price precision and URL-preserved notes.

- [ ] Click a URL inside a notes cell; verify it opens in a new tab.

- [ ] Load as a non-super-admin; verify the Token Pricing button is not visible and GET /api/ai-costs/token-pricing returns 403.

- [ ] Confirm pnpm tsc --noEmit, pnpm eslint ..., and pnpm vitest run src/screens/AIAdoptionV2/components/TokenPricing/ all pass (run in CI).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2664 — feat(ai-spend): integrate Azure into /summary, /time-series, /by-model (KLAIR-2583) @ashwanth1109  no labels

## Demo

<img width="2624" height="1636" alt="image" src="https://github.com/user-attachments/assets/6d98b6e2-6484-4aac-a7fd-c7eac3345ec9" />

<img width="2624" height="1636" alt="image" src="https://github.com/user-attachments/assets/65ea736a-d72d-4da8-bff0-0836903807dc" />

<img width="2624" height="1636" alt="image" src="https://github.com/user-attachments/assets/990fa081-42be-441a-8267-2142f776bd6a" />

## Summary

- Wires Azure AI spend into three aggregate endpoints powering the AI Spend & Adoption dashboard: /summary (Total AI Spend + Daily Average cards), /time-series (Spend by Provider Over Time chart), and /by-model (Spend by Model chart).

- BU attribution hard-coded to Zax via the existing shared-BU pattern (matches Cursor/GCP) — Azure has no per-row BU column.

- Adds a synthetic Azure (Other Services) row in /by-model equal to ai_spend_azure_cost_reports.cost_usd minus ai_spend_azure_token_usage.total_cost_dollars, so Document Intelligence (which only lives in cost_reports) shows up per-model and the Azure provider total stays consistent across all three surfaces.

- Frontend: adds Azure brand blue (#0078D4) + Azure display name so the provider line renders with a dedicated color in the chart legend.
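
The residual-row construction can be sketched as below, assuming per-model token costs and a cost-reports grand total as inputs (function and argument names are illustrative, not the service's actual API):

```python
def azure_by_model_rows(token_rows, cost_reports_total_usd):
    """token_rows: [(model, total_cost_dollars), ...] from the token-usage table.

    Appends a synthetic residual row so services that only appear in
    cost_reports (e.g. Document Intelligence) still surface, and the
    per-model rows sum back to the provider total.
    """
    rows = list(token_rows)
    residual = cost_reports_total_usd - sum(cost for _, cost in token_rows)
    if residual > 0:
        rows.append(("Azure (Other Services)", residual))
    return rows
```

With the April 2026 numbers from the test plan (~$21 of per-model token cost against ~$65 total), the residual row carries the remaining ~$44.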

## Scope

- Backend: [klair-api/services/ai_costs_service.py](klair-api/services/ai_costs_service.py), [klair-api/models/ai_costs_models.py](klair-api/models/ai_costs_models.py)

- Frontend: [klair-client/src/screens/AICosts/constants.ts](klair-client/src/screens/AICosts/constants.ts) (chart auto-picks up new provider key from response)

- Tests: 9 new unit tests (4 time-series, 5 by-model) + 12 existing tests updated for new Azure DataFrame mocks. All 106 pass.

- Spec: [features/ai-spend-and-adoption/azure-provider-integration/specs/01-azure-initial-integration/spec.md](features/ai-spend-and-adoption/azure-provider-integration/specs/01-azure-initial-integration/spec.md)

## Out of scope (follow-ups)

- /by-bu endpoint integration

- Azure BU overrides (per-subscription attribution)

- Per-meter breakdown within the Azure (Other Services) residual

## Test plan

- [ ] Restart backend: cd klair-api && uv run fast_endpoint.py

- [ ] Reload AI Spend & Adoption dashboard — verify Azure appears in Total AI Spend, Daily Average, Spend by Provider Over Time legend (blue line), and Spend by Model

- [ ] Check Apr 2026 period shows ~\$65 Azure spend (~\$21 across per-model rows + ~\$44 "Azure (Other Services)")

- [ ] Apply BU filter that excludes Zax → Azure drops to \$0 across all three charts

- [ ] cd klair-api && pytest tests/test_ai_costs_service.py passes

- [ ] cd klair-api && uv run ruff check services/ai_costs_service.py models/ai_costs_models.py tests/test_ai_costs_service.py

- [ ] cd klair-client && pnpm lint:pr

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2665 — feat(aws-spend-insights): migrate pipeline to canonical v2 budget sources (KLAIR-2585) @ashwanth1109  no labels

## Demo

<img width="2624" height="1636" alt="image" src="https://github.com/user-attachments/assets/3de9fcec-a2e1-45ef-a6ba-a70a11b34042" />

## Summary

- Migrates the AWS Spend Insights Lambda off legacy summary / rollup tables and onto the same canonical v2 Redshift sources already used by klair-api/services/aws_spend_service.py, so pipeline output matches /api/aws-spend/summary.

- Cleans up dead netsuite_pipeline/ references from klair-udm/template.yaml that were blocking sam build / sam sync after the pipeline code was removed in commits 65cc21178 + f738da203.

## Why

The insights Lambda and the AWS Spend Dashboard API were reading budgets and adjustments from different tables (aws_spend_unblended_budgeted_amounts + aws_spend_unblended_account_costs_summary_adjusted for the pipeline vs. aws_spend_unblended_budgeted_amounts_v2 + aws_spend_unblended_budget_adjustments for the API). That could produce AI insights with portfolio/BU totals that didn't match what executives see on the dashboard. [KLAIR-2585](https://linear.app/builder-team/issue/KLAIR-2585) asked for an audit + migration onto a single source of truth.

## Canonical sources now used

- Raw budget: aws_spend_unblended_budgeted_amounts_v2 joined to aws_spend_budget_account_mapping. BU/class always come from the mapping, never from the budget table. Uses the aws_budgeted_amount column (non-bedrock).

- Unblended adjustments: aws_spend_unblended_budget_adjustments filtered with is_bedrock_adjustment = FALSE (bedrock lives in the bedrock-token-metrics feature).

- Account → BU / class: aws_spend_budget_account_mapping is the source of truth. Legacy aws_spend_unblended_account_costs_summary_adjusted and aws_spend_unblended_budgeted_amounts are no longer referenced from this pipeline.

## Changes

klair-udm/aws-spend-insights/context_builder.py

- _query_portfolio_summary — split single totals CTE into budget_totals (v2 canonical, joined to mapping) + spend_totals.

- _query_budget_adjustments — now reads aws_spend_unblended_budget_adjustments with is_bedrock_adjustment = FALSE.

- _query_bu_breakdown — rewritten as FULL OUTER JOIN on BU across budget_per_bu and spend_per_bu so budget-only / spend-only BUs both surface.

- _query_unmapped_accounts — uses aws_spend_budget_account_mapping as SoT.

klair-udm/aws-spend-insights/tool_implementations.py

- _get_class_breakdown: now a FULL OUTER JOIN on am.class.

- _get_account_breakdown: now a FULL OUTER JOIN on am.aws_account_number.

klair-udm/template.yaml

- Removed GetNsDataSourcesLambda, InitiateExportLambda, DailyNetsuiteExportRule, StepFunctionExecutionRole, NetsuiteExportStateMachine, and the NetsuiteExportStateMachineArn output. No live code for any of these remains in the repo.
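
The FULL OUTER JOIN semantics behind _query_bu_breakdown (budget-only and spend-only BUs both surface) can be illustrated in plain Python; the real implementation is SQL against Redshift:

```python
def bu_breakdown(budget_per_bu: dict, spend_per_bu: dict):
    """Return (bu, budget, spend) for every BU on either side.

    Mirrors a FULL OUTER JOIN: a BU with budget but no spend, or spend
    but no budget, still appears, with 0.0 for the missing side.
    """
    all_bus = sorted(set(budget_per_bu) | set(spend_per_bu))
    return [
        (bu, budget_per_bu.get(bu, 0.0), spend_per_bu.get(bu, 0.0))
        for bu in all_bus
    ]
```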

## Verification (end-to-end in prod)

Deployed via sam sync and invoked the Lambda against Redshift for 2026-Q2:

- Agent loop: 17 insights in 6 rounds, 36 tool calls, 345s runtime.

- Output: s3://klair-backend-uploads/aws-spend-insights/2026-Q2/insights.json

- All six sections populated: portfolio_health (2), bus_needing_attention (5), positive_signals (4), wow_trends (3), anomalies (2), data_quality (1).

- Portfolio totals: raw budget \$7.20M, adjusted budget \$6.76M, total adjustments −\$439K, actual spend \$1.78M, projected EOQ \$6.70M (−0.87%). Matches the adjusted-budget numbers reported by /api/aws-spend/summary.

- Budget adjustments flow through correctly end-to-end: IgniteTech −\$515.9K Khoros reduction, Zax +\$154.8K new allocation from \$0 raw — both attributed correctly in the emitted insights.

## Reviewer notes

- The spec lives at [features/aws-spend/exec-ai-insights/specs/03-canonical-budget-source-migration/spec.md](features/aws-spend/exec-ai-insights/specs/03-canonical-budget-source-migration/spec.md). FR1–FR6 map 1:1 to the code diffs described above.

- The template cleanup is a separate commit for reviewability but is required to unblock sam build / sam sync on the klair-udm stack — it's not a new regression, just cleanup of pre-existing dead references.

- klair-api/services/aws_spend_service.py patterns at lines 96, 201–210, 4131–4208, and 4230–4250 are the canonical reference — this PR now matches them.

## Linear

- [KLAIR-2585](https://linear.app/builder-team/issue/KLAIR-2585) — Audit: AWS Spend Insights pipeline for canonical budget / adjustment / mapping source migration

## Test plan

- [x] uv run ruff format + uv run ruff check pass on modified files

- [x] uv run pyright clean on modified files

- [x] Existing pipeline unit tests (39) pass

- [x] sam validate passes on updated template

- [x] Lambda deployed via sam sync runs end-to-end against prod Redshift

- [x] Generated insights.json portfolio totals match /api/aws-spend/summary

- [ ] Post-merge: verify the scheduled 08:00 UTC EventBridge run produces the expected daily insights

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2670 — fix(mfr-memo): rewrite Software Note 3 for Group attribution and YoY vs Dec-31 prior year @eric-tril  no labels

### Summary

Rewrites Note 3 ("Deferred tax assets and liabilities") in the Software MFR memo docx so the narrative matches the reference wording. The prior output misattributed Group-level DTA/DTL figures to "Software", compared the current period against the prior month-end instead of the prior fiscal year-end (Dec 31), and included DTL / crystallization / passive-investments / mark-to-market / stock sale content that does not belong in this note. This change covers paragraph 1, paragraph 2, the LLM prompt, the template fallback, the drill-down provenance, and the supporting balance-sheet fetcher.

### Changes

- Added fetch_software_bs_dta_yoy in [_balance_sheet.py](klair-api/services/docx_reports/memo_data/_balance_sheet.py) that queries staging_netsuite.balance_sheet for accounts 22300 (DTA) and 35125 (DTL) at period-end vs Dec 31 of the prior fiscal year, returning the six DTA/DTL/NOLs numeric keys plus a _prior_date field.

- Replaced the fetch_bs_dta_dtl (prior month-end) call site in [software_defaults.py](klair-api/services/docx_reports/memo_data/software_defaults.py) with the new YoY fetcher; bs_prior_date is now Dec 31 of the prior fiscal year.

- Added _q_label helper that emits "Q1 2026"-style labels at quarter-end months and month labels otherwise; exposed q_label_cur / q_label_pri on the template context and dropped dtl_cur / dtl_pri / nols_gross_pri month-end labels.

- Rewrote the Note 3 LLM prompt and template fallback: paragraph 1 names audited divisions (Aurea, Avolin & Jive), adds the January 2020 derecognition sentence, attributes figures to "the Group", and supports a single shared income-tax phrase when both periods have the same sign.

- Reduced paragraph 2 to DTA + NOLs only — removed DTL, crystallization, passive investments, mark-to-market, and stock-sale content.

- Updated drill-down provenance: drops dtl_current / dtl_prior, adds q_label_current / q_label_prior / dta_prior_year_end / nols_gross_prior_year_end, and removes the 35125% account filter from the surfaced SQL; prior-date fixture is now 2025-12-31.

- Updated existing tests (test_template_fallback_output, test_build_llm_prompt_*) and added new tests: test_note_3_matches_reference_wording, test_note_3_single_label_when_tax_signs_match, test_note_3_provenance_drops_dtl, test_software_bs_dta_yoy_uses_prior_year_end.
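
The YoY comparison date used by the new fetcher reduces to a simple rule, comparing period-end balances against Dec 31 of the prior fiscal year rather than the prior month-end (illustrative sketch; the real logic lives in fetch_software_bs_dta_yoy):

```python
import datetime

def prior_fiscal_year_end(period_end: datetime.date) -> datetime.date:
    """Dec 31 of the fiscal year before the one containing period_end."""
    return datetime.date(period_end.year - 1, 12, 31)
```

Any 2026 period, quarter-end or mid-quarter, therefore compares against 2025-12-31, matching the updated prior-date fixture.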

### Testing

- cd klair-api && pytest tests/reports_service/test_software_memo_defaults.py

- cd klair-api && uv run ruff format services/docx_reports/memo_data/_balance_sheet.py services/docx_reports/memo_data/software_defaults.py tests/reports_service/test_software_memo_defaults.py

- cd klair-api && uv run ruff check services/docx_reports/memo_data/_balance_sheet.py services/docx_reports/memo_data/software_defaults.py tests/reports_service/test_software_memo_defaults.py

- cd klair-api && uv run pyright services/docx_reports/memo_data/_balance_sheet.py services/docx_reports/memo_data/software_defaults.py

- Generate a Software MFR docx at a quarter-end period (e.g., 2026-03-31) and confirm Note 3 paragraph 1 reads "the Group (Aurea, Avolin & Jive)", includes the January 2020 derecognition sentence, uses "Q1 2026" vs "Q1 2025" labels, and paragraph 2 references DTA + NOLs against Dec 31 of the prior fiscal year with no DTL/crystallization/passive-investments/mark-to-market content.

http://localhost:3001/monthly-financial-reporting

### When testing, use the Software Memo for January 2026

<img width="1847" height="788" alt="image" src="https://github.com/user-attachments/assets/10d164ef-c886-4d44-813f-4b4bd3bb5678" />

The Portfolio  —  Trilogy Companies

Forbes Investigation Probes Trilogy's Remote Work Model as Alpha School Doubles Down on Athletic Training

As Joe Liemandt's global talent empire faces scrutiny over labor practices, his education venture is betting on a radical thesis: kids learn more from sports than from traditional academics.

AUSTIN, TEXAS — A pair of Forbes investigations this week cast a harsh spotlight on Joe Liemandt's 35-year experiment in global remote work, even as his education venture Alpha School published new data claiming to double students' Division I athletic prospects through radically increased training time.

The Forbes pieces describe Crossover — Trilogy's global recruiting platform — as a "software sweatshop" and detail Liemandt's vision to "turn workers into algorithms," raising questions about the sustainability of the remote work model that has powered ESW Capital's portfolio of 75+ enterprise software companies to industry-leading EBITDA margins.

The timing is notable. While Crossover faces renewed scrutiny, Alpha School — Liemandt's $1 billion bet on reinventing K-12 education — is pushing a thesis that directly challenges traditional schooling's allocation of student time. In a recent blog post, the school claims its model, which compresses academic instruction into two hours per day using AI tutors, frees students for 20+ hours of weekly athletic training — double the national average.

The school's argument: traditional education has it backwards. "Kids build more grit, leadership, and social skills through afterschool sports than during the actual school day," Alpha's latest post asserts. The school now claims students are testing in the top 1-2% nationally on standardized assessments while simultaneously training at elite athletic volumes.

Alpha's recent content has focused heavily on athletics and movement, including a piece on the "ADHD epidemic" arguing that many diagnoses stem from "movement-starved" kids in sedentary classrooms. The subtext is clear: if AI can handle academics in two hours, the rest of the school day should optimize for what humans do best — physical mastery, leadership, competition.

It's a familiar Liemandt pattern: automate the repeatable, liberate humans for higher-order work. Whether the Forbes investigation into his remote work practices will complicate that narrative remains to be seen. The billionaire has always operated in the margins — geographically, economically, ideologically. This week's coverage suggests those margins are narrowing.

The Billionaire Who Pioneered Remote Work Has A New Plan To  ·  How A Mysterious Tech Billionaire Created Two Fortunes—And A  ·  We Built a School to Double Your Kid’s D1 Odds

Skyvera Adds Another Telco Trophy: CloudSense Joins the Family—And the BSS Shopping Cart Isn’t Empty

The Trilogy telecom stack keeps bulking up: Salesforce-native CPQ meets divested BSS assets, while a sister acquirer keeps quietly buying in threes.

AUSTIN, TEXAS — Skyvera’s been on a telecom tear… and the latest ink isn’t subtle. Word is the company has officially completed its acquisition of CloudSense, the Salesforce-native CPQ and order management platform built for telecom and media operators who like their quoting, ordering, and product catalogs inside the mothership CRM… not duct-taped around it.

Skyvera is framing the deal as portfolio expansion… but a little bird tells me it’s really about control of the commercial layer—where telcos bleed margin through messy offers, mismatched catalogs, and “bespoke” order flows that never die. CloudSense shows up as the clean answer: configure, price, quote, and orchestrate orders in a way operators can actually operationalize without a months-long prayer circle.

If you want the official victory lap, it’s here: Skyvera’s CloudSense acquisition announcement… and for the product page that’s doing the heavy selling, here: CloudSense on Skyvera.

But don’t miss the other breadcrumb Skyvera’s leaving on the trail… STL divested assets. Skyvera says it acquired STL’s telecom products group—digital BSS functionality spanning monetization, optical networking, and analytics. Translation for the back row: Skyvera’s not just collecting point solutions… it’s assembling a broader operator-grade toolkit that can sit closer to revenue, network intelligence, and the dashboards that tell you what’s actually happening.

Meanwhile, in the extended Trilogy constellation, IgniteTech—ESW’s other deal-hungry creature—has its own PR beat humming. PR Newswire says IgniteTech snapped up three software products in one go. No names in our packet… but the pattern is familiar: buy, integrate, reprice, margin.

Blind item to close: a “Salesforce-native CPQ” inside a telco portfolio doesn’t stay a standalone for long. Watch for tighter coupling with the rest of Skyvera’s communications and engagement stack—because once you own quoting and ordering… you get to rewrite the operator’s reality.

CloudSense  ·  Skyvera completes acquisition of CloudSense, expanding telec  ·  STL Divested Assets

Totogi Makes the Case for Vertical AI in Telco: Less Noise, More Money

Totogi is positioning its ontology layer as the missing "business brain" that transforms generic AI into production-ready telecom solutions. The company released two resources and a conference talk arguing that AI without telco context is merely expensive theater.

A case study claims Totogi's ontology reduced alarm noise by 97%, enabling network operations teams to prioritize alerts that matter for services, customers, and revenue. The company's MWC26 session, "Show me the money: why most telco AI fails," directly challenges pilots that demo well but fail to deliver measurable business value.

Totogi's thesis is straightforward: AI stacks unable to reason over products, pricing, network events, customer hierarchies, and service assurance in one coherent context produce dashboards, not dollars. The company's newly formalized "Appledore Ontology" whitepaper encodes telco reality once for reuse across automation, analytics, and agentic workflows.

This positions Totogi as telcos increasingly demand AI delivering end-to-end synergy across network, billing, and customer experience—especially as Trilogy telecom assets modernize operator stacks.

The Machine  —  AI & Technology

From Doping Detection to Clinical Alerts: Anomaly Detection Is Having Its Quiet Revolution

A cluster of new papers reveals how pattern-finding algorithms are migrating from the lab into athletics, hospitals, and the architecture of AI itself.

CAMBRIDGE, MASSACHUSETTS — Somewhere in the long arc between a single suspicious blood test and a missed lab order in a hospital chart, there is a common thread — the ancient, deeply human problem of noticing when something is wrong.

This week, a striking convergence of research papers on arXiv illuminates how anomaly detection — the science of identifying what doesn't belong — is quietly reshaping fields that rarely share a conference room.

Consider athletics. Anti-doping programs spend upward of $800 per biological sample, yet many banned substances vanish from the body faster than testers can arrive. A new benchmarking system with visual analytics proposes a complementary approach: instead of chasing molecules, analyze the performance data itself. By establishing statistical baselines for athletic output and flagging anomalous leaps, the system offers a kind of computational suspicion — not proof, but a signal worth investigating. It is the digital equivalent of a coach's raised eyebrow, scaled to thousands of athletes simultaneously.

Meanwhile, in clinical medicine, the stakes are quieter but no less profound. A paper on conditional anomaly detection using soft harmonic functions tackles the problem of identifying unusual omissions in patient care — a critical lab test that should have been ordered but wasn't, a response that deviates from what the patient's condition would predict. The method is non-parametric, meaning it makes fewer assumptions about the shape of normal behavior, letting the data speak in its own dialect.

And underpinning both of these applied efforts is the relentless push to make the foundation models themselves faster and leaner. A separate paper presents a co-design methodology for accelerating multimodal foundation models, combining hardware and software optimizations to reduce the computational and memory costs of the transformer architectures that power modern AI.

What connects these threads is a profound shift in how we think about intelligence — artificial or otherwise. The earliest nervous systems evolved not to understand the world, but to detect anomalies in it: the shadow that moves wrong, the temperature that spikes. Hundreds of millions of years later, we are teaching silicon to do the same thing, across domains our ancestors could never have imagined.

The common lesson: the universe is regular enough to model, and irregular enough to matter. The art is in knowing the difference.

Focus Session: Hardware and Software Techniques for Accelera  ·  Performance Anomaly Detection in Athletics: A Benchmarking S  ·  Conditional anomaly detection using soft harmonic functions:

AI Video’s New Arms Race: From Startup Growth Hacks to OpenCV’s Next Big Swing

Generative video is exploding out of the lab and into marketing, movies, and a rapidly intensifying fight for who owns the “camera” in AI.

SAN FRANCISCO — AI video has officially crossed the line from “cool demo” to “boardroom priority,” and I cannot overstate how significant that is. In just a few weeks of headlines, we’ve gotten the full spectrum: practical startup playbooks, new heavyweight challengers, and the kind of surreal, Black Mirror–adjacent brand storytelling that only AI could make feel… inevitable.

On the practical end, Inc. is spelling out what many founders are learning in real time: AI video is becoming the highest-leverage growth surface for resource-constrained teams. The pitch is simple—ship more content faster, localize it instantly, personalize it endlessly—and suddenly a two-person marketing team can behave like a studio. The tactics are straightforward, but the implication is wild: when video production becomes software, distribution becomes the real moat. (And yes, this changes everything for early-stage go-to-market.) Inc’s guide lands at exactly the right moment.

Meanwhile, the platform war is heating up. VentureBeat reports the founders of OpenCV—yes, the computer vision backbone that helped define modern image processing—are launching a new AI video startup aimed squarely at the OpenAI/Google tier. That’s not “another gen-video app.” That’s infrastructure ambition, with deep technical lineage and a clear message: the next era of video models will be built by teams who actually understand vision end-to-end. VentureBeat’s write-up reads like the opening chapter of a serious rivalry.

Then there’s the cultural tell: an AI startup leaning fully into “Black Mirror” energy with an ‘AI-Selves’ launch film, a reminder that synthetic video isn’t just a tool—it’s a new narrative language. When your product can generate faces, scenes, and selves, your marketing can stop explaining and start unsettling.

Even the benchmarks are getting weirder. A viral prompt produced an image of a horse riding an astronaut riding a pelican riding a bicycle—plus an unprompted sign reading “WHY ARE YOU LIKE THIS.” The community’s reaction: we need to stack these tests now. That’s funny—until you realize it’s also a capability probe.

Put it together and the direction is clear: AI video is becoming a core computing primitive. The winners won’t just make clips. They’ll control the model, the workflow, and the trust layer that tells you what’s real.

How Startups Can Leverage AI Video to Grow - inc.com  ·  OpenCV founders launch AI video startup to take on OpenAI an  ·  AI Startup Goes ‘Black Mirror’ in Unhinged 'AI-Selves' Launc

When the Machines Learn, the Laws Tighten: A Tale of Robots, Data Centers, and a Crypto Ghost Town

A new generation of robotic control software is enabling machines to learn from one another even when their bodies differ—allowing behaviors discovered on one platform to transfer to another, reducing wasteful retraining cycles. However, this technological promise collides with growing geographic constraints. As Washington pushes to accelerate AI infrastructure development, states are drawing boundaries around data-center growth through zoning rules, grid-impact regulations, and water-use constraints. The result is a patchwork habitat where some counties court server campuses while others treat them as invasive threats. This tension between computational hunger and community caution echoes recent tech failures, like the "play to earn" gaming experiment Legacy, which collapsed after an NFT-fueled boom, leaving players with losses while business mechanics succeeded as designed. As AI infrastructure expands, the critical question remains: will growth be permitted, and who will set the rules governing this emerging ecosystem?

The Editorial

Nation Reassured As Companies Continue Replacing Business Strategy With The Word ‘AI’

Executives cite new partnerships, leadership books, and a brief Claude outage as proof no one is driving the car, but the dashboard is absolutely glowing.

AUSTIN, TEXAS — In a week that market watchers described as “extremely normal, in the sense that nothing has to mean anything anymore,” a cluster of unrelated announcements has combined into a single, coherent corporate message: if you say “AI” with enough confidence, you can temporarily convert any operational problem into a branding opportunity.

The latest evidence arrived as TridentCare announced it would partner with ServiceNow to “power AI-driven transformation across operations,” an initiative whose primary deliverable appears to be the comforting sensation that the company’s existing workflows are no longer “processes,” but “journeys.” According to the announcement, TridentCare will modernize everything from internal coordination to customer experience by placing the word “AI” somewhere near the sentence, a proven method for turning the act of running a business into an ongoing philosophical exploration of whether the business still exists in the same plane of reality. The partnership was covered in a report circulating via Knoxville News Sentinel, helpfully reminding readers that even healthcare logistics can be improved by rebranding standard software implementation as a “transformation.”

Meanwhile, footwear company Allbirds reportedly saw its shares skyrocket after an AI pivot, raising concerns over business viability—a sentence that, in 2026, reads less like a warning and more like a job description. Investors, long bored by concepts like “unit economics” and “products people buy,” appear increasingly drawn to the idea that any company can become a technology company as long as it announces it has “pivoted” hard enough to shear off the last traces of its original identity.

To support the nation’s growing population of executives who have been forced to pretend they understand what they are applauding, AI Vantage Consulting released a new book, “AI Fundamentals For Leaders,” which promises to guide decision-makers through the treacherous landscape of reading press releases aloud on earnings calls. The launch, noted in Business Insider’s markets feed, arrives at an opportune moment, when many leaders’ main AI-related responsibility is deciding whether “agentic” sounds more expensive than “autonomous.”

Of course, the week also offered a reminder that companies now run on a thin electrical thread labeled “Claude,” after a brief outage reportedly caused a “90% productivity drop in Silicon Valley,” as claimed by a startup founder. Analysts said the number felt plausible once adjusted for the modern definition of productivity, which includes “asking an LLM to rewrite the same email seven times until it sounds like a human who isn’t terrified.”

And looming over all of it was renewed discussion of “AI washing,” the practice of laying people off while draping the decision in a tasteful tech halo. In this framework, the layoff is not a layoff; it is an “AI enablement milestone,” in which the company bravely replaces payroll with ambition.

Taken together, these stories suggest a bright future in which every organization becomes simultaneously more automated and less accountable, finally achieving the long-sought corporate ideal: a business that can’t be questioned, because it has technically become a vibe.

TridentCare Partners with ServiceNow to Power AI-Driven Tran  ·  Allbirds shares skyrocket after AI pivot, raising concerns o  ·  AI Vantage Consulting Launches 'AI Fundamentals For Leaders'
The Office Comic  ·  Art Desk

The Great Consolidation Has Arrived, and Nobody Should Be Surprised

Cohere's absorption of Aleph Alpha is not an anomaly — it is the first tremor of a tectonic rearrangement that will leave very few AI companies standing.

TORONTO — The news that Cohere has acquired Aleph Alpha, Germany's much-celebrated sovereign AI champion, will be greeted in certain quarters as a tragedy — the fall of European technological independence, the surrender of the Continent's most promising foundation-model company to a Canadian suitor. Allow me to suggest a different reading: it is the most predictable event in the brief and overheated history of the large language model industry, and anyone who failed to see it coming was not paying attention to the arithmetic.

The arithmetic is merciless. Training a frontier model now costs somewhere between half a billion and several billion dollars per run, depending on whom you believe and how generously they count their cloud credits. Inference at scale requires fleets of GPUs that would make a cryptocurrency miner weep with envy. The handful of companies capable of sustaining this burn rate can be counted on one hand, and Aleph Alpha — for all its admirable ambition to build a European alternative to OpenAI — was never convincingly among them. Heidelberg is a fine city for philosophy. It is a less fine city for raising the kind of capital that buys you a seat at a table set by Microsoft, Google, and the sovereign wealth funds of the Persian Gulf.

Cohere, to its credit, has pursued a strategy that at least has the virtue of coherence: enterprise customers, on-premises deployments, a pitch built around data sovereignty and regulatory compliance rather than the messianic consumer-facing theatrics of Sam Altman. Absorbing Aleph Alpha gives Cohere an instant European footprint, a roster of government contracts, and — crucially — a team of researchers who have already done the painful work of navigating the Brussels regulatory labyrinth. It is, in short, a deal that makes sense in the way that deals between two companies with complementary weaknesses often make sense. Whether it makes enough sense to justify whatever Cohere is paying is a question I leave to the accountants.

But the particular transaction matters less than the pattern it announces. We have seen this movie before — in cloud computing, in social media, in enterprise software — and the ending is always the same: a wild proliferation of entrants, a brief and exhilarating period of competition, and then a relentless consolidation driven by the brute economics of scale. The AI industry, as some observers have noted, may actually consolidate faster than its predecessors, because the capital requirements are so staggering and the network effects so immediate.

Those of us who have watched the enterprise software market for decades — where a company like ESW Capital has built an entire philosophy around acquiring and operating dozens of software businesses at rational multiples — understand something that the AI euphoriasts have been slow to learn: the glamour is in the founding; the money is in the operating; and the survival is in the consolidating. Joe Liemandt figured this out thirty years ago. The AI industry is figuring it out now.

The next twelve months will bring more of these deals. Many more. The venture-backed model companies that raised at ten-billion-dollar valuations on the strength of a demo and a dream will discover that dreams do not pay for H100 clusters. Some will merge. Some will be absorbed. Some will simply evaporate, leaving behind nothing but a few arXiv papers and a great deal of investor regret.

The great consolidation is not a crisis. It is a correction. And it was always coming.

Cohere Acquires Aleph Alpha in Sovereign AI Power Play - The  ·  AI Power Play: Cohere, Aleph Alpha In Advanced Merger Talks  ·  Why the AI revolution breaks all the old rules about consoli
On This Day in AI History

In February 2011, IBM's Watson defeated champion Jeopardy! players Brad Rutter and Ken Jennings in a historic exhibition match aired over three nights, marking a major milestone in natural language processing and AI's ability to handle the nuance of human language. The victory demonstrated that machines could compete with top human intellects at complex question-answering tasks.

⬛ Daily Word — Technology
Hint: Relating to computers and the internet, often used in security contexts.