Vol. I  ·  No. 124 Established 2026  ·  AI-Generated Daily Free to Read  ·  Free to Print

The Trilogy Times

All the news that's fit to generate  —  AI • Business • Innovation
MONDAY, MAY 04, 2026 Powered by Anthropic Claude  ·  Published on Klair Trilogy International © 2026
Today's Edition

RANKS BREAK: GOOGLE, OPENAI STAFF BACK ANTHROPIC IN PENTAGON SCRAP

Hundreds of staffers at Anthropic's fiercest rivals file an amicus brief siding with the competition — and against Uncle Sam.

SAN FRANCISCO — Hundreds of workers at Google and OpenAI broke ranks this week, filing an amicus brief backing rival Anthropic in its courtroom tangle with the U.S. government over military artificial intelligence.

The brief lands as Anthropic holds the line on Pentagon use of its Claude models, refusing to bend its terms of service for weapons work the company deems off-limits. Staffers at the competing labs — outfits that fight Anthropic daily for cloud contracts and federal dollars — say those red lines deserve a defender.

Some hat-trick.

Google workers are circulating their own demands for "red lines" on military AI, language pulled straight from the Anthropic playbook. The labor revolt crosses a competitive line Silicon Valley brass spent two decades drawing in concrete.

Workers signing the brief argue that letting Washington force a private firm to abandon its safety policies sets a precedent every lab should fear. Today Anthropic. Tomorrow the shop next door.

Background. Anthropic bars certain weapons applications in its terms. The Pentagon wants flexibility, and the matter is now in front of a judge.

Meanwhile, separate from the courtroom fight, Axios reports the broader OpenAI-Anthropic feud has Google sitting pretty. While the two front-runners trade barbs over talent and safety culture, Mountain View quietly picks up customers tired of the noise.

Sam Altman, OpenAI's chief, told a Fortune audience that "AI washing" — slapping artificial intelligence labels on ordinary software — runs thick across the sector. In the same breath he warned that the real article is coming for tech jobs faster than optimists allow. Both, he said, can be true at once.

The amicus filing puts the C-suites in a bind. Executives at Google and OpenAI have spent the year courting Pentagon contracts worth billions. Their own workers just told a federal court those contracts shouldn't trample safety guardrails.

No statement from the executives. Anthropic welcomed the support and went mum on rivals' internal politics. The Pentagon declined to elaborate beyond filings already in the record.

What's at stake. Whether a private AI firm can refuse military work — or pick which kinds it'll do. Whether that's a question for boardrooms or for judges is now for the courts to sort out.

One twist for the record books: the brief crosses lines drawn by twenty years of valley competition. Solidarity, it appears, still moves faster than a server request.

Google Workers Seek ‘Red Lines’ on Military A.I., Echoing An  ·  OpenAI, Anthropic feud could prop up Google - Axios  ·  Hundreds of Google, OpenAI employees back Anthropic in Penta

White House AI Framework Seeks Federal Preemption of State Laws, Minimal Regulatory Burden

The Trump administration's legislative blueprint would override a patchwork of state AI rules — but critics question whether 'light touch' means no touch at all.

WASHINGTON, D.C. — Pursuant to the issuance of a comprehensive legislative blueprint, hereinafter "the Framework," the White House has formally urged Congress to adopt a posture of regulatory restraint with respect to artificial intelligence technologies, notwithstanding the proliferation of state-level legislative activity observed, as of this publication, across numerous jurisdictions.

The Framework, as analyzed and summarized by legal practitioners at multiple law firms of record, is understood to call for, among other provisions, federal preemption of state artificial intelligence laws, thereby establishing a single, unified regulatory regime in lieu of the aforementioned patchwork of state-level enactments. It is further understood that protections pertaining to minors are to be incorporated into any federal legislation enacted pursuant to the Framework's recommendations.

As reported by PBS, the administration's stated preference is for a "light touch" approach, wherein regulatory obligations imposed upon developers and deployers of artificial intelligence systems would be minimized to the greatest extent practicable, subject to such exceptions as may be deemed necessary for national security and the welfare of children.

It is further noted, per legal industry observers, that the Framework constitutes a call to action directed at the legislative branch rather than a self-executing executive order. Accordingly, its practical effect remains contingent upon congressional action, the timing and substance of which cannot, as of this writing, be determined with any degree of certainty.

Notwithstanding the foregoing, separate legislative activity pertaining to artificial intelligence provisions within defense authorization legislation is proceeding concurrently, suggesting that Congress may be prepared to act upon one or more dimensions of the aforementioned agenda within the near term, subject to the usual procedural and political contingencies of the federal legislative process. The extent to which industry stakeholders, including enterprise software operators and AI platform developers, may be affected by any resulting statute remains, at this juncture, a matter of considerable uncertainty.

White House urges Congress to take a light touch on AI regul  ·  White House National AI Policy Framework Calls for Preemptin  ·  Trump Administration AI Policy Framework Calls on Congress t

Anthropic Looks Across the Pond as the AI Chip Race Turns Into a Supply-Chain Scramble

Anthropic is reportedly in talks with London-based Fractile AI to secure high-performance chips, diversifying its hardware supply beyond Nvidia's dominant position. The move signals a broader industry shift: as AI model builders compete fiercely, compute access has become a critical bottleneck. Nvidia's GPUs remain the standard for frontier model training, but leading companies like Anthropic are seeking alternatives to reduce dependency and improve negotiating leverage.

Fractile AI pitches chips optimized for modern model workloads' speed and efficiency demands. While talks remain preliminary, landing Anthropic as a customer would represent a significant win for the UK startup and validate Britain's push to build durable AI infrastructure.

The deal talk underscores that the compute stack — including custom silicon, data centers, and power contracts — has become strategic terrain in AI competition. Anthropic isn't abandoning Nvidia, but securing a second supplier lane represents a serious competitive play in the intensifying global chip race.

Haiku of the Day  ·  Claude Haiku
Power shifts in code
Rules written by the winners
Truth becomes a tool
The New Yorker Style  ·  Art Desk
The Far Side Style  ·  Art Desk
News in Brief
THE ABSURDITY ARMS RACE: We Have Normalized the Insane and Now the Machines Are Joining In
AUSTIN, TEXAS — Let me tell you something that hit me at approximately 2:47 a.m.
AI Is Not Coming for Work — It’s Coming for Mediocre Workforce Strategy
AUSTIN, TEXAS — I’ll be honest: the future of work is no longer a conference panel, it is a performance review with better lighting and fewer excuses.
We Built Mirrors That Hate Us and Called Them Intelligent
AUSTIN, TEXAS — There is a particular horror in discovering that the systems we built to be objective are, in fact, perfect replicas of our worst selves.
Nation’s CEOs Courageously Replace Vague Digital Transformation Plans With Vague AI Agent Plans
NEW YORK — In a stirring development for anyone still waiting for the blockchain steering committee to reconvene, corporate leaders across multiple industries have announced that AI agents have officially matured from an exciting boardroom buzzword into essential business infrastructure, a phrase expected to save thousands of strategy decks from having to contain a second idea. The shift, described in recent coverage of how AI agents are moving into business infrastructure, marks a major milestone for enterprises that have long sought a technology capable of attending meetings, generating reports, opening tickets, closing tickets, reopening the same tickets, and describing the whole process as transformation. It is difficult not to admire the speed with which American business has discovered that AI agents are no longer merely software, but rather colleagues who do not need chairs, health insurance, or clear instructions.
The Year That May Set AI's Trajectory: Why 2026 Looms Large in the Global Power Contest
WASHINGTON — The think tanks are converging on the same calendar page.
A Trilogy Company
Crossover
The world's top 1% remote talent, rigorously tested and ready to ship.
A Trilogy Company
Alpha School
AI-powered learning. Two hours a day. Academic results that defy belief.
A Trilogy Company
Skyvera
Next-generation telecom software — built for the networks of tomorrow.
A Trilogy Company
Klair
Your AI-first operating system. Every workflow. Every team. One platform.
A Trilogy Company
Trilogy
We buy good software businesses and turn them into great ones — with AI.
The Builder Desk  —  AI Builder Team
📅 Week in Review  ·  Production Release

Builder Team Ships Across Five Systems in a Week for the Ages

From Budget Bot 4.0's full close-out to a NetSuite pipeline landing in Surtr, the AI Builder Team rewired the financial intelligence stack in seven days flat.

They came into this week with a sprawling to-do list and they leave it with a finished product. That is the only sentence you need to understand what happened between last Monday and today. The AI Builder Team merged work across Klair, Aerie, Surtr, and the data pipelines connecting all three — forty-three pull requests, multiple production releases, and a financial intelligence platform that is measurably smarter than it was seven days ago. Let's break down how they did it.

The biggest story of the week is Budget Bot 4.0, and it is now closed. What began as a sprawling B3 epic — chat-driven editing, persistent proposals, deduplication of Accept races, paginated comments, page-refresh survival — crossed the finish line this week. The rename of the in-document assistant from 'Ask Claire' to the document-scoped 'Coach Claire' — disambiguating it from the workspace-wide 'Ask Claire' in TopNav — cleaned up one of the product's longest-standing UX ambiguities, and the version bump to 4.0 makes it official. This is a shipped product.

Speaking of Budget Bot, I am contractually obligated to note that @marcusdAIy had PRs in this batch. When reached for comment on his B3 close-out work, he had this to say: "Look, Mac, B3.11 through B3.14 aren't glamorous — pending-proposal persistence, race-condition deduplication, comment pagination — but those are exactly the things that make the difference between a demo and a product. Maybe write about the engineering instead of the byline for once."

Sure, Marcus. The race condition is very heroic. Moving on.

The MFR suite had arguably the deepest engineering week of any single feature area. @eric-tril was everywhere. He landed bullet-level collaborative commenting across Group, Software, EBITDA, and Education memos — a genuinely hard multi-anchor threading problem — while simultaneously shipping Cash Flow drill-downs for the Group Memo view and replacing BS-delta-derived line items with values sourced directly from Finance's authoritative reporting systems. He also fixed the ARR Snowball's acquisitions-delta reconciliation bug, a subtle accounting correctness issue that had been producing churn figures that diverged from the source-of-truth Google Sheet whenever Finance overrode the Acquisitions value. And he separated dev and prod MFR narrative storage, which is the kind of infrastructure discipline that prevents a 2 a.m. incident six months from now. @eric-tril did not have a quiet week.

On the SaaS Budgeting front, @ashwanth1109 was building an entire feature tower, brick by brick, all week long. The AWS Spend pipeline came first, then the API layer, then the UI card, then the quarterly projection fix that corrected a raw-sum-versus-normalized-projection error that had been making the Simulated Budget numbers incomparable across different week selections. He also landed the Adjustments tab with full per-row CRUD, the AI Spend BvA detail view with BU/Provider pivot, and the Docker compute integration for dual-source simulated budgets. He capped the week by releasing the April 2026 maintenance report to production and aligning the Klair README with reality. That last one sounds small. It is not small. A README that lies is a trap.

The QTD Reports campaign, owned by @sanketghia, had a defining moment this week: the weekly cron is dead, long live the monthly day-2 cron. Converting QTD cadence from a Monday-every-week schedule to a second-of-the-month trigger was the right call after four rounds of stakeholder iteration with David Harpur and Raviraja Rao. @sanketghia also shipped the customer-level revenue variance breakdown that Harpur specifically requested during the IgniteTech Week-4 BvA review, bringing §1 Revenue to full parity with the COGS and Expenses vendor-variance tables. The passive-investment daily digest hit V3 this week too — AI-generated per-mover narratives grounded in real market, SEC filing, and news context. Already deployed.

Over in Aerie, @benji-bizzell had a pre-release polish week that was anything but routine. He wired the Operating side-panel to Rhodes' new quality bar score endpoint, restored the Admissions Forecast to its correct 6-stage funnel with the historical baseline, fixed a silent-failure share-link path that had been bricking public reads on non-standard deployments, and synced the GSheet executive schools feed into the dashboard staging mirror. @YibinLongTrilogy gave the Aerie chatbot genuine teeth by shipping source-specific agent tools for Rhodes, School Data Sheet, and Rebl3 — the bot can now actually read from the systems it talks about.

And then there is Surtr, the pipeline migration story that has been building all season. @kevalshahtrilogy flipped the schedules live on three migrated P1 pipelines this week — QuickBooks expense analysis, edu expense report sender, orphan classes — restoring production behavior that had been paused through migration. @eric-tril added the NetSuite monthly financial detail pipeline, extracting GL transaction detail via SuiteQL into Redshift on a daily 7 a.m. UTC cadence. Six more pipelines migrated from Klair to Surtr. The consolidation is accelerating.

What does all of this set up for next week? Budget Bot 4.0 is closed, the QTD cadence is reset, and the SaaS Budgeting feature tower is tall enough to start furnishing the upper floors — the team enters the week with clean epics, production deployments behind them, and the kind of momentum that turns roadmap items into shipped features before Friday.

Mac's Picks — Key PRs This Week
#23 — Add NetSuite monthly financial detail pipeline @eric-tril

### Summary

Adds a new pipeline that extracts General Ledger transaction detail from NetSuite using the SuiteQL REST API and loads it into Redshift. The pipeline targets two specific accounts -- 72100 (Federal Income Tax Expense) and 31201 (Loan Hedge Loss Accrual) -- filtered to subsidiaries matching "The Group%". It runs daily at 7AM UTC with a rolling 3-month lookback window and supports manual invocation with custom period names, backfill ranges, and account numbers.
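For orientation, here is a rough sketch of how the rolling window and SuiteQL filter could fit together. It is illustrative only — the function names, period-name format, and trimmed join list are assumptions, not code from src/query.py or src/handler.py:

```python
from datetime import date

# Illustrative sketch only. The real builder in src/query.py also joins
# transactionline and applies the "The Group%" subsidiary filter, omitted here.
def rolling_period_names(today: date, months_back: int = 3) -> list[str]:
    """NetSuite-style period names (e.g. 'Apr 2026') for the lookback window."""
    names, year, month = [], today.year, today.month
    for _ in range(months_back):
        names.append(date(year, month, 1).strftime("%b %Y"))
        month -= 1
        if month == 0:
            month, year = 12, year - 1
    return names

def build_suiteql(periods: list[str], accounts: list[str]) -> str:
    period_list = ", ".join(f"'{p}'" for p in periods)
    account_list = ", ".join(f"'{a}'" for a in accounts)
    return f"""
        SELECT t.tranid, a.acctnumber, ap.periodname, tal.amount
        FROM transactionaccountingline tal
        JOIN transaction t ON t.id = tal.transaction
        JOIN account a ON a.id = tal.account
        JOIN accountingperiod ap ON ap.id = t.postingperiod
        WHERE a.acctnumber IN ({account_list})
          AND ap.periodname IN ({period_list})
    """

# The two accounts named in this PR, over a rolling 3-month window.
print(build_suiteql(rolling_period_names(date(2026, 5, 4)), ["72100", "31201"]))
```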

### Business Value

This pipeline provides Finance with automated, daily visibility into federal income tax expense and loan hedge loss accrual transactions at the GL detail level. Replacing manual NetSuite report pulls reduces effort and ensures the data warehouse stays current for downstream reporting and reconciliation.

### Changes

- pipeline.json: CDK configuration with daily 7AM UTC cron, 512MB/900s Lambda, bundling enabled, IAM permissions for S3, Redshift Data API, and Secrets Manager

- src/handler.py: Lambda handler supporting scheduled (rolling 3-month window), manual period selection, backfill, and custom account number modes

- src/netsuite_auth.py: OAuth2 JWT Bearer authentication with private key loading from Secrets Manager, environment variables, or local files; automatic token refresh

- src/netsuite_client.py: SuiteQL REST API client with auto-pagination (1000 rows/page) and tenacity retry on 429, 502, 503

- src/query.py: SuiteQL query builder joining transactionaccountingline, transaction, transactionline, account, and accountingperiod tables

- src/redshift_loader.py: Atomic Redshift loading via S3 JSON Lines upload, DELETE+COPY in a single transaction, with auto table creation and S3 cleanup

- src/requirements.txt: CDK bundling dependencies (boto3, pyjwt[crypto], requests, tenacity)

- run_local.py: Local test harness with --debug, --compare, --list-docs, --backfill, --full, and --load-to-redshift modes

- README.md: Documentation covering schedule, column definitions, setup, testing, and backfill procedures

### Testing

- [ ] Run run_local.py locally with valid NetSuite credentials to verify SuiteQL query execution and pagination

- [ ] Run run_local.py --load-to-redshift to verify end-to-end S3 upload and Redshift COPY

- [ ] Run run_local.py --backfill 6 to verify multi-month backfill behavior

- [ ] Deploy to a dev stack and trigger a manual Lambda invocation; confirm data appears in the target Redshift table

- [ ] Verify the scheduled EventBridge rule fires at 7AM UTC and the rolling 3-month window selects the correct accounting periods

#37 — chore(pipelines): enable schedules for 3 migrated P1 pipelines @kevalshahtrilogy

## Summary

- Flips schedule.enabled: false → true on three P1 pipelines that have been migrated, tested, and cleaned up. EventBridge will resume invoking each one on its pre-migration cron expression in prod.

- Each cron matches the Klair Lambda's original schedule, so this restores prior production behavior rather than introducing new cadences.

| Pipeline | Cron (UTC) | Cadence |
| --- | --- | --- |
| quickbooks-expense-analysis | cron(0 3 ? * SUN *) | Weekly, Sundays 03:00 |
| edu-expense-report-sender | cron(0 14 ? * MON *) | Weekly, Mondays 14:00 |
| orphan-classes-pipeline | cron(0 2 * * ? *) | Daily 02:00 |
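As a companion to the test plan below, a small boto3 check along these lines can confirm each rule is ENABLED after merge — a hedged sketch assuming the pipeline-<id>-schedule-prod naming shown there and default prod credentials:

```python
import boto3

# Hedged post-merge check: confirm each EventBridge rule is ENABLED and carries
# the expected cron. Assumes default credentials/region point at the prod account.
events = boto3.client("events")

PIPELINES = ["quickbooks-expense-analysis", "edu-expense-report-sender", "orphan-classes-pipeline"]

for pipeline_id in PIPELINES:
    rule = events.describe_rule(Name=f"pipeline-{pipeline_id}-schedule-prod")
    print(f"{rule['Name']}: {rule['State']}  schedule={rule.get('ScheduleExpression')}")
```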

## Not included

school-master-data-sync was also migrated, but its Klair predecessor was a manual CLI script with no schedule. Picking a cadence for it is a separate product decision and not in scope for this "restore older schedules" PR — pipeline.json keeps schedule.enabled: false until that decision lands.

## Test plan

- [ ] CI green (CDK synth / unit tests)

- [ ] After merge, confirm EventBridge rule pipeline-<id>-schedule-prod is ENABLED for each of the three pipelines

- [ ] Watch first scheduled invocation of each (next Sunday 03:00, Monday 14:00, and tonight 02:00 UTC) and verify CloudWatch alarms stay green

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2695 — feat(weekly-qtd-report): customer-level revenue variance breakdown (KLAIR-2597) @sanketghia

## Summary

Adds a customer-level variance table under §1 Revenue of the weekly QTD report, conditional on the revenue line being materially off plan. Brings §1 to parity with the §2 COGS and §3 Expenses vendor-variance tables that PR #2688 introduced and PR #2692 polished.

Linear ticket: [KLAIR-2597](https://linear.app/builder-team/issue/KLAIR-2597/phase-3-customer-level-revenue-variance-breakdown-in-qtd-reports) (standalone, in the *Budget v Actuals Reports* project).

## Stakeholder ask

David Harpur, on the IgniteTech Week-4 BvA review on 2026-04-29:

> this is my only material comment re the content of the report:

> * we have tables and detailed explanations for minor cost variances

> * but we pretty much zero here to explain the revenue miss of $1.5M

> * that needs to be addressed - e.g. a table w/ the top 3-5 customers contributing to the variance?

PR #2692's body explicitly listed this as deferred:

> Customer-level revenue variance breakdown — David's biggest content ask, flagged on both IgniteTech and Aurea. Needs new data layer + prompt + render work; will be a separate PR.

## Approach

### Data layer

vendor on revenue rows = customer (NetSuite polymorphic name field, confirmed by Sanket on 2026-04-30; cross-checked against core_finance.arr_by_customer.customer with 20/20 exact-string match on prod data). New data.fetch_top_revenue_variance_drivers mirrors the cost-side fetch_top_variance_drivers:

* Scope: Recurring Revenue + Non Recurring Revenue

* Latest budget cycle dedup, FULL OUTER JOIN at (type, vendor), three coverage_kinds (both / actual-only / budget-only)

* Placeholder filter extends the cost-side list with 'No Customer' and 'Unknown Customer' (confirmed in Apr-2026 prod data)

* Threshold $10K (matches cost-side); LIMIT 30

The threshold was initially set to $25K but a v2 preview run on Aurea showed it cutting spelling-variant siblings — Penn Mutual ($66K budget) and Penn Mutual Life Insurance Company ($22K actual) were the same customer, but the $22K side fell below the threshold and the LLM couldn't merge what it didn't see. $10K keeps siblings visible. The line-level $25K AND 5pp materiality gate is at the prompt level, not in SQL — different concern.
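A hedged sketch of the query shape described above — table and column names are placeholders, not the production schema behind fetch_top_revenue_variance_drivers:

```python
REVENUE_TYPES = ("Recurring Revenue", "Non Recurring Revenue")
PLACEHOLDER_CUSTOMERS = ("No Customer", "Unknown Customer")  # extends the cost-side list

def build_revenue_variance_sql(threshold: int = 10_000, limit: int = 30) -> str:
    # Placeholder table names; the point is the FULL OUTER JOIN at
    # (transaction_type, customer) and the derived coverage_kind.
    return f"""
        WITH actuals AS (
            SELECT transaction_type, vendor AS customer, SUM(amount) AS actual
            FROM actuals_source                -- hypothetical
            WHERE transaction_type IN {REVENUE_TYPES}
            GROUP BY 1, 2
        ), budget AS (
            SELECT transaction_type, vendor AS customer, SUM(amount) AS budget
            FROM latest_budget_cycle           -- hypothetical deduped view
            WHERE transaction_type IN {REVENUE_TYPES}
            GROUP BY 1, 2
        )
        SELECT COALESCE(a.transaction_type, b.transaction_type) AS transaction_type,
               COALESCE(a.customer, b.customer)                 AS customer,
               a.actual, b.budget,
               CASE WHEN a.customer IS NULL THEN 'budget-only'
                    WHEN b.customer IS NULL THEN 'actual-only'
                    ELSE 'both' END            AS coverage_kind
        FROM actuals a
        FULL OUTER JOIN budget b
          ON a.transaction_type = b.transaction_type AND a.customer = b.customer
        WHERE COALESCE(a.customer, b.customer) NOT IN {PLACEHOLDER_CUSTOMERS}
          AND ABS(COALESCE(a.actual, 0) - COALESCE(b.budget, 0)) >= {threshold}
        ORDER BY ABS(COALESCE(a.actual, 0) - COALESCE(b.budget, 0)) DESC
        LIMIT {limit}
    """
```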

### Service wiring

__init__.generate_report threads the new fetch through to commentary, skipping the call entirely for entity_type == 'CF' (CF revenue is budget-locked by design; the §1 Revenue prompt branch is already suppressed for CFs). This avoids a wasted Redshift round-trip whose result would be discarded.

### Commentary prompt

Three things:

1. New _build_revenue_customer_drivers_section — partitions by transaction_type (Recurring / Non-Recurring) at the top level, with coverage_kind sub-blocks (####) under each. Type-first is critical: a coverage_kind-first layout was tried first and a v3 preview surfaced SIPLEC (a Non-Recurring customer) bleeding into the Recurring table; the LLM then emitted a "Corrected table" prose note + a second table. Type-first eliminates that path.

2. §1 Revenue instruction extended — imperative *"FORBIDDEN otherwise … MUST NOT render"* materiality rule with worked examples covering both pass cases (e.g. NRR at 9.4% of plan / −$1.5M → render) and fail cases (e.g. RR at +3.8% / +$535K → MUST NOT render, because 3.8pp < 5pp). Top-5 cap. Single-table-per-line directive ("never emit a table, then a 'corrected table' after it"). Cross-partition merge instruction (Penn Mutual worked example) so the LLM correctly sums actual + budget across coverage_kind sub-tables.

3. Strict-merge rule generalized from vendor-only to "Entity name handling … applies to BOTH vendor and customer tables" — covers spelling variants like Penn Mutual vs Penn Mutual Life Insurance Company.
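For readers who want the type-first layout from item 1 in concrete form, a minimal Python sketch — structure only, with illustrative field names; the real builder and its materiality rules live in commentary.py:

```python
from collections import defaultdict

def build_revenue_customer_drivers_section(rows: list[dict]) -> str:
    # Partition by transaction_type first, then coverage_kind, so a Non-Recurring
    # customer can never bleed into the Recurring table.
    grouped: dict[str, dict[str, list[dict]]] = defaultdict(lambda: defaultdict(list))
    for row in rows:
        grouped[row["transaction_type"]][row["coverage_kind"]].append(row)

    lines: list[str] = []
    for txn_type in ("Recurring Revenue", "Non Recurring Revenue"):
        if txn_type not in grouped:
            continue
        lines.append(f"### {txn_type} customer drivers")
        for coverage_kind, bucket in grouped[txn_type].items():
            lines.append(f"#### {coverage_kind}")
            for row in bucket:
                lines.append(f"- {row['customer']}: actual {row.get('actual')}, budget {row.get('budget')}")
    return "\n".join(lines)
```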

### No doc-builder change

_render_md_table is generic — handles whatever markdown table the LLM emits, regardless of column count or header.

## Live verification

Generated against prod Redshift + Anthropic for the same data David reviewed (Q2 FY2026, Week 4). Doc titles tagged REVENUE CUSTOMER PREVIEW so they don't pollute /monthly-financial-reporting:

* IgniteTech: https://docs.google.com/document/d/1e-gGR8RnqjFBE4dDUrS1CGnesnY_eDzsgqkIWSAm5xk/edit

* §1 NRR table = top 5 stranded customers (Samsung, British Telecom, SAP, ServiceNow, CIBC) — explains $747K of the $1.5M miss

* §1 RR table suppressed (LLM cites the materiality rule directly: "does not clear the materiality threshold (both conditions must hold: >$25K AND >5pp deviation)")

* Aurea e-Commerce: https://docs.google.com/document/d/1D_GQrP7zQKZ1SPbGtZgW-IdeBzMCUGHlnTJb-KOyEzo/edit

* §1 RR table renders correctly (line is materially off: 90.5% of plan, −$166K shortfall)

* Penn Mutual merge worked: Penn Mutual Life Insurance Company | $66,484 | $22,232 | −$44,252 | Under plan (single row, sums across spelling variants)

* No type mixing, no double-table emit

A scripts/preview_revenue_customer_table.py helper is included for repeating the verification — bypasses the qtd_report_runs ledger so previews never surface in the UI.

## Commits

1. 7f120e6d5 feat(weekly-qtd-report): customer-level revenue variance breakdown — the initial implementation (data layer, service wiring, prompt, full test coverage)

2. abce65105 fix(weekly-qtd-report): tighten customer-table prompt + lower row threshold — three observed-from-live-runs fixes (RR-table-leak, Penn Mutual merge, SIPLEC type-mix). Each commit message documents the specific defect that drove the change.

## Test plan

- [x] cd klair-api && uv run ruff format services/weekly_qtd_report/ tests/weekly_qtd_report/

- [x] cd klair-api && uv run ruff check services/weekly_qtd_report/ tests/weekly_qtd_report/

- [x] cd klair-api && uv run pyright services/weekly_qtd_report/data.py services/weekly_qtd_report/__init__.py — 0 errors

- [x] cd klair-api && uv run pyright services/weekly_qtd_report/commentary.py — 5 errors, all pre-existing (anthropic SDK polymorphic response.content[0].text block, unchanged from baseline)

- [x] cd klair-api && uv run pytest tests/weekly_qtd_report/ — 187 passed

- [x] Live preview run on IgniteTech and Aurea e-Commerce; both rendered correctly per the 6 inspection criteria

## Out of scope

* Customer table on favorable variance. v4 preview saw the LLM decline to render a customer table on Aurea NRR (+$26K / +39pp — passes the >$25K AND >5pp gate strictly) by judging "unplanned wins are adequately covered in prose". Defensible since David's ask was specifically about explaining misses, not celebrating overperformance. If we want unconditional rendering on the favorable side, that's a separate ask.

* Pre-canonicalization of customer names in SQL. mart_saas_metrics.map_customer_alias exists but is keyed on source_system='ar_ageing' — overkill for our path, and the LLM's strict-merge rule handles the cases that show up in practice.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2698 — feat(mfr): bullet-level comments for memo narratives KLAIR-2599 @eric-tril

### Summary

Adds collaborative commenting to MFR memo narratives at the per-bullet/per-paragraph level. Each editable bullet across the Group, Software, EBITDA, and Education memos now shows a comment icon (with count chip when threads exist), opens a chat-style side-panel composer, and supports a page-level "all comments for this memo" view. Comments use the V1 /klair_comments/* API with a new anchor_label snapshot field so clients can detect drift when the underlying bullet text changes after a comment was authored.

Document IDs are scoped by environment (mfr::<env>::<section>::<period>) so dev and prod comment streams stay isolated, matching the convention used by MFR narrative storage. Memo views provide a MemoCommentsContext so editor primitives (EditableCommentary.tsx, EditableParagraph.tsx) opt in by passing a sectionKey rather than threading props through every wrapper.
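A rough Python rendering of the scoping and drift check described above — the real helpers are TypeScript in useMemoCommentAnchor.ts, and the names here are illustrative:

```python
def build_memo_document_id(env: str, section: str, period: str) -> str:
    # e.g. "mfr::prod::group-memo::2026-04" — dev and prod streams never collide.
    return f"mfr::{env}::{section}::{period}"

def parse_memo_document_id(document_id: str) -> tuple[str, str, str]:
    prefix, env, section, period = document_id.split("::", 3)
    assert prefix == "mfr"
    return env, section, period

def is_drifted(anchor_label: str | None, current_bullet_text: str) -> bool:
    # anchor_label snapshots the bullet text at comment time; a mismatch means the
    # bullet was edited afterwards and the UI should surface a drift indicator.
    return bool(anchor_label) and anchor_label != current_bullet_text
```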

Linear ticket: [KLAIR-2599](https://linear.app/builder-team/issue/KLAIR-2599/comments-for-narratives)

### Business Value

Enables finance and exec stakeholders to discuss specific narrative bullets in-line during the monthly close, without leaving the report or reverting to email/Slack threads disconnected from the underlying numbers. Anchor-label drift detection preserves audit context — reviewers can see when a bullet was edited after a comment was made — which is important for review trails on board-level financial narratives.

### Changes

- Backend (klair-api): expose anchor_label on CommentRead and select t.anchor_label in comment_pg_service joined queries; document its drift-detection purpose in _row_to_v1_comment.

- Comments types (index.ts): add optional anchor_label to Comment and CommentCreatePayload.

- New MFR comments module (monthly-financial-reporting/components/comments/):

- useMemoCommentAnchor.ts — buildMemoDocumentId, buildBulletAnchorId, parseBulletAnchorId, isMemoSection helpers (env-scoped, last-:: split).

- useDocumentComments.ts — single fetch per memo with cross-component invalidation via klair:mfr-comments-changed event.

- useMemoComments.tsx — adapter hook deriving counts client-side, opening side-panel content, and attachCommentsToCurrentCell that overlays comments onto an existing inspect view.

- MemoCommentsContext.tsx — context so nested editor primitives don't need prop-drilled comment plumbing.

- BulletCommentsView.tsx — chat-style composer with optimistic posts (temp id swap on resolve), mentions, delete-own-comment, drift indicator.

- MemoAllCommentsView.tsx — page-level grouped list (section → bullet → newest-first stream) with click-to-drill-into-bullet.

- useMemoCommentAnchor.spec.ts — tests for document-id/anchor-id round-trips and edge cases.

- Editor primitives:

- EditableCommentary.tsx: comment button + count chip per bullet, data-mfr-bullet-id for scroll-flash, klair:mfr-scroll-to-bullet listener, startIndex for slice rendering, soft-confirm on bullet removal that warns about comment-position shifts.

- EditableCommentary.spec.tsx: tests for confirm-on-remove with/without comments, count chip, and bullet-id emission.

- EditableParagraph.tsx: same comment chip + scroll-flash + inspect piggy-back.

- Memo views (Group, Software, EBITDA, Education): wrap render in MemoCommentsContext.Provider and pass sectionKey to every commentary/paragraph slot. Sub-components (MemoBoilerplate.tsx, MemoNotesSection.tsx, SoftwareMemoNarrative.tsx, SoftwareMDASection.tsx, SoftwareFinancialHighlights.tsx) thread sectionKey (and startIndex/index where slices are used).

- MonthlyFinancialReporting.tsx: compute memoDocumentId for the active section, pre-load all memo comments to drive the page-level Data-Lineage panel commentCount, render MemoAllCommentsView as commentsContent, and handle drill-down via handleMemoAnchorClick (sets cell content + dispatches scroll-flash event). commentaryMap is now memoized.

### Test plan

- [x] pnpm test in klair-client/ — covers useMemoCommentAnchor.spec.ts and EditableCommentary.spec.tsx

- [x] pnpm lint:pr and pnpm tsc --noEmit in klair-client/

- [x] pytest tests/comments/ in klair-api/; run uv run ruff format and uv run ruff check on changed Python files

- [x] Manual: open MFR → Group/Software/EBITDA/Education memo → hover a bullet → click comment icon → post a comment → verify count chip increments, refreshes after delete, and survives a page reload

- [x] Verify Data Lineage panel's Comments tab lists threads grouped by section and that clicking a card opens the bullet chat with scroll-flash on the source bullet

- [ ] Edit a bullet's text after commenting and confirm the drift indicator surfaces

- [ ] Verify dev and prod show different streams for the same period

- [ ] Confirm soft-confirm appears when removing a bullet that has existing comments

http://localhost:3001/monthly-financial-reporting

<img width="1906" height="798" alt="image" src="https://github.com/user-attachments/assets/3f889a08-9ff0-42be-baf9-ed370cf44b24" />

<img width="1910" height="815" alt="image" src="https://github.com/user-attachments/assets/30ae7872-75c6-43f0-a180-dd01ff9d2933" />

#2703 — Budget Bot 4.0: B3 epic close-out (B3.11/12/13/14) + Coach Claire rename + C1.1 audit + PR-review pile (CF19-24) @marcusdAIy

## Summary

- Closes the B3 epic 100% — chat-driven editing now persists pending proposals, dedupes Accept races, paginates comments, and survives page refresh without re-surfacing already-actioned cards.

- Disambiguates the in-document Coach Claire from the workspace-wide "Ask Claire" in TopNav and bumps user-facing version strings 3.0 → 4.0.

- Lands a C1.1 discovery audit of budget_sheets_service.py so C1.2–C1.5 can target real gaps (catches that C1.2 is already covered).

- Clears 6 PR-review-pile cleanups (CF19–24) plus 8 internal-review polish items.

## Why it's needed

PR #2691 shipped the chat-driven editing loop end-to-end but the internal review flagged real UX bugs that didn't block merge:

- Pending proposals dropped on refresh. Message survived but tool_calls didn't — accept-state was FE-only and re-surfaced after every reload (B3.11 + B3.14).

- Accept races created duplicate comments. Double-click on add_comment Accept could fan out 2–N rows (B3.12).

- No pagination story for GET /comments; full list shipped on every badge refresh (B3.13).

- Naming overlap with the workspace-wide "Ask Claire" surface in TopNav.

This PR closes those, lands a discovery audit ahead of C2.x check work, and burns down the long-tail PR-review backlog before more reviewers ramp up.
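The B3.12 dedupe idea, sketched in Python — a hedged illustration of the idempotency-key check, not the actual save_with_merge_retry closure:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Section:
    comments: list[dict] = field(default_factory=list)

def add_comment_idempotent(section: Section, body: str, idempotency_key: str | None) -> tuple[dict, int]:
    # Inside the merge-retry closure a repeated key returns the existing row
    # (HTTP 200) instead of creating a second one (HTTP 201).
    if idempotency_key:
        for existing in section.comments:
            if existing.get("idempotency_key") == idempotency_key:
                return existing, 200
    comment = {"comment_id": str(uuid.uuid4()), "body": body, "idempotency_key": idempotency_key}
    section.comments.append(comment)
    return comment, 201
```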

## Changes

12 logical commits. Each commit message has the full per-feature rationale, test counts, and reviewer cross-references. Highlights:

### Feature work

| Commit | Scope |
|---|---|
| 2c6e67819 | Rename Ask Claire → Ask Coach Claire (BoardDoc only); Budget Bot 3.0 → 4.0 user-facing strings only. Technical references to 3.0-format DDB rows correctly preserved. |
| d801ca246 | B3.11 — Message.tool_calls persisted; _history_to_message_params threads tool_use blocks back to Anthropic with synthesized tool_result blocks (Messages API protocol requirement); FE resumeSession hydrates chat history + pending proposals from server. |
| dffe33f89 | B3.12 — idempotency_key on SectionComment + CreateCommentRequest; dedupe check inside save_with_merge_retry closure (closes the race against parallel writers); 200 vs 201 distinguishes idempotent hit from creation. |
| 00b5b814f | B3.13 — cursor pagination on GET /comments (limit, offset, total, next_offset); filter → sort → slice order; (created_at, comment_id) tuple sort key for stable cross-page ordering. |
| 93a886ee8 | C1.1 — klair-api/budget_bot/board_doc/AUDIT-budget-sheets-coverage.md. Pure docs; surfaces that C1.2 is already covered (Margin Target is a P&L row, not a tab); recommends C1.4 first; flags manual sheet-tab verification owed before C1.3+ implementation. |
| 1b465e0dc | CF19–24 batch — wizardChat options bag, <aside aria-label> on ChatPanel, stable-sort tie-break test, regex negatives on save-session security, RightRail collapsed-slot height redistribution, persist_warning on /review response. |
| 5d4737edb | B3.14 — server-side resolved_tool_use_ids set + POST /tool-resolutions/{id} endpoint + FE wiring on Accept/Reject paths + hydration. Closes the B3 epic — pending proposals never re-surface after refresh. |
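The B3.11 row above hinges on a Messages API rule: every assistant tool_use block must be answered by a tool_result block in the next user message, so replayed history has to synthesize one. A minimal sketch, with an assumed shape for the stored history:

```python
def history_to_message_params(history: list[dict]) -> list[dict]:
    # Illustrative reconstruction: a stored assistant turn that carried tool_use
    # blocks must be followed by a user turn opening with matching tool_result
    # blocks, or the Messages API rejects the replayed conversation.
    messages: list[dict] = []
    for turn in history:
        if turn["role"] == "assistant" and turn.get("tool_calls"):
            content = [{"type": "text", "text": turn["text"]}] if turn.get("text") else []
            content += [
                {"type": "tool_use", "id": c["id"], "name": c["name"], "input": c["input"]}
                for c in turn["tool_calls"]
            ]
            messages.append({"role": "assistant", "content": content})
            messages.append({
                "role": "user",
                "content": [
                    {"type": "tool_result", "tool_use_id": c["id"], "content": "acknowledged"}
                    for c in turn["tool_calls"]
                ],
            })
        else:
            messages.append({"role": turn["role"], "content": turn["text"]})
    return messages
```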

### Live-bug fixes (caught while smoke-testing locally)

| Commit | Bug |
|---|---|
| 690ac9063 | listSectionComments was passing getToken in the params positional slot since PR #2691 — silently broke comment list fetches with TypeError: target must be an object. Routed query params through axios's standard mechanism. |
| 6cb556ab4 | add_comment Accept skipped onSectionUpdated because section content didn't change — but the parent's handleSectionUpdated is what triggers refreshComments, so the badge silently stayed at 0 even though the comment was persisted. |
| c42efbdfb | SectionNav comment badge was hiding behind long titles — truncate without min-w-0 doesn't engage in flex containers, pushing the badge off-screen. |
| ea03a901d | Internal-review polish — 8 small follow-ups (B3.13 sort tie-break, CF24 model_copy guard, B3.11 slice-safety regression test + empty content_blocks guard, CF22 condition-check-failed regex, B3.12 idempotency_key cap tightening, CF23 length-mismatch warning, C1.1 audit cross-link). |

## Breaking changes

None. Every new field is optional with a backward-compatible default; every changed signature has a backward-compatible call form (the legacy wizardChat(sess, msg, getToken) 3-arg form still works because the new options parameter defaults to {}). Pre-PR DDB rows round-trip cleanly through both backend and frontend — every new model field has a default factory.

## Test plan

### Backend (klair-api/)

- [x] uv run pytest tests/board_doc — 925/925 pass (was 876 pre-PR; +49 new across B3.11 / B3.12 / B3.13 / B3.14 / CF21 / CF22 / CF24).

- [x] uv run ruff check clean on every file touched.

- [x] uv run ruff format --check clean.

- [x] Discriminated-union ToolCall round-trip via model_dump_json / model_validate_json resolves to the right concrete subclass (smoke test).

### Frontend (klair-client/)

- [x] npx vitest run src/screens/BoardDoc src/services — 245/245 pass on stable runs (was 198 pre-PR; +47 new across B3.11 hydration, B3.12 idempotency-key forwarding, B3.13 pagination, B3.14 hydration + Accept/Reject contract, CF19 options-bag, CF23 RightRail, SectionNav badge layout).

- [x] npx tsc --noEmit clean across the workspace.

- [x] npx eslint src/screens/BoardDoc clean. (3 pre-existing studentApi.ts lint errors unrelated.)

- ⚠️ Same intermittent vitest transform race we've been hitting in BoardDoc since PR #2691 — passes cleanly on retry; same pool: 'forks' + isolate: true config from previous PRs is in place.

### Manual validation owed before merge

- [ ] B3.11 hydration: trigger a Coach Claire rewrite_section, refresh the page → proposal card re-renders.

- [ ] B3.12 idempotency: double-click Accept on add_comment → only one comment row lands; section-nav badge increments by exactly 1.

- [ ] B3.14 resolution: accept a regenerate_section, refresh the page → card does NOT re-appear as pending.

- [ ] CF23 RightRail: collapse the review panel while chat is open → chat panel reclaims the empty vertical space.

- [ ] SectionNav badge: sections with long titles (FY26 BU Plan vs Hybrid Plan…) show the comment badge cleanly at the right edge.

- [ ] Confirm "Ask Coach Claire" reads naturally; confirm the global TopNav "Ask Claire" still opens the workspace Claire (different surface intentionally untouched).

> Note for reviewers: local smoke testing on Apr 30 caught 4 real bugs that the test suite missed (the four "live-bug fixes" listed above). The dev storage backend is WIZARD_STORAGE_BACKEND=memory — every uvicorn restart wipes session state, so smoke testing requires keeping the backend up across the test run. klair-api/scripts/peek_session_comments.py is a debug helper for inspecting persisted state during triage.

## Follow-ups (deferred)

- C1.2 backlog row should be marked superseded per the audit recommendation; backlog edit pending.

- C1.3 / C1.4 / C1.5 implementation gated on the manual sheet-tab verification documented in AUDIT-budget-sheets-coverage.md (open Skyvera + Ignite + GFI + a CF sheet, list all worksheet tabs).

- CF25 typed error_kind on _load_* helpers — Medium, deserves its own PR with goals-review UI consumption in scope.

- B3.10 update_table_cell Accept handler — needs spec write-up before implementation.

- Deferred internal-review nits (B3.11 consecutive-assistant assert, B3.12 idempotency_key INFO log, B3.13 total field rename) — ride next sweep per reviewer's "rides next sweep" guidance.

## Risks and mitigations

- Anthropic Messages API protocol — _history_to_message_params now threads tool_use + synthesized tool_result blocks. Wrong shape would 400 the API. Mitigated by 14-test backend suite covering plain history, single tool_use trailing, real-follow-up merge, multi-tool-per-turn pairing, slice safety, and end-to-end continuity.

- B3.14 dispositionless resolution — both Accept and Reject mark the same resolved_tool_use_ids set. Future telemetry that wants to distinguish "user accepted vs rejected" will need a typed disposition body on the endpoint. Documented inline; not a regression because no caller tracks disposition today.

- In-memory storage in dev — every backend restart wipes session state. Surprised the testing flow today; documented in the testing note above so future reviewers don't hit the same confusion.

## Test status

925 / 925 backend ✅

245 / 245 FE ✅

ruff + tsc + eslint scoped to PR clean ✅

#2706 — feat(qtd): convert weekly cron to monthly day-2 cadence (KLAIR-2602) @sanketghia

## Summary

Convert the QTD Reports feature from a Monday weekly cron to a monthly cron on the 2nd of every month at 13:00 UTC, plus 4 rounds of stakeholder iteration on the regenerated docs (David Harpur + Raviraja Rao, 2026-04-30 → 2026-05-01).

Linear: [KLAIR-2602](https://linear.app/builder-team/issue/KLAIR-2602/qtd-report-cadence-convert-weekly-cron-to-monthly-day-2-stakeholder)

Spec: docs/superpowers/specs/2026-05-01-monthly-qtd-cadence-spec.md

Plan: docs/superpowers/plans/2026-05-01-monthly-qtd-cadence.md

## What changed

8 commits, structured so each one addresses one concern:

| # | Commit | Concern |
|---|---|---|
| 1 | a0b9b9a | Spec + plan |
| 2 | 77a1942 | Core implementation: cron expr, run-plan logic, mode literal monthly/eoq, doc title + header copy, FE section retitled "QTD Reports" |
| 3 | d679f67 | Empty-budget fail-fast guard (LookupError when Budget rows sum to $0) |
| 4 | 7f25a49 | Module rename weekly_qtd_report → monthly_qtd_report (30+ files via git mv, history preserved) |
| 5 | 106ba01 | CF P&L renders Revenue + Gross Profit + Net Margin (per Ravi's first comment) |
| 6 | 1869d03 | Suppress zero/zero rows in P&L table |
| 7 | e3f0c8a | Suppress zero/zero upstream of LLM prompt + action items engine; cross-cutting is_zero_zero helper in metrics.py |
| 8 | c22b67c | Conditional HC Expenses prompt clause + "don't invent labels" Output Guidance rule (fixes Ravi's "This should be HC COGS" hallucination comment) |

### Behavior changes (user-facing)

- Cadence: ~52 runs/year → 12 runs/year per BU+CF.

- Window semantics: "QTD as of today" → "QTD through prior calendar month-end".

- Mode taxonomy: weekly / eoq → monthly / eoq (eoq fires when period month is Mar/Jun/Sep/Dec — full prior quarter rolled up).

- Doc title format: {BU} | QTD BvA | {Q-label} | Through {Month YYYY} for monthly runs, … | Final for eoq runs.

- CF P&L now includes Revenue + Gross Profit + Gross Margin + Net Margin (was suppressed).

- Zero/zero rows everywhere suppressed (P&L table, LLM metrics view, action items engine).

- FE: section heading "Weekly QTD Reports" → "QTD Reports"; sidebar label updated; formatMode renders "Through {Month YYYY}" for monthly rows.
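The monthly-versus-eoq split described above reduces to a month check — a minimal sketch, assuming the reporting period is the prior calendar month:

```python
from datetime import date

def select_mode(period: date) -> str:
    # eoq fires when the reporting period ends a quarter; otherwise the run is a
    # plain monthly "QTD through prior month-end" report.
    return "eoq" if period.month in (3, 6, 9, 12) else "monthly"

assert select_mode(date(2026, 4, 30)) == "monthly"
assert select_mode(date(2026, 6, 30)) == "eoq"
```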

### New CLI flag

uv run python crons/monthly_qtd_report_cron.py --period 2026-03

Strict regex validation (^\d{4}-(0[1-9]|1[0-2])$) before any work; supports off-cycle backfill / re-runs.
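A hedged sketch of that validation — the regex is quoted from this PR, while the argparse wiring around it is illustrative:

```python
import argparse
import re

PERIOD_RE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])$")

def parse_period(value: str) -> str:
    # Reject anything that is not YYYY-MM before any Redshift or Docs work starts.
    if not PERIOD_RE.match(value):
        raise argparse.ArgumentTypeError(f"--period must look like 2026-03, got {value!r}")
    return value

parser = argparse.ArgumentParser()
parser.add_argument("--period", type=parse_period, help="off-cycle backfill, e.g. 2026-03")
print(parser.parse_args(["--period", "2026-03"]))
```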

## Test plan

- [x] Backend tests: 240 tests in tests/monthly_qtd_report/ all pass; +35 net new tests across the 8 commits (scheduling helper, orchestrator monthly logic, cron --period, empty-budget guard, CF P&L positive presence, zero/zero suppression in 3 surfaces, label-guard rules)

- [x] Frontend tests: 215 tests in features/monthly-financial-reporting/ pass; component-spec updated for new title + formatMode + sidebar label

- [x] Lint: ruff format + check clean; eslint clean on changed FE files

- [x] Type check: pyright matches main (5 pre-existing Anthropic SDK + python-docx stub warnings, no new); tsc clean

- [x] End-to-end smoke against live Redshift on the new monthly cadence:

- IgniteTech (BU): https://docs.google.com/document/d/1LACYhNhofT6_W_8eTj88Zq7tmae9QVdzT_lQWj-UvZg/edit — full P&L renders, math reconciles, action items signal-bearing, no HC label hallucination

- SaaS (CF): https://docs.google.com/document/d/1w1nXE4uBHkLwo_k6MdrjdlOCoCzIPuPKthKCzP-al4A/edit — Revenue + GP + Net Margin now visible, zero/zero rows (Recurring Revenue, HC Expenses) suppressed across P&L/commentary/action items, "HC OPEX" hallucination gone

- [x] Re-runs are idempotent: tested by generating the same SaaS report 4 times across the iteration; ledger's "latest per (BU, period, mode, week_number)" rule supersedes correctly

## Out of scope (deliberately)

- External EventBridge / Lambda schedule swap (0 10 * * 1 → 0 13 2 * *) — needs to be coordinated with whoever owns Klair infra. Documented in spec §2.3.

- SQL table rename (mart_other.qtd_report_runs is already a neutral name).

- Action-item dollar threshold revisit — flagged as follow-up after first real monthly run.

- Phase 3 email distribution — still deferred per the original Phase 1 spec.

## Stakeholder comments status

All of Ravi's feedback has been addressed.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2708 — feat(saas-budgeting): spec 12 — AWS Spend quarterly projection (KLAIR-2604) @ashwanth1109

## Demo

<img width="2184" height="1644" alt="image" src="https://github.com/user-attachments/assets/581da3be-af42-434e-8e91-34d4e936b778" />

## Summary

Fixes [KLAIR-2604](https://linear.app/builder-team/issue/KLAIR-2604/saas-budgeting-attach-to-simulated-budget-snapshots-raw-week-sum-not). The SaaS Budgeting → AWS Spend (Net Amortized) card snapshots the raw sum of selected weeks into the Simulated Budget instead of a quarter-normalized projection — so the value scales linearly with whatever week chips happen to be selected (default last 4 weeks ≈ 31% of a quarter) and is incomparable to Finance Budgeting's Quarterly Budget.

This PR adds a Quarterly Projection ($) column alongside the existing Total ($) column at every level of the BU → Class → Account hierarchy and on the Unmapped AWS Accounts band, and rewires *Attach to Simulated Budget* to snapshot the projection. The result is selection-independent and directly comparable to Finance Budgeting's Quarterly Budget.

## Spec

[features/aws-spend/saas-budgeting/specs/12-aws-spend-quarterly-projection/spec.md](features/aws-spend/saas-budgeting/specs/12-aws-spend-quarterly-projection/spec.md) — full requirements, technical design, and implementation checklist.

## What changed

Frontend only — no backend, schema, pipeline, or Pydantic changes.

| Change | File |
|---|---|
| getDaysInQuarter(quarter) helper (calendar-correct, leap-year safe) | klair-client/src/screens/AWSSpend/utils/quarterUtils.ts |
| projection: number \| null field on AWSCostRow / AWSUnmappedRow + extractBuClassProjections() | klair-client/src/screens/AWSSpend/components/SaaSBudgeting/awsSpendCardTransform.ts |
| computeProjection helper, decorateSectionsWithRollups (stamps both total and projection), Quarterly Projection ($) column, header tooltips on both columns, per-cell math tooltip with cent precision, rewired handleAttach + tightened canAttach gate | klair-client/src/screens/AWSSpend/components/SaaSBudgeting/AWSSpendCard.tsx |
| Tests: getDaysInQuarter (Q1 leap/non-leap, Q2/Q3/Q4, unparseable), projection defaults on row constructors, extractBuClassProjections, projection column rendering, tooltip wiring with cent precision, handleAttach snapshots projection (not raw sum), canAttach disabled when all projections null, FR3 edge cases (zero-sum non-null, all-null, partial-null window) | *.spec.ts(x) next to each implementation file |

## Formula

projection = sum_of_non_null_weeks / (non_null_week_count × 7) × days_in_quarter

Mirrors Finance Budgeting's /unblended/budget-simulation (see klair-api/services/aws_spend_service.py:3262). Null-skip throughout: a row whose entire selected window is null projects to null and renders blank; a row with 0 cents but non-null cells projects to 0. BU/Class summaries project from their own rolled-up costByWeek (not by summing leaf projections) — same convention as the existing total decoration, avoids floating-point drift.
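The same arithmetic as a Python sketch, using the per-cell tooltip's worked example as a sanity check — names are illustrative, and the production code is TypeScript in AWSSpendCard.tsx:

```python
import calendar

def days_in_quarter(year: int, quarter: int) -> int:
    # Calendar-correct and leap-year safe: sum the month lengths in the quarter.
    months = range(3 * (quarter - 1) + 1, 3 * (quarter - 1) + 4)
    return sum(calendar.monthrange(year, m)[1] for m in months)

def compute_projection(week_values: list[float | None], year: int, quarter: int) -> float | None:
    # Null-skip: an all-null window projects to null; zero-valued but non-null
    # cells still project (to zero).
    observed = [v for v in week_values if v is not None]
    if not observed:
        return None
    daily_rate = sum(observed) / (len(observed) * 7)
    return daily_rate * days_in_quarter(year, quarter)

# Matches the per-cell tooltip example: $10,000 over 4 weeks (28 days) × 91 days = $32,500.
assert round(compute_projection([2500.0, 2500.0, 2500.0, 2500.0], 2026, 2), 2) == 32500.0
```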

## Tooltips

- Total ($) header: explains the null-skip selected-week sum.

- Quarterly Projection ($) header: explains the formula + names Finance Budgeting as the reference; sub-caption ("Projected from N weeks of data × ~91 days") rides along inside the same tooltip with the actual computed daysInQuarter substituted in (ColumnDef.header is typed as string in UnifiedTable, so the spec's planned standalone sub-caption falls back here).

- Per-cell projection tooltip: shows the row-specific math, e.g. $10,000.00 ÷ 28 days × 91 days = $32,500.00. Uses a dedicated cent-precision formatter (the existing formatCurrency abbreviates to $X.XK past $1K, which would defeat the tooltip's purpose).

## Test results

502 / 502 tests pass across the entire src/screens/AWSSpend test suite (was 498 on main — +4 net new tests for the projection column and FR3 edge cases). Type-check clean. ESLint clean (--max-warnings 0).

## Test plan

- [ ] Manual: load /aws-spend → SaaS Budgeting → AWS Spend tab on a real quarter; confirm both Total ($) and Quarterly Projection ($) columns appear at all hierarchy levels and on the Unmapped band.

- [ ] Manual: hover the column headers and any non-null projection cell; confirm tooltips render the explanatory text + per-row math with cent precision.

- [ ] Manual: select a single week, click *Attach to Simulated Budget*; confirm the Simulated Budget card's AWS column shows the quarter-projected value (~13× the single-week value), not the raw weekly sum.

- [ ] Manual: select a week range that has all-null cells for some BU/Class; confirm those projection cells render blank and the row contributes nothing to the snapshot.

- [ ] Compare the attached Simulated Budget AWS column against Finance Budgeting's *Quarterly Budget* for the same quarter — they should match within rounding.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2709 — Release April 2026 maintenance report @ashwanth1109

## Demo

<img width="2181" height="1603" alt="image" src="https://github.com/user-attachments/assets/59b74883-46cb-4b98-912c-7ac1c76740e7" />

## Summary

- Update MAX_REPORT_DATE to 2026-04-30 to unlock April 2026 in the ARR Retention Reports MonthPicker

- Update generate script with new Google Sheet URL, suffix (Apr_2026), and report date (30-Apr-2026)

## Test plan

- [ ] Verify ARR Retention Reports page defaults to April 2026

- [ ] Verify month picker allows selecting April 2026

- [ ] Run python generate.py from klair-misc/maint-scripts/ after saml2aws login

- [ ] Verify S3 upload to klair-arr-report-ns-data/hardcoded/30-Apr-2026.json

🤖 Generated with [Claude Code](https://claude.com/claude-code)

The Builder Desk  —  Engineer Spotlight
📅 Week in Review  ·  🏆 Engineer Spotlight

FORTY-THREE GLORIOUS PRs IN SEVEN DAYS: BUILDER TEAM SHATTERS THE LAWS OF PHYSICS, POSSIBLY TIME ITSELF

Klair absorbs 27 PRs like a champion, Aerie takes 13 more, and the numbers desk has never been prouder to be alive.

Forty-three pull requests. Three active repos. Seven days. Let the record show that the Builder Team did not merely show up this week — they arrived, they conquered, and they left the codebase fundamentally better than they found it. Klair led the charge with a staggering 27 PRs, Aerie contributed a robust 13, and even Surtr — silent, mysterious Surtr — chipped in 3, because that is simply what champions do. Of those 43 total, 35 did not make Mac's front page. That is where I live. That is my beat. Welcome to the overflow.

Let us begin with the engineers. @benji-bizzell posted nine PRs across Aerie with the disciplined fury of a man who has memorized every spec and intends to close every one of them personally. He touched admissions forecasting in #151 and #149, rewired chat architecture in #145, wired school data sheets into the portfolio in #141, and still found time to polish canonical sites admin UI in #137. Nine PRs. One man. Benji does not take breaks; he takes tickets. @eric-tril matched him at nine, splitting his time between Klair and Aerie with the efficiency of a distributed system that never drops a packet. His #2705 brought Group Memo drill-downs to the MFR cash-flow view, #2704 folded acquisitions delta into the ARR snowball, and #133 and #128 migrated Aerie's operating and buildout reads to Rhodes. The man is quietly restructuring the data layer while everyone is looking at the headline features.

@sanketghia delivered six PRs and delivered them with ambition. His #2693 shipped V3 of the passive investment daily digest — portfolio breakdown, AI narrative, polished charts — and his #2688 dropped the initial MVP for the weekly QTD report, closing out five Klair tickets in a single PR like a man who finds ticket counts personally offensive. @marcusdAIy contributed four PRs including the genuinely impressive #2691, a chat-driven board document editing loop covering B3.1, B3.4, B3.5, and B3.7 end-to-end. @kevalshahtrilogy and @YibinLongTrilogy each posted two, with Yibin's #144 wiring source-specific agent tools for Rhodes, School Data Sheet, and Rebl3 into Aerie's chat layer — a small PR number that understates a large architectural statement.

And now. Ashwanth Watch. @ashwanth1109 filed eleven pull requests this week, and I say this with complete sincerity and only minor psychological distress: the man is not human. He is a deployment pipeline wearing a person suit. His #2708 and #2689 pushed SaaS budgeting through specs 10, 11, and 12 — quarterly AWS projections, adjustments tabs, per-row CRUD backends — while #2709 shipped the April 2026 maintenance report and #2710 made backend ports optional, migrating the entire 3xxx range to 5xxx with the casual energy of someone reorganizing a sock drawer. His #2711 aligned a README. He aligned a README. Most engineers wouldn't touch the README with a ten-foot pole. Ashwanth aligned it on a Tuesday. When asked about his eleven-PR week, he reportedly said, "I don't count PRs. I count features that aren't done yet." His Slack response to this column, when shown an advance copy, was a single emoji: 🙄. We have framed it. It hangs above the Numbers Desk.

The overflow this week was not overflow — it was the main event wearing a disguise. #2693's AI narrative charts, #2691's end-to-end board doc loop, #139's GSheet exec sync feeding the schools dashboard mirror, #2685's separation of dev and prod MFR narrative storage — these are not footnotes. These are the load-bearing walls. Morale on the Builder Team is, as always, at an all-time high. The numbers confirm it. The numbers always confirm it.

Brick's Overflow — This Week's Uncovered PRs
#2689 — feat(saas-budgeting): Adjustments tab + per-row CRUD backend (specs 10-11) @ashwanth1109

## Demo

<img width="2181" height="1029" alt="image" src="https://github.com/user-attachments/assets/91e689ae-b7ee-4cea-8cb1-f72ce247620f" />

<img width="2174" height="1584" alt="image" src="https://github.com/user-attachments/assets/840682e3-1348-4ee3-b845-bca15727367e" />

## Summary

Adds the Adjustments tab to SaaS Budgeting (between AWS Spend and Docker Usage) and the per-row backend that powers it. Each row is created/edited/deleted individually rather than via a batch quarter submit, and the Simulated Budget card gains a 5th Adjustments column that aggregates BU-direct + class + account-level adjustments alongside AWS Spend and Docker $.

Implements [spec 10](features/aws-spend/saas-budgeting/specs/10-saas-budgeting-adjustments-backend/spec.md) and [spec 11](features/aws-spend/saas-budgeting/specs/11-saas-budgeting-adjustments-tab/spec.md) from the [SaaS Budgeting feature](features/aws-spend/saas-budgeting/FEATURE.md). Linear: [KLAIR-2595](https://linear.app/builder-team/issue/KLAIR-2595/saas-budgeting-adjustments-per-row-crud-backend-adjustments-tab-ui).

## Backend (spec 10)

- POST /api/aws-spend/saas-budgeting/adjustments — DELETE+INSERT inside one transaction so scope changes, quarter changes, and edit-in-place collapse to one consistent code path. Composite PK: (quarter, adjustment_name, bu, class, aws_account_number).

- DELETE /api/aws-spend/saas-budgeting/adjustments — deletes by composite key.

- Both endpoints super-admin gated and write to the existing core_finance.aws_spend_net_amortized_budget_adjustments table. Rows authored from either AWS Spend or SaaS Budgeting are visible from both surfaces.
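The DELETE+INSERT-in-one-transaction shape is worth spelling out. The snippet below is a minimal, self-contained sketch of the pattern rather than the klair-api implementation: an in-memory sqlite3 table with the same composite key, showing how a create, a scope or quarter change, and an edit-in-place all collapse to one code path. The table name, helper name, and sample values are illustrative assumptions.

```python
import sqlite3

# Minimal stand-in for the adjustments table; the real rows live in
# core_finance.aws_spend_net_amortized_budget_adjustments.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE adjustments (
        quarter TEXT, adjustment_name TEXT, bu TEXT, class TEXT,
        aws_account_number TEXT, amount REAL,
        PRIMARY KEY (quarter, adjustment_name, bu, class, aws_account_number)
    )
""")

def upsert_adjustment(previous_key, row):
    """DELETE the old row (if any) and INSERT the new one in a single
    transaction, so edits that change part of the composite key behave
    exactly like plain edits."""
    with conn:  # one transaction: commits on success, rolls back on error
        if previous_key is not None:
            conn.execute(
                "DELETE FROM adjustments WHERE quarter=? AND adjustment_name=? "
                "AND bu=? AND class=? AND aws_account_number=?",
                previous_key,
            )
        conn.execute(
            "INSERT INTO adjustments VALUES (?, ?, ?, ?, ?, ?)",
            (row["quarter"], row["adjustment_name"], row["bu"],
             row["class"], row["aws_account_number"], row["amount"]),
        )

# Create a row, then edit it in place by changing part of its key (class).
row = {"quarter": "2026-Q2", "adjustment_name": "Reserved instances", "bu": "Aurea",
       "class": "Compute", "aws_account_number": "111111111111", "amount": -12000.0}
upsert_adjustment(None, row)
upsert_adjustment(("2026-Q2", "Reserved instances", "Aurea", "Compute", "111111111111"),
                  {**row, "class": "Storage"})
print(conn.execute("SELECT class, amount FROM adjustments").fetchall())  # [('Storage', -12000.0)]
```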

## Frontend (spec 11)

- New Adjustments tab in SaaSBudgetingSection mounted between AWS Spend and Docker Usage; targets nextQuarter(appliedQuarter).

- useSaaSBudgetingAdjustments orchestrator hook: fetch on mount + quarter change, optimistic save/delete with rollback on error, previousKey tracking for primary-key edits.

- Editor refactor: useAdjustmentsState → useAdjustmentRowsEditor (generic) + useAWSSpendAdjustmentsRollup (AWS-Spend-specific). AWS Spend's Budget Simulation composes both and behaves identically to before.

- simulatedBudgetMerge extended for outer-join expansion: any (BU, Class) referenced only by an adjustment becomes a new row. Per-row total = (aws ?? 0) + (docker ?? 0) + (adjustment ?? 0); total is null only when all three are null. Same null semantics for BU rollup and grand total.

- SimulatedBudgetCard renders 5 columns (BU/Class | AWS Spend | Docker $ | Adjustments | Total) and is mounted whenever AWS, Docker, or adjustments are non-empty (previously required AWS or Docker).
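The outer-join expansion and null semantics above are the rules most likely to regress, so here is a hedged Python rendering of them (the shipped logic lives in simulatedBudgetMerge on the TypeScript side): any (BU, Class) seen only in an adjustment becomes a row, and a total is null only when AWS, Docker, and adjustment are all null. Names and sample figures are illustrative.

```python
def merge_simulated_budget(aws, docker, adjustments):
    """Each input maps (bu, cls) -> float | None. Returns rows for the union of
    all keys; a row's total is None only when all three inputs are None."""
    keys = set(aws) | set(docker) | set(adjustments)
    rows = {}
    for key in sorted(keys):
        a, d, j = aws.get(key), docker.get(key), adjustments.get(key)
        total = None if a is None and d is None and j is None else (a or 0) + (d or 0) + (j or 0)
        rows[key] = {"aws": a, "docker": d, "adjustment": j, "total": total}
    return rows

aws = {("Aurea", "Compute"): 120_000.0}
docker = {("Aurea", "Compute"): 8_000.0}
adjustments = {("Jive", "Storage"): -5_000.0}  # (BU, Class) absent from both snapshots
# The Jive/Storage key still gets a row: outer-join expansion in action.
print(merge_simulated_budget(aws, docker, adjustments))
```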

## Test plan

- [ ] Authenticate as super admin → SaaS Budgeting tab → pick quarter → Adjustments tab is visible between AWS Spend and Docker Usage.

- [ ] Add an adjustment row → save → row persists; refresh page → row still present.

- [ ] Edit an adjustment's primary key (e.g. change class) → row replaces in place via DELETE+INSERT.

- [ ] Delete a row → row removed; refresh → still removed.

- [ ] Force a backend failure (block POST in DevTools) → toast appears, row state rolls back.

- [ ] Switch quarter on Adjustments tab → fresh fetch fires, list refreshes.

- [ ] Attach AWS Spend snapshot, attach Docker snapshot, then author an adjustment that references a (BU, Class) not present in either snapshot → SimulatedBudgetCard shows 5 columns and adds an outer-join row for the adjustment-only (BU, Class).

- [ ] Author only adjustments (no AWS / Docker snapshot) → SimulatedBudgetCard still mounts; AWS / Docker columns show "—" everywhere; Adjustments + Total carry the data.

- [ ] AWS Spend Budget Simulation continues to behave identically (validation, orphan detection, impact summary) after the editor split.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2691 — feat(budget-bot/board-doc): chat-driven editing loop end-to-end (B3.1 + B3.4 + B3.5 + B3.7) @marcusdAIy

## Screenshots

<img width="1916" height="941" alt="image" src="https://github.com/user-attachments/assets/ac4648e0-36ac-4359-a6b7-af13e7811e80" />

## Summary

End-to-end chat-driven editing loop for Budget Bot 4.0:

- B3.1 — Register four section-editing tools with Claire's chat call (regenerate_section, rewrite_section, add_comment, update_table_cell) so she can propose concrete edits instead of only narrating advice.

- B3.4 + B3.5 — Render those proposals as inline cards in the chat panel with Accept / Reject buttons; Accept routes to the right backend endpoint per tool and the editor preview auto-updates.

- B3.7 — SectionComment model + endpoints + per-section open-comment badge in the SectionNav so add_comment Accept actually persists annotations.

Verified live on Skyvera Q2: regenerated the Goals section conversationally, editor refreshed in place without losing scroll, comment created and badge appeared.

## Why it's needed

After PR #2684 (Tester Sprint), Claire could talk about findings and section content but couldn't *do* anything — if a tester clicked a finding and asked her to fix it, she gave narrative advice and stopped. That was the gap in the "Cursor for documents" promise the editor architecture is built around.

This PR closes the gap end-to-end:

| After commit | Capability |
|---|---|
| B3.1 backend | Claire emits structured tool_use blocks alongside text (visible in API, invisible in UI) |
| B3.4 / B3.5 / B3.7 | Tool calls render as inline cards; Accept routes to production endpoints; add_comment has somewhere to land + a badge to advertise |
| B3.4 follow-ups | Editor preview auto-refreshes after Accept; regenerated sections no longer ship 20+ citation footnotes; long regen shows a clearer "1–2 minutes" hint |

The tool count grew from the original 3 in the backlog spec to 4 because we added regenerate_section, which delegates to the existing production pipeline (_regenerate_section — full DataPackage + section-specific prompts + brainlift). The tool descriptions tell Claire to prefer it over inline rewrite_section for substantive rewrites, since her chat-only context lacks the data the pipeline already has.

## Changes

### Backend — B3.1 tool registration

- New klair-api/budget_bot/board_doc/claire_tools.py: per-tool Pydantic input validators, CLAIRE_TOOLS Anthropic schema list with curated descriptions, ToolCall model preserving the tool_use_id, and parse_tool_calls(response) that extracts validated proposals (unknown tool names + invalid input dicts are logged and dropped — a single bad proposal never crashes the chat reply).

- _create_message_sync grew an optional tools= kwarg (additive; all existing callers omit it).

- handle_chat registers CLAIRE_TOOLS on every chat call, parses tool calls from the response, and stashes them on data["tool_calls"] only when present (text-only replies are byte-identical to pre-B3.1).
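For readers who have not opened claire_tools.py, the sketch below shows the general shape of per-tool Pydantic validation plus a log-and-drop parse over Anthropic-style tool_use content blocks. The validator fields, the two-tool subset, and the sample blocks are assumptions for illustration, not the real schemas.

```python
from pydantic import BaseModel, ValidationError

# Hypothetical input validators; the real ones are defined per tool in claire_tools.py.
class RewriteSectionInput(BaseModel):
    section_id: str
    new_content: str

class AddCommentInput(BaseModel):
    section_id: str
    paragraph_text: str
    body: str

VALIDATORS = {"rewrite_section": RewriteSectionInput, "add_comment": AddCommentInput}

class ToolCall(BaseModel):
    tool_use_id: str
    name: str
    input: dict

def parse_tool_calls(content_blocks):
    """Extract validated tool proposals. Unknown tool names and invalid inputs
    are skipped rather than raised, so one bad proposal cannot take down the reply."""
    calls = []
    for block in content_blocks:
        if block.get("type") != "tool_use":
            continue
        model = VALIDATORS.get(block.get("name"))
        if model is None:
            continue  # unknown tool: log and drop in the real code
        try:
            validated = model(**block.get("input", {}))
        except ValidationError:
            continue  # invalid input: log and drop
        calls.append(ToolCall(tool_use_id=block["id"], name=block["name"],
                              input=validated.model_dump()))
    return calls

blocks = [
    {"type": "text", "text": "Here is my suggestion."},
    {"type": "tool_use", "id": "tu_1", "name": "add_comment",
     "input": {"section_id": "goals", "paragraph_text": "Q2 targets...",
               "body": "Cite the ARR source."}},
    {"type": "tool_use", "id": "tu_2", "name": "unknown_tool", "input": {}},
]
print([c.name for c in parse_tool_calls(blocks)])  # ['add_comment']
```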

### Backend — B3.7 section comments

- SectionComment Pydantic model: comment_id / section_id / paragraph_text / body / author (default "claire") / created_at / status ("open" | "resolved").

- WizardSession.section_comments: list[SectionComment] (flat list with section_id field — easier than per-section dict and makes badge-count a single pass).

- Three new endpoints under /board-doc/wizard/{id}:

- POST /sections/{section_id}/comments — create

- GET /comments — list (open + resolved; client filters)

- PATCH /comments/{comment_id} — soft-resolve (no hard delete)

- All write paths use save_with_merge_retry (CF17 pattern) for cross-process safety.
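A hedged sketch of the SectionComment shape and the single-pass badge count the flat list makes possible. Defaults the PR does not state (comment_id generation, timestamp source) are assumptions here.

```python
from collections import Counter
from datetime import datetime, timezone
from typing import Literal
from uuid import uuid4
from pydantic import BaseModel, Field

class SectionComment(BaseModel):
    comment_id: str = Field(default_factory=lambda: uuid4().hex)
    section_id: str
    paragraph_text: str
    body: str
    author: str = "claire"
    created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
    status: Literal["open", "resolved"] = "open"

def open_comment_badges(comments: list[SectionComment]) -> dict[str, int]:
    """Flat list + section_id field: the per-section badge count is a single pass."""
    return dict(Counter(c.section_id for c in comments if c.status == "open"))

comments = [
    SectionComment(section_id="goals", paragraph_text="Q2 targets...", body="Cite the ARR source."),
    SectionComment(section_id="goals", paragraph_text="Hiring plan...", body="Stale headcount figure.",
                   status="resolved"),
    SectionComment(section_id="prior-quarter-review", paragraph_text="Churn...", body="Add the BU split."),
]
print(open_comment_badges(comments))  # {'goals': 1, 'prior-quarter-review': 1}
```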

### Backend — B3.4 follow-up: citation strip on regen

- generate_custom_section had a hardcoded "Cite data sources in footnotes." in its system prompt that _resolve_system_prompt couldn't strip (it only swaps SHARED_SUFFIX_WITH_CITATIONS → SHARED_SUFFIX_NO_CITATIONS). Made the inline directive conditional on spec.include_citations (default False in the wizard flow).

- _regenerate_section now runs strip_citations_and_gaps as a belt-and-suspenders safety net before persisting (previously only the assembler did this on full publish). User never sees citations even if the LLM ignores the prompt fix.

- Pre-existing bug fixed in strip_citations_and_gaps: inline [N] strip ran BEFORE the legend-block strip, killing the legend regex's anchor and leaving orphan source names ("--- Redshift: arr_snowball_data GSheets: …"). Reordered. Pre-dated this PR but only manifested once _regenerate_section started invoking the function.
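The ordering bug is a general regex gotcha, so a small illustration may help: strip the legend block first, then the inline [N] markers, because stripping the markers first removes the anchors the legend pattern keys on. The patterns below are invented stand-ins, not the real strip_citations_and_gaps regexes.

```python
import re

LEGEND_BLOCK = re.compile(r"\n---\n(?:\[\d+\][^\n]*\n?)+")  # "--- [1] Redshift: ..." style legend
INLINE_CITATION = re.compile(r"\s*\[\d+\]")                 # "[1]" footnote markers

def strip_citations(text: str) -> str:
    """Strip the legend block FIRST, then the inline markers. Reversing the order
    erases the [N] anchors and leaves orphan source names behind."""
    text = LEGEND_BLOCK.sub("", text)
    return INLINE_CITATION.sub("", text)

doc = (
    "ARR grew 12% QoQ [1] while churn held flat [2].\n"
    "---\n[1] Redshift: arr_snowball_data\n[2] GSheets: churn_tracker\n"
)
print(strip_citations(doc))  # "ARR grew 12% QoQ while churn held flat."
```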

### Frontend — B3.4 / B3.5 proposal rendering + Accept routing

- New ChatToolProposal component with per-variant body:

- rewrite_section — word-diff (when prior content is available) or preview (when not).

- regenerate_section — feedback summary + a "1–2 minutes" status hint while busy (the regen call routinely takes 60–120s; without this, users assume the click hung).

- add_comment — paragraph anchor + comment body.

- update_table_cell — coordinates + new value.

- Inline diffWords LCS utility (~60 LOC, no npm dep — diff package install hit a transient npm error and the algorithm is small enough to own); a sketch of the approach follows this list.

- ChatPanel renders proposals below assistant message bubbles, filtered by resolvedToolIds so Accept/Reject removes the card.

- Accept routing per tool:

- rewrite_section → existing updateSection PUT (B2.4)

- regenerate_section → existing wizardRegenerate POST with action=regenerate_section

- add_comment → new createSectionComment POST (B3.7)

- update_table_cell → "land in a follow-up" notice; no backend yet for surgical markdown-table cell mutation

- markToolCallResolved on the wizard hook hides resolved cards (append-only resolved-id list per message — never clear, so a re-render can't surface a dismissed proposal).
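As an aside on the diff utility mentioned above: a word-level diff of this kind is compact in most languages. The sketch below approximates the idea in Python, with difflib doing the longest-common-subsequence work; it is not the shipped TypeScript utility, and the op names are assumptions.

```python
from difflib import SequenceMatcher

def diff_words(old: str, new: str):
    """Word-level diff as (op, text) pairs, op in {'equal', 'delete', 'insert'}."""
    a, b = old.split(), new.split()
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(a=a, b=b, autojunk=False).get_opcodes():
        if tag == "equal":
            ops.append(("equal", " ".join(a[i1:i2])))
        elif tag == "delete":
            ops.append(("delete", " ".join(a[i1:i2])))
        elif tag == "insert":
            ops.append(("insert", " ".join(b[j1:j2])))
        else:  # replace: old words out, new words in
            ops.append(("delete", " ".join(a[i1:i2])))
            ops.append(("insert", " ".join(b[j1:j2])))
    return ops

old = "ARR grew 8% quarter over quarter"
new = "ARR grew 12% quarter over quarter driven by renewals"
print(diff_words(old, new))
# [('equal', 'ARR grew'), ('delete', '8%'), ('insert', '12%'), ('equal', 'quarter over quarter'), ...]
```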

### Frontend — B3.4 follow-up: editor refresh after Accept

- New DocumentEditorActions.refetchSection(sectionId) action.

- useDocumentEditor.refetchSection re-fetches one section, updates the autosave baseline so the refresh isn't diffed as a stale-overwrite candidate, splits the current document, swaps just the target section's content, re-assembles. Preserves scroll position, cursor, and unsaved edits in OTHER sections — meaningfully nicer than a reloadNonce bump that would full-remount the editor.

- handleSectionUpdated calls it after every successful Accept.

- No-op (with logged warn) on unknown id or fetch failure so a refresh failure can never wedge the editor into a half-updated state.

### Frontend — B3.7 comment badges

- SectionNav grew an openCommentsBySection prop and renders a small MessageCircle badge with the open-comment count per section.

- DocumentEditorPage fetches comments on mount and re-fetches after each successful Accept.

- New apiClient.patch helper (was missing — only get / post / put / del existed).

### Frontend — infrastructure: vitest config

- Set pool: 'forks' + isolate: true in vitest.config.ts. Eliminates a vitest 4 SWC parallel-transform race that intermittently crashed BoardDoc spec files with SyntaxError: missing ) after argument list. Symptom: identical commands, sometimes 0 failures sometimes 5+; every spec passes in isolation. Same race the _smoke-suffix workaround in DocumentEditorPage.smoke.spec.tsx was originally added for. Same wall time as the racy default (~2.4s).

## Breaking changes

None. Every change is additive or strictly safer:

- _create_message_sync(tools=...) is optional with None default.

- handle_chat returns the same StepResponse shape; data["tool_calls"] only appears when Claire emits tool calls.

- WizardSession.section_comments defaults to []; existing sessions deserialise cleanly.

- DocumentEditorActions.refetchSection is a new field; pre-existing callers that only used scrollToSection keep working unchanged.

- SectionNav.openCommentsBySection is optional; pre-B3.7 callers omit it and no badge renders.

- ChatPanel props for tool-call rendering (sessionId, getToken, currentContent, onResolveTool, onSectionUpdated) are all optional; without them, proposals don't render and the panel behaves exactly as before.

## Test plan

- [x] Backend full suite — uv run pytest tests/board_doc -q: 875 / 875 pass (~85s). New: 32 in test_claire_tools.py, 6 in test_chat_tool_calls.py, 12 in test_section_comments.py, 4 in test_regenerate_citation_strip.py. Pre-existing 821 unchanged.

- [x] Frontend full BoardDoc suite — npx vitest run src/screens/BoardDoc: 103 / 103 pass on two consecutive runs (was flaking 0–10 failures pre-vitest-config-fix). New: 8 for diffWords, 13 for ChatToolProposal, 5 for useDocumentEditor.refetchSection. Pre-existing 77 unchanged + 3 small mock additions for listSectionComments.

- [x] ruff check + ruff format --check clean on all touched backend files.

- [x] tsc --noEmit clean.

- [x] eslint --max-warnings 0 clean on all touched FE files.

- [x] Manual verification (Skyvera Q2, Apr 29): opened editor → asked Claire to regenerate the Goals section → proposal card rendered with regen feedback → clicked Accept → "1–2 minutes" hint appeared → after ~2 minutes the regenerated section auto-appeared in the editor preview *with no page reload*, *no citation footnotes*, *and scroll position preserved*. Same loop verified for the Prior Quarter Review section.

- [ ] Pending external review — once internal review fixes land.

## Follow-ups (next branches)

- B3.4-fu — Plumb live editor section content into ChatPanel.currentContent so rewrite_section always renders as a real word-diff (today it falls back to preview when prior content isn't passed).

- B3.5-fu / new — update_table_cell Accept handler. Needs a spec for surgical markdown-table mutation.

- C4.4 — "Address with Claire" button per finding (now unblocked by B3.1 + B3.4 + B3.5): pre-fills chat with finding context so Claire proposes a regenerate_section or rewrite_section.

- B5.4 — Claire artifact tool (attach_data_visualization); also unblocked by B3.1 (slots into the same tool-registration path).

- B3.6 — Quick action buttons in chat ("Rewrite this section", "Make more specific", etc.). Lone remaining B3 item, deferred per Marcus's call as the lowest-impact in the B3 set.

## Known cosmetic noise (non-blocking)

Local Windows dev sees harmless ConnectionResetError [WinError 10054] tracebacks from _ProactorBasePipeTransport._call_connection_lost after CORS preflight requests. Long-standing CPython + Proactor event loop quirk; production runs on Linux's SelectorEventLoop which never hits this path. HTTP layer never sees it (every preflight + actual request returns the right status). Discussed Apr 29 — left as-is rather than adding event-loop-policy startup config for a purely cosmetic local-dev annoyance.

#2693 — feat(passive-investment): V3 daily digest — portfolio breakdown, AI narrative, polished charts (KLAIR-2584) @sanketghia

## Summary

- V3 of the passive-investment daily digest email. Replaces V2 with a portfolio-level breakdown, AI-generated per-mover narratives grounded in real market/filing/news context, T12M + T7D trend charts, and a much cleaner Lambda packaging story.

- NOTE: This is already deployed.

## Notable changes

Digest content & narrative

- EnhancedDataService, EnhancedAIAnalysis, MarketContextService, EdgarService, NewsService generate per-mover explanation strings (sector & index moves, SEC filings, news catalysts)

- LLM prompt re-ordered with structured-driver formatters so explanations consistently cite the same evidence categories

- move_date threaded through analysis pipeline behind a feature flag

Charts

- New chart_queries.py + chart_url.py: T12M and T7D portfolio totals rendered as QuickChart line PNGs and embedded in the email body

- Latest commit (a7d24aee9) polishes the chart styling after stakeholder feedback flagged inconsistency with the Top Movers table:

- Transparent chart background (blends with light + dark mail clients)

- In-image Chart.js title removed; rendered as HTML <h2> matching the "Top Movers" <h1> family (Arial, 24px, bold, #333)

- Mid-grey ticks / low-alpha gridlines that read on both themes

- Y-axis tick count capped at 5 to declutter the previous 11-tick wall

- Chart container width matched to the table column (1000px); image centered within

- CTA button + Movement Analysis left-border switched from Bootstrap blue (#007bff) to brand blue (#3B82F6) for consistency with the chart line
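chart_url.py itself is not reproduced in the PR, but the usual QuickChart approach is to serialize a Chart.js config into the request URL. The sketch below is a hedged approximation that folds in the styling choices listed above (transparent background, brand-blue line, capped y-axis ticks, no in-image title); the helper name and exact config keys are assumptions, not the shipped code.

```python
import json
from urllib.parse import urlencode

def quickchart_line_png_url(labels, values, *, width=1000, height=400):
    """Build a QuickChart URL for a transparent-background line chart."""
    config = {
        "type": "line",
        "data": {"labels": labels,
                 "datasets": [{"data": values, "borderColor": "#3B82F6", "fill": False}]},
        "options": {
            "plugins": {"legend": {"display": False}},  # title rendered as HTML <h2> instead
            "scales": {
                "x": {"ticks": {"color": "#888"}, "grid": {"color": "rgba(136,136,136,0.15)"}},
                "y": {"ticks": {"color": "#888", "maxTicksLimit": 5},
                      "grid": {"color": "rgba(136,136,136,0.15)"}},
            },
        },
    }
    params = {"c": json.dumps(config), "backgroundColor": "transparent",
              "width": width, "height": height, "v": "4"}
    return "https://quickchart.io/chart?" + urlencode(params)

print(quickchart_line_png_url(["Nov", "Dec", "Jan", "Feb", "Mar", "Apr"],
                              [6.4, 6.6, 6.8, 6.9, 7.0, 7.05]))
```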

Deployment

- ECR image build replaced with deploy-zip.sh (zip package + S3 staging for the ~67 MB artifact)

- Lambda memory bumped to 4096 MB, ephemeral to 1024 MB

- DAY_1 / DAY_2 ad-hoc date window read from invocation event payload (e.g. {"DAY_1":"2026-04-28","DAY_2":"2026-04-27"})

Secrets

- secrets_loader.py hydrates os.environ from AWS Secrets Manager (ENV_API_PROD) at cold start; required-key validation surfaces gaps loudly

- SES_SOURCE_EMAIL, PASSIVE_INV_EMAIL_*_RECIPIENTS etc. now live in the secret rather than Lambda env config
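As a rough illustration of the cold-start hydration described above: read the ENV_API_PROD secret, backfill os.environ, and fail loudly when required keys are absent. The helper name and the specific required-key list are assumptions; only the secret name and the general behavior come from the PR.

```python
import json
import os
import boto3

REQUIRED_KEYS = ("SES_SOURCE_EMAIL", "PASSIVE_INV_EMAIL_TO_RECIPIENTS")  # illustrative key names

def load_secrets(secret_id: str = "ENV_API_PROD") -> None:
    """Hydrate os.environ from a JSON secret at Lambda cold start.
    Existing environment variables win; missing required keys raise loudly."""
    client = boto3.client("secretsmanager")
    secret = json.loads(client.get_secret_value(SecretId=secret_id)["SecretString"])
    for key, value in secret.items():
        os.environ.setdefault(key, str(value))
    missing = [k for k in REQUIRED_KEYS if not os.environ.get(k)]
    if missing:
        raise RuntimeError(f"secrets_loader: missing required keys {missing} in {secret_id}")

# Typically called once at module import so warm Lambda invocations reuse the environment.
```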

## Demo

- Latest PDF copy of the mail is attached

[Klair Passive Investment Daily Digest_ 28 Apr -$7,051,250.pdf](https://github.com/user-attachments/files/27230439/Klair.Passive.Investment.Daily.Digest_.28.Apr.-.7.051.250.pdf)

- This contains all feedback from David Harpur

## Test plan

- [x] End-to-end Lambda invoke with {"DAY_1":"2026-04-28","DAY_2":"2026-04-27"} against test recipients — email rendered correctly with the new chart styling, real AMZN/INTC/PBI explanations, and a portfolio chart that loads at the recipient end (verified HTTP 200 from QuickChart with the corrected &v=4 Chart.js version pin)

- [x] Confirmed the chart styling change is purely visual: same QuickChart URL contract, same data pipeline, same SES MIME shape

- [x] Stakeholder sign-off on the polished chart layout (David / Ludel)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2705 — feat(mfr/cash-flow): Group Memo drill-downs + sourced line items @eric-tril

### Summary

Adds Cash Flow drill-down support to the Group Memo view and replaces several BS-delta-derived CF line items with values sourced directly from the authoritative reporting systems Finance uses today. Introduces a CF-specific account classifier ([classify_account_for_cf](klair-api/services/cash_flow_service.py)) so accounts can map to a CF line item that differs from their Balance Sheet classification, plus a new gl_detail drill-down panel (with grouped, expandable rows) for Other LTA, Loans, and Interest paid. End-of-period cash is now emitted as a first-class record sourced from the BS Cash & cash equivalents balance, so all three cash-position rows reconcile to the BS exactly.

### Business Value

Finance can now click into Group Memo Cash Flow cells and trace each headline number back to the underlying NetSuite accounts, BTIG transactions, EBITDA categories, and loan-amortization wires — closing a long-standing audit gap. Re-sourcing Other LTA, Mgmt Restructuring + Import, Loans Payments, and Interest Paid from the systems Finance treats as canonical means the table values now match the Book Value Report, EBITDA Reconciliation, and supporting NetSuite detail without manual reconciliation.

### Changes

- CF service ([services/cash_flow_service.py](klair-api/services/cash_flow_service.py))

- New CF-specific classifier classify_account_for_cf with three override tiers (qualified prefix, exact account number, startswith pattern) plus BS fallthrough; supports CF_SKIP_ACCOUNTS, CF_EXPECTED_ACCOUNTS, and CF_CROSS_REFERENCE (a sketch of the tiering follows this Changes list).

- Other long term assets sourced from Book Value GL detail (QTD-scoped).

- Management restructuring and import sourced from EBITDA Reconciliation (business_unit IN ('Import','Management Restructuring')).

- Payments of and proceeds from loans = BS-delta on 31350/32801 + BTIG distributions; drill-down splits the BS-delta into Amortization Source vs. residual.

- Interest paid = -(income_statement 71100 − MFD Amortization Destination accruals) for the QTD month-ends.

- Cash and cash equivalents, end of period emitted as a new record; transform layer derives Change as End − Start.

- Five operating working-capital line items (AR, Prepaid, AP, Deferred revenue, OCL) plus Capital contribution and Purchase business combinations marked non-derivable pending Finance source-account confirmation.

- Book Value service ([services/book_value_service.py](klair-api/services/book_value_service.py)) — new helpers period_to_qtd_accounting_periods, fetch_other_lta_components_for_periods, fetch_btig_distributions_for_periods for QTD-scoped reuse from CF.

- Router ([finance_monthly_financial_reporting_router.py](klair-api/routers/finance_monthly_financial_reporting_router.py)) — new CashFlowGlDetailRow model with required group field; CFLineItemDetailResponse.detail_type extended with gl_detail and optional source_label / source_table metadata.

- Frontend

- [GroupMemoView.tsx](klair-client/src/features/monthly-financial-reporting/components/GroupMemoView.tsx) wires useCashFlowDetailPanel into the cell-click registry under key cash-flows.

- [CashFlowDetailPanel.tsx](klair-client/src/features/monthly-financial-reporting/components/detail-panels/CashFlowDetailPanel.tsx) adds a new GlDetailPanel (collapsible group accordion) and makes the BS-delta table responsive (3 cols normal / 5 cols expanded); handles nullable amounts.

- [transformFinancialStatements.ts](klair-client/src/features/monthly-financial-reporting/utils/transformFinancialStatements.ts) reads end-of-period cash directly from the new backend record; falls back to section-sum + FX only when absent.

- [monthlyFinancialApi.ts](klair-client/src/features/monthly-financial-reporting/services/monthlyFinancialApi.ts) adds CashFlowGlDetailRow and the gl_detail discriminator.

- Tests — new TestFetchBtigDistributionsForPeriods in [test_book_value_service.py](klair-api/tests/test_book_value_service.py); [test_cash_flow_service.py](klair-api/tests/test_cash_flow_service.py) extended with _patch_cf_dependencies fixture, new TestClassifyAccountForCf, drill-down coverage for Loans/Interest, and updated assertions for the non-derivable items.
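A minimal sketch of the three-tier override order described under the CF service changes above, with a Balance Sheet fallthrough. The override tables, the reading of "qualified prefix" as a BU-qualified account key, and the sample accounts are all assumptions for illustration; the real constants and semantics live in cash_flow_service.py.

```python
# Illustrative override tables; the real CF_* constants are defined in cash_flow_service.py.
CF_SKIP_ACCOUNTS = {"99999"}
QUALIFIED_PREFIX_OVERRIDES = {"Skyvera:31350": "Payments of and proceeds from loans"}
EXACT_ACCOUNT_OVERRIDES = {"71100": "Interest paid"}
STARTSWITH_OVERRIDES = {"328": "Payments of and proceeds from loans"}

def classify_account_for_cf(account_number: str, bu: str, bs_line_item: str) -> str | None:
    """Tier 1: qualified prefix (BU:account). Tier 2: exact account number.
    Tier 3: startswith pattern. Otherwise fall through to the BS classification."""
    if account_number in CF_SKIP_ACCOUNTS:
        return None
    qualified = f"{bu}:{account_number}"
    if qualified in QUALIFIED_PREFIX_OVERRIDES:
        return QUALIFIED_PREFIX_OVERRIDES[qualified]
    if account_number in EXACT_ACCOUNT_OVERRIDES:
        return EXACT_ACCOUNT_OVERRIDES[account_number]
    for prefix, line_item in STARTSWITH_OVERRIDES.items():
        if account_number.startswith(prefix):
            return line_item
    return bs_line_item  # BS fallthrough

print(classify_account_for_cf("32801", "Skyvera", "Long-term debt"))    # startswith tier
print(classify_account_for_cf("40100", "Skyvera", "Deferred revenue"))  # BS fallthrough
```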

### Testing

- [ ] cd klair-api && pytest tests/test_cash_flow_service.py tests/test_book_value_service.py

- [ ] cd klair-api && uv run ruff format <changed files> && uv run ruff check <changed files>

- [ ] cd klair-api && uv run pyright services/cash_flow_service.py services/book_value_service.py

- [ ] cd klair-client && pnpm tsc --noEmit && pnpm lint:pr && pnpm test

- [ ] Manual: open Group Memo for the latest period, confirm Cash Flow cells are clickable; spot-check Other LTA, Mgmt Restructuring + Import, Loans, and Interest paid drill-downs reconcile (sum of rows = headline)

- [ ] Manual: confirm Cash and cash equivalents, end of period row matches BS Cash & cash equivalents at the same date and that Change in cash and cash equivalents = End − Start

- [ ] Manual: verify the five non-derivable working-capital rows render with "—" values and remain populatable via the CSV upload override layer

http://localhost:3001/monthly-financial-reporting

<img width="1880" height="813" alt="image" src="https://github.com/user-attachments/assets/18fd3f85-a80b-4fcd-b35d-769365effd95" />

#2708 — feat(saas-budgeting): spec 12 — AWS Spend quarterly projection (KLAIR-2604) @ashwanth1109

## Demo

<img width="2184" height="1644" alt="image" src="https://github.com/user-attachments/assets/581da3be-af42-434e-8e91-34d4e936b778" />

## Summary

Fixes [KLAIR-2604](https://linear.app/builder-team/issue/KLAIR-2604/saas-budgeting-attach-to-simulated-budget-snapshots-raw-week-sum-not). The SaaS Budgeting → AWS Spend (Net Amortized) card snapshots the raw sum of selected weeks into the Simulated Budget instead of a quarter-normalized projection — so the value scales linearly with whatever week chips happen to be selected (default last 4 weeks ≈ 31% of a quarter) and is incomparable to Finance Budgeting's Quarterly Budget.

This PR adds a Quarterly Projection ($) column alongside the existing Total ($) column at every level of the BU → Class → Account hierarchy and on the Unmapped AWS Accounts band, and rewires *Attach to Simulated Budget* to snapshot the projection. The result is selection-independent and directly comparable to Finance Budgeting's Quarterly Budget.

## Spec

[features/aws-spend/saas-budgeting/specs/12-aws-spend-quarterly-projection/spec.md](features/aws-spend/saas-budgeting/specs/12-aws-spend-quarterly-projection/spec.md) — full requirements, technical design, and implementation checklist.

## What changed

Frontend only — no backend, schema, pipeline, or Pydantic changes.

| Change | File |
|---|---|
| getDaysInQuarter(quarter) helper (calendar-correct, leap-year safe) | klair-client/src/screens/AWSSpend/utils/quarterUtils.ts |
| projection: number \| null field on AWSCostRow / AWSUnmappedRow + extractBuClassProjections() | klair-client/src/screens/AWSSpend/components/SaaSBudgeting/awsSpendCardTransform.ts |
| computeProjection helper, decorateSectionsWithRollups (stamps both total and projection), Quarterly Projection ($) column, header tooltips on both columns, per-cell math tooltip with cent precision, rewired handleAttach + tightened canAttach gate | klair-client/src/screens/AWSSpend/components/SaaSBudgeting/AWSSpendCard.tsx |
| Tests: getDaysInQuarter (Q1 leap/non-leap, Q2/Q3/Q4, unparseable), projection defaults on row constructors, extractBuClassProjections, projection column rendering, tooltip wiring with cent precision, handleAttach snapshots projection (not raw sum), canAttach disabled when all projections null, FR3 edge cases (zero-sum non-null, all-null, partial-null window) | *.spec.ts(x) next to each implementation file |

## Formula

projection = sum_of_non_null_weeks / (non_null_week_count × 7) × days_in_quarter

Mirrors Finance Budgeting's /unblended/budget-simulation (see klair-api/services/aws_spend_service.py:3262). Null-skip throughout: a row whose entire selected window is null projects to null and renders "—"; a row with 0 cents but non-null cells projects to 0. BU/Class summaries project from their own rolled-up costByWeek (not by summing leaf projections) — same convention as the existing total decoration, avoids floating-point drift.
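In Python terms the projection math reads as follows; the shipped helpers are getDaysInQuarter and computeProjection in the TypeScript files listed above, so the function names and the quarter-string format here are assumptions.

```python
import calendar

def days_in_quarter(quarter: str) -> int:
    """Calendar-correct, leap-year-safe day count for a quarter like '2026-Q2'."""
    year, q = quarter.split("-Q")
    first_month = 3 * (int(q) - 1) + 1
    return sum(calendar.monthrange(int(year), m)[1] for m in range(first_month, first_month + 3))

def quarterly_projection(weekly_costs: list[float | None], quarter: str) -> float | None:
    """sum_of_non_null_weeks / (non_null_week_count * 7) * days_in_quarter.
    All-null windows project to null; a zero-sum window with non-null cells projects to 0."""
    non_null = [c for c in weekly_costs if c is not None]
    if not non_null:
        return None
    return sum(non_null) / (len(non_null) * 7) * days_in_quarter(quarter)

print(days_in_quarter("2024-Q1"), days_in_quarter("2026-Q1"))             # 91 (leap year) vs 90
print(quarterly_projection([2500.0, 2500.0, 2500.0, 2500.0], "2026-Q2"))  # 10000 / 28 * 91 = 32500.0
```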

## Tooltips

- Total ($) header: explains the null-skip selected-week sum.

- Quarterly Projection ($) header: explains the formula + names Finance Budgeting as the reference; sub-caption ("Projected from N weeks of data × ~91 days") rides along inside the same tooltip with the actual computed daysInQuarter substituted in (ColumnDef.header is typed as string in UnifiedTable, so the spec's planned standalone sub-caption falls back here).

- Per-cell projection tooltip: shows the row-specific math, e.g. $10,000.00 ÷ 28 days × 91 days = $32,500.00. Uses a dedicated cent-precision formatter (the existing formatCurrency abbreviates to $X.XK past $1K, which would defeat the tooltip's purpose).

## Test results

502 / 502 tests pass across the entire src/screens/AWSSpend test suite (was 498 on main — +4 net new tests for the projection column and FR3 edge cases). Type-check clean. ESLint clean (--max-warnings 0).

## Test plan

- [ ] Manual: load /aws-spend → SaaS Budgeting → AWS Spend tab on a real quarter; confirm both Total ($) and Quarterly Projection ($) columns appear at all hierarchy levels and on the Unmapped band.

- [ ] Manual: hover the column headers and any non-null projection cell; confirm tooltips render the explanatory text + per-row math with cent precision.

- [ ] Manual: select a single week, click *Attach to Simulated Budget*; confirm the Simulated Budget card's AWS column shows the quarter-projected value (~13× the single-week value), not the raw weekly sum.

- [ ] Manual: select a week range that has all-null cells for some BU/Class; confirm projection cells render "—" and the row contributes nothing to the snapshot.

- [ ] Compare the attached Simulated Budget AWS column against Finance Budgeting's *Quarterly Budget* for the same quarter — they should match within rounding.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2711 — docs(maint-scripts): align README with current generate.py output @ashwanth1109

## Summary

klair-misc/maint-scripts/README.md had drifted from the actual behavior of generate.py:

- The Report Sections block listed seven keys (arInvoices, lateRenewals, unplannedChurn, platinumProgress, primeProgress, psRevenueImpact, aiActualVsBudget) — the script only emits metrics and psRevenueImpact.

- The Output block claimed a single JSON; the script actually produces a dated maintenance report and a separate NetRetentionTracker.json.

- The parameters example referenced a JUN_* prefix (stale) — the prefix rotates monthly with the release.

## Changes

- Replaced the stale "Report Sections" with a correct Output description matching the two JSON artifacts the script actually produces.

- Updated the parameters block to acknowledge the monthly prefix rotation and point at the /maint-report-release skill that automates it.

- Added a "Verifying After a Release" section pointing at the bundled verify.py helper.

## Test plan

- [ ] Skim the README on GitHub — sections accurately describe what generate.py does today

- [ ] No code behavior change

🤖 Generated with [Claude Code](https://claude.com/claude-code)

The Portfolio  —  Trilogy Companies

The $462 Million Bet That Built Aurea: How ESW Capital Quietly Assembled a CRM Empire From Discarded Software Giants

Jive Software's sale to ESW Capital wasn't a rescue — it was a blueprint.

AUSTIN, TEXAS — When ESW Capital acquired Jive Software for $462 million, the enterprise collaboration market barely blinked. Jive had been a darling — the company that convinced Fortune 500 IT departments that employees could have a social intranet — before the market moved on and left it stranded between Slack and SharePoint. ESW saw something else: a sticky installed base, predictable renewal revenue, and customers who couldn't easily leave.

That acquisition became the cornerstone of Aurea, Trilogy International's enterprise CRM and customer engagement portfolio, which now counts 17 acquisitions including BroadVision, Lyris, and MessageOne. The pattern, as the Wall Street Journal noted in its profile of ESW, is consistent: find enterprise software companies that the market has written off, acquire them at compressed multiples, staff them with Crossover's globally recruited talent, and push support pricing upward on customers who have neither the budget nor the appetite to migrate off a platform baked into their operations.

The economics are not subtle. ESW targets 75% EBITDA margins — a number that would be dismissed as fantasy in most enterprise software contexts. It is achievable here precisely because the cost structure is rebuilt from scratch post-acquisition, while the revenue base — those sticky enterprise contracts — remains largely intact. Customers grumble. Customers renew.

Jive's story after acquisition followed the script. The social intranet category it pioneered has largely been absorbed by Microsoft Teams and Salesforce Communities, yet Jive's enterprise deployments persist at organizations too deeply committed to rearchitect their internal communications infrastructure on a procurement cycle.

The question ESW never has to answer publicly is the one Forrester has been quietly raising with customers of platforms like Jive: what do you do when your vendor's incentives and your own roadmap no longer point in the same direction? ESW's incentive is margin. The customer's incentive is capability. Those two things can coexist — until they can't.

Seventeen acquisitions into the Aurea story, the portfolio is large enough that the individual trajectories of any single product matter less than the aggregate cash generation of the whole. That is either a feature of the model or its most revealing characteristic, depending on which side of the support renewal you're sitting on.

PE Firm Engineers $462 Million Acquisition of Jive Software  ·  Small Software Companies Find a Home With ESW Capital - WSJ  ·  Jive acquired in enterprise collaboration software merger -

White-Collar Squeeze: How the Hiring Freeze Front Is Locking Out Non-AI Talent

OpenAI is now posting roles that pay $500,000 without requiring a résumé, while companies across sectors are listing positions demanding demonstrated ChatGPT expertise at salaries reaching $800,000 annually. In an AI-saturated economy, demonstrated ability matters more than institutional credentials.

Crossover, Trilogy International's global talent platform, has spent a decade arguing that résumés are fundamentally broken. It substitutes rigorous, AI-enabled skills assessments for credential-scanning, sourcing talent from 130+ countries and paying identical above-market rates for identical demonstrated ability regardless of geography.

The labor market is fracturing along a clear line. Workers with demonstrable AI fluency command compensation packages that seemed absurd three years ago, while a vast middle tier of workers whose skills haven't mapped onto the new paradigm discover that degrees no longer function as a floor.

Trilogy's education portfolio—from Alpha School's AI-powered learning model to the Timeback platform—addresses this pipeline challenge. The question of who gains access to AI fluency determines who participates in the next economy. The $800,000 job posting is merely a data point; the real story is who builds the systems deciding who qualifies.

Alpha School’s Expansion Moment Moves From Austin Thesis to National Test

The AI-powered 2-hour learning model is pushing into new markets as demand, scrutiny, and the education establishment all scale at once.

SAN FRANCISCO — Alpha School is no longer just an Austin education experiment with eye-popping test scores and a provocative operating model. It is becoming a national rollout — and, in true Trilogy fashion, a live-fire case study in whether software can radically re-architect a legacy industry.

The latest signal: Alpha is expanding its Bay Area footprint amid growing family demand, according to ABC7 San Francisco, while separate coverage has spotlighted Alpha’s plans in major U.S. cities and its new Fort Worth campus. That gives the Joe Liemandt-backed school network something more valuable than buzz: geographic proof points.

Alpha’s core proposition remains both simple and deeply disruptive. Students spend roughly two hours a day on adaptive AI-powered academics, advancing only after demonstrating mastery. The rest of the day is reserved for life skills, entrepreneurship, public speaking, financial literacy, athletics, coding, and other human-centric work. In Alpha’s telling, this is not “replacing teachers” so much as unbundling the school day: AI handles repeatable instruction, while adults become guides, coaches, and mentors.

Naturally, that message is generating pushback from unions and traditional education advocates, who see the model as a direct challenge to the classroom labor structure. But Alpha’s expansion suggests the market is not waiting for consensus. Parents paying private-school tuition are effectively voting for a different bundle: less seat time, more personalization, more measurable mastery, and a best-in-class promise that school can be both faster and broader.

This is where Alpha fits squarely inside the Trilogy worldview. Trilogy International has long believed that AI should automate the routine and liberate elite humans for higher-value work. Alpha applies that operating thesis to K-12 education. Instead of support tickets or financial workflows, the target is the traditional classroom model itself.

The stakes are high. Alpha has reported students learning 2.3× faster than U.S. norms and testing in the top 1–2% nationally on NWEA MAP Growth assessments. If those outcomes travel from Austin to Fort Worth, Miami, the Bay Area, and beyond, the model becomes less a novelty and more a paradigm shift.

Key Takeaways:

- Alpha School is expanding in the Bay Area and other major U.S. markets as demand grows.

- The model uses AI tutors for academics and human guides for life skills and coaching.

- Union pushback is intensifying, but parent demand appears to be scaling.

- For Trilogy, Alpha is the education-sector expression of its automate-the-routine philosophy.

The education establishment may call it controversial. Alpha calls it school reimagined. We’re just getting started.

Alpha School: AI-powered private school expanding Bay Area f  ·  Alpha School replaces teachers with AI. Is the future of edu  ·  AI-driven school expanding to major US cities despite union
The Machine  —  AI & Technology

Toyota’s Walled Garden Awakens Beneath Mount Fuji

In Woven City, the automaker tends a living laboratory where cars, homes, robots and humans are all part of the experiment.

SUSONO, JAPAN — At the foot of Mount Fuji, where mist gathers like breath over the old factory grounds, a most unusual habitat has begun to stir. Toyota’s Woven City, a $10 billion private settlement built upon the former Higashi-Fuji plant, is not merely a town. It is a terrarium for technology.

Here, the streets are not simply streets, but monitored pathways. The homes are not simply shelters, but instruments. The residents — few in number, carefully selected, and watched by an ecosystem of sensors — are the early mammals of a corporate biome designed to test what Toyota might become when the age of the internal combustion engine finally recedes into the fossil record.

According to Ars Technica’s account of the project, Woven City is intended as a proving ground for autonomous vehicles, robotics, smart homes, hydrogen energy and other technologies that may one day migrate into the broader world. Yet like many carefully enclosed ecosystems, it raises an older question: who benefits from the observation, and who is being observed?

The promise is easy to admire. A carmaker seeking to become a mobility company must study movement in all its forms: the quiet shuffle of an elderly resident, the delivery robot nosing along a curb, the household appliance anticipating its human’s need. In such moments, Toyota is less a manufacturer than a naturalist, crouched in the undergrowth, notebook open.

But privacy, too, is a native species, and in Woven City it appears vulnerable. Cameras and sensors are the canopy through which all life must pass. The company says consent and research protocols are central to the effort, but the architecture itself suggests a future in which convenience and surveillance may grow from the same root system.

This experiment arrives as governments and companies everywhere confront the physical demands of digital life. Data centers, AI systems and connected infrastructure are transforming land, energy and politics with the force of a seasonal migration. Smart cities, once marketed as gleaming public goods, increasingly resemble private preserves: expensive, instrumented, and governed by those who own the sensors.

For Toyota, Woven City may prove invaluable — a secluded island where new technical species can evolve before release. But beyond its tidy lanes lies the harder test. Technologies bred in captivity do not always thrive in the wild.

Toyota built a $10 billion private utopia—what’s going on in  ·  Research roundup: 6 cool science stories we almost missed  ·  Infrasound waves stop kitchen fires, but can they replace sp

The Mathematics of Intelligence Is Being Rewritten — Simultaneously, Everywhere

The Department of Energy recently acknowledged that machine learning has taken hold in nuclear physics — a domain historically resistant to probabilistic methods. This raises a fundamental question: when a field defined by determinism begins relying on machine learning for inference, what precisely is being understood?

Two concurrent developments address this tension. MIT researchers produced new algorithms enabling efficient machine learning with symmetric data, addressing computational gaps between elegant physical laws and their learned approximations. Simultaneously, a Nature publication proposed unifying machine learning with classical interpolation theory through interpolating neural networks — grounding empirical successes in mathematical legitimacy.

However, the ethical dimension cannot be overlooked. MIT's parallel work evaluating the ethics of autonomous systems reminds us that mathematical elegance operates within an incomplete social and moral framework. The next decade will be defined less by what machine learning can do, and more by what we ethically permit it to do.

Hiring Freeze Front Deepens as AI Spending System Moves In

Corporate leaders are boarding up headcount plans while trying to fly expensive AI kites in heavy crosswinds.

SAN FRANCISCO — A cold hiring front is settling over the technology sector, and the barometric pressure inside the executive suite is dropping fast.

Across the industry, companies are continuing to pair workforce reductions and hiring freezes with aggressive artificial-intelligence spending, creating the kind of unstable atmosphere that can turn a routine budget cycle into a thunderstorm. A fresh roundup from Intellizence tracking major layoffs and hiring freezes shows the cloud cover is broad, not isolated.

The sharpest gust comes from Meta, where reports say the company plans to cut 8,000 jobs and freeze 6,000 roles as it shifts more forcefully toward AI. If confirmed, that would be a high-pressure system of capital reallocation: fewer people in some zones, more compute and AI infrastructure in others. For workers, the forecast is wintry. For shareholders, executives are promising sunnier skies later in the season.

But not everyone is convinced the map makes sense. Fortune reports that 66% of CEOs are freezing hiring while simultaneously betting billions on AI — a pattern critics warn may be less strategy than squall line. AI tools can raise productivity, but they do not automatically replace institutional knowledge, customer relationships or the human judgment needed to deploy those systems without flooding the basement.

CIO.com’s warning that tech leaders must “own” the hiring freeze points to the governance challenge now moving inland. This is no longer just an HR drizzle. CIOs are being asked to decide which roles are essential, which workflows AI can realistically absorb, and where cutting too deeply could leave the organization exposed when demand returns.

Meanwhile, startup ecosystems are feeling the chill. In Ottawa, investor Rob Imbeault is talking about breaking a local VC funding dry spell, a reminder that capital remains patchy outside the largest AI storm cells. Founders should expect scattered funding, selective term sheets and sudden visibility drops.

Preparation advisory: conserve cash, audit AI promises against measurable output, and keep key talent indoors. There is a 70% chance of further disruption before this front clears.

Top Companies that Announced Major Layoffs & Hiring Freezes-  ·  Why tech leaders must own the hiring freeze - cio.com  ·  Meta to cut 8,000 jobs, freeze 6,000 roles in AI shift - MSN
The Editorial

Nation’s CEOs Courageously Replace Vague Digital Transformation Plans With Vague AI Agent Plans

After years of promising software would eventually do something, executives now confident it will eventually do something autonomously.

NEW YORK — In a stirring development for anyone still waiting for the blockchain steering committee to reconvene, corporate leaders across multiple industries have announced that AI agents have officially matured from an exciting boardroom buzzword into essential business infrastructure, a phrase expected to save thousands of strategy decks from having to contain a second idea.

The shift, described in recent coverage of how AI agents are moving into business infrastructure, marks a major milestone for enterprises that have long sought a technology capable of attending meetings, generating reports, opening tickets, closing tickets, reopening the same tickets, and describing the whole process as transformation.

It is difficult not to admire the speed with which American business has discovered that AI agents are no longer merely software, but rather colleagues who do not need chairs, health insurance, or clear instructions. For decades, companies were forced to rely on human employees to misunderstand priorities, duplicate work across departments, and ask whether anyone had visibility into the Q3 roadmap. Now, with agentic AI, these functions can be executed continuously, at scale, and with the reassuring gloss of machine intelligence.

TridentCare’s partnership with ServiceNow, for example, reflects the new seriousness of the moment. A healthcare services company using an enterprise workflow platform to power AI-driven operational transformation is exactly the sort of sentence that makes investors sit upright and employees quietly update their résumés. It suggests a future in which every operational problem can be routed through a platform, classified by an agent, escalated to another agent, summarized for a vice president, and finally returned to the original human with the recommendation that they try clearing their cache.

This is not cynicism. This is progress.

The old corporate technology cycle was inefficient. First, executives identified a problem. Then consultants were hired to define the problem. Then a platform was purchased to manage the problem. Then employees were trained to use the platform. Then the platform became the problem. AI agents improve this by arriving early enough in the process to be both the solution and the problem from the beginning.

Markets, to their credit, understand this perfectly. Allbirds shares reportedly surged after an AI pivot, a development that raised concerns about business viability only among those still clinging to the outdated notion that companies should make money from products rather than adjectives. Footwear, after all, is a crowded and difficult business. AI footwear, by contrast, occupies the limitless category of things that may someday include a dashboard.

The beauty of the AI pivot is that it does not require the old business to disappear immediately. It simply requires the old business to stand near the new acronym until capital markets become emotionally available. A shoe company can become an AI company. A media company can become an AI company. A company that makes industrial gaskets can become an AI company, provided it announces a pilot program in which an agent helps procurement professionals experience fewer gasket-related inefficiencies.

Meanwhile, critics warn of “AI washing,” particularly when layoffs are presented as bold modernization rather than the traditional managerial practice of asking fewer people to do more work. This concern is understandable but perhaps unfair. If a company eliminates 1,000 jobs because revenue is soft, that sounds grim. If it eliminates 1,000 jobs while “leveraging AI to unlock productivity,” it becomes a morally complex innovation journey with a landing page.

The language matters. Workers are not being replaced; workflows are being reimagined. Departments are not being cut; organizational velocity is being enhanced. Nobody is losing institutional knowledge; knowledge is being transitioned into an automated knowledge environment that will confidently tell new hires the wrong expense policy.

In this sense, AI agents have arrived at precisely the right moment. Companies no longer need to prove that technology works before reorganizing around it. They only need to prove that not reorganizing around it would be embarrassing.

And so the agent era begins: not with a robot uprising, but with a procurement approval, a ServiceNow integration, a stock chart briefly pointing upward, and a CEO explaining that the company’s greatest asset is its people, especially now that fewer of them are required.

AI Agents Move from Boardroom Buzzword to Business Infrastru  ·  TridentCare Partners with ServiceNow to Power AI-Driven Tran  ·  Allbirds shares skyrocket after AI pivot, raising concerns o
The Office Comic  ·  Art Desk

We Built Mirrors That Hate Us and Called Them Intelligent

AI bias isn't a bug to patch — it's a confession encoded in math.

AUSTIN, TEXAS — There is a particular horror in discovering that the systems we built to be objective are, in fact, perfect replicas of our worst selves. Not our fears, not our anxieties — our biases. The quiet, systemic, historically-laundered kind that we spent decades pretending we'd outgrown. And now we've baked them into algorithms and set them loose on hiring decisions, loan approvals, criminal sentencing, and, increasingly, the faces of people trying to cross a border.

This week, the conversation around AI bias and how to fix it surged again — six ways, ten frameworks, best practices for 2026 — as if the problem is fundamentally one of methodology rather than moral reckoning. Meanwhile, the Australian Human Rights Commission published a sobering analysis of historical bias in AI systems, tracing the throughline from biased training data to biased outcomes with the kind of clarity that should make every tech optimist sit very quietly in a dark room for a while.

And yet.

We keep building. We keep deploying. And now, in the most literal possible expression of algorithmic bias meeting state power, Ranking Member Bennie Thompson has introduced legislation to curb what he's calling unchecked DHS mobile biometric surveillance — handheld devices capable of scanning faces, irises, and fingerprints in the field, with essentially no guardrails on how that data is stored, shared, or acted upon. The bill exists because the alternative — trusting that a system trained on historically skewed data will treat every face equally — requires a faith in institutions that the historical record does not support.

Here is what keeps me awake: bias in AI is not a glitch. It is a feature of the process. When you train a model on human-generated data — resumes, court records, lending histories, faces — you are training it on centuries of human hierarchy. The model learns what we actually did, not what we aspired to do. It learns who got hired, who got loans, who got surveilled. It encodes that as truth. And then we call it objective.

Reducing bias in machine learning matters, yes. Diverse training data matters. Fairness audits matter. But none of it answers the deeper question of what it means to build a system that makes consequential decisions about human beings when we cannot even agree on what fairness looks like for human beings.

We are outsourcing our moral ambivalence to machines and then expressing shock when the machines are morally ambivalent.

The six ways to fix AI bias in 2026 are real. The legislation is real. The Australian Human Rights Commission report is real and necessary and should be read by everyone with a hand in a training pipeline.

But at what cost have we already arrived here — in a world where a federal agent can point a handheld device at your face in a parking lot and an algorithm, trained on data we never fully examined, decides what happens next?

What does it mean to be human when the systems judging our humanity were built on the evidence of our inhumanity?

Probably fine. Not fine.

Bias in AI: Examples and 6 Ways to Fix it in 2026 - AIMultip  ·  Historical bias in AI systems - Australian Human Rights Comm  ·  How to Reduce Bias in Machine Learning - TechTarget
On This Day in AI History

On May 4, 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov in Game 2 of their six-game rematch, a win that rattled the champion and set up the machine's eventual 3.5–2.5 series victory, the first time a computer beat a reigning world champion in a match under standard tournament conditions.

⬛ Daily Word — Technology
Hint: A technology infrastructure where data and applications are hosted remotely over the internet.