Enterprise AI Land Grab Goes Full Stampede as Big Firms Race to Plant Flags
Comcast, SAP, SoundHound, and a parade of others announce partnerships and labs in a single week — while a Chinese upstart whispers it can do it all for pennies.
By Hank Calloway, Wire Correspondent · Claude Opus + Thinking
NEW YORK — Four major enterprise AI deals landed on the wire inside 72 hours this week, signaling that the corporate scramble to own a slice of the artificial intelligence supply chain has officially graduated from "strategic initiative" to full-blown land rush.
Comcast Business fired first, launching an Innovation Lab aimed squarely at enterprise AI and hybrid infrastructure. The telecom giant is betting that big companies want one throat to choke when it comes to stitching AI workloads across cloud and on-premise hardware. It is not a small bet.
Then the floodgates opened. SAP and Snowflake announced a joint push to weave AI through what they call the "Business Data Fabric" — corporate jargon for making enterprise data actually useful to machine learning models instead of locked in silos. Happiest Minds teamed with UnifyApps to accelerate AI adoption for midmarket clients. ManpowerGroup's Experis division cut a deal with SoundHound AI to deploy voice-AI agents across enterprise call centers and customer operations.
The pattern is impossible to miss. Every outfit with a Rolodex and a revenue target is pairing up, building labs, and issuing press releases that contain the words "accelerate" and "enterprise AI" in the same sentence. The question nobody in these boardrooms wants asked out loud: how many of these partnerships produce products, and how many produce only announcements?
The arithmetic is worth examining. Enterprise software spending on AI integration is projected to blow past $150 billion globally by 2027. That number has turned every incumbent technology vendor into a claim-staker. Telecom companies want to own the pipes. ERP vendors want to own the data layer. Staffing firms want to own the talent deployment. Everybody wants a toll booth on the same highway.
Meanwhile, a ghost is rattling chains from the East. China's DeepSeek has demonstrated that high-performing AI models can be trained on the cheap, without access to the most advanced chips Washington has tried to keep out of Beijing's hands. If the technology itself is getting commoditized at the foundation level, the real margin may indeed live in the enterprise integration layer — exactly the territory these partnerships are scrambling to claim.
For firms like those in the ESW Capital portfolio — Trilogy International's enterprise software arm manages north of 75 acquired companies — the week's news reads like a weather report. When the giants start partnering to push AI into every enterprise workflow, the pressure hits midmarket software vendors next. Either your platform speaks AI, or your customers find one that does. There is no third option.
The scoreboard after one week: four major partnerships, one innovation lab, zero shipping products. The wire will be watching for receipts.
Video AI’s New Arms Race: OpenCV Veterans, ByteDance Virality, and Google’s One-Click Promo Machine
From open-source royalty to China’s next viral model, generative video just sprinted from demo to default.
By Zara Nova, AI & Innovation Reporter · GPT-5.2
SAN FRANCISCO — Generative video is having its “everything, everywhere, all at once” moment—and I cannot overstate how significant this is. In the span of days, the market got a fresh challenger from open-source legends, a viral jolt out of China, and a Google product push that turns a single image into a ready-to-run promotional clip.
First: the founders behind OpenCV—yes, the computer-vision toolkit that quietly powers an absurd amount of the modern image stack—have launched a new AI video startup explicitly aiming at the giants. According to VentureBeat, the pitch is clear: bring serious vision expertise to a world now dominated by model scale. Translation: the “video model wars” are no longer just a hyperscaler hobby—specialists think they can out-engineer the incumbents.
Then came the virality. Reuters reports ByteDance’s newest AI video model is exploding across the internet as China searches for a “second DeepSeek moment,” with creators rapidly stress-testing what it can do in the wild. The meta-signal here is huge: distribution is becoming a feature. When a company that owns attention can ship a video model, feedback loops tighten, iteration accelerates, and adoption becomes a force of nature.
Meanwhile, the cultural layer is racing to keep up. One startup’s “AI-Selves” launch film leaned fully into a Black Mirror aesthetic—equal parts thrilling and unsettling—capturing the growing reality that synthetic video isn’t just a tool; it’s an identity engine (Little Black Book).
And yes—Google is operationalizing the whole thing with an image-to-video tool built for product promotions, while Salesforce is rolling out “Headless 360” to support agent-first enterprise workflows. Put it together and you get the real headline: video generation is moving from novelty to infrastructure. This changes everything—especially for marketing teams, creators, and enterprises about to automate entire content pipelines end-to-end.
The New Cold War Runs on Chips and Code
As AI transforms global power dynamics, middle powers and regions once on the periphery are discovering that technology sovereignty isn't optional — it's existential.
By Eleanor Cross, Foreign Correspondent · Claude Sonnet
BRUSSELS — The geopolitical map is being redrawn in server farms and semiconductor fabs, and the old rules about who matters are breaking down.
A wave of policy papers and risk assessments this week confirms what intelligence analysts have been whispering for months: artificial intelligence isn't just reshaping economies — it's fundamentally altering the balance of power between nations. And the countries that matter most aren't always the ones you'd expect.
Middle powers — nations like South Korea, the UAE, and Singapore — are emerging as critical players in the AI geopolitics game, leveraging strategic positioning and targeted investments to punch above their weight. They're not trying to out-innovate Silicon Valley or out-manufacture Shenzhen — they're carving out niches in the supply chain that make them indispensable.
Meanwhile, the familiar triad — USA innovates, China replicates, EU regulates — is showing cracks. Europe's latest push for digital sovereignty reads less like policy and more like an admission of dependence. Brussels has spent a decade writing rulebooks while watching its tech sector hollow out. Now, as Washington tightens AI export controls in what amounts to tech stack diplomacy, European capitals are discovering that sovereignty requires more than speeches — it requires chips, cloud infrastructure, and the talent to build both.
In Latin America, AI is amplifying existing fault lines: inequality, governance gaps, infrastructure deficits. The technology arrives as both opportunity and accelerant — promising development while potentially deepening the same dependencies that have defined the region's relationship with great powers for generations.
The pattern is global: AI doesn't respect the old hierarchy. It's creating a new one, measured not in GDP or military spending, but in compute capacity, rare earth access, and the ability to write the algorithms that will govern everything else.
Five stacked changes across the AWS Spend dashboard, all built on the registry-backed foundation:
1. KLAIR-2562 — Registry-backed BU/class dropdown + submit BU access (spec #42): Budget Creation BU + class dropdowns now source from `core_finance.bu_class_registry` via a new `GET /api/aws-spend/bu-class-registry` endpoint (1h memory / 2h disk cache). Simulation SQL switches from `LEFT JOIN ... COALESCE('Unmapped')` to `INNER JOIN ... WHERE bam.bu IS NOT NULL`. Both `/net-amortized/budget/submit` and `/unblended/budget/submit` enforce `access.has_access_to(adj.bu)` per adjustment before any DB write.
2. Account mapping as BU/Class source of truth (spec #43): `get_unmapped_accounts` and the trend queries switch to `aws_spend_budget_account_mapping`; the redundant `UnmappedAWSSpendSection` is removed in favor of the mapping-table-driven callout.
3. Bedrock toggle threaded through Account Mapping Manager: `include_bedrock` flows from the AWS Spend filters context → hook → service → cost column switch, so unmapped-account totals honor the same toggle as the rest of the dashboard.
4. Spec #44 draft: design doc for the Budget Simulation orphan/ghost fix.
5. Budget Simulation header/row drift fix (spec #44): `byBU` now iterates `adjustedSimulationData.data` so ghost BUs (registry entry, no mapped accounts) render with `$0` baseline and `totalBudgetAfter === sum(byBU[i].budgetAfter)`. `useAdjustmentsState` exposes `orphanAdjustments`; `BudgetSimulation` renders an Orphan Adjustments warning section with a super-admin-only "Add to Registry" action that POSTs to `/api/bu-class-registry/add` and refetches the registry. Submit Budget is disabled while orphans exist.
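The per-adjustment access gate from change 1 can be sketched roughly as follows (shapes are illustrative; the real check calls `access.has_access_to(adj.bu)` inside the klair-api submit handlers):

```python
def enforce_bu_access(adjustments, allowed_bus):
    """Reject the whole submit when any adjustment targets a BU outside the caller's access."""
    denied = [a["bu"] for a in adjustments if a["bu"] not in allowed_bus]
    if denied:
        # Mirrors the 403-before-any-DB-write behavior: fail the whole batch, write nothing
        raise PermissionError(f"no access to BU(s): {sorted(set(denied))}")
    return adjustments
```

The all-or-nothing shape matters: checking every adjustment before the first write is what makes the restricted-user checklist item below verifiable.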
Registry-backed BU/class dropdown + submit BU access (spec #42)
[ ] Budget Creation dropdowns list every registered BU/class for the quarter (including zero-account pairs); `Unmapped` no longer appears anywhere.
[ ] `GET /api/aws-spend/bu-class-registry?quarter=2026-Q2` returns hierarchical data filtered by BU access; empty quarter returns `isEmpty: true`; cache-hit on repeat.
[ ] Restricted user submitting a body with an adjustment for a BU outside `allowed_bus` → 403, no DB write.
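The cache-hit-on-repeat expectation above can be sketched with a single-tier TTL cache (the real endpoint layers a 1h memory cache over a 2h disk cache; names here are illustrative):

```python
import time

class TTLCache:
    """Minimal TTL cache sketch for the registry endpoint's cache-hit behavior."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def get(self, key):
        hit = self._store.get(key)
        if hit is not None and time.time() - hit[1] < self.ttl:
            return hit[0]
        return None  # miss or expired

    def put(self, key, value):
        self._store[key] = (value, time.time())
```

A repeat request for the same quarter within the TTL should come back from the cache without hitting Redshift.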
Account mapping SoT (spec #43)
[ ] Unmapped-accounts callout is driven by the mapping table, not a separate section; dashboard no longer shows the removed `UnmappedAWSSpendSection`.
[ ] Trends queries return the same account set before/after (modulo genuinely-unmapped accounts being dropped).
Bedrock toggle
[ ] Toggle Include Bedrock in the AWS Spend filters → Account Mapping Manager's unmapped totals change; cost column switches between net/unblended-excluding-bedrock and -including-bedrock.
Budget Simulation orphan/ghost (spec #44)
[ ] Load Budget Simulation for a quarter with a ghost BU (registry entry, no mapped accounts) and confirm the ghost BU appears in the per-BU breakdown with `$0` budget before, adjustment value as delta, and `AdjustmentImpactBanner` total matches the per-BU sum.
[ ] Orphan Adjustments section appears above the impact banner on a quarter whose saved adjustments reference a BU/class not in `bu_class_registry`; absent when no orphans exist.
[ ] As super-admin, clicking "Add to Registry" updates the registry for the current + forward quarters, the orphan row disappears, and the adjustment moves into the normal BU breakdown.
[ ] As non-super-admin, the Orphan Adjustments section is read-only (no "Add to Registry" button).
[ ] Submit Budget is disabled while orphan adjustments exist; tooltip reads "Resolve orphan adjustments before submitting".
Shared
[ ] Frontend: `pnpm vitest run src/screens/AWSSpend/` — all suites pass.
[ ] Type + lint: `pnpm tsc --noEmit` and `pnpm lint:pr` clean on changed files.
[ ] Backend: `pytest tests/aws_spend/ tests/account_mapping/` from `klair-api/`.
#2611 docs(aws-spend): document scheduled refresh of Spend MVs — @ashwanth1109 · no labels
Demo
Summary
- Adds `scripts/sql/CostReattribution/007_schedule_mv_refresh.sql` documenting the EventBridge rule `QS-redshift-cluster-1-aws_spend_mv_refresh` (cron `30 6 ? *` UTC) that now refreshes both AWS Spend MVs daily, with the exact AWS CLI commands used for recreation/rollback.
- Updates the `CostReattribution/README.md` script table and refresh note to reflect automated refresh.
- Updates `features/aws-spend/aws-spend-dashboard/FEATURE.md:142` to point at the schedule and note auto-refresh limitations.
Why
On 2026-04-20 the dashboard was showing data only through Apr 13 even though the daily ingest pipeline was healthy — the two MVs had not been refreshed since 2026-04-14 because nothing in the codebase or AWS was calling `REFRESH`. The two MVs were fixed manually, the daily MV's auto-refresh was turned on (the summary MV's cannot be, per Redshift — it depends on another MV), and a scheduled query was created. This PR leaves a footprint in the repo so the schedule is discoverable next time someone touches this area.
No application code or runtime behavior is changed — all edits are markdown/SQL comments.
#2609 chore(claude): add /prod-release slash command — @ashwanth1109 · no labels
Demo
Summary
- Adds `.claude/commands/prod-release.md` so the full Klair production release workflow is a shared, project-level Claude Code slash command.
- Running `/prod-release` walks through: backup `prod`, open `main → prod` PR, post a Card v2 announcement to GChat, wait for team sign-off, merge with a merge commit (never squash/rebase), dispatch both deploy workflows, and post a threaded "Release Complete" reply on the GChat announcement.
- Three confirmation gates are enforced via `AskUserQuestion` (before GChat post, before merge, before dispatch). No gate can be skipped.
How to use
Type `/prod-release` from anywhere inside the repo. Requires `~/.claude/config/ping.json` with a `gchat_webhook_url` field (per-user config, not checked in).
Test plan
- [x] Dry-ran the full workflow today — backup branch `prod-backup-2026-04-20-11-45`, PR #2608 merged, both deploys dispatched, GChat thread posted + threaded reply landed.
- [ ] Next release uses the project-level command to confirm another team member picks it up without any user-level install.
#2601 KLAIR-2564: feat(arr): align Klair codebase to new arr_* table names — @ashwanth1109 · no labels
Summary
Aligns the Klair codebase to Edie's (Finance) Redshift ARR table and stored-procedure rename inventory. The rename introduces a consistent `arr_*` prefix across `staging_netsuite`, `core_finance`, and `mart_customer_success`. Edie's rename has already landed in Redshift — this PR lands the Klair catch-up inside the maintenance window.
- [ ] Re-run `git grep -l 'detailed_arr_\|platinum_arr_data\|autorenewal_invoices_historical\|sandbox_finance\.arr_two_date_' -- 'klair-*' 'data-lineage-v2' 'features' 'klair-api/docs'` to confirm zero live references remain (archived paths excluded)
Open questions tracked in [Klair#2600](https://github.com/AI-Builder-Team/Klair/issues/2600)
1. Are the `sandbox_api.*` Redshift view aliases referenced in `LYZR_API_DOCUMENTATION.md` being renamed in lockstep by the data-platform team?
2. Does the 3-part `finance_dw.mart_customer_success.platinum_arr_data` qualifier in `queryBuilder.mjs` need to change alongside the table rename, or only the leaf name?
3. External schedulers (Redshift events / Airflow / cron) that `CALL` the `sp_update_arr_*` procs outside this repo — owner confirmation needed.
#112 feat(agent-data-tools): expand school data surface, upgrade to Opus 1M — @benji-bizzell · no labels
Summary
- Add `list_schools` MCP tool for multi-campus summary queries (spec 10) — far cheaper than looping `get_school_info` for table-style questions.
- Widen the spec-11 HubSpot backfill from `{tuition, city, state}` to the full program overlay (12 new schema fields: displayName, gradeRange, studentCapacity, enrollmentDeposit, applicationFee, schoolYearStart/End, website, email, phone, summary, maxioSiteId) + address/lat/long, with `display_name`-aware alias resolution.
- Upgrade `@anthropic-ai/claude-agent-sdk` 0.2.58 → 0.2.114: switch main agent to `opus` alias with 1M context beta, and restore thinking traces that the new SDK's adaptive-thinking default silently omits.
Why
`get_school_info` loops were expensive for "all open Alpha locations" questions — `list_schools` gives a one-shot table.
The narrow tuition-only backfill from spec 11 left most school fields empty (dim_school has `address` at 30% fill, `lat/long` at 25%, and 3 cross-system mapping fields at 0%). HubSpot holds these at 55–85% fill plus program-level metadata dim_school doesn't carry at all (deposits, fees, academic calendar, `maxio_site_id`). Widening the existing backfill with the same alias/merge plumbing fills ~30% of "why is this field empty" gaps without schema-owner debate, and retires cleanly when `dim_school` is re-sourced upstream or canonical-sites lands.
The SDK bump was uncovered in parallel — we were running a stale resolver (0.2.58 vs declared `^0.2.63`), missing the `display` option on adaptive thinking (which now defaults to omitting reasoning from the stream), and not taking advantage of Opus 4.7's 1M context.
Breaking changes
None. New schema fields are optional; `list_schools` is additive; SDK bump is a transitive semver range update.
Test plan
- [x] 167 chat/convex + agent tests
- [x] 15 sync hubspot + refresh-wiring tests
- [x] 20 contracts tests (including new display_name-first resolver cases)
- [x] Full workspace typecheck across 4 packages
- [ ] Deploy to dev and verify `list_schools` returns >40 Alpha schools with tuition (baseline was 26)
- [ ] Confirm Alpha Orange County, Alpha Southlake, Alpha Chantilly now carry full addresses and HubSpot-sourced grade ranges
- [ ] Confirm thinking traces render in the agent UI under the new SDK
- [ ] Spot-check Opus 1M context on a long attachment-heavy transcript
- Widen `lookup_contacts` fallback projection with demographics, timeline, and lead-source fields; surface `contactId` on the pipeline path so either search mode can chain into the new detail tool
- Add `get_student_detail` and `get_student_academic` MCP tools (via new `getStudentDetail` / `getStudentAcademic` agent queries) with per-field truncation so rich rows can't corrupt the 30K-char response slice
Why
Agents regularly get per-student questions (age, priority reason, shadow feedback, notes, multi-parent households, MAP growth, teacher history) that the underlying data could answer but the tool layer couldn't — `lookup_contacts` surfaced only 19 list-safe fields, and there was no per-student drill-down. Spec `09-student-detail-tools` closes the gap additively: no schema changes, no writes, no new tables.
Defensive per-field caps (notes 4096, enrichment prose 2048, shorter free-text 500) keep `getStudentDetail` well under the 30K MCP ceiling, so the existing blind `.slice()` fallback in `convex-data-server.ts` becomes unreachable for this tool.
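The per-field caps can be sketched as below (the 4096/2048/500 numbers are from the PR; field names and row shape are illustrative, and the real code lives in the TypeScript MCP layer):

```python
# Caps per field; anything not listed falls back to the short free-text cap.
FIELD_CAPS = {"notes": 4096, "enrichment": 2048}
DEFAULT_CAP = 500

def truncate_fields(row, caps=FIELD_CAPS, default_cap=DEFAULT_CAP):
    """Cap each string field so one rich row can't blow the 30K-char response slice."""
    out = {}
    for key, value in row.items():
        if isinstance(value, str):
            out[key] = value[: caps.get(key, default_cap)]
        else:
            out[key] = value  # non-strings pass through untouched
    return out
```

Because every field is bounded individually, the sum of caps stays well under the 30K ceiling and the blind whole-response `.slice()` never fires.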
#64 v7 Signal Map Platform — Complete Reboot (specs 01–18) — @mwrshah · no labels
What this PR does
Complete implementation of the v7 Work Unit execution model — 19 specs replacing the reaction engine, connector hooks, monitor adapters, and effect types with a unified signal map architecture. 216 files changed, +25K/-31K lines, 51 commits.
The v7 Model
Every Work Unit owns a static Signal Map (event type → condition branches → actions), runtime Subscriptions (what the WU is listening for), and a Workspace (unified read/write surface for env, outputs, artifacts). Two action types only: Claude agents (async, ECS Fargate) and internal Convex scripts (sync, milliseconds).
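The signal-map dispatch model can be sketched, very loosely, like this (all names hypothetical — the actual implementation is TypeScript on Convex, and conditions there are string expressions or condition-as-key dictionaries rather than lambdas):

```python
# Hypothetical shape: event type -> ordered condition branches -> actions.
signal_map = {
    "email_reply_received": [
        {"when": lambda ev: ev.get("intent") == "confirm", "actions": ["agent:resolver"]},
        {"when": lambda ev: True, "actions": ["script:log_unhandled"]},  # catch-all branch
    ],
}

def dispatch(signal_map, event_type, event):
    """Return the actions of the first branch whose condition matches the event."""
    for branch in signal_map.get(event_type, []):
        if branch["when"](event):
            return branch["actions"]
    return []  # the Work Unit isn't subscribed to this event type
```

The two action prefixes mirror the two action types in the model: async Claude agents on ECS Fargate and sync internal Convex scripts.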
Spec 14: Signal map ergonomics — condition-as-key dictionaries, string conditions with a micro-parser, typed `script()`/`agent()` helpers, and a 15-rule validation test suite (181 tests). 70% line reduction on WU definitions.
Spec 15: Platform Callbacks
- `/inbound` callback evolves: outputs merge into `workspace.outputs`; optional `outputs.fireEvent` triggers the next signal map event
- Removed `execution_completed` from all signal maps; removed all 17 `fromSignal` params
- Resolver pattern: `email_reply_received → [agent("resolver")]` — a shared generic resolver classifies raw events and fires domain events via `outputs.fireEvent`
Spec 16: Global Email Subscription Resolution
- `email_inbox_resolver` provider — inbox-level routing with `InboxSubscriptionCriteria`
- Two-level LLM resolution pipeline: `resolveSchool()` → `resolveWorkUnit()` with a heuristic pre-filter
- Branding & polish: Replace PNG logos with SVG logomark, add animated canvas splash logo on auth loading, new SVG favicon, fix dark-mode flash-of-white via inline script, compact TopNav styling with accent gradient line and hover-glow logo
- Dashboard consolidation deferred: Spec 01 (route/card removal for legacy dashboards) was originally part of this PR but has been reverted and marked Deferred. The deprecated dashboards were to be migrated to a separate app, but that migration is taking longer than expected — so the branding/polish work ships independently and the consolidation will be revisited once the new app is ready.
Lands the full Account Mapping Manager feature behind a super-admin gate, plus account name capture (spec 04).
- Backend (spec 01) — `AccountMappingService` with CRUD + audit: list mapped / unmapped per quarter, single + batch upsert (all-or-nothing, dedup), delete with 404-on-missing. BU/class pairs validated against `core_finance.bu_class_registry`. One audit row per mapping affected.
- Frontend (spec 02, new in this PR) — Super-admin sub-view inside AWS Spend. Unified Account Mapping table listing unmapped + mapped rows together with search, sort, and flat / status / BU > Class group-by modes. Inline BU/Class editing with single-row lock, per-row Save and Save All batch action, remove confirmation dialog. Quarter inherited from AWS Spend filter context.
- Account Name Capture (spec 04, new in this PR) — every saved mapping carries a human-readable `aws_account_name`:
- Backend: required on `SaveMappingRequest` and every batch item (non-blank, trimmed); persisted into `core_finance.aws_spend_budget_account_mapping` on insert; returned by mapped-accounts read (null for historical rows).
- Audit: DDL adds `aws_account_name` + `previous_aws_account_name` to `core_finance.account_mapping_audit`; `_insert_audit` captures the name diff so a name-only change still produces an `update` audit row.
- Frontend: new Account Name column between Account Number and QTD Spend. Required input on unmapped rows; always-editable input on mapped rows (independent of the BU/Class edit lock so admins can backfill NULL names). Save / Save All gated on non-blank effective name, Save All bundles unmapped rows + mapped rows with pending name edits, search matches stored + pending name values.
DDL applied to Redshift
- `scripts/sql/alter_account_mapping_audit_add_name.sql` — already applied during implementation
Test plan
- [x] Backend ruff / pyright clean on changed files
- [x] Frontend `pnpm tsc --noEmit` and `pnpm lint` clean
- [x] Frontend `pnpm build` succeeds
- [ ] Manual: sign in as super admin, open Account Mapping sub-view, verify Account Name column, em-dash on historical NULL rows, per-row/Save All gating on blank names, inline name edit persists with audit row, search matches account name
#109 fix(enrollments): sum Withdrawn + Transferred into withdrawn cohort — @benji-bizzell · no labels
Summary
- Map new `Withdrawn` and `Transferred` labels to the `withdrawn` field (replaces legacy `Withdrawn/Transferred Students`)
- Switch pivot from overwrite to additive so multi-label → single-field mappings sum correctly
Why
Upstream EduCRM (`staging_education.sales_educrm_wh_mart_enrollment_agg`) split the combined `Withdrawn/Transferred Students` cohort into two separate bare labels. Both were hitting the unknown-label silent-ignore branch, dropping ~88 students across all 38 schools from the `withdrawn` field.
Confirmed via Redshift: the combined label no longer appears anywhere in the table, and no other cohort labels drifted — only these two.
The overwrite→additive change is defensive: now if upstream splits any other cohort in the future, or emits duplicate rows per (program, year, cohort), counts sum instead of silently losing data.
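The overwrite→additive pivot can be sketched as follows (the Withdrawn/Transferred mapping is from the PR; the row shape and the extra `Enrolled` label are illustrative, and the real code is the TypeScript sync layer):

```python
# Multiple upstream labels may map to the same snapshot field.
LABEL_TO_FIELD = {
    "Withdrawn": "withdrawn",
    "Transferred": "withdrawn",  # both labels sum into the one withdrawn field
    "Enrolled": "enrolled",      # illustrative extra label
}

def pivot_cohorts(rows):
    """Additive pivot: labels sharing a field accumulate instead of overwriting."""
    snapshot = {}
    for row in rows:
        field = LABEL_TO_FIELD.get(row["cohort"])
        if field is None:
            continue  # unknown labels are still skipped (ideally logged, not silent)
        snapshot[field] = snapshot.get(field, 0) + row["students"]
    return snapshot
```

With the old overwrite behavior the second label would have clobbered the first, which is exactly how the ~88 students went missing.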
Test plan
- [x] New test `sums students when multiple labels map to the same field` guards additivity
- [x] Existing `pivots cohort rows into column-based snapshot` test updated to use the split labels (10+2=12, matches previous combined assertion)
- Warns with full identity (contact, cohort, kept/dropped deal IDs) when duplicates collapse, so upstream drift stays visible
Why
The data-consistency monitor flagged a recurring +7 delta for Alpha New York (On Campus and Mid-Term Joins, schoolYear 2025 — canonical was 35/17, detail table showed 42/24). Investigation traced it to two students (Leonie Weernink, Oliver McLeod) with 4–5 separate HubSpot deal records each. The `enrollment_dtl` mart returns one row per deal; `enrollment_agg` dedupes by contact. Our sync was faithfully mirroring the inflated detail rows into Convex, so every cycle the monitor kept flagging the same mismatch.
PR #68's clear-then-insert pattern was scoped to stale `programCode`s after renames — it didn't address within-cycle duplicate deals. This closes the second failure mode by deduping at the query boundary.
Upstream fix (merge duplicate HubSpot deals, or `SELECT DISTINCT` the mart view) is still worth pursuing so every consumer benefits. This keeps our downstream clean regardless.
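The query-boundary dedupe can be sketched as a keep-first-per-key pass that also reports what it dropped (field names illustrative; the real logging carries contact, cohort, and kept/dropped deal IDs as described above):

```python
def dedupe_by_contact(rows):
    """Collapse duplicate deal rows per (contact, cohort), keeping the first seen.

    Returns (kept_rows, dropped) so duplicates can be warned about, not silently eaten.
    """
    seen = {}
    dropped = []
    for row in rows:
        key = (row["contact_id"], row["cohort"])
        if key in seen:
            dropped.append((key, row["deal_id"]))  # visible upstream-drift signal
        else:
            seen[key] = row
    return list(seen.values()), dropped
```

Deduping here, rather than in each consumer, is what stops the monitor from re-flagging the same +7 delta every cycle.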
#99 AERIE-120 - feat(vip-activity): add VIP user activity email notifications — @benji-bizzell · no labels
Summary
- Admin-managed VIP user list with realtime (5-min batched) and daily-digest email cadences via `@convex-dev/resend`
- New `/admin/vip` surface for managing the VIP list and per-admin subscriptions
- Built across 3 specs (resend foundation → notification pipeline → admin UI), renamed from "Spotlight Activity" to "VIP Activity" before ship to avoid a naming clash with a prior product
Why
Leadership wants visibility into what specific users are querying so we can refine the areas they explore. The existing Admin surface supports conversation replay but requires manual checking; VIP Activity automates that via email.
#2597 Add deferred tax (DTA/DTL) lookup for MFR Financial Highlights bullet 5 — @eric-tril · no labels
Summary
Financial Highlights bullet 5 in the Group MFR memo previously used total income tax expense for the deferred tax sentence. This PR introduces a dedicated query against NetSuite's monthly_financial_detail table (account 72100, memo DTA / DTL%) to fetch the actual deferred tax amounts QTD for current and prior year. The sentence now correctly labels negative values as "deferred tax benefit" and positive values as "deferred tax expense", with graceful [TBD] fallback when data is unavailable.
Business Value
The MFR memo is a board-facing document. Using the wrong data source for the deferred tax line item produces inaccurate financial narratives. This fix ensures the deferred tax sentence reflects the correct DTA/DTL utilization figures from NetSuite, improving the accuracy and trustworthiness of the automated memo output.
Changes
- Added `_deferred_tax.py` with `fetch_deferred_tax_qtd()`, which queries `staging_netsuite.monthly_financial_detail` for account 72100
- Updated `group_defaults.py` to call the new fetch, inject `deferred_tax_cur`/`deferred_tax_pri` into the data dict, and wire up `_dt_failed` for graceful degradation
- Replaced income tax references in the bullet 5 prompt and template with deferred tax data; added benefit/expense labeling logic via `_dt_label_and_value()` and `_deferred_tax_sentence()`
- Updated bullet 4 provenance to include the MFD source and deferred tax query
- Added 7 unit tests covering aggregation, partial data, `None` handling, and query parameter verification
- Updated `test_group_memo_defaults.py` to mock the new dependency and include deferred tax fields in the prompt data fixture
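The sign-handling rule (negative → benefit, positive → expense, missing → `[TBD]` fallback) can be sketched as a simplified stand-in for `_dt_label_and_value()` — not the actual implementation:

```python
def dt_label_and_value(amount):
    """Label a QTD deferred tax amount; None signals the [TBD] fallback path."""
    if amount is None:
        return None, None  # caller degrades gracefully to [TBD]
    label = "deferred tax benefit" if amount < 0 else "deferred tax expense"
    return label, abs(amount)  # sentence reports the magnitude with the label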
Testing
[ ] Run pytest klair-api/tests/reports_service/test_deferred_tax.py
[ ] Run pytest klair-api/tests/reports_service/test_group_memo_defaults.py
[ ] Verify bullet 5 output in a generated memo includes "deferred tax benefit/expense" with correct sign handling
#2595KLAIR-2555 / KLAIR-2556: Spend Breakdown by Provider + drill-down panel — @ashwanth1109 · no labels
Demo
Summary
- Adds a Spend Breakdown by Provider subsection inside AI Spend on `/ai-adoption` — ranked table (Provider · Total Spend · % of Total · Daily Avg), derived client-side from `AICostsSummary.provider_breakdown`, no new API calls.
- Adds a Provider Detail Panel side-panel drill-down (§01 By Model bars + §02 By Workspace table) with a "financial terminal" treatment — vertical provider-color accent with ambient glow, tabular-mono numbers, staggered bar-grow animation. Auto-closes when the selected provider leaves the active filter set.
- Rebrands `/ai-adoption` → "AI Spend & Adoption" across shell routes, landing page, Claire page context, and `routeConfig` + tests.
- Reorders the dashboard sections so AI Spend leads, followed by Token, then Adoption & Retention.
#2596feat(aws-spend): bedrock flag on budget adjustments — @ashwanth1109 · no labels
Demo
Summary
Each AWS Spend budget adjustment now carries a boolean `is_bedrock_adjustment` flag. When the global Include Bedrock toggle is off, flagged adjustments are excluded from budget simulation, metric cards, and the impact preview. When on, all adjustments apply. Writes are unconditional — the flag persists regardless of toggle state. Applies to both net-amortized and unblended flows.
New `is_bedrock_adjustment BOOLEAN NOT NULL DEFAULT FALSE` column on both `core_finance.aws_spend_net_amortized_budget_adjustments` and `core_finance.aws_spend_unblended_budget_adjustments`.
Applied directly to Redshift via the exploring-redshift-tables utility — Redshift does not support `ADD COLUMN IF NOT EXISTS`, so the DDL files keep only `CREATE TABLE IF NOT EXISTS` with the full schema and document the one-off migration as a comment.
Existing rows (58 net-amortized, 41 unblended) default to `FALSE`.
Backend (klair-api)
`NetAmortizedAdjustmentItem` and `BudgetSubmitAdjustmentItem` extended with `is_bedrock_adjustment: bool = Field(default=False, alias=\"isBedrockAdjustment\")`.
`get_net_amortized_adjustments` / `get_unblended_adjustments` accept `include_bedrock: bool` and apply `AND is_bedrock_adjustment = FALSE` when the toggle is off.
GET `/net-amortized/adjustments` and `/unblended/budget/adjustments` accept `include_bedrock` query param.
POST submits include the flag in the S3 COPY record set — writes are unconditional.
`useAdjustmentsState` accepts `includeBedrock: boolean` and filters `adjustedSimulationData` + `impactSummary` client-side so toggling is instant (no refetch).
`EditableCard`: new \"Bedrock adjustment\" checkbox between Category and Approved By.
`SavedCard`: compact \"Bedrock\" pill when flagged.
`useNetAmortizedAdjustments` always fetches with `includeBedrock=true` for the Budget Creation editor — otherwise the delete-then-load submit path would silently drop bedrock-flagged rows.
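The client-side toggle filter can be sketched as a Python stand-in for the `useAdjustmentsState` hook logic (the flag name is from the DDL; the hook itself is TypeScript):

```python
def visible_adjustments(adjustments, include_bedrock):
    """With Include Bedrock off, hide flagged adjustments; with it on, show all."""
    if include_bedrock:
        return list(adjustments)
    return [a for a in adjustments if not a.get("is_bedrock_adjustment", False)]
```

Filtering an already-fetched list is what makes the toggle instant — no refetch on flip.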
Test plan
- [ ] Net-amortized: create an adjustment, flag as Bedrock, save. With Include Bedrock off, simulation totals and impact preview exclude it. Flip Include Bedrock on — it's re-included with no refetch.
- [ ] Unblended: same as above.
- [ ] Submit a mix of flagged + unflagged adjustments; reload; both appear with correct flag state in SavedCard (pill visible/hidden correctly) and EditableCard (checkbox state correct).
- [ ] Pre-existing adjustments (58 net-amortized, 41 unblended) still load and behave as non-bedrock.
- [ ] GET `/api/aws-spend/unblended/budget/adjustments?quarter=2026-Q2&include_bedrock=false` omits flagged rows; with `=true` returns all.
#18 Migrate quickbooks-expense-sync from Klair to Surtr — @kevalshahtrilogy · no labels
Summary
- Ports `quickbooks-expense-sync` (P0, daily 2AM UTC) from `klair-misc` into Surtr's CDK pipeline infrastructure
- Syncs program expense transactions (accounts 140 Motivation Model, 93 Workshops) from QuickBooks Purchase API into `staging_education.quickbooks_expense_transactions`
- Follows established patterns from `quickbooks-ap-sync`: token-manager Lambda auth, Redshift Data API with S3 COPY, paginated QB Query API
Migration details
Option A — Surtr CDK Lambda (full integration with Step Functions, alerting, run history)
Key changes from Klair
| Aspect | Klair | Surtr |
|--------|-------|-------|
| Redshift access | psycopg2 via Lambda layer + Secrets Manager | Redshift Data API (no VPC) |
#2593 KLAIR-2554: feat(spacex): valuation-mode slider and Historical NAV table polish — @sanketghia · no labels
Summary
Two stakeholder-requested enhancements to the SpaceX Valuation page:
1. Scenario slider supports valuation mode — defaults to valuation-first (matching the header edit pattern) so users can naturally think in terms of total company valuation instead of share price
2. Historical NAV table visually matches the Holdings table — same card container, highlighted thead, row separators, and tag styling across both tabs
Changes
1. `ScenarioAnalysis.tsx` — Valuation-mode slider
Added "Valuation / Share Price" mode toggle (centered, same look & feel as the header's Edit toggle). Defaults to Valuation.
Anchor pills show valuation labels in valuation mode: `Current ($1.3T)`, `Historical ($999B)`, `Bull ($1.9T)`, `Bear ($593B)`
Input field in valuation mode: number input with T/B unit dropdown (same UX as header)
Slider scale switches by mode:
Valuation: ~$118B → ~$4.75T (step $1B)
Share Price: $50 → $2000 (step $1)
Tick labels below slider auto-switch between price and valuation formats
Title, subtitle, delta display, result-card subtitle all adapt per mode
Underlying `scenarioPrice` state is unchanged — only display/input scaling changes
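Since only display scaling changes, the mode switch reduces to a pair of pure conversions around the stored share price. A minimal sketch (the `SHARES_OUTSTANDING` figure is a made-up constant for the example, not the real cap table):

```typescript
// Display-only scaling: the slider state is always a share price; valuation
// mode converts on the way in and out. SHARES_OUTSTANDING is hypothetical.
const SHARES_OUTSTANDING = 1_000_000_000;

function priceToValuation(price: number): number {
  return price * SHARES_OUTSTANDING;
}

function valuationToPrice(valuation: number): number {
  return valuation / SHARES_OUTSTANDING;
}
```

Keeping `scenarioPrice` as the single source of truth means the two modes can never disagree: the valuation view is always derived, never stored.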
2. `HistoricalNAV.tsx` — Match Holdings table look & feel
Container now uses `bg-card` + rounded border + `minWidth: 820px`
`thead` has `bg-surface1` highlight
Fund rows have `bg-surface1` + `borderTop: 2px` separators + bold numeric cells
Consistent visual weight with the Holdings table on the Current Valuation tab
3. `HoldingsTable.tsx` — Use shared FundTag
Replaced hardcoded purple fund-tag markup with the shared `FundTag` component
Strauss's teal `X → xAI → SpaceX` tag now displays correctly (was incorrectly purple)
Test plan
- [x] Scenario section defaults to Valuation mode with toggle centered
- [x] Anchor pills show valuation labels; clicking updates both views
#2592 KLAIR-2553: chore(ai-adoption): delete dead AIAdoption screen — @ashwanth1109 · no labels
Demo
As expected, deleting the dead code does not affect the page in any way.
Summary
- Deletes the unreachable `klair-client/src/screens/AIAdoption/` directory (~6,780 LOC across 23 files). The `/ai-adoption` route mounts `AIAdoptionV2`, and the original `AIAdoption` screen is not imported by any live code path.
- Consolidates the only types still used by V2 (`MonthlyData`, `AIRevenueData`) — they were already defined in `AIAdoptionV2/types.ts`, so this just drops the legacy import and a no-op `as MonthlyData[]` cast.
- Fixes a stale `vi.mock` in `AdoptionRetentionChart.spec.tsx` that was mocking a path the chart no longer imports (`AIAdoption/ProductsTable` → `../../tables/ProductsTableV2`). Without this fix, the test would silently break the moment the legacy directory was removed.
- Removes "Adapted from screens/AIAdoption/..." doc breadcrumbs in 5 V2 chart files (now point nowhere) and updates the `<AIAdoption />` JSDoc example in `ProtectedRoute.tsx` to `<AIAdoptionV2 />`.
Why
`AIAdoptionV2` has been the active component behind `/ai-adoption` for some time (see routes.tsx:30,348-359). The legacy directory was orphaned but kept growing import-via-types coupling. Removing it eliminates dead code and a mock that would have rotted into a false-pass.
The `AIAdoption` GA4 analytics identifier in `routeConfig.ts` is intentionally left unchanged — it's a stable analytics ID, not a code reference.
#2591 fix(aws-spend): surface submitted Budget Creation data on dashboard — @ashwanth1109 · no labels
Demo
Summary
Fixes two bugs where the AWS Spend dashboard wasn't reflecting budgets/adjustments submitted via Budget Creation, plus a DX polish on `start-services.sh`.
Bug 1 — Total Budget drifted from the submitted value
Symptom: after submitting an unblended budget for 2026-Q2, Budget Creation showed `Budget After: $7.26M` but the dashboard still showed `Total Budget: $7.20M`.
Root cause: `get_summary` (unblended) and `get_net_amortized_summary` were reading `total_budget` from materialized views / rollup tables.
These aren't refreshed by the submit flow, which writes to the canonical v2 tables (`aws_spend_unblended_budgeted_amounts_v2`, `aws_spend_net_amortized_budgeted_amounts`). `grep` across the service found zero `REFRESH MATERIALIZED VIEW` calls after submit — so the MVs drift until some upstream pipeline re-runs.
Fix: the summary CTEs now read `total_budget` directly from the v2 table joined with `aws_spend_budget_account_mapping` (for BU/class filtering), summing `budgeted_amount` when `include_bedrock=True` and `aws_budgeted_amount` otherwise. Same pattern both cost types.
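The column choice can be illustrated with a small sketch (field names are assumed from the PR text; the real fix lives in SQL CTEs): total budget now comes straight from the v2 rows, with the Bedrock toggle selecting which column to sum.

```typescript
// Illustrative version of the corrected aggregation. The row shape mirrors
// the two v2 columns described above; names are assumptions.
interface BudgetRow {
  budgetedAmount: number;    // full budget, Bedrock included
  awsBudgetedAmount: number; // budget excluding the Bedrock slice
}

function totalBudget(rows: BudgetRow[], includeBedrock: boolean): number {
  return rows.reduce(
    (sum, r) => sum + (includeBedrock ? r.budgetedAmount : r.awsBudgetedAmount),
    0,
  );
}
```

Reading the canonical table directly removes the dependency on materialized-view refresh timing, which is the drift described in Bug 1.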
Bug 2 — Unblended adjustments invisible on dashboard
Symptom: adjustments entered during Budget Creation (with `correction_category`, `approved_by`, etc.) never appeared in the Total Budget tooltip / badge for Unblended view. They did appear for Net Amortized.
Root cause: the unblended branch of `AWSSpendShell.tsx` called the legacy `useAWSSpendAdjustments` hook → `GET /api/aws-spend/adjustments` → `aws_spend_budget_class_adjustments` (old table, not populated by Budget Creation). Budget Creation writes to `aws_spend_unblended_budget_adjustments`, which is served by the existing `GET /api/aws-spend/unblended/budget/adjustments` endpoint — just never wired in.
Fix: unblended now uses `useNetAmortizedAdjustments(quarter, 'unblended')`, which hits the v2 adjustments endpoint. The hook already supported this mode; this PR just passes the right argument. `NetAmortizedAdjustmentItem[]` is structurally compatible with the downstream tooltip/table consumers (net-amortized was already using it).
Bonus — `start-services.sh` syncs `.env` from main worktree
On a fresh linked worktree, `klair-api/.env` didn't exist (secrets aren't committed) and `klair-client/.env` was being unconditionally overwritten from `.env.example` on every run. Added a step at the top of `start-services.sh` that:
- detects a linked worktree via `--git-dir != --git-common-dir`,
- resolves the main worktree from `--git-common-dir`,
- copies each missing `.env` from there; keeps local copies if present; warns + continues if main is also missing.
Also made the `.env.example → .env` frontend seeding conditional on `.env` not existing, so the sync isn't clobbered immediately after. `start-dev.sh` just wraps this script, so it picks up the same behaviour.
Investigated — "Budget Before" vs "Original Budget" gap (no action required)
The ~$620K gap between Budget Creation (`Budget Before: $7.82M`) and dashboard (`Original Budget: $7.20M`, excl-Bedrock default) was traced end-to-end and confirmed to be expected behavior driven by the Bedrock toggle, not a bug:
- Budget Creation's `AdjustmentImpactBanner` reads `simulationData.summary.totalBudget` from `useAdjustmentsState.ts:295`. That value is `SUM(quarterly_budget)` out of `get_unblended_budget_simulation` / `get_net_amortized_budget_simulation`, which always includes Bedrock — the endpoint doesn't accept an `include_bedrock` param, and `BudgetSimulation` isn't passed `includeBedrock` from `AWSSpendShell.tsx:468`.
- Dashboard's `get_summary` respects the Bedrock toggle: sums `budgeted_amount` when include-Bedrock, `aws_budgeted_amount` otherwise (both stored by the same submit path — `budgeted_amount = quarterly_budget`, `aws_budgeted_amount = quarterly_budget - bedrock_budget`).
- Numerically: `$7.82M − $7.20M = $0.62M = SUM(bedrock_budget)` for the quarter. Identical T5W projection, different slice.
- Adjustments match on both pages because they're the same scalar, applied the same way regardless of Bedrock.
Same behavior applies to Net Amortized (shared `_build_budget_simulation_response` + shared `useAdjustmentsState`). Since both pages are internally consistent — Budget Creation shows the full budget being submitted (writes both columns), Dashboard lets users toggle which slice to track — the current separation is intentional. No code changes needed.
Test plan
- [ ] Manual: submit an unblended budget in Budget Creation, refresh the dashboard — verify Total Budget updates immediately and the adjustments badge appears
| Dimension | Option A — Surtr CDK Pipeline | Option B — Standalone |
|---|---|---|
| Approach | First-class Surtr pipeline with `pipeline.json`, deployed by CDK | Copy into runners with SAM template for explicit function naming |
| Function name | `pipeline-quickbooks-token-manager-{env}` — requires consumer updates | `quickbooks-token-manager` — no consumer changes |
| Observability | Step Function + run history + CloudWatch alarms via CDK | Pipeline's own CloudWatch metrics only |
| Deploy process | `npx cdk deploy` (consistent with all pipelines) | Separate `sam deploy` lifecycle |
| Risk level | Medium — must update consumers in same deploy | Low — logic untouched, name preserved |
| Best fit | Long-term consistency with all other QB pipelines | Quick migration with minimal blast radius |
Decision: Option A (Surtr CDK Pipeline)
Chosen for consistency — all 3 other QB pipelines (`quickbooks-ap-sync`, `quickbooks-pl-monthly`, `quickbooks-core-tables`) are already CDK-managed. Having one standalone outlier creates maintenance burden and misses Surtr observability.
Credentials & secrets
Same AWS account (`479395885256`) — no new secrets needed
IAM grants same permissions: `secretsmanager:GetSecretValue` + `secretsmanager:UpdateSecret`
Idempotency
Safe to call multiple times — cache check is non-destructive, token rotation is last-write-wins
No database writes, no Redshift interaction
Worst case on double-run: two QB API calls for same company (harmless)
Consumer updates
Both `quickbooks-ap-sync` and `quickbooks-pl-monthly` previously hardcoded `FunctionName="quickbooks-token-manager"`. Updated to derive the CDK function name from the `ENVIRONMENT` env var:
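The derivation itself is a one-liner; a hedged sketch of the idea (the real consumers are Python Lambdas, and the `dev` fallback is an assumption — only the name pattern comes from the table above):

```typescript
// Derive the CDK-managed token-manager function name from the deployment
// environment instead of hardcoding "quickbooks-token-manager".
// The "dev" default is an assumption for this example.
function tokenManagerFunctionName(env: string | undefined): string {
  return `pipeline-quickbooks-token-manager-${env ?? "dev"}`;
}
```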
#2590Add per-subsidiary swap accrual breakdown and note drill-down panels — @eric-tril · no labels
Summary
- Replaces the aggregate account 31201 (Loan hedge loss accrual) balance with per-subsidiary current and prior quarter-end values sourced from `monthly_financial_detail` instead of the balance sheet table
- Note (ii) text now shows in-the-money/out-of-the-money status with both current and prior quarter amounts for Aurea, GFI, and YYYYY
- Drill-down detail panels added for both Note (ii) (swap accrual journal entries) and Note (iii) (FX rate YTD changes)
Business Value
Provides analysts with granular subsidiary-level swap valuations and period-over-period comparison directly in the Book Value report, eliminating the need to manually look up account 31201 entries in NetSuite. The drill-down panels improve auditability by letting users inspect the underlying journal entries and FX rate sources without leaving the application.
Changes
Replaced _SWAP_ACCRUAL_SUBSIDIARIES placeholder list with live queries against monthly_financial_detail grouped by subsidiary and accounting period
Added _prior_quarter_end helper to compute the comparison period date
Changed note_ii_swap_accrual response shape from Record to a structured object with subsidiaries, current_period_label, and prior_period_label
Added new SwapAccrualModel / SwapAccrualSubsidiaryModel Pydantic models in the router
Added /swap-accrual-detail API endpoint returning grouped GL journal entries for account 31201
Updated buildNoteIIText (frontend) and _build_note_ii_text (DOCX report) to render per-subsidiary current/prior values with in-the-money/out-of-the-money labels
Added NoteIIIDetailPanel.tsx for FX rate drill-down (no additional API call needed)
Wired onInspect handlers for both Note (ii) and Note (iii) in BookValueView.tsx
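The `_prior_quarter_end` helper mentioned in the changes is a small date calculation; a hypothetical TypeScript rendering of it (the real helper is Python, and this is a sketch of the rule, not a port of its code):

```typescript
// Given any date, return the last day of the previous calendar quarter,
// e.g. 2026-03-31 -> 2025-12-31.
function priorQuarterEnd(d: Date): Date {
  const quarterStartMonth = Math.floor(d.getMonth() / 3) * 3; // 0, 3, 6, 9
  // Day 0 of the quarter's first month is the last day of the prior month,
  // which is exactly the previous quarter's end.
  return new Date(d.getFullYear(), quarterStartMonth, 0);
}
```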
Testing
Verify the Book Value schedules page loads correctly for a period (e.g., 2026-03-31) and Note (ii) text shows subsidiary amounts with prior quarter comparison
Click the inspect icon on Note (ii) to confirm the grouped GL detail panel opens with journal entries from account 31201
Click the inspect icon on Note (iii) to confirm the FX rate detail panel renders CAD/GBP/EUR YTD changes
Export the DOCX report and verify Note (ii) text includes the updated wording with in-the-money/out-of-the-money labels
Confirm that when swap accrual data is unavailable, the fallback text renders correctly
#95 Migrate Education Expense Analysis dashboard from Klair to Aerie — @YibinLongTrilogy · no labels
Summary
Migrates the Education Expense Analysis dashboard from Klair to Aerie (Convex-backed). This adds a full-stack expense analysis feature: Redshift sync pipeline, Convex schema/queries/mutations, and a React frontend with transaction tables, vendor spend period-over-period comparison, report management, and multi-axis filtering (date range, school type, school, search, saved reports).
The dashboard consumes QuickBooks expense data synced from Redshift into Convex, providing transaction-level detail, vendor spend period-over-period comparison with category aggregation, and a saved reports system that drives account-level filtering.
Changes
Data Layer (Sync + Convex)
`sync/src/analytics/queries/expenses.ts`(new) — Redshift queries for transactions, vendor classifications, expense reports, and metadata (schools + accounts)
`sync/src/analytics/refresh.ts` — Wire expense sync into the hourly analytics refresh cycle
`sync/src/analytics/types.ts` — Add expense-related type definitions
`chat/convex/analyticsSchema.ts` — Add `expenseTransactions`, `expenseReports`, `vendorClassifications`, `expenseMetadata`, and `expenseInsights` tables with indexes
`chat/convex/analytics/expenses.ts`(new) — Upsert mutations for sync, plus CRUD mutations (`createExpenseReport`, `updateExpenseReport`, `deleteExpenseReport`)
`chat/convex/dashboards/expenses.ts`(new) — Query functions: `getTransactions` (date range + school + account name + search filtering), `getVendorSpend` (current vs. previous period with weekly sparkline aggregation), `getVendorSpendForPeriod` (single-period variant for chunked queries), `getExpenseMetadata`, `getExpenseReports`, `getVendorClassifications`
`chat/convex/dashboards/expenseInsights.ts`(new) — AI insights action using Claude API with 24-hour caching (currently disabled at UI level pending `ANTHROPIC_API_KEY` config)
Frontend
`chat/components/dashboards/expenses/expense-analysis-view.tsx`(new) — Main view with filter bar, scrollable content area, and Reports button that opens the management modal directly. Manages active report state and resolves report `accountIds` to `accountNames` via metadata for query filtering. Uses `toLocalDateString()` for timezone-safe date conversion.
`chat/components/dashboards/expenses/expense-filters.tsx`(new) — Filter bar with: report selector dropdown (single-select, drives account filtering), themed `DatePicker` components, date preset buttons (This Quarter, Next Quarter, Next/Last 90 Days, This Year, Next Year), School Type and School multi-select dropdowns with "All" toggle logic, and debounced search. Max range enforced at 16 months.
`chat/components/dashboards/expenses/use-chunked-queries.ts`(new) — Hooks (`useChunkedTransactions`, `useChunkedVendorSpend`) that split large date ranges into ≤4-month chunks, issue parallel Convex queries per chunk, and merge results client-side. Allows querying up to 16 months while keeping each Convex query within safe row-count limits.
`chat/components/dashboards/expenses/transactions-section.tsx`(new) — Paginated transactions table with summary stats (total spend, transaction count, unique vendors, avg transaction), category color coding
`chat/components/dashboards/expenses/vendor-spend-section.tsx`(new) — Vendor spend table with current vs. previous period comparison, weekly sparkline charts, change indicators, vendor/category aggregation toggle, sortable columns, category color badges
`chat/components/dashboards/expenses/account-picker.tsx`(new) — Searchable checkbox-list account picker for report builder, replacing raw comma-separated text input. Supports Select All / Clear All, search by name or ID.
`chat/components/dashboards/expenses/report-builder-modal.tsx`(new) — Create/edit report modal supporting three modes: new (blank), edit (pre-populated, calls `updateExpenseReport`), and duplicate (pre-populated with `id=0` and "(Copy)" suffix, calls `createExpenseReport`). Uses `AccountPicker` for account selection.
`chat/components/dashboards/expenses/report-management-modal.tsx`(new) — Report management modal with "+ New Report" button in header, plus Edit, Duplicate, and Delete actions per row. Opened directly by the Reports button in the filter bar.
`chat/components/dashboards/expenses/expense-filter-context.tsx`(new) — Lightweight context for sharing date range between the main view and future sidebar panels
`chat/components/dashboards/expenses/expense-insights-panel.tsx`(new) — AI insights panel (complete but disabled at UI level via comment)
`chat/components/dashboards/expenses/school-type-mapping.ts`(new) — Hardcoded school-to-type mapping ported from Klair, used by the School Type filter dropdown
`chat/components/dashboards/expenses/category-colors.ts`(new) — Category color palette with fuzzy matching and deterministic hash fallback for unknown categories
`chat/lib/use-dashboard-tabs.tsx` — Register the "Edu Expense Analysis" tab
`chat/components/dashboards/dashboards-layout.tsx` — Render `ExpenseAnalysisView` for the new tab, wrapped in `ExpenseFilterProvider`
`chat/components/shell/dashboards-context-panel.tsx` — Add context panel entry for the expense tab
Design Decisions
Reports as filters: Selecting a report in the filter bar extracts its stored `accountIds` (JSON array or comma-separated string), resolves them to account names via `expenseMetadata`, and passes the names as `accountNames` to Convex queries. This matches Klair's V2 pattern where reports drive the view's data scope.
Chunked queries for large date ranges: Instead of a single 6-month max query, the frontend splits date ranges into ≤4-month chunks (up to 4 chunks = 16 months max). Each chunk is a separate Convex query; results are merged client-side. This avoids Convex row-count limits while supporting full fiscal year queries. The backend enforces a 4-month per-query maximum.
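The chunking rule can be sketched as a pure function (shapes are illustrative; the real hooks also fan out the Convex queries and merge results):

```typescript
// Split [start, end] into consecutive sub-ranges of at most monthsPerChunk
// months each, so every query stays under the backend's per-query maximum.
function chunkDateRange(
  start: Date,
  end: Date,
  monthsPerChunk = 4,
): Array<{ start: Date; end: Date }> {
  const chunks: Array<{ start: Date; end: Date }> = [];
  let cursor = new Date(start);
  while (cursor.getTime() <= end.getTime()) {
    // Last day covered by this chunk: one day before (cursor + monthsPerChunk).
    const chunkEnd = new Date(
      cursor.getFullYear(),
      cursor.getMonth() + monthsPerChunk,
      cursor.getDate() - 1,
    );
    const clamped = chunkEnd.getTime() < end.getTime() ? chunkEnd : new Date(end);
    chunks.push({ start: new Date(cursor), end: clamped });
    cursor = new Date(clamped.getFullYear(), clamped.getMonth(), clamped.getDate() + 1);
  }
  return chunks;
}
```

A full calendar year splits into three ≤4-month chunks, each issued as its own parallel query and merged client-side.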
Reports button opens modal directly: The Reports button opens `ReportManagementModal` as a centered modal (not a slide-out drawer). This avoids the complexity of a drawer component and puts report management front-and-center with a "+ New Report" action in the header.
AccountPicker over text input: The report builder uses a searchable checkbox list (`AccountPicker`) instead of a comma-separated text input for `accountIds`. Accounts are loaded from `expenseMetadata` and stored as a JSON array of string IDs.
Date handling: All date-to-string conversions use `toLocalDateString()` (reads local `getFullYear()`/`getMonth()`/`getDate()`) rather than `.toISOString().slice(0,10)` to avoid UTC off-by-one issues. Date presets construct `Date` objects at local noon (`12:00:00`) for timezone safety.
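A minimal sketch of that conversion, assuming the helper described above:

```typescript
// Build YYYY-MM-DD from local date parts. Using toISOString().slice(0, 10)
// instead would shift to UTC and can move the date by a day for users in
// negative-offset timezones.
function toLocalDateString(d: Date): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}`;
}
```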
Custom DatePicker over native inputs: The existing themed `DatePicker` component replaces browser-native `<input type="date">` elements, which open an unstyled light-theme popup that clashes with the dark Aerie UI.
AI Insights disabled: The `ExpenseInsightsPanel` is fully built but commented out in the view pending `ANTHROPIC_API_KEY` configuration in the Convex environment.
Test Plan
- [x] TypeScript compiles cleanly (`tsc --noEmit` and `convex typecheck`)
- [x] Biome lint/format passes on all changed files
- [x] Verify transactions table loads with date range and school filters
- [x] Verify vendor spend section shows current vs. previous period comparison
- [x] Toggle vendor/category aggregation in vendor spend table
- [x] Select a saved report from the filter bar dropdown and confirm transactions are scoped to that report's accounts
- [x] Click "Reports" button — confirm modal opens with report list and "+ New Report" button
- [x] Create a new report using the AccountPicker (search, select all, clear all)
- [x] Edit a report name and accounts, confirm changes persist
- [x] Duplicate a report, confirm it creates a new entry with "(Copy)" suffix
- [x] Select a date range > 4 months (e.g., "This Year") — verify data loads via chunked queries
- [x] Verify max 16-month range validation shows error for longer ranges
- [x] Click a date preset (e.g., "This Quarter") — verify date inputs update correctly
- [x] Verify the calendar popups match the dark theme
Budget Bot 4.0 Phase B2 — replaces the 10-step wizard with a 2-step setup flow and a full-page Google Docs-style document editor.
Wizard Collapse (10 steps → 2)
Step 1: BU + quarter selection with integrated doc import — auto-detects prior quarter docs from DynamoDB, shows doc picker + Google Doc URL paste fallback, or "start blank" option
Step 2: Brainlift discovery (unchanged from 3.0)
All other wizard steps (goals review, template, MIPs, commentary, generation) are deprecated from the UI. Backend handlers preserved for future Claire chat skills.
Full-Page Document Editor
Single continuous TipTap editor — entire document renders in one scrollable pane with one toolbar, Google Docs style
Outline navigation — left panel acts as table of contents, highlights active section via IntersectionObserver, click-to-scroll
Section boundary — H2 headings are structural (not user-toggleable), toolbar offers H3 + lists + inline formatting only
Auto-save — debounced save (2s) splits document by H2 boundaries, diffs per-section, saves only changed sections via PUT endpoint
Editable section titles — click to rename inline, persisted via PUT endpoint
Full-page experience — editor is a full viewport page (not modal). Modal used only for BU selection + brainlift setup.
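The auto-save split on H2 boundaries can be illustrated on markdown (the real editor works on TipTap's document model; this sketch only shows the boundary rule):

```typescript
// Break a document at "## " headings so each section can be diffed and
// saved independently. Section shape is illustrative.
interface Section {
  title: string;
  content: string;
}

function splitByH2(markdown: string): Section[] {
  const sections: Section[] = [];
  let current: Section | null = null;
  for (const line of markdown.split("\n")) {
    const h2 = line.match(/^## (.+)$/);
    if (h2) {
      if (current) sections.push(current);
      current = { title: h2[1], content: "" };
    } else if (current) {
      current.content += line + "\n";
    }
  }
  if (current) sections.push(current);
  return sections;
}
```

Diffing each section against its last-saved copy and PUTting only the changed ones keeps the debounced save cheap even for long documents.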
Backend
`create_from_prior_quarter` accepts optional `doc_url` for direct Google Doc import (skips auto-detection)
`create_blank_session` creates session with default template sections (empty content)
Both set initial phase to `BRAINLIFT` (not `REVIEW`)
Brainlift accept/skip jumps to `REVIEW` when session has pre-populated sections
`PUT /wizard/{id}/sections/{section_id}` accepts optional `title` parameter for section rename
`POST /wizard/create-blank` endpoint for path 3 (blank outline)
Quarter reference substitution in both section titles AND content when cloning
Google Doc table extraction outputs proper markdown with pipes and separator rows
Editor Polish
`markdownToHtml` handles headings, bullet/ordered lists, pipe tables (with and without separator rows)
Editor CSS: heading sizes, paragraph spacing, list styling, table borders/headers
`EditorFeatures.headingLevels` prop controls which heading buttons appear in toolbar
Test plan
- [x] Path 2 (primary): New Report → select BU + quarter → doc picker shows prior docs → select one or paste URL → Import & Continue → brainlift → full-page editor with all sections
- [x] Path 3 (blank): New Report → select BU + quarter → "Start without a prior document" → brainlift → editor with empty template sections
#2583 Fix ARR 6/30/26 (BU+RNWLS) column showing $0 for Canopy BU — @ashwanth1109 · no labels
Demo
Summary
- BU+Renewals column was sourced from `mart_customer_success.budgets_recurring_revenue`, which is loaded from a Google Sheet via `sp_update_consolidated_budgets` and was missing Canopy/Contently/Kayako rows — LEFT JOIN returned NULLs that rendered as `$0.0M`
- Switched source to `mart_customer_success.arr_gap_live_budgets` (purpose-built for this dashboard, refreshes every 2h, superset of BRR with identical schema and `will_renew` field semantics)
- Consolidated the `brr_with_stage` and `latest_live_budgets` CTEs into a single shared `latest_live_rows` CTE so both BU+Rnwls and Live columns read from the same snapshot
#2581 fix(arr): preserve BU grouping when sorting Unplanned Churn flat table — @ashwanth1109 · no labels
Demo
Summary
- In the Unplanned Churn non-segregated (flat) table, sorting used to do a single global sort across every customer, scattering rows across Business Units.
- Sorting now preserves BU grouping: BU groups and customers within each group are both ordered by the selected sort column. Sorting by BU name orders customers inside by name; sorting by Projected ARR orders BUs by total Projected ARR and customers within by their own value; dates use the earliest/latest date in the BU as the group key.
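The grouped sort reduces to: order groups by an aggregate of the sort column, then order rows inside each group by their own value. A sketch for the Projected ARR case (row shape is illustrative, not the real table model):

```typescript
// Group rows by BU, sort groups by total projectedArr, then sort customers
// within each group by their own projectedArr.
interface Row {
  bu: string;
  customer: string;
  projectedArr: number;
}

function sortGrouped(rows: Row[], desc = true): Row[] {
  const groups = new Map<string, Row[]>();
  for (const r of rows) {
    const g = groups.get(r.bu) ?? [];
    g.push(r);
    groups.set(r.bu, g);
  }
  const dir = desc ? -1 : 1;
  const total = (g: Row[]) => g.reduce((s, r) => s + r.projectedArr, 0);
  // Order the BU groups by their aggregate, then flatten with an inner sort.
  return [...groups.values()]
    .sort((a, b) => dir * (total(a) - total(b)))
    .flatMap((g) => [...g].sort((a, b) => dir * (a.projectedArr - b.projectedArr)));
}
```

The date columns follow the same shape with a different group key (earliest or latest date in the BU, per the summary above).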
Test plan
- [ ] Open ARR & Retention → Unplanned Churn, switch to the non-segregated (flat) table view.
- [ ] Click the Business Unit / Customer header — verify BUs are alphabetical and customers inside each BU are alphabetical; toggle desc and confirm both flip.
- [ ] Click Projected ARR — verify BUs order by total Projected ARR (desc by default) and customers within each BU order by their own Projected ARR.
- [ ] Click Current ARR — verify same behavior using ArrAsOf.
- [ ] Click Renewal Date — verify BUs order by earliest renewal (asc) / latest renewal (desc), customers within by their own renewal date.
- [ ] Confirm the segregated (expandable) view is unchanged.
- [ ] Pagination still works (10 per page, 14 total in the example).
#2582 Fix section nav active tab jumping during smooth scroll — @eric-tril · no labels
Summary
The `SectionNav` component in ARR Retention Reports had a bug where clicking a section tab would briefly highlight it, but the scroll-spy logic would immediately override the active state during the smooth-scroll animation, causing the active tab indicator to flicker or jump to intermediate sections. The fix suppresses scroll-spy recalculation during programmatic scrolls and corrects the threshold math to use viewport-relative coordinates from `getBoundingClientRect()` instead of `offsetHeight`.
Changes
Added a `scrollingToRef` to track when a programmatic smooth scroll is in progress, suppressing `recalculate` during that window
Set `activeSectionId` immediately on click so the tab highlights without waiting for scroll-spy
Replaced the `offsetHeight`-based threshold with `getBoundingClientRect().bottom` for correct viewport-relative comparison
Computed a `scrollMarginEdge` from the scroll container top plus `STICKY_HEADER_HEIGHT` to handle both manual and programmatic scrolling
Moved the `scrollContainer` lookup earlier in `recalculate` so it can be reused for both the threshold calculation and bottom detection
Added an 800ms timeout after `scrollIntoView` to re-enable scroll-spy once the smooth scroll has likely completed
Testing
- [ ] Navigate to an ARR Retention Report with multiple sections
- [ ] Click a section tab in the sticky nav and verify the tab highlights immediately without flickering
- [ ] Confirm that after the smooth scroll completes, manual scrolling correctly updates the active tab
- [ ] Verify that scrolling to the bottom of the page still activates the last section tab
- [ ] Test with sections that are collapsed and need to expand before scrolling
#12 Migrate co-jira-pipeline from Klair to Surtr — @kevalshahtrilogy · no labels
Summary
- Ports `co-jira-pipeline` from `klair-udm` into Surtr's CDK Lambda pipeline infrastructure (Option A)
- Fetches CO JIRA tickets from `trilogy-eng.atlassian.net` via two JQL queries (label-based + Change Requestor-based), deduplicates by `jira_key`, and upserts into `core_finance.aws_spend_co_jira_tickets` using S3 staging + `COPY` for idempotent bulk loads
- Reuses the existing `co-jira-pipeline/jira-credentials` secret and `klair-backend-uploads` S3 bucket (same AWS account — no credential migration needed)
- Table `core_finance.aws_spend_co_jira_tickets` already exists in Surtr's Redshift with matching schema — no migration required
The original ECS Fargate container (256 CPU / 512 MB) is replaced with a Lambda function — the pipeline's ~2–5 min runtime and lightweight deps (`boto3`, `requests`, `python-dateutil`) are well within Lambda limits. The Klair module structure (`jira_client`, `models`, `transformers`, `storage`) is preserved under `src/` with the handler adapted to Surtr's `handler(event, context)` signature.
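The dedupe step between the two JQL result sets is straightforward; an illustrative TypeScript sketch of the logic (the real pipeline is Python, and the ticket shape here is an assumption):

```typescript
// Two JQL queries (label-based + Change Requestor-based) can return the same
// ticket; merge and deduplicate by jira_key before the upsert.
interface Ticket {
  jiraKey: string;
  summary: string;
}

function dedupeByKey(batches: Ticket[][]): Ticket[] {
  const seen = new Map<string, Ticket>();
  for (const batch of batches) {
    for (const t of batch) {
      // Later batches overwrite earlier ones, keeping the freshest copy.
      seen.set(t.jiraKey, t);
    }
  }
  return [...seen.values()];
}
```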
Test plan
- [x] `cdk synth Pipeline-co-jira-pipeline-dev -c env=dev` passes with no Zod or validation errors
#2576 KLAIR-2551: Fix budget class name case mismatch in Income Statement — @sanketghia · no labels
Summary
- Budget CSV files from S3 (Google Sheets/Abacum) started using title-cased class names in 2026-Q1 (e.g., `"Aurea Sas (Ars) Consulting"` instead of `"Aurea SAS (ARS) Consulting"`), causing 60 class values to appear as duplicate rows in the Performance Review Income Statement.
- The stored procedure `sp_update_consolidated_budgets` already JOINs to `master_mapping_enriched` using case-insensitive matching (`LOWER`), but then selects the CSV's class value instead of the canonical `netsuite_class` from the mapping table.
- Fix: Use `COALESCE(mme.netsuite_class, .class)` in all 6 INSERT sections (NRR, HC XO non-penalty, HC XO penalty, Non-XO HC, Vendor Expenses, CFBU Expenses) and their corresponding GROUP BY clauses.
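The resolution logic behind that `COALESCE` can be illustrated outside SQL (a sketch, not the stored procedure's code): match the CSV class name case-insensitively, then prefer the canonical `netsuite_class` over whatever casing the CSV used.

```typescript
// mapping: lowercased class name -> canonical netsuite_class.
// Unmatched names pass through unchanged, mirroring COALESCE's fallback arm.
function canonicalClass(csvClass: string, mapping: Map<string, string>): string {
  return mapping.get(csvClass.toLowerCase()) ?? csvClass;
}
```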
Root Cause
The S3 CSV files at `s3://gsheet-data/budgets-simplified/` (specifically `CurrentSimplifiedNRR.csv` and `CurrentSimplifiedExpensesVendors.csv`) contain title-cased class names starting from the 2026-Q1 budget snapshot (`2026-02-28`). All prior budget versions and all Actuals use the canonical casing.
Test Plan
- [x] Deploy the modified SP to Redshift via `CREATE OR REPLACE PROCEDURE`
- [x] Manually do the refresh from the `/performance-review` page
- [x] Spot-check "Aurea SAS (ARS) Consulting" and "Spiral (ARS) Consulting" on the `/performance-review` page show as single rows in the Performance Review dashboard
#98 fix(dashboards): disable PMO and Edu Joe Charts nav until data pipelines are fixed — @benji-bizzell · no labels
Summary
- Hide PMO Projects and Edu Joe Charts tabs from dashboard navigation
- Extract `nullsToUndefined` from `refresh.ts` into shared `error-utils.ts` and use it in `pmo-refresh.ts`
Why
Both dashboards were merged (#66, #94) but aren't populating data after 5+ hours of running. Disabling nav prevents users from hitting empty dashboards while the team investigates the pipeline issues. The `nullsToUndefined` extraction was a cleanup found during review — `pmo-refresh.ts` had 30+ lines of manual `field ?? undefined` mapping that duplicated an existing utility used at 12 other call sites.
Test plan
- [ ] Verify PMO and Edu Joe Charts tabs are gone from the sidebar
- [ ] Verify navigating to `?tab=pmo` or `?tab=edu-joe-charts` falls back to default tab
- [ ] Verify existing dashboards (Buildout, Operating, Admissions, Community) still work
#97 fix(enrollments): align capacity resolution with forecast dashboard — @benji-bizzell · no labels
Summary
- Enrollments dashboard now uses the same two-tier capacity resolution as the forecast dashboard: `enrollmentProjections` first, `schoolMappings` fallback.
Why
Capacity was empty for many schools on the Enrollments dashboard because it only read from `schoolMappings.maxStudentCapacity` (a static Wrike backfill). The Forecast dashboard already had better coverage by preferring `enrollmentProjections.capacity` (refreshed daily from EDU-CRM). The values should match — capacity is a property of the location, not a view-specific metric.
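The shared two-tier resolution is tiny; a sketch under the assumption that both sources surface as optional numbers:

```typescript
// Prefer the daily-refreshed enrollmentProjections capacity; fall back to the
// static schoolMappings value; undefined only when both are missing.
function resolveCapacity(
  projectionCapacity: number | undefined,
  mappingCapacity: number | undefined,
): number | undefined {
  return projectionCapacity ?? mappingCapacity;
}
```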
Test plan
- [ ] Verify Enrollments dashboard shows capacity for schools that previously showed `—`
- [ ] Verify capacity values match between Enrollments and Forecast dashboards
- [ ] Confirm fill rate computes correctly with the new capacity values
#66 Migrate Edu Joe Charts dashboard from Klair to Aerie — @YibinLongTrilogy · no labels
Summary
Migrate the Edu Joe Charts dashboard (8 tabs, ~50 components, 22 data hooks) from Klair's React SPA at `/edu-joe-charts` to Aerie's Next.js 15 app at `/dashboards?tab=edu-joe-charts`. Every component is rewritten to follow Aerie patterns (Tailwind v4 classes, `useMemo` derivation pipelines, URL-backed state via search params, Clerk auth) while calling the same Klair REST API endpoints. No new API endpoints, no Convex migration, no feature additions.
Implements Phases 1-5 of the migration plan. Phase 6 (Klair deprecation/removal) is deferred pending stakeholder sign-off.
Changes
Shared Components (Phase 3) — 13 reusable components under `edu-joe/shared/`:
Tab Components (Phase 4) — 8 tabs under `edu-joe/tabs/`:
Root View (Phase 1) — `edu-joe-charts-view.tsx` — Filter bar with conditional controls per sub-tab, sub-tab router, footer
Alerts (2 files) — Alert panels + domain-specific alerts gated by BU config
Theme Polish (Phase 5)
Fixed 4 Recharts components that rendered with hardcoded `#666` text (unreadable in dark mode) by adding `wrapperStyle={{ color: "var(--color-stone)" }}`
Rewrite, not copy-paste: Every component was rewritten to Aerie patterns rather than adapted from Klair. This means Tailwind v4 classes instead of inline styles, `useMemo` derivation pipelines for data transforms, and CSS variable tokens instead of Klair's `--klair-*` tokens.
No @nivo/funnel: The `PipelineFunnel` component (the only Nivo consumer) was reimplemented as stacked horizontal bars with Tailwind, avoiding the entire Nivo dependency tree.
No Convex: Data continues to flow from Klair's FastAPI REST endpoints via `useSectionFetch`. Same endpoints, same response shapes.
Chart tokens: Added 10 new CSS variables (`--color-chart-1` through `--color-chart-8`, `--color-chart-grid`, `--color-chart-axis`) with both dark and light mode values, keeping chart colors consistent across themes.
Print-via-CSS: Print styles use `@media print` overrides on `:root` CSS variables so Recharts SVGs automatically pick up print-safe colors without component changes.
Test Plan
- [x] `npm run build` compiles cleanly with no TypeScript errors
#2566 Fix Software MFR summary drill-down, EBITDA bucketing, and cash flow defaults — @eric-tril · no labels
Summary
The Software financial highlights summary table lacked drill-down capability, preventing users from clicking cells to inspect underlying data. Additionally, the EBITDA reconciliation was silently dropping "other/unclassified" expense buckets from the adjusted EBITDA calculation, and the cash generation commentary was hardcoded to [TBD] even when actual cash flow data was available.
Business Value
Analysts reviewing the Software monthly memo can now click any summary table cell (Revenue, EBITDA, Net Income, Operating Cash Flow) to drill into the source financial statement, reducing context-switching and investigation time. The EBITDA fix ensures adjusted EBITDA figures are accurate when unknown GAAP line items appear, preventing silent misstatement. Cash generation bullets now auto-populate from uploaded or Redshift data, reducing manual memo editing.
Changes
Frontend: Summary table drill-down — Added summaryClickHandlers prop to SoftwareFinancialHighlights that routes cell clicks to the correct detail panel (income statement, EBITDA reconciliation, or cash flows) based on a new sourceTable field on each summary row
Frontend: Summary table refactor — Added dataKey and sourceTable to SummaryRow, extracted a subtract helper, and made value cells clickable with keyboard accessibility (Enter/Space)
Backend: EBITDA other_unclassified bucket — Unknown GAAP items subtracted from net income now get a compensating add-back via _add_to_bucket("other_unclassified", ...) so adjusted EBITDA stays neutral
Backend: EBITDA adj calculation — Changed adj_ebitda loop to iterate all buckets except _MA_BUCKETS instead of only _NI_ADJ_BUCKETS, capturing dynamically created buckets like other_unclassified
Backend: Cash generation defaults — _build_fh_template_defaults now populates cash generation bullet 1 from ops_cf_cur/ops_cf_bud when data is available, with beat/miss variance language
Backend: LLM overlay guard — _overlay_llm_fh_defaults skips LLM values containing [TBD] when the template already has real data-driven content
Backend: Cash flow provenance — _build_fh_provenance now emits source metadata for cash generation bullet 1 (upload CSV or Redshift)
Backend: Logging — Added info-level log for Software CF source and value at generation time
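The compensating-bucket idea from the EBITDA changes can be sketched like this (an illustration only; `_MA_BUCKETS` contents, bucket names other than `other_unclassified`, and the sample figures are assumptions):

```python
# Hypothetical M&A buckets excluded from the adjusted-EBITDA loop.
_MA_BUCKETS = {"ma_fees", "ma_earnouts"}

def _add_to_bucket(buckets, name, amount):
    """Accumulate an add-back, creating the bucket on first use."""
    buckets[name] = buckets.get(name, 0.0) + amount

def adjusted_ebitda(net_income, buckets):
    # Iterate every bucket except the M&A buckets, so dynamically
    # created buckets like "other_unclassified" are captured too.
    return net_income + sum(v for k, v in buckets.items() if k not in _MA_BUCKETS)

buckets = {"depreciation": 100.0, "interest": 40.0}
net_income = 500.0

# An unknown GAAP line item of 25 was subtracted from net income
# upstream; the compensating add-back keeps adjusted EBITDA neutral.
net_income -= 25.0
_add_to_bucket(buckets, "other_unclassified", 25.0)

assert adjusted_ebitda(net_income, buckets) == 640.0
```

Without the compensating bucket (or with a loop restricted to a fixed bucket list), the unknown item would silently depress adjusted EBITDA by 25.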
Testing
If testing AI generation, only test using January 2026
http://localhost:3002/monthly-financial-reporting
- [x] Verify summary table cells in the Software memo are clickable and open the correct detail panel (income statement for Revenue/Net Income, EBITDA reconciliation for EBITDA, cash flows for Operating Cash Flow)
- [x] Confirm keyboard navigation (Tab, Enter/Space) works on summary cells
- [x] Generate a Software memo for a period with cash flow upload data and verify bullet 1 of cash generation shows real figures instead of [TBD]
- [x] Generate a memo with an unknown GAAP line item and confirm adjusted EBITDA is unchanged (compensating bucket offsets the NI subtraction)
- [x] Verify LLM overlay does not overwrite data-driven cash generation text with [TBD]
#13 fix(jotform-survey-sync): add src/requirements.txt for Lambda bundling — @kevalshahtrilogy · no labels
Summary
- Adds `src/requirements.txt` to `jotform-survey-sync` so CDK's `PythonFunction` bundles `requests` and `boto3` into the Lambda deployment package
- Adds root-level `CLAUDE.md` so Claude Code agents automatically know this rule on every future session (the README update is also included for human devs)
Root cause
CDK `PythonFunction` uses `entry: src/` (set in `pipeline-cdk.ts:77`) and scans that directory for `requirements.txt`. The `pyproject.toml` at the pipeline root is not visible to it. Without `src/requirements.txt`, the Lambda cold-starts with:
```
Runtime.ImportModuleError: Unable to import module 'handler': No module named 'requests'
```
This is the same issue as `azure-ai-spend-pipeline` (PR #6) — hitting us a second time on `jotform-survey-sync` (PR #10).
Why CLAUDE.md (not just README)
Claude Code automatically loads `CLAUDE.md` into context at the start of every session. The bundling rule is now baked into agent context, so next time a pipeline is migrated the agent will catch the missing `src/requirements.txt` without anyone having to remember it.
Test plan
- [ ] Verify `pipeline-jotform-survey-sync-prod` Lambda deploys cleanly via CDK
- [ ] Confirm next scheduled run completes without `ImportModuleError`
- [ ] CLAUDE.md rule verified accurate against `pipeline-cdk.ts:77`
#10 Migrate jotform-survey-sync from Klair to Surtr — @kevalshahtrilogy · no labels
Summary
- Ports the Jotform education survey sync pipeline into Surtr's CDK Lambda infrastructure
- Fixes the known jotform_submissions 0-row bug — Klair's S3 COPY silently failed on nested JSON in `answers_json`, and the error was swallowed by `execute_with_params`
- Uses hybrid loading: batch INSERT for small tables, S3 COPY (JSON Lines) for large tables (47K+ answers)
- Adds Python artifacts (`__pycache__/`, `.pytest_cache/`, `.pyc`) and `cdk-out-` to root `.gitignore`
What this pipeline does
Syncs education survey data from the Jotform API into 4 Redshift `staging_education` tables (`jotform_forms`, `jotform_questions`, `jotform_submissions`, `jotform_answers`) every hour. Full truncate-and-reload — idempotent, no orchestration needed for cutover.
Bug fix detail
Klair's `_push_via_s3` used `FORMAT AS JSON 'auto'` which misparsed the `answers_json` column (a JSON string containing nested objects). The COPY failed, but `execute_with_params` caught the exception and returned `False` without raising. `_push_via_s3` never checked the return value. Result: TRUNCATE succeeded, COPY failed silently, `jotform_submissions` stayed at 0 rows, sync logged SUCCESS.
Surtr's implementation writes proper JSON Lines (one JSON object per line with `json.dumps`), uses the Redshift Data API which raises `RuntimeError` on any failure, and adds a post-load verification that `jotform_submissions > 0`.
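The two halves of the fix — one JSON object per line, and a post-load check that raises instead of returning `False` — can be sketched as (function names and the sample rows are illustrative, not the actual Surtr code):

```python
import json

def to_json_lines(rows):
    """Serialize one JSON object per line. Unlike a single JSON array,
    this keeps nested values (like answers_json) unambiguous per row."""
    return "\n".join(json.dumps(r, separators=(",", ":")) for r in rows)

def verify_row_count(count, table="jotform_submissions"):
    """Fail loudly: a zero-row table after a truncate-and-reload means
    the COPY failed, so raise rather than logging SUCCESS."""
    if count <= 0:
        raise RuntimeError(f"post-load verification failed: {table} has {count} rows")
    return count

rows = [
    {"id": 1, "answers_json": {"q1": {"text": "yes"}}},
    {"id": 2, "answers_json": {"q1": {"text": "no"}}},
]
payload = to_json_lines(rows)
assert payload.count("\n") == 1          # two rows -> exactly two lines
assert verify_row_count(len(rows)) == 2  # raises if the load produced 0 rows
```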
#4 feat: add hc-forecast-refresh pipeline — @sanketghia · no labels
Summary
- Adds a new Lambda pipeline `hc-forecast-refresh` that calls `core_budgets.sp_refresh_hc_data_consolidated()` every Tuesday at 6:30 AM UTC
- Automates the manual `refresh_hc_data.py` script that rebuilds HC forecast data from S3 into Redshift after the Google Apps Script uploads the latest CSV
- Follows the `mart-saas-metrics-refresh` pattern: stateless Redshift Data API client, stored procedure call, row count verification
#2552 Fix EBITDA reconciliation bad debt routing and Education adjustments — @eric-tril · no labels
Summary
This PR fixes how "Provision for doubtful accounts" (account 64141) is classified across the EBITDA reconciliation pipeline. Previously, the system matched on the GAAP name "Bad debt expense and provision," which incorrectly grouped provision accounts. The fix uses the is_provision_for_bad_debt database flag to emit a distinct "Provision for doubtful accounts" GAAP name in SQL, then routes it correctly through all backend and frontend EBITDA mapping layers. Additionally, the Education entity EBITDA reconciliation now uses only displayed add-backs instead of all adjustments, and Note 8 account mappings are corrected to remove duplicates.
Business Value
Monthly Financial Reporting consumers (finance team, board memo generators) were seeing incorrect EBITDA reconciliation figures for Education entities because bad debt provisions were misrouted. This fix ensures the EBITDA bridge and board memos accurately reflect the financial position, preventing manual corrections and reducing reporting risk.
Changes
Add EDUCATION_EXCLUDE_GAAP constant in financial_data_service.py centralizing both "Bad debt expense and provision" and "Provision for doubtful accounts"
Update SQL in fetch_ebitda_pnl_data and fetch_ebitda_line_item_detail to use is_provision_for_bad_debt flag for GAAP name resolution
Register "Provision for doubtful accounts" in GAAP role/line mappings across ebitda_gaap_mapping.py, group.py, and frontend ebitdaReconciliationMapping.ts
Update all Education-tagged bad debt routing checks from "Bad debt expense and provision" to "Provision for doubtful accounts"
Pass exclude_gaap=EDUCATION_EXCLUDE_GAAP to fetch_ebitda_pnl_data in Education memo builders (education.py, education_defaults.py)
Fix transformEBITDAReconciliation to use only displayed add-backs for Education entity, preventing phantom "Other adjustments"
#2559 Remove Software S&M bad debt subtraction (account 64141) — @eric-tril · no labels
Summary
Removes the Finance adjustment that subtracted account 64141 (Provision for doubtful accounts) from Sales & Marketing actuals in the Software entity. This adjustment is no longer required by Finance, and its presence was causing errors in the MFR Software M&A view. The change affects the quarterly PnL, YTD PnL, EBITDA, and line-item detail drill-down code paths.
Changes
Deleted _apply_software_sm_bad_debt_subtraction() function and all call sites in fetch_software_pnl_data, fetch_software_pnl_data_ytd, and fetch_software_ebitda_data
Deleted _software_sm_bad_debt_detail() function and its invocation in _apply_software_detail_adjustments
Updated the docstring step numbering in fetch_software_pnl_data (removed step 8, renumbered tax override)
Updated comment in _software_ebitda_total_breakdown to no longer reference S&M bad debt
Testing
- [x] Verify the Software Income Statement (quarterly and YTD) loads without errors for the current period
- [x] Verify the Software EBITDA view loads correctly
- [x] Verify the S&M line-item drill-down no longer shows a "Less: 64141" adjustment row
- [x] Run pytest tests/ for any existing financial data service tests
- [x] Confirm S&M actuals now reflect the raw values without the 64141 subtraction
#91 fix(context): skip prompts query for unauthorized users — @sanketghia · no labels
Summary
- The `/context` page crashes with an `Application error: a client-side exception has occurred` for users who lack `canViewPrompts` permission
- Root cause: `useQuery(api.prompts.list)` in `ContextEditorPanel` fires unconditionally, but the backend `prompts:list` query calls `requirePermission(ctx, "canViewPrompts")` which throws `Forbidden` for unauthorized users
- The thrown error propagates as an uncaught Convex query error, crashing the entire page — both locally and in production
Fix
- Move the `currentUser` query and `canViewPrompts` derivation above the `prompts` query
- Pass `"skip"` to `useQuery` when the user doesn't have `canViewPrompts`, so the query never fires for unauthorized users:
#2558 Fix budget parse error: filter Education EBITDA to base adjustments — @eric-tril · no labels
Summary
The budget CSV parser was storing all EBITDA adjustment rows for the Education entity, but the Education financial reports only display four base adjustments (D&A, interest, other expense, and income taxes). The extra rows were surfacing as unexpected "Other adjustments" lines. This change adds an Education-specific allowlist so only the four relevant EBITDA line items are stored during parsing.
Changes
Added EDUCATION_EBITDA_ROW_LABELS set containing the 4 base EBITDA adjustment labels in parseBudgetCsv.ts
Added a guard in the EBITDA parsing loop that skips non-base adjustment rows when the entity is "Education" (Net Income is still stored)
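The guard logic lives in TypeScript (`parseBudgetCsv.ts`); a Python illustration of the same allowlist rule follows. The exact label strings are assumptions — the PR names the four base adjustments only as D&A, interest, other expense, and income taxes:

```python
# Hypothetical label strings; the real set lives in parseBudgetCsv.ts.
EDUCATION_EBITDA_ROW_LABELS = {
    "Depreciation and amortization",
    "Interest expense",
    "Other expense",
    "Income taxes",
}

def should_store_ebitda_row(entity, label):
    """Allowlist guard: Education keeps only the 4 base adjustments
    (plus Net Income); all other entities store every row."""
    if label == "Net Income":
        return True                       # Net Income is always stored
    if entity != "Education":
        return True                       # non-Education entities unaffected
    return label in EDUCATION_EBITDA_ROW_LABELS

assert should_store_ebitda_row("Education", "Restructuring charges") is False
assert should_store_ebitda_row("Education", "Net Income") is True
assert should_store_ebitda_row("Software", "Restructuring charges") is True
```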
Testing
[x] Upload a budget CSV that includes Education rows with extra EBITDA adjustments (e.g., restructuring charges) and verify they no longer appear as "Other adjustments"
[x] Confirm the four base EBITDA adjustments and Net Income still parse correctly for Education
[x] Confirm non-Education entities are unaffected and still store all EBITDA rows
#2557 feat(board-doc): Budget Bot 4.0 Phase B0+B1 -- Google Doc sync infrastructure and Start from Last Quarter — @marcusdAIy · no labels
Summary
- B0: Google Doc Sync Infrastructure -- new `gdoc_sync.py` service with checkpoint-based sync model: read sections from a Google Doc by heading boundaries, write back via atomic `batchUpdate` (reverse document order), detect external changes via revision ID comparison, and clone docs via Drive API
- B1: Start from Last Quarter -- one-click express path that clones a prior quarter's Google Doc, parses its sections, creates a new session in review phase (skipping the entire wizard), and drops the user into the review UI with all sections populated
- Sync button + changes-detected banner in ReviewStep header for pushing local edits back to the Google Doc
- Model changes: `google_doc_id`, `google_doc_revision`, `source_doc_id`, `section_edit_status` on WizardSession; `quarter`/`year` added to session summaries
Architecture direction documented in `.cursor/brainlifts/budget-bot-editing-architecture.md` -- checkpoint-based sync, "Cursor for documents" UX paradigm, section-level rewrites over surgical diffs.
How to test locally
No special setup required beyond the normal dev environment (Google service account creds in `.env`).
1. Start backend (`uv run fast_endpoint.py`) and frontend (`pnpm dev`)
2. Go to Budget Bot, click "New Report", run through the wizard for any BU, and finalize the document
3. Back on the homepage, the finalized session now shows a "Start Q3 2026" button in its footer
4. Click it -- the prior quarter's Google Doc is cloned, sections are parsed, and you land in ReviewStep with all sections populated
5. Verify the "Sync to Google Doc" button appears next to the Google Doc link
6. Open the cloned Google Doc to confirm it's a separate copy (original is untouched)
Test results (author-verified)
- [x] Section reader: parsed 6 sections from real Skyvera Q2 2026 Google Doc with correct heading boundaries and content
- [x] Round-trip sync: cloned doc, read sections, modified one section (3,192 chars to 101 chars), synced back via `batchUpdate`, re-read verified content replaced correctly
- [x] Change detection: returns `false` immediately after read, `true` after external modification
- [x] "Start Q3 2026" button appears on finalized sessions in BoardDocHome
- [x] Clicking "Start Q3" clones the Google Doc (original untouched), creates a new session, lands in ReviewStep with all 6 sections visible and expandable
- [x] Cloned doc verified on Google Drive: "Skyvera -- Budget Plan Q3 2026" with correct content
Known limitations (deferred to subsequent PRs)
- "Refresh Data" on cloned sessions is a no-op (sections created as `CUSTOM` type with no `required_data` -- proper section type mapping comes with Phase B2)
- Table content round-trips as pipe-delimited text, not structured tables
- Section ID stability depends on heading order not changing between reads
- "Changes detected" banner dismiss state resets on page refresh
#94 feat(dashboards): migrate PMO Projects from Klair to Aerie via Convex pipeline — @marcusdAIy · no labels
Summary
Full migration of the PMO Projects dashboard from Klair to Aerie following the Redshift -> Analytics Worker -> Convex -> UI architecture. No sidecar dependency — all data flows through Convex, AI summaries generated natively.
#2484 perf(aws-spend): Reduce AWS Spend page load time from ~50s to 5-7s — @ashwanth1109 · no labels
Summary
- New lightweight view (`aws_spend_business_unit_classes`) backed by the small `budget_account_mapping` table, replacing `DISTINCT` queries against the heavy `account_costs_summary_adjusted` view for BU/class dropdowns
- New `/bu-list` endpoint — returns only BU names for filter dropdowns, avoiding the full by-BU summary query
- Caching on the `/classes` endpoint (1hr memory, 2hr disk) and `skipGlobalClassesLoad` on the AWS Spend route to avoid redundant global class fetches
- Temporarily disabled all chart/table components in `AWSSpendShell` to isolate filter query performance — shows a debug panel with filter state instead
Test plan
- [ ] Verify `/api/aws-spend/bu-list?quarter=2026-Q2` returns BU names quickly
- [ ] Verify `/api/aws-spend/classes?quarter=2026-Q2` returns cached results on second call
- [ ] Confirm AWS Spend page loads with only filter dropdowns active (debug panel visible)
- [ ] Confirm BU and Class filter dropdowns populate correctly
- [ ] Run the `create_aws_spend_business_unit_classes.sql` view creation in Redshift
#6 Fix Azure pipeline Lambda bundling — @kevalshahtrilogy · no labels
Summary
- Adds `requirements.txt` to `src/` directory so CDK `PythonFunction` bundles `requests` and `boto3` into the Lambda deployment package
- Fixes `Runtime.ImportModuleError: No module named 'requests'` on Lambda cold start
Context
PR #5 was merged with the full Azure AI Spend Pipeline but missed the `requirements.txt` in `src/`. The `pyproject.toml` at the pipeline root is used for local dev/testing with `uv`, but CDK's `PythonFunction` looks for `requirements.txt` in the entry directory (`src/`).
Test plan
- [x] Verified other bundled pipelines (e.g. `openai-usage-pipeline`) have `src/requirements.txt`
- [ ] Redeploy and confirm Lambda starts without import errors
#5 Add Azure AI Spend Pipeline — @kevalshahtrilogy · no labels
Summary
- New pipeline (`azure-ai-spend-pipeline`) that fetches AI cost and token usage from Microsoft Azure and loads into two Redshift tables
- Covers 10 Azure subscriptions from the Quark acquisition across Cost Management API (dollar spend) and Monitor Metrics API (token counts per deployment)
- Writes to `core_finance.ai_spend_azure_cost_reports` (126 rows) and `core_finance.ai_spend_azure_token_usage` (260 rows), validated with live data
What's included
- OAuth2 auth (`azure_auth.py`) — service principal client credentials flow with exponential backoff
- Azure API client (`azure_client.py`) — Cost Management Query, Cognitive Services listing, Deployments listing, Monitor Metrics with per-deployment filtering via `ModelDeploymentName` dimension
- Internal model mapping — normalizes Azure model names (e.g. `gpt-35-turbo` → `gpt-3.5-turbo`, `gpt-5.2-chat` → `gpt-5.2`) for pricing table compatibility
- Pricing (`pricing.py`) — loads from shared `ai_spend_token_pricing` table, longest-prefix match
- Redshift handler (`redshift_handler.py`) — async polling, batch inserts of 50, DELETE+INSERT idempotency for both tables
- 58 unit tests covering auth, API parsing, column order shuffling, model normalization, SQL generation, batching, and end-to-end handler flow
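The longest-prefix pricing match can be sketched in a few lines (the lookup function, the normalization map entries shown, and the pricing values are illustrative, not the actual `pricing.py` contents):

```python
# Normalization examples taken from the PR description.
NORMALIZE = {"gpt-35-turbo": "gpt-3.5-turbo"}

def match_price(model, pricing):
    """Pick the pricing row whose model-name prefix is the longest match,
    after normalizing Azure's model naming."""
    model = NORMALIZE.get(model, model)
    candidates = [m for m in pricing if model.startswith(m)]
    return pricing[max(candidates, key=len)] if candidates else None

pricing = {"gpt-3.5": 0.50, "gpt-3.5-turbo": 0.35}  # hypothetical $/1K-token rates
assert match_price("gpt-3.5-turbo-16k", pricing) == 0.35  # longest prefix wins
assert match_price("gpt-35-turbo", pricing) == 0.35       # normalized first
assert match_price("claude-x", pricing) is None           # no prefix match
```

Longest-prefix matching lets one pricing row cover a family of deployment names while a more specific row can still override it.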
Pre-merge checklist
- [x] Redshift tables created (`ai_spend_azure_cost_reports`, `ai_spend_azure_token_usage`)
- [x] Secret stored in Secrets Manager (`surtr/azure-credentials`)
- [x] Pricing rows exist in `ai_spend_token_pricing` for Azure models
#2524 fix: aggregate renewals customer query by subscription ID — @ashwanth1109 · no labels
Summary
- Rewrites `_build_customer_detail_query` in the Renewals service to aggregate charge-level rows to one row per subscription using two CTEs
- CTE 1 (`aggregated`): Groups by `subsriptionid` with `SUM(arr)`, `MIN(startdate)`, `MAX(enddate)`, `MAX(customer/enduser)`
- CTE 2 (`sf_renewals`): Uses `ROW_NUMBER()` with a stage hierarchy ranking (Pending → Closed) to pick the most advanced Salesforce renewal stage per subscription
- Eliminates duplicate rows in the Renewals Table drill-down (source table has avg 3.3 charges per subscription — 66,653 charge rows for 20,392 unique subscriptions in H1 2026)
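The `ROW_NUMBER()` stage-hierarchy dedup from CTE 2 can be expressed in Python as "keep the most advanced stage per subscription." The PR names only Pending → Closed, so the intermediate stage and its rank below are illustrative:

```python
# Illustrative ordering; only Pending -> Closed appears in the PR.
STAGE_RANK = {"Pending": 0, "Negotiation": 1, "Closed": 2}

def most_advanced_per_subscription(rows):
    """Collapse charge-level rows to one row per subscription, keeping
    the row with the highest-ranked Salesforce renewal stage."""
    best = {}
    for row in rows:
        sub, stage = row["subscription_id"], row["stage"]
        cur = best.get(sub)
        if cur is None or STAGE_RANK[stage] > STAGE_RANK[cur["stage"]]:
            best[sub] = row
    return list(best.values())

rows = [
    {"subscription_id": "1689381", "stage": "Pending"},
    {"subscription_id": "1689381", "stage": "Closed"},
    {"subscription_id": "2000001", "stage": "Negotiation"},
]
picked = most_advanced_per_subscription(rows)
assert len(picked) == 2
assert {r["stage"] for r in picked} == {"Closed", "Negotiation"}
```

In the SQL version, the same effect comes from `ROW_NUMBER() OVER (PARTITION BY subscription ORDER BY stage_rank DESC)` and keeping row 1.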
Test plan
- [x] Added `TestCustomerDetailAggregation` with 4 tests verifying SQL structure, CTE presence, exclusion clause placement, and Q4 date window
- [x] All 82 existing + new tests pass (`pytest tests/arr_gap/`)
- [x] Ruff format/check clean
- [ ] Manual Redshift verification: query Fidelity sub 1689381 — should return 1 row with ARR ~$292,735 (not 8 rows)
#2529 feat(arr-gap): add live current ARR column and simplify hybrid projection — @ashwanth1109 · no labels
Demo
Summary
- Add `arr_current_live` column (rolling ARR from `arrcurrent` ETL column) to backend model, SQL query, and frontend table/detail views
- Fold SF adjustment directly into `arr_projected_hybrid`, removing the separate `arr_projected_hybrid_adjusted` field — simplifies gap and implied DM% calculations
- Update tests to reflect the consolidated hybrid projection logic
Test plan
- [ ] Verify `Current ARR` column appears in the ARR Gap table and BU detail view
- [ ] Confirm hybrid projection values include SF adjustment (no separate adjusted field)
- [ ] Run `pytest tests/arr_gap/` — all tests pass with updated expected values
#2528 fix: HC XO table CSV export matches aggregated table view — @sanketghia · no labels
Summary
- The HC (XO) Table "Export Data" button on `/performance-review` was downloading a raw data dump (`SELECT * FROM hc_data_consolidated`) instead of the computed, aggregated data shown in the table UI. This meant:
- ~13 rows per contractor (one per reference × data_source) instead of 1
- No `LEAST()` capping on HC values (could exceed 1.0 per contractor)
- No variance calculations (Var Abs, Var %)
- Extra columns not shown in the table (salary, weekly_cost, M1 ACT, M2 ACT, M3 ACT, etc.)
- Flat structure with no relation to the hierarchical table view
- The CSV has been approved by the stakeholder (Ravi)
1. New shared helper `_fetch_hc_xo_combined_df(quarter, filter_clause)`
- Extracted the forecast + budget query logic (with `LEAST()` capping and `GROUP BY contractor`) and the full outer join into a reusable async function
- Returns one row per contractor with 10 columns (4 identifiers + 6 metrics)
2. Refactored `/hc-xo-table` endpoint to use the shared helper
- No behavioral change — same SQL, same join, same hierarchy building
- Eliminates ~80 lines of duplicated query code
3. Rewrote `/hc-xo-table/export-csv` endpoint
- Calls the same `_fetch_hc_xo_combined_df` helper (identical data pipeline as the table)
- Computes variance (abs and %) for each metric group
- Rounds all numeric values to 2 decimal places for clean CSV output
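The export's per-metric math — `LEAST()` capping and the variance columns — can be sketched as follows (function names, column semantics, and sample figures are assumptions; the real logic runs in SQL plus the export endpoint):

```python
def cap_hc(value):
    """Mirror LEAST(hc, 1.0): a contractor can count at most 1.0 HC."""
    return min(value, 1.0)

def variance(actual, budget):
    """Var Abs and Var % for one metric group, rounded to 2 decimals
    for clean CSV output. Var % is None when budget is zero."""
    var_abs = round(actual - budget, 2)
    var_pct = round(100.0 * var_abs / budget, 2) if budget else None
    return var_abs, var_pct

assert cap_hc(1.3) == 1.0          # uncapped raw dumps could exceed 1.0
assert cap_hc(0.6) == 0.6
assert variance(0.9, 0.75) == (0.15, 20.0)
assert variance(0.5, 0.0) == (0.5, None)
```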
#92 feat(dashboards): add mobile support across all dashboard tables and detail panels — @benji-bizzell · no labels
Summary
- Add mobile-responsive layouts to all 9 dashboards: full-width overlay detail panels, compact table headers via shortLabel, expandable mobile cards for complex matrices, and collapsible pipeline projection
- Align School-Ops and FTO mobile cards with Spotlight-AI patterns (JS conditional rendering, Framer Motion animated expansion)
Why
Users on mobile couldn't open detail views when tapping table cells — detail panels rendered as fixed-width right sidebars that broke on narrow viewports. Tables with 7-15 columns were unusable without horizontal scroll context. The student drilldown panel was completely hidden on mobile (`hidden lg:flex`), blocking Funnel, Enrollments, and Demographics drilldowns entirely.
Test plan
- [ ] Desktop browser — no regressions on any dashboard
- [ ] Chrome DevTools mobile viewport (375px) — tables show short labels, horizontal scroll works, tapping rows opens full-width detail panels, swipe right closes them
- [ ] Chrome DevTools tablet viewport (768px, touch simulation) — `useMobileUI()` triggers via touch detection, detail panels go full-width
- [ ] Verify Funnel mobile cards with category-grouped stage chips and cell selection
- [ ] Verify Forecast pipeline projection collapses by default, expands on tap
#2530 feat(board-doc): Budget Bot 4.0 Phase A -- quarter language, prior doc selection, URL-only fallback — @marcusdAIy · no labels
Summary
Phase A quick wins for Budget Bot 4.0:
- A1 — Quarter language disambiguation: "Budget Quarter" → "Planning Quarter" with dynamic subtitle ("You'll review Q2 results and plan for Q3"). Backend messages updated with `_quarter_label()` helper producing "Plan for Q3 2026 (reviewing Q2 results)".
- A2 — Prior doc selection dropdown: Searches both `Klair-BudgetBotSessions` (Budget Bot 3.0) and `Klair-BudgetGoalMIPers` (legacy) for prior quarter docs. Shows a dropdown sorted Budget Bot first, most recent first, with "Preview" links to Google Docs. Extraction deferred until user confirms document selection.
- A3 — Simplified manual fallback: `ProvidePlanFallback` trimmed to Google Doc URL input only (matches brainlift upload UX pattern). Removed raw text paste and file upload options.
Known issue (pre-existing, not introduced by this PR)
`_parse_goals_section` deterministic parser produces incorrect results on Budget Bot 3.0 doc format (finds 1 goal instead of 7 due to `Goal N:` headings vs expected bullet points). This affects goal extraction quality downstream of document selection. Not fixing here — Phase B replaces the entire extraction flow with inline editing.
Test plan
- [x] Quarter label shows "Planning Quarter" with review/plan subtitle in WelcomeStep
- [x] Backend messages use unambiguous format ("Plan for Q3 2026 (reviewing Q2 results)")
- [x] Budget Bot docs sort before Goal MIPer docs, most recent first
- [x] "Preview" link opens Google Doc in new tab
- [x] Selecting a doc triggers goal extraction via `select_doc` action
- [x] "Provide a Google Doc URL" fallback matches brainlift upload UX
- [ ] ~Goal extraction quality from selected doc~ — known pre-existing parser issue on 3.0 doc format; not blocking, will be superseded by Phase B living document flow
#90 feat(dashboards): add filtered views to Operating dashboard + fix filter dropdown z-index — @marcusdAIy · no labels
Summary
- Add saved/filtered views to the Operating (School Ops) dashboard, reusing the existing shared `ViewSwitcher` and `ViewSelectorBar` components with `dashboardId: "operating"`
- Fix filter dropdown z-index across all 8 dashboard pages — dropdowns were rendering behind summary stat cards
#2525 feat(maint-report): register Quark (Zax) as acquisition — @ashwanth1109 · no labels
Summary
- Adds Quark class (BU: Zax, acquired 2026-04-01) to all three acquisition registration points in `queryBuilder.mjs`: `ACQUISITION_COMPANIES`, `ACQUISITION_DATES`, and the `buildAcquisitionsFinancialsQuery` CTE
- Updates test expected cutoff dates to include Quark (organic from 2027-04-01)
Note
Quark/Zax data exists in `core_finance.arr_snowball_data` but `acquiredarr` is currently 0 and the upstream tables (`detailed_arr_calculations_acquisitions`, `arr_snowball_data_acquisitions`) don't have Quark rows yet. The finance pipeline needs to populate these for the Acquisitions tab to show Zax data.
Test plan
- [x] `node --test tests/redshift/queryBuilder.test.mjs` — all 20 tests pass
- [x] Verify Quark is excluded from organic metrics on the maintenance tab
- [x] Verify Quark appears on the Acquisitions tab once upstream data is populated
#2526 fix: rename Quark business unit to Zax — @sanketghia · no labels
Summary
The Master Mapping spreadsheet (`Master mapping Redshift 8_11.xlsx`) has been updated by Finance — among the changes, 5 NetSuite classes previously under the Quark business unit are now mapped to Zax.
This PR updates the codebase to recognise the new BU name so that the Redshift sync can be run safely after merge.
This is similar to PR #2482.
Changes
- `klair-api/models/income_statement_models.py` — Renamed `BusinessUnit.QUARK = "Quark"` → `BusinessUnit.ZAX = "Zax"` and updated `BU_SET`. Downstream `ONLY_BU_SET` in `access_control.py` derives from `BU_SET` automatically.
- `klair-client/src/constants/businessUnits.ts` — Replaced `'Quark'` with `'Zax'` in the `BUSINESS_UNITS` array. `ALL_BUSINESS_UNITS` and `BusinessUnitName` type update automatically.
- `klair-api/tests/test_business_unit_sync.py` — Renamed `TestQuarkBusinessUnit` → `TestZaxBusinessUnit` and updated all assertions.
Post-merge steps
After merging, run the Master Mapping sync script to update Redshift:
```bash
cd klair-api
uv run python scripts/sync_master_mapping.py --apply
```
This will:
1. Back up `master_mapping_enriched`, `dim_business_unit`, and `dim_class`
2. Truncate + re-insert all 592 rows from the updated spreadsheet
3. Refresh downstream dimension tables via stored procedures
The sync also applies 29 other value changes from the spreadsheet (quicksight_class renames, vertical tweaks, etc.) — see dry-run output below.
Dry-run diff (30 changes total)
- 1 rename: "Ignite Local Search Product" → "Ignite Local search Product"
ESW Capital Devours Three More Enterprise Software Companies in Quiet Acquisition Spree
Jive Software, XANT, and multiple Avolin assets absorbed into Trilogy's portfolio — the machine keeps eating.
By Pat Donnelly, Investigative Desk · Claude Sonnet
AUSTIN, TEXAS — ESW Capital, the enterprise software acquisition arm of Joe Liemandt's Trilogy empire, has quietly closed three separate deals in recent weeks, adding at least five companies to its growing portfolio of legacy software businesses.
The largest: Jive Software, acquired for $462 million. Once a high-flying social collaboration platform valued at over $1 billion during its 2011 IPO, Jive struggled to compete with Slack and Microsoft Teams. ESW is folding it into Aurea, its CRM and customer engagement division, where it joins a graveyard of once-independent brands now operated as cash-generating infrastructure.
Meanwhile, IgniteTech — ESW's meta-acquirer that buys software on behalf of the portfolio — announced it has absorbed multiple enterprise products from Avolin, a private equity-backed software roll-up. The assets include business intelligence, analytics, and workforce management tools. No purchase price was disclosed. IgniteTech now holds 17 acquisitions since its 2012 launch.
And in Utah, sales engagement platform XANT has been quietly wound down after its acquisition by an undisclosed buyer — widely believed to be an ESW entity. Employees were offered roles at other Trilogy companies or severance. The product is being sunsetted.
The pattern is consistent with ESW's playbook: acquire mature software businesses at 1–2× ARR, staff them with Crossover's global remote talent, raise support pricing aggressively, and target 75% EBITDA margins. Jive's customer base — thousands of enterprise clients locked into multi-year contracts — fits the model perfectly.
ESW has now acquired over 75 enterprise software companies since 2006. The firm does not disclose revenue, but industry observers estimate the combined portfolio generates over $1 billion in annual recurring revenue. Most acquisitions are structured as private deals with no public filings, making ESW one of the most opaque buyers in enterprise software.
For Jive's employees and customers, the message is clear: the brand may survive, but the company as it was is over. Welcome to the portfolio.
Skyvera Goes Shopping Again — CloudSense In, STL’s Castoffs Not So “Cast”
Word is the telecom roll-up is tightening its grip on BSS, CPQ, and order management—one Salesforce-native bolt-on at a time.
By Dottie Sharp, Society & Industry Desk · GPT-5.2
AUSTIN, TEXAS — Skyvera’s been seen slipping out the back door with a new trophy under its arm… and the ink isn’t even dry before the rumor mill starts humming…
The portfolio shop has now completed its acquisition of CloudSense, the Salesforce-native CPQ and order management platform built for telecom and media providers… the kind of plumbing that never makes the red carpet, but always gets invited to the afterparty… because nobody launches a new bundle, plan, or add-on without it… especially when Salesforce is already living rent-free in the account team’s browser.
Skyvera’s official line is simple: expanding the telecom software portfolio… but a little bird tells me the real play is control—control of quoting, control of ordering, and control of the messy handoff between “sales said yes” and “ops can actually deliver.” If you’ve ever watched a telco order fall into a systems abyss, you know why this matters.
And CloudSense isn’t arriving alone… Skyvera also picked up STL’s divested telecom products group—digital BSS functionality spanning monetization, optical networking, and analytics… a grab bag on paper, but a strategic buffet if you’re stitching together an end-to-end stack. Customers want fewer vendors, fewer integrations, fewer excuses… and Skyvera is happy to be the “fewer.”
Keep your eyes on how this all snaps into the existing lineup… Kandy for real-time communications… CloudSense for CPQ and order orchestration… and now STL’s pieces to fill in the back office where money gets counted and margins get defended.
Translation for the industry: the mid-tier operators who can’t afford endless SI projects are about to get a very pointed pitch… “Buy the suite, simplify the mess, and let us worry about the glue.”
Word is the consolidation drumbeat isn’t slowing… it’s getting a metronome.
As Big Tech Ditches Résumés, Crossover Says It Never Needed Them
OpenAI's new skills-first hiring model mirrors the approach Trilogy's recruiting platform has used for years — and the stakes just got higher.
By Margot Sinclair, Senior Correspondent · Claude Sonnet
AUSTIN, TEXAS — OpenAI made headlines this week by announcing it would hire for $500,000 engineering roles without requiring résumés, relying instead on rigorous skills assessments. For Crossover, Trilogy International's global recruiting platform, the news felt less like innovation and more like validation.
"We've been doing this since day one," said a Crossover spokesperson. "Geography, pedigree, résumé formatting — none of it predicts who can actually do the work."
Crossover's model has long centered on AI-enabled skills testing to identify what it calls the top 1% of global technical and professional talent. Candidates across 130 countries take identical assessments. Those who pass get identical above-market pay, regardless of where they live. No résumé required. No degree required. Just proof you can deliver.
The approach has made Crossover the staffing engine behind Trilogy's portfolio of 75+ enterprise software companies, from Aurea to IgniteTech to DevFactory. It's also how ESW Capital, Trilogy's private equity arm, achieves its target of 75% EBITDA margins — by replacing expensive local hires with rigorously tested global talent.
Now, as OpenAI joins the skills-first movement and non-tech companies scramble to hire AI talent at six-figure salaries, Crossover finds itself in an unusual position: competing with the very companies whose playbook it wrote.
The difference? Crossover isn't just hiring for one company. It's hiring for an empire. And while OpenAI can offer half a million dollars, Crossover offers something harder to replicate: a proven system for finding talent the rest of the market overlooks.
The question now is whether the market will follow — or whether résumés, like so much else in enterprise software, will prove stickier than they should be.
THE MACHINE — AI & Technology
The Week AI Held a Mirror to the Brain — and Saw Something Familiar
A burst of new research probes the eerie parallels between how large language models and human brains process language, reason across tongues, and resist being understood.
By Dr. Vera Okafor, Science & Technology Correspondent · Claude Opus
CAMBRIDGE, MASSACHUSETTS — There is a moment in the evolution of any tool when we stop asking what it can do and start asking what it is. This week, a constellation of papers on arXiv suggests the field of artificial intelligence has arrived, with full force, at that second question.
Consider the sheer strangeness of the finding reported in a new study on Brain Score — the framework that measures how well a language model's internal activations predict human fMRI signals recorded during reading. The researchers tested not just English but many natural languages and even structured non-linguistic sequences. What they found is that Brain Score doesn't merely track a model's fluency in a given tongue. It tracks shared structural properties across languages — the deep grammar of grammar, if you will. The implication is unsettling and beautiful in equal measure: the silicon and the synapse may be converging not on the same answers, but on the same geometry of thought.
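The general recipe behind such predictivity scores is worth seeing in miniature. The sketch below is not the cited paper's pipeline — it uses synthetic data and a plain ridge regression — but it illustrates the standard approach: fit a linear map from model activations to voxel responses, then score by held-out correlation:

```python
import numpy as np

# Minimal sketch of a Brain-Score-style "neural predictivity" measure.
# Synthetic data stands in for real model activations and fMRI recordings;
# this shows the general recipe, not the cited study's exact method.

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 200, 50, 10

X = rng.standard_normal((n_stimuli, n_features))              # "model activations"
W = rng.standard_normal((n_features, n_voxels))               # hidden linear map
Y = X @ W + 0.5 * rng.standard_normal((n_stimuli, n_voxels))  # noisy "fMRI"

train, test = slice(0, 150), slice(150, 200)

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: (X'X + aI)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

B = ridge_fit(X[train], Y[train])
Y_hat = X[test] @ B

def per_voxel_r(Y_true, Y_pred):
    """Pearson correlation between true and predicted response, per voxel."""
    yt = Y_true - Y_true.mean(0)
    yp = Y_pred - Y_pred.mean(0)
    return (yt * yp).sum(0) / np.sqrt((yt ** 2).sum(0) * (yp ** 2).sum(0))

# Predictivity: mean held-out correlation across voxels.
score = per_voxel_r(Y[test], Y_hat).mean()
print(f"mean held-out predictivity: {score:.2f}")
```

The cross-lingual finding described above amounts to this score staying informative even when the stimuli change language — evidence that the regression is latching onto structure, not vocabulary.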
Meanwhile, a separate team tackled the curious phenomenon of code-switching — the way multilingual humans fluidly hop between languages mid-sentence. It turns out that reasoning-focused LLMs, trained on monolingual data, spontaneously do this too. Their new framework shows that deliberately teaching models to code-switch, using surprisingly little data, can improve reasoning performance. The echo of human cognitive strategy here is hard to ignore: when the problem is hard, the polyglot brain reaches for whatever linguistic tool is nearest.
And yet, for all this convergence, the interior lives of these models remain stubbornly opaque. A third paper offers an applied comparison of three explainability techniques for LLMs, a practical field guide for engineers who need to trust — or debug — systems whose reasoning they cannot directly observe. It is, in essence, the optometrist's chart for a patient who can read every line but cannot describe how vision works.
Taken together, these studies sketch a portrait of a field in productive tension. We are building systems that increasingly mirror the architecture of biological intelligence, while simultaneously struggling to peer inside either one. The cosmos spent roughly 600 million years evolving brains complex enough to produce language. We have spent roughly six decades building machines that process it. That the two are beginning to rhyme is not proof of equivalence — but it is, at minimum, a data point worth sitting with in silence for a moment.
The universe, it seems, has a favorite shape for thought. We are only beginning to trace its outline.
Epistemological Crisis in Algorithmic Fairness Research Demands Methodological Synthesis, Scholars Argue
Convergent findings across multiple domains suggest the field has reached an inflection point requiring integration of formal mathematical frameworks with socio-technical analysis.
By Prof. Thaddeus Kroll, Contributing Scholar · Claude Sonnet
Recent scholarship across computer science, education policy, and organizational behavior suggests algorithmic fairness research faces a methodological impasse resolvable only through synthesizing traditionally separate analytical approaches. New work proposes an "integrative framework" combining mathematical definitions of bias with ethnographic investigation of deployment contexts—challenging the field's traditional boundaries.
Purely technical solutions increasingly prove inadequate. In banking, fairness metrics optimized in isolation produce "perverse outcomes" when confronted with regulatory compliance and stakeholder expectations. Similar failures emerge in educational datasets, where systematic inequalities resist purely algorithmic remediation.
The hiring domain exemplifies this tension most acutely. Organizations implementing AI screening tools face a "fairness trilemma": simultaneously optimizing for legal compliance, predictive validity, and stakeholder legitimacy is mathematically constrained in ways technical interventions alone cannot resolve.
Multiple research streams suggest fairness constitutes not merely a technical specification problem but a socio-technical system property emerging from interactions between algorithms, institutions, and human judgment. Whether this integrated approach proves tractable remains an empirical question requiring longitudinal investigation across deployment contexts.
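The mathematical constraint behind the trilemma can be demonstrated with toy numbers. The counts below are synthetic (not drawn from the cited studies), but they show the well-known tension: when base rates differ across groups, a classifier that is equally precise for both groups cannot also equalize selection rates or false positive rates:

```python
# Toy illustration of conflicting fairness metrics under unequal base rates.
# All counts are synthetic and chosen for arithmetic clarity.

def rates(tp, fp, tn, fn):
    """Selection rate, precision (PPV), and false positive rate
    from a group's confusion-matrix counts."""
    n = tp + fp + tn + fn
    return {
        "selection_rate": (tp + fp) / n,
        "ppv": tp / (tp + fp),
        "fpr": fp / (fp + tn),
    }

# Group A has a 50% base rate of true positives; Group B has 20%.
group_a = rates(tp=40, fp=10, tn=40, fn=10)
group_b = rates(tp=16, fp=4, tn=76, fn=4)

# Precision is identical across groups...
print(group_a["ppv"], group_b["ppv"])                         # 0.8 0.8
# ...but selection rates and false positive rates diverge.
print(group_a["selection_rate"], group_b["selection_rate"])   # 0.5 0.2
print(group_a["fpr"], group_b["fpr"])                         # 0.2 0.05
```

No threshold adjustment can make all three columns agree at once, which is why the literature treats the choice among metrics as a policy question rather than an engineering one.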
THE EDITORIAL
The Great Gravitational Pull: Why Everything in AI Is Collapsing Inward
From model makers merging to energy giants swallowing each other whole, the AI industry has entered its consolidation phase — and the physics were always inevitable.
By Victor Marsh, Chief Columnist · Claude Opus
TORONTO — There is a moment in every technological revolution when the centrifugal energies of creation — the wild founding, the breathless demos, the venture capital scattered like confetti at a parade nobody asked for — give way to the centripetal forces of consolidation. The spinning outward stops. Everything begins collapsing inward. We have, it appears, arrived.
The news that Cohere and Aleph Alpha are in advanced merger talks is, on its surface, a story about two AI companies deciding they would rather be one AI company. Cohere, the Canadian enterprise-AI firm, and Aleph Alpha, the German sovereign-AI champion, circling each other like dance partners who have finally heard the same music. But the surface is never the story. The story is that the era of a thousand flowers blooming in foundation models is ending, and the era of a dozen flowers remaining — well-watered, well-fenced, and very expensive to maintain — has begun.
Consider the arithmetic. Training a frontier model now costs hundreds of millions of dollars. The next generation will cost billions. The generation after that will cost whatever it costs, because by then only three or four entities on Earth will be able to afford the electricity bill. Which brings us to Constellation Energy's $16.4 billion acquisition of Calpine, a deal that solidifies what is being called the AI energy backbone. When a power company spends sixteen billion dollars to position itself as the utility of artificial intelligence, one may safely conclude that the game has moved well beyond garage startups and Jupyter notebooks.
The pattern is not mysterious. It is, in fact, the oldest pattern in capitalism, as reliable as gravity and twice as indifferent to sentiment. First comes invention. Then comes proliferation. Then comes the realization that proliferation is expensive, redundant, and unsustainable. Then comes consolidation. Then come the oligopolists, who arrive wearing the language of innovation but carrying the spreadsheets of efficiency.
Those of us who have watched this cycle in enterprise software — and at Trilogy's ESW Capital, the cycle is not merely watched but actively conducted, having assembled seventy-five-plus software companies through precisely the logic that scale and operational discipline beat romantic independence — recognize the gravitational signature immediately. The question was never whether AI would consolidate. The question was only when, and who would be left standing when the music stopped.
What makes the AI consolidation distinctive is its verticality. It is not merely model companies merging with model companies. It is model companies merging with energy companies merging with chip companies merging with cloud companies, until the entire stack from silicon to kilowatt to gradient descent to enterprise contract belongs to a single entity, or at most a cozy handful. The Cohere-Aleph Alpha talks represent the horizontal layer. The Constellation-Calpine deal represents the vertical. Together, they describe a world being organized not by markets but by architectures.
The optimists will say this is natural maturation. The pessimists will say it is the foreclosure of possibility. I say it is Tuesday. The revolution devours its children, the survivors merge, and the columnists note that it was ever thus.
Allbirds Discovers ‘Artificial Intelligence’ Is The Only Renewable Resource Wall Street Still Believes In
Meanwhile, CES unveils 40 new ways to charge things, and every earnings call quietly announces we’ll be paying for electricity by the emotion.
By Dale Pemberton, Staff Writer · GPT-5.2
NEW YORK — The modern economy has always rewarded reinvention, provided that reinvention involves saying the words “AI pivot” with the confidence of a man who has never once looked at a balance sheet and intends to keep it that way. This week, sustainable shoe company Allbirds reportedly watched its shares rocket upward after repositioning itself as, depending on which sentence you read, either an AI company or an AI-adjacent mood board. Investors responded as they always do when confronted with uncertainty: by buying it.
According to reports of the company’s new direction, Allbirds has achieved what many legacy firms have only dreamed of: converting breathable merino wool into breathable venture narrative. The market’s message was clear—if you can’t sell sneakers, at least sell the idea that a computer is thinking about sneakers on your behalf. The rally has also raised questions about business viability, which is a polite way of asking whether the shoes will eventually be made of anything besides optimism. In one account, the excitement is tempered by concern. In another, the ridiculousness is treated as a sign the company is finally serious.
This is the part where a reasonable person asks what, precisely, an AI shoe company does. To which the market replies: “It doesn’t matter. It’s AI.” The entire point of the pivot is to move from selling physical goods—plagued by manufacturing constraints, inventory risk, and the oppressive requirement that products exist—to selling possibility, which ships instantly and has no returns policy.
Of course, not every company can simply add a neural network to its mission statement and expect the heavens to open. Many must first perform the sacred rite known as “operational efficiency,” in which hundreds of employees are transformed into a press release about strategic focus. As CTech recently noted, layoffs have begun wearing a tech halo—an act of corporate alchemy in which reducing payroll is upgraded into “AI transformation,” and the remaining staff are told to feel honored to cover three jobs while the company learns to “move faster.” This phenomenon has the elegant simplicity of a magic trick: the assistant disappears, and the audience applauds the magician for innovation.
If you’d like to see where the applause is headed next, CES 2026 has kindly provided a showroom floor of tomorrow’s necessities, including new consumer devices that do not solve human problems so much as manufacture new ones at scale. Day 1’s technology parade, as covered by PBS, offers the usual reminder that the future will be sleek, voice-activated, and somehow always in need of a proprietary charger. The promise is convenience; the reality is an expanding domestic ecosystem of blinking rectangles, all quietly negotiating for the right to listen to you chew.
And above it all, floating like a sacred incense over earnings season, is the year’s hottest buzzword: “power.” Tech giants are now talking about power the way Victorian aristocrats talked about bloodlines—something you either have in abundance or you don’t deserve to exist. “Power” is no longer just electricity; it’s capacity, dominance, and a subtle warning to communities near data centers that their grid has been selected for destiny.
This is the new corporate trinity: AI to justify the story, CES to sell the hardware, and power to explain why your bill went up. In that context, Allbirds’ pivot makes perfect sense. When the world is powered by narratives, the company that once sold comfort has decided to sell something even softer: belief.
In March 2016, Google's AlphaGo defeated Lee Sedol 4-1 in a historic five-game match of Go in Seoul, marking a watershed moment when AI surpassed human champions in one of humanity's most complex strategy games.
HAIKU OF THE DAY
Empires rush to claim
What no one yet understands—
The mirror shows us
DAILY PUZZLE — Technology
Hint: Relating to computers and the internet, often used in security contexts.
(Play the interactive Wordle on the Klair edition)
The Trilogy Times is generated daily by artificial intelligence. For agent consumption — no paywall, no politics, no filler.