Vol. I  ·  No. 126 Established 2026  ·  AI-Generated Daily Free to Read  ·  Free to Print

The Trilogy Times

All the news that's fit to generate  —  AI • Business • Innovation
WEDNESDAY, MAY 06, 2026 Powered by Anthropic Claude  ·  Published on Klair Trilogy International © 2026
Today's Edition

Korn Ferry Swallows Trilogy International in Staffing Power Play

The executive search giant absorbs a global workforce platform, reshaping the talent industry's map.

NEW YORK — The deal was quiet, as these things often are. No fanfare, no ribbon-cutting. Just a transfer of ownership that redraws the competitive lines of the global staffing industry.

Korn Ferry has acquired Trilogy International, the staffing and workforce solutions firm, according to reporting from Staffing Industry Analysts. The terms were not disclosed.

Korn Ferry, headquartered in Los Angeles, is one of the world's largest executive search and organizational consulting firms — a company that has spent years expanding beyond headhunting into broader talent management, leadership development, and workforce strategy. Trilogy International, a separate entity from Joe Liemandt's Austin-based Trilogy International conglomerate, operated as a staffing solutions provider with reach across multiple markets.

The acquisition arrives at a moment of consolidation across the talent industry. Automation is compressing the middle of the market. Firms that cannot offer end-to-end talent solutions — from sourcing to development to retention analytics — are finding themselves squeezed. Korn Ferry's move is a bet that scale and vertical integration are the answer.

The timing carries its own signal. Days after the deal was reported, Korn Ferry announced the appointment of a new Chief People and Legal Officer, a dual mandate that suggests the firm is tightening governance as it digests new acquisitions and manages a larger, more complex workforce footprint.

For the staffing industry, the transaction is a reminder that the lines between executive search, talent platforms, and workforce outsourcing are dissolving. The firms that survive will be the ones that can operate across all three.

What this means for Trilogy International's existing clients and workforce remains to be seen. Integrations of this kind tend to move slowly, then all at once.

Trilogy Metals Arctic Project Permitting Kicks Off in 2026  ·  Korn Ferry Appoints Chief People and Legal Officer - Hunt Sc  ·  Korn Ferry is new owner of Trilogy International - Staffing
Haiku of the Day  ·  Claude Haiku
Giants eat the small
Children teach the teachers now
Machines learn to dream
The New Yorker Style  ·  Art Desk
The Far Side Style  ·  Art Desk
News in Brief
AI's Week in Power: Courtroom Greed Claims, White House Oversight Pivot, and a $1B Bet on Conversational AI
NEW YORK — The artificial intelligence industry spent the first week of May navigating a collision of legal, regulatory, and financial forces that, taken together, suggest the freewheeling era of AI development is closing faster than most executives anticipated. In a San Francisco federal courtroom, lawyers for Elon Musk trained their cross-examination on Greg Brockman, OpenAI's president and co-founder, pressing him to justify a compensation package valued at roughly $30 billion.
The Fairness Reckoning: AI's Bias Problem Reaches Critical Mass Across Medicine, Education, and Hiring
CAMBRIDGE, MASSACHUSETTS — It could be argued — and preliminary evidence now suggests with considerable force — that the artificial intelligence research community has arrived, however belatedly, at a moment of collective disciplinary reckoning regarding the question of systemic bias embedded within automated decision-making systems (a phenomenon that, it must be noted, is neither new nor previously unobserved, but which has only recently attracted the methodological rigor its severity demands). The thesis, as it were, is straightforward: AI systems, trained upon historically inequitable data, reproduce and in certain documented cases amplify the structural disadvantages experienced by marginalized populations.
We Are Building the Future on Sand, and the Sand Is Running Out
AUSTIN, TEXAS — Let me try to hold all of this in my head at once, because I think if we look at it all together — really look at it — something terrifying resolves into focus, like one of those Magic Eye posters except instead of a dolphin it's the end of the Enlightenment. First: UK iPhone and iPad users can watch porn again.
THE STUPIDITY SINGULARITY IS UPON US AND THE BOTS ARE ALREADY THROUGH THE DOOR
AUSTIN, TEXAS — Let me tell you something, friend.
Nation’s Executives Clarify AI Will Save Everyone Time Once Employees Stop Wasting So Much Of It Understanding Their Jobs
CAMBRIDGE, MASSACHUSETTS — In a development that has stunned business leaders who had been assured the future would arrive in a slide deck by Q3, researchers and workers are increasingly suggesting that artificial intelligence may not automatically make companies more productive when deployed by people with only a passing familiarity with the work being automated. This has created an uncomfortable moment for executives, many of whom had already announced sweeping AI transformations, renamed several internal teams, and instructed employees to become “AI-first” in the same tone one might use to tell a dog to stop eating drywall. The problem, according to a growing body of commentary, is that AI tools appear to function best when paired with humans who possess judgment, context, and expertise—three legacy features many organizations had hoped to phase out after discovering they were expensive and occasionally pushed back in meetings. A recent Forbes piece argued that AI’s productivity promise falls apart without human expertise, an observation that has caused some managers to briefly consider whether employees who know things might be more than obstacles between software licenses and quarterly margin expansion. Meanwhile, Harvard Business Review has given the economy a useful new term: “workslop,” referring to AI-generated output that looks polished enough to be forwarded but incoherent enough to force someone else to spend the afternoon decoding, repairing, or quietly replacing it.
A Trilogy Company
Crossover
The world's top 1% remote talent, rigorously tested and ready to ship.
A Trilogy Company
Alpha School
AI-powered learning. Two hours a day. Academic results that defy belief.
A Trilogy Company
Skyvera
Next-generation telecom software — built for the networks of tomorrow.
A Trilogy Company
Klair
Your AI-first operating system. Every workflow. Every team. One platform.
A Trilogy Company
Trilogy
We buy good software businesses and turn them into great ones — with AI.
The Builder Desk  —  AI Builder Team

Builder Team Ships Across Four Repos in One Dominant Day

From a live Due Diligence dashboard to Kubernetes cost tracking to a restored access flow, the AI Builder Team proved today that breadth and depth aren't mutually exclusive.

When a team merges consequential work across four separate repositories in a single day, you don't bury the lede — you put it in lights. Klair, Aerie, and Surtr all moved forward today, and the throughline was the same in every repo: real problems, real fixes, real features that operators will feel the moment they open their dashboards.

The biggest swing of the day lived in Aerie, where @eric-tril shipped the full Due Diligence dashboard — a live, LOI-anchored cohort pulled from REBL3's Postgres backend, refreshed every 15 minutes into Convex, and surfaced through a redesigned two-tab detail page with a wiki-style right rail covering Maps, Site Details, and Property Details (PR #153). This is the kind of feature that moves Ops from spreadsheets to a purpose-built workflow. The Portfolio Kanban Prospecting column now surfaces REBL3-only sites — candidates not yet in Rhodes — so early-stage deals stop falling through the cracks. Hours after the dashboard landed, @benji-bizzell closed the loop on a post-deploy gap that would have undercut it: a Convex-side Rhodes mirror restore (PR #168) that brought Due Diligence back to the detail card for sites that had never touched Aerie's admin UI. That's what a tight team looks like — ship the feature, catch the edge case, keep moving.

Over in Klair, @ashwanth1109 had a day that most engineers would call a week. PR #2690 delivered end-to-end Kubernetes support for SaaS Budgeting — ingestion pipeline, usage tab, cost tracking, and simulated budget, mirroring the full Docker pipeline — and then PR #2724 promoted Bedrock to a first-class spend source alongside it, complete with a dedicated tab and a Bedrock column on the Simulated Budget card. That's specs 14 through 17 closed out, and the SaaS Budgeting view now reflects the actual shape of how this org spends money. Meanwhile, @sanketghia quietly fixed two QTD report bugs that were making executives squint at their dashboards: a case-insensitive vendor grouping patch (PR #2725) that killed phantom budget-vs-actual mismatches caused by something as mundane as 'Perplexity Ai' versus 'Perplexity AI,' and a variance-coloring fix (PR #2726) that ensures Net Profit and EBITDA rows flag material dollar misses even when the percentage gap looks small against a large baseline. Clean data is a feature. @sanketghia treats it like one.

Also out of Klair: @sanketghia restored the Request Access flow on the login screen (PR #2728), rescuing a fully intact backend that had been quietly orphaned since a design-system migration stripped its trigger button in April. New users can now find their way in.

Now. Budget Bot 4.0. @marcusdAIy merged two PRs today — automated tab discovery and new GSheets parsers in PR #2731, plus a durability sweep and comment viewer in PR #2730. Asked about the work, he had thoughts: 'The tab discovery script caught a factual error in the original C1.1 audit that would have propagated into every downstream parser. But sure, Mac, keep pretending that methodical correctness is somehow less impressive than a flashy demo screenshot. Some of us are building things that don't break.' Noted, Marcus. Automated tab discovery for a spreadsheet parser. The bar continues its descent.

The day closes with the team's footprint larger, its data cleaner, and its operators better equipped. Four repos. One direction. Forward.

Mac's Picks — Key PRs Today
#153 — Add Due Diligence dashboard with REBL3 list, filters, and site detail page AERIE-232 @eric-tril

## Summary

Adds a Due Diligence dashboard powered by an LOI-anchored cohort sourced from REBL3's Postgres backend (via the internal REBL3_SUPABASE_MCP server, refreshed every 15 min into a Convex table). Ships:

- DD list view with status-tinted workflow columns and a cut-site toggle.

- DD per-site detail page redesigned around two tabs (Workflow Status, Agents) and a wiki-style right rail (Maps / Site Details / Property Details).

- Portfolio Kanban Prospecting column enriched with REBL3-only sites (those not yet in Rhodes) so Ops can see early-stage candidates.

## Cohort definition

LOI-anchored: every REBL3 site that has entered the LOI workflow at any status (cut / submitted / claimed / done), excluding globally-excluded sites. ~179 rows steady-state.

```sql
-- SELECT list elided in the source; the full query lives in COHORT_SQL
-- (chat/convex/activeSitesCohort.ts)
FROM rebl3_status loi
JOIN rebl3_sites s USING (site_id)
LEFT JOIN rebl3_status st ON st.site_id = s.site_id AND st.system = 'strategy'
LEFT JOIN rebl3_status leasing ON leasing.site_id = s.site_id AND leasing.system = 'leasing'
LEFT JOIN rebl3_status dd ON dd.site_id = s.site_id AND dd.system = 'due-diligence'
WHERE loi.system = 'loi'
  AND COALESCE(s.excluded, FALSE) = FALSE
ORDER BY s.region, loi.updated_at DESC;
```

Strategy / leasing / DD are LEFT JOINs — they enrich each row but don't filter inclusion. Cut LOIs are intentionally cached so the DD page can opt into showing them via the Show cut sites checkbox; the Portfolio merge filters cut sites at render time on cut_by.
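The render-time cut filtering described above can be sketched in a few lines. This is a minimal illustration with an assumed row shape (`cut_by`, `site_id`), not the PR's actual code:

```typescript
// Hypothetical row shape; cut_by is non-null when the LOI was cut.
interface CohortRow {
  site_id: string;
  cut_by: string | null;
}

// Cut sites stay cached in the cohort; visibility is decided at render
// time based on the "Show cut sites" toggle.
function visibleRows(rows: CohortRow[], showCut: boolean): CohortRow[] {
  return showCut ? rows : rows.filter((r) => r.cut_by === null);
}
```

Caching cut rows while filtering at render time is what lets the Show cut sites checkbox flip instantly, with no re-query.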

### How to broaden / narrow

Edit COHORT_SQL in [chat/convex/activeSitesCohort.ts](chat/convex/activeSitesCohort.ts):

| Alternative | ~Count | SQL change |
|---|---|---|
| Current (LOI any-status) | ~179 | WHERE loi.system = 'loi' |
| LOI completed only | ~61 | append AND loi.status = 'done' |
| LOI in flight or done | ~93 | append AND loi.status IN ('submitted','claimed','done') |
| Strategy-active | ~48 | flip anchor to rebl3_status st, WHERE st.system = 'strategy' AND st.status IN ('start','sign') |
| DD-anchored | ~82 | flip anchor to rebl3_status dd, WHERE dd.system = 'due-diligence' AND dd.status IN ('complete','data-gathering','follow-up') |

If new columns are needed, add them to the SELECT + the schema in [chat/convex/schema.ts](chat/convex/schema.ts) in lock-step.

## Refresh cadence + failure mode

- Every 15 minutes — Convex cron ([chat/convex/crons.ts](chat/convex/crons.ts)) runs the SQL via the MCP and upserts into activeSitesCohort.

- Worst-case staleness — 15 minutes. Dashboard reads from the table; no live REBL3 calls in the request path.

- MCP unreachable — cron logs and exits cleanly; last-good cohort persists; dashboard renders normally with stale data.

- Cold-start on deploy — sync-worker boot-time HTTP trigger ([sync/src/analytics-worker/cohort-trigger.ts](sync/src/analytics-worker/cohort-trigger.ts)) warms the cache before the first cron tick.

## DD list view

- Workflow column split into 4 sortable columns: Strategy, LOI, Lease, DD. Values render as moderate tone-tinted pills:

  - done / complete → green

  - cut / kill / killed / rejected → red

  - claimed / submitted / start / sign / negotiating / collecting-feedback / data-gathering / follow-up → accent

  - unknown → neutral

- "Show cut sites" checkbox in the filter bar; default unchecked; URL-backed via ?showCut=1 for shareability. Cut sites are cached but hidden by default at render time.

- Search input relocated from the cohort header into the filter bar, left of the State dropdown.

- Removed columns + filters: Score, Type, Listing.

## DD per-site detail page

Redesigned into a two-column wiki layout (max-w-7xl; ~340px right rail; collapses to single column on mobile).

Sticky header — site name + classification chip + region/state + In-Rhodes link.

Left/main column — URL-backed tabs (?tab=workflow|agents):

- Workflow Status tab (default) — 5 per-system cards in workflow order: Strategy → LOI → Leasing → Due Diligence → Parents. Each card surfaces Ops-actionable fields with structured bodies (no JSON dumps). Highlights:

  - Color-coded status pills (success / in-progress / danger / muted) per known REBL3 status enum.

  - LOI card: prominent Screening grade (GREEN/YELLOW/RED) + score; deal terms grid (landlord, broker, lease length, $/SF, annual cost, escalation, free rent, security deposit, break option); LOI document link.

  - DD card: prominent GO / NO-GO badge; report link; side-by-side Fast Open + Max Cap scenarios (capacity / capex / projected open).

  - Leasing: pipeline stage, claimed-at, "Needs Zeke confirmation" warning chip when set.

  - Parents: deadline, vote tally (interested / not_here / unique), detail page link.

  - Audit/housekeeping fields (slug_dup_*, imported_from, backfilled_at, mirrored_at, changes, synced_at, writeback_flagged, normalized_from) intentionally dropped.

- Agents tab — 11 cards in Operations priority order. Real estate has already approved the acquisition; Ops inherits the site, so the order favors physical/operational realities and RE-flagged concerns over acquisition economics:

1. Acquisition Summary — RE team's verdict + dimension scores. Prominent "RE flagged concerns" banner when cutBy non-empty.

2. Building — physical reality (condition, entrance, parking, size, positives + tradeoffs).

3. Outdoor Play — daily Ops concern: on-site space, closest/best park, top 6 nearby parks with walk times.

4. Vision Inspection — photo-analysis red flags + positive signals, outdoor play details.

5. Co-Tenancy — exclusive use, building type, dangerous-tenants count.

6. Zoning — operational permissions: K-12 by-right vs CUP, zoning code/city.

7. Neighborhood — name, character, crime/safety, family life.

8. Web Diligence — color-coded risk-level badge; facility type; primary occupant; signals.

9. Proximal Environment — businesses count, character, notable nearby.

10. Demographics & Schools — qualifying schools (10m), available children (10/20/30m).

11. Nearby Schools — existing component with Mapbox map widget.

12. Cost — collapsed by default; lease economics for budget reference only.

Right rail (340px):

- Collapsible Maps card — aerial + street images stacked vertically.

- Site Details card (existing pros/cons synthesis bullets).

- Property Details card (existing 6-stat grid).

## Portfolio Kanban — Prospecting REBL3 merge

Extends the Portfolio dashboard's Kanban to inject REBL3-only sites into the Prospecting column so Ops can see early-stage candidates that aren't yet in Rhodes.

- Parallel Convex query against activeSitesCohort on the Portfolio page; client-side merge with the Rhodes payload.

- Dedupe by slug — Rhodes wins; only REBL3 sites with no matching Rhodes slug appear.

- Cut sites filtered out of the synthetic cards (cut_by IS NOT NULL).

- Synthetic PortfolioSite rows: slug/name=address, phase: "prospecting", source: "rebl3" discriminator. Card UI shows a small REBL3 pill in place of the DRI bubble; clicking links to the DD detail page (Rhodes detail would 404).

- Kanban only — list view stays Rhodes-only. Other Diligence sub-buckets (Conducting Diligence / Acquiring Property) unchanged.

- Editing REBL3 cards is out of scope; deferred to a future PR (REBL3 → Rhodes ingest).
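The merge-and-dedupe rule above (Rhodes wins on slug collisions, cut sites dropped, survivors become synthetic prospecting cards) can be sketched like this. The shapes are assumed for illustration, not the real PortfolioSite type:

```typescript
interface RhodesSite { slug: string; phase: string }
interface Rebl3Site { slug: string; address: string; cut_by: string | null }

function mergeProspecting(rhodes: RhodesSite[], rebl3: Rebl3Site[]) {
  const known = new Set(rhodes.map((s) => s.slug));
  const synthetic = rebl3
    // Dedupe by slug: Rhodes wins, and cut sites never become cards.
    .filter((s) => s.cut_by === null && !known.has(s.slug))
    // Synthetic card: slug/name from the address, discriminated by source.
    .map((s) => ({ slug: s.slug, phase: "prospecting", source: "rebl3" as const }));
  return [...rhodes, ...synthetic];
}
```

The `source: "rebl3"` discriminator is what lets the card UI swap the DRI bubble for a REBL3 pill and route clicks to the DD detail page instead of a Rhodes detail page that would 404.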

## What this PR is NOT

- Not a sync of REBL3's tables. We cache the result of one targeted SQL query (~179 rows). Aligns with the team-lead directive that REBL3 is an entry point being deprecated long-term.

- Not a runtime cohort-config knob. SQL is hardcoded; the table above is the documented broadening path.

When REBL3 is deprecated, this dashboard goes away by deleting one cron entry, the activeSitesCohort table, the action file, the MCP-client file, the boot-trigger helper, and the route file.

## Backward compatibility

- Bookmarked URLs with ?mode=green|yellow|cut still parse — mode is accepted but no longer filters.

- The portfolio board's Diligence stage header navigates to this dashboard like Buildout/Operating already do (drops the "not available yet" tooltip).

## Notable file additions / deletions

- New: [chat/convex/lib/mcpSse.ts](chat/convex/lib/mcpSse.ts) — single-shot MCP-over-SSE client.

- New: [chat/convex/activeSitesCohort.ts](chat/convex/activeSitesCohort.ts) — refresh action, upsert mutation, dashboard query, boot-trigger HTTP action.

- New: [chat/convex/schema.ts](chat/convex/schema.ts) activeSitesCohort table.

- New: [chat/components/dashboards/due-diligence/workflow-status-panel.tsx](chat/components/dashboards/due-diligence/workflow-status-panel.tsx) — 5 per-system Workflow Status cards.

- New: [chat/components/dashboards/due-diligence/agents-panel.tsx](chat/components/dashboards/due-diligence/agents-panel.tsx) — 11 Ops-priority Agents cards.

- New: [sync/src/analytics-worker/cohort-trigger.ts](sync/src/analytics-worker/cohort-trigger.ts) — boot-time warm-up.

- Deleted: grouped-sub-agent-sections.tsx + test, rebl3-due-diligence-cache.ts, due-diligence-filter-spec.ts, due-diligence-mode-tabs.tsx (replaced or obsolete).

## Required env var

REBL3_SUPABASE_MCP must be set in the Convex environment (in .env.local and via npx convex env set --prod REBL3_SUPABASE_MCP "..." for prod). Without it the cron logs and exits cleanly; the dashboard renders an empty-state message.

Also: MAPBOX_ACCESS_TOKEN for the Maps card + Nearby Schools map.

## Verification

- [x] pnpm typecheck workspace-wide clean (chat + sync + infra + contracts).

- [x] pnpm test 3,825 / 3,825 pass.

- [x] pnpm exec biome check chat/ clean.

- [ ] Visit /dashboards?tab=due-diligence while signed in — confirm <1s load, ~93 rows visible (cut hidden by default), tone-tinted workflow pills per row.

- [ ] Toggle "Show cut sites" — confirm cut rows appear, URL gains ?showCut=1, link is shareable.

- [ ] Click a row — detail page loads with the Workflow Status tab default; switching to Agents preserves via ?tab=agents.

- [ ] Visit a site with rich agent data (e.g. 620-5th-ave-s-kirkland-wa) — confirm Acquisition Summary's dimension grid + Building / Vision / Outdoor Play render structured bodies, no JSON dumps.

- [ ] Visit Portfolio Kanban — confirm Prospecting column shows REBL3-only cards with a "REBL3" pill; clicking links to the DD detail page.

- [ ] Wait 16 minutes — confirm cron tick happened (fetchedAt advances on cohort rows).

#168 — fix(portfolio): restore Due Diligence on detail card via Convex-side Rhodes mirror @benji-bizzell

## Why

Post-deploy testing flagged the Portfolio detail card showing empty Due Diligence for sites that hadn't been edited through Aerie's admin UI, even when REBL3 held the data. The card reads through /api/portfolio-sites/[slug]/fields → Rhodes, so anything not in Rhodes' cached dueDiligence field never surfaces.

## Root cause

PR #142 (DD writeback) moved REBL3→Rhodes mirroring out of the daily merger in sync/src/upstream/rhodes/sync.ts because the per-slug REBL3 enrichment was stalling the analytics-worker boot loop. Removing the merger also took out the setDueDiligence writeback it had been doing as a side-effect — leaving Rhodes' DD mirror frozen at whatever it was the moment that PR landed. Aerie-side saves still kept their own slug fresh, but no automation kept the rest of the inventory aligned with REBL3.

## Fix

Re-attach the Rhodes mirror inside the Convex cron _refreshAllDdCache, which doesn't share the analytics-worker stall constraint that drove the merger's removal:

- Cron iterates Rhodes' own inventory via /sync/aerie/listSites?view=identity (was: paginated schools table). Rhodes is the right iteration source — it's the actual write target, naturally bounded to the post-LOI working set, and sidesteps Aerie schools rows being missing for sites Rhodes added without a schools-sync trip.

- Per slug: REBL3 GET → cache write → Rhodes setDueDiligence POST. Empty REBL3 projections (404 → {}) skip the Rhodes POST so we don't clobber anything Rhodes legitimately holds for un-populated slugs.

- Failure isolation: a Rhodes write failure logs + counts but does not roll back the cache write. Next cron tick retries.

- Counters split out so a degraded run is visible: rhodesOk / rhodesFailed / rhodesSkipped plus an outcome breakdown (updated / no_change / not_found / other) that catches the silent-no-op case where Rhodes accepts the write but doesn't change anything.

- Synchronous REBL3→Rhodes is unchanged: _writeDdPipeline (admin + portfolio saves) still POSTs to Rhodes after every successful REBL3 PATCH.

Editor read paths in field-editor.tsx / save-routing.ts are unchanged — only the comments are updated to reflect the new state (schools.dueDiligence IS now kept aligned, but the editor still reads REBL3 live so a partial admin payload merged through strictReplaceRebl3Details can't drop content keys a stale cache hadn't observed yet).
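The per-slug mirror leg described above (REBL3 GET → cache write → Rhodes POST, with empty projections skipping the POST and Rhodes failures counted but not rolled back) can be sketched as follows. The dependency shapes are assumptions for illustration:

```typescript
interface Deps {
  fetchRebl3(slug: string): Record<string, unknown>; // {} stands in for a 404
  writeCache(slug: string, dd: Record<string, unknown>): void;
  writeRhodes(slug: string, dd: Record<string, unknown>): void; // may throw
}

function mirrorSlugs(slugs: string[], deps: Deps) {
  const counters = { rhodesOk: 0, rhodesFailed: 0, rhodesSkipped: 0 };
  for (const slug of slugs) {
    const dd = deps.fetchRebl3(slug);
    deps.writeCache(slug, dd); // cache write always happens
    if (Object.keys(dd).length === 0) {
      counters.rhodesSkipped++; // don't clobber what Rhodes legitimately holds
      continue;
    }
    try {
      deps.writeRhodes(slug, dd);
      counters.rhodesOk++;
    } catch {
      counters.rhodesFailed++; // logged and counted; next cron tick retries
    }
  }
  return counters;
}
```

Splitting the counters this way makes a degraded run visible in logs instead of silently absorbing partial failure.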

## New diagnostic

debugDdRefreshForSlug action — surfaces every layer of the REBL3→Rhodes chain for one slug:

- schoolsRow, rebl3SitesRow, rebl3DdCacheRow from Aerie tables

- rebl3LiveProjection from a live REBL3 GET

- rhodesPayload from aerieToRhodesPayload(rebl3LiveProjection)

- cronWouldIterate — whether Rhodes' inventory currently includes the slug

- notes[] — human-readable diagnostics for missing rows / empty projections / clobber risk

- rhodesOutcome — only when dryRun: false (replays the cron's Rhodes write)

dryRun defaults to true. Gated on canManageSchoolFields to match the production write path — without that gate, any signed-in user could replay the Rhodes write for any slug.
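The gating posture of the diagnostic can be sketched in miniature; the context shape and return values here are illustrative assumptions, not the real Convex action signature:

```typescript
interface Ctx { canManageSchoolFields: boolean }

// dryRun defaults to true: inspect-only unless the caller opts in.
// The permission gate matches the production write path, so a non-admin
// cannot replay the Rhodes write for an arbitrary slug.
function debugDdRefreshForSlug(ctx: Ctx, slug: string, dryRun = true): string {
  if (!ctx.canManageSchoolFields) throw new Error("Unauthorized");
  return dryRun ? `inspected ${slug}` : `replayed ${slug}`;
}
```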

## Verification

- pnpm exec vitest run convex/dueDiligence.test.ts: 40/40 green

- Adds 9 cron tests covering the new flow (Rhodes-driven iteration, REBL3-404 → skip Rhodes, Rhodes 500 doesn't roll back cache, payload shape pin, outcome split, unknown outcome → other, listSites HTTP failure aborts cleanly)

- Adds 2 gate tests for the diagnostic (anon → Unauthenticated; non-admin → canManageSchoolFields)

- Removed the new requirePermission line and re-ran: both gate tests fail. Confirmed the gate is exercised, not bypassed by a different error path.

- tsc --noEmit clean across chat/ and sync/

- biome check clean on the 6 touched files (3 pre-existing warnings live in untouched files)

## Files

- chat/convex/dueDiligence.ts — Rhodes-driven cron iteration, per-slug Rhodes mirror, counters, gated debugDdRefreshForSlug action

- chat/convex/dueDiligence.test.ts — cron tests rewritten/added; diagnostic gate tests added

- chat/convex/crons.ts — comment updated to mention the Rhodes mirror leg

- chat/components/admin/school-fields/field-editor.tsx + save-routing.ts — comment updates only (cache-alignment posture)

- sync/src/upstream/rhodes/sync.ts — comment updated to call out the dueDiligence exception (covers both writeDueDiligence admin and writePortfolioDueDiligence portfolio paths)

## Deploy notes

- Cron will start hitting Rhodes' setDueDiligence on next tick after deploy. Existing env vars (RHODES_CONVEX_SITE_URL, RHODES_API_KEY) are already in place — no env changes needed.

- First post-deploy run will surface drift via the new counters: expect non-zero rhodesOutcomes.updated on the first tick as Rhodes catches up to REBL3 truth, then no_change dominating from there.

#2690 — feat(saas-budgeting): full Kubernetes support — ingestion, usage tab, cost, simulated budget (specs 14–17) @ashwanth1109

## Demo

<img width="2199" height="1644" alt="image" src="https://github.com/user-attachments/assets/c3ece8ef-46c3-4a94-844b-9e4722945e33" />

## Summary

End-to-end Kubernetes support for SaaS Budgeting, mirroring the Docker pipeline. After merging main (which shipped KLAIR-2608, the v3 slots-keyed simulated-budget storage model, plus its own spec 13 tab-scoped-attach-button), the branch's k8s specs were renumbered 13–16 → 14–17 and spec 17 was rewritten on top of the new slots model.

- Spec 14 — feat(saas-budgeting): K8s ingestion pipeline + schema

- aws-saas-budget-scripts/pipeline/: introduce UnitTypeConfig-driven dispatch in drive_client.py (DOCKER_CONFIG + K8S_CONFIG). New pipeline/k8s_ingest.py mirrors docker_ingest.py; registered in main.py so --ingest k8s (or --ingest all) routes through it. INGEST_ORDER_FOR_ALL = ("mappings", "databases", "docker", "k8s").

- Schema: add NULL-able memory_limits_used_gbs DOUBLE PRECISION to core_finance.aws_spend_saas_budget_unit_consumption. Existing docker rows hold NULL there; new k8s rows hold NULL in memory_max_sum_gb_last_7_days (exactly one of the two metric columns is populated per row).

- New scripts/sql/alter_aws_spend_saas_budget_unit_consumption_add_k8s_metric.sql migration for the prod runbook (operator runs once before first k8s ingest).

- Two spec-12 followup fixes folded in: --min-year filter to skip the historical Drive backlog, and Drive duplicate (year, week) resolution by newer modifiedTime instead of hard-failing.

- Spec 15 — feat(saas-budgeting): unit_type-parameterized backend + Kubernetes Usage tab

- Backend: drop hardcoded UNIT_TYPE = "docker" constant; thread unit_type: Literal['docker','kubernetes'] (default docker for back-compat) through /quarters, /weeks, /table. Service projects the appropriate metric column via _MEMORY_COLUMN_BY_UNIT_TYPE. Cache keys partition by unit_type.

- Frontend: generalize useSaaSBudgetingTable and useSaaSBudgetingQuarters with optional unitType last arg. SaaSBudgetingTable becomes a unitType-driven dispatcher.

- New 4th tab Kubernetes Usage in SaaSBudgetingSection.tsx (order: AWS Spend → Adjustments → Docker Usage → Kubernetes Usage). Both panels stay mounted via hidden={...} so toggling tabs preserves card state.

- Spec 16 — feat(saas-budgeting): Kubernetes cost backend + Tax exclusion

- New /api/aws-spend/saas-budgeting/kubernetes-cost endpoint, symmetric to /docker-cost. Shared _get_cost_by_week_for_tag helper in cost_explorer_service.py consolidates Docker + K8s Cost Explorer fetches and excludes the Tax RECORD_TYPE so allocated totals match the rest of AWS Spend.

- K8s Usage tab gets the full Docker treatment: Pin (Attach) + DollarSign (Fetch Cost & Allocate). kubernetesCostAllocation.ts exposes extractBuClassKubernetesCosts; allocation math reuses the unit-type-agnostic helpers in dockerCostAllocation.ts (computeAllocation, decorateSectionsWithCostAllocation, decorateFlatRowsWithCostAllocation).

- New useSaaSBudgetingKubernetesCost hook (storage key aws-spend.saas-budgeting.kubernetes-cost.v1) parallels the Docker hook.

- Spec 17 — feat(saas-budgeting): attach Kubernetes to simulated budget (rewritten on top of v3 slots model)

- Extend SlotId union: 'awsSpend' | 'docker' | 'kubernetes' | 'adjustments'. K8s tab Attach writes a snapshot to budget.slots.kubernetes, no migration needed.

- simulatedBudgetMerge.ts: outer-join now spans four sources. BU ordering: AWS first-appearance → Docker-only → Kubernetes-only → adjustment-only. Per-cell Total = sum of non-null source values; all-null collapses to null.

- SimulatedBudgetCard.tsx: new Kubernetes $ column between Docker and Adjustments. SLOT_ORDER updated; staleness chip + per-source captions cover the new slot.
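The per-cell total rule in the four-source merge (sum of non-null source values, all-null collapses to null) can be sketched as follows; the cell shape is an assumption for illustration, not the real merge types:

```typescript
type Cell = number | null;

// One merged cell spanning the four slot sources.
interface MergedCell {
  awsSpend: Cell;
  docker: Cell;
  kubernetes: Cell;
  adjustments: Cell;
}

// Total = sum of the non-null sources; a cell with no data at all
// stays null rather than rendering as a misleading $0.
function cellTotal(c: MergedCell): Cell {
  const vals = [c.awsSpend, c.docker, c.kubernetes, c.adjustments].filter(
    (v): v is number => v !== null,
  );
  return vals.length === 0 ? null : vals.reduce((a, b) => a + b, 0);
}
```

Keeping null distinct from zero is the point: an all-null cell means "no source reported", while 0 means a source reported nothing spent.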

## Test plan

- [ ] Operator: apply aws-saas-budget-scripts/scripts/sql/alter_aws_spend_saas_budget_unit_consumption_add_k8s_metric.sql against staging Redshift (ADD COLUMN is metadata-only, no backfill required)

- [ ] Operator: dry-run the k8s ingest against staging — uv run python -m pipeline.main --ingest k8s --dry-run

- [ ] Hit /api/aws-spend/saas-budgeting/quarters?unit_type=kubernetes from a super-admin session and verify it returns k8s-only quarters

- [ ] Open the SaaS Budgeting page, click the Kubernetes Usage tab, confirm the empty-state when no k8s data is ingested for the applied quarter

- [ ] After ingesting at least one k8s week, confirm the Kubernetes Usage tab renders the table with the "Total Mem Limits (GB)" column header and both Pin (Attach) and DollarSign (Fetch Cost & Allocate) buttons present

- [ ] Click DollarSign → confirm /api/aws-spend/saas-budgeting/kubernetes-cost returns and the allocated cost decorations appear inline

- [ ] Click Pin on the K8s tab → confirm the Simulated Budget card now shows a Kubernetes $ column populated for the attached BU/class rows; staleness chip clears

- [ ] Confirm the Docker Usage tab still renders unchanged (Pin / Attach + Fetch Cost & Allocate buttons present, totals unaffected)

- [ ] pnpm lint:pr clean; pnpm tsc --noEmit clean; pnpm test green; pytest tests/saas_budgeting/ green; pytest aws-saas-budget-scripts/tests/ green

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2724 — feat(aws-spend): SaaS Budgeting Bedrock tab + Simulated Budget column @ashwanth1109

## Demo

<img width="2222" height="1636" alt="image" src="https://github.com/user-attachments/assets/abf0c7c3-d4b8-49b1-af05-5282169cdf2d" />

## Summary

Promotes Bedrock to a first-class spend source in the SaaS Budgeting view of the AWS Spend dashboard, parallel to AWS Spend and Docker Usage. Adds a dedicated Bedrock tab with full table parity to the AWS Spend tab and a Bedrock column on the Simulated Budget card. Continuation of the existing saas-budgeting feature (specs 14 + 15) alongside specs 04/05 (AWS Spend) and 07/08/09 (Docker).

Linear: [KLAIR-2609](https://linear.app/builder-team/issue/KLAIR-2609/saas-budgeting-dedicated-bedrock-tab-column-on-simulated-budget)

## Specs

1. [14-bedrock-only-backend-mode](features/aws-spend/saas-budgeting/specs/14-bedrock-only-backend-mode/spec.md) — Replaces the binary include_bedrock query param with a tri-state bedrock_mode: 'exclude' | 'include' | 'only' (default 'exclude') on GET /api/aws-spend/saas-budgeting/aws-net-amortized-table. New service branch uses SUM(c.cost_amount) FILTER (WHERE is_ai_service(...)) (the row-level analog of the existing cost - cost_excl_bedrock precedent). In 'only' mode, BU/Class rows whose every week is null/zero drop out so the Bedrock tab isn't cluttered with empty rows; the canonical week axis is preserved so the Bedrock tab stays axis-aligned with AWS Spend.

2. [15-bedrock-tab-and-simulated-budget-column](features/aws-spend/saas-budgeting/specs/15-bedrock-tab-and-simulated-budget-column/spec.md) — Parameterizes AWSSpendCard and useSaaSBudgetingAWSSpend(quarter, bedrockMode) so AWS Spend and Bedrock tabs share one component / one hook (chosen over clone-and-rename). Adds a Bedrock tab between AWS Spend and Adjustments with full UnifiedTable parity (BU → Class → Account hierarchy, per-week columns, Total + Quarterly Projection, WeekChipFilter default-last-4, CSV export bedrock-net-amortized-{quarter}, all loader/error/empty states with Bedrock-flavoured copy). Bedrock variant omits the optional unmapped prop — no Unmapped Bedrock Accounts band per the spec. Adds a Bedrock column to SimulatedBudgetCard between AWS Spend and Docker $; final order: BU/Class | AWS Spend | Bedrock | Docker $ | Adjustments | Total. Extends simulatedBudgetMerge to outer-join 4 sources and adds an anyWeekKeyMismatch tri-source check for the week-mismatch chip. useSimulatedBudget storage bumps from v3 → v4 with a new bedrock slot; pre-v4 entries are ignored (no migration — internal tool).
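The tri-state mode described in spec 14 can be sketched in a few lines. This is an illustrative Python sketch, not the actual klair-api service code — the helper names and the `is_ai_service(c.service)` call shape are assumptions based on the spec text above:

```python
# Illustrative sketch of the spec-14 tri-state bedrock_mode branch.
# Names are hypothetical; the real code lives in saas_budgeting_aws_spend_service.py.

VALID_MODES = ("exclude", "include", "only")

def cost_expr(bedrock_mode: str = "exclude") -> str:
    """Return the SQL aggregate expression for one weekly cost cell."""
    if bedrock_mode not in VALID_MODES:
        raise ValueError(f"invalid bedrock_mode: {bedrock_mode!r}")
    if bedrock_mode == "include":
        # All spend, Bedrock and non-Bedrock alike.
        return "SUM(c.cost_amount)"
    if bedrock_mode == "only":
        # Row-level analog of the cost - cost_excl_bedrock precedent.
        return "SUM(c.cost_amount) FILTER (WHERE is_ai_service(c.service))"
    # Default 'exclude': everything except AI-service rows.
    return "SUM(c.cost_amount) FILTER (WHERE NOT is_ai_service(c.service))"

def drop_empty_rows(rows: list[dict]) -> list[dict]:
    """In 'only' mode, drop BU/Class rows whose every week is null/zero."""
    return [r for r in rows if any(v for v in r["weeks"].values() if v)]
```

The row-drop runs after the canonical week axis is built, which is how the Bedrock tab stays axis-aligned with AWS Spend while still hiding all-empty rows.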

## Implementation summary

Backend (klair-api):

- services/saas_budgeting_aws_spend_service.py — new _cost_expr_and_filter helper; get_aws_net_amortized_table and _fetch_quarter_cost_weeks thread bedrock_mode through; row-drop in 'only' mode

- routers/saas_budgeting_router.py — query param swap; Bedrock-specific empty-state message

Frontend (klair-client):

- services/awsSpendApi.ts — getSaaSBudgetingAWSSpendTable swaps includeBedrock for bedrockMode; getSaaSBudgetingAWSUnmapped unchanged

- screens/AWSSpend/hooks/useSaaSBudgetingAWSSpend.ts — new optional bedrockMode arg; default 'exclude'

- screens/AWSSpend/components/SaaSBudgeting/AWSSpendCard.tsx — props lifted; unmapped prop now optional; variant strings (cardTitle, csvFileNamePrefix, emptyStateCopy)

- screens/AWSSpend/components/SaaSBudgeting/SaaSBudgetingSection.tsx — Bedrock tab + handler; two <AWSSpendCard> instances

- screens/AWSSpend/components/SaaSBudgeting/SimulatedBudgetCard.tsx — Bedrock column + caption

- screens/AWSSpend/components/SaaSBudgeting/simulatedBudgetMerge.ts — tri-source outer-join + anyWeekKeyMismatch

- screens/AWSSpend/components/SaaSBudgeting/useSimulatedBudget.ts — v4 storage + bedrock slot

## Out of scope (deferred)

- Re-classifying isBedrockAdjustment adjustment rows into the Bedrock column

- Toolbar toggle on the AWS Spend tab to flip Bedrock inclusion at runtime

- Changing how Bedrock cost is computed upstream

- Unmapped Bedrock Accounts sub-band

- Migrating pre-v4 simulated-budget snapshots (intentional hard cut)

## Test coverage

- Backend: 42 / 42 passing in tests/services/test_saas_budgeting_aws_spend_service.py + tests/routers/test_saas_budgeting_aws_spend_router.py. Includes new TestGetAWSNetAmortizedTableBedrockOnly class with 6 service tests (mode branches, row-drop edge cases, axis preservation, conservation invariant only + exclude = include) and 5 new router tests (tri-state forwarding, 422 on invalid mode, Bedrock-specific empty message).

- Frontend: 137 / 137 passing across SaaS Budgeting test files. Includes 10 new tests: 3 for buildGroups tri-source outer-join and 7 for anyWeekKeyMismatch (covering all pair permutations and absent-source cases). Existing tests updated for bedrockMode + v4 storage migration.
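The conservation invariant the service tests assert (only + exclude = include) is a simple property over row-level data. A minimal sketch, with illustrative row shapes and an assumed `is_ai` flag standing in for the real classifier:

```python
# Property sketch of the conservation invariant from the service tests:
# the 'only' and 'exclude' totals must sum to the 'include' total.
# Row shapes and the is_ai flag are illustrative, not the real schema.

rows = [
    {"cost": 120.0, "is_ai": True},    # e.g. a Bedrock line item
    {"cost": 80.0,  "is_ai": False},
    {"cost": 45.5,  "is_ai": True},
    {"cost": 300.0, "is_ai": False},
]

def total(rows: list[dict], mode: str) -> float:
    if mode == "include":
        return sum(r["cost"] for r in rows)
    if mode == "only":
        return sum(r["cost"] for r in rows if r["is_ai"])
    return sum(r["cost"] for r in rows if not r["is_ai"])  # 'exclude'

assert total(rows, "only") + total(rows, "exclude") == total(rows, "include")
```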

## Self-review

No issues found. Backend: SQL semantics correct (FILTER clause matches the cost - cost_excl_bedrock precedent), row-drop predicate handles all-None / all-zero / mixed edge cases, axis preserved in 'only' mode. Frontend: tab order correct (Bedrock between AWS Spend and Adjustments), column order correct, Bedrock variant correctly omits unmapped band, v3→v4 hard cut never reads stale keys, anyWeekKeyMismatch correctly returns false for 0/1 sources.

## Test plan

- [ ] Open SaaS Budgeting tab on AWS Spend dashboard, switch to Bedrock tab — verify table renders with Bedrock (Net Amortized) title, BU → Class → Account hierarchy, W{n} columns, Total ($) and Quarterly Projection ($) columns

- [ ] Click "Attach to Simulated Budget" on Bedrock tab — verify toast Attached N BU/Class rows to simulated budget (Bedrock)

- [ ] Verify Simulated Budget card renders Bedrock column between AWS Spend and Docker $

- [ ] Switch back to AWS Spend tab — verify it still excludes Bedrock (no regression)

- [ ] Verify CSV export from Bedrock tab uses filename bedrock-net-amortized-{quarter}

- [ ] Pre-v4 localStorage entries should be silently ignored (no crash, no migration prompt)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2726 — fix(qtd-report): color NP/EBITDA on $25K dollar gate alone @sanketghia  no labels

## Summary

The QTD report's variance-coloring AND-gate (|var| ≥ \$25K AND |pct − 100| ≥ 5pp) under-flags Net Profit / EBITDA on high-margin BUs. A material dollar miss reads as a small percentage gap against a large bottom-line baseline, so the row stays uncolored even when the parents (Total Revenue, Gross Profit) are red — exactly the visual contradiction the executive-scannability framing was meant to prevent.

This change adds a bottom_line flag to _color_for_line and _plan_status that drops the pp leg for NP / EBITDA / combined NP-EBITDA rows and the QTD Net Profit KPI tile only, keeping the \$25K dollar gate so trivial-noise suppression continues everywhere else.
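The gate change is easiest to see in code. A minimal sketch of the logic described above — the real _color_for_line lives in doc_builder.py and takes more context, so the signature and constants here are illustrative:

```python
# Illustrative sketch of the variance-coloring gate with the bottom_line flag.
# Thresholds match the PR text; the function shape is hypothetical.

DOLLAR_GATE = 25_000   # |variance| must be at least $25K everywhere
PP_GATE = 5.0          # |pct - 100| must be at least 5pp (dropped for bottom line)

def color_for_line(variance: float, pct_of_budget: float,
                   bottom_line: bool = False) -> str:
    """Return 'red'/'green' when material, else 'black' (uncolored)."""
    if abs(variance) < DOLLAR_GATE:
        return "black"                      # dollar gate still applies to NP/EBITDA
    if not bottom_line and abs(pct_of_budget - 100.0) < PP_GATE:
        return "black"                      # pp leg only for non-bottom-line rows
    return "red" if variance < 0 else "green"
```

With the GFI NP shape from the audit below (−$57,712 at a 2.72pp gap, i.e. 97.28% of budget), the old AND-gate returns black while `bottom_line=True` returns red, and a trivial sub-$25K variance stays black either way.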

## Audit across all 20 distributed Q2 FY2026 reports

| Unit | Row | Variance | pp gap | Before | After |
|---|---|---|---|---|---|
| GFI | NP | −\$57,712 | 2.72pp | black | red |
| GFI | EBITDA | −\$86,498 | 4.08pp | black | red |
| JigTree | NP | +\$86,656 | 1.71pp | black | green |
| JigTree | EBITDA | +\$82,287 | 1.62pp | black | green |

The other 18 reports render identically — every other NP/EBITDA cell either already cleared the AND-gate (still colored) or has variance \<\$25K (still correctly suppressed). Line items, section totals (Total Revenue / Total COGS / Gross Profit / Total Expenses), margins, KPI band Revenue tile, vendor / customer tables, action items, and LLM commentary are all unchanged.

## Why an OR-gate didn't work

A pure OR-gate (color when |\$| ≥ \$25K OR |pp| ≥ 5pp) re-introduces noise on CFs with budget ≤ 0 — the pp branch trivially passes (_color_for_line skips it for non-positive budgets), so a \$2K NP variance on COO Service or a \$692 variance on New Renewals would get colored. Dropping the pp leg only for explicitly-flagged bottom-line rows is the surgical fix.

## Visual verification

Fix-preview docs (skipped the ledger to keep the production UI clean):

- GFI — https://docs.google.com/document/d/1QmgHV5AiMJZ2IJvxoSl6hSe6Yqq9-m_6T4dXN7gAwKI/edit

- JigTree — https://docs.google.com/document/d/1wYlVttpJ756da27nnq9zKbqNPyqa8c8cCFt2fYEztyE/edit

Compare against the originals listed in bva-emails-list.csv — only the four NP/EBITDA cells and the QTD Net Profit KPI tile change colors; everything else matches at the rendering level.

## Test plan

pytest tests/monthly_qtd_report/test_doc_builder_format.py — 55 passed (38 pre-existing + 17 new for bottom_line):

Unit tests on _color_for_line (5): GFI shape colors red, JigTree shape colors green, dollar-gate floor still wins (sub-\$25K stays uncolored), default unchanged for non-bottom-line rows, internal-suppression precedence holds.

Unit tests on _plan_status (2): GFI NP KPI tile colors red under bottom_line=True; sub-\$25K dollar gap still suppressed.

Paired KPI ↔ row consistency invariant (5 parametrized cases): for every (budget, actual) shape — GFI EBITDA, JigTree NP, IgniteTech NP, sub-materiality, exactly \$25K — _color_for_line(bottom_line=True) and _plan_status(bottom_line=True) must return matching verdicts. Locks the past < vs <= divergence-bug class (cited in the diff comment at doc_builder.py:496-499) against recurrence.

5pp boundary cases (3 parametrized): at pp = 4.999, 5.0, 5.001 with \$2M baseline. Forces "pp gate REMOVED" semantics — a regression that kept the pp leg but flipped the comparator from < to >= would pass the existing GFI/JigTree shape tests (4.08pp / 1.71pp) but fail at 5.001pp.

End-to-end through _build_pnl_rows_and_data (2):

- 4pp / −\$40K shape → bottom-line row colors red, Total Revenue / Gross Profit / Recurring Revenue line item stay uncolored. Verifies bottom_line=True is threaded only through the NP/EBITDA call sites of _total_keys and that the closure chain (_total_keys → _line_keys → _apply_color → _color_for_line) correctly forwards the keyword.

- 20pp / −\$200K shape → all relevant rows color red. Regression guard against a buggy pass-through that swallows the keyword for material variance.

Other gates run: ruff format, ruff check, full tests/monthly_qtd_report/ (326 passed).

The Builder Desk  —  Engineer Spotlight
🏆 Engineer Spotlight

TWELVE HAMMERS, THREE REPOS, ZERO DAYS OFF: THE BUILDER TEAM DOES IT AGAIN

Klair alone logged eight PRs in 24 hours — call it velocity, call it a religion.

Twelve pull requests. Three repositories. Twenty-four hours on the clock. The Builder Team did not come to negotiate. Klair absorbed eight of the twelve PRs like the hungry infrastructure beast it is, Aerie chipped in three, and Surtr — quiet, dependable Surtr — posted one that mattered. Seven of those twelve PRs went unspotted by the narrative desk, which means Mac Donnelly left seven diamonds on the cutting room floor. That's where I come in.

Let's talk engineers. @sanketghia led the house with three PRs, touching auth flows and QTD reporting with the calm efficiency of a man who has never once missed a sprint. @marcusdAIy dropped two Budget Bot 4.0 installments in a single rotation — the man is practically writing the bot's autobiography in real time. @benji-bizzell, @kevalshahtrilogy, @eric-tril, @YibinLongTrilogy, and @mwrshah each posted one PR, which in this organization is the equivalent of a normal person running a half-marathon before breakfast. One PR here is a statement. These people do not idle.

And then there is @ashwanth1109. Two PRs. Just two. And yet somehow the man managed to implement full Kubernetes support across ingestion, usage tab, cost, and simulated budget in PR #2690 — covering specs 14 through 17 in a single merge — and then followed it up with PR #2724, dropping a brand-new Bedrock tab and a Simulated Budget column into the SaaS Budgeting feature like he was leaving a tip. The diffs are, by all accounts, the length of a short novel and twice as dense. When I asked Ashwanth whether anyone could actually read his Kubernetes implementation in one sitting, he reportedly looked up from his terminal and said, "I don't write for readers. I write for the merge queue." He then returned to typing. His dismissal of this reporter was, as always, total and immediate.

Now, the overflow. PR #48 in Surtr saw @kevalshahtrilogy wire environment variables, add transactional initial data load, and install a freshness guard on Kubera — one PR doing the work of three, the way God intended. @sanketghia's PR #2728 restored the Request Access flow on the Klair login screen, a fix that will be invisible to users in the best possible way, and PR #2725 delivered case-insensitive vendor and customer grouping in the QTD report — small, surgical, correct. @marcusdAIy's PR #2731 pushed Budget Bot 4.0 forward with automated tab discovery and Top Level View parsers, while PR #2730 completed a durability sweep and shipped the first version of the comment viewer. @YibinLongTrilogy cleaned up capacity terminology in Aerie PR #164 with the quiet confidence of someone who knows that naming things correctly is half the battle. And @mwrshah's PR #2718 handled pain point lifecycle dates and ripped out theme product and BU prefixing — the kind of unglamorous work that makes everything else possible.

Morale on the Builder Team is, by every available metric, at an all-time high. The numbers do not lie. The engineers do not rest. The Trilogy Times does not look away.

Brick's Overflow — PRs Mac Didn't Cover
#48 — fix(kubera): wire env vars + transactional initial_data_load + freshness guard @kevalshahtrilogy  no labels

## Summary

- Wire missing env vars in pipeline.json (S3_DUMP_PATH, S3_TEMP_PATH, IAM_ROLE) so override_all_assets=true can actually load CSVs from S3. Values taken from the still-deployed Klair Lambda PassiveInvestmentsCron.

- Move ALPHA_VANTAGE_API_KEY to AWS Secrets Manager (surtr/kubera-config) — fetched lazily and cached per invocation via a small helper. The IAM grant was already in pipeline.json. Overview-only invocations never touch Secrets Manager.

- Replace TRUNCATE with DELETE FROM and wrap initial_data_load in with redshift.transaction():. TRUNCATE implicitly commits in Redshift (per AWS docs) — the failure that emptied 4 prod tables today wouldn't have rolled back even with a transaction wrapper. DELETE is rollback-safe.

- Add explicit column lists on the trades and debt COPYs to skip the IDENTITY column 1 (trade_id / debt_id). The CSVs don't include identity values, so the previous COPY tried to load a date into a bigint and failed with Invalid digit, Type: Long. Klair has the same latent bug (Step 1 never runs there either).

- Fix the freshness guard added in #38 — its WHERE holding_date = CURRENT_DATE OR holding_date = MAX(...) clause counted MAX-date rows too, so today_rows was zero only when the table was completely empty. Replaced with a CASE-aggregate over the whole table plus a SUM(holding_value) > 0 check so stale-but-non-empty *and* zero-valued tables also fail loudly.

- Add s3:PutObject / s3:DeleteObject on the temp prefix for Step 4's upload_df_to_redshift_via_s3 helper.
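The TRUNCATE-vs-DELETE point generalizes beyond Redshift, and is easy to demonstrate. A sketch using sqlite3 as a stand-in engine (Redshift's TRUNCATE implicitly commits, so a later COPY failure could not undo it; DELETE participates in the transaction and rolls back):

```python
# Demonstrates why DELETE inside a transaction is rollback-safe. sqlite3 stands
# in for Redshift here; the pattern is the point, not the engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT)")
conn.executemany("INSERT INTO trades (symbol) VALUES (?)", [("AAPL",), ("MSFT",)])
conn.commit()

try:
    with conn:  # transaction scope: commit on success, rollback on exception
        conn.execute("DELETE FROM trades")
        raise RuntimeError("simulated COPY failure mid-load")
except RuntimeError:
    pass

# The DELETE rolled back along with the failed load; no rows were lost.
count = conn.execute("SELECT COUNT(*) FROM trades").fetchone()[0]
print(count)  # 2
```

Had the first statement been an implicit-commit operation like Redshift's TRUNCATE, the table would already be empty by the time the failure propagated — which is exactly the incident described below.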

## Why now

Triggered an on-demand override_all_assets=true run today (2026-05-05) to verify the OVERVIEW_ONLY "Portfolio Value: \$0" silent failure. Lambda async-retried 3x, each time truncating 4 underlying tables and failing at the COPY (env vars unset → bogus path / empty IAM_ROLE). Recovered from a 3-min-pre-damage Redshift automated snapshot via restore-table-from-cluster-snapshot + DROP/RENAME.

## Test plan

- [x] Local Phase 1 (happy path) — created _localtest shadow tables via CTAS from the recovered originals; ran the fixed DELETE + COPY-with-column-lists pattern through the actual RedshiftHandler.transaction() (IAM auth, same path as Lambda). Transaction committed; loaded 102 trades / 253 stock_price / 42 debt rows. hot stayed at 0 as expected.

- [x] Local Phase 2 (rollback) — a real COPY failure (the column-mismatch error before adding column lists) propagated cleanly out of the with block. All four shadow tables retained sentinel counts (119/17418/43/69708). Rollback verified.

- [x] Secrets Manager helper smoke-tested — fetches the right key, caches on second call.

- [ ] After merge, take a manual Redshift snapshot of redshift-cluster-1 as insurance.

- [ ] Trigger pipeline-kubera-passive-investments-prod with params.override_all_assets=true (use RequestResponse invocation type to avoid AWS auto-retries on failure). Expect Steps 1–4 to populate trades / stock_price / debt / hot, then Step 5 to refresh the 5 cache tables with non-zero portfolio metrics.

- [ ] Verify holding_over_time.MAX(holding_date) = CURRENT_DATE post-run; future overview-only runs should pass the freshness guard.

## Scope notes

- Klair's klair-udm/kubera/run_investment_pipeline.py has the same TRUNCATE-not-rollback-safe and CSV-column-mismatch bugs but is being deprecated, so leaving it as-is.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2690 — feat(saas-budgeting): full Kubernetes support — ingestion, usage tab, cost, simulated budget (specs 14–17) @ashwanth1109  no labels

## Demo

<img width="2199" height="1644" alt="image" src="https://github.com/user-attachments/assets/c3ece8ef-46c3-4a94-844b-9e4722945e33" />

## Summary

End-to-end Kubernetes support for SaaS Budgeting, mirroring the Docker pipeline. After merging main (which shipped KLAIR-2608, the v3 slots-keyed simulated-budget storage model, plus its own spec 13 tab-scoped-attach-button), the branch's k8s specs were renumbered 13–16 → 14–17 and spec 17 was rewritten on top of the new slots model.

- Spec 14 — feat(saas-budgeting): K8s ingestion pipeline + schema

- aws-saas-budget-scripts/pipeline/: introduce UnitTypeConfig-driven dispatch in drive_client.py (DOCKER_CONFIG + K8S_CONFIG). New pipeline/k8s_ingest.py mirrors docker_ingest.py; registered in main.py so --ingest k8s (or --ingest all) routes through it. INGEST_ORDER_FOR_ALL = ("mappings", "databases", "docker", "k8s").

- Schema: add NULL-able memory_limits_used_gbs DOUBLE PRECISION to core_finance.aws_spend_saas_budget_unit_consumption. Existing docker rows hold NULL there; new k8s rows hold NULL in memory_max_sum_gb_last_7_days (exactly one of the two metric columns is populated per row).

- New scripts/sql/alter_aws_spend_saas_budget_unit_consumption_add_k8s_metric.sql migration for the prod runbook (operator runs once before first k8s ingest).

- Two spec-12 followup fixes folded in: --min-year filter to skip the historical Drive backlog, and Drive duplicate (year, week) resolution by newer modifiedTime instead of hard-failing.

- Spec 15 — feat(saas-budgeting): unit_type-parameterized backend + Kubernetes Usage tab

- Backend: drop hardcoded UNIT_TYPE = "docker" constant; thread unit_type: Literal['docker','kubernetes'] (default docker for back-compat) through /quarters, /weeks, /table. Service projects the appropriate metric column via _MEMORY_COLUMN_BY_UNIT_TYPE. Cache keys partition by unit_type.

- Frontend: generalize useSaaSBudgetingTable and useSaaSBudgetingQuarters with optional unitType last arg. SaaSBudgetingTable becomes a unitType-driven dispatcher.

- New 4th tab Kubernetes Usage in SaaSBudgetingSection.tsx (order: AWS Spend → Adjustments → Docker Usage → Kubernetes Usage). Both panels stay mounted via hidden={...} so toggling tabs preserves card state.

- Spec 16 — feat(saas-budgeting): Kubernetes cost backend + Tax exclusion

- New /api/aws-spend/saas-budgeting/kubernetes-cost endpoint, symmetric to /docker-cost. Shared _get_cost_by_week_for_tag helper in cost_explorer_service.py consolidates Docker + K8s Cost Explorer fetches and excludes the Tax RECORD_TYPE so allocated totals match the rest of AWS Spend.

- K8s Usage tab gets the full Docker treatment: Pin (Attach) + DollarSign (Fetch Cost & Allocate). kubernetesCostAllocation.ts exposes extractBuClassKubernetesCosts; allocation math reuses the unit-type-agnostic helpers in dockerCostAllocation.ts (computeAllocation, decorateSectionsWithCostAllocation, decorateFlatRowsWithCostAllocation).

- New useSaaSBudgetingKubernetesCost hook (storage key aws-spend.saas-budgeting.kubernetes-cost.v1) parallels the Docker hook.

- Spec 17 — feat(saas-budgeting): attach Kubernetes to simulated budget (rewritten on top of v3 slots model)

- Extend SlotId union: 'awsSpend' | 'docker' | 'kubernetes' | 'adjustments'. K8s tab Attach writes a snapshot to budget.slots.kubernetes, no migration needed.

- simulatedBudgetMerge.ts: outer-join now spans four sources. BU ordering: AWS first-appearance → Docker-only → Kubernetes-only → adjustment-only. Per-cell Total = sum of non-null source values; all-null collapses to null.

- SimulatedBudgetCard.tsx: new Kubernetes $ column between Docker and Adjustments. SLOT_ORDER updated; staleness chip + per-source captions cover the new slot.
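The per-cell rule in the four-source outer-join (Total = sum of non-null source values; all-null collapses to null) is worth pinning down, since null and zero render differently. A hypothetical Python analog of the TypeScript simulatedBudgetMerge logic — slot names match the SlotId union above, everything else is illustrative:

```python
# Hypothetical Python analog of simulatedBudgetMerge's per-cell rule:
# sum the non-null source values; a cell where every source is null stays null.
from typing import Optional

SLOT_ORDER = ("awsSpend", "docker", "kubernetes", "adjustments")

def cell_total(values: dict[str, Optional[float]]) -> Optional[float]:
    present = [values[s] for s in SLOT_ORDER if values.get(s) is not None]
    return sum(present) if present else None

def merge_rows(sources: dict[str, dict[str, float]]) -> dict[str, Optional[float]]:
    """Outer-join BU keys across all four slots; missing slots contribute null."""
    keys = {bu for slot in sources.values() for bu in slot}
    return {
        bu: cell_total({s: sources.get(s, {}).get(bu) for s in SLOT_ORDER})
        for bu in sorted(keys)
    }
```

For example, `merge_rows({"awsSpend": {"Alpha": 100.0}, "kubernetes": {"Beta": 40.0}})` yields both BUs with their single-source totals — a Kubernetes-only BU appears rather than being dropped, which is the point of the outer join.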

## Test plan

- [ ] Operator: apply aws-saas-budget-scripts/scripts/sql/alter_aws_spend_saas_budget_unit_consumption_add_k8s_metric.sql against staging Redshift (ADD COLUMN is metadata-only, no backfill required)

- [ ] Operator: dry-run the k8s ingest against staging — uv run python -m pipeline.main --ingest k8s --dry-run

- [ ] Hit /api/aws-spend/saas-budgeting/quarters?unit_type=kubernetes from a super-admin session and verify it returns k8s-only quarters

- [ ] Open the SaaS Budgeting page, click the Kubernetes Usage tab, confirm the empty-state when no k8s data is ingested for the applied quarter

- [ ] After ingesting at least one k8s week, confirm the Kubernetes Usage tab renders the table with the "Total Mem Limits (GB)" column header and both Pin (Attach) and DollarSign (Fetch Cost & Allocate) buttons present

- [ ] Click DollarSign → confirm /api/aws-spend/saas-budgeting/kubernetes-cost returns and the allocated cost decorations appear inline

- [ ] Click Pin on the K8s tab → confirm the Simulated Budget card now shows a Kubernetes $ column populated for the attached BU/class rows; staleness chip clears

- [ ] Confirm the Docker Usage tab still renders unchanged (Pin / Attach + Fetch Cost & Allocate buttons present, totals unaffected)

- [ ] pnpm lint:pr clean; pnpm tsc --noEmit clean; pnpm test green; pytest tests/saas_budgeting/ green; pytest aws-saas-budget-scripts/tests/ green

🤖 Generated with [Claude Code](https://claude.com/claude-code)


#2725 — fix(qtd-report): case-insensitive vendor/customer grouping @sanketghia  no labels

## Summary

Per Raviraja's review of the 2026-04 QTD batch, case-only spelling drift between Budget (Abacum free text) and Actual (NetSuite) sides was producing phantom budget-only / actual-only sister rows in the QTD variance tables. The most visible case: the SaaS Q2 Perplexity entry — $1,360 budget on Perplexity Ai never joined the $76,737 actual on Perplexity AI, so the report appeared to show a missing budget.

This PR groups + FULL OUTER JOINs both vendor- and customer-side variance queries on LOWER(TRIM(vendor)) instead of raw vendor. MAX(vendor) inside each CTE picks a deterministic display name.

Approved by Raviraja Rao on the 2026-04-30 SaaS sample report.
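The fix is the classic case-folded-key-plus-deterministic-display-name pattern. A Python analog of the SQL change, using the Perplexity figures above — the data and dict shapes are illustrative, and `max()` plays the role of `MAX(vendor)` in the CTEs:

```python
# Python analog of the SQL fix: group on LOWER(TRIM(vendor)) and pick a
# deterministic display name with max(), mirroring MAX(vendor) in each CTE.
from collections import defaultdict

budget = {"Perplexity Ai": 1_360.0}   # Abacum free-text spelling
actual = {"Perplexity AI": 76_737.0}  # NetSuite spelling

def fold(name: str) -> str:
    return name.strip().lower()       # the LOWER(TRIM(...)) join key

groups = defaultdict(lambda: {"names": set(), "budget": 0.0, "actual": 0.0})
for name, amt in budget.items():
    groups[fold(name)]["names"].add(name)
    groups[fold(name)]["budget"] += amt
for name, amt in actual.items():
    groups[fold(name)]["names"].add(name)
    groups[fold(name)]["actual"] += amt

rows = [
    {"vendor": max(g["names"]), "budget": g["budget"], "actual": g["actual"],
     "coverage_kind": "both" if g["budget"] and g["actual"] else "one-sided"}
    for g in groups.values()
]
```

With the raw-string key, the two spellings would produce two one-sided phantom sister rows; with the folded key they collapse into a single `coverage_kind='both'` row, matching the live Redshift check below.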

## Files changed

- klair-api/services/monthly_qtd_report/data.py — both _VARIANCE_DRIVERS_QUERY (NHC COGS/OPEX vendors) and _REVENUE_CUSTOMER_VARIANCE_QUERY (Recurring/Non-Recurring Revenue customers) updated. Phantom-sister-row logic in the coverage_kind CASE and the FULL OUTER JOIN switched from raw vendor to the case-folded vendor_key / customer_key.

- klair-api/tests/monthly_qtd_report/test_data.py — two regression tests lock the SQL contract: case-folded GROUP BY, case-folded FULL OUTER JOIN, and case-folded coverage_kind NULL checks for both vendor and customer queries.

## Verification

- pytest tests/monthly_qtd_report/ — 311 passed, 0 failed.

- ruff format + ruff check — clean.

- pyright services/monthly_qtd_report/data.py — clean.

- Live Redshift sanity check (SaaS / April 2026): Perplexity Ai Budget=$1,360 and Perplexity AI Actual=$76,737 now report as a single coverage_kind='both' row instead of two phantom sister rows. The Q2 Luxembourg case-twin pair (Amazon Web Services EMEA SARL (Luxembourg) vs Amazon Web Services Emea Sarl (Luxembourg)) also merges correctly on the budget side.

- Sample regenerated SaaS report: https://docs.google.com/document/d/1MK4sN166U7Q2ZTECWMIVNKQ3RAlxgJ3yu2Bup6wxvjQ/edit — Perplexity now appears as one merged variance line, and the LLM commentary surfaces the AWS / AWS-Luxembourg net interplay.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2728 — feat(auth): KLAIR-2610 restore Request Access flow on login screen @sanketghia  no labels

Linear: [KLAIR-2610](https://linear.app/builder-team/issue/KLAIR-2610)

## Summary

The login screen lost its "Create Account" button in commit 446d5498b (April 2026 design-system migration PR #2455). The SignUpModal component, its API service, types, hook, and backend routes (/admin/submit-access-request, /admin/public/teams) were all left intact — only the trigger button and modal mount were removed. New users land on the auth screen with no in-product way to request access.

This PR restores the flow:

- withAuth.tsx — re-mounts <SignUpModal> and adds a "Request Access" secondary CTA with a "New to Klair?" divider beneath the Sign In button. The modal's onSignInRequested (the "already registered → Go to Sign In" recovery path) is wired to Clerk's openSignIn() so it actually launches the sign-in modal.

- SignUpModal.tsx — restyled to use klair-* design tokens (replacing hardcoded slate/blue/indigo classes from before the redesign). The submit button now uses the same flat --klair-accent styling as Sign In, validation states use --klair-positive / --klair-negative, and surfaces use --klair-bg-card / --klair-bg-surface1 so it looks right in both light and dark themes. Also added role="dialog" / aria-modal / aria-labelledby and replaced deprecated onKeyPress with onKeyDown.

- Tests — new SignUpModal.spec.tsx (8 cases) covering render gating, validation, submit happy/error paths, "Go to Sign In" recovery, and close behavior. withAuth.spec.tsx and the global Vitest mock setup get useClerk.openSignIn added.

## Verification

- pnpm tsc --noEmit clean

- eslint --max-warnings 0 clean on all 5 changed files

- Vitest: 5/5 withAuth + 8/8 SignUpModal pass

- Manual browser smoke test:

- Auth screen renders Sign In + Request Access in light + dark mode

- Modal opens, fetches teams, validation works (red on invalid blur, green on valid)

- Submitted a real access request manually — flow works end-to-end through the live backend

## Screenshots

<img width="1039" height="840" alt="image" src="https://github.com/user-attachments/assets/43bb7027-6a4a-4308-ac2f-1a338307051f" />

<img width="854" height="814" alt="image" src="https://github.com/user-attachments/assets/84aab76d-9348-41f5-986a-e380207c625c" />

## Test plan

- [ ] Verify Sign In is the visual primary CTA and Request Access reads as secondary

- [ ] Open modal, submit invalid email, confirm error styling uses klair tokens (not slate/red Tailwind)

- [ ] Submit a valid request, confirm success state renders with the request details panel

- [ ] Trigger an "already registered" response, click "Go to Sign In", confirm Clerk sign-in modal opens

- [ ] Toggle dark mode, confirm modal + buttons remain readable

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2731 — Budget Bot 4.0: C1.1/3/4/6 — automated tab discovery + Top Level View / Benchmark by Product parsers @marcusdAIy

## Screenshots

<img width="1209" height="774" alt="image" src="https://github.com/user-attachments/assets/4af3c737-5644-4a45-a7e0-90e6e73159b0" />

<img width="1025" height="660" alt="image" src="https://github.com/user-attachments/assets/2505a32d-a781-4338-bb61-e676418dedb5" />

## Summary

- Automates the manual sheet-tab verification pass that was blocking C1.3 - C1.5 with a repeatable c1_discover_tabs.py script (the May 5 re-run caught a factual error in the original C1.1 audit, so this isn't paranoia).

- Ships the two real net-new GSheets parsers the discovery identified: get_top_level_view_bu_plans_table (rollup-only) and get_benchmark_by_product_table (BU-only). C1.5 (Benchmark Slide / Benchmark workings) is now superseded — see audit §6c.

- Together unblocks 11 downstream review checks: C2.7 + C2.8 (BU-vs-Hybrid divergence + FY trajectory), and the 9 per-product benchmark checks C3.1 - C3.9.

## Why it's needed

Phase C (Memorial Day Review Agent MVP) is gated on these parsers — without them, 11 of the 17 outstanding review checks have no data source. The original audit superseded C1.2 ("Margin Target tab") on the assumption it wasn't a real tab, and gated C1.3 - C1.5 on a manual click-through verification pass that nobody had done. The May 5 automated discovery (commit 1) showed:

1. The audit's assumption about Margin Target was wrong — it does exist as a tab. The net effect on this PR is small (existing P&L row extraction still satisfies current C2.x needs), but the correction is recorded as audit §6c so a future per-class margin check can use the right source.

2. Top Level View - BU Plans exists only on rollup BUs (Skyvera in our sample). IgniteTech / GFI Group / Crossover do not have the tab. C2.7 + C2.8 must therefore gate on is_rollup_bu at the orchestrator layer — flagged in the backlog row updates.

3. Benchmark Slide / Benchmark workings are derivatives of Benchmark by Product ($ rollup + underlying allocation calc). Not needed for the % comparison checks in C3.x. C1.5 is now marked superseded.

The Top Level View - BU Plans tab also has an unusual column layout — col A is the label / section title, not a one-cell filler like every other tab in the sheet. The first live smoke run caught the mistake (parser returned None on Skyvera) before any check code consumed the parser, which is exactly what this verification harness is for.

## Changes

Commit 1 — klair-api/scripts/c1_discover_tabs.py (C1.1)

- Repeatable tab-discovery script using the same _get_dynamic_sheet_url + gspread client the wizard uses.

- Defaults to Skyvera / IgniteTech / GFI Group / Crossover Q2'26.

- 1.0s default --preview-sleep keeps a 4-target run under the 60-reads/min/user Sheets quota (a 0s run hit a 429 on GFI mid-loop).
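The throttling pattern above can be sketched in a few lines. This is an illustrative Python sketch of the idea, not the actual c1_discover_tabs.py code; preview_tabs and fetch_preview are hypothetical names:

```python
import time

def preview_tabs(worksheets, fetch_preview, sleep_s=1.0):
    """Fetch a small preview of each tab, pausing between reads.

    A 1.0s sleep per read keeps a run at or under the Google Sheets
    60-reads/min/user quota ceiling; a 0s run can hit HTTP 429 mid-loop.
    """
    previews = {}
    for i, title in enumerate(worksheets):
        if i > 0:
            time.sleep(sleep_s)  # throttle between consecutive reads
        previews[title] = fetch_preview(title)
    return previews
```

The key design point is sleeping between reads rather than after each one, so a single-tab run pays no penalty.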

Commit 2 — parsers + 31 new tests (C1.3 + C1.4)

- services/budget_sheets_service.py:

  - _find_benchmark_by_product_table + get_benchmark_by_product_table (CF-gated, returns section-tagged rows so callers can disambiguate Engineering/Product across central / edge / total sections).

  - _find_top_level_view_bu_plans_table + get_top_level_view_bu_plans_table (CF-gated; non-rollup BUs return None via WorksheetNotFound rather than raising).

  - Shared _looks_like_quarter_row helper to defend against false-positive group-header detection.

- tests/board_doc/test_benchmark_by_product.py — 16 tests parametrized across Skyvera (23-col rollup), IgniteTech (57-col wide), GFI Group (13-col narrow) shapes, plus header / padding / truncation / CF gate / rate-limit-propagation cases.

- tests/board_doc/test_top_level_view_bu_plans.py — 15 tests covering multi-block parsing, section-title disambiguation (with and without trailing colon), false-positive defenses, and missing-tab graceful degradation for non-rollup BUs.
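The false-positive defense named above can be illustrated with a minimal sketch. This is not the real _looks_like_quarter_row implementation — just one plausible shape of the heuristic, assuming quarter headers look like Q2'26:

```python
import re

# A row qualifies as a quarter-header row only if every non-empty cell
# matches a Q#'YY label, so a bold group-header row with arbitrary text
# (e.g. "Revenue by BU") cannot be mistaken for one.
QUARTER_RE = re.compile(r"^Q[1-4]'\d{2}$")

def looks_like_quarter_row(cells):
    """True if the row's non-empty cells all look like quarter labels."""
    labels = [c.strip() for c in cells if c.strip()]
    return bool(labels) and all(QUARTER_RE.match(c) for c in labels)
```

Requiring *all* non-empty cells to match (rather than any one) is what keeps mixed label/text rows out.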

Commit 3 — klair-api/scripts/c1_smoke_parsers.py (C1.6)

- Cross-BU live verification harness for both new parsers.

- Latest run: 0 mismatches across all 4 (BU × parser) combinations.

Backlog (local / .cursor, untracked):

- BACKLOG-budget-bot-4.md — C1.1 marked DONE with verification appendix; C1.2 stays SUPERSEDED with audit-correction note; C1.3 / C1.4 / C1.6 marked TODO → ready-to-ship; C1.5 marked SUPERSEDED. C2.7 / C2.8 rows updated with the rollup-only constraint + dependency on C1.3.

- research/budget-bot-c1-audit.md — added §5 status update (manual pass replaced) and §6 with the verified tab matrix, structures, audit corrections, and revised attack order.

## Breaking changes

None. Both new methods are additive and CF-gated; no existing callers reference them yet.

## Cross-BU validation (all 8 real BUs + 13 CFs)

Expanded the C1.6 smoke from the initial 4-BU sample to every real BU + every CF in the enum. 0 mismatches across 21 entities. Three real cross-BU drifts surfaced — all absorbed by the parsers; smoke output flags them so the reviewer sees what's being papered over:

1. Totogi has the Top Level View - BU Plans tab too (single-section, partial). Originally framed as "Skyvera-only" / "rollup-only"; that framing turned out to be an artifact of the 4-BU sample. Parser docstring + audit + C2.7/C2.8 backlog rows now describe the variable-presence pattern (gate on result is not None, not on a hardcoded rollup list).

2. GFI's total section uses Engineering (not Engineering/Product). Smoke anchor extractor accepts a list of label variants and surfaces the matched one (Engineering/Product → Engineering) so the reviewer sees the variance.

3. Totogi's TLV header carries stale year labels: Current BU Plan - 2026 shows Q2'25 for what should be Q2'26. Smoke does a quarter-number-only fallback match and prints a "matched stale year label" note rather than silently treating the wrong column as the right one.

Plus minor observations: 12/13 CFs short-circuit via the CF gate; Central PS has no DDB row for Q2'26 (skipped); Cloudfix + Totogi Margin rows are #DIV/0! — both sheets are mid-draft for Q2'26 (parser preserves cell text verbatim). Contently / Canopy alias resolution works end-to-end.
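The stale-label fallback described in drift 3 can be sketched roughly as follows. match_quarter_column is a hypothetical name; the real smoke logic may differ in shape:

```python
import re

def match_quarter_column(headers, quarter, year2):
    """Return (index, note) for the column matching Q<quarter>'<year2>.

    Prefer an exact label match; if only the quarter number matches
    (a stale year label), still select the column but surface a note
    instead of silently treating it as a clean match.
    """
    exact = f"Q{quarter}'{year2}"
    for i, h in enumerate(headers):
        if h.strip() == exact:
            return i, None
    for i, h in enumerate(headers):
        m = re.search(r"Q(\d)'(\d{2})", h)
        if m and int(m.group(1)) == quarter:
            return i, f"matched stale year label: {h.strip()}"
    return None, None
```

Returning the note alongside the index lets the caller print the warning while still proceeding, which matches the "flag, don't hide" behavior described above.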

## Test plan

- [x] uv run pytest tests/board_doc/test_benchmark_by_product.py -q — 16/16 passing.

- [x] uv run pytest tests/board_doc/test_top_level_view_bu_plans.py -q — 15/15 passing.

- [x] uv run pytest tests/board_doc/ -q — 1050/1050 passing locally (no regressions in adjacent suites).

- [x] uv run ruff check + uv run ruff format --check — clean on changed files.

- [x] Live smoke uv run python scripts/c1_smoke_parsers.py --year 2026 --quarter 2 --show-anchors — 0 mismatches across all 8 real BUs + 13 CFs (~50s runtime with 1.5s sleeps).

- [x] Live discovery uv run python scripts/c1_discover_tabs.py --year 2026 --quarter 2 --preview-sleep 1.2 — completes cleanly under quota, all 4 targets.

- [x] Spot-checked Skyvera anchor cells against the live sheet:

- Benchmark by Product: summary/Margin Target 75/75, summary/Margin 75/60, total/Engineering/Product 9.5/9.

- Top Level View - BU Plans: Skyvera Overall Q2'26 → TR 13.2 / EBITDA 8.0 / Net Margin 60% / Margin Target 63%.

- [x] Reviewer to spot-check the Skyvera anchors above against the live sheet to confirm the section/label mapping (sheet URL is logged in the smoke output).

## Follow-ups

- C2.7 + C2.8 implementation needs an orchestrator gate on get_top_level_view_bu_plans_table(...) is not None (plus probably a section-count threshold to skip thin Totogi-style blocks). Documented in the C2.7 / C2.8 backlog rows.

- The WorksheetNotFound "consecutive failures" alert path fires noisily when the smoke iterates non-rollup BUs (expected — the tab is missing for 6 of 8 BUs). Consider exempting not_found from the consecutive-failure counter so a routine cross-BU sweep doesn't trigger CRITICAL alerts. Out of scope here; small follow-up.

- C1.7 (column-inversion claim on the global preprocessed sheet) — not touched; this PR did not include the preprocessed sheet in discovery. Re-run discovery with --include-preprocessed if that becomes the priority. Discovery here did not surface contradicting evidence.

- If a future check needs per-class margin detail (Vertical / BU / Class quarterly target $), the standalone Margin Target tab is the right source — file as a fresh parser ticket then. Documented in audit §6c.
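The orchestrator gate proposed in the first follow-up could look roughly like this. should_run_tlv_checks and the section-count threshold are hypothetical, not the real orchestrator API:

```python
def should_run_tlv_checks(sections, min_sections=2):
    """Gate C2.7/C2.8 on parser output, not a hardcoded rollup-BU list.

    Skips the checks when the Top Level View tab is absent (parser
    returned None) or the parsed block is too thin to be meaningful
    (e.g. a single-section, partial Totogi-style block).
    """
    return sections is not None and len(sections) >= min_sections
```

Gating on the parsed result rather than BU identity is what keeps the check correct as the tab's presence pattern drifts across BUs.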

The Portfolio  —  Trilogy Companies

Skyvera Adds CloudSense, Tightening Its Grip on Telco’s Cloud-Native Future

The Salesforce-native CPQ and order management platform gives Skyvera a sharper monetization edge as telecom operators modernize legacy stacks.

AUSTIN, TEXAS — Skyvera has completed its acquisition of CloudSense, adding a Salesforce-native configure-price-quote and order management platform to its growing telecom software portfolio — an exciting development for operators trying to make legacy business support systems play nicely with modern cloud infrastructure.

The deal brings CloudSense into the Skyvera family, where it will sit alongside a suite of telecom-focused assets including Kandy, VoltDelta, ResponseTek, Mobilogy Now and Service Gateway. For telecom and media providers, CloudSense is built around a very specific pain point: turning complex product catalogs, bundled offers, pricing rules and order flows into something sales and operations teams can actually leverage without creating operational chaos.

That matters because telecom transformation is not merely about moving workloads to the cloud. It is about re-architecting the commercial engine — quote, sell, provision, bill, support — in a market where customers expect consumer-grade speed and enterprises still depend on deeply customized contracts. CloudSense gives Skyvera a best-in-class wedge into that workflow, particularly through Salesforce-native CPQ and order management capabilities.

Skyvera, part of the broader Trilogy International universe, has been steadily positioning itself as a bridge between legacy on-premise telecom infrastructure and cloud-native operating models. The company’s acquisition history reflects a robust thesis: telecom operators do not rip and replace overnight, but they do need modular software that can modernize monetization, engagement and device lifecycle management without detonating existing systems.

The CloudSense acquisition also lands in a portfolio that already includes STL’s divested telecom products group, which brought digital BSS functionality spanning monetization, optical networking and analytics. That creates meaningful synergy: CloudSense can strengthen the front-end commercial layer while other Skyvera assets support communications, analytics and back-office transformation.

In its announcement, Skyvera said the acquisition expands its telecom software portfolio, and the strategic subtext is clear. The company is assembling the components for an AI-powered telco operating model — one that can help carriers price faster, launch offers faster and manage increasingly complex customer journeys with fewer manual handoffs.

Key Takeaways:

- Skyvera has completed its acquisition of CloudSense, a Salesforce-native CPQ and order management platform for telecom and media.

- The deal expands Skyvera’s telecom software portfolio and strengthens its digital BSS and monetization capabilities.

- CloudSense complements Skyvera assets including Kandy and the acquired STL telecom products group.

- The move supports Skyvera’s broader push to help telecom operators modernize from legacy systems to cloud-native, AI-enabled operations.

For operators staring down margin pressure, customer churn and infrastructure debt, this is not just another tuck-in acquisition. It is a shift in how telco transformation can be packaged, sequenced and scaled.

CloudSense  ·  Skyvera completes acquisition of CloudSense, expanding telec  ·  STL Divested Assets

Alpha School’s Latest Lesson: The Kids May Be Running the Room

Alpha School, an AI-powered private K-12 venture founded by Joe Liemandt and MacKenzie Price, is quietly reshaping education by giving students genuine authority over classroom rules, rewards, and consequences. Recent school publications argue that agency strengthens like a muscle when students help design the systems they inhabit, and that confidence is a teachable skill rather than an innate trait.

The school's model dedicates two hours to adaptive academic work each morning, then frees afternoons for entrepreneurship, leadership, public speaking, and athletics. A visiting public school educator noted that when AI handles repetitive academic tasks, children demonstrate far greater capacity for independence and reflection than traditional schedules assume.

Alpha charges $40,000 to $65,000 annually, betting that parents will invest in an education centered on student agency and self-directed learning rather than conventional instruction.

The $500K No-Résumé Paradox: What AI's Talent Gold Rush Means for How Trilogy Finds Its People

As OpenAI posts half-million-dollar jobs with no résumé required, the logic behind Crossover's skills-first hiring model looks less radical — and more inevitable — than ever.

AUSTIN, TEXAS — The number stopped people mid-scroll: $500,000. No résumé required. That was OpenAI's offer, reported this week by Forbes, for roles demanding demonstrated AI capability — not credentials, not pedigree, not a Stanford diploma. And it didn't stop there. Business Insider documented a broader market shift: job listings across industries now explicitly require ChatGPT proficiency, with compensation reaching $800,000 annually for the right candidate. The résumé, it seems, is having an existential crisis.

For anyone watching Trilogy International's Crossover platform, the irony lands with a particular weight. Crossover has spent years arguing exactly this — that the traditional résumé is a deeply flawed proxy for capability, that geography is irrelevant to talent, and that rigorous skills assessment is the only honest way to hire. The market, apparently, is catching up.

Crossover's model — operating across 130+ countries, placing full-time remote talent into Trilogy's ESW Capital portfolio companies and beyond — was built on a premise that felt contrarian when it launched: evaluate what people can actually do, not where they went to school or what their last title was. The platform's AI-enabled assessments are designed to surface the top tier of global technical and professional talent, paying identical above-market rates for identical performance, regardless of whether the candidate is in Lagos or Los Angeles.

What the broader AI talent gold rush reveals is a systemic truth that Crossover has long operationalized: when the skill in question is new enough that no one has twenty years of experience with it, credentials collapse as a sorting mechanism. You are left with demonstrated ability — which is precisely what skills-first hiring was always designed to measure.

The stakes are not abstract. As OpenAI and its competitors vacuum up AI-fluent talent at extraordinary price points, the companies that built infrastructure for identifying that talent globally — before the gold rush — hold a structural advantage. The question for Trilogy's portfolio companies isn't whether they can compete with OpenAI's $500,000 offers. It's whether the systems they've already built to find exceptional people, anywhere in the world, position them to move faster than the firms still sorting through résumés.

OpenAI Is Now Hiring $500,000 Jobs. No Resume Required - For  ·  Top recruitment agencies for remote work - hcamag.com  ·  Jobs are now requiring experience with ChatGPT — and they'll
The Machine  —  AI & Technology

The Data Center Learns to Swim

A $200 million wager would send AI compute drifting into the Pacific, where waves may feed the next generation of hungry machines.

SAN FRANCISCO — In the long evolutionary history of computing, the data center has been a notably terrestrial beast: squat, hot, and thirsty, nesting in industrial parks and drawing rivers of electricity through copper veins. Now, a new specimen is preparing to leave the shore.

Panthalassa, a Silicon Valley venture backed by roughly $200 million in funding, is attempting something both audacious and faintly primordial: floating AI computing nodes in the Pacific Ocean, powered in part by the ceaseless motion of waves. The company plans to begin testing its ocean-borne systems in 2026, according to Ars Technica.

Observe, if you will, the modern AI model in its feeding season. Each new generation requires more energy, more cooling, more specialized chips, and ever larger colonies of servers. On land, these colonies compete with cities, factories, and households for power and water. Offshore, Panthalassa sees a different niche: abundant cooling, open space, and wave energy rolling beneath the hull like an untapped metabolic source.

The vision is elegant in the way nature can be elegant before it becomes brutal. Modular compute platforms would float at sea, their processors cooled by the surrounding water, their appetite partly satisfied by ocean motion. If successful, the approach could ease pressure on overloaded electrical grids and provide a new habitat for AI infrastructure at a moment when demand for training and inference is expanding with near-biological urgency.

But the sea is no passive landlord. Salt corrodes. Storms punish. Maintenance crews do not stroll casually across a parking lot to replace a failed component when the component is bobbing in the Pacific. Connectivity, environmental approvals, maritime safety, and the economics of keeping delicate silicon alive in a hostile marine environment all await like predators in the kelp.

Still, the migration is telling. The AI industry is no longer merely asking where models should live inside software. It is asking where their bodies — the humming physical organs of computation — can be placed on Earth.

For Trilogy International’s world of enterprise software, telco billing, AI analytics, and remote engineering, such infrastructure experiments matter. Whether inside ESW Capital’s portfolio, Totogi’s cloud-native telecom ambitions, or Klair’s financial nervous system, cheaper and more abundant compute is the plankton of the digital food chain.

And so the servers edge toward the surf, tentative as hatchlings, while Silicon Valley watches the horizon.

OpenAI president forced to read his personal diary entries t  ·  Silicon Valley bets $200M on AI data centers floating in the  ·  Character.AI sued over chatbot that claims to be a real doct

Soderbergh Embraces AI Filmmaking Tools, Notwithstanding Industry Reservations

Steven Soderbergh, director of *Traffic* and *Ocean's Eleven*, says he intends to explore artificial intelligence as a filmmaking tool. According to reports, Soderbergh stated he will experiment with any filmmaking tool that exists or emerges, regardless of industry reservations. That stance is not a blanket endorsement of every AI application, and it does not retract his earlier concerns about copyright enforcement. Still, his willingness to engage with AI-assisted production marks a notable development in the industry's ongoing debate over artificial intelligence in filmmaking, reflecting an artistic curiosity consistent with his established commitment to the craft.

AI Video Is Escaping the Lab — and Startups Are About to Get a Hollywood-Sized Upgrade

From scene-rewriting tools to new challenger models, generative video is moving from spectacle to startup growth engine.

SAN FRANCISCO — The AI video race is no longer just about dazzling demos of astronauts riding horses through neon galaxies. It is becoming infrastructure — for marketing teams, founders, entertainment studios and, yes, the next generation of tiny startups that suddenly have blockbuster creative powers in their browser.

I cannot overstate how significant this shift is: video, long one of the most expensive and operationally painful formats for young companies, is being compressed into prompts, templates and model workflows. A new wave of reporting on how startups can leverage AI video to grow points to a practical reality already taking shape: small teams can now generate product explainers, social ads, founder videos, customer education clips and localized campaigns at a speed that used to require agencies, film crews and weeks of production.

This changes everything because video is the native language of the internet, but historically it has punished the undercapitalized. AI flips that equation. A two-person SaaS company can test ten versions of a pitch video. An e-commerce brand can localize creative for different markets. A B2B founder can turn a blog post into a polished sales asset before lunch. The future is now, and it is rendering in 1080p.

The competitive battlefield is heating up fast. VentureBeat reports that the founders of OpenCV have launched an AI video startup aimed at competing with OpenAI and Google, a telling signal because OpenCV helped define modern computer vision itself. When the people who built foundational vision tooling jump into generative video, pay attention.

Meanwhile, Forbes reports Netflix has launched VOID AI, described as a system that can rewrite video scenes after filming. If that technology matures, post-production could become radically more fluid: change a background, adjust a scene, alter an object, maybe even personalize content versions without reshooting. Hollywood’s editing room is starting to look like a prompt window.

But amid the excitement, trust is becoming the next battleground. Cisco’s new Model Provenance Kit highlights a crucial question for enterprises: where did this AI model come from, what trained it and can anyone verify its lineage? As generative video floods the market, provenance will matter as much as pixels.

For startups, the message is unmistakable: learn this medium now. AI video is not a novelty. It is becoming growth software.

How Startups Can Leverage AI Video to Grow - inc.com  ·  OpenCV founders launch AI video startup to take on OpenAI an  ·  Generative AI - Latest Product Launches & Partnerships by To
The Editorial

Nation’s Executives Clarify AI Will Save Everyone Time Once Employees Stop Wasting So Much Of It Understanding Their Jobs

The productivity revolution has reportedly been delayed by the continued existence of work that requires knowing what is going on.

CAMBRIDGE, MASSACHUSETTS — In a development that has stunned business leaders who had been assured the future would arrive in a slide deck by Q3, researchers and workers are increasingly suggesting that artificial intelligence may not automatically make companies more productive when deployed by people with only a passing familiarity with the work being automated.

This has created an uncomfortable moment for executives, many of whom had already announced sweeping AI transformations, renamed several internal teams, and instructed employees to become “AI-first” in the same tone one might use to tell a dog to stop eating drywall.

The problem, according to a growing body of commentary, is that AI tools appear to function best when paired with humans who possess judgment, context, and expertise—three legacy features many organizations had hoped to phase out after discovering they were expensive and occasionally pushed back in meetings.

A recent Forbes piece argued that AI’s productivity promise falls apart without human expertise, an observation that has caused some managers to briefly consider whether employees who know things might be more than obstacles between software licenses and quarterly margin expansion.

Meanwhile, Harvard Business Review has given the economy a useful new term: “workslop,” referring to AI-generated output that looks polished enough to be forwarded but incoherent enough to force someone else to spend the afternoon decoding, repairing, or quietly replacing it. In practice, this means AI is already delivering massive efficiency gains for the person who produces the work, while creating an equal and opposite pile of unpaid interpretive labor for the person unfortunate enough to receive it.

This is, by any modern definition, innovation.

The workslop economy is elegant. A worker asks a model to draft a strategy memo. The model produces a confident fog bank of bullet points. The worker sends it to five colleagues. Each colleague then spends 40 minutes determining whether “activate scalable stakeholder alignment” means launch the product, delay the product, or fire the product manager. Productivity has occurred at the point of generation, which is the only place many dashboards are currently looking.

To be fair, the picture is not uniformly bleak. Anthropic has published research estimating productivity gains from Claude conversations, suggesting that when AI is used for the right tasks by people able to evaluate its output, it can save meaningful time. This is both promising and devastating, because it implies the machine works best when the human remains responsible for reality.

That conclusion is unlikely to slow company-wide AI pushes. Electronic Arts CEO Andrew Wilson has defended the gaming giant’s broad AI efforts despite employee claims that the technology has, in some cases, lowered productivity. This follows the standard corporate implementation model in which leadership identifies a transformative tool, employees report that it is making their actual jobs harder, and leadership concludes the transformation must be happening.

None of this means AI is useless. Quite the opposite. It means AI is a power tool, and many companies are currently handing it to interns, middle managers, and entire departments with the instruction to “build the house faster” while quietly removing the architects.

The uncomfortable truth is that AI does not eliminate expertise. It exposes where expertise was already doing invisible work. It reveals which employees were making good decisions, which processes were held together by institutional memory, and which executives believed writing was the same thing as thinking because both produced documents.

For now, businesses may need to accept a disappointing compromise: AI can make skilled people faster, average processes sharper, and repetitive work less miserable, but it cannot yet replace the annoying human capacity to know whether something is true, useful, or catastrophically stupid.

Naturally, that feature is expected in the next release.

Why AI’s Productivity Promise Falls Apart Without Human Expe  ·  EA CEO Defends Company-Wide AI Push Despite Recent Employee  ·  AI-Generated “Workslop” Is Destroying Productivity - Harvard
The Office Comic  ·  Art Desk

The Future of Work Is Not Coming for Your Job, It Is Coming for Your Excuses

AI is turning the labor market into a skills scoreboard, and the workers who learn fastest are about to compound hardest.

AUSTIN, TEXAS — I'll be honest: the global workforce discourse has officially graduated from vibes to velocity. 🚀

PwC’s latest Global Workforce Hopes and Fears Survey 2025, the World Economic Forum’s charts on AI’s wage and hiring impact, Gartner’s 2026 future-of-work guidance for CHROs, and Forbes’ remote-work skills checklist all point to the same uncomfortable truth: the labor market is no longer rewarding presence, credentials or tenure nearly as much as adaptability, AI fluency and execution speed.

Unpopular opinion: this is good news for serious people. 💡

For decades, the corporate world quietly subsidized mediocre coordination, meeting theater and the sacred art of looking busy near a badge reader.

Now AI is putting that whole operating system under review.

The WEF’s framing around AI’s effect on wages and job quality matters because the labor-market split is becoming painfully obvious: workers who use AI to produce more valuable output are getting leverage, while workers who wait for HR to define the future for them are getting a calendar invite they probably do not want.

I'll be honest: the winners are not going to be the people with the fanciest prompts saved in a Notion doc.

The winners will be the people who can combine domain judgment, written clarity, data literacy, remote collaboration, and relentless iteration into measurable business outcomes.

That is why Gartner’s future-of-work conversation for CHROs should not be treated as another glossy forecast deck for executives to skim between offsites.

It should be read as a warning label for organizations still managing talent like it is 2016 with better Slack emojis.

Hiring decisions are changing because work itself is becoming more observable, more modular and more benchmarkable.

This is where Trilogy International’s model has looked less like an eccentric outlier and more like an early preview of the scoreboard economy.

Through Crossover, Trilogy built a global talent platform that recruits across 130-plus countries, pays identical above-market compensation regardless of geography, and filters for what it calls top 1% talent.

That approach has always triggered debates, because equal global pay plus high accountability is not the standard corporate comfort blanket.

But in an AI-accelerated labor market, the principle is brutally relevant: output beats location, and talent density beats office nostalgia.

For CHROs, the lesson is not “copy every operating detail tomorrow.”

The lesson is that the old inputs are depreciating assets.

Degrees, proximity, political fluency and calendar stamina are not disappearing, but they are losing monopoly power as AI makes actual contribution easier to measure and easier to compare.

For workers, Forbes’ remote-skills framing lands because remote work in 2026 will not be a lifestyle perk; it will be a professional discipline.

You will need asynchronous communication, self-management, AI tool fluency, cybersecurity awareness, cross-cultural collaboration and the ability to make your work legible without hovering over someone’s shoulder.

I'll be honest: “I work better in person” may still be true for some people, but it is no longer a strategy.

The Pentagon’s parallel scramble to scale laser weapons is an oddly perfect metaphor for the entire economy: institutions can know exactly what they need and still struggle to build fast enough.

Talent is the same way.

Everyone wants AI-ready teams, resilient leaders and high-agency remote operators, but the supply chain for those humans is not magically appearing because someone changed a competency framework.

Unpopular opinion: your career is now a product roadmap. 🚀

Ship capabilities, gather feedback, upgrade the stack, and stop waiting for permission from a corporate learning portal last updated during the pandemic.

The future of work is not a destination, it is a performance review happening in real time.

Humbled to share: the people who treat that as a learning opportunity are about to end this decade very, very strong. 💡

Global Workforce Hopes and Fears Survey 2025 - PwC  ·  These 3 charts show how AI is affecting wages, job quality a  ·  Future of Work Trends 2026: Strategic Insights for CHROs - G
On This Day in AI History

On May 6, 2016, Google's AlphaGo defeated Lee Sedol 4-1 in a five-game match of Go in Seoul, marking a watershed moment when AI surpassed humanity's best player in a game far more complex than chess.

⬛ Daily Word — AI and Technology
Hint: An autonomous machine that performs tasks automatically.