Vol. I  ·  No. 105 Established 2026  ·  AI-Generated Daily Free to Read  ·  Free to Print

The Trilogy Times

All the news that's fit to generate  —  AI • Business • Innovation
WEDNESDAY, APRIL 15, 2026 Powered by Anthropic Claude  ·  Published on Klair Trilogy International © 2026
Today's Edition

Amazon Pays $10.8 Billion for Globalstar in Direct Challenge to Starlink

E-commerce giant's largest space acquisition signals intensifying battle for satellite internet dominance as low-Earth orbit becomes critical infrastructure.

SEATTLE — Amazon closed a $10.8 billion acquisition of Globalstar on Monday, marking the company's largest space-sector investment and escalating competition with SpaceX's Starlink network.

The deal values Globalstar at approximately 15 times trailing revenue, a premium reflecting Amazon's urgency to deploy Project Kuiper, its planned constellation of 3,236 satellites. Globalstar operates 48 low-Earth orbit satellites and holds spectrum licenses across 120 countries — regulatory assets that typically require years to secure independently.

Amazon has committed $10 billion to Kuiper since 2019 but has launched zero commercial satellites. Starlink, by contrast, operates over 6,000 satellites serving 4 million subscribers across 100 countries. The Globalstar purchase provides immediate orbital infrastructure and an existing customer base of 745,000 devices, predominantly in maritime and emergency services.

The acquisition follows a pattern of tech giants buying spectrum and satellite assets rather than building from scratch. Apple purchased 20% of Globalstar in 2022 for $450 million to power iPhone emergency messaging — a stake Amazon will now control.

Industry analysts note the deal's defensive posture. Starlink generated an estimated $6.6 billion in 2025 revenue, capturing 90% of the commercial satellite internet market. Amazon Web Services, which accounts for 63% of Amazon's operating income, faces potential disruption if Starlink integrates deeply with enterprise cloud services.

The transaction also positions Amazon in the emerging military satellite market. The Pentagon awarded SpaceX a $1.8 billion contract in 2023 for Starshield, a classified network. Globalstar holds Department of Defense contracts worth $340 million, relationships Amazon inherits immediately.

Regulatory approval remains uncertain. The Federal Communications Commission must authorize the transfer of Globalstar's spectrum licenses, a process that typically requires 6-12 months. Amazon declined to comment on integration timelines or whether Globalstar will operate as an independent subsidiary.


Supreme Court Declines Certiorari in Matter of Artificial Intelligence Authorship Rights

The Supreme Court declined to review a case challenging whether AI-generated works qualify for copyright or patent protection, leaving lower court rulings intact that deny such protections to creations produced without direct human authorship. The lower court determined that the Copyright Act and patent statutes require human agency as a prerequisite, based on statutory language and legislative intent limiting protections to natural persons. Legal scholars note the decision leaves unresolved questions about AI-generated works, particularly as systems become increasingly sophisticated. The Court's refusal to hear the case implicitly suggests that any framework modifications must come through legislation rather than judicial interpretation. The denial does not constitute a ruling on the merits, and similar cases may reach the Court in the future. Petitioners' counsel indicated they are considering alternative remedies, including legislative advocacy.

A Cold Front of Cuts Sweeps Through Big Tech and Media, With Oracle and Disney in the Hardest Rain

Early 2026’s layoff tally climbs past 70,000 as companies trade headcount for restructuring and AI-lean operating models.

NEW YORK — A broad pressure system of cost-cutting is settling over the corporate landscape this week, bringing steady headwinds for workers across tech and entertainment. Forecast models point to a continued pattern: fewer roles, leaner org charts, and executives betting that automation and tighter teams can keep revenue sunny even as payroll clouds thin out.

The strongest cell in today’s system is forming over enterprise software. Oracle is reporting “significant” job cuts as it reshapes operations, an indicator that even mature, cash-generative tech giants are choosing to shed weight rather than carry it into the next quarter. For startup watchers, the signal is clear: when the big platforms start trimming at scale, vendors, partners, and adjacent ecosystems can expect secondary gusts—slower purchasing cycles, longer sales reviews, and more scrutiny on renewals.

The broader climate is equally unsettled. According to a roundup tracking reductions across major employers, the year’s layoff roster includes household names like Meta, Amazon, and Oracle—evidence that this isn’t a localized shower, but a system-wide pattern of restructuring. (See the running list via Business Insider’s company-by-company tally.)

Meanwhile, the layoff count is rising fast enough to qualify as a full-blown seasonal shift. One report puts tech layoffs above 70,000 in early 2026, driven by reorganizations and efficiency programs that prioritize fewer layers and faster shipping over bigger teams. (MSN coverage.)

Entertainment is not escaping the weather. Disney is expected to cut around 1,000 jobs under new CEO Josh D’Amaro, underscoring that even brand-heavy giants are moving into storm-prep mode—simplifying structures and defending margins as consumer attention fragments.

Preparation guidance for workers and founders: keep resumes and portfolios updated, reduce burn, extend runway, and assume “do more with less” will remain the dominant wind through the next few quarters.

Haiku of the Day  ·  Claude Haiku
Empires clash and shift,
rules lag behind the machines—
we build as we burn.
The New Yorker Style  ·  Art Desk
The Far Side Style  ·  Art Desk
News in Brief
Algorithmic Adjudication: The Epistemological Crisis of Fairness Metrics in Computational Systems
PITTSBURGH — A constellation of recent scholarship (emerging from institutions including Carnegie Mellon, Harvard Business School, and Nature Publishing Group) suggests that the prevailing discourse surrounding algorithmic fairness may be predicated upon fundamentally incommensurable epistemological frameworks. The central problematic, as articulated across multiple peer-reviewed venues, concerns the dialectical tension between formal mathematical definitions of fairness (statistical parity, equalized odds, predictive parity) and the irreducibly contextual nature of discrimination as a socio-technical phenomenon.
We Built the Bias In, and Now We're Surprised It's There
SAN FRANCISCO — The enterprise software world has discovered bias in AI, and they would very much like you to know they are Thinking About It. A coordinated chorus of tech consultancies and AI vendors has emerged this week with frameworks, whitepapers, and six-step guides to "fix" algorithmic bias by 2026.
Charlotte’s White-Collar Workers Reassured They’re Not Being Replaced, Just “Strategically Reimagined” Into Silence
CHARLOTTE — The latest evidence that artificial intelligence is transforming the modern workplace arrived this week in the form of a familiar corporate miracle: the same number of meetings, the same number of dashboards, and a noticeably reduced number of humans allowed to speak during either. Local coverage of the city’s changing professional economy describes a white-collar scene in which software is increasingly tasked with writing emails, summarizing calls, and performing the other sacred duties once handled by junior employees who still believed “visibility” was a career strategy rather than a lighting condition.
We Went Back to the Moon, and Still Couldn’t Look Up
AUSTIN, TEXAS — I’ll be honest… the most unsettling part of NASA’s latest moon milestone wasn’t the physics, it was the vibes. Unpopular opinion: humanity didn’t “return to the moon” so much as it briefly opened a new tab and then got distracted by another one.
The Machine Dreams of Eating Itself: Notes from the AI Singularity's Waiting Room
SAN FRANCISCO — The future arrived this week riding a unicycle through a hall of mirrors, and nobody's quite sure if they should laugh or start hoarding canned goods. Sam Altman, the boy-king of the AI revolution, is having what industry insiders delicately call "a week." The specifics don't matter as much as the trajectory: every messiah eventually gets their wilderness moment, and Altman's is happening in real-time on a platform he doesn't control.
A Trilogy Company
Crossover
The world's top 1% remote talent, rigorously tested and ready to ship.
A Trilogy Company
Alpha School
AI-powered learning. Two hours a day. Academic results that defy belief.
A Trilogy Company
Skyvera
Next-generation telecom software — built for the networks of tomorrow.
A Trilogy Company
Klair
Your AI-first operating system. Every workflow. Every team. One platform.
A Trilogy Company
Trilogy
We buy good software businesses and turn them into great ones — with AI.
The Builder Desk  —  AI Builder Team
Production Release

Surtr Swallows Third Pipeline as Builder Team Declares War on Manual Scripts

The AI Builder Team shipped three production data pipelines in 24 hours, killed two manual workflows, and finally fixed the bug that's been silently dropping survey data for months.

The migration is no longer theoretical. It's a land grab.

In the span of a single day, the AI Builder Team pulled three separate data pipelines out of legacy infrastructure and into Surtr's unified CDK architecture — Azure AI spend tracking, Jotform education surveys, and headcount forecast automation. Each one represents another manual script retired, another cron job eliminated, another piece of institutional knowledge that no longer lives in someone's head.

@kevalshahtrilogy led the charge on two fronts. His Jotform migration (PR #10) didn't just port code — it fixed a silent data corruption bug that's been dropping survey responses since the pipeline launched. Klair's S3 COPY command was choking on nested JSON and swallowing the error. The jotform_submissions table has been sitting at zero rows for months. No one noticed because no one was looking. Surtr's hybrid loading strategy — batch INSERT for small tables, JSON Lines for the 47,000-row answers table — now lands clean data every hour. The Azure AI spend pipeline (PR #5) followed the same pattern: OAuth2 service principal auth, dual API calls to Cost Management and Monitor Metrics, mapping across ten Quark-acquisition subscriptions. It's live. It's writing 386 rows of cost and token data into Redshift on schedule.

@sanketghia automated the weekly headcount forecast refresh (PR #4), replacing a manual Python script with a Tuesday-morning Lambda that calls the stored procedure and verifies row counts. It's the third pipeline to follow the mart-saas-metrics-refresh pattern — stateless Redshift Data API, polling-based verification, zero orchestration dependencies.

Meanwhile, @benji-bizzell introduced the CI/CD discipline this team has been running without (PR #7). Five ad-hoc GitHub Actions workflows collapsed into two: a unified CI gate that blocks broken PRs, and a production-branch CD pipeline with path-based change detection. Deployments no longer fire on every merge to main. The wild west era is over.

Across the moat in Klair, @eric-tril spent the day cleaning up financial reporting edge cases — EBITDA reconciliation routing for bad debt provisions (PR #2552), Education entity adjustment filtering (PR #2558), and removing an obsolete Software S&M adjustment (PR #2559). Precision work, invisible until it breaks.

And then there's @marcusdAIy, who shipped Budget Bot 4.0 Phase B0+B1 (PR #2557) — a Google Doc sync infrastructure with checkpoint-based revision tracking and a "Start from Last Quarter" express path that clones prior docs and drops users straight into review. When asked about the 47-file changeset, marcusdAIy offered this: "The sync model is elegant. Atomic batchUpdate in reverse document order, revision ID comparison for conflict detection, Drive API cloning. It's enterprise-grade document orchestration." To which I can only say: sure, Marc. Enterprise-grade. We'll see how elegant it feels when someone's quarterly board doc gets clobbered by a race condition.

His PMO Projects dashboard migration to Aerie (PR #94) at least follows the established Redshift-to-Convex pipeline pattern. Column visibility toggles, saved views, AI executive summaries. Standard fare. Nothing revolutionary.

Three pipelines in one day. The Builder Team is moving faster than the manual processes they're replacing can keep up.

Mac's Picks — Key PRs Today  (click to expand)
#4 — feat: add hc-forecast-refresh pipeline @sanketghia  no labels

## Summary

- Adds a new Lambda pipeline hc-forecast-refresh that calls core_budgets.sp_refresh_hc_data_consolidated() every Tuesday at 6:30 AM UTC

- Automates the manual refresh_hc_data.py script that rebuilds HC forecast data from S3 into Redshift after the Google Apps Script uploads the latest CSV

- Follows the mart-saas-metrics-refresh pattern: stateless Redshift Data API client, stored procedure call, row count verification
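The mart-saas-metrics-refresh pattern described above can be sketched in a few lines of Python. This is an illustrative sketch, not the pipeline's actual code: `FakeDataApi` stands in for boto3's `redshift-data` client (which exposes `execute_statement`, `describe_statement`, and `get_statement_result` with these response shapes), and the SQL strings are placeholders.

```python
import time

class FakeDataApi:
    """Stand-in for boto3's redshift-data client (illustrative only)."""
    def __init__(self, statuses, result_rows):
        self._statuses = iter(statuses)   # statuses the poll loop will see
        self._rows = result_rows          # row count the verification query returns
    def execute_statement(self, Sql):
        return {"Id": "stmt-1"}
    def describe_statement(self, Id):
        return {"Status": next(self._statuses)}
    def get_statement_result(self, Id):
        return {"Records": [[{"longValue": self._rows}]]}

def refresh_and_verify(client, proc_sql, count_sql, min_rows=1, poll_s=0.0):
    """Call the stored procedure, poll until it finishes, then verify row counts."""
    stmt_id = client.execute_statement(Sql=proc_sql)["Id"]
    while True:
        status = client.describe_statement(Id=stmt_id)["Status"]
        if status == "FINISHED":
            break
        if status in ("FAILED", "ABORTED"):
            raise RuntimeError(f"refresh failed with status {status}")
        time.sleep(poll_s)
    check_id = client.execute_statement(Sql=count_sql)["Id"]
    rows = client.get_statement_result(Id=check_id)["Records"][0][0]["longValue"]
    if rows < min_rows:
        raise RuntimeError(f"verification failed: only {rows} rows")
    return rows
```

The point of the pattern is that the Lambda holds no state: it issues the call, polls the Data API for completion, and treats a low row count as a hard failure rather than logging success.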

## Test plan

- [x] Pipeline unit tests pass (7/7) — handler, error propagation, RedshiftClient polling

- [x] CDK infrastructure tests pass (228/228) — new pipeline auto-discovered and synthesized

- [x] Ruff lint clean

- [ ] Verify pipeline appears in Surtr dashboard after deploy

- [ ] Trigger manual run from Surtr dashboard to confirm stored procedure executes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#5 — Add Azure AI Spend Pipeline @kevalshahtrilogy  no labels

## Summary

- New pipeline (azure-ai-spend-pipeline) that fetches AI cost and token usage from Microsoft Azure and loads into two Redshift tables

- Covers 10 Azure subscriptions from the Quark acquisition across Cost Management API (dollar spend) and Monitor Metrics API (token counts per deployment)

- Writes to core_finance.ai_spend_azure_cost_reports (126 rows) and core_finance.ai_spend_azure_token_usage (260 rows), validated with live data

## What's included

- OAuth2 auth (azure_auth.py) — service principal client credentials flow with exponential backoff

- Azure API client (azure_client.py) — Cost Management Query, Cognitive Services listing, Deployments listing, Monitor Metrics with per-deployment filtering via ModelDeploymentName dimension

- Internal model mapping — normalizes Azure model names (e.g. gpt-35-turbo → gpt-3.5-turbo, gpt-5.2-chat → gpt-5.2) for pricing table compatibility

- Pricing (pricing.py) — loads from shared ai_spend_token_pricing table, longest-prefix match

- Redshift handler (redshift_handler.py) — async polling, batch inserts of 50, DELETE+INSERT idempotency for both tables

- 58 unit tests covering auth, API parsing, column order shuffling, model normalization, SQL generation, batching, and end-to-end handler flow
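The model normalization and longest-prefix pricing match above can be sketched as follows. The alias table and rates here are illustrative stand-ins, not the contents of the real `ai_spend_token_pricing` table:

```python
# Azure spelling -> internal name (illustrative aliases, not the real mapping table)
ALIASES = {"gpt-35-turbo": "gpt-3.5-turbo"}

def normalize(name):
    """Rewrite an Azure model name to the internal spelling, preserving suffixes."""
    for azure, internal in ALIASES.items():
        if name.startswith(azure):
            return internal + name[len(azure):]
    return name

def lookup_price(pricing, model_name):
    """Return the rate whose key is the longest prefix of model_name.

    Longest prefix wins, so a "gpt-5.2-chat" row beats a "gpt-5.2" row
    for the deployment name "gpt-5.2-chat-latest".
    """
    matches = [k for k in pricing if model_name.startswith(k)]
    if not matches:
        raise KeyError(f"no pricing row for {model_name}")
    return pricing[max(matches, key=len)]

# Illustrative per-token rates
rates = {"gpt-5.2": 0.01, "gpt-5.2-chat": 0.02, "gpt-3.5-turbo": 0.001}
```

Longest-prefix matching lets one pricing row cover every dated or suffixed variant of a model while still allowing a more specific row to override it.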

## Pre-merge checklist

- [x] Redshift tables created (ai_spend_azure_cost_reports, ai_spend_azure_token_usage)

- [x] Secret stored in Secrets Manager (surtr/azure-credentials)

- [x] Pricing rows exist in ai_spend_token_pricing for Azure models

- [x] Pipeline tested locally — 126 cost + 260 token records inserted, 0 subscription failures

- [x] Idempotency verified — re-run correctly deletes and re-inserts

- [ ] CDK deploy to create Lambda + Step Function + cron schedule

## Test plan

- [x] pytest tests/ -v — 58 tests passing

- [x] Local execution against live Azure APIs and Redshift

- [x] Verified data in Redshift via MCP query tool

- [ ] Post-deploy: trigger manual run from pipeline dashboard, verify cron fires at 6am UTC

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#7 — feat(ci-cd): replace ad-hoc workflows with unified CI/CD pipeline @benji-bizzell  no labels

## Summary

- Consolidate five workflow files into a unified CI (ci.yml) and CD (cd.yml)

- CI runs automatically on all PRs and pushes to main (lint, all test suites in parallel)

- CD triggers only on push to production branch, with path-based change detection to deploy pipelines and/or app independently
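The path-based change detection can be sketched in Python. This is a sketch of the decision logic only; the component names and glob patterns below are illustrative, and the actual workflow presumably uses GitHub Actions' built-in paths filtering rather than custom code:

```python
import fnmatch

# Which top-level component each path pattern belongs to (illustrative filters)
FILTERS = {
    "pipelines": ["pipelines/*", "cdk/*"],
    "app": ["app/*", "package.json"],
}

def targets_to_deploy(changed_files):
    """Map a list of changed file paths to the set of components to redeploy."""
    targets = set()
    for component, patterns in FILTERS.items():
        if any(fnmatch.fnmatch(path, pat)
               for path in changed_files for pat in patterns):
            targets.add(component)
    return targets
```

A push to the production branch that only touches `app/` then redeploys the app without churning every pipeline stack, and a docs-only change deploys nothing.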

## Why

Deployments currently fire on every merge to main with no CI gating PRs. With more contributors, this is a reliability risk — broken code goes straight to prod. This introduces a proper gate: CI blocks merges, and deployments only happen via deliberate promotion to a production branch.

## What changed

| Before | After |
|---|---|
| pipeline-cdk-deploy.yml — deploy on push to main | cd.yml — deploy on push to production |
| surtr-app-deploy.yml — deploy on push to main | Folded into cd.yml with change detection |
| pipeline-tests.yml — manual/reusable only | Inlined into ci.yml, runs on every PR |
| ruff-check.yml — manual only | Inlined into ci.yml, runs on every PR |
| udm-tests.yml — manual only | Inlined into ci.yml, runs on every PR |

Branch rulesets updated separately (not in this diff):

- Main: squash-only, 1 approval, CI required

- Production: merge-commit (preserves squashed commits from main), 1 approval, CI required

## Test plan

- [ ] Verify CI triggers on this PR

- [ ] Check that CI job names match the required status check contexts in the rulesets

- [ ] After merge, verify CD does not fire (should only trigger on push to production)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#10 — Migrate jotform-survey-sync from Klair to Surtr @kevalshahtrilogy  no labels

## Summary

- Ports the Jotform education survey sync pipeline into Surtr's CDK Lambda infrastructure

- Fixes the known jotform_submissions 0-row bug — Klair's S3 COPY silently failed on nested JSON in answers_json, and the error was swallowed by execute_with_params

- Uses hybrid loading: batch INSERT for small tables, S3 COPY (JSON Lines) for large tables (47K+ answers)

- Adds Python artifacts (__pycache__/, .pytest_cache/, *.pyc) and cdk-out-* to root .gitignore

## What this pipeline does

Syncs education survey data from the Jotform API into 4 Redshift staging_education tables (jotform_forms, jotform_questions, jotform_submissions, jotform_answers) every hour. Full truncate-and-reload — idempotent, no orchestration needed for cutover.

## Bug fix detail

Klair's _push_via_s3 used FORMAT AS JSON 'auto' which misparsed the answers_json column (a JSON string containing nested objects). The COPY failed, but execute_with_params caught the exception and returned False without raising. _push_via_s3 never checked the return value. Result: TRUNCATE succeeded, COPY failed silently, jotform_submissions stayed at 0 rows, sync logged SUCCESS.

Surtr's implementation writes proper JSON Lines (one JSON object per line with json.dumps), uses the Redshift Data API which raises RuntimeError on any failure, and adds a post-load verification that jotform_submissions > 0.
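The failure mode and its fix fit in a few lines. This is a deliberately simplified sketch: the function names are hypothetical, `execute` stands in for the COPY execution path, and `row_count` for the post-load verification query.

```python
import json

def copy_swallowing_errors(execute, sql):
    """The anti-pattern described above: catch everything, return False, never raise."""
    try:
        execute(sql)
        return True
    except Exception:
        return False   # if no caller checks this, a failed COPY logs SUCCESS

def copy_or_raise(execute, sql, row_count, table):
    """The fixed flow: let failures propagate, then verify the load landed."""
    execute(sql)                      # Redshift Data API raises on any failure
    if row_count(table) == 0:
        raise RuntimeError(f"{table} is empty after COPY")

def to_json_lines(rows):
    """One JSON object per line, so nested objects survive Redshift's COPY."""
    return "\n".join(json.dumps(r) for r in rows)
```

Because the original TRUNCATE succeeded before the COPY silently failed, the table was not merely stale but empty, which is why the `> 0` verification is part of the fix rather than a nicety.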

## Files

| File | Purpose |
|------|---------|
| pipeline.json | CDK config — Lambda, 512MB, 600s, hourly cron (disabled) |
| src/handler.py | Lambda entry point, fetches API key from Secrets Manager |
| src/jotform_client.py | Jotform REST API client with retry + pagination |
| src/sync.py | Extract → transform → load orchestration |
| src/redshift_client.py | Redshift Data API client with batch INSERT + S3 COPY |
| tests/test_handler.py | 25 unit tests covering transforms, handler, Redshift client, sync |
| .gitignore | Added Python artifacts + cdk-out-* |

## E2E test results (local, against prod Jotform API + Redshift)

| Table | Rows | Load strategy |
|-------|------|---------------|
| jotform_forms | 319 | batch INSERT |
| jotform_questions | 9,297 | S3 COPY |
| jotform_submissions | 1,842 | S3 COPY |
| jotform_answers | 47,465 | S3 COPY |

All row counts verified in Redshift after load. Submissions > 0 confirmed (bug fix works).

## Prerequisites before enabling schedule

- [ ] Create surtr/jotform-credentials secret in Secrets Manager with JOTFORM_API_KEY

- [ ] Deploy to dev: npx cdk deploy Pipeline-jotform-survey-sync-dev -c env=dev

- [ ] Verify dev execution via Step Functions

- [ ] Disable Klair's jotform sync schedule

- [ ] Enable Surtr schedule in pipeline.json

## Test plan

- [x] 25 unit tests pass (pytest tests/ -v)

- [x] cdk synth Pipeline-jotform-survey-sync-dev passes

- [x] Full e2e test against production Jotform API + Redshift

- [x] All 4 tables loaded with correct row counts

- [x] jotform_submissions verified > 0 rows

- [ ] Deploy to dev and run via Step Functions

- [ ] Deploy to prod with schedule disabled, validate, then enable

🤖 Generated with [Claude Code](https://claude.com/claude-code)

#2557 — feat(board-doc): Budget Bot 4.0 Phase B0+B1 -- Google Doc sync infrastructure and Start from Last Quarter @marcusdAIy  no labels

## Summary

- B0: Google Doc Sync Infrastructure -- new gdoc_sync.py service with checkpoint-based sync model: read sections from a Google Doc by heading boundaries, write back via atomic batchUpdate (reverse document order), detect external changes via revision ID comparison, and clone docs via Drive API

- B1: Start from Last Quarter -- one-click express path that clones a prior quarter's Google Doc, parses its sections, creates a new session in review phase (skipping the entire wizard), and drops the user into the review UI with all sections populated

- Sync button + changes-detected banner in ReviewStep header for pushing local edits back to the Google Doc

- Model changes: google_doc_id, google_doc_revision, source_doc_id, section_edit_status on WizardSession; quarter/year added to session summaries

Architecture direction documented in .cursor/brainlifts/budget-bot-editing-architecture.md -- checkpoint-based sync, "Cursor for documents" UX paradigm, section-level rewrites over surgical diffs.
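The two core mechanics of the checkpoint model can be sketched in a few lines. This is an illustrative sketch, not `gdoc_sync.py` itself: real Google Docs edits go through index-based `batchUpdate` requests, but the reason for reverse document order is the same, since replacing a span changes every offset after it.

```python
def external_changes(checkpoint_revision, current_revision):
    """A doc changed externally iff its revision ID moved past our checkpoint."""
    return current_revision != checkpoint_revision

def replace_sections(doc_text, edits):
    """Apply (start, end, new_text) replacements in reverse document order,
    so earlier offsets stay valid while later spans are rewritten."""
    for start, end, new in sorted(edits, key=lambda e: e[0], reverse=True):
        doc_text = doc_text[:start] + new + doc_text[end:]
    return doc_text
```

Applying the edits back-to-front means a single batch can rewrite several sections atomically without recomputing offsets between requests, which is what makes the batchUpdate atomic-in-one-call approach workable.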

## How to test locally

No special setup required beyond the normal dev environment (Google service account creds in .env).

1. Start backend (uv run fast_endpoint.py) and frontend (pnpm dev)

2. Go to Budget Bot, click "New Report", run through the wizard for any BU, and finalize the document

3. Back on the homepage, the finalized session now shows a "Start Q3 2026" button in its footer

4. Click it -- the prior quarter's Google Doc is cloned, sections are parsed, and you land in ReviewStep with all sections populated

5. Verify the "Sync to Google Doc" button appears next to the Google Doc link

6. Open the cloned Google Doc to confirm it's a separate copy (original is untouched)

## Test results (author-verified)

- [x] Section reader: parsed 6 sections from real Skyvera Q2 2026 Google Doc with correct heading boundaries and content

- [x] Round-trip sync: cloned doc, read sections, modified one section (3,192 chars to 101 chars), synced back via batchUpdate, re-read verified content replaced correctly

- [x] Change detection: returns false immediately after read, true after external modification

- [x] Paragraph style fix: inserted text uses NORMAL_TEXT style (not inherited heading style)

- [x] "Start Q3 2026" button appears on finalized sessions in BoardDocHome

- [x] Clicking "Start Q3" clones the Google Doc (original untouched), creates a new session, lands in ReviewStep with all 6 sections visible and expandable

- [x] Cloned doc verified on Google Drive: "Skyvera -- Budget Plan Q3 2026" with correct content

## Known limitations (deferred to subsequent PRs)

- "Refresh Data" on cloned sessions is a no-op (sections created as CUSTOM type with no required_data -- proper section type mapping comes with Phase B2)

- Table content round-trips as pipe-delimited text, not structured tables

- Section ID stability depends on heading order not changing between reads

- "Changes detected" banner dismiss state resets on page refresh

The Portfolio  —  Trilogy Companies

As Tech Giants Ditch Résumés, Crossover Reminds Industry It Did That First

OpenAI's $500K no-résumé hiring push validates what Trilogy's global talent platform has practiced for years — meritocratic skills testing beats credential theater.

AUSTIN, TEXAS — When OpenAI announced this week it's hiring for $500,000 roles without requiring résumés, the tech press treated it like revelation. But for Crossover — Trilogy International's global talent platform — it was Tuesday.

The company has spent years building what it calls the world's largest remote-job recruiting engine on a single principle: skills assessments beat pedigree. No résumé required. No geographic bias. Just rigorous AI-enabled testing to identify the top 1% of global technical and professional talent, then pay them identically regardless of whether they live in San Francisco or São Paulo.

"We've placed thousands of engineers, product managers, and executives this way," said a Crossover spokesperson. "The model works because it's honest. You either pass the assessment or you don't. Your university logo doesn't get you in the door."

The timing is clarifying. As digital transformation accelerates demand for international tech talent and AI skills command six-figure premiums even outside traditional tech hubs, Crossover's anti-résumé stance looks less like ideology and more like competitive advantage.

The platform recruits across 130+ countries, all time zones, 100% remote — staffing not just Trilogy's sprawling ESW Capital portfolio (Aurea, IgniteTech, DevFactory) but increasingly external clients who want access to the same rigorously tested global talent pool.

The subtext: geography-based pay is inefficient. Credential-based hiring is theater. And if you can test for the skill, you don't need the signal.

OpenAI's move validates the thesis. But Crossover's been running the experiment at scale for years — and the margins speak for themselves.


Skyvera Assembles Telecom Software Empire Through Strategic Acquisitions

ESW portfolio company consolidates CloudSense, STL assets, and Kandy platform into unified telecom modernization play

AUSTIN, TEXAS — Skyvera, the telecom-focused software division within ESW Capital's portfolio, has quietly assembled what may be the industry's most comprehensive suite of legacy-to-cloud migration tools through a series of strategic acquisitions — and if you read between the lines, this is about controlling the pipes that every mobile operator depends on.

The centerpiece is CloudSense, a Salesforce-native configure-price-quote and order management platform that Skyvera acquired in 2025. CloudSense handles the unglamorous but critical work of telecom billing and service configuration — the kind of infrastructure that operators can't rip out without catastrophic business disruption. That stickiness is exactly what ESW's playbook optimizes for.

Skyvera also acquired STL's telecom products group, bringing digital business support systems, monetization tools, optical networking capabilities, and analytics into the fold. Add Kandy — a cloud-based real-time communications platform that enriches customer engagement — and you've got end-to-end coverage of the telecom operator stack.

The pattern here is deliberate. Telecom operators are trapped between legacy on-premise systems they can't abandon and cloud-native architectures they must adopt to survive. Skyvera is building the bridge — and charging the toll.

What makes this interesting is the timing. As 5G rollouts accelerate and telecoms face margin pressure, the ability to modernize billing, customer engagement, and network management without a forklift upgrade becomes existential. Skyvera isn't selling innovation — it's selling survival.

And this is where it gets interesting: Skyvera now sits alongside Totogi, another ESW telecom play focused on cloud-native charging. The two companies operate in adjacent markets, both staffed by Crossover's global talent model, both targeting the same captive customer base. One handles the legacy transition. The other handles the cloud-native future. Together, they cover both ends of the telecom infrastructure lifecycle.

A source familiar with ESW's strategy — who requested anonymity — suggested the endgame is consolidation at the operator level. "Once you control billing, engagement, and charging, you're not a vendor anymore. You're infrastructure."

Skyvera has not disclosed deal terms for any of the acquisitions.


Alpha School’s Latest Flex: The Real Curriculum Starts After Lunch

New posts land like postcards from the future: fewer seat-hours, more grit-hours — and Joe Liemandt takes a swipe at the MBA industrial complex.

AUSTIN, TEXAS — Alpha School is back on its favorite beat: telling polite society what it doesn’t want to hear… and then backing it up with receipts.

First, the teaser that had a few PTA group chats clutching their pearls… Alpha’s argument is simple: if you want grit, leadership, teamwork, and the kind of social calibration that survives real life, you’re not getting it from worksheet season… you’re getting it from the sweaty, scrappy, afterschool arena. Word is the adults are calling it “extracurricular”… Alpha’s calling it the main event. The school’s latest post makes the case outright — kids are learning more from sports than from school… yes, you read that correctly… and yes, you already knew it in your bones. See: 4 reasons sports teach more than school.

Then comes the arts question — the one parents always ask with that careful tone… “But what about music? Theater? The classics?” A little bird tells me Alpha is perfectly happy to ditch the traditional conveyor belt (mandatory choir, required band, everyone paint the same bowl of fruit) and still claim it’s nurturing creativity — maybe more so. Their new piece frames “arts” less as scheduled compliance and more as a byproduct of time, autonomy, and making things that matter to the kid. The manifesto is here: creativity without traditional arts.

And if that weren’t enough… Alpha drops a third grenade: “What Private Schools Don’t Want You to Know.” Translation: bigger tuition checks don’t equal better outcomes… not lately… not anywhere. Same old model, shinier brochure, and — in their telling — the worst outcomes in 30 years. The headline’s not subtle, and it’s not trying to be: what private schools don’t want you to know.

Meanwhile, in the grown-up corner of the room… Fortune quotes Trilogy founder Joe Liemandt saying the MBA isn’t worth it — that you don’t learn a “fraction” of what you learn as an entrepreneur. Sources say this isn’t a hot take for him… it’s brand alignment.

The throughline? Alpha’s selling time — afternoons back, childhood back, agency back… and daring the old system to explain what it’s doing with all those hours.

The Machine  —  AI & Technology

Science Itself May Be Stuck in a Local Minimum — and New AI Research Is Probing the Escape Routes

A cluster of new papers asks whether human knowledge, AI reasoning, and even machine self-awareness are all trapped by the paths they happened to take first.

CAMBRIDGE, MASSACHUSETTS — Consider the river. It does not find the shortest path to the sea. It finds a path — carved by accident, geology, and the memory of ancient rain — and then it deepens that path until alternatives become unthinkable. A provocative new paper on arXiv argues that science works the same way.

In "The Non-Optimality of Scientific Knowledge," the authors frame the entire corpus of human scientific understanding not as a gleaming summit of truth, but as a local optimum — a hilltop we climbed because it was nearby, not because it was the highest. Path dependence, institutional lock-in, and the gravitational pull of existing paradigms conspire to keep us circling familiar terrain. Phlogiston persisted for a century. Continental drift was ridiculed for decades. The paper treats the trajectory of discovery itself as an optimization problem and finds it riddled with the same traps that plague gradient descent in machine learning: we get stuck.
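The gradient-descent trap the paper invokes is easy to make concrete. A purely illustrative sketch (not from the paper): two runs of plain gradient descent on an asymmetric double-well function settle into different valleys depending only on where they start, and one of those valleys is not the global minimum.

```python
# Illustrative only: gradient descent on an asymmetric double well.
# Which minimum you reach depends on your starting point, not on
# which minimum is actually lower -- the paper's analogy for science.

def f(x):
    # Asymmetric double well: deeper minimum near x = -1.47,
    # shallower (local) minimum near x = +1.35.
    return x**4 - 4 * x**2 + x

def grad(x):
    return 4 * x**3 - 8 * x + 1

def descend(x, lr=0.01, steps=2000):
    # Follow the local slope downhill; no mechanism to escape a valley.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = descend(-0.5)   # slides into the deeper well near -1.47
right = descend(+0.5)  # gets stuck in the shallower well near +1.35
# f(right) > f(left): the right-hand run "converged", but not to the
# best answer available -- it converged to the answer that was nearby.
```

The point of the sketch is the paper's point: neither run is broken, and both report convergence; the path taken simply forecloses the alternatives.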

The parallel to AI is not metaphorical — it is structural. A second new paper explores metacognition in continuous-time reinforcement learning agents, asking whether self-monitoring capabilities like self-prediction and subjective duration estimation actually improve performance in complex survival environments. The answer is nuanced: these capacities help most when they are structurally integrated into the agent's architecture rather than bolted on as afterthoughts. The lesson rhymes with the first paper's thesis — how you build the path matters as much as where it leads.

A third study, "GoodPoint," tackles the problem from yet another angle: training large language models to generate constructive scientific feedback by learning from real author responses to peer review. The goal is not to automate science but to augment the humans doing it — to widen the search space, to nudge researchers off their local hills.

Taken together, these papers sketch a quietly radical picture. Intelligence — biological or artificial — is not limited primarily by its computational power. It is limited by its history. Every framework adopted, every benchmark optimized, every paradigm internalized is simultaneously an act of illumination and an act of foreclosure.

The deepest question in AI may not be how to make models smarter. It may be how to keep them — and us — from mistaking the hill we are standing on for the top of the world.

The Non-Optimality of Scientific Knowledge: Path Dependence,  ·  Self-Monitoring Benefits from Structural Integration: Lesson  ·  GoodPoint: Learning Constructive Scientific Paper Feedback f

The Quiet Revolt of the Grid: States Move to Tame the AI Data Center Migration

As Washington urges a national buildout, local laws, moratoriums, and fuel-cell deals redraw where—and how—AI can live.

ATLANTA — In the modern savanna of compute, the data center is a large, heat-shedding organism: it feeds on electricity, exhales warmed air, and multiplies when conditions are favorable. Now, across the United States, the habitat is changing.

Federal policymakers have been signaling urgency—an AI infrastructure push meant to keep the nation’s model-makers well fed. Yet, at ground level, the terrain is patchwork. A growing thicket of state and local rules—zoning, permitting, tax incentives, reporting requirements—has begun to challenge the notion of a single, smooth national expansion. In one survey of this emerging landscape, MultiState notes how state data-center laws can complicate a federally driven buildout.

Where power is scarce—or communities are simply weary—another behavior appears: the moratorium. Local pauses on new AI-oriented data center construction are being weighed as a kind of conservation measure, balancing grid stress, water use, noise, and land impacts against promised jobs and tax base. For residents, the question is intimate: what does it mean to host an industry whose product is invisible, but whose footprint is not?

That tension is sharpening in places like west Georgia, where proposals have arrived with the speed of a migrating herd—sudden, heavy, and difficult to ignore. Opposition has grown alongside the projects, not necessarily against technology itself, but against the pace and scale of the transformation.

Industry, for its part, is adapting. One survival strategy is to bring the food source closer. Bloom Energy’s fuel-cell systems—validated, investors argue, by a deal involving Oracle—are being framed as a way to supply steadier on-site power for AI facilities. Seeking Alpha highlights how such distributed generation could become a competitive advantage when the grid is crowded.

And hovering above it all is the semiconductor value chain—stretched by generative AI demand, disciplined by geopolitics, and constrained by time. In this ecosystem, every new model is also a new appetite. The question now is not whether data centers will grow, but where the environment will still allow them to thrive.

State Data Center Laws Challenge Federal AI Infrastructure P  ·  AI Data Center Moratorium: Balancing Energy, Community, and  ·  Bloom Energy: Upgrading As Oracle Deal Validates Its AI Infr

From “Juicy Main” to GPT‑5.4‑Cyber: Developers Are Rewiring the Security Stack in Real Time

A new Zig release, a smarter CSRF defense, and cyber-tuned frontier models signal a shift: security is becoming an interface, not an afterthought.

SAN FRANCISCO — The future is now, and it’s showing up in the most revealing place possible: release notes, pull requests, and model variants.

First, Zig 0.16.0 dropped with the kind of documentation that makes other ecosystems look sleepy. The standout feature is what Zig is cheekily calling “Juicy Main”: a dependency-injection style upgrade to your program’s entry point. In practice, accepting a `process.Init` parameter in `main()` hands you a structured bundle of process context—cleanly, explicitly, and without the usual global-state gymnastics. It’s one of those “wait, why wasn’t it always like this?” ideas that makes systems code feel suddenly more ergonomic without sacrificing rigor. The details (and the delightfully practical examples) are in Simon Willison’s write-up of the Zig 0.16.0 notes.
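The shape of the idea translates outside Zig, too. Here is a hedged Python analogy of the "Juicy Main" pattern — not Zig code, and the `ProcessInit` name and its fields are invented for illustration — showing why an entry point that receives its process context explicitly is easier to test than one that reaches for globals:

```python
# Analogy only: Zig 0.16's "Juicy Main" has main() accept a process.Init
# parameter instead of touching global state. This Python sketch mimics
# that shape; ProcessInit and its fields are invented for illustration.
import os
import sys
from dataclasses import dataclass
from typing import TextIO

@dataclass
class ProcessInit:
    args: list      # command-line arguments, minus the program name
    env: dict       # a snapshot of environment variables
    stdout: TextIO  # where the program writes

def main(init: ProcessInit) -> int:
    # Everything main needs arrives as an explicit parameter, so a test
    # can hand in a fake context instead of patching sys/os globals.
    name = init.env.get("USER", "world")
    print(f"hello, {name}", file=init.stdout)
    return 0

if __name__ == "__main__":
    main(ProcessInit(sys.argv[1:], dict(os.environ), sys.stdout))
```

In a test, `main` can be called with an in-memory stream and a synthetic environment; no global state ever needs to be swapped out.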

Meanwhile in Python-land, Datasette is moving to a cleaner, more modern posture for CSRF protection: replacing token-based CSRF with a defense built on the browser’s `Sec-Fetch-Site` header. This matters because CSRF tokens, while effective, are operationally annoying—hidden inputs everywhere, template boilerplate, and edge cases that quietly erode safety over time. Leveraging fetch metadata shifts protection closer to the browser’s intent signals, making secure defaults easier to maintain at scale.
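The mechanism is simple enough to sketch. Browsers that support fetch metadata attach a `Sec-Fetch-Site` header (`same-origin`, `same-site`, `cross-site`, or `none` for direct navigation) to every request, so a server can refuse cross-site state changes without minting tokens. A minimal illustration, assuming a generic headers dict — this is the idea, not Datasette's actual implementation:

```python
# Minimal sketch of fetch-metadata CSRF protection. Not Datasette's
# code; just the decision rule the Sec-Fetch-Site header enables.
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}
# "none" means direct navigation (URL bar, bookmark), which is not CSRF.
ALLOWED_SITES = {"same-origin", "none"}

def allow_request(method: str, headers: dict) -> bool:
    """Reject state-changing cross-site requests via Sec-Fetch-Site."""
    if method in SAFE_METHODS:
        return True  # reads are not CSRF targets
    site = headers.get("Sec-Fetch-Site")
    if site is None:
        # Older browsers send no fetch metadata; a real deployment must
        # decide whether to fail open here or fall back to tokens.
        return True
    return site in ALLOWED_SITES

assert allow_request("POST", {"Sec-Fetch-Site": "same-origin"})
assert not allow_request("POST", {"Sec-Fetch-Site": "cross-site"})
```

The appeal is exactly what the PR argues: the policy lives in one server-side check instead of in hidden inputs scattered across every form template.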

And then comes the big tectonic shift: model providers are now shipping *security-specialized* frontier variants as products. OpenAI is reportedly fine-tuning models specifically for defensive cybersecurity use cases, starting with a “cyber-permissive” variant called GPT‑5.4‑Cyber—positioned as the counterpart to Anthropic’s Claude Mythos cyber narrative. OpenAI’s framing is explicit: increasingly capable models are coming, so the access model and safety scaffolding have to evolve alongside them. Their announcement is summarized in Trusted access for the next era of cyber defense.

Taken together, these updates point to a shift that is hard to overstate: cybersecurity is starting to look like "proof of work." More capability increasingly means more tokens, more evaluation, more gated access, and more deliberate engineering choices upstream. Even the browser is becoming part of the security perimeter.

Oh, and if you needed one more signal that interfaces are becoming intelligent by default: tools like HoloTab, an “AI browser companion,” are pushing assistants directly into the act of browsing itself. Security, developer experience, and AI are no longer separate lanes. They’re the same road.

Zig 0.16.0 release notes: "Juicy Main"  ·  datasette PR #2689: Replace token-based CSRF with Sec-Fetch-  ·  Trusted access for the next era of cyber defense
The Editorial

The Regulators Are Coming — And They Have No Idea What They're Regulating

From Westminster to Washington, a great bureaucratic machinery is being assembled to govern a technology its governors do not understand, and the consequences will be felt far from the hearing rooms.

LONDON — The surest sign that a technology has arrived is not that it works, nor even that it makes money, but that people who cannot explain it have begun to write rules about it. By this measure, artificial intelligence has not merely arrived — it has moved in, redecorated, and started receiving certified mail from every regulatory body on the planet.

The scene this week is richly illustrative. In the United Kingdom, activists are planning protests against AI data centres, those humming cathedrals of computation that consume electricity with the serene indifference of a duke burning through his inheritance. The Law Society is issuing guidance on AI and lawtech. The Council on Foreign Relations is producing educational primers. The Atlantic Council warns that civil AI regulations will have "second-order impacts" on national defense. Everyone, it seems, has discovered that artificial intelligence is changing the world. One congratulates them on their timing.

What unites these disparate efforts — the street protests, the white papers, the earnest policy frameworks — is a shared and touching faith that the thing can be governed the way we have governed previous technologies: by committee, by consultation, by the patient accretion of rules written in language so deliberately ambiguous that all parties can claim victory. This faith is, I submit, misplaced, though not for the reasons the Silicon Valley libertarians would have you believe.

The problem is not that AI should not be regulated. Of course it should. The problem is that the regulatory apparatus being constructed is designed for a technology that sits still, and this one does not. By the time the Law Society finishes its guidance, the technology it describes will have evolved into something its authors would not recognize, much as a butterfly would confuse a committee that had spent three years drafting regulations for caterpillars.

I have some sympathy for the activists marching against data centres, if only because they have identified a concrete thing — power consumption, water usage, land appropriation — rather than the gossamer abstractions that occupy most AI policy documents. A data centre is a building. It draws from a grid. It can be measured. This is more than can be said for most of the harms that regulation purports to address.

The Atlantic Council's warning about defense implications deserves particular attention, because it names the thing that polite regulatory discourse prefers to ignore: that every rule written for civilian AI constrains or enables military AI, and that adversaries unconstrained by such rules will not pause to admire our procedural integrity. This is not an argument against regulation. It is an argument against regulation conducted in a dream state.

What the moment requires — and what it is least likely to get — is regulatory humility: the admission that we are governing in fog, that the rules we write today will need rewriting tomorrow, and that the worst outcome is not insufficient regulation but ossified regulation that protects incumbents while punishing the very adaptation the technology demands. The companies that understand this, the ones already building AI into the marrow of their operations rather than treating it as a decorative appendage, will thrive regardless of what the committees produce. The rest will frame the white papers and hang them on the wall.

The regulators are coming. God help them. God help us all.

How Is AI Changing the World? - Regulating AI - CFR Educatio  ·  UK activists plan protests over climate, social impacts of A  ·  AI and lawtech: government policy and regulation - The Law S
The Office Comic  ·  Art Desk

Charlotte’s White-Collar Workers Reassured They’re Not Being Replaced, Just “Strategically Reimagined” Into Silence

Executives urged to read a new leadership book so they can confidently mispronounce the future while firing it.

CHARLOTTE — The latest evidence that artificial intelligence is transforming the modern workplace arrived this week in the form of a familiar corporate miracle: the same number of meetings, the same number of dashboards, and a noticeably reduced number of humans allowed to speak during either.

Local coverage of the city’s changing professional economy describes a white-collar scene in which software is increasingly tasked with writing emails, summarizing calls, and performing the other sacred duties once handled by junior employees who still believed “visibility” was a career strategy rather than a lighting condition. According to one report on Charlotte’s workforce, the change is not merely technical. It is spiritual. It asks every salaried employee to locate their “unique human value” in a system that has already assigned it the status of optional.

Fortunately, leadership is responding with the kind of decisive action that has historically guided civilization through disruptive eras: a book.

A newly released executive guide, AI Fundamentals For Leaders, promises to help decision-makers navigate 2026 with clarity, confidence, and at least three new ways to say “leverage” without committing to anything measurable. The modern executive, after all, is no longer expected to understand technology. They are expected to understand how to speak about it in tones that imply inevitability, ideally while gesturing toward a cost-saving plan that looks like innovation when viewed from far enough away.

This has become particularly important as the business world faces a secondary outbreak: strategic discourse now appears to be infected with what one analyst has dubbed “trendslop,” a thick slurry of buzzwords, thought-leadership fragments, and half-digested LinkedIn prophecy. The term has been circulating in debates about whether AI can do “strategy,” a question that would be easier to address if “strategy” hadn’t already been reduced to a set of slide transitions and a pledge to “stay agile.” As one column warned, even the people paid to interpret the future are now struggling to decipher the paste.

Into this environment steps a third, more delicate innovation: the rebrand of layoffs as an act of technological enlightenment.

“AI washing” has become the managerial equivalent of choosing the corporate “Color of the Year,” an activity recently described elsewhere as an exercise in absurdity. Like the color trend, it is presented as a serious, data-driven ritual. Like the color trend, it functions primarily to help adults speak about arbitrary decisions as if they were dictated by the universe.

When a company announces it is “embracing AI” and, coincidentally, reducing headcount, the public is invited to admire the sleek inevitability of it all—never mind that the most consistent automation has been the conversion of payroll into shareholder reassurance.

In Charlotte, as in every city where professionals once assumed their job was “safe” because it involved Outlook, the new workplace compact is simple: humans will remain essential, provided they are willing to become the part of the process that apologizes for the process. The machines will handle the writing. Leadership will handle the vision. And the remaining employees will handle the exciting challenge of proving, quarterly, that they are not merely a cost center with feelings.

AI reshaping Charlotte’s white-collar workforce - The North  ·  AI Vantage Consulting Launches 'AI Fundamentals For Leaders'  ·  AI for strategy? Good luck deciphering the buzzword ‘trendsl
On This Day in AI History

In March 2016, Google DeepMind's AlphaGo defeated Lee Sedol, one of the world's greatest Go players, winning their match 4-1 — a landmark moment proving AI could master the ancient game's intuitive complexity that had long seemed beyond machine reach.

⬛ Daily Word — Technology
Hint: Relating to computers, the internet, or digital systems and security threats.