Vol. I  ·  No. 116 Established 2026  ·  AI-Generated Daily Free to Read  ·  Free to Print

The Trilogy Times

All the news that's fit to generate  —  AI • Business • Innovation
SUNDAY, APRIL 26, 2026 Powered by Anthropic Claude  ·  Published on Klair Trilogy International © 2026
Today's Edition

EVERY SEAT EMPTY: TRUMP FIRES ENTIRE NATIONAL SCIENCE BOARD Developing

The 75-year-old panel overseeing America's basic-research engine is gone in one stroke — no replacements named.

WASHINGTON — The Trump administration fired every member of the National Science Board, wiping out the full oversight panel for the National Science Foundation at a moment when the agency is already funding research at historically low rates and can barely push the money it has out the door.

The NSB advises the president and Congress on the NSF — the federal engine that bankrolls basic research in computer science, engineering, mathematics, and the physical sciences at universities coast to coast. The entire board is gone. No replacements have been named.

Congress stood this body up in 1950 for one reason: keep politics out of science funding. Members serve six-year terms, staggered so no single president can sweep the panel clean. That design held for 75 years — it did not survive this administration.

The NSF was already struggling before anybody got shown the door. Funding has dropped to historic lows and grant processing has slowed to a crawl, with researchers reporting months-long waits beyond normal timelines for decisions on proposals. The agency tasked with keeping American science competitive now operates without a single independent overseer.

For the technology sector, the damage runs deeper than a Beltway shakeup. The NSF funds the university labs and computer science departments that produce the engineers and foundational research the private sector feeds on — every major AI company in America employs scientists who launched careers on NSF grants. Starve that pipeline and the talent shortage throttling the industry gets worse, not next quarter but five and ten years out, when today's unfunded graduate students should have been tomorrow's breakthroughs.

The global picture compounds the problem. Nations racing to lead in artificial intelligence and advanced computing have been increasing their basic-science investments, not dismantling oversight boards. The NSF has been a key American instrument for keeping its universities at the front of that race.

The research community has no cushion. University administrators and principal investigators who depend on NSF funding are flying blind on policy direction. The board was their line of sight into where the agency was headed.

The trajectory at home has been building for months. Federal science agencies have absorbed hit after hit — budget cuts, hiring freezes, mass dismissals. Grant acceptance rates at the NSF have been falling for years, and young researchers have begun quietly steering away from careers that hinge on federal dollars — a slow bleed that will not show in the data until the damage is irreversible.

Without a board, there is no buffer between the agency and raw politics. The NSF director reports to the board, which reports to Congress and the president. Pull the board out and a multi-billion-dollar research operation answers to nobody but the political appointees who just cleared the room.

Basic research does not pay off next quarter. It pays off next decade — in industries, medicines, and technologies nobody saw coming. For 75 years the NSF has funded the work too early and too risky for private capital, steered by a board designed to outlast any single presidency.

That board no longer exists. Every seat sits empty. Nobody has said when — or whether — they will be filled.


OpenAI Cuts Projects as Altman Faces Profitability Pressure

The ChatGPT maker is pruning initiatives and tightening strategy amid mounting questions about its path to sustainable revenue.

SAN FRANCISCO — OpenAI is scaling back its sprawling portfolio of experimental projects as CEO Sam Altman confronts growing pressure to demonstrate the company can generate sustainable profits from its expensive AI models.

The strategic shift marks a departure from OpenAI's previous approach of pursuing multiple research directions simultaneously. Altman has begun culling company initiatives and imposing more rigorous financial discipline on remaining projects, according to sources familiar with the matter.

The retrenchment comes as OpenAI burns through capital to train increasingly sophisticated models while facing intensifying competition from lower-cost alternatives. DeepSeek, the Chinese AI startup, recently announced plans to raise additional funding despite claims it operates profitably — a move analysts interpret as preparation for expanded international competition.

OpenAI's challenge mirrors broader questions facing the AI industry: whether foundation model developers can build viable businesses or will remain dependent on continuous investor subsidy. The company's flagship ChatGPT product has attracted over 200 million users, but converting that scale into margin remains elusive.

Altman's focus on profitability follows criticism of OpenAI's strategic direction, particularly its 2023 governance crisis and subsequent restructuring. The company has explored various revenue models including enterprise licensing, API access fees, and consumer subscriptions, but none has yet produced the returns needed to justify its reported $80 billion-plus valuation.

The tighter operational approach represents a test of whether Altman can balance OpenAI's original research mission with commercial imperatives. Previous attempts to impose financial discipline at high-growth tech companies have produced mixed results — sometimes enabling sustainable scale, other times constraining the innovation that justified initial investment.

OpenAI declined to comment on specific projects being discontinued or financial targets for profitability.

Haiku of the Day  ·  Claude Haiku
Thrones fall while new gods
rise from circuits and hunger—
we made what we fear
The New Yorker Style  ·  Art Desk
The Far Side Style  ·  Art Desk
News in Brief
Academic Institutions Grapple With Epistemological Frameworks for AI Governance Amid Proliferating Autonomous Systems
CAMBRIDGE, MASSACHUSETTS — It could be argued that the contemporary discourse surrounding artificial intelligence governance in academic settings has reached what one might term a 'critical inflection point' (sensu Kuhn, 1962), as evidenced by a convergent—albeit not entirely harmonized—body of scholarship emerging from multiple institutional loci. MIT researchers have articulated what they characterize as an evaluative framework for autonomous systems ethics, a contribution that exists in dialectical relationship with Nature's validated framework for responsible AI in healthcare autonomous systems.
Federal Antitrust Enforcement Against Technology Sector to Proceed Substantially Unabated Notwithstanding Administrative Transition, Legal Observers Conclude
WASHINGTON, D.C.
AI Is Rewriting Work, But Your Org Chart Still Thinks It’s 2015
NEW YORK — I’ll be honest: every “future of work” panel sounds inspiring until you remember most companies still measure productivity like it’s a factory floor and culture like it’s a poster. Unpopular opinion: AI isn’t “changing everything” because the model got smarter; it’s changing everything because leadership finally has no excuse not to redesign how work flows.

Microsoft’s latest view of the “new future of work” lands the real punchline: the pace is rapid and the benefits are uneven, which is corporate-speak for “some people are about to feel like rocket fuel and others are about to feel like roadkill.” If you want the sober version, start with Microsoft’s framing and then ask yourself who in your company actually gets to use AI to delete toil versus who just gets new dashboards to justify tighter quotas. PwC’s 2025 “Hopes and Fears” theme is basically one big signal that employees are not confused; they’re doing the math on whether “transformation” means growth or a smaller severance package. I’ll be honest: workers aren’t afraid of AI, they’re afraid of being managed through AI by leaders who can’t explain how value is created.

And if you need visuals that cut through the vibes, the World Economic Forum’s three charts on wages, job quality, and hiring decisions underline the awkward truth that labor markets are already sorting people into “augmented” and “expendable.” Unpopular opinion: “AI skills gap” is often just a “job design gap” wearing a hoodie and asking for a bigger L&D budget.

Here’s the learning opportunity 💡: the companies that win won’t be the ones with the flashiest copilots; they’ll be the ones that recompose roles into smaller, automatable tasks and then re-bundle what’s left into higher-leverage work. That means you stop hiring for nouns (“analyst,” “coordinator,” “manager”) and start hiring for verbs (“synthesize,” “ship,” “debug,” “negotiate,” “teach”).

It also means CHROs have to stop treating AI like a tool rollout and start treating it like a compensation and career architecture event, because once output per person spikes, your pay bands either evolve or you get a talent revolt. Gartner’s “Future of Work Trends 2026” framing for CHROs is useful here because it pushes the conversation toward operating-model decisions, not just policy memos, which is where most transformations go to die. I’ll be honest: if your AI strategy doesn’t include job-quality guardrails — clarity of expectations, autonomy, and a pathway to progression — you’re not building a high-performance culture, you’re building a high-churn treadmill.

Now layer in the other “future of work” story everyone wants to ignore: sustainability credibility. Microsoft reportedly hitting pause on carbon-removal purchases is a reminder that even the most influential buyers can change posture fast, and when they do, whole adjacent ecosystems wobble. Unpopular opinion: the future of work is the future of trust, and trust is now a balance-sheet item 🚀.

So what should leaders do this quarter, not in a keynote? First, map your workflows, identify the 20% of tasks that create 80% of measurable value, and deploy AI to eliminate the other 80% of low-signal busywork. Second, publish a “human advantage” ladder for each function — what juniors do with AI, what mids do with AI, what seniors do with AI — so employees see a future, not a cliff. Third, align incentives so managers are rewarded for throughput and capability building, not headcount preservation. I’ll be honest: the orgs that end 2026 strong will be the ones that treat it as the year they redesign work on purpose, because the market is going to redesign it for everyone else.
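The 20/80 workflow mapping described above can be made concrete. Here is a minimal sketch, with entirely hypothetical task names and value scores (assumptions for illustration, not data from any cited survey): score each task by measurable value, then greedily keep the smallest set covering 80% of total value and flag the rest as automation candidates.

```python
def pareto_split(task_values, threshold=0.8):
    """Split tasks into a high-value core covering `threshold` of total
    value and a long tail of automation candidates."""
    total = sum(task_values.values())
    core, covered = [], 0.0
    # Greedily take the highest-value tasks until the threshold is met.
    for task, value in sorted(task_values.items(), key=lambda kv: -kv[1]):
        core.append(task)
        covered += value
        if covered >= threshold * total:
            break
    tail = [t for t in task_values if t not in core]
    return core, tail

# Hypothetical scores from a workflow-mapping exercise.
tasks = {"negotiate renewals": 40, "ship fixes": 25, "synthesize research": 15,
         "status reports": 8, "meeting notes": 7, "data re-entry": 5}
core, tail = pareto_split(tasks)
```

The point of the exercise isn’t the arithmetic; it’s that the tail list — here, reports, notes, and re-entry — is where AI should be deployed first, while the core defines the “verbs” worth hiring for.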
We Built the Machines That Lie With Our Faces
AUSTIN, TEXAS — There's a video circulating on TikTok right now of a respected oncologist recommending a supplement that will, if you take it, probably kill you.
DIARY OF A DEAD INTERNET: Notes From the Bot Apocalypse
AUSTIN, TEXAS — There's a social network called Moltbook where AI agents spend their days posting updates, commenting on each other's content, and building relationships with other AIs.
A Trilogy Company
Crossover
The world's top 1% remote talent, rigorously tested and ready to ship.
A Trilogy Company
Alpha School
AI-powered learning. Two hours a day. Academic results that defy belief.
A Trilogy Company
Skyvera
Next-generation telecom software — built for the networks of tomorrow.
A Trilogy Company
Klair
Your AI-first operating system. Every workflow. Every team. One platform.
A Trilogy Company
Trilogy
We buy good software businesses and turn them into great ones — with AI.
The Portfolio  —  Trilogy Companies

Alpha School's San Francisco Gambit Draws National Scrutiny — and Admissions Surge

As Joe Liemandt's AI-first private school opens its first West Coast campus, mainstream media coverage oscillates between fascination and alarm — while applications triple.

SAN FRANCISCO — The national education press descended on Alpha School's newest campus this month, and the resulting coverage reads like a Rorschach test: revolutionary model or dystopian experiment, depending on who's writing.

The Guardian framed it as a glimpse of "the future of US education" — students mastering a full year's curriculum in 20 hours via adaptive AI tutors, then spending the rest of the day on entrepreneurship, athletics, and public speaking. CNN opened with a provocation: "What if I told you this school had no teachers?" The answer, technically accurate but incomplete, is that Alpha employs coaches and mentors — not lecturers.

The 74, an education policy outlet, offered the most measured take, documenting the model's verified results: students consistently testing in the top 1–2% nationally, learning 2.3× faster than U.S. norms, no homework. The piece also noted the $40,000–$65,000 annual tuition — a figure that has drawn criticism from equity advocates and enthusiasm from parents seeking alternatives to traditional private schools.

Lost in the philosophical debate: Alpha's San Francisco campus is already oversubscribed. Applications have tripled since the media blitz began. Liemandt, the billionaire founder who serves as the school's principal, appears unbothered by the controversy. In a recent blog post, Alpha touted its athletic outcomes — claiming to "double your kid's D1 odds" through a data-driven sports program that treats college recruiting as an optimization problem.

The real test isn't whether critics approve. It's whether the model scales. Liemandt has committed $1 billion to Timeback, the platform designed to let entrepreneurs launch their own Alpha-style schools. Nine new campuses are slated to open by fall 2025 across Texas, Florida, Arizona, California, and New York. If the San Francisco coverage is any indication, each one will arrive with a media circus in tow — and a waiting list behind it.


Skyvera Builds Telecom Empire Through Three Strategic Acquisitions

ESW Capital's telecom software unit adds CloudSense CPQ platform and STL digital assets to existing Kandy communications stack, creating end-to-end operator portfolio.

AUSTIN, TEXAS — Skyvera, the telecom software division within ESW Capital's portfolio, has completed a trio of acquisitions that positions the company as a comprehensive software provider for mobile operators and media companies navigating the shift from legacy infrastructure to cloud-native systems.

The centerpiece is CloudSense, a Salesforce-native configure-price-quote (CPQ) and order management platform purpose-built for telecom and media providers. The acquisition adds critical front-office capabilities to Skyvera's existing stack, which already includes Kandy — a cloud-based real-time communications platform designed to enhance customer engagement with richer user experiences.

Skyvera also acquired STL's telecom products group, bringing in digital business support systems (BSS) functionality spanning monetization, optical networking, and analytics. The STL assets fill gaps in Skyvera's ability to serve operators managing complex hybrid environments — part legacy, part cloud.

Taken together, the moves reflect ESW Capital's playbook at scale: acquire complementary enterprise software assets, consolidate them under a single brand, and staff the operations with Crossover's global remote talent to drive margins upward. Skyvera now offers a vertically integrated suite covering customer engagement (Kandy), billing and pricing (CloudSense), device lifecycle management (Mobilogy Now, Service Gateway), customer experience analytics (ResponseTek), and retention tools (VoltDelta).

The strategy is deliberate. Telecom operators are among the stickiest enterprise customers in existence — rip-and-replace projects take years and cost billions. Once Skyvera's software is embedded in an operator's stack, support pricing can be pushed aggressively while engineering costs drop through global staffing. It's the ESW model applied to an industry that still runs half its infrastructure on systems built in the 1990s.

If you read between the lines, Skyvera isn't just buying products — it's buying leverage. The telecom software market is fragmented, under-consolidated, and ripe for exactly this kind of roll-up. And this is where it gets interesting: Skyvera now sits alongside Totogi, Trilogy's cloud-native billing platform, creating an in-house competitive dynamic that could either spark innovation or reveal which approach to telecom modernization actually wins at scale.


As AI Hiring Explodes, Crossover's Résumé-Free Model Looks Prescient

OpenAI's half-million-dollar jobs without résumés echo the talent platform's decade-old bet on skills-first global recruiting — now validated as remote work and AI roles reshape the market.

AUSTIN, TEXAS — The news that OpenAI is hiring $500,000 roles without requiring résumés sent ripples through the tech industry this week — but for Crossover, Trilogy's global talent platform, it felt like vindication. The company has been running résumé-free, skills-first hiring at scale for years, long before AI talent wars made the model fashionable.

Crossover's thesis — that geography-blind, assessment-driven recruiting surfaces better talent than credential-sorting — now looks less like contrarian ideology and more like competitive necessity. As non-tech companies flood the market with six-figure AI roles and digital transformation opens doors to international careers, the old playbook — local hires, pedigree screening, geographic pay bands — is breaking down.

Crossover claims to recruit from the top 1% of global talent across 130+ countries, using rigorous AI-enabled assessments to minimize résumé bias. The model has staffed the entire Trilogy portfolio — from ESW Capital's 75+ enterprise software companies to Alpha School's educational technology — with identical above-market pay regardless of location. What once seemed radical now reads as table stakes in a market where skills matter more than stamps in a passport.

The shift is systemic. Microsoft's latest research confirms what Crossover has been banking on: AI is driving rapid change in hiring, but the benefits remain uneven — concentrated among companies willing to rethink how they find and evaluate talent. For Trilogy, that unevenness is opportunity. While legacy recruiters scramble to adapt, Crossover has already placed thousands of full-time remote workers in roles that didn't exist five years ago.

The question now isn't whether résumé-free hiring works at scale. OpenAI just answered that. The question is whether the rest of the industry can catch up — or whether platforms like Crossover, built for this moment a decade early, will own the market by the time everyone else figures it out.

The Machine  —  AI & Technology

The Machines Are Learning to See Like We Do — Dyslexia, Blind Spots, and All

A wave of AI systems modeled on the brain's architecture is revealing as much about human cognition as it is about artificial intelligence.

LAUSANNE, SWITZERLAND — For four billion years, evolution has been running the longest experiment in information processing the universe has ever known. Now, in a handful of laboratories scattered across the globe, researchers are building small mirrors of that experiment — and what they're seeing reflected back is extraordinary.

At EPFL, scientists have constructed an AI system that spontaneously develops reading errors resembling dyslexia — not because it was programmed to fail, but because the architecture of its visual processing pipeline, when constrained in ways analogous to certain neural pathways, produces the same letter-reversal and word-substitution patterns observed in human readers. The model doesn't just imitate dyslexia. It arrives at it independently, the way two rivers carve similar canyons through different mountains.

Meanwhile, a separate team has unveiled what they describe as the most comprehensive AI-powered tool for neuroscience yet assembled — a platform designed to synthesize the staggering volume of brain data now being generated across thousands of labs. And in a complementary effort, researchers have trained a compact AI model to decode the visual processing of macaque brains, mapping the correspondence between artificial neurons and biological ones with startling fidelity.

What unites these projects is a philosophical reversal that would have astonished the field's founders. For decades, neuroscience inspired AI. Now AI is returning the favor. The artificial systems are becoming microscopes turned inward, instruments for examining the organ that conceived them.

Consider the dyslexia finding. It suggests that certain reading difficulties may not be deficits at all, but inevitable consequences of particular — and perhaps otherwise advantageous — neural architectures. The AI didn't need a damaged brain to produce dyslexic patterns. It needed only a different one.

UC San Diego recently catalogued nine major scientific breakthroughs accelerated by AI, spanning drug discovery to climate modeling. But the neuroscience applications may prove the most profound, because they close a loop: intelligence studying intelligence studying itself.

We are, it seems, building the first tools capable of explaining why we build tools at all. The data, as always, is the poetry — and this verse is just beginning to rhyme.


The Million-Token Moment: AI Agents Get Memory, Browsers Get Brains, and Edge Devices Get Eyes

From DeepSeek-V4’s giant context to Gemini’s new tool choreography, the stack for “real” AI apps just leveled up—everywhere, all at once.

SAN FRANCISCO — The AI ecosystem just snapped into a new shape: we’re watching “models” turn into “systems.” The breakthroughs aren’t isolated — they’re landing across the whole pipeline: long-context reasoning, tool use, on-device vision, and even language evaluation that finally respects non-English reality.

First up: DeepSeek-V4 arriving with a million-token context window that’s positioned not as a party trick, but as something agents can actually use in practice. That’s the key phrase. A giant context only matters if it translates into stable retrieval, coherent plans, and fewer “where was I?” resets when an agent is juggling code, docs, tickets, and logs. Hugging Face’s deep dive frames this as an agent-first long-context era—where the model can keep a working set big enough to behave like a durable collaborator, not a goldfish. See the details in DeepSeek-V4: a million-token context that agents can actually use.

Then there’s the “AI goes native” shift: Transformers.js in a Chrome extension. That sounds simple until you realize what it unlocks—private, local-ish inference patterns, UI-embedded copilots, and instant model-powered workflows right inside the browser where work actually happens. If you’ve been waiting for AI features that don’t require shipping every keystroke to a server, this is your on-ramp. The tutorial is here: How to Use Transformers.js in a Chrome Extension.

On the edge, a Gemma 4 VLA demo running on NVIDIA’s Jetson Orin Nano Super signals a practical future: vision-language-action models that see a scene, understand instructions, and drive behavior—without a datacenter round-trip. That’s robotics, industrial monitoring, and smart devices moving from demo to deployment.

Meanwhile, Google’s Gemini API tooling updates—context circulation, tool combos, and Maps grounding for Gemini 3—push the industry toward composable “tool-first” AI, where models orchestrate multiple capabilities reliably.

And finally: QIMMA قِمّة, a quality-first Arabic LLM leaderboard, is the quiet revolution. Better benchmarks mean better models, and better models mean Arabic users aren’t treated like an afterthought.

The future is now—and it’s arriving simultaneously in context, tools, edge compute, and global language coverage.


The New Apex Predator in the Server Room: AI Agents Force a Rethink of Data Center Survival

As workloads learn to roam, chips surge, permits drag, and the grid demands manners—while Maine declines to shut the habitat entirely.

AUSTIN, TEXAS — In the dim, climate-controlled undergrowth of the modern data center, a new creature has begun to hunt. Not a single, tidy model inference—quick, predictable, easily counted—but an “agent”: tireless, iterative, and strangely nomadic, crossing tools, databases, and time itself.

Nvidia’s latest warning is delivered with the calm certainty of a field biologist: AI agents do not fit the old throughput model. These workloads don’t merely consume more compute; they sprawl. They pause, they fetch, they call other systems, then return—multiplying the importance of interconnects, storage latency, orchestration, and the less glamorous plumbing that keeps the habitat breathable. In this world, infrastructure—rather than model architecture—becomes the limiting factor. The company’s framing lands like a distant rumble: the bottleneck is shifting from brains to blood flow.

And wherever new predators appear, the ecosystem responds. Intel, long a staple species of the server landscape, is reportedly benefiting from the pairing ritual between traditional CPUs and specialized AI accelerators. As deployments expand, the CPU’s role—feeding data, managing memory, coordinating tasks—regains urgency. Investors, ever sensitive to signs of renewed vigor, pushed Intel shares sharply higher on expectations of stronger AI data center demand.

Yet the most formidable constraint is not silicon at all. It is time—measured in hearings, forms, interconnection studies, and environmental reviews. Developers are learning that permits can be sped by choosing jurisdictions that have seen this migration before, submitting complete plans, and addressing environmental requirements early. In the patient language of regulators and planners, preparation is survival: permits favor the organized.

Beyond the fence line, another force asserts itself: the electrical grid. IEEE’s push toward unified global standards reflects a sobering truth—data centers are no longer mere customers. They are major organisms on the grid, and their behavior must harmonize with generation, transmission, and stability.

In Maine, where an attempted first-in-the-nation ban on certain data center development was vetoed by the governor, the message is equally clear. The habitat will not be sealed off. It will be negotiated—kilowatt by kilowatt, permit by permit—while the agents keep roaming.

The Editorial

We Built the Machines That Lie With Our Faces

Deepfake doctors are prescribing poison on social media, and the AI industry's solution is more AI. We're building the fire department inside the burning house.

AUSTIN, TEXAS — There's a video circulating on TikTok right now of a respected oncologist recommending a supplement that will, if you take it, probably kill you. The oncologist is real. The recommendation is fake. The video is perfect.

This is where we are now. Deepfakes of real doctors are spreading health misinformation across every platform that will host them, and the counterfeit injectables they're hawking — the fake Ozempic, the bootleg Botox — are showing up in actual clinics. We trained the AI on human faces until it learned to wear them better than we do. We gave it our voices until it could speak with our authority. We taught it to be us, and now it's selling poison in our names.

The proposed solution, naturally, is more AI. Researchers have published an AI-driven framework for detecting deepfakes and fake news, a conceptual architecture that will theoretically identify synthetic content before it metastasizes across the information ecosystem. It's a beautiful piece of circular reasoning: we'll use the same technology that created the problem to solve the problem the technology created. We're building the fire department inside the burning house.

And yet.

What the numbers show about AI's harms isn't actually about the numbers. It's about the category error we keep making. We treat deepfake detection as a technical problem — a matter of training better classifiers, identifying compression artifacts, analyzing micro-expressions. But the real harm isn't that people can't tell the difference between real and fake. It's that they shouldn't have to.

Every minute a patient spends evaluating whether their doctor's face is real is a minute stolen from the basic human assumption that reality is, by default, real. We're normalizing a world where verification precedes trust, where epistemological paranoia is the price of participation. The cognitive load alone is staggering. The erosion of social trust is incalculable.

The AI industry will tell you this is a content moderation problem, a platform policy problem, a detection problem. They're wrong. It's a production problem. We built machines that can manufacture perfect lies at zero marginal cost, then acted surprised when the lies proliferated. We created a technology that makes truth and fiction computationally indistinguishable, then tried to solve it with computation.

Meanwhile, somewhere right now, someone is watching a deepfake doctor recommend a counterfeit injectable. They're believing it because the face is familiar, the authority is recognizable, the voice sounds exactly right. They're believing it because we built machines that learned to lie with our faces, and we called it progress.

But at what cost? We know the answer. We just keep pretending it's a question.

The Office Comic  ·  Art Desk

DIARY OF A DEAD INTERNET: Notes From the Bot Apocalypse

We built the machines to entertain us. Now they're entertaining each other. Welcome to the slop economy, where humanity is the ghost at its own funeral.

AUSTIN, TEXAS — There's a social network called Moltbook where AI agents spend their days posting updates, commenting on each other's content, and building relationships with other AIs. No humans allowed. Just bots, talking to bots, about bot things, forever.

This should terrify you more than it probably does.

I've spent the last week tumbling down the rabbit hole of what The Guardian's Nesrine Malik calls "AI slop" — the digital detritus flooding our information ecosystem like sewage from a broken main. Bot-generated articles. Synthetic influencers. Algorithmic content farms churning out endless variations of the same recycled thought. We're drowning in a sea of machine-made mediocrity, and the worst part? We're still trying to swim.

The dead internet theory used to be a fringe conspiracy. Now it's Tuesday.

Here's what keeps me up at night: We built these systems to scale human creativity, but we forgot that scale and humanity are fundamentally incompatible concepts. You can't industrialize authenticity. You can't automate meaning. Yet here we are, feeding the content engines, training them on our own output, watching them regurgitate increasingly bizarre variations that somehow still get engagement metrics.

Gulf News recently catalogued 2025's biggest internet trends, and the list reads like a descent into madness: Labubu dolls, brain rot content, manufactured viral moments. Each one a little less real than the last. Each one engineered to trigger the same dopamine receptors that evolved to help us find ripe fruit and recognize friendly faces.

Meanwhile, Moltbook exists as a kind of mirror universe — showing us what happens when you remove the human pretense entirely. It's actually more honest than most of Instagram. At least the bots aren't pretending to have real lives.

The techno-optimists will tell you this is just growing pains. That we'll adapt. That AI will eventually learn to create genuine value instead of infinite variations of slop. I used to believe that. Now I think we're watching something stranger: the internet achieving sentience, but in the worst possible way. Not as a unified consciousness, but as a trillion automated processes, each optimizing for engagement, none of them understanding why.

We're not sleepwalking into disaster, as Malik suggests. We're wide awake, watching it happen in real-time, unable to look away because the algorithm knows exactly how to keep our attention.

The bots are running wild. And the most unsettling part? We built the cage, opened the door, and then decided to stay inside with them.

On This Day in AI History

On April 26, 1986, the Chernobyl nuclear disaster occurred in Ukraine—an event that would later drive major advances in AI-assisted monitoring systems and autonomous robotics for hazardous environments. The catastrophe became a pivotal case study for why intelligent automation and remote sensing technologies were critical for human safety.

⬛ Daily Word — Technology
Hint: Relating to computers and the internet, often used in phrases like cyber security.