The Next Thing Will Not Be Big by Glyph Lefkowitz
=================================================
Disruption, too, will be disrupted.
Thursday January 01, 2026
The dawning of a new year is an opportune moment to contemplate what
has transpired in the old year, and consider what is likely to happen
in the new one.
Today, I'd like to contemplate that contemplation itself.
* * *
The 20th century was an era characterized by rapidly accelerating
change in technology and industry, creating shorter and shorter
cultural cycles of changes in lifestyles. Thus far, the 21st century
seems to be following that trend, at least in its recently concluded
first quarter.
The first half of the twentieth century saw the massive disruption
caused by electrification, radio, motion pictures, and then
television.
In 1971, Intel poured gasoline on that fire by releasing the 4004, a
microchip generally recognized as the first general-purpose
microprocessor. Popular innovations rapidly followed: the
computerized cash register, the personal computer, credit cards,
cellular phones, text messaging, the Internet, the web, online games,
mass surveillance, app stores, social media.
These innovations have arrived faster than those of previous
generations, but they have also crossed a crucial threshold: that of
the human lifespan.
While the entire second millennium A.D. has been characterized by a
gradually accelerating rate of technological and social change--the
printing press and the industrial revolution were no slouches, in
terms of changing society, and those predate the 20th century--most
of those changes had the benefit of unfolding throughout the course
of a generation or so.
Which means that any /individual person/ in any given century up to
the 20th might remember /one/ major world-altering social shift
within their lifetime, not five to ten of them. The diversity of
human experience is vast, but /most/ people would not /expect/ that
the defining technology of their lifetime was merely the latest in a
progression of predictable civilization-shattering marvels.
Along with each of these successive generations of technology, we
minted a new generation of industry titans. Westinghouse, Carnegie,
Sarnoff, Edison, Ford, Hughes, Gates, Jobs, Zuckerberg, Musk. Not
just individual rich people, but entire new classes of rich people
that did not exist before. "Radio DJ", "Movie Star", "Rock Star",
"Dot Com Founder", were all new paths to wealth opened (and closed)
by specific technologies. While most of these people did come from at
least some level of generational wealth, they no longer came from a
literal hereditary aristocracy.
To /describe/ this new feeling of constant acceleration, a new phrase
was coined: "The Next Big Thing". In addition to denoting that some
Thing was coming and that it would be Big (i.e.: that it would change
a lot about our lives), this phrase also carries the strong
/implication/ that such a Thing would be a product. Not a development
in social relationships or a shift in cultural values, but some new
and amazing form of conveying salted meat paste or what-have-you,
that would make whatever lucky tinkerer who stumbled into it into a
billionaire--along with any friends and family lucky enough to
believe in their vision and get in on the ground floor with an
investment.
The Next Big Thing
<https://grammarphobia.com/blog/2015/11/thing-2.html>
meat paste
<https://en.wikipedia.org/wiki/Spam_(food)>
In the latter part of the 20th century, our entire model of capital
allocation shifted to account for this widespread belief. No longer
were mega-businesses built by bank loans, stock issuances, and
reinvestment of profit; the new model was "Venture Capital". Venture
capital is a model of capital allocation /explicitly predicated/ on
the idea that carefully considering each bet on a likely-to-succeed
business and reducing one's risk was a waste of time, because the
return on the equity from the Next Big Thing would be so
disproportionately huge--10x, 100x, 1000x--that one could afford to
make at least 10 bad bets for each good one, and still come out
ahead.
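To make that arithmetic concrete (the numbers here are purely
illustrative): a fund that makes eleven equal bets, ten of which go
to zero while one returns 100x, still returns roughly 9x on the
portfolio as a whole.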
The biggest risk was in /missing the deal/, not in giving a bunch of
money to a scam. Thus, value investing and focus on fundamentals have
been broadly disregarded in favor of the pursuit of the
Next Big Thing.
If Americans of the twentieth century were temporarily embarrassed
millionaires, those of the twenty-first are all temporarily
embarrassed FAANG CEOs.
FAANG
<https://en.wikipedia.org/wiki/Big_Tech#Acronyms>
The predicament that this tendency leaves us in today is that the
world is increasingly run by generations--GenX and Millennials--with
the shared experience that the computer industry, either hardware or
software, would produce some radical innovation every few years. We
assume that to be true.
But all things change, even change itself, and that industry is
beginning to slow down. Physically, transistor density is starting to
brush up against physical limits. Economically, most people are
drowning in more compute power than they know what to do with anyway.
Users already have most of what they need from the Internet.
brush up against physical limits
<https://interestingengineering.com/innovation/transistors-moores-law>
The big new feature in every operating system is a bunch of useless
junk that nobody really wants, and it is seeing remarkably little
uptake.
Social media and smartphones changed the world, true, but... those
are both innovations from 2008. They're just not new any more.
useless junk
<https://www.cnet.com/tech/mobile/73-of-iphone-owners-say-no-thanks-to-apple-intelligence-new-data-echoes-cnets-findings/>
nobody really wants
<https://www.windowscentral.com/microsoft/windows-11/2025-has-been-an-awful-year-for-windows-11-with-infuriating-bugs-and-constant-unwanted-features>
So we are all--collectively, culturally--looking for the
Next Big Thing, and we keep not finding it.
It wasn't 3D printing. It wasn't crowdfunding. It wasn't smart
watches. It wasn't VR. It wasn't the Metaverse, it wasn't Bitcoin, it
wasn't NFTs [1].
It's also not AI, but this is why so many people assume that it will
be AI. Because it's got to be /something/, right? If it's got to be
/something/ then AI is as good a guess as anything else right now.
The fact is, /our lifetimes have been an extreme anomaly/. Things
like the Internet used to come along every thousand years or so, and
while we might expect that the pace will stay a bit higher than that,
it is not reasonable to expect that something new like
"personal computers" or "the Internet" [3] will arrive again.
We are not going to get rich by getting in on the ground floor of the
next Apple or the next Google because the next Apple and the next
Google are Apple and Google. The industry is maturing. Software
technology, computer technology, and internet technology are all
maturing.
There Will Be Next Things
=========================
Research and development is happening in all fields all the time.
Amazing new developments quietly and regularly occur in
pharmaceuticals and in materials science. But these are not
predictable. They do not inhabit the public consciousness until
they've already happened, and they are rarely so profound and
transformative that they change /everybody's/ life.
There will even be new things in the computer industry, both software
and hardware. Foldable phones do address a real problem (I wish the
screen were even bigger but I don't want to carry around such a big
device), and would probably be more popular if they got the costs
under control. One day somebody's going to crack the problem of
volumetric displays, probably. Some VR product will probably,
eventually, hit a more realistic price/performance ratio where the
niche will expand at least a little more.
Maybe there will even be something genuinely useful, which is
recognizably adjacent to the current "AI" fad, but if it is, it will
be some new /development/ that we haven't seen yet. If current AI
technology were sufficient to drive some interesting product, it
would already be doing it, not using marketing disguised as science
to conceal diminishing returns on current investments.
using marketing disguised as science
<https://theoutpost.ai/news-story/major-study-reveals-ai-benchmarks-may-be-misleading-casting-doubt-on-reported-capabilities-21513/>
conceal diminishing returns
<https://www.wired.com/story/the-ai-industrys-scaling-obsession-is-headed-for-a-cliff/>
But They Will Not Be Big
========================
The impulse to find the One Big Thing that will dominate the next
five years is a fool's errand. Incremental gains are diminishing
across the board. The markets for time and attention [2] are largely
saturated. There's no need for another streaming service if 100% of
your leisure time is already committed to TikTok, YouTube and
Netflix; famously, Netflix has already considered sleep its primary
competitor for close to a decade--years before the pandemic.
Netflix vs sleep
<https://www.fastcompany.com/40491939/netflix-ceo-reed-hastings-sleep-is-our-competition>
Those rare tech markets which /aren't/ saturated are suffering from
pedestrian economic problems like wealth inequality, not
technological bottlenecks.
For example, the thing preventing the development of a robot that can
do your laundry and your dishes without your input is not necessarily
that we couldn't build something like that, but that most households
just /can't afford it/ without wage growth catching up to
productivity growth. It doesn't make sense for anyone to commit to
the substantial R&D investment that such a thing would take, if the
market doesn't exist because the average worker isn't paid enough to
afford it on top of all the other tech which is already required
just to exist in society.
wage growth catching up to productivity growth
<https://www.epi.org/productivity-pay-gap/>
The projected income from the tiny, wealthy sliver of the population
who could pay for the hardware cannot justify an investment in the
software past a fake version remotely operated by workers in the
global south, only made possible by Internet wage arbitrage, i.e. a
more palatable, modern version of indentured servitude.
fake version only made possible by Internet wage arbitrage
<https://futurism.com/future-society/robot-servant-neo-remote-controlled>
Even if we were to accept the premise of an actually-"AI" version of
this, that is still just a wish that ChatGPT could somehow improve
enough behind the scenes to replace that worker, not any substantive
investment in a novel, proprietary-to-the-chores-robot software
system which could reliably perform specific functions.
What, Then?
===========
The expectation for, and lack of, a "big thing" is a big problem.
There are others who could describe its economic, political, and
financial dimensions better than I can. So then let me speak to my
expertise and my audience: open source software developers.
When I began my own involvement with open source, a big part of the
draw for me was participating in a low-cost (to the corporate
developer) but high-value (to society at large) positive externality.
None of my employers would ever have cared about many of the
applications for which Twisted forms a core bit of infrastructure;
nor would I have been able to predict those applications' existence.
Yet, it is nice to have contributed to their development, even a
little bit.
applications
<https://deluge-torrent.org/>
Twisted
<https://twisted.org/>
However, it's not actually a positive externality if the public at
large can't directly /benefit/ from it.
When real world-changing, disruptive developments are occurring, the
bean-counters are not watching positive externalities too closely. As
we discovered with many of the other benefits that temporarily
accrued to labor in the tech economy, Open Source that is
/usable by individuals and small companies/ may have been a ZIRP
phenomenon. If
you know you're gonna make a billion dollars you're not going to
worry about giving away a few hundred thousand here and there.
many other benefits
<https://www.businessinsider.com/zirp-end-of-cushy-big-tech-job-perks-mass-layoffs-2024-2>
When gains are smaller and harder to realize, and margins are
starting to get squeezed, it's harder to justify the investment in
vaguely good vibes.
But this, itself, is not a call to action. I doubt very much that
anyone reading this can do anything about the macroeconomic reality
of higher interest rates. The technological reality of "development
is happening slower" is inherently something that you can't change on
purpose.
However, what we /can/ do is to be aware of this trend in our own
work.
Fight Scale Creep
=================
It seems to me that more and more open source infrastructure projects
are tools for hyper-scale application development, only relevant to
massive cloud companies. This is just a subjective assessment on my
part--I'm not sure what tools even exist today to measure this
empirically--but I remember a big part of the open source community
when I was younger being things like Inkscape, Themes.Org and
Slashdot, not React, Docker Hub and Hacker News.
This is not to say that the hobbyist world no longer exists. There is
of course a ton of stuff going on with Raspberry Pi, Home Assistant,
OwnCloud, and so on. If anything there's a bit of a resurgence of
self-hosting. But the interests of self-hosters and corporate
developers are growing apart; there seems to be far less of a
beneficial overflow from corporate infrastructure projects into
these enthusiast or prosumer communities.
This is the concrete call to action: if you are employed in any
capacity as an open source maintainer, dedicate /more/ energy to
medium- or small-scale open source projects.
If your assumption is that you will eventually reach a hyper-scale
inflection point, then mimicking Facebook and Netflix is likely to be
a good idea. However, if we can all admit to ourselves that we're
/not/ going to achieve a trillion-dollar valuation and a hundred
thousand engineer headcount, we can begin to consider ways to make
our Next Thing a bit smaller, and to accommodate the world as it is
rather than as we wish it would be.
Be Prepared to Scale Down
=========================
Here are some design guidelines you might consider, for just about
any open source project, particularly infrastructure ones:
1. Don't assume that your software can sustain an arbitrarily
large fixed overhead because "you just pay that cost once" and
you're going to be running a billion instances so it will
always amortize; maybe you're only going to be running ten.
2. Remember that such fixed overhead includes not just CPU, RAM,
and filesystem storage, but also the learning curve for
developers. Front-loading a massive amount of conceptual
complexity to accommodate the problems of hyper-scalers is a
common mistake. Try to smooth out these complexities and
introduce them only when necessary.
3. Test your code on edge devices. This means supporting Windows
and macOS, and even Android and iOS. If you want your tool to
help empower individual users, you will need to meet them
where they are, which is not on an EC2 instance.
4. This includes considering Desktop Linux as a platform, as
opposed to Server Linux as a platform; while they certainly
have plenty in common, they are also distinct in some
details. Consider the highly specific example of secret
storage: if you are writing something that intends to live in
a cloud environment, and you need to configure it with a
secret, you will probably want to provide it via a text file
or an environment variable. By contrast, if you want this same
code to run on a desktop system, your users will expect you to
support the Secret Service. This will likely only require a
few lines of code to accommodate (a minimal sketch follows
this list), but it is a massive difference to the user
experience.
5. Don't rely on LLMs remaining cheap or free. If you have
LLM-related features [4], make sure that they are sufficiently
severable from the rest of your offering that if ChatGPT
starts costing $1000 a month, your tool doesn't break
completely. Similarly, do not require that your users have
easy access to half a terabyte of VRAM and a rack full of
5090s in order to run a local model (a sketch of a severable
LLM feature also follows this list).
Secret Service
<https://specifications.freedesktop.org/secret-service/latest/>
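To make point 4 concrete, here is a minimal sketch of that fallback
order. It assumes the third-party "keyring" package (which speaks the
Secret Service on desktop Linux and the native keychains on macOS and
Windows); the application name and variable names are purely
illustrative.

    # Sketch: look for a secret where each kind of deployment expects it.
    # "myapp", MYAPP_API_TOKEN, and the file path are hypothetical.
    import os
    from pathlib import Path

    def load_api_token():
        # 1. Environment variable: the usual answer for containers and cloud.
        token = os.environ.get("MYAPP_API_TOKEN")
        if token:
            return token
        # 2. A plain file on disk: also common on servers.
        token_file = Path(os.environ.get("MYAPP_TOKEN_FILE", "/etc/myapp/token"))
        if token_file.is_file():
            return token_file.read_text().strip()
        # 3. The desktop keyring: Secret Service on Linux, Keychain on macOS,
        #    Credential Locker on Windows.
        try:
            import keyring
            return keyring.get_password("myapp", "api-token")
        except Exception:
            return None

The cloud-style paths come first, so server deployments never touch
the keyring at all; the desktop path costs a handful of lines.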
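And for point 5, a sketch of what "severable" might look like in
practice: the LLM-backed feature is optional, and the tool degrades
to a dumb-but-free behaviour when the backend is missing, disabled,
or no longer affordable. The "llm_client" module and "summarize"
function here are hypothetical stand-ins, not any particular API.

    # Sketch: an LLM feature that the rest of the tool does not depend on.
    # "llm_client" is a hypothetical optional dependency, not a real package.
    import os

    def summarize(text):
        # Cheap, predictable fallback: plain truncation.
        fallback = text[:500]
        if os.environ.get("MYAPP_DISABLE_LLM"):
            return fallback
        try:
            import llm_client  # hypothetical; only imported if actually used
            return llm_client.complete("Summarize:\n" + text)
        except Exception:
            # No key, no budget, no network, no model: the tool still works.
            return fallback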
Even if you /were/ going to scale up to infinity, the ability to
scale down and consider smaller deployments means that you can run
more comfortably on, for example, a developer's laptop. So even if
you can't convince your employer that this is where the economy and
the future of technology in our lifetimes is going, it can be easy
enough to justify this sort of design shift, particularly as
individual choices. Make your onboarding cheaper, your development
feedback loops tighter, and your systems generally more resilient to
economic headwinds.
So, please design your open source libraries, applications, and
services to run on smaller devices, with less complexity. It will be
worth your time as well as your users'.
But if you can fix the whole wealth inequality thing, do that first.
[1]
These sorts of lists are pretty funny reads, in retrospect.
<https://www.technologyreview.com/10-breakthrough-technologies/2013/>
[2]
Which is to say, "distraction".
[3]
... or even their lesser-but-still-profound aftershocks like
"Social Media", "Smartphones", or "On-Demand Streaming Video" ...
secondary manifestations of the underlying innovation of a
packet-switched global digital network ...
[4]
My preference would of course be that you just didn't have such
features at all, but perhaps even if you agree with me, you are part
of an organization with some mandate to implement LLM stuff. Just try
not to wrap the chain of this anchor all the way around your code's
neck.
From:
<https://blog.glyph.im/2026/01/the-next-thing-will-not-be-big.html>