Anonymous wrote:
Singularity point has been reached. Once code begins writing code we are there.
https://www.youtube.com/watch?v=X_4rKVXev8k
This PC guy is kind of a dumbshit.
Code has been "writing code" since the early 80's. AIX, VAX/VMS, AT&T
Unix, all "wrote code" based on operational state and runtime requirements
on a regular basis every day.
On Tue, 14 Apr 2026 16:52:06 -0400
Anonymous User <noreply@dirge.harmsk.com> wrote:
Singularity point has been reached. Once code begins writing code
we are there.
https://www.youtube.com/watch?v=X_4rKVXev8k
This PC guy is kind of a dumbshit.
Code has been "writing code" since the early 80's. AIX, VAX/VMS, AT&T
Unix, all "wrote code" based on operational state and runtime
requirements on a regular basis every day.
Also, to the surprise of precisely nobody who's been paying any damn
attention to the "AI" playbook, it turns out the whole "found a million
zillion super-complicated bugs, but they live in Canada, you wouldn't
know them, also we totally *do* have an everything-proof-shield-proof-
sword and a real actual wizard staff that does magic, but they're in
the treehouse and you're not allowed up there" report is, to put it
politely, massively overstated PR hoopla:
https://www.tomshardware.com/tech-industry/artificial-intelligence/anthropics-claude-mythos-isnt-a-sentient-super-hacker-its-a-sales-pitch-claims-of-thousands-of-severe-zero-days-rely-on-just-198-manual-reviews
...which will doubtless come as a *total* surprise to everyone who's
spent the last three years parroting everything Dario Amodei says
unquestioned, immediately forgetting about any given claim when the next
one farts out of his mouth, and never bothering to go back and check
whether any of his apocalyptic Real Soon Now predictions turned out to
be ludicrous bullshit (spoiler: the answer is "very much yes.")
Meanwhile, the money's already drying up, datacenters are behind
schedule or not even started, the big players have already started on
the "service gets worse" stage of enshittification, OpenAI just killed
its most cost-intensive service mere months after announcing a billion-
dollar deal with Disney for it, and its CFO was making uncomfortable
noises about their prospects for an IPO (which would involve opening
the books for public inspection) before getting the vaudeville hook.
All very healthy and normal!
https://www.wheresyoured.at/the-ai-industry-is-lying-to-you/
https://www.wheresyoured.at/the-subprime-ai-crisis-is-here/
https://www.wheresyoured.at/openai-cfo-news/
One could argue that we've had "code writing code" since the first
compiler. I think the bigger issue is the tendency of LLMs to generate
plausible-looking nonsense that can be difficult to identify as such.
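The "code writing code since the first compiler" point can be made concrete. A minimal sketch (the generated functions and names are made up for the example, not from the thread) of a program that generates Python source, then compiles and runs it:

```python
# A trivial instance of "code writing code", in the spirit of macro
# processors and compilers: generate source text, then compile and
# load it into a fresh namespace.
source = "\n".join(f"def add{n}(x):\n    return x + {n}" for n in range(3))

namespace = {}
exec(source, namespace)        # compile and load the generated functions

print(namespace["add2"](40))   # 42
```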
Why is lithium-6 a fermion?
On Wed, 15 Apr 2026 23:33:50 +0100, Bill Findlay wrote:
Why is lithium-6 a fermion?
"Fermion" means the object has an odd wave function, which, as far as
I can recall from undergrad physics, has nothing to do with its spin.
The opposite of "fermion" is "boson", which means it has an even wave
function. The particles that transmit the fundamental forces (e.g.
photons for the electromagnetic force) are bosons.
Fermions obey the Pauli Exclusion Principle, bosons don't. This
(simplistically) means that two fermions cannot occupy the same space
at the same time.
All matter is made out of fermions. I suppose this is by definition,
really; two objects that could occupy the same space at the same time
would not be considered "material".
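For what it's worth, the usual counting rule answers the original question directly: a composite object is a fermion if and only if it contains an odd number of spin-1/2 constituents (protons, neutrons, electrons). A minimal sketch (the function name is mine):

```python
# Composite-particle statistics by counting spin-1/2 constituents:
# an odd total makes the composite a fermion, an even total a boson.
def statistics(protons: int, neutrons: int, electrons: int) -> str:
    total = protons + neutrons + electrons
    return "fermion" if total % 2 else "boson"

# Neutral lithium-6 atom: 3 protons + 3 neutrons + 3 electrons = 9 (odd)
print(statistics(3, 3, 3))   # fermion
# Neutral lithium-7 atom: one more neutron -> 10 (even)
print(statistics(3, 4, 3))   # boson
```

This is why lithium-6 is the fermionic isotope used in cold-atom experiments while lithium-7 is the bosonic one.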
However, I think that LLMs already can do some work, like
first-line helpdesk or "paper pusher" work where people
check that the appropriate documents are present on input
and push them on for further processing. IIUC LLM-based
machine translation, while not perfect, has made substantial
progress. There are potentially very lucrative markets
like autonomous weapons and mass surveillance.
And it's a *fantastic* idea, too!
https://mashable.com/article/air-canada-forced-to-refund-after-chatbot-misinformation
There is no warranty in real life. However, current
semiconductor improvements will continue for some time,
But the human brain seems to learn from much smaller training data.
On Thu, 16 Apr 2026 12:51:57 -0000 (UTC), Waldek Hebisch wrote:
But the human brain seems to learn from much smaller training data.
Also prone to believing things that aren't true.
Coincidence? You be the judge.
In alt.folklore.computers John Ames <commodorejohn@gmail.com> wrote:
On Tue, 14 Apr 2026 16:52:06 -0400
Anonymous User <noreply@dirge.harmsk.com> wrote:
Singularity point has been reached. Once code begins writing code
we are there.
https://www.youtube.com/watch?v=X_4rKVXev8k
This PC guy is kind of a dumbshit.
Code has been "writing code" since the early 80's. AIX, VAX/VMS, AT&T
Unix, all "wrote code" based on operational state and runtime
requirements on a regular basis every day.
Also, to the surprise of precisely nobody who's been paying any damn
attention to the "AI" playbook, it turns out the whole "found a million
zillion super-complicated bugs, but they live in Canada, you wouldn't
know them, also we totally *do* have an everything-proof-shield-proof-
sword and a real actual wizard staff that does magic, but they're in
the treehouse and you're not allowed up there" report is, to put it
politely, massively overstated PR hoopla:
https://www.tomshardware.com/tech-industry/artificial-intelligence/anthropics-claude-mythos-isnt-a-sentient-super-hacker-its-a-sales-pitch-claims-of-thousands-of-severe-zero-days-rely-on-just-198-manual-reviews
...which will doubtless come as a *total* surprise to everyone who's
spent the last three years parroting everything Dario Amodei says un-
questioned, immediately forgetting about any given claim when the next
one farts out of his mouth, and never bothering to go back and check
whether any of his apocalyptic Real Soon Now predictions turned out to
be ludicrous bullshit (spoiler: the answer is "very much yes.")
Meanwhile, the money's already drying up, datacenters are behind
schedule or not even started, the big players have already started on
the "service gets worse" stage of enshittification, OpenAI just killed
its most cost-intensive service mere months after announcing a billion-
dollar deal with Disney for it, and its CFO was making uncomfortable
noises about their prospects for an IPO (which would involve opening
the books for public inspection) before getting the vaudeville hook.
All very healthy and normal!
https://www.wheresyoured.at/the-ai-industry-is-lying-to-you/
https://www.wheresyoured.at/the-subprime-ai-crisis-is-here/
https://www.wheresyoured.at/openai-cfo-news/
Zitron has some valid points. But he keeps saying that LLMs
will not work and that there should be a crash. He wrote that
in 2024; we have 2026 and no crash yet. I agree that the
financial part looks like a bubble which is eventually
likely to burst.
However, I think that LLMs already can do some work, like
first-line helpdesk or "paper pusher" work where people
check that the appropriate documents are present on input
and push them on for further processing. IIUC LLM-based
machine translation, while not perfect, has made substantial
progress. There are potentially very lucrative markets
like autonomous weapons and mass surveillance.
Recent news about an LLM-generated compiler IMO shows that
there is some potential for code generation. ATM there
seem to be serious limitations; basically, generated code
seems to be only as good as the available test suite. That may
limit applicability, but there is potential to serve as a
"copyright annihilator". First, LLMs probably can create
non-GPL versions of GPL-ed software that comes with a good
test suite. Fuzzing and scraping of user forums may be able
to create test suites for closed-source packages.
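The "only as good as the test suite" point can be sketched as a differential test: treat trusted reference behaviour as the oracle and probe a regenerated implementation with random inputs. Both implementations below are stand-ins I made up, with a bug planted in the "generated" one so the probe has something to find:

```python
import random

# Minimal differential-testing sketch: the reference implementation is
# the oracle; the "generated" one stands in for LLM-produced code.
def reference_sort(xs):
    return sorted(xs)

def generated_sort(xs):          # stand-in for regenerated code,
    return sorted(set(xs))       # with a deliberate bug: drops duplicates

def find_divergence(ref, gen, trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(0, 5) for _ in range(rng.randint(0, 8))]
        if ref(xs) != gen(xs):
            return xs            # counterexample: behaviours differ
    return None                  # oracle never disagreed

print(find_divergence(reference_sort, generated_sort) is not None)  # True
```

If the random probe (or the scraped test suite) never exercises the buggy case, the divergence goes unnoticed, which is exactly the limitation the post describes.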
In a slightly different spirit, classical copyright
legal theory says that information is _not_ subject
to copyright and only form is protected. But LLMs
have a huge capability to change form. One can imagine
an LLM-based "clean room" pipeline, with one LLM extracting
information/specification from copyrighted artifact(s)
and another LLM producing a new form.
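A hypothetical shape for that pipeline. The two stage functions below are placeholder stubs, not real model APIs; the only point illustrated is the information barrier: stage two never sees the original artifact, only the extracted specification.

```python
# Sketch of a two-LLM "clean room" pipeline. Both stages are stubs
# standing in for independent models.
def extract_spec(artifact: str) -> str:
    # Stage 1: describe only observable behaviour, never the source text.
    return f"spec: behaviour extracted from a {len(artifact)}-byte artifact"

def implement_spec(spec: str) -> str:
    # Stage 2: write fresh code from the specification alone.
    return f"# generated from: {spec}\ndef f():\n    pass\n"

def clean_room(artifact: str) -> str:
    spec = extract_spec(artifact)    # model A sees the artifact
    return implement_spec(spec)      # model B sees only the spec

print(clean_room("some copyrighted source text").splitlines()[0])
```

Whether courts would accept an automated barrier as a genuine clean room is, of course, an open legal question.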
Considering cost: the claimed cost of the generated compiler was
$20000. Even if the real cost is higher (due to the subsidized
price of LLM use) it may look attractive to businesses:
100 kloc is likely to require more than 1 year
of development, so it probably would cost $300000 to
create in the USA. From the point of view of managers, at
equal cost an LLM which can be quickly put to use on demand
has an advantage compared to hiring human developers, which
takes time (and dissatisfied humans may leave).
Also, according to the report the LLM took 2 weeks to create the code.
It would be hard for humans to do similar work in that
time: too much for a single developer, and coordinating a
large team could easily take more time than the coding.
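Making the arithmetic above explicit (every figure is the post's claim, not a verified number):

```python
# Back-of-envelope comparison using only the post's claimed figures.
llm_cost_usd = 20_000        # claimed cost of the LLM-generated compiler
human_cost_usd = 300_000     # post's estimate for >1 year of US development
llm_weeks, human_weeks = 2, 52   # reported LLM time vs. roughly one year

print(human_cost_usd / llm_cost_usd)   # 15.0 -> human cost ~15x higher
print(human_weeks / llm_weeks)         # 26.0 -> ~26x longer wall-clock
```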
Raw costs of computation seem to be going down. As DeepSeek
showed, better architecture can lower the amount of computation
needed by an LLM at a given quality. Apparently LLMs
struggle with some tasks that are easy for classic
algorithms, so hybrids where the LLM delegates algorithmic
parts to appropriate programs can work better and
cheaper. Current costs seem to be a consequence of an approach
where improvements to the quality of LLM output come mainly
from use of increasingly larger models and from pushing
models to do more computation (beam search, the "chain
of thought" approach, agents, etc.). IMO, if LLMs provide
sufficient quality there is potential to lower costs.
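The hybrid idea can be sketched as simple tool dispatch: route the sub-tasks that classic code does exactly (arithmetic here) to an ordinary program, with the model as fallback for everything else. `llm_answer` is a placeholder, not a real API:

```python
import re

# Toy tool-dispatch: exact arithmetic goes to the interpreter, not the
# model; anything unrecognised falls through to a (stubbed) LLM call.
def llm_answer(query: str) -> str:
    return "(model output would go here)"   # placeholder backend

def tool_dispatch(query: str) -> str:
    m = re.fullmatch(r"\s*(\d+)\s*([+*])\s*(\d+)\s*", query)
    if m:   # delegable sub-task recognised: compute it exactly
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        return str(a + b if op == "+" else a * b)
    return llm_answer(query)

print(tool_dispatch("12345 * 6789"))   # 83810205, exact every time
```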
And I think that for US businesses $300000 a year for
replacing a professional is attractive. Of course,
for jobs needing low qualifications, or for entry-level
jobs, the cost must be lower; but I think that for the
jobs that can be done now, it is.
If one wants to look at analogies in the past, then I think
Itanium and the internet bubble are relevant. The internet bubble
caused losses for investors, but internet business as a field
survived: weaker players were eliminated, some crazy ideas
dropped, winners were devalued, but business still goes
on. Itanium is an example of a technological mistake,
betting that compiler technology would improve. But it
is also an example of betting on monopoly power; without
competition Itanium had a chance to dominate the market.
ATM it is not clear if there is any viable alternative
to LLMs. By viable I mean technology that could
substantially improve productivity. My understanding
of economic reports is that in developed countries
productivity was stagnant for several years and economic
progress was delivered by outsourcing to countries with
lower labour costs. But apparently outsourcing is at
its peak.
Like Itanium for Intel, the current capital-intensive approach
via LLMs is pretty attractive to big capital. If it
manages to deliver higher productivity it will fundamentally
shift the balance of power between workers and capital in favour
of capital. That is what big capital wants, and due to this
LLMs are likely to get more funding. Of course, if LLMs
fail to deliver they may get defunded at some moment. But
to deliver what big capital wants LLMs do not need to be
very good. Already replacing, say, 5% of workers at
comparable cost would be a big win for capital. And since
we can expect the cost of computation to decrease in the
future, the main thing is whether LLMs can do an acceptable job.
Compilers are deterministic, at least in tons of languages.
LLMs aren't.
Even the worst compiler/interpreter from https://t3x.org
is far more useful than any LLM *over time*, because
you can be sure the output will be 100% the same no
matter what you are trying to implement.
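The determinism contrast can be sketched with stand-ins (the "compiler" below is just a hash and the "LLM" a seeded sampler; neither is a real toolchain):

```python
import hashlib
import random

# A compiler is (ideally) a pure function of its input: same source in,
# same output out, every time. The stub stands in for code generation.
def compile_stub(source: str) -> str:
    return hashlib.sha256(source.encode()).hexdigest()

src = "int main(void) { return 0; }"
print(compile_stub(src) == compile_stub(src))   # True: fully repeatable

# A sampling LLM is not a pure function of the prompt: the same prompt
# under a different sampling seed can yield different tokens.
def sample_stub(prompt: str, seed: int) -> str:
    return random.Random(f"{prompt}:{seed}").choice(["foo", "bar", "baz"])

print({sample_stub("same prompt", s) for s in range(10)})
```

(With temperature 0 and a fixed seed an LLM can be made repeatable in principle, but that is a deployment choice, not a property of the model.)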
Anthk <anthk@disroot.org> writes:
Compilers are deterministic, at least in tons of languages.
LLMs aren't.
Even the worst compiler/interpreter from https://t3x.org
is far more useful than any LLM *over time*, because
you can be sure the output will be 100% the same no
matter what you are trying to implement.
Also, compilers don't "hallucinate". This is not a property of LLMs
that anyone has any idea how to correct.
Verily, in article <87ldeazyj0.fsf@posteo.de>, did jonathan@jlamothe.net deliver unto us this message:
Anthk <anthk@disroot.org> writes:
[quoted text muted]
Even the worst compiler/interpreter from https://t3x.org
is far more useful than any LLM *over time*, because
you can be sure the output will be 100% the same no
matter what you are trying to implement.
Also, compilers don't "hallucinate". This is not a property of LLMs
that anyone has any idea how to correct.
The problem is that it's always "hallucinating." It has no idea what
it's actually saying, at least at this point. It's a tribute to math
and ingenuity that the result usually makes sense and is often even
accurate.
The True Melissa <thetruemelissa@gmail.com> writes:
Verily, in article <87ldeazyj0.fsf@posteo.de>, did jonathan@jlamothe.net deliver unto us this message:
Anthk <anthk@disroot.org> writes:
[quoted text muted]
Even the worst compiler/interpreter from https://t3x.org
is far more useful than any LLM *over time*, because
you can be sure the output will be 100% the same no
matter what you are trying to implement.
Also, compilers don't "hallucinate". This is not a property of LLMs
that anyone has any idea how to correct.
The problem is that it's always "hallucinating." It has no idea what
it's actually saying, at least at this point. It's a tribute to math
and ingenuity that the result usually makes sense and is often even
accurate.
Yeah, that's why I wrote "hallucinate" in quotes. LLMs are nothing more
than a massive pile of linear algebra tuned to approximate something
statistically resembling a plausible response to an arbitrary input.
It's an impressive parlour trick, but a trick none the less.
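A toy version of that "statistically plausible continuation" idea, as a bigram model over a made-up corpus (the corpus and names are mine, for illustration only; real LLMs are vastly larger, but the objective is the same flavour):

```python
from collections import Counter, defaultdict

# Tiny bigram "language model": for each word, count which word follows
# it in the training text, then continue with the most frequent one.
corpus = "the cat sat on the mat and the cat ate".split()
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def continue_from(word: str) -> str:
    return nxt[word].most_common(1)[0][0]   # most likely next word

print(continue_from("the"))   # cat  ("cat" follows "the" twice, "mat" once)
```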
There are those who say language is fundamental
to self-aware consciousness. It's certainly an
interesting time to be alive.
Verily, in article <878qa9x2ce.fsf@posteo.de>, did jonathan@jlamothe.net deliver unto us this message:
Yeah, that's why I wrote "hallucinate" in quotes. LLMs are nothing more
than a massive pile of linear algebra tuned to approximate something
statistically resembling a plausible response to an arbitrary input.
It's an impressive parlour trick, but a trick none the less.
The difficult part is that humans are the result of a similar trick.
That's why we can't rule out AI sentience entirely.
The True Melissa <thetruemelissa@gmail.com> writes:
Verily, in article <878qa9x2ce.fsf@posteo.de>, did jonathan@jlamothe.net
deliver unto us this message:
Yeah, that's why I wrote "hallucinate" in quotes. LLMs are nothing more
than a massive pile of linear algebra tuned to approximate something
statistically resembling a plausible response to an arbitrary input.
It's an impressive parlour trick, but a trick none the less.
The difficult part is that humans are the result of a similar trick.
That's why we can't rule out AI sentience entirely.
Yeah, that is an unpleasant thought that I think about from time to time.