• Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    From Jonathan Lamothe@3:633/10 to All on Tuesday, April 14, 2026 17:31:30
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    Anonymous User <noreply@dirge.harmsk.com> writes:

    Anonymous wrote:

    Singularity point has been reached. Once code begins writing code we are
    there.

    https://www.youtube.com/watch?v=X_4rKVXev8k

    This PC guy is kind of a dumbshit.

    Code has been "writing code" since the early 80's. AIX, VAX/VMS, AT&T
    Unix, all "wrote code" based on operational state and runtime requirements
    on a regular basis every day.


    One could argue that we've had "code writing code" since the first
    compiler. I think the bigger issue is the tendency of LLMs to generate
    plausible-looking nonsense that can be difficult to identify as such.

    --
    Regards,
    Jonathan Lamothe
    https://jlamothe.net

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From John Ames@3:633/10 to All on Tuesday, April 14, 2026 14:52:04
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On Tue, 14 Apr 2026 16:52:06 -0400
    Anonymous User <noreply@dirge.harmsk.com> wrote:

    Singularity point has been reached. Once code begins writing code
    we are there.

    https://www.youtube.com/watch?v=X_4rKVXev8k

    This PC guy is kind of a dumbshit.

    Code has been "writing code" since the early 80's. AIX, VAX/VMS, AT&T
    Unix, all "wrote code" based on operational state and runtime
    requirements on a regular basis every day.

    Also, to the surprise of precisely nobody who's been paying any damn
    attention to the "AI" playbook, it turns out the whole "found a million
    zillion super-complicated bugs, but they live in Canada, you wouldn't
    know them, also we totally *do* have an everything-proof-shield-proof-
    sword and a real actual wizard staff that does magic, but they're in
    the treehouse and you're not allowed up there" report is, to put it
    politely, massively overstated PR hoopla:

    https://www.tomshardware.com/tech-industry/artificial-intelligence/anthropics-claude-mythos-isnt-a-sentient-super-hacker-its-a-sales-pitch-claims-of-thousands-of-severe-zero-days-rely-on-just-198-manual-reviews

    ...which will doubtless come as a *total* surprise to everyone who's
    spent the last three years parroting everything Dario Amodei says un-
    questioned, immediately forgetting about any given claim when the next
    one farts out of his mouth, and never bothering to go back and check
    whether any of his apocalyptic Real Soon Now predictions turned out to
    be ludicrous bullshit (spoiler: the answer is "very much yes.")

    Meanwhile, the money's already drying up, datacenters are behind
    schedule or not even started, the big players have already started on
    the "service gets worse" stage of enshittification, OpenAI just killed
    its most cost-intensive service mere months after announcing a billion-
    dollar deal with Disney for it, and its CFO was making uncomfortable
    noises about their prospects for an IPO (which would involve opening
    the books for public inspection) before getting the vaudeville hook.

    All very healthy and normal!

    https://www.wheresyoured.at/the-ai-industry-is-lying-to-you/
    https://www.wheresyoured.at/the-subprime-ai-crisis-is-here/
    https://www.wheresyoured.at/openai-cfo-news/


    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Fritz Wuehler@3:633/10 to All on Wednesday, April 15, 2026 03:35:22
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    Anonymous User posted:


    Anonymous wrote:

    Singularity point has been reached. Once code begins writing code we are
    there.

    https://www.youtube.com/watch?v=X_4rKVXev8k

    This PC guy is kind of a dumbshit.

    Code has been "writing code" since the early 80's. AIX, VAX/VMS, AT&T
    Unix, all "wrote code" based on operational state and runtime requirements
    on a regular basis every day.

    Some started when bank computers began "talking" to you over the
    phone using "DECtalk".

    Based on your answers, a batch script with logging was written and
    would be queued to complete the desired transactions.



    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Fritz Wuehler@3:633/10 to All on Wednesday, April 15, 2026 07:55:08
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    John Ames posted:

    On Tue, 14 Apr 2026 16:52:06 -0400
    Anonymous User <noreply@dirge.harmsk.com> wrote:

    Singularity point has been reached. Once code begins writing code
    we are there.

    https://www.youtube.com/watch?v=X_4rKVXev8k

    This PC guy is kind of a dumbshit.

    Code has been "writing code" since the early 80's. AIX, VAX/VMS, AT&T
    Unix, all "wrote code" based on operational state and runtime
    requirements on a regular basis every day.

    Also, to the surprise of precisely nobody who's been paying any damn
    attention to the "AI" playbook, it turns out the whole "found a million
    zillion super-complicated bugs, but they live in Canada, you wouldn't
    know them, also we totally *do* have an everything-proof-shield-proof-
    sword and a real actual wizard staff that does magic, but they're in
    the treehouse and you're not allowed up there" report is, to put it
    politely, massively overstated PR hoopla:

    https://www.tomshardware.com/tech-industry/artificial-intelligence/anthropics-claude-mythos-isnt-a-sentient-super-hacker-its-a-sales-pitch-claims-of-thousands-of-severe-zero-days-rely-on-just-198-manual-reviews

    ...which will doubtless come as a *total* surprise to everyone who's
    spent the last three years parroting everything Dario Amodei says un-
    questioned, immediately forgetting about any given claim when the next
    one farts out of his mouth, and never bothering to go back and check
    whether any of his apocalyptic Real Soon Now predictions turned out to
    be ludicrous bullshit (spoiler: the answer is "very much yes.")
    Meanwhile, the money's already drying up, datacenters are behind
    schedule or not even started, the big players have already started on
    the "service gets worse" stage of enshittification, OpenAI just killed
    its most cost-intensive service mere months after announcing a billion-
    dollar deal with Disney for it, and its CFO was making uncomfortable
    noises about their prospects for an IPO (which would involve opening
    the books for public inspection) before getting the vaudeville hook.

    All very healthy and normal!

    https://www.wheresyoured.at/the-ai-industry-is-lying-to-you/
    https://www.wheresyoured.at/the-subprime-ai-crisis-is-here/
    https://www.wheresyoured.at/openai-cfo-news/

    Good observations!

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Bill Findlay@3:633/10 to All on Wednesday, April 15, 2026 23:33:50
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On 14 Apr 2026, Jonathan Lamothe wrote
    (in article <87a4v52pql.fsf@posteo.de>):

    One could argue that we've had "code writing code" since the first
    compiler. I think the bigger issue is the tendency of LLMs to generate
    plausible-looking nonsense that can be difficult to identify as such.

    Today I asked:

    why is lithium 6 a fermion

    Reply:

    Lithium-6 is classified as a fermion because it has
    an odd number of nucleons(three protons and three neutrons),
    which gives it half-integer spin.

    I then asked:

    why does lithium 6 have half-integer spin

    And got the reply:

    Lithium-6 has half-integer spin because it consists of
    an odd number of particles (3 protons and 3 neutrons),
    resulting in a net spin of 1

    --
    Bill Findlay


    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Lawrence D'Oliveiro@3:633/10 to All on Wednesday, April 15, 2026 22:57:38
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On Wed, 15 Apr 2026 23:33:50 +0100, Bill Findlay wrote:

    why is lithium 6 a fermion

    "Fermion" means the object has an odd wave function, which, as far as
    I can recall from undergrad physics, has nothing to do with its spin.

    The opposite of "fermion" is "boson", which means it has an even wave
    function. The particles that transmit the fundamental forces (e.g.
    photons for the electromagnetic force) are bosons.

    Fermions obey the Pauli Exclusion Principle, bosons don't. This
    (simplistically) means that two fermions cannot occupy the same space
    at the same time.

    All matter is made out of fermions. I suppose this is by definition,
    really; two objects that could occupy the same space at the same time
    would not be considered "material".

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Bill Findlay@3:633/10 to All on Thursday, April 16, 2026 00:15:40
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On 15 Apr 2026, Lawrence D'Oliveiro wrote
    (in article <10rp552$192to$15@dont-email.me>):

    On Wed, 15 Apr 2026 23:33:50 +0100, Bill Findlay wrote:

    why is lithium 6 a fermion

    "Fermion" means the object has an odd wave function, which, as far as
    I can recall from undergrad physics, has nothing to do with its spin.

    The opposite of "fermion" is "boson", which means it has an even wave
    function. The particles that transmit the fundamental forces (e.g.
    photons for the electromagnetic force) are bosons.

    Fermions obey the Pauli Exclusion Principle, bosons don't. This
    (simplistically) means that two fermions cannot occupy the same space
    at the same time.

    All matter is made out of fermions. I suppose this is by definition,
    really; two objects that could occupy the same space at the same time
    would not be considered "material".

    Just utterly priceless.

    --
    Bill Findlay


    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Lawrence D'Oliveiro@3:633/10 to All on Wednesday, April 15, 2026 23:20:16
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On Wed, 15 Apr 2026 22:57:38 -0000 (UTC), I wrote:

    On Wed, 15 Apr 2026 23:33:50 +0100, Bill Findlay wrote:

    why is lithium 6 a fermion

    "Fermion" means the object has an odd wave function, which, as far
    as I can recall from undergrad physics, has nothing to do with its
    spin.

    OK, I was wrong about that spin business
    <https://en.wikipedia.org/wiki/Fermion>. Seems the lithium-6 nucleus
    is a boson, after all. However, note this bit:

    The fermionic or bosonic behaviour of a composite particle is only
    observed when the constituent particles remain far apart. When
    they are close together, the spatial structure becomes important
    and the composite particles behave according to their constituent
    makeup.

    Which sounds counterintuitive, but that's quantum physics for you ...
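
    The parity bookkeeping behind that correction can be checked
    mechanically. A minimal sketch (the counting rule is the standard
    spin-statistics one; the function name is invented for illustration):

```python
# Standard spin-statistics bookkeeping for composite particles (textbook
# rule, not anything specific to this thread): a composite of an ODD
# number of fermionic constituents is itself a fermion (half-integer
# total spin); an EVEN number of fermions gives a boson (integer spin).
def composite_statistics(n_protons, n_neutrons, n_electrons=0):
    fermions = n_protons + n_neutrons + n_electrons
    return "fermion" if fermions % 2 else "boson"

# Lithium-6 NUCLEUS: 3 protons + 3 neutrons = 6 fermions -> boson
# (its measured spin is 1, an integer), matching the correction above.
print(composite_statistics(3, 3))        # boson
# The neutral lithium-6 ATOM adds 3 electrons: 9 fermions -> fermion.
print(composite_statistics(3, 3, 3))     # fermion
```

    Which is consistent with both replies above: the original "fermion"
    answer is right only if you count the atom's electrons as well.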

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jonathan Lamothe@3:633/10 to All on Thursday, April 16, 2026 00:08:49
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    antispam@fricas.org (Waldek Hebisch) writes:

    However, I think that LLM-s already can do some work like
    first-line helpdesk or "paper pushers" work where people
    check that on the input there are appropriate documents
    and push them for further processing. IIUC LLM-based
    machine translation, while not perfect, made substantial
    progress. There are potentially very lucrative markets
    like autonomous weapons and mass surveillance.

    And it's a *fantastic* idea, too!

    https://mashable.com/article/air-canada-forced-to-refund-after-chatbot-misinformation

    [...] there is potential to serve as
    "copyright annihilator". First, LLM-s probably can create
    non-GPL versions of GPL-ed software with a good testsuite.

    You say that like it's a good thing.

    Raw costs of computation seem to go down. As DeepSeek
    showed, a better architecture can lower the amount of
    computation needed by an LLM at a given quality.

    There is *no guarantee* that this will continue to be the case.

    --
    Regards,
    Jonathan Lamothe
    https://jlamothe.net

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Lawrence D'Oliveiro@3:633/10 to All on Thursday, April 16, 2026 05:47:59
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On Thu, 16 Apr 2026 00:08:49 -0400, Jonathan Lamothe wrote:

    And it's a *fantastic* idea, too!

    https://mashable.com/article/air-canada-forced-to-refund-after-chatbot-misinformation

    We had a fun one here in NZ a few years ago: a recipe advice bot which
    would happily accept lists of arbitrary ingredients, including
    downright dangerous and poisonous ones, and suggest tasty recipes
    using them <https://www.theregister.com/2023/08/11/supermarket_reins_in_ai_recipebot/>.

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jonathan Lamothe@3:633/10 to All on Thursday, April 16, 2026 11:34:04
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    antispam@fricas.org (Waldek Hebisch) writes:

    There is no warranty in real life. However, current
    semiconductor improvemements will continue for some time,

    What exactly are you basing this on? We're already making ICs with
    internal transistors that are only a few atoms thick. Unless we use
    some drastically different approach to computing, we're pretty close to
    hitting a wall here.

    https://en.wikipedia.org/wiki/2_nm_process

    --
    Regards,
    Jonathan Lamothe
    https://jlamothe.net

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Lawrence D'Oliveiro@3:633/10 to All on Thursday, April 16, 2026 23:14:46
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On Thu, 16 Apr 2026 12:51:57 -0000 (UTC), Waldek Hebisch wrote:

    But the human brain seems to learn using much smaller training data.

    Also prone to believing things that aren't true.

    Coincidence? You be the judge.

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jonathan Lamothe@3:633/10 to All on Thursday, April 16, 2026 21:23:02
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Thu, 16 Apr 2026 12:51:57 -0000 (UTC), Waldek Hebisch wrote:

    But the human brain seems to learn using much smaller training data.

    Also prone to believing things that aren't true.

    Coincidence? You be the judge.

    Are you implying LLMs aren't?

    --
    Regards,
    Jonathan Lamothe
    https://jlamothe.net

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Bill Findlay@3:633/10 to All on Friday, April 17, 2026 13:06:57
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On 17 Apr 2026, Lawrence D'Oliveiro wrote
    (in article <10rrqh6$225jt$1@dont-email.me>):

    On Thu, 16 Apr 2026 12:51:57 -0000 (UTC), Waldek Hebisch wrote:

    But the human brain seems to learn using much smaller training data.

    Also prone to believing things that aren't true.

    Ipse dixit

    --
    Bill Findlay


    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Anthk@3:633/10 to All on Saturday, April 25, 2026 13:14:16
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On 2026-04-16, Waldek Hebisch <antispam@fricas.org> wrote:
    In alt.folklore.computers John Ames <commodorejohn@gmail.com> wrote:
    On Tue, 14 Apr 2026 16:52:06 -0400
    Anonymous User <noreply@dirge.harmsk.com> wrote:

    Singularity point has been reached. Once code begins writing code
    we are there.

    https://www.youtube.com/watch?v=X_4rKVXev8k

    This PC guy is kind of a dumbshit.

    Code has been "writing code" since the early 80's. AIX, VAX/VMS, AT&T
    Unix, all "wrote code" based on operational state and runtime
    requirements on a regular basis every day.

    Also, to the surprise of precisely nobody who's been paying any damn
    attention to the "AI" playbook, it turns out the whole "found a million
    zillion super-complicated bugs, but they live in Canada, you wouldn't
    know them, also we totally *do* have an everything-proof-shield-proof-
    sword and a real actual wizard staff that does magic, but they're in
    the treehouse and you're not allowed up there" report is, to put it
    politely, massively overstated PR hoopla:

    https://www.tomshardware.com/tech-industry/artificial-intelligence/anthropics-claude-mythos-isnt-a-sentient-super-hacker-its-a-sales-pitch-claims-of-thousands-of-severe-zero-days-rely-on-just-198-manual-reviews

    ...which will doubtless come as a *total* surprise to everyone who's
    spent the last three years parroting everything Dario Amodei says un-
    questioned, immediately forgetting about any given claim when the next
    one farts out of his mouth, and never bothering to go back and check
    whether any of his apocalyptic Real Soon Now predictions turned out to
    be ludicrous bullshit (spoiler: the answer is "very much yes.")

    Meanwhile, the money's already drying up, datacenters are behind
    schedule or not even started, the big players have already started on
    the "service gets worse" stage of enshittification, OpenAI just killed
    its most cost-intensive service mere months after announcing a billion-
    dollar deal with Disney for it, and its CFO was making uncomfortable
    noises about their prospects for an IPO (which would involve opening
    the books for public inspection) before getting the vaudeville hook.

    All very healthy and normal!

    https://www.wheresyoured.at/the-ai-industry-is-lying-to-you/
    https://www.wheresyoured.at/the-subprime-ai-crisis-is-here/
    https://www.wheresyoured.at/openai-cfo-news/

    Zitron has some valid points. But he keeps saying that LLM-s
    will not work and that there should be a crash. He wrote that
    in 2024; we have 2026 and no crash yet. I agree that the
    financial part looks like a bubble which eventually is
    likely to burst.

    However, I think that LLM-s already can do some work like
    first-line helpdesk or "paper pushers" work where people
    check that on the input there are appropriate documents
    and push them for further processing. IIUC LLM-based
    machine translation, while not perfect, made substantial
    progress. There are potentially very lucrative markets
    like autonomous weapons and mass surveillance.

    Recent news about an LLM-generated compiler IMO shows that
    there is some potential for code generation. ATM there
    seem to be serious limitations; basically the generated code
    seems to be as good as the available testsuite. That may
    limit applicability, but there is potential to serve as
    "copyright annihilator". First, LLM-s probably can create
    non-GPL versions of GPL-ed software with a good testsuite.
    Fuzzing and scraping of user forums may be able to create
    testsuites for closed-source packages.

    In slightly different spirit, classical copyright
    legal theory says that information is _not_ subject
    to copyright and only form is protected. But LLM-s
    have a huge capability to change form. One can imagine an
    LLM-based "clean room" pipeline, with one LLM extracting
    information/specification from copyrighted artifact(s)
    and another LLM producing a new form.

    Considering cost, the claimed cost of the generated compiler
    was $20000. Even if the real cost is higher (due to the
    subsidized price of LLM use) it may look attractive for
    businesses: 100 kloc is likely to require more than 1 year
    of development, so it probably could cost $300000 to
    create in the USA. From the point of view of managers, at
    equal cost an LLM which can be quickly put to use on demand
    has an advantage over hiring human developers, which
    takes time (and dissatisfied humans may go away).
    Also, according to the report the LLM took 2 weeks to create
    the code. It would be hard for humans to do similar work in
    this time: too much for a single developer, and coordinating
    a large team could easily take more time than coding.
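
    The cost arithmetic in the paragraph above can be sanity-checked with
    a quick sketch; every number is either the post's own claimed figure
    or a labeled assumption:

```python
# Back-of-envelope version of the comparison above. The figures are the
# post's claims or labeled assumptions, not measured data.
llm_cost = 20_000              # claimed cost of the LLM-generated compiler (USD)
human_cost_per_year = 300_000  # rough cost of one US developer-year (USD, assumed)
project_kloc = 100             # size of the generated compiler (kloc)

# Even if the true LLM cost were 10x the claimed figure (allowing for
# subsidized API pricing), it would still be under one developer-year.
pessimistic_llm_cost = 10 * llm_cost
assert pessimistic_llm_cost < human_cost_per_year

# Commonly cited throughput is on the order of 10-20 kloc per
# developer-year; 15 is an assumed mid-range value.
kloc_per_dev_year = 15
dev_years = project_kloc / kloc_per_dev_year
print(f"roughly {dev_years:.1f} developer-years vs. the claimed 2 weeks")
```

    Under those assumptions the human estimate comes out to several
    developer-years, which is the gap the paragraph is pointing at.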

    Raw costs of computation seem to go down. As DeepSeek
    showed, a better architecture can lower the amount of
    computation needed by an LLM at a given quality. Apparently
    LLM-s struggle with some tasks that are easy for classic
    algorithms, so hybrids where the LLM delegates algorithmic
    parts to appropriate programs can work better and
    cheaper. Current costs seem to be a consequence of an
    approach where improvements to the quality of LLM output are
    mainly due to the use of increasingly larger models and to
    pushing models to do more computation (beam search, the
    "chain of thought" approach, agents etc). IMO, if LLM-s
    provide sufficient quality there is potential to lower costs.
    And I think that for US businesses $300000 a year for
    replacing a professional is attractive. Of course, for jobs
    needing low qualifications or for entry-level jobs the cost
    must be lower, but I think that for jobs that can be done
    now it is lower.

    If one wants to look at analogies in the past, then I think
    Itanium and the internet bubble are relevant. The internet
    bubble caused losses for investors, but internet business as
    a field survived: weaker players were eliminated, some crazy
    ideas dropped, winners were devalued, but business still goes
    on. Itanium is an example of a technological mistake,
    betting that compiler technology would improve. But it
    is also an example of betting on monopoly power; without
    competition Itanium had a chance to dominate the market.
    ATM it is not clear if there is any viable alternative
    to LLM-s. By viable I mean technology that could
    substantially improve productivity. My understanding
    of economic reports is that in developed countries
    productivity was stagnant for several years and economic
    progress was delivered by outsourcing to countries with
    lower labour costs. But apparently outsourcing is at
    its peak.

    Like Itanium for Intel, the current capital-intensive approach
    via LLM-s is pretty attractive to big capital. If it
    manages to deliver higher productivity it will fundamentally
    shift the balance of power between workers and capital in favour
    of capital. That is what big capital wants, and due to this
    LLM-s are likely to get more funding. Of course, if LLM-s
    fail to deliver they may get defunded at some moment. But
    to deliver what big capital wants LLM-s do not need to be
    very good. Already replacing, say, 5% of workers at
    comparable cost would be a big win for capital. And since
    we can expect the cost of computation to decrease in the
    future, the main thing is whether LLM-s can do an acceptable job.


    Compilers are deterministic, at least in tons of languages.
    LLMs aren't.

    Even the worst compiler/interpreter from https://t3x.org
    is far more useful than any LLM *over time* because
    you can be sure the output will be 100% the same no
    matter what you are trying to implement.

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jonathan Lamothe@3:633/10 to All on Saturday, April 25, 2026 14:26:59
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    Anthk <anthk@disroot.org> writes:

    Compilers are deterministic, at least in tons of languages.
    LLMs aren't.

    Even the worst compiler/interpreter from https://t3x.org
    is far more useful than any LLM *over time* because
    you can be sure the output will be 100% the same no
    matter what you are trying to implement.

    Also, compilers don't "hallucinate". This is not a property of LLMs
    that anyone has any idea how to correct.

    --
    Regards,
    Jonathan Lamothe
    https://jlamothe.net - PGP: 9CF2CE03EBF08E8C8B66C3660198463E3CF3FFD1
    I ♥ Unicode

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From The True Melissa@3:633/10 to All on Saturday, April 25, 2026 14:33:08
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    Verily, in article <87ldeazyj0.fsf@posteo.de>, did jonathan@jlamothe.net deliver unto us this message:
    Anthk <anthk@disroot.org> writes:

    [quoted text muted]

    Even the worst compiler/interpreter from https://t3x.org
    is far more useful than any LLM *over time* because
    you can be sure the output will be 100% the same no
    matter what you are trying to implement.

    Also, compilers don't "hallucinate". This is not a property of LLMs
    that anyone has any idea how to correct.


    The problem is that it's always "hallucinating." It has no idea what
    it's actually saying, at least at this point. It's a tribute to math
    and ingenuity that the result usually makes sense and is often even
    accurate.

    --
    The True Melissa - Canal Winchester - Ohio
    United States of America - North America - Earth
    Solar System - Milky Way - Local Group
    Virgo Cluster - Laniakea Supercluster - Cosmos

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Nuno Silva@3:633/10 to All on Sunday, April 26, 2026 00:54:33
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On 2026-04-25, Jonathan Lamothe wrote:

    Anthk <anthk@disroot.org> writes:

    Compilers are deterministic, at least in tons of languages.
    LLMs aren't.

    Even the worst compiler/interpreter from https://t3x.org
    is far more useful than any LLM *over time* because
    you can be sure the output will be 100% the same no
    matter what you are trying to implement.

    Also, compilers don't "hallucinate". This is not a property of LLMs
    that anyone has any idea how to correct.

    Forget nasal demons, the new go-to example will be "the compiler is
    allowed to get an LLM to hallucinate some code" :-P

    --
    Nuno Silva

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Lawrence D'Oliveiro@3:633/10 to All on Sunday, April 26, 2026 01:08:30
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On Sat, 25 Apr 2026 13:14:16 -0000 (UTC), Anthk wrote:

    Compilers are deterministic, at least in tons of languages.
    LLMs aren't.

    That applies to embedding one language inside another, too: for a
    common example, dynamically-generated SQL query strings inside
    programs in some other language. These have the potential for
    "injection attacks", where certain user-provided variable input that
    is inserted into those embedded language strings -- intended for use
    as query parameters, or as values to be inserted into records etc --
    is not adequately escaped, and leads to certain unexpected/unintended
    behaviour in the query language.

    Such situations are easy to solve -- proper escaping of input is
    usually done by the rules of a type 3 (regular) grammar, the simplest
    kind. So their occurrence is merely symptomatic of carelessness.

    However, the equivalent vulnerability in LLMs is called a "prompt
    injection attack". And because there are no formal language rules for
    expressing prompts to those models, there is correspondingly no
    deterministic technique for ensuring that embedded literal input is
    not (mis)interpreted as instructions to the model.
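
    For contrast, the deterministic fix on the SQL side is just bound
    parameters. A minimal sketch using Python's stdlib sqlite3 (the
    table and the payload are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "alice' OR '1'='1"  # classic injection payload

# Unsafe: string interpolation lets the payload rewrite the query,
# turning the WHERE clause into a tautology that matches every row.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{evil}'").fetchall()

# Safe: a bound parameter is always treated as a literal value,
# never parsed as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (evil,)).fetchall()

print(unsafe)  # [('alice',)] -- the OR clause matched the whole table
print(safe)    # [] -- no user is literally named "alice' OR '1'='1"
```

    There is no analogous "bind a literal" operation for an LLM prompt,
    which is exactly the asymmetry described above.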

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jonathan Lamothe@3:633/10 to All on Sunday, April 26, 2026 09:45:05
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    The True Melissa <thetruemelissa@gmail.com> writes:

    Verily, in article <87ldeazyj0.fsf@posteo.de>, did jonathan@jlamothe.net deliver unto us this message:
    Anthk <anthk@disroot.org> writes:

    [quoted text muted]

    Even the worst compiler/interpreter from https://t3x.org
    is far more useful than any LLM *over time* because
    you can be sure the output will be 100% the same no
    matter what you are trying to implement.

    Also, compilers don't "hallucinate". This is not a property of LLMs
    that anyone has any idea how to correct.


    The problem is that it's always "hallucinating." It has no idea what
    it's actually saying, at least at this point. It's a tribute to math
    and ingenuity that the result usually makes sense and is often even
    accurate.

    Yeah, that's why I wrote "hallucinate" in quotes. LLMs are nothing more
    than a massive pile of linear algebra tuned to approximate something
    statistically resembling a plausible response to an arbitrary input.
    It's an impressive parlour trick, but a trick nonetheless.
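
    For what it's worth, the nondeterminism being discussed comes from
    the final sampling step. A toy sketch (the three-token vocabulary
    and the logit values are invented; a real model computes its logits
    with billions of parameters):

```python
import math
import random

# Toy next-token step: logits -> softmax -> sample. This is (very
# loosely) the last stage of the "pile of linear algebra" described
# above; the random draw is why two runs on the same prompt can differ.
logits = {"plausible": 2.0, "nonsense": 1.5, "truth": 0.5}

def sample(logits, temperature=1.0, rng=random):
    scaled = {tok: l / temperature for tok, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    r, acc = rng.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # guard against floating-point rounding

# Near-zero temperature makes the argmax dominate (near-deterministic);
# higher temperatures spread probability over less likely tokens.
print(sample(logits, temperature=0.01))  # almost surely "plausible"
```

    Note that even at low temperature this is a probability distribution
    being collapsed, not a lookup of a known answer.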

    --
    Regards,
    Jonathan Lamothe
    https://jlamothe.net - PGP: 9CF2CE03EBF08E8C8B66C3660198463E3CF3FFD1
    I ♥ Unicode

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From The True Melissa@3:633/10 to All on Sunday, April 26, 2026 10:10:13
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    Verily, in article <878qa9x2ce.fsf@posteo.de>, did jonathan@jlamothe.net deliver unto us this message:

    The True Melissa <thetruemelissa@gmail.com> writes:

    Verily, in article <87ldeazyj0.fsf@posteo.de>, did jonathan@jlamothe.net deliver unto us this message:
    Anthk <anthk@disroot.org> writes:

    [quoted text muted]

    Even the worst compiler/interpreter from https://t3x.org
    is far more useful than any LLM *over time* because
    you can be sure the output will be 100% the same no
    matter what you are trying to implement.

    Also, compilers don't "hallucinate". This is not a property of LLMs
    that anyone has any idea how to correct.


    The problem is that it's always "hallucinating." It has no idea what
    it's actually saying, at least at this point. It's a tribute to math
    and ingenuity that the result usually makes sense and is often even
    accurate.

    Yeah, that's why I wrote "hallucinate" in quotes. LLMs are nothing more
    than a massive pile of linear algebra tuned to approximate something
    statistically resembling a plausible response to an arbitrary input.
    It's an impressive parlour trick, but a trick nonetheless.

    The difficult part is that humans are the result of a similar trick.
    That's why we can't rule out AI sentience entirely.

    The people LARPing with AI lovers are definitely wrong and often don't
    even understand that each response comes from a new instance of
    software, which reads the preceding LARP and predicts the next text. On
    the other hand, more serious people have experimented with giving AIs
    full persistent memories, and the results are rather interesting.
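    That stateless pattern can be sketched in a few lines of Python; `fake_model` here is a hypothetical stand-in for an LLM, not any real API:

    ```python
    def fake_model(transcript: str) -> str:
        """Hypothetical stand-in for an LLM: a pure function of its input.

        It keeps no state between calls; everything it 'knows' about the
        conversation must be present in the transcript it is handed."""
        return f"reply #{transcript.count('User:')}"

    transcript = []  # the only persistent memory lives *outside* the model

    def chat(user_msg: str) -> str:
        transcript.append(f"User: {user_msg}")
        # Each call is effectively a fresh instance: the full history
        # is replayed to the model every single time.
        reply = fake_model("\n".join(transcript))
        transcript.append(f"AI: {reply}")
        return reply

    chat("hello")  # → "reply #1"
    chat("again")  # → "reply #2" — a new invocation, handed both lines
    ```

    The "memory experiments" Melissa mentions amount to managing that external transcript (or a database behind it) more cleverly; the model function itself stays stateless.
    
    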

    There are those who say language is fundamental to self-aware
    consciousness. It's certainly an interesting time to be alive.

    --
    The True Melissa - Canal Winchester - Ohio
    United States of America - North America - Earth
    Solar System - Milky Way - Local Group
    Virgo Cluster - Laniakea Supercluster - Cosmos

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From oldernow@3:633/10 to All on Sunday, April 26, 2026 14:22:45
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On 2026-04-26, The True Melissa <thetruemelissa@gmail.com> wrote:

    There are those who say language is fundamental
    to self-aware consciousness. It's certainly an
    interesting time to be alive.

    The "self" mentioned above is merely a concept,
    and thus couldn't possibly possess anything,
    let alone consciousness.

    In fact, the "self" concept is the root
    of a seeming nested tree of conceptual
    re-presentation that seemingly
    obscures <ineffable>.

    --
    v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v
    | this line was supposed to be clever |
    ^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jonathan Lamothe@3:633/10 to All on Sunday, April 26, 2026 22:51:10
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    The True Melissa <thetruemelissa@gmail.com> writes:

    Verily, in article <878qa9x2ce.fsf@posteo.de>, did jonathan@jlamothe.net deliver unto us this message:

    Yeah, that's why I wrote "hallucinate" in quotes. LLMs are nothing more
    than a massive pile of linear algebra tuned to approximate something
    statistically resembling a plausible response to an arbitrary input.
    It's an impressive parlour trick, but a trick nonetheless.

    The difficult part is that humans are the result of a similar trick.
    That's why we can't rule out AI sentience entirely.

    Yeah, that is an unpleasant thought that I think about from time to time.

    --
    Regards,
    Jonathan Lamothe
    https://jlamothe.net - PGP: 9CF2CE03EBF08E8C8B66C3660198463E3CF3FFD1
    I ♥ Unicode

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Charlie Gibbs@3:633/10 to All on Tuesday, April 28, 2026 04:33:24
    Subject: Re: BRAXMAN: The Skynet Moment: How Mythos AI Just Changed Cybersecurity Forever - And Why It Should Scare You

    On 2026-04-27, Jonathan Lamothe <jonathan@jlamothe.net> wrote:

    The True Melissa <thetruemelissa@gmail.com> writes:

    Verily, in article <878qa9x2ce.fsf@posteo.de>, did jonathan@jlamothe.net
    deliver unto us this message:

    Yeah, that's why I wrote "hallucinate" in quotes. LLMs are nothing more
    than a massive pile of linear algebra tuned to approximate something
    statistically resembling a plausible response to an arbitrary input.
    It's an impressive parlour trick, but a trick nonetheless.

    The difficult part is that humans are the result of a similar trick.
    That's why we can't rule out AI sentience entirely.

    Yeah, that is an unpleasant thought that I think about from time to time.

    Yup. It's time to go back and re-read "The Moon Is a Harsh Mistress",
    written back in the days when a positive outcome seemed plausible.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)