• Re: AI slop or not?

    From Michel Verdier@3:633/10 to All on Wednesday, January 07, 2026 12:20:01
    On 2026-01-07, Thomas Schmitt wrote:

    It would be better if AI had more clue and thus could be less
    misleading.

    We can't give them more as they already can fetch so much data, such as
    this list's archives. I think they need better intelligence to better
    use their data.

    And AI questions would use our time (personally I only read the
    first 2 paragraphs as they are so badly shaped) to generate more
    money for the owners. If they need real humans to improve their AI
    they should have to pay them (and they already do).

    --- PyGate Linux v1.5.2
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Joe@3:633/10 to All on Wednesday, January 07, 2026 12:50:01
    On Wed, 07 Jan 2026 12:18:11 +0100
    Michel Verdier <listes@verdier.eu> wrote:

    On 2026-01-07, Thomas Schmitt wrote:

    It would be better if AI had more clue and thus could be less
    misleading.

    We can't give them more as they already can fetch so much data, such
    as this list's archives. I think they need better intelligence to
    better use their data.

    And AI questions would use our time (personally I only read the
    first 2 paragraphs as they are so badly shaped) to generate more
    money for the owners. If they need real humans to improve their AI
    they should have to pay them (and they already do).


    The problem is that 'AI' is a hoax, there is no 'I'. What we have now is
    ELIZA with about a trillion times as many computer resources, but not a
    single bit more actual intelligence, since we don't know how to make
    that. It's a large-scale expert system that hasn't been trained by
    experts.

    --
    Joe

    --- PyGate Linux v1.5.2
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Andy Smith@3:633/10 to All on Wednesday, January 07, 2026 13:20:01
    Hi,

    On Wed, Jan 07, 2026 at 09:41:12AM +0100, Thomas Schmitt wrote:
    I would answer to an AI if it would openly ask for advice about a topic
    where i feel apt to issue an opinion.

    This is highly hypothetical since I am not aware of LLMs being in a
    state where they identify gaps in their own "knowledge" and reach
    out to experts in order to fill those gaps, but...

    It feels like a very different proposition to spend time helping an LLM
    as opposed to helping a human on a genuine quest for knowledge. I am not
    sure I can put into words exactly why.

    If an LLM were asking for help, and I knew it was an LLM, then I would
    also know that any time I spent on responding would be me donating my
    free labour to a huge corporation for it to make money off of in what is
    likely to be a highly damaging bubble.

    On the other hand if it's a human then at least I would feel like I was
    giving my time to help another real person who isn't asking just to
    enrich the shareholders.

    It is perhaps a little illogical since it doesn't seem to bother me
    what the human would use the knowledge for, while it does bother me
    what the LLM uses it for.

    So all that said, I don't think I could see myself "answering to an AI
    if it would openly ask".

    It could be interesting if the AI companies were willing to
    co-operate with content providers on ways to ingest data that is
    consensual and transparent. Overall it might be more efficient for
    everyone. But as we've seen, they are not willing to accept any
    restraint on their actions; they won't obey robots.txt; they won't
    accept "no" for an answer; they won't announce who and what they
    are. Under capitalism they don't have to, so they don't.

    Yes, AI development is mainly driven by greed. But it cannot strip me
    of my knowledge or convince me of its contemporary nonsense.

    I think much of the harm of LLMs at the moment is to do with robbing
    people of their time, not of their knowledge.

    It's trivial for a user of an LLM to crank out vast quantities of
    material for virtually no cost to them (putting aside the externalities
    of environmental cost). It might be costing the LLM service a lot of
    money but they are surfing a wave of imaginary money, so they don't feel
    it.

    So, there's AI slop on every social media platform, which takes
    significant time to identify and debunk. There's AI slop in every
    support forum like this, which takes time to check for errors and
    correct. If you have an open source project you get AI slop pull
    requests and security bug reports which cannot be ignored; you have
    to spend your time checking them out.

    It's an asymmetric attack on people's time; it does not scale.

    Of course there are AI owners who obviously strive for gaining control
    over their human users' mind. I would be glad to see other AIs
    fighting these attempts

    It seems inevitable that much of our life will soon be our agents
    interacting with other people's agents.

    You already can, for example, get your agent to plan -- and book! -- a
    weeks-long vacation in another country, having it arrange travel,
    accommodation and an itinerary of activities for every day of your trip.
    To do this it talks to APIs designed for agents (AI ones, not travel
    ones).

    If the bubble does not burst soon, children born this decade probably
    won't know any other way of doing things. This will be a choice only in
    the same way that having a smartphone is a choice: you can still find
    people who refuse to use smartphones, but life gets increasingly
    difficult for them.

    Already many companies make it impossible to speak to a human without
    first asking their LLM chat agent. Soon we'll have our own agents offer
    to take away the tedium of doing that by doing it for us. Some social
    media spaces are already just disinfo bots arguing with disinfo bots.

    Thanks,
    Andy

    --
    https://bitfolk.com/ -- No-nonsense VPS hosting

    --- PyGate Linux v1.5.2
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From tomas@3:633/10 to All on Wednesday, January 07, 2026 13:50:01
    On Wed, Jan 07, 2026 at 09:41:12AM +0100, Thomas Schmitt wrote:
    Hi,

    tomas@tuxteam.de wrote:
    As "AI harvesting" as we know it today is just yet another instance
    of capitalistic robbery, I'd be totally surprised if they hadn't yet
    "discovered" this valuable "resource" and weren't already at work
    strip-mining it.

    I would answer to an AI if it would openly ask for advice about a topic
    where i feel apt to issue an opinion. Such a change in harvesting would
    be beneficial for the web, because we could put Anubis et al. back to
    their graves if the mindless workload of AI harvesters on the public
    web sites would ease.

    I think the point is that the currently dominant "AI" shops
    aren't about facts. There's not much money in that. There
    is "money" (actually potential, speculative money) in dominating
    the place and pushing the others out of the market. Thus in looking
    plausible and kind of "intelligent".

    Yes, AI development is mainly driven by greed. But it cannot strip me
    of my knowledge or convince me of its contemporary nonsense.
    (I see in the web clueless attempts of AI to explain creation of
    bootable ISOs. Obviously patchwork made from various mutually exclusive
    ways to do it by xorriso.)

    I don't think "AI" developers have any interest whatsoever in
    (their critters) explaining correctly the creation of bootable
    ISOs. The managers holding the purse strings don't care very
    much one way or the other, usually. And can't actually discern
    right from wrong either (again, usually) anyway.

    It would be better if AI had more clue and thus could be less
    misleading. Of course there are AI owners who obviously strive for
    gaining control over their human users' mind. I would be glad to see
    other AIs fighting these attempts ... so we get smoothly into the
    pampered and isolated state of the Spacers in Asimov's novel
    "The Naked Sun".

    No, no. As much as I admire Asimov, it's more Harry G. Frankfurt [1]
    here. Sounding truthful is the aim for them; truth is just
    uninteresting.

    My hunch is that *if* LLM training is being done on mailing lists,
    the biggest surplus is in learning to imitate more complex human
    interactions.

    Have a nice day :)

    Same to you, and all the best for the new year :)

    [1] https://en.wikipedia.org/wiki/On_Bullshit
    --
    tomás




    --- PyGate Linux v1.5.2
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jeffrey Walton@3:633/10 to All on Wednesday, January 07, 2026 17:20:01
    On Wed, Jan 7, 2026 at 9:56 AM Michel Verdier <listes@verdier.eu> wrote:

    On 2026-01-07, Thomas Schmitt wrote:

    It would be better if AI had more clue and thus could be less
    misleading.

    We can't give them more as they already can fetch so much data, such as
    this list archives. I think they need better intelligence to better use
    their data.

    And AI questions would use our time (personally I only read the
    first 2 paragraphs as they are so badly shaped) to generate more
    money for the owners. If they need real humans to improve their AI
    they should have to pay them (and they already do).

    Hear, hear!

    And we already know how Big Tech pays folks for microtasks: Amazon
    Mechanical Turk, <https://www.mturk.com/>.

    Jeff

    --- PyGate Linux v1.5.2
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From tomas@3:633/10 to All on Friday, January 09, 2026 10:20:01
    On Thu, Jan 08, 2026 at 07:50:39PM +0000, mick.crane wrote:
    On 2026-01-07 08:41, Thomas Schmitt wrote:
    Hi,

    tomas@tuxteam.de wrote:
    As "AI harvesting" as we know it today is just yet another instance
    of capitalistic robbery, I'd be totally surprised if they hadn't yet
    "discovered" this valuable "resource" and weren't already at work
    strip-mining it.

    I would answer to an AI if it would openly ask for advice about a
    topic where i feel apt to issue an opinion. Such a change in
    harvesting would be beneficial for the web, because we could put
    Anubis et al. back to their graves if the mindless workload of AI
    harvesters on the public web sites would ease.
    [...]
    I'm responding to this off-topic thread because I got some grief for
    asking the AI to make a diff and me not knowing what the numbers
    preceding each line were supposed to mean.
    I had a friend who seemingly never forgot anything. Any date in
    history or some chemical formula, he could rattle it off.
    That is memory as intelligence.
    Other aspects of human intelligence are connecting things in
    unusual ways.
    I'm very interested in whether the AI boffins can work out what
    thinking is.
    mick
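    The numbers mick mentions are most likely unified-diff hunk headers:
    in `@@ -1,3 +1,4 @@` the pair `-1,3` means the hunk covers three
    lines starting at line 1 of the old file, and `+1,4` means four
    lines starting at line 1 of the new file. A small illustration with
    Python's difflib (file names and contents are made up):

```python
import difflib

# Two hypothetical versions of a file, as lists of lines.
old = ["apple\n", "banana\n", "cherry\n"]
new = ["apple\n", "blueberry\n", "cherry\n", "date\n"]

# unified_diff emits the familiar ---/+++ header, then hunks whose
# "@@ -start,count +start,count @@" lines carry those numbers.
for line in difflib.unified_diff(old, new, fromfile="old.txt", tofile="new.txt"):
    print(line, end="")
```

    This prints a hunk header of `@@ -1,3 +1,4 @@`, followed by context
    lines prefixed with a space, removals with `-` and additions with `+`.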

    The problem is that, in the current strain of "AI", the financial
    bets are so sky-high that the boffins don't have a say. It's just
    the biggest gamble in the futures casino humankind has seen yet,
    by a far shot.
    That's the dangerous part. Just add up the numbers: the dot-com
    bust of the 2000s, the 2008 financial crisis and the crypto ripoff
    put together.
    Cheers
    --
    t


    --- PyGate Linux v1.5.2
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From clm@3:633/10 to All on Tuesday, January 13, 2026 22:30:02
    See reply below.

    On 1/7/26 8:47 AM, Thomas Schmitt wrote:
    Hi,

    Michel Verdier wrote:
    I think they need better intelligence to better use their data.

    Actually i think that current AI lacks disciplined and good-willed
    reasoning rather than IQ.

    It is astounding how far ye olde Perceptron made it meanwhile by
    being augmented with oddities like non-linear voodoo layers or 8-bit
    floating point number arithmetic. Remembering the intelligence tests
    of my younger years i'd expect to be beaten by AI on many of their
    topics.
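    The augmentation Thomas alludes to can be shown in miniature: a
    single linear-threshold Perceptron cannot compute XOR, but two
    layers of them can. The weights below are hand-picked for
    illustration, not trained:

```python
def step(x):
    """Classic Perceptron activation: a hard threshold at zero."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    """Two-layer network of threshold units computing XOR."""
    h_or  = step(x1 + x2 - 0.5)      # hidden unit: OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit: AND
    return step(h_or - h_and - 0.5)  # output: OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
# prints 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

    No single choice of weights for one unit can reproduce that output
    table, which is exactly the limitation the extra (non-linear) layer
    removes.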

    But as small as the space of combinations of xorrisofs arguments is
    compared to the size of an AI parameter space, the AIs are still not
    able to correctly answer a question like "How to modify a bootable
    ISO of distro XYZ ?". (I pick up the debris of their flawed answers
    on the internet.)


    Joe wrote:
    The problem is that 'AI' is a hoax, there is no 'I'. What we have now is
    ELIZA with about a trillion times as many computer resources, but not a
    single bit more actual intelligence, since we don't know how to make
    that.

    We only know one way to make human intelligence and it is quite as
    obscure as AI training if we consider the ~ 3.5 billion years
    of evolution which enabled our mass production of humans. And many
    of them will never qualify for what we as computer oriented people
    are undisputedly willing to call "intelligence".

    Whose mass production of humans?

    Our God ALONE!

    He says, "I AM THAT I AM", He has always been and always will be forever!

    As mortal humans living a few decades at best, how can we suggest
    that "we" are producing people? Have scientists created life? They
    never will create because only God can create.

    I took the liberty to state my convictions here just as others were
    stating their beliefs.

    Have a good day! Remember each person has been influenced and will
    continue to influence, but only for time, not eternity. Let's prepare to
    give account of our actions.


    It's a large-scale expert system that hasn't been trained by experts.

    Again similar as with humans:
    Those who can, do.
    Those who cannot, teach.
    Those who cannot teach, teach teachers.

    I recognize in AI lots of deficiencies which i first saw in journalism
    and academic communities. Form beats meaning. Word beats structure.
    So what is missing in my view is commitment to "Nullius in Verba":
    AI which does not believe everything that its masters give it to
    read.
    Of course we have to be aware of D.F.Jones' novel "Colossus" which
    stems from the same time as the Perceptron.


    Andy Smith wrote:
    If an LLM were asking for help, and I knew it was an LLM, then I would
    also know that any time I spent on responding would be me donating my
    free labour to a huge corporation for it to make money off of

    It is my considered decision to allow exploitation of my personal
    sport. I'm not really in the situation of https://xkcd.com/2347 but
    i've seen my hobby horse in use by organizations which surely would not
    have hired me 20 years ago. I don't begrudge them their profit. Making
    money is hard work in itself and it leaves its own scars on the soul.


    in what is likely to be a highly damaging bubble. [..]
    It is perhaps a little illogical since it doesn't seem to bother me
    what the human would use the knowledge for, while it does bother me
    what the LLM uses it for.

    This is an interesting aspect of free software work in general.
    What if somebody does something really evil with the help of my stuff?
    Am i responsible?
    Should i scrutinize people for possible bad intentions before giving
    them advice?
    (The GPL of inherited code forbids me to impose the demand for being
    not evil, even if i could give a convincing definition of what i mean.
    But that is a rather lame excuse, ethically. I could get rid of that
    code and then start a license crusade.)


    Soon we'll have our own agents offer
    to take away the tedium of doing that by doing it for us.

    I'm on the side of the non-evil ones. :))


    tomas@tuxteam.de wrote:
    I think the point is that the currently dominant "AI" shops
    aren't about facts. There's not much money in that.

    But there will be when the old experts retire and the rich people need
    a doctor who knows the job.


    There is "money" (actually potential, speculative money)

    A very important point. For now AI is predominantly expensive and
    gluttonous.


    I wrote:
    [...] so we get smoothly into the
    pampered and isolated state of the Spacers in Asimov's novel
    "The Naked Sun".

    tomas@tuxteam.de wrote:
    No, no. As much as I admire Asimov, it's more Harry G. Frankfurt [1]
    here. Sounding truthful is the aim for them, truth is just
    uninteresting.

    But it would be tremendously useful and monetarily valuable to have
    a simulation of a good-willed rational expert. For now AI simulates
    highly educated impostors.
    I deem it surprising that the art of imposture was so easy to
    acquire. Now, if the swindler would discover its love for honest
    science ...


    Have a nice day :)

    Thomas

    --- PyGate Linux v1.5.2
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From debian-user@3:633/10 to All on Wednesday, January 14, 2026 18:20:01
    clm@llg.one wrote:
    See reply below.

    On 1/7/26 8:47 AM, Thomas Schmitt wrote:
    Hi,

    Michel Verdier wrote:
    I think they need better intelligence to better use their data.

    Actually i think that current AI lacks disciplined and good-willed
    reasoning rather than IQ.

    It is astounding how far ye olde Perceptron made it meanwhile by
    being augmented with oddities like non-linear voodoo layers or
    8-bit floating point number arithmetic. Remembering the
    intelligence tests of my younger years i'd expect to be beaten by
    AI on many of their topics.

    But as small as the space of combinations of xorrisofs arguments is
    compared to the size of an AI parameter space, the AIs are still not
    able to correctly answer a question like "How to modify a bootable
    ISO of distro XYZ ?". (I pick up the debris of their flawed answers
    on the internet.)


    Joe wrote:
    The problem is that 'AI' is a hoax, there is no 'I'. What we have
    now is ELIZA with about a trillion times as many computer
    resources, but not a single bit more actual intelligence, since we
    don't know how to make that.

    We only know one way to make human intelligence and it is quite as
    obscure as AI training if we consider the ~ 3.5 billion
    years of evolution which enabled our mass production of humans. And
    many of them will never qualify for what we as computer oriented
    people are undisputedly willing to call "intelligence".

    Whose mass production of humans?

    Our God ALONE!

    He says, "I AM THAT I AM", He has always been and always will be
    forever!

    As mortal humans living a few decades at best, how can we suggest
    that "we" are producing people? Have scientists created life? They
    never will create because only God can create.

    I took the liberty to state my convictions here just as others were
    stating their beliefs.

    Have a good day! Remember each person has been influenced and will
    continue to influence, but only for time, not eternity. Let's prepare
    to give account of our actions.

    Was that AI slop or not? Was it something worse? I'll get my coat...

    It's a large-scale expert system that hasn't been trained by
    experts.

    --- PyGate Linux v1.5.2
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)