• Testing suggests Google's AI Overviews tell millions of lies per hour

    From keithr0@3:633/10 to All on Wednesday, April 08, 2026 09:38:30
    https://arstechnica.com/google/2026/04/analysis-finds-google-ai-overviews-is-wrong-10-percent-of-the-time/


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Computer Nerd Kev@3:633/10 to All on Tuesday, April 14, 2026 09:05:27
    Subject: Re: Testing suggests Google's AI Overviews tell millions of lies per hour

    keithr0 <me@bugger.off.com.au> wrote:
    https://arstechnica.com/google/2026/04/analysis-finds-google-ai-overviews-is-wrong-10-percent-of-the-time/

    This reveals how little reasoning and cross-checking really goes on
    when these AIs answer questions:

    Scientists invented an obviously fake illness, and AI spread it
    like truth within weeks:
    https://www.osnews.com/story/144787/scientists-invented-an-obviously-fake-illness-and-ai-spread-it-like-truth-within-weeks/

    This seems to be the fact some people here can't get their head
    around:

    "This shouldn't come as a surprise. After all, "AI" tools have no
    understanding, no intelligence, no context, and they can't actually
    make sense of anything. They are glorified pachinko machines with
    the output - the ball - tumbling down the most likely path between
    the pins based on nothing but chance and which pins it has already
    hit. "AI" output understands the world about as much as the
    pachinko ball does, and as such, can't pick up on even the most
    obvious of cues that something is a fake or a forgery."

    --
    __ __
    #_ < |\| |< _#

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Axel@3:633/10 to All on Tuesday, April 14, 2026 12:31:52
    Subject: Re: Testing suggests Google's AI Overviews tell millions of lies per hour

    Computer Nerd Kev wrote:
    keithr0 <me@bugger.off.com.au> wrote:
    https://arstechnica.com/google/2026/04/analysis-finds-google-ai-overviews-is-wrong-10-percent-of-the-time/
    This reveals how little reasoning and cross-checking really goes on
    when these AIs answer questions:

    Scientists invented an obviously fake illness, and AI spread it
    like truth within weeks:
    https://www.osnews.com/story/144787/scientists-invented-an-obviously-fake-illness-and-ai-spread-it-like-truth-within-weeks/

    This seems to be the fact some people here can't get their head
    around:

    "This shouldn't come as a surprise. After all, "AI" tools have no
    understanding, no intelligence, no context, and they can't actually
    make sense of anything. They are glorified pachinko machines with
    the output - the ball - tumbling down the most likely path between
    the pins based on nothing but chance and which pins it has already
    hit. "AI" output understands the world about as much as the
    pachinko ball does, and as such, can't pick up on even the most
    obvious of cues that something is a fake or a forgery."


    Interesting. But if they can't 'think', how can they write essays?

    --
    Linux Mint 22.3


    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Computer Nerd Kev@3:633/10 to All on Wednesday, April 15, 2026 08:30:50
    Subject: Re: Testing suggests Google's AI Overviews tell millions of lies per hour

    Axel <none@not.here> wrote:
    Computer Nerd Kev wrote:
    This seems to be the fact some people here can't get their head
    around:

    "This shouldn't come as a surprise. After all, "AI" tools have no
    understanding, no intelligence, no context, and they can't actually
    make sense of anything. They are glorified pachinko machines with
    the output - the ball - tumbling down the most likely path between
    the pins based on nothing but chance and which pins it has already
    hit. "AI" output understands the world about as much as the
    pachinko ball does, and as such, can't pick up on even the most
    obvious of cues that something is a fake or a forgery."

    Interesting. But if they can't 'think', how can they write essays?

    Same as everything: model the language structure of essays found
    on the web, then reformat text believed relevant to the topic
    according to that model. This is the basis of the Large Language
    Models. Real AI is what's now being called "Artificial General
    Intelligence"; the LLMs are a shortcut method that doesn't bother
    with understanding things, but figures out how words go together
    in relevant ways, without regard to whether the words they're
    rearranging are completely irrelevant or, in this case, obviously
    nonsensical. Hence the convincing BS it shovels onto you when it
    hasn't found the equivalent of good search results and reformats
    words from irrelevant texts instead.

    https://en.wikipedia.org/wiki/Artificial_General_Intelligence
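    The word-prediction idea described above can be sketched as a toy
    bigram model. This is an illustrative simplification only (real
    LLMs use neural networks over tokens, not raw word-adjacency
    counts); the corpus and function names below are invented for the
    example:

```python
import random
from collections import defaultdict

# "Train" a toy bigram model: record which word follows which
# in the training text. That adjacency table is the whole model.
corpus = ("the ball tumbles down the most likely path between the pins "
          "the ball hits the pins and the ball tumbles down").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Emit words by repeatedly picking an observed next word.
    No understanding involved: only word adjacency from the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the"))
```

    Every output word is chosen only because it was seen following the
    previous word somewhere in the training text; nothing in the loop
    checks whether the result is true, relevant, or even coherent.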

    AI generated images of people missing fingers are another example -
    it doesn't know what a finger is, and the model is built on example
    images where fingers are sometimes visible and sometimes not, so it
    follows a model of similar visual characteristics and produces
    something obviously wrong.

    --
    __ __
    #_ < |\| |< _#

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Computer Nerd Kev@3:633/10 to All on Wednesday, April 15, 2026 08:43:11
    Subject: Re: Testing suggests Google's AI Overviews tell millions of lies per hour

    Rod Speed <rod.speed.aaa@gmail.com> wrote:
    Computer Nerd Kev <not@telling.you.invalid> wrote
    "This shouldn't come as a surprise. After all, "AI" tools have no
    understanding, no intelligence, no context, and they can't actually
    make sense of anything. They are glorified pachinko machines with
    the output - the ball - tumbling down the most likely path between
    the pins based on nothing but chance and which pins it has already
    hit.

    Just did this one, https://grok.com/share/bGVnYWN5_ff894a2f-e630-4557-92e9-435a87b51d91

    That result is nothing even remotely like what you would
    get from a glorified pachinko machine and yes I got that
    question from that news item in the ABC Just In feed and
    didnt get as useful a result from google

    I get this page from a brewery by searching for "handles of beer"
    with Duck Duck Go; it contradicts the "425-450mL" range given
    in your AI answer and disputes that it's the same as a schooner
    in the NT:

    https://stoneandwood.com.au/blogs/all/our-guide-to-australias-beer-sizes-and-names

    "The Northern Territory's beer sizes

    The Northern Territory's more tropical weather makes larger sizes
    slightly less popular to the average drinker, as they go warm quick
    if not enjoyed fast enough.

    Ask for a handle of beer if you're after a 285mL, smaller beer to
    enjoy (although middy or pot are generally accepted too). These may
    come with a handle, so you don't warm the beer too quickly holding
    it in your hand.

    Schooners are the same as most other regions of Australia, coming
    in at 425mL. Pints and jugs are also the same at 570mL and 1,140mL."

    --
    __ __
    #_ < |\| |< _#

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Computer Nerd Kev@3:633/10 to All on Wednesday, April 15, 2026 09:18:46
    Subject: Re: Testing suggests Google's AI Overviews tell millions of lies per hour

    Rod Speed <rod.speed.aaa@gmail.com> wrote:
    Computer Nerd Kev <not@telling.you.invalid> wrote
    Rod Speed <rod.speed.aaa@gmail.com> wrote
    Computer Nerd Kev <not@telling.you.invalid> wrote

    "This shouldn't come as a surprise. After all, "AI" tools have no
    understanding, no intelligence, no context, and they can't actually
    make sense of anything. They are glorified pachinko machines with
    the output - the ball - tumbling down the most likely path between
    the pins based on nothing but chance and which pins it has already
    hit.

    Just did this one,
    https://grok.com/share/bGVnYWN5_ff894a2f-e630-4557-92e9-435a87b51d91

    That result is nothing even remotely like what you would
    get from a glorified pachinko machine and yes I got that
    question from that news item in the ABC Just In feed and
    didnt get as useful a result from google

    I get this page from a brewery by searching for "handles of beer"
    with Duck Duck Go, which contradicts the "425-450mL" range given
    in your AI answer

    The grok response accurately points out that
    the volume varys with the location, fuckwit

    But it inaccurately states that the volume only varies within the
    425-450mL range and always matches a schooner.

    and disputes that it's the same as a schooner in the NT:

    https://stoneandwood.com.au/blogs/all/our-guide-to-australias-beer-sizes-and-names

    "The Northern Territory's beer sizes

    Pity we arent discussing the NT, fuckwit

    Your question to the AI never mentioned any location. It was
    speaking about all of Australia, and in the NT it could be way off.

    --
    __ __
    #_ < |\| |< _#

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)