Hi,
tomas@tuxteam.de wrote:
As "AI harvesting" as we know it today is just yet another instance of capitalistic robbery, I'd be totally surprised if they hadn't yet "discovered" this valuable "resource" and weren't already at work strip-mining it.
I would answer to an AI if it would openly ask for advice about a topic
where i feel apt to issue an opinion. Such a change in harvesting would
be beneficial for the web, because we could put Anubis et al. back to
their graves if the mindless workload of AI harvesters on public
web sites eased.
Yes, AI development is mainly driven by greed. But it cannot strip me
of my knowledge or convince me of its contemporary nonsense.
(I see on the web clueless attempts of AI to explain creation of
bootable ISOs. Obviously patchwork made from various mutually exclusive
ways to do it with xorriso.)
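(For contrast, a minimal self-consistent sketch of one of those ways,
using only the mkisofs-compatible xorrisofs interface. The tree name,
volume label, and the isohdpfx.bin path are assumptions for a
Debian-style ISOLINUX layout, not taken from any particular distro:

  # Hypothetical ISOLINUX tree in ./iso_tree, BIOS-bootable output:
  xorrisofs -o bootable.iso -V "MY_LABEL" -J -r \
    -isohybrid-mbr /usr/lib/ISOLINUX/isohdpfx.bin \
    -b isolinux/isolinux.bin -c isolinux/boot.cat \
    -no-emul-boot -boot-load-size 4 -boot-info-table \
    iso_tree/

The point is that everything here stays within one interface; the
patchwork answers tend to mix such mkisofs-style options with xorriso's
native command set, which follows different rules.)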
It would be better if AI had more clue and thus could be less
misleading. Of course there are AI owners who obviously strive for
gaining control over their human users' mind. I would be glad to see
other AIs fighting these attempts ... so we get smoothly into the
pampered and isolated state of the Spacers in Asimov's novel
"The Naked Sun".
Have a nice day :)
Thomas
On 2026-01-07, Thomas Schmitt wrote:
It would be better if AI had more clue and thus could be less
misleading.
We can't give them more as they already can fetch so much data, such as
this list's archives. I think they need better intelligence to better use
their data.
And AI questions would use our time (personally I only read the first
2 paragraphs as they are so badly shaped) to generate more money for
the owners. If they need real humans to improve their AI they should
have to pay them (and already do that).
On 2026-01-07 08:41, Thomas Schmitt wrote: [...]
Hi,
tomas@tuxteam.de wrote:
As "AI harvesting" as we know it today is just yet another instance of capitalistic robbery, I'd be totally surprised if they hadn't yet "discovered" this valuable "resource" and weren't already at work strip-mining it.
I would answer to an AI if it would openly ask for advice about a topic where i feel apt to issue an opinion. Such a change in harvesting would
be beneficial for the web, because we could put Anubis et al. back to
their graves if the mindless workload of AI harvesters on public
web sites eased.
I'm responding to this off topic thread because I got some grief for
asking the AI to make a diff and for not knowing what the numbers
preceding each line were supposed to mean.
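(For the record, and purely as an illustration with made-up file names:
those numbers are the hunk headers of the unified diff format. A run
like

  # one line of context around each change, hypothetical config files
  diff -U1 old.conf new.conf

produces hunks of the form

  --- old.conf
  +++ new.conf
  @@ -12,3 +12,4 @@
   keep this line
  -drop this line
  +replace it with this line
  +and add this line
   keep this line too

where "-12,3" says the hunk covers 3 lines starting at line 12 of
old.conf and "+12,4" says 4 lines starting at line 12 of new.conf;
a leading "-" marks lines only in the old file, "+" lines only in the
new one, and a leading space marks unchanged context.)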
I had a friend who, seemingly, never forgot anything. Any date in
history or any chemical formula, he could rattle it off.
That is memory as intelligence.
Other aspects of human intelligence are connecting things in unusual ways.
I'm very interested if the AI boffins can work out what thinking is.
mick
Hi,
Michel Verdier wrote:
I think they need better intelligence to better use their data.
Actually i think that current AI lacks disciplined and well-meaning reasoning rather than IQ.
It is astounding how far ye olde Perceptron made it meanwhile by being augmented with oddities like non-linear voodoo layers or 8-bit floating
point number arithmetic. Remembering the intelligence tests of my
younger years i'd expect to be beaten by AI on many of their topics.
But as small as the space of combinations of xorrisofs arguments is
compared to the size of an AI parameter space, the AIs are still not
able to correctly answer a question like "How to modify a bootable
ISO of distro XYZ?". (I pick up the debris of their flawed answers
on the internet.)
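For what it's worth, a minimal sketch of one self-consistent answer for
the simple cases, with purely hypothetical file names; "-boot_image any
replay" asks xorriso to reproduce the boot provisions found in the
loaded ISO:

  # Hypothetical: replace one file in an existing bootable ISO
  xorriso -indev original.iso \
          -outdev modified.iso \
          -boot_image any replay \
          -map /tmp/grub.cfg /boot/grub/grub.cfg

Whether the replay covers a particular distro's MBR/GPT hybrid layout
still has to be verified per image.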
Joe wrote:
The problem is that 'AI' is a hoax, there is no 'I'. What we have now is
ELIZA with about a trillion times as many computer resources, but not a
single bit more actual intelligence, since we don't know how to make
that.
We only know one way to make human intelligence and it is quite as
obscure as AI training if we consider the ~ 3.5 billion years of
evolution which enabled our mass production of humans. And many of
them will never qualify for what we as computer-oriented people are
undisputedly willing to call "intelligence".
It's a large-scale expert system that hasn't been trained by experts.
Again, similar to humans:
Those who can, do.
Those who cannot, teach.
Those who cannot teach, teach teachers.
I recognize in AI lots of deficiencies which i first saw in journalism
and academic communities. Form beats meaning. Word beats structure.
So what is missing in my view is commitment to "Nullius in Verba":
an AI which does not believe everything that its masters give it to
read.
Of course we have to be aware of D.F.Jones' novel "Colossus" which
stems from the same time as the Perceptron.
Andy Smith wrote:
If an LLM were asking for help, and I knew it was an LLM, then I would
also know that any time I spent on responding would be me donating my
free labour to a huge corporation for it to make money off of
It is my considered decision to allow exploitation of my personal
sport. I'm not really in the situation of https://xkcd.com/2347, but
i've seen my hobby horse in use by organizations which surely would not
have hired me 20 years ago. I don't begrudge them their profit. Making
money is hard work in itself and it leaves its own scars on the soul.
in what is likely to be a highly damaging bubble. [..]
It is perhaps a little illogical since it doesn't seem to bother me what
the human would use the knowledge for, while it does bother me what the
LLM uses it for.
This is an interesting aspect of free software work in general.
What if somebody does something really evil with the help of my stuff?
Am i responsible?
Should i scrutinize people for possible bad intentions before giving
them advice?
(The GPL of inherited code forbids me from imposing a demand to not be
evil, even if i could give a convincing definition of what i mean.
But that is a rather lame excuse, ethically. I could get rid of that
code and then start a license crusade.)
Soon we'll have our own agents offer
to take away the tedium of doing that by doing it for us.
I'm on the side of the non-evil ones. :))
tomas@tuxteam.de wrote:
I think the point is that the currently dominant "AI" shops
aren't about facts. There's not much money in that.
But there will be when the old experts retire and the rich people need
a doctor who knows the job.
There is "money" (actually potential, speculative money)
A very important point. For now AI is predominantly expensive and
gluttonous.
I wrote:
[...] so we get smoothly into the
pampered and isolated state of the Spacers in Asimov's novel
"The Naked Sun".
tomas@tuxteam.de wrote:
No, no. As much as I admire Asimov, it's more Harry G. Frankfurt [1]
here. Sounding truthful is the aim for them, truth is just
uninteresting.
But it would be tremendously useful and monetarily valuable to have a
simulation of a well-meaning rational expert. For now AI simulates
highly educated impostors.
I deem it surprising that the art of imposture was so easy to acquire.
Now, if only the swindler would discover its love for honest science ...
Have a nice day :)
Thomas
See reply below.
On 1/7/26 8:47 AM, Thomas Schmitt wrote:
Hi,
Michel Verdier wrote:
I think they need better intelligence to better use their data.
Actually i think that current AI lacks disciplined and well-meaning
reasoning rather than IQ.
It is astounding how far ye olde Perceptron made it meanwhile by
being augmented with oddities like non-linear voodoo layers or
8-bit floating point number arithmetic. Remembering the
intelligence tests of my younger years i'd expect to be beaten by
AI on many of their topics.
But as small as the space of combinations of xorrisofs arguments is
compared to the size of an AI parameter space, the AIs are still not
able to correctly answer a question like "How to modify a bootable
ISO of distro XYZ?". (I pick up the debris of their flawed answers
on the internet.)
Joe wrote:
The problem is that 'AI' is a hoax, there is no 'I'. What we have
now is ELIZA with about a trillion times as many computer
resources, but not a single bit more actual intelligence, since we
don't know how to make that.
We only know one way to make human intelligence and it is quite as
obscure as AI training if we consider the ~ 3.5 billion years of
evolution which enabled our mass production of humans. And many of
them will never qualify for what we as computer-oriented people are
undisputedly willing to call "intelligence".
Whose mass production of humans?
Our God ALONE!
He says, "I AM THAT I AM", He has always been and always will be
forever!
As mortal humans living a few decades at best, how can we suggest
that "we" are producing people? Have scientists created life? They
never will create because only God can create.
I took the liberty to state my convictions here just as others were
stating their beliefs.
Have a good day! Remember each person has been influenced and will
continue to influence, but only for time, not eternity. Let's prepare
to give account of our actions.
It's a large-scale expert system that hasn't been trained by
experts.