So ... went at them. Both are now nearly half the
size and the logic is much improved and CAN be
followed. Only ONE de-facto flag now for a bit
that had to be 'adaptive'. Co-running processes
are prevented, confirmation that what's supposed
to be running IS running now too. Better, more
detailed logging.
All in about half the space.
So ... "I made it work" may be good, but "I made
it work WELL" is the GOAL.
On 25-01-2026, c186282 <c186282@nnada.net> wrote:
So ... went at them. Both are now nearly half the
size and the logic is much improved and CAN be
followed. Only ONE de-facto flag now for a bit
that had to be 'adaptive'. Co-running processes
are prevented, confirmation that what's supposed
to be running IS running now too. Better, more
detailed logging.
All in about half the space.
So ... "I made it work" may be good, but "I made
it work WELL" is the GOAL.
It's always like that: first you make it work then you improve it.
Well, a lot of times, the process stops when it works. I don't remember
who said that code is not finished when there is nothing more to add,
but when there is nothing more to remove.
That's why it's stupid to consider the best programmer to be the one
who produces more lines of code than others.
And, from what I saw, AI actually produces a lot of code which must
be removed.
On 2026-01-25, Stéphane CARPENTIER <sc@fiat-linux.fr> wrote:
On 25-01-2026, c186282 <c186282@nnada.net> wrote:
So ... went at them. Both are now nearly half the
size and the logic is much improved and CAN be
followed. Only ONE de-facto flag now for a bit
that had to be 'adaptive'. Co-running processes
are prevented, confirmation that what's supposed
to be running IS running now too. Better, more
detailed logging.
All in about half the space.
So ... "I made it work" may be good, but "I made
it work WELL" is the GOAL.
It's always like that: first you make it work then you improve it.
FSVO "improve". In many cases this means "make money", even
(or especially) at the expense of quality in the traditional sense.
Well, a lot of times, the process stops when it works. I don't remember
who said that code is not finished when there is nothing more to add,
but when there is nothing more to remove.
Antoine de Saint-Exupéry
That's why it's stupid to consider the best programmer to be the one
who produces more lines of code than others.
Unless you're being paid by the line.
And, from what I saw, AI actually produces a lot of code which must
be removed.
Reminds me of my early days when I'd take over maintenance of someone
else's code - or code that "just grew". I'd typically reduce the line
count by 30% - or even 50% in some cases.
Stéphane CARPENTIER <sc@fiat-linux.fr> wrote or quoted:
That's why it's stupid to consider the best programmer to be the one
who produces more lines of code than others.
These topics fit right in on comp.lang.misc, not comp.os.linux.misc.
On 25 Jan 2026 11:07:20 GMT, Stéphane CARPENTIER wrote:
That's why it's stupid to consider the best programmer to be the one
who produces more lines of code than others.
I once took a program of about 8000 lines of code, written by someone
else, and cut its size in half.
Actually the basic idea behind the simplification was very
simple. The program was a plugin doing import and export of
object definitions between the host application's internal
format and an external database. The import function was one
gigantic sequence of handlers for all the database fields,
which generated corresponding attributes for application
objects, while the export function went the other way.
What I did was replace the bulk of both functions with a single
table of the correspondences between the two data
representations. That shrank the import and export functions
down to just a couple of dozen lines each -- they became just
interpreters of table entries. I also found some discrepancies
between the two original functions, which disappeared as a
result of using the common table.
This is called "data-driven" or "table-driven" programming. It's
quite a common technique for reducing code size. Less code to
write means less code to maintain going forward, and less
opportunity for bugs to sneak in. Win-win.
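[The approach described above can be sketched roughly as follows.
This is a hypothetical illustration, not the poster's actual plugin;
all field names and converter functions are made up.]

```python
# One table of correspondences between the database representation
# and the application representation. Each entry gives:
#   (db_field, app_attribute, convert_to_app, convert_to_db)
FIELD_MAP = [
    ("obj_name",   "name",   str, str),
    ("obj_width",  "width",  int, str),
    ("obj_height", "height", int, str),
]

def import_record(db_row: dict) -> dict:
    """Build an application object (a plain dict here) from a DB row."""
    return {attr: to_app(db_row[field])
            for field, attr, to_app, _ in FIELD_MAP}

def export_object(obj: dict) -> dict:
    """Build a database row from an application object."""
    return {field: to_db(obj[attr])
            for field, attr, _, to_db in FIELD_MAP}

row = {"obj_name": "door", "obj_width": "3", "obj_height": "7"}
obj = import_record(row)   # {'name': 'door', 'width': 3, 'height': 7}
assert export_object(obj) == row
```

[Both directions interpret the same table, so they can't drift apart
the way two hand-written function bodies can -- which is exactly how
the discrepancies mentioned above disappeared.]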
"Table-logic"/"Table-IQ" can work very well in many
cases. Can save a lot of more complex coding.
Sounds like your original author quickly thought of *A*
method and just ran with it - never going back to think
about any of it later (or maybe was just never given
the time).
... it might not be apparent that a table-driven approach is
feasible.
On 2026-01-26, c186282 <c186282@nnada.net> wrote:
"Table-logic"/"Table-IQ" can work very well in many
cases. Can save a lot of more complex coding.
True - but until you get a feel for the kind of data you're
working with, it might not be apparent that a table-driven
approach is feasible. Hindsight is 20/20...
Sounds like your original author quickly thought of *A*
method and just ran with it - never going back to think
about any of it later (or maybe was just never given
the time).
That often happens when the boss is less concerned with having
it done Right as in having it done Right Now. Yes, sometimes
there are legitimate reasons for this (e.g. an irate customer
who is about to walk), but sooner or later the quick-and-dirty
approach is going to come back to bite you.
"There's never time to do it right, but always time to do it over."
On Mon, 26 Jan 2026 04:33:48 GMT, Charlie Gibbs wrote:
... it might not be apparent that a table-driven approach is
feasible.
A certain repetitiveness in the code is a common giveaway.
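[A hypothetical example of that kind of repetitiveness: a chain of
near-identical branches that differ only in one constant, and the
table-driven rewrite it suggests. Names are made up.]

```python
def scale_legacy(unit: str, value: float) -> float:
    # Repetitive original: every branch has the same shape and
    # differs only in one constant -- the giveaway.
    if unit == "km":
        return value * 1000.0
    elif unit == "m":
        return value * 1.0
    elif unit == "cm":
        return value * 0.01
    elif unit == "mm":
        return value * 0.001
    raise ValueError(unit)

# Table-driven rewrite: the varying constant moves into data.
SCALE = {"km": 1000.0, "m": 1.0, "cm": 0.01, "mm": 0.001}

def scale(unit: str, value: float) -> float:
    return value * SCALE[unit]

assert scale("km", 2.0) == scale_legacy("km", 2.0) == 2000.0
```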
On 2026-01-25, Stéphane CARPENTIER <sc@fiat-linux.fr> wrote:
On 25-01-2026, c186282 <c186282@nnada.net> wrote:
So ... went at them. Both are now nearly half the
size and the logic is much improved and CAN be
followed. Only ONE de-facto flag now for a bit
that had to be 'adaptive'. Co-running processes
are prevented, confirmation that what's supposed
to be running IS running now too. Better, more
detailed logging.
All in about half the space.
So ... "I made it work" may be good, but "I made
it work WELL" is the GOAL.
It's always like that: first you make it work then you improve it.
FSVO "improve". In many cases this means "make money", even
(or especially) at the expense of quality in the traditional sense.
Well, a lot of times, the process stops when it works. I don't remember
who said that code is not finished when there is nothing more to add,
but when there is nothing more to remove.
Antoine de Saint-Exupéry
That's why it's stupid to consider the best programmer to be the one
who produces more lines of code than others.
Unless you're being paid by the line.
And, from what I saw, AI actually produces a lot of code which must
be removed.
Reminds me of my early days when I'd take over maintenance of someone
else's code - or code that "just grew". I'd typically reduce the line
count by 30% - or even 50% in some cases.
On 25 Jan 2026 11:07:20 GMT, Stéphane CARPENTIER wrote:
That's why it's stupid to consider the best programmer to be the one
who produces more lines of code than others.
I once took a program of about 8000 lines of code, written by someone
else, and cut its size in half.
Actually the basic idea behind the simplification was very
simple. The program was a plugin doing import and export of
object definitions between the host application's internal
format and an external database. The import function was one
gigantic sequence of handlers for all the database fields,
which generated corresponding attributes for application
objects, while the export function went the other way.
What I did was replace the bulk of both functions with a single
table of the correspondences between the two data
representations. That shrank the import and export functions
down to just a couple of dozen lines each -- they became just
interpreters of table entries. I also found some discrepancies
between the two original functions, which disappeared as a
result of using the common table.
This is called "data-driven" or "table-driven" programming. It's
quite a common technique for reducing code size. Less code to
write means less code to maintain going forward, and less
opportunity for bugs to sneak in. Win-win.
I suspect that stuff like programs may leave waking consciousness,
but somewhere deep down keep some little processes running. Then,
when you go back to it after awhile, new and better approaches and
tweaks seem to come easily.
On Sun, 25 Jan 2026 21:52:08 -0500
c186282 <c186282@nnada.net> wrote:
I suspect that stuff like programs may leave waking consciousness,
but somewhere deep down keep some little processes running. Then,
when you go back to it after awhile, new and better approaches and
tweaks seem to come easily.
Absolutely. Our lead developer at $EMPLOYER has often related stories
of working on a problem for hours or days, leaving off and going to
bed, and waking up the next morning with the answer waiting for him.
Charlie Gibbs wrote this post by blinking in Morse code:
Reminds me of my early days when I'd take over maintenance of someone
else's code - or code that "just grew". I'd typically reduce the line
count by 30% - or even 50% in some cases.
Heck, I keep finding bugs, created years ago, in my own software
projects.
On 2026-01-26, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Mon, 26 Jan 2026 04:33:48 GMT, Charlie Gibbs wrote:
... it might not be apparent that a table-driven approach is
feasible.
A certain repetitiveness in the code is a common giveaway.
True, but refactoring might be a better solution.
If you have a good algorithm you might not need no steenkin' tables
(which must be maintained when new data values crop up).
On 2026-01-26, Chris Ahlstrom <OFeem1987@teleworm.us> wrote:
Charlie Gibbs wrote this post by blinking in Morse code:
Reminds me of my early days when I'd take over maintenance of someone
else's code - or code that "just grew". I'd typically reduce the line
count by 30% - or even 50% in some cases.
Heck, I keep finding bugs, created years ago, in my own software
projects.
It's scary how many years a bug can lurk undetected.
Especially if it turns out to be a Schrodinbug.
Sometimes, by luck, you never hit the exact combo
of events that trigger the bug. Doesn't mean it
won't happen.
John Ames <commodorejohn@gmail.com> wrote:
On Sun, 25 Jan 2026 21:52:08 -0500
c186282 <c186282@nnada.net> wrote:
I suspect that stuff like programs may leave waking consciousness,
but somewhere deep down keep some little processes running. Then,
when you go back to it after awhile, new and better approaches and
tweaks seem to come easily.
Absolutely. Our lead developer at $EMPLOYER has often related stories
of working on a problem for hours or days, leaving off and going to
bed, and waking up the next morning with the answer waiting for him.
That often happens, yes. But I also have situations where I go
to bed with an open issue and it takes me too long to get to
sleep because I keep pondering the issue.
It also helps to explain the problem to someone. MIT used to
have a teddy bear next to the door of the user helpdesk; people
had to explain their problem to the bear before being allowed
in. The story goes that many people stopped right in the middle
of their explanation and went back to their console.
In recent months I've found that it also helps to explain the
issue to an LLM.
Marc Haber <mh+usenetspam1118@zugschl.us> wrote:
John Ames <commodorejohn@gmail.com> wrote:
On Sun, 25 Jan 2026 21:52:08 -0500
c186282 <c186282@nnada.net> wrote:
I suspect that stuff like programs may leave waking consciousness,
but somewhere deep down keep some little processes running. Then,
when you go back to it after awhile, new and better approaches and
tweaks seem to come easily.
Absolutely. Our lead developer at $EMPLOYER has often related stories
of working on a problem for hours or days, leaving off and going to
bed, and waking up the next morning with the answer waiting for him.
That often happens, yes. But I also have situations where I go
to bed with an open issue and it takes me too long to get to
sleep because I keep pondering the issue.
I usually find the obvious answer while lying in bed, and go to
sleep entirely satisfied. Then after working on it the next day I
discover that in my tiredness I'd just forgotten most of the
constraints that made the problem difficult in the first place.
Good for sleep, bad for problem solving (especially if I don't
remember in the morning either and start extensively redesigning
things).
It also helps to explain the problem to someone. MIT used to
have a teddy bear next to the door of the user helpdesk; people
had to explain their problem to the bear before being allowed
in. The story goes that many people stopped right in the middle
of their explanation and went back to their console.
Most times I go to start a technical topic on Usenet I realise the
answer by the time I've got to the end of writing it and just keep
it as a private note.
In recent months I've found that it also helps to explain the
issue to an LLM.
Well I guess that's the modern version of what I do.
On 2026-01-25, Stéphane CARPENTIER <sc@fiat-linux.fr> wrote:
Well, a lot of times, the process stops when it works. I don't remember
who said that code is not finished when there is nothing more to add,
but when there is nothing more to remove.
Antoine de Saint-Exupéry
That's why it's stupid to consider the best programmer to be the one
who produces more lines of code than others.
Unless you're being paid by the line.
And, from what I saw, AI actually produces a lot of code which must
be removed.
Reminds me of my early days when I'd take over maintenance of someone
else's code - or code that "just grew". I'd typically reduce the line
count by 30% - or even 50% in some cases.
First passes at code are often inefficient. The first
goal is just to Make It Work.
However no good programmer should LEAVE it at that. Refine,
polish, de-crap for the 2nd pass.
On 26-01-2026, c186282 <c186282@nnada.net> wrote:
First passes at code are often inefficient. The first
goal is just to Make It Work.
Agreed.
However no good programmer should LEAVE it at that. Refine, polish,
de-crap for the 2nd pass.
I don't agree here. It really depends on your goal. If your
program is a one-time script designed to save you hours of
manual work, then if the first pass took you half an hour to
get working, it's enough and a good programmer should leave it
like that. Now, if your script is designed only for you, to be
used from time to time, you should take some time to take care
of the edge cases which can appear in your environment. But you
shouldn't spend too much time on things that don't concern you.
And if your purpose is to provide your program to the entire
world, then, yes, at that point you should polish it and take
care of every edge case that can happen.
But a good programmer doesn't have to assume he's writing code
designed for the entire world each time he starts to write a
little program.
On 2026-01-31, Stéphane CARPENTIER <sc@fiat-linux.fr> wrote:
On 26-01-2026, c186282 <c186282@nnada.net> wrote:
First passes at code are often inefficient. The first
goal is just to Make It Work.
Agreed.
However no good programmer should LEAVE it at that. Refine, polish,
de-crap for the 2nd pass.
I don't agree here. It really depends on your goal. If your
program is a one-time script designed to save you hours of
manual work, then if the first pass took you half an hour to
get working, it's enough and a good programmer should leave it
like that. Now, if your script is designed only for you, to be
used from time to time, you should take some time to take care
of the edge cases which can appear in your environment. But you
shouldn't spend too much time on things that don't concern you.
And if your purpose is to provide your program to the entire
world, then, yes, at that point you should polish it and take
care of every edge case that can happen.
But a good programmer doesn't have to assume he's writing code
designed for the entire world each time he starts to write a
little program.
Although I agree with you in principle, you have to be careful.
Consider the case of a one-shot program you wrote to create a
listing that was only going to be used during a clean-up effort.
Then some manager sees it and says, "Hey, I _like_ this report!
Have a copy on my desk every Monday morning."
Hence one of my Words to Live By: A one-shot program is one
that you'll only need once... this week.