On 3/27/2026 10:35 AM, Joerg wrote:
Serious hint to anyone thinking about a career in software: Make sure
to develop a solid understanding of at least digital hardware. Build,
experiment with micro controller eval boards (they are cheap), learn
how to use a logic analyzer and an oscilloscope. That will hugely
increase your job security or your self-employed income prospects.
The reverse is also true for folks looking for careers in hardware design.
If you don't know how the software will interface with "your" hardware,
the answer is likely: "poorly".
And, to all, "coding" isn't software design in much the same way
assembling a prototype isn't hardware design.
Don Y <blockedofcourse@foo.invalid> wrote:
|------------------------------------------------------------------------------|
|"The fallacy of RAID is that the larger the array, the more likely |
|a *second* fault manifests while attempting to recover from the |
|first." |
|------------------------------------------------------------------------------|
Hmm. How so? I had not heard this before. (I have never yet been
responsible for a RAID, but Don Y incites me to be skeptical about RAID
boasts should I ever become responsible for one.)
|------------------------------------------------------------------------------|
|"It's like having gold speaker wires -- something to brag |
|about that has no real value in most cases." |
|------------------------------------------------------------------------------|
But redundancies are really valuable in most cases. I make backups. I
might not use a RAID, but I am not going to give up on backups!
|------------------------------------------------------------------------------|
|"This is the same mentality behind folks adopting RAID or ZFS needlessly. |
|They don't think their decision through and, instead, convince themselves |
|that they have taken concrete measures to improve the reliability |
|of their data! (how often should you do a patrol read? how do you |
|respond to errors detected/corrected? what criteria do you use to |
|retire media? do you support a hot spare? how many??)" |
|------------------------------------------------------------------------------|
I tried ZFS (via an open-Solaris distribution); UFS (via
e.g. FreeBSD); ext (via Linux); Minix; and even FAT (via FreeDOS,
avoiding the installation of a never-installed copy of Windows XP
still in its shrinkwrap) in 2008. Only ZFS (Solaris) among these
became corrupted: and it became corrupted (and unbootable) within
only a few days of testing!
On 3/27/26 11:27 AM, Don Y wrote:
On 3/27/2026 10:35 AM, Joerg wrote:
Serious hint to anyone thinking about a career in software: Make sure to
develop a solid understanding of at least digital hardware. Build,
experiment with micro controller eval boards (they are cheap), learn how to
use a logic analyzer and an oscilloscope. That will hugely increase your job
security or your self-employed income prospects.
The reverse is also true for folks looking for careers in hardware design.
If you don't know how the software will interface with "your" hardware,
the answer is likely: "poorly".
And, to all, "coding" isn't software design in much the same way
assembling a prototype isn't hardware design.
... and comment lines in the source code do _not_ constitute "documentation".
Jeff Liebermann <jeffl@cruzio.com> wrote:
|-----------------------------------------------------------------------|
|"Never mind that the experts and the consensus are often wrong:        |
|<https://www.learnbydestroying.com/jeffl/crud/Premature-Judgement.txt>"|
|-----------------------------------------------------------------------|
Thanks for that stimulating text file, but it ends with an unproven
consensus about Gates:
    "640K ought to be enough for anybody."
        -- Bill Gates, 1981
I have never seen this purported Gates quotation in its original
source (and I have seen a misquotation alleging that he said 16K!).
Some years ago I read an insightful counterargument that Gates never
actually said "640K ought to be enough for anybody" -- i.e., if so
many persons claim that he said so, then someone should be able to
produce the original publication by Gates instead of hearsay. Did
Gates really ever say "640K ought to be enough for anybody"?
(See HTTP://Gloucester.Insomnia247.NL/ for contact details!)
Bill Gates was famous at Microsoft for program reviews which demanded
smaller and more efficient code. He knew that program memory and
computation space were limited and did his best to work within the
hardware limitations. For example: <https://blog.codinghorror.com/bill-gates-and-donkey-bas/>
"Gates, Allen and Davidoff threw every trick at the book to squeeze
the interpreter into 4 kilobytes. They succeeded and left some
headroom for the programs themselves - without which it would have
been pretty useless, of course."
Don Y <blockedofcourse@foo.invalid> wrote:
|-----------------------------------------------------------------------|
|"> I tried ZFS (via an open-Solaris distribution); UFS (via            |
|> e.g. FreeBSD); ext (via Linux); Minix; and even FAT (via FreeDOS,    |
|> avoiding installing a never installed copy of Windows XP still in its|
|> shrinkwrapping) in 2008. Only ZFS (Solaris) thereof became corrupted:|
|> and it became corrupted (and unbootable) within only a few days of   |
|> testing!                                                             |
|                                                                       |
|ZFS requires a fair bit of resources to work properly.                 |
|                                                                       |
|One thing folks tend to forget is the *hardware* can fail."            |
|-----------------------------------------------------------------------|
That hardware never failed. In particular, it was a new computer,
switched on for its 1st time in Summer 2008. Its ZFS installation
failed before 2009.
On 3/27/2026 10:32 AM, Ross Finlayson wrote:
Yeah my O.S. design is basically to take advantage of the fact
that the modern commodity architectures have left behind lots
of assumptions of the single-core and about interrupts mostly
then about the ubiquity of PCIe bus and the necessity of the
efficient employment of DMA, then that many-core basically
means that modern commodity general-purpose boards need be
treated as models of self-contained distributed systems
themselves, so, fundamentally "asynchronous", as this simplifies
a lot of things, for models of co-operative multi-tasking,
while acknowledging that user programs are nominally adversarial,
and the network is nominally un-trusted.
Divide-and-conquer, information hiding, one-page "programs"
all suggest an OS should cater to small, "decomposed" problems
executing in *true* parallelism (the multitasking illusion
doesn't work in the era of multiple cores/hardware threads,
distributed systems, etc.)
To these criteria, I've added "accountability" as you want to
be able to wrap a virtual "box" around any set of actors
and pretend THAT is a product with real world constraints.
E.g., how do you ensure a task doesn't disproportionately (ab)use
resources meant to be shared with other co-operating tasks?
(And, what do you do if/when it does??)
[My most recent OS is, itself, "decomposed" so that parts of it
can be co-operating instead of having big locks on a monolithic
kernel]
Ah, here the idea of "co-operative scheduler" (vis-a-vis
"pre-emptive scheduler") has that there's a notion of the
model of an o.s. (scheduler, allocator) of co-operation
vis-a-vis "the re-routine", which is a sort of idea like
"co-routine", where basically everything is non-blocking
by design and convention, and instead of a co-routine stack
is a sort of memo-ized monad, then about matters of the
scheduling like "I cut you pick", "straw-pulling", and
"hot potato", with anti-gaming built in to the algorithm,
device drivers are provided as "generic universal drivers",
then that user-space gets a usual "quotas/limits" and
while a contrived user-space program may actually run
a hot inner loop, otherwise the deadlock/starvation and
other issues in concurrency are to be figured out,
for the allocator/scheduler.
I.e., the usual idea of the "co-operative" lives inside
the kernel, user-space is nominally adversarial and
the network is nominally un-trusted. System calls it's
figured are implemented as of a "co-operative" implementation.
It's mostly as of a "design" while though I put it through
the wringer as it were of some "large, competent, conscientious,
co-operative reasoners" or a "bot panel", I can post a link
or reference or all the text of them.
On 3/28/2026 6:41 AM, Ross Finlayson wrote:
Ah, here the idea of "co-operative scheduler" (vis-a-vis
"pre-emptive scheduler") has that there's a notion of the
model of an o.s. (scheduler, allocator) of co-operation
vis-a-vis "the re-routine", which is a sort of idea like
"co-routine", where basically everything is non-blocking
by design and convention, and instead of a co-routine stack
is a sort of memo-ized monad, then about matters of the
scheduling like "I cut you pick", "straw-pulling", and
"hot potato", with anti-gaming built in to the algorithm,
device drivers are provided as "generic universal drivers",
then that user-space gets a usual "quotas/limits" and
while a contrived user-space program may actually run
a hot inner loop, otherwise the deadlock/starvation and
other issues in concurrency are to be figured out,
for the allocator/scheduler.
I assume tasks (processes) run without interruption until they
need an unavailable resource, at which point, they block.
But, *other* tasks are also doing so, concurrently.
As such, EVERY time a resource is released, the scheduler
(theoretically) runs. So, a task that causes a resource to
be made available for another blocking task can be immediately
preempted by that blocked task now being "ready" to run.
The distinction is important because tasks can reside in
different cores as well as on different nodes. So, even if a
task spins in a tight loop, not altering the availability of
any resources, it can still be preempted by the actions of some
other executing task.
[Of course, the round-robin scheduler ensures an equal priority
task is not indefinitely blocked, even if the deadline scheduler
sees no need to reschedule()]
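The scheduling policy described above -- run until you block; every resource release re-runs the scheduler; the releasing task can be preempted on the spot by the waiter it just woke -- can be sketched as a toy model. This is a hypothetical illustration of the idea, not the actual scheduler; all names are invented:

```python
import heapq

class Scheduler:
    """Toy model: tasks run until they block on a resource; every
    release re-runs the scheduler, so the releasing task can be
    preempted at that very instant by the newly-ready waiter."""

    def __init__(self):
        self.ready = []     # (priority, seq, task) min-heap
        self.waiting = {}   # resource -> FIFO of blocked (prio, task)
        self.seq = 0        # FIFO tie-break gives round-robin fairness

    def make_ready(self, prio, task):
        heapq.heappush(self.ready, (prio, self.seq, task))
        self.seq += 1

    def block_on(self, resource, prio, task):
        self.waiting.setdefault(resource, []).append((prio, task))

    def release(self, resource, running_prio):
        """Called when the running task frees `resource`. Returns True
        if the running task must yield the CPU to the woken waiter."""
        waiters = self.waiting.get(resource, [])
        if not waiters:
            return False
        prio, task = waiters.pop(0)
        self.make_ready(prio, task)
        # Preempt if the woken task has equal or better (lower) priority.
        return prio <= running_prio
```

Note that preemption here is driven entirely by the *actions of other tasks* (a release elsewhere can unseat you), matching the point that a tight-looping task can still be preempted.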
Treating the design of the OS in a similar fashion, I can transfer
"ownership" of specific objects to whichever tasks (servers) I
consider appropriate. Dynamically.
E.g., when physical memory is free'd, I give it to a task that
scrubs it (so the next user of said memory never sees any "data"
that may have occupied those memory locations by a previous
"user") and verifies its functionality. When some task NEEDS
additional memory, it blocks waiting on the availability of
such memory -- which causes this "scrubber" to make available
pages that it deems as "clean and functional".
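The scrub-on-free scheme just described can be modeled compactly: freed pages go to a scrubber that wipes stale data and verifies functionality; allocation draws only from the "clean and functional" pool. A hedged toy sketch (page size, pattern, and names are all illustrative):

```python
from collections import deque

class PageScrubber:
    """Toy model: freed pages are wiped (so the next user never sees a
    previous user's data), pattern-tested, and only then reallocated."""

    PAGE = 16  # toy page size, in bytes

    def __init__(self):
        self.dirty = deque()   # pages awaiting scrub
        self.clean = deque()   # pages eligible for reallocation

    def free(self, page):
        self.dirty.append(page)

    def scrub_one(self):
        page = self.dirty.popleft()
        page[:] = bytearray(b"\xa5" * self.PAGE)   # write test pattern
        if any(b != 0xA5 for b in page):
            return None                            # retire faulty page
        page[:] = bytearray(self.PAGE)             # leave it zeroed
        self.clean.append(page)
        return page

    def allocate(self):
        # A real task would *block* here until the scrubber produces
        # a clean page; this toy simply refuses.
        return self.clean.popleft() if self.clean else None
```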
Chopping responsibilities up like this makes it easier to "get it
right" -- at the expense of some performance (each of these interactions
has to cross protection domains, so the interactions aren't as
lightweight as a simple function call in a monolithic kernel).
Processors are cheap. Memory is cheap. Developer time and latent
bugs are costly (figure you have to spend a man-week looking into
a suspected bug. If you're making 10,000 units, EACH such distraction
can justify an additional $1 in hardware costs, without factoring in
the externalities of cost to users, reputation, etc.)
I.e., the usual idea of the "co-operative" lives inside
the kernel, user-space is nominally adversarial and
the network is nominally un-trusted. System calls it's
figured are implemented as of a "co-operative" implementation.
The network is not a named resource. It is used by the OS to
exchange messages with other kernel instances running on other
nodes. So, when a task does:
object=>method(arguments)
it doesn't need to know if the referenced object is local or remote.
The kernels handle location independence.
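The location-independent `object=>method(arguments)` invocation can be sketched as a proxy that forwards every method call through the kernel, so the caller never learns where the object lives. This is a hypothetical illustration of the idea, not the actual RPC machinery; `RemoteKernel` stands in for an encrypted inter-node exchange:

```python
class RemoteKernel:
    """Stand-in for a peer kernel instance; in the real system this
    would be an encrypted message exchange, not a dict lookup."""
    def __init__(self):
        self.objects = {}
    def invoke(self, oid, method, args):
        return getattr(self.objects[oid], method)(*args)

class Proxy:
    """The caller holds only a proxy; whether the backing object is
    local or on another node is invisible at the call site."""
    def __init__(self, kernel, oid):
        self._kernel, self._oid = kernel, oid
    def __getattr__(self, method):
        # Any method name becomes a forwarded invocation.
        return lambda *args: self._kernel.invoke(self._oid, method, args)

class Counter:
    def __init__(self): self.n = 0
    def bump(self, k): self.n += k; return self.n
```

Migrating the object to another kernel would only repoint the proxy's kernel reference; the call syntax at the client never changes.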
Traffic is encrypted with different keys for each node. So, discovering
a key (e.g., by attacking a specific node) only gives you access to the
traffic for that node.
This also allows for:
object=>move(new_server)
to force the object to be managed by another server (for that particular
class of object), which will likely cause the object instance to
"physically" move to whichever node on which that server is executing.
So, I can move every object off of a particular node -- or, onto a
specific node!
[Of course, a task is also an object -- and servers are tasks -- so I
can move entire tasks (processes) similarly.]
In this way, I can bring hardware and software on/off-line on demand to
adapt to changes in needs and available resources. E.g., if I'm running
on battery (backup) power, I can shut down individual nodes to reduce
power consumption after migrating their current responsibilities to
other nodes -- or outright killing them off, after checkpointing their
progress. If I have some new *need*, I can bring a node on-line and push
tasks (objects) onto it. So, if a particular object server becomes
overloaded, I can spawn a new instance of it and migrate some of the
objects that it is currently backing onto that new instance -- the tasks
referencing those objects never know that the objects have "moved"!
Objects are capability-based. So, you can only access objects for which
you currently *have* a capability, and only to the extent permitted by
said capability.
Capabilities are un-named and managed in the kernel(s), so they can't be
counterfeited. You can *know* that a particular object exists (e.g.,
TheFrontDoor, TheGunSafe, TheBankAccount) but can't do anything
to/with it, because you likely haven't been given access to it and can't
GET access to it, as there isn't a central name registry that you could
hack!
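The "no central name registry" property can be sketched in a few lines: user code holds only an opaque token, the token-to-object map lives inside the kernel, and knowing an object's *name* grants nothing. A hedged toy model (not the actual kernel interface):

```python
import secrets

class CapabilityKernel:
    """Toy model of kernel-held capabilities: tokens are unguessable,
    the mapping is kernel-private, and there is no name registry to
    attack -- you either hold a capability or you don't."""

    def __init__(self):
        self._caps = {}                # token -> object (kernel-private)

    def grant(self, obj):
        token = secrets.token_hex(16)  # opaque, unforgeable handle
        self._caps[token] = obj
        return token

    def invoke(self, token, method, *args):
        obj = self._caps.get(token)
        if obj is None:
            raise PermissionError("no such capability")
        return getattr(obj, method)(*args)
```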
It seems fairly obvious that devices can no longer act as islands in the
21st century. There's too little value to add for a single device to
be meaningful -- unless it interacts with other devices in meaningful
ways.
And, rather than the heavyweight client-server interface where things interact in a generic, high-level manner (CORBA-ish), it seems much
more practical to let them interact in a manner that is more natural
to their designs. Do you want to have to standardize on every such interaction with an industry-wide "committee" arguing about how many
humps the horse should have on its back? Or, do you want to make a
product that solves problems while your competitors are trying to define
a level playing field??
It's mostly as of a "design" while though I put it through
the wringer as it were of some "large, competent, conscientious,
co-operative reasoners" or a "bot panel", I can post a link
or reference or all the text of them.
That seems to speak to "proximity" and "affinity", with regards
to "coherency", and "mobility". To "move" state, about state & scope
or the contents of (some of) memory and registers, of a process or task,
here is described as "re-seating" which is also the usual enough
idea in programming like C and C++.
Perhaps the most usual example is pre-emptive multithreading itself,
about basically the state as a stack, and "process control block"
and "thread control block" usually enough, about context-switching.
In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
the idea of the "re-routine" as a model of asynchronous concurrency,
is a little different than the usual idea of a co-routine, which
is usually enough a fork in the process model, then about signals
as IPC with PID and PPID, vis-a-vis, fibers and threads or events
and task queues, basically the re-routine never "blocks" and has
no "yield" nor "async" keywords in the source text, instead any
call to a re-routine implicitly yields, and then the re-routine
is run again later, the re-run, whereas the re-routine is filled in,
then the re-routine adds a penalty of basically n^2 in time
to be completely non-blocking and where asynchrony is modeled in
the language as the normal procedural flow-of-control.
Then, as that's only in the kernel itself, that n^2 might seem a
huge penalty, yet, it's actually quite under that, since as the
re-routine its data (in a stack) is filled in, then most of its
routine is cache hits, the "memoized" calls to the re-routine.
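The re-routine idea above -- ordinary procedural code, no yield/async keywords, an implicit yield whenever a step isn't ready, and re-runs that are mostly memo hits -- can be sketched as a toy model. This is my own illustrative reading of the description, not Ross's implementation:

```python
class NotReady(Exception):
    """A step's result isn't available yet: the whole re-routine
    unwinds (an implicit yield) and will simply be re-run later."""

class ReRoutine:
    """The body is plain flow-of-control. Each step() returns the
    memoized result if present, else aborts this run; on the re-run,
    earlier steps are cache hits -- which is why the naive n^2
    re-execution cost is mostly not paid."""

    def __init__(self, body):
        self.body = body
        self.memo = {}     # step id -> completed result
        self.runs = 0

    def step(self, key, pending):
        if key in self.memo:
            return self.memo[key]       # cache hit on re-run
        if key in pending:              # the "I/O" has completed
            self.memo[key] = pending.pop(key)
            return self.memo[key]
        raise NotReady(key)             # implicit yield

    def run(self, pending):
        self.runs += 1
        try:
            return self.body(lambda k: self.step(k, pending))
        except NotReady:
            return None                 # scheduler will re-run us
```

Each call to run() replays the body from the top, but every previously completed step is answered from the memo, so only the newly ready step does fresh work.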
About the allocator, then this design concept basically is for
making use of virtual memory, to be able to "re-seat" the memory
of a process without changing a process' view of the memory.
This can help avoid both syscalls and memory fragmentation,
since memory paging basically is performed by the user-space
process in its time instead of by the kernel. This has the
usual guarantees of process memory that it's to be visible only
to the process itself unless explicitly shared, that then being
treated as a usual sort of shared resource in the distributed sense.
The syscalls by a process essentially yield (the process yields
to the scheduler), about ideas like round-robin and fairness
and anti-starvation and incremental-progress in the scheduler,
while it's so that until a process gets any other signal and
only touches its own memory that's non-yielding, then about
the machinery of pre-emptive multithreading or context-switch
and as with regards to hyper-threading or the interleaved contexts
on the double-pipeline CPUs, the idea being that context-switching
is along the lines of basically a periodic signal interrupt.
Notions of "Orange Book" and "mandatory access control" then
are considered "more than good ideas" with regards to the allocator
and scheduler of resources in computation.
On 3/29/2026 5:53 AM, Ross Finlayson wrote:
That seems to speak to "proximity" and "affinity", with regards
to "coherency", and "mobility". To "move" state, about state & scope
or the contents of (some of) memory and registers, of a process or task,
here is described as "re-seating" which is also the usual enough
idea in programming like C and C++.
My goal is NOT to disrupt the "programmer's model" for "average
developers". They should truly be able to think that they are running
on a uniprocessor but without any guarantees as to throughput
(to accommodate sharing the physical processor, communication
overhead, etc.).
They shouldn't need to know where "they" are executing or that the
resources on which they rely may not be local.
"Advanced developers" attend to the dynamic reconfiguration of
the system (at runtime). So, THOSE developers build applications
that are given the ability ("capability") to relocate resources,
kill off tasks, spawn new ones, etc. Because they, presumably, have
a more detailed understanding of the "System" beyond the scope of
some particular application/task within it.
Perhaps the most usual example is pre-emptive multithreading itself,
about basically the state as a stack, and "process control block"
and "thread control block" usually enough, about context-switching.
I support a heterogeneous environment; an object can be migrated
to a different processor *family* at any time (while executing).
You shouldn't care as long as the interface (methods) to the
object remain available (for the capabilities you have been
granted) along with the current state of the object.
Similarly, the algorithms used to implement those methods (as
well as internal data members) can change dynamically -- as long
as the interface functionality remains immutable.
For example, I create namespaces -- (name, capability) dictionaries --
for each process. If you only need a few registered names (stdin,
stdout, stderr), the code that implements that is entirely different
than the implementation that tries to manage hundreds of named entries.
As it should be. (because "you" are "billed" for the resources that you
use, you'd not want to bear the cost of an implementation that did more
than you needed -- just like you wouldn't develop a standalone device
with more complexity/cost than necessary!)
If names are insignificant (e.g., akin to file descriptors), then an
implementation might use a single "char" to represent a name, giving
you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc.
Or, treat that char as an 8 bit int -- 0x01, 0x02, 0x61, 0x62, ...
(do you really *need* names like "Object 1", "Object 2", etc.?)
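The small-namespace case above can be sketched concretely: a process that only ever binds a handful of names gets a flat list and single-char names, rather than being billed for the hash-table machinery a hundreds-of-entries client would need. A hypothetical illustration (limit and names invented):

```python
class TinyNamespace:
    """Toy model of a namespace sized to its client: a handful of
    (name, capability) pairs, where a "name" can be as cheap as a
    single char (i.e., the 8-bit int 0x00..0xFF behind it)."""

    LIMIT = 8

    def __init__(self):
        self.entries = []                 # [(name, capability)]

    def bind(self, name, capability):
        if len(self.entries) >= self.LIMIT:
            raise MemoryError("client needs the big-namespace server")
        self.entries.append((name, capability))

    def lookup(self, name):
        for n, cap in self.entries:       # linear scan is fine at n<=8
            if n == name:
                return cap
        raise KeyError(name)
```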
In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
the idea of the "re-routine" as a model of asynchronous concurrency,
is a little different than the usual idea of a co-routine, which
is usually enough a fork in the process model, then about signals
as IPC with PID and PPID, vis-a-vis, fibers and threads or events
and task queues, basically the re-routine never "blocks" and has
no "yield" nor "async" keywords in the source text, instead any
call to a re-routine implicitly yields, and then the re-routine
is run again later, the re-run, whereas the re-routine is filled in,
then the re-routine adds a penalty of basically n^2 in time
to be completely non-blocking and where asynchrony is modeled in
the language as the normal procedural flow-of-control.
"Yield" is just a hint to the scheduler. If you have a preemptive
implementation where "time" can be a preemption criterion, then a
task need never "suggest" a good place to relinquish the processor.
OTOH, if you can only preempt when the task invokes an OS primitive,
then you have to be wary of a developer spinning without ever giving
the system a chance to "interrupt".
Then, as that's only in the kernel itself, that n^2 might seem a
huge penalty, yet, it's actually quite under that, since as the
re-routine its data (in a stack) is filled in, then most of its
routine is cache hits, the "memoized" calls to the re-routine.
About the allocator, then this design concept basically is for
making use of virtual memory, to be able to "re-seat" the memory
of a process without changing a process' view of the memory.
Of course. But, with VMM, you can do so much more:
- CoW semantics
- DSM
- remapping "defective" memory
- releasing memory that will NEVER be revisited
- universal call by value (for large arguments)
etc. None of these things need impact the developer.
This can help avoid both syscalls and memory fragmentation,
since memory paging basically is performed by the user-space
process in its time instead of by the kernel. This has the
usual guarantees of process memory that it's to be visible only
to the process itself unless explicitly shared, that then being
treated as a usual sort of shared resource in the distributed sense.
The syscalls by a process essentially yield (the process yields
to the scheduler), about ideas like round-robin and fairness
and anti-starvation and incremental-progress in the scheduler,
while it's so that until a process gets any other signal and
only touches its own memory that's non-yielding, then about
the machinery of pre-emptive multithreading or context-switch
and as with regards to hyper-threading or the interleaved contexts
on the double-pipeline CPUs, the idea being that context-switching
is along the lines of basically a periodic signal interrupt.
But you have to guard against "non-cooperative" (and even HOSTILE!)
actors who may wish to compromise performance by monopolizing resources
(of which the CPU is but one).
Using resource ledgers lets you constrain an application (process)
to a subset of the available resources. Putting runtime constraints
on memory, time, etc. lets you remove a "bad actor" from the set
of processes eligible to run. A persistent store lets you remember
this decision so you don't "readmit" the process at some future date!
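The ledger-plus-persistence idea above can be sketched in a few lines: every process runs against a budget, exceeding it gets the process ejected, and the verdict is written to a persistent store so the offender isn't readmitted later. A hedged toy model (field names are invented):

```python
class Ledger:
    """Toy model of per-process resource ledgers: constrain each
    process to a budget; exceeding it marks the process a bad actor,
    and that verdict persists so it isn't readmitted later."""

    def __init__(self, store):
        self.store = store                 # persistent ban list
        self.budgets = {}                  # pid -> {"mem": n, "cpu": n}
        self.used = {}

    def admit(self, pid, mem, cpu):
        if pid in self.store:
            raise PermissionError("previously banned")
        self.budgets[pid] = {"mem": mem, "cpu": cpu}
        self.used[pid] = {"mem": 0, "cpu": 0}

    def charge(self, pid, kind, amount):
        self.used[pid][kind] += amount
        if self.used[pid][kind] > self.budgets[pid][kind]:
            self.store.add(pid)            # remember the verdict
            raise RuntimeError(pid + " exceeded " + kind + " budget")
```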
Notions of "Orange Book" and "mandatory access control" then
are considered "more than good ideas" with regards to the allocator
and scheduler of resources in computation.
Capabilities implicitly limit the actions that can be performed (i.e.,
methods that can be invoked) on an object. E.g., I can let you
LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
once, and never again!
How you refine your "permissions" is something that belongs in the
objects themselves -- not layered onto a "filesystem" as an afterthought.
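The LOCK-but-never-UNLOCK and open-exactly-once examples can be sketched as a capability that names exactly the methods it permits and optionally carries a use count. A hypothetical illustration of permissions living at the object layer (class names invented):

```python
class Capability:
    """Toy model of refined permissions in the object itself: the
    capability lists which methods it grants and may carry a use
    count -- e.g. LOCK-but-never-UNLOCK, or open-exactly-once."""

    def __init__(self, obj, methods, uses=None):
        self.obj = obj
        self.methods = frozenset(methods)
        self.uses = uses                  # None => unlimited

    def invoke(self, method, *args):
        if method not in self.methods:
            raise PermissionError(method + " not granted")
        if self.uses is not None:
            if self.uses == 0:
                raise PermissionError("capability exhausted")
            self.uses -= 1
        return getattr(self.obj, method)(*args)

class Door:
    def lock(self):   return "locked"
    def unlock(self): return "unlocked"
    def open(self):   return "opened"
```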
On 03/29/2026 12:39 PM, Don Y wrote:
On 3/29/2026 5:53 AM, Ross Finlayson wrote:
That seems to speak to "proximity" and "affinity", with regards
to "coherency", and "mobility". To "move" state, about state & scope
or the contents of (some of) memory and registers, of a process or task, >>> here is described as "re-seating" which is also the usual enough
idea in programming like C and C++.
My goal is NOT to disrupt the "programmer's model" for "average
developers". They should truly be able to think that they are running
on a uniprocessor but without any guarantees as to throughput
(to accommodate sharing the physical processor, communication
overhead, etc.).
They shouldn't need to know where "they" are executing or that the
resources on which they rely may not be local.
"Advanced developers" attend to the dynamic reconfiguration of
the system (at runtime). So, THOSE developers build applications
that are given the ability ("capability") to relocate resources,
kill off tasks, spawn new ones, etc. Because they, presumably, have
a more detailed understanding of the "System" beyond the scope of
some particular application/task within it.
Perhaps the most usual example is pre-emptive multithreading itself,
about basically the state as a stack, and "process control block"
and "thread control block" usually enough, about context-switching.
I support a heterogeneous environment; an object can be migrated
to a different processor *family* at any time (while executing).
You shouldn't care as long as the interface (methods) to the
object remain available (for the capabilities you have been
granted) along with the current state of the object.
Similarly, the algorithms used to implement those methods (as
well as internal data members) can change dynamically -- as long
as the interface functionality remains immutable.
For example, I create namespaces -- (name, capability) dictionaries --
for each process. If you only need a few registered names (stdin,
stdout, stderr), the code that implements that is entirely different
than the implementation that tries to manage hundreds of named entries.
As it should be. (because "you" are "billed" for the resources that you
use, you'd not want to bear the cost of an implementation that did more
than you needed -- just like you wouldn't develop a standalone device
with more complexity/cost than necessary!)
If names are insignificant (e.g., akin to file descriptors), then an
implementation might use a single "char" to represent a name, giving
you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc.
Or, treat that char as an 8 bit int -- 0x01, 0x02, 0x61, 0x62, ...
(do you really *need* names like "Object 1", "Object 2", etc.?)
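When names really are insignificant, the single-char scheme collapses to a direct-indexed table: no hashing, no string storage, no comparison loop. A sketch (tiny_ns_t, handle_t, and the ns_* names are mine, for illustration):

```c
/* Sketch: a namespace keyed by one byte -- "a", 0x61, "^X" are all
 * just indices into a 256-entry table. */
#include <stddef.h>

typedef void *handle_t;

typedef struct { handle_t slot[256]; } tiny_ns_t;

void ns_bind(tiny_ns_t *ns, unsigned char name, handle_t h) {
    ns->slot[name] = h;          /* the name IS the index */
}

handle_t ns_lookup(const tiny_ns_t *ns, unsigned char name) {
    return ns->slot[name];       /* O(1), unbound names are NULL */
}
```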
In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
the idea of the "re-routine" as a model of asynchronous concurrency
is a little different from the usual idea of a co-routine, which is
usually enough a fork in the process model, with signals as IPC via
PID and PPID, vis-a-vis fibers and threads, or events and task queues.
Basically the re-routine never "blocks" and has no "yield" nor "async"
keywords in the source text; instead, any call to a re-routine
implicitly yields, and the re-routine is run again later (the re-run)
as its results are filled in. The re-routine thus adds a penalty of
basically n^2 in time to be completely non-blocking, while asynchrony
is modeled in the language as the normal procedural flow-of-control.
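The replay mechanics can be sketched in a few lines of C. This is my reading of the scheme, with illustrative names (reroutine_t, rr_call, etc.): a call site either hits the memo and returns instantly, or misses and unwinds (the implicit yield); each re-run replays the completed prefix from the memo.

```c
/* Sketch of the "re-routine" replay idea: misses yield, hits replay. */
#include <stdbool.h>

#define PENDING (-1)

typedef struct { int memo[8]; int filled; } reroutine_t;

/* A "call" inside a re-routine: memo hit, or miss (implicit yield). */
static bool rr_call(reroutine_t *rr, int idx, int *out) {
    if (idx < rr->filled) { *out = rr->memo[idx]; return true; }
    return false;   /* miss: caller unwinds, to be re-run later */
}

/* The scheduler (or completed I/O) fills in one result per re-run. */
static void rr_fulfill(reroutine_t *rr, int value) {
    rr->memo[rr->filled++] = value;
}

/* Body: add two "asynchronously obtained" values.  Note the plain
 * procedural flow-of-control -- no yield or async keyword. */
int body(reroutine_t *rr) {
    int a, b;
    if (!rr_call(rr, 0, &a)) return PENDING;
    if (!rr_call(rr, 1, &b)) return PENDING;
    return a + b;
}
```

Each re-run of `body` re-executes from the top, which is where the n^2 worst case comes from; the replayed prefix is all memo (cache) hits.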
"Yield" is just a hint to the scheduler. If you have a preemptive
implementation where "time" can be a preemption criteria, then a
task need never "suggest" a good place to relinquish the processor.
OTOH, if you can only preempt when the task invokes an OS primitive,
then you have to be wary of a developer spinning without ever giving
the system a chance to "interrupt".
Then, as that's only in the kernel itself, that n^2 might seem a
huge penalty; yet it's actually well under that, since as the
re-routine's data (in a stack) is filled in, most of each re-run
is cache hits: the "memoized" calls to the re-routine.
About the allocator: this design concept is basically for making
use of virtual memory, to be able to "re-seat" the memory of a
process without changing the process's view of that memory.
Of course. But, with VMM, you can do so much more:
- CoW semantics
- DSM
- remapping "defective" memory
- releasing memory that will NEVER be revisited
- universal call by value (for large arguments)
etc. None of these things need impact the developer.
This can help avoid both syscalls and memory fragmentation,
since memory paging is basically performed by the user-space
process in its own time instead of by the kernel. This keeps the
usual guarantee that process memory is visible only to the process
itself unless explicitly shared, shared memory then being treated
as a usual sort of shared resource in the distributed sense.
The syscalls by a process essentially yield (the process yields
to the scheduler), with the usual ideas of round-robin, fairness,
anti-starvation, and incremental progress in the scheduler. Until
a process gets some other signal, so long as it only touches its
own memory, it is non-yielding; then the machinery of pre-emptive
multithreading or context-switching comes in (as with hyper-threading,
or the interleaved contexts on the double-pipeline CPUs), the idea
being that context-switching is along the lines of a periodic
signal interrupt.
But you have to guard against "non-cooperative" (and even HOSTILE!)
actors who may wish to compromise performance by monopolizing resources
(of which the CPU is but one).
Using resource ledgers lets you constrain an application (process)
to a subset of the available resources. Putting runtime constraints
on memory, time, etc. lets you remove a "bad actor" from the set
of processes eligible to run. A persistent store lets you remember
this decision so you don't "readmit" the process at some future date!
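A ledger of that sort might look like the following C sketch (ledger_t and its policy of banning on first breach are my illustration, not a claim about any particular system): charges against a hard limit, with a sticky "banned" verdict that a persistent store could remember across restarts.

```c
/* Sketch: per-process resource ledger with a hard limit; a breach
 * marks the process ineligible, permanently. */
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    size_t   mem_used, mem_limit;   /* bytes */
    unsigned cpu_used, cpu_limit;   /* ticks (not exercised below) */
    bool     banned;                /* persisted: never readmit */
} ledger_t;

/* Charge memory; a breach evicts the process for good. */
bool ledger_charge_mem(ledger_t *l, size_t bytes) {
    if (l->banned || l->mem_used + bytes > l->mem_limit) {
        l->banned = true;           /* remember the bad actor */
        return false;
    }
    l->mem_used += bytes;
    return true;
}
```

Banning on the first over-limit request is of course a policy choice; a gentler scheme might just deny the single allocation.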
Notions of "Orange Book" and "mandatory access control" then
are considered "more than good ideas" with regards to the allocator
and scheduler of resources in computation.
Capabilities implicitly limit the actions that can be performed (i.e.,
methods that can be invoked) on an object. E.g., I can let you
LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
once, and never again!
How you refine your "permissions" is something that belongs in the
objects themselves -- not layered onto a "filesystem" as an afterthought.
Hey, thanks for writing. Since all we know about each other
are these brief exchanges, filling in some detail helps a lot
to understand, or rather, get an idea, of an estimate of the
depth of the comprehension of the whole machine stack.
The idea of a kernel or operating system (executive, scheduler,
..., interactive "operating system") for "commodity" architectures
these days is that the various chips' architectures are pretty
ubiquitous: PCIe is on everything PC, then usually enough USB, and
the NIC and the USB root are pretty much ubiquitously PCIe devices;
after UEFI and ACPI and the SMI, or DeviceTree, ..., it's ubiquitous,
after "economy of scale" a simple enough "economy of ubiquity".
So, the chips are almost all 64-bit in their native word width (though
agreeably sometimes it's 128), and they have various vector or SIMD
instructions, then as with regards to fitting two operands in a word,
the SWAR approach: vectorizing the scalars, with word, double-word,
and half-word operations.
Then, here mostly the consideration is the "head-less" or "HID-less"
case: there's no human interface device involved in server runtime
images, for things like running services on "boxes" or "nodes".
Then, for compiling existing sources, it seems the easiest way
is to implement profiles of POSIX, or the POSIX base plus pthreads.
That of course is much the traditional UNIX account where
"everything is a file"; the operating system itself doesn't
need to be implemented that way, just surface the usual objects
as primitives, mostly as having file handles.
(If the sources compile and run with the same behavior, most won't know or care.)
Then, objects go according to a "naming and directory interface",
usually enough; the Orange Book, for example, defines granular
access controls, including for all things like files.
About quotas and limits and the like, and about the perceived value
of pre-emptive scheduling to avoid "hogging" or thrashing: here the
account is basically of unmaskable, uncatchable interrupts whose
signal handler is the operating system code on that core, which
makes the task yield, then necessarily enough using the usual
context-switch machinery to pause it and restart it.
Most code eventually touches system calls, and if there's a spare
core it might actually be the idea to let the compute-intensive
routine employ the entire core.
The many-core architectures of these days (even fifteen or twenty
years ago with "AMD Bulldozer and 8 cores" and the like; these days
usual PC or server chips have scores of cores) are often used with
the idea of running a giant hypervisor and many virts, like a
Kubernetes cluster for example, or simply a ton of virts; these
days a single board is a model of a distributed system internal
to itself.
The idea for allocation and sharing, that "fairness is a matter
of mechanism, not policy", is for the usual ideas of "throughput"
and "transput" as Finkel put it in "An Operating Systems Vade Mecum",
about I/O and queues, and limits.
"Interrupts" are the events, "coherent cache lines" the units of
serialization of memory, DMA is the bulk transfer medium in protocol,
a byte is the smallest addressable memory unit; these are mostly
the ordering guarantees, all else "undefined".
Proximity, affinity, coherency, mobility, ....
On 3/29/2026 2:29 PM, Ross Finlayson wrote:
Hey, thanks for writing. Since all we know about each other
are these brief exchanges, filling in some detail helps a lot
to understand, or rather, get an idea, of an estimate of the
depth of the comprehension of the whole machine stack.
Perhaps you should relate the types of applications you've been
tasked with in the past -- or, those you hope to target in the
future. Frankly, your comments read like word-salad -- failing
to appreciate or interact with my prior comments and, instead,
babbling bits and pieces you've read somewhere instead of reflecting
a genuine understanding of the issue(s).
E.g., there's no *why* behind your statements. No "hope" in their
proposals.
Experience provides both of these and, without it, future
endeavours tend to fail -- miserably. (if you don't learn...)
On Sun, 29 Mar 2026 14:29:32 -0700, Ross Finlayson <ross.a.finlayson@gmail.com> wrote:
[...]
We build electronics. Our new PoE instrument line uses an RP2040
dual-ARM chip overclocked to 150 MHz. It costs 75 cents in any
quantity.
We call the two CPUs (and the two ends of the box) Alice and Bob.
Alice does the ethernet and usb i/o, command parsing, calibrations,
all that slow floating-point management. Bob does the realtime i/o, fixed-point, directly or through an FPGA.
All programmed in bare-metal C with no OS. Abstraction = 0.
Seems to work.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
On 03/29/2026 03:24 PM, john larkin wrote:
On Sun, 29 Mar 2026 14:29:32 -0700, Ross Finlayson
<ross.a.finlayson@gmail.com> wrote:
[...]
That seems cool. So, you wrote your own USB and packet stack?
Or, it's a system-on-chip?
I drive my tractor with my hands on the wheels and the sticks
and the levers and the other levers and the feet on the pedals
and the other pedals and my rear in the seat, ..., abstraction = 0.
On 03/29/2026 08:54 PM, Ross Finlayson wrote:
On 03/29/2026 03:24 PM, john larkin wrote:
On Sun, 29 Mar 2026 14:29:32 -0700, Ross Finlayson
<ross.a.finlayson@gmail.com> wrote:
On 03/29/2026 12:39 PM, Don Y wrote:
On 3/29/2026 5:53 AM, Ross Finlayson wrote:
That seems to speak to "proximity" and "affinity", with regards
to "coherency", and "mobility". To "move" state, about state & scope >>>>>> or the contents of (some of) memory and registers, of a process or >>>>>> task,
here is described as "re-seating" which is also the usual enough
idea in programming like C and C++.
My goal is NOT to disrupt the "programmer's model" for "average
developers". They should truly be able to think that they are running >>>>> on a uniprocessor but without any guarantees as to throughput
(to accommodate sharing the physical processor, communication
overhead, etc.).
They shouldn't need to know where "they" are executing or that the
resources on which they rely may not be local.
"Advanced developers" attend to the dynamic reconfiguration of
the system (at runtime). So, THOSE developers build applications
that are given the ability ("capability") to relocate resources,
kill off tasks, spawn new ones, etc. Because they, presumably, have
a more detailed understanding of the "System" beyond the scope of
some particular application/task within it.
Perhaps the most usual example is pre-emptive multithreading itself,
about basically the state as a stack, and "process control block"
and "thread control block" usually enough, about context-switching.
I support a heterogeneous environment; an object can be migrated
to a different processor *family* at any time (while executing).
You shouldn't care as long as the interface (methods) to the
object remain available (for the capabilities you have been
granted) along with the current state of the object.
Similarly, the algorithms used to implement those methods (as
well as internal data members) can change dynamically -- as long
as the interface functionality remains immutable.
For example, I create namespaces -- (name, capability) dictionaries --
for each process. If you only need a few registered names (stdin,
stdout, stderr), the code that implements that is entirely different
than the implementation that tries to manage hundreds of named entries.
As it should be. (Because "you" are "billed" for the resources that
you use, you'd not want to bear the cost of an implementation that did
more than you needed -- just like you wouldn't develop a standalone
device with more complexity/cost than necessary!)
If names are insignificant (e.g., akin to file descriptors), then an
implementation might use a single "char" to represent a name, giving
you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc.
Or, treat that char as an 8-bit int -- 0x01, 0x02, 0x61, 0x62, ...
(Do you really *need* names like "Object 1", "Object 2", etc.?)
In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
the idea of the "re-routine" as a model of asynchronous concurrency
is a little different than the usual idea of a co-routine, which
is usually enough a fork in the process model, then about signals
as IPC with PID and PPID, vis-a-vis fibers and threads or events
and task queues. Basically the re-routine never "blocks" and has
no "yield" nor "async" keywords in the source text; instead any
call to a re-routine implicitly yields, and then the re-routine
is run again later, the re-run, as the re-routine is filled in.
The re-routine then adds a penalty of basically n^2 in time
to be completely non-blocking, where asynchrony is modeled in
the language as the normal procedural flow-of-control.
"Yield" is just a hint to the scheduler. If you have a preemptive
implementation where "time" can be a preemption criteria, then a
task need never "suggest" a good place to relinquish the processor.
OTOH, if you can only preempt when the task invokes an OS primitive,
then you have to be wary of a developer spinning without ever giving
the system a chance to "interrupt".
Then, as that's only in the kernel itself, that n^2 might seem a
huge penalty, yet it's actually quite under that: as the re-routine's
data (in a stack) is filled in, most of a re-run is cache hits, the
"memoized" calls to the re-routine.
About the allocator, then this design concept basically is for
making use of virtual memory, to be able to "re-seat" the memory
of a process without changing a process' view of the memory.
Of course. But, with VMM, you can do so much more:
- CoW semantics
- DSM
- remapping "defective" memory
- releasing memory that will NEVER be revisited
- universal call by value (for large arguments)
etc. None of these things need impact the developer.
This can help avoid both syscalls and memory fragmentation,
since memory paging basically is performed by the user-space
process in its time instead of by the kernel. This has the
usual guarantees of process memory, that it's to be visible only
to the process itself unless explicitly shared, that then being
treated as a usual sort of shared resource in the distributed sense.
The syscalls by a process essentially yield (the process yields
to the scheduler), about ideas like round-robin and fairness
and anti-starvation and incremental-progress in the scheduler,
while it's so that until a process gets any other signal and
only touches its own memory that's non-yielding, then about
the machinery of pre-emptive multithreading or context-switch,
and as with regards to hyper-threading or the interleaved contexts
on the double-pipeline CPUs, the idea being that context-switching
is along the lines of basically a periodic signal interrupt.
But you have to guard against "non-cooperative" (and even HOSTILE!)
actors who may wish to compromise performance by monopolizing
resources (of which the CPU is but one).
Using resource ledgers lets you constrain an application (process)
to a subset of the available resources. Putting runtime constraints
on memory, time, etc. lets you remove a "bad actor" from the set
of processes eligible to run. A persistent store lets you remember
this decision so you don't "readmit" the process at some future date!
Notions of "Orange Book" and "mandatory access control" then
are considered "more than good ideas" with regards to the allocator
and scheduler of resources in computation.
Capabilities implicitly limit the actions that can be performed
(i.e., methods that can be invoked) on an object. E.g., I can let you
LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
once, and never again!
How you refine your "permissions" is something that belongs in the
objects themselves -- not layered onto a "filesystem" as an
afterthought.
Hey, thanks for writing. Since all we know about each other
are these brief exchanges, filling in some detail helps a lot
to understand, or rather, get an idea, of an estimate of the
depth of the comprehension of the whole machine stack.
The idea of a kernel or operating system (executive, scheduler,
..., interactive "operating system") for "commodity" architectures
these days is that the various chips' architectures are pretty
ubiquitous: PCIe is on everything PC, then usually enough USB,
then the NIC and USB root are pretty much ubiquitously PCIe
devices, or as after UEFI and ACPI and the SMI, or as about
DeviceTree, ..., it's ubiquitous; after "economy of scale",
a simple enough "economy of ubiquity".
So, the chips are almost all 64-bit in their native word width, though
agreeably sometimes it's 128, and they have various vector or SIMD
instructions, then as with regards to fitting two operands in a word,
the SWAR approach, about vectorizing the scalars, in word and
double-word and half-word.
Then, here mostly the consideration is the "head-less" or "HID-less":
there's no human interface device involved in server runtime images
for things like running services, or usually enough "boxes" or "nodes".
Then, for compiling existing sources, it seems the easiest way to
do that is to implement profiles of POSIX, or POSIX base and pthreads.
That then of course is much the traditional UNIX account of where
"everything is a file", though the operating system itself doesn't
need to be implemented that way; just surface the usual objects as
they are, as primitives, and mostly as having file handles.
(If the sources compile and run the same behavior some won't
know/care.)
Then, objects, according to a "naming and directory interface"
usually enough; Orange Book for example defines granular access
controls, so, including all things like files.
About quota and limits and the like, and about the perceived value
of pre-emptive scheduling to avoid "hogging", or thrashing: here
is an account of basically unmaskable, uncatchable interrupts that
have as a signal handler the operating-system code on the core,
which results in making the task yield, then necessarily enough using
the usual context-switch machinery to pause it and restart it.
Most code eventually touches system calls, and if there's a spare
core it might actually be the idea to let the compute-intensive
routine employ the entire core.
The many-core architectures of these days, even fifteen or twenty
years ago with "AMD Bulldozer and 8 cores" and the like, these
days usual PC or server chips have scores of cores, ..., often
for example with the idea of running a giant hypervisor then
as many virts, ..., like a Kubernetes cluster for example, ...,
or simply a ton of virts, ..., these days a single board is
as a model of a distributed system internal to itself.
The idea for allocation and sharing, that "fairness is a matter
of mechanism, not policy", is for the usual ideas of "throughput"
and "transput" as Finkel put it in "An Operating Systems Vade Mecum",
about I/O and queues, and limits.
"Interrupts" are the events, "coherent cache lines" the units of
serialization of memory, DMA is the bulk transfer medium in protocol,
a byte is the smallest addressable memory unit, these are mostly
the ordering guarantees, all else "undefined".
Proximity, affinity, coherency, mobility, ....
We build electronics. Our new PoE instrument line uses an RP2040
dual-ARM chip overclocked to 150 MHz. It costs 75 cents in any
quantity.
We call the two CPUs (and the two ends of the box) Alice and Bob.
Alice does the ethernet and usb i/o, command parsing, calibrations,
all that slow floating-point management. Bob does the realtime i/o,
fixed-point, directly or through an FPGA.
All programmed in bare-metal c with no OS. Abstraction=0.
Seems to work.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
That seems cool. So, you wrote your own USB and packet stack?
Or, it's a system-on-chip?
I drive my tractor with my hands on the wheels and the sticks
and the levers and the other levers and the feet on the pedals
and the other pedals and my rear in the seat, ..., abstraction = 0.
Attitudes are various about "abstraction" and "concreteness".
(Attitudes or "opinions".) Some prefer to write more or less
directly to the concrete adapter, others prefer to model the
interaction since usually only a tiny, tiny subset of the
"defined behavior" of the concrete adapter fulfills its
abstract function.
It takes all kinds, ....
"The Blind Men and the Elephant" is a usual sort of account,
and it's the same kind of idea since forever that any two
individuals are going to see things differently, and a question
whether they even see the same thing at all, "subjectivity",
then there's the great formal and practical account of
the formal or "interfaces" usually enough, interfaces to
the adapters, "inter-subjectivity", so when we clock out
we can say it's done.
When I see someone writing directly to the concrete
adapter, sometimes it's hard to distinguish that, or
easy to read that as, from "Hello, World".
Then, "layers" is usually enough the idea of making
models of modules, in layers, then that the boundaries
exist, then for example that the code its logic is
"separable and composable", then to point it at other
adapters their interfaces or harnesses, for example
for "systems under test" vis-a-vis "systems under load".
On Sun, 29 Mar 2026 20:54:28 -0700, Ross Finlayson <ross.a.finlayson@gmail.com> wrote:
On 03/29/2026 03:24 PM, john larkin wrote:
On Sun, 29 Mar 2026 14:29:32 -0700, Ross Finlayson
<ross.a.finlayson@gmail.com> wrote:
On 03/29/2026 12:39 PM, Don Y wrote:
On 3/29/2026 5:53 AM, Ross Finlayson wrote:
That seems to speak to "proximity" and "affinity", with regards
to "coherency", and "mobility". To "move" state, about state & scope >>>>>> or the contents of (some of) memory and registers, of a process or task, >>>>>> here is described as "re-seating" which is also the usual enough
idea in programming like C and C++.
My goal is NOT to disrupt the "programmer's model" for "average
developers". They should truly be able to think that they are running >>>>> on a uniprocessor but without any guarantees as to throughput
(to accommodate sharing the physical processor, communication
overhead, etc.).
They shouldn't need to know where "they" are executing or that the
resources on which they rely may not be local.
"Advanced developers" attend to the dynamic reconfiguration of
the system (at runtime). So, THOSE developers build applications
that are given the ability ("capability") to relocate resources,
kill off tasks, spawn new ones, etc. Because they, presumably, have >>>>> a more detailed understanding of the "System" beyond the scope of
some particular application/task within it.
Perhaps the most usual example is pre-emptive multithreading itself, >>>>>> about basically the state as a stack, and "process control block"I support a heterogeneous environment; an object can be migrated
and "thread control block" usually enough, about context-switching. >>>>>
to a different processor *family* at any time (while executing).
You shouldn't care as long as the interface (methods) to the
object remain available (for the capabilities you have been
granted) along with the current state of the object.
Similarly, the algorithms used to implement those methods (as
well as internal data members) can change dynamically -- as long
as the interface functionality remains immutable.
For example, I create namespaces -- (name, capability) dictionaries -- for
each process. If you only need a few registered names (stdin, stdout, >>>>> stderr),
the code that implements that is entirely different than the implementation
that tries to manage hundreds of named entries.
As it should be. (because "you" are "billed" for the resources that you >>>>> use, you'd not want to bear the cost of an implementation that did more >>>>> than you needed -- just like you wouldn't develop a standalone device >>>>> with more complexity/cost than necessary!)
If names are insignificant (e.g., akin to file descriptors), then an >>>>> implementation might use a single "char" to represent a name, giving >>>>> you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc. >>>>> Or, treat that char as an 8 bit int -- 0x01, 0x02, 0x61, 0x62, ...
(do you really *need* names like "Object 1", "Object 2", etc.?)
In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
the idea of the "re-routine" as a model of asychronous concurrency, >>>>>> is a little different than the usual idea of a co-routine, which
is usually enough a fork in the process model, then about signals
as IPC with PID and PPID, vis-a-vis, fibers and threads or events
and task queues, basically the re-routine never "blocks" and has
no "yield" nor "async" keywords in the source text, instead any
call to a re-routine implicitly yields, and then the re-routine
is run again later, the re-run, where as the re-routine is filled in, >>>>>> then then the re-routine adds a penalty of basically n^2 in time
to be completely non-blocking and where asynchrony is modeled in
the language as the normal procedural flow-of-control.
"Yield" is just a hint to the scheduler. If you have a preemptive
implementation where "time" can be a preemption criteria, then a
task need never "suggest" a good place to relinquish the processor.
OTOH, if you can only preempt when the task invokes an OS primitive, >>>>> then you have to be wary of a developer spinning without ever giving >>>>> the system a chance to "interrupt".
Then, as that's only in the kernel itself, that n^2 might seem a
huge penalty, yet, it's actually quite under that, since as the
re-routine its data (in a stack) is filled in, then most of its
routine is cache hits, the "memoized" calls to the re-routine.
About the allocator, then this design concept basically is for
making use of virtual memory, to be able to "re-seat" the memory
of a process without changing a process' view of the memory.
Of course. But, with VMM, you can do so much more:
- CoW semantics
- DSM
- remapping "defective" memory
- releasing memory that will NEVER be revisited
- universal call by value (for large arguments)
etc. None of these things need impact the developer.
This can help avoid both syscalls and memory fragmentation,
since memory paging basically is performed by the user-space
process in its time instead of by the kernel. This has the
usual guarantees of process memory that it's to be visible only
to the process itself unless explicitly shared, that then being
treated as a usual sort of shared resource in the distributed sense. >>>>>>
The syscalls by a process essentially yield (the process yields
to the scheduler), about ideas like round-robin and fairness
and anti-starvation and incremental-progress in the scheduler,
while it's so that until a process gets any other signal and
only touches its own memory that's non-yielding, then about
the machinery of pre-emptive multithreading or context-switch
and as with regards to hyper-threading or the interleaved contexts >>>>>> on the double-pipeline CPUs, the idea being that context-switching >>>>>> is along the lines of basically a periodic signal interrupt.
But you have to guard against "non-cooperative" (and even HOSTILE!)
actors who may wish to compromise performance by monopolizing resources >>>>> (of which the CPU is but one).
Using resource ledgers lets you constrain an application (process)
to a subset of the available resources. Putting runtime constraints >>>>> on memory, time, etc. lets you remove a "bad actor" from the set
of processes eligible to run. A persistent store lets you remember
this decision so you don't "readmit" the process at some future date! >>>>>
Notions of "Orange Book" and "mandatory access control" then
are considered "more than good ideas" with regards to the allocator >>>>>> and scheduler of resources in computation.
Capabilities implicitly limit the actions that can be performed (i.e., >>>>> methods that can be invoked) on an object. E.g., I can let you
LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
once, and never again!
How you refine your "permissions" is something that belongs in the
objects themselves -- not layered onto a "filesystem" as an afterthought. >>>>>
Hey, thanks for writing. Since all we know about each other
are these brief exchanges, filling in some detail helps a lot
to understand, or rather, get an idea, of an estimate of the
depth of the comprehension of the whole machine stack.
The idea of a kernel or operating system (executive, scheduler,
..., interactive "operating system") for "commodity" architectures
these days is that it's pretty ubiquitous the various chips'
architectures, then that PCIe is on everything PC, then about
usually enough USB, then about the NIC and USB root, those are
pretty much ubiquitously PCIe devices, or as after UEFI and ACPI
and the SMI or as about DeviceTree, ..., it's ubiquitous,
after "economy of scale" a simple enough "economy of ubiquity".
So, the chips are almost all 64-bit their native word width, though
agreeably sometimes it's 128, and they have various vector or SIMD
instructions, then as with regards to fitting two operands in a word,
the SWAR approach, about vectorizing the scalars, and word and
double-word and word and half-word.
Then, here mostly the consideration is the "head-less" or "HID-less",
there's no human interface device involved in server runtime images
for things like running services or usually enough "boxes" or "nodes". >>>>
Then, for compiling existing sources, it seems the easiest way to
do that is to implement profiles of POSIX, or posix base and pthreads. >>>> That then of course is much the traditional UNIX account of where
"everything is a file", though, the operating system itself doesn't
need to be implemented that way, just surface the usual objects as
they are as primitives, and mostly as having file handles.
(If the sources compile and run the same behavior some won't know/care.) >>>>
Then, objects, according to "naming and directory interface" usually
enough, Orange Book for example defines granular access controls,
so, including all things like files.
About quota and limits and the like, and about the perceived value
of pre-emptive scheduling to avoid "hogging", or thrashing, here
is an account of basically unmaskable uncatchable interrupts that
have as a signal handler the operating system code on the core
that results making the task yield, then necessarily enough using
the usually context-switch machinery to pause it and restart it.
Most code eventually touches system calls, and if there's a spare
core it might actually be the idea to let the compute-intensive
routine employ the entire core.
The many-core architectures of these days: even fifteen or twenty
years ago there was "AMD Bulldozer and 8 cores" and the like, and these
days usual PC or server chips have scores of cores, ..., often
for example with the idea of running a giant hypervisor then
as many virts, ..., like a Kubernetes cluster for example, ...,
or simply a ton of virts, ...; these days a single board is
a model of a distributed system internal to itself.
The idea for allocation and sharing that "fairness is a matter
of mechanism, not policy", is for the usual ideas of "throughput"
and "transput" as Finkel put it in "An Operating Systems Vade Mecum",
about I/O and queues, and limits.
"Interrupts" are the events, "coherent cache lines" the units of
serialization of memory, DMA is the bulk transfer medium in protocol,
a byte is the smallest addressable memory unit, these are mostly
the ordering guarantees, all else "undefined".
Proximity, affinity, coherency, mobility, ....
We build electronics. Our new PoE instrument line uses an RP2040
dual-ARM chip overclocked to 150 MHz. It costs 75 cents in any
quantity.
We call the two CPUs (and the two ends of the box) Alice and Bob.
Alice does the ethernet and usb i/o, command parsing, calibrations,
all that slow floating-point management. Bob does the realtime i/o,
fixed-point, directly or through an FPGA.
All programmed in bare-metal c with no OS. Abstraction=0.
Seems to work.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
That seems cool. So, you wrote your own USB and packet stack?
Or, it's a system-on-chip?
I drive my tractor with my hands on the wheels and the sticks
and the levers and the other levers and the feet on the pedals
and the other pedals and my rear in the seat, ..., abstraction = 0.
We use the WizNet ethernet chip and the code that they supply. It's
more than a mac/phy: it handles packets and protocols, including UDP.
The RP2040 has a built-in USB interface. The electrical interface to
the USBc connector is two resistors. It looks like a COM port to the
users. What's really slick is that the USB can also run in a mode
where it looks like a memory stick. To reload the system code, we
boot into memory stick mode and the user then drag-drops a single file
to update the box: Alice code, Bob code, and the FPGA config.
Tractors are cool. Physical and basic.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
On 03/30/2026 07:44 AM, john larkin wrote:
On Sun, 29 Mar 2026 20:54:28 -0700, Ross Finlayson
<ross.a.finlayson@gmail.com> wrote:
On 03/29/2026 03:24 PM, john larkin wrote:
On Sun, 29 Mar 2026 14:29:32 -0700, Ross Finlayson
<ross.a.finlayson@gmail.com> wrote:
On 03/29/2026 12:39 PM, Don Y wrote:
On 3/29/2026 5:53 AM, Ross Finlayson wrote:
That seems to speak to "proximity" and "affinity", with regards
to "coherency" and "mobility". To "move" state, about state & scope
or the contents of (some of) memory and registers, of a process or task,
here is described as "re-seating", which is also the usual enough
idea in programming like C and C++.
My goal is NOT to disrupt the "programmer's model" for "average
developers". They should truly be able to think that they are running
on a uniprocessor, but without any guarantees as to throughput
(to accommodate sharing the physical processor, communication
overhead, etc.).
They shouldn't need to know where "they" are executing, or that the
resources on which they rely may not be local.
"Advanced developers" attend to the dynamic reconfiguration of
the system (at runtime). So, THOSE developers build applications
that are given the ability ("capability") to relocate resources,
kill off tasks, spawn new ones, etc. Because they, presumably, have
a more detailed understanding of the "System" beyond the scope of
some particular application/task within it.
Perhaps the most usual example is pre-emptive multithreading itself,
about basically the state as a stack, and "process control block"
and "thread control block" usually enough, about context-switching.
I support a heterogeneous environment; an object can be migrated
to a different processor *family* at any time (while executing).
You shouldn't care as long as the interface (methods) to the
object remain available (for the capabilities you have been
granted) along with the current state of the object.
Similarly, the algorithms used to implement those methods (as
well as internal data members) can change dynamically -- as long
as the interface functionality remains immutable.
For example, I create namespaces -- (name, capability) dictionaries -- for
each process. If you only need a few registered names (stdin, stdout,
stderr), the code that implements that is entirely different than the
implementation that tries to manage hundreds of named entries.
As it should be. (Because "you" are "billed" for the resources that you
use, you'd not want to bear the cost of an implementation that did more
than you needed -- just like you wouldn't develop a standalone device
with more complexity/cost than necessary!)
If names are insignificant (e.g., akin to file descriptors), then an
implementation might use a single "char" to represent a name, giving
you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc.
Or, treat that char as an 8-bit int -- 0x01, 0x02, 0x61, 0x62, ...
(do you really *need* names like "Object 1", "Object 2", etc.?)
In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
the idea of the "re-routine" as a model of asynchronous concurrency
is a little different than the usual idea of a co-routine, which
is usually enough a fork in the process model, then about signals
as IPC with PID and PPID, vis-a-vis fibers and threads or events
and task queues. Basically the re-routine never "blocks" and has
no "yield" nor "async" keywords in the source text; instead any
call to a re-routine implicitly yields, and then the re-routine
is run again later, the re-run, as the re-routine is filled in.
Then the re-routine adds a penalty of basically n^2 in time
to be completely non-blocking, where asynchrony is modeled in
the language as the normal procedural flow-of-control.
"Yield" is just a hint to the scheduler. If you have a preemptive
implementation where "time" can be a preemption criterion, then a
task need never "suggest" a good place to relinquish the processor.
OTOH, if you can only preempt when the task invokes an OS primitive,
then you have to be wary of a developer spinning without ever giving
the system a chance to "interrupt".
Then, as that's only in the kernel itself, that n^2 might seem a
huge penalty, yet it's actually quite under that, since as the
re-routine its data (in a stack) is filled in, then most of its
routine is cache hits, the "memoized" calls to the re-routine.
About the allocator, then this design concept basically is for
making use of virtual memory, to be able to "re-seat" the memory
of a process without changing a process' view of the memory.
Of course. But, with VMM, you can do so much more:
- CoW semantics
- DSM
- remapping "defective" memory
- releasing memory that will NEVER be revisited
- universal call by value (for large arguments)
etc. None of these things need impact the developer.
This can help avoid both syscalls and memory fragmentation,
since memory paging basically is performed by the user-space
process in its time instead of by the kernel. This has the
usual guarantees of process memory that it's to be visible only
to the process itself unless explicitly shared, that then being
treated as a usual sort of shared resource in the distributed sense.
The syscalls by a process essentially yield (the process yields
to the scheduler), about ideas like round-robin and fairness
and anti-starvation and incremental-progress in the scheduler,
while it's so that until a process gets any other signal and
only touches its own memory that's non-yielding, then about
the machinery of pre-emptive multithreading or context-switch
and as with regards to hyper-threading or the interleaved contexts
on the double-pipeline CPUs, the idea being that context-switching
is along the lines of basically a periodic signal interrupt.
But you have to guard against "non-cooperative" (and even HOSTILE!)
actors who may wish to compromise performance by monopolizing resources
(of which the CPU is but one).
Using resource ledgers lets you constrain an application (process)
to a subset of the available resources. Putting runtime constraints
on memory, time, etc. lets you remove a "bad actor" from the set
of processes eligible to run. A persistent store lets you remember
this decision so you don't "readmit" the process at some future date!
Notions of "Orange Book" and "mandatory access control" then
are considered "more than good ideas" with regards to the allocator
and scheduler of resources in computation.
Capabilities implicitly limit the actions that can be performed (i.e.,
methods that can be invoked) on an object. E.g., I can let you
LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
once, and never again!
How you refine your "permissions" is something that belongs in the
objects themselves -- not layered onto a "filesystem" as an afterthought.
Hey, thanks for writing. Since all we know about each other
are these brief exchanges, filling in some detail helps a lot
to understand, or rather, get an idea, of an estimate of the
depth of the comprehension of the whole machine stack.
The idea of a kernel or operating system (executive, scheduler,
..., interactive "operating system") for "commodity" architectures
these days is that the various chips' architectures are pretty
ubiquitous; PCIe is on everything PC, then usually enough USB,
and the NIC and USB root are pretty much ubiquitously PCIe devices,
or as after UEFI and ACPI and the SMI, or as about DeviceTree, ...,
it's ubiquitous: after "economy of scale", a simple enough
"economy of ubiquity".
Trying to figure out "commodity" computing above "embedded"
computing, and to be able to explain it and thusly give an
outline, an abstraction itself, of the connections and the
circuits, has that these days, at least for "commodity"
general-purpose computing, there's a great "economy of ubiquity":
there's a model of the bus as almost always PCIe; then clock
signals and clock drivers and clock interrupts; power management
and power states; then variously the ideas (here they're mostly
ideas first, as I'm not that great of a computer engineer) about
differential-pair lines and the other serial protocols usually
enough, then about SATA mostly; so it's then mostly about PCIe
and DMA and then a miniature fleet of cores, these being themselves
often single- or "hyper"-threaded. The platform is not moving that
fast, which basically makes for it a model of computation as it
embodies itself.
Here's a bit of a podcast; 44:35 - 49:55 or so talks about
these things, and there are others. "Reading Foundations: denser tensors".
This discussion drifted into operating systems, or schedulers
as they may be, or executives plainly. In the context of the
software more generally, there's much to be made of "logic
extraction", since pretty much all sorts of source code
live in a world of types and among models of
computation; so, according to the shape in the logic,
there's much to be made of flexible or "polyglot" parsers,
basically into representations of state and scope, and for
example into making natural diagrams of the flow-graph.
This is just "modern tooling to make sense of complexity",
instead of "ignore the man behind the curtain in the
booming voice of Oz".
ECMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional weather forecasting operations as usual without it.
This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.
On Mon, 30 Mar 2026 23:30:01 +0000, someone
<cffbf4deb9142bce48974efc0e64dede@example.com> wrote:
ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional weather forecasting operations as usual without it.
This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.
Dang, they promised us rain this week. Didn't get any.
On Tue, 31 Mar 2026 11:01:41 -0700, john larkin <jl@glen--canyon.com>
wrote:
Dang, they promised us rain this week. Didn't get any.
The line of clouds crosses the California coast about 50 miles south
of San Francisco. The SF Bay area shows mostly clear skies, while to
the south, the missing rain clouds are moving inland:
<https://www.windy.com/-Satellite-satellite?satellite,36.831,-117.809,6>
<https://www.windy.com/-Menu/menu?rain,36.831,-117.809,6>
In Ben Lomond, I'm under the clouds and seeing only a little rain,
which barely registers on my rain gauge. The forecast is more of the
same through Weds evening. Sorry.
On Tue, 31 Mar 2026 13:54:57 -0700, Jeff Liebermann <jeffl@cruzio.com>
wrote:
The line of clouds crosses the California coast about 50 miles south
of San Francisco. The SF Bay area shows mostly clear skies, while to
the south, the missing rain clouds are moving inland:
<https://www.windy.com/-Satellite-satellite?satellite,36.831,-117.809,6>
<https://www.windy.com/-Menu/menu?rain,36.831,-117.809,6>
In Ben Lomond, I'm under the clouds and seeing only a little rain,
which barely registers on my rain gauge. The forecast is more of the
same through Weds evening. Sorry.
They just need bigger computers.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
On Tue, 31 Mar 2026 15:06:33 -0700, john larkin <jl@glen--canyon.com>
wrote:
They just need bigger computers.
They also want all the CPU's, memory, video cards, cooling water,
electrical power, government support and investors on the planet. If
they can't get these, they threaten to put the data centers in orbit.
All this to obtain better weather forecasts. Meanwhile, I can do as
well with my Ouija board and weather rock:
<https://www.google.com/search?udm=2&q=ouija%20board>
<https://www.google.com/search?q=weather%20rock&udm=2>
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
Jeff Liebermann <jeffl@cruzio.com> wrote:
On Mon, 30 Mar 2026 23:30:01 +0000, someone
<cffbf4deb9142bce48974efc0e64dede@example.com> wrote:
ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional
weather forecasting operations as usual without it.
Baloney. ECMWF AI data has been available since July 2025 in the form
of AIFS (Artificial Intelligence Forecasting System).
<https://www.ecmwf.int/en/forecasts/dataset/aifs-machine-learning-data>
<https://www.ecmwf.int/en/forecasts/datasets/open-data>
At this time, the crown jewels of AI startups are the details on how
they generate their predictions. In other words, their source code
and system architecture. They're not about to give that away for free
and certainly not without an NDA. There are probably some open source
AI initiatives which might include AI weather prediction models. Try
your luck:
<https://en.wikipedia.org/wiki/Open-source_artificial_intelligence>
This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.
Do you really need to know how an internal combustion engine works in
order to operate the vehicle?
Jeff Liebermann <jeffl@cruzio.com> wrote:
windy.com is very good here.
And local weather radar:
https://www.knmi.nl/nederland-nu/weer/actueel-weer/neerslagradar
Jeff Liebermann <jeffl@cruzio.com> wrote:
Do you really need to know how an internal combustion engine works in
order to operate the vehicle?
It may help a lot!
Same for motorbikes etc..
On Wed, 01 Apr 2026 06:45:52 GMT, Jan Panteltje <alien@comet.invalid>
wrote:
Jeff Liebermann <jeffl@cruzio.com> wrote:
It may help a lot!
Same for motorbikes etc..
Oddly, driver training schools don't include much on how internal
combustion engines function. Most of the training is on how to
operate the vehicle with maybe a few maintenance hints (put air in the
tires, check the oil, keep the windows clean, etc). Some of my
friends struggle with opening the door when they forget their wireless
key fob at home. Judging by appearances, driving to the supermarket
and back, without hitting anything, is a major accomplishment that can
be achieved without knowing how the engine works.
It's the same with AI weather forecasting. Members of the GUM (great unwashed masses) do not need to know how an AI is used to produce a
weather forecast. To them, the process might involve a weather rock:
<https://www.google.com/search?udm=2&q=weather%20rock>
or Ouija board:
<https://www.google.com/search?q=ouija%20board&udm=2>
and they would accept the results. Hopefully, they would also know
what to do about the results and be able to decode the forecast terms.
How many of these do you know?
<https://www.weather.gov/bgm/forecast_terms>
Knowing how sausage and weather forecasts are made does not make
either more digestible.
Incidentally, while attending college in the 1960's, I worked part
time as an auto mechanic (floor sweeper) at a Ford dealer. I met
quite a few drivers. I noticed that the stunt and race car drivers
were terrible at maintaining their vehicles, while the mechanically
inclined did well on maintenance, but were not very good drivers. I
guess that also applies to electronic design. There are those that
can design, but can't operate their designs and those who can do
amazing things with the final product, but couldn't design anything
that actually worked or could be manufactured. Similarly, computer programmers should not attempt to operate a screwdriver.
On Wed, 01 Apr 2026 06:41:06 GMT, Jan Panteltje <alien@comet.invalid>
wrote:
Jeff Liebermann <jeffl@cruzio.com>wrote:
On Tue, 31 Mar 2026 15:06:33 -0700, john larkin <jl@glen--canyon.com> >>>wrote:
>>>>On Tue, 31 Mar 2026 13:54:57 -0700, Jeff Liebermann <jeffl@cruzio.com>
>>>>wrote:
>>>>>On Tue, 31 Mar 2026 11:01:41 -0700, john larkin <jl@glen--canyon.com>
>>>>>wrote:
>>>>>>On Mon, 30 Mar 2026 23:30:01 +0000, someone
>>>>>><cffbf4deb9142bce48974efc0e64dede@example.com> wrote:
ECMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their
conventional weather forecasting operations as usual without it.
This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.
Dang, they promised us rain this week. Didn't get any.
>>>>>The line of clouds crosses the California coast about 50 miles south
>>>>>of San Francisco. The SF Bay area shows mostly clear skies, while to
>>>>>the south, the missing rain clouds are moving inland:
>>>>><https://www.windy.com/-Satellite-satellite?satellite,36.831,-117.809,6>
>>>>><https://www.windy.com/-Menu/menu?rain,36.831,-117.809,6>
>>>>>In Ben Lomond, I'm under the clouds and seeing only a little rain,
>>>>>which barely registers on my rain gauge. The forecast is more of the
>>>>>same through Weds evening. Sorry.
They just need bigger computers.
>>>They also want all the CPUs, memory, video cards, cooling water,
>>>electrical power, government support and investors on the planet. If
>>>they can't get these, they threaten to put the data centers in orbit.
>>>All this to obtain better weather forecasts. Meanwhile, I can do as
>>>well with my Ouija board and weather rock:
>>><https://www.google.com/search?udm=2&q=ouija%20board>
>>><https://www.google.com/search?q=weather%20rock&udm=2>
windy.com is very good here.
And local weather radar:
https://www.knmi.nl/nederland-nu/weer/actueel-weer/neerslagradar
Windy is very similar to https://www.ventusky.com. Ventusky has nice
visuals, but the forecasts are pretty bad.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics