• Re: software situation

    From Joerg@3:633/10 to All on Friday, March 27, 2026 11:46:05
    On 3/27/26 11:27 AM, Don Y wrote:
    On 3/27/2026 10:35 AM, Joerg wrote:
    Serious hint to anyone thinking about a career in software: Make sure
    to develop a solid understanding of at least digital hardware. Build,
    experiment with micro controller eval boards (they are cheap), learn
    how to use a logic analyzer and an oscilloscope. That will hugely
    increase your job security or your self-employed income prospects.

    The reverse is also true for folks looking for careers in hardware design.
    If you don't know how the software will interface with "your" hardware,
    the answer is likely: "poorly".

    And, to all, "coding" isn't software design in much the same way
    assembling a prototype isn't hardware design.


    ... and comment lines in the source code do _not_ constitute
    "documentation".

    --
    Regards, Joerg

    http://www.analogconsultants.com/

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Don Y@3:633/10 to All on Friday, March 27, 2026 12:04:34
    On 3/27/2026 11:24 AM, Nioclás Pól Caileán de Ghloucester wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    |------------------------------------------------------------------------------|
    |"The fallacy of RAID is that the larger the array, the more likely |
    |a *second* fault manifests while attempting to recover from the |
    |first." |
    |------------------------------------------------------------------------------|

    Hmm. How so? I had not heard this before. (I have never yet been
    responsible for a RAID, but Don Y incites me to be skeptical about
    RAID boasts should I ever become responsible for one.)

    It takes a long time to rebuild an array. And, while you are doing so,
    the array is operating in degraded mode -- there is no redundancy
    provided for the data present. The unrecoverable error rate, while
    considered minuscule per bit, becomes a near-certainty as the size of
    the array grows (most of my data farm is 64T boxes).
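    The "becomes a certainty" claim survives a back-of-the-envelope check. The
    URE rate below is an assumed consumer-drive spec-sheet figure (1 error per
    1e14 bits read), not a number from the post; the 64 TB matches the boxes
    mentioned above:

```python
import math

# Rough odds of hitting at least one unrecoverable read error (URE)
# while reading an entire 64 TB array during a rebuild.
URE_RATE = 1e-14          # errors per bit read (assumed spec value)
ARRAY_BYTES = 64e12       # 64 TB
bits = ARRAY_BYTES * 8

# Model each bit as an independent Bernoulli trial:
p_at_least_one_ure = 1 - math.exp(-URE_RATE * bits)
print(f"{p_at_least_one_ure:.3f}")   # ~0.994 -- failure is near-certain
```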

    To rebuild the array, you have to read EVERY bit, compute the parity
    and rewrite, as needed. All the while, hoping nothing hiccups.
    (and, this assumes you've done this recently enough that you are
    aware of the gotchas that you may encounter, as you can't hit "PAUSE"
    and come back to it when you've had time for research).

    <https://www.datacarelabs.com/blog/raid-5-rebuild-permanent-data-loss/>

    To *detect* corruption, you have to do this periodically (patrol reads)
    which competes with your use of the medium.

    I've approached this differently. I store plain copies of everything
    along with their hashes (in a separate database). When I spin up a
    box, I have a daemon that runs on said box which queries the DBMS
    to determine what should be checked "now" (how far has the patrol
    read on THIS collection of drives progressed?). It computes the
    hashes of the files in question and compares to the hashes stored
    in the DBMS, alerting me to any files whose hashes have been corrupted.

    It then reports the locations of alternate copies of those offenders
    (which might be on "cold" spindles). E.g., a file may exist under
    different names in different places on the same spindle, different
    spindle or in a different box.
    C:\2025\taxes\federal
    D:\tax_returns\federal\2025
    C:\mystuff\business_records.rar
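    The daemon described above can be sketched in a few lines. The table name,
    schema, and reporting format below are invented for illustration; the shape
    is the point: hash files, compare against separately stored hashes, report
    mismatches plus known alternate copies:

```python
import hashlib
import sqlite3

def sha256_of(path, bufsize=1 << 20):
    # Incrementally hash a file so large files don't need to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def patrol(db, limit=100):
    # Fetch the files whose turn it is to be checked on THIS box
    # (hypothetical schema: files(path, hash, last_checked)).
    rows = db.execute(
        "SELECT path, hash FROM files ORDER BY last_checked LIMIT ?",
        (limit,)).fetchall()
    for path, stored in rows:
        if sha256_of(path) != stored:
            # Report alternate copies (possibly on cold spindles).
            alts = db.execute(
                "SELECT path FROM files WHERE hash = ? AND path != ?",
                (stored, path)).fetchall()
            print(f"CORRUPT: {path}; copies: {[a[0] for a in alts]}")
        db.execute("UPDATE files SET last_checked = datetime('now') "
                   "WHERE path = ?", (path,))
    db.commit()
```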

    It requires automation for mainstream use but provides far more
    practical redundancy for a homelab where the goal is to just KEEP
    stuff and accesses are infrequent (e.g., if a file has been corrupted,
    is it because the volume should be replaced? if so, then you'll want
    to be able to find copies of everything ON that volume, along with
    a description of the filesystem structure -- which may no longer be
    visible!)

    |------------------------------------------------------------------------------|
    |"It's like having gold speaker wires -- something to brag |
    |about that has no real value in most cases." |
    |------------------------------------------------------------------------------|

    But redundancies are really valuable in most cases. I make backups. I
    might not use a RAID, but I am not going to give up on backups!

    Of course! See above. I no longer use "backup software" because they
    all want to wrap the backed up data in some proprietary container.
    Instead, just "throw" a copy of <whatever> off to <somewhere> and let
    something track where it is and what it duplicates.

    |------------------------------------------------------------------------------|
    |"This is the same mentality behind folks adopting RAID or ZFS needlessly. |
    |They don't think their decision through and, instead, convince themselves |
    |that they have taken concrete measures to improve the reliability |
    |of their data! (how often should you do a patrol read? how do you |
    |respond to errors detected/corrected? what criteria do you use to |
    |retire media? do you support a hot spare? how many??)" |
    |------------------------------------------------------------------------------|

    I tried ZFS (via an open-Solaris distribution); UFS (via
    e.g. FreeBSD); ext (via Linux); Minix; and even FAT (via FreeDOS,
    avoiding installing a never-installed copy of Windows XP still in
    its shrinkwrapping) in 2008. Only ZFS (Solaris) thereof became corrupted:
    and it became corrupted (and unbootable) within only a few days of
    testing!

    ZFS requires a fair bit of resources to work properly.

    One thing folks tend to forget is the *hardware* can fail.
    Now, how do you "recover" your data?

    With simple volumes, I can move a drive from one machine
    to another, as may be appropriate, and still have access
    to the drive's contents. No special routines to remember
    to bring it online for access/recovery -- it's just a disk.


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Don Y@3:633/10 to All on Friday, March 27, 2026 14:54:53
    On 3/27/2026 11:46 AM, Joerg wrote:
    On 3/27/26 11:27 AM, Don Y wrote:
    On 3/27/2026 10:35 AM, Joerg wrote:
    Serious hint to anyone thinking about a career in software: Make sure to
    develop a solid understanding of at least digital hardware. Build,
    experiment with micro controller eval boards (they are cheap), learn how to
    use a logic analyzer and an oscilloscope. That will hugely increase your job
    security or your self-employed income prospects.

    The reverse is also true for folks looking for careers in hardware design.
    If you don't know how the software will interface with "your" hardware,
    the answer is likely: "poorly".

    And, to all, "coding" isn't software design in much the same way
    assembling a prototype isn't hardware design.

    ... and comment lines in the source code do _not_ constitute "documentation".

    Eschew comments as they are yet another thing that can get out-of-sync
    with the code. People are lazy; given the choice of reading the
    comments and the code, the comments are likely easier and exist at
    a higher level of abstraction (with which one would have to infuse
    the code). But, the *code* is what the compiler reads!

    Comments should describe the *design* and the skillset of the
    developer should be able to suss-out the implementation and
    its compliance with said design.


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jeff Liebermann@3:633/10 to All on Friday, March 27, 2026 15:06:12
    On Fri, 27 Mar 2026 17:57:57 -0000 (UTC), Nioclás Pól Caileán de
    Ghloucester <thanks-to@Taf.com> wrote:

    Jeff Liebermann <jeffl@cruzio.com> wrote:
    |-----------------------------------------------------------------------|
    |"Never mind that the experts and the consensus are often wrong: |
    |<https://www.learnbydestroying.com/jeffl/crud/Premature-Judgement.txt>"|
    |-----------------------------------------------------------------------|

    Thanks for that stimulating text file but its end has an unproven
    consensus about Gates:
    ""640K ought to be enough for anybody."
    -- Bill Gates, 1981".
    I have never seen this purported Gates quotation in its original form
    (and I have seen a misquotation alleging that he said 16K!). Years ago
    I read an insightful counterargument that Gates never
    actually said "640K ought to be enough for anybody.", i.e. that if so
    many persons claim that he said so, then someone should be able to
    show the original publication by Gates instead of hearsay. Did Gates
    really ever say "640K ought to be enough for anybody."?
    (See HTTP://Gloucester.Insomnia247.NL/ for contact details!)

    No. He probably didn't say or write that.

    I bought one of the original IBM PC 5150 computers and spent a few
    years dealing with the limitations of the architecture, associated
    memory map and subsequent band aids such as Lotus-Intel-Microsoft
    expanded memory and expanded memory:

    Original IBM PC memory map: <https://static.cambridge.org/content/id/urn%3Acambridge.org%3Aid%3Abook%3A9780511622885/resource/name/firstPage-9780511622885apx3_p137-139_CBO.jpg>

    Later additions (extended memory, expanded paged memory, and expanded
    video memory):
    <https://www.filfre.net/wp-content/uploads/2017/04/expanded.jpg>

    Without the various expanded and extended band-aids, the original
    IBM-PC was limited to 640K for user program memory. The remaining
    384K was reserved for various devices that required memory mapping.
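    The 640K/384K split falls out of the 8088's 20-bit real-mode addressing
    (physical = segment*16 + offset, a 1 MB space with the top 384K reserved
    for memory-mapped devices). A quick sanity check:

```python
def physical(segment, offset):
    # 8088 real mode: 16-bit segment shifted left 4 bits plus 16-bit offset.
    return (segment << 4) + offset

top_of_conventional = physical(0x9FFF, 0xF) + 1   # one past 9FFF:000F
print(hex(top_of_conventional))                   # 0xa0000 = 640 KB mark
print(top_of_conventional // 1024)                # 640 (user program memory)
print((1 << 20) // 1024)                          # 1024 KB total address space
```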

    IBM designed the original PC hardware and memory map. Bill Gates and
    Microsoft were originally hired to port the Microsoft Basic
    interpreter to the new hardware after IBM discovered that they didn't
    own any software or programming languages for their new computer. If
    Bill Gates had said something like the quote, he would have said
    1MByte should be enough and NOT 640KBytes.

    Bill Gates was famous at Microsoft for program reviews which demanded
    smaller and more efficient code. He knew that program memory and
    computation space were limited and did his best to work within the
    hardware limitations. For example: <https://blog.codinghorror.com/bill-gates-and-donkey-bas/>
    "Gates, Allen and Davidoff threw every trick at the book to squeeze
    the interpreter into 4 kilobytes. They succeeded and left some
    headroom for the programs themselves - without which it would have
    been pretty useless, of course."

    There have been numerous discussions on this topic since 1981 (the
    year the IBM PC was introduced). You should be able to find something
    relevant with any search engine. For example, I found this article to
    be rather interesting: <https://forum.vcfed.org/index.php?threads/bill-gates-really-did-claim-that-the-640k-barrier-was-due-to-his-decisions-is-there-any-other-actual-evidence-that-this-was-not-the-case.1252671/>



    --
    Jeff Liebermann jeffl@cruzio.com
    PO Box 272 http://www.LearnByDestroying.com
    Ben Lomond CA 95005-0272 AE6KS 831-336-2558


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Don Y@3:633/10 to All on Friday, March 27, 2026 15:48:09
    On 3/27/2026 3:06 PM, Jeff Liebermann wrote:
    Bill Gates was famous at Microsoft for program reviews which demanded
    smaller and more efficient code. He knew that program memory and
    computation space were limited and did his best to work within the
    hardware limitations. For example: <https://blog.codinghorror.com/bill-gates-and-donkey-bas/>
    "Gates, Allen and Davidoff threw every trick at the book to squeeze
    the interpreter into 4 kilobytes. They succeeded and left some
    headroom for the programs themselves - without which it would have
    been pretty useless, of course."

    Recall that CP/M machines were common when the PC was first introduced.
    And they had memory constrained to 64KB -- an order of magnitude less.

    With comparable performance (given the PC's 8088 was running at ~4.77MHz!).

    The Reading Machine (predated the PC by several years) had 16K words of
    memory and no possibility of secondary storage (disk) -- beyond what was
    used for IPL (in case the core got corrupted). The bootstrap was sixteen
    (16) words in a tiny bipolar ROM.

    <https://alphamoon.ai/wp-content/uploads/2022/07/Kurzweil-reading-machine-1024x535.png>

    Arcade pieces (1981) were in the 16-24KB range in terms of complexity and
    had to generate video and audio in near-real-time. (something that would
    have been laughable with a PC).

    The fact that you don't NEED lots of resources to achieve meaningful
    work is a red herring; you add resources to make implementations more
    robust or maintainable. Tweaking the boot loader on the Reading Machine
    required a disproportionate amount of effort as you couldn't add a *17th*
    word! It's a lot easier -- and more reliable -- to be able to load
    a second image of a bootstrap and *verify* that before relying on it
    (as anyone who had to flash from RAM learned when stuck with a
    "bad reflash" bricking their device!)

    It's amusing how much has been "wasted" in efforts to save a few
    pennies in most products!


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Don Y@3:633/10 to All on Friday, March 27, 2026 15:58:50
    On 3/27/2026 3:35 PM, Nioclás Pól Caileán de Ghloucester wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    |-----------------------------------------------------------------------|
    |"> I tried ZFS (via an open-Solaris distribution); UFS (via |
    |e.g. FreeBSD); ext (via Linux); Minix; and even FAT (via FreeDOS, |
    |avoiding installing a never installed copy of Windows XP still in its |
    |shrinkwrapping) in 2008. Only ZFS (Solaris) thereof became corrupted: |
    |and it became corrupted (and unbootable) within only a few days of |
    |testing! |
    | |
    |ZFS requires a fair bit of resources to work properly. |
    | |
    |One thing folks tend to forget is the *hardware* can fail." |
    |-----------------------------------------------------------------------|

    That hardware never failed. In particular, it was a new computer,
    switched on for the first time in Summer 2008. Its ZFS installation
    failed before 2009.

    The point is, you need the electronics and software to remain operational
    in order to be able to recover the data on the disk. In practical terms,
    it means you need a redundant machine to have any hope of accessing or
    recovering data if the first machine dies (or, if the HBA on the first
    machine develops problems).

    My SAN resides on a bunch of similar (but not strictly identical)
    servers, chosen because they each support multiple spindles. Sadly,
    the HBAs in them differ in their native capabilities. Some can only
    create RAID volumes; JBOD isn't an option.

    Creating a RAID volume (even RAID0) on device X gives you no guarantees that
    it can be accessed on device Y -- even if the two devices are "the same".

    OTOH, if the HBA can be configured to operate as JBOD, then you can
    freely swap physical media between machines (or, between "ports" on
    a single machine).

    I've been systematically converting every box to this sort of disk
    interface to make "moving data" nothing more complex than ejecting
    a volume and plugging it in, elsewhere.

    The COTS NAS boxes are lined up for obsolescence each time I rescue
    another multi-spindle box that can host its volumes. (cuz I can't
    dick with closed software and the constraints it puts on the
    volumes it supports!)

    Of course, I have this luxury because I'm not pushing for performance
    and don't have to support "other users". OTOH, it's nice not to have to
    waste shelf space on text books or file cabinets full of records...

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jeff Liebermann@3:633/10 to All on Friday, March 27, 2026 16:04:37
    On Fri, 27 Mar 2026 07:41:26 -0700, Jeff Liebermann <jeffl@cruzio.com>
    wrote:

    ECMWF forecasts have been available online for about 4 years.
    <https://www.ecmwf.int/en/forecasts>

    Oops. The ECMWF was founded in 1975. I believe that forecasts were
    widely distributed starting in about 1992.


    Key Historical Milestones

    1971-1975: Conceptualization and signing of the ECMWF Convention to
    pool European scientific resources.

    Nov 1, 1975: The Convention formally comes into force, establishing
    the Centre.

    1979: First real-time operational medium-range forecasts produced.

    1992: Launch of operational ensemble forecasting (EPS) to predict
    weather uncertainty.

    2005-2010: Convention amended to allow new Member States to join.

    2020s: Post-Brexit, operations shifted to include Bonn, Germany, and
    the headquarters initiated plans to move to the University of
    Reading's Whiteknights Park campus.

    2025: Celebrated 50 years of operation and advancements in AI-based forecasting.


    --
    Jeff Liebermann jeffl@cruzio.com
    PO Box 272 http://www.LearnByDestroying.com
    Ben Lomond CA 95005-0272 AE6KS 831-336-2558


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Ross Finlayson@3:633/10 to All on Saturday, March 28, 2026 06:41:13
    On 03/27/2026 11:34 AM, Don Y wrote:
    On 3/27/2026 10:32 AM, Ross Finlayson wrote:
    Yeah my O.S. design is basically to take advantage of the fact
    that the modern commodity architectures have left behind lots
    of assumptions of the single-core and about interrupts mostly
    then about the ubiquity of PCIe bus and the necessity of the
    efficient employment of DMA, then that many-core basically
    means that modern commodity general-purpose boards need be
    treated as models of self-contained distributed systems
    themselves, so, fundamentally "asynchronous", as this simplifies
    a lot of things, for models of co-operative multi-tasking,
    while acknowledging that user programs are nominally adversarial,
    and the network is nominally un-trusted.

    Divide-and-conquer, information hiding, one-page "programs"
    all suggest an OS should cater to small, "decomposed" problems
    executing in *true* parallelism (the multitasking illusion
    doesn't work in the era of multiple cores/hardware threads,
    distributed systems, etc.)

    To these criteria, I've added "accountability" as you want to
    be able to wrap a virtual "box" around any set of actors
    and pretend THAT is a product with real world constraints.
    E.g., how do you ensure a task doesn't disproportionately (ab)use
    resources meant to be shared with other co-operating tasks?
    (And, what do you do if/when it does??)

    [My most recent OS is, itself, "decomposed" so that parts of it
    can be co-operating instead of having big locks on a monolithic
    kernel]


    Ah, here the idea of "co-operative scheduler" (vis-a-vis
    "pre-emptive scheduler") has that there's a notion of the
    model of an o.s. (scheduler, allocator) of co-operation
    vis-a-vis "the re-routine", which is a sort of idea like
    "co-routine", where basically everything is non-blocking
    by design and convention, and instead of a co-routine stack
    is a sort of memo-ized monad, then about matters of the
    scheduling like "I cut you pick", "straw-pulling", and
    "hot potato", with anti-gaming built in to the algorithm,
    device drivers are provided as "generic universal drivers",
    then that user-space gets a usual "quotas/limits" and
    while a contrived user-space program may actually run
    a hot inner loop, otherwise the deadlock/starvation and
    other issues in concurrency are to be figured out,
    for the allocator/scheduler.





    I.e., the usual idea of the "co-operative" lives inside
    the kernel, user-space is nominally adversarial and
    the network is nominally un-trusted. System calls it's
    figured are implemented as of a "co-operative" implementation.


    It's mostly as of a "design" while though I put it through
    the wringer as it were of some "large, competent, conscientious,
    co-operative reasoners" or a "bot panel", I can post a link
    or reference or all the text of them.



    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Don Y@3:633/10 to All on Saturday, March 28, 2026 13:56:53
    On 3/28/2026 6:41 AM, Ross Finlayson wrote:
    Ah, here the idea of "co-operative scheduler" (vis-a-vis
    "pre-emptive scheduler") has that there's a notion of the
    model of an o.s. (scheduler, allocator) of co-operation
    vis-a-vis "the re-routine", which is a sort of idea like
    "co-routine", where basically everything is non-blocking
    by design and convention, and instead of a co-routine stack
    is a sort of memo-ized monad, then about matters of the
    scheduling like "I cut you pick", "straw-pulling", and
    "hot potato", with anti-gaming built in to the algorithm,
    device drivers are provided as "generic universal drivers",
    then that user-space gets a usual "quotas/limits" and
    while a contrived user-space program may actually run
    a hot inner loop, otherwise the deadlock/starvation and
    other issues in concurrency are to be figured out,
    for the allocator/scheduler.

    I assume tasks (processes) run without interruption until they
    need an unavailable resource, at which point, they block.

    But, *other* tasks are also doing so, concurrently.

    As such, EVERY time a resource is released, the scheduler
    (theoretically) runs. So, a task that causes a resource to
    be made available for another blocking task can be immediately
    preempted by that blocked task now being "ready" to run.

    The distinction is important because tasks can reside in
    different cores as well as on different nodes. So, even if a
    task spins in a tight loop, not altering the availability of
    any resources, it can still be preempted by the actions of some
    other executing task.

    [Of course, the round-robin scheduler ensures an equal priority
    task is not indefinitely blocked, even if the deadline scheduler
    sees no need to reschedule()]
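    The release-triggers-reschedule rule above can be shown with a deterministic
    toy (not Don's actual scheduler; task names, priorities, and the resource
    are invented):

```python
import heapq

class Scheduler:
    def __init__(self):
        self.ready = []       # (priority, name) min-heap: 0 = highest
        self.blocked = {}     # resource -> list of waiting tasks
        self.running = None

    def block_on(self, resource, task):
        self.blocked.setdefault(resource, []).append(task)

    def release(self, resource):
        # EVERY release re-runs the scheduler: wake the waiters...
        for task in self.blocked.pop(resource, []):
            heapq.heappush(self.ready, task)
        # ...and preempt the releaser if a higher-priority task woke.
        if self.ready and self.running and self.ready[0] < self.running:
            heapq.heappush(self.ready, self.running)
            self.running = heapq.heappop(self.ready)

s = Scheduler()
s.running = (5, "spinner")          # low-priority task in a tight loop
s.block_on("page", (1, "reader"))   # high-priority task awaiting a resource
s.release("page")                   # release -> reader preempts spinner
print(s.running)                    # (1, 'reader')
```

    The spinner never yielded voluntarily; some *other* task's release of
    "page" is what preempted it, which is the distinction made above.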

    Treating the design of the OS in a similar fashion, I can transfer
    "ownership" of specific objects to whichever tasks (servers) I
    consider appropriate. Dynamically.

    E.g., when physical memory is free'd, I give it to a task that
    scrubs it (so the next user of said memory never sees any "data"
    that may have occupied those memory locations by a previous
    "user") and verifies its functionality. When some task NEEDS
    additional memory, it blocks waiting on the availability of
    such memory -- which causes this "scrubber" to make available
    pages that it deems as "clean and functional".
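    The scrubber hand-off can be sketched with two queues; the page
    representation and "scrub" are stand-ins for whatever the real kernel does
    (zero, pattern-test, etc.):

```python
import queue
import threading

freed = queue.Queue()    # pages surrendered by their previous "user"
clean = queue.Queue()    # pages the scrubber deems clean and functional

def scrubber():
    while True:
        page = freed.get()
        if page is None:                  # shutdown sentinel
            break
        page[:] = bytearray(len(page))    # scrub: no stale data survives
        clean.put(page)                   # wakes any task blocked on alloc

def alloc_page():
    return clean.get()    # blocks until the scrubber supplies a clean page

t = threading.Thread(target=scrubber, daemon=True)
t.start()
freed.put(bytearray(b"old secrets"))  # some task frees a dirty page
page = alloc_page()                   # another task NEEDS memory; blocks
print(bytes(page))                    # all zeroes -- nothing leaks
freed.put(None)
```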

    Chopping responsibilities up like this makes it easier to "get it
    right" -- at the expense of some performance (each of these interactions
    have to cross protection domains so the interactions aren't as
    lightweight as a simple function call in a monolithic kernel).

    Processors are cheap. Memory is cheap. Developer time and latent
    bugs are costly (figure you have to spend a man-week looking into
    a suspected bug. If you're making 10,000 units, EACH such distraction
    can justify an additional $1 in hardware costs, without factoring in
    the externalities of cost to users, reputation, etc.)

    I.e., the usual idea of the "co-operative" lives inside
    the kernel, user-space is nominally adversarial and
    the network is nominally un-trusted. System calls it's
    figured are implemented as of a "co-operative" implementation.

    The network is not a named resource. It is used by the OS to
    exchange messages with other kernel instances running on other
    nodes. So, when a task does:

    object=>method(arguments)

    it doesn't need to know if the referenced object is local or remote.
    The kernels handle location independence.
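    A minimal illustration of that location transparency: callers invoke
    methods on a handle, and a dispatch layer decides whether the object is
    local or reached via another node. The registry, Remote class, and
    forward() call here are invented stand-ins for the kernels' encrypted
    node-to-node messaging:

```python
class Remote:
    """Pretend 'other node'; forward() stands in for kernel messaging."""
    def __init__(self, objects):
        self.objects = objects
    def forward(self, oid, method, *args):
        return getattr(self.objects[oid], method)(*args)

class Handle:
    def __init__(self, oid, local, remote):
        self.oid, self.local, self.remote = oid, local, remote
    def __getattr__(self, method):
        # Caller writes handle.method(args); dispatch is decided here.
        def call(*args):
            if self.oid in self.local:                   # local dispatch
                return getattr(self.local[self.oid], method)(*args)
            return self.remote.forward(self.oid, method, *args)
        return call

class Counter:
    def __init__(self): self.n = 0
    def bump(self): self.n += 1; return self.n

local = {"a": Counter()}
remote = Remote({"b": Counter()})
# The caller can't tell (and needn't care) which object is which:
print(Handle("a", local, remote).bump())   # 1
print(Handle("b", local, remote).bump())   # 1
```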

    Traffic is encrypted with different keys for each node. So, discovering a
    key (e.g., by attacking a specific node) only gives you access to the
    traffic for that node.

    This also allows for:

    object=>move(new_server)

    to force the object to be managed by another server (for that particular
    class of object) which will likely cause the object instance to
    "physically" move to whichever node on which that server is executing.

    So, I can move every object off of a particular node -- or, onto a specific node!

    [Of course, a task is also an object -- and servers are tasks -- so I can move entire tasks (processes) similarly.]

    In this way, I can bring hardware and software on/off-line on demand to
    adapt to changes in needs and available resources. E.g., if I'm running
    on battery (backup) power, I can shut down individual nodes to reduce
    power consumption after migrating their current responsibilities to
    other nodes or outright killing them off -- after checkpointing their
    progress. If I have some new *need*, I can bring a node on-line and push
    tasks (objects) onto it. So, if a particular object server becomes
    overloaded, I can spawn a new instance of it and migrate some of the
    objects that it is currently backing onto that new instance -- the tasks
    referencing those objects never know that the objects have "moved"!

    Objects are capability-based. So, you can only access objects for which
    you currently *have* a capability and only to the extent permitted by
    said capability.

    Capabilities are un-named and managed in the kernel(s) so can't be
    counterfeited. You can *know* that a particular object exists (e.g.,
    TheFrontDoor, TheGunSafe, TheBankAccount) but can't do anything
    to/with it because you likely haven't been given access to it and can't
    GET access to it as there isn't a central name registry that you could
    hack!
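    A toy of kernel-held capabilities: user code holds only opaque handles,
    and the kernel maps handle to (object, permitted methods). Nothing in the
    handle encodes the object, so handles can't be forged or widened. The
    Kernel/Door names and permissions are illustrative, not from Don's system:

```python
import secrets

class Kernel:
    def __init__(self):
        self._caps = {}    # opaque handle -> (object, allowed methods)

    def grant(self, obj, methods):
        handle = secrets.token_hex(16)    # unguessable, meaningless token
        self._caps[handle] = (obj, frozenset(methods))
        return handle

    def invoke(self, handle, method, *args):
        try:
            obj, allowed = self._caps[handle]
        except KeyError:
            raise PermissionError("no such capability")
        if method not in allowed:
            raise PermissionError("capability does not permit " + method)
        return getattr(obj, method)(*args)

class Door:
    def lock(self): return "locked"
    def unlock(self): return "unlocked"

k = Kernel()
cap = k.grant(Door(), {"lock"})    # holder may lock, never unlock
print(k.invoke(cap, "lock"))       # locked
try:
    k.invoke(cap, "unlock")        # knowing the object exists isn't enough
except PermissionError as e:
    print(e)
```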

    It seems fairly obvious that devices can no longer act as islands in the
    21st century. There's too little value to add for a single device to
    be meaningful -- unless it interacts with other devices in meaningful
    ways.

    And, rather than the heavyweight client-server interface where things
    interact in a generic, high-level manner (CORBA-ish), it seems much
    more practical to let them interact in a manner that is more natural
    to their designs. Do you want to have to standardize on every such
    interaction with an industry-wide "committee" arguing about how many
    humps the horse should have on its back? Or, do you want to make product
    that solves problems while your competitors are trying to define a
    level playing field??

    It's mostly as of a "design" while though I put it through
    the wringer as it were of some "large, competent, conscientious,
    co-operative reasoners" or a "bot panel", I can post a link
    or reference or all the text of them.



    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Ross Finlayson@3:633/10 to All on Sunday, March 29, 2026 05:53:24
    On 03/28/2026 01:56 PM, Don Y wrote:
    On 3/28/2026 6:41 AM, Ross Finlayson wrote:
    Ah, here the idea of "co-operative scheduler" (vis-a-vis
    "pre-emptive scheduler") has that there's a notion of the
    model of an o.s. (scheduler, allocator) of co-operation
    vis-a-vis "the re-routine", which is a sort of idea like
    "co-routine", where basically everything is non-blocking
    by design and convention, and instead of a co-routine stack
    is a sort of memo-ized monad, then about matters of the
    scheduling like "I cut you pick", "straw-pulling", and
    "hot potato", with anti-gaming built in to the algorithm,
    device drivers are provided as "generic universal drivers",
    then that user-space gets a usual "quotas/limits" and
    while a contrived user-space program may actually run
    a hot inner loop, otherwise the deadlock/starvation and
    other issues in concurrency are to be figured out,
    for the allocator/scheduler.

    I assume tasks (processes) run without interruption until they
    need an unavailable resource, at which point, they block.

    But, *other* tasks are also doing so, concurrently.

    As such, EVERY time a resource is released, the scheduler
    (theoretically) runs. So, a task that causes a resource to
    be made available for another blocking task can be immediately
    preempted by that blocked task now being "ready" to run.

    The distinction is important because tasks can reside in
    different cores as well as on different nodes. So, even if a
    task spins in a tight loop, not altering the availability of
    any resources, it can still be preempted by the actions of some
    other executing task.

    [Of course, the round-robin scheduler ensures an equal priority
    task is not indefinitely blocked, even if the deadline scheduler
    sees no need to reschedule()]

    Treating the design of the OS in a similar fashion, I can transfer "ownership" of specific objects to whichever tasks (servers) I
    consider appropriate. Dynamically.

    E.g., when physical memory is free'd, I give it to a task that
    scrubs it (so the next user of said memory never sees any "data"
    that may have occupied those memory locations by a previous
    "user") and verifies its functionality. When some task NEEDS
    additional memory, it blocks waiting on the availability of
    such memory -- which causes this "scrubber" to make available
    pages that it deems as "clean and functional".

    Chopping responsibilities up like this makes it easier to "get it
    right" -- at the expense of some performance (each of these interactions
    have to cross protection domains so the interactions aren't as
    lightweight as a simple function call in a monolithic kernel).

    Processors are cheap. Memory is cheap. Developer time and latent
    bugs are costly (figure you have to spend a man-week looking into
    a suspected bug. If you're making 10,000 units, EACH such distraction
    can justify an additional $1 in hardware costs, without factoring in
    the externalities of cost to users, reputation, etc.)

    I.e., the usual idea of the "co-operative" lives inside
    the kernel, user-space is nominally adversarial and
    the network is nominally un-trusted. System calls it's
    figured are implemented as of a "co-operative" implementation.

    The network is not a named resource. It is used by the OS to
    exchange messages with other kernel instances running on other
    nodes. So, when a task does:

    object=>method(arguments)

    it doesn't need to know if the referenced object is local or remote.
    The kernels handle location independence.

    Traffic is encrypted with different keys for each node. So, discovering a key (e.g., by attacking a specific node) only gives you access to the
    traffic for that node.

    This also allows for:

    object=>move(new_server)

    to force the object to be managed by another server (for that particular class of object) which will likely cause the object instance to
    "physically"
    move to whichever node on which that server is executing.

    So, I can move every object off of a particular node -- or, onto a specific node!

    [Of course, a task is also an object -- and servers are tasks -- so I
    can move
    entire tasks (processes) similarly.]

    In this way, I can bring hardware and software on/off-line on demand to
    adapt to changes in needs and available resources. E.g., if I'm running
    on battery (backup) power, I can shut down individual nodes to reduce
    power consumption after migrating their current responsibilities to
    other nodes or outright killing them off -- after checkpointing their
    progress. If I have some new *need*, I can bring a node on-line and push
    tasks (objects) onto it. So, if a particular object server becomes
    overloaded, I can spawn a new instance of it and migrate some of the
    objects that it is currently backing onto that new instance -- the tasks
    referencing those objects never know that the objects have "moved"!

    Objects are capability-based. So, you can only access objects for which
    you
    currently *have* a capability and only to the extent permitted by said capability.

    Capabilities are un-named and managed in the kernel(s) so can't be counterfeited. You can *know* that a particular object exists (e.g., TheFrontDoor, TheGunSafe, TheBankAccount) but can't do anything
    to/with it because you likely haven't been given access to it and can't
    GET access to it as there isn't a central name registry that you could
    hack!
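    A minimal sketch of that property, assuming nothing about the real
    implementation: capabilities live only in a kernel-held table, tasks hold
    opaque random handles, and there is no registry to enumerate. All names
    here (`CapabilityKernel`, `GunSafe`) are illustrative.

    ```python
    # Unforgeable capabilities: knowing an object EXISTS gives you nothing;
    # you can only act through a handle the kernel granted you.
    import secrets

    class CapabilityKernel:
        def __init__(self):
            self._table = {}   # handle -> (object, allowed methods)

        def grant(self, obj, methods):
            # A 128-bit random token: guessing one is infeasible, and the
            # table itself is never visible to user-space.
            handle = secrets.token_hex(16)
            self._table[handle] = (obj, frozenset(methods))
            return handle

        def invoke(self, handle, method, *args):
            entry = self._table.get(handle)
            if entry is None:
                raise PermissionError("no such capability")
            obj, allowed = entry
            if method not in allowed:
                raise PermissionError("capability does not permit " + method)
            return getattr(obj, method)(*args)

    class GunSafe:
        def open(self):  return "opened"
        def close(self): return "closed"

    kernel = CapabilityKernel()
    cap = kernel.grant(GunSafe(), {"close"})   # you may close it, never open it
    print(kernel.invoke(cap, "close"))
    # kernel.invoke(cap, "open") raises PermissionError: you can *know*
    # TheGunSafe exists, yet have no way to GET access to it.
    ```
    
    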

    It seems fairly obvious that devices can no longer act as islands in the
    21st century. There's too little value to add for a single device to
    be meaningful -- unless it interacts with other devices in meaningful
    ways.

    And, rather than the heavyweight client-server interface where things
    interact in a generic, high-level manner (CORBA-ish), it seems much
    more practical to let them interact in a manner that is more natural
    to their designs. Do you want to have to standardize on every such
    interaction with an industry-wide "committee" arguing about how many
    humps the horse should have on its back? Or, do you want to make a
    product that solves problems while your competitors are trying to
    define a level playing field?

    It's mostly a "design", though I have put it through the wringer,
    as it were, of some "large, competent, conscientious, co-operative
    reasoners" or a "bot panel"; I can post a link or reference or all
    the text of them.



    That seems to speak to "proximity" and "affinity", with regards
    to "coherency", and "mobility". To "move" state, about state & scope
    or the contents of (some of) memory and registers, of a process or task,
    here is described as "re-seating" which is also the usual enough
    idea in programming like C and C++.

    Perhaps the most usual example is pre-emptive multithreading itself,
    about basically the state as a stack, and "process control block"
    and "thread control block" usually enough, about context-switching.

    In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
    the idea of the "re-routine" as a model of asynchronous concurrency,
    is a little different than the usual idea of a co-routine, which
    is usually enough a fork in the process model, then about signals
    as IPC with PID and PPID, vis-a-vis, fibers and threads or events
    and task queues, basically the re-routine never "blocks" and has
    no "yield" nor "async" keywords in the source text, instead any
    call to a re-routine implicitly yields, and then the re-routine
    is run again later, the re-run, whereas the re-routine is filled in;
    then the re-routine adds a penalty of basically n^2 in time
    to be completely non-blocking and where asynchrony is modeled in
    the language as the normal procedural flow-of-control.

    Then, as that's only in the kernel itself, that n^2 might seem a
    huge penalty, yet it's actually quite under that, since as the
    re-routine's data (in a stack) is filled in, then most of its
    routine is cache hits, the "memoized" calls to the re-routine.
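    One reading of the re-routine idea can be sketched as follows. This is
    an interpretation, not the author's implementation: a call whose result
    isn't ready aborts the run, and the whole routine is re-run later with
    completed sub-calls answered from a memo table, until it runs to the end.

    ```python
    # Re-routine sketch: no "yield"/"async" keywords; plain procedural
    # flow-of-control, re-run from the top with memoized sub-call results.

    class NotReady(Exception):
        pass

    class ReRoutine:
        def __init__(self, fn):
            self.fn = fn
            self.memo = {}          # call key -> completed result
            self.pending = set()    # issued but not yet fulfilled

        def call(self, key, start_async):
            """Inside the routine: return the memoized result, or issue
            the asynchronous operation and abort this run (implicit yield)."""
            if key in self.memo:
                return self.memo[key]        # cache hit on a re-run
            if key not in self.pending:
                self.pending.add(key)
                start_async(key)             # kick off the real work
            raise NotReady(key)

        def fulfill(self, key, result):
            self.pending.discard(key)
            self.memo[key] = result

        def run(self):
            """Re-run from the top; None means it aborted at an
            unfilled sub-call and must be re-run later."""
            try:
                return self.fn(self)
            except NotReady:
                return None

    completions = []
    def routine(rr):
        a = rr.call("read_a", completions.append)
        b = rr.call("read_b", completions.append)
        return a + b

    rr = ReRoutine(routine)
    assert rr.run() is None          # aborts at read_a
    rr.fulfill("read_a", 1)
    assert rr.run() is None          # read_a is now a cache hit; aborts at read_b
    rr.fulfill("read_b", 2)
    assert rr.run() == 3             # fully filled in; runs to completion
    ```

    The n^2 figure then corresponds to re-running a routine of k sub-calls
    up to k times; since all but the last sub-call of each re-run are memo
    hits, the re-executed prefix is cheap in practice.
    
    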


    About the allocator, then this design concept basically is for
    making use of virtual memory, to be able to "re-seat" the memory
    of a process without changing a process' view of the memory.
    This can help avoid both syscalls and memory fragmentation,
    since memory paging basically is performed by the user-space
    process in its time instead of by the kernel. This has the
    usual guarantees of process memory that it's to be visible only
    to the process itself unless explicitly shared, that then being
    treated as a usual sort of shared resource in the distributed sense.


    The syscalls by a process essentially yield (the process yields
    to the scheduler), about ideas like round-robin and fairness
    and anti-starvation and incremental-progress in the scheduler,
    while it's so that until a process gets any other signal and
    only touches its own memory that's non-yielding, then about
    the machinery of pre-emptive multithreading or context-switch
    and as with regards to hyper-threading or the interleaved contexts
    on the double-pipeline CPUs, the idea being that context-switching
    is along the lines of basically a periodic signal interrupt.


    Notions of "Orange Book" and "mandatory access control" then
    are considered "more than good ideas" with regards to the allocator
    and scheduler of resources in computation.



    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Don Y@3:633/10 to All on Sunday, March 29, 2026 12:39:33
    On 3/29/2026 5:53 AM, Ross Finlayson wrote:
    That seems to speak to "proximity" and "affinity", with regards
    to "coherency", and "mobility". To "move" state, about state & scope
    or the contents of (some of) memory and registers, of a process or task,
    here is described as "re-seating" which is also the usual enough
    idea in programming like C and C++.

    My goal is NOT to disrupt the "programmer's model" for "average
    developers". They should truly be able to think that they are running
    on a uniprocessor but without any guarantees as to throughput
    (to accommodate sharing the physical processor, communication
    overhead, etc.).

    They shouldn't need to know where "they" are executing or that the
    resources on which they rely may not be local.

    "Advanced developers" attend to the dynamic reconfiguration of
    the system (at runtime). So, THOSE developers build applications
    that are given the ability ("capability") to relocate resources,
    kill off tasks, spawn new ones, etc. Because they, presumably, have
    a more detailed understanding of the "System" beyond the scope of
    some particular application/task within it.

    Perhaps the most usual example is pre-emptive multithreading itself,
    about basically the state as a stack, and "process control block"
    and "thread control block" usually enough, about context-switching.

    I support a heterogeneous environment; an object can be migrated
    to a different processor *family* at any time (while executing).
    You shouldn't care as long as the interface (methods) to the
    object remain available (for the capabilities you have been
    granted) along with the current state of the object.

    Similarly, the algorithms used to implement those methods (as
    well as internal data members) can change dynamically -- as long
    as the interface functionality remains immutable.

    For example, I create namespaces -- (name, capability) dictionaries -- for
    each process. If you only need a few registered names (stdin, stdout, stderr), the code that implements that is entirely different than the implementation that tries to manage hundreds of named entries.

    As it should be. (because "you" are "billed" for the resources that you
    use, you'd not want to bear the cost of an implementation that did more
    than you needed -- just like you wouldn't develop a standalone device
    with more complexity/cost than necessary!)

    If names are insignificant (e.g., akin to file descriptors), then an implementation might use a single "char" to represent a name, giving
    you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc.
    Or, treat that char as an 8 bit int -- 0x01, 0x02, 0x61, 0x62, ...
    (do you really *need* names like "Object 1", "Object 2", etc.?)
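    The two implementations behind one interface can be sketched like this.
    The class names and the capability strings are illustrative assumptions,
    not the actual system's types; the point is that the caller's API is
    identical while the cost fits the actual need.

    ```python
    # Per-process namespace: (name, capability) lookups, with the
    # implementation sized to the process' registered names.

    class SmallNamespace:
        """Fixed three slots (stdin, stdout, stderr): no hashing, no
        per-entry allocation -- you aren't billed for generality."""
        NAMES = ("stdin", "stdout", "stderr")
        def __init__(self, caps):
            self._caps = tuple(caps)          # exactly three capabilities
        def lookup(self, name):
            return self._caps[self.NAMES.index(name)]

    class LargeNamespace:
        """General dictionary for a process registering hundreds of names."""
        def __init__(self):
            self._caps = {}
        def bind(self, name, cap):
            self._caps[name] = cap
        def lookup(self, name):
            return self._caps[name]

    small = SmallNamespace(["cap_in", "cap_out", "cap_err"])
    print(small.lookup("stderr"))             # cap_err

    large = LargeNamespace()
    for i in range(300):
        large.bind("Object %d" % i, "cap_%d" % i)
    print(large.lookup("Object 42"))          # cap_42
    ```
    
    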

    In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
    the idea of the "re-routine" as a model of asynchronous concurrency,
    is a little different than the usual idea of a co-routine, which
    is usually enough a fork in the process model, then about signals
    as IPC with PID and PPID, vis-a-vis, fibers and threads or events
    and task queues, basically the re-routine never "blocks" and has
    no "yield" nor "async" keywords in the source text, instead any
    call to a re-routine implicitly yields, and then the re-routine
    is run again later, the re-run, whereas the re-routine is filled in;
    then the re-routine adds a penalty of basically n^2 in time
    to be completely non-blocking and where asynchrony is modeled in
    the language as the normal procedural flow-of-control.

    "Yield" is just a hint to the scheduler. If you have a preemptive implementation where "time" can be a preemption criterion, then a
    task need never "suggest" a good place to relinquish the processor.

    OTOH, if you can only preempt when the task invokes an OS primitive,
    then you have to be wary of a developer spinning without ever giving
    the system a chance to "interrupt".

    Then, as that's only in the kernel itself, that n^2 might seem a
    huge penalty, yet it's actually quite under that, since as the
    re-routine's data (in a stack) is filled in, then most of its
    routine is cache hits, the "memoized" calls to the re-routine.

    About the allocator, then this design concept basically is for
    making use of virtual memory, to be able to "re-seat" the memory
    of a process without changing a process' view of the memory.

    Of course. But, with VMM, you can do so much more:
    - CoW semantics
    - DSM
    - remapping "defective" memory
    - releasing memory that will NEVER be revisited
    - universal call by value (for large arguments)
    etc. None of these things need impact the developer.
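    Copy-on-write, the first item in that list, can be illustrated with a
    toy page table. This is a deliberately simplified model (the names
    `Page`, `AddressSpace`, `fork` are illustrative; real CoW is done by the
    MMU via write-protect faults): both address spaces share a physical page
    until one writes, and only the writer gets a private copy.

    ```python
    # CoW semantics sketch: share pages on fork, copy lazily on first write.

    class Page:
        def __init__(self, data):
            self.data = bytearray(data)
            self.refs = 1                     # address spaces mapping this page

    class AddressSpace:
        def __init__(self, pages=None):
            self.table = pages or {}          # virtual page number -> Page

        def fork(self):
            """Share every page; copy nothing yet."""
            for page in self.table.values():
                page.refs += 1
            return AddressSpace(dict(self.table))

        def read(self, vpn, offset):
            return self.table[vpn].data[offset]

        def write(self, vpn, offset, value):
            page = self.table[vpn]
            if page.refs > 1:                 # shared: break the sharing now
                page.refs -= 1
                page = Page(page.data)        # private copy for the writer only
                self.table[vpn] = page
            page.data[offset] = value

    parent = AddressSpace({0: Page(b"hello")})
    child = parent.fork()
    child.write(0, 0, ord("j"))               # triggers the lazy copy
    print(bytes(child.table[0].data))         # b'jello'
    print(bytes(parent.table[0].data))        # b'hello' -- parent unaffected
    ```

    The developer-visible model is unchanged: each process just reads and
    writes "its" memory, which is the point of the list above.
    
    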

    This can help avoid both syscalls and memory fragmentation,
    since memory paging basically is performed by the user-space
    process in its time instead of by the kernel. This has the
    usual guarantees of process memory that it's to be visible only
    to the process itself unless explicitly shared, that then being
    treated as a usual sort of shared resource in the distributed sense.

    The syscalls by a process essentially yield (the process yields
    to the scheduler), about ideas like round-robin and fairness
    and anti-starvation and incremental-progress in the scheduler,
    while it's so that until a process gets any other signal and
    only touches its own memory that's non-yielding, then about
    the machinery of pre-emptive multithreading or context-switch
    and as with regards to hyper-threading or the interleaved contexts
    on the double-pipeline CPUs, the idea being that context-switching
    is along the lines of basically a periodic signal interrupt.

    But you have to guard against "non-cooperative" (and even HOSTILE!)
    actors who may wish to compromise performance by monopolizing resources
    (of which the CPU is but one).

    Using resource ledgers lets you constrain an application (process)
    to a subset of the available resources. Putting runtime constraints
    on memory, time, etc. lets you remove a "bad actor" from the set
    of processes eligible to run. A persistent store lets you remember
    this decision so you don't "readmit" the process at some future date!
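    A resource ledger with a persistent "do not readmit" record might look
    like the following sketch. The file format and class names are
    assumptions for illustration; only the mechanism (budget, eviction,
    persistence across restarts) comes from the text above.

    ```python
    # Resource ledger: constrain each process to a budget, evict on
    # overrun, and persist the verdict so the bad actor isn't readmitted.
    import json, os, tempfile

    class Ledger:
        def __init__(self, blacklist_path):
            self.path = blacklist_path
            self.budgets = {}                  # process -> remaining units
            self.banned = set()
            if os.path.exists(self.path):      # remember earlier decisions
                with open(self.path) as f:
                    self.banned = set(json.load(f))

        def admit(self, proc, budget):
            if proc in self.banned:
                return False                   # known bad actor: refused
            self.budgets[proc] = budget
            return True

        def charge(self, proc, units):
            self.budgets[proc] -= units
            if self.budgets[proc] < 0:         # overran its constraint
                self.banned.add(proc)
                del self.budgets[proc]
                with open(self.path, "w") as f:
                    json.dump(sorted(self.banned), f)
                return False                   # removed from eligible set
            return True

    path = os.path.join(tempfile.mkdtemp(), "banned.json")
    ledger = Ledger(path)
    assert ledger.admit("hog", budget=10)
    assert not ledger.charge("hog", 11)        # monopolizer evicted

    ledger2 = Ledger(path)                     # "reboot": fresh ledger
    assert not ledger2.admit("hog", budget=10) # decision survived
    ```
    
    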

    Notions of "Orange Book" and "mandatory access control" then
    are considered "more than good ideas" with regards to the allocator
    and scheduler of resources in computation.

    Capabilities implicitly limit the actions that can be performed (i.e.,
    methods that can be invoked) on an object. E.g., I can let you
    LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
    once, and never again!

    How you refine your "permissions" is something that belongs in the
    objects themselves -- not layered onto a "filesystem" as an afterthought.
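    The LOCK-but-never-UNLOCK and open-EXACTLY-once examples can be modeled
    directly: a capability names an object, a permitted method subset, and
    an optional use budget. This is a sketch under those assumptions; the
    `Capability` and `Door` names are illustrative, not the real types.

    ```python
    # Method-restricted, use-count-limited capabilities, with the
    # permission logic living with the object access -- not in a filesystem.

    class Capability:
        def __init__(self, obj, methods, uses=None):
            self.obj = obj
            self.methods = frozenset(methods)
            self.uses = uses                   # None = unlimited invocations

        def invoke(self, method, *args):
            if method not in self.methods:
                raise PermissionError(method)  # e.g. LOCK granted, UNLOCK not
            if self.uses is not None:
                if self.uses == 0:
                    raise PermissionError("capability exhausted")
                self.uses -= 1
            return getattr(self.obj, method)(*args)

    class Door:
        def __init__(self): self.locked = False
        def lock(self):   self.locked = True;  return "locked"
        def unlock(self): self.locked = False; return "unlocked"

    door = Door()
    lock_only = Capability(door, {"lock"})           # can lock, never unlock
    open_once = Capability(door, {"unlock"}, uses=1) # works exactly once

    print(lock_only.invoke("lock"))
    print(open_once.invoke("unlock"))
    # lock_only.invoke("unlock") raises PermissionError;
    # a second open_once.invoke("unlock") raises PermissionError too.
    ```
    
    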


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From john larkin@3:633/10 to All on Sunday, March 29, 2026 13:13:24
    On Sun, 29 Mar 2026 12:39:33 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 3/29/2026 5:53 AM, Ross Finlayson wrote:
    That seems to speak to "proximity" and "affinity", with regards
    to "coherency", and "mobility". To "move" state, about state & scope
    or the contents of (some of) memory and registers, of a process or task,
    here is described as "re-seating" which is also the usual enough
    idea in programming like C and C++.

    My goal is NOT to disrupt the "programmer's model" for "average
    developers". They should truly be able to think that they are running
    on a uniprocessor but without any guarantees as to throughput
    (to accommodate sharing the physical processor, communication
    overhead, etc.).

    They shouldn't need to know where "they" are executing or that the
    resources on which they rely may not be local.

    "Advanced developers" attend to the dynamic reconfiguration of
    the system (at runtime). So, THOSE developers build applications
    that are given the ability ("capability") to relocate resources,
    kill off tasks, spawn new ones, etc. Because they, presumably, have
    a more detailed understanding of the "System" beyond the scope of
    some particular application/task within it.

    Perhaps the most usual example is pre-emptive multithreading itself,
    about basically the state as a stack, and "process control block"
    and "thread control block" usually enough, about context-switching.

    I support a heterogeneous environment; an object can be migrated
    to a different processor *family* at any time (while executing).
    You shouldn't care as long as the interface (methods) to the
    object remain available (for the capabilities you have been
    granted) along with the current state of the object.

    Similarly, the algorithms used to implement those methods (as
    well as internal data members) can change dynamically -- as long
    as the interface functionality remains immutable.

    For example, I create namespaces -- (name, capability) dictionaries -- for
    each process. If you only need a few registered names (stdin, stdout, stderr),
    the code that implements that is entirely different than the implementation
    that tries to manage hundreds of named entries.

    As it should be. (because "you" are "billed" for the resources that you
    use, you'd not want to bear the cost of an implementation that did more
    than you needed -- just like you wouldn't develop a standalone device
    with more complexity/cost than necessary!)

    If names are insignificant (e.g., akin to file descriptors), then an implementation might use a single "char" to represent a name, giving
    you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc.
    Or, treat that char as an 8 bit int -- 0x01, 0x02, 0x61, 0x62, ...
    (do you really *need* names like "Object 1", "Object 2", etc.?)

    In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
    the idea of the "re-routine" as a model of asynchronous concurrency,
    is a little different than the usual idea of a co-routine, which
    is usually enough a fork in the process model, then about signals
    as IPC with PID and PPID, vis-a-vis, fibers and threads or events
    and task queues, basically the re-routine never "blocks" and has
    no "yield" nor "async" keywords in the source text, instead any
    call to a re-routine implicitly yields, and then the re-routine
    is run again later, the re-run, whereas the re-routine is filled in;
    then the re-routine adds a penalty of basically n^2 in time
    to be completely non-blocking and where asynchrony is modeled in
    the language as the normal procedural flow-of-control.

    "Yield" is just a hint to the scheduler. If you have a preemptive implementation where "time" can be a preemption criterion, then a
    task need never "suggest" a good place to relinquish the processor.

    OTOH, if you can only preempt when the task invokes an OS primitive,
    then you have to be wary of a developer spinning without ever giving
    the system a chance to "interrupt".

    Then, as that's only in the kernel itself, that n^2 might seem a
    huge penalty, yet it's actually quite under that, since as the
    re-routine's data (in a stack) is filled in, then most of its
    routine is cache hits, the "memoized" calls to the re-routine.

    About the allocator, then this design concept basically is for
    making use of virtual memory, to be able to "re-seat" the memory
    of a process without changing a process' view of the memory.

    Of course. But, with VMM, you can do so much more:
    - CoW semantics
    - DSM
    - remapping "defective" memory
    - releasing memory that will NEVER be revisited
    - universal call by value (for large arguments)
    etc. None of these things need impact the developer.

    This can help avoid both syscalls and memory fragmentation,
    since memory paging basically is performed by the user-space
    process in its time instead of by the kernel. This has the
    usual guarantees of process memory that it's to be visible only
    to the process itself unless explicitly shared, that then being
    treated as a usual sort of shared resource in the distributed sense.

    The syscalls by a process essentially yield (the process yields
    to the scheduler), about ideas like round-robin and fairness
    and anti-starvation and incremental-progress in the scheduler,
    while it's so that until a process gets any other signal and
    only touches its own memory that's non-yielding, then about
    the machinery of pre-emptive multithreading or context-switch
    and as with regards to hyper-threading or the interleaved contexts
    on the double-pipeline CPUs, the idea being that context-switching
    is along the lines of basically a periodic signal interrupt.

    But you have to guard against "non-cooperative" (and even HOSTILE!)
    actors who may wish to compromise performance by monopolizing resources
    (of which the CPU is but one).

    Using resource ledgers lets you constrain an application (process)
    to a subset of the available resources. Putting runtime constraints
    on memory, time, etc. lets you remove a "bad actor" from the set
    of processes eligible to run. A persistent store lets you remember
    this decision so you don't "readmit" the process at some future date!

    Notions of "Orange Book" and "mandatory access control" then
    are considered "more than good ideas" with regards to the allocator
    and scheduler of resources in computation.

    Capabilities implicitly limit the actions that can be performed (i.e., methods that can be invoked) on an object. E.g., I can let you
    LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
    once, and never again!

    How you refine your "permissions" is something that belongs in the
    objects themselves -- not layered onto a "filesystem" as an afterthought.

    Yikes. All that makes me glad to be a simple circuit designer.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Ross Finlayson@3:633/10 to All on Sunday, March 29, 2026 14:29:32
    On 03/29/2026 12:39 PM, Don Y wrote:
    On 3/29/2026 5:53 AM, Ross Finlayson wrote:
    That seems to speak to "proximity" and "affinity", with regards
    to "coherency", and "mobility". To "move" state, about state & scope
    or the contents of (some of) memory and registers, of a process or task,
    here is described as "re-seating" which is also the usual enough
    idea in programming like C and C++.

    My goal is NOT to disrupt the "programmer's model" for "average
    developers". They should truly be able to think that they are running
    on a uniprocessor but without any guarantees as to throughput
    (to accommodate sharing the physical processor, communication
    overhead, etc.).

    They shouldn't need to know where "they" are executing or that the
    resources on which they rely may not be local.

    "Advanced developers" attend to the dynamic reconfiguration of
    the system (at runtime). So, THOSE developers build applications
    that are given the ability ("capability") to relocate resources,
    kill off tasks, spawn new ones, etc. Because they, presumably, have
    a more detailed understanding of the "System" beyond the scope of
    some particular application/task within it.

    Perhaps the most usual example is pre-emptive multithreading itself,
    about basically the state as a stack, and "process control block"
    and "thread control block" usually enough, about context-switching.

    I support a heterogeneous environment; an object can be migrated
    to a different processor *family* at any time (while executing).
    You shouldn't care as long as the interface (methods) to the
    object remain available (for the capabilities you have been
    granted) along with the current state of the object.

    Similarly, the algorithms used to implement those methods (as
    well as internal data members) can change dynamically -- as long
    as the interface functionality remains immutable.

    For example, I create namespaces -- (name, capability) dictionaries -- for each process. If you only need a few registered names (stdin, stdout, stderr),
    the code that implements that is entirely different than the implementation that tries to manage hundreds of named entries.

    As it should be. (because "you" are "billed" for the resources that you
    use, you'd not want to bear the cost of an implementation that did more
    than you needed -- just like you wouldn't develop a standalone device
    with more complexity/cost than necessary!)

    If names are insignificant (e.g., akin to file descriptors), then an implementation might use a single "char" to represent a name, giving
    you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc.
    Or, treat that char as an 8 bit int -- 0x01, 0x02, 0x61, 0x62, ...
    (do you really *need* names like "Object 1", "Object 2", etc.?)

    In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
    the idea of the "re-routine" as a model of asynchronous concurrency,
    is a little different than the usual idea of a co-routine, which
    is usually enough a fork in the process model, then about signals
    as IPC with PID and PPID, vis-a-vis, fibers and threads or events
    and task queues, basically the re-routine never "blocks" and has
    no "yield" nor "async" keywords in the source text, instead any
    call to a re-routine implicitly yields, and then the re-routine
    is run again later, the re-run, whereas the re-routine is filled in;
    then the re-routine adds a penalty of basically n^2 in time
    to be completely non-blocking and where asynchrony is modeled in
    the language as the normal procedural flow-of-control.

    "Yield" is just a hint to the scheduler. If you have a preemptive implementation where "time" can be a preemption criterion, then a
    task need never "suggest" a good place to relinquish the processor.

    OTOH, if you can only preempt when the task invokes an OS primitive,
    then you have to be wary of a developer spinning without ever giving
    the system a chance to "interrupt".

    Then, as that's only in the kernel itself, that n^2 might seem a
    huge penalty, yet it's actually quite under that, since as the
    re-routine's data (in a stack) is filled in, then most of its
    routine is cache hits, the "memoized" calls to the re-routine.

    About the allocator, then this design concept basically is for
    making use of virtual memory, to be able to "re-seat" the memory
    of a process without changing a process' view of the memory.

    Of course. But, with VMM, you can do so much more:
    - CoW semantics
    - DSM
    - remapping "defective" memory
    - releasing memory that will NEVER be revisited
    - universal call by value (for large arguments)
    etc. None of these things need impact the developer.

    This can help avoid both syscalls and memory fragmentation,
    since memory paging basically is performed by the user-space
    process in its time instead of by the kernel. This has the
    usual guarantees of process memory that it's to be visible only
    to the process itself unless explicitly shared, that then being
    treated as a usual sort of shared resource in the distributed sense.

    The syscalls by a process essentially yield (the process yields
    to the scheduler), about ideas like round-robin and fairness
    and anti-starvation and incremental-progress in the scheduler,
    while it's so that until a process gets any other signal and
    only touches its own memory that's non-yielding, then about
    the machinery of pre-emptive multithreading or context-switch
    and as with regards to hyper-threading or the interleaved contexts
    on the double-pipeline CPUs, the idea being that context-switching
    is along the lines of basically a periodic signal interrupt.

    But you have to guard against "non-cooperative" (and even HOSTILE!)
    actors who may wish to compromise performance by monopolizing resources
    (of which the CPU is but one).

    Using resource ledgers lets you constrain an application (process)
    to a subset of the available resources. Putting runtime constraints
    on memory, time, etc. lets you remove a "bad actor" from the set
    of processes eligible to run. A persistent store lets you remember
    this decision so you don't "readmit" the process at some future date!

    Notions of "Orange Book" and "mandatory access control" then
    are considered "more than good ideas" with regards to the allocator
    and scheduler of resources in computation.

    Capabilities implicitly limit the actions that can be performed (i.e., methods that can be invoked) on an object. E.g., I can let you
    LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
    once, and never again!

    How you refine your "permissions" is something that belongs in the
    objects themselves -- not layered onto a "filesystem" as an afterthought.



    Hey, thanks for writing. Since all we know about each other
    are these brief exchanges, filling in some detail helps a lot
    to understand, or rather, get an idea, of an estimate of the
    depth of the comprehension of the whole machine stack.

    The idea of a kernel or operating system (executive, scheduler,
    ..., interactive "operating system") for "commodity" architectures
    these days is that the various chips' architectures are pretty
    ubiquitous, then that PCIe is on everything PC, then usually enough
    USB, then about the NIC and USB root; those are pretty much
    ubiquitously PCIe devices, or as after UEFI and ACPI and the SMI
    or as about DeviceTree, ..., it's ubiquitous, after "economy of
    scale" a simple enough "economy of ubiquity".

    So, the chips are almost all 64-bit in their native word width, though
    agreeably sometimes it's 128, and they have various vector or SIMD
    instructions, then as with regards to fitting two operands in a word,
    the SWAR approach, about vectorizing the scalars, and word and
    double-word and half-word.

    Then, here mostly the consideration is the "head-less" or "HID-less",
    there's no human interface device involved in server runtime images
    for things like running services or usually enough "boxes" or "nodes".


    Then, for compiling existing sources, it seems the easiest way to
    do that is to implement profiles of POSIX, or posix base and pthreads.
    That then of course is much the traditional UNIX account of where
    "everything is a file", though, the operating system itself doesn't
    need to be implemented that way, just surface the usual objects as
    they are as primitives, and mostly as having file handles.
    (If the sources compile and run the same behavior some won't know/care.)

    Then, objects, according to "naming and directory interface" usually
    enough, Orange Book for example defines granular access controls,
    so, including all things like files.


    About quota and limits and the like, and about the perceived value
    of pre-emptive scheduling to avoid "hogging", or thrashing, here
    is an account of basically unmaskable uncatchable interrupts that
    have as a signal handler the operating system code on the core
    that results in making the task yield, then necessarily enough using
    the usually context-switch machinery to pause it and restart it.
    Most code eventually touches system calls, and if there's a spare
    core it might actually be the idea to let the compute-intensive
    routine employ the entire core.

    The many-core architectures of these days, even fifteen or twenty
    years ago with "AMD Bulldozer and 8 cores" and the like, these
    days usual PC or server chips have scores of cores, ..., often
    for example with the idea of running a giant hypervisor then
    as many virts, ..., like a Kubernetes cluster for example, ...,
    or simply a ton of virts, ..., these days a single board is
    as a model of a distributed system internal to itself.


    The idea for allocation and sharing that "fairness is a matter
    of mechanism, not policy", is for the usual ideas of "throughput"
    and "transput" as Finkel put it in "An Operating Systems Vade Mecum",
    about I/O and queues, and limits.


    "Interrupts" are the events, "coherent cache lines" the units of
    serialization of memory, DMA is the bulk transfer medium in protocol,
    a byte is the smallest addressable memory unit, these are mostly
    the ordering guarantees, all else "undefined".

    Proximity, affinity, coherency, mobility, ....



    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From john larkin@3:633/10 to All on Sunday, March 29, 2026 15:24:41
    On Sun, 29 Mar 2026 14:29:32 -0700, Ross Finlayson
    <ross.a.finlayson@gmail.com> wrote:

    On 03/29/2026 12:39 PM, Don Y wrote:
    On 3/29/2026 5:53 AM, Ross Finlayson wrote:
    That seems to speak to "proximity" and "affinity", with regards
    to "coherency", and "mobility". To "move" state, about state & scope
    or the contents of (some of) memory and registers, of a process or task,
    here is described as "re-seating" which is also the usual enough
    idea in programming like C and C++.

    My goal is NOT to disrupt the "programmer's model" for "average
    developers". They should truly be able to think that they are running
    on a uniprocessor but without any guarantees as to throughput
    (to accommodate sharing the physical processor, communication
    overhead, etc.).

    They shouldn't need to know where "they" are executing or that the
    resources on which they rely may not be local.

    "Advanced developers" attend to the dynamic reconfiguration of
    the system (at runtime). So, THOSE developers build applications
    that are given the ability ("capability") to relocate resources,
    kill off tasks, spawn new ones, etc. Because they, presumably, have
    a more detailed understanding of the "System" beyond the scope of
    some particular application/task within it.

    Perhaps the most usual example is pre-emptive multithreading itself,
    about basically the state as a stack, and "process control block"
    and "thread control block" usually enough, about context-switching.

    I support a heterogeneous environment; an object can be migrated
    to a different processor *family* at any time (while executing).
    You shouldn't care as long as the interface (methods) to the
    object remain available (for the capabilities you have been
    granted) along with the current state of the object.

    Similarly, the algorithms used to implement those methods (as
    well as internal data members) can change dynamically -- as long
    as the interface functionality remains immutable.

    For example, I create namespaces -- (name, capability) dictionaries -- for
    each process. If you only need a few registered names (stdin, stdout, stderr),
    the code that implements that is entirely different than the implementation
    that tries to manage hundreds of named entries.

    As it should be. (because "you" are "billed" for the resources that you
    use, you'd not want to bear the cost of an implementation that did more
    than you needed -- just like you wouldn't develop a standalone device
    with more complexity/cost than necessary!)

    If names are insignificant (e.g., akin to file descriptors), then an
    implementation might use a single "char" to represent a name, giving
    you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc.
    Or, treat that char as an 8 bit int -- 0x01, 0x02, 0x61, 0x62, ...
    (do you really *need* names like "Object 1", "Object 2", etc.?)

    In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
    the idea of the "re-routine" as a model of asynchronous concurrency
    is a little different than the usual idea of a co-routine, which
    is usually enough a fork in the process model, then about signals
    as IPC with PID and PPID, vis-a-vis fibers and threads or events
    and task queues. Basically the re-routine never "blocks" and has
    no "yield" nor "async" keywords in the source text; instead any
    call to a re-routine implicitly yields, and then the re-routine
    is run again later, the re-run, whereas the re-routine is filled in.
    The re-routine then adds a penalty of basically n^2 in time
    to be completely non-blocking, with asynchrony modeled in
    the language as the normal procedural flow-of-control.

    "Yield" is just a hint to the scheduler. If you have a preemptive
    implementation where "time" can be a preemption criterion, then a
    task need never "suggest" a good place to relinquish the processor.

    OTOH, if you can only preempt when the task invokes an OS primitive,
    then you have to be wary of a developer spinning without ever giving
    the system a chance to "interrupt".

    Then, as that's only in the kernel itself, that n^2 might seem a
    huge penalty, yet, it's actually quite under that, since as the
    re-routine its data (in a stack) is filled in, then most of its
    routine is cache hits, the "memoized" calls to the re-routine.

    About the allocator, then this design concept basically is for
    making use of virtual memory, to be able to "re-seat" the memory
    of a process without changing a process' view of the memory.

    Of course. But, with VMM, you can do so much more:
    - CoW semantics
    - DSM
    - remapping "defective" memory
    - releasing memory that will NEVER be revisited
    - universal call by value (for large arguments)
    etc. None of these things need impact the developer.

    This can help avoid both syscalls and memory fragmentation,
    since memory paging basically is performed by the user-space
    process in its time instead of by the kernel. This has the
    usual guarantees of process memory that it's to be visible only
    to the process itself unless explicitly shared, that then being
    treated as a usual sort of shared resource in the distributed sense.

    The syscalls by a process essentially yield (the process yields
    to the scheduler), about ideas like round-robin and fairness
    and anti-starvation and incremental-progress in the scheduler,
    while it's so that until a process gets any other signal and
    only touches its own memory that's non-yielding, then about
    the machinery of pre-emptive multithreading or context-switch
    and as with regards to hyper-threading or the interleaved contexts
    on the double-pipeline CPUs, the idea being that context-switching
    is along the lines of basically a periodic signal interrupt.

    But you have to guard against "non-cooperative" (and even HOSTILE!)
    actors who may wish to compromise performance by monopolizing resources
    (of which the CPU is but one).

    Using resource ledgers lets you constrain an application (process)
    to a subset of the available resources. Putting runtime constraints
    on memory, time, etc. lets you remove a "bad actor" from the set
    of processes eligible to run. A persistent store lets you remember
    this decision so you don't "readmit" the process at some future date!

    Notions of "Orange Book" and "mandatory access control" then
    are considered "more than good ideas" with regards to the allocator
    and scheduler of resources in computation.

    Capabilities implicitly limit the actions that can be performed (i.e.,
    methods that can be invoked) on an object. E.g., I can let you
    LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
    once, and never again!

    How you refine your "permissions" is something that belongs in the
    objects themselves -- not layered onto a "filesystem" as an afterthought.



    Hey, thanks for writing. Since all we know about each other
    are these brief exchanges, filling in some detail helps a lot
    to understand, or rather, get an idea, of an estimate of the
    depth of the comprehension of the whole machine stack.

    The idea of a kernel or operating system (executive, scheduler,
    ..., interactive "operating system") for "commodity" architectures
    these days is that it's pretty ubiquitous the various chips'
    architectures, then that PCIe is on everything PC, then about
    usually enough USB, then about the NIC and USB root, those are
    pretty much ubiquitously PCIe devices, or as after UEFI and ACPI
    and the SMI or as about DeviceTree, ..., it's ubiquitous,
    after "economy of scale" a simple enough "economy of ubiquity".

    So, the chips are almost all 64-bit their native word width, though
    agreeably sometimes it's 128, and they have various vector or SIMD
    instructions, then as with regards to fitting two operands in a word,
    the SWAR approach, about vectorizing the scalars, and word and
    double-word and word and half-word.

    Then, here mostly the consideration is the "head-less" or "HID-less",
    there's no human interface device involved in server runtime images
    for things like running services or usually enough "boxes" or "nodes".


    Then, for compiling existing sources, it seems the easiest way to
    do that is to implement profiles of POSIX, or posix base and pthreads.
    That then of course is much the traditional UNIX account of where
    "everything is a file", though, the operating system itself doesn't
    need to be implemented that way, just surface the usual objects as
    they are as primitives, and mostly as having file handles.
    (If the sources compile and run the same behavior some won't know/care.)

    Then, objects, according to "naming and directory interface" usually
    enough, Orange Book for example defines granular access controls,
    so, including all things like files.


    About quota and limits and the like, and about the perceived value
    of pre-emptive scheduling to avoid "hogging", or thrashing, here
    is an account of basically unmaskable uncatchable interrupts that
    have as a signal handler the operating system code on the core
    that results making the task yield, then necessarily enough using
    the usual context-switch machinery to pause it and restart it.
    Most code eventually touches system calls, and if there's a spare
    core it might actually be the idea to let the compute-intensive
    routine employ the entire core.

    The many-core architectures of these days, even fifteen or twenty
    years ago with "AMD Bulldozer and 8 cores" and the like, these
    days usual PC or server chips have scores of cores, ..., often
    for example with the idea of running a giant hypervisor then
    as many virts, ..., like a Kubernetes cluster for example, ...,
    or simply a ton of virts, ..., these days a single board is
    as a model of a distributed system internal to itself.


    The idea for allocation and sharing that "fairness is a matter
    of mechanism, not policy", is for the usual ideas of "throughput"
    and "transput" as Finkel put it in "An Operating Systems Vade Mecum",
    about I/O and queues, and limits.


    "Interrupts" are the events, "coherent cache lines" the units of
    serialization of memory, DMA is the bulk transfer medium in protocol,
    a byte is the smallest addressable memory unit, these are mostly
    the ordering guarantees, all else "undefined".

    Proximity, affinity, coherency, mobility, ....


    We build electronics. Our new PoE instrument line uses an RP2040
    dual-ARM chip overclocked to 150 MHz. It costs 75 cents in any
    quantity.

    We call the two CPUs (and the two ends of the box) Alice and Bob.

    Alice does the ethernet and usb i/o, command parsing, calibrations,
    all that slow floating-point management. Bob does the realtime i/o, fixed-point, directly or through an FPGA.

    All programmed in bare-metal c with no OS. Abstraction=0.

    Seems to work.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Don Y@3:633/10 to All on Sunday, March 29, 2026 17:09:57
    On 3/29/2026 2:29 PM, Ross Finlayson wrote:
    Hey, thanks for writing. Since all we know about each other
    are these brief exchanges, filling in some detail helps a lot
    to understand, or rather, get an idea, of an estimate of the
    depth of the comprehension of the whole machine stack.

    Perhaps you should relate the types of applications you've been
    tasked with in the past -- or, those you hope to target in the
    future. Frankly, your comments read like word-salad -- failing
    to appreciate or interact with my prior comments and, instead,
    babbling bits and pieces you've read somewhere instead of reflecting
    a genuine understanding of the issue(s).

    E.g., there's no *why* behind your statements. No "hope" in their
    proposals.

    Experience provides both of these and, without it, future
    endeavours tend to fail -- miserably. (if you don't learn...)

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Ross Finlayson@3:633/10 to All on Sunday, March 29, 2026 20:48:04
    On 03/29/2026 05:09 PM, Don Y wrote:
    On 3/29/2026 2:29 PM, Ross Finlayson wrote:
    Hey, thanks for writing. Since all we know about each other
    are these brief exchanges, filling in some detail helps a lot
    to understand, or rather, get an idea, of an estimate of the
    depth of the comprehension of the whole machine stack.

    Perhaps you should relate the types of applications you've been
    tasked with in the past -- or, those you hope to target in the
    future. Frankly, your comments read like word-salad -- failing
    to appreciate or interact with my prior comments and, instead,
    babbling bits and pieces you've read somewhere instead of reflecting
    a genuine understanding of the issue(s).

    E.g., there's no *why* behind your statements. No "hope" in their
    proposals.

    Experience provides both of these and, without it, future
    endeavours tend to fail -- miserably. (if you don't learn...)

    Heh. Let's say I've worked in distributed systems for a few decades.

    Here's how I'd begin to frame it. Let's say we all know "Turing tapes"
    from the theory. Everybody knows a mental model of a tape, with
    discrete differences in it or values with ordinal offsets,
    a tape and a read-head and a write-head, the idea that
    that's a "model of computation". That's pretty obvious to anybody,
    and everybody knows that in "the formal" that all kinds of results
    are said to follow from it, while at the same time, nobody thinks
    that way because it's just cumbersome or not particularly apt.
    Everybody has known that since they have one at the beach, a great
    computer, it's a giant scratch-pad and any way one cares to mark it with stick-marks and pebbles, and rules, is just as good a model of
    computation. Now, say we've all heard of "Towers of Hanoi" and
    otherwise, for example, how a stack machine is "equivalent" or "equi-interpretable" to other "models of computation", point being
    that results in one are the same results eventually as the other.

    Then, say we've all heard of something like Knuth's "Mix",
    an abstract of a virtual machine.

    So, today we all know about, for example, the instruction pointer
    and the stack pointer, so "von Neumann", then according to the
    architecture the "instruction set architecture", that there's
    mostly a model of synchronous routine, then though as with regards
    to "interrupt service routine", a model of asynchrony in the
    synchronous ("blocking").

    Then, also we know about that basically since when DMA direct memory
    access made for vector or scatter/gather I/O, that instead of a
    model of asynchrony in the synchronous, now it's a model of synchrony
    in the asynchronous ("non-blocking"), sort of like any other distributed system.


    So, what inspires my outline is that the model of computation,
    here "actors on the bus", should more than less be the model
    of computation, then that formal statements about it, directly apply,
    to the implementation.

    So, that's "why".


    Mostly I thought of these things myself, because nobody already
    bothered to point out that de facto the commodity hardware is
    best treated as a model of a self-contained distributed system.


    (In case you're wondering then also my approach to Mathematics
    and Physics in Foundations has been called "Word Salad", though
    I prefer to think of it as a nutritious enough "Word Soup",
    after "Alphabet Soup", which is full of letters even if you'd
    rather toss it than bother to make words in the spoon, i.e.,
    I'll aver that it's coming from and going to a good place,
    then those rambles or "the bot panel" from the "Critix/DeepOs"
    help provide some context.)


    I don't disagree with your previous comments, I understand
    from "generous reading" that accounts of reliability and
    robustness and flexibility and function and so on are usual,
    and sensible.

    "Equilibrium is always both at equilibrium and seeking equilibrium."





    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Ross Finlayson@3:633/10 to All on Sunday, March 29, 2026 20:54:28
    On 03/29/2026 03:24 PM, john larkin wrote:
    On Sun, 29 Mar 2026 14:29:32 -0700, Ross Finlayson <ross.a.finlayson@gmail.com> wrote:

    On 03/29/2026 12:39 PM, Don Y wrote:
    On 3/29/2026 5:53 AM, Ross Finlayson wrote:
    That seems to speak to "proximity" and "affinity", with regards
    to "coherency", and "mobility". To "move" state, about state & scope
    or the contents of (some of) memory and registers, of a process or task,
    here is described as "re-seating" which is also the usual enough
    idea in programming like C and C++.

    My goal is NOT to disrupt the "programmer's model" for "average
    developers". They should truly be able to think that they are running
    on a uniprocessor but without any guarantees as to throughput
    (to accommodate sharing the physical processor, communication
    overhead, etc.).

    They shouldn't need to know where "they" are executing or that the
    resources on which they rely may not be local.

    "Advanced developers" attend to the dynamic reconfiguration of
    the system (at runtime). So, THOSE developers build applications
    that are given the ability ("capability") to relocate resources,
    kill off tasks, spawn new ones, etc. Because they, presumably, have
    a more detailed understanding of the "System" beyond the scope of
    some particular application/task within it.

    Perhaps the most usual example is pre-emptive multithreading itself,
    about basically the state as a stack, and "process control block"
    and "thread control block" usually enough, about context-switching.

    I support a heterogeneous environment; an object can be migrated
    to a different processor *family* at any time (while executing).
    You shouldn't care as long as the interface (methods) to the
    object remain available (for the capabilities you have been
    granted) along with the current state of the object.

    Similarly, the algorithms used to implement those methods (as
    well as internal data members) can change dynamically -- as long
    as the interface functionality remains immutable.

    For example, I create namespaces -- (name, capability) dictionaries -- for
    each process. If you only need a few registered names (stdin, stdout, stderr),
    the code that implements that is entirely different than the implementation
    that tries to manage hundreds of named entries.

    As it should be. (because "you" are "billed" for the resources that you
    use, you'd not want to bear the cost of an implementation that did more
    than you needed -- just like you wouldn't develop a standalone device
    with more complexity/cost than necessary!)

    If names are insignificant (e.g., akin to file descriptors), then an
    implementation might use a single "char" to represent a name, giving
    you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc.
    Or, treat that char as an 8 bit int -- 0x01, 0x02, 0x61, 0x62, ...
    (do you really *need* names like "Object 1", "Object 2", etc.?)

    In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
    the idea of the "re-routine" as a model of asynchronous concurrency
    is a little different than the usual idea of a co-routine, which
    is usually enough a fork in the process model, then about signals
    as IPC with PID and PPID, vis-a-vis fibers and threads or events
    and task queues. Basically the re-routine never "blocks" and has
    no "yield" nor "async" keywords in the source text; instead any
    call to a re-routine implicitly yields, and then the re-routine
    is run again later, the re-run, whereas the re-routine is filled in.
    The re-routine then adds a penalty of basically n^2 in time
    to be completely non-blocking, with asynchrony modeled in
    the language as the normal procedural flow-of-control.

    "Yield" is just a hint to the scheduler. If you have a preemptive
    implementation where "time" can be a preemption criterion, then a
    task need never "suggest" a good place to relinquish the processor.

    OTOH, if you can only preempt when the task invokes an OS primitive,
    then you have to be wary of a developer spinning without ever giving
    the system a chance to "interrupt".

    Then, as that's only in the kernel itself, that n^2 might seem a
    huge penalty, yet, it's actually quite under that, since as the
    re-routine its data (in a stack) is filled in, then most of its
    routine is cache hits, the "memoized" calls to the re-routine.

    About the allocator, then this design concept basically is for
    making use of virtual memory, to be able to "re-seat" the memory
    of a process without changing a process' view of the memory.

    Of course. But, with VMM, you can do so much more:
    - CoW semantics
    - DSM
    - remapping "defective" memory
    - releasing memory that will NEVER be revisited
    - universal call by value (for large arguments)
    etc. None of these things need impact the developer.

    This can help avoid both syscalls and memory fragmentation,
    since memory paging basically is performed by the user-space
    process in its time instead of by the kernel. This has the
    usual guarantees of process memory that it's to be visible only
    to the process itself unless explicitly shared, that then being
    treated as a usual sort of shared resource in the distributed sense.

    The syscalls by a process essentially yield (the process yields
    to the scheduler), about ideas like round-robin and fairness
    and anti-starvation and incremental-progress in the scheduler,
    while it's so that until a process gets any other signal and
    only touches its own memory that's non-yielding, then about
    the machinery of pre-emptive multithreading or context-switch
    and as with regards to hyper-threading or the interleaved contexts
    on the double-pipeline CPUs, the idea being that context-switching
    is along the lines of basically a periodic signal interrupt.

    But you have to guard against "non-cooperative" (and even HOSTILE!)
    actors who may wish to compromise performance by monopolizing resources
    (of which the CPU is but one).

    Using resource ledgers lets you constrain an application (process)
    to a subset of the available resources. Putting runtime constraints
    on memory, time, etc. lets you remove a "bad actor" from the set
    of processes eligible to run. A persistent store lets you remember
    this decision so you don't "readmit" the process at some future date!

    Notions of "Orange Book" and "mandatory access control" then
    are considered "more than good ideas" with regards to the allocator
    and scheduler of resources in computation.

    Capabilities implicitly limit the actions that can be performed (i.e.,
    methods that can be invoked) on an object. E.g., I can let you
    LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
    once, and never again!

    How you refine your "permissions" is something that belongs in the
    objects themselves -- not layered onto a "filesystem" as an afterthought.


    Hey, thanks for writing. Since all we know about each other
    are these brief exchanges, filling in some detail helps a lot
    to understand, or rather, get an idea, of an estimate of the
    depth of the comprehension of the whole machine stack.

    The idea of a kernel or operating system (executive, scheduler,
    ..., interactive "operating system") for "commodity" architectures
    these days is that it's pretty ubiquitous the various chips'
    architectures, then that PCIe is on everything PC, then about
    usually enough USB, then about the NIC and USB root, those are
    pretty much ubiquitously PCIe devices, or as after UEFI and ACPI
    and the SMI or as about DeviceTree, ..., it's ubiquitous,
    after "economy of scale" a simple enough "economy of ubiquity".

    So, the chips are almost all 64-bit their native word width, though
    agreeably sometimes it's 128, and they have various vector or SIMD
    instructions, then as with regards to fitting two operands in a word,
    the SWAR approach, about vectorizing the scalars, and word and
    double-word and word and half-word.

    Then, here mostly the consideration is the "head-less" or "HID-less",
    there's no human interface device involved in server runtime images
    for things like running services or usually enough "boxes" or "nodes".


    Then, for compiling existing sources, it seems the easiest way to
    do that is to implement profiles of POSIX, or posix base and pthreads.
    That then of course is much the traditional UNIX account of where
    "everything is a file", though, the operating system itself doesn't
    need to be implemented that way, just surface the usual objects as
    they are as primitives, and mostly as having file handles.
    (If the sources compile and run the same behavior some won't know/care.)

    Then, objects, according to "naming and directory interface" usually
    enough, Orange Book for example defines granular access controls,
    so, including all things like files.


    About quota and limits and the like, and about the perceived value
    of pre-emptive scheduling to avoid "hogging", or thrashing, here
    is an account of basically unmaskable uncatchable interrupts that
    have as a signal handler the operating system code on the core
    that results making the task yield, then necessarily enough using
    the usual context-switch machinery to pause it and restart it.
    Most code eventually touches system calls, and if there's a spare
    core it might actually be the idea to let the compute-intensive
    routine employ the entire core.

    The many-core architectures of these days, even fifteen or twenty
    years ago with "AMD Bulldozer and 8 cores" and the like, these
    days usual PC or server chips have scores of cores, ..., often
    for example with the idea of running a giant hypervisor then
    as many virts, ..., like a Kubernetes cluster for example, ...,
    or simply a ton of virts, ..., these days a single board is
    as a model of a distributed system internal to itself.


    The idea for allocation and sharing that "fairness is a matter
    of mechanism, not policy", is for the usual ideas of "throughput"
    and "transput" as Finkel put it in "An Operating Systems Vade Mecum",
    about I/O and queues, and limits.


    "Interrupts" are the events, "coherent cache lines" the units of
    serialization of memory, DMA is the bulk transfer medium in protocol,
    a byte is the smallest addressable memory unit, these are mostly
    the ordering guarantees, all else "undefined".

    Proximity, affinity, coherency, mobility, ....


    We build electronics. Our new PoE instrument line uses an RP2040
    dual-ARM chip overclocked to 150 MHz. It costs 75 cents in any
    quantity.

    We call the two CPUs (and the two ends of the box) Alice and Bob.

    Alice does the ethernet and usb i/o, command parsing, calibrations,
    all that slow floating-point management. Bob does the realtime i/o, fixed-point, directly or through an FPGA.

    All programmed in bare-metal c with no OS. Abstraction=0.

    Seems to work.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics


    That seems cool. So, you wrote your own USB and packet stack?
    Or, it's a system-on-chip?

    I drive my tractor with my hands on the wheels and the sticks
    and the levers and the other levers and the feet on the pedals
    and the other pedals and my rear in the seat, ..., abstraction = 0.




    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Ross Finlayson@3:633/10 to All on Sunday, March 29, 2026 21:07:22
    On 03/29/2026 08:54 PM, Ross Finlayson wrote:
    On 03/29/2026 03:24 PM, john larkin wrote:
    On Sun, 29 Mar 2026 14:29:32 -0700, Ross Finlayson
    <ross.a.finlayson@gmail.com> wrote:

    On 03/29/2026 12:39 PM, Don Y wrote:
    On 3/29/2026 5:53 AM, Ross Finlayson wrote:
    That seems to speak to "proximity" and "affinity", with regards
    to "coherency", and "mobility". To "move" state, about state & scope
    or the contents of (some of) memory and registers, of a process or
    task,
    here is described as "re-seating" which is also the usual enough
    idea in programming like C and C++.

    My goal is NOT to disrupt the "programmer's model" for "average
    developers". They should truly be able to think that they are running
    on a uniprocessor but without any guarantees as to throughput
    (to accommodate sharing the physical processor, communication
    overhead, etc.).

    They shouldn't need to know where "they" are executing or that the
    resources on which they rely may not be local.

    "Advanced developers" attend to the dynamic reconfiguration of
    the system (at runtime). So, THOSE developers build applications
    that are given the ability ("capability") to relocate resources,
    kill off tasks, spawn new ones, etc. Because they, presumably, have
    a more detailed understanding of the "System" beyond the scope of
    some particular application/task within it.

    Perhaps the most usual example is pre-emptive multithreading itself,
    about basically the state as a stack, and "process control block"
    and "thread control block" usually enough, about context-switching.

    I support a heterogeneous environment; an object can be migrated
    to a different processor *family* at any time (while executing).
    You shouldn't care as long as the interface (methods) to the
    object remain available (for the capabilities you have been
    granted) along with the current state of the object.

    Similarly, the algorithms used to implement those methods (as
    well as internal data members) can change dynamically -- as long
    as the interface functionality remains immutable.

    For example, I create namespaces -- (name, capability) dictionaries -- for
    each process. If you only need a few registered names (stdin, stdout, stderr),
    the code that implements that is entirely different than the implementation
    that tries to manage hundreds of named entries.

    As it should be. (because "you" are "billed" for the resources that you
    use, you'd not want to bear the cost of an implementation that did more
    than you needed -- just like you wouldn't develop a standalone device
    with more complexity/cost than necessary!)

    If names are insignificant (e.g., akin to file descriptors), then an
    implementation might use a single "char" to represent a name, giving
    you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc.
    Or, treat that char as an 8 bit int -- 0x01, 0x02, 0x61, 0x62, ...
    (do you really *need* names like "Object 1", "Object 2", etc.?)

    In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
    the idea of the "re-routine" as a model of asynchronous concurrency
    is a little different than the usual idea of a co-routine, which
    is usually enough a fork in the process model, then about signals
    as IPC with PID and PPID, vis-a-vis fibers and threads or events
    and task queues. Basically the re-routine never "blocks" and has
    no "yield" nor "async" keywords in the source text; instead any
    call to a re-routine implicitly yields, and then the re-routine
    is run again later, the re-run, whereas the re-routine is filled in.
    The re-routine then adds a penalty of basically n^2 in time
    to be completely non-blocking, with asynchrony modeled in
    the language as the normal procedural flow-of-control.

    "Yield" is just a hint to the scheduler. If you have a preemptive
    implementation where "time" can be a preemption criterion, then a
    task need never "suggest" a good place to relinquish the processor.

    OTOH, if you can only preempt when the task invokes an OS primitive,
    then you have to be wary of a developer spinning without ever giving
    the system a chance to "interrupt".

    Then, as that's only in the kernel itself, that n^2 might seem a
    huge penalty, yet, it's actually quite under that, since as the
    re-routine its data (in a stack) is filled in, then most of its
    routine is cache hits, the "memoized" calls to the re-routine.

    About the allocator, then this design concept basically is for
    making use of virtual memory, to be able to "re-seat" the memory
    of a process without changing a process' view of the memory.

    Of course. But, with VMM, you can do so much more:
    - CoW semantics
    - DSM
    - remapping "defective" memory
    - releasing memory that will NEVER be revisited
    - universal call by value (for large arguments)
    etc. None of these things need impact the developer.

    This can help avoid both syscalls and memory fragmentation,
    since memory paging basically is performed by the user-space
    process in its own time instead of by the kernel. This keeps the
    usual guarantee of process memory: it's visible only
    to the process itself unless explicitly shared, and anything shared
    is then treated as a usual sort of shared resource in the distributed sense.
    The syscalls by a process essentially yield (the process yields
    to the scheduler), with the usual ideas of round-robin and fairness
    and anti-starvation and incremental progress in the scheduler.
    Yet until a process gets some other signal, a process that
    only touches its own memory is non-yielding, which is where
    the machinery of pre-emptive multithreading or context-switch
    comes in, and as with regards to hyper-threading or the interleaved
    contexts on the double-pipeline CPUs, the idea being that
    context-switching is along the lines of basically a periodic
    signal interrupt.

    But you have to guard against "non-cooperative" (and even HOSTILE!)
    actors who may wish to compromise performance by monopolizing resources
    (of which the CPU is but one).

    Using resource ledgers lets you constrain an application (process)
    to a subset of the available resources. Putting runtime constraints
    on memory, time, etc. lets you remove a "bad actor" from the set
    of processes eligible to run. A persistent store lets you remember
    this decision so you don't "readmit" the process at some future date!
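    The ledger idea can be sketched as follows (a hypothetical
    `Ledger`/`Scheduler` pair; the in-memory `banned` set stands in for
    the persistent store):

```python
# Each process is billed against fixed quotas; exceeding one evicts it
# and records the verdict so the process is never readmitted.

class Ledger:
    def __init__(self, cpu_ms, mem_kb):
        self.limits = {"cpu_ms": cpu_ms, "mem_kb": mem_kb}
        self.used = {"cpu_ms": 0, "mem_kb": 0}

    def charge(self, resource, amount):
        self.used[resource] += amount
        return self.used[resource] <= self.limits[resource]

class Scheduler:
    def __init__(self):
        self.ledgers = {}
        self.banned = set()           # stand-in for a persistent store

    def admit(self, pid, cpu_ms, mem_kb):
        if pid in self.banned:
            return False              # remembered decision: no readmission
        self.ledgers[pid] = Ledger(cpu_ms, mem_kb)
        return True

    def charge(self, pid, resource, amount):
        if not self.ledgers[pid].charge(resource, amount):
            self.banned.add(pid)      # remove the bad actor...
            del self.ledgers[pid]     # ...from the eligible set
            return False
        return True

sched = Scheduler()
sched.admit(42, cpu_ms=100, mem_kb=64)
ok = sched.charge(42, "cpu_ms", 150)               # over quota: evicted
readmitted = sched.admit(42, cpu_ms=100, mem_kb=64)  # and stays out
```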

    Notions of "Orange Book" and "mandatory access control" then
    are considered "more than good ideas" with regards to the allocator
    and scheduler of resources in computation.

    Capabilities implicitly limit the actions that can be performed (i.e.,
    methods that can be invoked) on an object. E.g., I can let you
    LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
    once, and never again!
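    Both door examples can be sketched directly (the `Door` and
    capability classes are hypothetical illustrations of the idea, not
    any particular capability system):

```python
# A capability that grants LOCK but never UNLOCK, and a one-shot
# capability that opens a door exactly once, then revokes itself.

class Door:
    def __init__(self):
        self.locked = False
        self.open_count = 0

class LockOnlyCap:
    """Grants lock(); there is simply no unlock method to invoke."""
    def __init__(self, door):
        self._door = door
    def lock(self):
        self._door.locked = True

class OpenOnceCap:
    """Grants open() exactly once; the capability self-revokes."""
    def __init__(self, door):
        self._door = door
    def open(self):
        if self._door is None:
            raise PermissionError("capability already used")
        self._door.open_count += 1
        self._door = None             # no second use is possible

door = Door()
LockOnlyCap(door).lock()              # can lock, can never unlock
once = OpenOnceCap(door)
once.open()                           # works exactly once
```

    The permissions live in the capability object itself, not in some
    access-control list bolted onto a filesystem.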

    How you refine your "permissions" is something that belongs in the
    objects themselves -- not layered onto a "filesystem" as an
    afterthought.



    Hey, thanks for writing. Since all we know about each other
    are these brief exchanges, filling in some detail helps a lot
    to understand, or rather to get an estimate of, the
    depth of each other's comprehension of the whole machine stack.

    The idea of a kernel or operating system (executive, scheduler,
    ..., interactive "operating system") for "commodity" architectures
    these days is that the various chips' architectures are pretty
    ubiquitous: PCIe is on everything PC, usually enough USB too,
    and the NIC and USB root are pretty much ubiquitously PCIe
    devices, whether after UEFI and ACPI and the SMI or via
    DeviceTree, ..., it's ubiquitous,
    after "economy of scale" a simple enough "economy of ubiquity".

    So, the chips are almost all 64-bit in their native word width (though
    agreeably sometimes it's 128), and they have various vector or SIMD
    instructions; then, as with regards to fitting two operands in a word,
    there's the SWAR approach, vectorizing the scalars over word and
    double-word and half-word.
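    The SWAR ("SIMD Within A Register") trick mentioned above can be
    shown with plain integer arithmetic; a sketch (function names are
    mine, masks follow the standard lane-wise-add construction):

```python
# Pack two 16-bit lanes into one 32-bit word and add both lanes with
# ordinary integer ops, without letting a carry cross the lane boundary.

def swar_add16x2(a, b):
    """Lane-wise 16-bit add of two packed 32-bit words (wraps per lane)."""
    high = 0x80008000              # top bit of each 16-bit lane
    mask = 0x7FFF7FFF              # everything below the top bits
    # Add the low 15 bits of each lane (cannot overflow a lane), then
    # fold the lanes' top bits back in with XOR, discarding lane carries.
    partial = (a & mask) + (b & mask)
    return partial ^ ((a ^ b) & high)

def pack(hi, lo):
    return ((hi & 0xFFFF) << 16) | (lo & 0xFFFF)

x = pack(1, 0xFFFF)                # lane1 = 1, lane0 = 65535
y = pack(2, 1)                     # lane1 = 2, lane0 = 1
z = swar_add16x2(x, y)             # lane0 wraps to 0, lane1 = 3
```

    One add instruction does two (or four, or eight) lanes' worth of
    work, which is the whole appeal on wide-word commodity chips.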

    Then, here mostly the consideration is the "head-less" or "HID-less":
    there's no human interface device involved in server runtime images
    for things like running services on the usual "boxes" or "nodes".


    Then, for compiling existing sources, it seems the easiest way to
    do that is to implement profiles of POSIX, or POSIX base and pthreads.
    That of course is much the traditional UNIX account where
    "everything is a file", though the operating system itself doesn't
    need to be implemented that way -- just surface the usual objects
    as primitives, mostly as having file handles.
    (If the sources compile and run with the same behavior, some won't
    know/care.)

    Then, objects go according to a "naming and directory interface"
    usually enough; Orange Book for example defines granular access
    controls, including for all things like files.


    About quota and limits and the like, and the perceived value
    of pre-emptive scheduling to avoid "hogging" or thrashing: here
    the account is of basically unmaskable, uncatchable interrupts that
    have as their signal handler the operating system code on the core,
    which results in making the task yield, then necessarily using
    the usual context-switch machinery to pause it and restart it.
    Most code eventually touches system calls, and if there's a spare
    core it might actually be the idea to let the compute-intensive
    routine employ the entire core.
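    A small user-space model of that "timer interrupt forces a yield"
    idea, using a POSIX interval timer (Unix-only; the handler and loop
    are illustrative, not any OS's actual mechanism):

```python
# A spinning task that never voluntarily calls the OS still gets
# interrupted: a one-shot SIGALRM fires and its handler (playing the
# role of kernel code on the core) makes the busy loop give up.

import signal

preempted = []

def preempt_handler(signum, frame):
    preempted.append(True)         # the "OS" runs in interrupt context

signal.signal(signal.SIGALRM, preempt_handler)
signal.setitimer(signal.ITIMER_REAL, 0.05)   # one-shot timer, 50 ms

spins = 0
while not preempted:               # compute-bound, no voluntary yield
    spins += 1                     # ...yet the timer interrupts it

signal.setitimer(signal.ITIMER_REAL, 0)      # cancel the timer
```

    A real kernel does the same thing one level down: the timer interrupt
    is unmaskable from the task's point of view, and the handler invokes
    the context-switch machinery instead of appending to a list.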

    The many-core architectures of these days -- even fifteen or twenty
    years ago with "AMD Bulldozer and 8 cores" and the like -- mean the
    usual PC or server chips now have scores of cores, ..., often
    for example with the idea of running a giant hypervisor and then
    as many virts, ..., like a Kubernetes cluster for example, ...,
    or simply a ton of virts, ..., these days a single board is
    a model of a distributed system internal to itself.


    The idea for allocation and sharing, that "fairness is a matter
    of mechanism, not policy", is for the usual ideas of "throughput"
    and "transput" as Finkel put it in "An Operating Systems Vade Mecum",
    about I/O and queues, and limits.


    "Interrupts" are the events, "coherent cache lines" the units of
    serialization of memory, DMA is the bulk transfer medium in protocol,
    a byte is the smallest addressable memory unit, these are mostly
    the ordering guarantees, all else "undefined".

    Proximity, affinity, coherency, mobility, ....


    We build electronics. Our new PoE instrument line uses an RP2040
    dual-ARM chip overclocked to 150 MHz. It costs 75 cents in any
    quantity.

    We call the two CPUs (and the two ends of the box) Alice and Bob.

    Alice does the ethernet and usb i/o, command parsing, calibrations,
    all that slow floating-point management. Bob does the realtime i/o,
    fixed-point, directly or through an FPGA.

    All programmed in bare-metal c with no OS. Abstraction=0.

    Seems to work.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics


    That seems cool. So, you wrote your own USB and packet stack?
    Or, it's a system-on-chip?

    I drive my tractor with my hands on the wheels and the sticks
    and the levers and the other levers and the feet on the pedals
    and the other pedals and my rear in the seat, ..., abstraction = 0.




    Attitudes are various about "abstraction" and "concreteness".
    (Attitudes or "opinions".) Some prefer to write more or less
    directly to the concrete adapter, others prefer to model the
    interaction since usually only a tiny, tiny subset of the
    "defined behavior" of the concrete adapter fulfills its
    abstract function.

    It takes all kinds, ....

    "The Blind Men and the Elephant" is a usual sort of account,
    and it's the same kind of idea since forever: any two
    individuals are going to see things differently, and it's a
    question whether they even see the same thing at all --
    "subjectivity". Then there's the great formal and practical
    account of the formal, the "interfaces" usually enough,
    interfaces to the adapters -- "inter-subjectivity" -- so when
    we clock out we can say it's done.


    When I see someone writing directly to the concrete
    adapter, sometimes it's hard to distinguish that from,
    or easy to read that as, "Hello, World".

    Then, "layers" is usually enough the idea of making
    models of modules, in layers, so that the boundaries
    exist, and for example so that the code's logic is
    "separable and composable", then to point it at other
    adapters' interfaces or harnesses, for example
    for "systems under test" vis-a-vis "systems under load".



    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Ross Finlayson@3:633/10 to All on Sunday, March 29, 2026 21:16:37
    On 03/29/2026 09:07 PM, Ross Finlayson wrote:
    On 03/29/2026 08:54 PM, Ross Finlayson wrote:
    On 03/29/2026 03:24 PM, john larkin wrote:
    On Sun, 29 Mar 2026 14:29:32 -0700, Ross Finlayson
    <ross.a.finlayson@gmail.com> wrote:

    On 03/29/2026 12:39 PM, Don Y wrote:
    On 3/29/2026 5:53 AM, Ross Finlayson wrote:
    [... full quote of the preceding exchange trimmed ...]

    I'm not a mechanic, yet the metaphor about the tractor,
    for example, is about a compression test. If the engine's
    misfiring or burning oil or something, then a usual enough
    idea is to remove one of the spark plugs, attach a compression
    tester, turn over the starter motor, and measure the
    compression and perhaps vacuum, then compare it to a table
    from the manufacturer to diagnose what's going on. So,
    that's an abstraction; all the activity that goes into
    maintenance for operation is essentially abstraction.

    Or, you know, according to instruction.

    Exercises then in "abstraction and concreteness"
    are totally usual, and figuring them out is, for
    example, the difference between writing logic
    and writing libraries -- one for concreteness,
    the other for abstraction.

    It goes both ways, ..., it takes all kinds.

    (I am definitely a software engineer and
    have written many, many KLOCs in prod and under
    continuous test. Or, as my desk neighbor once
    told me, she said, "your code is solid". So,
    I'm familiar with distributed systems on
    the order of orders and events.)




    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From john larkin@3:633/10 to All on Monday, March 30, 2026 07:44:48
    On Sun, 29 Mar 2026 20:54:28 -0700, Ross Finlayson
    <ross.a.finlayson@gmail.com> wrote:

    On 03/29/2026 03:24 PM, john larkin wrote:
    On Sun, 29 Mar 2026 14:29:32 -0700, Ross Finlayson
    <ross.a.finlayson@gmail.com> wrote:

    On 03/29/2026 12:39 PM, Don Y wrote:
    On 3/29/2026 5:53 AM, Ross Finlayson wrote:
    That seems to speak to "proximity" and "affinity", with regards
    to "coherency", and "mobility". To "move" state, about state & scope >>>>> or the contents of (some of) memory and registers, of a process or task, >>>>> here is described as "re-seating" which is also the usual enough
    idea in programming like C and C++.

    My goal is NOT to disrupt the "programmer's model" for "average
    developers". They should truly be able to think that they are running >>>> on a uniprocessor but without any guarantees as to throughput
    (to accommodate sharing the physical processor, communication
    overhead, etc.).

    They shouldn't need to know where "they" are executing or that the
    resources on which they rely may not be local.

    "Advanced developers" attend to the dynamic reconfiguration of
    the system (at runtime). So, THOSE developers build applications
    that are given the ability ("capability") to relocate resources,
    kill off tasks, spawn new ones, etc. Because they, presumably, have
    a more detailed understanding of the "System" beyond the scope of
    some particular application/task within it.

    Perhaps the most usual example is pre-emptive multithreading itself, >>>>> about basically the state as a stack, and "process control block"
    and "thread control block" usually enough, about context-switching.

    I support a heterogeneous environment; an object can be migrated
    to a different processor *family* at any time (while executing).
    You shouldn't care as long as the interface (methods) to the
    object remain available (for the capabilities you have been
    granted) along with the current state of the object.

    Similarly, the algorithms used to implement those methods (as
    well as internal data members) can change dynamically -- as long
    as the interface functionality remains immutable.

    For example, I create namespaces -- (name, capability) dictionaries -- for >>>> each process. If you only need a few registered names (stdin, stdout, >>>> stderr),
    the code that implements that is entirely different than the implementation
    that tries to manage hundreds of named entries.

    As it should be. (because "you" are "billed" for the resources that you >>>> use, you'd not want to bear the cost of an implementation that did more >>>> than you needed -- just like you wouldn't develop a standalone device
    with more complexity/cost than necessary!)

    If names are insignificant (e.g., akin to file descriptors), then an
    implementation might use a single "char" to represent a name, giving
    you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc. >>>> Or, treat that char as an 8 bit int -- 0x01, 0x02, 0x61, 0x62, ...
    (do you really *need* names like "Object 1", "Object 2", etc.?)

    In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
    the idea of the "re-routine" as a model of asychronous concurrency,
    is a little different than the usual idea of a co-routine, which
    is usually enough a fork in the process model, then about signals
    as IPC with PID and PPID, vis-a-vis, fibers and threads or events
    and task queues, basically the re-routine never "blocks" and has
    no "yield" nor "async" keywords in the source text, instead any
    call to a re-routine implicitly yields, and then the re-routine
    is run again later, the re-run, where as the re-routine is filled in, >>>>> then then the re-routine adds a penalty of basically n^2 in time
    to be completely non-blocking and where asynchrony is modeled in
    the language as the normal procedural flow-of-control.

    "Yield" is just a hint to the scheduler. If you have a preemptive
    implementation where "time" can be a preemption criteria, then a
    task need never "suggest" a good place to relinquish the processor.

    OTOH, if you can only preempt when the task invokes an OS primitive,
    then you have to be wary of a developer spinning without ever giving
    the system a chance to "interrupt".

    Then, as that's only in the kernel itself, that n^2 might seem a
    huge penalty, yet, it's actually quite under that, since as the
    re-routine its data (in a stack) is filled in, then most of its
    routine is cache hits, the "memoized" calls to the re-routine.

    About the allocator, then this design concept basically is for
    making use of virtual memory, to be able to "re-seat" the memory
    of a process without changing a process' view of the memory.

    Of course. But, with VMM, you can do so much more:
    - CoW semantics
    - DSM
    - remapping "defective" memory
    - releasing memory that will NEVER be revisited
    - universal call by value (for large arguments)
    etc. None of these things need impact the developer.

    This can help avoid both syscalls and memory fragmentation,
    since memory paging basically is performed by the user-space
    process in its time instead of by the kernel. This has the
    usual guarantees of process memory that it's to be visible only
    to the process itself unless explicitly shared, that then being
    treated as a usual sort of shared resource in the distributed sense. >>>>>
    The syscalls by a process essentially yield (the process yields
    to the scheduler), about ideas like round-robin and fairness
    and anti-starvation and incremental-progress in the scheduler,
    while it's so that until a process gets any other signal and
    only touches its own memory that's non-yielding, then about
    the machinery of pre-emptive multithreading or context-switch
    and as with regards to hyper-threading or the interleaved contexts
    on the double-pipeline CPUs, the idea being that context-switching
    is along the lines of basically a periodic signal interrupt.

    But you have to guard against "non-cooperative" (and even HOSTILE!)
    actors who may wish to compromise performance by monopolizing resources >>>> (of which the CPU is but one).

    Using resource ledgers lets you constrain an application (process)
    to a subset of the available resources. Putting runtime constraints
    on memory, time, etc. lets you remove a "bad actor" from the set
    of processes eligible to run. A persistent store lets you remember
    this decision so you don't "readmit" the process at some future date!

    Notions of "Orange Book" and "mandatory access control" then
    are considered "more than good ideas" with regards to the allocator
    and scheduler of resources in computation.

    Capabilities implicitly limit the actions that can be performed (i.e., >>>> methods that can be invoked) on an object. E.g., I can let you
    LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
    once, and never again!

    How you refine your "permissions" is something that belongs in the
    objects themselves -- not layered onto a "filesystem" as an afterthought. >>>>


    Hey, thanks for writing. Since all we know about each other
    are these brief exchanges, filling in some detail helps a lot
    to understand, or rather, get an idea, of an estimate of the
    depth of the comprehension of the whole machine stack.

    The idea of a kernel or operating system (executive, scheduler,
    ..., interactive "operating system") for "commodity" architectures
    these days is that it's pretty ubiquitous the various chips'
    architectures, then that PCIe is on everything PC, then about
    usually enough USB, then about the NIC and USB root, those are
    pretty much ubiquitously PCIe devices, or as after UEFI and ACPI
    and the SMI or as about DeviceTree, ..., it's ubiquitous,
    after "economy of scale" a simple enough "economy of ubiquity".

    So, the chips are almost all 64-bit their native word width, though
    agreeably sometimes it's 128, and they have various vector or SIMD
    instructions, then as with regards to fitting two operands in a word,
    the SWAR approach, about vectorizing the scalars, and word and
    double-word and word and half-word.

    Then, here mostly the consideration is the "head-less" or "HID-less",
    there's no human interface device involved in server runtime images
    for things like running services or usually enough "boxes" or "nodes".


    Then, for compiling existing sources, it seems the easiest way to
    do that is to implement profiles of POSIX, or posix base and pthreads.
    That then of course is much the traditional UNIX account of where
    "everything is a file", though, the operating system itself doesn't
    need to be implemented that way, just surface the usual objects as
    they are as primitives, and mostly as having file handles.
    (If the sources compile and run the same behavior some won't know/care.) >>>
    Then, objects, according to "naming and directory interface" usually
    enough, Orange Book for example defines granular access controls,
    so, including all things like files.


    About quota and limits and the like, and about the perceived value
    of pre-emptive scheduling to avoid "hogging", or thrashing, here
    is an account of basically unmaskable uncatchable interrupts that
    have as a signal handler the operating system code on the core
    that results making the task yield, then necessarily enough using
    the usually context-switch machinery to pause it and restart it.
    Most code eventually touches system calls, and if there's a spare
    core it might actually be the idea to let the compute-intensive
    routine employ the entire core.
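The timer-interrupt-driven yield described above can be modeled in miniature with a POSIX interval timer: the handler fires even though the busy loop below never makes a system call or yields voluntarily. (In CPython the handler actually runs between bytecodes, which stands in here for the hardware interrupt boundary; this is an illustration of the idea, not kernel code.)

```python
import signal

preempted = []

def on_timer(signum, frame):
    # Models the OS timer interrupt: delivered even though the
    # compute-bound loop below contains no voluntary yield point.
    preempted.append(True)

signal.signal(signal.SIGALRM, on_timer)
signal.setitimer(signal.ITIMER_REAL, 0.05)  # fire once, after 50 ms

# Compute-bound "hog": pure arithmetic, no system calls.
acc = 0
while not preempted:
    acc = (acc + 1) % 1000003

signal.setitimer(signal.ITIMER_REAL, 0)  # cancel any pending timer
```

A real scheduler would use the interrupt to run the context-switch machinery rather than just set a flag, but the control-flow shape is the same.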

    The many-core architectures of these days, even fifteen or twenty
    years ago with "AMD Bulldozer and 8 cores" and the like, these
    days usual PC or server chips have scores of cores, ..., often
    for example with the idea of running a giant hypervisor then
    as many virts, ..., like a Kubernetes cluster for example, ...,
    or simply a ton of virts, ..., these days a single board is
    as a model of a distributed system internal to itself.


    The idea for allocation and sharing that "fairness is a matter
    of mechanism, not policy", is for the usual ideas of "throughput"
    and "transput" as Finkel put it in "An Operating Systems Vade Mecum",
    about I/O and queues, and limits.


    "Interrupts" are the events, "coherent cache lines" the units of
    serialization of memory, DMA is the bulk transfer medium in protocol,
    a byte is the smallest addressable memory unit, these are mostly
    the ordering guarantees, all else "undefined".

    Proximity, affinity, coherency, mobility, ....


    We build electronics. Our new PoE instrument line uses an RP2040
    dual-ARM chip overclocked to 150 MHz. It costs 75 cents in any
    quantity.

    We call the two CPUs (and the two ends of the box) Alice and Bob.

    Alice does the ethernet and usb i/o, command parsing, calibrations,
    all that slow floating-point management. Bob does the realtime i/o,
    fixed-point, directly or through an FPGA.

    All programmed in bare-metal c with no OS. Abstraction=0.

    Seems to work.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics


    That seems cool. So, you wrote your own USB and packet stack?
    Or, it's a system-on-chip?

    I drive my tractor with my hands on the wheels and the sticks
    and the levers and the other levers and the feet on the pedals
    and the other pedals and my rear in the seat, ..., abstraction = 0.



    We use the WizNet ethernet chip and the code that they supply. It's
    more than a mac/phy: it handles packets and protocols, including UDP.

    The RP2040 has a built-in USB interface. The electrical interface to
    the USB-C connector is two resistors. It looks like a COM port to the
    users. What's really slick is that the USB can also run in a mode
    where it looks like a memory stick. To reload the system code, we
    boot into memory stick mode and the user then drag-drops a single file
    to update the box: Alice code, Bob code, and the FPGA config.

    Tractors are cool. Physical and basic.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Ross Finlayson@3:633/10 to All on Monday, March 30, 2026 09:02:28
    On 03/30/2026 07:44 AM, john larkin wrote:
    On Sun, 29 Mar 2026 20:54:28 -0700, Ross Finlayson <ross.a.finlayson@gmail.com> wrote:

    On 03/29/2026 03:24 PM, john larkin wrote:
    On Sun, 29 Mar 2026 14:29:32 -0700, Ross Finlayson
    <ross.a.finlayson@gmail.com> wrote:

    On 03/29/2026 12:39 PM, Don Y wrote:
    On 3/29/2026 5:53 AM, Ross Finlayson wrote:
    That seems to speak to "proximity" and "affinity", with regards
    to "coherency", and "mobility". To "move" state, about state & scope >>>>>> or the contents of (some of) memory and registers, of a process or task, >>>>>> here is described as "re-seating" which is also the usual enough
    idea in programming like C and C++.

    My goal is NOT to disrupt the "programmer's model" for "average
    developers". They should truly be able to think that they are running >>>>> on a uniprocessor but without any guarantees as to throughput
    (to accommodate sharing the physical processor, communication
    overhead, etc.).

    They shouldn't need to know where "they" are executing or that the
    resources on which they rely may not be local.

    "Advanced developers" attend to the dynamic reconfiguration of
    the system (at runtime). So, THOSE developers build applications
    that are given the ability ("capability") to relocate resources,
    kill off tasks, spawn new ones, etc. Because they, presumably, have >>>>> a more detailed understanding of the "System" beyond the scope of
    some particular application/task within it.

    Perhaps the most usual example is pre-emptive multithreading itself, >>>>>> about basically the state as a stack, and "process control block"
    and "thread control block" usually enough, about context-switching. >>>>>
    I support a heterogeneous environment; an object can be migrated
    to a different processor *family* at any time (while executing).
    You shouldn't care as long as the interface (methods) to the
    object remain available (for the capabilities you have been
    granted) along with the current state of the object.

    Similarly, the algorithms used to implement those methods (as
    well as internal data members) can change dynamically -- as long
    as the interface functionality remains immutable.

    For example, I create namespaces -- (name, capability) dictionaries -- for
    each process. If you only need a few registered names (stdin, stdout, >>>>> stderr),
    the code that implements that is entirely different than the implementation
    that tries to manage hundreds of named entries.

    As it should be. (because "you" are "billed" for the resources that you >>>>> use, you'd not want to bear the cost of an implementation that did more >>>>> than you needed -- just like you wouldn't develop a standalone device >>>>> with more complexity/cost than necessary!)

    If names are insignificant (e.g., akin to file descriptors), then an >>>>> implementation might use a single "char" to represent a name, giving >>>>> you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc. >>>>> Or, treat that char as an 8 bit int -- 0x01, 0x02, 0x61, 0x62, ...
    (do you really *need* names like "Object 1", "Object 2", etc.?)
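The cheap implementation alluded to above, for a process that registers only a handful of single-char names, can be sketched as a flat 256-slot table indexed by the name byte itself. `SmallNamespace` is an invented name and this is only an illustration of the size-of-implementation point, not the actual code being described:

```python
# Illustrative sketch: a per-process namespace mapping 8-bit names to
# capabilities.  With names limited to one char, a flat table replaces
# the dictionary machinery a hundreds-of-entries namespace would need.
class SmallNamespace:
    def __init__(self):
        self._slots = [None] * 256   # one slot per possible 8-bit name

    def bind(self, name, capability):
        self._slots[ord(name)] = capability

    def lookup(self, name):
        cap = self._slots[ord(name)]
        if cap is None:
            raise KeyError(name)
        return cap

ns = SmallNamespace()
ns.bind("\x00", "stdin-cap")   # names need not be printable
ns.bind("a", "stdout-cap")
```

The lookup is a single array index; a process is "billed" for a 256-pointer table instead of a general dictionary.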

    In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
    the idea of the "re-routine" as a model of asynchronous concurrency, >>>>>>
    is usually enough a fork in the process model, then about signals
    as IPC with PID and PPID, vis-a-vis, fibers and threads or events
    and task queues, basically the re-routine never "blocks" and has
    no "yield" nor "async" keywords in the source text, instead any
    call to a re-routine implicitly yields, and then the re-routine
    is run again later, the re-run, where as the re-routine is filled in, >>>>>> then the re-routine adds a penalty of basically n^2 in time
    to be completely non-blocking and where asynchrony is modeled in
    the language as the normal procedural flow-of-control.

    "Yield" is just a hint to the scheduler. If you have a preemptive
    implementation where "time" can be a preemption criterion, then a
    task need never "suggest" a good place to relinquish the processor.

    OTOH, if you can only preempt when the task invokes an OS primitive, >>>>> then you have to be wary of a developer spinning without ever giving >>>>> the system a chance to "interrupt".

    Then, as that's only in the kernel itself, that n^2 might seem a
    huge penalty, yet, it's actually quite under that, since as the
    re-routine its data (in a stack) is filled in, then most of its
    routine is cache hits, the "memoized" calls to the re-routine.
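The re-routine scheme described above can be sketched roughly as follows. This is a hedged, illustrative model, not the poster's actual design; `ReRoutine`, `fake_io`, and the driver loop are invented for the sketch. The routine has no yield or async keywords: it is simply re-run from the top whenever a sub-result arrives, and completed sub-calls are served from a memo table, so the n-squared re-running is mostly cheap cache hits:

```python
PENDING = object()

class Pending(Exception):
    pass

class ReRoutine:
    def __init__(self, fn):
        self.fn = fn
        self.memo = {}   # step index -> completed result
        self.calls = 0   # how many times the routine was (re-)run

    def step(self, idx, start_async):
        """Return the memoized result of step idx, or abort this run."""
        if idx in self.memo:
            return self.memo[idx]          # the "cache hit" on re-run
        start_async(lambda result: self.memo.__setitem__(idx, result))
        raise Pending                      # implicit yield: unwind this run

    def run(self):
        self.calls += 1
        try:
            return self.fn(self)
        except Pending:
            return PENDING

# Driver: deliver completed async work, then re-run until finished.
completions = []

def fake_io(value):
    def start(done):
        completions.append(lambda: done(value))  # result "arrives" later
    return start

def routine(rr):
    a = rr.step(0, fake_io(2))   # each call implicitly yields if pending
    b = rr.step(1, fake_io(3))
    return a + b                 # plain procedural flow-of-control

rr = ReRoutine(routine)
result = rr.run()
while result is PENDING:
    while completions:
        completions.pop()()      # deliver pending completions
    result = rr.run()            # the re-run
```

With n async steps the routine is re-run about n+1 times and re-executes its earlier steps each time, which is the n-squared bound mentioned above; but each re-executed step is a memo lookup, not repeated work.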

    About the allocator, then this design concept basically is for
    making use of virtual memory, to be able to "re-seat" the memory
    of a process without changing a process' view of the memory.

    Of course. But, with VMM, you can do so much more:
    - CoW semantics
    - DSM
    - remapping "defective" memory
    - releasing memory that will NEVER be revisited
    - universal call by value (for large arguments)
    etc. None of these things need impact the developer.

    This can help avoid both syscalls and memory fragmentation,
    since memory paging basically is performed by the user-space
    process in its time instead of by the kernel. This has the
    usual guarantees of process memory that it's to be visible only
    to the process itself unless explicitly shared, that then being
    treated as a usual sort of shared resource in the distributed sense. >>>>>>
    The syscalls by a process essentially yield (the process yields
    to the scheduler), about ideas like round-robin and fairness
    and anti-starvation and incremental-progress in the scheduler,
    while it's so that until a process gets any other signal and
    only touches its own memory that's non-yielding, then about
    the machinery of pre-emptive multithreading or context-switch
    and as with regards to hyper-threading or the interleaved contexts >>>>>> on the double-pipeline CPUs, the idea being that context-switching >>>>>> is along the lines of basically a periodic signal interrupt.

    But you have to guard against "non-cooperative" (and even HOSTILE!)
    actors who may wish to compromise performance by monopolizing resources >>>>> (of which the CPU is but one).

    Using resource ledgers lets you constrain an application (process)
    to a subset of the available resources. Putting runtime constraints >>>>> on memory, time, etc. lets you remove a "bad actor" from the set
    of processes eligible to run. A persistent store lets you remember
    this decision so you don't "readmit" the process at some future date! >>>>>
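The resource-ledger idea above can be sketched as a toy model: each process is charged for what it consumes, a quota overrun evicts it from the eligible set, and the ban is remembered so it is never readmitted. `Ledger` is a hypothetical name and a real system would back the ban list with a persistent store:

```python
# Illustrative sketch of a resource ledger with quotas and a remembered
# ban list.  Units are abstract (bytes, ticks, handles, ...).
class Ledger:
    def __init__(self, quotas):
        self.quotas = dict(quotas)   # pid -> allowed units
        self.used = {}               # pid -> units consumed so far
        self.banned = set()          # stand-in for the persistent store

    def charge(self, pid, units):
        """Charge pid for units; False means the request is refused."""
        if pid in self.banned:
            return False             # a bad actor is never readmitted
        total = self.used.get(pid, 0) + units
        if total > self.quotas.get(pid, 0):
            self.banned.add(pid)     # evict and remember the decision
            return False
        self.used[pid] = total
        return True
```

The CPU is only one such resource; the same ledger shape applies to memory, handles, bandwidth, and so on.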
    Notions of "Orange Book" and "mandatory access control" then
    are considered "more than good ideas" with regards to the allocator >>>>>> and scheduler of resources in computation.

    Capabilities implicitly limit the actions that can be performed (i.e., >>>>> methods that can be invoked) on an object. E.g., I can let you
    LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
    once, and never again!

    How you refine your "permissions" is something that belongs in the
    objects themselves -- not layered onto a "filesystem" as an afterthought. >>>>>


    Hey, thanks for writing. Since all we know about each other
    are these brief exchanges, filling in some detail helps a lot
    to understand, or rather, get an idea, of an estimate of the
    depth of the comprehension of the whole machine stack.

    The idea of a kernel or operating system (executive, scheduler,
    ..., interactive "operating system") for "commodity" architectures
    these days is that it's pretty ubiquitous the various chips'
    architectures, then that PCIe is on everything PC, then about
    usually enough USB, then about the NIC and USB root, those are
    pretty much ubiquitously PCIe devices, or as after UEFI and ACPI
    and the SMI or as about DeviceTree, ..., it's ubiquitous,
    after "economy of scale" a simple enough "economy of ubiquity".

    So, the chips are almost all 64-bit their native word width, though
    agreeably sometimes it's 128, and they have various vector or SIMD
    instructions, then as with regards to fitting two operands in a word,
    the SWAR approach, about vectorizing the scalars, and word and
    double-word and word and half-word.

    Then, here mostly the consideration is the "head-less" or "HID-less",
    there's no human interface device involved in server runtime images
    for things like running services or usually enough "boxes" or "nodes". >>>>

    Then, for compiling existing sources, it seems the easiest way to
    do that is to implement profiles of POSIX, or posix base and pthreads. >>>> That then of course is much the traditional UNIX account of where
    "everything is a file", though, the operating system itself doesn't
    need to be implemented that way, just surface the usual objects as
    they are as primitives, and mostly as having file handles.
    (If the sources compile and run the same behavior some won't know/care.) >>>>
    Then, objects, according to "naming and directory interface" usually
    enough, Orange Book for example defines granular access controls,
    so, including all things like files.


    About quota and limits and the like, and about the perceived value
    of pre-emptive scheduling to avoid "hogging", or thrashing, here
    is an account of basically unmaskable uncatchable interrupts that
    have as a signal handler the operating system code on the core
    that results making the task yield, then necessarily enough using
    the usually context-switch machinery to pause it and restart it.
    Most code eventually touches system calls, and if there's a spare
    core it might actually be the idea to let the compute-intensive
    routine employ the entire core.

    The many-core architectures of these days, even fifteen or twenty
    years ago with "AMD Bulldozer and 8 cores" and the like, these
    days usual PC or server chips have scores of cores, ..., often
    for example with the idea of running a giant hypervisor then
    as many virts, ..., like a Kubernetes cluster for example, ...,
    or simply a ton of virts, ..., these days a single board is
    as a model of a distributed system internal to itself.


    The idea for allocation and sharing that "fairness is a matter
    of mechanism, not policy", is for the usual ideas of "throughput"
    and "transput" as Finkel put it in "An Operating Systems Vade Mecum",
    about I/O and queues, and limits.


    "Interrupts" are the events, "coherent cache lines" the units of
    serialization of memory, DMA is the bulk transfer medium in protocol,
    a byte is the smallest addressable memory unit, these are mostly
    the ordering guarantees, all else "undefined".

    Proximity, affinity, coherency, mobility, ....


    We build electronics. Our new PoE instrument line uses an RP2040
    dual-ARM chip overclocked to 150 MHz. It costs 75 cents in any
    quantity.

    We call the two CPUs (and the two ends of the box) Alice and Bob.

    Alice does the ethernet and usb i/o, command parsing, calibrations,
    all that slow floating-point management. Bob does the realtime i/o,
    fixed-point, directly or through an FPGA.

    All programmed in bare-metal c with no OS. Abstraction=0.

    Seems to work.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics


    That seems cool. So, you wrote your own USB and packet stack?
    Or, it's a system-on-chip?

    I drive my tractor with my hands on the wheels and the sticks
    and the levers and the other levers and the feet on the pedals
    and the other pedals and my rear in the seat, ..., abstraction = 0.



    We use the WizNet ethernet chip and the code that they supply. It's
    more than a mac/phy: it handles packets and protocols, including UDP.

    The RP2040 has a built-in USB interface. The electrical interface to
    the USB-C connector is two resistors. It looks like a COM port to the
    users. What's really slick is that the USB can also run in a mode
    where it looks like a memory stick. To reload the system code, we
    boot into memory stick mode and the user then drag-drops a single file
    to update the box: Alice code, Bob code, and the FPGA config.

    Tractors are cool. Physical and basic.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics



    Trying to figure out "commodity" computing above "embedded"
    computing, and to be able to explain it and thusly give an
    outline, an abstraction itself, of the connections and the
    circuits, has that these days at least for "commodity" general
    purpose computing, there's a great "economy of ubiquity" so
    that whence there's a model of the bus as almost always PCIe,
    then about clock signals and clock drivers and clock interrupts,
    about power management and power states, about variously the
    ideas, here they're mostly ideas first as I'm not that great
    of a computer engineer, about differential pair lines and the
    other serial protocols usually enough, then about SATA mostly,
    is then mostly about PCIe and DMA and then a miniature fleet
    of cores, these being themselves often single or "hyper" threaded,
    it's not moving that fast the platform, to basically make for
    it a model of computation as it embodies itself.


    Here's a bit of a podcast, 44:35 - 49:55 sort of talks about
    these things, there are others. "Reading Foundations: denser tensors".



    This discussion drifted into operating systems, or schedulers
    as they may be or executives plainly, in the context of the
    software more generally there's much to be made of "logic
    extraction", since, pretty much all sorts of source code
    pretty much lives in a world of types and among models of
    computation, and so, according to the shape in the logic,
    there's much to be made of flexible or "polyglot" parsers,
    basically into representations of state and scope, and for
    example into making natural diagrams of the flow-graph,
    this is just "modern tooling to make sense of complexity",
    instead of "ignore the man behind the curtain in the
    booming voice of Oz".




    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From john larkin@3:633/10 to All on Monday, March 30, 2026 16:00:03
    On Mon, 30 Mar 2026 09:02:28 -0700, Ross Finlayson
    <ross.a.finlayson@gmail.com> wrote:

    On 03/30/2026 07:44 AM, john larkin wrote:
    On Sun, 29 Mar 2026 20:54:28 -0700, Ross Finlayson
    <ross.a.finlayson@gmail.com> wrote:

    On 03/29/2026 03:24 PM, john larkin wrote:
    On Sun, 29 Mar 2026 14:29:32 -0700, Ross Finlayson
    <ross.a.finlayson@gmail.com> wrote:

    On 03/29/2026 12:39 PM, Don Y wrote:
    On 3/29/2026 5:53 AM, Ross Finlayson wrote:
    That seems to speak to "proximity" and "affinity", with regards
    to "coherency", and "mobility". To "move" state, about state & scope >>>>>>> or the contents of (some of) memory and registers, of a process or task,
    here is described as "re-seating" which is also the usual enough >>>>>>> idea in programming like C and C++.

    My goal is NOT to disrupt the "programmer's model" for "average
    developers". They should truly be able to think that they are running >>>>>> on a uniprocessor but without any guarantees as to throughput
    (to accommodate sharing the physical processor, communication
    overhead, etc.).

    They shouldn't need to know where "they" are executing or that the >>>>>> resources on which they rely may not be local.

    "Advanced developers" attend to the dynamic reconfiguration of
    the system (at runtime). So, THOSE developers build applications
    that are given the ability ("capability") to relocate resources,
    kill off tasks, spawn new ones, etc. Because they, presumably, have >>>>>> a more detailed understanding of the "System" beyond the scope of
    some particular application/task within it.

    Perhaps the most usual example is pre-emptive multithreading itself, >>>>>>> about basically the state as a stack, and "process control block" >>>>>>> and "thread control block" usually enough, about context-switching. >>>>>>
    I support a heterogeneous environment; an object can be migrated
    to a different processor *family* at any time (while executing).
    You shouldn't care as long as the interface (methods) to the
    object remain available (for the capabilities you have been
    granted) along with the current state of the object.

    Similarly, the algorithms used to implement those methods (as
    well as internal data members) can change dynamically -- as long
    as the interface functionality remains immutable.

    For example, I create namespaces -- (name, capability) dictionaries -- for
    each process. If you only need a few registered names (stdin, stdout, >>>>>> stderr),
    the code that implements that is entirely different than the implementation
    that tries to manage hundreds of named entries.

    As it should be. (because "you" are "billed" for the resources that you >>>>>> use, you'd not want to bear the cost of an implementation that did more >>>>>> than you needed -- just like you wouldn't develop a standalone device >>>>>> with more complexity/cost than necessary!)

    If names are insignificant (e.g., akin to file descriptors), then an >>>>>> implementation might use a single "char" to represent a name, giving >>>>>> you access to ~200+ unique names of the form "a", "b", "^X", "\b", etc. >>>>>> Or, treat that char as an 8 bit int -- 0x01, 0x02, 0x61, 0x62, ... >>>>>> (do you really *need* names like "Object 1", "Object 2", etc.?)

    In this "Critix" concept, or "DeepOs" (or "BeepOs/DeepOs"),
    the idea of the "re-routine" as a model of asynchronous concurrency, >>>>>>> is a little different than the usual idea of a co-routine, which >>>>>>> is usually enough a fork in the process model, then about signals >>>>>>> as IPC with PID and PPID, vis-a-vis, fibers and threads or events >>>>>>> and task queues, basically the re-routine never "blocks" and has >>>>>>> no "yield" nor "async" keywords in the source text, instead any
    call to a re-routine implicitly yields, and then the re-routine
    is run again later, the re-run, where as the re-routine is filled in, >>>>>>> then the re-routine adds a penalty of basically n^2 in time >>>>>>> to be completely non-blocking and where asynchrony is modeled in >>>>>>> the language as the normal procedural flow-of-control.

    "Yield" is just a hint to the scheduler. If you have a preemptive >>>>>> implementation where "time" can be a preemption criterion, then a
    task need never "suggest" a good place to relinquish the processor. >>>>>>
    OTOH, if you can only preempt when the task invokes an OS primitive, >>>>>> then you have to be wary of a developer spinning without ever giving >>>>>> the system a chance to "interrupt".

    Then, as that's only in the kernel itself, that n^2 might seem a >>>>>>> huge penalty, yet, it's actually quite under that, since as the
    re-routine its data (in a stack) is filled in, then most of its
    routine is cache hits, the "memoized" calls to the re-routine.

    About the allocator, then this design concept basically is for
    making use of virtual memory, to be able to "re-seat" the memory >>>>>>> of a process without changing a process' view of the memory.

    Of course. But, with VMM, you can do so much more:
    - CoW semantics
    - DSM
    - remapping "defective" memory
    - releasing memory that will NEVER be revisited
    - universal call by value (for large arguments)
    etc. None of these things need impact the developer.

    This can help avoid both syscalls and memory fragmentation,
    since memory paging basically is performed by the user-space
    process in its time instead of by the kernel. This has the
    usual guarantees of process memory that it's to be visible only
    to the process itself unless explicitly shared, that then being
    treated as a usual sort of shared resource in the distributed sense. >>>>>>>
    The syscalls by a process essentially yield (the process yields
    to the scheduler), about ideas like round-robin and fairness
    and anti-starvation and incremental-progress in the scheduler,
    while it's so that until a process gets any other signal and
    only touches its own memory that's non-yielding, then about
    the machinery of pre-emptive multithreading or context-switch
    and as with regards to hyper-threading or the interleaved contexts >>>>>>> on the double-pipeline CPUs, the idea being that context-switching >>>>>>> is along the lines of basically a periodic signal interrupt.

    But you have to guard against "non-cooperative" (and even HOSTILE!) >>>>>> actors who may wish to compromise performance by monopolizing resources >>>>>> (of which the CPU is but one).

    Using resource ledgers lets you constrain an application (process) >>>>>> to a subset of the available resources. Putting runtime constraints >>>>>> on memory, time, etc. lets you remove a "bad actor" from the set
    of processes eligible to run. A persistent store lets you remember >>>>>> this decision so you don't "readmit" the process at some future date! >>>>>>
    Notions of "Orange Book" and "mandatory access control" then
    are considered "more than good ideas" with regards to the allocator >>>>>>> and scheduler of resources in computation.

    Capabilities implicitly limit the actions that can be performed (i.e., >>>>>> methods that can be invoked) on an object. E.g., I can let you
    LOCK a door but never UNLOCK it. Or, let you open a door EXACTLY
    once, and never again!

    How you refine your "permissions" is something that belongs in the >>>>>> objects themselves -- not layered onto a "filesystem" as an afterthought.



    Hey, thanks for writing. Since all we know about each other
    are these brief exchanges, filling in some detail helps a lot
    to understand, or rather, get an idea, of an estimate of the
    depth of the comprehension of the whole machine stack.

    The idea of a kernel or operating system (executive, scheduler,
    ..., interactive "operating system") for "commodity" architectures
    these days is that it's pretty ubiquitous the various chips'
    architectures, then that PCIe is on everything PC, then about
    usually enough USB, then about the NIC and USB root, those are
    pretty much ubiquitously PCIe devices, or as after UEFI and ACPI
    and the SMI or as about DeviceTree, ..., it's ubiquitous,
    after "economy of scale" a simple enough "economy of ubiquity".

    So, the chips are almost all 64-bit their native word width, though
    agreeably sometimes it's 128, and they have various vector or SIMD
    instructions, then as with regards to fitting two operands in a word, >>>>> the SWAR approach, about vectorizing the scalars, and word and
    double-word and word and half-word.

    Then, here mostly the consideration is the "head-less" or "HID-less", >>>>> there's no human interface device involved in server runtime images
    for things like running services or usually enough "boxes" or "nodes". >>>>>

    Then, for compiling existing sources, it seems the easiest way to
    do that is to implement profiles of POSIX, or posix base and pthreads. >>>>> That then of course is much the traditional UNIX account of where
    "everything is a file", though, the operating system itself doesn't
    need to be implemented that way, just surface the usual objects as
    they are as primitives, and mostly as having file handles.
    (If the sources compile and run the same behavior some won't know/care.) >>>>>
    Then, objects, according to "naming and directory interface" usually >>>>> enough, Orange Book for example defines granular access controls,
    so, including all things like files.


    About quota and limits and the like, and about the perceived value
    of pre-emptive scheduling to avoid "hogging", or thrashing, here
    is an account of basically unmaskable uncatchable interrupts that
    have as a signal handler the operating system code on the core
    that results making the task yield, then necessarily enough using
    the usually context-switch machinery to pause it and restart it.
    Most code eventually touches system calls, and if there's a spare
    core it might actually be the idea to let the compute-intensive
    routine employ the entire core.

    The many-core architectures of these days, even fifteen or twenty
    years ago with "AMD Bulldozer and 8 cores" and the like, these
    days usual PC or server chips have scores of cores, ..., often
    for example with the idea of running a giant hypervisor then
    as many virts, ..., like a Kubernetes cluster for example, ...,
    or simply a ton of virts, ..., these days a single board is
    as a model of a distributed system internal to itself.


    The idea for allocation and sharing that "fairness is a matter
    of mechanism, not policy", is for the usual ideas of "throughput"
    and "transput" as Finkel put it in "An Operating Systems Vade Mecum", >>>>> about I/O and queues, and limits.


    "Interrupts" are the events, "coherent cache lines" the units of
    serialization of memory, DMA is the bulk transfer medium in protocol, >>>>> a byte is the smallest addressable memory unit, these are mostly
    the ordering guarantees, all else "undefined".

    Proximity, affinity, coherency, mobility, ....


    We build electronics. Our new PoE instrument line uses an RP2040
    dual-ARM chip overclocked to 150 MHz. It costs 75 cents in any
    quantity.

    We call the two CPUs (and the two ends of the box) Alice and Bob.

    Alice does the ethernet and usb i/o, command parsing, calibrations,
    all that slow floating-point management. Bob does the realtime i/o,
    fixed-point, directly or through an FPGA.

    All programmed in bare-metal c with no OS. Abstraction=0.

    Seems to work.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics


    That seems cool. So, you wrote your own USB and packet stack?
    Or, it's a system-on-chip?

    I drive my tractor with my hands on the wheels and the sticks
    and the levers and the other levers and the feet on the pedals
    and the other pedals and my rear in the seat, ..., abstraction = 0.



    We use the WizNet ethernet chip and the code that they supply. It's
    more than a mac/phy: it handles packets and protocols, including UDP.

    The RP2040 has a built-in USB interface. The electrical interface to
    the USB-C connector is two resistors. It looks like a COM port to the
    users. What's really slick is that the USB can also run in a mode
    where it looks like a memory stick. To reload the system code, we
    boot into memory stick mode and the user then drag-drops a single file
    to update the box: Alice code, Bob code, and the FPGA config.

    Tractors are cool. Physical and basic.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics



    Trying to figure out "commodity" computing above "embedded"
    computing, and to be able to explain it and give an outline
    (itself an abstraction) of the connections and the circuits:
    these days, at least for "commodity" general-purpose computing,
    there's a great "economy of ubiquity". The bus is almost always
    PCIe; then come clock signals, clock drivers, and clock
    interrupts; power management and power states; differential-pair
    lines and, usually enough, the other serial protocols; then
    mostly SATA. (These are mostly ideas first, as I'm not that
    great of a computer engineer.) So it's mostly PCIe and DMA and
    a miniature fleet of cores, themselves often single- or
    "hyper"-threaded. The platform isn't moving that fast, which
    basically makes it possible to model computation as the
    platform embodies it.


    Here's a bit of a podcast; 44:35 - 49:55 sort of talks about
    these things (there are others): "Reading Foundations: denser tensors".



    This discussion drifted into operating systems, or schedulers
    as they may be, or executives plainly. In the context of
    software more generally, there's much to be made of "logic
    extraction": pretty much all sorts of source code live in a
    world of types and among models of computation, so, according
    to the shape in the logic, there's much to be made of flexible
    or "polyglot" parsers, parsing basically into representations
    of state and scope, and for example into natural diagrams of
    the flow-graph. This is just "modern tooling to make sense of
    complexity", instead of "ignore the man behind the curtain in
    the booming voice of Oz".
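    As a toy illustration of that "logic extraction" idea, here's a sketch
    that walks a parse tree and pulls out the scope-and-branch shape of a
    source fragment, the raw material for a flow-graph diagram. It uses
    Python's own `ast` module on a made-up fragment (the function names
    `pump`, `refill`, `drain` are hypothetical); the idea is what a
    "polyglot" tool would do over other grammars too.

```python
# Toy "logic extraction": walk a parse tree and dump an outline of
# scope (function definitions) and branching shape (if/while/for).
import ast

SRC = """
def pump(level):
    if level < 10:
        refill()
    while level > 90:
        drain()
"""

def outline(tree):
    """Return (depth, node-kind) pairs for scope/branch nodes."""
    rows = []
    def walk(node, depth):
        kind = type(node).__name__
        if kind in ("FunctionDef", "If", "While", "For"):
            rows.append((depth, kind))
            depth += 1
        for child in ast.iter_child_nodes(node):
            walk(child, depth)
    walk(tree, 0)
    return rows

print(outline(ast.parse(SRC)))
# [(0, 'FunctionDef'), (1, 'If'), (1, 'While')]
```

    From pairs like these, a "natural diagram" of the flow-graph is mostly
    a matter of drawing boxes at each depth.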



    Absolutely.

    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From someone@3:633/10 to All on Monday, March 30, 2026 23:30:01
    ECMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional weather forecasting operations as usual without it.

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    --
    For full context, visit https://www.electrondepot.com/electrodesign/software-situation-4399817-.htm


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From someone@3:633/10 to All on Monday, March 30, 2026 23:30:02
    People start paying attention when extreme weather is involved. The forecasting is accurate enough these days to give people plenty of time to prepare. U.S. East and Gulf coasts rely on it.

    --
    For full context, visit https://www.electrondepot.com/electrodesign/software-situation-4399817-.htm


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From someone@3:633/10 to All on Monday, March 30, 2026 23:30:02
    The CEOs are now starting to lay themselves off.

    Most CEOs are using the embrace of artificial intelligence as a cover to lay off staff and cut payroll costs in the name of "efficiency." But a couple are using it as an excuse to lay themselves off.

    https://gizmodo.com/ai-just-one-shotted-another-ceo-2000738610

    --
    For full context, visit https://www.electrondepot.com/electrodesign/software-situation-4399817-.htm


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From john larkin@3:633/10 to All on Tuesday, March 31, 2026 11:01:41
    On Mon, 30 Mar 2026 23:30:01 +0000, someone <cffbf4deb9142bce48974efc0e64dede@example.com> wrote:

    ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional weather forecasting operations as usual without it.

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    Dang, they promised us rain this week. Didn't get any.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jeff Liebermann@3:633/10 to All on Tuesday, March 31, 2026 13:54:57
    On Tue, 31 Mar 2026 11:01:41 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Mon, 30 Mar 2026 23:30:01 +0000, someone ><cffbf4deb9142bce48974efc0e64dede@example.com> wrote:

    ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional weather forecasting operations as usual without it.

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    Dang, they promised us rain this week. Didn't get any.

    The line of clouds crosses the California coast about 50 miles south
    of San Francisco. The SF Bay area shows mostly clear skies, while to
    the south, the missing rain clouds are moving inland: <https://www.windy.com/-Satellite-satellite?satellite,36.831,-117.809,6> <https://www.windy.com/-Menu/menu?rain,36.831,-117.809,6>
    In Ben Lomond, I'm under the clouds and seeing only a little rain,
    which barely registers on my rain gauge. The forecast is more of the
    same through Weds evening. Sorry.




    --
    Jeff Liebermann jeffl@cruzio.com
    PO Box 272 http://www.LearnByDestroying.com
    Ben Lomond CA 95005-0272 AE6KS 831-336-2558


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jeff Liebermann@3:633/10 to All on Tuesday, March 31, 2026 14:15:51
    On Mon, 30 Mar 2026 23:30:01 +0000, someone <cffbf4deb9142bce48974efc0e64dede@example.com> wrote:

    ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional weather forecasting operations as usual without it.

    Baloney. ECMWF AI data has been available since July 2025 in the form
    of AIFS (Artificial Intelligence Forecasting System). <https://www.ecmwf.int/en/forecasts/dataset/aifs-machine-learning-data> <https://www.ecmwf.int/en/forecasts/datasets/open-data>

    At this time, the crown jewels of AI startups are the details on how
    they generate their predictions. In other words, their source code
    and system architecture. They're not about to give that away for free
    and certainly not without an NDA. There are probably some open source
    AI initiatives which might include AI weather prediction models. Try
    your luck:
    <https://en.wikipedia.org/wiki/Open-source_artificial_intelligence>

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    Do you really need to know how an internal combustion engine works in
    order to operate the vehicle?

    --
    Jeff Liebermann jeffl@cruzio.com
    PO Box 272 http://www.LearnByDestroying.com
    Ben Lomond CA 95005-0272 AE6KS 831-336-2558


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From john larkin@3:633/10 to All on Tuesday, March 31, 2026 15:06:33
    On Tue, 31 Mar 2026 13:54:57 -0700, Jeff Liebermann <jeffl@cruzio.com>
    wrote:

    On Tue, 31 Mar 2026 11:01:41 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Mon, 30 Mar 2026 23:30:01 +0000, someone >><cffbf4deb9142bce48974efc0e64dede@example.com> wrote:

    ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional weather forecasting operations as usual without it.

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    Dang, they promised us rain this week. Didn't get any.

    The line of clouds crosses the California coast about 50 miles south
    of San Francisco. The SF Bay area shows mostly clear skys, while to
    the south, the missing rain clouds are moving inland: ><https://www.windy.com/-Satellite-satellite?satellite,36.831,-117.809,6> ><https://www.windy.com/-Menu/menu?rain,36.831,-117.809,6>
    In Ben Lomond, I'm under the clouds and seeing only a little rain,
    which barely registers on my rain gauge. The forecast is more of the
    same through Weds evening. Sorry.

    They just need bigger computers.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jeff Liebermann@3:633/10 to All on Tuesday, March 31, 2026 16:14:37
    On Tue, 31 Mar 2026 15:06:33 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Tue, 31 Mar 2026 13:54:57 -0700, Jeff Liebermann <jeffl@cruzio.com>
    wrote:

    On Tue, 31 Mar 2026 11:01:41 -0700, john larkin <jl@glen--canyon.com> >>wrote:

    On Mon, 30 Mar 2026 23:30:01 +0000, someone >>><cffbf4deb9142bce48974efc0e64dede@example.com> wrote:

    ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional weather forecasting operations as usual without it.

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    Dang, they promised us rain this week. Didn't get any.

    The line of clouds crosses the California coast about 50 miles south
    of San Francisco. The SF Bay area shows mostly clear skys, while to
    the south, the missing rain clouds are moving inland: >><https://www.windy.com/-Satellite-satellite?satellite,36.831,-117.809,6> >><https://www.windy.com/-Menu/menu?rain,36.831,-117.809,6>
    In Ben Lomond, I'm under the clouds and seeing only a little rain,
    which barely registers on my rain gauge. The forecast is more of the
    same through Weds evening. Sorry.

    They just need bigger computers.

    They also want all the CPUs, memory, video cards, cooling water,
    electrical power, government support and investors on the planet. If
    they can't get these, they threaten to put the data centers in orbit.
    All this to obtain better weather forecasts. Meanwhile, I can do as
    well with my Ouija board and weather rock: <https://www.google.com/search?udm=2&q=ouija%20board> <https://www.google.com/search?q=weather%20rock&udm=2>



    --
    Jeff Liebermann jeffl@cruzio.com
    PO Box 272 http://www.LearnByDestroying.com
    Ben Lomond CA 95005-0272 AE6KS 831-336-2558


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Ross Finlayson@3:633/10 to All on Tuesday, March 31, 2026 16:43:07
    On 03/31/2026 04:14 PM, Jeff Liebermann wrote:
    On Tue, 31 Mar 2026 15:06:33 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Tue, 31 Mar 2026 13:54:57 -0700, Jeff Liebermann <jeffl@cruzio.com>
    wrote:

    On Tue, 31 Mar 2026 11:01:41 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Mon, 30 Mar 2026 23:30:01 +0000, someone
    <cffbf4deb9142bce48974efc0e64dede@example.com> wrote:

    ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional weather forecasting operations as usual without it.

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    Dang, they promised us rain this week. Didn't get any.

    The line of clouds crosses the California coast about 50 miles south
    of San Francisco. The SF Bay area shows mostly clear skys, while to
    the south, the missing rain clouds are moving inland:
    <https://www.windy.com/-Satellite-satellite?satellite,36.831,-117.809,6> >>> <https://www.windy.com/-Menu/menu?rain,36.831,-117.809,6>
    In Ben Lomond, I'm under the clouds and seeing only a little rain,
    which barely registers on my rain gauge. The forecast is more of the
    same through Weds evening. Sorry.

    They just need bigger computers.

    They also want all the CPU's, memory, video cards, cooling water,
    electrical power, government support and investors on the planet. If
    they can't get these, they threaten to put the data centers in orbit.
    All this to obtain better weather forecasts. Meanwhile, I can do as
    well with my Ouija board and weather rock: <https://www.google.com/search?udm=2&q=ouija%20board> <https://www.google.com/search?q=weather%20rock&udm=2>



    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    They can't much get _bigger_ computers, only _more_ computers.


    "Why do they hire economists?"
    "To make the weather man look good."



    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jan Panteltje@3:633/10 to All on Wednesday, April 01, 2026 06:41:06
    Jeff Liebermann <jeffl@cruzio.com>wrote:
    On Tue, 31 Mar 2026 15:06:33 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Tue, 31 Mar 2026 13:54:57 -0700, Jeff Liebermann <jeffl@cruzio.com> >>wrote:

    On Tue, 31 Mar 2026 11:01:41 -0700, john larkin <jl@glen--canyon.com> >>>wrote:

    On Mon, 30 Mar 2026 23:30:01 +0000, someone >>>><cffbf4deb9142bce48974efc0e64dede@example.com> wrote:

    ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their
    conventional weather forecasting operations as usual without it.

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    Dang, they promised us rain this week. Didn't get any.

    The line of clouds crosses the California coast about 50 miles south
    of San Francisco. The SF Bay area shows mostly clear skys, while to
    the south, the missing rain clouds are moving inland: >>><https://www.windy.com/-Satellite-satellite?satellite,36.831,-117.809,6> >>><https://www.windy.com/-Menu/menu?rain,36.831,-117.809,6>
    In Ben Lomond, I'm under the clouds and seeing only a little rain,
    which barely registers on my rain gauge. The forecast is more of the >>>same through Weds evening. Sorry.

    They just need bigger computers.

    They also want all the CPU's, memory, video cards, cooling water,
    electrical power, government support and investors on the planet. If
    they can't get these, they threaten to put the data centers in orbit.
    All this to obtain better weather forecasts. Meanwhile, I can do as
    well with my Ouija board and weather rock: ><https://www.google.com/search?udm=2&q=ouija%20board> ><https://www.google.com/search?q=weather%20rock&udm=2>

    windy.com is very good here.
    And local weather radar:
    https://www.knmi.nl/nederland-nu/weer/actueel-weer/neerslagradar


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jan Panteltje@3:633/10 to All on Wednesday, April 01, 2026 06:45:52
    Jeff Liebermann <jeffl@cruzio.com>wrote:
    On Mon, 30 Mar 2026 23:30:01 +0000, someone ><cffbf4deb9142bce48974efc0e64dede@example.com> wrote:

    ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional
    weather forecasting operations as usual without it.

    Baloney. ECMMF AI data has been available since July 2025 in the form
    of AIFS (Artificial Intelligence Forecasting System). ><https://www.ecmwf.int/en/forecasts/dataset/aifs-machine-learning-data> ><https://www.ecmwf.int/en/forecasts/datasets/open-data>

    At this time, the crown jewels of AI startups are the details on how
    they generate their predictions. In other words, their source code
    and system architecture. They're not about to give that away for free
    and certainly not without an NDA. There are probably some open source
    AI initiatives which might include AI weather prediction models. Try
    your luck: ><https://en.wikipedia.org/wiki/Open-source_artificial_intelligence>

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    Do you really need to know how an internal combustion engine works in
    order to operate the vehicle?

    It may help a lot!
    Same for motorbikes etc..

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From john larkin@3:633/10 to All on Wednesday, April 01, 2026 00:49:31
    On Wed, 01 Apr 2026 06:41:06 GMT, Jan Panteltje <alien@comet.invalid>
    wrote:

    Jeff Liebermann <jeffl@cruzio.com>wrote:
    On Tue, 31 Mar 2026 15:06:33 -0700, john larkin <jl@glen--canyon.com> >>wrote:

    On Tue, 31 Mar 2026 13:54:57 -0700, Jeff Liebermann <jeffl@cruzio.com> >>>wrote:

    On Tue, 31 Mar 2026 11:01:41 -0700, john larkin <jl@glen--canyon.com> >>>>wrote:

    On Mon, 30 Mar 2026 23:30:01 +0000, someone >>>>><cffbf4deb9142bce48974efc0e64dede@example.com> wrote:

    ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their
    conventional weather forecasting operations as usual without it.

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    Dang, they promised us rain this week. Didn't get any.

    The line of clouds crosses the California coast about 50 miles south
    of San Francisco. The SF Bay area shows mostly clear skys, while to >>>>the south, the missing rain clouds are moving inland: >>>><https://www.windy.com/-Satellite-satellite?satellite,36.831,-117.809,6> >>>><https://www.windy.com/-Menu/menu?rain,36.831,-117.809,6>
    In Ben Lomond, I'm under the clouds and seeing only a little rain, >>>>which barely registers on my rain gauge. The forecast is more of the >>>>same through Weds evening. Sorry.

    They just need bigger computers.

    They also want all the CPU's, memory, video cards, cooling water, >>electrical power, government support and investors on the planet. If
    they can't get these, they threaten to put the data centers in orbit.
    All this to obtain better weather forecasts. Meanwhile, I can do as
    well with my Ouija board and weather rock: >><https://www.google.com/search?udm=2&q=ouija%20board> >><https://www.google.com/search?q=weather%20rock&udm=2>

    windy.com is very good here.
    And local weather radar:
    https://www.knmi.nl/nederland-nu/weer/actueel-weer/neerslagradar

    Windy is very similar to https://www.ventusky.com. V has nice visuals
    but the forecasts are pretty bad.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jeff Liebermann@3:633/10 to All on Wednesday, April 01, 2026 09:11:16
    On Wed, 01 Apr 2026 06:45:52 GMT, Jan Panteltje <alien@comet.invalid>
    wrote:

    Jeff Liebermann <jeffl@cruzio.com>wrote:
    On Mon, 30 Mar 2026 23:30:01 +0000, someone >><cffbf4deb9142bce48974efc0e64dede@example.com> wrote:

    ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional
    weather forecasting operations as usual without it.

    Baloney. ECMMF AI data has been available since July 2025 in the form
    of AIFS (Artificial Intelligence Forecasting System). >><https://www.ecmwf.int/en/forecasts/dataset/aifs-machine-learning-data> >><https://www.ecmwf.int/en/forecasts/datasets/open-data>

    At this time, the crown jewels of AI startups are the details on how
    they generate their predictions. In other words, their source code
    and system architecture. They're not about to give that away for free
    and certainly not without an NDA. There are probably some open source
    AI initiatives which might include AI weather prediction models. Try
    your luck: >><https://en.wikipedia.org/wiki/Open-source_artificial_intelligence>

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    Do you really need to know how an internal combustion engine works in
    order to operate the vehicle?

    It may help a lot!
    Same for motorbikes etc..

    Oddly, driver training schools don't include much on how internal
    combustion engines function. Most of the training is on how to
    operate the vehicle with maybe a few maintenance hints (put air in the
    tires, check the oil, keep the windows clean, etc). Some of my
    friends struggle with opening the door when they forget their wireless
    key fob at home. Judging by appearances, driving to the supermarket
    and back, without hitting anything, is a major accomplishment that can
    be achieved without knowing how the engine works.

    It's the same with AI weather forecasting. Members of the GUM (great
    unwashed masses) do not need to know how an AI is used to produce a
    weather forecast. To them, the process might involve a weather rock: <https://www.google.com/search?udm=2&q=weather%20rock>
    or Ouija board:
    <https://www.google.com/search?q=ouija%20board&udm=2>
    and they would accept the results. Hopefully, they would also know
    what to do about the results and be able to decode the forecast terms.
    How many of these do you know?
    <https://www.weather.gov/bgm/forecast_terms>

    Knowing how sausage and weather forecasts are made does not make
    either more digestible.

    Incidentally, while attending college in the 1960s, I worked part
    time as an auto mechanic (floor sweeper) at a Ford dealer. I met
    quite a few drivers. I noticed that the stunt and race car drivers
    were terrible at maintaining their vehicles, while the mechanically
    inclined did well on maintenance, but were not very good drivers. I
    guess that also applies to electronic design. There are those that
    can design, but can't operate their designs and those who can do
    amazing things with the final product, but couldn't design anything
    that actually worked or could be manufactured. Similarly, computer
    programmers should not attempt to operate a screwdriver.


    --
    Jeff Liebermann jeffl@cruzio.com
    PO Box 272 http://www.LearnByDestroying.com
    Ben Lomond CA 95005-0272 AE6KS 831-336-2558


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Ross Finlayson@3:633/10 to All on Wednesday, April 01, 2026 09:43:53
    On 04/01/2026 09:11 AM, Jeff Liebermann wrote:
    On Wed, 01 Apr 2026 06:45:52 GMT, Jan Panteltje <alien@comet.invalid>
    wrote:

    Jeff Liebermann <jeffl@cruzio.com>wrote:
    On Mon, 30 Mar 2026 23:30:01 +0000, someone
    <cffbf4deb9142bce48974efc0e64dede@example.com> wrote:

    ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their conventional
    weather forecasting operations as usual without it.

    Baloney. ECMMF AI data has been available since July 2025 in the form
    of AIFS (Artificial Intelligence Forecasting System).
    <https://www.ecmwf.int/en/forecasts/dataset/aifs-machine-learning-data>
    <https://www.ecmwf.int/en/forecasts/datasets/open-data>

    At this time, the crown jewels of AI startups are the details on how
    they generate their predictions. In other words, their source code
    and system architecture. They're not about to give that away for free
    and certainly not without an NDA. There are probably some open source
    AI initiatives which might include AI weather prediction models. Try
    your luck:
    <https://en.wikipedia.org/wiki/Open-source_artificial_intelligence>

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    Do you really need to know how an internal combustion engine works in
    order to operate the vehicle?

    It may help a lot!
    Same for motorbikes etc..

    Oddly, driver training schools don't include much on how internal
    combustion engines function. Most of the training is on how to
    operate the vehicle with maybe a few maintenance hints (put air in the
    tires, check the oil, keep the windows clean, etc). Some of my
    friends struggle with opening the door when they forget their wireless
    key fob at home. Judging by appearances, driving to the supermarket
    and back, without hitting anything, is a major accomplishment that can
    be achieved without knowing how the engine works.

    It's the same with AI weather forecasting. Members of the GUM (great unwashed masses) do not need to know how an AI is used to produce a
    weather forecast. To them, the process might involve a weather rock: <https://www.google.com/search?udm=2&q=weather%20rock>
    or Ouija board:
    <https://www.google.com/search?q=ouija%20board&udm=2>
    and they would accept the results. Hopefully, they would also know
    what to do about the results and be able to decode the forecast terms.
    How many of these do you know?
    <https://www.weather.gov/bgm/forecast_terms>

    Knowing how sausage and weather forecasts are made does not make
    either more digestible.

    Incidentally, while attending college in the 1960's, I worked part
    time as an auto mechanic (floor sweeper) at a Ford dealer. I met
    quite a few drivers. I noticed that the stunt and race car drivers
    were terrible at maintaining their vehicles, while the mechanically
    inclined did well on maintenance, but were not very good drivers. I
    guess that also applies to electronic design. There are those that
    can design, but can't operate their designs and those who can do
    amazing things with the final product, but couldn't design anything
    that actually worked or could be manufactured. Similarly, computer programmers should not attempt to operate a screwdriver.



    Floor sweepers for whatever reason have their own sort of electrical
    profile (golf carts, forklifts, floor sweepers/polishers, ...),
    perhaps because they're fundamentally not stationary. So, motors
    and batteries and the like are defined in terms of them as applications.


    Sometimes ignorance allows a false sort of confidence that translates
    to bravado, "invincible ignorance", "the perceived immortality of
    the unwary", ....

    Experience brings for most people a familiarity with maintenance.

    That's a good one, the "sausage and weather forecasts" bit,
    speaking yet to usual accounts of "knowing where the food comes from"
    and besides "thoroughly chewing the food".



    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Don Y@3:633/10 to All on Wednesday, April 01, 2026 10:35:15
    On 4/1/2026 9:11 AM, Jeff Liebermann wrote:

    Oddly, driver training schools don't include much on how internal
    combustion engines function. Most of the training is on how to
    operate the vehicle with maybe a few maintenance hints (put air in the
    tires, check the oil, keep the windows clean, etc). Some of my
    friends struggle with opening the door when they forget their wireless
    key fob at home. Judging by appearances, driving to the supermarket
    and back, without hitting anything, is a major accomplishment that can
    be achieved without knowing how the engine works.

    Most software is taught as "do this to get that", "click here for <whatever>". Do you really understand how a wordprocessor stores a document? How
    it searches it, rejiggers it to accommodate text insertions and deletions?

    Do you care? (perhaps if you end up using one that has been naively
    designed where inserting a single character at the start of the document requires EVERYTHING that follows to be "moved down"...)
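    The naive-design point above can be made concrete. A minimal sketch, not
    any real word processor's internals: a plain array shifts every trailing
    byte on each front insertion, while a gap buffer keeps spare room at the
    cursor so repeated typing at one spot fills the gap instead.

```python
# Minimal gap buffer: text is stored with a "gap" of free slots at
# the cursor, so inserting there writes into the gap instead of
# moving everything that follows (the naive array approach).
class GapBuffer:
    def __init__(self, text="", gap=16):
        self.buf = list(text) + [None] * gap
        self.start = len(text)          # gap begins here (the cursor)
        self.end = len(self.buf)        # gap ends here

    def move_cursor(self, pos):
        """Slide the gap so it sits at character position pos."""
        while self.start > pos:         # move gap left
            self.start -= 1
            self.end -= 1
            self.buf[self.end] = self.buf[self.start]
        while self.start < pos:         # move gap right
            self.buf[self.start] = self.buf[self.end]
            self.start += 1
            self.end += 1

    def insert(self, ch):
        if self.start == self.end:      # gap exhausted: regrow once
            grow = max(16, len(self.buf))
            self.buf[self.start:self.start] = [None] * grow
            self.end += grow
        self.buf[self.start] = ch
        self.start += 1

    def text(self):
        return "".join(self.buf[:self.start] + self.buf[self.end:])

gb = GapBuffer("world")
gb.move_cursor(0)
for ch in "hello ":                     # repeated typing at the front:
    gb.insert(ch)                       # no per-keystroke shifting
print(gb.text())                        # hello world
```

    Real editors go further (piece tables, ropes), but the principle is the
    same: don't pay for the whole document on every keystroke.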

    It's the same with AI weather forecasting. Members of the GUM (great unwashed masses) do not need to know how an AI is used to produce a
    weather forecast. To them, the process might involve a weather rock: <https://www.google.com/search?udm=2&q=weather%20rock>
    or Ouija board:
    <https://www.google.com/search?q=ouija%20board&udm=2>
    and they would accept the results. Hopefully, they would also know
    what to do about the results and be able to decode the forecast terms.
    How many of these do you know?
    <https://www.weather.gov/bgm/forecast_terms>

    Traditional weather forecasters likely rely on an understanding of how fluids behave in a given "volume". AI forecasters will likely leave many of them scratching their heads as they look for patterns that may not be immediately explained by geography, winds, pressures, etc. Because AIs aren't limited
    to looking at the "current conditions".

    Knowing how sausage and weather forecasts are made does not make
    either more digestible.

    Incidentally, while attending college in the 1960's, I worked part
    time as an auto mechanic (floor sweeper) at a Ford dealer. I met
    quite a few drivers. I noticed that the stunt and race car drivers
    were terrible at maintaining their vehicles, while the mechanically
    inclined did well on maintenance, but were not very good drivers. I
    guess that also applies to electronic design. There are those that
    can design, but can't operate their designs and those who can do
    amazing things with the final product, but couldn't design anything
    that actually worked or could be manufactured. Similarly, computer programmers should not attempt to operate a screwdriver.

    This seems to (almost) be universally true. E.g., give a
    "repairman" a blank sheet of paper to start a design and
    the first thing he'll do is turn it over, expecting the
    "missing writing" to be on the back side. The same is true
    of most "coders", technicians, etc. The good ones can
    suss-out someone else's design but usually don't know where
    to *start* on one. And, are quickly ineffective if a change
    propagates "too far" into the existing design (as that would
    require more knowledge of its overall "structure")

    Perhaps the most challenging task is preparing *user*
    documentation for a product/design. Being able to imagine how
    someone (unfamiliar with the product) will best comprehend
    its purpose and usage. And, how a seasoned user will think of
    accessing that documentation, later -- what "keywords" will
    he seek in an index, etc.?

    The same is true of most hardware/software designs -- "where
    do I *start* to understand it?" (hardware often easier as
    it's largely visible)

    I am always amazed at how many "engineers/programmers" are
    tasked with designing products -- instead of domain experts.
    You invariably end up with something that looks (and acts)
    like it was designed by an engineer/programmer, completely
    clueless as to what a "real/typical" user would expect in
    such a device. And, the designer puzzled by why others
    find it such a mismatch!


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Jeff Liebermann@3:633/10 to All on Wednesday, April 01, 2026 17:39:52
    On Wed, 01 Apr 2026 00:49:31 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Wed, 01 Apr 2026 06:41:06 GMT, Jan Panteltje <alien@comet.invalid>
    wrote:

    Jeff Liebermann <jeffl@cruzio.com>wrote:
    On Tue, 31 Mar 2026 15:06:33 -0700, john larkin <jl@glen--canyon.com> >>>wrote:

    On Tue, 31 Mar 2026 13:54:57 -0700, Jeff Liebermann <jeffl@cruzio.com> >>>>wrote:

    On Tue, 31 Mar 2026 11:01:41 -0700, john larkin <jl@glen--canyon.com> >>>>>wrote:

    On Mon, 30 Mar 2026 23:30:01 +0000, someone >>>>>><cffbf4deb9142bce48974efc0e64dede@example.com> wrote:

    ECMMWF has access to an AI weather forecaster that they refuse to use beyond in-house study. They continue their
    conventional weather forecasting operations as usual without it.

    This is unlike many other applications, where people use AI without a clue as to how the AI is arriving at its results.

    Dang, they promised us rain this week. Didn't get any.

The line of clouds crosses the California coast about 50 miles south
    of San Francisco. The SF Bay area shows mostly clear skies, while to
    the south, the missing rain clouds are moving inland:
    <https://www.windy.com/-Satellite-satellite?satellite,36.831,-117.809,6>
    <https://www.windy.com/-Menu/menu?rain,36.831,-117.809,6>
    In Ben Lomond, I'm under the clouds and seeing only a little rain,
    which barely registers on my rain gauge. The forecast is more of the
    same through Weds evening. Sorry.

    They just need bigger computers.

They also want all the CPUs, memory, video cards, cooling water,
    electrical power, government support and investors on the planet. If
    they can't get these, they threaten to put the data centers in orbit.
All this to obtain better weather forecasts. Meanwhile, I can do as
    well with my Ouija board and weather rock:
    <https://www.google.com/search?udm=2&q=ouija%20board>
    <https://www.google.com/search?q=weather%20rock&udm=2>

    windy.com is very good here.
    And local weather radar:
    https://www.knmi.nl/nederland-nu/weer/actueel-weer/neerslagradar

    Windy is very similar to https://www.ventusky.com. V has nice visuals
but the forecasts are pretty bad.

    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

Windy does not generate their forecasts. They simply repeat, combine
    and reformat what they get from other sources:
    <https://community.windy.com/topic/12/what-source-of-weather-data-windy-use>
    <https://windy.app/support/windy-app-weather-forecast-models.html>
    <https://www.windy.com/info>
    If you switch between the various sources, you will get radically
    different forecasts. In my never humble opinion, the best sources are
    those derived from ECMWF which is the default.

    Most of the forecasts rely heavily on NWS (National Weather Service).
    NWS produces forecasts in various formats often using different
    models. The result is that if you compare the NWS forecasts, they're
    all quite different. In the list below, try comparing the various
    weather.gov forecasts (for my location), and you'll probably notice
    major variations.

    Incidentally, I subscribed to Windy.com and pay $19/year. The main
    benefit for me is 1 hr forecast resolution instead of the default 3 hr resolution.

    Here is a list of what I use for weather forecasting. All the links
are centered on my house in Ben Lomond, California. It should be
    fairly easy to relocate them to your area and tweak the
    shortcuts.

    California Weather Watch: <https://www.youtube.com/@CaliforniaWeatherWatch/videos>
    Daily forecasts for California. Produced daily at about 9am PST.
    Select the most recent video.

    College of DuPage - Nexlab <https://weather.cod.edu/satrad/?parms=regional-w_southwest-09-200-1-100-1&checked=map&colorbar=undefined>

    NWS Radar - 30 min history <https://radar.weather.gov/?settings=v1_eyJhZ2VuZGEiOnsiaWQiOiJ3ZWF0aGVyIiwiY2VudGVyIjpbLTEyMS43ODYsMzcuMzEzXSwibG9jYXRpb24iOlstMTIyLjEwMywzNy4wNTZdLCJ6b29tIjo5LCJsYXllciI6ImJyZWZfcWNkIn0sImFuaW1hdGluZyI6ZmFsc2UsImJhc2UiOiJzdGFuZGFyZCIsImFydGNjIjpmYWxzZSwiY291bnR5IjpmYWxzZSwiY3dhIjpmYWxzZSwicmZjIjpmYWxzZSwic3RhdGUiOmZhbHNlLCJtZW51Ijp0cnVlLCJzaG9ydEZ1c2VkT25seSI6ZmFsc2UsIm9wYWNpdHkiOnsiYWxlcnRzIjowLjgsImxvY2FsIjowLjYsImxvY2FsU3RhdGlvbnMiOjAuOCwibmF0aW9uYWwiOjAuNn19>

    NWS 6 day history graphs <https://forecast.weather.gov/MapClick.php?w0=t&w3=sfcwind&w3u=1&w5=pop&w6=rh&w7=rain&w13u=1&w14u=1&AheadHour=0&Submit=Submit&FcstType=graphical&textField1=37.0813&textField2=-122.093&site=all&unit=0&dd=&bw=>

    NWS 5 day local forecast <https://forecast.weather.gov/MapClick.php?lon=-122.09304015350371&lat=37.08133745279595>

    Windy.com <https://www.windy.com/-Rain-thunder-rain?rain,36.573,-122.095,8,p:cities>

    Zoom Earth <https://zoom.earth/maps/radar/#view=37.0905,-122.0892,8z/date=2026-03-31,14:40,-7/overlays=wind>
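
    As an aside, for anyone who wants to script this sort of forecast
    comparison instead of clicking through the links above: NWS exposes a
    public JSON API at api.weather.gov, where a "points" lookup for a
    lat/lon returns the forecast endpoints assigned to that grid cell. A
    minimal sketch (coordinates are the Ben Lomond ones from the
    forecast.weather.gov link above; error handling omitted, and the
    User-Agent string is just a placeholder -- NWS asks for one that
    identifies you):

    ```python
    import json
    import urllib.request

    BASE = "https://api.weather.gov"

    def point_url(lat, lon):
        """Build the api.weather.gov 'points' URL for a lat/lon.

        The API rejects coordinates with more than 4 decimal places,
        so round before formatting."""
        return f"{BASE}/points/{round(lat, 4)},{round(lon, 4)}"

    def forecast_urls(lat, lon):
        """Fetch point metadata and return the 12-hour and hourly
        forecast endpoints NWS assigns to this location."""
        req = urllib.request.Request(
            point_url(lat, lon),
            headers={"User-Agent": "forecast-compare-demo (you@example.com)"},
        )
        with urllib.request.urlopen(req) as resp:
            props = json.load(resp)["properties"]
        return props["forecast"], props["forecastHourly"]

    # Example (requires network access):
    #   twelve_hr, hourly = forecast_urls(37.0813, -122.093)
    # Fetching both endpoints and diffing the period text is one way to
    # see the model-to-model variation discussed above.
    ```

    The hourly endpoint is roughly what the paid Windy tier gives you at
    1 hr resolution, so this is a free way to get the same granularity
    for US locations.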

There's a completely different set of links for fire related info,
    marine weather, and aircraft weather. They change constantly. The
    links listed above are the ones I've been using currently.


    --
    Jeff Liebermann jeffl@cruzio.com
    PO Box 272 http://www.LearnByDestroying.com
    Ben Lomond CA 95005-0272 AE6KS 831-336-2558


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)