• Page file size

    From Stan Brown@3:633/10 to All on Monday, March 09, 2026 14:07:14
    This article
    https://www.howtogeek.com/these-windows-background-processes-are-slowly-shortening-your-ssds-lifespan/
    suggests eliminating the page file, but I wonder if that's good
    advice. I have a 1 TB SSD (broken into six partitions) and 16 GB of
    RAM, and my pagefile is 16,384 MB. The system set that, I didn't.

    In an admin command prompt, sysdm.cpl → Advanced → Performance →
    Advanced → Virtual memory → Change says
    Minimum allowed 16 MB
    Recommended 2904 MB
    Currently allocated 16384 MB

    I don't do video editing or any other massive jobs that I can think
    of.

    Any ideas --
    1. where Windows came up with that recommendation?
    2. Why it's currently got a page file the size of 100% of my RAM?
    3. Should I reduce the page file to 2904 MB (or some other value),
    eliminate it, or leave it as it is now?

    --
    "The power of accurate observation is frequently called cynicism by
    those who don't have it." --George Bernard Shaw

    --- PyGate Linux v1.5.12
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From JJ@3:633/10 to All on Tuesday, March 10, 2026 07:15:11
    On Mon, 9 Mar 2026 14:07:14 -0700, Stan Brown wrote:
    This article
    https://www.howtogeek.com/these-windows-background-processes-are-slowly-shortening-your-ssds-lifespan/
    suggests eliminating the page file, but I wonder if that's good
    advice. I have a 1 TB SSD (broken into six partitions) and 16 GB of
    RAM, and my pagefile is 16,384 MB. The system set that, I didn't.

    In an admin command prompt, sysdm.cpl → Advanced → Performance →
    Advanced → Virtual memory → Change says
    Minimum allowed 16 MB
    Recommended 2904 MB
    Currently allocated 16384 MB

    I don't do video editing or any other massive jobs that I can think
    of.

    Any ideas --
    1. where Windows came up with that recommendation?
    2. Why it's currently got a page file the size of 100% of my RAM?
    3. Should I reduce the page file to 2904 MB (or some other value),
    eliminate it, or leave it as it is now?

    A page file is not actually required by the system. It is required for a proper full RAM dump during a BSOD - for later troubleshooting. The page file's main purpose is to keep applications from breaking when they want more memory
    space but there's no free RAM space left, i.e. to avoid
    "Insufficient memory" or "Not enough memory" error messages and let the affected applications continue what they were doing. It generally
    increases applications' stability - as long as they're not realtime applications, since the page file is memory storage that is slower than RAM.

    Advice or recommendations on page file size, made without knowing a user's daily application use, are mostly aimed at the worst case for most users. The RAM
    size in a user's computer is either according to the current standard, or according to the user's specific need. A page file of the same
    size as the RAM is just a nice number which is adequate for most (if not
    all) users. If the page file needs to be larger than that, it means the
    user should have had a larger RAM size in the first place.
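    JJ's point - that "Not enough memory" errors appear when the commit limit (RAM plus page file) is exhausted, not when physical RAM alone fills up - can be sketched with a toy model. This is purely illustrative; the function and the numbers are invented for the sketch, not a real Windows API:

```python
# Toy model of the Windows commit limit described above (an
# illustration, not the real allocator): the OS can promise at most
# RAM + pagefile megabytes of private memory.  Requests beyond that
# fail even if plenty of RAM is still idle.

def try_commit(requests_mb, ram_mb, pagefile_mb):
    """Return (granted, refused) lists for a sequence of commit requests."""
    limit = ram_mb + pagefile_mb   # the commit limit
    used = 0
    granted, refused = [], []
    for req in requests_mb:
        if used + req <= limit:
            used += req
            granted.append(req)
        else:
            refused.append(req)    # app would see "Not enough memory"
    return granted, refused

# 16 GB RAM with no pagefile: a 10 GB + 8 GB pair of requests cannot
# both be satisfied, even if the data is never all touched at once.
g, r = try_commit([10240, 8192], ram_mb=16384, pagefile_mb=0)
# With a 16 GB pagefile the same workload fits.
g2, r2 = try_commit([10240, 8192], ram_mb=16384, pagefile_mb=16384)
```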

    --- PyGate Linux v1.5.12
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From VanguardLH@3:633/10 to All on Monday, March 09, 2026 21:33:48
    Stan Brown <someone@example.com> wrote:

    This article
    https://www.howtogeek.com/these-windows-background-processes-are-slowly-shortening-your-ssds-lifespan/
    suggests eliminating the page file, but I wonder if that's good
    advice. I have a 1 TB SSD (broken into six partitions) and 16 GB of
    RAM, and my pagefile is 16,384 MB. The system set that, I didn't.

    In an admin command prompt, sysdm.cpl → Advanced → Performance →
    Advanced → Virtual memory → Change says
    Minimum allowed 16 MB
    Recommended 2904 MB
    Currently allocated 16384 MB

    I don't do video editing or any other massive jobs that I can think
    of.

    Any ideas --
    1. where Windows came up with that recommendation?
    2. Why it's currently got a page file the size of 100% of my RAM?
    3. Should I reduce the page file to 2904 MB (or some other value),
    eliminate it, or leave it as it is now?

    For speed in loading objects and textures, like in video games, some
    will store that data in memory, but use the pagefile for storage since
    not all objects and textures are needed at once or at the same time. If
    there is no pagefile space, they cannot preload their data which means
    there are hesitations during gameplay to load that data from files on
    the drive (whether SSD or HDD). Some games will fail to start if they
    cannot allocate pagefile space to store (prefetch) their data.

    Windows will also use pagefile space. If the drives are equal in
    performance, like you have 2 HDDs or 2 SSDs, it is best to configure
    paging settings to first use pagefile space on the drive other than the
    one for the OS partition. For example, use D: for primary pagefile
    space when C: is where Windows resides (and those partitions are on
    different but equal drives). This allows overlap of page read and
    writes across the separate paging files.

    I have an SSD with the recommended pagefile space. That does NOT mean
    all that storage space is immediately allocated to the pagefile. It
    means how much you choose to reserve as a maximum. There is the
    recommended size (maximum), and the current allocation size.

    In Task Manager, how much physical RAM is installed, and how much of it
    is currently free (unused)? There may be times when you have so many
    processes running that you use up the physical RAM, and need to borrow
    from pagefile space; else, you'll get warnings about low memory, or even
    worse, like extremely slow response of the OS or programs, or crashes.

    You won't extend your SSD's lifetime by eliminating pagefile space.
    Windows and programs will still use drive storage while they run. A far superior means to extend the lifespan of your SSD is to increase the
    amount of unallocated space on an SSD. That gets used for what is
    called overprovisioning.

    https://www.kingston.com/en/blog/pc-performance/overprovisioning
    https://www.seagate.com/blog/ssd-over-provisioning-and-benefits/
    https://www.techtarget.com/searchstorage/definition/overprovisioning-SSD-overprovisioning

    Many SSD drives come with their own tool to alter overprovisioning, like Samsung's Magician. All they do is facilitate (dumbify) the process of changing the unallocated space on an SSD. You can use any partition
    manager to change partition size(s) to create a larger unallocated
    portion of an SSD. Overprovisioning uses the unallocated space on an
    SSD to reduce write amplification.
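    The overprovisioning arithmetic above can be sketched with the commonly cited formula OP% = (physical − usable) / usable × 100. The capacities below are illustrative round numbers, not measurements from any particular drive:

```python
# Overprovisioning as a percentage of the user-visible capacity,
# per the common formula OP% = (physical - usable) / usable * 100.

def op_percent(physical_bytes, usable_bytes):
    return (physical_bytes - usable_bytes) / usable_bytes * 100.0

GiB, GB = 2**30, 10**9

# A "1 TB" drive often carries 1000 GiB of raw flash but exposes
# 1000 GB (decimal) to the host - roughly 7.4% built-in OP:
builtin = op_percent(1000 * GiB, 1000 * GB)

# Shrinking partitions to leave 120 GB unallocated (space the host
# never writes) raises the effective figure to roughly 22%:
with_unalloc = op_percent(1000 * GiB, 880 * GB)
```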

    Since I have lots of unused space on my SSD even after years of use, I
    upped the typical 7% of factory overprovisioning to 20%. Note that overprovisioning is already built into the SSD. It's part of the
    storage space that you can never access. It's internal. Unallocated
    space on an SSD just lets you add more overprovisioning space. You have
    less total space within the partition(s), but your SSD lasts longer.
    Consumer SSDs with their internal overprovisioning are designed to last
    10 years with the typical write load encountered on end-user hosts.
    However, some users incur a lot more writes. Server SSDs are typically configured with 10% internal overprovisioning, but many admins will up
    the unallocated space to twice, or more. Depends on how sensitive you
    are to reducing the size of your partition(s) to have more unallocated
    space to use for overprovisioning that will extend the life of your SSD.

    Focus first on increasing overprovisioning space (unallocated) on your
    SSD before deciding whether you really need the partition space consumed by the pagefile, or can get by with less paging space. Also, instead of specifying a dynamic size for the pagefile, you can specify a fixed
    size: make the minimum and maximum the same value. For example,
    make both something like 16,384 MB, or 4,096 MB if you never see the OS reporting low memory and none of your games, drawing programs, or other
    memory-intensive software complains about lack of pagefile space.
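    For reference, a fixed min = max setting ends up as a "path initial maximum" string in the REG_MULTI_SZ value PagingFiles under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management. A minimal sketch that only formats such an entry (it deliberately does not touch the registry):

```python
# Format one PagingFiles entry.  Windows stores one
# "path initial_MB maximum_MB" string per pagefile in the
# PagingFiles REG_MULTI_SZ value; "0 0" means system-managed.

def pagingfiles_entry(path, size_mb):
    """Fixed-size pagefile: initial and maximum set to the same value."""
    return f"{path} {size_mb} {size_mb}"

entry = pagingfiles_entry(r"C:\pagefile.sys", 4096)
```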

    Personally I never use hibernation which writes a hiberfil.sys file onto
    your drive. I either shutdown, or leave the computer running 24x7. I
    still use power saving options, but not with hibernation or hybrid mode
    (which also uses hibernation). With an SSD, the difference between resuming from hibernation and powering up afresh is about 7 seconds for my setup,
    so I don't need to incur all those writes to the SSD for a very small
    decrease in bootup time. Also, you might look into the size of your
    dump file (written on a crash if the crash isn't so severe as to
    preclude any disk writes). Very few users know how to diagnose the dump
    files, although there are tools to help interrogate them. Turn off dump logging, or configure it to use a mini-dump file (which can still help on
    those hard crashes due to, say, video driver issues).

    The article mentions SuperFetch. I leave it enabled. It is just a
    cache of pointers to programs. Lots of folks focus on this cache while remaining completely ignorant of all the other caches used in Windows, or any OS,
    like all the MRU (Most Recently Used) caches in the registry (which is
    copied from disk files into memory to speed up access to the registry
    via the registry API - a database in memory is far faster than opening
    and reading disk files).

    If you want to see the current size of your SuperFetch files, see:

    https://forensics.wiki/windows_superfetch_format/

    Also remember that Windows comes with its Indexing Service that is
    enabled by default. Maybe you like that feature. Maybe you use
    voidtools' [Search]Everything, and don't see the point of the Windows
    Index service. I had it disabled until I discovered that searching in MS
    Outlook (standalone local client) was unusable with Indexing disabled.

    By default, the %temp% folder (and other temp folders) is in the same partition on your SSD as Windows. You might decide to move %temp% to
    a different drive, like over to an HDD, but that could impact programs
    that make lots of use of repeated writes to temp files. You would
    create, say, D:\TEMP, and change the environment variable to point over
    there. That other drive could also be an SSD, but could be a smaller
    and cheaper one. Similarly, you could move the default Documents,
    Pictures, and other user profile folders to another drive; however,
    unless you are creating, deleting, and re-creating lots of user
    files, you don't gain much, if anything, in reduced write wear on the
    SSD holding the OS partition. Since you should be saving or moving
    copies of your documents to other drives anyway, either as backups or
    alternate storage locations, you wouldn't have many doc files on the
    OS partition in any case. It's the writes that degrade
    SSDs, not reading, copying, or deleting.

    As for the writer of the article you referenced, long
    experience in documentation does not equate to deep knowledge of what
    they write about. I've worked at several software publishers that
    had a Documentation department solely dedicated to producing the docs
    for the enterprise software (costing many thousands to hundreds of
    thousands of dollars). Their job was to document the software
    accurately and completely for the customers, but they did not have the
    expertise of the Development group in the intricacies of the software.
    Far too often with articles like this, the authors are regurgitating "information" they found elsewhere and presenting it as their own
    expertise. They didn't learn, test, or experience the effects (short
    and long range) of their suggestions. They collect what others said.

    --- PyGate Linux v1.5.12
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Paul@3:633/10 to All on Tuesday, March 10, 2026 01:14:51
    On Mon, 3/9/2026 5:07 PM, Stan Brown wrote:
    This article
    https://www.howtogeek.com/these-windows-background-processes-are-slowly-shortening-your-ssds-lifespan/
    suggests eliminating the page file, but I wonder if that's good
    advice. I have a 1 TB SSD (broken into six partitions) and 16 GB of
    RAM, and my pagefile is 16,384 MB. The system set that, I didn't.

    In an admin command prompt, sysdm.cpl → Advanced → Performance →
    Advanced → Virtual memory → Change says
    Minimum allowed 16 MB
    Recommended 2904 MB
    Currently allocated 16384 MB

    I don't do video editing or any other massive jobs that I can think
    of.

    Any ideas --
    1. where Windows came up with that recommendation?
    2. Why it's currently got a page file the size of 100% of my RAM?
    3. Should I reduce the page file to 2904 MB (or some other value),
    eliminate it, or leave it as it is now?


    Modern systems "don't want to use" the pagefile.sys for virtual memory.

    You will find some resistance to attacking the pagefile and attempting
    to wear it.

    You can fix the pagefile to anywhere from 350MB (the "write-once" size)
    to 512MB or 1GB. I don't think any machine in the room, or outside
    the room, uses a pagefile of more than 1GB. Even the Optiplex 780 with
    16GB of RAM, still has a pagefile of only 1GB. That's a Win10 machine
    that made it all the way to 22H2 (needed a video card to do it).
    That's Core2 Duo epoch.

    The value should potentially not be set to zero, "because of the
    transient response". If two Ring 3 processes race to consume all of memory
    at the same time, a "blip" leaks through and gets stored in the pagefile.
    It might be 50KB or some small value. If the pagefile were zero, such a transient might not be satisfied and something naughty could happen.
    When I have had two determined racers that go right to the wall (the System Write
    buffer was one party in the fight), the OS just froze
    and there was no escape for me. I knew at the time I had made a mistake,
    but I couldn't type fast enough to stop it :-) Oops.

    You could set up a separate HDD and put a pagefile.sys on it,
    then watch in Resource Monitor or Task Manager, for activity.
    There is probably a perfmon.msc statistic for pagefile use
    and you could set that plot up. Then try some things
    that you think will page out, and see if any paging
    actually occurs.

    Paul

    --- PyGate Linux v1.5.12
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From JJ@3:633/10 to All on Tuesday, March 10, 2026 18:42:24
    On Tue, 10 Mar 2026 20:29:26 +1100, Daniel70 wrote:

    Sorry!! What?? MicroSoft knows BSOD events occur but, rather than fixing THAT PROBLEM, they include a Work Around!!

    Really??

    Microsoft's biggest fault is to blindly halt the entire f###ing system and
    not let a kernel-mode fault be passed to user mode, thus not giving
    the user any chance to correct anything even when it should be possible.

    It's a legacy behaviour since Windows 3.0. And making the BSOD look "nicer" in later Windows versions only makes troubleshooting more difficult, by
    providing less error information.

    --- PyGate Linux v1.5.12
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Paul@3:633/10 to All on Tuesday, March 10, 2026 12:12:34
    On Tue, 3/10/2026 5:29 AM, Daniel70 wrote:
    On 10/03/2026 11:15 am, JJ wrote:
    On Mon, 9 Mar 2026 14:07:14 -0700, Stan Brown wrote:

    Page file is not actually required for the system. Page file is
    required for proper full RAM dump during BSOD - for later
    troubleshooting.

    Sorry!! What?? MicroSoft knows BSOD events occur but, rather than fixing THAT PROBLEM, they include a Work Around!!

    Really??

    There are two sizes of error collection on Windows.
    Think of them as insanely big, or small.

    A mini-dump gives a stack trace. That might be
    the default configuration for a machine.

    If you set things up for a full dump (on a kernel panic AKA BSOD),
    then you can have a larger file. But what if your
    computer memory is large? Your dump would be a
    very large file, and typically the I/O rate during
    a dump is perhaps 4MB/sec or so. During a dump,
    there might not be any DMA used for the dump sequence.
    This is molasses slow.

    Try setting the pagefile to physical memory plus one gigabyte,
    and that will ensure there is enough room. On a 128GB machine
    this would be 129GB as a file size choice.
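    Paul's sizing rule and his ~4 MB/s dump-rate figure make for quick back-of-envelope arithmetic (the 4 MB/s rate is his rough estimate, not a spec):

```python
# Back-of-envelope numbers for the full-dump advice above:
# pagefile of RAM + 1 GB, and a dump written at roughly 4 MB/s.

def full_dump_pagefile_gb(ram_gb):
    """Pagefile size that leaves room for a full RAM dump."""
    return ram_gb + 1

def dump_time_hours(ram_gb, rate_mb_s=4):
    """How long writing the full dump would take at the given rate."""
    return ram_gb * 1024 / rate_mb_s / 3600

size = full_dump_pagefile_gb(128)   # 129 GB, as in the example
hours = dump_time_hours(128)        # ~9.1 hours at 4 MB/s - molasses slow
```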

    You may have to consult a web page for the details of
    enabling full dumps. Dumps can also use up space on C:
    and on a machine with piddly small storage, you definitely
    don't want full dumps on there. If your machine uses an eMMC,
    don't bother with full dumps.

    https://learn.microsoft.com/en-us/troubleshoot/windows-client/performance/generate-a-kernel-or-complete-crash-dump

    https://learn.microsoft.com/en-us/troubleshoot/windows-client/performance/how-to-determine-the-appropriate-page-file-size-for-64-bit-versions-of-windows#support-for-system-crash-dumps

    You can load a full dump into the windows debugger and
    get a stack trace. Doing any more than that is an
    "expert topic".

    Sending a full dump to someone else for analysis, takes forever.

    I used to modify Windows error handling, like disable Dr.Watson
    so I could do local program stack trace. But I don't bother any more.
    And full dumps are just too slow, to be sitting here waiting
    for the thing to finish. An SSD or an NVMe is unlikely
    to make the slightest difference to dump speed. The dump is
    basically done with the equivalent of PIO ("byte banging").

    Once you've configured for full dumps, crash the system and watch the fun :-) Then you can get a copy of WinDbg and do your stack trace. The tool
    below gives you a way to BSOD the machine (Ring 0). To create a program crash trace (Ring 3), you can write a short C program that dereferences location 0,
    and that should crash for you.

    https://learn.microsoft.com/en-us/sysinternals/downloads/notmyfault

    The secret to being an experimenter, is to make a backup first,
    before modifying something you're not familiar with. A Boy Scout
    is always prepared.

    Paul

    --- PyGate Linux v1.5.12
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)