Considering computer memory
Doghouse – More Performance
Computer memory is tied to increases in potential uses and performance, and it is affected by ever-increasing machine complexity.
I belong to mailing lists for several far-flung groups that I have had the privilege of knowing over the years. One of those lists is "Linux-aus", loosely formed around the Australian Linux community.
Recently one of its members, Russell Coker, lamented about the huge memories that Linux systems use these days.
This led to a discussion about how inexpensive main memory is today and how fast processors are, and about whether we should really care how much memory an application uses or how many cycles it takes to run.
Main memories, even in laptops, have grown from the 64MB of the mid-1990s to 8, 16, or even 32GB (in top-end laptops) today. That is a growth of approximately 500 times (32GB divided by 64MB is 512).
A less discussed issue is the increase in clock speed and number of cores on modern-day systems. Perhaps this is not discussed as much because, for the most part, the speed has increased while electricity usage has dropped.
Some processors, such as GPUs, which were first used to accelerate graphics, are now used for other types of computation.
Originally, graphics were controlled by "dumb frame buffers." The main CPU would put the right bits in the right places in graphics memory, at first for simple black-and-white graphics, later for grayscale, and still later for 8-, 16-, and 24-bit color graphics. The main CPU had to do anti-aliasing, color mixing, and other graphical calculations in addition to running the main logic of the program.
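To make "the right bits in the right places" concrete, here is a minimal sketch in C. The resolution, the pixel format, and the use of an ordinary array to stand in for memory-mapped graphics RAM are all assumptions for illustration; real hardware exposed a fixed address and its own layout.

#include <stdint.h>
#include <stdio.h>

#define FB_WIDTH  640
#define FB_HEIGHT 480

/* Stand-in for memory-mapped graphics RAM; on real hardware this
 * would be a fixed address dictated by the video card, not an
 * ordinary array. */
static uint32_t framebuffer[FB_WIDTH * FB_HEIGHT];

/* Pack one 24-bit RGB pixel and store it; the CPU does all the work. */
static void put_pixel(int x, int y, uint8_t r, uint8_t g, uint8_t b)
{
    framebuffer[y * FB_WIDTH + x] =
        ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

int main(void)
{
    /* Draw a white horizontal line across the middle of the screen. */
    for (int x = 0; x < FB_WIDTH; x++)
        put_pixel(x, FB_HEIGHT / 2, 255, 255, 255);
    printf("pixel at (320,240) = 0x%06x\n",
           framebuffer[240 * FB_WIDTH + 320]);
    return 0;
}

Every pixel of every frame went through code like this on the main CPU, which is exactly the work the accelerators later took over.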
By moving these graphics calculations off to a separate, specialized processor, you freed up the main CPU to do non-specialized calculations. This specialized processor was an accelerator, just as floating-point accelerators had offloaded floating-point calculations a decade before. Certain i386 processors could do only integer arithmetic, so floating-point work had to be emulated in software with integer instructions. Later, a separate floating-point chip (the i387) could be added, at extra cost.
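As a hedged illustration of what doing fractional math with only integer instructions meant, the sketch below multiplies two fractional values in 16.16 fixed-point form. This is not the actual algorithm the floating-point emulation libraries used, just the flavor of the extra work the i387 removed.

#include <stdint.h>
#include <stdio.h>

/* 16.16 fixed point: the top 16 bits hold the integer part,
 * the bottom 16 bits the fraction. */
typedef int32_t fix16;

#define FIX_ONE (1 << 16)

static fix16 fix_from_double(double d) { return (fix16)(d * FIX_ONE); }
static double fix_to_double(fix16 f)   { return (double)f / FIX_ONE; }

static fix16 fix_mul(fix16 a, fix16 b)
{
    /* Widen to 64 bits so the intermediate product cannot overflow,
     * then shift back down to 16.16. */
    return (fix16)(((int64_t)a * b) >> 16);
}

int main(void)
{
    fix16 a = fix_from_double(3.25);
    fix16 b = fix_from_double(1.5);
    printf("3.25 * 1.5 = %f\n", fix_to_double(fix_mul(a, b)));
    return 0;
}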
GPUs started to be added to unload the main CPU and to speed up graphics. Later, GPUs were designed as SIMD (single instruction, multiple data) processors, in which one instruction is applied to many data elements at once, accelerating graphics even more.
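A minimal sketch of the SIMD idea, assuming an x86 machine with SSE (GPUs apply the same principle across far wider lanes): one instruction adds four floats at a time, where a scalar loop would need four separate additions.

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics (x86) */

int main(void)
{
    float a[4] = { 1.0f,  2.0f,  3.0f,  4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float r[4];

    /* Load four floats into each 128-bit register, add them with
     * a single SIMD instruction, and store the four results. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(r, _mm_add_ps(va, vb));

    printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
    return 0;
}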
Eventually people wanted even more from graphics. They wanted GPUs that could do simulated 3D work, GPUs that could simulate light sources and shadows, and GPUs that could even be given programs to execute, running routines in parallel with the main CPUs.
So far we have only been talking about video, but audio also reared its ugly head, and audio processors followed, accelerating realistic sound.
Of course all of this complexity also generated more functionality in libraries used to massage the data to go into these accelerators.
Another "accelerator" that came into the computer space was web browsers. Their own little operating systems, web browsers run applications and extensions that use a lot of memory, as well as allow people to open huge numbers of tabs that also use up memory and CPU cycles.
Separate database engines also required more main memory to run efficiently.
Virtualization, containers, and other modern-day applications also ballooned memory usage.
The biggest problem with all of this is that while main memories have grown 500 times, the first-, second-, and third-level caches have not grown by anywhere near the same factor. The more memory we use, and the more scattered our accesses across it, the more cache misses we suffer and the longer we wait for data and instructions to be brought into the CPU through the cache. Some CPUs have little or no cache at all.
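The effect is easy to demonstrate. In the sketch below, the array size and stride are illustrative assumptions, chosen so the data no longer fits in a typical third-level cache. Both runs touch exactly the same number of elements, but the strided walk defeats the cache.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (16 * 1024 * 1024)   /* 16M ints = 64MB, bigger than typical caches */

static double walk(const int *a, size_t stride)
{
    volatile long sum = 0;      /* volatile keeps the loop from being optimized away */
    clock_t t0 = clock();
    /* Touch every element exactly once, in stride-sized hops. */
    for (size_t start = 0; start < stride; start++)
        for (size_t i = start; i < N; i += stride)
            sum += a[i];
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    int *a = calloc(N, sizeof *a);
    if (a == NULL)
        return 1;
    printf("sequential walk: %.2fs\n", walk(a, 1));    /* cache-friendly */
    printf("strided walk:    %.2fs\n", walk(a, 4096)); /* mostly cache misses */
    free(a);
    return 0;
}

On typical desktop hardware the strided walk runs several times slower, even though the arithmetic is identical; the difference is nothing but cache (and TLB) misses.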
The idea brought forth by Russell Coker and the other members of this Australian mailing list was to offer prizes to people who step back and take a long look at memory usage, as well as do more focused profiling of CPU utilization.
Years ago, Digital Equipment Corporation's (DEC) OSF/1 operating system needed 64MB of main memory to boot and run on a DEC AlphaServer-class machine. That is laughably small by today's standards, yet the marketing people wanted the system to boot and run on only 32MB of main memory. This would allow DEC field-service people to install and verify the installation without forcing the customer to buy an additional 32MB of main memory from DEC, which was very expensive. After the machine was installed and signed off, the customer could add much less expensive memory from third-party vendors.
The DEC engineers spent a year squeezing the kernel and libraries so the system could boot in only 32MB. When it was finished, we got a surprise: The smaller system ran seven percent faster!
Because everything was smaller, more of the operating system remained in the system caches, resulting in fewer cache misses and faster delivery of the instructions and data to the cores of the CPU.