Stay nimble
Doghouse – Portability and Costs
Minimizing CPU overhead from the beginning can help you lower costs and maximize portability over time.
A number of years ago cloud computing came on the scene, with one of the first suppliers being Amazon. Amazon's use of their data farms peaked during the Christmas season, which left them with the capacity the rest of the year to sell Internet-accessible server computing to companies who needed it, at a fraction of what it would cost for a small company to supply it themselves. An industry was born.
Almost overnight (at least by most industry terms) companies started to turn over their server computing to other companies (AWS, Google, Microsoft, and others) who had the computers, staff, physical plant, security, and so on necessary to do the work.
I have advocated for the use of many of these cloud services when a fledgling company is starting out. In the open source space, you could think of places like SourceForge, GitHub, GitLab, and others as "cloud services" that make developing and collaborating on software (and even hardware) easier and less expensive.
The problem comes with two issues: lock-in and growth. Both of these have been with the computer industry for decades.
Lock-in can happen when the developer uses interfaces or platform features that are nonstandard. Even in the days of simpler programs, there were standards that allowed programs to be moved from computer to computer if the programmer coded only to the formal standard. Almost any commercial compiler, however, offered "extensions" to the standard that gave the programmer easier methods of coding or more efficient execution of the code. These extensions were usually set off in gray (or some other color) in the documentation, with a warning that this was an extension, so the programmer could avoid it if they wanted portable code.
A good programmer might then code what is called a "fallback": the program executes using only standard code, but when it is built or run in an environment where the extension exists, the extension can be used instead, determined either at compile time or at run time.
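As a minimal sketch of that idea (not code from any particular project), a compile-time fallback in C might look like the following, assuming GCC or Clang as the compilers that provide the extension:

#include <stdint.h>

/* Count the bits set in a word. If a known compiler extension is
 * available, use it; otherwise fall back to portable standard C. */
static unsigned int popcount32(uint32_t value)
{
#if defined(__GNUC__) || defined(__clang__)
    /* GCC/Clang builtin, typically a single machine instruction. */
    return (unsigned int)__builtin_popcount(value);
#else
    /* Portable fallback using only standard C. */
    unsigned int count = 0;
    while (value) {
        value &= value - 1;  /* clear the lowest set bit */
        count++;
    }
    return count;
#endif
}

The same shape works at run time as well: probe for the feature once at startup and call the extension through a function pointer, falling back to the standard implementation when the probe fails.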
I have used a simple example, but these "extensions" to standards occur at every level, from programming languages to library and system calls, to the interfaces of your cloud systems, which cloud service providers sometimes offer as their advantage over their competitors.
All of this might be fine if your cloud service were guaranteed to always be the least expensive, to remain stable, and to give you all the services you need as your company grows. I have, however, worked for some of the largest companies on earth, companies I thought would be there "forever" and that are now completely gone. You have to be ready to move, sometimes comparatively quickly.
The second reason for being nimble with regard to portability is the growing expense of cloud computing versus the cost of running your own server systems, especially in certain environments. These costs can be both economic and political. The more your company grows in size and scale, the more you should plan for the contingency of having to move or separate your server loads.
This article was inspired by a recently published whitepaper showing how the cloud service costs of some users increased rapidly over time, as their data or compute load grew, to the point where it might be practical for some large companies to create their own data centers again, after having moved to cloud services several years ago. Unfortunately, moving even to another cloud service, much less to your own physical plant, is often complex and expensive.
Besides maintaining flexibility through portability, working with tools that let you reduce the amount of CPU time, data storage, data transfer, and Internet usage can reduce (sometimes dramatically) the charges from whatever cloud supplier you use.
Recently a programmer I work with went through some older code on their project and found an application that did some dynamic allocation of memory for each partial transaction. For reasons too complex to explain here, this caused an overhead of 1,200 milliseconds (yes, if you do the math, this means 1.2 seconds) for each transaction … painfully slow for the user, but also putting an unnecessary strain on the server. The programmer changed the algorithm to calculate how many allocations would be needed and then allocated the space just once, and (in effect) the CPU overhead dropped to zero. This overhead is one thing if there is only one user of the program, or if it is running on a laptop, but if hundreds of users are running it in a server environment, the wasted CPU time mounts up quickly.
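The details of that application were not given, so the following C sketch only illustrates the general pattern; the Item type and the two functions are hypothetical names invented for this example:

#include <stdlib.h>
#include <string.h>

/* Hypothetical record type, used only for illustration. */
typedef struct {
    double amount;
    char   id[32];
} Item;

/* Slow pattern: grow the buffer one element at a time, paying for a
 * reallocation (and a possible copy) on every partial transaction. */
Item *collect_items_slow(size_t n)
{
    Item *items = NULL;
    for (size_t i = 0; i < n; i++) {
        Item *tmp = realloc(items, (i + 1) * sizeof(Item));
        if (tmp == NULL) {
            free(items);
            return NULL;
        }
        items = tmp;
        memset(&items[i], 0, sizeof(Item));  /* fill in the new item */
    }
    return items;
}

/* Faster pattern: calculate how many items are needed up front and
 * allocate the whole block once. */
Item *collect_items_fast(size_t n)
{
    Item *items = calloc(n, sizeof(Item));   /* single allocation */
    if (items == NULL)
        return NULL;
    /* fill in all n items here */
    return items;
}

The first version also fragments the heap and may copy the buffer many times as it grows; the second does a single allocation and neither of those things.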
In summary, what is often inexpensive in small quantities can rapidly become a big expense in larger quantities, so plan to minimize the overhead from the very beginning, and remember that "performance" is not just how fast your application runs, but also how portable your code is and how many programming resources it can save if you have to move your code.