Linux++ [Issue 22] with Hayden Barnes of Canonical and WSL

Hello and welcome to the twenty-second edition of Linux++, a weekly dive into the major topics, events, and headlines throughout the Linux world. This issue covers the time period starting Monday, July 13, 2020 and ending Monday, August 10, 2020.

This is not meant to be a deep dive into the different topics, but more like a curated selection of what I find most interesting each week with links provided to delve into the material as much as your heart desires.

If you missed the last report, Issue 21 from July 12, 2020, you can find it here. You can also find all of the issues posted on the official Linux++ Twitter account here or follow the Linux++ publication on the Destination Linux Network’s Front Page Linux platform here.

In addition, there is a Telegram group dedicated to the readers and anyone else interested in discussion about the newest updates in the GNU/Linux world available to join here.

For those that would like to get in contact with me regarding news, interview opportunities, or just to say hello, you can send an email to linuxplusplus@protonmail.com. I would definitely love to chat!

There is a lot to cover so let’s dive right in!

Personal News

Chemical Reaction: Redox OS

Rust programming language logo. (Credit: wallup.net)

If you’ve been following the publication for some time, then you will know that I’m studying systems programming and have been working with Rust for a bit now. I believe I once called it a more complex, convoluted, and complicated C++. However, after quite a few months of actually sitting down with the language, I have to say that I have really come to love it for many reasons–memory safety, high-level abstractions, and an incredible build tool (see: cargo)–even with a compiler that makes me want to pull my hair out sometimes (it’s for my own good!). I’ve even started using Rust at work in some small programs where I would normally use C to get better performance results, which is awesome.
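Just to give a taste of what the borrow checker actually enforces, here is a minimal sketch of my own (not from any real project) showing the class of memory bug Rust rejects at compile time:

    fn consume(v: Vec<i32>) -> i32 {
        // Ownership of the vector moves into this function...
        v.iter().sum()
    }

    fn main() {
        let data = vec![1, 2, 3];
        let total = consume(data);
        println!("total = {}", total);

        // ...so the line below would be a compile-time error
        // ("borrow of moved value: `data`") rather than a
        // use-after-free lurking until runtime:
        // println!("{:?}", data);
    }

In C, the equivalent mistake compiles fine and bites you later; in Rust, the program simply never builds until the bug is gone.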

However, when trying to learn extremely low-level stuff, like operating systems, C is still king. It definitely makes sense considering C was literally built for the UNIX operating system. Don’t get me wrong, I’m one of the people that actually likes reading C programs–I really do think that it is one of the cleanest and simplest languages (which explains why it hasn’t had a true contender to replace it in almost 50 years!). But, still, I doubted that Rust could be used to build an operating system from the ground up–that is, until I found a little project called Redox OS.

I discovered Redox OS some years ago when looking at alternative operating systems and wrote it off for two reasons–Rust and the use of a microkernel architecture. To give some perspective, UNIX, Linux, and BSD all use what is called a monolithic kernel, and even though Apple’s Darwin and Microsoft’s Windows NT are said to be “hybrid” kernels, they resemble monolithic kernels much more closely when all is said and done.

The difference between the two lies in where the different components of the OS live. At the base level, operating systems are usually split into two distinct areas–kernel space and user space. Without diving into too much detail, the monolithic kernel includes the majority of OS components like device drivers and the file system inside of kernel space, whereas the microkernel moves many of these into user space, except for the essential components that need near-instantaneous access to the hardware.

Different structures of operating system kernels. (Credit: wikipedia.org)

Because of this split, microkernels have generally been thought of as being significantly slower than monolithic kernels due to the need for message passing between major components in kernel space and user space. However, as I dig deeper and deeper into learning about operating systems, there are definitely some major advantages a microkernel can provide, such as increased security, an easier-to-maintain codebase, and increased modularity and configurability of the OS.
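As a loose illustration of the message-passing idea–written as ordinary user-space Rust with threads and channels, not actual kernel code–here is a sketch where a “driver” is isolated in its own thread and can only be reached through messages:

    use std::sync::mpsc;
    use std::thread;

    // A toy request protocol between a client and a "driver".
    enum Request {
        ReadBlock(u64), // read the block at this offset
        Shutdown,
    }

    fn main() {
        let (req_tx, req_rx) = mpsc::channel::<Request>();
        let (resp_tx, resp_rx) = mpsc::channel::<Vec<u8>>();

        // The "driver" never shares memory with the client; every
        // interaction crosses a channel, much like the isolation
        // boundary around a user-space driver in a microkernel.
        let driver = thread::spawn(move || {
            while let Ok(req) = req_rx.recv() {
                match req {
                    Request::ReadBlock(_offset) => {
                        // Pretend to read a 512-byte block from a device.
                        resp_tx.send(vec![0u8; 512]).unwrap();
                    }
                    Request::Shutdown => break,
                }
            }
        });

        req_tx.send(Request::ReadBlock(0)).unwrap();
        let block = resp_rx.recv().unwrap();
        println!("client received {} bytes", block.len());

        req_tx.send(Request::Shutdown).unwrap();
        driver.join().unwrap();
    }

Every hop across a channel here stands in for the boundary crossing a real microkernel pays on each request–the very overhead the performance argument is about, and the very isolation the security argument is about.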

Well then, you may ask, what viable general-purpose operating systems out there actually use a microkernel? Well, there’s MINIX 3 (a major inspiration for Linux), seL4, Escape, HelenOS, and that one that GNU was working on so long ago that I forgot its name (just kidding, it’s called the Hurd). Never heard of those, huh?

Therefore, it’s easy to see why a younger, more naive, and uninformed version of myself could quickly overlook a project that contains the word microkernel (and Rust wasn’t exactly what it is today back then, either). However, with my deep dive into Rust as well as lower-level programming concepts, I was inevitably led back to the Redox OS project.

Redox OS was created and is led by Jeremy Soller, who many of you might know as the Principal Engineer at the Denver-based Linux software and hardware company, System76. Jeremy’s influence can be seen with a quick glance at System76’s code base, which includes quite a few modern, “non-mainstream” programming languages like Rust (a lot of Rust!), TypeScript, Vala, and Elixir (another awesome language to check out if you’re interested in functional programming or if web development/any-other-highly-concurrent-application is more your speed!).

From the Redox OS homepage:

“Redox is a Unix-like Operating System written in Rust, aiming to bring the innovations of Rust to a modern microkernel and full set of applications.

  • Implemented in Rust
  • Microkernel Design
  • MIT Licensed
  • Drivers run in Userspace
  • Includes optional GUI – Orbital
  • Supports Rust Standard Library
  • Includes common Unix commands
  • Custom libc written in Rust (relibc)”

So, it’s easy to see that Redox is more than just the kernel, unlike Linux. Redox contains both kernel level components and user level components, similar to the BSDs, but without the monolithic kernel. One of the great things about Redox’s design is that Jeremy took care to prioritize innovation and experimentation over conforming to POSIX standards.
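One practical consequence of the “Supports Rust Standard Library” bullet in the feature list above is that ordinary std-based Rust programs can, at least in principle, be cross-compiled for Redox without source changes. A trivial sketch (assuming you have the Redox cross-toolchain set up, which the project’s documentation covers separately):

    use std::fs;

    fn main() {
        // Plain std calls like these are backed by relibc on Redox
        // and by glibc/musl on Linux -- the source stays the same.
        for entry in fs::read_dir("/").expect("could not read root directory") {
            if let Ok(entry) = entry {
                println!("{}", entry.path().display());
            }
        }
    }

The same file builds for Linux with a stock toolchain, which is exactly the kind of portability the relibc work is meant to buy.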

Redox OS using the Orbital interface. (Credit: redox-os.org)

In addition, Redox definitely shows signs of inspiration from multiple operating systems like Linux, seL4, Plan 9, BSD, and MINIX. However, Jeremy also put quite a bit of thought into these different implementations–their strengths and weaknesses–and altered Redox accordingly in order to build an OS that makes fewer tradeoffs than most.

It’s still early days in my exploration of the Redox codebase; however, I have found it to be a clean and relatively simple place to start really digging into the advanced features of Rust. The Redox project is still quite young and the amount of work that has been put into it thus far is extremely impressive, especially since much of the foundational work was done by Jeremy alone. So, if you have a propensity towards learning operating system internals and especially the Rust language, I would highly recommend you give this project a look. I just want to say thank you to Jeremy for creating such an interesting and ambitious project on top of all the incredible work he has done for System76.

And, with that, I’ll leave you with another exciting message that Jeremy recently sent out to the Linux Twitterverse, this time regarding future System76 hardware/firmware directions:

If you would like to check out Redox OS for yourself, you can find the official website here or the GitLab repository here. If you would like to learn more about the Rust programming language, check out their homepage here or get a quick overview of the language syntax here. In addition, if you would like to explore the world of operating systems, I would highly recommend the books Modern Operating Systems by Andrew Tanenbaum (the creator of MINIX) and Operating System Concepts by Silberschatz, Galvin, and Gagne.

Back to Table of Contents

Community News

Google & Canonical Bring Flutter to Ubuntu!

Flutter official logo. (Credit: flutter.dev)

There are few companies that ignite such controversial opinions in the Linux community and Free Software Movement as 1998’s startup-turned-tech-megalith, Google. Though the company was founded on humble beginnings and one of the most ambitious goals in existence–“to organize the world’s information and make it universally accessible and useful”–it has made a multitude of questionable decisions in its 20-plus-year lifespan.

One reason that Google destroyed its competition in the early days was its simplicity of use and now-legendary search engine algorithms (such as PageRank). Though Yahoo! had dominated the search engine market early on, when Google was released onto the world, it quickly became the go-to portal into the new and adventurous frontier that was the World Wide Web. Today, the search engine is so popular that it is the de facto standard and is so ingrained in our culture that phrases like “just Google it!” have become common parlance.

Another reason that Google became so successful was how the company was set up and run compared to major traditional tech companies like Oracle, IBM, or Dell. In order to draw in and retain the best talent in the engineering world, Google provided unheard-of amenities to their employees like free, healthy food in the cafeteria, massage appointments for stressed employees, and a number of extracurricular activities like ping-pong tournaments in the office. Google was one of the first companies to realize that providing an unrivaled experience for employees would entice and retain the top engineers in the world. On top of that, Google’s motto became “Don’t be evil”, something that permeated the company’s culture in its early years.

Adopted motto of Google that has since been removed from their offices. (Credit: mybroadband.co.za)

However, as a significant (in tech time) period has passed, Google has spread its influence as a global corporate superpower into almost every market it could, such as artificial intelligence, quantum computing, custom hardware solutions, the massive cloud computing industry, and even virtualization and container technology with its now de facto standard Kubernetes orchestration platform. With this spread, Google began abusing its power over information, destroying much of the trust that privacy- and security-minded people used to have for the innovative company. In essence, the “Don’t be evil” motto was killed by the company’s increasing grip on the information age.

So, whenever Google is mentioned in more tech-savvy communities like those that support Linux and FOSS, its name usually makes people shudder. Even so, it is impossible to deny the work that Google has done on making open-source software a standard today. With massive open source projects like Kubernetes, TensorFlow, Go, and Flutter, many other once-FOSS-allergic companies have realized the importance of the open-source model and have followed in Google’s footsteps, including other tech powerhouses like Facebook, Amazon, and even Microsoft.

Logo of the extremely popular Kubernetes open-source project from Google. (Credit: Ccaplat on medium.com)

Recently, an announcement came out of Google that appears to be a mostly positive move, even with the major and warranted critiques of the company throughout the open-source community: a partnership between Google and Canonical to bring Google’s open-source, cross-platform UI toolkit, Flutter, to Linux via Ubuntu.

For those unaware of Flutter, it is actually a project I would recommend checking out. As mentioned, Flutter is an open-source project out of Google that aims to make user interface development simpler and completely cross-platform, with applications that can easily be packaged for ChromeOS, macOS, iOS, Android, Windows, and Google’s latest foray into operating systems research, Fuchsia. The goal is to allow developers to write a single codebase via the Flutter API that can run on any platform, reducing the development time and overhead of having to tweak applications specifically for macOS, Windows, iOS, iPadOS, Linux, Android, and so on. In other words, it’s a huge time and headache saver!

Flutter uses the Dart programming language to build applications through the Flutter Framework and Software Development Kit (SDK), which provides a well-organized API for different widgets and layouts on different platforms as well as a ton of tools and utilities to make the process easier and more maintainable. Dart is a language that was specifically created for building performant applications on multiple platforms and has a C-like syntax. During development, Flutter runs on the Dart virtual machine, which provides a just-in-time (JIT) execution engine (implemented in C++), while release builds are compiled ahead-of-time to native code. Dart can also compile to JavaScript, which is used heavily in client-side (and more recently, server-side) web development.

Example of asynchronous Dart code. (Credit: dart.dev)

So, what does this Google-Canonical partnership mean for the Linux community? Well, first of all, with the rapid adoption of Flutter by application developers all around the world, Linux could benefit from a much wider application ecosystem as well as access to the latest software projects thanks to Flutter’s ease of cross-platform development. And, well, nobody ever complained about more software available natively for Linux, right?

In addition, Dart was built to be much faster than the languages behind current popular application frameworks like Electron, which uses JavaScript–a language known for incredible flexibility, but with a massive performance hit compared to lower-level languages like C or C++. Dart was created with both performance and flexibility in mind, allowing for much higher quality applications.

Though many are wary of the presence of Google-related software making its way into Linux, Flutter is an open-source project, so the source code is available for anyone to peruse. All in all, I think that this is a great and much needed move for desktop Linux and is one that will allow the open-source operating system to run the applications of the future–something we have often struggled with.

What are your thoughts on the project? Do you believe this is a net positive move for Linux? Does Google’s involvement make you wary? Let me know in the comments below!

If you’d like to check out the official announcement from Canonical, you can find it here. If you would like to learn more about Flutter, you can find everything you need at their official website here. In addition, if you’d like to take a look at the Dart programming language, you can find information on how to get started here.

Back to Table of Contents

Linus Torvalds on the Future of the Kernel

Linus Torvalds, creator of the Linux kernel. (Credit: Mike Rogoway on oregonlive.com)

At this year’s Open Source Summit from The Linux Foundation, the main event involved a question-and-answer style keynote featuring Linux creator and master maintainer, Linus Torvalds, that included discussion topics curated by VMware‘s Chief Open Source Officer, Dirk Hohndel.

There were a ton of topics covered in the 40-minute discussion, but one appeared to stand out the most to many in the Linux community: the difficulty that Torvalds and other kernel developers have experienced in bringing new, younger engineers into Linux kernel development and especially maintainership. This matters to the overall health of the Linux kernel, as many of its current core developers and maintainers are growing older and won’t be around forever.

Therefore, without an influx of young, intelligent engineers to continue the work, the Linux kernel could be in serious trouble somewhere down the line. When asked if the Linux developers were becoming “grey”, Torvalds responded with:

The new people are the ones who are often doing the [programming] work. We have managers and maintainers who are old and starting to grey, that’s, I think, a completely different issue. But, we do have a generation of people in their 30s who are moving up the ranks of maintainers so that we have that next wave of people to take over eventually. I mean, look, we’ve been doing this for almost 30 years so we need to start thinking about the next 20 to 30 years. And so, we need to have that next generation.

So, it appears that the problem doesn’t lie in finding engineers to program on the Linux kernel; the hard part is finding kernel maintainers, a job that many younger people might consider boring, as it is much more of a managerial position than one focused on writing code. It also requires quite a unique skill set and even more experience working on the kernel, its many subsystems, and the connections between those subsystems.

Linux source code.

It appears that this is how the project tends to work. Someone starts by sending in patches of code that get reviewed, and if they are deemed up to standards, get merged into the mainline kernel. After years of working on the codebase, which is vast and requires several areas of expertise, they may take on a position as a maintainer of a small subsystem in the codebase. Again, after years of trusted work as a maintainer, they might begin moving on to become maintainers of larger and larger subsystems of the kernel, replacing the old maintainers as they leave.

This process takes a considerable amount of time and effort as well as a massive line of trust built between the up-and-coming developer and maintainers. It isn’t exactly an easy process to ensure that the patches that are submitted are necessary and won’t cause regressions in other, seemingly-unrelated parts of the nearly 30 million lines of code that make up the Linux kernel. So, trust is a must…and that can only be built up after years of experience and consistently good work.

One of the biggest barriers to entry for new software engineers interested in contributing to the Linux kernel today is the fact that the project uses a mostly outdated contribution process that has been in place since development really took off in the mid-1990s. Namely, mailing lists are the only way to submit patches to the source code.

With the invention of distributed version control systems like git (also from the mind of Torvalds) and especially easy-to-use web platforms like GitHub and GitLab, many newer software developers and engineers become familiar with these easier-to-navigate services early on in their careers. Mailing lists can be chaotic and hard to follow when you come from the world of pull requests and carefully recorded review comments kept nice and tidy in a singular place.
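For anyone who has never seen it, the email workflow itself is driven entirely by git. A rough sketch of the usual dance (the addresses below are placeholders, not real kernel lists):

    # Turn your most recent commit into a mail-ready patch file.
    git format-patch -1 HEAD

    # Mail it to the relevant subsystem list and maintainer for review.
    git send-email --to=subsystem-list@example.org \
        --cc=maintainer@example.org 0001-*.patch

Review then happens as plain-text email replies, which is precisely the part that feels foreign to developers raised on pull requests.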

GitLab and GitHub logos. (Credit: svitla.com)

Therefore, if the Linux kernel embraced a more modern way to take contributions from developers–one that is familiar from basically any software role today–I think that the number of younger contributors might rise and diversify in a good way. Of course, reorganizing the nearly 30-million-line codebase is another story, but it could be done.

As for me, I hope to see a shift in the kernel developers’ method of work to encourage contributions from talented people all around the world, of all ages and backgrounds. Actually understanding the Linux kernel well enough to make changes to the codebase is hard enough–add in the difficulty of submitting those changes and keeping track of reviews, and the pool of possible talent once again shrinks.

If you would like to learn more about the Linux kernel itself, you can find a really handy website as an introduction to new or prospective contributors here. Also, the main website for the Linux kernel itself (including the downloadable source code) can be found here.

If you’re curious about Linus’ talk with Dirk Hohndel at the Open Source Summit, it is linked below for your viewing pleasure:

Back to Table of Contents

Apple’s Transition to ARM with “Apple Silicon”

Apple official logo. (Credit: Coral Murphy on usatoday.com)

One of the least shocking revelations of the past month happened at Apple‘s annual WWDC20 (World Wide Developers Conference), where Apple CEO Tim Cook announced that the company would be leaving Intel behind for their own custom ARM-based chips, labeled “Apple Silicon“, going forward. It definitely brings back memories of Apple’s switch to the PowerPC platform, which the company used for many years before making the jump to Intel. You might say, “Eric, why is this the least shocking? This is huge!”, and you’d be right–it’s just that pretty much everyone saw this coming from a mile away.

Many people think of ARM as a single architecture, like x86_64 or i386, but it isn’t. ARM is actually a family of reduced instruction set computer (RISC) architectures. The name comes from Arm Holdings, the company that designs the architecture. Arm Holdings holds the rights to the ARM architecture, but sells licenses to other companies who want to develop customized processors for their devices.

Arm Holdings’ official logo. (Credit: arm.com)

Therefore, there are a significant number of different ARM processors on the market that share some of the base architecture, but are quite different from each other in other respects. So, though coverage of this announcement usually says that Apple is moving to the ARM architecture, the truth is that they are moving to a completely new design, named Apple Silicon, that will use the ARM architecture as a base, but will not be compatible with other ARM processors, like those found in PINE64‘s Pinebook Pro or the Raspberry Pi line of single-board computers.

Even so, this is definitely some exciting news, especially for the Apple lovers out there. One reason macOS has become an extremely popular platform is that the company has control over nearly every aspect of their products–the hardware, firmware, and operating system can all be specially tweaked to work much better together than an operating system that has to account for a wide range of processors, firmware, and other components. With the removal of their reliance on Intel, we can expect to see an even more tightly integrated computing stack. To some, this will make macOS an even better, rock-solid experience than before.

However, with this move, Apple is moving even further towards what is called “vendor lock-in”. This means that users will give up many of their computing freedoms for the comfort and convenience of the Apple platform. Of course, many people don’t really care much about this, as Apple is still the ultimate status symbol in the computing world. For those that like to experiment with their hardware and software, though, this is a major blow, as the ability to mess with Apple devices now becomes non-existent.

Apple began this trend recently when they started gluing components into their devices, making it impossible to fix or swap out components without the proper tooling–over which Apple had a monopoly. This meant that to get your Apple computer fixed, you could no longer fix it yourself or take it to the local shop. Nope, it had to be sent to Apple, and you can best believe that you’d be charged a premium for the service. Recently, Apple has licensed specific shops to become Apple certified so that the company no longer holds a monopoly over the right to repair.

It’s hard to tell what the future holds for the tech megalith, but this is definitely something to keep your eye on. Like it or not, Apple is one of the largest tech influencers in the world, so when they decide on a massive direction change, you can expect others to follow suit as quickly as possible.

Luckily, in the Linux space, ARM processor work has been going on for quite some time and we’re likely to see exponential growth with this technology as the years move on. But, don’t get your hopes up for running Linux on the new Apple Silicon devices–you can be absolutely sure that Apple won’t allow that and will confine Linux users to virtual machine technologies within macOS instead.

This is definitely a new era for Apple’s Mac products, and it will be very interesting to see how this move affects their users as well as the Linux desktop community. I’d be lying if I said that I wasn’t curious to at least check out the upcoming Apple Silicon MacBooks, but I would never pay the asking price. Should be interesting to see how this all plays out down the line!

If you would like to watch the keynote from WWDC20 hosted by Apple CEO Tim Cook, you can find the entire segment linked below:

Back to Table of Contents

SUSE Acquires Rancher Labs

SUSE and Rancher Labs official logos. (Credit: Sheng Liang on rancher.com)

SUSE is a company that has been taking some impressive strides these last few months, including helping fight COVID-19 in a variety of ways, a complete rebranding and modernization of the company’s assets and marketing fundamentals, and the release of SUSE Linux Enterprise Server (SLES) 15 SP2. However, one piece of recent news from SUSE may have a much larger impact on the focus of the enterprise open-source company going forward. That news? Well, the acquisition of Kubernetes cluster experts, Rancher Labs. From the official Rancher documentation:

Rancher is an open source software platform that enables organizations to run containers in production. With Rancher, organizations no longer have to build a container services platform from scratch using a distinct set of open source technologies. Rancher supplies the entire software stack needed to manage containers in production.

Basically, Rancher is an extremely powerful tool that empowers information technology teams to build large Kubernetes, Docker Swarm, Mesos, or Rancher’s own “Cattle” clusters that can easily scale as needed. In the past few years, the open source Kubernetes project from Google has risen to become the de facto container orchestration platform in the technology sector and one of the fastest growing open source projects in the world.

Containers are becoming one of the most important concepts in software systems today and are used extensively for everything from small-scale web development to scientific computing on the largest supercomputers in the world; you can find containers–with Docker being the most popular today–literally everywhere.

With competitors like Red Hat and Canonical investing quite a bit in improving container technology, this acquisition by SUSE may allow the company to investigate, implement, and support these technologies even further. It will definitely be interesting to see how Rancher is integrated into the SUSE stack and I wish both companies the best on their merger.

If you would like to read the official announcement from Rancher Labs, you can find it here. In addition, if you would like to learn more about exactly what capabilities Rancher provides, an official explanation video is linked below:

Back to Table of Contents

Fedora 33 to Adopt Btrfs as Default!

Btrfs + Fedora by Michael Tunnell

A few years ago, there was one thing I always remember hearing–“Btrfs is not ready”. This was always confusing to me, since the two largest Linux-based enterprise companies took polar opposite stances on the file system. Instead of even allowing the file system as an option, Red Hat chose to completely remove it from Red Hat Enterprise Linux 8, their latest enterprise release. On the other hand, SUSE made a completely different decision–making it the default file system in all of their distributions. Huh?

It certainly confused me at the time, but I have to admit, I had very little interest in file systems back then. I didn’t run large production machines or anything like that, so ext4 (and for a brief time, ext3) has served me well for my personal machines. Of course, I’m an extremely paranoid user who backs up their data frequently to multiple locations, especially if something large hits, like a new kernel patch.

Even so, I’ve long heard the virtues sung of Sun Microsystems‘ (now Oracle’s) Zettabyte File System, or ZFS (you might know it as “Zed FS” ;)). Developed for Sun’s UNIX operating system, Solaris (and later their OpenSolaris platform), ZFS was hailed as the “next generation” file system–and in many respects that is true. The BSDs jumped on ZFS after OpenSolaris was discontinued and Solaris was closed-sourced by Oracle, and many BSD users have been enjoying its benefits for years.

But, Linux? Nope. Unfortunately, the CDDL license that ships with ZFS is considered incompatible with Linux’s GPL v2 by many, including the man with the final say, Linus Torvalds. So, in 2007 a new project was started to try to bring a next generation file system to Linux. That file system was named the b-tree file system, or Btrfs for short (the b-tree being the self-balancing tree data structure that the file system is built around).
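Incidentally, you can get a feel for the b-tree idea from Rust’s standard library, which ships an ordered map backed by one. This tiny example illustrates only the data structure property a filesystem leans on–keys stay sorted, so ordered range lookups are cheap–and nothing about Btrfs itself:

    use std::collections::BTreeMap;

    fn main() {
        // Insertion order doesn't matter; a B-tree keeps keys sorted.
        let mut extents = BTreeMap::new();
        extents.insert(8192u64, "extent C");
        extents.insert(0u64, "extent A");
        extents.insert(4096u64, "extent B");

        // Ordered range queries are cheap -- the property a
        // filesystem wants for indexing data on disk.
        for (offset, name) in extents.range(0..=4096) {
            println!("{}: {}", offset, name);
        }
    }

This prints “0: extent A” and then “4096: extent B”, in key order regardless of how the entries went in.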

Though many problems were encountered with Btrfs in the early stages of the project, it has since found use in a variety of high profile companies including SUSE and Facebook. The improvements to Btrfs from engineers at these companies have brought Red Hat back to taking a serious look at the updated file system.

In turn, with input from their large contributor base, the Fedora Engineering Steering Committee (FESCo) voted on bringing Btrfs into the next iteration, Fedora 33, as the default file system for its desktop variants, similar to how openSUSE has operated for years now. And, it appears that the vote passed by a significant margin.

This is a huge turn of events for Btrfs. Though the file system has enjoyed extensive work from SUSE and Facebook engineers, throwing the might of Fedora and Red Hat engineers behind the project should only improve it further. As we know, Fedora serves as a testing ground for Red Hat Enterprise Linux, so if the transition to Btrfs proves to be a net win for Fedora, it may well become the default file system in the next iteration of RHEL, version 9.

Also, with the 19.10 release of Ubuntu, the developers added experimental ZFS support to the hugely popular Linux distribution. Though Canonical is known for marching to the beat of their own drum, if Red Hat and SUSE begin backing Btrfs–along with the questionable issues around ZFS licensing–we may see Canonical adopt Btrfs instead of ZFS as well. This is not too far-fetched, as Canonical has changed course more than once–dropping Unity 8 in favor of GNOME 3, dropping Upstart in favor of systemd, and dropping the Mir display server in favor of Wayland.

If this were to happen in the future, it is easy to say that the future of Btrfs is looking brighter than it ever has. There is no doubt that the next generation file system has grown considerably over the last few years and many now consider it to be in a stable and reliable state. Offering many advantages of ZFS without the licensing issue will definitely make Btrfs an enticing option for more Linux distributions to adopt as the default file system, or at least a choice upon installation.

It will be interesting to see how this all plays out, but I for one am extremely happy to see Btrfs receive the praise that it certainly deserves today. I want to congratulate the Btrfs developers for sticking with the file system and putting a ton of work into making it a true competitor against other, similar file systems for Linux. Best of luck to the Btrfs developers and the Fedora team as this experiment continues to unfold. I know one thing is for certain–I will definitely be trying this out as soon as Fedora 33 drops. After spending some significant time with it on openSUSE, I really think this is a great direction and it provides the possibility of another united front for developers to place their trust in. Only time will tell!

If you would like to read the official proposal from Fedora, which has now passed, you can find it here. If you would like to check out Btrfs, you can find the official documentation for it here and the Wikipedia entry for it here. In addition, if you would like to try out Btrfs for yourself, it is recommended to use one of the openSUSE distributions, where it ships by default. The openSUSE downloads can be found here.

Back to Table of Contents

elementary OS Prepares for Highly-Anticipated “Odin” Release!

A preview of elementary OS 6. (Credit: @CassidyJames on twitter.com)

elementary OS has undoubtedly become one of the most unique and popular Linux distributions of the past few years. That is in part due to their Pantheon desktop environment and overall focus on user interface and user experience, but also due to the consistency among elementary-supported applications and the core development team’s major involvement in not only the rapidly growing elementary community, but the greater open-source software movement as a whole.

When elementary OS 5.1 “Hera” was released in December 2019, it was met with overwhelmingly positive reviews from the Linux community. Hera built upon many of the awesome features of the previous release, 5.0 “Juno”, released the year before, but included some absolutely incredible polish and added an air of increased professionalism to the distribution.

It quickly became clear that the project was no longer a “mom and pop” operation, but was aiming for the sky–and has certainly made a name for itself up there with the big boys. With initiatives like the AppCenter For Everyone on the horizon (check out Linux++ Issue 3 for more on that), the momentum that the elementary project has been riding doesn’t appear to be slowing down anytime soon.

Also, the elementary team has recently been focusing on getting their brand out into the market, as it was recently announced that Linux hardware up-and-comer Star Labs Systems would include elementary OS as a pre-installed option on their computers–the first such pre-installed device for the elementary team (read more about that in Linux++ Issue 21). In addition, the team has been playing around with releasing an ARM image for different devices that may see the light of day sooner rather than later!

So, that brings us to today. With the release of Ubuntu 20.04 LTS earlier this spring, elementary now has a new base to build their next iteration on top of–elementary OS 6 “Odin” is on the way! There are a ton of really great optimizations coming with Odin, but the thing everyone is talking about is the increased customization available with the new release, including a fully supported “dark mode” as well as the choice of picking out complementary colors for your desktop. Obviously, this release will be about much more than some design enhancements, and we’re excited to take a look at what the elementary team has in store!

To tide you over and pump you up, YouTube creator TechViper released an unofficial preview video for elementary OS 6 that is absolutely stunning–some of the best Linux marketing I’ve seen in a while. You can enjoy the video linked below as you wait for eOS 6 to come to life!

Back to Table of Contents

GTK Toolkit 3.99 Released

Adwaita Day, the default wallpaper for GNOME 3.38. (Credit: gitlab.gnome.org)

There has been a lot of anticipation for the release of GTK 4 over the past few years–partially because GTK 3 completely changed the game for the GNOME desktop environment, but also because it added plenty of new features and really became a modern, mature, and excellent graphical user interface toolkit. Now, many are wondering where the GTK developers are going to take GTK 4 for the future of application development on Linux.

As one of the most important steps towards GTK 4, the GNOME Project (stewards of the GTK toolkit) has released GTK 3.99, effectively the last stop on the road to GTK 4.0. By looking at the work that has been put into this release as well as the talks from GTK developers at the annual GUADEC conference, it’s clear that there is an emphasis on cleaning up the codebase, fixing major architectural problems, and providing more modern widget classes for developers.

Back in February, with the release of GTK 3.98, the GTK developers made a list of all the items they wanted integrated before releasing the feature-complete 3.99 version. From their official release notes:

  • Event controllers for keyboard shortcuts
  • Movable popovers
  • Row-recycling list and grid views
  • Revamped accessibility infrastructure
  • Animation API

Well, I’m happy to report that the GTK developers have successfully implemented all of these target goals for 3.99, except for the Animation API, “since it requires more extensive internal restructuring, and we can’t implement it in time“.

In addition to the features listed above, the GTK developers spent a significant amount of time working on their new scalable list infrastructure, something developers have been asking about for quite a while. Also, Christian Hergert, Red Hat employee and the mind behind the GNOME Builder IDE, has worked diligently on a new macOS GDK backend that has been merged into the project, despite a few rough spots that still need polishing. Alongside this exciting new functionality, a number of smaller regressions were fixed–from tiny aspects like spinbutton sizing to treeview cell editing–and beyond.

The GTK team also commented that this would be a great time for application developers to start looking at porting their applications to GTK 4. Of course, not every application will be able to accomplish this yet due to indirect dependencies on GTK 3 through libraries that have not been ported to GTK 4.

This is awesome and extremely exciting for a GTK user such as myself. I’ve been keeping a close eye on the development of GTK 4 and the talks at GUADEC (that are just starting to show up in video format) sound really promising overall. Of course, there will be a lot of work to do in the upcoming months as GTK 4 becomes finalized and the transition will likely take quite some time. However, it is nice to see this massive step taken. Congrats to all the GTK developers out there working really hard to improve the product and overall experience of using GTK to build applications!

There is a ton of news regarding new items in GTK 4 that can be found at the end of the GTK 3.99 release announcement here. If you would like to follow along with GTK development, you can find them on Twitter, their news blog, and their official website. In addition, Matthias Clasen’s virtual talk from GUADEC 2020 regarding porting applications from GTK 3 to GTK 4 is linked below:

Back to Table of Contents

Community Voice: Hayden Barnes

Hayden Barnes of Canonical and WSL. (Credit: linuxunplugged.com)

This week Linux++ is very excited to welcome Hayden Barnes, Developer Advocate for the Windows Subsystem for Linux (WSL) at Canonical. Hayden is an exemplary and extremely active member of the Linux community who acquired his current position at Canonical through his work on Pengwin, a Debian-based distribution optimized specifically for WSL.

Hayden is also an extremely knowledgeable source on computer operating systems and a UNIX historian of sorts who maintains the very informative Awesome-UNIX repository. As WSL continues to improve with each passing day, Hayden will likely have an ever-increasing influence on the modern computing platform–a position he deserves without a doubt. So, without further ado, I’m happy to present my interview with Hayden:

How would you describe Linux to someone who is unfamiliar with it, but interested?

“Linux is a collaboratively developed computer operating system that can be easily adapted to a wide range of uses, from general purpose desktop computing to highly-specialized tasks like servers, virtualization, embedded appliances, and development workstations. Linux offers a wide array of free high-quality services, development tools, and desktop applications readily available from distribution package archives and third parties. Linux offers an excellent learning opportunity for someone interested in science, programming, or engineering.”

What got you hooked on the Linux operating system and why do you continue to use it?

“I first heard about Linux in eighth grade in 1999. I bought Red Hat Linux at Best Buy with a giftcard from my grandfather for my birthday. Yes, back then not only did Best Buy sell boxed software, but you could buy full Linux distros there. Red Hat 5.2 didn’t work out of the box with my ATi Rage 128 video card, so the first thing I learned in Linux was how to rebuild the kernel. It came with a 200+ page book, which I devoured. What captivated me was all the different things I could do on my computer with Linux. Development tools like MSDN at the time were simply out of reach, but on Linux they were all just there.”

Red Hat Linux 5.2. (Credit: securitronlinux.com)

What do you like to use Linux for? (Gaming, Development, Casual Use, Tinkering/Testing, etc.)

“When I am using Windows, I will drop into Ubuntu on WSL to run my own automation scripts, download files, convert videos, connect to a remote device, git clone, and build interesting projects, you name it. Elsewhere, I use Ubuntu to run my personal blog and file server, to power my ROCKPro64 desktop, build the snaps I maintain in CI/CD, and occasionally to fix Windows.”

Do you have any preference towards desktop environments and distributions? Do you stick with one or try out multiple different kinds?

“In my job as a developer advocate for Ubuntu on WSL, I primarily use Windows 10 with Ubuntu on WSL. I believe in meeting users where they are. I want to experience the same issues and hassles our users do. I usually run the release undergoing active development, which is currently Groovy Gorilla and will become Ubuntu 20.10. On weekends though, I check DistroWatch for the latest new releases and go check them out–the more obscure, the better.”

Ubuntu 20.04 using WSL on Windows 10. (Credit: Joey Sneddon on omgubuntu.co.uk)

You also use other operating systems besides Linux in your daily workflow. Could you provide your view on the strengths and weaknesses of the different operating systems you use and where you think they fit into the overall picture of modern computing?

“In terms of strengths, Windows has stunning device driver support and long-term binary compatibility, Linux has raw power and high adaptability, and macOS is very pretty and required to make applications for the iPhone.

Interestingly, all three seem to be going through similar transitions right now.

All three are in the midst of updating UI toolkits and unifying experiences across desktops, laptops, and 2-in-1 devices: Microsoft is moving their OS and developers to Fluent design and working towards Windows 10X. Apple is merging the experience of iPadOS and macOS with mouse support on iPadOS and Catalyst on macOS. On Ubuntu, there are some exciting UI and UX things on the roadmap you should stay tuned for.

All three operating systems are also seeing increased reliance on containerization: Windows is now based on Windows Core OS, which is adapted for different editions of Windows, and has adopted a containerized application model. Ubuntu is now offered as Ubuntu Core, an immutable image for IoT and smart devices, and has adopted containerized applications in the form of snaps. macOS uses an immutable OS image and has a level of containerization of applications.

Linux will remain the general purpose UNIX-compatible environment for enthusiasts, developers, and enterprise. Among Linux distributions, Ubuntu continues to offer the best Linux experience. This is thanks to the work of my colleagues at Canonical, at upstream projects, with enterprise partners, by open source contributors, and, generally, the critical mass around Ubuntu.

Everyone wants a bit of Linux. Apple showed Debian running in a virtual machine on Apple Silicon at WWDC. Google has Crostini on ChromeOS. Microsoft has built Ubuntu into Windows with WSL. I think with a billion Windows 10 devices in use that Ubuntu on WSL has the biggest growth potential in this new area.

Ubuntu is the most popular distribution on WSL, alongside being the most widely used desktop distribution and the most deployed cloud virtual machine distribution.”

What is your absolute favorite aspect about being part of the Linux and open source community? Is there any particular area that you think the Linux community is lacking in and can improve upon?

“I appreciate how worldwide the Linux community is. On any given day, I talk to people in the United States, United Kingdom, France, Canada, Hong Kong, and New Zealand.

I think a very small number of people in the Linux enthusiast community are incredibly toxic. You find them repeating the same old complaints in comment sections, /r/linux, and IRC channels. They hate systemd. They hate snaps. They hate codes of conduct. Basically, they hate anything new anyone tries to do to improve Linux beyond what it was when they discovered it. They rarely actually do anything about these complaints.

But, this unfortunately results in some new Linux community members having a very warped experience of Linux and the Linux community. Instead of meeting friendly, helpful, and curious Linux users, like most are, they encounter a tiny, but loud, minority who seem weirdly to hate everything about Linux. Under those circumstances, why would anyone want to try Linux or get involved in the community?

It is important as leaders in the community that we do not amplify the toxicity. Instead, we promote the people and wins that make open source awesome. I urge you to go find Daniel van Vugt’s incredibly detailed blog posts about his GNOME performance improvements which are speeding up GNOME desktop for every Linux distro.”

What is one FOSS project that you would like to bring to the attention of the community?

“If you are developing in Docker containers on Linux, it is time to check out LXD. LXD is a daemon that drives LXC containers. It is faster, lighter, and written in Go. LXD recently added virtual machine support, so you can run Windows virtual machines side-by-side with Linux containers. You configure LXD containers with cloud-init, the same configuration you can use on public clouds, and manage them with an API. You can use distrobuilder and a bit of YAML to build images of your preferred distro. LXD is built into and powers a lot of the tools we make and use at Canonical, like MAAS, Snapcraft, and Juju.”

Canonical’s LXD container architecture. (Credit: Margaret Rouse on searchitoperations.techtarget.com)

Do you think that the Linux ecosystem is too fragmented? Is fragmentation a good or bad thing in your view?

“No.

I spend too much time reading about that, thinking about it, and running old UNIX and Linux operating systems. So, I tend to take a historical approach.

I see Linux as the de facto UNIX operating system, filling that role in our computing landscape since the collapse of the commercial UNIX market in the mid-1990s.

The GNU Project was very successful at building a UNIX-like userland that when combined with the Linux kernel, the rise of broadband, and the budding open source movement became the perfect combination to take over computing. For various reasons, the BSDs, Plan 9, and BeOS failed to hit that key critical mass at just the right time.

What persists as commercial UNIX today–macOS, HP-UX, and AIX–is expensive, proprietary, and mostly limited to very specific hardware. This is a top 5 reason, in my opinion, that Linux replaced commercial UNIX as the de facto *NIX environment.

HP-UX commercial UNIX system. (Credit: ykchee.blogspot.com)

With that in mind, the current Linux ecosystem is much less fragmented than the UNIX ecosystem that came before it. In 1990, each hardware vendor–including Apple, IBM, Commodore, HP, DEC, SGI, and Sun [Microsystems]–had its own proprietary UNIX, and third-party UNIX distributions such as Microsoft’s XENIX were available on top of that.

The Linux ecosystem also had a surge in distributions in the late 1990s through the early 2000s. Corel, Lycoris, Xandros, Turbolinux, and Mandriva all came and went. Eventually, the market and hardware vendors settled on a few core distros that have largely remained the same since the 2010s. This is more sustainable and far less fragmented than the UNIX ecosystem that Linux mostly replaced.

Today, you can buy a Dell XPS laptop and a Lenovo ThinkStation workstation each preloaded with Ubuntu and they are compatible; they are not running Dell Linux or Lenovo Linux.

What it comes down to is, from an enterprise perspective, you are going to choose from Ubuntu or Red Hat, and to a lesser extent Oracle or SUSE.

From an enthusiast perspective, you are going to pick from the Ubuntu-compatible family (Ubuntu, Ubuntu flavours, derivatives like elementary OS, or the community Debian) or the Red Hat compatible family (CentOS, RHEL, Fedora), and to a lesser extent openSUSE and the Arch-compatible family.

You might pick a derivative and then pick your own desktop environment to go on top, but what’s running underneath has more or less settled on one of these families, and it’s probably Ubuntu. Snaps go even further to reduce Linux fragmentation between Ubuntu and non-Ubuntu distros.”

What do you think the future of Linux holds?

“I think Ubuntu will win back macOS developers who may have been drawn there for the refined UI, performance, and terminal experience. Ubuntu now offers a very cohesive UI/UX, significant performance improvements, a curated application store, and all the benefits of the *NIX terminal on any hardware without Apple’s restrictions. Even if a few of those macOS developers move to Windows, we are ready to provide an Ubuntu environment for them.

In the long-term, Linux will remain the de facto *NIX environment for developers and servers.”

What do you think it will take for the Linux desktop to compete for a greater share of the desktop market space with the proprietary alternatives?

“More mainstream first-party applications. The way we get there is with a common Linux desktop application store that “just works”, regardless of the underlying distro. The Snap Store is the most developed and popular desktop application store. The work the Snap team at Canonical has done to get developers publishing into the Snap Store has been critical.

We need even more developers publishing their Linux applications as snaps to accelerate critical mass. If you know a project that you want to see in the Snap Store, but they are not there yet, you can approach a developer and ask if they would accept a PR (git pull request) for an official Snap build. It is a fun project to learn how to snap packages. Then, when you are done, you can push your snapcraft.yaml to the official upstream project. Becoming a maintainer of a snap is an easy way to get involved in your favorite projects. I maintain the official snaps for the Nim programming language.”

The Snapcraft project logo. (Credit: Kris Wouk on maketecheasier.com)

You often post snippets of UNIX history online. What interests you the most about UNIX and retrocomputing in general?

“I enjoy studying the evolution and development of complex systems, computing, and otherwise. I like thinking about how the strands came together. You can study UNIX this way from a software, hardware, economic, artistic, or even philosophical perspective.

There are audiophiles who can name their favorite band’s influences and then those musicians’ influences. They can pick up on a hook in a song and can tell you who the artist sampled it from two decades prior. A bit like that, but operating systems.

A lot of the UNIX history I post about online is related to DOS (and later NT) interoperability. My objective here is to show that phenomena like WSL, a *NIX compatibility layer on NT, are hardly new. Readers may be familiar with Cygwin, but long before that there was interoperability between Microsoft XENIX and DOS 2.0.”

How did your adventure into developing for Linux come about?

“I was primarily in the Apple ecosystem. I wanted to automate tasks on my iPad in a bash-like environment, inspired in part by Pythonista. At the time, the Shortcuts app was third-party and pretty limited. I knew it would be quite a bit of work and I thought to sustain it, I needed to sell the app in the App Store. It was unclear from the App Store terms if such an app would be accepted. I wasn’t going to invest hundreds of hours in development, which has real monetary opportunity cost, for an app that would be rejected. I reached out to Apple legal to get clarification, but never received a response.

I kept iterating over my idea of a minimal user-friendly terminal environment, tinkering with adapting MINIX. Eventually, I grew frustrated with Apple hardware as well. Apple sold laptops with keyboards with incredibly high failure rates for nearly five years and, frankly, I got tired of waiting for them to fix it.

I decided to see what all the fuss was about ThinkPads. I picked up a used T470s on eBay and loved it. It had an amazing keyboard, was built like a tank, and, unlike Apple, Lenovo offered next-day on-site service. I liked having that reliability in my primary dev machine. At the time though, my job still required me to use Windows-based applications. That is when I took a serious look at Windows Subsystem for Linux (WSL), which I had played with before.

Eventually, it occurred to me I could base a curated terminal environment on WSL and sell it through the Microsoft Store. Thus, the idea for Pengwin was born a little over two years ago now. After I started Pengwin, I was approached by Canonical to come over to Ubuntu. I felt I had gotten Pengwin to a good place and made the switch. Pengwin will always remain the community-focused self-funded artisanal distro for WSL. Ubuntu offers a different proposition–write your apps for Ubuntu on WSL and they will run on Ubuntu anywhere.”

Pengwin Linux. (Credit: @PengwinLinux on twitter.com)

Recently, some very exciting news was released at the Microsoft Build 2020 event regarding Windows Subsystem for Linux (WSL) 2. What are you most excited about concerning the future of WSL?

“I am excited about having native GUI support. More details on that are coming later this year. For now, I can say it will be based on Wayland, will support audio playback, will use components from FreeRDP, will be open source (so those FreeRDP improvements will be pushed upstream), and it will be backwards compatible with X applications.

I think opening up applications like GNOME Builder and KDevelop to Windows users will dramatically improve adoption of the GTK and Qt toolkits and mean desktop applications are developed on Linux first, then compiled back to Windows. We are already beginning to see things like Uno use WSL to provide AOT WASM compilation in Visual Studio, beating Microsoft to the feature.

GNOME’s Nautilus file manager running natively on Windows 10. (Credit: Craig on devblogs.microsoft.com)

Cross-compilation of desktop applications between Windows and Linux would be a major breakthrough for desktop Linux to address the app gap. I am particularly excited about the possibility of cross-compiling games for Linux using WSL as well.”

Due to some mistrust of Microsoft in the Linux community, are there any common misconceptions about WSL that you see floating around in the greater Linux community? What do you see as the most important benefit that Linux users will gain from something like WSL?

“A small number of people construct their entire worldview around animosity towards Microsoft and I can’t help them.

If you believe that more users running and looking at open source code makes it better code then it really doesn’t matter where it is running.

If you hold yourself out as an open source advocate, but then judge how and where people use open source, I think you might be missing a key point about open source.”

What is your favorite part about working for a company like Canonical? What has the relationship been like between your team and the WSL team at Microsoft? Having spent time with Microsoft employees, how has the company culture changed, in your opinion, from the days in which it “vilified” open source software?

“Canonical is a lot of fun because you are surrounded by incredibly bright people at the top of their respective fields.

We have an excellent relationship with the WSL team at Microsoft. We are both committed to providing the best experience for Ubuntu users on Windows.

We collaborate with Microsoft on WSL, several parts of Azure, Hyper-V, and on snaps of Microsoft applications, including Code, Azure apps, PowerShell, and Skype, and some that have yet to be announced.

Microsoft Terminal, an open-source project. (Credit: Jason Evangelho on forbes.com)

The shift at Microsoft towards open source has been a 20-year process. I get so many questions about it that I made a timeline.

Entire generations of developers have come up at Microsoft, from interns to senior developers, working on Linux and open source. You can spot Ubuntu on Surface devices at Microsoft events.”

What is the WSL community like? Do you find that a lot of Linux users are adopting WSL in order to get better results from a hybrid approach? What about Windows users?

“The WSL community grew up around getting things to run in WSL 1, like simple GUI apps, and porting distros. In that way, it was not a lot different from other new platforms. We shared hacks and other tools. Many of those hacks ended up in Pengwin.

Then WSL and the WSL community matured and it turned towards mainstream development workflows and getting tools like Docker and microk8s to run. Nowadays, open source projects like Crystal or Rust will direct users who want to install on Windows to use WSL.

We are seeing popular apps like Beekeeper Studio being built with WSL. Now that GPU compute is coming to WSL, I expect to see a surge in interest in AI/ML in the community.”

Beekeeper Studio interface. (Credit: docs.beekeeperstudio.io)

Do you have any major personal goals that you would love to achieve in the near future?

“Finish my book on Advanced WSL, coming 2021 from Apress.”

I just want to wholeheartedly thank Hayden for taking time out of his extremely busy schedule for this interview with Linux++. It is really incredible to see the work being done by his team at Canonical and the WSL team at Microsoft to produce a very high quality product. Hayden is a kind, inclusive, and enthusiastic member of the Linux community–a role model we can all strive to emulate. Thanks again, Hayden, good luck with WSL and all your future endeavors, and I can’t wait to see what your team has in store for the Linux community!

If you would like to keep up with the latest news from Hayden and his team, you can find him on Twitter or email him directly at hayden.barnes@canonical.com. In addition, if you would like to check out WSL, you can find information about the project in the official Microsoft documentation here.

Back to Table of Contents

Exploring Linux History

In the previous issue, I released History of the Unix Family (Part 2) into the world, which covered the rise of Linux and the aftermath of the Unix Wars up to the IPO of Red Hat. This time, we will focus on some of the key developments that brought Linux from the enterprise world into the realm of everyday computer users, as well as the ultimate downfall of commercial UNIX and the continued development of BSD even in the face of the growing success of Linux. This is the story of Unix growing up.

After a decade-long meteoric rise by the free and open source GNU/Linux, how did an operating system built for enterprise solutions begin falling into the hands of casual computer users–and even those with little to no technical inclination at all?

This time, the story doesn’t start with a single hero or even a small group of heroes. Instead, it details the coordinated work of many thousands of passionate and determined people working extraordinarily hard to bring Unix into the modern day via its now-popular offspring, Linux.

History of the Unix Family: Modern Day Unix

Red Hat headquarters. (Credit: spectrumcos.com)

In the shadow of the media hysteria surrounding the Y2K phenomenon (or “Millennium Bug”), the open source developers working on the Linux kernel kept chugging along. After Red Hat’s extremely successful IPO in 1999, Linux had more than proved itself as an enterprise solution that could replace legacy UNIX systems, bypassing the enormous cost of commercial licenses in return for a free and openly developed Unix-like operating system with a much more affordable support contract model.

However, many Linux enthusiasts at the time realized that Linux could be so much more than simply an operating system sitting on some server in a data center. With the leaps and bounds made in Linux development in the 1990s, many thought that Linux had the potential to take a shot at Microsoft’s extremely popular and proprietary Windows operating system, which absolutely dominated the personal computer market.

One such man, Gaël Duval, set out to turn this dream into reality. In July 1998, he took the most popular enterprise Linux distribution, Red Hat Linux, and merged it with the up-and-coming K Desktop Environment (KDE) to provide a much simpler and more intuitive graphical user interface (GUI) for those who were familiar with the popular Windows 95-style GUI. The initial release, Linux-Mandrake 5.1, was based on Red Hat Linux 5.1 and followed the versioning scheme of its parent.

Linux-Mandrake 5.1 with KDE Version 1.0. (Credit: Thom Holwerda on osnews.com)

The response to the release of Mandrake was enormous, much greater than Duval had ever expected. Duval released Mandrake on a few FTP servers and announced it on a few Linux news websites before going on a two week vacation. When he returned, he found his inbox flooded with nearly 200 emails regarding his new project. Within a few months of the initial release, Duval, along with some developers he had met through Mandrake’s success, formed their own company, MandrakeSoft (a shot at Microsoft), in order to sell CDs of Mandrake to pay for the development costs.

Besides the inclusion of KDE as the main desktop environment, Mandrake also included other tools for user-friendliness: the urpmi package manager (with its graphical frontend, RPMDrake), built on top of Red Hat’s RPM with the intent of addressing some of RPM’s limitations at the time; the MandrakeUpdate tool, which enabled updates through the user interface; and a graphical installer for the distribution. All of these features combined to make Mandrake the easiest distribution to install, use, and update among the multitude of budding Linux distributions.

In a similar vein, the now-massive Debian introduced a much higher-level package management tool on top of dpkg in 1999, called the Advanced Packaging Tool (APT), in order to compete with RPM. Though dpkg still worked with all of the individual DEB files, APT helped to manage dependencies between them as well as release tracking and version pinning. One of the greatest features implemented in APT was that it performed topological sorting of all packages passed to it, ensuring they would arrive at dpkg in a valid and efficient order.
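To make that dependency ordering concrete, here is a minimal sketch in Rust of a depth-first topological sort over a toy dependency graph. This is purely illustrative–it is not APT’s actual implementation, and the package names are invented:

    use std::collections::HashMap;

    // Toy dependency graph: package -> packages it depends on.
    // All package names here are hypothetical.
    fn install_order<'a>(deps: &HashMap<&'a str, Vec<&'a str>>) -> Vec<&'a str> {
        fn visit<'a>(
            pkg: &'a str,
            deps: &HashMap<&'a str, Vec<&'a str>>,
            done: &mut Vec<&'a str>,
        ) {
            if done.contains(&pkg) {
                return; // already scheduled
            }
            // Schedule a package's dependencies before the package itself.
            // (A real resolver must also detect dependency cycles.)
            for &dep in deps.get(pkg).into_iter().flatten() {
                visit(dep, deps, done);
            }
            done.push(pkg);
        }

        let mut done = Vec::new();
        for &pkg in deps.keys() {
            visit(pkg, deps, &mut done);
        }
        done
    }

    fn main() {
        let mut deps: HashMap<&str, Vec<&str>> = HashMap::new();
        deps.insert("editor", vec!["libgui", "libc"]);
        deps.insert("libgui", vec!["libc"]);
        deps.insert("libc", vec![]);
        // Dependencies always precede dependents, e.g. ["libc", "libgui", "editor"].
        println!("{:?}", install_order(&deps));
    }

A real package manager layers version constraints, conflict handling, and cycle detection on top of this basic idea, but the ordering guarantee–every dependency configured before the package that needs it–is exactly what APT provides to dpkg.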

In 2000, Apple Inc., after acquiring the OPENSTEP for Mach operating system along with Steve Jobs’ NeXT, released details of a new operating system kernel that they would be using in the upcoming Mac OS X 10.0 release, codenamed “Cheetah”. The XNU kernel became the hybrid kernel at the heart of this new operating system, known as Darwin. Attempting to take advantage of the best features of both the microkernel and monolithic kernel designs, XNU utilized an implementation of the Open Software Foundation’s Mach kernel (based on the original designs from Carnegie Mellon in 1985). Additionally, Apple merged the OpenStep API that NeXT had built with Sun Microsystems with their old Mac OS interface to create the Aqua user interface; the OpenStep-derived frameworks would eventually become the Cocoa API.

In order to create a Unix-like operating system, Apple engineers began using code from various BSDs, especially the FreeBSD project. This part of the operating system provides the POSIX application programming interface along with many other Unix-like functionalities. Though Apple has used quite a bit of code from FreeBSD, it is obviously heavily modified for the XNU architecture, which is incompatible with FreeBSD’s monolithic design. The release of Mac OS X was the first step that Steve Jobs planned in order to bring his old company back from the grave.

Mac OS X 10.0 “Cheetah”. (Credit: Riccardo Mori on systemfolder.wordpress.com)

In early 2001, Judd Vinet, inspired by the CRUX distribution, set out to create a minimalist version of Linux that would give the user greater control over what software was installed on their system. The project became known as Arch Linux due to Vinet’s fondness for the word’s meaning of “the principal”, as in “arch-enemy”. Vinet admired the simplicity of distributions like Slackware, PLD Linux, and CRUX, but found them unusable due to their lack of a strong package management solution that could compare to Red Hat’s RPM or Debian’s APT. In response, Vinet wrote his own package manager, pacman, to handle package installation, removal, and upgrades smoothly. Arch Linux grew slowly at first, but more and more Linux users began to appreciate the simplicity of the distribution as well as the control it offered.

Besides the idea of a minimal installation and rolling release updates, the Arch Linux developers became frustrated with the state of documentation around Linux and open-source projects in general. To change this, they created the ArchWiki, which would become one of the most comprehensive documentation projects in Linux history. The idea partially grew from the fact that Arch Linux was more difficult to install than many other distributions: it forced the user to configure everything about their system–from networking to desktop environments–which made it hard for people without extensive Linux experience to get started. The Arch Linux developers wanted a place where anyone willing to put in the time to learn could benefit from the major advantages that Arch Linux provided.

In November 2002, University of Hawaii undergraduate Warren Togami began a volunteer, community-driven project called Fedora Linux, which provided extra software bundled with Red Hat’s extremely popular Red Hat Linux distribution. The name, Fedora, was derived from the hat used in Red Hat’s “Shadowman” logo. The goal of Fedora was to provide a single repository of vetted third-party software packages in order to find and improve non-Red Hat software. Unlike the development of Red Hat Linux, Fedora was intended to be run as a collaborative project within the Red Hat community, governed by a steering council of members from all around the world.

At the same time that Fedora was launched, Red Hat Linux was discontinued in favor of the company’s new offering, Red Hat Enterprise Linux (RHEL). Red Hat took interest in the Fedora Project early on and began supporting it as a community-run distribution for desktop users. RHEL became the only Linux distribution officially supported by Red Hat, though Fedora began to spread and soon became widely used throughout the company. As Fedora matured, Red Hat used it as a sort of “testing ground” or “incubator” for updated packages before they made their way into the stable and much slower moving RHEL.

Fedora Core 1 interface. (Credit: wikipedia.org)

In 2003, the largest overhaul of the development process in the entire history of the Linux kernel would happen with version 2.6. The 2.6 series would span nearly eight years–ending in 2011–and introduced a massive number of updates, including 64-bit support, an improved scheduler for multi-core systems, increased modularity of components, and much more. As a consequence of this extremely long development period, the kernel’s release process would transform into what it is today. Due to the massive architectural and tooling changes, Linux kernel development is often split into two major periods–the releases prior to 2.6 and those from 2.6 onward.

Also in 2003, Sun Microsystems would release details about its next generation 3D user interface, known as Project Looking Glass. The environment was originally built to replace the Java environment in Solaris. Looking Glass was an attempt at a futuristic desktop environment that would be compatible with most Unix-like systems and even Microsoft Windows. However, the project never came to fruition, and the codebase was open-sourced for others to continue work on. It would go on to greatly influence the look and feel of Apple’s Mac OS X, especially the major user interface revamp that came with version 10.5 “Leopard”.

Sun Microsystem’s Project Looking Glass 3D interface. (Credit: Dr. Oliver Diedrich on heise.de)

In November 2003, SuSE (renamed from S.u.S.E.) was acquired by Novell, a company that already had its hands in the commercial UNIX ecosystem with its NetWare and UnixWare operating systems. Earlier that year, Novell had acquired Ximian, one of the major corporate contributors to the GNOME Project, and SUSE Linux’s default desktop environment was switched from KDE to GNOME. The result was that the second iteration of GNOME would become extremely successful, with engineers from Red Hat and SUSE working together on the desktop environment and application ecosystem, along with an extremely fast growing community of talented developers from all around the world.

After many years of being the main BSD implementations in use, FreeBSD and NetBSD finally received some friendly competition in July 2004 with the release of DragonFly BSD, a fork of FreeBSD 4.8 by Matthew Dillon. Dillon was an Amiga developer from the late 1980s to the early 1990s and then joined the FreeBSD team in 1994. In 2003, Dillon had major disagreements with how the FreeBSD developers proposed implementing threading and symmetric multiprocessing and thought their techniques would cause massive maintenance problems as well as a significant performance hit.

DragonFly BSD bootloader. (Credit: wikipedia.org)

So, when these disagreements came to a head, the FreeBSD team revoked Dillon’s privilege to contribute directly to the codebase, prompting him to announce his forked project, DragonFly BSD, on the FreeBSD mailing lists. Though Dillon remains friendly with the FreeBSD developers, DragonFly BSD went in a completely different direction with components like lightweight kernel threads, an in-kernel message passing system, and its own file system, known as HAMMER.

In late 2004, a new distribution would make its way into the Linux community and completely alter the desktop Linux experience forever. It began with the sale of an internet security company, Thawte, to VeriSign for an undisclosed price, making the South African founder, Mark Shuttleworth, a multi-millionaire overnight. Instead of retiring to a life of luxury, Shuttleworth decided to use his fortune to change the world.

Shuttleworth, a free and open source software enthusiast and Debian contributor, understood the importance of Linux as an open operating system, which had the potential to run the world. In April 2004, Shuttleworth brought together a group of some of the most talented Linux developers from all around the world to see if they could put together the ideal operating system based on Linux.

As Shuttleworth was a huge fan of the Debian Project, the team chose to use Debian as a stable base to build upon. The ideas that these developers came up with for transforming Debian into an easy to use and comprehend operating system included this now-famous list:

  • Predictable and frequent release cycles.
  • A strong focus on localization and accessibility.
  • A strong focus on ease of use and user-friendliness on the desktop.
  • A strong focus on Python as the single programming language through which the entire system could be built and expanded.
  • A community-driven approach that worked with existing free software projects and a method by which groups could give back as they went along, not just at the time of release.
  • A new set of tools designed around the process of building distributions that allowed developers to work within an ecosystem of different projects and that allowed users to give back in whatever way they could.

Though these might seem like the bare minimum for popular Linux distributions today, at the time it was revolutionary. Even a distribution like Debian, which was considered one of the easier distributions to install and manage, was far out of reach for people who didn’t have a technical background or interest. Shuttleworth planned to change that.

The group became known as the “Warthogs” and was given a six-month period during which they would be fully funded by Shuttleworth to come up with a working prototype. In October 2004, Shuttleworth and the Warthogs unveiled their project to the world as “Ubuntu”, a South African term that roughly translates to “humanity” in English, reflecting the best parts of human beings like compassion, love, and community.

The first version, Ubuntu 4.10, was given the name “Warty Warthog”, as the team figured it would be riddled with bugs in the first release. Even without a single piece of promotion by Shuttleworth and the developers before its release, Ubuntu shot up the list of popularity in the Linux world almost overnight due to the ease of installation and use.

Ubuntu 4.10 “Warty Warthog”. (Credit: getmyos.com)

In the wake of Ubuntu’s massive success, Shuttleworth created a company, Canonical Ltd, in order to fund the project, and those first Warthog developers became the original members of the Ubuntu core team. Though Canonical decided on the GNOME 2 desktop environment as their default interface, another community project would port KDE to the Ubuntu base. This project, started by Canonical employee Jonathan Riddell, came to be named Kubuntu. Kubuntu would serve as a flagship offering of KDE for several years.

After seeing the success garnered by community-run projects like Fedora and Ubuntu/Kubuntu, SUSE decided to open up development of their enterprise offering by introducing the SUSE Linux distribution with its 10.0 release, which would eventually be renamed and rebranded as the openSUSE Project. Similar to Fedora, openSUSE became a community-run project with an elected steering council that oversaw the project’s direction. Though supported by SUSE, openSUSE was not considered an official product of the company.

SUSE Linux 10.0, which would become openSUSE. (Credit: wikipedia.org)

One of the glaring differences between the Red Hat and SUSE approach to their desktop distributions and Canonical’s was that Red Hat and SUSE effectively cut their corporate products off from these projects, leaving direction to the community. On the other hand, though Ubuntu was also branded as a community project and took contributions from developers outside of Canonical, the company still had the absolute say over what direction the distribution took, which would eventually become a source of contention within the growing desktop Linux community.

In 2005, Sun Microsystems, the company behind the most prolific commercial UNIX system left alive in industry, Solaris, made a massive announcement. Though Solaris had been a proprietary and closed-source operating system since its replacement of SunOS in 1993, the company was beginning to see the benefits of the open-source model that Linux was utilizing and decided to release most of the codebase under the CDDL license as OpenSolaris.

After seeing the massive progress of Linux on the desktop and feeling as though the open-source operating system was finally ready for prime time, two friends in Denver, CO decided to start a company dedicated to building hardware with Linux pre-installed. At the time, almost nobody (save a few relatively unknown vendors) was shipping personal computers with Linux, so most users had to install the operating system themselves, which contributed to poor adoption by the general public. After dismissing Red Hat, Fedora, openSUSE, Yoper, and other popular distributions of the time, the company settled on Ubuntu due to its rapid growth, ease of use, and the nature of Canonical’s business model, which co-founders Carl Richell and Erik Fetzer appreciated.

System76 office with CEO Carl Richell (standing to the right). (Credit: chzbacon on linuxunplugged.com)

The company became known as System76 as an allusion to the year 1776, when the United States gained its independence from Great Britain via the American Revolutionary War. The founders of System76 hoped that they could play a part in igniting another revolution–one of free and open-source software. They dreamed of a day when people could use their devices freely without the restrictions that apply to proprietary software, hardware, and firmware. The first System76 computers to ship ran Ubuntu 5.10 “Breezy Badger”.

On October 24, 2005, Andrew Tanenbaum announced the next iteration of MINIX, version 3.0. Though massively overshadowed by the open-source operating system it had inspired, MINIX 3 was not only designed as a research and learning operating system to accompany Tanenbaum’s textbook, Operating Systems: Design and Implementation, it was also pushing to become a serious and usable system for the burgeoning embedded systems space, where high reliability is paramount. To that end, MINIX was designed to support smaller platforms like i386 and the ARM architecture.

MINIX 3.1.5 brought in a massive number of Unix utility programs, supporting a wide range of software found on most Unix systems like X11, emacs, vi, cc, gcc, perl, python, ash, bash, zsh, ftp, ssh, telnet, pine, and several hundred more. The userland of version 3.2 was replaced mostly by that of NetBSD, making support for pkgsrc possible and increasing the number of software applications available on MINIX. As the microkernel operating system gained more attention, Intel began using it internally as the software component making up the Intel Management Engine by 2015. MINIX 3 is still used in niche situations all over the world.

While Canonical continued to improve the user experience for desktop Linux enthusiasts, they found one area in particular to be nightmarish–the SysVinit init system, which was a collection of UNIX System V init programs from the 1980s that were ported to Linux by Miquel van Smoorenburg. Though there were other init systems available at the time, they hadn’t quite reached maturity on Linux and this was causing major problems for Ubuntu on a wide range of hardware.

In order to clean up the problem for Linux, Scott James Remnant, a Canonical employee at the time, began writing a new init system, called Upstart, which was released to the world on August 24, 2006. One of the main draws of Upstart was that it was backward compatible with SysVinit, meaning it could run unmodified SysVinit scripts and thus be rapidly integrated into existing systems. Ubuntu was, of course, the first Linux distribution to adopt Upstart, including it as the default init system in Ubuntu 6.10 “Edgy Eft”. However, many other popular distributions eventually followed suit, including Red Hat with RHEL 6 and Fedora, as well as SUSE’s enterprise solutions and openSUSE.

In 2006, another massive innovation would come out of Sun Microsystems. Back in 2001, Jeff Bonwick and his team at Sun had begun work on an area that hadn’t seen much creative innovation since Unix rose to maturity in the 1980s–the file system. Announced in September 2004, the Z File System (ZFS), originally named the “Zettabyte File System”, looked extremely promising and pushed the boundary for a “next generation” of file system quality. A ton of innovative and extremely useful functionality was built into ZFS, and it could theoretically address approximately 256 quadrillion zettabytes of data (far more than anyone could use at the time).
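For a sense of where a number that large comes from: ZFS is commonly described as a 128-bit file system, so a back-of-the-envelope calculation (an illustration of the order of magnitude, not Sun’s official derivation) looks like:

    \frac{2^{128}\ \text{bytes}}{2^{70}\ \text{bytes per ZiB}} = 2^{58}\ \text{ZiB} \approx 2.9 \times 10^{17}\ \text{ZiB}

That works out to a few hundred quadrillion zebibytes–the same order of magnitude as the figure quoted above, and unimaginably more storage than any real deployment has ever needed.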

A simplified picture of the ZFS architecture. (Credit: Yosu Cadilla on mauteam.org)

In time, the major BSDs would pick up support for ZFS, with FreeBSD in particular embracing it as a first-class file system. However, this would be done under the OpenZFS platform, which encouraged open-source variations on the version of ZFS that had been released with OpenSolaris. Unfortunately, due to the CDDL licensing of ZFS, the file system was considered incompatible with Linux, and attempts to integrate ZFS into the kernel were largely ignored by the Linux kernel developers.

Though ZFS was found to be incompatible with the GPL v2 license that the Linux kernel held, another next generation filesystem would soon become a reality, and this time, it was completely open-source and built specifically for Linux. At USENIX 2007, SUSE developer Chris Mason would learn about an interesting data structure proposed by IBM researcher Ohad Rodeh, called the copy-on-write B-tree. Later that year, Mason joined Oracle and began working on a new file system that would utilize this data structure at its core. Mason named his project the B-tree file system (Btrfs).
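As a rough illustration of the copy-on-write idea at the heart of Rodeh’s proposal (a toy sketch, not Btrfs’s actual on-disk B-tree), here is a small Rust example in which “writing” to a snapshot clones only the modified block, while untouched blocks stay physically shared between the old and new snapshots:

    use std::rc::Rc;

    // A toy copy-on-write "filesystem": updating never mutates shared
    // data in place; it copies only the block being written.
    #[derive(Clone)]
    struct Snapshot {
        blocks: Vec<Rc<Vec<u8>>>, // each block is shared until written
    }

    impl Snapshot {
        // Write one block, returning a new snapshot. Unmodified blocks
        // are still shared with the old snapshot (only an Rc is cloned).
        fn write_block(&self, idx: usize, data: Vec<u8>) -> Snapshot {
            let mut blocks = self.blocks.clone(); // clones pointers, not data
            blocks[idx] = Rc::new(data);          // only this block is new
            Snapshot { blocks }
        }
    }

    fn main() {
        let old = Snapshot {
            blocks: vec![Rc::new(vec![1, 2]), Rc::new(vec![3, 4])],
        };
        let new = old.write_block(0, vec![9, 9]);
        // Block 1 is physically shared between both snapshots.
        assert!(Rc::ptr_eq(&old.blocks[1], &new.blocks[1]));
        assert_eq!(*new.blocks[0], vec![9, 9]); // new data visible here...
        assert_eq!(*old.blocks[0], vec![1, 2]); // ...old snapshot untouched
        println!("cheap snapshots via structural sharing");
    }

This structural sharing is what makes snapshots on copy-on-write file systems like Btrfs (and ZFS) nearly free to create: a snapshot is just a new set of references to existing blocks, and disk space is only consumed as the two versions diverge.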

At the time, the ext4 filesystem had come to dominate the Linux landscape; however, even ext4’s principal developer, Theodore Ts’o, stated that his own file system offered little in the way of innovation and that Btrfs would be a much more interesting direction to explore. Btrfs included many of the popular functionalities of ZFS, such as system snapshots, data deduplication, and only saving diffs between snapshots to save space, as well as some functionalities that weren’t present in ZFS. The growth of Btrfs was slow at first, but today it is used by major companies like Facebook and SUSE. There is even an initiative by the HPC community to use Btrfs as a backend for Lustre, the massively parallel file system that is the de facto standard on supercomputers today–an honor long reserved for ext4 and ZFS.

Though RHEL was available only through a paid subscription model, the project’s source code was available to the public. Because of this, a few RHEL clones would pop up to give the same capabilities as RHEL without the need for payment; however, they did not include the official support that Red Hat provided. Two of the major competing RHEL clones were Gregory Kurtzer’s CAOS Linux and David Parsley’s Tao Linux.

In 2006, the two distributions decided to come together to work on the common goal of providing a free version of RHEL and would eventually rebrand the distribution as The CentOS Project. By 2009, CentOS had gained enough support from the developer community that it overtook Debian as the most popular distribution for web servers, though the title wouldn’t last (Debian regained it in January 2014). CentOS became an extremely popular distribution for developers because it allowed them to use an operating system with all the perks of RHEL without paying for support contracts, especially for smaller projects like web hosting that didn’t require much vendor support. The CentOS community stepped up and began helping those who chose the distribution through its own mailing lists, web forums, and chat rooms.

Like its parent distribution, Debian, Ubuntu started to become a major source and stable base for other software developers to build their own distributions on top of. One of the first Ubuntu derivatives was created in August 2006 by Clément Lefebvre (known as “Clem”) and named Linux Mint. The developers who started Mint weren’t too keen on some of the decisions Canonical made and were also afraid that, if the company went under, the advancements made in Ubuntu would be lost along with it. Though the very first beta-quality release, codenamed “Ada”, was based on Kubuntu, every release of Mint from version 2.0 on would use Ubuntu as its official base.

Linux Mint 2.0 “Barbara”. (Credit: clem on segfault.linuxmint.com)

Part of the goal of Linux Mint was to create a better version of Ubuntu than Canonical’s. Because they were not bogged down in a lot of the lower-level details, the Mint team was able to build and package a number of extremely helpful tools designed by its own developers. These tools were built with the singular goal of making Linux as easy to use as Microsoft’s Windows, even for users who were not technically inclined–a trend that would only grow with the rise of many Ubuntu-based distributions throughout the next 15 years.

However, not everyone was looking to build off of Ubuntu, Debian, Red Hat, or Arch Linux at the time. Instead, some developers had other ideas for building their own distributions from the ground up. In 2008, a former NetBSD developer, Juan Romero Pardines, was looking for a system to test his new X Binary Package System (XBPS). He decided to experiment with a Linux distribution for this purpose, as well as to test other software from the BSD world for compatibility with Linux.

An example of Void Linux using the dwm window manager. (Credit: u/bonzis on reddit.com)

His creation, dubbed Void Linux, has since deviated from the norm in many respects, including using a smaller init system, runit, in place of SysVinit (and later, systemd), implementing LibreSSL in place of OpenSSL by default, and building install media that offers a choice of C standard library implementation between the GNU Project’s popular glibc and the newer, lesser-known musl libc. Though Void Linux started out slowly (in a similar vein to Arch Linux), it has grown impressively, especially in the last few years, in part due to its deviation from the norm as well as its adoption of the rolling release model.

By 2008, the X Window System (X11) was starting to show its age as Linux moved into the modern era. While traveling through Wayland, Massachusetts, Kristian Høgsberg, an X.Org developer and Red Hat employee, crystallized his vision of what a modern display server should look like and began working on the Wayland display server protocol in his spare time at Red Hat. It would take some time before Wayland would become a major open-source project.

However, unlike the X.Org server, Wayland is simply a display protocol that does not include a compositor. This meant that desktop environments would have to implement Wayland support themselves. The Wayland developers released an experimental compositor of their own, called Weston, but it was more of a proof of concept and a reference for others attempting to build Wayland compositors. For that reason, Weston is still not recommended for use on production systems.

A few years later, Canonical would announce their own display server, named Mir, for their Unity8 desktop environment. The projects diverged significantly over the years, and eventually Wayland was chosen by the vast majority of Linux ecosystem developers as the de facto display server protocol to replace X11. However, with all the work put into Mir, Canonical didn’t want to scrap the project for no reason. Instead, Mir was reimplemented as a Wayland compositor and is still in active development today.

At the same time that Wayland was moving from idea to implementation, another major event would push Linux into another breakthrough arena that would come to dominate human life–the smartphone. Android Inc. was founded in the heart of Silicon Valley in 2003 with the idea of creating a smarter operating system that could be “more aware of its owner’s location and preferences”. Early on, the startup focused on digital cameras before realizing that a significant market wasn’t there. A year later, they had pivoted to a different idea–one that was being explored by other major tech companies like Microsoft and Apple.

iOS 1.0 on the first iPhone. (Credit: John Voorhees on macstories.net)

In 2005, Android was acquired by Google, which brought a lot of attention to the little-known company. Their first prototype of a mobile device resembled a BlackBerry phone, with an operating system built upon the Linux kernel. However, when Apple released the iPhone in 2007, it was clear that Android would once again have to pivot if they wanted to compete with iPhone OS (now iOS), Apple’s mobile operating system built on top of their BSD-flavored Darwin core (the same foundation as Mac OS X at the time).

After a year of extremely hard work, the Android developers released their mobile operating system of the same name to the public on September 23, 2008. The project became an immediate success and was able to penetrate the mobile market due to its much lower cost than Apple’s devices as well as a much more configurable interface. By 2013, Android was the most popular mobile operating system in the world, and that growth wouldn’t stop there. As of today, Android is utilized by over 2 billion people worldwide, making it the most popular operating system in existence.

Android 1.0, the first commercial release. (Credit: C. Scott Brown on androidauthority.com)

In 2010, one of the largest deals in American technology history took place with the acquisition of Sun Microsystems by Oracle Corporation. Oracle, a company known for its proprietary database products, wasn’t fond of open-source software, as it did not see the economic incentive behind it. So, as Oracle took ownership of Sun’s IP, it began shutting the doors on the major open-source projects that Sun had backed, like OpenSolaris and ZFS, prompting many developers to carry on from the last available open versions of the operating system and its next generation file system.

This move resulted in the founding of the illumos project and, subsequently, the non-profit Illumos Foundation. Announced in August 2010 by some core Solaris engineers, the illumos project’s mission was to pick up where OpenSolaris had left off by providing open-source implementations of the remaining closed-source code that shipped with OpenSolaris. The illumos developers quickly moved to fill in the missing pieces with parts of the GNU codebase, replacing the closed libc with glibc and the OpenSolaris compiler collection, Studio, with GCC.

The most well known free and open source variant of OpenSolaris was developed by the Illumos Foundation and released in September 2010. It was named OpenIndiana as a nod to Project Indiana, the initiative at Sun Microsystems to construct a binary distribution around the OpenSolaris codebase. Project Indiana was originally led by Ian Murdock, the founder of the Debian Linux distribution.

In 2010, it seemed that Linux was on top of the world–absolutely dominating the mobile space, with Ubuntu becoming about as close to “mainstream” as desktop Linux had ever been, and all sorts of new innovation happening around the operating system and the extended Unix-like ecosystem it depended on. However, a massive rift was about to be felt throughout the entire desktop Linux enthusiast community, one that nobody could have foreseen.

In 2010, the GNOME developers announced the next iteration of their extremely popular desktop environment. The existing GNOME 2 desktop was so universally accepted as the default desktop environment at the time that Red Hat, SUSE, and Canonical were all united behind its development. However, when the design plans and beta tests of GNOME 3 started making the rounds, people lost their minds.

Though GNOME 2 followed the traditional desktop paradigm laid out by Windows 95 fifteen years prior, GNOME 3 was a complete redesign from top to bottom. Upon boot, a single bar at the top of the screen showed an “Activities” button on the left, date and time in the center, and a single drop down menu on the right that included all information usually found in a system tray. The rest of the screen was filled up by a wallpaper that included no folders or icons on it. Many people were upset as the GNOME developers stripped out most of the customization that was available in GNOME 2.

The minimalist GNOME 3.0 desktop. (Credit: Ryan Paul on arstechnica.com)

Besides being a drastic stylistic departure, the first iterations of GNOME 3 were extremely buggy, slow, and resource-heavy. Many long-time Linux users, including Linus himself, moved to KDE or other desktop environments instead. Canonical, after disagreements with the GNOME developers, decided to ditch the interface they had used for the past six years and began working on their own Unity shell for Ubuntu’s future. Unity was built on top of GNOME technologies and wasn’t exactly perfect in its trial runs either.

However, unlike GNOME 3, Unity made progress and became quite an enjoyable interface by the time Ubuntu 12.04 came around. GNOME 3 still struggled with the problems it had and, with many users and distributions leaving their ecosystem, it became more difficult to pull in talented developers.

In fact, a group of GNOME 2 fans and developers gathered together under the leadership of an Argentinian developer in August 2011 to fork their favorite desktop environment so that it could live on past its end of life. The new fork of GNOME 2 became known as MATE, a direct reference to the South American plant yerba mate, which is used to brew an extremely popular drink. However, instead of going into maintenance mode, the MATE developers began porting the desktop environment to the newer GTK 3 toolkit so that it could continue to support the latest applications.

Besides the rise of MATE, another developer community was very unhappy with the direction GNOME was headed. Linux Mint began actively developing a set of GNOME Shell extensions that made the desktop look and work in a more traditional manner. Eventually, those extensions blossomed into a completely separate GTK 3-based desktop environment that the Mint team named Cinnamon. Both MATE and Cinnamon have gone on to become full-fledged and popular desktop environments in the greater Linux community (and beyond in some cases).

Linux Mint 20 with the Cinnamon desktop. (Credit: linuxmint.com)

Even with the massive fragmentation of GTK-based desktops, 2010 wasn’t finished splintering the Linux community. Though many had adopted Upstart as the init system of choice, two Red Hat employees had a different idea. Lennart Poettering and Kay Sievers began work on a project that would attempt to address the woes of Upstart and create a much more reliable and configurable init system, which they named systemd.

Systemd arrived on the scene and was controversial from the very beginning. Even so, it very quickly began replacing the default init system in many of the most popular distributions, including Fedora, openSUSE, Debian, Arch Linux, and Red Hat’s RHEL with version 7. The last major holdout to accept systemd as the default init system was Ubuntu, which continued to use its in-house Upstart until 2015.

systemd boot screen. (Credit: commons.wikimedia.org)

One of the biggest criticisms of systemd was that it didn’t follow the UNIX philosophy–that each piece of software should do one thing and do it well. Instead, systemd took a more monolithic approach, which found it being included in areas not directly related to system init. Even today, many argue that systemd should be split into several different services that work together on the project’s different goals. However, it can’t be denied that systemd is one of the best init systems Linux has ever enjoyed, and it has contributed to making Linux much more stable and dependable since its widespread adoption.
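For a flavor of what systemd’s declarative configuration looks like, here is a minimal service unit. The directives shown are standard systemd syntax, but the service name and binary path are made up for illustration:

    # /etc/systemd/system/hello.service – a hypothetical example unit
    [Unit]
    Description=Toy hello daemon
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/hello-daemon
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

A single systemctl enable --now hello.service then starts the service and enables it at boot. Dependency ordering and automatic restarts are declared in a few lines rather than hand-coded in the shell scripts SysVinit relied on, which goes a long way toward explaining systemd’s rapid adoption despite the controversy.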

By 2011, Arch Linux had really risen in popularity and become a favorite of Linux power users due to its flexibility, simplicity of design, minimal bloat, and bleeding edge software availability. However, unlike Ubuntu and its derivatives, Arch Linux was not a friendly system for new users. To address this, Philip Müller and others in the Arch community imagined a distribution that would include all of the benefits of Arch Linux but be as user friendly as distributions like Ubuntu. They came up with a solution that they called Manjaro.

At the time, a distribution called Antergos was gaining in popularity due to its use of the Calamares installer to provide a nice graphical installation tool for newer users. Pure Arch Linux installs were (and still are) done entirely through the command line, and the addition of a graphical installer was a gigantic step toward bringing Arch to the masses.

However, Antergos did little to modify the default installed Arch system. The developers had a tiny repository of their own packages and tools, but their goal was to stay as close to “vanilla” Arch as possible. Conversely, Manjaro’s goal was to morph Arch Linux to fit their own creative goals for what a great out-of-the-box operating system should look like, and in the end, gain its own identity that was completely different from Arch.

Manjaro 19.0 with flagship Xfce desktop environment. (Credit: forum.manjaro.org)

This inevitably meant a much larger Manjaro-specific repository filled with branding, user tools, and even applications for their users. The Manjaro team built their own graphical frontend to pacman, called Pamac, and also held back packages from being shipped out as soon as they hit the Arch repositories. Instead, the team tried to provide as much quality assurance as possible on their supported packages, which meant that users didn’t get software as quickly as they would on pure Arch, but it also provided a sense of stability. The result? Manjaro began to bring in users who wouldn’t normally have touched Arch, and it began skyrocketing in popularity along with Antergos.

Even though Linux was free and open source, it still lacked adoption in one area–the personal computer desktop. In nearly every other area of computing, Linux had become the dominant operating system, but due to the power that Microsoft held over the personal computer market, Linux was still unable to make much of a dent in desktop adoption. In 2011, the first Linux distribution to enjoy mainstream success as a desktop operating system would come out of the tech megalith, Google.

Announced in July 2009, Chrome OS would become Google’s challenge to Microsoft and Apple. Due to the rise of users relying on cloud-based products, Google realized that many people simply didn’t need the power and capabilities that came with a modern laptop or desktop computer. Instead, many people had begun using web applications in place of desktop applications and, therefore, only needed a device that would allow them to quickly and efficiently surf the web. To accommodate this large portion of the population, Google began their Chromebook project, which paired the Chrome browser with a stripped-down Linux-based operating system derived from the source-based Gentoo distribution.

ChromeOS on an HP Chromebook. (Credit: JR Raphael on theverge.com)

Because Google didn’t need massively powerful machines to run their operating system, the Chromebook became an enticing bargain for the large portion of the population that had no interest in computers beyond their use as a basic appliance. Chromebooks took off and became the de facto choice in many educational institutions because of their low price and the ability to access essential school items through Google’s G Suite applications. Chrome OS was optimized for the browser and eventually even gained support for Android applications.

Throughout the entire process, Google never mentioned that Chrome OS used the Linux kernel, likely due to the negative connotations the word Linux carried for non-technical users. To this day, Google’s Android and Chrome OS are the most popular Linux-based operating systems on the planet (and, in the case of Android, the most popular operating system, period).

Though the idea of a single board computer utilizing RISC-based instruction sets dates back to 2006, the first product to gain mainstream appeal was the Raspberry Pi, first offered in February 2012. The name Raspberry Pi was chosen as a nod to the tradition of naming early computer companies after fruit (think Apple) and a reference to the Python programming language, which would become the main language used to program their devices. The first iterations of the Raspberry Pi, developed under the UK-based Raspberry Pi Foundation, were so successful that the foundation was able to greatly expand the goals and capabilities of its innovative products.

Raspberry Pi 4, the latest iteration. (Credit: amazon.com)

Today, the Raspberry Pi name has become popular even outside of computing enthusiast circles. The possibilities of using a single board computer for a variety of digital automation projects appear to be limitless, and Raspberry Pis are common–partly due to their low price and reliability–in many different industries, including robotics, sensors, home automation, home servers, and even education, as a way of introducing modern computing and programming to younger generations.

By 2013, Android and iOS had risen as the only true competitors in the mobile operating system marketplace. However, another company looked to join the fray. Since its release, Canonical’s Ubuntu had gone from taking over the desktop Linux market to becoming the go-to Linux distribution for the infrastructure of cloud computing. To bring Ubuntu to the mobile industry, Mark Shuttleworth announced a crowdfunding campaign for their own mobile phone, the Ubuntu Edge, which would run Ubuntu Touch, the next iteration of their Unity interface built specially for the touch screens of mobile devices.

The design of the Ubuntu Edge phone with Ubuntu Touch. (Credit: Cassidy James on theverge.com)

The goal of the project–convergence. The idea was to build an ecosystem of devices, similar to Apple’s strategy; however, Canonical would focus on a single operating system that could be used across all of those devices. The implications of a move like this were huge, especially considering that Ubuntu was a free and open-source operating system, in contrast to the Android and iOS platforms. Though the idea was a massive one that many people wished to see come to life, Canonical did not have the manpower to bring it into reality. The Ubuntu Edge phone was eventually dropped, along with, later, the Unity8 interface.

By early 2014, CentOS had grown into a major distribution, favored by those hosting small applications and especially by system administrators who didn’t need the full weight of Red Hat’s support. However, Red Hat saw the potential for using CentOS to bring more widespread adoption to the Red Hat brand. In January 2014, Red Hat announced official support for CentOS in order to use it as a platform where open-source developers could get all the benefits of RHEL without the cost. Consequently, CentOS transferred its trademarks to Red Hat, and the entire CentOS team was hired by the open-source company to continue its work.

CentOS with the GNOME 3 desktop. (Credit: wikipedia.org)

CentOS provided a very unique opportunity to Red Hat. With RHEL available to customers who needed large scale support and Fedora available for desktop users of Red Hat’s technologies, CentOS became the introductory platform that could be used for teaching classes targeted at the Red Hat certification exams as well as a solution that could compete with Ubuntu and Debian on the servers. CentOS is still one of the most popular distributions for system administrators around the world.

In January 2015, the experimental Plan 9 operating system from Bell Labs would see its final official release as the Fourth Edition. However, as the research lab moved on to other projects, a small but dedicated community picked up where the project left off, thanks to its open source code. The largest of these efforts centered around the 9front project, which brought Wi-Fi drivers, audio drivers, USB support, and many other features to the research operating system.

Also in 2015, the community-run openSUSE project would take a different route from its previous iterations. Instead of a single system that followed the enterprise releases, openSUSE was split into a static release version, called Leap, and a rolling release version, called Tumbleweed. Tumbleweed offered a fully rolling release that was nonetheless heavily tested before updates shipped, so while it moved much faster than Leap, it wasn’t as bleeding edge as something like Arch Linux.

It was apparent that by 2015, the rolling release model was getting a much closer look from the Linux community. Previously, many rolling release distributions had been considered extremely unstable and prone to break at any moment under the rapid stream of system and application upgrades. However, because rolling release distributions used the latest stable Linux kernel, many users with brand new hardware found much better support in the newer kernels, and many migrated to Arch Linux, openSUSE Tumbleweed, Manjaro, Antergos, and others.

To tame the wild beast that rolling release conjured in the heads of users, a brand new Linux distribution called Solus was released in December 2015. Solus took a bit of a different approach in that all system upgrades were held back and released together every Friday, creating a semi-rolling release model. The Solus developers believed this made the system much more usable for newer users who wanted a stable, consistent, and visually appealing Linux-based operating system. Solus took the view that every part of the operating system should be manageable through graphical user interfaces, instead of forcing users into the terminal, where many coming from Windows and macOS might feel uncomfortable.

Though originally branded as Evolve OS in 2013, distribution founder Ikey Doherty wanted to build a Linux distribution from the ground up without any of the baggage carried by most of the modern base distributions like Debian (and Ubuntu), Arch Linux, and Red Hat. Ikey’s idea was to create a distribution that would learn from the past problems of other distributions and mitigate them to offer the best desktop Linux product available. In addition to Solus’ debut, the developers also created their own GTK-based desktop environment, called Budgie, which was built from the ground up instead of using GNOME as a base the way Unity, Cinnamon, and MATE did. Budgie would gain considerable attention from the Linux community and is used quite extensively outside of the Solus project today.

Solus 4.1 with the Budgie desktop. (Credit: getsol.us)

Solus also brought a new package manager, eopkg, which used innovative techniques like delta updates to improve upon the most popular solutions in the Linux space, such as Debian’s APT, Red Hat’s RPM and DNF, Arch Linux’s pacman, and SUSE’s Zypper. Many people praised eopkg for its ease of use as well as the speed of updates it provided, though it wouldn’t find much adoption outside of Solus. In addition, eopkg brought with it some undesirable behavior, which caused the Solus developers to look into creating a new package management tool from the ground up that would address eopkg’s warts.

2015 also brought a new hardware company into the Linux and open-source ecosystem–PINE64. Though originally started as a company with an ARM-based single-board computer in direct competition with the Raspberry Pi, PINE64 has since explored a vast array of different device types with ARM processors including smartphones, tablets, laptops, and smartwatches. The company is known for being extremely interactive with the community as well as providing low-cost products compared to the majority of the market.

However, the transparency that the company provides to its community is probably the biggest factor that brought PINE64 from obscurity to a household brand in the Linux enthusiast world. Instead of pretending that their devices are production ready, they make sure to notify people about the capabilities of their hardware prior to allowing orders. This is because the ARM architecture space for devices other than single board computers is largely unexplored, and the company has had to put in a lot of work, both on its own and with other software projects, to bring its products to market.

PINE64 now supports a large number of open-source mobile operating systems for their PinePhone product including Ubuntu Touch, KDE’s Plasma Mobile, postmarketOS, LuneOS, Mobian, SailfishOS, PureOS, Nemo Mobile, Maemo Leste, as well as ports of popular desktop operating systems like Debian, Fedora, openSUSE, Manjaro ARM, Arch Linux ARM, KDE neon, and NixOS.

PINE64’s UBports edition PinePhone. (Credit: Luckasz Erecinski on pine64.org)

A surprise announcement in April 2017 would turn the Ubuntu community on its head. Canonical founder Mark Shuttleworth announced that, starting with Ubuntu 18.04 “Bionic Beaver”, Ubuntu would drop its development of the Unity8 desktop environment in favor of returning to their roots with the GNOME 3 desktop. Shuttleworth cited the failure of Ubuntu Touch to take off as well as the drastic improvements made by the GNOME team to make GNOME 3 a very usable and capable desktop environment.

Though many were upset with the dropping of Unity, it was definitely exciting for users that all three major Linux companies–Red Hat, SUSE, and now Canonical–would be reunited under a single desktop environment, just as they had been with GNOME 2 a decade before. The concentrated effort from all three would prove to be an incredible shot in the arm for GNOME development, and with Ubuntu rejoining, GNOME returned to its prior title as the most popular desktop environment in the Linux ecosystem.

Ubuntu as it appears today, with GNOME 3. (Credit: Scott Gilbertson on arstechnica.com)

In the wake of this announcement, a new community sprang up. UBports was formed by a group of developers who wanted to continue working on Unity8 and Ubuntu Touch, picking up where Canonical had left off. Luckily, Canonical made it simple for UBports by open-sourcing all the work they had done and providing support wherever possible, so that the dream of a Linux-based smartphone (that isn’t Android or Google controlled) might one day become a reality.

In August 2018, Oracle would release a major update to their Solaris operating system, one of the last widely used UNIX System V operating systems in the world. Solaris 11.4 came with an upgrade to the GNOME 3 desktop environment by default as well as many improvements to the operating system’s supported features, though it kept its major emphasis on the 64-bit SPARC and x86_64 architectures. Unlike with Linux, little innovation remained for the old UNIX systems, as companies not locked in by backwards compatibility chose Linux over the remaining commercial UNIXes–Oracle’s Solaris, IBM’s AIX, and HPE’s HP-UX. The decline of commercial Unix at the hands of its open-source sibling, Linux, was undeniable.

On July 9, 2019, one of the largest tech acquisitions in history was finalized as IBM purchased Red Hat Inc. for a pretty sum of 34 billion USD. The central idea behind the acquisition was to help IBM become more open-source oriented as well as to bring unprecedented integration of both companies’ valuable technologies to the enterprise marketplace. Though Linux had more than proved its technological ability in almost every enterprise environment, the acquisition was one last reaffirmation that the Linux operating system and the open-source methodology had ultimately won the battle against proprietary software.

Today, Unix-based operating systems are everywhere. From the tiniest embedded devices to the largest and most powerful supercomputers in the world, Unix-based operating systems drive them all. From cloud computing and container technology to the most complex scientific research on the planet, the Unix-derived Linux is the operating system of choice.

Though the idea and philosophy of Unix was crafted in a small corner of Bell Labs by a tiny group of people, our world today would simply not run without those groundbreaking technologies built on top of it along the way. The Linux kernel has grown to become the largest single open-source project on the planet with hundreds of companies involved, thousands of contributors from every corner of the globe, millions of lines of code, and billions of users driving the future of technology every single day. Quite simply, the world runs on Unix, and it doesn’t appear to be slowing down any time soon.


A ton of research went into compiling this presentation on the history of Unix (and Linux) and, of course, many minor (and even some major) details were inevitably left out, as this could truly be the subject of an entire book series. If you feel that I have misinterpreted any of this information, please feel free to reach out to me with your corrections and sources so that I can update it and make sure that the information is as accurate as humanly possible for future readers!

I hope you enjoyed this piece, History of the Unix Family: Modern Day Unix, and I look forward to releasing the next installment in the series into the world soon!

If you would like to learn more about the very early history of free software and Linux, check out the 2001 documentary entitled Revolution OS:

Linux Desktop Setup of the Week

This week’s selection was presented by u/DrCracket in the post titled [awesome] Streets of Gruvbox. Here is the screenshot that they posted:

Desktop of the Week: Awesome WM. (Credit: u/DrCracket on reddit.com)

And here are the system details:

OS: Arch Linux
WM: awesome
Shell: bash
Terminal: Alacritty
Theme: Gruvbox
Wallpaper: Gruvboxified Streets of Rage

Thanks, u/DrCracket, for an awesome, unique, and well-themed Awesome desktop!

If you would like to browse, discover, and comment on some interesting, unique, and just plain awesome Linux desktop customization, check out r/unixporn on Reddit!

Back to Table of Contents

See You Next Week!

I hope you enjoyed reading about the on-goings of the Linux community this week. Feel free to start up a lengthy discussion, give me some feedback on what you like about Linux++ and what doesn’t work so well, or just say hello in the comments below.

In addition, you can follow the Linux++ account on Twitter at @linux_plus_plus, join us on Telegram here, or send email to linuxplusplus@protonmail.com if you have any news or feedback that you would like to share with me.

Thanks so much for reading, have a wonderful week, and long live GNU/Linux!


Comments:

  1. That is a gigantic and awesome article. Some information was even new to myself and I already thought I knew the history of Linux and Unix.
    Though one thing I am not sure about in the article is about the Mate project. I thought it was originally forked from Gnome 2 by an Argentinian Arch Linux user (therefore the reference to that popular Argentinian or Brazilian beverage) and not Karapetsas himself. Although he is indeed one of the founders, main devs and maintainers of the Mate desktop in Debian I thought he is from Italy.

    Anyway great content.

    Btrfs is also very interesting as now it gains more coverage because of Fedora’s decision and funny how I am personally doing a trial with openSUSE using Btrfs as file system. 🙂

  2. Hey @vinylninja, thanks for that. I believe you might be right about MATE. I will change it, appreciate the comment!

    Yes, I also just threw Fedora Rawhide in a VM to check out how Btrfs is working. So far, so good!

Join the discussion at forum.tuxdigital.com
