Tangled in the Threads
Jon Udell, May 24, 2000

QNX, Linux, and Windows
Wide-ranging discussion of OS niches and evolutionary strategies
The small and elegant QNX operating system keeps coming up in newsgroup discussions. Last week, Andrew Stuart kicked off another QNX thread:
Strikes me that the QNX realtime platform, now that it has been released as free for noncommercial use, is a terrific alternative to Linux.
It uses the same APIs, is smaller, more scalable, more flexible and customisable and generally more exciting, inspiring some of the old excitement that I used to feel back in '79.
In a similar vein, Jonathan Brickman observed some months ago:
Tried the QNX Web-demo disk, modem version, yesterday. I am very impressed.
Not only does it work, not only did it detect my COM3/IRQ5 modem perfectly, not only did it give me excellent 800x600x256 that looked suspiciously like 800x600x64K, not only did it connect flawlessly to my PAP ISP, but its Web browser ran about twice as fast as IE on the same hardware, and never hesitated on any of the sites I tried, including the toughest ones I know of (Microsoft, news.com.au, MSNBC).
What is QNX? It's a 32-bit, Unix-like real-time OS. The demo to which Jonathan refers shows that sophisticated, modern, and powerful software can be delivered in a very small package. Specifically, the QNX demo includes:
- The QNX OS microkernel (just 45KB). It's highly modular, enabling QNX to bundle just the services needed for this demo: TCP/IP, plus modem, graphics, and network-card detection logic and drivers.
- The Photon windowing system. Powerful enough to support a full-featured HTML 3.2 browser, but way smaller (also 45KB) than X or comparable services in Windows or MacOS.
- Oh, and just for fun, QNX tossed in its embedded Web server.
Full details about how this was done are available on the QNX site. I've followed QNX on and off for a number of years, and always been hugely impressed by the quality of the company's technology, and the breadth of its vision. A 1999 BYTE story on the Photon windowing system describes how Photon treats displays as pluggable pipeline components, enabling not just 2-way but n-way collaboration.
In a follow-on message, Andrew ticks off more reasons why he thinks QNX has the right stuff:
- Dynamic process execution across multiple CPUs
- Transparent network multiprocessing (dynamic process execution across multiple networked computers)
- The FLEET network protocol allows high-performance, load-balanced, redundant network communication between QNX boxes; an advanced message-passing model for IPC
- No kernel recompilation ever required; all device drivers can be loaded and unloaded on the fly; device drivers are not practically distinguishable from user processes
- Does not have the scalability problems demonstrated to be present in the Linux kernel during the Mindcraft tests
- Tiny memory footprint
- Full hardware memory protection
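The message-passing model Andrew mentions is a synchronous rendezvous: a client blocks in its send until a server has received the message and replied. Here's a toy sketch of that send/receive/reply discipline in Python, using threads and queues; the names are illustrative stand-ins, not the real QNX C API.

```python
import queue
import threading

class Channel:
    """Toy model of a QNX-style message channel: clients block in send()
    until the server has received the message and sent a reply."""
    def __init__(self):
        self._inbox = queue.Queue()

    def send(self, msg):
        # The client is first "send-blocked", then "reply-blocked":
        # it cannot proceed until the server replies.
        reply_box = queue.Queue(maxsize=1)
        self._inbox.put((msg, reply_box))
        return reply_box.get()          # blocks until reply()

    def receive(self):
        # The server blocks until a client message arrives.
        return self._inbox.get()

    def reply(self, reply_box, answer):
        # Unblocks the waiting client.
        reply_box.put(answer)

def server(chan):
    msg, reply_box = chan.receive()
    chan.reply(reply_box, msg.upper())  # trivial "service": uppercase

chan = Channel()
threading.Thread(target=server, args=(chan,), daemon=True).start()
result = chan.send("ping")
print(result)  # PING
```

The point of the synchronous rendezvous is that message passing doubles as synchronization: no buffering policy, no polling, and a server crash simply unblocks nothing, leaving the rest of the system intact.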
Adds Daryl Low:
Remember, every component (except the kernel) is replaceable. This means that more advanced file system, network stacks and server apps can be swapped in place of the existing embedded ones. Just like "Linux" is really GNU with the "Linux" kernel, QNX can be made into GNU with the QNX kernel.
I think low-level OS developers will especially like QNX because they can run/debug their file system/network stack/drivers without having to worry about crashing or rebuilding the kernel. That alone should save many nights of frustration.
What's not to like? Clearly QNX has major geek appeal, and it sure is tempting to cite it as an example of what Linux (or Windows) ought to be. But as several correspondents pointed out, the contrast isn't as stark as Andrew makes it out to be. Bjørn responded:
How does [QNX's scalability] differ from the various clustering options available under Linux, for instance using Scali technology? You see, I do work on large, distributed systems and I do use enough "distributed CPU" to heat your house so the issue really interests me. I'd like to know what QNX can offer that, for instance, Scali and their solutions can't give me on Linux.
And why do people keep overlooking the fact that Linux can be made to work almost as dynamically as QNX? There are a few things that need to be statically linked into the kernel, but other than that you can load most of the drivers at runtime.
What Linux may lack is quick message-passing and a mode of operation where guarantees with regard to latency and maximum execution time are given, but those are mainly things you would use a realtime OS for -- like QNX. Not UNIX.
The two systems are, he concludes, "apples and oranges."
One size doesn't fit all
It's interesting how the discussion touches on very different issues that matter for very different applications. For an enterprise-class service you'd want ultimate scalability. For a desktop workstation you'd like a small memory footprint and a light, fast, flexible GUI. For an embedded application you'd like the tiniest kernel and most modular architecture imaginable.
As Bjørn points out, no system can be optimized for all these niches. But Andrew argues that to be able to evolve into any of them, a system requires certain fundamental properties.
If you think that many hands over a great length of time will cure all the ills of an operating system, then I would have to disagree. An operating system, like any building in the real world, must be designed at least upon foundations that will support future building and extension.
Evolutionary improvement and extension over time cannot necessarily fix problems that come up at the foundation level, because so many parts of the structure now rely on those foundations.
That sounds reasonable. Yet I can't help but notice that Windows, nominally a desktop OS and still admittedly constrained by that heritage, is making headway on two fronts, pushing up into the enterprise realm with Windows 2000, and down into the handheld space with Windows CE. It wasn't designed for these purposes, but it is being made -- over time -- to serve them increasingly well.
Alan Shutko points out that the same holds true for Linux:
Many hands over much time can easily replace foundations. If they couldn't, we might be running Minix right now.
Adds Dominic:
In software, it is no longer possible to do a "top-down" architected design in a time frame that would keep it from obsolescence prior to release. All one can hope for is an OS which is sufficiently modular as to allow growth and replacement of portions, while software keeps running.
Microsoft has failed at this. Linux is demonstrating success, because it is evolving. It has proven possible to have an open source project controlled by no one individual. Even Microsoft cannot afford the kind of development effort that is pouring into Linux.
I agree that componentization is the best way to expose a system to evolutionary forces. But I'm not willing to cede all authority to the blind watchmaker, and I don't agree that Microsoft has failed in the ways Dominic suggests. Windows demonstrably is evolving into niches for which it was not originally designed, just as Linux is. And while it's true that the Linux project has forever redefined what we think volunteer software-writers can collectively accomplish, I'm not convinced that the Linux bazaar is in every way preferable to the Microsoft cathedral.
Sometimes it takes a cathedral
Don't underestimate the power of the cathedral. One case in point is the ODBC initiative (see Why isn't ODBC a standard feature of Linux?). Another is the Windows 2000 system installer. Here Microsoft saw a problem -- DLL hell, uncontrolled system configurations -- and has attacked it with what I think is a correct solution. The system controls its own configuration; applications state their requirements; when the system configuration is insufficient for an app, the app asks the system to upgrade itself to the necessary level.
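The requirement-declaration idea is simple enough to sketch. Here's a toy version in Python; the component names and version numbers are hypothetical, and this is a sketch of the concept, not the actual Windows installer mechanism.

```python
# Toy sketch of "apps state their requirements, the system upgrades
# itself": compare an app's declared manifest against what the system
# currently provides. All names and versions below are hypothetical.

# What the system currently provides (component -> installed version).
system_config = {"comctl32": (5, 80), "msvcrt": (6, 0)}

# What an application declares it needs.
app_manifest = {"comctl32": (5, 81), "msvcrt": (6, 0)}

def missing_upgrades(system, manifest):
    """Return the components the system must upgrade before the app runs."""
    return {name: wanted
            for name, wanted in manifest.items()
            if system.get(name, (0, 0)) < wanted}

needed = missing_upgrades(system_config, app_manifest)
print(needed)  # {'comctl32': (5, 81)}
```

The design point is who owns the configuration: the app never overwrites a shared component itself (the root cause of DLL hell), it only declares what it needs and lets the system decide how to satisfy the request.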
Microsoft is still in the enviable position of being able to simply decide on something like this, and then enforce it. For the next several years, at every Microsoft developer conference, every session will end with a PowerPoint slide entitled "Call to Action," and one of the bullet points on that slide will be "Support the system installer." Doing that will not be easy, or fun, and nobody is going to scratch his own itch in the process. But eventually, it will set a new standard for installation and configuration.
The Linux bazaar has been incredibly effective, and will continue to be. But the Microsoft cathedral has its uses too.
Jon Udell (http://udell.roninhouse.com/) was BYTE Magazine's executive editor for new media, the architect of the original www.byte.com, and author of BYTE's Web Project column. He's now an independent Web/Internet consultant, and is the author of Practical Internet Groupware, from O'Reilly and Associates. His recent BYTE.com columns are archived at http://www.byte.com/index/threads
This work is licensed under a Creative Commons License.