Drawing from the library stacks: Free and open source

Contents of this chapter

No service to free software (2010s)
Free documentation (2007—2011)
Lost history: Why did the iPhone store open? (2007—2010)
Colleagues and community (2000s)
Pragmatism and propaganda (1990s)

No service to free software (2010s)

Free and open source software is available for everyone to use, alter, and share. Communities have developed a new ethos for producing this software collaboratively, a model so powerful—the GNU/Linux operating system being a prime success story—that in the early 2000s, proponents had credible reason to believe that free software would become dominant and push out the other major model for software development. In other words, free software would drown proprietary software, which no one except its developers can view or change.

Instead, Software as a Service (SaaS) happened.

The Internet made it possible for worldwide communities to achieve the efficient, highly responsive development model of free software, and to share the fruits of their labors. Ironically, it is also the Internet that makes it possible to run an application on some server in California or Germany and interact with it from your desktop or cell phone. The Internet honed the weapon of SaaS, part of the popular concept of software in the “cloud”, which restored the scepter of world domination to proprietary software.

Historically speaking, free and open source software has been ubiquitous since the beginning of computing. Free software contrasts with proprietary software, which can be controlled by one person or company through copyright, trademark, and patent laws. Free software recognizes the rights of the software’s developers, but the software’s legal license allows other people to use it freely, change it, and distribute their changes. I will release the reader from the pain of learning more about licenses in this memoir. Plenty of people have banged their heads on licenses in order to preserve the legal foundations for free and open source software.

If you use an Android phone, you are using free software, although Google and the phone vendors mix in proprietary software and maintain some tight controls on the phone. Android is based on Linux, a free reimplementation of the classic Unix operating system.

What if you use an iPhone, iPad, or Apple Macintosh computer? You are also using a system with a foundation in free software. All those Apple systems run on another variant of Unix, based on the classic Berkeley Software Distribution (BSD). Apple released their customized version, which they called Darwin, as free software.

So if you’re wondering where free software is used, now you know: It’s everywhere. It has run the Internet from the beginning, and manages the web experience of millions of people through the popular Firefox browser.

As an exercise in understanding the effects of free versus proprietary software, think back to the incident in the first chapter when an author based a book on a web development kit that readers could download from one company’s site. When that company arbitrarily decided that this kit no longer met their marketing needs, and peremptorily took the link down, the book had to be rewritten. If the software had been free by the definition used here (free to share and to change), the source would have been available, so the author or publisher could have made a copy, and there would have been no risk of having the rug pulled out from under us. Even that tiny effort would probably have been unnecessary, because other people around the world would have made copies and collaborated on keeping it up to date. So we could probably have made a link to a public repository that someone would make sure to keep alive.

Waves of activism have promoted even greater use of free software, especially in schools and government agencies. These institutions have various obligations regarding transparency, participation, and inclusiveness, which call for free software. One of the biggest initiatives in free software (at least in the United States) took place in my own state, Massachusetts, in 2005. The administration undertook a wholesale replacement of its Microsoft Windows systems with free systems. A friend and author I worked with, Sam Hiser, acted as a consultant. He invited me to report on their work, bringing me back to the State House where I had gone to promote use of the Internet in 1992. Proprietary software creates its own culture and demands on workers, making the adoption of free alternatives a multi-layered undertaking. The Massachusetts project was carefully planned and well executed, but the administration (run by a Republican governor) got into a tiff with the legislature (dominated by Democrats), and the legislature killed the transition.

Let’s return to the state of computing in 2020. Most people tap their screens and check their messages with barely a thought that they’re connecting to servers far away. If they did, they might consider it odd that the picture they’re sharing with the friend sitting next to them has to pass from their phone to a server in Iowa or Bangalore. This is the cloud: servers and data storage so hidden from you that you can’t even tell in real time how far away they are.

Although the free software movement, following Richard Stallman’s lead, disparages the term “cloud” as a vague umbrella term, I think it’s very apt for such a secretive and shifting situation. You may interact with one cloud service, such as movies on Netflix, but Netflix runs its servers in turn on another cloud service, Amazon Web Services. Some companies are expert at providing user services, others at maintaining physical infrastructure. I wouldn’t be surprised if, someday, one or two companies that do an awesome job at maintaining the physical systems run all the data centers in the world. The Amazon.com retailers of the world would then lease their systems to run their user-facing services.

Conceptually, using the cloud is a form of outsourcing. The cloud vendor says, “Why not run your software on our hardware to simplify deployment?” in the same way that a professional house painter says, “Why not sit back and pay me to reach the high spots?” And a cloud vendor can pile on more and more services to add value, just as a house painter can offer repairs and other construction work.

Here I’ll lay out the most significant service currently offered by cloud vendors—artificial intelligence (AI)—and why their entry into that feverish area of innovation may change how innovation itself works.

As I write, software in the cloud is consolidating around a significant trend: Not only do the results of programmers’ efforts run on servers owned by the vendor, but the entire activity of developing the software runs on the vendors’ systems. Instead of popular free software tools, programmers use proprietary tools created by the vendor. And this potentially narrows the scope for O’Reilly’s content, because the company has always based its success on covering universally available software.

Complex computing advances—the types that shatter old assumptions and push brusquely past the barriers encountered by older researchers—take place in open forums. Such was the case with machine learning, which brushed the cobwebs off an old idea called neural networking. Neural networking was a classic model for artificial intelligence that had brought generations of computer scientists to ruin, shipwrecked on the limitations of their hardware. As with most software advances, massive speed-ups in hardware let machine learning coast to success in the twenty-first century.

Strangely enough, Moore’s Law seemed to be reaching its limit a few years before machine learning took off. Conventional chips weren’t really up to the quantities of instruction cycles demanded by this repetitive crunching of huge data sets, but clever AI researchers turned to graphical processing units (GPUs). These weren’t exactly a dime a dozen, but they were mass-produced to meet the needs of high-speed display processing, particularly for computer games. GPUs had snuck into all computer systems, even those as small as cell phones. It turned out, unexpectedly, that GPUs were great for machine learning because they could run limited sets of transformations on blocks of data streaming in one after another. And like most great computer technologies, GPUs shamelessly stole ideas out of the distant past—in this case, a seemingly obsolete technology called array processors, which were the sole product of a company I worked for in the 1980s.
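For readers who want a concrete picture of that pattern, here is a minimal sketch in Python, with NumPy standing in for the GPU; the sizes and data are invented, and a real framework would dispatch the same uniform arithmetic to thousands of GPU cores:

```python
import numpy as np

# One fixed transformation (a matrix multiply followed by a simple
# nonlinearity), applied identically to block after block of incoming
# data. GPUs excel at exactly this: a limited set of operations repeated
# uniformly over massive streams of data.
rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 16))   # a fixed "layer" of a network

def transform(block):
    """Apply the same transformation to every row of a data block."""
    return np.maximum(block @ weights, 0)  # linear map plus ReLU

# Simulate blocks of data streaming in one after another.
for step in range(3):
    block = rng.standard_normal((1024, 64))  # 1024 samples per block
    print(step, transform(block).shape)      # each block yields (1024, 16)
```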

After the principles of machine learning were found worthy, both hardware and software evolved to serve it better. Hardware expanded through specialized chips to carry out common operations, such as Google’s tensor processing unit. In software, the free and open tools generated by early researchers were embraced and extended by Google, Microsoft, Amazon.com, IBM, and other companies.

This is where the cloud came in. Whereas the first successes of machine learning were hammered out in the sweat of researchers who installed new programming libraries and set up massive clusters of computers, later developers found it ever so much easier to let Amazon or Google run the computers and vet all the software.

At first, the cloud companies simply installed the free software tools with convenient interfaces for access. Then the companies started to develop some pretty awesome tools of their own. Sometimes they open sourced these tools, which they could safely do because in time, the convenience and robustness of their cloud platforms would wear down the determination of nearly any business that tried to deploy the tools on its own.

There are other precedents in society for the expropriation of creative producers. Gay and transgender activists (particularly people of color) like to complain that they invent lots of cool styles and fashions, while all the wealth goes to big companies that steal and commercialize the ideas. Thus it was with machine learning researchers and the free software they created. They drove an AI revolution that was ultimately taken over by big cloud vendors.

Let me be fair. The big companies, such as Google and Microsoft, have contributed enormously to research in computer science. They did so by offering numerous enticements to recruit leading researchers. For instance, two engineers from Google named Jeff Dean and Sanjay Ghemawat implemented the company’s MapReduce algorithm, perhaps the kick-off of the big data revolution. (I honored this achievement when these engineers revisited it for a chapter in the book Beautiful Code). Microsoft set up independent research facilities that published enormous numbers of articles in peer-reviewed journals—an achievement that drives home the importance of independent research communities.

These companies enabled true innovation when they participated as equals in an open research environment. They consciously protected research from the immediate pressures of business. This model goes back at least as far as the formation of Bell Labs a century or more ago.

Many have commented on the consequences of large companies controlling so much of our computing, particularly around privacy and biased analytics. I would like to pose two specific questions related to innovation: Will development on proprietary systems in the cloud be as fertile as that of free software communities and independent researchers releasing free software? Second, will the proprietary innovation reflect the interests of the users, considering that it removes much of the choice offered by free software? (The choice is reduced simply to whether or not to participate in the platform at all.) While you ponder this, I will explain some of the achievements of free software, and how I navigated the fascinating communities that came together around it.

Free documentation (2007—2011)

People have been volunteering the fruits of their writing ever since literacy slipped away from the stranglehold of the high priests. Online, the power of community documentation—a term I invented to describe this volunteer effort—has been recognized at least since The WELL in the 1980s San Francisco Bay Area.

The mystery of why volunteers do this work—and even fervently immerse themselves in it—has prompted research over many decades. Many people have tried to measure and explain the contributions of free software developers, Wikipedia authors, and other volunteers. What was new about my small efforts in this area was a focus on contributions of educational material for software. Such a contribution could be as small as an answer to a newbie’s awkward posting to a mailing list. When it extends to book-length form, even publishers can take interest. By limiting the topic of my inquiry to computer documentation, I could take on such questions as: Do people who ask questions get answers? What aspects of software make it hard to produce educational material? What kinds of projects are most likely to get volunteer contributions of documentation?

Although community documentation attracted my interest back in the 1990s, I really started trying to grasp the behavior of communities, and to exert influence on them, in the 2000 decade. My role as senior editor at O’Reilly was fading, so I put in some sputtering attempts to create a consulting business around editing community documentation.

My interest in community documentation developed along with my respect for the achievements of free and open source software. My basic thesis was in sync with multiple trends that fascinated technical, business, and political leaders. Throughout the academic and action communities in which I circulated, the concept of “openness” was being extended from software to hardware, academic and pharmaceutical research, business practices, and government. Although many free software advocates had resisted the term “open source” that Tim O’Reilly and our company were promoting, and although I use “free” most of the time myself, the word “open” offers a great fecundity. It has prodded people in every field and industry to look for opportunities for transparency and the inclusion of diverse voices.

Hence the other major trend influencing my ideas for documentation: the “wisdom of crowds” popularized by journalist James Surowiecki, a close cousin of the idea of “crowdsourcing”. Research showed that, under the right conditions, a sophisticated distillation of many views would produce more accurate assessments than a poll of so-called expert opinion. Cynics will claim that this thesis has been contradicted by recent political elections and other mob behavior. But those events actually confirm the research, which also showed how the crowd could be misled by premature influence and unhealthy biases.

Another concern powered my ideas for documentation: the emerging crisis in creative content triggered by widespread Internet access and digital formats. The trend is hollowing out media, as everybody has seen by now.

I believe the world is in the opening act of a great shift away from classic works by individual geniuses, toward evolving creative efforts to which many people contribute. Approaching documentation, I applied this hope to volunteer production efforts. During the early years of the 2000 decade, I conducted research into community-generated documentation.

One project tested the boasts one hears about the superb support offered by mailing lists and forums. Project leaders said that any problem people faced using their software could be solved merely by posting to the mailing list and scanning the replies, which senior members would generously serve up. These leaders furthermore claimed that future novices would be able to find all the answers they needed by searching the archives. This spontaneous generation of help has been called “passive documentation”.

I suspected that the reality was not so rosy. In my own archive searches for projects (such as Drupal web development software), I would find multiple contradictory answers, each blending correct details with wrong ones. I also found many outdated answers—and it was hard to tell what was outdated.

So I picked a few popular free software projects and followed a number of threads on their mailing lists—28 threads in one study and 14 in another. In each study, I tracked the answers to technical questions. The first study I titled “Do-It-Yourself Documentation? Research Into the Effectiveness of Mailing Lists”. The second was more conventionally titled “How to Help Mailing Lists Help Readers (Results of Recent Data Analysis)”.

My findings blasted the pretensions of the mailing list operators. Half of the questions I tracked were ultimately answered, and apparently satisfied the questioner (one could rarely be sure, because the original questioner rarely reported success). For a free, volunteer-driven forum, that’s actually rather impressive. But it by no means serves everybody. And a quarter of the questions didn’t receive even an attempt at an answer.

In another research project, I tried to determine what motivates people to answer questions online and contribute documentation. I recruited the O’Reilly web staff to help me run a survey, and ran some statistical tests to find the most important reasons. I hypothesized that people participated more for selfish reasons, such as to promote their expertise, than for altruistic reasons. But in my results, it seems that altruism barely edged out the self-promotional motivations.

I say only “seems” because results were close and because I discovered much later that I had made a novice error in my statistics. I assumed that answers on a 1-to-5 scale could be compared as ratio data—that is, I thought that someone who assigned a rating of 4 valued some item twice as much as someone who assigned a rating of 2. This is fallacious, because people don’t use rating systems that way. I should have compared the answers as ordinal data, which have a much weaker relationship and would have been much less conclusive. This mistake is only the second choice I regret during some three and a half decades of writing articles; the first was an endorsement of encryption key archives that I have discussed elsewhere.
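For the statistically curious, here is a minimal sketch of the distinction, with made-up ratings and SciPy’s standard tests (the numbers are purely illustrative, not my survey data):

```python
import numpy as np
from scipy import stats

# Invented 1-to-5 survey ratings for two motivations.
altruism = np.array([5, 4, 4, 5, 3, 4, 5, 4, 3, 5])
promotion = np.array([4, 4, 3, 5, 3, 4, 4, 3, 3, 4])

# My mistake: treating the ratings as ratio data and comparing means,
# as though a rating of 4 expressed twice the motivation of a 2.
print(stats.ttest_ind(altruism, promotion))

# The ordinal treatment trusts only the rank order of the answers. The
# Mann-Whitney U test asks whether one group tends to rate higher, without
# assuming the distances between scale points mean anything.
print(stats.mannwhitneyu(altruism, promotion, alternative="two-sided"))
```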

Despite this error, I think the article that I based on my study, “Why Do People Write Free Documentation? Results of a Survey”, was a useful contribution to the free software movement. It’s a shame I didn’t save a copy, because the O’Reilly web team wiped it out and it wasn’t recorded in any archive.

Seeing an unrealized promise in passive documentation and other volunteer contributions, I sought ways to organize communities and help them meet their needs. I would compare free software documentation to government funding: Nobody wants to contribute to it, but everybody wants it to be there when they need it. I published web postings on my proposals and gave talks at computer conferences.

My ideas for structuring community documentation were as radical as my vision for boosting participation. Manuals would be entirely passé or would form a relatively small core of an enormous distributed ecosystem of educational materials spanning mailing list archives, blog postings, and comments on various sites. Projects would not try to centralize documentation, but would thrive on the scattered contributions of individuals writing on their own sites. I imagined spawning a whole new discipline around community documentation, with benefits throughout the software world. Don’t accuse me of low expectations.

Huge problems remained in finding documentation and determining its quality. I was confident that I could address these problems too. I suggested a system of comments where people could indicate “documents to read before this one” and “documents to read after this one”. I imagined putting all these comments into a standard data schema and creating tools to crawl the comments and display learning paths to readers. In concept, this anticipated the learning paths that O’Reilly started generating in the late 2010 decade, although implemented very differently. I wasted untold hours on the schema and an API to underpin the comments and permit the automatic generation of learning paths.
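Nothing survives of that schema, but a minimal sketch of the idea, with hypothetical document names and fields, might look like this: the reading hints become a dependency graph, and a learning path is simply a topological ordering of it.

```python
from graphlib import TopologicalSorter

# Each comment anchors one document and suggests what to read before or
# after it. All names and fields here are invented for illustration.
comments = [
    {"doc": "install-guide", "read_before": [], "read_after": ["first-app"]},
    {"doc": "first-app", "read_before": ["install-guide"], "read_after": ["deployment"]},
    {"doc": "deployment", "read_before": ["first-app"], "read_after": []},
]

# Fold both kinds of hint into one dependency graph: document -> prerequisites.
graph: dict[str, set[str]] = {}
for c in comments:
    graph.setdefault(c["doc"], set()).update(c["read_before"])
    for later in c["read_after"]:
        graph.setdefault(later, set()).add(c["doc"])

# A learning path is a topological ordering of the graph.
print(list(TopologicalSorter(graph).static_order()))
# ['install-guide', 'first-app', 'deployment']
```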

My inability to drum up interest among developers and their companies eventually led me to abandon my ideas for formalizing the work done by volunteers. But O’Reilly also noticed that there was unexploited potential in community documentation, and they showed a willingness to capitalize on my research. I spoke about community documentation at two Open Source Conventions and even once helped O’Reilly try to develop it into a business. We got this opportunity when SAP, the huge business software firm, saw how much its users were contributing to documentation about its tools.

While highly proprietary—in fact, one of the early success stories for SaaS—SAP had developed an interest in free software and was avidly creating interfaces to its services for popular languages so that its customers and third parties could expand the available tools. Through various interactions with O’Reilly, to which I was not a party, SAP started to express interest in volunteer documentation. They hoped to recruit their enormous user community to create a knowledge base, which they called “social documentation”.

This potential project must have held out the promise of big bucks for O’Reilly, because Laura Baldwin took a great interest in it. She was not yet Chief Executive Officer, but still Chief Operations Officer. I think that she was directing policy for the whole company by then. It’s significant that, in 2007, she not only set up several days of negotiations at SAP headquarters in Silicon Valley but attended all of them personally. She also brought in Andrew Odewahn, who was running a lot of our technical innovation and may have already been our Chief Technical Officer, along with one or two other leaders.

Thus, this meeting was a huge commitment of resources. Baldwin rented a van and shuttled the whole O’Reilly crew each day from the hotel to the vinyl and acrylic boxes of SAP’s Silicon Valley meeting rooms. This was an opportunity for team building and drawing closer.

My role stemmed from my volunteer work investigating and participating in free software documentation. My vision matched that of SAP: to tap the interests and expertise of the community by championing many small volunteer projects as a supplement to centralized, official documentation. The philosophy was that users within a community best understand their own needs, and can speak to other users more effectively than a technical writer hired by the company.

I created a presentation for SAP that included an in-depth analysis of documentation for a popular library of the time (jQuery, widely used to simplify the production of spiffy web pages), to illustrate the strengths and weaknesses of online documentation. I also took notes during meetings and created some guidelines for our further cooperation. My term, “community documentation”, was adopted for O’Reilly’s proposal.

The SAP managers showed enthusiasm during our meetings, but never pursued the project. Perhaps it was just an ideal that wouldn’t have borne viable fruit. But the main problem I remember from these meetings was a personal one: I regularly fell asleep.

The problem stemmed from medication. I have suffered from Tourette’s syndrome since the age of five. I had recently accepted a round of the medication haloperidol, which brought on overwhelming waves of fatigue. It took me many months to titrate the dose and find a comfortable balance where the drug suppressed most of my Tourette’s-related tics without tiring me too much. In the meantime, I found myself napping many times during the day. I would simply lie on the floor of my office and drift in a dreamy mist for 20 or 30 minutes. People may have seen me on the floor occasionally but did not bother me.

In meetings with a client, of course, falling asleep was highly irregular. Baldwin would send instant messages or email saying “Wake up!” Why I didn’t confess my medication issue to her, I don’t know. Doing so would not have solved the problem, but might have earned me some sympathy. Anyway, I never talked about it. I’m sure my behavior disqualified me from any future meetings with clients. But Baldwin didn’t judge me entirely by this shameful lapse. She continued to seem fond of me, treating me professionally and boosting me occasionally with praise.

A couple of software projects did allow me to volunteer in my spare time to organize documentation efforts, and some documents were actually produced. Usually, though, these projects decided to go down a more traditional path and just hire a technical writer. It’s no surprise that when I laid my vision out before SAP under the aegis of O’Reilly, they demurred.

The volunteer effort in software documentation that showed the most promise was not my invention, but an unlikely venture founded by a New Zealander named Adam Hyde. Hyde was an artist, a unique and inspiring individual who migrated to Europe, created interesting art installations, and learned a good deal of technology in order to spin up modern artistic experiences. Like me, he noted the paucity of good educational materials for the tools he was using, and launched an organization he named FLOSS Manuals. The acronym FLOSS is commonly used for free, libre, and open source software, so he slapped the label on his plucky venture.

The FLOSS Manuals notion of documentation was stodgy compared to mine. They focused on producing a small but useful manual for each tool. The manual would not be fluid and constantly evolving, as I saw software documentation. The book would be created in one fell swoop and then updated through a follow-up project sometime in the future.

I forget how I got involved with Hyde—I believe he approached O’Reilly as a company and that I was the only person to step up and take interest—but I participated in several projects and watched his vision gel into a rigorous documentation process he called a “book sprint”. Sprints were already common on software projects to accomplish focused tasks through group participation. After Hyde experimented with approaches of varying sophistication to book production, he arrived at a strict five-day sequence that FLOSS Manuals applied over the years.

I had no idea where my involvement with FLOSS Manuals would take me—certainly not expecting to cross continents. One of my first projects produced a highly praised book on the command-line interface for the Free Software Foundation. This achievement drew me closer to Richard Stallman and others in that institution. I should mention here that I’ve contributed to other, more traditionally generated manuals published by the FSF, and consider many of them to have impressively high quality. My name was even on the cover of the GNU C Library manual for several years, until I eventually asked the organization to remove my name because I had written up just a few APIs.

I traveled to California and met with some 20 other FLOSS Manuals volunteers at the Googleplex for a Winter of Documentation that Google sponsored in conjunction with their well-known Summer of Code. (The so-called Winter actually took place in the autumn, which I remember because I enjoyed seeing the sukkot that Jewish employees set up on the Google campus.)

Google was generous. They took care of us for the week, paying for expenses and providing us space, staff support, and meals. They imposed no conditions on FLOSS Manuals’ work. I could imagine Google objecting to the use of their facilities to work on documentation for OpenStreetMap, which might conceivably emerge as a competitor to Google’s own map service, but there was no such intrusion.

One detail in Google’s planning had an outsized impact on the volunteers: an incomprehensible lack of lunch diversity. Every day, the same sandwiches turned up. We all grumbled about it to each other, and I got a lucky reprieve. One of my authors, Steve Souders, was working at Google at the time (a natural next step after the company where I first met him, Yahoo!). He came for me one day and treated me to lunch at the famous Googleplex cafeteria, where different stations served up foods from many parts of the world. You see the same plenitude in upscale college cafeterias nowadays, but I think Google was an early example.

At the Winter of Documentation, Hyde assigned me to work on a book about the free software KDE desktop. In this context, a “desktop” is a software package that helps programmers produce consistent and highly capable graphical interfaces. I evaluated the contributions that my team members chose to make and declared that this book wasn’t a technical manual but a guide to joining the KDE development community; the team accepted this focus. Most of them were very talented young developers from India; one was still a student. We not only finished a nice document but bonded during that week. We parted with sadness and mutual appreciation, after I drove them to San Francisco in my rental car.

I chronicled all this work with copious articles written on the spot. I described book sprints and contrasted the FLOSS Manuals approach with conventional publishing, which differed in almost every detail despite the goals they had in common. Most of these articles were happily published by the O’Reilly staff on our web site, although I think they have all vanished now in one of the web team’s blind purges.

In our most extensive endeavor, half a dozen FLOSS Manuals volunteers traveled to Amsterdam in March 2009. We had been invited by The Institute of Network Cultures to a conference called Winter Camp, housed at a college on the eastern edge of the city, to explore new forms of organization. Although I never really learned what the Institute of Network Cultures did, I got an intriguing education in cultures at the conference. We were asked to hold meetings where we could distill our methods and share insights.

One really cannot grasp this unique event without being part of it: everything from live presentations to the evening meal we cobbled together after suffering through terrible cafeteria food for several days. (The meal we made was actually no better.) One might be able to derive some of the feel of the conference by watching the video interviews conducted there by Gabriella (Biella) Coleman, an anthropologist who achieved fame by analyzing voluntary online communities such as Anonymous and the Debian GNU/Linux project. Do not watch Coleman’s video of me, however, because I came in fatigued by jet lag, sleep deprivation, and excitement, so my Tourette’s syndrome flared up and my attempts at articulate presentation were disrupted by sequences of nervous tics.

In addition to Coleman’s videos, the conference led to a book: From Weak Ties to Organized Networks: Ideas, Reports, Critiques. It includes three sizeable blog postings I wrote for the O’Reilly web site from the conference.

Our campus was in an outlying district of Amsterdam where the city had dispensed with traffic lights and any other form of vehicle control, save for a slightly raised crosswalk that provided a gesture of recognition to pedestrians. We occupied bunks in rooms designed like youth hostels, slept with five or six bunks in the room—meaning we did not sleep—and washed in a shower provisioned with one infinitesimal sliver of soap. We shared the cafeteria with high school students on some camp experience, because the conference had deliberately been scheduled when university students were on vacation and accommodations were at their cheapest.

Winter Camp, roughly put, explored ways of exploiting the Internet to conduct social change and build diverse, inclusive communities. The participants were all politically left of left, and many groups boasted of being “autonomous”. I don’t know how autonomous they could be, given that they took the same buses and ate the same atrocious cafeteria food as the rest of us. Most attendees enjoyed some academic position that permitted them to explore their autonomy freely.

While the conference participants staunchly opposed oppressive systems around us, we all benefited from our privileged place in the world. Our educations and skills allowed us to take our chosen directions in life, and to live wherever we wanted. I noticed during my conversations with other participants that many had been born on one continent, received advanced degrees on another continent, and were currently working in a third. It might have been hypocritical for them to critique the hegemony of the global elite—but I do appreciate that they were in a position to understand the manifold problems of our world, which travel even faster than they did.

Lost history: Why did the iPhone store open? (2007—2010)

This chapter covers various achievements by free and open source software communities. I’ll turn here to a change in the way billions of people interact with computers, and why they can thank free software hackers for it.

One of the most world-shaking computing advances of all time was the development of the platform for mobile devices, where outsiders could offer their own applications to owners of the devices. These applications (so common they are called “apps”) have transformed the way almost everyone leads their lives. Because the device’s owner can get apps from other places besides the device’s vendor, the sources for apps are often called “third parties”. Restaurants and stores offer apps to let visitors place orders before picking them up—a particular relief to have during COVID-19. A conference can create its own app to help you manage your schedule, and a museum can create a virtual docent to accompany you through its holdings. Third-party platforms are an invitation to the whole world to innovate.

It would be hard to exaggerate the value of the platform in bringing digital benefits to the world. Innumerable companies and web sites have adopted the idea of a platform where developers, unasked, can offer applications. A platform means that people can use their familiar setting—whether it be a mobile device, a social media site, or some other digital place—to run the program provided by an organization such as a restaurant or museum. Most platforms have restrictions, whether to protect visitors from malicious programs or to promote the interests of the platform owners, but they still represent a great advance in human communications.

But few people know how these platforms came into being. This is a story I’ve often told, and it goes back before what most people think is the beginning of the story.

Most people remember that Apple’s iPhone was the first significant technology to offer a third-party platform (subject to oversight by Apple). For years, whenever an organization wanted to bring in others to contribute to its community, it would say, “We want to build the iPhone store for…” whatever community they were representing. The iPhone store thus became a catch-all phrase for openness and crowdsourcing.

But why did Apple create this store? When the iPhone was released in 2007, the company announced that they would provide all the apps themselves. They couldn’t countenance offering space on their screens for other creative developers.

Some six months later, they made an about-face. Steve Jobs announced the iPhone store. Most observers didn’t know why. The trade press and high-paid consultants came together around a feeble justification: Jobs and his company must have decided their go-it-alone strategy was limiting and that bringing in new energy from outside would enhance the product.

But the truth is fascinating for what it says about free software and computing communities.

When the iPhone came to market, hackers around the world quickly grasped the power of a general-purpose computing device that could fit in your hand, especially when it could connect to the Internet through cell phone towers that were nearly universally available. These tinkerers wanted to realize their dream applications on that device, in the same way that the 1970s idealists in Steven Levy’s book Hackers glommed onto any digital system they managed to walk by.

Apple tried to maintain tight control over the iPhone, but the company had not reckoned with the power of free software. Some basic design choices left the door open.

As in recent versions of the Macintosh computer, Apple used free software to speed up development of the iPhone. Why reinvent things such as multiprocessing and memory management when these basic capabilities had been solved in free software decades before? Having chosen a version of Unix as their operating system, Apple hid within the iPhone all kinds of convenient tools used by Unix hackers over many decades. The free community quickly uncovered these tools and used them to explore the iPhone down to its guts. Along the way, they figured out how to break the security that Apple hoped would prevent them from loading their own apps.

But how could they develop apps, not knowing the functions provided by the iPhone? Here again Apple laid out a red carpet welcoming the hackers, through another design choice. Jobs had settled on Objective-C as his preferred computer language many years before, when he started NeXT Computer. He seemed to harbor a fondness for this language that basically no one else was using: when he came back to Apple, he made it the main programming language on the Macintosh as well as the iPhone.

Objective-C may be obscure, but its design makes information hiding harder than in some other languages. Because the language does a lot of run-time evaluation, determined programmers can find function names and arguments in plain text within binary files. Objective-C is also supported by the popular free GNU compiler from the Free Software Foundation.
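To give a feel for what the hackers exploited, here is a crude Python cousin of the Unix strings utility; the selector-style filter at the end is my own rough heuristic, but run over an iPhone-era binary, a scan like this surfaces method names sitting in plain text.

```python
import re
import sys

# Scan a binary file for runs of printable ASCII, the way the classic
# "strings" utility does. Objective-C stores selector (method) names as
# plain text, so they show up in output like this.
def printable_strings(path, min_len=6):
    data = open(path, "rb").read()
    for match in re.finditer(rb"[ -~]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

# Usage: python strings.py <some-binary>
for s in printable_strings(sys.argv[1]):
    if ":" in s and " " not in s:  # rough filter for selector-like names
        print(s)
```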

Result: Within weeks of the release of the iPhone, online communities were exchanging programs and running them on the devices.

Apple knew quite well what was going on. Whether the trade press and consultants knew, I can’t tell. But I know that a tremendously talented hacker and (ironically enough) security expert, Jonathan Zdziarski, wrote a book for us documenting the jail-broken iPhone interfaces. Another author wrote a similar book for another publisher. I also penned an article about this amazing community effort for the O’Reilly web site in January 2008. I did so partly to promote our book and partly to make sure the history was documented (although like most of my historically significant articles, this one disappeared from the O’Reilly web site during a reorganization). You can now find my original article, retrieved through the Internet Archive, on my own web site.

So Apple had no choice. The richness of unauthorized apps coming out of the community would eventually win over the public, contrasted with the paltry offerings from Apple. So they created a new, public API, and invited the world to contribute. The age of software platforms took off.

There’s an even bigger story I wanted to tell, but didn’t get the chance. I was angling to get this story into Steven Levy’s classic book Hackers.

Now you’re thinking, how could I have worked on that book? Wasn’t it released by Doubleday Publishing in 1984, a full eight years before I got into the publishing field? Indeed, when Hackers was published, the only role I played was that of an enthusiastic reader. But when Levy decided to re-release the book in 2010, O’Reilly managed to land the contract.

Neither Levy nor O’Reilly management wanted to actually update the text. I urged Levy to add a fourth part to reflect events I found crucial. Levy had finished his 1984 edition with an epilogue that came to feel off-key over time. He focused on Richard Stallman but, as I read it, presented him as a lone, sad figure wallowing in his MIT office. Even though Levy took another look at programming ten years later, he didn’t seem to understand the incredible tidal wave of free software in our time, spearheaded by Stallman, and I suggested he give it the coverage worthy of a book called Hackers. But Levy wasn’t interested.

The one thing we could do to enhance the new edition at O’Reilly was to add links, the way journalists offer doorways from their articles onto the Web by attaching links to key phrases. Mike Hendrickson and I spent hours doing this for Hackers.

It was great fun. Levy referred to events of past centuries that I suspected were unknown to many readers, so I put in links to history pages. I also tried to find a link for every company, university, or other organization mentioned. (I did not point to Wikipedia, because it isn’t a primary resource. I also refuse to include links to Wikipedia in works that I write or edit, although I allow authors to refer to phrases from Wikipedia as evidence of common conceptions.)

For Hackers, our most diligent sleuthing took place on my least favorite part of the book, the part that discusses early video games and their creation by companies presented in the book as exploitative and dissolute. Hendrickson and I found that most of these obscure video game machines, once widespread commodities, continue to be playable through software simulators available on the Web. Who developed all those simulators? Talk about dedicated hackers!

Colleagues and community (2000s)

I suppose people working in any field can form vibrant communities, but the free and open source software movement is especially conducive. Participation encourages personal and communal trust that spans decades and continents. This springs partly from idealism, but we must beware of attributing everything in free software to idealism—such an attitude downplays the movement’s resilience. A bigger contributor to the mood of free software is its collaborative ethos. The process of contributing is like an apprenticeship in how to care for others in one’s community.

I’ll pause here while listening to the shouts of scoffers who point to chauvinism of various types, nasty arguments over trivia, and other anti-communal behavior in free software communities. Yes, we acknowledge when bad things happen. But please also acknowledge that nastiness can appear widespread when it is actually confined to a few participants (as on the cyber-rights list I started for Computer Professionals for Social Responsibility), that many well-meaning people are trying to control destructive behavior, that destructive behavior usually ends by destroying communities that can’t control it, and—this may be hardest to accept—that rancorous words may emerge from a deep reserve of love.

Healthy online communities have taken time to develop. But members have gradually found ways for the community’s majority to exert its will, snuffing out the abusive and manipulative assaults on character that observers deplore on today’s Internet, and that have actually troubled online media since its early years. Free software communities have developed some of the most sophisticated strategies.

Because I was not a member of any of the commonly excluded groups, my own experience consists of warm associations with authors, tech reviewers, advisors, and leaders of the projects I worked with.

At one conference, while people threw their tired limbs across couches in the lobbies during one of the long evenings that followed the day’s formal sessions, I held a conversation with a free software user and consultant named Zak Greant. I’d put him on several projects as tech reviewer and saw him as very thoughtful and congenial. Greant started to confide to me about his bouts of ADHD, depression, and burnout. This was years before people generally understood how widespread and normal such ailments are. But I had worked for a few years in the mental health field and approached depression without stigma. I listened closely to Greant and gave him support. In his review of this memoir, he wrote, “I should mention how glad I was for that chat nearly 20 years ago. I was really, really struggling. Talking with you made me confident enough to talk with others and that’s led me towards much better mental health.”

Several authors of mine, also, suffered from depression. I could be open to them as they confided the problem, and guide them toward finding an appropriate role while lining up a co-author.

One author named Arjen Lentz had perplexed me because he had started with great enthusiasm on a joint project with several other authors, and called in regularly to our weekly discussions, but was failing to meet his deadlines. Finally he asked to speak to me individually and confessed to me that he was clinically depressed.

I validated the pain this was causing him and went through a frank discussion of how he could continue to help the project without shouldering more of the work than he could handle. At the end of our conversation he said, “I knew I could talk to you about depression, because I talked to Zak and he said you had been very helpful when he told you about his depression.” I had no idea that Greant and Lentz knew each other at all. One lived in Canada, the other in Australia. But the free software movement knows no boundaries, and strong relationships erase all ethnic and national distinctions. My casual good deed—in this case, my willingness to listen sympathetically to Greant—repaid itself years later.

Lentz decided to go public with his problems and create an international support group called BlueHackers for computer professionals suffering from depression. He created small, blue square stickers for this group, and urged people to put them on their laptops to signal their support. At conferences where he spoke, he would ask the audience to indicate who had depression themselves or in their families. Many people would raise their hands, and he would hand out his stickers. I took a bunch and did this while speaking at some conferences too.

Religion often forms a point of contact with other people. I’ve shared stories of religious practice with a member of the Church of Jesus Christ of Latter-day Saints at a party that his company hosted at the Open Source Convention. And I had many such conversations with my manager, Frank Willison. I never accepted the facile admonition to avoid discussing religion with people one knows casually. How could it hurt a relationship to let someone discuss their values, their commitments, and the most significant community in their lives?

Well, maybe it could hurt. At the Open Source Convention, I once asked someone whether she had plans for the summer, and she said, “I’m going for a walk.” An incomplete thought seemed to lie behind that statement, so I asked where the “walk” would be and she answered, “Northern Spain.” I immediately recognized the Camino de Santiago pilgrimage, and she confirmed she was engaging in that favorite activity for centuries of faithful Christians. She then let me know that she is reluctant to talk about religion among computer people, because they often become uncomfortable or hostile. No community achieves its ideals all the time.

Pragmatism and propaganda (1990s)

It’s not my intention to give an overview of free and open source software, but this memoir will make more sense if readers can discard some impediments to understanding the movement. We have to dispose of two myths in particular: that the free software and open source software movements are acrimoniously opposed, and that the Free Software Foundation (FSF) is doctrinaire.

The term “open source” was invented not because any proponents disagreed with the goal of freeing software, but because they thought “open source” was easier to understand and had less baggage to scare adopters. Proponents who stuck with “free software” criticized the new term on several grounds, but that doesn’t by any means indicate that the two groups refused to work together. The misconception that they were at loggerheads seriously mars Christopher Tozzi’s book For Fun and Profit: A History of the Free and Open Source Software Revolution, which otherwise offers fine historical insights.

If the free software and open source movements couldn’t work together, Molly de Blanc—a long-time manager at the Free Software Foundation—would not have served on the board of directors of the Open Source Initiative. And how could Allison Randal—a leader in the Perl community who worked with O’Reilly on conferences and other projects—serve as president of the OSI, after being invited by the Free Software Foundation to participate in a conference about the GPL in January 2006, and then serving on one of the drafting committees for version 3 of the GPL? But the myth appeals to people who like simple, black-and-white controversies.

This myth reminds me of facile stories about the 1960s civil rights movement that put the Reverend Martin Luther King, Jr. into contention with nationalist leaders such as Malcolm X and Stokely Carmichael/Kwame Ture. In fact, although disagreeing strongly, all these leaders had respect for each other, consulted with each other from time to time, and recognized that each had a role to play in freeing their people. Similar regard exists between the leaders of free software and open source.

I, too, was at that 2006 conference about the GNU General Public License, or GNU GPL, colloquially known as “copyleft”. This GPL was developed by Richard Stallman, one of his acts of genius, and it formed the lynchpin around which everything else in free software revolved. The GPL was adopted by Linus Torvalds for the Linux kernel, as well as many other software projects. The GNU GPL is also the template for Creative Commons licenses in literature, art, video, and other cultural contributions.

Over time, after GNU GPL proponents scrutinized the uses of free software by companies whose ethics fell short of what the movement would like to promote, the GPL was declared in need of an update. The weaknesses of the license in use by the Linux kernel and others (version 2) were subtle, and the proposed changes were scattered. A huge outreach effort brought in all manner of interested observers to help design version 3.

I thought that the web site set up by the FSF for comments on the new draft license was the best design I had ever seen, and probably have ever seen, for displaying edits by a massive variety of people. One could easily see the exact word or passage each person was critiquing, even if half a dozen people offered comments on overlapping phrases.
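I can no longer point to that site, but a minimal sketch conveys the underlying idea: anchor each comment to a character range in the draft, and let overlapping ranges coexist. The draft text and comments below are invented.

```python
# Each comment is anchored to a half-open character range [start, end).
draft = "The licensee may convey covered works under this License."
comments = [
    (4, 12, "Define 'licensee' earlier."),    # covers "licensee"
    (17, 38, "'Convey' is a new coinage."),   # overlaps the next comment
    (24, 38, "Clarify 'covered works'."),
]

def comments_at(pos):
    """Return every comment whose range covers character position pos."""
    return [text for start, end, text in comments if start <= pos < end]

# Both overlapping comments show up at a position they share.
print(comments_at(30))
```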

The new license was finalized in 2007. Oddly, a lot of people rejected it. The Linux kernel developers decided to stick with version 2. And perhaps that was just as well, because hundreds and maybe thousands of developers held the copyright on different updates made to the kernel. Getting them all to agree to version 3 would have been logistically tortuous.

But on to the other myth I wanted to dispel. Many people identified the FSF with Stallman, and considered Stallman some kind of fanatic. Stallman has personal oddities, and I believe he stayed on as president of the FSF too long, but it is unfair to call consistent and insistent principles “fanaticism”.

Think of how the ACLU or the Electronic Privacy Information Center hammer on any government proposal that would weaken privacy. Some of those proposals address real problems—notably terrorism, violent crime, and pandemics—but it’s up to someone to argue at every juncture for our fast-eroding rights, and that’s what the ACLU and EPIC do.

Or think of the early Jewish arrivals in nineteenth-century Palestine, bringing with them little but a conviction that they needed to form a Jewish state. One can certainly criticize some of their behavior and tactics, but one has to admit they created an unprecedented new reality that, before its success, most people thought both unnecessary and ridiculous, rescuing hundreds of thousands of Holocaust victims that the rest of the world was content to let die.

Although many people have heard of Stallman’s alleged fanaticism (that is, an insistence on the difficult but necessary steps to preserve privacy and freedom), I think that very few people understand his pragmatism. I’ve encountered it a few times while working with Stallman on various policy issues. The first time, he was still living in his office at MIT, where I came in to strategize about how to work on a copyright issue where the Boston chapter of Computer Professionals for Social Responsibility was trying to get Representative Barney Frank’s support.

At the GPL 3 conference, members of the audience argued with Stallman, mistaking his pragmatism for inconsistency. (So you see, you can’t win. If you’re not dismissed for being a fanatic, you’re attacked for your inconsistencies.) I’ll illustrate his pragmatism through a conversation I overheard at a FISL conference in Brazil we both attended.

The FISL organizers tried to make logistics easy for attendees, housing us as much as they could in a single hotel and providing a bus to take us between the hotel and the conference venue. Unfortunately, this bus service was unreliable. One morning, I found myself waiting for an errant bus with Stallman and two American attendees who worked for the KDE project, a free desktop for Linux and Unix systems. We decided to share a cab, which for reasons I can’t remember I ended up paying for. I sat in the front, while Stallman shared the back with the KDE developers.

The developers complained to Stallman that many companies were building non-free applications on top of KDE, and were going on the KDE mailing lists to request advice. The KDE developers found that offensive and wanted to tell all the KDE supporters to refuse advice to anyone developing a proprietary app.

Whether that policy would be feasible didn’t come up. But Stallman advised against cutting off the proprietary companies. He said that having more and more applications run on KDE, including non-free ones, would make KDE more useful and hence promote their mission of providing a free desktop. So they should be generous and answer the companies’ questions. I consider this a victory of pragmatism over ideology, and one of several examples to refute the claim that Stallman is ideological. (In his review of this chapter, Stallman pointed out that he still wouldn’t advise the use of proprietary software—he just saw an advantage to helping the developers master the free platform.)

The FSF’s work over the years has increasingly taken on privacy as well as freedom. Their call to computer and Internet users to refrain totally from proprietary hardware and software is too tough for me to follow, certainly. I make heavy use of the cost-free services that Google and other sites offer in exchange for snooping on me. But now everyone recognizes the intrusions that corporations have made on us through digital surveillance and manipulation, vindicating the FSF.

I am lucky to live in the same metropolitan area as the FSF office. In fact, it is located only a few blocks from the Downtown Crossing office where O’Reilly was located during the final years of my work there. I have gone down to the FSF a couple times a year to stuff envelopes or to get volunteer training for their LibrePlanet conference. And I have attended LibrePlanet each year, sometimes as a speaker.

Toward the end of my tenure at O’Reilly in 2020, I came in to the FSF for volunteer work and found the office closed. I met Stallman in the lobby; he too had an errand in the office and was also surprised that no one was in. A third person came, hoping like me to volunteer. Eventually we reached one of the staff by (non-free) phone and were told that they had unexpectedly decided to go out to lunch together. So Stallman asked us if we’d like to have our own little lunch with him, and we jumped at the chance.

Accepting an invitation to lunch was not a casual decision; it required clear-headed principles. Just a few months earlier, Stallman had been caught up in one of those relentless recriminations that take over during highly publicized controversies. He had made a comment about the victims of billionaire Jeffrey Epstein, which, unlike his decades of sage advice about freedom and privacy, had popped up on the screen of some zealot for the cause of gender justice. We have already seen that Stallman expresses very nuanced views and is often misunderstood even by supporters—so this issue seemed guaranteed to turn into a noxious sore. Complaints by people who showed no understanding of what he said were picked up by the press and spread further. One would hope that journalists would go back to Stallman’s original words and try to explain the truth, but that didn’t happen. Ultimately, to save the Free Software Foundation from a backlash, he resigned as head of the organization he had founded and inspired.

To be honest, I had seen over the years that Stallman entered into controversies without always understanding the feelings he stirred up. In his review of this memoir, he told me the background behind some of the controversies and persuasively explained the importance of his stance. Still, I expected that eventually he would back away from some of his public roles. What disgusted me was to see it happen over a debased rumor.

Luckily, he was still active in many organizations and remained head of the GNU project, which had pushed free software to prominence in the 1980s and 1990s before the mainstream press noticed it. In fact, Stallman had come to the FSF office today to pick up things he needed for a business trip to Europe. I joked that maybe he could hitch a ride on the carbon-neutral boat that climate activist Greta Thunberg was taking at the time. He responded quite seriously that because he has no children, he creates a tiny carbon footprint in comparison to anyone who has them. Incidentally, another customer in the restaurant introduced himself and asked to have his photo taken with Stallman, so clearly there are people out in the world who still recall his contributions.

One of Stallman’s visionary acts was his 1997 story “The Right to Read”, which, in addition to being quite a well-crafted work of fiction, lays out a chilling scenario that was completely fantastical at the time but has since become an everyday reality. In this story, Stallman imagined critical educational content locked up on encrypted servers, so that students had to go into great debt just to get their educations. Of course, this is precisely where the educational field is now. Stallman was truly prophetic.

Even though I work for a publisher, I’m disgusted with the business of textbooks today. They were a trivial part of my educational expenses back in the 1970s, but now constitute a heavy investment for students every semester. I accept that many of these texts are worth the hundreds of dollars they cost. They are painstakingly written by experts, carefully edited, and enhanced with lavish production values. Many come with web sites whose content is probably valuable. Yet they are financially burdensome.

The right way to create textbooks is this: Educational institutions around the world should band together and set aside funds to employ authors and publishers in the creation of content that is offered for free. The authors and publishers would still be amply compensated, but no one would be denied an education for financial reasons.

But simply in laying out this proposal, I can see why it would not be adopted. The educational institutions have woven access to textbooks into their own business models. If all the content were openly available to the public, they would have to prove that their professors added value and would have to compete on the quality of the classroom experience. I know lots of great professors, but that pressure might be hard on some.

So where do the charges of fanaticism against free software advocates come from? Partly from the strictures of language, which can be the most liberating or the most controlling aspect of human culture.

Like all marginalized and historically downtrodden sectors, free software proponents need to fight norms so entrenched that they usually go unrecognized. Take racism, which is so normative that it requires constant tagging and critique by people of color and their allies. And “people of color” itself is an unfortunate compromise with racist reality, because without racism a person from Cambodia would have no reason to feel a special bond with a person from Nigeria.

Language is a central part of norms. It is almost impossible to talk about policy in areas of computing and networking without encountering terms that reinforce oppressive norms, such as “piracy”.

I launched a flank attack in 2004 against the use of the term “piracy” to describe the unauthorized distribution of copyrighted content. I discovered that, for all their wanton violence, pirates in the so-called golden age of piracy represented a revolt against oppressive seafaring conditions and conducted themselves with a breadth of democracy that was unusual in their time. My article was thrown away along with most of the blog postings of that period when the O’Reilly web staff revamped their site, but many years later I managed to retrieve the text because, on a lucky impulse, I had posted the full article to Dave Farber’s “interesting people” mailing list, and it was archived there.

The vast majority of software that technically qualifies as “free” also qualifies as “open source”, and vice versa. More specifically, virtually all the licenses approved by the Free Software Foundation are approved by the Open Source Initiative, and vice versa. The terms carry different values whose philosophical, business, and semantic issues have been exhaustively explained elsewhere. So I’ll just mention my own approach to using the terms.

I was happy to adopt the term “open source” after Tim O’Reilly held a historic summit suggesting it as an alternative to “free software”. One simple reason for the change is that the term “free” in English is ambiguous, covering concepts expressed separately by words such as “libre” and “gratis” in Romance languages. Many free software advocates use the term “libre” in English, but that in turn requires explanations to listeners.

I developed a new depth of appreciation for the term “free software” after hearing a lecture by a researcher from the American Association for the Advancement of Science. He told us of his forensic work uncovering massacre sites from the wars following the split-up of the former Yugoslavia. This researcher said that the use of free software to process and present his results was critical, because only if the source code was a public asset would listeners accept his dangerously controversial and provocative findings. I decided that freedom was truly important and should be emphasized in the celebration of this software.

So now I prefer the term “free”, but still use “open source” where “free” might confuse or bias readers. In most articles, I start with a reference to “free and open source software” and then choose one or the other for subsequent references.

In short, free software developers take words seriously. The Free Software Foundation maintains a long web page detailing how to discuss software, licensing, and other aspects of the computer field. Outsiders may think of this obsession with correct wording as the ravings of an isolated minority. But isolated minorities have been persecuted throughout history through the manipulation of language, so they must combat the everyday use of words in order to combat the norms that oppress them. Recently, for instance, transgender people have labored hard to unpack the language that confines them. The same goes for free software advocates, who operate in a legal and social terrain dominated by powerful institutions whose intentions toward their movement can be hostile.

Open wallets for open source software (early 1990s)

O’Reilly’s whole-hearted adoption of free software in the 1990s placed us at the center of the social impacts of computing. But it also heralded commercial success, because we were often the figurehead leading each ship of the open source Armada into the uncharted future.

In the 1990s, we crept forward on several fronts, not initially seeing the common thread that ran through our efforts. We ran highly successful Open Source conventions and created new series for Linux and the MySQL database. I attended each Open Source convention, blogging incessantly, and edited almost all the books in the Linux and MySQL series. The marketing person assigned to these books, Betsy Waliszewski, communicated with me daily and recognized along with me that there was a strategic opportunity.

Together, we created a marketing strategy for O’Reilly in free and open source software. There is nothing new, of course, about a marketing strategy, but ours was unusual in several ways.

First, the field of free and open source software is immense, diverse, and globally distributed. It’s one thing to create a strategy for a strictly delimited domain such as the Oracle database or even a field such as security. It’s an entirely different endeavor to figure out how open source will affect markets and the next several years of technological progress. We had to be in strong sympathy with the leaders and creators in the various open source fields.

And man, were there free software projects! Tracking all of them was beyond the capabilities of any single person, but we talked to a lot of community members with their fingers on new developments.

We were actually lucky to be doing this work at that time, because the number of free software projects has multiplied even more over the years. A few facile explanations tend to be offered: Free software is cool, companies recognize the benefits of releasing software under free licenses, developers would rather share software freely than deal with the headaches of commercialization, releasing a package as free allows it to tie in with a powerful ecosystem of other free software packages, and so on. Going beyond these observations, I think there has been an explosion of new software in general because new high-level languages make development much faster, and robust test strategies bring high quality within reach of modestly funded teams. These benefits, in turn, are driven by the decreasing cost of high-speed computer hardware.

A second unusual aspect of the strategizing that Waliszewski and I dived into was our commitment to supporting a social movement, not just our own company. We knew that promoting free software to the larger society, and supporting the attempts of free software communities to promulgate their technical information, would benefit O’Reilly as well as the movement itself and society. Yes, we were idealists, and we were right. Other businesses in the free software space maintained a similar balance.

We had an important mission at O’Reilly, because it was hard for free software communities to articulate the implications of their work for the larger society. Some free and open source developers, along with media-savvy supporters, were advocating for the movement. But the business community was not helping much. Companies that built and promoted free software were small and relatively unknown. (Even Red Hat was once small.) At O’Reilly, where communication was the company’s fundamental mission, we played an outsized role in promoting free software. Within a few years, major companies such as IBM and Oracle—eventually even Microsoft, which originally embodied everything free software communities hated—would adopt open source and trumpet their support for it. Academics and governments would also come along and discuss the meaning of freedom and openness in software. Waliszewski and I, with support from the broader O’Reilly company, leapt into the arena and carried the torch for free software before most of the world noticed it.

This was an electrifying time for me. Waliszewski and I kept our feet down on the pedals of a long-distance race to make sense of free software and respond to the community’s documentation needs. Our views aligned. Our books sold well, but we also received the accolades of those who needed our support through conferences, documentation, and my recurring blog postings.

Many companies chose to present new products and services at the Open Source convention—particularly after the closing of the major Linux convention, the LinuxWorld conference held by IDG annually at the Yerba Buena Center in San Francisco—and many sought me out to write up their announcements.

Free software needed a lot of vocal support in the 1990s. It was still the victim of simplistic, biased impressions—and not just among those who dismissed it, but also among those who embraced it.

The arguments dismissing free software were all too well-known and tedious. Potential users were reluctant to adopt free software because they assumed it would be low-quality, oblivious to the reality that the old tie between cost and quality has not only been severed but rendered meaningless in the open source world. These opponents were also afraid that open source software lacked support, even though there are people around the world who are expert in these projects and eager to offer their services, precisely because the software is freely available.

But I’ve had to recognize over the years that the free software movement nestled into some myths of its own—at least in the 1990s. Maybe the movement has wised up recently, as the harsh realities of maintaining a community-driven project have become clear. There is no clear way to fund free software if you want it done well and kept current with changes in its environment.

Business models for free software have never been well understood. The creation of free software is clearly a powerful movement, and many industries tilt toward freedom as they mature. That’s why, as I mentioned at the beginning of this chapter, many free software advocates assure us that the entire field of programming will end up where it began: as free software, which is what it was before companies started selling it in proprietary fashion.

But I have not seen anyone explain a coherent and comprehensive model for funding free software, a problem replicated in free culture. Eric Raymond, in his classic book The Cathedral & the Bazaar, laid out in 2001 some five ways to earn money from free software, but none of them can be found in use today.

At the beginning, Richard Stallman held no expectations that free software could pay as well as commercial software. He seemed to think developers would accept lower rewards for releasing free software simply because it was the right thing to do, as documented in Christopher Tozzi’s book, For Fun and Profit: A History of the Free and Open Source Software Revolution. And this was potentially reasonable, because salaries for software developers were hefty, and one could contemplate taking a pay cut in order to do the right thing. But others, such as Raymond, thought that free software would somehow pay for itself.

Many companies profess an “open core” strategy, which involves a basic package of free software surrounded by proprietary extensions. It’s an application of the “freemium” model described by Chris Anderson in WIRED magazine many years ago. The freemium model works for many companies, and was a part of O’Reilly’s strategy for decades. Although most of their content was for sale, they enticed potential customers to their site with free articles, and I spent much of my time writing them.

Open core makes sense in the abstract, but in practice rarely works. The open part appeals only to expert hackers who can get it up and running. If the core is really worth using, consultants outside the company can build on it and compete directly with the original company, as Monty Widenius’s MariaDB and Percona’s MySQL-based services compete with Oracle for its MySQL base. The MySQL ecosystem seems to be doing well, with all these companies tolerating each other’s presence and collaborating on the core. But other open core companies fail to generate excitement.

Most free software seems rather to come from a trend I labeled “closed core” in a 2011 article. Here, a company offering proprietary Software as a Service, or some other product such as hardware, produces auxiliary software for non-core functions like performance monitoring or administration, and releases that software under a free license. By offering it for free, they develop an ecosystem of expertise and outside contributions.

Breaking economic incentives: Free licenses (1990s)

Although I already mentioned O’Reilly’s brief fling with the documentation produced by free software teams—what I call community documentation—these writings have actually been intertwined with O’Reilly’s publishing work from very early in the company’s history. Their first big money-maker, the X Window System series, gobbled up material from the free documentation produced at MIT. And their biggest best-seller, Programming Perl, was written and updated in tandem with the Perl documentation. I myself had some success taking a free book from the Linux community, and producing a couple of books from scratch that we published under free licenses.

These successes were uncharacteristic. Rarely does free documentation rise to the level where it’s worth publishing commercially. If you dispute this criticism and point to books that moved from the free domain to the commercial domain, I’ll narrow my assertion to this: Documentation rarely rises to the level where O’Reilly would publish it. I’ll start with two examples of failure.

In the 1990s, when Perl was still king of the programming languages (some would call it a usurper) and our series was flying off the shelves, one of our editors cast an eye on the huge bramble of Perl libraries providing important programming functions for all kinds of tasks. Because the choice of libraries was so copious and their use so complex, the editors got the idea of tidying up the free documentation and releasing it under our brand, as we had done with the X Window System documentation. The hope was that a small investment in copy-editing would lend the material high enough quality to publish.

But the Perl documentation fell apart under this treatment, like rotting wood that one tries to nail up into a permanent structure. A small battalion of copy-editors jumped in to apply the clean-up techniques they had used with great success on drafts developed under the keen control of O’Reilly developmental editors. But the copy-editors found over and over again that their ministrations weren’t working, because the source text was beyond repair. The ambiguities, missing facts, and sheer nonsense in the texts doomed the project.

Another stab at putting out community documentation sprang from a similar motivation: that of rounding out a successful series with a body of material too large and arcane for us to tackle in traditional publishing fashion. In this case, the series covered the MySQL database. After we formed a friendly relationship with the MySQL company, and collaborated on conferences with them, someone suggested we print their documentation as a reference manual. The material was in reasonable shape, unlike the earlier Perl documentation, but there was no compelling reason for people to shell out money for a printed version of the documentation, and it never sold an appreciable number of copies.

Now for a success story, one that came near the beginning of our Linux work. While seeking authors for this brand-new technology, I discovered a complete manual on networking by a GNU/Linux enthusiast named Olaf Kirch. A German who had never lived in an English-speaking country, Kirch wrote English that was idiomatic, highly informed, and even graced with humor. I fell in love with his free Linux Network Administrator’s Guide and offered to edit and publish it, while leaving it under an open license that would let anybody republish it.

This was an expeditious way to launch an O’Reilly GNU/Linux series, and we made a good amount of money from the guide. But our experience also highlighted the risks of producing free and open books, something so many people—Cory Doctorow and Bradley Kuhn come to mind—have urged publishers to do.

While our book was in production, somebody scooped us by releasing a cheap edition with the exact text Kirch and I had carefully edited. When a marketing person at O’Reilly complained to me, I mumbled that our edition would have higher quality and would therefore push the interloper out of the market. But when I finally received my copy of our edition, I was horrified to find that the production team had omitted the first two paragraphs of the book. We sold enough copies to correct that error in the next printing, but I was embarrassed to have boasted of our high quality. It seems almost fated that this unique and egregious error would occur on this particular book, while it faced off against a direct competitor.

In the end, the competing edition disappeared, but not because of our edition’s quality—just because ours benefitted from O’Reilly’s marketing channels and reputation.

In general, I don’t think that publishing documentation developed by a community is a winning business plan. O’Reilly has always done well by writing excellent documentation where the community’s material was insufficient.

When I found an author to write one of our early Linux books, this time on the much more complex topic of how to code up a device driver, he asked for the book to be released under a free license. Subsequent authors have insisted that we honor this clause in the contract. (Because Linux Device Drivers was very successful for a long time, managers tried to scale back our commitment to the free license, and to offer merely an online PDF with no rights to alter and redistribute the book.) I never noticed anyone trying to upstage us with their own edition, the way someone briefly did on the Linux Network Administrator’s Guide.

A final experiment with free documentation, this one also successful, came with our guide Using Samba. The free software in question was quickly becoming part of core network infrastructure. To explain the high stakes behind this book, I have to offer again a bit of computer history. You may find it rewarding as a behind-the-scenes glimpse at how computers work in most of our homes and offices.

As computers became more and more connected during the 1980s, companies tried various schemes to make people in an organization feel like they were all connected in one gigantic system. The benighted Open Software Foundation and its DCE, which I have described elsewhere, revolved around this ideal. In particular, manufacturers wanted to give people access to files on their coworkers’ local systems. Sun Microsystems had developed the widely adopted Network File System to do this, but it had design flaws and suffered from security weaknesses. So when Microsoft came to an understanding of networks and realized they needed local file-sharing, they chose a different technology called Server Message Block (SMB).

An administrator in Australia named Andrew Tridgell, needing to share files between his GNU/Linux systems and Microsoft systems, dug into SMB and decided to make file-sharing work through the extremely ambitious process of reverse engineering the protocol. He created a piece of free software called Samba that implemented a good chunk of SMB, and continued to chase upgrades and new features as Microsoft introduced them.

Samba was so valuable that Apple incorporated it into their Macintosh systems, which could make use of Samba because they were based on a form of Unix. In an updated form known as the Common Internet File System (CIFS), the protocols used by Microsoft technology and Tridgell’s matching free software are still the dominant way to share files on local networks.
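
To give a taste of what Samba administrators actually touch, here is a minimal sketch of a share definition in Samba’s smb.conf configuration file. The directory, account, and share names are invented for illustration:

    [global]
       workgroup = OFFICE       ; Windows workgroup to join
       security = user          ; require a username and password from clients

    [docs]
       path = /srv/samba/docs   ; directory on the GNU/Linux host to expose
       read only = no           ; let clients write as well as read
       valid users = olaf       ; hypothetical account allowed to connect

A GNU/Linux client can then mount the share with something like the following (again, the server and share names are made up):

    sudo mount -t cifs //fileserver/docs /mnt/docs -o username=olaf

Windows machines simply see the share as another networked folder, which is exactly the interoperability Tridgell was after.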

I saw early on that Samba deserved a book, and found a good author. By the time we had finished the book and gone into production, however, other publishers also had books in the works.

Encountering Tridgell at a conference, I asked him to write a foreword for our book, a form of endorsement. He in turn made an offhand comment that could be taken as nothing except an offer of a deal. “There are five books coming out soon about Samba,” he said. “I wish one of the publishers would put one out under a free license so that I could endorse it.”

I snapped up the offer instantly. But I had to persuade O’Reilly management that it would benefit us to gain Tridgell’s stamp of approval, at the cost of allowing other people to update and release our book. We held a high-level summit of a dozen or so editors, run by Tim O’Reilly in our Sherman Street office. Even though we had already enjoyed success with the Linux Network Administrator’s Guide, the going was tough. All the customary fears of opening up our own precious material to the public got tossed around for a long time.

But I had a powerful ally in this meeting: Mark Stone, who had recently joined us and was highly respected for his understanding of the free software community as well as the publishing industry. He calmed everybody down, indicated his unstinting support for my plan, and pushed it through. Our book Using Samba made us a mint and went through several editions. And no one put out a competing edition based on our text.

☞ Birds of a feather: Conferences