Note: This post was written before the world was shut down by the coronavirus. I’ve since written a must-read post that is complementary to this one entitled: “41 Ways the Coronavirus Pandemic is a Watershed Moment That Argues for You Starting Your Own, Online ‘Lifestyle’ Digital Information Business.”
The following excerpts are from the book “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations” [affiliate link], by Thomas Friedman, the three-time Pulitzer Prize winner. The excerpts illustrate just how fundamentally our lives have been changed forever, and “we ain’t seen nothin’ yet.” And it all impacts how we think about and go about making a living.
That “Thank You For Being Late” reference in the book title is Friedman’s way of thanking all the people who have ever arrived late for meetings with him because – while he’s waiting with nothing else to do – he does his best thinking about big ideas like those that make up this book.
Technology has fundamentally changed the world in every way, and it will never be the same again. I will illustrate just how far and deep the implications run in every aspect of life.
Intel engineers did a rough calculation of what would happen had a 1971 Volkswagen Beetle improved at the same rate as microchips did under Moore’s law (the number of transistors on a chip doubles roughly every two years). Today, the Beetle would be able to:
• Go about 300,000 miles per hour
• Get two million miles per gallon of gas
• Cost four cents
If fuel efficiency had improved at the same rate as Moore’s law, you could, roughly speaking, drive a car your whole life on one tank of gasoline.
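Moore’s-law compounding is easy to sketch, even if it is hard to intuit. The few lines below are my own illustration of the arithmetic, not Intel’s exact methodology: just apply a doubling every two years across the Beetle’s roughly 44-year span.

```python
# A toy illustration of Moore's-law compounding (my own arithmetic,
# not Intel's exact methodology): a quantity that doubles every two years.
def moores_law_factor(years, doubling_period=2):
    """Growth factor after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Roughly the span Intel's engineers had in mind: 1971 to the mid-2010s.
factor = moores_law_factor(44)
print(f"{factor:,.0f}x improvement over 44 years")
```

Twenty-two doublings already multiply the starting point by more than four million, which is why the numbers above stop sounding like engineering and start sounding like fantasy.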
Technology has always moved up in step changes. All the elements of computing power— processing chips, software, storage chips, networking, and sensors— tend to move forward roughly as a group. As their improving capacities reach a certain point, they tend to meld together into a platform, and that platform scales a new set of capabilities, which becomes the new normal.
As we went from mainframes to desktops to laptops to smartphones with mobile applications, each generation of technology got easier and more natural for people to use than the one before. When the first mainframe computers came out, you needed to have a computer science degree to use them. Today’s smartphone can be accessed by young children and the illiterate.
As step changes in technology go, though, the platform (iPhone) birthed around the year 2007 surely constituted one of the greatest leaps forward in history. It suffused a new set of capabilities to connect, collaborate, and create throughout every aspect of life, commerce, and government.
Suddenly there were so many more things that could be digitized, so much more storage to hold all that digital data, so many faster computers and so much more innovative software that could process that data for insights, and so many more organizations and people (from the biggest multinationals to the smallest Indian farmers) who could access those insights, or contribute to them, anywhere in the world through their handheld computers— their smartphones.
That is the power of exponential change. When you keep doubling something for fifty years, you start to get to some very big numbers, and eventually you start to see some very funky things that you have never seen before. Erik Brynjolfsson and Andrew McAfee, the authors of “The Second Machine Age,” argued that Moore’s law has just entered the “second half of the chessboard,” where the doubling has gotten so big and fast that we’re starting to see stuff that is fundamentally different in power and capability from anything we have seen before— self-driving cars, computers that can think on their own and beat any human in chess or Jeopardy! or even Go, a 2,500-year-old board game considered vastly more complicated than chess. That is what happens “when the rate of change and the acceleration of the rate of change both increase at the same time,” said McAfee, and “we haven’t seen anything yet!” Because there are also two other giant forces: accelerations in the Market and in Mother Nature.
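The “second half of the chessboard” image comes from the old parable of doubling grains of rice on each square of a chessboard. A few lines of Python (my own illustration) show why the second half is so different from the first:

```python
# The "second half of the chessboard" parable: one grain of rice on the
# first square, doubling on every square after it.
grains = [2 ** square for square in range(64)]

first_half = sum(grains[:32])   # squares 1-32: about 4.3 billion grains
second_half = sum(grains[32:])  # squares 33-64: about 18 quintillion grains

print(f"first half:  {first_half:,}")
print(f"second half: {second_half:,}")
print(f"the second half holds {second_half // first_half:,}x more")
```

The entire first half of the board together holds fewer grains than square 33 alone, which is the point: on the second half, each step dwarfs everything that came before it.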
“The Market” is my shorthand term for the acceleration of globalization. That is, global flows of commerce, finance, credit, social networks, and connectivity generally are weaving markets, media, central banks, companies, schools, communities, and individuals more tightly together than ever. The resulting flows of information and knowledge are making the world not only interconnected and hyperconnected but interdependent — everyone everywhere is now more vulnerable to the actions of anyone anywhere.
Craig Mundie, a supercomputer designer and former chief of strategy and research at Microsoft, defines this moment in simple physics terms: “In the world we are in now, acceleration seems to be increasing. [That means] you don’t just move to a higher speed of change. The rate of change also gets faster. And when the rate of change eventually exceeds the ability to adapt you get ‘dislocation.’ ‘Disruption’ is what happens when someone does something clever that makes you or your company look obsolete. ‘Dislocation’ is when the whole environment is being altered so quickly that everyone starts to feel they can’t keep up.”
(And now for the most important insight about the human experience that either you, your parents or your grandparents have ever witnessed in their combined lifetimes. -Terry)
That is what is happening now. “The world is not just rapidly changing,” adds Dov Seidman, “it is being dramatically reshaped— it is starting to operate differently” in many realms all at once. And this reshaping is happening faster than we have yet been able to reshape ourselves, our leadership, our institutions, our societies, and our ethical choices.
Indeed, there is a mismatch between the change in the pace of change and our ability to develop the learning systems, training systems, management systems, social safety nets, and government regulations that would enable citizens to get the most out of these accelerations and cushion their worst impacts. This mismatch, as we will see, is at the center of much of the turmoil roiling politics and society in both developed and developing countries today.
Astro Teller, who runs X, Google’s “moonshot factory,” wasted no time before launching into an explanation of how the accelerations in Moore’s law and in the flow of ideas are together causing an increase in the pace of change that is challenging the ability of human beings to adapt. At first, technology accelerates very gradually, then it starts to slope higher upward as innovations build on innovations that have come before, and then it starts to soar straight to the sky.
Think of the introduction of the printing press, the telegraph, the manual typewriter, the Telex, the mainframe computer, the first word processors, the PC, the Internet, the laptop, the mobile phone, search, mobile apps, big data, virtual reality, human-genome sequencing, artificial intelligence, and the self-driving car.
A thousand years ago, Teller explained, that curve representing scientific and technological progress rose so gradually that it could take one hundred years for the world to look and feel dramatically different. For instance, it took centuries for the longbow to go from development into military use in Europe in the late thirteenth century. If you lived in the twelfth century, your basic life was not all that different from if you lived in the eleventh century. And whatever changes were being introduced in major towns in Europe or Asia took forever to reach the countryside, let alone the far reaches of Africa or South America. Nothing scaled globally all at once.
But by 1900, Teller noted, this process of technological and scientific change “started to speed up” and the curve started to accelerate upward. “That’s because technology stands on its own shoulders— each generation of invention stands on the inventions that have come before,” said Teller.
“So by 1900, it was taking twenty to thirty years for technology to take one step big enough that the world became uncomfortably different. Think of the introduction of the car and the airplane.” Then the slope of the curve started to go almost straight up and off the graph with the convergence of mobile devices, broadband connectivity, and cloud computing (which we will discuss shortly). These developments diffused the tools of innovation to many more people on the planet, enabling them to drive change farther, faster, and more cheaply.
“Now, in 2016,” he added, “that time window— having continued to shrink as each technology stood on the shoulders of past technologies — has become so short that it’s on the order of five to seven years from the time something is introduced to being ubiquitous and the world being uncomfortably changed.”
This is a real problem. When fast gets really fast, being slower to adapt makes you really slow— and disoriented. It is as if we were all on one of those airport moving sidewalks that was going around five miles an hour and suddenly it sped up to twenty-five miles an hour— even as everything else around it stayed roughly the same. That is really disorienting for a lot of people.
If the technology platform for society can now turn over in five to seven years, but it takes ten to fifteen years to adapt to it, Teller explained, “We will all feel out of control, because we can’t adapt to the world as fast as it’s changing. By the time we get used to the change, that won’t even be the prevailing change anymore— we’ll be on to some new change.”
That is dizzying for many people, because they hear about advances such as robotic surgery, gene editing, cloning, or artificial intelligence, but have no idea where these developments will take us. “None of us have the capacity to deeply comprehend more than one of these fields— the sum of human knowledge has far outstripped any single individual’s capacity to learn— and even the experts in these fields can’t predict what will happen in the next decade or century,” said Teller. “Without clear knowledge of the future potential or future unintended negative consequences of new technologies, it is nearly impossible to draft regulations that will promote important advances— while still protecting ourselves from every bad side effect.”
In other words, if it is true that it now takes us ten to fifteen years to understand a new technology and then build out new laws and regulations to safeguard society, how do we regulate when the technology has come and gone in five to seven years? This is a problem.
Another big challenge is the way we educate our population. We go to school for twelve or more years during our childhoods and early adulthoods, and then we’re done. But when the pace of change gets this fast, the only way to retain a lifelong working capacity is to engage in lifelong learning.
All of these are signs “that our societal structures are failing to keep pace with the rate of change,” he said. Everything feels like it’s in constant catch-up mode. What to do? We certainly don’t want to slow down technological progress or abandon regulation. The only adequate response, said Teller, “is that we try to increase our society’s ability to adapt.” That is the only way to release us from the society-wide anxiety around tech. “We can either push back against technological advances,” argued Teller, “or we can acknowledge that humanity has a new challenge: we must rewire our societal tools and institutions so that they will enable us to keep pace.
The first option— trying to slow technology— may seem like the easiest solution to our discomfort with change, but humanity is facing some catastrophic environmental problems of its own making, and burying our heads in the sand won’t end well. Most of the solutions to the big problems in the world will come from scientific progress.”
Enhancing humanity’s adaptability, argued Teller, is 90 percent about “optimizing for learning”— applying features that drive technological innovation to our culture and social structures. Every institution, whether it is the patent office, which has improved a lot in recent years, or any other major government regulatory body, has to keep getting more agile— it has to be willing to experiment quickly and learn from mistakes. Rather than expecting new regulations to last for decades, it should continuously reevaluate the ways in which they serve society.
Universities are now experimenting with turning over their curriculum much faster and more often to keep up with the change in the pace of change— putting a “use-by date” on certain courses. Government regulators need to take a similar approach. They need to be as innovative as the innovators. They need to operate at the speed of Moore’s law.
One of X’s mottos is “Fail fast.” Teller tells his teams: “I don’t care how much progress you make this month; my job is to cause your rate of improvement to increase— how do we make the same mistake in half the time for half the money?” In sum, said Teller, what we are experiencing today, with shorter and shorter innovation cycles, and less and less time to learn to adapt, “is the difference between a constant state of destabilization versus occasional destabilization.” The time of static stability has passed us by, he added. That does not mean we can’t have a new kind of stability, “but the new kind of stability has to be dynamic stability. There are some ways of being, like riding a bicycle, where you cannot stand still, but once you are moving it is actually easier. It is not our natural state. But humanity has to learn to exist in this state.”
“The feedback loop is so short now,” explained GE’s Luana Iorio, that “in a couple days you can have a concept, the design of the part, you get it made, you get it back and test whether it is valid” and “within a week you have it produced … It is getting us both better performance and speed.” In the past, performance worked against speed: the more tests you did to get that optimal performance, the longer it took. What only a few years earlier had taken two years was being reduced to a week. That is the amplified power of machines.
Then, summing up all that was new, Iorio told me that today, “Complexity is free.” I said to her: “What did you say?” “Complexity is free,” she repeated.
(Note from Terry: Really think about the implications of that statement for a few moments. To what extent does life on Earth change if even really complex things are not only not difficult to create any more, but also FREE!)
I thought that was a real insight. I never forgot it. But only in writing this book did I fully understand the importance of what she’d said. As we’ve noted, over the last fifty years microprocessors, sensors, storage, software, networking, and now mobile devices have been steadily evolving at this accelerating rate. At different stages they coalesce and create what we think of as a platform.
With each new platform, the computing power, bandwidth, and software capabilities all meld together and change the method, cost, or power and speed at which we do things, or pioneer totally new things we can do that we never imagined— and sometimes all of the above. And these leaps are now coming faster and faster, at shorter and shorter intervals.
What happened was that the dot-com boom, bubble, and then bust in that time period unleashed a massive overinvestment in fiber-optic cable to carry broadband Internet. But bubbles are not all bad. The combination of that bubble and its bursting— with the dot-com bust in the year 2000— dramatically brought down the price of voice and data connectivity and led, quite unexpectedly, to the wiring of the world to a greater degree than ever before. The price of bandwidth connectivity declined so much that suddenly a U.S. company could treat a company in Bangalore, India, almost as if it were its own back office.
To put it another way, all of these breakthroughs around 2000 made connectivity fast, free, easy for you, and ubiquitous. Suddenly we could all touch people whom we could never touch before. And suddenly we could be touched by people who could never touch us before. I described that new sensation with these words: “The world is flat.” More people than ever could now compete, connect, and collaborate on more things for less money with greater ease and equality than ever before. The world as we knew it got reshaped.
I think what happened in 2007— with the emergence of the supernova— was yet another huge leap upward onto a new platform. Only this move was biased toward easing complexity. When all the advances in hardware and software melded into the supernova, it vastly expanded the speed and scope at which data could be digitized and stored, the speed at which it could be analyzed and turned into knowledge, and how far and fast it could be distributed from the supernova to anyone, anywhere with a computer or mobile device. The result was that suddenly complexity became fast, free, easy for you, and invisible.
Suddenly, all the complexity that went into getting a taxi, renting someone’s spare bedroom in Australia, designing an engine part, or buying lawn furniture online and having it delivered the same day was abstracted into one touch via applications such as Uber, Airbnb, and Amazon or by innovations in the labs of General Electric. No technology innovation more epitomizes this leap forward than Amazon’s invention of “one-click” checkout from any e-commerce site. As Rejoiner.com, which tracks e-commerce, noted, thanks to its one-click innovation, “Amazon achieves extremely high conversion from its existing customers. Since the customer’s payment and shipping information is already stored on Amazon’s servers, it creates a checkout process that is virtually frictionless.”
If you read Apple’s original announcement of the iPhone in 2007, it was all about how Apple had abstracted away the complexity of so many complex applications, interactions, and operations— from e-mailing, to map searching, to photographing, to phoning, to web surfing— and about how the company had used software to neatly condense so much into one touch on the “iPhone’s remarkable and easy-to-use touch interface.” Or, as Steve Jobs put it at the time: “We are all born with the ultimate pointing device— our fingers— and iPhone uses them to create the most revolutionary user interface since the mouse.”
This brings us to the essence of what really happened between 2000 and 2007: we entered a world where connectivity was fast, free, easy for you, and ubiquitous and handling complexity became fast, free, easy for you, and invisible. Not only could you touch people whom you had never touched before or be touched by them, but you could do all these amazing, complex things with one touch. These developments were powered by the supernova, and when you put them together, computing became so powerful and so cheap and so effortless that it suffused itself “into every device and every aspect of our lives and our society,” said Craig Mundie. “It is making the world not just flat but fast. Fast is a natural evolution of putting all this technology together and then diffusing it everywhere.”
But with the emergence of Yahoo and AOL, billions and billions of bits and bytes of data were piling up on the Web, requiring steadily increasing amounts of storage and computation power to navigate them. So people just started combining computers. If you could combine two computers, you could store twice as much and process twice as fast. With computer memory drives and processors getting cheaper, thanks to Moore’s law, businesses started realizing that they could create football-field-sized buildings stocked with processors and drives from floor to ceiling, known as server farms. But what was missing, said Doug Cutting, the co-creator of Hadoop, was the ability to hook those drives and processors together so they could all work in a coordinated manner to store lots of data and also run computations across the whole body of that data, with all the processors running together in parallel.
The really hard part was reliability. If you have one computer, it might crash once a week, but if you had one thousand it would happen one thousand times more often. So, for all of this to work, you needed a software program that could run the computers together seamlessly and another program to make the giant ocean of data that was created searchable for patterns and insights. Engineers in Silicon Valley like to wryly refer to a problem like this as a SMOP— as in, “We had all the hardware we needed— there was just this Small Matter Of Programming [SMOP] we had to overcome.”
We can all thank Google for coming up with both of those programs in order to scale its search business. Google’s true genius, said Cutting, was “to describe a storage system that made one thousand drives look like one drive, so if any single one failed you didn’t notice,” along with a software package for processing all that data they were storing in order to make it useful. Google had to develop these itself, because at the time there was no commercial technology capable of addressing its ambitions to store, process, and search all the world’s information. In other words, Google had to innovate in order to build the search engine it felt the world wanted. But it used these programs exclusively to operate its own business and did not release the code itself; it did, however, publish papers describing how the systems worked.
“Google described a way to easily harness lots of affordable computers,” said Cutting. “They did not give us the running source code, but they gave us enough information that a skilled person could reimplement it and maybe improve on it.” And that is precisely what Hadoop did. Its algorithms made hundreds of thousands of computers act like one giant computer. So anyone could just go out and buy commodity hardware in bulk and storage in bulk, run it all on Hadoop, and presto, do computation in bulk that produced really fine-grained insights.
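The pattern Cutting describes, split the data into chunks, process the chunks in parallel, then merge the partial results, can be sketched in miniature. The toy word count below is my own illustration of the MapReduce idea behind Hadoop, shrunk from thousands of machines to a thread pool inside one process:

```python
from collections import Counter
from multiprocessing.dummy import Pool  # a thread pool; real Hadoop spans machines

def map_chunk(chunk):
    """The "map" step: count words in one slice of the data."""
    return Counter(chunk.split())

def word_count(documents, workers=4):
    """Map the chunks in parallel, then reduce the partial counts into one."""
    with Pool(workers) as pool:
        partials = pool.map(map_chunk, documents)
    total = Counter()
    for partial in partials:  # the "reduce" step
        total.update(partial)
    return total

counts = word_count(["big data big", "data tools", "big tools tools"])
print(dict(counts))
```

The key design idea is that `map_chunk` never needs to see the whole dataset, so the work can be spread across as many workers, or machines, as you can afford.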
Soon enough, Facebook and Twitter and LinkedIn all started building on Hadoop. And that’s why they all emerged together in 2007! It made perfect sense. They had big amounts of data streaming through their business, but they knew that they were not making the best use of it. They couldn’t. They had the money to buy hard drives for storage, but not the tools to get the most out of those hard drives, explained Cutting. Yahoo and Google wanted to capture Web pages and analyze them so people could search them— a valuable goal— but search became even more effective when companies such as Yahoo or LinkedIn or Facebook could see and store every click made on a Web page, to understand exactly what users were doing. Clicks could already be recorded, but until Hadoop came along no one besides Google could do much with the data.
“With Hadoop they could store all that data in one place and sort it by user and by time and all of a sudden they could see what every user was doing over time,” said Cutting. “They could learn what part of a site was leading people to another. Yahoo would log not only when you clicked on a page but also everything on that page that could be clicked on. Then they could see what you did click on and did not click on but skipped, depending on what it said and depending on where it was on the page. Hadoop let people outside of Google realize and experience that, and that then inspired them to write more programs around Hadoop and start this virtuous escalation of capabilities.”
So now you have Google’s system, which is a proprietary closed-source system that runs only in Google’s data centers and that people use for everything from basic search to facial identification, spelling correction, translation, and image recognition, and you have Hadoop’s system, which is open source and run by everyone else, leveraging millions of cheap servers to do big data analytics. Today tech giants such as IBM and Oracle have standardized on Hadoop and contribute to its open-source community.
And since there is so much less friction on an open-source platform, and so many more minds working on it— compared with a proprietary system— it has expanded lightning fast. Hadoop scaled big data thanks to another critical development as well: the transformation of unstructured data. Before Hadoop, most big companies paid little attention to unstructured data. Instead, they relied on SQL— a computer language that came out of IBM in the seventies and was commercialized by Oracle— to store, manage, and query massive amounts of structured data and spreadsheets. “SQL” stood for “Structured Query Language.” In a structured database the software tells you what each piece of data is. In a bank system it tells you “this is a check,” “this is a transaction,” “this is a balance.” They are all in a structure so the software can quickly find your latest check deposit. Unstructured data was anything you could not query with SQL. Unstructured data was a mess. It meant you just vacuumed up everything out there that you could digitize and store, without any particular structure.
But Hadoop enabled data analysts to search all that unstructured data and find the patterns. This ability to sift mountains of unstructured data, without necessarily knowing what you were looking at, and be able to query it and get answers back and identify patterns was a profound breakthrough. As Cutting put it, Hadoop came along and told users: “Give me your digits structured and unstructured and we will make sense of them. So, for instance, a credit card company like Visa was constantly searching for fraud, and it had software that could query a thirty- or sixty-day window, but it could not afford to go beyond that. Hadoop brought a scale that was not there before.
Once Visa installed Hadoop it could query four or five years and it suddenly found the biggest fraud pattern it ever found by having a longer window. Hadoop enabled the same tools that people already knew how to use to be used at a scale and affordability that did not exist before.” That is why Hadoop is now the main operating system for data analytics supporting both structured and unstructured data.
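The structured world Cutting contrasts with Hadoop is easy to picture in SQL. The sketch below is an illustration only, with made-up table and column names, not Visa’s actual schema; it shows how widening the query window surfaces an anomaly that a short window misses:

```python
import sqlite3

# Structured data in miniature: every row has a declared shape, so SQL can
# query it directly. Table and column names here are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (card_id TEXT, amount REAL, day INTEGER)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [("c1", 25.0, 1), ("c1", 9000.0, 40), ("c2", 30.0, 70)],
)

# A short window (only the most recent days) misses the old anomaly;
# querying the full history catches it.
recent = conn.execute(
    "SELECT COUNT(*) FROM transactions WHERE amount > 1000 AND day >= 60"
).fetchone()[0]
all_time = conn.execute(
    "SELECT COUNT(*) FROM transactions WHERE amount > 1000"
).fetchone()[0]
print(recent, all_time)
```

Hadoop’s contribution was not a new query idea; it was making the long window affordable at a scale where a conventional database could not.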
We used to throw away data because it was too costly to store, especially unstructured data. Now that we can store it all and find patterns in it, everything is worth vacuuming up and saving. “If you look at the quantity of data that people are creating and connecting to and the new software tools for analyzing it— they’re all growing at least exponentially,” said Cutting. Before, small was fast but irrelevant, and big had economies of scale and of efficiency— but was not agile, explained John Donovan of AT&T.
“What if we can now take massive scale and turn it into agility?” he asked. “In the past, with large scale you miss out on agility, personalization, and customization, but big data now allows you all three.” It allows you to go from a million interactions that were impersonal, massive, and unactionable to a million individual solutions, by taking each pile of data and leveraging it, combing it, and defining it with software. This is no small matter. As Sebastian Thrun, the founder of Udacity and one of the pioneers of massive open online courses (MOOCs) when he was a professor at Stanford, observed in an interview in the November/December 2013 issue of Foreign Affairs:
With the advent of digital information, the recording, storage, and dissemination of information has become practically free. The previous time there was such a significant change in the cost structure for the dissemination of information was when the book became popular. Printing was invented in the fifteenth century, became popular a few centuries later, and had a huge impact in that we were able to move cultural knowledge from the human brain into a printed form.
We have the same sort of revolution happening right now, on steroids, and it is affecting every dimension of human life. And we’re just at the end of the beginning. Hadoop came about because Moore’s law made the hardware storage chips cheaper, because Google had the self-confidence to share some of its core insights and to dare the open-source community to see if they could catch up and leapfrog— and because the open-source community, via Hadoop, rose to the challenge. Hadoop’s open-source stack was never a pure clone of Google’s, and by today it has diverged in many creative ways. As Cutting put it: “Ideas are important, but implementations that bring them to the public are just as important. Xerox PARC largely invented the graphical user interface, with windows and a mouse, the networked workstation, laser printing, et cetera.
But it took Apple and Microsoft’s much more marketable implementations for these ideas to change the world.” And that is the story of how Hadoop gave us the big data revolution— with help from Google, which, ironically, is looking to offer its big data tools to the public as a business now that Hadoop has leveraged them to forge this whole new industry. “Google is living a few years in the future,” Cutting concluded, “and they send us letters from the future in these papers, and we are reading them.”
We now take software so much for granted that we forget what it actually does. “What is the business of software?” asks Craig Mundie, who for many years worked alongside Gates as Microsoft’s chief of research and strategy and has been my mentor on all things software and hardware. “Software is this magical thing that takes each emerging form of complexity and abstracts it away. That creates the new baseline that the person looking to solve the next problem just starts with, avoiding the need to master the underlying complexity themselves. You just get to start at that new layer and add your value. Every time you move the baseline up, people invent new stuff, and the compounding effect of that has resulted in software now abstracting complexity everywhere.”
The history of computers and software, explains Mundie, “is really the history of abstracting away more and more complexity through combinations of hardware and software.” What enables application developers to perform that magic are APIs, or application programming interfaces. APIs are the actual programming commands by which computers fulfill your every wish. If you want the application you’re writing to have a “save” button so that when you touch it your file is stored on the flash drive, you create that with a set of APIs— the same with “create file,” “open file,” “send file,” and on and on.
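Mundie’s point about layered abstraction can be sketched in a few lines. In this illustration (the function names are mine, not any real API), a single “save” call hides the raw file I/O beneath it, which is exactly the baseline-raising that software does at every level:

```python
import json
import os
import tempfile

# Layered abstraction in miniature (names are mine, not any real API):
# each layer exposes one simple call and hides the layer beneath it.
def write_bytes(path, data):
    """Lowest layer: raw file I/O."""
    with open(path, "wb") as f:
        f.write(data)

def save_document(path, document):
    """Higher layer: "save" as a single call; serialization is hidden."""
    write_bytes(path, json.dumps(document).encode("utf-8"))

def load_document(path):
    """Companion call: read the file back and reconstruct the document."""
    with open(path, "rb") as f:
        return json.loads(f.read().decode("utf-8"))

path = os.path.join(tempfile.gettempdir(), "demo_doc.json")
save_document(path, {"title": "draft", "words": 120})
print(load_document(path))
```

Whoever calls `save_document` never thinks about bytes or encodings; that complexity has been abstracted away, and the next developer starts from the higher baseline.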
“APIs make possible a sprawling array of Web-service ‘mashups,’ in which developers mix and match APIs from the likes of Google or Facebook or Twitter to create entirely new apps and services,” explains the developer website ReadWrite.com. “In many ways, the widespread availability of APIs for major services is what’s made the modern Web experience possible.
When you search for nearby restaurants in the Yelp app for Android, for instance, it will plot their locations on Google Maps instead of creating its own maps,” by interfacing with the Google Maps API. This type of integration is called “seamless,” explains Mundie, “since the user never notices when software functions are handed from one underlying Web service to another … APIs, layer by layer, hide the complexity of what is being run inside an individual computer— and the transport protocols and messaging formats hide the complexity of melding all of this together horizontally into a network.”
And this vertical stack and these horizontal interconnections create the experiences you enjoy every day on your computer, tablet, or phone. Microsoft’s cloud, Hewlett Packard Enterprise, not to mention the services of Facebook, Twitter, Google, Uber, Airbnb, Skype, Amazon, TripAdvisor, Yelp, Tinder, or NYTimes.com— they are all the product of thousands of vertical and horizontal APIs and protocols running on millions of machines talking back and forth across the network. Software production is accelerating even faster now not only because tools for writing software are improving at an exponential rate.
These tools are also enabling more and more people within and between companies to collaborate to write ever more complex software and API codes to abstract away ever more complex tasks— so now you don’t just have a million smart people writing code, you have a million smart people working together to write all those codes. And that brings us to GitHub, one of today’s most cutting-edge software generators. GitHub is the most popular platform for fostering collaborative efforts to create software. These efforts can take any form— individuals with other individuals, closed groups within companies, or wide-open open source. It has exploded in usage since 2007.
Again, on the assumption that all of us are smarter than one of us, more and more individuals and companies are now relying on the GitHub platform. It enables them to learn quicker by being able to take advantage of the best collaborative software creations that are already out there for any aspect of commerce, and then to build on them with collaborative teams that draw on brainpower both inside and outside of their companies. GitHub today is being used by more than twelve million programmers to write, improve, simplify, store, and share software applications and is growing rapidly— it added a million users between my first interview there in early 2015 and my last in early 2016.
Imagine a place that is a cross between Wikipedia and Amazon— just for software: You go online to the GitHub library and pick out the software that you need right off the shelf— for, say, an inventory management system or a credit card processing system or a human resources management system or a video game engine or a drone-controlling system or a robotic management system. You then download it onto your company’s computer or your own, you adapt it for your specific needs, you or your software engineers improve it in some respects, and then you upload your improvements back into GitHub’s digital library so the next person can use this new, improved version.
Now imagine that the best programmers in the world from everywhere— either working for companies or just looking for a little recognition— are all doing the same thing. You end up with a virtuous cycle for the rapid learning and improving of software programs that drives innovation faster and faster.
Originally founded by three grade-A geeks— Tom Preston-Werner, Chris Wanstrath, and P. J. Hyett— GitHub is now the world’s largest code host. Since I could not visit any major company today without finding programmers using the GitHub platform to collaborate, I decided I had to visit the source of so much source code at its San Francisco headquarters. By coincidence, I had just interviewed President Barack Obama in the Oval Office about Iran a week earlier. I say that only because the visitor lobby at GitHub is an exact replica of the Oval Office, right down to the carpet! They like to make their guests feel special.
My host, GitHub’s CEO, Chris Wanstrath, began by telling me how the “Git” got into GitHub. Git, he explained, is a “distributed version control system” that was invented in 2005 by Linus Torvalds, one of the great and somewhat unsung innovators of our time. Torvalds is the open-source evangelist who created Linux, the first open-source operating system that competed head-to-head with Microsoft Windows. Torvalds’s Git program allowed a team of coders to work together, all using the same files, by letting each programmer build on top of, or alongside, the work of others, while also allowing each to see who made what changes— and to save them, undo them, improve them, and experiment with them. “Think of Wikipedia— that’s a version control system for writing an open-source encyclopedia,” explained Wanstrath. People contribute to each entry, but you can always see, improve, and undo any changes.
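Wanstrath’s description of Git’s save, undo, and see-who-changed-what loop looks roughly like this in practice. This is a minimal sketch, assuming any recent Git installation; the file name and commit messages are invented:

```shell
# Minimal sketch of Git's version-control loop: save, inspect, undo.
set -e
repo=$(mktemp -d)                          # throwaway directory for the demo
cd "$repo"
git init -q                                # start a new repository
git config user.email "demo@example.com"   # identity for the demo commits
git config user.name  "Demo User"

echo "first draft" > column.txt
git add column.txt
git commit -q -m "Save the initial version"    # snapshot #1

echo "second draft" > column.txt
git commit -q -am "Improve the wording"        # snapshot #2, built on top

git log --oneline       # see every change, who made it, and in what order
git revert -n HEAD      # undo the latest change without losing the history
```

After the `revert`, the working file is back to “first draft,” yet both snapshots remain in the log, which is the property that lets a team experiment freely while always being able to see and undo changes.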
The only rule is that any improvements have to be shared with the whole community. Proprietary software— such as Windows or Apple’s iOS— is also produced with a version control system, but it is a closed-source system: its source code and changes are not shared with any wider community. The open-source model hosted by GitHub “is a distributed version control system: anyone can contribute, and the community basically decides every day who has the best version,” said Wanstrath.
“The best rises to the top by the social nature of the collaboration— the same way books get rated by buyers on Amazon.com. On GitHub the community evaluates the different versions and hands out stars or likes, or you can track the downloads to see whose version is being embraced most. Your version of software could be the most popular on Thursday and I could come in and work on it and my version might top the charts on Friday, but meanwhile the whole community will enjoy the benefits. We could merge them together or …”
“We were saying to ourselves: ‘It is just so freaking hard to use this Git thing. What if we made a website to make it easier?” recalled Wanstrath. “And we thought: ‘If we can get everyone using Git, we can stop worrying about what tools we are using and start focusing on what we are writing.’ I wanted to do it all with one click on the Web, so I could leave comments about a program and follow people and follow code the same way I follow people on Twitter— and with the same ease.” That way if you wanted to work on one hundred different software projects, you didn’t have to learn one hundred different ways to contribute. You just learned Git and you could easily work on them all.
So in October 2007, the three of them created a hub for Git— hence “GitHub.” It officially launched in April 2008. “The core of it was this distributed version control system with a social layer that connected all the people and all the projects,” said Wanstrath. The main competitor at that time— SourceForge— took five days to decide whether to host your open-source software. GitHub, by contrast, was just a share-your-code-with-the-world kind of place.
“Say you wanted to post a program called ‘How to Write a Column,’” he explained to me. “You just publish it under your name on GitHub. I would view that online and say: ‘Hey, I have a few points I would like to add.’ In the old days, I would probably write up the changes I wanted to make and pitch them in the abstract to the community. Now I actually take your code into my sandbox. That is called a ‘fork.’ I work on it and now my changes are totally in the open— it’s my version.
If I want to submit the changes back to you, the original author, I make a pull request. You look at the new way I have laid out ‘How to Write a Column’; you can see all the changes. And if you like it, you press the ‘merge’ button. And then the next viewer sees the aggregate version. If you don’t like all of it, we have a way to discuss, comment, and review each line of code. It is curated crowdsourcing. But ultimately you have an expert— the person who wrote the original program—‘ How to Write a Column’— who gets to decide what to accept and what to reject. GitHub will show that I worked on this, but you get to control what is merged with your original version. Today, this is the way you build software.”
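The fork-and-merge walkthrough above can be simulated entirely with local Git, with no GitHub account involved. The repository names, file, and commit messages below are invented for illustration; the `-b main` flag assumes Git 2.28 or later:

```shell
# Local simulation of the fork / pull-request / merge flow.
# "upstream" stands in for the original author's repository, "fork" for mine.
set -e
work=$(mktemp -d)
cd "$work"

git init -q -b main upstream                    # the original author's repo
git -C upstream config user.email "author@example.com"
git -C upstream config user.name  "Original Author"
echo "Step 1: pick a topic" > upstream/how-to-write-a-column.txt
git -C upstream add .
git -C upstream commit -q -m "Publish 'How to Write a Column'"

git clone -q upstream fork                      # my 'fork': a full, independent copy
git -C fork config user.email "contributor@example.com"
git -C fork config user.name  "Contributor"
echo "Step 2: open with a strong lede" >> fork/how-to-write-a-column.txt
git -C fork commit -q -am "Add a point about ledes"

# The 'pull request': the original author fetches my version,
# reviews every changed line, and only then presses 'merge'.
git -C upstream remote add contributor "$work/fork"
git -C upstream fetch -q contributor
git -C upstream diff main contributor/main      # review the changes line by line
git -C upstream merge -q contributor/main       # accept them into the original
```

Note that control stays exactly where Wanstrath says it does: nothing enters `upstream` until the original author runs the `merge`, no matter how many forks exist.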
A decade and a half ago Microsoft created a technology called .NET— a proprietary closed-source platform for developing serious enterprise software for banks and insurance companies. In September 2014, Microsoft decided to open-source it on GitHub to see what the community could add. Within six months Microsoft had more people working on .NET for free than they had had working on it inside the company since its inception, said Wanstrath. “Open source is not people doing whatever they wanted,” he quickly added. “Microsoft established a set of strategic goals for this program, told the community where they wanted to go with it, and the community made fixes and improvements that Microsoft then accepted. Their platform originally only ran on Windows.
So one day Microsoft announced that in the future they would make it work on Mac and Linux. The next day the community said, ‘Great, thank you very much. We’ll do one of those for you.’” The GitHub community just created the Mac version themselves— overnight. It was a gift back to Microsoft for sharing. “When I use Uber,” concluded Wanstrath, “all I am thinking about now is where I want to go. Not how to get there. It is the same with GitHub. Now you just have to think about what problem do you want to solve, not what tools.” You can now go to the GitHub shelf, find just what you need, take it off, improve it, and put it back for the next person. And in the process, he added, “we are getting all the friction out. What you are seeing from GitHub, you are seeing in every industry.”
When the world is flat you can put all the tools out there for everyone, but the system is still full of friction. But the world is fast when the tools disappear, and all you are thinking about is the project. “In the twentieth century, the constraint was all about the hardware and making the hardware faster— faster processors, more servers,” said Wanstrath. “The twenty-first century is all about the software. We cannot make more humans, but we can make more developers, and we want to empower people to build great software by lifting the existing ones up and opening up the world of development to create more coders … so they can create the next great start-up or innovation project.”
There is something wonderfully human about the open-source community. At heart, it’s driven by a deep human desire for collaboration and a deep human desire for recognition and affirmation of work well done— not financial reward. It is amazing how much value you can create with the words “Hey, what you added is really cool. Nice job. Way to go!” Millions of hours of free labor are being unlocked by tapping into people’s innate desires to innovate, share, and be recognized for it.
If you are interested in adapting to the new realities of making a living, here are some great resources for making money online in the manner that appeals to you most:
- If you need help weighing the pros and cons of remote working, or creating an online business of your own, or if you just want to clarify the passions that point to your ideal job, you would do well to get a little help from a life coach. One of the best places to link up with a coach that’s right for you is the free life coach matching service provided by Wainwright Global.
- If you’d like to learn about getting trained and certified as a life coach and make a living as an online coach, check out Wainwright Global’s Life Coach Training Online.
- If you’re drawn to remote working but don’t know how to get started, the Laptop Lifestyle Academy is the perfect program for learning the ropes.
- If your preference is starting an online business of your own, there’s no better place to start than the free course at My Online Startup.