It’s helpful to step back and talk about how software has evolved over the history of computing, since this directly impacts our operating environment today.
Hardware and software have co-existed for a long period of time. As the capabilities of our hardware grew, software systems evolved to take advantage of new capabilities, or to solve more complex types of problems that required more powerful hardware.
We can examine the prevalent hardware and software over the decades, identifying key trends and technologies that emerged.1
Machine code; punch cards. The first programming language, Fortran, was created in the mid-1950s, and COBOL followed a few years later. Programmers were expert users, often engineers, often working alone or in small teams. There was no commercial software industry; software was created alongside hardware.
During the 1960s and 1970s, computers were large, very expensive and sold exclusively to large businesses (e.g. insurance companies, manufacturers). Software was primarily bundled with hardware, or developed in-house. e.g. as a company, you would purchase a mainframe or mini from a large vendor (e.g. IBM) who would also sell you the operating system, system software and application software for that system.
Research in programming languages led to the rise of procedural and then structured programming2. Object-oriented programming originated with Simula in the 1960s and Smalltalk in the early 1970s, but wasn’t very widely used yet.
The 1970s saw the creation of some foundational technologies: C, Pascal, Ada, Unix. Commercial software started to appear towards the end of the 1970s. The Apple II was introduced in 1977, and marked a major shift towards affordable and widely available computing3.
This decade is often called the PC (personal computer) era. Apple and a number of other vendors were successful in the home PC market, but it was the introduction of the IBM PC in 1981 that revolutionized business. Small businesses that could never have afforded a mini or mainframe could instead purchase a PC, and use specialized software to help run their business.
The most important software of this era, and arguably some of the most significant of all time? Spreadsheets: VisiCalc on the Apple II, Lotus 1-2-3 on the PC.
The popularity of the PC market through the 1980s led to the rise of Microsoft, Apple, and many other companies that we know today. Object-oriented programming became the major paradigm of that era, with C++ being the dominant programming language (not really challenged until Java appeared in 1996). The 1980s also saw the rise of software development companies who produced and sold software for these platforms.
In the early 1990s, it was common for households to have a single, shared family computer (often an IBM PC clone, running some version of Microsoft Windows). Although developed through the 1970s and 1980s, the Internet wasn’t widely and publicly available until the mid-1990s.
The early 2000s saw mainstream adoption of parallel computing technologies: multi-core; distributed applications.
The smartphone existed in the late 1990s and early 2000s, but wasn’t really important until the launch of the iPhone in 2007. This effectively kicked off the smartphone industry.
The software landscape continues to change and evolve. Here’s a rundown of the environment in which we operate today.
Computing hardware has become cheap enough over the past 20 years that we can manufacture relatively inexpensive but still powerful devices. Computing is now cheap enough that we can afford to put chips in everything1.
This also means that instead of using one general-purpose device, we spread our computing needs across various different, specialized devices. Most people own a smartphone, and routinely own or use other computing devices e.g. notebooks, tablets, smartwatches.
Users expect their software to provide capabilities across as many of these devices as possible. Software has spread to smart TVs, in-home networking, phones, tablets, notebooks and various other hardware platforms2.
This means that our critical software needs to be able to “live” on all of these devices!
The mass adoption of smartphones accelerated the trend of hosting services and data in “the cloud” (i.e. a system available over the internet that provides remote software services on-demand). Once we had multiple devices, it made sense for them to be able to talk to each other and share data over the Internet. If they have a network connection already, they should also leverage remote services that are available over the Internet. Increasingly, companies want to provide a software service, where functionality and data is hosted “in the cloud” and they provide client software to access it.
We currently live in an era where data on one device is made available across all of our devices by some third-party service. e.g. I can open a web page on my phone and, without doing anything specific to trigger it, switch to my computer and open a browser to the same web page; edits to a document on one platform are propagated immediately to other systems. Ecosystems are primarily owned and driven by the largest technology companies that can design around this form of broad integration: Microsoft, Apple, Google and Amazon. We need to be aware of these systems, and likely take measures to ensure that our software can operate in these environments.
The media hype around data science and AI means that everyone expects miracles. My phone can recognize my face and unlock automatically? Of course it can. My car can drive itself? Yawn. Consumer expectations are set very high.
So what is the landscape like as a developer - what concerns us?
Software is a multi-billion dollar business, and large companies need to entice software developers to their platform and technologies (ideally, exclusively). This means that many companies are in competition with respect to the development technologies that they promote.
For example, Apple, Microsoft and Google all have their own platforms that they want customers to use to the exclusion of others (e.g. iOS for Apple’s iPhone, Android for Google). Each company has programming languages and other technologies for their own platforms (e.g. Apple has Swift, Google has Dart, Kotlin).
Although it would be hugely beneficial for a developer to be able to target all of these platforms with a single programming language, that doesn’t benefit the vendors, who want exclusivity. Since none of these vendors will invest in making their technologies available on a competing platform, any developer trying to choose a technology stack needs to make some hard decisions on what to use.
There are exceptions of course: web technologies are standard, and used by all of these companies. However, they still compete in related technologies (Google’s Chrome browser vs. Microsoft Edge, for instance).
In spite of vendor competition, users expect software to run on every platform that they own, regardless of vendor, form-factor or operating system. If you’re developing commercial software, you will want your software deployed on every system that can potentially run it. It can be prohibitively expensive to develop an application from scratch on every platform, so ideally we want to reuse our code or somehow target multiple platforms from a single codebase.
As software systems have become more complex, we’ve relied more and more on preexisting libraries and toolkits to provide functionality. We certainly write new software, but we also leverage existing solutions where we can. This includes frameworks and toolkits (e.g. SwiftUI, Jetpack Compose for Android, Boost C++ libraries).
These provide many advantages:
- These libraries are often peer-reviewed and highly scrutinized. They have the potential to be the “best” possible solution to a particular problem.
- They are likely tested exhaustively.
- Using a third-party library can save you significant development time.
- They may provide capabilities that you are incapable of developing yourself (e.g. I don’t know enough about Computer Vision to write a better library than OpenCV).
However, there are also downsides.
- You become dependent on that library. The vendors or authors decide not to support a new platform? You don’t support it either, unless you’re willing to do the work yourself (contribute to the library) or move to a different solution.
For users, there has been a shift towards software that leverages and integrates with other, often remote, software. e.g. we pull photos from Google Photos, read email from Outlook, skim through newsfeeds - all using software installed on our personal devices, which accesses many remote services. This is the norm for many applications.
Our software development stack needs to address all of these concerns.
Edsger Dijkstra’s letter “Go To Statement Considered Harmful”, published in the March 1968 Communications of the ACM (CACM), led to the creation of structured programming. You can thank him for loops and other control structures that we use today.
There were also a number of popular, competing systems at the time (e.g. the Commodore PET from Commodore Business Machines and the TRS-80 from Tandy Corporation).