CS 346 (W23)

Building Software

It’s helpful to step back and talk about how software has evolved over the history of computing, since this directly impacts our operating environment today.

Hardware and software have co-existed for a long period of time. As the capabilities of our hardware grew, software systems evolved to take advantage of new capabilities, or to solve more complex types of problems that required more powerful hardware.

We can examine prevalent hardware and software over the decades, identifying key trends and technologies that emerged.1

A broad history of programming

Early Computing (1940s-1950s)

Machine code; punch cards. The first programming language, Fortran, was created in the mid-1950s, and COBOL followed a few years later. Programmers were expert users, often engineers, often working alone or in small teams. There was no commercial software industry; software was created alongside hardware.

A punch card from the 1950s, used to encode a computer program

The Mainframe Era (1960s-1970s)

During the 1960s and 1970s, computers were large, very expensive, and sold almost exclusively to large businesses (e.g. insurance companies, manufacturers). Software was primarily bundled with hardware or developed in-house: as a company, you would purchase a mainframe or minicomputer from a large vendor (e.g. IBM), who would also sell you the operating system, system software, and application software for that system.

State-of-the-art mainframe

Research in programming languages led to the rise of procedural and then structured programming2. Object-oriented programming originated with Simula in the 1960s and Smalltalk in the early 1970s, but wasn’t yet widely used.

The 1970s saw the creation of some foundational technologies: C, Pascal, Ada, and Unix. Commercial software started to appear towards the end of the 1970s. The Apple II was introduced in 1977, and marked a major shift towards affordable and widely available computing3.

The PC Era (1980s-1990s)

The 1980s and 1990s are often called the PC (personal computer) era. Apple and a number of other vendors were successful in the home PC market, but it was the introduction of the IBM PC in 1981 (followed by the PC/AT in 1984) that revolutionized business computing. Small businesses that could never have afforded a mini or a mainframe could instead purchase a PC, and use specialized software to help run their business.

IBM Personal Computer/AT

The most important software of this era, and arguably some of the most significant of all time? Spreadsheets: VisiCalc on the Apple II, Lotus 1-2-3 on the IBM PC.

Lotus 1-2-3 running under DOS

The popularity of the PC market through the 1980s led to the rise of Microsoft, Apple, and many other companies that we know today. Object-oriented programming became the major paradigm of this era, with C++ as the dominant programming language (not seriously challenged until Java appeared in 1996). The 1980s also saw the rise of software development companies that produced and sold software for these platforms.

The Internet Era (1990s-2000s)

In the early 1990s, it was common for households to have a single, shared family computer (often an IBM PC clone, running some version of Microsoft Windows). Although developed through the 1970s and 1980s, the Internet wasn’t widely and publicly available until the mid-1990s.

Mosaic Browser in 1993

The late 1990s saw a massive shift in development technologies to support the Internet, and in particular the World Wide Web, including the rise of JavaScript, Ruby, and Python. Software was typically developed by small- to mid-sized teams, and commercial software, developed and sold by software vendors, was the norm by this point.

The Smartphone Era (2000s)

The early 2000s saw mainstream adoption of parallel computing technologies: multi-core processors and distributed applications.

Smartphones existed in the late 1990s and early 2000s, but they didn’t become truly important until the launch of the iPhone in 2007, which effectively kicked off the modern smartphone industry.

iPhone 2007 Launch

Modern Era

The software landscape continues to change and evolve. Here’s a rundown of the environment in which we operate today.

People use multiple devices

Computing hardware has become cheap enough over the past 20 years that even inexpensive devices can be relatively powerful. Computing is now cheap enough that we can afford to put chips in everything1.

This also means that instead of using one general-purpose device, we spread our computing needs across a variety of specialized devices. Most people own a smartphone, and routinely own or use other computing devices (e.g. notebooks, tablets, smartwatches).

Devices over time

Users expect their software to provide capabilities across as many of these devices as possible. Software has spread to smart TVs, in-home networking, phones, tablets, notebooks and various other hardware platforms2.

This means that our critical software needs to be able to “live” on all of these devices!

Our data lives in the “cloud”

The mass adoption of smartphones accelerated the trend of hosting services and data in “the cloud” (i.e. a system available over the Internet that provides remote software services on-demand). Once we had multiple devices, it made sense for them to be able to talk to each other and share data over the Internet. Since they already have a network connection, they can also leverage remote services that are available over the Internet. Increasingly, companies want to provide a software service, where functionality and data are hosted “in the cloud” and they provide client software to access it.

We work in software ecosystems

We currently live in an era where data from one device is made available across all of our devices by some third-party service. For example, I can open a web page on my phone and, without doing anything specific to trigger it, switch to my computer and open a browser to the same web page; edits to a document on one platform are propagated immediately to other systems. Ecosystems are primarily owned and driven by the largest technology companies that can design around this form of broad integration: Microsoft, Apple, Google and Amazon. We need to be aware of these ecosystems, and likely take measures to ensure that our software can operate in these environments.

Consumer expectations are high

The media hype around data science and AI means that everyone expects miracles. My phone can recognize my face and unlock automatically? Of course it can. My car can drive itself? Yawn. Consumer expectations are set very high.

Development Challenges

So what is the landscape like for a developer - what concerns us?

Vendor competition

Software is a multi-billion dollar business, and large companies need to entice software developers to their platform and technologies (ideally, exclusively). This means that many companies are in competition with respect to the development technologies that they promote.

For example, Apple, Microsoft and Google all have their own platforms that they want customers to use to the exclusion of others (e.g. iOS for Apple’s iPhone, Android for Google). Each company promotes programming languages and other technologies for its own platforms (e.g. Apple has Swift; Google promotes Dart and Kotlin).

Although it would be hugely beneficial for a developer to be able to target all of these platforms with a single programming language, that doesn’t benefit the vendors, who want exclusivity. Since none of these vendors will invest in making their technologies available on a competing platform, any developer trying to choose a technology stack needs to make some hard decisions about what to use.

There are exceptions, of course: web technologies are standardized and used by all of these companies. However, these vendors still compete in related technologies (Google’s Chrome browser vs. Microsoft’s Edge, for instance).

Cross-platform challenges

In spite of vendor competition, users expect software to run on every platform that they own, regardless of vendor, form factor or operating system. If you’re developing commercial software, you will want it deployed on every system that can potentially run it. It can be prohibitively expensive to develop an application from scratch for every platform, so ideally we want to reuse our code or somehow target multiple platforms from a single codebase (a small sketch of this idea follows the figure below).

Things, a Mac-only app, supporting 4 different devices
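
As a rough illustration of the single-codebase idea, one option is to build on a portable runtime such as the JVM and isolate the few platform-specific details behind small functions. The sketch below is a minimal, hypothetical Kotlin example; the application name and directory conventions are assumptions, not anything prescribed by a particular vendor.

```kotlin
// A minimal sketch: one JVM codebase that adapts to whichever desktop OS it runs on.
// "MyApp" and the directory conventions below are hypothetical examples.
fun configDirectory(): String {
    val home = System.getProperty("user.home")
    val os = System.getProperty("os.name").lowercase()
    return when {
        "win" in os -> "$home\\AppData\\Roaming\\MyApp"           // Windows
        "mac" in os -> "$home/Library/Application Support/MyApp"  // macOS
        else        -> "$home/.config/myapp"                      // Linux and others
    }
}

fun main() {
    // The same build runs unchanged on Windows, macOS and Linux.
    println("Storing settings in: ${configDirectory()}")
}
```

Cross-platform toolkits generalize this pattern: shared logic lives in one place, with thin platform-specific layers only where they are unavoidable.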

Third-party technologies

As software systems have become more complex, we’ve relied more and more on pre-existing libraries and toolkits to provide functionality. We certainly write new software, but we also leverage existing solutions where we can. This includes frameworks and toolkits (e.g. SwiftUI, Jetpack Compose for Android, the Boost C++ libraries).

These provide many advantages:

  • These libraries are often peer-reviewed and highly scrutinized. They have the potential to be the “best” available solution to a particular problem.
  • They are likely tested exhaustively.
  • Using a third-party library can save you significant development time (see the build sketch below for how little is needed to pull one in).
  • They may provide capabilities that you are incapable of developing yourself (e.g. I don’t know enough about Computer Vision to write a better library than OpenCV).

However, there are also downsides.

  • You become dependent on that library. The vendors or authors decide not to support a new platform? You don’t support it either, unless you’re willing to do the work yourself (contribute to the library) or move to a different solution.
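
To make the “save development time” point concrete: in most modern build systems, adopting a third-party library is a one-line declaration, and the tooling downloads it for you. A minimal sketch, assuming a Gradle project with the Kotlin DSL (the library shown is a real, public one, used purely as an example):

```kotlin
// build.gradle.kts (fragment): pulling in a third-party library.
dependencies {
    // One line adds the Kotlin coroutines library to the project.
    implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.6.4")
}
```

The flip side is that this one line is also exactly where the dependency, and its long-term maintenance risk, enters your project.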

Service integration

For users, there has been a shift towards software that leverages and integrates with other, often remote, software. For example, we pull photos from Google Photos, read email from Outlook, and skim through newsfeeds - all using software installed on our personal devices, which accesses many remote services. This is the norm for many applications.
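
Under the hood, this kind of integration usually means our client software makes requests to a vendor’s web service and works with the responses. A minimal sketch, assuming Kotlin on the JVM and the standard JDK HTTP client; the endpoint URL is hypothetical:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Hypothetical endpoint standing in for a real service (photos, mail, news feeds, ...).
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://api.example.com/photos/recent"))
        .GET()
        .build()

    // Send the request and print whatever the remote service returns.
    val client = HttpClient.newHttpClient()
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    println("Status: ${response.statusCode()}")
    println(response.body())
}
```

Real integrations layer authentication, error handling and data parsing on top of this, but the basic shape is the same: a client on the user’s device talking to a remote service.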

Our software development stack needs to address all of these concerns.


  1. Keep in mind that this is a gross oversimplification.

  2. Edsger Dijkstra’s letter “Go To Statement Considered Harmful”, published in the March 1968 Communications of the ACM (CACM), led to the creation of structured programming. You can thank him for the loops and other control structures that we use today.

  3. There were also a number of popular, competing systems at the time, e.g. the Commodore PET from Commodore Business Machines and the TRS-80 from Tandy Corporation.