Formally, software architecture is the “fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution” [IEEE 1471-2000].
Architecture can also be seen as a shared understanding of how the system is structured. Martin Fowler (2003) attempts to pin down the term in a couple of different ways:
Definition 1: “Expert developers working on a project have a shared understanding of the system design. This shared understanding is called ‘architecture’ [and] includes how the system is divided into components and how the components interact through interfaces.”
Definition 2: “Architecture is the set of design decisions that must be made early in a project [and that we would like to get right]”.
Architecture is also the holistic analysis of a system, and of how its parts relate to one another. Instead of examining requirements in isolation, we want to look at the consequences of the structure itself, including the qualities that emerge from this structure.
Architecture can be said to address the intersection of business goals, user goals and technical (system) qualities. The architect needs to determine how to deliver the functional requirements in a way that also addresses these qualities, and other potential business needs (e.g. cost). This may very well include making tradeoff decisions ahead of time. For example, a user may want a system to return the results of a query in less than 5 seconds, but meeting that target might be prohibitively expensive!
The benefit of a careful architecture is that we have a more stable initial design that reflects our project concerns, while still allowing for adaptability, flexibility and other desirable qualities. We’ll discuss different qualities of a system below.
Diagrams and portions of the following sections have been taken from: Mark Richards & Neal Ford. 2020. Fundamentals of Software Architecture: An Engineering Approach. O’Reilly. ISBN 978-1492043454.
Architects need to be concerned with both the logical structure of systems, and the physical realization of that structure.
Modularity refers to the logical grouping of source code into related groups. This can be realized as namespaces (C++) or packages (Java or Kotlin). Modularity is important because it helps reinforce a separation of concerns, and also encourages reuse of source code through modules.
When discussing modularity, we can identify two related concepts: cohesion and coupling.
Cohesion is a measure of how related the parts of a module are to one another. A good indication that classes or other components belong in the same module is that there are few, if any, calls to source code outside of the module, and that removing anything from the module would force the module to call it externally.
Coupling refers to the calls made between components; components are said to be tightly coupled when they depend on one another for their functionality. Loosely coupled means that there is little coupling, or that it could be avoided in practice; tight coupling suggests that the components probably belong in the same module, or perhaps even within a single larger component.
When designing modules, we want high cohesion (where components in a module belong together) and low coupling between modules (meaning fewer dependencies). This increases the flexibility of our software, and makes it easier to achieve desirable characteristics, e.g. scalability.
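As a minimal sketch (all names here are hypothetical), the following Kotlin code shows a loosely coupled design: the invoicing logic depends only on a small interface, not on any concrete storage class, so either side can change independently while each module stays cohesive.

```kotlin
// Low coupling: billing code depends only on this small interface.
interface OrderStore {
    fun totalFor(orderId: Int): Double
}

// One possible implementation; could be swapped for a database-backed one
// without touching InvoiceCalculator at all.
class InMemoryOrderStore(private val totals: Map<Int, Double>) : OrderStore {
    override fun totalFor(orderId: Int) = totals[orderId] ?: 0.0
}

// High cohesion: everything in this class is about computing invoices.
class InvoiceCalculator(private val store: OrderStore) {
    fun invoiceTotal(orderId: Int, taxRate: Double) =
        store.totalFor(orderId) * (1 + taxRate)
}

fun main() {
    val calc = InvoiceCalculator(InMemoryOrderStore(mapOf(1 to 100.0)))
    println(calc.invoiceTotal(1, 0.13))
}
```

Because `InvoiceCalculator` only sees the `OrderStore` interface, replacing the storage mechanism requires no changes to the billing module: the two modules are coupled only through one narrow interface.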
In Kotlin, modules can be created by assigning classes to the same package (using the `package` keyword at the top of a class). If you do this, you also need to place your files in a directory with the same name as the package, e.g. classes in the package `graphics` would need to be located in a common directory named `graphics`.
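As a small sketch (the `graphics` package and `Circle` class are hypothetical), a file in a `graphics` package would look like this, and would live in a directory named `graphics`:

```kotlin
// File: graphics/Circle.kt
// The package declaration matches the enclosing directory name: graphics/
package graphics

import kotlin.math.PI

class Circle(val radius: Double) {
    fun area(): Double = PI * radius * radius
}

// Code in another package would access this class with:
//   import graphics.Circle
```

The compiler treats the package as the logical module boundary, while the directory layout keeps the physical file organization consistent with it.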
Modules are logical collections of related code. Components are the physical manifestation of a module2. Components can represent a number of different abstractions, from a simple wrapper of related classes, to an entire layer of software that runs independently and communicates with external systems.
- Library. A simple wrapper is often called a library, which tends to run in the same memory address as the calling code and communicate via language function call mechanisms. Libraries are usually compile-time dependencies.
- Layers or subsystems. Groups of related code deployed together that may communicate with one another directly.
- Distributed service. A service tends to run in its own address space and communicates via low-level networking protocols like TCP/IP or higher-level formats like REST or message queues, forming stand-alone, deployable units. These are useful in architectures like microservices.
In Kotlin, a `jar` file is the component that we most often create to represent a module. Jar files are designed to be distributed, much like a library in other languages.
Partitioning is the decision on how we organize and group functionality (we use the term top-level partitioning, because this is the highest level of organization).
There are multiple approaches to how we group functionality.
- Technical partitioning: we group functionality according to technical capabilities. e.g. presentation or UI layer, business rules or domain layer and so on.
- Domain partitioning: we group functionality according to the domain area or area of interest. e.g. a payment processing module, a shopping cart module, a reporting module and so on.
So which is correct? Good question.
Technical partitioning tends to be used more often. If we’re concerned about reusability, it’s much easier to design a third-party library that can be injected into an application if you provide technical capabilities than it is to design a domain-specific library.
For example, we have lots of UI frameworks that sit at the presentation layer and can be used to build any sort of application regardless of domain. There are very few CatalogCheckout libraries, since any code produced to address that functionality is likely designed around the assumptions of that specific instance of that domain, and is unlikely to be reusable.
From here, developers subdivide components into classes, functions, or subcomponents. In general, class and function design is the shared responsibility of architects, tech leads, and developers.
So we know what components are, but how do we determine what components we need to create? Here’s a couple of common approaches, both of which assume that you’ve identified Use Cases (in your Requirements phase).
Actor/Actions: From your Use Cases, identify actors who perform activities, and the actions that they may perform. This is simply a technique for discovering the typical users of the system and what kinds of things they might do with the system. These actions represent activities that can be mapped directly to a corresponding software component.
Workflow: This approach looks at the key activities being performed, determines workflows and attempts to build components to address those specific activities.
Where the requirements phase is concerned with functional requirements, this phase is concerned with the non-functional requirements or “architectural characteristics” [Richards & Ford 2020]. These are all of the things that software must do that aren’t directly related to the functional requirements. This includes considerations of the technical environment, choice of programming languages, technical constraints and so on.
An architectural characteristic meets three criteria:
- specifies a non-domain design consideration,
- influences some structural aspect of design,
- is critical or important to the success of the application.
Examples of architectural characteristics include software performance and data security. These tend not to be identified as functional requirements, but should still be considered.
It’s important to identify only those concerns that are relevant to your project! Addressing one of these characteristics adds cost and complexity to your project:
“Applications could support a huge number of architecture characteristics…but shouldn’t. Support for each architecture characteristic adds complexity to the design. Thus, a critical job for architects lies in choosing the fewest architecture characteristics rather than the most possible”
– Richards & Ford. 2020. Fundamentals of Software Architecture
Here are some potential characteristics that are often considered [Richards & Ford 2020]:
These are often relevant for large-scale software systems with dedicated uptime, and typically a large number of concurrent users. e.g. Gmail, Facebook.
| Characteristic | Description |
|---|---|
| Availability | How long the system will need to be available (if 24/7, steps need to be in place to allow the system to be up and running quickly in case of any failure). |
| Continuity | Disaster recovery capability. |
| Performance | Includes stress testing, peak analysis, analysis of the frequency of functions used, capacity required, and response times. Performance acceptance sometimes requires an exercise of its own, taking months to complete. |
| Recoverability | Business continuity requirements (e.g., in case of a disaster, how quickly is the system required to be online again?). This will affect the backup strategy and requirements for duplicated hardware. |
| Reliability | Assess whether the system needs to be fail-safe, or whether it is mission critical in a way that affects lives. If it fails, will it cost the company large sums of money? |
| Robustness | Ability to handle error and boundary conditions while running, e.g. if the internet connection goes down, or if there’s a power outage or hardware failure. |
| Scalability | Ability for the system to perform and operate as the number of users or requests increases. |
These include characteristics related to code structure, code quality, and production concerns.
| Characteristic | Description |
|---|---|
| Configurability | Ability for the end users to easily change aspects of the software’s configuration (through usable interfaces). |
| Extensibility | How important it is to plug new pieces of functionality in. |
| Installability | Ease of system installation on all necessary platforms. |
| Reuse | Ability to leverage common components across multiple products. |
| Localization | Support for multiple languages on entry/query screens in data fields; on reports, multibyte character requirements and units of measure or currencies. |
| Maintainability | How easy it is to apply changes and enhance the system. |
| Portability | Does the system need to run on more than one platform? (For example, does the frontend need to run against Oracle as well as SAP DB?) |
| Supportability | What level of technical support is needed by the application? What level of logging and other facilities are required to debug errors in the system? |
| Upgradeability | Ability to easily/quickly upgrade from a previous version of this application/solution to a newer version on servers and clients. |
These don’t fit neatly into a category but are worth considering:
| Characteristic | Description |
|---|---|
| Accessibility | Access for all your users, including those with disabilities like colorblindness or hearing loss. |
| Archivability | Will the data need to be archived or deleted after a period of time? (For example, customer accounts are to be deleted after three months.) |
| Authentication | Security requirements to ensure users are who they say they are. |
| Authorization | Security requirements to limit access to prescribed functionality based on assigned user roles or capabilities. |
| Legal | What legislative constraints is the system operating in (data protection, Sarbanes-Oxley, GDPR, etc.)? What reservation rights does the company require? Any regulations regarding the way the application is to be built or deployed? |
| Privacy | Ability to hide transactions from internal company employees (encrypted transactions so even DBAs and network architects cannot see them). |
| Security | Does the data need to be encrypted in the database? Encrypted for network communication between internal systems? What type of authentication needs to be in place for remote user access? |
| Usability | Level of training required for users to achieve their goals with the application/solution. |
Another reason to limit yourself to “critical” characteristics for your project is that they are often in competition with one another. For example, making a project highly secure will negatively impact performance, since you will likely need to encrypt and decrypt data throughout your application. This isn’t something to undertake lightly.
So how do you identify which characteristics are relevant? You should look at the functional requirements (and project goals, if they apply) and determine whether there are any characteristics that help address those needs.
| Domain Concern | What do we look for? | Relevant Characteristics |
|---|---|---|
| Time to market | Characteristics that ensure that we can execute quickly and reliably. | Agility; Testability; Deployability |
| User satisfaction | Does your user work directly with this product? These are characteristics that directly affect the user’s perception, and the availability, of your product. | Performance; Usability; Configurability; Localization; Accessibility |
| Privacy | Some types of data are sensitive (e.g. working in healthcare). Anything that would protect the integrity of, and access to, data. | Authorization; Authentication; Security |
| Competitive advantage | In a competitive market, we want to be able to introduce and modify features quickly, even after we’ve released to the market. | Testability; Deployability; Supportability |
Once you’ve identified a characteristic that you should address, you need to document it as a non-functional requirement (NFR).
- Include details of how you will measure it (all NFRs need to be measurable).
- Document how you will verify that you have met the requirement.
- Include consideration of this NFR in your topology and design (below).
- Clearly communicate this as a cross-cutting requirement to your team.
Example of non-functional requirements that would be included in our product backlog:
- “The application should launch and restore all state in less than 5 seconds on our target platform. We will measure this by manually timing execution during integration testing”.
- “When the Save button is pressed, the entire state should be saved, and control returned to the user in less than 10 seconds. We will write unit tests to check the amount of time consumed by this action and ensure that it never exceeds this threshold”.
- “User records in all cases should consume less than 100 Kb each on disk. We will design our data classes to address this, and write integration tests to verify that this is the case”.
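The second requirement above could be verified with a timing assertion. Here is a minimal sketch, where `saveAll()` is a hypothetical stand-in for the real save operation:

```kotlin
import kotlin.system.measureTimeMillis

// Hypothetical stand-in for the real "Save button" logic.
fun saveAll() {
    Thread.sleep(50)  // simulate some work being done
}

fun main() {
    // NFR: the entire save must complete in under 10 seconds.
    val elapsedMs = measureTimeMillis { saveAll() }
    check(elapsedMs < 10_000) { "Save took $elapsedMs ms, exceeding the 10 s threshold" }
    println("Save completed in $elapsedMs ms")
}
```

Because the threshold is written directly into the test, the NFR is both measurable and continuously verified as the codebase evolves.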
An architectural style (or architectural pattern) is the overall structure that we create to represent our software. It describes how our components are organized and structured. Similar to design patterns, an architectural style is a general solution that has been found to work well at solving specific types of problems. The key to using these is to understand the problem well enough that you can determine whether a pattern is applicable, and useful, in your particular situation.
An architectural pattern describes both the topology (organization of components) and the associated architectural characteristics.
There are some fundamental patterns that have appeared through history.
Architects refer to the absence of any discernible architecture structure as a Big Ball of Mud.
A Big Ball of Mud is a haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. – Foote & Yoder 1997.
A Big Ball of Mud isn’t intentional - it’s what happens when you fail to consider architecture in a software project. Treat this as an anti-pattern.
A monolithic structure simply means an application that is designed to run on a single system, without communicating with any other systems. Early monolithic systems had very little source code structure; they generally worked in isolation, on data that was carefully fed to them.
However, one inviolable rule is that systems increase in complexity and capability over time. As systems grow, software has to be more carefully structured and managed to continue to meet these requirements.
Client-server architectures were the first major break away from a monolithic architecture, and split processing into front-end and back-end pieces. This is also called a two-tier architecture. There are different ways to divide up the system into front-end and back-end. Examples include splitting between desktop application (front-end) and shared relational database (back-end), or web browser (front-end) and web server (back-end).
Three-tier architectures were also popular in the 1990s and 2000s, which would also include a middle business-logic tier:
In this particular example, the presentation tier handled the UI, the logic tier handled business logic or application logic, and the data tier managed persistence.
These tiers are commonly used in other architectures, and we’ll revisit them shortly.
Monolithic architectures consist of a single deployment unit. i.e. the application is self-contained and deployed to a single system. (It may still communicate with external entities, but these are separate systems).
A layered or n-tier architecture is a very common architectural style that organizes software into horizontal layers, where each layer represents some logical functionality.
There is some similarity to client-server, though we don’t assume that these layers are split across physically distinct systems (which is why we describe them as logical layers and not physical tiers).
Standard layers in this style of architecture include:
- Presentation: the UI layer that the user interacts with.
- Business layer: the application logic, or “business rules”.
- Persistence layer: describes how to manage and save application data.
- Database layer: the underlying data store that actually stores the data.
The major characteristic of a layered architecture is that it enforces a clear separation of concerns between layers: the Presentation layer doesn’t know anything about the application state or logic, it just displays the information; the Business layer knows how to manipulate the data, but not how it is stored and so on. Each layer is considered to be closed to all of the other layers, and can only be communicated with through a specific interface.
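The closed-layer idea can be sketched in Kotlin with one interface per layer boundary, so each layer talks only to the layer directly beneath it (all names here are hypothetical):

```kotlin
// Persistence layer: knows how data is stored, nothing else.
interface CustomerRepository {
    fun findName(id: Int): String?
}

class InMemoryCustomerRepository : CustomerRepository {
    private val names = mapOf(1 to "Ada")
    override fun findName(id: Int) = names[id]
}

// Business layer: applies rules; talks only to the persistence interface,
// with no knowledge of how the data is actually stored.
class CustomerService(private val repo: CustomerRepository) {
    fun greetingFor(id: Int): String =
        repo.findName(id)?.let { "Hello, $it!" } ?: "Unknown customer"
}

// Presentation layer: displays results; talks only to the business layer.
class CustomerView(private val service: CustomerService) {
    fun render(id: Int) = println(service.greetingFor(id))
}

fun main() {
    CustomerView(CustomerService(InMemoryCustomerRepository())).render(1)
}
```

Each layer sees only the interface of the layer below, so swapping the in-memory repository for a real database would leave the business and presentation layers untouched.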
The layered architecture makes an excellent starting point for many simple applications that have few external interactions. However, be careful to ensure that your layers are actually adding functionality to a request; otherwise they are just added overhead with no added value. A layered architecture is well-suited for small, simple applications, but may not scale well if you need to expand your application’s functionality across more than a single tier.
A pipeline (or pipes and filters) architecture is appropriate when we want to transform data in a sequential manner. It consists of pipes and filters, linked together in a specific fashion:
Pipes form the communication channel between filters. Each pipe is unidirectional, accepting input on one end, and producing output at the other end.
Filters are entities that perform operations on the data that they are fed. Each filter performs a single operation, and they are stateless. There are different types of filters:
- Producer: The outbound starting point (also called a source).
- Transformer: Accepts input, optionally transforms it, and then forwards to a filter (this resembles a map operation).
- Tester: Accepts input, optionally transforms it based on the results of a test, and then forwards to a filter (this resembles a reduce operation).
- Consumer: The termination point, where the data can be saved, displayed etc.
These abstractions may appear familiar, as they are used in shell programming. The pattern is broadly applicable anytime you want to process data sequentially according to fixed rules.
Examples include: photo manipulation software, shells.
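The four filter roles can be sketched with stages chained over a Kotlin `Sequence` (a hypothetical number-processing pipeline, where each stage plays one role):

```kotlin
// Producer -> Transformer -> Tester -> Consumer, chained as a pipeline.
fun evenSquares(count: Int): List<Int> =
    generateSequence(1) { it + 1 }   // producer: the source of the stream
        .take(count)
        .map { it * it }             // transformer: applies an operation to each element
        .filter { it % 2 == 0 }      // tester: forwards only values that pass the test
        .toList()                    // consumer: termination point collecting the results

fun main() {
    println(evenSquares(10))  // prints [4, 16, 36, 64, 100]
}
```

Each stage is stateless and performs a single operation, so stages can be recombined freely, just as shell commands can be piped together in different orders.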
A microkernel architecture (also called plugin architecture) is a popular pattern that provides the ability to easily extend application logic to external, pluggable components. e.g. IntelliJ IDEA which uses application plugins to add functionality for new programming languages.
This architecture works by concentrating the primary functionality in the core system, and providing extensibility through the plugin system. This allows the developer, for instance, to invoke functionality in a plugin when the plugin is present, using a defined interface that describes how to invoke it (without needing to understand the underlying code).
An example would be a payment processing system, where the core system handles shopping and payment calculations, and behaviour specific to a payment vendor could be contained within a plugin (e.g. Visa plugin, AMEX plugin and so on).
One final note: interaction between other system components and plugins is done through the core system as a mediator. This reduces coupling of components and the plugins, and retains the flexibility of this architecture.
Examples of this architecture include web browsers (which support extensions), and IDEA (which support plugins for various programming languages).
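The payment-processing example above can be sketched as a core system that mediates all access to registered plugins through a shared interface (all names here are hypothetical):

```kotlin
// Plugin contract: the core only ever sees this interface,
// never a concrete plugin class.
interface PaymentPlugin {
    val vendor: String
    fun process(amount: Double): String
}

// The core system acts as mediator: other components call the core,
// which dispatches to a plugin if one is registered.
class PaymentCore {
    private val plugins = mutableMapOf<String, PaymentPlugin>()

    fun register(plugin: PaymentPlugin) {
        plugins[plugin.vendor] = plugin
    }

    fun pay(vendor: String, amount: Double): String =
        plugins[vendor]?.process(amount) ?: "No plugin for $vendor"
}

// A plugin supplies vendor-specific behaviour behind the shared interface.
class VisaPlugin : PaymentPlugin {
    override val vendor = "Visa"
    override fun process(amount: Double) = "Visa charged USD $amount"
}

fun main() {
    val core = PaymentCore()
    core.register(VisaPlugin())
    println(core.pay("Visa", 25.0))
    println(core.pay("AMEX", 25.0))  // prints "No plugin for AMEX"
}
```

Because components only talk to the core, adding an AMEX plugin later means registering one more `PaymentPlugin` implementation; no existing code changes.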
Distributed architectures assume multiple deployments across different systems. These deployments communicate over a network, or similar medium using a defined protocol.
This overhead leads to some unique challenges that are referred to collectively as the fallacies of distributed computing. This includes concerns with network reliability, latency, bandwidth, security and so on - things that are non-issues with monolithic architectures3.
A services-based architecture splits functionality into small “portions of an application” (also called domain services) that are independent and separately deployed. This is demonstrated below with a separately deployed user interface, a separately deployed series of coarse-grained services, and a monolithic database. Each service is a separate monolithic application that provides services to the application, and they share a single monolithic database.
Each service provides coarse-grained domain functionality (i.e. operating at a relatively high level of abstraction) and addresses a particular business-need. e.g. a service might handle a customer checkout request to process an order.
Working at a coarse-grained level of abstraction like this means that these types of services can rely on regular ACID (atomicity, consistency, isolation, durability) database transactions to ensure data integrity. In other words, since the service is handling the logic of the entire operation, it can consolidate all of the steps in a single database transaction. If there is a failure of any kind, it can report the status to the customer and rollback the transaction.
e.g. a customer purchasing an item from your online storefront: the same service can handle updating the order details, adjusting available inventory, and processing the payment.
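The all-or-nothing behaviour of such a transaction can be sketched in Kotlin. This is a hypothetical in-memory stand-in for a real ACID database transaction: every step succeeds, or the state is rolled back to where it started.

```kotlin
// Hypothetical coarse-grained checkout service: all steps commit together,
// or the state is rolled back (mimicking a single database transaction).
class CheckoutService {
    var inventory = 10
    var ordersPlaced = 0

    fun checkout(quantity: Int): Boolean {
        // Snapshot state so we can roll back on any failure.
        val savedInventory = inventory
        val savedOrders = ordersPlaced
        return try {
            require(quantity <= inventory) { "insufficient inventory" }
            inventory -= quantity      // adjust available inventory
            ordersPlaced += 1          // record the order
            // payment processing would also happen inside this boundary
            true
        } catch (e: IllegalArgumentException) {
            inventory = savedInventory // rollback: undo every step
            ordersPlaced = savedOrders
            false
        }
    }
}

fun main() {
    val service = CheckoutService()
    println(service.checkout(3))   // prints true: order succeeds
    println(service.checkout(99))  // prints false: order fails
    println(service.inventory)     // prints 7: the failed checkout changed nothing
}
```

Because one service owns the whole operation, consolidating the steps into one transaction is straightforward; in a microservices architecture, where the steps span services, this guarantee is much harder to provide.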
A microservices architecture arranges an application as a collection of loosely coupled services, using a lightweight protocol.
Some of the defining characteristics of microservices:
- Services are usually processes that communicate over a network.
- Services are organized around business capabilities i.e. they provide specialized, domain-specific services to applications (or other services).
- Service are not tied to any one programming language, platform or set of technologies.
- Services are small, decentralized, and independently deployable.
Each microservice is expected to operate independently, and contain all of the logic that it requires to perform its specialized task. Microservices are distributed, so that each can be deployed to a separate system or tier.
The advantage of microservices over services is that we have prioritized decoupling of components and maximized cohesion - each microservice has a specific role and no dependencies. This makes extending and scaling out new microservices trivial. However, the cost of this is performance – communication over the network is relatively slow compared to inter-process communication on the same system.
The driving philosophy of microservices is the notion of bounded context: each service models a domain or workflow. Thus, each service includes everything necessary to operate within the application, including classes, other subcomponents, and database schemas. – Mark Richards
Although the services themselves are independent, they need to be able to call one another to fulfil business requirements. e.g. a customer attempting to check out online may have their order sent to a Shipping service to organize the details of the shipment, but then a request would need to be sent from the Shipping service to the Payment service to actually process the payment.
This suggests that communication between microservices is a key requirement. The architect utilizing this architecture would typically define a standard communication protocol e.g. message queues, or REST.
Coordinating a multi-step process like this involves either cooperation between services (as described above), or a third coordinating service.
Coordination in Microservices [Richards 2020].
Orchestration in Microservices [Richards 2020].
Which style to choose?
Physical in the sense of where it is installed. This matters in systems where software can consist of components installed on different hardware, communicating and exchanging information. ↩︎
If you find this interesting, CS 454 Distributed Systems is highly recommended! This is far too complex a topic to cover in a few paragraphs. ↩︎