Desktop Applications
Overview
Graphical applications arose in the early 80s as we moved from text-based terminals to more technically capable systems. This was part of the Personal Computer (PC) movement of that time, which aimed to put a “computer in every home”. Graphical User Interfaces (GUIs) were seen as more “user-friendly” and were considered an important factor in the adoption of these systems. The Apple Macintosh, introduced in 1984, was the first commercially successful graphical operating system; other vendors (e.g. Microsoft, Commodore, Sun) quickly followed suit. The conventions adopted on the Mac became standard on other platforms as well, e.g. Windows.
Desktop applications are graphical applications designed for a notebook or desktop computer, typically running Windows, macOS or Linux. Users interact with these applications using a mouse and keyboard, although other devices may also be supported (e.g. camera, trackpad).
Although desktop computers can run console applications in a shell, for this discussion, we’re focusing on graphical applications.
Features
Graphical desktop applications should have the following features:
- Multiple application windows should be supported. Most applications present their interface within a single, interactive window, but it can sometimes be useful to have multiple simultaneous windows controlled by a single application1.
- Support for full-screen or windowed interaction: although graphical applications tend to run windowed, they should normally be usable full-screen as well. The window contents should scale or reposition themselves as the window size changes.
- Window decorations: each window should have a titlebar and minimize/maximize/restore buttons that work as expected.
- Windows may or may not be resizable: if they are resizable, the contents should scale or adjust their layout based on window size (for this reason, it may make sense to either constrain window dimensions when resizing, or make some windows fixed-size). Convention allows the main window to be resized, and option dialogs (or similar non-essential windows) to be fixed-size.
- Interactive graphical elements: window contents could be any combination of graphics, animations, multimedia, or text that is desired for the target application. These contents should be dynamic (i.e. have the ability to change in response to system state) and should support a range of interactions - clicking, double-clicking, dragging - provided by both mouse and keyboard.
- Standard menubars: every application should have the following menus (with shortcuts). Although some applications choose to eliminate menus (or replace them with other controls), most of the time you should include them. Exact contents may vary, but users expect at least this functionality:
- File: New, Open, Close, Print, Quit.
- Edit: Cut, Copy, Paste.
- Window: Minimize, Maximize.
- Help: About.
- Keyboard shortcuts: you should strive to have keyboard shortcuts for common functionality. All standard shortcuts should be supported2, e.g.
- Ctrl-N for File-New, Ctrl-O for File-Open, Ctrl-Q for Quit.
- Ctrl-X for Cut, Ctrl-C for Copy, Ctrl-V for Paste.
- F1 for Help.
Benefits
There are obvious benefits to a graphical display being able to present rich colours, graphics and multimedia. However, this application style also encourages a certain style of interaction, where users point and click at on-screen elements to interact with them.
There are numerous benefits to this style of user interface:
- The interface provides affordances: visual suggestions on how you might interact with the system. This can include hints like tooltips, or a graphical design that makes the use of controls obvious (e.g. handles to show where to “grab” a window corner).
- Systems provide continuous feedback to users. This includes obvious feedback (e.g. dialog boxes, status lines) and more subtle, continuous feedback (e.g. widgets animating when pressed).
- Interfaces are explorable: users can browse menus and other visual cues to discover new features.
- Low cost of errors: undo-redo, and the ability to rollback to previous state makes exploration low-risk.
- These environments encouraged developers to use consistent widgets and interactive elements. Standardization led to a common look-and-feel, and placement of common controls - which made software easier to learn, especially for novices. Many of the standard features that we take for granted are a direct result of this design standardization in the 80s. [ed. notice that Windows, macOS, Linux all share a very common interaction paradigm, and a common look-and-feel! You can move between operating systems and be quite comfortable because of this.]
Functionality
These are complex requirements that fall outside the scope of a programming language (in part because they’re intrinsically tied to the underlying operating system, which makes them difficult to standardize).
A widget or GUI toolkit is a UI framework which provides this functionality. This includes support for:
- Creating and managing application windows, with standard window functionality e.g. overlapping windows, depth, min/max buttons, resizing.
- Reusable components called widgets that can be combined in a window to build typical applications. e.g. buttons, lists, toolbars, images, text views.
- Dynamic layout that adapts the interface to change in window size or dimensions.
- Support for an event-driven architecture3, i.e. support for standard and custom events, including event generation and propagation.
Implementation details will vary based on the toolkit that you’re using. We’ll discuss requirements first, and then in the next section we’ll provide implementation details for some common widget toolkits.
Window Management
In the context of applications, a window is simply a region of the screen that “belongs” to a specific application. Typically one application has one main window, and optionally other windows that may also be displayed. These are overlaid on a “desktop”, which is really just the screen background.
To manage many different windows, across many different applications, a part of the operating system called a windowing system is responsible for creating, destroying and managing running windows. The windowing system provides an API that gives applications all window-related functionality, including:
- providing a mechanism for applications to create, or destroy, their own windows
- handling window movement automatically and invisibly to the application (i.e. when you drag a window, the windowing system moves it)
- handling overlapping windows across applications (e.g. so that your application window can be brought to the “front” and overlap another application’s window).
A windowing system or windowing technology is typically included as part of the operating system, though it’s possible in some systems to replace windowing systems (e.g. Linux).
Coordinate systems
A computer screen uses a Cartesian coordinate system to track window position. By convention, the top-left is the origin, with x increasing as you move right, and y increasing as you move down the screen. The bottom-right corner of the screen is maximum x and y, which equals the resolution of the screen.
Note that it’s possible for screen contents to be moved out-of-bounds and made inaccessible. We typically don’t want to do this.
In the example below, you can see that this is a 1600x1200 resolution screen4, with the four corner positions marked. It contains a single 400x400 window, positioned at (500, 475) using these screen, or global coordinates.
Given that the windowing system manages the movement and positioning of windows on-screen, an application window doesn’t actually know where it’s located on-screen! The application that “owns” the window above doesn’t have access to its global coordinates. It does, however, have access to its own internal, or local, coordinates. For example, our window might contain other objects, and the application would know about their placement. In this local coordinate system, we use the top-left of the window as the origin, with the bottom-right coordinate being the (width, height) of the window. Objects within the window are referenced relative to the window’s origin.
Window creation
Typically, the toolkit will provide a mechanism to create a top-level application window, usually as a top-level class that can be instantiated. That class will have properties to control its behaviour (some of which are used by the windowing system to set up the window correctly).
- Sample properties: minWidth, width, maxWidth; minHeight, height, maxHeight; title; isFullScreen; isResizable
- Sample methods: close(), toFront(), toBack()
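As a sketch of what this looks like in practice, here’s a window being created and configured in JavaFX (the toolkit discussed later in these notes); the exact property and method names vary by toolkit:

import javafx.application.Application
import javafx.scene.Scene
import javafx.scene.layout.StackPane
import javafx.stage.Stage

class MainWindow : Application() {
    override fun start(stage: Stage) {
        stage.title = "Example"                        // titlebar text
        stage.minWidth = 320.0                         // constrain resizing
        stage.minHeight = 240.0
        stage.isResizable = true                       // allow the user to resize
        stage.scene = Scene(StackPane(), 640.0, 480.0) // placeholder content
        stage.show()                                   // ask the windowing system to display it
    }
}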
Window Movement
As application developers, we do not need to do anything to support window movement, since it’s provided by the windowing system. Any non-fullscreen windows created by a toolkit are automatically moveable.
Widgets and Layout
We’re going to refer to graphical on-screen elements as widgets. Most toolkits support a large number of similar widgets. The diagram below shows one desktop toolkit with drop-down lists, radio buttons, lists and so on. All of these elements are considered widgets.
Typically, using widgets is as simple as instantiating them, adding them to the window, and setting up a mechanism to detect when users interact with them so that appropriate actions can be taken.
Scene graph
It’s standard practice in graphical applications to represent the interface as a scene graph. This is a mechanism for modeling a graphical application as a tree of nodes (widgets), where each node has exactly one parent. Effects applied to the parent are applied to all of its children.
Toolkits support scene graphs directly. There is typically a distinction made between Container widgets and Node widgets. Containers are widgets that are meant to hold other widgets e.g. menus which hold menu_items, toolbars which can hold toolbar_buttons and so on. Nodes are widgets that are interactive and don’t have any children.
Building a UI involves explicitly setting up the scene graph, by instantiating nodes, and adding them to containers to build a scene graph. (For this reason, containers will always have a list of children, and a mechanism for adding and removing children from their list).
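For example, here’s a small scene graph being assembled in JavaFX (one of the toolkits discussed below): a menubar is a container whose children are menus, which in turn contain menu items.

import javafx.scene.control.Menu
import javafx.scene.control.MenuBar
import javafx.scene.control.MenuItem

// containers hold children; nodes are the leaves of the scene graph
val fileMenu = Menu("File").apply {
    items.addAll(MenuItem("New"), MenuItem("Open"), MenuItem("Quit"))
}
val menuBar = MenuBar(fileMenu)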
Layout
Layout is the mechanism by which nodes in the scene graph are positioned on the screen, and managed if the window is resized.
- Fixed layout is a mechanism to place widgets in a static layout where the window will remain the same size. This is done by setting properties of the widgets to designate each one’s position, and ensuring that the containers do not attempt any dynamic layout.
- Relative layout delegates responsibility for widget position and size to their parents (i.e. containers). Typically this means setting up the scene graph in such a way that the appropriate container is used based on how it adjusts position. Typical containers include a vertical-box that aligns its children in a vertical line, or a grid that places children in a grid with fixed rows and columns.
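A brief sketch of relative layout, again using JavaFX: the containers, not the widgets themselves, decide where each child is placed.

import javafx.scene.control.Button
import javafx.scene.control.Label
import javafx.scene.control.TextField
import javafx.scene.layout.GridPane
import javafx.scene.layout.VBox

// the grid places children into (column, row) cells; the vertical-box
// stacks its children top-to-bottom. No child has an explicit (x, y).
val form = GridPane().apply {
    add(Label("Name:"), 0, 0)   // column 0, row 0
    add(TextField(), 1, 0)      // column 1, row 0
}
val root = VBox(form, Button("Save"))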
Design
Events
Applications often handle multiple types of processing: asynchronous, such as when a user types a few keystrokes, or synchronous, such as when we want a computation to run non-stop to completion.
User interfaces are designed around the idea of using events or messages as a mechanism for components to indicate state changes to other interested entities. This works well, due to the asynchronous nature of user-driven interaction, where there can be relatively long delays between inputs (i.e. humans type slowly compared to the rate at which a computer can process the keystrokes).
This type of system, designed around the production, transmission and consumption of events between loosely-coupled components, is called an Event-Driven Architecture. It’s the foundation of most user-interface centric applications (desktop, mobile), which commonly use messages to signal a user’s interaction with a viewable component in the interface.
What’s an event? An event is any significant occurrence or change in state for system hardware or software.
The source of an event can be internal or external. Events can be generated by a user, like a mouse click or keystroke, by an external source, such as a sensor output, or by the system, like loading a program.
How does event-driven architecture work? Event-driven architecture is made up of event producers and event consumers. An event producer detects or senses the conditions that indicate that something has happened, and creates an event.
The event is transmitted from the event producer to the event consumers through event channels, where an event processing platform processes the event asynchronously. Event consumers need to be informed when an event has occurred, and can choose to act on it.
An event-driven system typically runs an event loop that waits for these events. The process is illustrated below:
- An EventEmitter generates an event.
- The event is placed in an event queue.
- An event loop peels off events from the queue and dispatches them to event handlers (functions which have been registered to receive this specific type of event).
- The event handlers receive and process the event.
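Toolkits implement this loop for you, but a minimal sketch in Kotlin makes the moving parts concrete. The Event type and handler registry here are hypothetical, purely for illustration:

import java.util.concurrent.LinkedBlockingQueue

// hypothetical event type, for illustration only
data class Event(val type: String, val data: Any? = null)

val queue = LinkedBlockingQueue<Event>()
val handlers = mutableMapOf<String, MutableList<(Event) -> Unit>>()

// register a handler for a specific type of event
fun register(type: String, handler: (Event) -> Unit) {
    handlers.getOrPut(type) { mutableListOf() }.add(handler)
}

// the event loop: block until an event arrives, then dispatch it
// to every handler registered for that event type
fun eventLoop() {
    while (true) {
        val event = queue.take()
        handlers[event.type]?.forEach { it(event) }
    }
}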
To handle event driven architectures, we often subdivide application responsibility into separate components.
MVC Patterns
Model-View-Controller
The most basic structure is Model-View-Controller (MVC), which leverages the Observer design pattern to separate business logic from the user interface.
MVC divides any application into three distinct parts:
- Model: the core component of the application that handles state (“business logic layer”).
- View: a representation of the application state, often as a user-interface (“presentation layer”)
- Controller: a component that accepts input, interprets user actions and converts to commands for the model or view.
Similar to the observer pattern, the views monitor the system state, represented by the model. When the state changes, the views are notified and they update their data to reflect these changes. Notifications frequently happen through events generated by, and managed by, the toolkit that you’re using.
Often this is realized as separate classes for each of these components, with an additional main
class to bind everything together.
// main class: constructs the components and binds everything together
class Main {
    val model = Model()
    val controller = Controller(model)
    val view = View(controller, model)

    init {
        model.addView(view)
    }
}
We use an interface to represent the views, which provides the flexibility to allow many different types of output for the program. Any class can be a view as long as it supports the appropriate method to allow notifications from the model.
interface IView {
    fun update()
}

class View(val controller: Controller, val model: Model) : IView {
    override fun update() {
        // fetch data from model
    }
}
The model maintains a list of all views, and notifies them with state changes (indicating that they may wish to refresh their data, or respond to the state change in some way).
class Model {
    val views = mutableListOf<IView>()

    fun addView(view: IView) {
        views.add(view)
    }

    fun update() {
        for (view in views) {
            view.update()
        }
    }
}
The controller just passes input from the user to the model.
class Controller(val model: Model) {
    fun handle(event: Event) {
        // pass event data to model
    }
}
One issue with this version of MVC is that the controller often serves little purpose, except to pass along events that are captured by the View (the View contains the user-interface and widgets, and generates events as the user interacts with it).
MVC remains common for simple applications, but tends to be implemented as just a model and one or more views, with the controller code included in the view itself.
Model-View-Presenter
Model-View-Presenter (MVP) keeps the key concept in MVC - separating the business logic from the presentation - and introduces an intermediate Presenter which handles converting the model’s data into a useful format for the views. This is typically done explicitly by the Presenter class. MVP arose from Taligent in the 1990s, but was popularized by Martin Fowler around 2006.
There have been multiple variants of MVP. We’ll focus on MVVM, probably the most popular variant.
MVVM
Model-View-ViewModel was invented by Ken Cooper and Ted Peters to simplify event-driven programming of user interfaces in C#/.NET. It’s similar to MVP, but includes the notion of binding variables to widgets within the framework, so that changes in widget state are automatically propagated from the view to other components.
MVVM includes the following components:
- Model: as MVC, the core component that handles state. It can also map to a data access layer or database directly.
- View: a representation of the application state, presented to the user.
- ViewModel: a model that specifically interprets the underlying Model state for the particular view to which it is associated. Typically we rely on binding to map variables in the ViewModel directly to widgets in the View, so that updating one directly updates the other.
MVVM is much more common in modern languages and toolkits and has the advantage of replacing all “mapping” code with direct binding of variables and widgets by the toolkit. This greatly simplifies interface development.
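As a sketch of what binding looks like, here’s the idea expressed with JavaFX observable properties (JavaFX is introduced in the next section); the ViewModel class and the bindView helper are hypothetical:

import javafx.beans.property.SimpleStringProperty
import javafx.scene.control.Label
import javafx.scene.control.TextField

class ViewModel {
    val name = SimpleStringProperty("")   // observable state
}

fun bindView(viewModel: ViewModel, field: TextField, greeting: Label) {
    // edits to the text field update the ViewModel, and the label
    // updates automatically in turn; no manual "mapping" code
    viewModel.name.bind(field.textProperty())
    greeting.textProperty().bind(viewModel.name)
}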
Toolkits: JavaFX
We’re using Kotlin with the Java Virtual Machine (JVM) ecosystem, so we’ll discuss some toolkits that are available in that ecosystem.
Java launched in 1996, with AWT as its first GUI framework. AWT is a heavyweight toolkit that provided a thin abstraction layer over the system-specific widgets provided by OS vendors i.e. it provided wrappers for UI components that were built into the OS. However, this tight integration to the OS meant that AWT behaved very differently across different operating systems, which ran counter to Sun’s original goals of having a single cohesive toolkit that ran equally well on all platforms.
Swing was originally part of the Java Foundation Classes, and replaced the AWT in 1997. Unlike AWT, Swing is a lightweight toolkit: Swing components draw themselves using the Java2D Graphics Library, which makes Swing applications consistent across platforms. This also means that Swing can support a broader range of components, including some that aren’t directly supported by the OS. In other words, lightweight toolkits provide some tangible benefits:
- The largest collection of widgets, not limited to just the subset that can be assumed to be present on each OS.
- Consistency in how widgets behave, since they are designed as a set.
- An OS independent look-and-feel.
JavaFX was originally designed by Sun Microsystems in 2008 as a replacement for the Java AWT and Swing toolkits, and was designed to compete directly with Adobe Flash/Flex and similar web toolkits. Oracle later released JavaFX into the community as part of the OpenJDK initiative. The open source version of JavaFX is currently maintained by Gluon and the community.
JavaFX is an imperative toolkit, where the programmer describes the layout and how it should be managed in code (and XML). [This contrasts with a declarative toolkit like Jetpack Compose, where the programmer describes a layout and the system reflects state in that layout].
JavaFX is a lightweight toolkit that runs well on Windows, Mac and Linux. It provides a consistent look-and-feel across platforms, and even supports hardware acceleration! It’s not included with the JRE, but because it’s open source, we can distribute the libraries with our applications.
Setup
Although JavaFX can be installed from the main JavaFX site, the recommended way to bundle these libraries into your application is to add it to your Gradle configuration file. Gradle will then download and install JavaFX as-needed.
In your project’s build.gradle file, make the following changes to include the javafxplugin and related settings in the javafx block.
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

plugins {
    application
    kotlin("jvm") version "1.6.20"
    id("org.openjfx.javafxplugin") version "0.0.13"
    id("org.beryx.jlink") version "2.25.0"
}

group = "net.codebot"
version = "1.0.0"

val compileKotlin: KotlinCompile by tasks
val compileJava: JavaCompile by tasks
compileJava.destinationDirectory.set(compileKotlin.destinationDirectory)

repositories {
    mavenCentral()
}

dependencies {
    testImplementation(kotlin("test"))
}

tasks.test {
    useJUnitPlatform()
}

tasks.withType<KotlinCompile> {
    kotlinOptions.jvmTarget = "1.8"
}

application {
    mainModule.set("calculator")
    mainClass.set("calculator.Main")
}

javafx {
    // version is determined by the plugin above
    version = "18.0.2"
    modules = listOf("javafx.controls", "javafx.graphics")
}

// https://stackoverflow.com/questions/74453018/jlink-package-kotlin-in-both-merged-module-and-kotlin-stdlib
jlink {
    forceMerge("kotlin")
}
In the Gradle menu in IntelliJ, press “Sync” to load the changes, and the JavaFX libraries should be loaded. If you expand “External Libraries” in the Project view, you can see that the JavaFX libraries have been installed.
Example: HelloFX
The following application shows how to create a simple window with some graphics. Although longer than our console version of “Hello Kotlin”, it accomplishes quite a lot with minimal code. We’ll discuss this in further detail below.
class App : Application() {
    override fun start(stage: Stage) {
        val image = Image("java.png", 175.0, 175.0)
        val imageView = ImageView(image)
        val label = Label(
            System.getProperty("java.vendor") +
            System.getProperty("java.version") + "\n" +
            System.getProperty("javafx.version"))
        val box = VBox(imageView, label)
        VBox.setMargin(label, Insets(10.0))
        val scene = Scene(box, 175.0, 225.0)
        stage.setResizable(false)
        stage.setScene(scene)
        stage.show()
    }
}
This is actually pretty impressive when you realize that we have just created:
- A window with a titlebar and min/max/restore buttons (resizing is disabled here via setResizable(false)).
- Content centred in the window.
- A UI that will inherit the appearance of any platform where it runs. Execute this on Windows, and the buttons will have a standard appearance and positioning for that platform!
Classes
In JavaFX, our highest level abstractions are the Application class, with one or more Stage classes representing the application windows, and one or more Scene classes to manage the contents of a window. Nodes represent the individual graphical elements.
As we saw in the previous chapter with JavaFX, it’s standard practice in 2D graphical applications to represent the interface as a scene graph of objects. In JavaFX, the Scene class maintains this scene graph, consisting of various nodes, for each Scene that we display. Note that it’s possible to have multiple windows, each with multiple scenes, each of which manages a different scene graph. (Multiple windows can be displayed at once, but only one scene graph can be displayed at a given time in a window, representing the current window contents).
Application
The Application class is the top-level representation of the application. It serves as the application entry point, replacing the main() method. During launch, a JavaFX application will perform the following steps:
- Constructs an instance of the specified Application class.
- Calls the init() method.
- Calls the start(javafx.stage.Stage) method, passing in a default stage.
- Waits for the application to finish, which happens when either of the following occurs:
  - the application calls Platform.exit()
  - the last window has been closed and the implicitExit attribute on Platform is true
- Calls the stop() method.
The start() method is abstract and MUST be overridden. The init() and stop() methods are optional, but MAY be overridden. It’s fairly normal to just override start() and ignore the others most of the time.
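A minimal sketch of this lifecycle in code:

import javafx.application.Application
import javafx.scene.Scene
import javafx.scene.layout.StackPane
import javafx.stage.Stage

class LifecycleApp : Application() {
    override fun init() {
        // called before start(); no Stage exists yet
    }

    override fun start(stage: Stage) {
        stage.scene = Scene(StackPane(), 320.0, 240.0)
        stage.show()
    }

    override fun stop() {
        // called after the last window closes, or after Platform.exit()
    }
}

fun main() = Application.launch(LifecycleApp::class.java)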
Stage
The Stage class is the top-level container or application window. You can have multiple stages, representing multiple windows.
Stage derives from javafx.stage.Window. A Stage instance is automatically created by the runtime, and passed into the start() method.
Stage methods operate at the window level:
- setMinWidth(), setMaxWidth()
- setResizable()
- setTitle()
- setScene()
- show()
Scene
The Scene (javafx.scene.Scene) is a container for the content in a scene graph. Although you can create multiple scenes, only one can be attached to a window at a time, representing the “current” contents of that window.
To construct a scene, and set it up:
- Create a scene graph consisting of a container holding one or more nodes;
- Add the root node of the scene graph to the scene;
- Add the scene to a stage and make the stage visible.
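In code, those three steps look something like this (a sketch, using a single OK button as the scene graph’s only node):

import javafx.scene.Scene
import javafx.scene.control.Button
import javafx.scene.layout.VBox
import javafx.stage.Stage

fun setupScene(stage: Stage) {
    val root = VBox(Button("OK"))  // 1. a scene graph: a container holding one node
    val scene = Scene(root)        // 2. add the root node to the scene
    stage.scene = scene            // 3. add the scene to the stage...
    stage.show()                   //    ...and make the stage visible
}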
Scene methods manipulate the scene graph, or set properties for the entire graph:
- setRoot(Node)
- setFill(Paint)
- getX(), getY()
Node
Node is the base class for all elements of a scene graph. Types of nodes include:
- a drawing Canvas and drawable shapes like Circle, Rectangle, and Line.
- standard widgets like Button, MenuBar, Spinner, Label and TextField.
- media playback widgets like ImageView and MediaView.
- animations like SequentialTransition and FadeTransition.
- meta objects like Camera and LightBase to offer fine control of the scene.
Nodes have common properties for position (x, y), width and height, background colour and so on. These can be set manually in code, or in the case of visual properties, associated with a CSS stylesheet.
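For example, a node’s position can be set in code, and its visual properties set either in code or through CSS (an inline style is shown here as a sketch):

import javafx.scene.control.Label

val label = Label("Hello").apply {
    layoutX = 10.0   // position within the parent (if the parent permits manual placement)
    layoutY = 20.0
    style = "-fx-background-color: lightgray; -fx-font-size: 14px;"
}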
JavaFX is pretty comprehensive, but you might want to implement something that isn’t built into that toolkit e.g. date widgets.
Luckily, you can include projects that expand the standard widgets. These are intended to be imported and used alongside the standard JavaFX widgets.
- ControlsFX expands the standard set to include checklists, breadcrumb bars and other unique widgets.
- JFxtras includes a calendar widget, gauges and other useful widgets.
Layouts
Layout is how items are arranged on the screen. Layout classes are branch nodes that have built-in layout behaviour. Your choice of parent class to hold the nodes determines how its children will be laid out.
| Layout Class | Behaviour |
|---|---|
| HBox | Layout children horizontally in-order |
| VBox | Layout children vertically in-order |
| FlowPane | Layout left-right, top-bottom in-order |
| BorderPane | Layout across sides, centre in-order |
| GridPane | 2D grid, with cells the same size |
Example: Java Version
Here’s the Java version example from above, annotated. The sequence to set up a window is:
- Define the nodes (the image, imageView and label).
- Create a layout as the root of the scene graph (the VBox), which will hold the nodes.
- Add the root node to the scene.
- Add the scene to the stage.
- Show the stage.
class App : Application() {
    override fun start(stage: Stage) {
        // imageView is our first node
        val image = Image("java.png", 175.0, 175.0)
        val imageView = ImageView(image)
        // label is our second node
        val label = Label(
            System.getProperty("java.vendor") +
            System.getProperty("java.version") + "\n" +
            System.getProperty("javafx.version"))
        // box is our layout that will manage the position of our nodes
        val box = VBox(imageView, label)
        VBox.setMargin(label, Insets(10.0))
        // create a scene from the layout class, and attach to the stage
        val scene = Scene(box, 175.0, 225.0)
        stage.setScene(scene)
        // set window properties and show it
        stage.setResizable(false)
        stage.show()
    }
}
Events
JavaFX expands on the Listener model that was introduced in Java Swing, and provides support for a wide variety of events. The Event class is the base class for a JavaFX event. Common events include:
- MouseEvent − This is an input event that occurs when a mouse is clicked. It includes actions like mouse clicked, mouse pressed, mouse released, mouse moved.
- KeyEvent − This is an input event that indicates the key stroke occurred over a node. This event includes actions like key pressed, key released and key typed.
- WindowEvent − This is an event related to window showing/hiding actions. It includes actions like window hiding, window shown.
Nodes have convenience methods for handling common event types. They include:
- setOnMouseClicked()
- setOnMousePressed()
- setOnMouseReleased()
- setOnMouseMoved()
- setOnKeyPressed()
- setOnKeyReleased()
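A sketch of how these are used; the installHandlers helper is hypothetical, and assumes a node and scene from an application like the ones above:

import javafx.application.Platform
import javafx.scene.Node
import javafx.scene.Scene
import javafx.scene.input.KeyCode

fun installHandlers(node: Node, scene: Scene) {
    // report clicks in the node's local coordinates
    node.setOnMouseClicked { event ->
        println("clicked at (${event.x}, ${event.y})")
    }
    // quit the application when Escape is pressed anywhere in the scene
    scene.setOnKeyPressed { event ->
        if (event.code == KeyCode.ESCAPE) Platform.exit()
    }
}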
Additionally, there is a generic “action” handler which responds to the standard interaction with a control e.g. pressing a button, or selecting a menu item.
For example, here’s a handler for a “save” button (from sample-code/desktop/contacts):
val save = Button("Save")
save.setOnAction { event ->
    model.add(Contact(name.text, phone.text, email.text))
    model.save()
}
Packaging
Scripts are a simple way to get your application to launch, but they struggle when you have complex dependencies, or resources that need to be included (as you often will with a GUI application). If you are building a JavaFX or Compose desktop application, you should consider using jlink or jpackage to build an installer.
JLink will let you build a custom runtime that will handle the module dependencies for JavaFX. The simplest way to do this is to add the JLink plugin to your build.gradle file and let Gradle handle it.
plugins {
    id 'org.beryx.jlink' version '2.25.0'
}
You can configure it further in the build.gradle file. For a full set of options, see the Badass-JLink plugin page.
jlink {
    launcher {
        name = "clock"
    }
    imageZip.set(project.file("${project.buildDir}/image-zip/clock-image.zip"))
}
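With the plugin configured, the task can also be run from the command line (assuming your project uses the standard Gradle wrapper):

$ ./gradlew jlink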
Alternatively, we can rebuild the clock sample from IntelliJ using Gradle - build - jlink. Either way, this produces a runtime image in build/image. Here’s the resulting directory structure. Notice that it includes a number of libraries that our application needs to run.
$ tree build/image -L 2
AnalogClock/build/image
├── bin
│ ├── clock_advanced
│ ├── clock_advanced.bat
│ ├── java
│ ├── jrunscript
│ └── keytool
├── conf
│ ├── net.properties
│ ├── security
│ └── sound.properties
├── include
│ ├── classfile_constants.h
│ ├── darwin
│ ├── jawt.h
│ ├── jni.h
│ ├── jvmti.h
│ └── jvmticmlr.h
├── legal
│ ├── java.base
│ ├── java.datatransfer
│ ├── java.desktop
│ ├── java.prefs
│ ├── java.scripting
│ ├── java.xml
│ └── jdk.unsupported
├── lib
│ ├── classlist
│ ├── fontconfig.bfc
│ ├── fontconfig.properties.src
│ ├── jrt-fs.jar
│ ├── jspawnhelper
│ ├── jvm.cfg
│ ├── libawt.dylib
..... (continues)
Running the top-level bin/clock_advanced launcher script will execute our application.
$ ./clock_advanced
Creating installers
Finally, we can use jpackage to create native installers for a number of supported operating systems. JPackage is included as a console application in Java JDK 16 or higher, and will work with any JVM language (e.g. Java, Kotlin, Scala). The full guide is here.
An installer is an application that, when executed, installs a different application for the user. We need installers because most applications consist of many different files: executables, libraries, resources (images, sound files), preference files and so on. These need to be installed in the correct location, and sometimes registered, to function correctly.
Tasks that the installer performs include:
- Copying application files to the correct location.
- Installing and registering system libraries.
- Making changes to the system registry (or similar system databases).
- Creating icons on the desktop, or applications folder.
- Prompting the user if any of these tasks require elevated privileges.
Instead of running jpackage manually, we will install a plugin into IntelliJ and use that environment to generate our installers. We can do this by installing the Badass-JLink plugin. To use the plugin, include the following in your build.gradle script:
plugins {
    id 'org.beryx.jlink' version '2.25.0'
}
JPackage itself has a number of other options that you can specify in the build.gradle file. The full list of options is on the plugin website.
// build.gradle file options for jpackage
jlink {
    options = ['--strip-debug', '--compress', '2', '--no-header-files', '--no-man-pages']
    launcher {
        name = 'hello'
        jvmArgs = ['-Dlog4j.configurationFile=./log4j2.xml']
    }
}
If you install the plugin correctly, you should see the jpackage command in Gradle - build - jpackage. Run this and it will create platform installers in the build/distribution directory.
On macOS, the result is a standard installer: drag the clock_advanced icon to the Applications folder, and you can then run it from that folder.
Installers are meant for graphical applications. If you are building a JavaFX or Compose desktop application, this is the right choice. If you’re building a console application, you probably want a script instead (see previous step) so that you can execute it from the console directly.
1. Photoshop, for instance, famously has multiple windows tiled over the desktop. It’s also a very, very complex program, so it needs to split up functionality like this. ↩︎
2. Ctrl on Windows and Linux, CMD on Mac. ↩︎
3. Though not strictly required, all modern toolkits are built around the idea of an event-driven architecture, where events or messages are used to communicate changes to system state, or to signal user intentions. ↩︎
4. Note that this is the resolution that the screen is set to display, which may be different from the resolution that the screen natively supports. For example, my monitor is meant to run at 2880x1440, but I can set it to any desired resolution up to those values. ↩︎