Real-World JAVA
Real-World Java®
HELPING YOU NAVIGATE THE JAVA ECOSYSTEM
Victor Grazi
Jeanne Boyarsky
Copyright © 2025 by John Wiley & Sons, Inc. All rights, including for text and data mining, AI training, and similar
technologies, are reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means,
electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of
the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through
payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923,
(978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission
should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201)
748-6011, fax (201) 748-6008, or online at www.wiley.com/go/permission.
Trademarks: WILEY and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its
affiliates, in the United States and other countries, and may not be used without written permission. Java is a trademark or
registered trademark of Oracle America, Inc. All other trademarks are the property of their respective owners. John Wiley &
Sons, Inc. is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and authors have used their best efforts in preparing this
book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book
and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be
created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not
be suitable for your situation. You should consult with a professional where appropriate. Further, readers should be aware
that websites listed in this work may have changed or disappeared between when this work was written and when it is read.
Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not
limited to special, incidental, consequential, or other damages.
For general information on our other products and services, please contact our Customer Care Department within the United
States at (800) 762-2974, outside the United States at (317) 572-3993. For product technical support, you can find answers
to frequently asked questions or reach us via live chat at https://support.wiley.com.
If you believe you’ve found a mistake in this book, please bring it to our attention by emailing our reader support team at
[email protected] with the subject line “Possible Book Errata Submission.”
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in
electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Control Number: 2024934523
VICTOR GRAZI is a voracious learner who back in 1996 heard about this new technology called “the Internet”
and a new language called Java that made the Internet blaze! Inducted as a Java Champion in 2012, he serves
on the Java Community Process Executive Committee, leads a JSR, served as Java Lead Editor on InfoQ maga-
zine, and has traveled the world evangelizing Java. He is the author of Java Concurrent Animated, an application
that has helped thousands of developers visualize Java concurrency.
JEANNE BOYARSKY is excited to be publishing this, her 10th book. She was selected as a Java Champion in
2019 and is a leader of the NYJavaSIG. She has worked as a Java developer for more than 22 years at a bank
in New York City, where she develops, mentors, and conducts training. Besides being a senior moderator at
CodeRanch.com in her free time, she works on the forum code base. Jeanne also mentors the programming
division of a FIRST robotics team, where she works with students just getting started with Java. She also speaks
at several conferences each year. Jeanne is also a Distinguished Toastmaster and a Scrum Master. You can find out
more about Jeanne at https://www.jeanneboyarsky.com.
ABOUT THE TECHNICAL EDITOR
RODRIGO GRACIANO is a seasoned software engineer, Java advocate, and technology leader. He has a BSc in
computer science from UNILASALLE, Brazil, and has been working with Java since 2006. In 2011, Rodrigo
moved to the United States, where he now lives with his wife, Manuela, and their two sons, Lucas and Theodore.
Rodrigo is a senior member and leader of the NYJavaSIG, the New York Java User Group, where he actively
contributes to organizing monthly events to promote the Java language and its community. A frequent speaker
at tech conferences across South America, the United States, and Europe, Rodrigo shares his passion for software
development and Java best practices.
Currently, he serves as a director of engineering at BNY. In recognition of his contributions to the Java com-
munity, Rodrigo was nominated as a Java Champion in 2022.
ABOUT THE TECHNICAL PROOFREADER
BARRY BURD is a professor in the Department of Mathematics and Computer Science at Drew University in
Madison, New Jersey. He’s a director of the Garden State Java User Group and, along with Jeanne, a leader of the
NYJavaSIG. He’s the author of several Java books, including Java For Dummies, soon to be in its 9th edition. His
most recent book is Quantum Computing Algorithms published by Packt. In 2020, he was honored to be named
a Java Champion.
ACKNOWLEDGMENTS
We would like to thank numerous individuals for their contributions to this book. Thank you to Patrick Walsh
and Archana Pragash for guiding us through the process and making the book better in many ways. Thank you
to Rodrigo Graciano for being our technical editor and Barry Burd for being our technical proofreader. They
made the book better in so many ways. And a special thank-you to our copy editor Kim Wimpsett for finding
subtle errors that everyone (including us!) missed. This book also wouldn’t be possible without many people at
Wiley, including Kenyon Brown, Pete Gaughan, Ashirvad Moses, and many others.
Victor would personally like to say thank-you to his wife, Victoria, for all the date nights she sacrificed to let
him work on this book. He continues to learn from her and be awed by her vision every single day. He also wants
to thank his parents, Jack and Claudie Grazi, for all they have invested in him. Finally, he thanks posthumously
Dr. Isadore Glaubiger, his math professor way back when at Brooklyn Tech, who inspired him and taught him to
think. And a special thanks to Jeanne for her organizational skills and attention to detail. Were it not for Jeanne,
he says he never would have made it past page 1.
Jeanne would personally like to thank Dani, Janeice, Kim, Norm, Scott, and Shweta for their support during a difficult month
that overlapped with book writing. She also wants to thank Scott for his patience as Jeanne worked on two books
simultaneously (this one and Oracle Certified Professional Java 21 Developer Study Guide). A big thank-you to
Victor for coming up with the idea for the book and being a great co-author in bringing it to life. Finally, Jeanne
would like to thank all of the new programmers at CodeRanch.com and FIRST robotics teams FRC 694 and
FTC 310/479 for the constant reminders of how new programmers think.
CONTENTS
INTRODUCTION xxix
CHAPTER 1: HOW WE GOT HERE: HISTORY OF JAVA IN A NUTSHELL 1
Introduction 1
Understanding the Stewardship of Java 2
Differentiating Key Java Versions 4
Coding Generics in Java 5 4
Coding with Functional Programming from Java 8 4
Coding Modules from Java 11 5
Coding Text Blocks and Records from Java 17 6
Learning About Virtual Threads from Java 21 7
Working with Deprecation and Retirement 7
Identifying Renames 8
Changing to Jakarta EE 8
Renaming Certifications 8
Understanding the Principles of Change 8
Further References 9
Summary 9
CHAPTER 2: GETTING TO KNOW YOUR IDE: THE SECRET TO SUCCESS 11
Remote Debugging 25
Debugging with Hot Swap 26
Refactoring Your Code 26
Avoiding Duplicate Code 26
Renaming Members 31
Inlining 31
Changing Signatures: Adding and Removing Parameters 32
Exploiting the Editor 32
Automated Reformatting of Code 32
Organizing Imports 32
Reformatting Code 32
Unwrapping 33
Comparing Code 34
Using Column Mode 34
Extending the IDE 35
Peeking at Eclipse 36
Peeking at VS Code 37
Comparing IDEs 39
Further References 40
Summary 40
CHAPTER 3: COLLABORATING ACROSS THE ENTERPRISE WITH
GIT, JIRA, AND CONFLUENCE 41
Cherry-Picking 60
Reverting and Resetting 60
Optimizing with IDE Support 61
Looking at the Commit Window 63
Using the Diff-Viewer Window 64
Creating README Files with Markdown Language 69
Using Gitflow for Collaboration 70
Using Jira for Enterprise Process Collaboration 72
Getting Started with Jira 72
Creating a Project 73
Creating an Issue 74
Linking to an Epic 75
Working with Boards 76
Creating a Sprint 76
Adding Users 78
Adding Columns 78
Using Filters 80
Seeing My Issues 80
Querying with JQL 80
Making Bulk Changes 82
Connecting to Git 83
Working with Confluence, the Enterprise Knowledge
Management System 83
Further References 86
Summary 86
CHAPTER 4: AUTOMATING YOUR CI/CD BUILDS WITH
MAVEN, GRADLE, AND JENKINS 87
* Wildcards 315
.. Wildcards 316
Using @AfterReturning 316
Using @AfterThrowing 317
Using @After 317
Using @Around 317
Using @Pointcut 319
Combining Pointcuts 320
Annotation-Based Pointcuts 321
Further References 323
Summary 323
CHAPTER 12: MONITORING YOUR APPLICATIONS:
OBSERVABILITY IN THE JAVA ECOSYSTEM 325
INDEX 415
INTRODUCTION
CONGRATULATIONS ON MASTERING THE BASICS OF JAVA! You’ve journeyed through loops, switches, and
exception handling, you’ve made sense of lambdas and streams, and you’re starting to get comfortable with the
core Java APIs. You’ve built a solid foundation, and that’s no small accomplishment. You are ready to start your
elite career as a Java engineer!
But Java is more than a language; it’s a vast ecosystem filled with tools, platforms, libraries, and APIs that
empower you to extend Java’s capabilities into every corner of enterprise development.
This book is your guide to navigating that ecosystem. While numerous tomes delve into specialized areas of Java,
few provide an accessible roadmap across the broader landscape. Here, you’ll find clear explanations and practi-
cal coding examples of the technologies you can expect to encounter most often in enterprise Java environments.
The knowledge you gain here will prepare you to delve deeper into additional resources and broaden your skills
in this dynamic ecosystem.
1
How We Got Here: History
of Java in a Nutshell
WHAT’S IN THIS CHAPTER?
INTRODUCTION
You’ve learned Java, and you’re ready to start using it at work. Or you are already using it at work, but
you’ve discovered the daunting ecosystem that they never taught you in school. This book is for those who
have already learned how to code in Java and are looking for the next step. Whether you are a student,
career changer, or professional programmer switching languages, this is your opportunity to learn about
the Java ecosystem.
This book is not intended to be a comprehensive guide to the Java ecosystem; that would require many
thousand-page tomes. If you are just learning Java, we recommend Head First Java, 3rd Edition (O’Reilly
Media, 2022) or Java For Dummies (For Dummies, 2025). Then come back to this book. The goal of this
book is to expose the reader to some of the most common frameworks, tools, and techniques used in enter-
prise Java development shops, with enough background and examples to dive right in.
In this chapter, we will cover some information about Java in general before getting into specialized topics
in the later chapters. While chapters can be read out of order, we recommend reading Chapters 1–4 before
skipping around.
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. The code can also be found at https://github.com/realworldjava/
Ch01-History. See the README.md file in that repository for details.
With the new release cadence, a new version of Java comes out every six months. You may be thinking that many
companies don’t want to upgrade every six months, and you’d be right! There are also less frequent releases
called Long-Term Support (LTS) releases. When the new release cycle started, LTS releases were every three years.
They are now every two years.
Table 1.1 shows releases from the start of the new release cadence to the publishing of this book. As you can
see, there are now regular, predictable releases. This book uses Java 21, which was the latest LTS at the time
of printing.
Figure 1.2 shows the LTS release schedule. You can predict LTS dates into the future now that there is a pattern.
This helps companies plan.
Many companies run their Java programs remotely, such as through a website. Until 2020, this was often done
through Spring or Java EE (Enterprise Edition). See Chapter 6, “Getting to Know the Spring Framework,” and
Chapter 14, “Getting to Know More of the Ecosystem,” for more details. In 2020, Oracle handed over steward-
ship of Java EE to an open-source foundation. Since “Java” is a trademark, the Java EE framework was renamed
to “Jakarta EE” at that point.
module com.wiley.realworldjava.modulecode {
    exports com.wiley.realworldjava.modulecode.utils;
    requires java.sql;
}
The module keyword lets Java know we are specifying a module here rather than a class or interface or
other Java top-level type. The exports directive tells Java that callers can reference the utils package directly
from their code. Since the internal package is not mentioned in the module-info file, callers cannot use it
directly, only through BookUtils. Finally, the requires directive specifies that we use the java.sql module,
which is provided by Oracle with the JDK. There is another module we use called java.base, but that one is
provided automatically without having to specify it. It contains common classes such as String and List.
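To illustrate the other side of the exports directive, here is a sketch of the module-info.java a caller might write before using the exported utils package. The module name com.example.app is hypothetical:

```java
// module-info.java of a hypothetical consuming application
module com.example.app {
    // Gain access to the public types in the exported ...utils package.
    requires com.wiley.realworldjava.modulecode;
}
```

Without this requires directive, the consumer’s code would not compile against the utils package, even though it is exported.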
Line 8 demonstrates that the getter method is x() rather than getX(). Finally, line 9 implicitly calls
toString() showing you can get a useful implementation without writing any code.
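The record listing those line references point to is not reproduced here, but a minimal sketch consistent with the discussion behaves the same way. The Point name is an assumption, not the book’s actual listing:

```java
// Minimal record sketch; a record generates the constructor,
// accessors, equals, hashCode, and toString for us.
public class RecordDemo {

    record Point(int x, int y) { }  // nested records are implicitly static and final

    public static void main(String[] args) {
        Point p = new Point(3, 4);  // generated canonical constructor
        System.out.println(p.x());  // accessor is x(), not getX(): prints 3
        System.out.println(p);      // implicit toString(): prints Point[x=3, y=4]
    }
}
```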
Records are convenient when you need a simple immutable class. You can override methods in the record body
or add your own methods. When you need more customization power or setters such as for Java Persistence API
(JPA) entities, Project Lombok may better match your needs. See Chapter 8, “Annotation-Driven Code Using
Project Lombok,” for details.
Line 4 uses a Javadoc tag, @deprecated, to include in the documentation that using this method is discouraged.
The @link tag is used to explain that the magicNumber() call without a parameter is preferred instead.
Line 8 shows the annotation marking the method as @Deprecated. IDEs show the use of deprecated code as a
warning to call your attention to it. See Chapter 2, “Getting to Know your IDE: The Secret to Success,” for more
information on IDEs.
Line 8 also shows the forRemoval attribute, which was added in Java 9. This is used to signify whether the
intent is to remove the deprecated code in the future or merely encourage alternative application programming
interfaces (APIs). This allows developers to make intelligent decisions on whether to migrate code.
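The deprecated listing itself is not reproduced here; a hedged sketch of what such a method might look like follows. The magicNumber name comes from the text, but the class name, version strings, and bodies are illustrative:

```java
public class DeprecationDemo {

    /**
     * @deprecated Prefer the no-argument {@link #magicNumber()} instead.
     */
    @Deprecated(since = "2.0", forRemoval = true)  // forRemoval was added in Java 9
    static int magicNumber(int seed) {
        return seed;
    }

    static int magicNumber() {
        return 42;
    }

    public static void main(String[] args) {
        System.out.println(magicNumber());   // preferred API: prints 42
        System.out.println(magicNumber(7));  // still compiles and runs, with a deprecation warning
    }
}
```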
Oracle keeps a list of APIs that were removed at https://docs.oracle.com/en/java/javase/21/
migrate/removed-apis.html. This page shows what was removed in each version between Java 9 and Java
21. For example, in Java 21, this was one of two APIs that were removed:
java.lang.ThreadGroup.allowThreadSuspension(boolean)
You are unlikely to have used this method. Oracle values backward compatibility and does not remove code in
broad use. In fact, some of the methods in java.util.Date have been deprecated since Java 1.1!
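As a quick demonstration of that longevity, the following sketch calls one of those long-deprecated java.util.Date methods. It still compiles (with a warning) and runs on Java 21:

```java
import java.util.Date;

public class DateDemo {

    public static void main(String[] args) {
        Date now = new Date();
        // getYear() has been deprecated since Java 1.1, yet it still works today.
        int year = now.getYear() + 1900;  // the deprecated accessor returns years since 1900
        System.out.println("Current year: " + year);
    }
}
```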
IDENTIFYING RENAMES
Given the evolution of the Java ecosystem over the years, there are some renames you should be familiar with.
This section helps you identify whether old guides/documentation are equivalent or obsolete.
Changing to Jakarta EE
As mentioned, in 2020, Oracle announced the migration of Java EE to Jakarta EE. Since that wasn’t so long ago,
documentation mentioning Java EE is still relatively useful. After all, a Java EE tutorial on servlets, commonly
used to serve web pages, is mostly the same as Jakarta EE servlets, and the concepts described in good tutorials
will all apply.
The key difference is that Jakarta EE 9 renamed the package name prefix from javax to jakarta since Java is
trademarked by Oracle. Additionally, new features are covered only in the Jakarta EE documentation.
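The package rename shows up directly in import statements. The following is a sketch only; a real source file would contain one of these lines, not both, and compiling either requires the corresponding servlet API dependency:

```java
// Before (Java EE / Jakarta EE 8 and earlier):
import javax.servlet.http.HttpServlet;

// After (Jakarta EE 9 and later):
import jakarta.servlet.http.HttpServlet;
```

Migrating typically means updating these import prefixes along with the dependency coordinates in your build file.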
Renaming Certifications
Oracle had a number of certifications including Sun Certified Java Associate (SCJA), Sun Certified Java Profes-
sional (SCJP), and Sun Certified Master Java Developer (SCMJD) that you could earn. All of these certifications
were renamed to begin with “O,” for Oracle, giving us certifications like OCJA, OCJP, and OCMJD. When
looking at résumés, remember that these certifications are all equivalent.
Many of these certifications don’t exist anymore or have changed significantly since the renaming, so learning
from websites with the old Sun cert names is unlikely to be useful.
➤➤ Java Enhancement Proposals (JEPs): For example, virtual threads are JEP 444. Some Java features are
released as previews before they get committed to as final. Virtual threads had its first preview in Java
19 as JEP 425 and second preview in Java 20 as JEP 436 before being fully released in Java 21.
➤➤ Java Specification Requests (JSRs): JSRs are bigger and more impactful than JEPs. For example, lambdas
were JSR 335.
While committed to a six-month release cycle, Java heavily values backward compatibility of both the language
itself and APIs. Using a later version of Java shouldn’t prevent your code from compiling or running. Backward
compatibility isn’t always achieved, but Oracle tries to minimize breaking changes as much as possible.
Preview features assist in this goal since they aren’t locked into backward compatibility yet. This is why you
shouldn’t use preview features in production; they could change in any way. Backward compatibility is why so
few deprecated APIs are actually removed.
FURTHER REFERENCES
➤➤ https://docs.oracle.com/en/java/javase/15/text-blocks/index.html
Oracle’s guide to text blocks
➤➤ https://docs.oracle.com/en/java/javase/14/language/records.html
Oracle’s guide to records
➤➤ OCP Certified Professional Java SE 17 Developer Study Guide (Sybex, 2022)
Jeanne’s certification study guide for more details on all features
➤➤ The Java Module System (Manning, 2019)
Book on modules with great detail
SUMMARY
The following are the key takeaways from the chapter:
➤➤ Java is owned by Oracle, with others helping guide the process.
➤➤ New versions of Java are released every six months, with less frequent LTS versions.
➤➤ New features are continually being added.
➤➤ APIs that are no longer encouraged for use are deprecated and, optionally, tagged for removal.
➤➤ Backward compatibility is a central tenet of Java’s evolution and ensures that existing Java applications
will run smoothly without requiring extensive modifications when a new version of Java is released.
2
Getting to Know your IDE: The
Secret to Success
WHAT’S IN THIS CHAPTER?
In this chapter, we’ll look at the “big three” IDEs—IntelliJ, Eclipse, and Visual Studio Code—and cover the
following topics:
➤➤ Getting to know your keyboard shortcuts: Your IDE is a productivity tool, and you can often iden-
tify coding professionals by their use of keyboard shortcuts. There are many, but in this section, we
will discuss some of the most important ones and how to learn them, and we’ll show you how to
discover more.
➤➤ Professional code refactoring: If you do as many code reviews as we have, you can easily spot nonpro-
fessionals by how they repeat code snippets throughout their codebase. You will see how your IDE can
help you locate and correct “copy-and-paste” code.
➤➤ Other productivity pearls: You should not forget that the primary purpose of your IDE is editing! We
will review some powerful, lesser-known editing features.
This chapter is certainly not attempting to be a comprehensive guide to every feature. You can get that from the
user manuals. Rather, we will emphasize techniques for leveraging your IDE features to greatly improve your
productivity.
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. The code can also be found at https://github.com/realworldjava/
Ch02-IDEs. See the README.md file in that repository for details.
There is a built-in code locator that finds duplicate code across disparate files, even in a large codebase, and refac-
torings are always intuitive, precise, and reliable! That said, Eclipse is still widely used, and Visual Studio Code
has been growing both in capability and traction.
In this chapter, we will discuss the big three IDEs, starting with an emphasis on IntelliJ and concluding with the
important differences between it, Eclipse, and VS Code. In the rest of this book, we will mostly be using IntelliJ.
Nonetheless, if you are a loyal Eclipse or VS Code user, much of this discussion will translate easily, differing
mostly in keyboard shortcuts and menu offerings. IntelliJ provides a keyboard shortcut for all its important func-
tionality, and you can add and modify these as you like. You can also select an Eclipse theme that borrows many
of those shortcuts, in case you happen to have extensive Eclipse experience. The other IDEs also provide basic
key mappings out of the box and allow you to create your own custom mappings. But if you enjoy the power of
keyboard shortcuts, you will find IntelliJ provides the largest selection of these out of the box.
TIP When you launch IntelliJ, a window will come up offering feature tips. Read them!
There are versions of the top three IDEs available for Windows, Mac, and Linux. The more powerful your com-
puter, the faster and more responsive your IDE will be. See the “Further References” section to download a new
IDE if you’d like to try a different one than you are used to.
Let’s start by looking at IntelliJ, which is by far our favorite IDE. Remember to read this section even if you plan
to use Eclipse or VS Code, as those sections assume you have read the entire chapter. And who knows, you might
even decide to try IntelliJ once you see what it can do!
There are many other options including whether you want to use a build tool, which is covered in depth in
Chapter 4, “Automating Your CI/CD Builds with Maven, Gradle, and Jenkins.”
NOTE Once you learn about Spring Boot in Chapter 6, “Getting to Know the Spring
Framework,” you will often choose the Spring Initializer type instead.
TIP Creating a project from existing sources is particularly useful if you are switching
over from another IDE like Eclipse or if you are using a build tool and already have the
directories on your machine.
You can now navigate to the URL you provided where you will find your code, as shown in Figure 2.3.
All of these give you the option to choose Run from the context menu, which will create a default run configura-
tion. (Debug is also an option. See the “Debugging Your Code” section for more on debugging.) All of these also
give you the Modify The Run Configuration option, possibly in a submenu that lets you specify the details of
your run configuration.
Alternatively, you can click Current File on top and choose Edit Configurations from the drop-down that appears
(Figure 2.5). Despite the name, you have the option to add a new runtime configuration in addition to editing
existing ones.
Select an appropriate application type (for example, Application for a vanilla Java application). See Figure 2.6.
Then complete the configuration form by entering a main class, Java version, and so on, as in Figure 2.7.
If you are in a test class (annotated with @Test; see Chapter 7, “Testing Your Code with Automated Testing
Tools”), then you can click the green arrow next to any test method and select the Run option.
For example, to get a list of files that you recently worked on, press Ctrl+E/Cmd+E. It will take a few times to
remember it, so follow this advice:
1. Look it up in the menu.
2. Close the menu.
3. Use the shortcut.
TIP Many shortcuts differ between Windows and Mac. When that happens, we type the
Windows shortcut followed by a slash followed by the Mac version. If you are using Linux,
please follow the Windows shortcuts as they are usually the same.
In the following sections, you’ll see many useful shortcuts. There will be others throughout the chapter as we
cover specific topics. After a short time, the keyboard shortcuts will be etched in your memory, and you will find
your productivity increasing greatly.
Similarly, you can open a file that is not a class file using the shortcut Ctrl+Shift+N/Cmd+Shift+O. And you can
find a method name or class/instance variable name using Ctrl+Alt+Shift+N/Cmd+Option+O. Figure 2.10 uses
the camel case technique to search for the sendEmailConfirmation method by typing sendEmaCon.
You can navigate to the last position that you visited, and then the previous and so on, using Ctrl+Alt+Left
Arrow/Cmd+[. Then you can navigate in the forward direction using Ctrl+Alt+Right Arrow/Cmd+].
You can navigate to the last edit position using Ctrl+Shift+Backspace/Cmd+Shift+Backspace. While that moves
backwards through recent edits, we have not seen a shortcut to move forward afterward. As a work-around,
you can just move the cursor anywhere, and then the next Ctrl+Shift+Backspace/Cmd+Shift+Backspace will start
again from the most recent edit position.
If you want to find usages of a certain method, you can Ctrl/Cmd-click the method name to get a quick pop-up
with this information. Alternatively, you can use Alt+F7/Option+F7 to see the list in a window at the bottom
of your IDE.
You can create your own custom navigation by using bookmarks. In IntelliJ, Ctrl+Shift+1 (or any number from 1
to 9) will set a numbered bookmark at the cursor location. Then you can return to that bookmark using Ctrl+1
(or whichever number you used to create the bookmark). These are some of the few shortcuts that are the same
on Windows and Mac!
To get a quick outline of the API of the current file, use Ctrl+F12/Cmd+F12, as shown in Figure 2.11. You can
click within the outline to go right to that part of the file.
Reordering Code
Next try the unique Ctrl+W/Option+Up Arrow and Ctrl+Shift+W/Option+Down Arrow. Each press of Ctrl+W/
Option+Up Arrow starts at the cursor and selects increasing scope, first a word, then a phrase, then a block, a
method, a class, and so on. Ctrl+Shift+W/Option+Down Arrow reverses the order, bringing you back one click
at a time to your original selection. If you apply Ctrl+W/Option+Up Arrow to an arithmetic expression, each
Ctrl+W/Option+Up Arrow selects in the order it will calculate.
There’s a trick to selecting an entire method in two clicks, and that is to position your cursor to any space to the
left of the method name and hit Ctrl+W/Option+Up Arrow twice. Another click will select Javadoc and annota-
tions. That is very useful for reordering the methods in your code, because with the entire method selected, you
can then use Ctrl+Shift+Up Arrow/Cmd+Shift+Up Arrow and Ctrl+Shift+Down Arrow/Cmd+Shift+Down Arrow
to move the method up or down in the file, past the next method above or below it, and will also move any
Javadoc along with it. (Good code is self-documenting; great code has great Javadoc!)
Next, when you want to find errors and warnings in your code, use F2 to jump to the next one after the cursor,
and use Shift+F2 to jump backward to the previous one.
The Alt+Enter/Option+Enter shortcut opens a context-sensitive menu. For example, you can use it to fix a
compiler error or generate code for your class.
Each IDE has its own keyboard shortcuts. In this section, we have seen a few examples of some of the more use-
ful keyboard shortcuts, and as we cover more functionality, we will mention the related shortcuts. It is beyond
the scope of this book to cover every shortcut, but we encourage you to explore them and get fluent with them.
You can get the full listing from within the IDE for your operating system: select menu item Help ➪ Keyboard
Shortcuts PDF. Take the plunge with learning shortcuts. Mastery of the keyboard shortcuts will make you a better,
faster, and more professional programmer.
META SHORTCUTS
IntelliJ has a “keyboard-first” philosophy, making for some very powerful shortcuts. Luckily,
there are three more general shortcuts that are good to learn as you master more specific ones.
➤➤ Shift+Shift opens a feature called Search Anywhere. It’s a quick way to search for files or
symbols or even IntelliJ actions themselves.
➤➤ Control+Control opens the Run Anything feature, which lets you run things like a main
method or a test.
➤➤ Ctrl+Shift+A/Cmd+Shift+A opens the ability to find any action. This is a subset of
Shift+Shift, but it’s useful to go directly to that list.
Debugging a Program
The sample code in the GitHub repository for this chapter is a simulated order processing system, where a cus-
tomer can order a part, and the system will verify the customer’s charge account as well as the inventory. If the
order is successful, it will send a success email. Otherwise, it will send an order cancellation email. (In our sample,
it will just print a message to the console.)
First, you need to import the project. You can use menu item File ➪ New ➪ Project From Version Control and
supply the GitHub URL https://github.com/realworldjava/Ch02-IDEs.
Before starting a debugging session, take a moment to understand what this code does. The program flow starts
by calling the main method, which in turn creates a new OrderProcessor and then calls the processOrder
method on it.
To see the debugger in action, set two breakpoints where indicated in the comments, one in the main method on
line 6 and another in chargeAndAdjustInventory on line 28.
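The repository code is more elaborate, and the line numbers above refer to the real file there. Still, a highly simplified sketch of the flow helps when following the debugging steps. The class and method names follow the text; all bodies and parameters are assumptions:

```java
public class OrderProcessor {

    public static void main(String[] args) {
        // The text's first breakpoint sits in main.
        OrderProcessor processor = new OrderProcessor();
        processor.processOrder("sprocket", 3);
    }

    void processOrder(String part, int quantity) {
        if (chargeAndAdjustInventory(part, quantity)) {
            System.out.println("Sending success email for " + part);
        } else {
            System.out.println("Sending order cancellation email for " + part);
        }
    }

    boolean chargeAndAdjustInventory(String part, int quantity) {
        // The text's second breakpoint sits in this method; the real checks are simulated.
        boolean chargeApproved = quantity > 0;
        boolean inStock = quantity <= 10;
        return chargeApproved && inStock;
    }
}
```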
Next launch the program by clicking the green arrow in the margin to the left of the main method and selecting
Debug OrderProcessor.main(), or click the debug icon from the menu.
This will bring up the Debug window (see Figure 2.13).
The program will break at line 6. Choose Step Over (or hit F8) and see the execution step forward by one line.
Now choose Resume Program (F9), and the execution will continue until it reaches the next breakpoint in the
chargeAndAdjustInventory method.
When the program breaks, take a look at the variables in the debugger window (see Figure 2.14).
Now click the next frame in the call stack (processOrder: line 13), and see the variables in that scope. Then do
the same for the main:7 frame.
In this way, you can walk the call stack and see, and even modify, the variables in each scope. To modify a variable, right-click it in the Variables window and choose Set Value (or hit the F2 key).
FIGURE 2.13 (toolbar callouts): Step Over, Step Into, Step Out, Run To Cursor, Show Execution Point, Evaluate Expression, Rerun, Configure, Resume Program, Pause, Stop Debugging, View Breakpoints, Mute Breakpoints, Thread Dump, Debugger Settings.
If there are library classes in the call stack, you can choose to ignore them by clicking the funnel Hide Frames
From Libraries icon at the top of the debug frames window, as in Figure 2.14.
You can also mouse over a variable in the editor to see its value. String values are also displayed (in an alternate
font) next to the variable in the editor. Strings and primitives will be displayed as is. As for other object types, you
can click the values to drill into them.
If you navigate away from the execution point by opening other windows or by scrolling around, you can always
return to the execution point by hitting the Show Execution Point icon (Alt+F10/Option+F10).
You can mute all breakpoints by hitting the Mute Breakpoints icon and then clicking Resume Program (F9).
Execution will continue uninterrupted until you unmute them.
By right-clicking a breakpoint you can find settings to temporarily disable the breakpoint, set a condition that
must be true for it to pause, and enable other settings. If you put a breakpoint on an interface method, then any
execution that hits that method will cause the debugger to break.
TIP Let’s say you’re stepping through the debugger and you get an unexpected exception
that takes you far from where you were. Where was that exception thrown? Press
Ctrl+Alt+Left/Cmd+Alt+Left Arrow to get back to the previous location.
You can improve debugging performance by adjusting the layout settings at the top-right side of the Debug window. Uncheck Threads, Memory, and Overhead, as shown in Figure 2.16.
Remote Debugging
Debugging is a handy tool when you are writing your application and when you are diagnosing issues that
are easily reproducible. However, as Murphy’s law states, “Anything that can go wrong will go wrong.” In practice, this often manifests as a problem that occurs on the server but that you just can’t reproduce locally.
In such cases, Java supports remote debugging, which lets you debug code running on a remote server, setting breakpoints and viewing variables just as if the application were running locally.
We aren’t showing an example here, but to do remote debugging on the job, create a new run configuration and
select Remote JVM Debug as the type. Enter the host name, but retain the defaults unless the remote system
requires you to change them.
Before you can do remote debugging, you must launch the server using a special remote debugging agent. To do
so, copy the command-line argument -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,
address=*:5005 from the run configuration, as shown in Figure 2.17, and add it to your application’s
JVM arguments.
Once remote debugging is configured, you can debug the application just as if it were running locally, using the
steps discussed earlier.
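For example, if the server application were packaged as app.jar (a placeholder name; match the jar and port to your run configuration), the launch command might look like this:

```shell
# JDWP debug agent: listen for a debugger on port 5005 without suspending startup.
# suspend=n means the app starts immediately rather than waiting for a debugger.
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 -jar app.jar
```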
LOG SCROLLING
One final word about debugging: when you are in the debugger tab, if you want to see the
logging output, switch to the console tab. If the logging is pouring out to the console very
quickly, you can stop it from scrolling by simply clicking into the console window. The log
messages will still be appended at the bottom, but the scrolling will stop at the line where
you clicked. This click-to-pause feature is also available in the run console, not just the
debugging console.
Debugging is a vital skill for allowing your IDE to give you a tour of your code’s execution flow. Every IDE has
debugging capabilities similar to the ones described here. In the next section, we will discuss another vital skill,
which is code refactoring.
REFACTORING YOUR CODE
Extracting Methods
TIP IntelliJ has by far the most extensive refactor options of the major Java IDEs.
To extract code into a method, select the code, go to the Refactor menu, and select Extract/Introduce ➪ Method,
as shown in Figure 2.18. The shortcut is Ctrl+Alt+M/Cmd+Option+M, and it will be one of your most used
shortcuts, so use it until you remember it.
In Figure 2.19, we have two methods for sending emails. The first does so weekly, and the second does so every
other week. A closer look reveals that the two methods are basically doing the same things, with some slight
variations. Why not extract the common functionality into a method and pass in the differences as parameters?
Here’s the technique: start with one method, say the first one, processWeeklyReminders(), in Figure 2.19.
Step 1 is to factor out the noncommon elements by extracting any differences to variables. In our example, our
weekly and biweekly methods are similar, although they do have noncommon elements, specifically the number of
weeks and the method to send the email. Select the number 1 (the weeksToAdd variable in line 10) and choose
menu item Refactor ➪ Extract/Introduce ➪ Variable and assign it to the variable named numWeeks. Rearrange
the lines if necessary so that numWeeks and the Consumer are before the code to extract (see Figure 2.20).
Choose Ctrl+Alt+M/Cmd+Option+M, the shortcut for extracting a method. The IDE will assign a name, which
you want to change to something meaningful, sendIfTime in our case, as shown in Figure 2.22.
The IDE will offer to replace other places with the extracted method, as shown in Figure 2.23.
FIGURE 2.23: The IDE helps locate opportunities to apply the new method.
You can place the cursor in the method name and use Ctrl+Shift/Cmd+Shift with the Up or Down Arrow keys to
move the methods up and down in the class, as shown in Figure 2.24.
The code would look a lot cleaner if you inline the mailer parameters into the code. Just click the variable name
and choose Refactor ➪ Inline Variable (Ctrl+Alt+N/Cmd+Option+N) to inline the variables, as shown in
Figure 2.25. You’ll see inlining again shortly.
FIGURE 2.25: The final refactored version. Compare this to Figure 2.19.
You can see that the code is much neater now, with two smaller methods that delegate to the extracted method
containing the main code. Use this technique to safely refactor much more complex real-world code as well.
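Since Figures 2.19 through 2.25 aren’t reproduced here, here is a minimal sketch of the before/after shape of this refactoring. The names sendIfTime, numWeeks, and the Consumer come from the text; ReminderService, the parameters, and the method bodies are our own assumptions:

```java
import java.time.LocalDate;
import java.util.function.Consumer;

public class ReminderService {

    // Before the refactoring, processWeeklyReminders() and
    // processBiweeklyReminders() each duplicated the date check and the send
    // logic, differing only in the number of weeks and the email sent.
    // After extract method, each one is a one-line delegation:

    public void processWeeklyReminders(LocalDate lastSent, Consumer<String> mailer) {
        sendIfTime(lastSent, 1, mailer);   // weekly: one week between emails
    }

    public void processBiweeklyReminders(LocalDate lastSent, Consumer<String> mailer) {
        sendIfTime(lastSent, 2, mailer);   // biweekly: two weeks between emails
    }

    // The extracted method holds the common logic; the differences
    // (numWeeks and the Consumer that sends the email) are parameters.
    private void sendIfTime(LocalDate lastSent, int numWeeks, Consumer<String> mailer) {
        if (!LocalDate.now().isBefore(lastSent.plusWeeks(numWeeks))) {
            mailer.accept("Reminder: your order is due");
        }
    }
}
```

Any future fix to the reminder logic now happens in exactly one place.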
TIP You don’t need duplicate code for extract method to be valuable. It also helps
when dealing with methods that are too long, by creating smaller methods with clear
names to handle some of the logic.
Extracting methods is arguably the most important refactoring pattern. But all the refactoring patterns contribute
to good clean code. Let’s look at another important refactoring: renaming members, that is, renaming methods
and variables.
Renaming Members
Our next refactoring rule is to give your methods and variables meaningful names. While renaming, remember
to avoid nonstandard abbreviations. This is important for making your code self-documenting. If you are
working through legacy code with variable or method names that are too short or confusing, it will be helpful
to rename them to something meaningful.
To rename a variable, click the variable (no need to select the whole variable; just click anywhere in it), hit
Shift+F6, and then type the new name. It will be replaced with the new version throughout your codebase. And
guess what? This is another shortcut that is the same on Windows and Mac!
You can also rename methods and classes using the same Shift+F6 shortcut, and you can take it to the bank that
your code will still compile and preserve the same semantics.
Renaming members is an important pattern for making code more readable. But sometimes there are variables or
methods that are just unnecessary, and in those cases, we will benefit by inlining.
Inlining
The use of variables can often make code clearer, but not always. Sometimes a variable declaration just takes
up an extra line without adding any clarity. For example, in the following code snippet, the declaration of the
variable rgbInt uses an extra line without making the code any clearer. In such cases, it makes sense to inline
the expression and remove the variable declaration, as shown in Figure 2.26.
FIGURE 2.26: The variable declaration in line 8 adds no clarity; inline it.
You can inline the unwanted variable by clicking any occurrence of the variable in the code (for example, at the
insert cursor position in Figure 2.26, line 6) and hitting Ctrl+Alt+N/Cmd+Option+N. The code then changes to
what you see in Figure 2.27.
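As a concrete sketch (our own example; only the variable name rgbInt matches the figure):

```java
public class ColorUtil {

    // Before inlining: rgbInt exists only to be returned on the next line.
    static int toRgbBefore(int r, int g, int b) {
        int rgbInt = (r << 16) | (g << 8) | b;
        return rgbInt;
    }

    // After Refactor > Inline Variable (Ctrl+Alt+N/Cmd+Option+N):
    // the expression replaces the variable, saving a line with no loss of clarity.
    static int toRgbAfter(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }
}
```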
We won’t go through every aspect of refactoring here, but if you want to take your programming skills to the
next level, please see the “Further References” section for an excellent book on refactoring.
Once upon a time, IDEs had very limited refactoring capabilities, if any, and developers used manual techniques
to perform refactoring in as safe a manner as possible. Now that most of the common refactoring patterns are
built into the IDEs, make an effort to study them and use them.
But refactoring is not the only thing that IDEs are great at. Let’s look at some other interesting tools.
EXPLOITING THE EDITOR
Organizing Imports
Keeping your imports clean is a simple matter; just use the Code ➪ Optimize Imports menu option (Ctrl+Alt+O/
Ctrl+Option+O). This option removes unused imports and orders the imports to make them more organized.
Reformatting Code
To reformat your code, you can hit the menu item Code ➪ Reformat (Ctrl+Alt+L/Cmd+Option+L), which will
apply the formatting to the entire file. If you select a code snippet, that same command will just reformat the
selected bit of code. You can also select a package or directory in the Project Explorer, and that same command
will reformat everything in that directory.
The reformatting command understands the file types being reformatted and will apply appropriate formatting
for Java, JSON, CSS, XML, and so on.
The reformatting uses standard formats for each file type, but you can tweak things like adding spaces around an
equals sign in assignments, or where to place the squiggly bracket at the end of a method declaration. These are
found under the File menu item for Windows or IntelliJ menu item for Mac. Then select Settings ➪ Code Style for
each file type. For example, Figure 2.28 shows the Java options.
Unwrapping
One code smell we often find, even among experienced developers, is around exception handling. We are allergic to
any code that says throw new Exception("some message");. The proper approach is to handle exceptions
as early as possible, and if you must throw an exception, to throw as specific an exception as possible.
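As a sketch of this cleanup (our own example; ConfigLoader is a hypothetical class), compare a generic throw with a specific one:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ConfigLoader {

    // Smell: a generic exception hides what actually went wrong and forces
    // every caller into catch (Exception e).
    static String loadBefore(Path path) throws Exception {
        if (!Files.exists(path)) {
            throw new Exception("some message");
        }
        return Files.readString(path);
    }

    // Better: throw (and declare) the most specific exception possible,
    // so callers can handle each failure mode deliberately.
    static String loadAfter(Path path) throws IOException {
        if (!Files.exists(path)) {
            throw new FileNotFoundException("Config file not found: " + path);
        }
        return Files.readString(path);
    }
}
```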
One technique we use to discover opportunities for fixing exception handling is to search for instances of
catch(Exception e). If there is a throws Exception clause in the method signature, just delete it. Then
delete a character from the word Exception in the catch block, which will cause the IDE to believe the exception
was not handled and will put a red underline under every exception that must be handled. If there are none, you can
unwrap the try-catch block using the Unwrap feature. Just position your cursor inside the try-catch block
and hit Ctrl+Shift+Delete/Cmd+Shift+Delete. You get prompted to unwrap the try. After confirming, it is gone.
TIP You can also unwrap if and else clauses, Strings, etc. Give it a try!
Comparing Code
We should mention another allergy: duplicated code. If you suspect that two or more files are similar and may
need to be refactored, you can do a side-by-side comparison by selecting them in the Project Explorer and
choosing the menu item View ➪ Compare Files (Ctrl+D/Cmd+D). This opens the files in side-by-side windows,
where you can edit in either side.
Similarly, you can copy a selection to the clipboard and then make another selection, right-click, and choose
Compare With Clipboard.
Now you can paste into that column from the clipboard, or you can edit in place. Hit the End key to move your
cursors to the end of the line, and the Home key to bring them to the start. You can also use Ctrl+Left Arrow/
Cmd+Option+Shift+Left Arrow and Ctrl+Right Arrow/Cmd+Option+Shift+Right Arrow to move them left or
right, a word at a time.
In the following snippet, there is a series of RGB triples. Let’s say you want to change the middle triple to 255 for
each line. Place the cursor anywhere in the top line and select the column you want to edit using the three-key
keyboard sequence Ctrl/Option, Ctrl/Option, Down Arrow, until every row you want to change is selected, as
shown in Figure 2.29.
Then hit the End key, and the cursors will move to the end of each line, as shown in Figure 2.30.
Now hit Ctrl+Left Arrow/Option+Left Arrow to move all the cursors left one word, and press one more Left
Arrow to get them to the left of the comma, as shown in Figure 2.31.
Finally, pressing Ctrl+Backspace/Option+Backspace will delete the entire middle segment, as shown in
Figure 2.32.
You will find this technique useful when changing similar configurations, such as connection strings, text
prompts, and even changing adjacent lines of similar code.
PEEKING AT ECLIPSE
Now that you understand IntelliJ, it is time to look at Eclipse. First, there are two important vocabulary differences to understand. In IntelliJ, you open a project, which consists of one or more modules. In Eclipse, you open a
workspace, which consists of one or more projects. Therefore, project means different things across IDEs: in
IntelliJ, project refers to the top-level entity, and in Eclipse it refers to a buildable artifact. Eclipse is open-source
and therefore free to use. It gets contributions from major companies such as IBM and Oracle.
ALWAYS GREEN?
IntelliJ has a guiding principle called “always green.” In IntelliJ, the code is expected to compile
at all times. If you have a compiler error in any class in the module, you will not be able to run
any main methods or tests. By contrast, Eclipse makes a best-effort attempt to run the code. If
the code that doesn’t compile doesn’t need to be executed by your code path, Eclipse will let
you run it.
To get started in Eclipse, open a new project by selecting File ➪ New ➪ Java Project, as shown in Figure 2.35.
Like IntelliJ, you get prompted for a project name and the version of Java at a minimum. Alternatively, you can
use File ➪ Import ➪ Git ➪ Projects from Git to pull a project from version control, or you can use File ➪ Import
➪ General ➪ Existing Projects Into Workspace to use a project that has already been created.
All of these approaches have an option for a run configuration, just like in IntelliJ where you can save settings for
running the application. To use the debugger, launch your program using the bug icon instead of the arrow icon.
The bug icon puts your program in debug mode.
Like IntelliJ, you can extend IDE behavior through plugins. Go to menu Help ➪ Eclipse Marketplace. Once you
choose the plugin you want, install it and restart Eclipse to have it take effect.
If you plan to use Eclipse, we recommend looking at the settings. On Windows, this is under Window ➪
Preferences. On Mac, it is under Eclipse ➪ Settings.
Eclipse has many keyboard shortcuts. We will explore some in the “Comparing IDEs” section. One particularly
useful one is Ctrl+3/Cmd+3, which opens a pop-up where you can type the name of any command to run.
PEEKING AT VS CODE
Visual Studio (VS) Code by Microsoft has its own set of vocabulary. Like Eclipse, the top-level entity is a
workspace. The lower-level units are simply referred to as folders. VS Code is termed an editor, rather than an
integrated development environment.
Visual Studio is a separate product, not to be confused with VS Code. Both are distributed by Microsoft, but
Visual Studio is a commercial IDE typically used by .NET developers, whereas VS Code is a free editor. The
distinction between editor and IDE traditionally showed up as more limited functionality in VS Code, although
that functionality has been improving quickly.
TIP Since VS Code is an editor, it allows you to easily open a single file and edit it.
Like with IntelliJ and Eclipse, you can download code that extends the functionality of the IDE. VS Code calls
those extensions rather than plugins. You should install the Java extension before starting to write code in the
IDE. The extensions are under File ➪ Preferences ➪ Extensions on Windows and Code ➪ Settings ➪ Extensions
on Mac. In the Marketplace, search for “Extension Pack for Java,” click Install, and restart VS Code.
There are at least two extension packs adding Java support to VS Code.
➤➤ Extension Pack for Java (from Microsoft)
➤➤ Oracle Platform Extension for Visual Studio Code
Since Java support for VS Code is newer, these extension packs are rapidly evolving. The Microsoft and
Oracle ones are both worth exploring to see which is best at the time you are reading this.
There is also a Language Support for Java extension from Red Hat, but the Microsoft pack bundles it,
so there is no need to try it independently.
When you open a new window in VS Code, you have three options. You can open an existing folder on disk, get
code from version control, or create a new Java project, as shown in Figure 2.37.
If you choose to create a new project, you are first asked which build tool you would like to use. You can choose
no build tool for now. In Chapter 4, you will learn about build tools like Maven and Gradle. Then you are asked
to navigate to the directory where you’d like the project stored. Finally, you enter a project name.
There are three common ways to run a Java program.
➤➤ Click the Run text in the file, as shown in Figure 2.38.
➤➤ Right-click the Java class name and choose Run Java.
➤➤ On the Run menu, choose Run Without Debugging.
The option to run in debug mode is located in the same locations. VS Code calls saved settings for running and
debugging a launch configuration. Click the icon in the left navigation (showing an arrow and a bug) to get the
option for creating a launch configuration, as shown in Figure 2.39.
If you plan to use VS Code, we recommend looking at the settings. On Windows, select File ➪ Preferences ➪
Settings. On Mac, select Code ➪ Settings ➪ Settings.
VS Code has many keyboard shortcuts. We will explore some in the “Comparing IDEs” section. One particularly
useful one is the shortcut to open the command palette. Pressing Ctrl+Shift+P/Cmd+Shift+P opens the palette so
you can type the name of any VS Code command!
COMPARING IDEs
In some enterprises, everyone uses the same IDE. In others, each developer can use their favorite. Even then, you
should be able to understand what people who use a different IDE are talking about. Table 2.1 shows the major
vocabulary differences.
All the IDEs offer a wide variety of keyboard shortcuts. Table 2.2 compares some common ones to get a feel
for the variety. Note that some IDEs allow you to import keyboard mappings from others. If you are used to
one IDE, you may be able to keep those settings in another. However, there is a benefit to learning the default
shortcuts so you can more easily work with others who use that IDE.
TABLE 2.1 (fragment): Shared settings and home for buildable units — IntelliJ: Project; Eclipse: Project; VS Code: Folder
FURTHER REFERENCES
➤➤ https://www.jetbrains.com/idea
Download IntelliJ IDEA
➤➤ https://www.eclipse.org/downloads
Download Eclipse
➤➤ https://code.visualstudio.com/download
Download VS Code
➤➤ Refactoring: Improving the Design of Existing Code (Addison-Wesley, 2018)
Classic refactoring book by Martin Fowler, with Kent Beck
➤➤ https://dev.java/learn/debugging
Article by Jeanne on using a debugger
SUMMARY
The IDE is arguably the most important item in the Java ecosystem, and your success as a programmer begins
when you learn to use the full power of this vital tool. The most important elements of that are mastering the
keyboard shortcuts, using the refactoring features, and learning how to use all the subtleties of the editor itself.
➤➤ In this chapter, we explored the evolution of the Java IDE, and we learned how familiarity with your
IDE can make you an expert programmer. We took a deep dive into some of the most important features
of IntelliJ IDEA, including keyboard shortcuts, refactoring, and other important editor capabilities.
➤➤ In the rest of the book, our emphasis will be more on programming, but we will discuss other IDE
capabilities appropriate to the technologies we cover.
3
Collaborating Across the Enterprise with Git, Jira, and Confluence
WHAT’S IN THIS CHAPTER?
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. The code can also be found at https://github.com/realworldjava/
Ch03-Collaboration. See the README.md file in that repository for details.
In addition, you can access Git repos on other, remote machines (assuming you have the proper permissions).
In fact, technically any Git instance can act as a Git remote, although in most enterprises everyone shares a
centralized remote server that uses GitHub, GitLab, Bitbucket, or the like.
To pull the files from a remote project onto your local computer, you clone the project repository. All the files are
then downloaded and automatically included in your working area, where they are tracked by Git.
If you are creating a new project from scratch and you want to add it to Git, you must initialize the project by
calling git init in the root of the project directory. Then for any files that you want Git to track, you must call
git add <file1> <file2> specifying the name of each file (or filenaming pattern with * wildcards) or call
git add . to add everything in that directory and below.
Similarly, when you change files, you want to call git add to stage the changes. You can stage changes to
tracked files and commit them in a single operation using git commit -am "some meaningful log message".
After you have staged your changes and you are happy with them, you want to save those changes to your
repository. The process for that is to commit them.
Differentiating Commits
The word commit is both a verb and a noun. As a verb, commit refers to the action of moving your changes to your
local Git repo. As a noun, a commit refers to all the files and context information included in that action of committing. A hashing algorithm is applied to the files, their previous commits, the committer’s username, the log message,
and other information to produce a unique 40-character hexadecimal commit ID, also known as a commit hash.
Branching
All Git commits reside on a branch. A branch is a named pointer to a set of commits; or more accurately, it is a
pointer to a particular HEAD commit, which happens to include all the historical commits that produced that com-
mit. As you make further commits to that branch, the branch pointer moves forward to point to the latest commit.
You can create a branch from a selected branch, do work on the new branch, then compare that branch to the
original branch to review your changes, and finally merge your branch into the original branch. Traditionally, Git
would assign a default branch named master. But recently they have been transitioning to use the name main instead
of master to promote inclusivity. Depending on your Git provider and version, you will see one or the other.
Tagging
Tags are like branches, except that they are static. Where a branch pointer continuously moves to the latest com-
mit, a tag pointer never moves, and so it captures the state of your codebase at a given moment in time. A tag is
like a human-readable name for the commit hash.
TIP While the commit hash that the tag pointer references is immutable, the tag name itself
can be deleted and recreated.
Merging
Merging refers to the activity of integrating the changes from one branch into another branch. This is one step
in a common Git workflow: you start with an existing branch, usually the common branch used by everyone,
which can be called anything but is traditionally named develop. You create a new branch of your own naming
off that existing branch, for example feature-123. You, and perhaps some other members of your team, do some
work on feature-123; you run your tests, everything passes, and you’re ready to release the code. You then merge
feature-123 into develop and push that to the remote server.
TIP It’s common in the enterprise to name the feature branch using the JIRA ticket number
to facilitate tracking. We will see an example of this in the “Using Jira for Enterprise Process
Collaboration” section, later in this chapter.
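The workflow just described can be sketched end to end (scratch repository; develop and feature-123 as in the text, identity values as placeholders):

```shell
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name "Your Name" && git config user.email "[email protected]"
git checkout -q -b develop           # the shared branch (the name is conventional)
echo base > app.txt
git add . && git commit -q -m "base"
git switch -c feature-123            # your working branch, named after the ticket
echo feature >> app.txt
git commit -q -am "feature work"
git switch develop
git merge -q feature-123             # integrate the feature branch into develop
# in a real project, you would now push develop to the remote: git push
```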
The git log command displays each commit’s ID, branch, author, commit date, and commit message.
TIP If you see a colon at the end of the log, that indicates a page break; hit the spacebar to
list more, or type q to quit and get back to your command line.
After performing git add, your next step will usually be to commit the staging area to the local repo. You will
want to add a commit message using the -m (for “message”) switch:
git commit -m "Some meaningful commit message"
When you are happy with your changes, you add them to the staging area, again using the git add command.
You then commit the staging area (with all its added files) to the Git repository (residing on your local computer)
using the command git commit -m "Some meaningful commit message", and eventually you push
your repo to a remote server using the git push command.
You can both add and commit in a single command by calling git commit -am. For example:
git commit -am "Some meaningful commit message"
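Putting these commands together, here is a runnable sketch in a scratch directory (file names and identity values are placeholders):

```shell
repo=$(mktemp -d) && cd "$repo"      # scratch directory for the demo
git init -q                          # start a new local repository
git config user.name "Your Name"     # per-repo identity (placeholder values)
git config user.email "[email protected]"
echo "hello" > notes.txt
git add notes.txt                    # stage the new file
git commit -q -m "add notes"         # commit the staged snapshot
echo "more" >> notes.txt
git commit -q -am "update notes"     # -am stages tracked-file changes and commits
```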
To summarize, Git is a critical code collaboration system used in most development shops, and you want to
become familiar with it early on.
Let’s get some hands-on practice, but first, please install Git, and then follow along.
Installing Git
To install Git, head over to https://git-scm.com/downloads. You will find installation instructions for
Windows, macOS, and Linux.
Git for Windows will generally work from the normal Windows command shell, but many developers prefer to
use Git Bash, which is installed with Git for Windows. Git Bash is a Windows implementation of the popular
Unix/Linux bash shell, which recognizes all the Linux commands, such as grep, ls, and ps, and is great for
developers with a Unix/Linux background.
Once Git is installed, set your email and name:
git config --global user.name "Your Name"
git config --global user.email "[email protected]"
The name will be used to annotate all your commits. The email address may be used by the Git provider to
communicate with you.
This book uses GitHub, so if you don’t already have an account set up, we encourage you to head over to
github.com and create a free account.
Now clone the fork of our project repo. Cloning is the operation of bringing the Git project to your development
machine for viewing and editing.
To clone the repo, open a command shell, cd to an empty directory, and clone your fork of our book project
using the following:
git clone https://github.com/<your git id>/Ch03-Collaboration
That will clone and initialize the repository, placing all the files in a new Ch03-Collaboration subdirectory of the
current directory, enabling you to work on it. The clone operation checks out only the default branch, although
it does download the list of remote branches. If you want to work on other branches, you need to check them
out explicitly.
The Uniform Resource Identifier (URI) we used in the git clone operation earlier was simply
the HTTPS URL of our GitHub repo. This is a common cloning practice for public GitHub
repos, and you may continue to use that approach for this book. However, enterprises will
sometimes require you to use a secure SSH URI, which means you need to generate SSH keys
and add them to your repo. That is not as scary as it sounds.
From a Linux or macOS shell or from Git Bash, generate the keys using this:
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -C "github keys"
Or from a standard Windows shell, use this:
ssh-keygen -t rsa -b 4096 -f %USERPROFILE%\.ssh\id_rsa -C "github keys"
You can use the generated key with all your GitHub repos. You will need to send the key to
GitHub. Here’s how:
1. Copy the contents of the ~/.ssh/id_rsa.pub file (the public key) you just generated to the clipboard.
2. Click your profile picture in the top-right corner, and choose the Settings option.
3. In the left navigation, select the entry for SSH and GPG keys, as shown in Figure 3.3.
4. Select SSH Keys and New SSH Key.
5. Paste in the contents of the id_rsa.pub file you copied in step 1, and click Add SSH Key.
In a few seconds, you should receive an email announcing that your key has been set up,
as in Figure 3.4.
We will see an example of the checkout and the switch commands next.
1. Please cd to the directory main/java/com/wiley/realworldjava/gitplay.
2. There is one file there (see Listing 3.1).
1: package com.wiley.realworldjava.gitplay;
2:
3: public class GitDemo {
4: private String description;
5:
6: public GitDemo(String description) {
7: this.description = description;
8: }
9:
10: public void displayDescription() {
11: System.out.println("Description: " + description);
12: }
13:
14: public static void main(String[] args) {
15: GitDemo demo = new GitDemo("Hello, Git!");
16:
17: // Display the initial description
18: demo.displayDescription();
19: }
20: }
To make the editing and compilation easier, create a new project in your IDE from version control. Refer to Chap-
ter 2, “Getting to Know Your IDE: The Secret to Success,” to learn how to create a project from version control.
The git status command will hold your hand, displaying your current status and showing you available
options for what to do next.
git status
For my environment, this displays the following:
Your branch is up to date with 'origin/main'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
.idea/
Now you don’t want to check in your .idea directory (after all, you don’t want to dictate your dynamic .idea
state to others or even to your future self), and you certainly don’t want to check in your compile output
directory; Git is for source, not for compiled output.
Wouldn’t it be great if you could tell Git to completely ignore such files? Fortunately, Git provides a facility
for ignoring selected files and directories. To do so, create a file named .gitignore in the project root, and
include the names of files and directories to ignore. You can include the exact filename (temp.txt), a wildcard
(temp*.*), or directories. You can ignore entire directories by specifying the directory names. For example:
➤➤ /logs/ will ignore the logs directory at the root.
➤➤ logs/ will ignore any directory named logs at any level.
➤➤ logs without a path separator will match the logs/ directory and any file named logs.
If you want to make the recursion explicit, you can say **/logs, which will ignore everything in any directory
named logs. You can add comments to the .gitignore file by starting them with #. Blank lines are ignored.
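Putting these rules together, a .gitignore for a typical Java project might look like the following (the directory names here are illustrative, not from the chapter’s project):

```
# IDE state - don't impose your .idea settings on teammates
.idea/

# Compiled output - Git is for source, not binaries
/out/
/target/
*.class

# Any directory named logs, at any level
logs/

# Temporary files
temp*.*
```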
You cannot ignore files that are already tracked. To untrack a tracked file, you can do this:
git rm --cached -r com/wiley/realworldjava/gitplay/
In our case, we will use the following .gitignore file:
# Ignore the entire .idea directory
.idea
Like all good developers, we don’t want to commit our code until it is done and tested. Yet we do want to track
our code as it evolves. How can we push our changes to the remote repo without impacting other developers?
The answer is to create a branch and make your changes on that branch. Then you can make changes to that
branch, or even collaborate with others on that branch, without impacting the main branch.
Let’s name this branch refactoring1. The command for this is shown here:
git branch refactoring1
That will create a new branch named refactoring1 from the current commit. If you want to delete a branch, you
can do the following (but for now, don’t delete it):
git branch -d refactoring1
Then to check out the branch, do the following:
git checkout refactoring1
You can both create and check out the branch in a single instruction using this:
git checkout -b refactoring1
The -b flag tells the checkout instruction to create the branch and then check it out. Git later added the more intuitively named switch instruction.
To switch to an existing branch using switch, use the following:
git switch refactoring1
Using switch, the syntax for creating and checking out a new branch is as follows:
git switch -c refactoring1
To list the names of all the known branches on your computer (the current branch gets a * and a color highlight),
use the following:
git branch
This displays the following:
* refactoring1
main
If you want to list the remote branches as well, use this:
git branch -a
Next you want to do some work and change some files. Let’s modify our main method by adding lines 19 to 22
as shown in Listing 3.2.
Now commit those changes. For convenience let’s first cd to the directory containing the file.
cd ./main/java/com/wiley/realworldjava/gitplay
Then stage our changes using this:
git add GitDemo.java
Look at the git status, which displays the following:
On branch refactoring1
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
modified: GitDemo.java
and commit our changes to the current branch:
git commit -m "Added a line to GitDemo"
Those changes are now committed to your local repo, and you can see the commits, along with the files they
contain, and their commit messages, using any variation of the git log command (stay tuned!).
Of course, you want to share those changes with the rest of the development team. The process for that is to push
to the remote repo. (There can be zero or more named remotes, and by default the remote is named origin.)
git push origin refactoring1
That means push any commits from your local repo to the remote machine (origin by default) to the branch named refactoring1. (More on branching to come.)
To push the code to the current branch and the origin that you cloned from, you can dispense with the formality
and just call the following:
git push
To see the URL of the origin, use this:
git remote get-url origin
To set the URL of the origin or to add a new remote, such as other-machine, simply use this:
git remote add other-machine <url>
You can also get a list of the current remotes and associated URLs via this:
git remote -v
That will display a list of fetch and push URLs for each remote repo. Fetch and push URLs can be the same, but
they don’t have to be. For example, if your network requires different protocols for pushing and fetching, they
will differ. Or if you have mirrored repos, where you push to a central “hub” repo but you pull from a “spoke”
node in your region, the fetch and push URLs will differ.
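Setting distinct fetch and push URLs uses the standard git remote set-url --push command; the host names below are placeholders for the hub-and-spoke scenario just described:

```shell
cd "$(mktemp -d)" && git init -q demo && cd demo

# One remote, two URLs: fetch from a regional mirror, push to the central hub
git remote add origin https://mirror.example.com/team/gitplay.git
git remote set-url --push origin https://hub.example.com/team/gitplay.git

git remote -v   # shows the fetch URL and the push URL separately
```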
The git merge command merges changes from another branch into the currently checked-out branch. In the following example, you are commanding Git to merge the branch named my-branch from the remote origin into the currently checked-out branch on your machine.
git merge origin/my-branch
You can omit the origin/ prefix, and Git will merge the specified branch from the local repository into the currently checked-out branch.
It is common to have an integration aka develop branch, which is essentially a catchall branch that is shared by
the full development team. This branch is generally used to stage features that are scheduled to be included in the
next release. (See the “Using Gitflow for Collaboration” section later in this chapter for a detailed explanation.)
When merging into develop and other shared branches, it is a good practice to use the “no fast-forward”
(no-ff) switch.
git checkout develop
git merge --no-ff my-branch
By default, merges are fast-forwarded. That means Git looks at all commits on your source branch between the
time you created that branch and now. If there are no conflicts with the current state of your branch, then those
commits will be moved to the end of the target branch. In contrast, if you do a no-ff merge, then the branch where you have been making your commits will remain as a separate parallel branch, with a single merge commit to join them. This makes it easy to identify the commits that comprise a given feature. Figure 3.5 illustrates this flow.
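You can see the difference for yourself in a scratch repo (branch names and messages are illustrative; git switch requires Git 2.23 or later):

```shell
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email "you@example.com" && git config user.name "You"
git commit -q --allow-empty -m "base"

git switch -q -c my-branch
git commit -q --allow-empty -m "feature work"

# Fast-forward merge: the branch pointer just moves; no merge commit
git switch -q -                   # back to the starting branch
git merge -q my-branch
git log --oneline --merges        # prints nothing: no merge commit exists

# No-ff merge: a merge commit joins the parallel branch
git switch -q -c my-branch2 && git commit -q --allow-empty -m "more work"
git switch -q -
git merge -q --no-ff -m "merge my-branch2" my-branch2
git log --oneline --merges        # now shows the merge commit
```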
The git pull command combines git fetch and git merge into a single command, pulling the branch
from the remote server and merging it into the current branch.
git pull origin my-
branch
The --no-ff switch can also be passed to a pull operation, or you can direct Git to default to a merge commit (no fast-forward) on pulls by setting the pull.ff configuration variable to false.
git config --global pull.ff false
18: demo.displayDescription();
19: <<<<<<< HEAD
20: // Make more changes and commit
21: demo.description = "Git is powerful.";
22: =======
23:
24: // Make more changes and commit
25: demo.description = "Version control with Git is fun and easy.";
26: >>>>>>> refactoring1
27: demo.displayDescription();
28: }
You can see that the code following <<<<<<< HEAD and up to ======= is the code from the current branch,
and the code following that, up to >>>>>>> refactoring1, is from the refactoring1 branch.
When you get a merge conflict like this, you have two options: either abort or fix it. To abort a merge after a
conflict and restore everything to be as it was before the merge, use this:
git merge --abort
Give that a try, and presto, the original code is restored! On the other hand, you can choose to fix it—remove the
lines <<<<<<< HEAD, =======, and >>>>>>> refactoring1 and adjust the code to your liking.
14: public static void main(String[] args) {
15: GitDemo demo = new GitDemo("Hello, Git!");
16:
17: // Display the initial description
18: demo.displayDescription();
19:
20: // Make more changes and commit
21: demo.description = "Git is fun and easy, and very powerful.";
22: demo.displayDescription();
23: }
Then call git add to mark the conflict as resolved, and call git commit to tell Git to continue the merge.
git add GitDemo.java
git commit -m "resolved conflict"
Yay, peace on earth!
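The entire conflict cycle can be replayed in a scratch repo; this sketch uses a plain text file instead of the chapter’s Java class:

```shell
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email "you@example.com" && git config user.name "You"

echo 'description = "Hello, Git!"' > GitDemo.txt
git add . && git commit -qm "base"

git switch -q -c refactoring1
echo 'description = "Git is fun and easy."' > GitDemo.txt
git commit -qam "change on refactoring1"

git switch -q -                                   # back to the base branch
echo 'description = "Git is powerful."' > GitDemo.txt
git commit -qam "conflicting change"

git merge refactoring1 || echo "CONFLICT, as expected"
grep "<<<<<<<" GitDemo.txt                        # Git wrote conflict markers

# Resolve by choosing the combined text, then mark resolved and commit
echo 'description = "Git is fun and easy, and very powerful."' > GitDemo.txt
git add GitDemo.txt
git commit -qm "resolved conflict"
```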
TIP Pull requests and merge requests are just different names for essentially the same thing. The term pull request is used by GitHub and Bitbucket, whereas merge request is used by GitLab. For the purposes of this book, we will use the term pull request, often abbreviated as PR.
56 ❘ CHAPTER 3 Collaborating Across the Enterprise with Git, Jira, and Confluence
Rebasing
When you merge branches, Git creates an extra “merge” commit that contains the results of the merge. This is
useful in situations where you want to emphasize the commits that resulted in this branch. On the other hand,
this tends to clutter up the commit history with extra commits.
If you want to bypass that extra merge commit, there is a workaround in some situations, and that is to rebase
instead of merge. A rebase is like a merge, except that it moves your commits to the end of the target branch,
instead of creating a parallel branch and creating a merge commit.
To understand it, you really need to experience it, so let’s set up an example. Let’s create a feature branch, make
some commits, and compare the results of a merge versus a rebase.
1. Create a feature branch from refactoring3, and call it feature.
git switch -c feature
2. Make some changes to the feature branch by adding lines 23 to 26.
20: // Make more changes and commit
21: demo.description = "Git is fun and easy, and very powerful.";
22: demo.displayDescription();
23:
24: // Make another change
25: demo.description = "Changes for rebase";
26: demo.displayDescription();
and then commit.
git commit -am "changes to feature for rebase"
3. Make some more changes by adding lines 27 to 30.
24: // Make another change
25: demo.description = "Change1 for rebase";
26: demo.displayDescription();
27:
28: // And yet another change
29: demo.description = "Change2 for rebase";
30: demo.displayDescription();
31:}
and then commit:
git commit -am "more changes to feature for rebase"
TIP If you don’t include a -m commit message, then a VI editor window will appear and prompt you for a commit message. If that happens, press i to enter insert mode and type your message; press Enter to start a new line. When done, press Esc to leave insert mode, type :w to write your message to the commit, and type :q to quit. That’s basic VI editing, which Unix/Linux users are intimately familiar with.
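In outline, the rebase path being compared looks like this (a sketch using the chapter’s branch names but simplified file changes, not the book’s exact steps):

```shell
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email "you@example.com" && git config user.name "You"
echo "base" > GitDemo.txt && git add . && git commit -qm "base"

# Parallel work: two commits on feature, one on refactoring3
git branch refactoring3
git switch -q -c feature
echo "change 1" >> GitDemo.txt && git commit -qam "changes to feature for rebase"
echo "change 2" >> GitDemo.txt && git commit -qam "more changes to feature for rebase"
git switch -q refactoring3
echo "other work" > Other.txt && git add . && git commit -qm "work on refactoring3"

# Rebase replays feature's commits onto the tip of refactoring3...
git switch -q feature
git rebase -q refactoring3
# ...so merging back is a plain fast-forward: one straight line, no merge commit
git switch -q refactoring3
git merge -q feature
git log --oneline --graph
```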
Either way the code is identical, but the rebase produces a much cleaner history. So what’s the downside? Why
would we ever use merge and not rebase?
The answer is that rebase rewrites your commit history. Remember, we said that a commit ID is a hash that incorporates the ID of its parent commit. Since rebasing moves your commits to the end of the target branch, each commit gets a different parent, and therefore a different commit ID. Why is that bad? Well, it is not bad if you are working on the
branch alone and if you have not merged your work into any other branch. But if you are collaborating or if you
have merged this branch into any other shared branch, then anyone sharing that work will have trouble merging
that work back into their own code.
As you can see in Figure 3.6, both branches refactoring3 and refactoring3a have the same exact version
of the code, but refactoring3a (the merged branch) consists of two parallel branches that are merged into a
single commit, whereas refactoring3 (the rebased branch) is one straight branch.
The golden rule of rebasing says to never rebase a branch that is shared with other developers
or has been pushed or merged to a remote repository. Rather, you should only ever rebase local or private branches in your own repository.
What this means in practice is, don’t rebase anything that has already been pushed to the
remote, or anything that has been merged into another branch that will be pushed in the future.
Cherry-Picking
You have learned about developing your changes on a branch and then merging or rebasing those changes into
other branches. But what happens if you would like to selectively integrate some but not all changes from one
branch to another?
Git provides a solution to this problem in the form of cherry-picking. Cherry-picking is the capability to apply
specific commits from one branch to another. To do this, switch to the target branch, locate the commit ID of the
commit you want to cherry-pick, and then call the git cherry-pick command. You don’t need to specify the
entire commit ID; just the first few characters that identify the commit uniquely are sufficient. For example, to
cherry-pick commit ID 4954034c. . . into branch refactoring1, do this:
git switch refactoring1
git cherry-pick 4954
Remember that you will have different commit IDs if you are following along, so use git log to find one to
cherry-pick. Beware: in many cases cherry-picking will lead to a merge-conflict that you will need to resolve, so
exercise caution. In this case, we get a conflict; resolve it manually as before, and to continue, call this:
git commit -am "Cherry picked and then resolved conflict"
Git graciously responds with this:
1 file changed, 8 insertions(+)
git reset --soft <commit id> No Yes Yes
git reset --hard <commit id> No Yes No
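Concretely, the difference between the soft and hard reset modes can be demonstrated in a scratch repo (file names and messages are illustrative):

```shell
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email "you@example.com" && git config user.name "You"
echo "v1" > GitDemo.txt && git add . && git commit -qm "first"
echo "v2" > GitDemo.txt && git commit -qam "second"

# --soft: move HEAD back one commit, but keep the change, still staged
git reset -q --soft HEAD~1
git status --short     # prints: M  GitDemo.txt  (modification staged, nothing lost)
cat GitDemo.txt        # still v2 - the working tree is untouched

# --hard: reset HEAD, the staging area, AND the working tree
git reset -q --hard HEAD
cat GitDemo.txt        # v1 - the staged v2 change is gone
```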
The first time a file is tracked, it will be colored green to indicate a newly tracked file that has never been committed, per Figure 3.8. (The HelloWorld class that is circled is colored in a green font.)
If a file was already created externally (perhaps it was moved into the project from the file system), you can still
track it; just right-click the file and select Git and then Add. This will add it to the staging area, as shown in
Figure 3.9.
TIP Ctrl+K (or Cmd+K) is also useful when you just want to see what has changed. It opens
the commit window, which displays all of your changes. Use F7 and Shift+F7 to navigate
forward and backward through all your changes.
Changes to tracked files are automatically staged and will appear in the commit window, but if you are not ready to commit yet, you can unstage them by clearing the check box next to the filename in the commit window.
Right-clicking in the commit window will bring up a context menu. Most of the choices there are self-explanatory, but we draw your attention to the New Changelist option (see Figure 3.10). IntelliJ ensures that you can commit only one changelist at a time, so if you want to keep some files out of your commit for now, you can create a new changelist and move those files to it to separate your commits.
TIP It is a good practice to double-click the files in the commit window to open the diff-viewer window, where you can review your changes. This provides you with one last look before you commit your code. When doing so, you will invariably locate opportunities to remove duplicate code, correct typos, and perform similar cleanups.
F7 is a convenient shortcut for navigating from one change to the next, and Shift+F7 navigates back to the previous change. When you get to the end of the file and press F7 again, IntelliJ will prompt you to move to the next file.
Changed lines will appear in the diff-viewer window, as shown in Figure 3.11, with a light blue highlight and a
check box in the margin. You can choose which changes to commit by checking or unchecking the check box
next to each change in the diff-viewer window. Only the selected changes will be committed. If you right-click
a change, you can choose to send that change to another changelist so that commits to that file will not include
those changes.
Be careful: if you have a large number of changes, it is easy to uncheck something you actually wanted to commit or to check something you didn’t.
When you have code you don’t want to commit, it is good to surround it with something very conspicuous, like
the following. This will make it hard to commit by accident!
///////// Don't Commit ///////////
someCodeToNotCommit();
someMoreCodeToNotCommit();
///////// Don't Commit ///////////
The Git menu provides all the commands described and more, as you can see in Figure 3.12.
TIP A patch is a file that contains the differences between sets of changed files and the
original version. You can apply a patch to a set of source files to produce a target version
containing the changes captured in the patch file. You can also share patch files with others
to have them produce the target state of changes.
➤➤ Uncommitted Changes: This one deserves digging into all its submenus; Figure 3.13 shows the options on the Uncommitted Changes submenu.
➤➤ Shelve Changes: Lets you temporarily set aside or shelve the current changes in
IntelliJ’s repo, where you can inspect and retrieve them later.
➤➤ Show Shelf: Displays all of the shelved changes and provides selection tools for recovering some or all of the lines of change.
➤➤ Stash Changes: Like shelving, but uses Git tooling under the covers. If you’re using
the IDE, IntelliJ’s shelf feature is convenient, but if you prefer to run things from the
command line, you might prefer stashing.
➤➤ Unstash changes: Allows you to re-apply the stashed changes.
➤➤ Rollback: Selects files to restore to the original state.
➤➤ Show Local Changes as UML: Draws a UML class diagram containing the classes
and methods containing local changes.
➤➤ Current File: This one also deserves an exploration of its submenus; see Figure 3.14.
➤➤ Commit File: Brings up the commit window, where you can select changes to commit.
➤➤ Add: Tells Git to track a new file.
➤➤ Annotate with Git Blame: Displays in the margin the name of the person who last
updated each line of the current file.
➤➤ Show Diff: Shows side-by-side the diff-viewer window of the current file as it is now
versus how it was when it was pulled.
➤➤ Compare with Revision: Provides a list of recent commits for you to compare with
the local state.
➤➤ Compare with Branch: Provides a list of branches for you to compare with the
local state.
➤➤ Show History: Displays a list of all commits for this file and allows you to compare
against previous states.
➤➤ Show History for Method: Displays a list of all commits for the current method and
allows you to compare against previous states.
➤➤ Show History for Selection: Appears only when you have something selected. This
menu item displays a list of all commits for that selection and allows you to compare
them against previous states.
You can get a broader list of Git options by right-clicking a file in the editor and choosing Git, as shown in Figure 3.15.
IntelliJ has a Git Log window that you can open by clicking the Git icon on the lower-left side. It includes a
graphical log display that lets you select a branch, commit ID, and so on, and displays a tree illustrating your
branches. This is much easier to use than typing log commands, as you can see in Figure 3.16.
(The figure calls out the Git menu, the right-click context menu, the branch view, and the detailed commit changes.)
You can select local or remote branches, but beware: the list of branches and the contents of the branches will be as
of your most recent update. Select Git ➪ Update to see a fresh list of branches and to refresh the currently checked-
out branch from the server. To refresh other branches, you can select them individually from the Git ➪ Pull menu.
One more feature you should know is the Git ➪ History feature. You can right-click in any managed file and
choose Git ➪ History to get a listing of all versions of that file. It is a life-saver when you discover something is
broken and you want to locate a last known good version. You can get the history for an entire file, or you can
make a selection and see the history for just that selection.
In related news, IntelliJ also offers a local history feature, where you can get a fast version history without Git,
and even for files that are not tracked by Git.
TIP While any .md file will be considered Markdown in GitHub, only the README.md file
is automatically shown on the repository home page.
Markdown supports straight text, as well as headings, emphasis, lists, links, and code blocks. The language is pretty
simple, and there are just a few syntax elements you will need to learn. IntelliJ and VS Code are nice enough to show you a side-by-side view of your markup code and the visual rendering of it, so we encourage you to experiment with the README.md in this chapter’s repo. Eclipse doesn’t have a side-by-side view, but it does have a preview tab.
Text can be entered directly. Contiguous lines of text will be merged into a single line, unless they are followed by
a blank line, in which case a new paragraph will start. If you want to override this behavior and start a new line,
just add two spaces to the end of the line, which will cause the next line to start on a new line.
Markdown supports six levels of headings. To designate a level-one heading, start the line with # followed by a
space. A level-two heading starts with ## and a space, and so on.
To add italics, surround the text with *. Here’s an example:
*This is italicized*
Be sure not to include a space after the first asterisk.
To bold selected text, surround the text with **. Here’s an example:
**This is bold**
Again, there are no spaces after the first **.
Lists come in ordered and unordered flavors. To create an unordered list, place a * as the first character on the
line, followed by a space. Here’s an example:
* This is the first item
* This is the second item
* Etc.
To create an ordered list, start the line with an item number, followed by a period and a space. Here’s an example:
1. This is the first item
2. This is the second item
3. Etc.
Regardless of what numbers you type, Markdown will display them in the natural order. We like to number every list item with 1. so that if we move items around, Markdown handles the renumbering and we don’t have to.
To specify a link, just start with http:// or https:// and Markdown will render it as a link. If you want to
supply text for the link, use this syntax:
[This is the text](https://www.some-link.com)
The text in the square brackets will be displayed, and the text in the round brackets will be the link.
To include code, specify the language and the code using the “triple backtick” syntax. Markdown recognizes many language tags, including the following:
➤➤ java
➤➤ python
➤➤ javascript
➤➤ html
➤➤ css
➤➤ markdown (for nested code blocks)
➤➤ bash
➤➤ sql
Here’s an example:
```java
public class HelloMarkdown {
public static void main(String[] args) {
System.out.println("Hello, Markdown!");
}
}
```
Again, we encourage you to play with all of these to get the feel of using Markdown.
1. The first step is to create a new branch called develop from the main branch. This is a one-time task.
The develop branch is a collect-all branch, but nothing gets directly committed to develop. The develop
branch only gets merges from other branches. Once develop is created, it remains for the lifetime of
the project.
2. When a feature is agreed upon, a feature branch is created for that feature. Typically, the business will
define a requirement for that feature and track it in some tracking system like Jira (which we will cover
in the next section). The tracking system will assign a key to this requirement, and although the naming
for the feature branch is flexible, it is common to give it a name that begins with feature/ followed by
the key and an extremely short but explanatory description (no spaces allowed, so use hyphens). An
example feature branch name might be something like feature/RWJ-1234-add-ui-login, where RWJ-1234
is the Jira key for this new “add a UI login” feature.
3. As the teams work on their feature branches, they continue to develop code and commit. The build server runs a build on every commit. See Chapter 4, “Automating Your CI/CD Builds with Maven, Gradle, and Jenkins,” for more details.
4. When a feature is deemed complete, it is merged back into the develop branch. Remember that the develop branch has been evolving in parallel while you were working, since other team members were writing their own code. Therefore, it will look different than when you originally created the feature branch, so a new build and test cycle is again required, and that is exactly what the build server does.
5. The develop branch is ready for release. Create a new release branch from develop named release/ followed by a release version. But the process does not end here. Now that the release branch has been cut,
development can continue to evolve on the develop branch for future releases, and only this release is
maintained on the release branch.
6. The release branch is now tested, possibly by a quality assurance (QA) team.
7. Bugs are reported, fixed, and merged back to develop.
8. When the release branch is certified as ready for release, it is merged to the main branch, and a release tag is cut on the main branch. The main branch tracks all the historical releases, and it is easy to reproduce a prior release by checking out the release tag for any release.
9. We have released the code to production, but we are still not done! Someone discovers a bug in production, a hotfix branch is created, work is done to swat the bug, and we repeat from step 5.
10. As with changes to any branch, changes made to the hotfix branch must be merged back to develop.
11. When ready, the hotfix branch is merged to main and tagged for release.
TIP One downside: if you are downloading the snapshot builds, you can run into version conflicts when different teams build their features. When that happens, the generated binaries will all have the same -SNAPSHOT version number for entirely different feature sets. This is not a showstopper, because if you are deploying snapshots, teams can coordinate with each other to decide what to release.
Jira is a web-based UI that has evolved greatly over the years, starting life as a clunky ticket management program and evolving to become a full-featured tool for sprint planning and maintenance, issue creation and editing, search, user administration, reporting, and so on.
Whichever type of board your project uses, Jira tickets represent a piece of work. Jira started life as a bug-tracking system; thus, the tickets are called issues, but many people affectionately refer to them simply as jiras.
Issues come in many flavors, such as epic, story, bug, task, subtask, and so on. You can also create your own
issue types.
Generally, epics are used as a parent for a group of smaller, related stories, bugs, and tasks.
These smaller issues themselves can contain their own issues, which are called subtasks. These may be created
in any order, but in enterprise development a common practice is to attach most issues to an epic. Further, every
subtask belongs to a parent issue, as in Figure 3.18.
Jira uses this hierarchy to provide a top-down view to management, which might be interested in only the larger
epics, and a bottom-up view to individual contributors, who care more about the implementation details. While
it is fine to have a small percentage of “loose” tasks or bugs not in an epic, these aren’t the ones management is
most interested in.
Creating a Project
Issues are assigned to projects. If you are coming into an existing team, there will usually be one or more projects
set up. If you need to create a new project, go to the Projects tab and select Create Project, select an appropriate
template, and follow the instructions there, as in Figure 3.19.
There are stock board types for each template. For example, for software development, you can select Kanban,
Scrum, and so on. We will choose Scrum for our example project. Scrum allows you to manage your deliveries in
time-boxed sprints, which are two weeks long by default. Kanban has no time-boxing, but it divides the board
into columns, representing general tasks or teams, and there is a WIP limit for each column on the board, as in
Figure 3.20.
Creating an Issue
You create a new issue in Jira by clicking the Create button, as in Figure 3.21.
Select the issue type and complete all the details. Like all things Jira, the screen can be (and usually is) customized,
so yours might look different. Each team will have its own fields, some mandatory. In the late 1990s, an early agile methodology called Extreme Programming (XP) was introduced to provide a user-friendly way of managing requirements. Many organizations have adopted the XP practice of expressing issues of type Story in the form “As a <role> user, I want to perform <some activity> so that I can achieve <some benefit>.”
Using Jira for Enterprise Process Collaboration ❘ 75
Enter a summary, description, assignee (if known), and other fields as mandated by your team, as in Figure 3.22.
Linking to an Epic
As a best practice, most teams will have a policy of assigning most issues to a parent epic. You can link an issue to
an epic when you create the issue, or you can add or change the epic link later. You can then get a high-level view
of the whole project by viewing the epics on the project board as in Figure 3.23.
(A workflow diagram here shows statuses such as Open, In Progress, QA, Resolved, Reopened, and Closed.)
Creating a Sprint
Now that you have created some Jira issues, it is time to allocate the work. The work that the team needs to perform is divided into sprints. A sprint is a named (e.g., VM Sprint 4 Aug 12-Aug 23), time-boxed (e.g., two weeks) period, where work is assigned to the team.
Starting from the Backlog tab in Figure 3.25, you will find all your issues at the bottom of the page, in the Backlog section. Your first sprint will appear with a default name and no dates, but you can click the Add Dates icon to manage the sprint. You can create future sprints as well, using the same method, per Figure 3.26.
Now that one or more sprints are created, you can drag and drop issues into the appropriate sprint, as in
Figure 3.27.
Once the sprint is allocated, it can be started by clicking the Start Sprint button at the top left.
Adding Users
A project administrator can add and maintain users and assign roles. Click the gear icon at the bottom left and
click Access. Add your users and assign a role.
Once users are added to the project, click Back To Project on the left. Now you can assign users to issues, as
shown in Figure 3.28.
Adding Columns
To add, remove, or re-sequence columns, a board administrator must click Columns And Statuses on the left side
to launch the column maintenance screen.
Click the Create Column + sign button on the right side to add columns, as in Figure 3.29.
Now the board administrator can add new columns, rename existing ones, re-sequence them, and assign a status
from the workflow to each column, as in Figure 3.30.
Using Filters
As more and more jiras fill the board, it can become difficult to separate issues. For example, during stand-up meetings, teams may move from person to person, asking what each person accomplished yesterday, what they will be working on today, and whether there are any blockers. When your turn comes, wouldn’t it be convenient to display the board with just your jiras and hide all the others? That is the purpose of filters.
Filters can be configured to filter by users or in fact by any other property or properties of your jiras, using JQL,
as we will see in the following sections.
Seeing My Issues
You can see all your issues, across all boards and backlog, by selecting Filters/My Open Issues per Figure 3.31.
From the same screen, you can enter queries in JQL by clicking Switch To JQL.
A field selection clause consists of a field followed by an operator and a value. Tables 3.2, 3.3, and 3.4 show the
valid operators.
OPERATOR DESCRIPTION
!= Not equals
IN In a list of values
IS Is a specified value
OPERATOR DESCRIPTION
OR Logical OR
Clauses can be separated by the keywords AND, OR, and NOT. To order by one or more fields, you can end the
query with ORDER BY and a field name. You can control the sort order with ASC (default) or DESC.
You can also use wildcard characters (*) for partial matches. For example, this query finds all jiras with a summary that starts with improvement:
summary ~ "improvement*"
You can use special characters, such as d for days and h for hours, as shown in this query to find all jiras resolved in the last three days:
status = resolved AND resolved >= -3d
Parentheses can be used for grouping clauses to control what the logical operators apply to. This query finds jiras
in the project Value Mark that are in the Unresolved state, assigned to the currently logged in user, with priority
High or Critical:
project = "Value Mark" AND assignee = currentUser() AND resolution = Unresolved AND
priority in (High, Critical) ORDER BY updated DESC
You can then choose to edit the selected issues, move them to a new project, transition them, and so on, as in
Figure 3.33 and Figure 3.34.
Connecting to Git
Your Jira admin can connect your Jira project to your Git provider. Once the Git provider is connected, include
the Jira key in your commit messages to attach your commit to the associated Jira.
Here’s an example:
git commit -m "Implemented credentials validation. VM-8"
You can then see a list of commits associated with any jira within the Jira ticket itself.
Every team has a stated mission, which can include its purpose, the underlying technologies and processes, how
the teams are structured, general knowledge sharing, and so on.
Confluence represents the culmination of the evolution of wiki tools over the years. It includes standard wiki
features such as multiple people editing at the same time, viewing version history, and an impressive collection of
macros for features such as tables and images.
These are some of the page types supported by Confluence out of the box:
➤➤ Documentation Page: Used to create documentation. Like all pages, it can include headings, bullet
points, numbered lists, tables, and other styles you would expect to find in technical documentation.
➤➤ Meeting Notes: Specifically designed for capturing meeting minutes and things like action items, as well
as other information discussed during meetings.
➤➤ Blog Post: Generally used for contributing informal insights in a blog-like format.
➤➤ Blank Page: An empty page where you can create content using Confluence’s editing tools.
Table 3.5 shows some useful macros so you can get a feel for the functionality. Feel free to experiment with them
at your leisure.
As you use Confluence, you’ll learn fun optimizations like typing {note} to create a note box or pasting in a Jira
URL to automatically connect the ticket to the page including the status at the time of page rendering!
FURTHER REFERENCES
➤➤ https://git-scm.com/docs
Complete Git Reference documentation
➤➤ https://nvie.com/posts/a-successful-git-branching-model
Original Gitflow article by Vincent Driessen
➤➤ https://www.atlassian.com/software/jira
Jira documentation
➤➤ https://www.atlassian.com/software/confluence
Confluence documentation
SUMMARY
In this chapter, we looked at some of the most ubiquitous tools for enterprise collaboration.
➤➤ Git for code collaboration
➤➤ Jira for process collaboration
➤➤ Confluence for enterprise knowledge management
4
Automating Your CI/CD Builds with Maven, Gradle, and Jenkins
WHAT’S IN THIS CHAPTER?
So far, you have seen how to write code on your machine in Chapter 2, “Getting to Know your IDE:
The Secret to Success,” and share it with others in Chapter 3, “Collaborating Across the Enterprise with
Git, Jira, and Confluence.” In this chapter, you’ll see how to build and package the code using Maven
and Gradle.
While it is possible to run Jenkins on your own computer, companies run it on a remote server, just as they
run their version control systems on a remote server. This ensures the build is clean and not reliant on a
single developer’s environment. It also allows Jenkins to run faster by not competing with a developer for
computer resources. Plus, the build server never needs to sleep or go on vacation!
Jenkins performs continuous integration, which is the name for an automated build process that retrieves
the latest code from the version control repository, builds it, runs tests and quality checks on it, and then
packages it up and optionally deploys it. Despite the name, continuous integration does not run continuously.
It is typically configured to run after each push to the repository. Running continuously would just
waste resources if the code has not changed.
As we said, Jenkins continuous integration can be configured to deploy the software as well. Continuous
delivery goes one step further than continuous integration and ensures the software is ready to deploy to
production. With continuous delivery, the software is typically deployed to a preproduction environment. With
continuous delivery, the deployment to production requires a manual step.
The most advanced practice is continuous deployment, which automatically deploys to production without any
manual intervention. This requires a lot of automation and process maturity to avoid introducing problems into
production.
There are many continuous integration tools used in the enterprise, for example GitLab CI and GitHub Actions.
In this chapter, we cover Jenkins, but the concepts are similar across technologies.
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. The code can also be found at https://github.com/realworldjava/
Ch04-CICD. See the README.md file in that repository for details.
That’s a lot of work for a POM with 13 lines, most of them boilerplate. This is the power of convention over
configuration. Since we didn’t state otherwise, Maven assumes the defaults: that we want to compile and test the
code, storing the results in a JAR file!
This book uses Maven 3 since that was the latest version at the time of writing. Maven 4 has a
high degree of backward compatibility, so everything you learn will still apply. Note that the
boilerplate and modelVersion will switch to 4.1.0 when you are ready to use Maven 4.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.wiley.realworldjava.maven</groupId>
    <artifactId>with-dependency</artifactId>
    <version>1.0.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>org.eclipse.collections</groupId>
<artifactId>eclipse-collections</artifactId>
<version>11.1.0</version>
</dependency>
</dependencies>
</project>
You can specify as many dependencies as you’d like simply by adding more <dependency> tags inside the
<dependencies> tag.
NOTE Eclipse Collections is not related to the Eclipse IDE. It gets its name from the Eclipse
Foundation and goes beyond the collections framework included in the Java development kit
(JDK).
Now that you’ve seen two examples of a POM, we are going to omit the boilerplate and start examples with the
<groupId> tag.
Often the GAV is sufficient to specify a dependency. However, there is more information that
can be included in a <dependency> tag.
One optional tag is the classifier. This is used when there are multiple artifacts published using
the same artifactId. For example, a classifier could be used to differentiate between JAR files
for different versions of the JDK or different operating systems.
<classifier>jdk21</classifier>
Another optional tag is the type tag, which is typically used when the packaging type is
something besides jar, such as a war (web archive).
<type>war</type>
You should also be familiar with the optional scope tag.
Maven supports the following scopes:
SCOPE      DESCRIPTION
compile    The default scope. The dependency is available at compile time and runtime and is packaged with the artifact.
provided   Needed to compile, but the runtime environment supplies it, so it is not packaged.
runtime    Not needed to compile, but needed to run, such as a JDBC driver.
test       Available only when compiling and running tests.
system     Like provided, but you supply the path to the JAR on your machine. Rarely used.
import     Used only on a dependency of type pom inside dependencyManagement to import a BOM.
PHASE      DESCRIPTION
validate   Check that the POM file is correctly formatted and all the necessary information is provided.
install    verify + install the package into your local Maven repository for use by other builds.
Become best friends with verify as it is the most common Maven phase to run on your machine. If you are
working on multiple projects that interrelate, you’ll need install, but that shouldn’t be your “go to” since it
takes longer to run and isn’t usually needed.
The clean phase deletes anything generated from prior runs ensuring a, well, clean build. It
doesn’t run by default with the previously mentioned phases, because builds run faster when
they don’t have to recompile everything from scratch. When you are running on a build server
like Jenkins, it can be useful to run a clean build. But you generally only have to run clean on
your own machine when having problems.
Also note that you can run more than one phase in the same command, making it common to
write this:
mvn clean verify
<properties>
<eclipse.collections.version>11.1.0</eclipse.collections.version>
</properties>
<dependencies>
<dependency>
<groupId>org.eclipse.collections</groupId>
        <artifactId>eclipse-collections</artifactId>
<version>${eclipse.collections.version}</version>
</dependency>
</dependencies>
You can define as many properties as you want in a <properties> tag. In this case, there is one with the name
eclipse.collections.version and the value 11.1.0. To use the property, refer to the variable name
inside the ${} placeholder and Maven will automatically replace the placeholder with the property value.
Configuring a Plugin
Let’s take a look at the compiler plugin. Using your favorite search engine, look for “maven compiler plugin,” which
will take you to https://maven.apache.org/plugins/maven-compiler-plugin, as shown in Figure 4.2.
Most plugins are easy to find with a simple search. Conveniently, the common plugins also have a shared format for
the documentation, so once you understand the compiler plugin, you know what you need for others!
The plugin documentation page lists any goals that are part of the plugin, in this case compiler:compile and
compiler:testCompile. It also tells you what phase in the life cycle they are automatically bound to if any.
These goals are bound to the compile and test-compile phases, respectively. Plugins don’t need to be bound
to a specific phase, and you can just call them explicitly. However, in this chapter all plugins are bound.
Chapter 7 shows an example of using code to bind integration-tests to a life-cycle phase so that they
will be automatically run during the build.
The left navigation of the documentation page includes menu options for the following:
➤➤ Goals: Details on the goals included in the plugin, with a focus on the required and optional parameters
➤➤ Usage: Shows how to include the plugin in your POM
➤➤ Examples: One or more links showing how to configure common scenarios
This example shows how to configure the compiler plugin to use Java 21:
<groupId>com.wiley.realworldjava.maven</groupId>
<artifactId>compile</artifactId>
<version>1.0.0-SNAPSHOT</version>
<properties>
<java.version>21</java.version>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.13.0</version>
<configuration>
<source>${java.version}</source>
<target>${java.version}</target>
</configuration>
</plugin>
</plugins>
</build>
Pay attention to the configuration tag. That’s where the parameters in the “goals” documentation go. In this
case, it specifies Java 21 both as the source language level and as the target bytecode version.
TIP If you get error: invalid target release: 21, this means you have an older
version of Java on your path.
One of the examples in the documentation is how to “Pass Compiler Arguments” and is as follows:
<project>
[...]
<build>
[...]
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
<version>3.13.0</version>
<configuration>
<compilerArgs>
<arg>-verbose</arg>
<arg>-Xlint:all,-options,-path</arg>
</compilerArgs>
</configuration>
</plugin>
</plugins>
[...]
</build>
[...]
</project>
Comparing this example to the previous one, you can see the configuration section contains compiler
arguments. The documentation uses [...] to indicate that there may be other code in your POM in those spots.
Not all plugins are in the same group id. Some are supplied by other organizations. For example, Sonar, a static
analysis tool, supplies a plugin. It is in group id org.sonarsource.scanner.maven and artifact id
sonar-maven-plugin. You can see from the group id that it is supplied by SonarSource rather than Apache.
TIP Once you have created a parent POM, remember to run mvn install instead of
mvn verify for the parent POM, as that places it in your local Maven repository and
makes it available to all potential children.
A child POM specifies the parent it would like to use by using the GAV:
<artifactId>child</artifactId>
<version>1.0.0-SNAPSHOT</version>
<parent>
<groupId>com.wiley.realworldjava.maven</groupId>
<artifactId>parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</parent>
Notice that the child does not specify a groupId. This tag is optional if the parent and child have the
same groupId.
<properties>
<java.version>21</java.version>
<eclipse.collections.version>11.1.0</eclipse.collections.version>
<compiler.plugin.version>3.13.0</compiler.plugin.version>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.eclipse.collections</groupId>
            <artifactId>eclipse-collections</artifactId>
<version>${eclipse.collections.version}</version>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
<version>${compiler.plugin.version}</version>
<configuration>
<source>${java.version}</source>
<target>${java.version}</target>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</build>
First notice that in the properties section several properties are defined so that they can be used rather than
hard-coding numbers in the other parts of the POM. This technique allows child POMs to use standard defaults
but also lets them override any of the properties. This is like having a protected variable in a superclass in
Java. It is helpful to be able to change any of these values independently without having to change to a different
parent POM or rely on a different project to make a change.
Next comes the dependencyManagement section. The contents should look like the dependencies in a regular
POM. The key difference is that the artifacts in dependencyManagement are not actually loaded until they are
referred to in a child POM. Similarly, pluginManagement configures the plugins in case a child POM uses them.
How does a child specify it wants to use a dependency or plugin? Take a look at a child POM that uses
this parent:
<artifactId>child</artifactId>
<version>1.0.0-SNAPSHOT</version>
<parent>
<groupId>com.wiley.realworldjava.maven</groupId>
<artifactId>parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</parent>
<dependencies>
<dependency>
<groupId>org.eclipse.collections</groupId>
<artifactId>eclipse-collections</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
</plugin>
</plugins>
</build>
Notice how the configuration is minimal. Version numbers and configuration don’t need to be specified; all of
that comes from the parent POM.
Suppose you didn’t want the eclipse-collections dependency from the parent POM. No problem—in the
child POM just omit the dependency and it will be ignored:
<artifactId>child-without-dependency</artifactId>
<version>1.0.0-SNAPSHOT</version>
<parent>
<groupId>com.wiley.realworldjava.maven</groupId>
<artifactId>parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</parent>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
</plugin>
</plugins>
</build>
Other changes are also easy. If you want a different version of eclipse-collections or the
maven-compiler-plugin, override the version property in the child POM, which will override the one
from the parent. Alternatively, you can add an explicit version tag rather than inherit it.
<modules>
<module>module-services</module>
<module>module-util</module>
</modules>
The POM contains a new modules section, which contains a list of the modules from the project
subdirectories that should be included in the build. The order the modules are listed in is not necessarily
the order they will be built.
One child can optionally contain another in its list of dependencies. In our example, module-services will
depend on module-util. Maven is smart enough to figure out the proper order to build the children based on
their dependencies.
Next, we’ll look at the module-util POM, which is nice and simple, only referencing the parent:
<artifactId>module-util</artifactId>
<version>1.0.0-SNAPSHOT</version>
<parent>
<groupId>com.wiley.realworldjava.maven</groupId>
<artifactId>module-parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</parent>
The module-services POM does the same but also specifies a dependency:
<artifactId>module-services</artifactId>
<version>1.0.0-SNAPSHOT</version>
<parent>
<groupId>com.wiley.realworldjava.maven</groupId>
<artifactId>module-parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</parent>
<dependencies>
<dependency>
<groupId>com.wiley.realworldjava.maven</groupId>
<artifactId>module-util</artifactId>
        <version>1.0.0-SNAPSHOT</version>
</dependency>
</dependencies>
Since in this example everything is contained in a single project, you can run mvn clean verify on the
module-parent level. The output contains the list of modules built and whether they were successful.
[INFO] --------------------------------------------------------
[INFO] Reactor Summary for module-parent 1.0.0-SNAPSHOT:
[INFO]
[INFO] module-parent ..................... SUCCESS [ 0.077 s]
[INFO] module-util ....................... SUCCESS [ 0.571 s]
[INFO] module-services ................... SUCCESS [ 0.040 s]
[INFO] --------------------------------------------------------
<properties>
<jackson.bom.version>2.17.0</jackson.bom.version>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.fasterxml.jackson</groupId>
            <artifactId>jackson-bom</artifactId>
<version>${jackson.bom.version}</version>
<scope>import</scope>
<type>pom</type>
</dependency>
</dependencies>
</dependencyManagement>
Notice that the scope is import, which is allowed only when the type is pom. This scope expands the
imported POM’s dependencies as if you had included them directly in dependencyManagement. It is much
easier to read than if you typed them all in. For example, jackson-bom has more than 60 dependencies
specified. And you have only one number (jackson.bom.version) to keep up-to-date instead of all the
dependencies individually!
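With the BOM imported, a child dependency on any artifact the BOM governs can omit its version. As a sketch (the artifact was chosen for illustration):

```xml
<dependencies>
    <dependency>
        <!-- No version tag; the imported jackson-bom supplies it -->
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
    </dependency>
</dependencies>
```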
Using IntelliJ
When you have a Maven project, there is a lowercase “m” icon representing Maven in the right sidebar. Click it
to expand the Maven window. Expand the Lifecycle option as in Figure 4.3. You can double-click your desired
goal (for example verify) to run a build or right-click for more options.
IntelliJ provides other options for Maven. For example, the effective POM is useful for seeing what your POM
looks like after all the inherited settings have been applied. Figure 4.4 shows that you can find this option by
right-clicking the project and choosing the Maven menu.
If your project has a lot of dependencies, you may find it useful to right-click the project in the Maven sidebar
and choose Analyze Dependencies. In this list you can click any dependency and see where it came from, even if it
is a transitive dependency.
DEPENDENCY GRAPH
IntelliJ also has the ability to show a dependency graph, which is a nice visual that shows
transitive dependencies and their relationships. To use it, click Maven in the right sidebar and
click the “show diagram” icon on the top, as shown in Figure 4.5.
Using Eclipse
In Eclipse, you run a Maven build by right-clicking the project in Package Explorer and choosing the Run menu.
Figure 4.6 shows this menu where you can click to choose a goal to run.
Eclipse also lets you view the effective POM and dependency hierarchy by opening the pom.xml file. The editor
has tabs for both, as you can see in Figure 4.7.
Using VS Code
In VS Code, you have a “Maven” accordion menu in the bottom-left corner. After you expand it, you see the goals
in Figure 4.8. You can right-click any of them to run your Maven build.
To see the effective POM, right-click the pom.xml file and choose Maven in the menu. Then click Show Effective
POM, as in Figure 4.9.
While there is a dependency list in VS Code, it isn’t currently at the level of IntelliJ and Eclipse.
All of the general Maven settings are configured in a file called settings.xml, including repository locations,
authentication credentials, default profiles, and other configuration settings.
You can have the settings.xml files in one or two places. Global properties such as the URL of your binary
repository are generally configured in <maven install>/conf/settings.xml. User-specific properties
like token passwords (ideally encrypted) can be contained in <user home>/.m2/settings.xml. If both are
specified, then they are merged, and if there are any duplicates, the user-specific ones are used.
The settings.xml files will generally be customized for your environment. This is not required, and if omitted,
Maven will use the default local repository at ~/.m2/repository on Linux and Mac systems or
C:\Users\<username>\.m2\repository on Windows. It will use Maven Central (https://repo
.maven.apache.org/maven2) as the default remote repository for binary dependencies. This is fine for home
use, and you can ask your teammates how to set it up in your enterprise.
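As a minimal sketch, a user-level settings.xml that redirects Maven Central to a hypothetical corporate mirror might look like this (the id and URL are placeholders your admin would supply):

```xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
    <mirrors>
        <mirror>
            <!-- Placeholder values, not a real repository -->
            <id>corporate-repo</id>
            <mirrorOf>central</mirrorOf>
            <url>https://repo.example.com/maven2</url>
        </mirror>
    </mirrors>
</settings>
```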
Where the build.gradle file contains project-specific configuration, the optional settings.gradle file contains
settings that span the whole build, such as the root project name and any included subprojects.
Gradle will generate a file called gradlew.bat for running on Windows and a gradlew file for running on
Linux and Mac. The gradle directory contains a wrapper subdirectory, which contains a JAR file with the
code that actually runs Gradle.
NOTE Gradle syntax gets more involved as your build needs grow. This chapter only covers
the basics.
Finally, lines 8–10 say that this build will be using Maven Central as the default repository for downloading
dependencies. Maven and Gradle generate the same format of artifacts, and therefore they can use the same
remote binary repository.
To run a build at the command line, execute ./gradlew build from the directory that contains
build.gradle.
This will generate a lot of dynamic output on the screen. Instead of logging all the output sequentially as with
Maven, Gradle dynamically updates the console output with the current build progress. Finally, it displays a
summary of the results:
Starting a Gradle Daemon (subsequent builds will be faster)
> Task :compileJava
> Task :processResources
> Task :classes
> Task :jar
> Task :assemble
> Task :compileTestJava
> Task :processTestResources
> Task :testClasses
> Task :test NO-SOURCE
> Task :check UP-TO-DATE
> Task :build
BUILD SUCCESSFUL in 0s
5 actionable tasks: 5 executed
The tasks listed in this output will look similar to the Maven output. They compile the project, run any tests, and
create an artifact. The files created are in the following locations:
➤➤ build/libs: Contains the basic-gradle-1.0.0.jar file.
➤➤ build/classes: Contains the class files generated from compiling. They are separated into
build/classes/java/main and build/classes/java/test.
➤➤ build/resources: Contains copies of the resources separated into build/resources/main and
build/resources/test.
You have the option of using Kotlin for your build.gradle instead of Groovy. These two
language choices are referred to as the domain-specific language (DSL) for Gradle. Kotlin
support was added more recently, but the documentation has been fully updated to
support both.
Basic Gradle examples look similar in Groovy and Kotlin. First is the settings
.gradle.kts file:
rootProject.name = "basic-gradle"
The only difference is that Kotlin requires double quotes. In Groovy, single or double quotes
are allowed, with single quotes being more common. Next is the build.gradle.kts file.
plugins {
`java-library`
}
group = "com.wiley.realworldjava.gradle"
version = "1.0.0-SNAPSHOT"
repositories {
mavenCentral()
}
In addition to the double quotes, note that in the Kotlin version java-library is in
backticks. Finally, the Kotlin build filenames have a .kts extension.
In this chapter, we will show other differences between Groovy and Kotlin build files as we
introduce each feature.
In the previous example, you saw implementation as the dependency configuration type.
Some of the common dependency types include the following:
➤➤ implementation: Available both to compile your code and at runtime
➤➤ testImplementation: Available from src/test/java but not from src/main/java
➤➤ compileOnly: Available for compiling but not at runtime
➤➤ runtimeOnly: Available at runtime but not for compiling
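Putting those configurations together, a dependencies block might look like the following sketch (the artifacts are examples chosen for illustration, not requirements of this project):

```groovy
dependencies {
    // compiled against and shipped with the application
    implementation 'org.eclipse.collections:eclipse-collections:11.1.0'
    // needed to compile, but the runtime environment provides it
    compileOnly 'jakarta.servlet:jakarta.servlet-api:6.0.0'
    // loaded reflectively at runtime, so not needed to compile
    runtimeOnly 'org.postgresql:postgresql:42.7.3'
    // available to test code only
    testImplementation 'org.junit.jupiter:junit-jupiter:5.10.2'
}
```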
Specifying Variables
Since a Gradle build file is actually Groovy or Kotlin code, you can specify variables in code. For example, this
Groovy code declares a variable:
def collVersion = "11.1.0"
dependencies {
    implementation "org.eclipse.collections:eclipse-collections:${collVersion}"
}
Groovy uses def to declare a variable. The ${} syntax tells Gradle to substitute the variable value. You need to
use double quotes for that expansion to work. In Groovy, single quoted strings do not allow this feature.
By contrast, a Kotlin build file can also use a variable:
val collVersion = "11.1.0"
dependencies {
    implementation("org.eclipse.collections:eclipse-collections:${collVersion}")
}
Groovy uses def to declare a variable, where Kotlin uses val for declaring immutable variables and var for
mutable variables; the Kotlin version also wraps the dependency string in parentheses, which Kotlin requires for
the method call. Since the version is declared once and unchanged in the build file, this example uses val.
TIP In addition to the Gradle java plugin, there is a java-library plugin that extends
it. The java-library plugin adds extra functionality like an API configuration.
The Java plugin page includes configuration for tests, Javadoc, and packaging. It also includes descriptions for
configuring changes to any defaults such as source directories.
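As a sketch of such an override, this block (the directory name is made up) tells the Java plugin to look for production code somewhere other than the default src/main/java:

```groovy
sourceSets {
    main {
        java {
            // look for production code here instead of src/main/java
            srcDirs = ['src/main/custom']
        }
    }
}
```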
NOTE Gradle also supports multiproject builds like Maven’s multimodule builds. You add
an include statement in the settings.gradle file.
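For example, mirroring the earlier Maven multimodule layout, a settings.gradle might contain something like this (the module names are assumed):

```groovy
rootProject.name = 'module-parent'
include 'module-util', 'module-services'
```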
TIP The Analyze Dependencies menu is also available for Gradle projects, just like for
Maven ones. Just open the Gradle sidebar, right-click the project, and choose Analyze Depen-
dencies.
In Eclipse, start with the Gradle Tasks view at the bottom of the screen. Then follow these steps to run a
Gradle build:
1. Expand the project.
2. Expand Build. Figure 4.12 shows this view.
3. Right-click Build and choose Run Gradle Tasks.
In VS Code, the Maven extension is included in the Java extension pack. For Gradle, you need to install the
extension yourself. Go to Marketplace and install the Gradle for Java extension.
Once you have the extension installed and a Gradle project opened, you’ll see the elephant logo in the left
navigation. Then follow these steps to run a Gradle build:
1. Expand Gradle Projects.
2. Expand your project name.
3. Expand Tasks.
4. Expand Build. Figure 4.13 shows what VS Code looks like at this point.
5. Right-click Build and choose Run Task.
FINDING A DEPENDENCY
Whether you are using Maven or Gradle, you will frequently need to find the project coordinates for projects
in Maven Central. One way to find them is to go to https://central.sonatype.com and search for your
artifact. The resulting page has a pull-down for Maven and the Gradle variants so you can copy/paste it into
your build.
However, there is a faster way. If you search for “maven central” and the artifact name on your favorite search
engine, the first hit is likely to be at https://mvnrepository.com. For example, searching for “maven
central eclipse collections” takes you to https://mvnrepository.com/artifact/org.eclipse
.collections/eclipse-collections. The Maven Repository site was built by a developer in Spain and
has the same information as Maven Central. We use mvnrepository.com in this book when mentioning
URLs due to its better search engine optimization.
Figure 4.14 shows the options available when you click a specific version of an artifact.
At the time of this writing, we chose Eclipse Collections version 11.1.0 for the example in this chapter. Notice
that there is a later version, 12.0.0.M3. The M3 stands for “milestone 3.” A milestone release is a publicly
available release for testing new features and comes before the final version. Since it is not common to use
milestone dependencies in production code, we chose the most recently released version.
In Figure 4.14, under the warning about the later version, there are tabs for many build systems. Select your build
system, copy the dependency information, and paste that into your POM or Gradle build file.
TIP In the figure, the tabs SBT, Ivy, and Grape are other specialized build systems that are
not covered in this book.
Installing Jenkins
To install Jenkins, you can go to the download page in “Further References.” You’ll notice that there are many
choices on that page.
First decide whether you want the stable version or the weekly release. The stable packages are released monthly,
and Jenkins applies security and bug fixes to them for a period of time. Every three months, one of the stable
releases becomes the long-term support (LTS) release. When running at home, any release is fine. Businesses will
usually choose an LTS release or a vendor-supported version. We’ll use the latest stable release in our example.
Next you must decide on the format for your Jenkins download. One option is to get it as a web archive (WAR)
file to run on a web application server like Tomcat. Alternatively, you can get it as an operating system package
to install as a binary or a Docker image. We’ll use the Docker image in this chapter as an example. The link to
download Docker is also in the “Further References” section. Any of these options is fine, though, when trying
Jenkins on your personal machine.
For Docker, start by opening the “Docker Desktop” application to ensure Docker is running on your machine.
Then open a command line and run the following command to download the latest stable Jenkins release:
docker pull jenkins/jenkins
The output will look something like this where each of the layers in the Docker image is downloaded:
Using default tag: latest
latest: Pulling from jenkins/jenkins
60bdaf986dbe: Pull complete
dfad4ee37376: Pull complete
206558d801c7: Pull complete
a5c2ffb5ffef: Pull complete
f0c0bc8bfcc6: Pull complete
064531224ab4: Pull complete
96aa304ced3c: Pull complete
056f1f47a471: Pull complete
ac5fc7f80726: Pull complete
8a59881e61b3: Pull complete
361281efe43a: Pull complete
aa0d9cfb3420: Pull complete
Digest: sha256:d4f805f9c225ee72c6ac8684d635eb8ec236569977f4cd6f9acd7c24a5d56985
Status: Downloaded newer image for jenkins/jenkins:latest
docker.io/jenkins/jenkins:latest
Next you run the Docker container:
docker run --name jenkins -p 8080:8080 jenkins/jenkins
This command gives your Docker container the name “jenkins” to make it easier to refer to later. It also exposes
port 8080 so you can access it in a web browser. If that port is in use on your machine, feel free to use any
available port.
NOTE You only use the run command the first time. After that, start Jenkins by calling
docker start jenkins instead.
When you first open Jenkins in your browser, it asks you to unlock it with the initial admin password that was printed in the console output during startup. That password looks something like this:
4ab39e58b99e4519b3edbf7e4df21d51
Jenkins will ask you whether you want to “Install suggested plugins” or “Select plugins to install.” Choose the
suggested plugins so you can get started quickly. If you watch the screen during the plugin install, you’ll notice
that Jenkins downloads more than just the plugins on the screen. Just like Maven and Gradle dependencies,
Jenkins plugins depend on other Jenkins plugins. Therefore, Jenkins will download those transitive plugin
dependencies as well.
Next Jenkins asks you to set up your first admin user. Be sure to remember the username and password you pick, as you'll need them to get back in. After this step, you no longer need the long initial password from the install!
Finally, Jenkins asks you to confirm the URL. Using the default of http://localhost:8080 is just fine.
Jenkins will display the “Jenkins is ready!” screen. Just click Start Using Jenkins.
Creating Jobs
In the following sections, you’ll learn how to create three kinds of jobs.
➤➤ A simple freestyle job
➤➤ Freestyle jobs that run Maven and Gradle
➤➤ Pipeline jobs
Freestyle jobs require you to specify your build configuration using the Jenkins user interface. The freestyle
approach has an easier learning curve than pipelines and will work for the majority of cases. Freestyle jobs let you
see the workspace via a link in the left navigation.
Pipelines allow you to express your build using code. They also let you split your build into stages.
There are two types of pipelines: scripted and declarative. In this book, we use the scripted pipeline for Maven
and the declarative pipeline for Gradle so you can see an example of each. Pipeline jobs provide a workspaces
link on specific builds instead of on the job level.
Jenkins will create the job and automatically take you to the configure screen. There are several configuration
sections in a job, as shown in Figure 4.15.
General contains checkboxes for you to select your options. For example, you can control how many old builds Jenkins saves or whether to allow a new build to start while the current one is still running.
Source Code Management allows you to specify the version control system you are using. Git is included with the recommended plugins. If you are using a different VCS, you can install a plugin for it from the Jenkins Plugin Manager page. There are also fields for the repository URL and any credentials needed to connect.
Build Triggers is where you choose what kicks off your build. In this example, we will use Build Periodically to
show how a schedule is used. There are also options for a GitHub commit trigger and for polling the repository.
With the commit trigger, GitHub tells Jenkins about each push so a build can start immediately. By contrast, with polling, Jenkins periodically asks the repository whether there have been any changes and, if so, executes the build.
Build Environment lets you configure information about how Jenkins should behave. For example, you can have Jenkins add timestamps to each log entry, or you can set a strategy for when Jenkins should kill a long-running build.
Build Steps is the actual build, where you configure one or more build steps to run. For example, you can have
Jenkins run a command from the command line or you can configure a Maven build here.
Post-build Actions allow you to deal with the result of a build. For example, you can send an email notification or
publish a report.
Now that you understand what a freestyle job can do, let’s create a simple one that prints hello every hour. First,
you need to set the build trigger to build periodically. You can use a cron-like syntax (cron is the Unix built-in
scheduler).
# MINUTE HOUR DOM MONTH DOW
* * * * *
Jenkins will advise that you can use H rather than specifying a precise time. H calculates a hash based on the job name to distribute jobs across different times, as shown here:
H * * * *
However, there is a better way. Jenkins provides an alias for each of the common frequencies, such as @hourly,
@daily, @weekly, and @monthly. That means all you need to write is this:
@hourly
When you tab out of the schedule text area, Jenkins displays when the job would have last run and will next run.
This is a convenient way of checking that your expression does what you expect.
Now it is time to add a build step. If you are on Linux/Mac, choose Execute Shell, and if you are on Windows,
choose “Execute Windows Batch Command.” Either way, enter the following command:
echo "hello"
Figure 4.16 shows the configuration for this job when run on Mac.
Now save your configuration. You can wait for the hourly schedule to run the job automatically, or you can choose Build Now from the left navigation to trigger a run right away. Figure 4.17 shows the build history after a few runs.
There is a green icon next to each run, since the build was successful. It will be yellow if the build is unstable
or red if it failed. Clicking that icon takes you to the console output for your build. On a Mac, it shows the
following:
TIP You might have noticed the times are in Coordinated Universal Time (UTC). You can go
to your user profile to change it for yourself. Or you can set the default by passing a param-
eter to Docker.
docker run --name jenkins -p 8080:8080 \
  -e JAVA_OPTS=-Duser.timezone=America/New_York jenkins/jenkins
Now that Maven is configured, you can create a new freestyle job. The next step is to pull the code from GitHub.
Jenkins has a Source Code Management section that provides a radio button for Git. Enter the repository URL
for the code you want. For example, this chapter uses https://github.com/realworldjava/Ch04-CICD.
Since this repository is available on the Internet, Jenkins will be happy with the URL. If it were a private
repository or if you had mistyped the URL, you would get a message like this:
Failed to connect to repository : Command "git ls-remote -h -- https://github.com/realworldjava/Ch04-CICD HEAD" returned status code 128:
In the enterprise, your repository will invariably require credentials. The Jenkins instance will
use a token associated with a service account (that is, a nonhuman account) so that it will be
clear that the activity resulted from an automated system. In this book, we will show you how
to use a GitHub personal access token using your account.
GitHub may encourage you to use fine-grained tokens; check whether their functionality has changed by the time you read this. We used classic tokens because, at the time of this writing, only the repository owner can create fine-grained tokens, and you are not always the owner of the repositories you use.
To generate a token, follow these steps:
1. In GitHub, click your picture at the top right.
2. Click Settings.
3. In the left navigation, choose Developer settings.
4. Click Personal access tokens in the left navigation.
5. Click Tokens (classic) in the left navigation.
6. Click Generate new token in the top right and choose classic again.
7. Type a name for your token to remind you of the purpose. For example, Jenkins.
8. Choose the number of days for expiration. Many enterprises have rules on how
long a token can exist before being cycled.
9. Generate the token.
Now that you have a token, you can set it up as a credential in Jenkins:
1. Click Manage Jenkins in the left navigation.
2. Choose Credentials.
3. Click (global) to make the credential available to all of Jenkins.
In addition to specifying the repository, you need to specify the branch. The branch specifier field defaults to
*/master, which is the name old versions of Git used for the default branch before they changed it to main.
Change it to */main or whatever your main branch name is.
The next step is to tell the freestyle job to run Maven. To do so, create a build step of type Invoke top-level
Maven targets. Choose the Maven version you created earlier from the pull-down and type clean verify for
the goals. This repository uses subfolders rather than having the pom.xml filename in the root; click Advanced,
and enter maven/with-dependency/pom.xml for the POM. Figure 4.19 shows the configuration for this job.
This will cause Jenkins to restart. Now that you have the plugin, you are ready to create a pipeline. Choose Pipeline instead of Freestyle Project when creating the job.
Pipeline jobs give you the choice of including the pipeline directly in the job configuration or getting it from a
repository. For the former, choose Pipeline Script under the Definition label. Then in the text area beneath it, paste
your pipeline as shown in Figure 4.21.
node {
    stage('Source Control') {
        git branch: 'main', url: 'https://github.com/realworldjava/Ch04-CICD'
    }
    stage('Build') {
        withMaven(maven: 'Maven 3.9') {
            sh "\$MVN_CMD -f maven/with-dependency clean verify"
        }
    }
}
This scripted pipeline has two stages, Source Control and Build. The first pulls the main branch of the
repository. The latter uses functionality from withMaven to set up a Maven environment. Within that context
it calls the Maven command at the operating system shell. If you are running on a Windows machine, use bat
instead of sh.
Alternatively, you can store the pipeline itself in source control by following these steps, as shown in Figure 4.22:
1. Choose Pipeline script from SCM under the Definition label.
2. Choose Git for the SCM.
3. Enter a repository URL, for example, https://github.com/realworldjava/Ch04-CICD.
4. Change Branch Specifier to */main.
5. Change Script Path to maven/using-jenkinsfile/Jenkinsfile.
When you open any pipeline job in Jenkins, there is a Pipeline Syntax link on the left navigation. This takes you to a wizard for generating code for both scripted and declarative pipelines.
Fill in configuration values, and Jenkins will give you corresponding code. While this doesn’t
write your entire pipeline for you, it is an excellent starting point.
pipeline {
    agent any
    stages {
        stage('Source Control') {
            steps {
                git branch: 'main',
                    url: 'https://github.com/realworldjava/Ch04-CICD'
            }
        }
        stage('Build') {
            steps {
                sh "gradle/groovy/with-dependency/gradlew " +
                    "-p gradle/groovy/with-dependency build"
            }
        }
    }
}
This time, the root is pipeline instead of node. The agent any line indicates that the build can run on any agent; in this case, we have only one agent. There are also more levels, for example, the steps level. The ideas are the same across pipeline formats, and either pipeline format is fine to use. Just like the Gradle freestyle job, the gradlew from the repository is used to run the build.
TIP Pipelines have many advanced features including the ability to run steps in parallel,
controlling when to fail the build, or even adding pauses.
Organizing Jobs
When you choose New Item, you aren’t limited to jobs. You can also create a folder. Folders on Jenkins work like
those on your computer. You can nest folders inside other folders to help keep your jobs organized.
You can also create views as an alternate way of looking at your jobs. Click the plus (+) above your jobs list to
create a new view. By default, a configurable list view is available. You can manually select the jobs that go in a
view, or you can provide a regular expression to specify a pattern. Chapter 10, “Pattern Matching with Regular
Expressions,” will cover how to use regular expressions in general. A list view also lets you control which columns appear. Views can also traverse into folders.
The Dashboard View plugin provides a much more powerful view type. It lets you select portlets to anchor to the
top, bottom, left, and right of the Jenkins screen. You can have as many portlets in each area as you like, making
for good layout control. The portlets include many statistics such as builds, jobs, and tests. You can even make
your dashboard a full-screen view hiding the built-in Jenkins navigation.
Notifying Users
A failing job won’t get fixed if nobody knows about it! By default, Jenkins provides email notifications on build
failures. You can provide one or more email addresses and specify whether you want an email on every build that
doesn’t pass or just the first one. Additionally, you can include whoever made the commit that broke the build.
The editable email notification provides much more control. You can specify fields for email, subject, and content template. You also have the option to attach the build log file. One of the powerful features of this plugin is that
it provides you with fine-grained control over when emails are sent. You can choose any combination of success
and failures, even sending different people the notifications for different cases.
Besides email notifications, Jenkins supports many other notification types via plugins. For example, the SMS
plugin can send a text message when there is a failure. The Notify-Events plugin supports Skype, Telegram, voice
call, and more. There are specialized notifiers for Slack or Amazon SNS topics.
If these active/push notifiers don’t meet your needs, you can use a passive/pull approach. Jenkins supports RSS via
the /rssAll, /rssFailed, and /rssLatest feeds.
Whichever notifier you choose, make sure it is one that will get someone’s attention. After all, fixing a build is a
high priority, and that won’t happen unless someone knows that it is broken!
Reading Changes
Each Jenkins job has a Changes link in the left navigation. This tells you what has changed in the repository since
your last build. Here’s an example:
#8 (May 7, 2024, 9:36:22 PM)
TIP SonarLint is the IDE version of Sonar, so you can discover problems before even committing your code to version control.
FURTHER REFERENCES
Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation
(Addison-Wesley, 2020)
➤➤ https://maven.apache.org/guides/index.html
Maven documentation
➤➤ https://maven.apache.org/download.cgi
Download Maven for use outside the IDE
➤➤ https://maven.apache.org/ref/3.9.6/maven-model-builder/super-pom.html
Maven super POM
➤➤ https://docs.gradle.org/current/userguide/userguide.html
Gradle documentation
➤➤ https://docs.gradle.org/current/userguide/java_plugin.html
Gradle Java plugin
➤➤ https://gradle.org/install
Download Gradle for use outside the IDE
➤➤ https://docs.docker.com/engine/install
Download Docker
➤➤ https://www.jenkins.io/download
Download Jenkins
➤➤ https://www.jenkins.io/doc/book/pipeline
Jenkins Pipeline documentation
➤➤ https://plugins.jenkins.io
Jenkins Plugins
➤➤ https://www.sonarsource.com
SonarQube
SUMMARY
In this chapter, you learned about CI/CD. Key concepts included the following:
➤➤ Building with a Maven pom.xml
➤➤ Building with a Gradle build.gradle in Groovy
➤➤ Building with a Gradle build.gradle.kts in Kotlin
➤➤ Creating Jenkins freestyle and pipeline jobs
➤➤ The purpose of SonarQube, a static analysis tool
➤➤ Key CI/CD principles
5
Capturing Application State
with Logging Frameworks
WHAT’S IN THIS CHAPTER?
When you first started programming in Java, you probably used System.out.println to output
information for use in troubleshooting why your code didn’t work as expected. For example:
public void run(int count) {
    System.out.println("count=" + count);
}
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. The code can also be found at https://github.com/realworldjava/
Ch05-Logging. See the README.md file in that repository for details.
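A minimal Java Util Logging program along these lines (a sketch; the repository's actual listing may differ in package name and details) looks like this:

```java
import java.util.logging.Logger;

public class BasicLogging {

    // loggers are conventionally named after the class they log for
    private static final Logger LOGGER =
            Logger.getLogger(BasicLogging.class.getName());

    public static void main(String[] args) {
        LOGGER.severe("Something bad happened!");
    }
}
```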
Line 9 is the logging statement. In Java Util Logging, severe() tells the logger to use the highest priority level of logging. Running this code outputs something like this:
Feb 11, 2024 9:16:22 PM com.wiley.realworldjava.logging.jul.BasicLogging main
SEVERE: Something bad happened!
By default, this output contains the following information from where the logging occurred:
➤➤ Date and time
➤➤ Fully qualified class name
➤➤ Method name
➤➤ Logging level (SEVERE)
➤➤ Log message
LOGGER NAMES
While it is common to use the package/class name for a logger, you can choose any logger
name. To do this, you can write the following:
private static final Logger LOGGER = Logger.getLogger("CustomLogger");
Your choice of logger name does not impact what gets logged unless you specifically choose to
include the logger name.
In the following sections, you’ll see more details about using Java Util Logging.
Java Util Logging has seven logging levels along with two special options. Table 5.1 shows the logging levels and
their intended purpose.
Suppose the logging level is set to INFO. Calls to severe(), warning(), and info() will be logged. To make
sure you fully understand this, see Table 5.2.
(At the extremes, a level of FINEST logs calls at every level, while OFF logs nothing at all.)
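You can check programmatically which calls would be logged at a given level by using isLoggable(); a small sketch (the class name is mine, not the book's):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LevelFiltering {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger(LevelFiltering.class.getName());
        logger.setLevel(Level.INFO);

        // at INFO, severe(), warning(), and info() are logged; finer levels are not
        System.out.println(logger.isLoggable(Level.SEVERE)); // true
        System.out.println(logger.isLoggable(Level.INFO));   // true
        System.out.println(logger.isLoggable(Level.FINE));   // false
    }
}
```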
Formatting Values
Often, you want to log variable values and not just static messages. The built-in String.format method
formats values for Java Util Logging. For example:
LOGGER.severe(String.format("%s: %.1f is %d/%d", "Division", 1.5, 3, 2));
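Because String.format builds the message before the logger ever sees it, you can try the formatting on its own; a sketch:

```java
public class FormatValues {
    public static void main(String[] args) {
        // the %s, %.1f, and %d conversions fill in the arguments in order
        String msg = String.format("%s: %.1f is %d/%d", "Division", 1.5, 3, 2);
        System.out.println(msg); // Division: 1.5 is 3/2 (in an English locale)
    }
}
```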
Using Java Util Logging ❘ 133
NOTE In your applications, you are likely to set the system property as a launch parameter
in whatever system is running your application. See Chapter 14, “Getting to Know More of
the Ecosystem,” for examples of such environments.
Another way to set the system property is in code. This approach is not normally recommended because it requires a recompile to change the location. However, the repository for this chapter uses it to make it easy to switch among a variety of property files.
Since the property must be set before the Logger is instantiated, it is done in a static block.
import java.util.logging.Logger;
static {
    System.setProperty("java.util.logging.config.file",
        "src/main/resources/logging-warning.properties");
}

private static final Logger LOGGER =
        Logger.getLogger(BasicConfig.class.getName());
NOTE Multiple handlers can be used by separating their types with commas:
handlers=java.util.logging.ConsoleHandler,
java.util.logging.FileHandler
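If you want to experiment with these name/value pairs without creating a file at all, you can feed the same properties format to the LogManager directly. This is a sketch for experimentation, not how you would configure a production application:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.logging.LogManager;
import java.util.logging.Logger;

public class ProgrammaticConfig {
    static void load() {
        // the same key=value format as a logging properties file
        String props = ".level=WARNING\n"
                + "handlers=java.util.logging.ConsoleHandler\n";
        try {
            LogManager.getLogManager()
                    .readConfiguration(new ByteArrayInputStream(props.getBytes()));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        load();
        Logger root = Logger.getLogger("");
        System.out.println(root.getLevel()); // WARNING
    }
}
```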
The following configuration shows how to use the basic features of a FileHandler:
1: .level= WARNING
2: handlers=java.util.logging.FileHandler
3:
4: java.util.logging.FileHandler.pattern = java-log-%u.log
5: java.util.logging.FileHandler.limit = 100
6: java.util.logging.FileHandler.count = 2
Line 4 specifies where the log files go. In this example, they are in the root directory of the code. In
a real application, an absolute path is most common. The %u indicates to use a unique number. Each file gets a
number to avoid naming conflicts.
Line 5 says how many bytes should be in each file. It is common for this to be a much larger number, like
100,000, but we used a smaller number for the purposes of our demonstration. Once this number is exceeded, a
new file will automatically start. If you omit the FileHandler.limit property, there will be no upper bound
to the file, potentially resulting in an unmanageably large file.
Line 6 specifies a maximum of two log files. Running this against a program that generates a lot of data creates
two files:
java-log-0.log.0
java-log-0.log.1
The rest of the data is gone. That’s what log rotation is; the older data gets rotated out and replaced by the latest
data. Beware, if you leave out line 6, Java Util Logging will keep generating files potentially using up all the space
on your disk! Typically you would keep a week’s or month’s worth of logs or so on disk, but that varies greatly
depending on your team and your application.
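The same rotation behavior can be reproduced programmatically, which is a convenient way to experiment with the limit and count settings. A sketch using a temporary directory (class name and message text are mine):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.stream.Stream;

public class FileRotationDemo {
    static long run() {
        try {
            Path dir = Files.createTempDirectory("jul-demo");
            // pattern, approximate max bytes per file, number of files to rotate through
            FileHandler handler = new FileHandler(dir + "/java-log-%u.log", 100, 2);
            Logger logger = Logger.getLogger(FileRotationDemo.class.getName());
            logger.setUseParentHandlers(false); // keep the demo output off the console
            logger.addHandler(handler);
            for (int i = 0; i < 50; i++) {
                logger.severe("Logging data: " + Math.random());
            }
            handler.close();
            try (Stream<Path> files = Files.list(dir)) {
                return files.count(); // only `count` generations survive rotation
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("log files kept: " + run());
    }
}
```

Even though 50 messages are logged, only the two most recent generations remain on disk.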
Each file contains something like the following. This is a format called eXtensible Markup Language (XML). For
more details about XML, see Appendix, “Reading and Writing XML, JSON, and YAML.” Each handler has a
default format. ConsoleHandler uses a text format. FileHandler uses XML.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE log SYSTEM "logger.dtd">
<log>
<record>
<date>2024-02-14T00:07:43.388849Z</date>
<millis>1707869263388</millis>
<nanos>849000</nanos>
<sequence>999</sequence>
<logger>com.wiley.realworldjava.logging.jul.FileRotationLogging</logger>
<level>SEVERE</level>
<class>java.util.stream.ForEachOps$ForEachOp$OfRef</class>
<method>accept</method>
<thread>1</thread>
<message>Logging data: 0.707243274304531</message>
</record>
</log>
You can change the format from the default through configuration. The following example shows how to
specify a custom format. Don’t worry if the format seems complicated; you will want to keep the reference
documentation open while you are configuring these things.
.level= WARNING
handlers=java.util.logging.FileHandler
java.util.logging.FileHandler.pattern = java-log.log
java.util.logging.FileHandler.limit = 100000
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format=DATE=%1$tc, MESSAGE=%5$s%n
This logs to a single file of up to 100 KB with lines like this:
DATE=Tue Feb 13 19:34:27 EST 2024, MESSAGE=Logging message
Some of the format is raw text like DATE= and MESSAGE=. The %1 inserts the date, and %5 inserts the message.
The rest (for example, $s%n) are formatting characters.
The Javadoc for the SimpleFormatter class provides further detail.
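Per that Javadoc, SimpleFormatter hands a fixed argument list to the format string: argument 1 is the date, 2 through 4 are the source, logger name, and level, 5 is the message, and 6 is the throwable. You can see how the positional indexes work by calling String.format yourself (the argument values here are made up for illustration):

```java
import java.util.Date;

public class SimpleFormatDemo {
    public static void main(String[] args) {
        // %1$tc formats argument 1 as a date/time; %5$s prints argument 5 as a string
        String line = String.format("DATE=%1$tc, MESSAGE=%5$s%n",
                new Date(), "com.example.Source", "com.example", "SEVERE",
                "Logging message");
        System.out.print(line);
    }
}
```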
Logging Lazily
So far, the output has been a simple String. Sometimes what you want to log is expensive to construct, and you
want to avoid constructing it unless it will actually be logged. In such cases, you can use a lambda so that the
message will be only constructed when the logging level is met:
public static void main(String[] args) {
LOGGER.fine(() -> generateMessage());
}
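You can verify that the supplier really is skipped when the level filters the message out; a sketch with a counter standing in for an expensive computation:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLogging {
    static final AtomicInteger CALLS = new AtomicInteger();

    static String generateMessage() {
        CALLS.incrementAndGet(); // stand-in for an expensive computation
        return "expensive message";
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger(LazyLogging.class.getName());
        logger.setLevel(Level.INFO); // FINE is below INFO, so fine() calls are filtered

        logger.fine(() -> generateMessage()); // lazy: the lambda is never invoked
        logger.fine(generateMessage());       // eager: the message is always built

        System.out.println("calls: " + CALLS.get()); // calls: 1
    }
}
```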
Inheriting Loggers
Up until now, the examples have used .level to specify the logging level. The . (dot) means the root logger. You
can specify different levels of loggers, as in the following example. The logging level will apply to the specified
logger and any children. If there are overlapping specifications, the most specific one will apply.
Suppose we have the following configuration, which sets three rules for logging levels:
.level= INFO
com.wiley.realworldjava.logging.jul.level= SEVERE
com.wiley.realworldjava.logging.jul.child.level= WARNING
handlers=java.util.logging.ConsoleHandler
Java Util Logging builds a logging hierarchy that looks like Figure 5.1. Therefore, the logger com.wiley inherits
from the logger com. This is used to determine what logging configuration to use for a given logger.
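You can observe this inheritance directly: a logger with no level of its own reports the effective level of its nearest configured ancestor. A sketch (logger names are illustrative):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class InheritanceDemo {
    public static void main(String[] args) {
        Logger parent = Logger.getLogger("com.wiley");
        Logger child = Logger.getLogger("com.wiley.realworldjava");
        parent.setLevel(Level.SEVERE);

        // the child has no level set directly, so it inherits SEVERE from com.wiley
        System.out.println(child.getLevel());                // null
        System.out.println(child.isLoggable(Level.SEVERE));  // true
        System.out.println(child.isLoggable(Level.WARNING)); // false
    }
}
```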
import java.util.logging.Logger;
import com.wiley.realworldjava.logging.jul.child.ChildLogging;
log();
}
}
Java Util Logging looks for the most specific logger it can. In this case, it is the com.wiley.realworldjava
.logging.jul logger, so SEVERE is used and Parent logging gets logged only once for MultipleLevels
Logging. By contrast, the following class uses the com.wiley.realworldjava.logging.jul.child
logger, which we configured earlier to log at the WARNING level and which therefore causes Child logging to
get printed twice, one for SEVERE and one for WARNING:
package com.wiley.realworldjava.logging.jul.child;
import java.util.logging.Logger;
parentLogger.severe("Parent logging");
parentLogger.warning("Parent logging");
childLogger.severe("Child logging");
childLogger.warning("Child logging");
}
}
All the logging frameworks have inheritance that works this way, so this topic appears only here, under Java Util Logging, because that framework is covered first in the chapter.
USING LOG4J
Log4j is one of the oldest Java logging frameworks, even predating Java Util Logging. It has received many
updates over the years and is in widespread use.
NOTE Log4j 2 was created as a replacement for Log4j 1 more than a decade ago. All references to Log4j in this chapter are to Log4j 2. If you are looking at code, you can tell you are on Log4j 2 if the import starts with org.apache.logging rather than Log4j 1's org.apache.log4j.
You might have read about Log4j in the news. On December 9, 2021, a major security vulnerability known as
Log4Shell was announced, which allowed bad actors to run arbitrary code in applications. Yikes! You don’t have
to worry about this as long as you are using a version of Log4j 2.17.2 or higher.
To use Log4j in your Maven or Gradle build, go to Maven Central https://mvnrepository.com/artifact/
org.apache.logging.log4j/log4j-core and find the latest version number for your build tool. Table 5.3
shows what this looks like for each configuration at the time of this writing. Notice that you need both the API
and Core dependencies. The API is the interface, and core is the implementation.
TOOL SYNTAX
Maven   <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-api</artifactId>
            <version>2.22.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>2.22.1</version>
        </dependency>
Grab the appropriate syntax for your build tool and include that in your dependencies. The following is a simple
program that uses Log4j:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
➤➤ Time
➤➤ Method name
➤➤ Logging level
➤➤ Fully qualified class name
➤➤ Log message
Finally, LogManager.getLogger() has three overloaded versions. The one without parameters derives a logger with the package/class name from the calling code. The other two take a specific name or the class reference
itself. These three calls are all equivalent:
LogManager.getLogger();
LogManager.getLogger(GetLoggerEquivalents.class);
LogManager.getLogger(GetLoggerEquivalents.class.getName());
The first one is preferable because there is less code and you don’t have to worry about copy/pasting it into a
different class where you would have to remember to change the name of the logger!
In the following sections, you’ll see more details about using Log4j.
Formatting Values
You have two options for formatting values in Log4j. The syntax varies depending on whether you obtain the logger by calling getLogger() or the alternative getFormatterLogger(). Contrast the two versions in this example:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
The configuration file approach is most common because having the file on your classpath (typically in
src/main/resources) is sufficient.
The next section describes the difference in formats. Note that each of the format files listed have versions with
and without “test” in the name. The “test” versions will be picked up by your JUnit tests if you place the file in
src/test/resources, which allows your automated tests to easily use a different logging configuration than
your application itself.
Finally, if no configuration is found, Log4j uses the default ERROR logging level, which sends all output to the
console. To override that default, set the system property org.apache.logging.log4j.level to the desired
level: INFO, DEBUG, etc.
static {
    System.setProperty("log4j2.configurationFile",
        "src/main/resources/log4j2-config.xml");
}
be careful
just to let you know
None of this should be surprising as it matches the description of how we want logging to behave. There’s one
other attribute in the example, on line 10, additivity. By default, loggers inherit from each other in Log4j.
Without setting additivity to false, the output would include duplicate rows because both the root logger and the
com.wiley.realworldjava logger would be run for the same message.
XML was the original format supported by Log4j, so it has the most examples online and is often the best choice. Since it is the oldest, you may encounter Log4j 1.x examples on the Internet. If you see XML that contains <log4j:configuration, it is from Log4j 1.x, and you should find a newer example! Log4j 2 examples have <Configuration> as the root XML tag.
Next, let’s set the same configuration using a properties file, using name/value pairs. Similarly, if you see
log4j.rootLogger, you are looking at Log4j 1.x documentation and should find a more modern example
that doesn’t have the log4j attribute prefix and simply has rootLogger.
appenders = console
rootLogger.level = error
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = STDOUT
logger.book.name=com.wiley.realworldjava
logger.book.level=INFO
logger.book.appenderRefs = stdout
logger.book.appenderRef.stdout.ref = STDOUT
logger.book.additivity = false
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %m%n
The concepts are the same as in the XML example. You see the root logger, appenders, console, logging levels, and so on. A
significant difference is that you must give each logger (other than the root logger) a reference that you use in the
property file keys as a grouping. You can think of book as a variable name in this example. All of the rows that
begin with logger.book are grouped together and define the settings that are applied to the book logger.
The .json/.jsn and .yaml/.yml versions of the configurations require you to add some additional dependencies in your project. Both use different APIs from the popular Jackson library for parsing. These APIs are not included as transitive dependencies of Log4j itself since they are optional dependencies, only required if you are using JSON or YAML, but not for the properties or XML flavors.
To use these formats, find the dependencies on Maven Central: https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind and https://mvnrepository.com/artifact/com.fasterxml.jackson.dataformat/jackson-dataformat-yaml. Find the latest version number
for your build tool. See Tables 5.5 and 5.6 for the additional dependencies for JSON and YAML, respectively,
using the latest version numbers at the time of this writing. Note that both pull in jackson-core as a transitive
dependency of the one you specify.
NOTE If you omit the dependencies, Log4j silently ignores the JSON and YAML
configuration files and will go on to the next file it can find in the lookup order or use the default.
You will not get a failure message about the configuration file being ignored.
TOOL SYNTAX
Maven   <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>2.16.1</version>
        </dependency>
TOOL SYNTAX
Maven   <dependency>
            <groupId>com.fasterxml.jackson.dataformat</groupId>
            <artifactId>jackson-dataformat-yaml</artifactId>
            <version>2.16.1</version>
        </dependency>
The following shows the configuration from the previous examples represented in JSON. Log4j supports both the
.json and .jsn file extensions. The content in the file is the same.
{
"configuration": {
"status": "warn",
"appenders": {
"Console": {
"name": "STDOUT",
"PatternLayout": {
"pattern": "%m%n"
}
}
},
"loggers": {
"logger": {
"name": "com.wiley.realworldjava",
"level": "info",
"additivity": "false",
"AppenderRef": {
"ref": "STDOUT"
}
},
"root": {
"level": "error",
"AppenderRef": {
"ref": "STDOUT"
}
}
}
}
}
Finally, the configuration in YAML is as follows. Log4j supports both the .yaml and .yml file extensions, which
are both commonly accepted file types for YAML files.
Configuration:
  status: warn
  appenders:
    Console:
      name: STDOUT
      target: SYSTEM_OUT
      PatternLayout:
        Pattern: "%m%n"
  Loggers:
    logger:
      - name: com.wiley.realworldjava
        level: info
        additivity: false
        AppenderRef:
          ref: STDOUT
    Root:
      level: error
      AppenderRef:
        ref: STDOUT
Both the JSON and YAML examples use the same concepts as the XML example. The only difference is the syn-
tax. Any of the formats is fine to use as the configuration. You can pick whichever one you or your team prefer.
In this section, you’ll learn how to use RollingFile. (RollingRandomAccessFile works the same way.)
Each appender has its own set of attributes to control behavior. See the “Further References” section at the end of
this chapter for a link to the full list.
For rolling files, you can control things like the name, formatting, how often to write to disk, and how to handle
rotation. For example, you can rotate the file based on time, such as once a day, or size, such as when it
gets to 25 MB.
This example rotates files after a day or 1 MB, whichever comes first:
1: <?xml version="1.0" encoding="UTF-8"?>
2: <Configuration status="warn" name="MyApp">
3: <Appenders>
4: <RollingFile name="RollingFile" fileName="java-log.log"
5: filePattern="java-log-%d{yyyy-MM-dd}-%i.log.gz">
6: <PatternLayout>
7: <Pattern>%m%n</Pattern>
8: </PatternLayout>
9: <Policies>
10: <TimeBasedTriggeringPolicy/>
11: <SizeBasedTriggeringPolicy size="1 MB"/>
12: </Policies>
13: <DefaultRolloverStrategy max="3"/>
14: </RollingFile>
15: </Appenders>
16: <Loggers>
17: <Root level="error">
18: <AppenderRef ref="RollingFile"/>
19: </Root>
20: </Loggers>
21: </Configuration>
On lines 4 and 5, the configuration specifies both the filename for the file actively being logged to (java-log
.log), along with a naming convention for the rotated files (java-log-%d{yyyy-MM-dd}-%i.log.gz). The
.gz extension indicates gzip (GNU zip), a commonly used compression format.
Line 10 sets the time-based rotation option. The unit of measure is the most specific duration in the file pattern
on line 5. In this case, that is day. By default, the interval is set to 1 since that is most common. You can set the
attribute interval explicitly on the TimeBasedTriggeringPolicy tag in the XML to specify a different
frequency.
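For example, a weekly rotation (assuming the daily date pattern from line 5 is kept) would set the interval attribute like this; the value 7 here is illustrative:

```
<TimeBasedTriggeringPolicy interval="7"/>
```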
The configuration allows at most three archive files as specified on line 13. After running a program that logs a
million statements using this configuration, there will be four files: the active one and three archives:
java-log-2024-02-17-1.log.gz
java-log-2024-02-17-2.log.gz
java-log-2024-02-17-3.log.gz
java-log.log
Let’s repeat the same configuration using a properties file. Notice how the information specified is the same; just
the format changes:
appenders = rolling
rootLogger.level = error
rootLogger.appenderRefs = rolling
rootLogger.appenderRef.rolling.ref = RollingFile
appender.rolling.type = RollingFile
appender.rolling.name = RollingFile
appender.rolling.fileName = java-log.log
appender.rolling.filePattern = java-log-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 1MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 3
Contrast that with the comparable JSON version:
{
"configuration": {
"status": "warn",
"appenders": {
"RollingFile": {
"name":"FILE",
"fileName":"java-log.log",
"filePattern":"java-log-%d{yyyy-MM-dd}-%i.log.gz",
"PatternLayout": {
"pattern":"%m%n"
},
"Policies": {
"TimeBasedTriggeringPolicy" : {},
"SizeBasedTriggeringPolicy": {
"size":"1 MB"
}
},
"DefaultRolloverStrategy": {
"max":"3"
}
}
},
"loggers": {
"root": {
"level": "error",
"AppenderRef": {
"ref": "FILE"
}
}
}
}
}
Finally, here’s the YAML version:
Configuration:
  status: warn
  appenders:
    RollingFile:
      name: FILE
      fileName: java-log.log
      filePattern: java-log-%d{yyyy-MM-dd}-%i.log.gz
      PatternLayout:
        pattern: "%m%n"
      Policies:
        TimeBasedTriggeringPolicy: {}
        SizeBasedTriggeringPolicy:
          size: 1 MB
      DefaultRolloverStrategy:
        max: 3
  Loggers:
    Root:
      level: error
      AppenderRef:
        ref: FILE
Logs can be read into other systems or stored for later manual reading. If you are ingesting the logs into another
system, choose the layout that is easiest for the consuming system. If you are reading them yourself, the CSV or
Pattern layout is best.
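For instance, if a log aggregator expects JSON, you could swap the PatternLayout in an appender for Log4j's JSON Template Layout. This is a sketch; note that the layout requires the additional log4j-layout-template-json dependency, which is not shown in the tables earlier in this chapter:

```
<Console name="STDOUT" target="SYSTEM_OUT">
    <JsonTemplateLayout/>
</Console>
```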
The Pattern layout has many special format characters. The following is a common example used in documentation:
%d %p %c{1.} [%t] %m%n
Here is what each part means:
➤➤ %d: Date using the default date format.
➤➤ %p: Logging level (for example, INFO).
➤➤ %c{1.}: How much of the fully qualified class name to show. The part in braces provides the desired
format. In this example, %c means the logger (class) name, and {1.} means the last component is fully
expanded while each part of the package name is shortened to its first letter, resulting in
c.w.r.l.l.LoggingWithPropertiesConfig.
➤➤ [%t]: The thread name in square brackets.
➤➤ %m: Log message.
➤➤ %n: A newline character.
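Putting the pieces together, a hypothetical INFO message logged from the main thread by LoggingWithPropertiesConfig would come out along these lines (the timestamp and message text here are illustrative, not from the book's example):

```
2024-02-18 12:32:17,719 INFO c.w.r.l.l.LoggingWithPropertiesConfig [main] application started
```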
Logging Lazily
Logging lazily for Log4j is similar to Java Util Logging. You use a lambda expression to compose the logging
message. The lambda will be executed only if the logging level requires a message to be logged:
public static void main(String[] args) {
LOGGER.error(() -> generateMessage());
}
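The benefit is that generateMessage() runs only when ERROR is actually enabled. The same deferral idea can be sketched with a plain Supplier and no logging framework at all; this standalone class is ours, purely to illustrate the mechanism Log4j uses internally:

```java
import java.util.function.Supplier;

public class LazyDemo {
    static int calls = 0;

    // Stands in for an expensive message-building method
    static String generateMessage() {
        calls++;
        return "expensive message";
    }

    // The supplier is invoked only when the level is enabled,
    // mirroring what LOGGER.error(() -> ...) does internally
    static void logIfEnabled(boolean levelEnabled, Supplier<String> message) {
        if (levelEnabled) {
            System.out.println(message.get());
        }
    }

    public static void main(String[] args) {
        calls = 0;
        logIfEnabled(false, LazyDemo::generateMessage); // never evaluated
        logIfEnabled(true, LazyDemo::generateMessage);  // evaluated once
        System.out.println(calls); // prints 1
    }
}
```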
USING SLF4J
SLF4J stands for Simple Logging Façade for Java. You’ve already seen that each logging framework requires
different API calls in your Java code. This makes it hard to switch logging libraries. SLF4J to the rescue! SLF4J is
intended to be used as a wrapper so you can switch more easily.
Go to Maven Central at https://mvnrepository.com/artifact/org.slf4j/slf4j-api and find the
latest version number for your build tool. Table 5.7 shows the dependencies, using the latest version numbers at
the time of this writing.
TOOL SYNTAX
Maven   <dependency>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-api</artifactId>
          <version>2.0.12</version>
        </dependency>
In the following sections, we will look at the basics of using SLF4J and how to configure it to use different
underlying logging frameworks.
Since SLF4J doesn’t know which logging framework to use, it falls back to a “no-operation” default that discards
all log messages and complains with the previous message. Clearly that is not your intent here!
TOOL SYNTAX
Maven   <dependency>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-simple</artifactId>
          <version>2.0.12</version>
        </dependency>
slf4j-simple is the SLF4J default logging implementation. Re-running with the slf4j-simple as a
dependency now gives the desired output:
[main] ERROR com.wiley.realworldjava.logging.SLF4J.Slf4jSimple - just to let you know
It is more common to use SLF4J with other logging frameworks rather than slf4j-simple, as you’ll see later
in the section. But first, let’s cover how to use SLF4J itself.
Formatting Values
Like Log4j, SLF4J supports the {} syntax for placeholders with logical default formatting. If you need the more
granular control of String.format(), you can call it directly. Both of the approaches are used here:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
Logging Lazily
Logging lazily gives you more control in SLF4J than Java Util Logging or Log4j. You still use a lambda that will
only be executed if the logging level requires a message to be logged. However, there are separate methods for
specifying the message and each argument to be logged.
public static void main(String[] args) {
LOGGER.atDebug()
.setMessage(() -> generateMessage())
.addArgument(() -> generateParameterToFillInMessage())
.log();
}
It is much more common to configure SLF4J to use an external logging framework with more advanced features.
The next section shows how to do this.
Each logging framework has its own SLF4J provider dependency. To find the Java Util Logging dependency, go
to Maven Central at https://mvnrepository.com/artifact/org.slf4j/slf4j-jdk14 and find the
latest version number for your build tool. Table 5.10 lists the provider dependencies for the common build tools,
along with the latest version numbers at the time of this writing. Note that jdk14 is in the name because that’s
when Java Util Logging was introduced. This is still the artifact name even for modern versions of Java.
TOOL SYNTAX
Maven   <dependency>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-jdk14</artifactId>
          <version>2.0.12</version>
        </dependency>
Next take a look at the structure of the Maven project, as shown in Figure 5.2. You can see only slf4j-api and
slf4j-jdk14 are set as dependencies.
The logging configuration file tells Java Util Logging to log WARNING and higher to the console:
.level= WARNING
handlers= java.util.logging.ConsoleHandler
Finally, the actual Java code to log is as follows:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
Now suppose you got a requirement to switch to Log4j since Java Util Logging didn’t meet the project’s needs.
First you need to change the dependency from slf4j-jdk14 to a Log4j 2 provider version, such as
log4j-slf4j2-impl (see Table 5.11). Remember, the table uses the latest version number at the time of
writing. To find the latest version number, go to https://mvnrepository.com/artifact/org.apache
.logging.log4j/log4j-slf4j2-impl. The group ID is different from the previous example as the Log4j
project maintains this integration, and the version number matches your version of Log4j rather than your
version of SLF4J.
TOOL SYNTAX
Maven   <dependency>
          <groupId>org.apache.logging.log4j</groupId>
          <artifactId>log4j-slf4j2-impl</artifactId>
          <version>2.22.1</version>
        </dependency>
After updating pom.xml, the next step is to create a log configuration for Log4j:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
<Appenders>
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="%d [%p] %m%n"/>
</Console>
</Appenders>
<Loggers>
<Logger name="com.wiley.realworldjava"
level="warn" additivity="false">
<AppenderRef ref="Console"/>
</Logger>
<Root level="error">
<AppenderRef ref="Console"/>
</Root>
</Loggers>
</Configuration>
At this point, the project looks like Figure 5.3.
Notice how the configuration now has warn as used by Log4j instead of WARNING from Java Util Logging. The
key is that all the changes are in the dependencies and config files rather than the Java code. Running the Java
code without any changes now gives this:
2024-02-18 12:32:17,719 [ERROR] uh oh
2024-02-18 12:32:17,721 [WARN] be careful
You can see the format has changed and the log levels are now the ones Log4j uses.
As cleanup, you can remove the old property file and system property reference from the Java file. Or you
can leave it there if you want to be able to quickly switch back and forth. Switching is easy; just change the
dependencies to the one you want to use!
USING LOGBACK
The final logging framework in this chapter is Logback. Both Log4j and Logback are popular choices in today’s
enterprises. Logback was designed as an improvement on Log4j, offering, for example, better performance. Logback actually
uses SLF4J natively rather than providing a separate API. In fact, the founder of Logback, Ceki Gülcü, is also the
founder of SLF4J! That means everything you learned in the previous section will apply.
To get started, you need slf4j-api and two Logback-specific dependencies. Go to Maven Central at
https://mvnrepository.com/artifact/ch.qos.logback/logback-classic and https://mvnrepository
.com/artifact/ch.qos.logback/logback-core. Find the latest version number for your build tool and
substitute it into what is shown in Table 5.12.
TOOL SYNTAX
Maven   <dependency>
          <groupId>ch.qos.logback</groupId>
          <artifactId>logback-classic</artifactId>
          <version>1.5.0</version>
        </dependency>
Note that logback-core is the main logging code. logback-classic adds additional features including
being a SLF4J provider. You should always include both dependencies when using Logback. Since Logback
uses the SLF4J API, you can refer to the SLF4J section for things that are exactly the same, such as
formatting values.
A simple program that uses Logback looks just like the earlier SLF4J sample:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
Logback looks for the configuration files listed previously in order and stops at the first one it finds. Therefore, if
logback-test.xml is in your src/test/resources folder, it will be loaded first when executing unit
tests, whereas logback.xml will be loaded for your standard program execution. Finally, if no configuration is
found, Logback uses the default, which is DEBUG-level logging with all output written to the console.
Note that unlike Log4j, Logback only supports the XML configuration format. This makes finding examples
online easier and reflects that XML is the most common configuration format for Log4j anyway.
In this example, the logback.xml file is used to log our package at INFO level and everything else at ERROR
level with all the output logged to the console.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
  <import class="ch.qos.logback.classic.encoder.PatternLayoutEncoder" />
  <import class="ch.qos.logback.core.ConsoleAppender" />
  <appender name="STDOUT" class="ConsoleAppender">
    <encoder class="PatternLayoutEncoder">
      <pattern>%msg%n</pattern>
    </encoder>
  </appender>
  <logger name="com.wiley.realworldjava" level="info" additivity="false">
    <appender-ref ref="STDOUT" />
  </logger>
  <root level="error">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
Like Log4j, there is a configuration to write to the console, a root logger, and a custom logger.
<configuration>
<appender name="ROLLING"
class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>java-log.log</file>
<rollingPolicy
class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>java-log-%d{yyyy-MM-dd}.%i.gz</fileNamePattern>
<maxFileSize>10KB</maxFileSize>
<maxHistory>3</maxHistory>
<totalSizeCap>2GB</totalSizeCap>
</rollingPolicy>
<encoder>
<pattern>%msg%n</pattern>
</encoder>
</appender>
<root level="DEBUG">
<appender-ref ref="ROLLING" />
</root>
</configuration>
Like Log4j, the configuration specifies both the filename for the file actively being logged to and a naming
convention for the rotated files. Since the fileNamePattern ends in .gz, the rotated files are automatically compressed.
We set the maximum file size here to 10 KB, which is lower than the 1 MB we set for Log4j. The reason is that
Logback only checks to see if the file size limit has been exceeded every 60 seconds, whereas Log4j checks the file size
more often. Therefore, we set the size lower in this example so that you get to see the files rotate without waiting a
long time.
After running a program that logs 100 large statements using this configuration, there are two files: the active one
and one archive:
java-log-2024-02-18.0.gz
java-log.log
Both files are larger than the 10 KB configured due to the heavy logging and short runtime of the test.
There are many other options in the documentation referenced at the end of the chapter. For example, you can
log the number of milliseconds the application has been running.
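For instance, Logback's %relative (or %r) conversion word prints the number of milliseconds since the application started; a pattern using it might look like the following sketch:

```
<encoder>
    <pattern>%relative %level %msg%n</pattern>
</encoder>
```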
FURTHER REFERENCES
Each logging framework comes with useful references.
➤➤ Java Util Logging
➤➤ https://docs.oracle.com/en/java/javase/21/core/java-logging-overview
.html: Documentation for Java Util Logging
➤➤ https://docs.oracle.com/en/java/javase/21/docs/api/java.logging/java/
util/logging/LogManager.html: For customizing log configuration in Java Util Logging
➤➤ https://docs.oracle.com/en/java/javase/21/docs/api/java.logging/java/
util/logging/FileHandler.html: For configuring file logging
➤➤ https://docs.oracle.com/en/java/javase/21/docs/api/java.logging/java/
util/logging/SimpleFormatter.html: For customizing log message format
➤➤ https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/util/
Formatter.html: Formatter syntax
➤➤ Log4j
➤➤ https://logging.apache.org/log4j/2.x/javadoc/log4j-api/index.html:
Documentation for Log4j
➤➤ https://logging.apache.org/log4j/2.x/manual/appenders.html: Full documentation
of appenders
➤➤ https://logging.apache.org/log4j/2.x/manual/layouts.html: Full documentation
for layouts
➤➤ SLF4J
➤➤ https://www.SLF4J.org/manual.html: Documentation for SLF4J
➤➤ https://www.SLF4J.org/api/org/slf4j/simple/SimpleLogger.html: System
properties if you want to configure slf4j-simple
➤➤ Logback
➤➤ https://logback.qos.ch/manual: Documentation for Logback
➤➤ https://logback.qos.ch/manual/appenders.html: Full documentation for appenders
➤➤ https://logback.qos.ch/manual/layouts.html: Full documentation for layouts
SUMMARY
➤➤ Java Util Logging is built into Java.
➤➤ Log4j and Logback are logging frameworks that can be used directly or behind SLF4J.
➤➤ SLF4J is a façade/wrapper for other logging frameworks like Java Util Logging, Log4j, and Logback.
➤➤ Logging frameworks provide file configuration options to specify the level of logging, format,
destination, and more.
6
Getting to Know the Spring
Framework
WHAT’S IN THIS CHAPTER?
➤➤ Configuring Spring
➤➤ Customizing Spring Applications with Properties
➤➤ Turbocharging Development with Spring Boot
➤➤ Working with Spring MVC
➤➤ Handling Errors in Spring
➤➤ Inspecting Your Application with Actuator
➤➤ Securing Your Application with Spring Security
➤➤ Exploring the Spring Projects
Spring, how do I love thee? Let me count the ways. Paraphrasing Elizabeth Barrett Browning’s famous
sonnet perfectly captures how we adore the Spring Framework. Born in 2003, the Spring Framework has
become as important to the Java ecosystem as Java itself.
Spring is huge, and full coverage would require many voluminous tomes. In this chapter we cover the
features you’ll need to get up and running in everyday enterprise life.
Spring provides annotations where you configure beans, components, services, and repositories—all fancy
names for “Spring managed instances.”
Two design patterns are fundamental to Spring: inversion of control (IoC) and the related
dependency injection (DI).
IoC is a design pattern that inverts the normal pattern of control of objects. Traditionally an
application might create objects in the program and then assign them for use. In IoC, however,
the creation and assignment of new objects is managed by the framework rather than your
program code. This pattern decouples object creation from object use, which simplifies program
maintenance. The value of this is that class implementations can easily be snapped out
and snapped in, interchanged in different environments, or mocked for testing.
DI is a specific form of IoC employed by Spring, where the dependencies are defined in the
configuration, injected by the framework, and largely managed via annotations.
Spring has introduced some important improvements during its life, and all these paradigms are still heavily
used, so let’s study the evolution. Spring is built around the concept of a Spring bean. Beans are simply Java class
instances that are managed by the Spring framework.
In the earlier days of Spring, you had to configure beans in an XML file that told Spring which classes and which
properties to manage. Then Spring introduced Configuration classes, regular Java classes that are annotated
with @Configuration or other specific annotations and that tell the Spring framework that these classes
configure the Spring bean instances used in the application.
There is also a more succinct way of defining beans simply by annotating your configuration class with
@ComponentScan, as we will see.
Later Spring introduced the game-changing Spring Boot framework. Where most frameworks come with some
learning curve, Spring Boot actually has a negative learning curve! Spring Boot provides “opinionated” starter
dependencies, each of which contains all of the required dependencies needed for a wealth of use cases, including
web applications, relational and NoSQL databases, security, logging, and so much more. You just add the starter
dependency to your Maven POM or Gradle build file, and Spring Boot loads up all the configuration and
libraries required to support those use cases. Once you build your Spring Boot application, it automatically generates a
self-executing JAR that you can double-click to launch your application or run from the command line:
java -jar <jar-name>.jar
Enterprises are migrating to Spring Boot in droves, but before we jump in, let’s start with a good old-fashioned
Spring example, where you can learn some basic Spring concepts including component scanning and DI. Then we
will jump to Spring Boot and see how that simplifies things greatly.
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. You can also find the code at https://github.com/realworldjava/
Ch06-Spring. See the README.md file in that repository for details.
CONFIGURING SPRING
In this section, you’ll get a taste of Spring by building a basic project. First you’ll see a sample application, and
then you’ll see how to configure it using both the XML and Java annotation approaches. Then you’ll build on the
Java approach using component scanning.
TIP The Spring Boot framework greatly simplifies the development and deployment of
Spring-based applications, eliminating the need for configuring the explicit scaffolding.
Our application is a mortgage calculator, where you can supply values for
the principal, annual interest rate, and duration in years, and get back a
monthly payment.
Our calculator uses the mortgage formula shown in Figure 6.1.

FIGURE 6.1: Mortgage formula: payment = principal × rate × (1 + rate)^N / ((1 + rate)^N − 1)

Payment is the monthly payment, principal is the amount of the mortgage,
rate is the noncompounded monthly interest, and N is the number of months.
While the formula uses a monthly rate, traditionally the rate is expressed as a percentage of the annual rate, such
as 4.5%, so our implementation class will divide the annual rate by 12 to get the monthly rate and then divide
by 100 to convert the percentage to a decimal. Similarly, we multiply the number of years by 12 to get the total
number of payments in months.
Best coding practices tell us to code to interfaces, so let’s set up the MortgageCalculatorService interface
defining our basic operation.
3: public interface MortgageCalculatorService {
4: double payment(double principal,
5: double annualInterestRate, int years);
6: }
Let’s organize these into a “service” package to indicate that these classes perform the main application services.
So, the corresponding implementation looks like this:
1: package com.wiley.realworldjava.service;
2:
3: public class MortgageCalculatorServiceImpl
4: implements MortgageCalculatorService {
5: @Override
6: public double payment(double principal,
7: double annualInterestRate, int termInYears) {
8: double monthlyInterestRate = annualInterestRate / 12 / 100;
9: int numberOfPayments = termInYears * 12;
10:
11: // mortgage formula: P * R * [(1+R)^N] / [(1+R)^N -1]
12: return principal * (monthlyInterestRate
13: * Math.pow(1 + monthlyInterestRate, numberOfPayments))
14: / (Math.pow(1 + monthlyInterestRate, numberOfPayments) -1);
15: }
16: }
Let’s call our configuration file applicationContext.xml, the traditional name used by Spring. By
convention, Spring expects this in the src/main/resources directory:
1: <?xml version="1.0" encoding="UTF-8"?>
2: <beans xmlns="http://www.springframework.org/schema/beans"
3: xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
4: xsi:schemaLocation="
5: http://www.springframework.org/schema/beans
6: http://www.springframework.org/schema/beans/spring-beans.xsd">
7:
8: <!-- Define the MortgageCalculatorService bean -->
9: <bean id="mortgageCalculatorService"
10: class="com.wiley.realworldjava.service.MortgageCalculatorServiceImpl"/>
11:
12: <!-- Define the Main class bean -->
13: <bean id="app" class="com.wiley.realworldjava.App">
14: <constructor-arg ref="mortgageCalculatorService"/>
15: </bean>
16: </beans>
This XML references two beans (remember, bean is Spring-speak for Spring-managed Java classes). Line 9 refers
to the MortgageCalculatorServiceImpl declared in the previous section. Line 13 references a class named
App, which we will introduce shortly. The id attributes of both are like variable names you can reference later.
For example, line 14 uses ref to refer to one such id.
Spring exposes an ApplicationContext class instance, which contains references to all your beans, as well as other
important Spring information, such as the active profile, which we will cover in short order.
To initialize the ApplicationContext instance and tell it to use the xml file, you call this:
ApplicationContext context
= new ClassPathXmlApplicationContext("applicationContext.xml");
We define an App class to initialize the context and kick off the process.
7: public class App {
8:
9: private final MortgageCalculatorService calculatorService;
10:
11: public App(MortgageCalculatorService calculatorService) {
12: this.calculatorService = calculatorService;
13: }
14:
15: public static void main(String[] args) {
16: ApplicationContext context
17: = new ClassPathXmlApplicationContext("applicationContext.xml");
18: // access the bean from its id specified in the applicationContext.xml
19: App app = (App) context.getBean("app");
20: // alternatively, you can access it via its class type
21: // App app = context.getBean(App.class);
22:
23: double principal = 250_000;
24: double annualInterestRate = 6.5;
25: int termInYears = 30;
26: double payment = app.calculatorService.payment(principal,
27: annualInterestRate, termInYears);
28:
29: // display result to 2 decimal places and commas
30: // for thousands separators
31: System.out.printf("Monthly Payment: $%,.2f%n", payment);
32: }
33: }
Lines 16 and 17 create an ApplicationContext, which you can use to get your beans. You see in line 17 that it
references the XML configuration file. Lines 19 and 20 show that you can look up a bean by name or by type.
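If you want to sanity-check the math before wiring up Spring, the same formula can be run standalone. This helper class is ours, not part of the book's project; with the example inputs it prints a payment of about $1,580.17:

```java
public class MortgageCheck {
    // Same formula as MortgageCalculatorServiceImpl:
    // P * R * (1+R)^N / ((1+R)^N - 1)
    static double payment(double principal, double annualInterestRate, int termInYears) {
        double monthlyRate = annualInterestRate / 12 / 100; // 6.5% -> 0.0054166...
        int months = termInYears * 12;                      // 30 years -> 360 payments
        double factor = Math.pow(1 + monthlyRate, months);
        return principal * monthlyRate * factor / (factor - 1);
    }

    public static void main(String[] args) {
        // The same inputs used in the App example
        double monthly = payment(250_000, 6.5, 30);
        System.out.printf("Monthly Payment: $%,.2f%n", monthly); // about $1,580.17
    }
}
```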
Now let’s try a thought experiment: suppose you request the same bean name (say myBean) in various places in
your application. Will you get the same instance, or will Spring construct a new instance for each access?
The answer to that is it depends on the scope.
By default, beans are created with singleton scope. That means every time you request an
instance of bean myBean, Spring will return the same instance. However, you can change the
scope to modify that behavior.
The most common scopes are:
➤➤ singleton: Returns the same instance for every access of a given bean name.
➤➤ prototype: Returns a new instance for every access of a given bean name.
➤➤ request: In a web application, Spring returns the same instance of the bean for every access
of the given bean name during the processing of a single request. Surprisingly this is true
even if you have multiple threads working on the same request; if they access the same
bean name, they will get the same instance, so long as they are processing the same client
request. But for each new client request, a new instance is created.
➤➤ session: In a web application, Spring will return the same instance for a given bean, even
across multiple requests, as long as it is processing the same client HTTP session. A session
is a series of associated requests and responses, which remains open until a timeout is
reached or until the user logs out.
To specify a scope, modify the bean definition in applicationContext.xml by adding
the scope attribute. For example, if you wanted to change the
mortgageCalculatorService bean in our example to be prototype scope, you would
add the scope attribute to the XML as follows:
<bean id="mortgageCalculatorService"
      class="com.wiley.realworldjava.service.MortgageCalculatorServiceImpl"
      scope="prototype"/>
In a configuration class, you can define one or more methods, each returning a bean instance. The name of the bean is just the method
name, and the bean type is the return type of the method. You annotate those methods with @Bean, as follows:
9: @Configuration
10: public class AppConfig {
11: @Bean
12: MortgageCalculatorService mortgageCalculatorService(){
13: return new MortgageCalculatorServiceImpl();
14: }
15: @Bean
16: App app(MortgageCalculatorService mortgageCalculatorService){
17: return new App(mortgageCalculatorService);
18: }
19: }
This code declares a configuration class named AppConfig, which declares a bean (in line 12) named
mortgageCalculatorService, which returns an instance of the MortgageCalculatorService interface,
with implementation class MortgageCalculatorServiceImpl.
TIP The configuration class name can be any legal class name, but it is good form to include
Config in the name or keep it in a package named config. We do both, calling it
com.wiley.realworldjava.config.AppConfig.
The application grabs the MortgageCalculatorService bean from the ApplicationContext. We can see
an example of that in the App class:
8: @Component
9: public class App {
10: private final MortgageCalculatorService calculatorService;
11:
12: public App(MortgageCalculatorService calculatorService) {
13: this.calculatorService = calculatorService;
14: }
15:
16: public static void main(String[] args) {
17: ApplicationContext context
18: = new AnnotationConfigApplicationContext(AppConfig.class);
19: App app = context.getBean(App.class);
20:
21: double principal = 250_000;
22: double annualInterestRate = 6.5;
23: int termInYears = 30;
24: double pmnt = app.calculatorService.payment(principal,
25: annualInterestRate, termInYears);
26:
27: // display result to 2 decimal places and commas
28: // for thousands separators
29: System.out.printf("Monthly Payment: $%,.2f%n", pmnt);
30: }
31: }
Notice in line 19 how we grab an instance of App from the ApplicationContext. Even though the construc-
tor of App requires an instance of MortgageCalculatorService, Spring supplies that instance for us by
injecting the MortgageCalculatorService bean that was defined in the configuration class. That’s
dependency injection in action. Alternatively, we could remove the constructor parameter altogether and annotate the
calculatorService with @Autowired. Using that approach, Spring would inject the bean automatically.
The approaches are essentially equivalent, except that constructor injection allows you to make the instance final; a field injected directly with @Autowired cannot be final, because Spring assigns it after the object is constructed. Here is the equivalent dependency injection using @Autowired:
@Autowired private MortgageCalculatorService calculatorService;
If you choose to use @Autowired, then remember to remove the constructor parameter from the App bean
constructor. If you use constructor injection, you don’t need to annotate the constructor with @Autowired, but
you may do so for emphasis if you like.
While the @Autowired annotation is still widely used for injection, the Good Housekeeping seal of approval is
going to constructor injection rather than @Autowired for new code.
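The mechanics are easy to see without Spring: the container simply constructs the dependency and hands it to your constructor, which is why a constructor-injected field can be final. Here is a hand-rolled toy sketch of that wiring (the Greeter and GreetingApp names are ours, not the book's):

```java
// Hand-rolled equivalent of constructor injection: we play the role of the
// container by building the dependency and passing it to the constructor.
interface Greeter {
    String greet();
}

class GreetingApp {
    private final Greeter greeter; // final is possible with constructor injection

    GreetingApp(Greeter greeter) {
        this.greeter = greeter;
    }

    String run() {
        return greeter.greet();
    }
}
```

Spring automates exactly this step: it sees the constructor parameter, finds a matching bean, and passes it in for you.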
SPECIFYING SCOPE
To specify a scope using configuration classes, add an @Scope annotation above the
@Component declaration, containing the scope name. For example, to change the App class to
prototype scope, add an @Scope annotation containing the scope name (case insensitive)
as follows:
10: @Scope(scopeName = "prototype")
11: @Component
12: public class App {
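Conceptually, the difference between the two scopes is just caching: a singleton bean is created once and reused on every lookup, while a prototype bean is created fresh on each getBean call. A toy illustration of the semantics (our sketch, not Spring's actual implementation):

```java
import java.util.function.Supplier;

// Toy container illustrating singleton vs. prototype scope semantics.
class TinyContext {
    private final Supplier<Object> factory;
    private final boolean prototype;
    private Object singletonCache;

    TinyContext(Supplier<Object> factory, boolean prototype) {
        this.factory = factory;
        this.prototype = prototype;
    }

    Object getBean() {
        if (prototype) {
            return factory.get();           // fresh instance on every call
        }
        if (singletonCache == null) {
            singletonCache = factory.get(); // created once, then cached
        }
        return singletonCache;
    }
}
```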
The finished code for this example is in Git branch component-scan. Our application is a simple service, so
we’ll annotate our MortgageCalculatorServiceImpl class with @Service to indicate that our calculator
is performing a service. We also annotated our App with @Component to tell Spring we want to access this class
as a bean.
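The payment calculation itself is not shown in this excerpt. A plain-Java sketch of what MortgageCalculatorServiceImpl presumably computes, using the standard fixed-rate amortization formula (an assumption on our part, though it reproduces the payment figures shown later in this chapter):

```java
// Standard fixed-rate amortization formula; an assumed implementation,
// not the book's actual source code.
class MortgageMath {

    // principal in dollars, annualInterestRate in percent, term in years
    public static double payment(double principal,
                                 double annualInterestRate,
                                 int termInYears) {
        double monthlyRate = annualInterestRate / 100 / 12;
        int months = termInYears * 12;
        if (monthlyRate == 0) {
            return principal / months; // zero-interest edge case
        }
        return principal * monthlyRate
                / (1 - Math.pow(1 + monthlyRate, -months));
    }
}
```

With the inputs used earlier, payment(250_000, 6.5, 30) comes out to roughly $1,580.17 per month.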
Now we can remove the bean declarations from our AppConfig class and just add the packages to scan:
6: @Configuration
7: @ComponentScan(basePackages = "com.wiley.realworldjava")
8: public class AppConfig {
9: }
To use this approach, replace the ApplicationContext initializer in lines 17 and 18 of our App class with the
following, passing in the config class name:
ApplicationContext context =
new AnnotationConfigApplicationContext(AppConfig.class);
We can even remove the Configuration class entirely and simply pass the package name to scan into our
ApplicationContext initializer:
ApplicationContext context
= new AnnotationConfigApplicationContext("com.wiley.realworldjava");
TIP You can pass in a list of packages to scan by supplying multiple String parameters
into the varargs AnnotationConfigApplicationContext constructor.
Injecting Properties
To inject property values into your variables, create a file called application.properties in your src/
main/resources folder, the default location where Spring expects your resource files to be defined. The file can
contain your application-specific properties, expressed as key-value pairs. You can also use YAML files, which are an equivalent approach, but in this book we use properties files. For more information on YAML, see the appendix of this book.
For the mortgage calculator, let’s add the following properties:
default.interestRate=3.5
preferred.mortgage.holder=ABC Mortgage, Inc.
country=United States
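Under the hood these are ordinary Java properties files; the same key-value syntax parses with the JDK's java.util.Properties class, which is a quick way to sanity-check a file outside of Spring:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Parses properties-file content with plain java.util.Properties,
// the same key=value syntax Spring reads.
class PropsDemo {
    public static Properties parse(String content) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(content));
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen with a StringReader
        }
        return props;
    }
}
```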
In the @Configuration class (or in any Spring managed class), add the @PropertySource annotation above
the class name, and pass in the name of the properties file. For example:
@Configuration
@PropertySource("classpath:application.properties")
public class AppConfig {
Now, all the properties specified in application.properties will be available to your application.
There are a few ways to inject those values into your Java application. One way is to annotate variables in your
Java code with @Value and pass in a placeholder (in the form of "${property.name}") with the property
name. Spring will automatically inject the property with that name specified in your application
.properties into that variable.
For example, to inject the default.interestRate from the previous application.properties into a
variable of type double named defaultInterestRate, you can declare the variable as follows in any Spring
managed class:
@Value("${default.interestRate}")
private double defaultInterestRate;
The variable can be any Java type, such as String, int, Integer, double, and so forth. Spring will do the
String to type conversion for you, provided the value is legal for that data type. (In other words, don’t try to
assign an alphabetic value to a variable of type double!)
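Spring's conversion behaves much like the JDK's standard parsing methods. A tiny sketch of the double case, including the failure mode for non-numeric input:

```java
// Mimics the conversion Spring performs for a double-typed @Value field:
// a numeric string parses; an alphabetic one throws NumberFormatException.
class ValueConversion {
    public static double toDouble(String raw) {
        return Double.parseDouble(raw.trim());
    }
}
```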
TIP Take note of the syntax for specifying the placeholder—you must include the property
name inside ${ } and all that inside quotes.
Alternatively, you can include that placeholder as a constructor argument. For example, let’s include the default
interest rate placeholders as follows:
public MortgageCalculatorServiceImpl(
@Value("${default.interestRate}")
double defaultInterestRate) {
this.defaultInterestRate = defaultInterestRate;
}
Alternatively, you can edit the launch configuration to set the profile name as in Figure 6.3.
Or you can simply pass in the profile name from the command line, as follows:
-Dspring.profiles.active=dev
For our current purposes, we’ll specify the active profile to be dev, causing Spring to use the properties from application-dev.properties, in addition to the default properties from application.properties, which are always included regardless of profile.
Run that code and you’ll see this output:
Profiles:[dev]
Preferred mortgage holder:ABC Mortgage, Inc.
Company home:United States
Loading Dev datasource
Monthly Payment: $1,122.61
Now change the active profile to prod, and run it again.
Profiles:[prod]
Preferred mortgage holder:ABC Mortgage, Inc.
Company home:UK
Loading Prod datasource
Monthly Payment: $1,531.17
The difference is that in dev, we did not specify a default interest rate in application-dev.properties, so
the program picked up the default from application.properties.
default.interestRate=3.5
However, for the prod profile, the program saw the overridden property in application-prod.properties
and so used that value for the calculation.
default.interestRate=6.2
Notice how when there is a conflict, the environment-specific version wins.
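You can picture this rule as layering maps: start from the defaults in application.properties and overlay the profile-specific file, so the profile value wins any conflict. A small sketch of that merge (our model, not Spring's actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Models how a profile-specific properties file overrides the defaults.
class PropertyLayers {
    public static Map<String, String> effective(Map<String, String> defaults,
                                                Map<String, String> profile) {
        Map<String, String> merged = new HashMap<>(defaults);
        merged.putAll(profile); // the profile layer wins on key conflicts
        return merged;
    }
}
```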
Next, on the right side of the screen, click the Add dependencies button and select all dependencies you want
included in your project. In our case, we will just use Spring Web for our web application. Finally, click the
Generate button, which will produce a zip file (mtgcalc.zip) that you can unzip and open in your IDE.
Click Next to select all your required dependencies. In our case, choose Spring Web, as in Figure 6.6.
Finally, click Create to generate the module. You can see the new module in your IDE. Open the pom.xml file,
and you can see that Spring Boot generated a reference to the Spring Boot parent.
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.3.3</version>
<relativePath/> <!-- lookup parent from repository -->
The spring-boot-starter-parent is the parent of all Spring Boot projects, and it provides all of the default
versions for the dependencies and plugins, under the tags dependencyManagement and pluginManagement,
with references for every supported project. Maven looks at those Management references and only brings in
the ones that are referenced as explicit dependencies in your POM. When you specify a dependency in the
dependencies or plugins section, Spring pulls the versions from the parent POM management section, so
you don’t need to specify the version. That way, when you upgrade the parent POM to a newer version, your project dependencies will also be automatically upgraded. If you specify an explicit version in your own POM, that will override the one in the parent.
The POM also contains references to spring-boot-starter-web (since you selected Spring Web) and
spring-boot-starter-test, which is always included.
You are going to want to add logging to your application, so add the following dependencies to your POM (no
versions required; as usual Spring Boot provides the defaults):
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
</dependency>
Open the file application.properties in your src/main/resources directory. Initializr was nice
enough to insert a spring.application.name=mtgcalc entry for you, which you will see in your log messages and other applications. The application.properties file understands many default properties, for
configuring every imaginable setting for your application. You can see the full list of recognized properties by
pressing Ctrl+Space anywhere in that file.
Also, Initializr was generous enough to create a Spring Boot application class (named MtgCalcApplication)
in the package root. A Spring Boot application class is required for all Spring Boot applications. It is a class in the
root package that is annotated with @SpringBootApplication and is the entry point for the application. It is
important to keep this at the package root, because Spring Boot automatically scans this package and everything
under it by default, looking for Spring annotated classes.
For the fun of it, let’s put our knowledge of logging, which you learned about in Chapter 5, “Capturing Application State with Logging Frameworks,” to use and add a logger to print the words “Hello, Mortgage Calculator” in the main method of MtgCalcApplication so that you can see that message when you run the application.
@SpringBootApplication
public class MtgCalcApplication {
private final static Logger logger =
LoggerFactory.getLogger(MtgCalcApplication.class);
public static void main(String[] args) {
SpringApplication.run(MtgCalcApplication.class, args);
logger.info("Hello, Mortgage Calculator");
}
}
Before we do any more work on our web application, let’s launch the generated application to verify that
everything is configured correctly.
There are several ways to run the application. The IDE should have created a run configuration for you, so you
could just launch that by clicking the run button.
However, let’s do it from the command line. Open a command prompt and cd to the project root; then build the
code using the following (let’s skip tests for now until we have implemented them):
mvn clean verify -DskipTests
That will generate a JAR file named mtgcalc-0.0.1-SNAPSHOT.jar in your target directory. Before
running it, make sure the environment variable JAVA_HOME is set.
We saw earlier that Spring lets you specify an active profile, which you can use for creating environment-specific
properties. For example, to specify that this is dev, pass in the JVM argument ‑Dspring.profiles
.active=dev. This tells Spring to look for properties in application-dev.properties, in addition to the
default application.properties. If there are overlapping properties, dev will win.
TIP Spring Boot lets you have many active profiles at the same time. You can specify
additional profiles by listing their comma-separated names as in -Dspring.profiles
.active=dev,cloud. If there is a conflict between property names, cloud would win in
this case, since it appears later in the list.
@GetMapping("/hello")
public String hello(){
return LocalDateTime.now() + ": Hello, World ";
}
}
Rebuild and launch, and from a browser hit the URL:
http://localhost:8081/hello
You should see the date, time, and a happy “Hello, World” in the browser! Now we are in a good position to
implement our actual mortgage calculations. Let’s autowire the MortgageCalculator into our controller class
using constructor injection. When Spring sees a bean class like MortgageCalculatorService used as a
constructor argument, it automatically creates an instance of that bean and injects it into the constructor.
@RestController
public class MortgageController {
private final MortgageCalculatorService mortgageCalculator;
public MortgageController(MortgageCalculatorService mortgageCalculator) {
this.mortgageCalculator = mortgageCalculator;
}
To tell Spring MortgageCalculator is a bean, annotate your implementation with one of the annotations
@Component, @Service, and so forth, as we discussed in the earlier section “Using Component Scanning.”
Since this is a service class, we will use @Service.
@Service
public class MortgageCalculatorServiceImpl
implements MortgageCalculatorService {
For our calculator to be useful, we need to provide users with a way to pass in the required parameters, in this case the principal, years, and interest. We can do that by annotating the method arguments with @RequestParam in the MortgageController:
@GetMapping("payment")
public String calculateMonthlyPayment(
@RequestParam double principal,
@RequestParam int years,
@RequestParam double interest) {
USING @REQUESTPARAM
You can override the query string parameter names by supplying the desired name in the
@RequestParam annotation. For example, changing the principal argument to
@RequestParam("amount") double principal will alter the endpoint so that the call
will become as follows:
http://localhost:8081/payment?amount=100000&years=30&interest=6.5
Note that the calls we have so far implemented are HTTP GET requests, which is the default HTTP method when
you call a URL from a browser. If you want to create endpoints for other HTTP methods such as POST or PUT,
use the corresponding annotations @PostMapping or @PutMapping. There are annotations corresponding to
all the possible HTTP methods including GET, POST, PUT, PATCH, DELETE, and so forth. These are all used in
designing good resource-oriented architecture (ROA) RESTful web service applications. According to ROA prin-
ciples, records are called resources. A GET request will query for a resource or list of resources. POST requests
create a resource. PUT is used to replace an entire resource, PATCH updates fields of a resource. DELETE deletes
the resource (or marks it for deletion). To learn more about RESTful web services, refer to the “Further
References” section.
If your application exposes a complicated set of endpoints (which is typical in enterprise services), it is helpful to
organize them into separate controllers, each with a common set of functionality. You can then assign a
“context path,” a prefix to all the endpoints in a given controller, by annotating the controller class itself with
@RequestMapping("/some-prefix"), in which case every endpoint in that class would begin with
/some-prefix. For example, in our case the class becomes the following:
@RestController
@RequestMapping("/mtg")
public class MortgageController {
Now all endpoints in this controller will need a "/mtg/" prefix. To use this new endpoint, restart the application
and adjust your call to read as follows:
http://localhost:8081/mtg/payment?principal=100000&years=30&interest=6.5
This code is sufficient for serving browser pages, but for true resource-oriented RESTful web service APIs, you
will often want to provide more information about the response, such as detailed response codes and response
header messages.
A simple modification to your method signature lets you wrap your response in a generic ResponseEntity,
a special Spring class that you can use to embed response metadata. For example, let’s embed a header named
"Request time" in our response, which will contain the time of the request.
@GetMapping("/payment")
public ResponseEntity<String> calculateMonthlyPayment(
@RequestParam double principal,
@RequestParam int years, @RequestParam double interest) {
Principal:100,000.00<br>Interest: 6.70<br>Years: 30
<br>Monthly Payment:645.28
That’s how to do a GET request. Now orthodox resource-oriented architecture mandates using GET requests for read-only data access requests and POST requests to update data. PUT requests are for operations that change data but are idempotent. Idempotent requests are calls that can be repeated without altering the underlying data beyond the effect of the initial request.
For read requests we’d like to use a GET; however, GET requests generally do not accept a body, and there are
limitations to how much data you want to have in a URL query string. So, for querying a complex payload, such
as an arbitrarily large list of data, enterprises will often use a POST request even for read-only queries.
A POST request supports a complex body, such as a JSON object. Let’s learn some Spring magic and see how
Spring automatically creates Java objects from JSON and vice versa.
We are going to create a POST request endpoint that will accept a JSON list composed of any number of mortgage calculations to be performed. We will then supply a JSON object to the endpoint as a request body.
First let’s create a new package called com.wiley.realworld.java.mtgcalc.domain and add to it three
new classes: User, Mortgage, and Response:
public class User {
private String name;
private String location;
// getters and setters go here
}
{
"user": {
"name": "Mary Michaels",
"location": "New York, NY"
},
"principal": 100000,
"years": 30,
"interest": 6.25
}
]
You can find a copy of this JSON payload in the request.json file in the chapter Git repository, in the
mtgcalc/src/main/resources directory of the spring-boot branch. Download that, then rebuild and
launch your application, and call:
curl -X POST -H "Content-Type: application/json" \
  -d @request.json http://localhost:8081/mtg/payment
The result should be similar to the request body but populated with the resulting calculations, along with the date
and time of the request (which is very helpful considering how mortgage rates change by the minute these days!).
{
"mortgages": [
{
"principal": 250000.0,
"years": 30,
"interest": 6.5,
"user": {
"name": "John Jones",
"location": "Miami, Florida"
},
"payment": 1580.170058732413
},
{
"principal": 100000.0,
"years": 30,
"interest": 6.25,
"user": {
"name": "Mary Michaels",
"location": "New York, NY"
},
"payment": 615.7172004263946
}
],
"now": "2024-09-03T17:42:07.017027"
}
For example, let’s create two classes in the com.realworldjava.mtgcalc.exception package. First, we
create a MortgageException:
public class MortgageException extends RuntimeException {
public MortgageException(String message) {
super(message);
}
}
As an example, a GET call to http://localhost:8081/actuator returns a response that starts with this:
{
"_links": {
"self": {
"href": "http://localhost:8081/actuator",
"templated": false
},
"beans": {
"href": "http://localhost:8081/actuator/beans",
"templated": false
},
"caches-cache": {
"href": "http://localhost:8081/actuator/caches/{cache}",
"templated": true
},
"caches": {
"href": "http://localhost:8081/actuator/caches",
"templated": false
},
"health": {
"href": "http://localhost:8081/actuator/health",
"templated": false
},
"health-path": {
"href": "http://localhost:8081/actuator/health/{*path}",
"templated": true
},
"metrics-requiredMetricName": {
"href": "http://localhost:8081/actuator/metrics/{name}",
"templated": true
},
"metrics": {
"href": "http://localhost:8081/actuator/metrics",
"templated": false
},
Adding Actuator to an application is actually quite simple. First, add the actuator starter dependency.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
The endpoint actuator/health is available by default, but the other endpoints need to be enabled explicitly. To do so, add the following property to your application.properties file, with a comma-separated list of endpoints:
management.endpoints.web.exposure.include=info,metrics
To expose all endpoints, provide the value *:
management.endpoints.web.exposure.include=*
Now, if you call http://localhost:8081/actuator/metrics, it will return a list of metrics supported by
actuator. There is also a templated URL http://localhost:8081/actuator/metrics/{name}.
Substitute one of the metric names to see the values for that metric. For example, a call to http://
localhost:8081/actuator/metrics/jvm.info returns all you want to know about the current JVM:
{
"name": "jvm.info",
"description": "JVM version info",
"measurements": [
{
"statistic": "VALUE",
"value": 1.0
}
],
"availableTags": [
{
"tag": "vendor",
"values": [
"Oracle Corporation"
]
},
{
"tag": "runtime",
"values": [
"Java(TM) SE Runtime Environment"
]
},
{
"tag": "version",
"values": [
"21.0.1+12-LTS-29"
]
}
]
}
AUTHORIZATION
Those are the general security-related terms you will hear in common security conversations. Now let’s see how
Spring puts it all together to secure your application.
The process starts when the user initiates a request. The servlet filters intercept the request and start processing. The first filter in the chain is the AuthenticationFilter, which extracts the login credentials from the incoming request, produces an Authentication token from them, and passes that token to the AuthenticationManager. The AuthenticationManager then passes the
token to an AuthenticationProvider, which validates it using the UserDetailsService and
PasswordEncoder interfaces. If the credentials are valid, it creates a SecurityContext to contain the token and
stores that in the SecurityContextHolder, which is then accessible to subsequent filters. Applications can
then access the Authentication token to check authorization.
After the request processing has completed, Spring Security clears the SecurityContextHolder to
prevent reuse.
Spring Security by default uses session-based authentication. In that scheme, a session ID is preserved in a cookie
and is sent with each request, allowing the server to retrieve the session and associated SecurityContext. In
this way, the login remains valid until it times out or until the session ends, so there is no need to log in again for
each request.
Applications can also choose to use token-based authentication. In that case a JWT token containing the user
information is generated and passed in the request header for each request. Again, this allows the login to remain
valid for the life of the token or until the session is terminated.
Let’s implement Basic Authentication in our application. As a reminder, this will require a user ID and pass-
word to gain passage to our endpoint. Normally such credentials are stored in a database or in LDAP, and the
AuthenticationManager and AuthenticationProvider work to validate logins. But for our application
we will hard-code the credentials. A user will need to pass in a Base64-encoded username and password in
the request headers, as we will soon see.
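Rather than relying on a website, you can compute the same header value with the JDK's java.util.Base64 class:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Builds the Basic Authentication header value for a username and password.
class BasicAuthHeader {
    public static String headerValue(String user, String password) {
        String credentials = user + ":" + password;
        String encoded = Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        return "Basic " + encoded;
    }
}
```

Calling headerValue("user", "password") yields Basic dXNlcjpwYXNzd29yZA==, matching the value used later in this section.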
To get started, let’s add the Spring Security dependencies to our POM. To determine what dependencies to add to an existing project, you can head back to the Spring Initializr website at https://start.spring.io and pretend you are creating a new project. Type Spring Security to select the dependency; then on the bottom of the
screen select Explore. This will display the dependencies associated with Spring Security, which you can copy
and paste into your POM (see Figure 6.12).
Next, we have to create an @Configuration annotated class that enables Web Security.
18: @Configuration
19: @EnableWebSecurity
20: public class SecurityConfig {
21:
22: @Bean
For the POST, you need to create a string composed of user:password, and then Base64 encode that whole string. You can do so easily by heading over to https://www.base64encode.org, pasting in user:password (taking care not to leave any excess characters or spaces), and clicking the Encode button.
That should return dXNlcjpwYXNzd29yZA==, the keys to the castle. Create a new header named
"Authorization" and provide the value as Basic dXNlcjpwYXNzd29yZA==, as follows:
Authorization: Basic dXNlcjpwYXNzd29yZA==
Content-Type: application/json
In curl, you can now run:
curl -X POST -H "Content-Type: application/json" \
  -H "Authorization: Basic dXNlcjpwYXNzd29yZA==" \
  -d @request.json http://localhost:8081/mtg/payment
This time you should get a happy response.
PROJECT DESCRIPTION
Spring Framework: The core of Spring including dependency injection, web applications, messaging, and more. For more features of the Spring Framework, see Chapter 7, “Testing Your Code with Automated Testing Tools”; Chapter 9, “Parallelizing Your Application Using Java Concurrency”; and Chapter 11, “Coding the Aspect-Oriented Way.”
Spring WebFlux: Provides reactive capabilities to web and REST endpoints (based on Project Reactor).
FURTHER REFERENCES
➤➤ http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm
➤➤ Roy Fielding’s original dissertation on REST
➤➤ https://start.spring.io
➤➤ Initializr
➤➤ https://docs.spring.io/spring-framework/reference/index.html
➤➤ Spring Framework Documentation
➤➤ https://www.base64encode.org
➤➤ Base 64 encoder
➤➤ RESTful Web Services (O’Reilly 2007)
➤➤ Spring in Action (Manning, 2022)
SUMMARY
In this chapter, you learned much of what you will need to get comfortable with Spring. The Spring framework is
huge, containing integrations with every conceivable framework. You learned about how to configure beans using
XML and Java. Then you learned how to use the component scan to make the code even shorter. You learned to
work with properties and profiles along with Spring Boot and Initializr. Then you learned how to handle errors
and deal with security. But most of all you learned that there is a lot to love about Spring. There is much, much
more to learn, so if your enterprise uses specific features, you will want to read the documentation for those.
7
Testing Your Code with
Automated Testing Tools
WHAT’S IN THIS CHAPTER?
I am sure you have contemplated the great existential question of the century: how do you know your code
works as expected? Early in your coding career, you probably attempted to answer that question by adding
print statements or logging statements to see if the code was behaving as expected. Then you might have
learned to use a debugger to step through your code. After a bunch of effort, you were fairly confident you
had some working code. Now all you needed was a way to ensure that it stayed working!
In Chapter 4, “Automating Your CI/CD Builds with Maven, Gradle, Jenkins,” you saw how to run builds
every time a change is pushed to version control. By including automated tests in these builds, you benefit
from regression testing, a kind of safety net that warns you when the behavior of your code differs from
what you were expecting. This chapter covers automated testing and the related ecosystem.
The examples in this chapter use the JUnit 5 testing framework; however, there is a lot of JUnit 4 code in
applications you may work on, so we will highlight common differences between JUnit 4 and 5. You may
encounter another framework for unit testing in Java called TestNG. This chapter does not cover TestNG
since JUnit is more popular. Before JUnit 5 was released, TestNG had some key features that JUnit was
missing, but JUnit 5 has added these as well, and you are much more likely to need to know JUnit than
TestNG. Regardless, once you know JUnit 5, you will find it much easier to pick up TestNG if needed.
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. You can also find the code at https://github.com/realworldjava/
Ch07‐Testing. See the README.md file in that repository for details.
By contrast, the src/test directory structure exists to ensure the code in src/main works as advertised. As with src/main, the java and resources subdirectories exist in src/test as well, to separate Java code from other files. This parallel directory structure becomes second nature very quickly.
Code under src/test/java can access classes in both src/main/java and src/test/java. Code under src/main/java can access other main classes but cannot access anything under src/test.
Our project contains a class called Bingo. You will notice that there are two test classes for Bingo. BingoTest
is a unit test, and BingoIT is an integration test. Yes, IT stands for integration test, which tests the interactions
between two or more parts of the system. It is common for an integration test to interact with a REST API or
database or to make use of the file system, whereas unit tests abstract such details away with mocks, which you’ll
see later in the chapter.
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.junit</groupId>
<artifactId>junit‐bom</artifactId>
<version>${junit5.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit‐jupiter</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
JUnit Jupiter is the name for JUnit 5. The imports used in your JUnit tests have jupiter in the name as
well. The BOM is an exception and uses org.junit (without jupiter) for the group ID. The
junit‐jupiter dependency is scoped as test, so it is available to the tests, but not the main code.
NOTE The group IDs are different for the JUnit 5 BOM and the junit‐jupiter
dependency. Be sure to use the right one in the right place.
For actually running the tests, Maven uses the Surefire plugin for unit tests and the Failsafe plugin for integration
tests. These plugins always keep their version numbers in sync, so you can look up the latest version of either one
on Maven Central at https://mvnrepository.com/artifact/org.apache.maven.plugins/
maven‐surefire‐plugin. This version number goes in the properties section of the POM:
<surefire.version>3.2.5</surefire.version>
198 ❘ CHAPTER 7 Testing Your Code with Automated Testing Tools
To tell Maven to run the tests, you need to add the maven‐surefire‐plugin and the
maven‐failsafe‐plugin to the <build> section of your Maven POM file. Here’s an example:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven‐surefire‐plugin</artifactId>
<version>${surefire.version}</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven‐failsafe‐plugin</artifactId>
<!‐‐ use same version as surefire ‐‐>
<version>${surefire.version}</version>
<executions>
<execution>
<goals>
<goal>integration‐test</goal>
<goal>verify</goal>
</goals>
</execution>
</executions>
</plugin>
NOTE You can remember which is which if you remember that unit tests will fail the build,
but integration tests are fail safe unless you tell them to execute.
When running verify, the output will contain something like this:
[INFO] ‐‐‐ surefire:3.2.5:test (default‐test) @ junit5‐maven ‐‐‐
[INFO] Using auto detected provider
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider
[INFO]
[INFO] ‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐
[INFO] T E S T S
[INFO] ‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐
[INFO] Running com.wiley.realworldjava.testing.BingoTest
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.020 s
‐‐ in com.wiley.realworldjava.testing.BingoTest
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO]
[INFO] ‐‐‐ jar:3.3.0:jar (default‐jar) @ junit5‐maven ‐‐‐
[INFO] Building jar: /path/real‐world‐code‐repos/Ch07‐Testing/junit5‐maven/target/junit5‐maven‐0.0.1‐SNAPSHOT.jar
[INFO]
[INFO] ‐‐‐ failsafe:3.2.5:integration‐test (default) @ junit5‐maven ‐‐‐
[INFO] Using auto detected provider
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider
[INFO]
[INFO] ‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐
[INFO] T E S T S
[INFO] ‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐
[INFO] Running com.wiley.realworldjava.testing.BingoIT
[ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.048 s
<<< FAILURE! -- in com.wiley.realworldjava.testing.BingoIT
[ERROR] com.wiley.realworldjava.testing.BingoIT.connectivityTest –
Time elapsed: 0.006 s <<< FAILURE!
org.opentest4j.AssertionFailedError: database not available
at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:38)
at org.junit.jupiter.api.Assertions.fail(Assertions.java:138)
at com.wiley.realworldjava.testing.BingoIT.connectivityTest(BingoIT.java:26)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
[INFO]
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] BingoIT.connectivityTest:26 database not available
[INFO]
[ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0
[INFO]
[INFO]
[INFO] --- failsafe:3.2.5:verify (default) @ junit5-maven ---
[INFO] ‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐
[INFO] BUILD FAILURE
[INFO] ‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐
The unit tests are run with Surefire. After that, the integration tests are run with Failsafe. This separation allows
the unit tests, which are faster, to run first. It also ensures that the slower integration tests run only if the unit
tests all succeed.
Both Surefire and Failsafe generate a report for each test class they execute. The report includes how many
test methods are present. When all tests complete, they each print a report on the total number of tests run, how
many had issues or were skipped, and details about the issues. In the previous example, there was one unit test
class containing one method that succeeded, and there was one integration test class containing two tests, one
that passed and one that failed. The Maven console output displays the stack trace of the failure and also includes
that stack trace in the summary results at the end of the Failsafe section. The single test failure automatically
caused the Maven build to fail.
While the console shows just the highlights, the full test results are available in the target directory. The unit
test results are in surefire‐reports, and the integration test results are in failsafe‐reports. Each has
XML files for automated systems to read and TXT files for developers to read, as you can see in Figure 7.2.
Maven provides many options to control the behavior of Surefire and Failsafe. For example, you can pass in
system properties, control whether tests are run in parallel, include/exclude specific test patterns, decide if failing
tests should fail the build, and more.
As you can see, this example uses the same BOM as the Maven project. It also allows code in src/test to
reference JUnit classes through the junit-jupiter dependency. The junit-platform-launcher dependency is new
and is used by your build tool to hook into running JUnit. This dependency was optional in the past, so you may
not see it in older documentation.
The following Gradle code uses Groovy syntax to tell Gradle to run all the JUnit unit and integration tests:
test {
    useJUnitPlatform()
    testLogging {
        events "passed", "skipped", "failed"
        exceptionFormat "full"
    }
}
The useJUnitPlatform() call tells Gradle's built-in test task to run your tests on the JUnit Platform. The
testLogging block is how you tell Gradle to list more detail about the tests. By default, Gradle lists only the
failing tests and does not include the failure message. With this configuration, all the test methods get logged,
whether they passed, were skipped, or failed, including any assertion failures. Using Gradle's Kotlin DSL, we can
express the same thing as follows:
tasks.test {
    useJUnitPlatform()
    testLogging {
        events("passed", "skipped", "failed")
        exceptionFormat = org.gradle.api.tasks.testing.logging.TestExceptionFormat.FULL
    }
}
When running build, the output contains the following:
> Task :test FAILED
TEST-DRIVEN DEVELOPMENT
Test‐Driven Development is a software development approach that has its roots in the early
days of agile development. Using TDD, tests are written before the actual implementation code.
Since the test is written before the implementation, it is expected to fail. Once the
implementation is added, the test passes and becomes part of your automated test suite.
In this example, we will create a simple test using Test‐Driven Development. When coding using TDD, you write
code in much smaller chunks than you might be used to, because you write the main code only when the test code
you have written requires you to do so.
Using this approach ensures that all your code is tested. It also helps you think about the behavior of the code
before writing each bit of it. Since you are writing the code one test case at a time, you will find that you need to
refactor as you are writing. This is expected! The tests make refactoring safe because you are guaranteed to have
full test coverage. The phrase red‐green‐refactor is used to sum up the TDD process.
1. Red: Write a test. At this early stage the test should fail (or the code should have compiler errors). We
call this “red,” but some IDEs show the failure in yellow instead of red. Nonetheless, it is still an error or
failure, and the flow still applies.
2. Green: Make the test pass. Add all the code required to make the test pass.
3. Refactor: Make any changes needed to make the code cleaner and easier to understand.
4. Start over with a new test.
You can be more granular if needed by adding part of a test and making it compile before completing the test. Or
you can add just an assertion instead of a whole test.
Understanding Testing Basics ❘ 203
Let’s get started. Imagine you are working on designing software for building robots that cannot weigh more than
a certain amount. In this example, you’ll design a class that provides information about a simple robot. The first
step is to start writing the test by creating a class in src/test/java.
1: package com.wiley.realworldjava.testing;
2:
3: import org.junit.jupiter.api.BeforeEach;
4: import org.junit.jupiter.api.Test;
5: import static org.junit.jupiter.api.Assertions.assertEquals;
6:
7: class FirstRobotTest {
8:
9: private FirstRobot robot; // DOES NOT COMPILE
10:
11: @BeforeEach
12: void setUp() {
13: robot = new FirstRobot(); // DOES NOT COMPILE
14: }
15:
16: @Test
17: void name() {
18: robot.setName("Izzy"); // DOES NOT COMPILE
19: assertEquals("Izzy", robot.toString());
20: }
21: }
This code does not yet compile as the FirstRobot class does not yet exist. However, you can already see a
few features of JUnit. Line 11 uses the @BeforeEach annotation to instruct JUnit to run the setUp() method
before each test method in the class. Line 16 tells JUnit that the name() method is a test. Finally, line 19 tells
JUnit to check that robot.toString() returns the expected value: Izzy.
NOTE In JUnit 5, the convention is to have package‐private (aka default) access for the test
class and methods.
Now it is time to write the minimal code required to make this test pass. Your IDE can help with this. Click the
word FirstRobot on line 9 and let the IDE create the class (using IntelliJ, this is Alt+Enter on Windows and
Option+Enter on Mac). Be sure to select src/main/java as the destination directory. The minimal code to get
this test to pass looks like this:
package com.wiley.realworldjava.testing;

public class FirstRobot {

    private String name;

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return name;
    }
}
The FirstRobot class is called the code under test to differentiate it from the test code itself. Now that you
have written a test and gotten it to pass, the next step is usually to refactor; in this case the code looks good as
is, so let's move on to the next test. This time you'll implement a requirement that states that the robot can add
additional weight so long as the robot's total weight does not exceed 125 pounds. One test you might start with
is a 100-pound robot; add one component and test the current weight.
@Test
void singleComponent() {
    robot.addComponent(100); // DOES NOT COMPILE
    assertEquals(100, robot.getWeight()); // DOES NOT COMPILE
}
Again, the code does not yet compile. This is easy enough to fix by adding the following to the code under test:
private int weight;
The assertThrows() method confirms that the code does in fact throw the exception you said it would and
returns the exception, which you can inspect for additional assertions. Running this test fails with the following:
org.opentest4j.AssertionFailedError: Expected
java.lang.IllegalArgumentException to be thrown, but nothing was thrown.
Since there is a failing test, the TDD methodology requires you to write the code to fix it.
public void addComponent(int weight) {
int newWeight = this.weight + weight;
if (newWeight > 125) {
throw new IllegalArgumentException(
"Cannot add component. Robot would be too heavy");
}
this.weight = newWeight;
}
While this code works, we can consider some more opportunities for refactoring. One such refactoring would be
to extract the hard‐coded value 125 to a constant, which we can call MAX_WEIGHT. Another refactoring
opportunity would be to notice that the method argument and instance variable have the same name, so you
might want to rename weight to componentWeight, leaving us with a final refactored method as follows:
public void addComponent(int componentWeight) {
int newWeight = this.weight + componentWeight;
if (newWeight > MAX_WEIGHT) {
throw new IllegalArgumentException(
"Cannot add component. Robot would be too heavy");
}
this.weight = newWeight;
}
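Assembling the chapter's fragments, the finished code under test might look like the following sketch. The main() method is added here only to demonstrate the behavior; it is not part of the book's example:

```java
public class FirstRobot {

    static final int MAX_WEIGHT = 125;

    private String name;
    private int weight;

    public void setName(String name) {
        this.name = name;
    }

    public int getWeight() {
        return weight;
    }

    public void addComponent(int componentWeight) {
        int newWeight = this.weight + componentWeight;
        if (newWeight > MAX_WEIGHT) {
            throw new IllegalArgumentException(
                "Cannot add component. Robot would be too heavy");
        }
        this.weight = newWeight;
    }

    @Override
    public String toString() {
        return name;
    }

    public static void main(String[] args) {
        FirstRobot robot = new FirstRobot();
        robot.setName("Izzy");
        robot.addComponent(100);
        System.out.println(robot + " weighs " + robot.getWeight()); // prints "Izzy weighs 100"
    }
}
```

Note that the test suite built up during the TDD cycle covers each of these methods, which is what makes the refactoring steps safe.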
In this section, we looked at TDD using a few features of JUnit 5. That concludes our coverage of TDD. The next
sections cover many more JUnit 5 features.
LEARNING JUNIT
In the previous section, we started to explore JUnit 5, but it is important to get a more comprehensive
understanding so you can efficiently read and write JUnit tests. In this section, we will cover many additional
features, and where applicable we will mention the corresponding JUnit 4 syntax, in case you are working on an
older codebase.
The following test class prints a message from each lifecycle method:
public class LifecycleTest {
@BeforeAll
static void first() {
System.out.println("BeforeAll");
}
@BeforeEach
void setUp() {
System.out.println("BeforeEach");
}
@AfterAll
static void last() {
System.out.println("AfterAll");
}
@AfterEach
void tearDown() {
System.out.println("AfterEach");
}
@Test
void test1() {
System.out.println("Test");
}
@Test
void test2() {
System.out.println("Test");
}
}
The @BeforeAll and @AfterAll annotated methods run once for the whole class before the first test and after
the last test, respectively. Note that these two annotations must be applied to static methods since they do not
correspond to any individual tests.
The @BeforeEach and @AfterEach annotated methods run right before and after each individual test. In earlier
versions of JUnit, the methods that served this purpose were required to be named setUp() and tearDown().
While you can now name the methods anything, you might often see these traditional names due to their history.
The output of the previous code is as follows:
BeforeAll
BeforeEach
Test
AfterEach
BeforeEach
Test
AfterEach
AfterAll
As you can see, the @BeforeEach and @AfterEach annotated methods each ran twice since there are two test
methods. In this example, we can see from the output that test1 was executed before test2, but this is not always
the case. JUnit uses an algorithm for determining the execution order that, although deterministic, is not always
obvious. In any case, tests should be independent of each other, so never rely on the order in which the tests are written.
Table 7.1 reviews these annotations.
NOTE In JUnit 4, these annotations were called @BeforeClass, @Before, @After, and
@AfterClass.
Skipping a Test
Sometimes you have a test that is partially written or has broken and needs more time to be fixed. But a failing
test will fail your build, so what is a programmer to do? JUnit provides the @Disabled annotation that you can
use to skip the test and report that it has been skipped in the JUnit result. Here’s an example:
@Test @Disabled("need to finish implementation")
void notReadyYet() {
assertEquals("dog", "cat");
}
As you can see, the @Disabled annotation accepts an optional parameter, which describes the reason the test is
disabled. This is useful for people who are reading the code. It also appears in the test output if you look at the
result in your IDE.
need to finish implementation
Running this test in Maven includes this output:
[INFO] Running com.wiley.realworldjava.testing.SkippingTest
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1
...
[WARNING] Tests run: 7, Failures: 0, Errors: 0, Skipped: 1
And Gradle would include the following:
SkippingTest > notReadyYet() SKIPPED
Asserting Logic
JUnit 5 provides a helper class called org.junit.jupiter.api.Assertions, which contains many static
methods for asserting values.
The basic assertion checks whether two values are equal. Here’s an example:
int expected = 3;
int actual = 1+2;
assertEquals(expected, actual);
The value you expect to receive is the first parameter. The actual value from the code under test is the second
parameter.
There’s an optional third parameter with a message to include if the assertion fails. It’s good to include this
message so you have clearer failures.
assertEquals(expected, actual, "numbers are not equal");
The assertEquals() method is overloaded so you can call it with any data type. There’s even an Object
overloaded version that calls the equals() method to determine whether the expected and actual values
are the same.
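To see why the Object overload matters, recall that two differently constructed objects can still be equal according to equals(). Here is a quick plain-Java illustration (the class name is ours, not the book's):

```java
import java.util.ArrayList;
import java.util.List;

public class EqualsDemo {
    public static void main(String[] args) {
        // assertEquals(expected, actual) on objects delegates to equals(),
        // so these two differently constructed lists count as equal:
        List<Integer> expected = List.of(1, 2, 3);
        List<Integer> actual = new ArrayList<>(List.of(1, 2, 3));
        System.out.println(expected.equals(actual)); // prints true
    }
}
```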
NOTE In JUnit 4, the message parameter is supplied first, unlike JUnit 5, where the message
is supplied last on all the methods that use it.
Asserting Exceptions
Sometimes it is useful to check whether an exception is thrown and whether details about the exception are
correct. Suppose you want to test this class:
public final class Validator {

    public static void validatePositive(int value) {
        if (value <= 0) {
            throw new IllegalArgumentException("Must be positive");
        }
    }
}
To verify that the exception is thrown, call the assertThrows() method:
assertThrows(IllegalArgumentException.class,
    () -> Validator.validatePositive(-1));
This method’s first parameter is the type of exception you expect to be thrown. The second parameter is a lambda
that takes no parameters and calls the code under test. Then there is an optional third parameter for a message to
be included in the output if the test fails.
The assertThrows() method actually returns the thrown exception, which lets you control the details of what
else is verified.
IllegalArgumentException actual =
    assertThrows(IllegalArgumentException.class,
        () -> Validator.validatePositive(-1),
        "exception");
assertEquals("Must be positive", actual.getMessage(), "Message");
Failing Programmatically
Sometimes you need to fail a test. For example, say you have an if/else clause, where the else clause should
never happen. So, in the else clause, you can include this instruction:
fail("Fail here");
Like with the assertions, the message parameter in the fail instruction is optional, but it’s useful to know why
the developer wants the test to fail!
However, the first one is better because it gives a clearer message on failure, as you can see
from the following output for each of these assertions when run independently:
org.opentest4j.AssertionFailedError: animal ==>
Expected :dog
Actual :cat
org.opentest4j.AssertionFailedError:
Expected :true
Actual :false
The first one tells you what is actually wrong, which is much more useful when you have a
failing test. Similarly, assertNull() is better because it tells you cat is not null, whereas
assertTrue() merely tells you that it expected true but was false.
assertNull(actual, "not null");
assertTrue(actual == null, "not null");
Parameterizing Tests
JUnit 5 provides a way to easily run the same test multiple times with different values using the
@ParameterizedTest annotation.
@ParameterizedTest
@ValueSource(strings = { "cat", "dog" })
void values(String value) {
assertEquals(3, value.length(), "# chars");
}
Figure 7.4 shows how running this code creates two tests that run independently.
NOTE You can list String and primitive values in the @ValueSource parameter list.
Besides @ValueSource, there are a few other types of sources. For example, @CsvSource supplies
comma-separated values inline, and @CsvFileSource reads them from a file.
@MethodSource is the most general-purpose one; it allows you to inject values returned from a static method
into your test method signature. In the following example, we use @MethodSource to pass both test data and
expected values:
static List<Object[]> fetchTestData() {
    return List.of(
        new Object[] { "cat", 3 },
        new Object[] { "doggy", 5 });
}
@ParameterizedTest
@MethodSource("fetchTestData")
void values(String value, int expected) {
    assertEquals(expected, value.length(), "# chars");
}
In this case, JUnit calls the fetchTestData() method to discover the expected pairs of parameters for the
parameterized test. If the method you want to call to get the parameterized data is in another class, you can use a
longer form of @MethodSource.
@MethodSource(
"com.wiley.realworldjava.testing.ParameterizedTests#fetchTestData")
Notice how the fully qualified class name is used this time. Additionally, the method to call in that class appears after
a #. Regardless of the location of the target method, the method must be static to be called by @MethodSource.
Working with Common Testing Libraries ❘ 211
TOOL SYNTAX
Maven:
    <dependency>
        <groupId>org.assertj</groupId>
        <artifactId>assertj-core</artifactId>
        <version>3.25.2</version>
        <scope>test</scope>
    </dependency>
Gradle (Groovy):
    testImplementation group: 'org.assertj', name: 'assertj-core', version: '3.25.2'
Gradle (Kotlin):
    testImplementation("org.assertj:assertj-core:3.25.2")
The following shows how to write a single‐line assertion for something that would otherwise require more logic:
package com.wiley.realworldjava.testing.assertj;
import org.junit.jupiter.api.Test;
class AssertThatTest {
@Test
void spaces() {
assertThat("cat").isEqualToIgnoringWhitespace("c a t");
}
}
When using AssertJ, you start with assertThat() and supply a parameter to be tested. AssertJ allows chaining
checks so you can write more succinct code such as this:
assertThat("cat")
.startsWith("c")
.endsWith("t");
AssertJ has lots of custom assertions for strings, lists, files, and more. Here are a few:
➤➤ isSubstringOf()
➤➤ isUpperCase()
➤➤ matches()
Most of the time, typing assertThat() and using autocomplete in your IDE will display all the assertion
methods. The guide and Javadoc in the "Further References" section show where you can find more details.
And that’s just in assertj‐core! There are specific AssertJ libraries for more specialized needs.
In addition to custom assertions with assertThat(), AssertJ supplies soft assertions, which let you collect
multiple assertion failures and report them all at once when the test fails. This is particularly useful for long
end-to-end integration tests or monitoring tests, so you can get a list of everything that is wrong at the same time.
This is the basic way of writing a soft assertion:
@Test
void callingAssertAll() {
    SoftAssertions softly = new SoftAssertions();
    softly.assertThat("robot").isEqualTo("izzy");
    softly.assertThat(126).isLessThanOrEqualTo(125);
    softly.assertAll();
}
This test has two failing conditions. If you use a simple assertion, the test will fail after the first assertion. Using
soft assertions, the test fails with both issues rather than just the first one. The output contains the following:
org.assertj.core.error.AssertJMultipleFailuresError:
Multiple Failures (2 failures)
-- failure 1 --
expected: "izzy"
but was: "robot"
at SoftAssertionsTest.callingAssertAll(SoftAssertionsTest.java:14)
-- failure 2 --
Expecting actual:
126
to be less than or equal to:
125
at SoftAssertionsTest.callingAssertAll(SoftAssertionsTest.java:15)
One caveat: you must call assertAll() at the end of the assertion chain, although there are a variety of ways
to avoid having to remember to call assertAll(). See the Selikoff.net blog post in “Further References”
for examples.
TOOL SYNTAX
Maven:
    <dependency>
        <groupId>org.junit-pioneer</groupId>
        <artifactId>junit-pioneer</artifactId>
        <version>2.2.0</version>
        <scope>test</scope>
    </dependency>
Gradle (Groovy):
    testImplementation group: 'org.junit-pioneer', name: 'junit-pioneer', version: '2.2.0'
Gradle (Kotlin):
    testImplementation("org.junit-pioneer:junit-pioneer:2.2.0")
In this section, you'll look at three of its features to get a feel for the functionality in this project.
Retrying a Test
Sometimes you are dependent on an operation in another service that is unreliable. Or you need to run tests on an
overloaded network. JUnit Pioneer provides an annotation to retry.
@RetryingTest(10)
void flakeyTest() {
    Random random = new Random();
    if (random.nextInt(1, 100) < 75) {
        fail("too high");
    }
}
The previous code retries the test up to 10 times and stops with a passing status as soon as one attempt succeeds.
If all 10 attempts fail, the test fails. Each failed attempt is reported as skipped, with a message like this:
org.opentest4j.TestAbortedException: too high
Test execution #1 (of up to 10) failed ~> will retry in 0 ms...
There are optional parameters for attributes like how long to wait and which exceptions to consider.
TIP This annotation works only for regular tests. If you need to retry a
@ParameterizedTest, use https://github.com/artsok/rerunner‐jupiter.
@Test
@SetSystemProperty(key = "JAVA_HOME", value = "c:/java/java21")
void maskProperty() {
assertEquals("c:/java/java21", System.getProperty("JAVA_HOME"));
}
JUnit provides an option to speed up tests by running multiple tests at the same time. Since system properties are
global, you don’t want to run such tests in parallel. After all, JAVA_HOME can’t be set to a specific value in one test
and be empty in another at the same time. If you use JUnit Pioneer’s annotations when using system properties in
your test, Pioneer ensures that methods with conflicting system property annotations are not run at the same time.
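The conflict exists because system properties are a single, JVM-wide map, as the following plain-Java sketch shows. Saving and restoring the old value is exactly the bookkeeping that Pioneer's annotations automate (the property key here is made up for illustration):

```java
public class SystemPropertyDemo {
    public static void main(String[] args) {
        // System properties are global to the JVM, so two parallel tests
        // setting the same key would stomp on each other.
        String old = System.getProperty("demo.key");
        System.setProperty("demo.key", "value1");
        System.out.println(System.getProperty("demo.key")); // prints value1

        // Cleanup: restore the previous state, which is what
        // @SetSystemProperty does for you after the test.
        if (old == null) {
            System.clearProperty("demo.key");
        } else {
            System.setProperty("demo.key", old);
        }
        System.out.println(System.getProperty("demo.key")); // prints null
    }
}
```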
@CartesianTest
void cartesianTest(
        @Values(ints = {1, 2}) int time,
        @Values(ints = {3, 4}) int tide) {
    // Your test implementation
}
In this example, the cartesianTest method will be executed for every combination of time and tide values.
That is, it will run with (1, 3), (1, 4), (2, 3), and (2, 4).
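The combinations Pioneer generates are simply the cartesian product of the two value lists, equivalent to this plain-Java nested loop:

```java
public class CartesianDemo {
    public static void main(String[] args) {
        int[] times = {1, 2};
        int[] tides = {3, 4};
        // One test execution per (time, tide) combination:
        for (int time : times) {
            for (int tide : tides) {
                System.out.println("(" + time + ", " + tide + ")");
            }
        }
    }
}
```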
MOCKING OBJECTS
Needless to say, you want your unit tests to be fast and independent so you can provide fast feedback and ensure
you are testing logic in isolation. But what can you do when your code needs to access a database, a REST API
call, or another network call? Any of these might be slow or unavailable, and the results may vary. Or what if you
want to call an interface for which the implementation hasn’t even been written yet? Mock objects allow you to
stub such calls by providing default implementations that you control from within the test.
A mock is a type of test double where an object other than the real class is used. Mock objects come in
different flavors.
➤➤ Dummy: Objects that are passed into a method signature but never used. They are just supplied to
satisfy a method signature.
➤➤ Fake: Objects that have some working implementation but are much simpler than the service they are
mimicking.
➤➤ Stubs: Objects that provide contrived responses to expected calls.
➤➤ Spy: Objects that merge real‐world behavior and mock behavior and/or verify information about the
number of calls.
➤➤ Mocks: Objects that are preprogrammed to verify some expectations, for example that a certain method
was called n number of times.
Mocking frameworks provide APIs for creating all of these. In this chapter, we will cover some of the most
popular mocking frameworks: Mockito, EasyMock, and Spring MockMvc.
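To make the taxonomy concrete, here is a hand-rolled stub and spy for a hypothetical score-retrieval interface. All names here are illustrative; frameworks like Mockito generate such objects for you:

```java
import java.util.ArrayList;
import java.util.List;

public class TestDoublesDemo {

    // Hypothetical collaborator, for illustration only.
    interface ScoreService {
        int retrieveScore(int matchNumber);
    }

    // Stub: provides contrived responses to expected calls.
    static class StubScoreService implements ScoreService {
        public int retrieveScore(int matchNumber) {
            return 76;
        }
    }

    // Spy: also records information about the calls made to it.
    static class SpyScoreService implements ScoreService {
        final List<Integer> calls = new ArrayList<>();

        public int retrieveScore(int matchNumber) {
            calls.add(matchNumber);
            return 76;
        }
    }

    public static void main(String[] args) {
        SpyScoreService spy = new SpyScoreService();
        spy.retrieveScore(1);
        spy.retrieveScore(2);
        System.out.println(spy.calls); // prints [1, 2]
    }
}
```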
TOOL SYNTAX
Maven:
    <dependency>
        <groupId>org.mockito</groupId>
        <artifactId>mockito-core</artifactId>
        <version>5.11.0</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.mockito</groupId>
        <artifactId>mockito-junit-jupiter</artifactId>
        <version>5.11.0</version>
        <scope>test</scope>
    </dependency>
Gradle (Kotlin):
    testImplementation(
        "org.mockito:mockito-junit-jupiter:5.11.0")
Now let’s look at an example to understand how Mockito works. Suppose you have an interface that retrieves scores.
public interface ScoreService {
int retrieveScore(int matchNumber);
}
Now, you want to unit test a class called Dashboard, which uses an interface called ScoreService.
public class Dashboard {
16: when(scoreServiceMock.retrieveScore(1)).thenReturn(76);
17: when(scoreServiceMock.retrieveScore(2)).thenReturn(91);
18:
19: List<Integer> expected = List.of(76, 91);
20: List<Integer> actual = dashboard.getScores(2);
21: assertEquals(expected, actual, "scores");
22:
23: verify(scoreServiceMock, times(2)).retrieveScore(anyInt());
24: }
25: }
Line 1 of our test class tells JUnit this is a Mockito test. The @Mock annotation in line 6 tells JUnit to inject a
mock instance of ScoreService into the variable scoreServiceMock on line 7. The mock object allows
you to specify behavior without having a real instance of the class, or even without an implementation class
existing at all.
Lines 16 and 17 register the desired behavior with the mock (which technically makes it a stub). When the
dashboard asks the mock to retrieve the score, the mock returns 76 or 91 depending on the parameter. Line 23
confirms the retrieveScore() method was called exactly twice with any values (which in this sense makes it
a true mock).
The @Mock annotation works only for mocking instance variables. An alternative syntax for creating mocks is
shown here:
ScoreService scoreServiceMock = mock(ScoreService.class);
This alternative syntax allows you to create mocked local variables.
MULTIPLE CONSTRUCTORS
If your class under test already has a constructor that accepts the required arguments, that's
great: the test can pass the mock right in. Otherwise, just create a no-argument constructor and use
this() to pass an instance of the real object to the constructor that requires one.
public TripPlanner() {
this(new Champs());
}
The first constructor is used by the real code and the second by the tests. This allows full testing
of the code without making life harder for the other callers of the class.
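A fuller sketch of this two-constructor pattern follows. TripPlanner and Champs are the names from the box above; the field, the package-private second constructor, and the plan() method are assumptions added for illustration:

```java
public class TripPlanner {

    // Stand-in collaborator, assumed for this sketch.
    static class Champs {
        String route() {
            return "real route";
        }
    }

    private final Champs champs;

    // Used by production callers.
    public TripPlanner() {
        this(new Champs());
    }

    // Used by tests, which can pass a test double instead.
    TripPlanner(Champs champs) {
        this.champs = champs;
    }

    public String plan() {
        return champs.route();
    }

    public static void main(String[] args) {
        System.out.println(new TripPlanner().plan()); // prints real route
    }
}
```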
Since mock objects can be tricky to understand, let’s look at the flow of the code in another way:
1. Lines 6–7 of the test have Mockito inject a mock ScoreService.
2. Line 11 of the test passes that mock to the constructor of Dashboard, which stores the mock in the
service instance variable.
3. Line 16 of the test tells the mock what to do if and when retrieveScore(1) is called.
4. Line 17 of the test tells the mock what to do if and when retrieveScore(2) is called.
5. Line 20 of the test calls Dashboard, which is the code under test.
6. The dashboard.getScores(2) call in line 20 creates a range from 1 to 2 on the stream, ostensibly
to retrieve a score for each element of the range.
7. The first call in the stream is service.retrieveScore(1).
8. The mock consults the initial setup expectations and returns the requested 76.
9. Next is the service.retrieveScore(2).
10. The mock consults the initial setup expectations again and returns the requested 91.
11. The getScores() method completes, returning control to the test.
12. Line 21 of the test asserts the list values.
13. Line 23 is the verify() call where the mock thinks “You said you were planning on making two calls;
let me check whether that happened. Yes, you did make two calls. All good!”
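Conceptually, the expectations registered on lines 16 and 17 behave like a lookup table from arguments to canned return values. A plain-Java sketch of the idea (this is an analogy, not Mockito's actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class StubbingConcept {
    public static void main(String[] args) {
        // What when(...).thenReturn(...) builds up inside the mock,
        // reduced to its essence:
        Map<Integer, Integer> cannedScores = new HashMap<>();
        cannedScores.put(1, 76);
        cannedScores.put(2, 91);

        System.out.println(cannedScores.get(1)); // prints 76
        System.out.println(cannedScores.get(2)); // prints 91
    }
}
```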
Now that you understand the flow, it is time to cover the most important Mockito features in the next sections.
Configuring When/Then
To tell Mockito what to do when a specified call is made, start with Mockito.when(). It is common to use a
static import so you don’t have to type the Mockito class name part. The when() method takes as a parameter
the method you are going to call. You can pass in either the parameter that you expect or a matcher like
anyInt(). Finally, you specify one or more return values. The thenReturn() method takes a varargs
parameter so you can specify unique values to be returned on subsequent calls.
when(scoreServiceMock.retrieveScore(1)).thenReturn(76);
when(scoreServiceMock.retrieveScore(anyInt())).thenReturn(76);
when(scoreServiceMock.retrieveScore(anyInt())).thenReturn(76, 82);
Mocking Objects ❘ 219
If you want to have the mock throw an exception instead of returning a value, you do that by calling
thenThrow(), as shown here:
when(scoreServiceMock.retrieveScore(anyInt()))
.thenThrow(new IllegalStateException());
When stubbing calls with multiple parameters, if you use one matcher (e.g., anyInt()), then
all of the parameters must be matchers. You cannot mix specific values and matchers directly;
instead, wrap the specific value with the eq() method. For example:
when(myMock.myMethod(anyInt(), eq(1))).thenReturn(76);
There are lots of matchers you can pass instead of the direct primitive or object reference. For example:
➤➤ Any value of a certain type: any(Class<T>) or anyXXX() matches primitives, List, Set, Map, or
Collection.
➤➤ Equality: eq(X) matches the primitive or object X.
➤➤ Null checking: isNull() or isNotNull() matches depending on whether the value is null.
➤➤ String operations: contains(String), matches(Pattern), startsWith(String), or
endsWith(String) matches if String operation returns true.
TIP If you supply a null value, Mockito may not find the matching method. Therefore, if
you know that a null value might be passed, you should specify nullable(MyClass.class)
instead of any(MyClass.class).
Verifying Calls
It is often useful to check that the methods you expected were actually called. If you stub a mock call and the
code under test never makes that call, Mockito will fail the test by default. You can tell Mockito not to fail in
such cases with the following:
@Mock(strictness = Mock.Strictness.LENIENT)
Independently, you can explicitly verify any or all of the calls in code. Here are some examples:
verify(scoreServiceMock).retrieveScore(1);
verify(scoreServiceMock, times(1)).retrieveScore(1);
verify(scoreServiceMock, atLeast(1)).retrieveScore(1);
verify(scoreServiceMock, atMost(1)).retrieveScore(1);
verify(scoreServiceMock, atMostOnce()).retrieveScore(1);
verify(scoreServiceMock, never()).retrieveScore(9);
The first and second lines of code in the previous block are equivalent ways of telling Mockito to verify that the
call to retrieveScore() was made exactly once. (Since verifying that something happened exactly once is
most common, it is the default.) The optional verification modes allow you to easily specify how many times a
mock call should happen with given parameters. Additionally, you can use the matchers from the previous
section. For example, the following code says to verify that exactly one call with any int parameter was made:
verify(scoreServiceMock, times(1)).retrieveScore(anyInt());
Mocking Statics
Injecting mocks works well for instance variables or method parameters, but what if you want to mock a static
method call? Since the static is directly in the code under test, you can’t simply inject an object. Luckily, you can
still mock it out with Mockito!
Suppose DashboardViewer has a new method you need to test.
public String header() {
return "%s Competition"
.formatted(LocalDate.now().toString());
}
Testing this requires requesting a mock for the LocalDate.now() static method.
@Test
void header() {
    LocalDate testDate = LocalDate.of(2024, Month.NOVEMBER, 15);
    try (MockedStatic<LocalDate> localDateMock =
            Mockito.mockStatic(LocalDate.class)) {
        localDateMock.when(LocalDate::now).thenReturn(testDate);
        // viewer is the DashboardViewer instance under test
        assertEquals("2024-11-15 Competition", viewer.header());
    }
}
TOOL SYNTAX

Maven:
<dependency>
    <groupId>org.easymock</groupId>
    <artifactId>easymock</artifactId>
    <version>5.2.0</version>
    <scope>test</scope>
</dependency>

Gradle (Groovy):
testImplementation group: 'org.easymock', name: 'easymock', version: '5.2.0'

Gradle (Kotlin):
testImplementation("org.easymock:easymock:5.2.0")
Looking at the unit test for Dashboard using EasyMock, you should see some familiar concepts.
1: @ExtendWith(EasyMockExtension.class)
2: public class DashboardTest {
3:
4: private Dashboard dashboard;
5:
6: @Mock
7: private ScoreService scoreServiceMock;
8:
9: @BeforeEach
10: void setUp() {
11: dashboard = new Dashboard(scoreServiceMock);
12: }
13:
14: @Test
15: void getScores() {
16: expect(scoreServiceMock.retrieveScore(1)).andReturn(76);
17: expect(scoreServiceMock.retrieveScore(2)).andReturn(91);
18:
19: replay(scoreServiceMock);
20:
21: List<Integer> expected = List.of(76, 91);
22: List<Integer> actual = dashboard.getScores(2);
23: assertEquals(expected, actual, "scores");
24:
25: verify(scoreServiceMock);
26: }
27: }
Just like Mockito, line 1 has an @ExtendWith annotation to give the mocking framework control over the test
execution flow. Line 6 has an @Mock annotation to inject a mock object. Lines 16 and 17 set the expected return
values for when the retrieveScore() method is called. While the method names are different than Mockito,
the concept is the same.
EasyMock has many more features. As you can see from the similarities with Mockito, it is easy to learn another
mock framework once you know one!
Tool Syntax

Maven:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <version>3.2.4</version>
    <scope>test</scope>
</dependency>
To begin, we will create three example classes to show how to use MockMvc to test a controller. The example
is a match‐tracker service that retrieves the current match number at a competition. The first class is part of the
application and exists only once, regardless of how many test classes you have.
@SpringBootApplication
public class DashboardApp {
}
The SpringBootApplication would start the app for real if we were running it. Even though this is
only a test, it needs to exist for MockMvc to be able to tell if your Spring application will work. Next is the
MatchTrackerService the controller will reference:
@Service
public class MatchTrackerService {
    public int getCurrentMatchNumber() {
        return 0; // real lookup logic omitted; the tests will mock this method
    }
}
Finally, the DashboardController exposes the REST endpoint:
@RestController
public class DashboardController {
    @Autowired
    private MatchTrackerService tracker;

    @GetMapping("/match")
    public @ResponseBody String displayCurrentMatch() {
        return "In match %s".formatted(tracker.getCurrentMatchNumber());
    }
}
This simple controller uses dependency injection (as you learned in Chapter 6) to get the
MatchTrackerService. It uses the data returned from the service to format a nice display. You might be
thinking that you could test the DashboardController class using Mockito or EasyMock directly, and you
would be right! You can directly inject the dependency and test the displayCurrentMatch() logic. But what
MockMvc adds is the ability to test your endpoints using REST calls and have Spring do all the dependency injec-
tion and testing that your annotations and Spring configuration are correct.
1: @SpringBootTest
2: @AutoConfigureMockMvc
3: class DashboardControllerTest {
4:
5: @Autowired
6: MockMvc mockMvc;
7:
8: @MockBean
9: MatchTrackerService trackerMock;
10:
11: @Test
12: void matchNumber() throws Exception {
13: when(trackerMock.getCurrentMatchNumber()).thenReturn(35);
14:
15: mockMvc.perform(get("/match"))
16: .andExpect(status().isOk())
17: .andExpect(content().string(containsString("In match 35")));
18:
19: verify(trackerMock, times(1)).getCurrentMatchNumber();
20: }
21: }
Lines 1 and 2 take care of telling JUnit to use Spring MockMvc for testing. Lines 5 and 6 autowire the MockMvc
object to make it available for the test. Lines 8 and 9 create a mock that uses Mockito under the covers. Line 13
sets up the Mockito mock’s expected configuration.
Lines 15–17 use MockMvc to call the REST API /match. This simulates a call you could make in the browser as
if the application were actually running! The call asserts the HTTP status code and expected return value. Finally,
line 19 uses Mockito to verify the expected method was called on the mock.
MockMvc has many features such as methods to check the headers and content type.
Measuring Test Coverage ❘ 225
In addition to mocking, Spring has many other useful features for testing. For example,
WebTestClient helps test HTTP calls.
Note that Spring is using servlets in the background, so servlet filter chaining is probably
occurring, which could interfere with your tests, for example for checking Spring Security.
To prevent the servlet chaining from occurring, you can inject a OncePerRequestFilter,
which will bypass all of the filter chaining.
NOTE Other code coverage tools in use are Clover and Cobertura. Clover started as a
commercial tool from Atlassian and was open‐sourced in 2017. Cobertura has always been
open‐source but is less popular than JaCoCo.
JaCoCo works with Maven and Gradle for building and also integrates with SonarQube. Table 7.7 shows
JaCoCo’s IDE integration.
IDE INTEGRATION

IntelliJ: In the coverage settings, change the pull-down from the IntelliJ coverage runner to JaCoCo.
VS Code: The Java extensions or Coverage Gutters plugins display results generated from Maven/Gradle.
@BeforeEach
void setUp() {
    match = new Match(4);
}

@Test
void win() {
    match.setWin();
}
Once you run one or more tests with coverage, you can see the results in two locations. First there is a coverage
report. Figure 7.7 shows IntelliJ's JaCoCo report, and Figure 7.8 shows Eclipse's JaCoCo report. Notice how even
though the underlying coverage data is the same, the IDEs categorize the results in different ways.
The other location where you can see the coverage is in the editor itself. Each line is annotated with a color so
you can see the results directly while looking at the code and quickly identify untested code. The color legend is
as follows:
➤➤ Red: The line is not tested at all.
➤➤ Yellow: The line is not tested for at least one branch.
➤➤ Green: The line is tested for all branches.
Then you add the plugin to the <build> section of your POM file.
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco‐maven‐plugin</artifactId>
<version>${jacoco.version}</version>
<executions>
<execution>
<goals>
<goal>prepare‐agent</goal>
<goal>report</goal>
</goals>
</execution>
</executions>
</plugin>
Running the Maven build creates a jacoco.exec file in the target directory. The .exec file extension makes it
easy to recognize the file is from JaCoCo. This contains the same data your IDE would generate if you ran all the
tests in your project with coverage turned on. In fact, the jacoco.exec file can be read by your IDE to show the
coverage directly on the class files or by Sonar to show the coverage results with your other Sonar reporting.
For Gradle, the setup is even shorter. Just add the ID jacoco to the plugins section. When you build, a
jacoco directory will be created with a test.exec file in it.
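As a sketch, a minimal Gradle (Groovy) build with JaCoCo enabled might look like this. The finalizedBy wiring is an optional convenience, not required by the plugin:

```groovy
plugins {
    id 'java'
    id 'jacoco'   // instruments the test task and records coverage
}

test {
    finalizedBy jacocoTestReport  // generate the HTML/XML report after tests run
}
```

Running `gradle test` then produces the .exec data and, with the extra wiring, the human-readable report as well.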
Finally, clicking the clock icon allows you to see test history and the results of each run.
Match: Test class does not exist. IntelliJ offers to create a new test class.
Match: Test class exists. IntelliJ offers to switch to the test class or create another one.
MatchTest: Class under test exists. IntelliJ automatically navigates to the class.
MatchTest: Class under test does not exist. IntelliJ gets confused and offers to switch to itself or create a new test class.
@Test
void testNext() {
Weather.Season result = weather.next(Weather.Season.WINTER);
assertEquals(Weather.Season.WINTER, result);
}
}
Testing Contracts
It is often useful to test the contracts of APIs, HTTP calls, or microservices. Consumer-driven contract (CDC)
testing lets you test the components independently while ensuring that the components meet the contract requirements.
There are two parties to a contract. The provider supplies the API, and the consumer calls the API. Both need to
respect the contract or the offending tests will fail.
To use CDC, you create a contract, which generates provider and client stubs. For example, you can specify an
HTTP request and code the expected status, headers, and response. This represents the agreement, or “pact,”
between the consumer and provider.
Two popular frameworks for managing CDC are Spring Cloud Contract and Pact. Spring Cloud Contract uses
Groovy or YAML to write the contracts and generates tests to validate the contract. Spring Cloud Contract can
also integrate with Pact. In Chapter 6, we covered Spring Cloud Contract.
Testing Mutations
Earlier, you learned that code coverage is not necessarily sufficient to determine if tests are good. One way to
ensure good tests is to conduct a code review. Another approach is mutation testing. When you run a mutation
test, the mutation testing library inserts tiny changes and runs your tests. If the tests fail, that is good. It means the
tests detected the change. If the tests still pass, they aren’t thorough enough.
For example, a mutation test might get rid of an if statement, or perhaps it will change a < to a <=. Since there
are a large number of possible mutations, a mutation testing library automatically takes care of this for you. In
Java, the PIT (PITest) framework generates mutations and reports on the results. It can integrate with both your
build and IDE. PIT used to stand for Parallel Isolated Test. As it evolved, or mutated, the acronym didn’t apply
anymore, but the name stuck.
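Wiring PIT into a Maven build is a one-plugin job. This is a sketch; the version number is illustrative, so check pitest.org for the current release:

```xml
<plugin>
    <groupId>org.pitest</groupId>
    <artifactId>pitest-maven</artifactId>
    <version>1.15.2</version>
</plugin>
```

You can then run `mvn org.pitest:pitest-maven:mutationCoverage` to generate the mutation report in the target directory.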
@Test
void moveDown() {
    Coords original = new Coords(4, 7);
    Coords actual = Shift.moveDown(original);
    // accessor names assumed; note the expected value 4 is repeated
    assertEquals(4, actual.x());
    assertEquals(6, actual.y());
}
Compare that to a version that removes the duplication:
@Test
void moveDown() {
Coords original = new Coords(X, 7);
Coords actual = Shift.moveDown(original);
assertCoords(actual, X, 6);
}
First, notice that the expected value 4 is no longer repeated in the test as it is a constant. Second, the duplication
is abstracted into a common method called assertCoords(). Deciding which of these choices is the clearer one
is subjective. The key in tests is to make sure you aren’t avoiding repetition at the expense of readability. Some
duplication is OK!
FURTHER REFERENCES
➤➤ https://junit.org/junit5/docs/current/user-guide
JUnit 5 User Guide
➤➤ https://github.com/junit-team/junit5-samples
Sample projects by build tool
➤➤ https://assertj.github.io/doc
AssertJ documentation
➤➤ https://www.javadoc.io/doc/org.assertj/assertj-core/latest/org/assertj/core/api/Assertions.html
AssertJ Assertions Javadoc
➤➤ https://www.selikoff.net/2024/03/23/multiple-ways-of-using-soft-asserts-in-junit-5
Variety of ways to use soft assertions without calling assertAll()
➤➤ https://junit-pioneer.org/docs
JUnit Pioneer documentation
➤➤ https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/Mockito.html
Mockito documentation
➤➤ https://easymock.org/user-guide.html
EasyMock documentation
➤➤ https://docs.spring.io/spring-framework/reference/testing/spring-mvc-test-framework.html
Spring MockMvc documentation
➤➤ www.eclemma.org/userdoc/index.html
JaCoCo documentation
➤➤ https://openclover.org/documentation
Clover documentation
➤➤ https://docs.pact.io
Pact documentation
➤➤ https://spring.io/projects/spring-cloud-contract
Spring Cloud Contract documentation
➤➤ https://pitest.org/quickstart
PIT documentation
SUMMARY
In this chapter, you learned about how to write tests using Java. Key concepts included the following:
➤➤ Using JUnit for assertions and test flow
➤➤ Creating mock objects using Mockito, EasyMock, and Spring MockMvc
➤➤ Measuring test coverage to identify missing tests
➤➤ Types of testing
8
Annotation Driven Code with
Project Lombok
WHAT’S IN THIS CHAPTER?
Project Lombok is one of life’s little conveniences. Lombok uses simple annotations to eliminate a ton of
boilerplate code, such as getters and setters, logging, equals, and hashCode, as well as other scaffolding
that might otherwise obscure a Java codebase. Lombok does this through the magic of bytecode reweaving,
a process that rewrites the .class files by adding the desired code, without modifying your source files.
You might worry that this will complicate debugging and logging since the source code line numbering
won’t match the modified class. But in fact, Lombok adjusts for that in the executing code, and the major
Integrated Development Environments (IDEs) also handle it gracefully. Admittedly, there are times during
intricate debugging sessions where stepping into generated code might be difficult, or cryptic logging
messages might surface, for example if an exception is thrown from code that isn’t in your codebase!
Some of this might sound like Java "records," previewed in Java 14 and finalized in Java 16. For cases where you need immutable
data objects, records would usually be the way to go. But Lombok offers much broader capability, and
many enterprise development teams are using it. For those that do, it adds great value.
You also might be thinking that you can use code generation to create getters, setters, equals, etc. The
problem with code generation is that you have code that needs to be maintained and risks getting
out of sync. Additionally, code coverage tools will show this generated code as having low coverage.
As a bit of trivia, Lombok was named after an island in Indonesia about a day’s drive east of the great
island of Java.
In this chapter, we will cover the most important parts of Project Lombok, including installing Project
Lombok and using all of the major annotations, such as @Data, @ToString, @Log, etc.
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. The code can also be found at https://github.com/realworldjava/
Ch08-Lombok. See the README.md file in that directory for details.
Installing in IntelliJ
As of IntelliJ version 2020.3, Lombok comes bundled, so there is not much to do. The first time you use Lombok,
IntelliJ will prompt you to enable annotation processing, but you can be proactive inside IntelliJ by going to
Settings ➪ Build, Execution, Deployment ➪ Compiler ➪ Annotation Processors, as shown in Figure 8.1.
To be sure, if you include the Lombok JAR in your build, the code will compile even without the plugin. But
without the plugin, the IDE will think any calls to the Lombok-generated getters, setters, etc., are errors because
the IDE will not see this code in the codebase. In earlier IntelliJ versions, developers would have needed to install
the Lombok plugin. But nowadays, Lombok is included, so that is no longer required.
Installing in Eclipse
To install Lombok in Eclipse, follow these steps:
1. Head over to https://projectlombok.org/download and download the latest version of Lom-
bok, naming it lombok.jar. Double-click lombok.jar, and in a few seconds it will locate your Eclipse
installation (see Figure 8.2).
WARNING On a Mac, you may get the message "lombok.jar cannot be opened because it is
from an unidentified developer." If that happens, right-click the file and choose Open With and
then Java Launcher. Then click Open on the subsequent pop-up that says "macOS cannot
verify the developer of lombok.jar" confirming you want to open it.
Installing in VS Code
The two major Java extensions for VS Code (Oracle and Microsoft) include Lombok support. There was an
older extension from Microsoft called “Lombok’s Annotation Support for VS Code.” That is deprecated and only
meant for people on old versions of the VS Code plugins.
TOOL SYNTAX
Maven <dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>1.18.30</version>
<scope>provided</scope>
</dependency>
Grab the dependency for your build tool and include that in your dependencies.
Please note that Lombok is only needed to compile your project. Once the project is compiled, you will no longer
need it for executing your code. In Maven language, this is called a provided scope: the JAR is “provided” exter-
nally for execution (or in our case, not needed at all). In Gradle-speak, this is called a compileOnly dependency.
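For example, in a Gradle (Groovy) build the Lombok wiring typically looks like the following sketch (version illustrative). Note that Gradle also needs the annotationProcessor entry so Lombok's annotation processor actually runs at compile time:

```groovy
dependencies {
    compileOnly 'org.projectlombok:lombok:1.18.30'
    annotationProcessor 'org.projectlombok:lombok:1.18.30'
}
```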
If you are not using Maven or Gradle, you will need to add Lombok to the classpath for your project. See the
README.md file in the GitHub project folder for more details.
IMPLEMENTING LOMBOK
The best way to learn about Lombok annotations is to see them in action. Let’s start with an example, a class
called User that contains a user ID, name, address, and perhaps some other instance variables. First, we’ll do it
the old-fashioned way, without Lombok.
The class has three members: userID, name, and address.
public class User {
// userID does not compile. We made it final to demo a constructor. Stay tuned!
private final int userID;
private String name;
private String address;
}
But there is a lot of boilerplate code required to make this class useful. Adding getters, setters, equals(),
hashCode(), and toString() turns the nice clean User class into this somewhat cluttered beast:
import java.util.Objects;

public class User {
    private final int userID;
    private String name;
    private String address;

    public User(int userID) {
        this.userID = userID;
    }

    public int getUserID() { return userID; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getAddress() { return address; }
    public void setAddress(String address) { this.address = address; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null) return false;
        return o instanceof User user
            && getUserID() == user.getUserID()
            && Objects.equals(getName(), user.getName())
            && Objects.equals(getAddress(), user.getAddress());
    }

    @Override
    public int hashCode() {
        return Objects.hash(getUserID(), getName(), getAddress());
    }

    @Override
    public String toString() {
        return "User{" +
            "userID=" + userID +
            ", name='" + name + '\'' +
            ", address='" + address + '\'' +
            '}';
    }
}
➤➤ A required-args constructor, containing the unassigned final, nonstatic, non-null instance variables
➤➤ canEqual(), which is used in special cases (for example, if you need to redefine equality
for inheritance)
Lombok does not touch your source code; it just weaves the boilerplate code into your bytecode after
compilation.
You will see soon that each of those methods can also be rewoven individually for more granular control,
by the more specific annotations: @ToString, @EqualsAndHashCode, @Getter, @Setter, and
@RequiredArgsConstructor.
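Putting that together, the cluttered User class shown earlier collapses to a few lines. This is a sketch and requires Lombok on the compile classpath:

```java
import lombok.Data;

@Data
public class User {
    // final field: becomes a parameter of the required-args constructor
    private final int userID;
    private String name;
    private String address;
}
```

Lombok weaves the getters, setters, equals(), hashCode(), toString(), and required-args constructor into the compiled class for you.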
@Getter @Setter
public class Automobile {
private String manufacturer;
private String make;
private String model;
}
This will generate public getters and setters for all your instance variables.
In case you want to change the access level for specific fields, you can annotate those individually, which will
override the default behavior. You can also specify AccessLevel.NONE to prevent that accessor from being
autogenerated at all.
import lombok.AccessLevel;
import lombok.Getter;
import lombok.Setter;
@Getter @Setter
public class Automobile {
    // Generate a protected setter for manufacturer:
    // protected void setManufacturer(String m)
    @Setter(AccessLevel.PROTECTED) private String manufacturer;
    private String make;
    private String model;
}
Using @ToString
You can annotate any class with the @ToString annotation, which will generate a
public String toString() method, containing all of the nonstatic fields.
Some optional switches let you format the String output, for example, how verbose to make it. We won’t
mention them all here; refer to the Lombok website at https://projectlombok.org/ for full documenta-
tion. However, one common feature we will mention is how to control which fields to include. By default, every
nonstatic field is included. You can declaratively exclude fields by annotating them with @ToString.Exclude;
for example in the following code, the address field is excluded. You can see in the printout from executing that
code that the field is omitted in the output.
import lombok.AllArgsConstructor;
import lombok.ToString;

@AllArgsConstructor @ToString
public class User {
    private final int userID;
    private String name;
    @ToString.Exclude private String address;
}
@AllArgsConstructor @ToString
public class User {
    private final int userID;
    private String name;
    @ToString.Exclude private String address;
    private List<String> friends;
}
Using @EqualsAndHashCode
One of the fundamental rules of Java programming is that the equals() and hashCode() methods must agree;
i.e., any objects that are equals() must return the same hashCode(). In addition, we would prefer different
instances to produce different hash codes.
We can achieve this by annotating classes with @EqualsAndHashCode, which applies a prime hash against all
of the nonstatic, nontransient fields of the object instance.
You can specifically exclude fields from the hashCode and equals, by annotating the fields to exclude with
@EqualsAndHashCode.Exclude, in a similar fashion to what we saw with @ToString.
import lombok.AllArgsConstructor;
import lombok.EqualsAndHashCode;
@AllArgsConstructor
@EqualsAndHashCode
public class Automobile {
private String make;
private String model;
@EqualsAndHashCode.Exclude
private String color;
System.out.println(car1.equals(car2)); // true
System.out.println(car1.equals(car3)); // false
}
}
You can see that color is excluded from the equals and hashCode methods. So, car1 and car2 are equal,
since they differ only in color. However, car1 and car3 are not equal, since they differ in model.
You can annotate your class with any or all of the constructor annotations. Here's an example:
import lombok.AllArgsConstructor;
import lombok.NoArgsConstructor;
import lombok.RequiredArgsConstructor;
import lombok.ToString;
You don’t need to memorize these; your IDE will insert the appropriate import for you automatically.
For example, the following snippet will write the appropriate log messages using Slf4j.
import lombok.extern.slf4j.Slf4j;
@Slf4j
public class LombokLogging {
public static void main(String[] args) {
log.debug("Starting");
log.info("Hello, Words");
log.debug("Goodbye");
}
}
Slf4j is a wrapper around any logging implementation. The default logger used by the Lombok @Slf4j
annotation will be Logback, but you can override that by including a dependency to your preferred logging
framework in your pom.xml or build.gradle file.
Additionally, there are many tweaks you can make in the lombok.config file to override the Lombok defaults.
Just include lombok.config in or under your Java source root directory. The config file will apply to all source
files in or under that directory.
As an example, let’s say you want to change the default log variable name to logger. You would specify the
following configuration in the lombok.config file:
lombok.log.fieldName=logger
Or consider the fact that the loggers are created to be static. But suppose you want to change these to be not
static, say to avoid contention with various instances of the same class. Then you can include the following in
lombok.config:
lombok.log.fieldIsStatic=false
To see the full list of lombok.config configuration keys with descriptions, use the following command:
java -jar lombok.jar config -g --verbose
Or see the Lombok online documentation at https://projectlombok.org/features/configuration.
FURTHER REFERENCES
➤➤ https://projectlombok.org
The complete guide to Lombok installation, features, and community
SUMMARY
In this chapter, you looked at the most important annotations provided by Project Lombok for getting rid of
heaps of boilerplate code, you saw how to override default behavior, and you learned how to “Delombok” your
code. There are more annotations, and there are configuration switches to modify the behavior of the ones you
have seen, so refer to the excellent Project Lombok documentation at projectlombok.org.
Specifically, you learned about the following annotations:
➤➤ @Data: Injects into your bytecode (“weaves”) toString, equals, and hashCode, as well as getters
on all fields, setters on all nonfinal fields, and a required-args constructor.
➤➤ @Getter and @Setter: Weave getter and setter methods into your bytecode.
➤➤ @ToString: Weaves toString method based on some or all instance variable values.
➤➤ @EqualsAndHashCode: Weaves equals and hashCode methods based on some or all instance
variable values as you like.
➤➤ @NoArgsConstructor, @RequiredArgsConstructor, and @AllArgsConstructor: Weave
constructors into your bytecode.
➤➤ @Log, @Slf4j, @Log4j2, etc.: Create a static logger class variable.
9
Parallelizing Your Application
Using Java Concurrency
WHAT’S IN THIS CHAPTER?
Let’s not pull any punches: concurrency is hard. We human beings are conditioned to think in a synchro-
nous fashion. But if we want to have web servers that process zillions of requests, if we want to extract and
transform tens of thousands of files per hour, or if we generally want to make our applications performant,
we need to execute things concurrently. A thread is Java’s unit of concurrency, and multiple threads can
run at the same time. The challenge is that multiple threads can share common resources, and if some are
writing and some are reading, there could be unexpected results. To make your applications thread‐safe,
you need to think about how two or more concurrent threads interact with each other and what happens
when multiple threads try to access the same resource. Thankfully there are tools, rules, and frameworks
that make things easier.
In this chapter we will review some low‐level concurrency basics, followed by higher‐level concurrency
tools introduced to the Java language for synchronization, and in particular CompletableFuture and
virtual threads.
Traditionally you would launch a thread by passing a Runnable to the Thread constructor and call
start() on the newly created thread. Java 21 introduced some newer patterns for creating and starting
threads, and we will see the new approach in this chapter. Finally, we will look at some of the concurrency
tools included in Spring.
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. The code can also be found at https://github.com/realworldjava/
Ch09-Concurrency. See the README.md file in that repository for details.
A more common way is to provide a Runnable instance to a thread (in our case doSomeWork() is a method
that will do some work):
// Create a Runnable that defines the work to be done:
Runnable someRunnable = new Runnable() {
@Override
public void run() {
doSomeWork();
}
};
// assign the Runnable to a thread and start it running:
new Thread(someRunnable).start();
You can get rid of some of the boilerplate code by defining your Runnable as a lambda. The following is
equivalent to the previous code:
new Thread(() -> doSomeWork()).start();
A more modern and convenient way of creating and launching threads is to use one of the factory methods of the
Thread class introduced in Java 21. These use a fluent builder pattern to configure, start, and return the thread.
Thread thread = Thread
    .ofPlatform()
    .start(() -> doSomeWork());
Using this syntax you can create a new thread, supply functionality in the form of a Runnable functional
interface (coded as () -> doSomeWork()), and start it, all in one command.
You can optionally configure attributes such as the thread name, thread group, daemon, priority, and more using
the builder syntax, as follows:
Thread thread = Thread.ofPlatform()
    .name("Some meaningful name")
    .priority(Thread.NORM_PRIORITY + 1)
    .group(new ThreadGroup("Some thread group"))
    .daemon(false)
    .start(() -> doSomeWork());
To create a thread without starting it, use .unstarted() instead of .start().
Thread thread = Thread.ofPlatform()
    .name("Some meaningful name")
    .priority(Thread.NORM_PRIORITY + 1)
    .group(new ThreadGroup("Some thread group"))
    .daemon(false)
    .unstarted(() -> doSomeWork());
By default, platform threads will prevent an application from exiting until they have all
completed. Sometimes, however, we have support threads that just run in the background, but
we don’t want them to block the application from exiting. An example of this would be
threads in a thread pool, where they are pooled in advance to execute anticipated work, but we
don’t want them to block the application from exiting. We can designate such a thread as a
daemon thread by calling thread.setDaemon(true), or by calling .daemon(true) using
the builder pattern. Daemon threads will not prevent an application from exiting, even though
they are still in the runnable state.
Thread priority is a hint to the thread scheduler about which threads should be given more
time slices.
FIGURE 9.1: The happy path: one thread reads the hit counter (34), increments it, and writes back 35.
In the happy path, a user request comes in on Thread 1 and we update the hit counter. Imagine the hit counter is
up to hit #34 when the request comes in. It processes the request as follows:
➤➤ Step 1: User on Thread 1 reads the value of the hit counter and sees value = 34.
➤➤ Step 2: Thread 1 increments the value to 35.
➤➤ Step 3: Thread 1 writes back the incremented value to the hit counter, which now has a value of 35. And
all is well.
But now let’s look at the less fortunate path in Figure 9.2, where Thread 1 and Thread 2 are both trying to update
the hit counter at the same time.
UNDERSTANDING CONCURRENCY BASICS ❘ 253
FIGURE 9.2: Two threads read the hit counter (34) at the same time; each increments its copy to 35 and writes back, so one hit is lost.
➤➤ Step 1: Thread 1 reads the Hit Counter and sees value = 34.
➤➤ Step 2: Before Thread 1 has a chance to update the Hit Counter to 35, Thread 2 reads the Hit Counter
and sees 34 as well.
➤➤ Step 3: Thread 1, not knowing about Thread 2, increments its 34 to 35.
➤➤ Step 4: Just then, Thread 2 also increments its 34 to 35.
➤➤ Step 5: Thread 1 writes its 35 back to the Hit Counter.
➤➤ Step 6: Thread 2 writes its 35 back to the Hit Counter. Lo and behold, two threads updated the Hit
Counter, but due to the unfortunate interweaving, only one hit was recorded. Not good. This is what is
known as a race condition, and it is our job to protect against this in our multithreaded code.
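One way to eliminate this race is to make the read-increment-write a single atomic operation. Here is a minimal, self-contained sketch using java.util.concurrent.atomic.AtomicInteger; the class and method names are our own, not from the book's example:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class HitCounterDemo {

    // incrementAndGet() performs the read, increment, and write as one
    // atomic step, so concurrent callers can never lose an update.
    static int countHits(int startingValue) throws InterruptedException {
        AtomicInteger hitCounter = new AtomicInteger(startingValue);
        Thread t1 = new Thread(hitCounter::incrementAndGet);
        Thread t2 = new Thread(hitCounter::incrementAndGet);
        t1.start();
        t2.start();
        t1.join();   // wait for both threads to finish
        t2.join();
        return hitCounter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countHits(34)); // prints 36: both hits recorded
    }
}
```

However the two threads interleave, both increments are recorded, unlike the plain int counter above.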
Thread 1 Thread 2
AAAA DDDD
High Low
FIGURE 9.3: Each thread sets one‐half of the variable producing an unexpected hybrid result.
TIP Even on a 64‐bit machine, getting or setting long or double variables is not
guaranteed to be atomic.
TIP Don't assume that means the ++ and -- operators are atomic; there are no such
guarantees. Even if a variable is volatile, it would be entirely feasible for a reader to see a
halfway-set result of ++ or --.
NEW
NEW is for a thread that has been initialized but not yet started.
RUNNABLE
RUNNABLE is the thread state for a thread that is currently running or is able to run once a processor frees up.
Threads transition from NEW to RUNNABLE when start() is called. Similarly, when a thread is no longer
waiting or blocked, it changes back to RUNNABLE in preparation for its next turn to run.
BLOCKED
BLOCKED is the thread state for a thread waiting on a shared lock to enter or re‐enter a synchronized block/
method, which you’ll see in the next section.
WAITING
WAITING is the thread state for a thread waiting for a notify() or notifyAll()call from another thread that
is locked on the same object. A thread will call object.wait() or thread.join() without a timeout to enter
the WAITING state.
TIMED_WAITING
TIMED_WAITING is the thread state for a thread waiting for a specified amount of time to pass. A thread will call
Thread.sleep(long), thread.join(long), or object.wait(long), supplying a long timeout, to enter
the TIMED_WAITING state.
TERMINATED
TERMINATED is the thread state for a thread that has completed execution.
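You can observe these states directly with Thread.getState(). A minimal sketch (class name ours):

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread(() -> {
            try {
                Thread.sleep(1_000); // TIMED_WAITING while sleeping
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        System.out.println(sleeper.getState()); // NEW: constructed but not started
        sleeper.start();
        Thread.sleep(100);                      // give it time to reach sleep()
        System.out.println(sleeper.getState()); // typically TIMED_WAITING by now
        sleeper.join();                         // wait for it to finish
        System.out.println(sleeper.getState()); // TERMINATED
    }
}
```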
Synchronizing
To help prevent race conditions, Java lets you guard a block of code with synchronized(someObject).
Only one thread at a time can hold the lock on someObject; any other thread that reaches a block synchronized
on the same object is blocked until the owner releases the lock by exiting its synchronized block. The waiting
threads sit in the BLOCKED state and cannot proceed until the lock is released.
There is one exception. When a thread is in a synchronized block, it can call wait on the lock. While that thread
is in the WAITING state (or TIMED_WAITING, if it called someObject.wait(long)), another thread can
enter a synchronized block that is locked on the same lock instance. In this way, many threads can take turns
grabbing the same lock instance and calling wait on it, so that effectively all are waiting on the same lock.
Finally, another thread can grab the someObject lock by calling synchronized(someObject) and then call
someObject.notify() to wake up one arbitrary thread, or someObject.notifyAll() to wake up all of
the waiting threads. Hmm, will there now be many threads owning the same lock? Not quite. The threads that
are “awakened” actually transition to the BLOCKED state and wait for the notifying thread to exit the
synchronized block, thereby releasing the lock, at which point the thread scheduler will select one waiting thread
and transition it back to RUNNABLE.
258 ❘ CHAPTER 9 Parallelizing Your Application Using Java Concurrency
In this way you can have one thread set some state and then call notifyAll() and exit the synchronized block.
Then all waiting threads will have an opportunity to wake, check the state, and act accordingly, or go back to
wait. First, suppose you have an object to use for locking:
final Object lock = new Object(); // can be any class type
Here is the syntax for entering the WAITING state:
synchronized(lock) {
    lock.wait();
}
Here is the syntax for a thread to notify one WAITING thread to wake up.
synchronized(lock) {
    lock.notify();
}
Here is the syntax for a thread to notify all WAITING threads to wake up. Keep in mind that only one thread at
a time can be in a block synchronized on the lock, so the awakened threads will proceed one at a time, in no
guaranteed order.
synchronized(lock) {
    lock.notifyAll();
}
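Putting these pieces together, here is a minimal runnable sketch (class and field names ours) of the set-state-then-notifyAll pattern described earlier. Note the while loop: waiters always re-check the state after waking, since wake-ups can be spurious:

```java
public class NotifyDemo {
    private final Object lock = new Object();
    private boolean ready = false; // the shared state the waiters check

    public void awaitReady() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {  // re-check the state every time we wake
                lock.wait();  // releases the lock while waiting
            }
        }
    }

    public void makeReady() {
        synchronized (lock) {
            ready = true;     // set the state first...
            lock.notifyAll(); // ...then wake every waiting thread
        }
    }

    public static void main(String[] args) throws InterruptedException {
        NotifyDemo demo = new NotifyDemo();
        Thread waiter = new Thread(() -> {
            try {
                demo.awaitReady();
                System.out.println("woke up; state is set");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        waiter.start();
        Thread.sleep(100); // let the waiter reach wait()
        demo.makeReady();
        waiter.join();
    }
}
```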
There are two different syntaxes for synchronizing. First, we synchronize an entire method:
public synchronized void someMethod() {
    // ... do stuff ...
}
Next, we synchronize on an object:
private final Object LOCK = new Object();

public void someMethod() {
    // ... do stuff ...
    synchronized(LOCK) {
        // ... do more stuff; this block is protected
    }
}
When you synchronize a method, you are synchronizing on the object instance containing that method.
TIP When you synchronize a static method, you are synchronizing on the Class object
of the class that declares the method.
Of course, the Object class is the ancestor of all other classes. These interthread communication methods are
defined in the Object class:
➤➤ wait(time) where time is a long
➤➤ wait(time,nanos) where time is a long and nanos is an int
➤➤ notify()
➤➤ notifyAll()
One drawback with synchronized is that when a RUNNABLE thread owns a synchronized lock, any other thread
that tries to grab the same lock will be blocked, that is, forced to wait, with no good way to cancel it. When we
look at some of the higher‐level concurrency components such as ReentrantLock, we will see ways to make
locks that can be canceled.
GARBAGE COLLECTION
We won’t go into garbage collection here, but suffice it to say that a garbage collector is
invoked in the background to analyze the heap. It applies an algorithm to mark and sweep
(remove) objects that are no longer referenced. It can then perform a compaction and reclaim
valuable memory space. Depending on the garbage collector, the algorithm will vary, and later
versions of Java will generally have faster and more predictable garbage collectors.
The stack is a last‐in‐first‐out data structure associated with a thread. When a thread executes a method, the
local variables of that method are stored in a stack frame that is pushed onto the stack. If the method calls deeper
and deeper methods, each deeper method pushes its own frame. When a method returns, its frame, and the
variables in it, pop off the stack.
When a method is invoked, any non‐primitive values created in the method are assigned on the heap, and the
stack will contain a pointer to that object on the heap. When the method exits, the reference will be popped off
the stack, but the instance it refers to will remain on the heap until it is garbage collected.
TIP Primitive local variables are not allocated on the heap; they live on the stack and
disappear when the method exits.
Metaspace is where things go if their lifetime must span the life of the JVM. For example, class objects and
static references are generally not garbage collected, so these are created in Metaspace.
add(data), which throws an IllegalStateException if the queue is full. Similarly, offer(data) will
insert the data and returns true if there is space available, or if the queue is full returns false. The put(data)
method will wait for space to become available. To get data from the queue, use the corresponding accessors:
remove(), which throws a NoSuchElementException if nothing is available; poll(long, TimeUnit),
which removes and returns the head element, waiting for up to the specified time if necessary; and take(),
which removes and returns the head, waiting indefinitely until data is available.
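A short sketch of those accessors against an ArrayBlockingQueue (capacity and values ours):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2); // capacity 2

        System.out.println(queue.offer("a")); // true: space was available
        System.out.println(queue.offer("b")); // true: queue is now full
        System.out.println(queue.offer("c")); // false: full, but no exception
        // queue.add("c") here would throw IllegalStateException instead

        System.out.println(queue.take());     // a: removes the head
        System.out.println(queue.poll(1, TimeUnit.SECONDS));        // b
        System.out.println(queue.poll(100, TimeUnit.MILLISECONDS)); // null: timed out
    }
}
```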
ConcurrentSkipListMap is another ConcurrentMap implementation that is backed by a Skip List data
structure. A skip list uses an interesting algorithm to drill down to the desired data. (See the “Further
References” section.)
CopyOnWriteArrayList is a concurrent List that works by ensuring that writers write to a fresh clone of the
original backing array, which is not returned to readers until the write operation is complete, after which time the
original is discarded. It implements the List interface, so you can add(data), get(), and so forth, just as you
would do with any List implementation.
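A small sketch (values ours) showing the snapshot behavior: an iterator created before a write keeps reading the old array, and the write never triggers a ConcurrentModificationException:

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static void main(String[] args) {
        List<String> list = new CopyOnWriteArrayList<>(List.of("a", "b"));

        // The iterator snapshots the backing array at creation time...
        Iterator<String> it = list.iterator();
        list.add("c"); // ...so this write goes to a fresh copy of the array

        while (it.hasNext()) {
            System.out.print(it.next()); // prints ab: the snapshot, without c
        }
        System.out.println();
        System.out.println(list); // [a, b, c]: new readers see the new array
    }
}
```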
You can see from the listing that there were eight threads used to process this request: worker‐1 through
worker‐7, and the main thread. The eight threads correspond to the number of cores on this machine.
We will first introduce the concepts and then see some code examples later in the chapter.
Condition has its own APIs that work in coordination with Lock. The most common ones are:
➤➤ void await(): Causes the current thread to wait until it is signaled or interrupted.
➤➤ void signal(): Wakes up one thread that is waiting on this Condition. The thread will not actually
awaken until the running thread releases the lock.
➤➤ void signalAll(): Wakes up all waiting threads. The threads will not actually awaken immediately;
rather, they must wait until the running thread releases the lock, and even then they will proceed only
one at a time, as each thread in turn reacquires and releases the lock.
You may recall that earlier we mentioned that if a thread tries to grab a synchronized lock that is being held by
another thread, there is no way to get the blocked thread to back off. Nothing about entering a synchronized
block can throw an InterruptedException, so there is no way to interrupt the wait!
ReentrantLock solves that by sporting the lockInterruptibly() method, which does throw an
InterruptedException. If you call interrupt() on a thread that is waiting on a
lockInterruptibly() call, it will be interrupted and back off.
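A minimal sketch (class name ours): the worker blocks on lockInterruptibly(), and interrupting it makes it back off instead of waiting forever:

```java
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // main holds the lock, so the worker must wait

        Thread worker = new Thread(() -> {
            try {
                lock.lockInterruptibly(); // this wait can be cancelled
                try {
                    System.out.println("worker got the lock");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("worker backed off: interrupted while waiting");
            }
        });

        worker.start();
        Thread.sleep(100);  // let the worker block on the lock
        worker.interrupt(); // tell it to stop waiting
        worker.join();      // prints "worker backed off: interrupted while waiting"
        lock.unlock();
    }
}
```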
ReentrantLock also solves another issue with synchronized blocks. If you have many threads waiting on the
same synchronized lock, let’s say you want to wake up one of them based on some condition (for example,
collection is empty) and a different one based on some other condition (for example, collection is full). Using
synchronized locks, you would need to call notifyAll() to wake up both threads and have each one check
for their condition.
ReentrantLock provides a better solution with its newCondition() method. Each call to newCondition()
on a lock creates a distinct Condition object, and threads holding the lock can await() on whichever
Condition applies to them. Then, when some detector observes a condition and wants to signal a thread to
handle it, it just has to call condition.signal(), and only a thread awaiting that particular condition will
wake up. We will see an example of this in the PingNetPong code later in this chapter.
Future has methods for checking if a computation was completed and for obtaining the result. It also has
methods for forcing the future to complete with or without any exception.
TIP The Future interface would be more useful if it had methods for getting notified when
a request is available or for performing follow‐on actions when a request is complete. While
Future does not offer these capabilities, CompletableFuture does, and we will see that
a bit later.
From what we have described, the Future seems like a useful construct, but how do we get one? One way is by
submitting a Runnable or Callable to an ExecutorService.
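A minimal sketch (task and names ours): submitting a Callable yields a Future for its eventual result:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            // A Callable returns a value, so submit() hands back a Future for it
            Callable<Integer> task = () -> 6 * 7;
            Future<Integer> future = executor.submit(task);

            System.out.println(future.get());    // 42: blocks until ready
            System.out.println(future.isDone()); // true
        } finally {
            executor.shutdown();
        }
    }
}
```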
all threads are busy, then subsequent calls to the ExecutorService will cause the caller to wait until a thread is
freed up or the wait times out.
TIP Virtual threads are the exception; they are very lightweight, as we will soon see, and so
do not require pooling.
There is no scheduled executor for virtual threads, but it is easy to hack one together by supplying a virtual
ThreadFactory (an optional parameter) to the newScheduledThreadPool() method:
ThreadFactory factory = Thread.ofVirtual().factory();
ScheduledExecutorService x = Executors.newScheduledThreadPool(2, factory);
AtomicInteger counter = new AtomicInteger(0);
x.scheduleAtFixedRate(() -> logger.info(counter.incrementAndGet()),
        0, 1, TimeUnit.SECONDS);
You generally want to be careful about putting scheduled executors in a try‐with‐resources block, because the
block will exit as soon as the job is kicked off, closing and terminating the executor.
Interface: Runnable
Class name: java.lang.Runnable
Method: void run()
Accepts no arguments and returns no result; Runnable generally provides the functionality supplied
to a Thread.
Interface: Supplier<T>
Class name: java.util.function.Supplier
Method: T get()
Accepts no arguments, performs some process, and produces a result. For example, this code defines a Supplier
that produces a random integer between 100 and 200.
@Test
public void testSupplierGet() {
    Random random = new Random();
    // Create a Supplier that produces a random integer between 100 and 200
    Supplier<Integer> random100To200 = () -> random.nextInt(100, 200);
    for (int i = 0; i < 5; i++) {
        Integer value = random100To200.get();
        System.out.println("Next random: " + value);
    }
}
When we ran the previous code, it printed the following (yours will differ since the generated values are random):
Next random: 118
Next random: 194
Next random: 153
Next random: 165
Next random: 182
It is important to note here that our Supplier took no input parameters and returned a value.
Interface: Consumer<T>
Class name: java.util.function.Consumer
Method: void accept(T t)
Accepts a single argument; returns no result. For example, this code defines a Consumer that accepts a mutable
List of Integer values and doubles each value in place:
@Test
public void testConsumerAccept() {
    // Define a Consumer that accepts a mutable List of
    // ints and replaces each value with twice its value
    Consumer<List<Integer>> twoTimes =
            list -> list.replaceAll(value -> value * 2);
personToFullName) and then supplies the result of that call to the andThen function (in our case the
toUpperCase function of the String class).
Similar to andThen is the compose method, which works like andThen only in reverse; with compose, the
function argument is applied first, followed by the base function. For example, we can rewrite the previous
andThen code using compose as follows:
@Test
public void testFunctionCompose() {
    record Person(String firstName, String lastName) { }
    // Function takes Person as input and produces a String representing
    // their full name
    Function<Person, String> personToFullName = (Person p) ->
            p.firstName + " " + p.lastName;
CompletionStage
CompletableFuture implements the CompletionStage interface, which contains the methods used for
supplying and chaining asynchronous computations.
The methods in CompletionStage include the following:
➤➤ thenAccept(Consumer)
➤➤ thenApply(Function)
➤➤ thenRun(Runnable)
➤➤ thenCompose(Function), where the Function returns a new CompletionStage
➤➤ thenCombine(CompletionStage, BiFunction)
Those are the synchronous forms, but each comes in asynchronous flavors as well, for example,
thenAcceptAsync() and thenApplyAsync().
Next, we want to tell the CompletableFuture that as soon as it has computed the result, it should then
accept (thenAccept()) a Consumer. Remember that a Consumer interface accepts one parameter, executes a
process, and returns no result. That is perfect for our case, because we just want to print something out, and not
return anything.
Now if we want to wait for thenAccept() to complete before proceeding, we can assign the result of
thenAccept() to a new CompletableFuture instance. In the first CompletableFuture shown
earlier, we called the get() method to wait for the result until it was ready. The thenAccept() stage wraps a
Consumer, so its CompletableFuture<Void> produces no result for get() to return. For such cases, you can
call completableFuture.join(), which waits until the consumer has completed processing without
expecting any result.
7: @Log4j2
8: public class CompletableFutureTests {
9: @Test
10: public void testCompletableFutureThenAccept() {
11: try {
12: // Create a CompletableFuture that completes with a
13: // result after a delay
14: CompletableFuture<String> future1 =
15: CompletableFuture.supplyAsync(() -> {
16: try {
17: log.info("supplyAsync called");
18: // Simulate a long-running computation
19: Thread.sleep(5_000);
20: log.info("supplyAsync complete");
21: return "Hello";
22: } catch(InterruptedException e) {
23: throw new RuntimeException(e);
24: }
25: });
26:
27: // Attach a thenAccept callback to the CompletableFuture
28: CompletableFuture<Void> future2 = future1.thenAccept(result‐>{
29: log.info("Computation complete. thenAccept sees result:" +
30: " " + result);
31: log.info("thenAccept performing another operation. " +
32: "Result: " + (result + ", World"));
33: try {
34: log.info("thenAccept sleeping");
35: Thread.sleep(5_000);
36: log.info("thenAccept waking");
37: } catch(InterruptedException e) {
38: throw new RuntimeException(e);
39: }
40: });
41:
42: // Perform other work while the CompletableFuture is running
43: log.info("Main thread continuing with other work...");
44:
45: // wait for the CompletableFuture to complete
46: String result1 = future1.get();
47: log.info("CompletableFuture #1 is complete. Result:"+result1);
48: future2.join();
49: log.info("CompletableFuture #2 is complete.");
50: } catch(InterruptedException e) {
51: Thread.currentThread().interrupt();
52: } catch(ExecutionException e) {
53: throw new RuntimeException(e);
54: }
55: }
56: }
Here is the output. Notice the timing of each call.
2024‐07‐23 17:35:31 [main]
INFO c.w.r.c.C ‐Main thread continuing with other work...
2024‐07‐23 17:35:31 [ForkJoinPool.commonPool‐worker‐1]
INFO c.w.r.c.C ‐supplyAsync called
2024‐07‐23 17:35:36 [ForkJoinPool.commonPool‐worker‐1]
INFO c.w.r.c.C ‐supplyAsync complete
2024‐07‐23 17:35:36 [main]
INFO c.w.r.c.C ‐CompletableFuture #1 is complete. Result:Hello
2024‐07‐23 17:35:36 [ForkJoinPool.commonPool‐worker‐1]
INFO c.w.r.c.C ‐Computation complete. thenAccept sees result: Hello
2024‐07‐23 17:35:36 [ForkJoinPool.commonPool‐worker‐1]
INFO c.w.r.c.C ‐thenAccept performing another operation.
Result: Hello, World
2024‐07‐23 17:35:36 [ForkJoinPool.commonPool‐worker‐1]
INFO c.w.r.c.C ‐thenAccept sleeping
2024‐07‐23 17:35:41 [ForkJoinPool.commonPool‐worker‐1]
INFO c.w.r.c.C ‐thenAccept waking
2024‐07‐23 17:35:41 [main]
INFO c.w.r.c.C ‐CompletableFuture #2 is complete.
Line 28 shows how you can provide a Consumer to be executed only after a threaded task is complete, by
saying thenAccept(Consumer). That second task will execute in the original thread. You could also say
thenAcceptAsync(Consumer), which has the same semantics, except that it runs the consumer in a different
thread. There can be many reasons for doing this. First, in a user‐facing application, we might want to execute the
Consumer in a separate thread to improve responsiveness. Or generally we might want to optimize performance
by having these run in separate threads.
The thenAccept(Consumer) method is not the only way to chain a follow‐on process. Alternatively, you can
call any of the following to launch a process after the main process has completed:
➤➤ thenAccept(Consumer): Launches a process to consume the result from the
CompletableFuture.supplyAsync() method.
➤➤ thenRun(Runnable): Launches a process that doesn’t accept any parameter or return any result.
➤➤ thenApply(Function): Launches a process that consumes the result of the original
CompletableFuture and uses that to produce a new result.
➤➤ thenCombine(CompletionStage, BiFunction): When the original CompletableFuture and the
supplied CompletionStage both complete, the two results are applied to the BiFunction to compute a
combined result.
Each of the async versions can accept an optional ExecutorService to provide you with more control over the
number or schedule of threads.
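For example, here is a minimal thenCombine() sketch (values and names ours):

```java
import java.util.concurrent.CompletableFuture;

public class CombineDemo {
    public static void main(String[] args) {
        CompletableFuture<Integer> price = CompletableFuture.supplyAsync(() -> 40);
        CompletableFuture<Integer> tax = CompletableFuture.supplyAsync(() -> 2);

        // thenCombine waits for both stages to complete, then applies the
        // BiFunction to the two results
        CompletableFuture<Integer> total = price.thenCombine(tax, Integer::sum);

        System.out.println(total.join()); // 42
    }
}
```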
Post Java 21, that has been reengineered per Figure 9.6.
When a virtual thread executes, behind the scenes it is assigned to a platform thread, called its carrier. You can
see in Figure 9.6, line 590, that when the thread sleeps, sleepNanos() is called, which tells the thread
scheduler to park it (that is, unmount it from its carrier) for the duration and then unpark it, that is, assign it to
an available platform thread in the JVM thread pool. Consequently, a single platform thread can execute many
virtual threads.
It would be interesting to write some code that creates some threads, has them wake and sleep a few times, and
capture their carrier threads each time, to see them change.
Let’s write some example code to do exactly that.
13: @Slf4j
14: public class VirtualThreadStatesTests{
15: private final Map<String, Set<String>> carrierMappings
16: = new ConcurrentHashMap<>();
17:
18: @Test
19: public void getCarriers(){
20: // Launch a number of threads, get them to spin a million
21: // times each, and collect the carriers in a map.
22: // Then sleep for a bit to see the carrier unmount.
23: // Do this a few times and capture the results.
The output of this program displays the ID of each virtual thread, followed by the list of each of the platform
threads that carried it. You can see that there were four platform threads in the process, and each virtual thread
was carried by three or four of them during its lifetime:
Summary
2024-07-23 18:14:59 [main]
INFO c.w.r.c.VTS -VirtualThread[#26]: Carriers:
2024-07-23 18:14:59 [main]
INFO c.w.r.c.VTS -ForkJoinPool-1-worker-1
2024-07-23 18:14:59 [main]
INFO c.w.r.c.VTS -ForkJoinPool-1-worker-4
2024-07-23 18:14:59 [main]
INFO c.w.r.c.VTS -ForkJoinPool-1-worker-2
There is one more important thing to know about virtual threads, and that is about pinned threads. A pinned
thread is a virtual thread that is pinned to its carrier. Why would this ever happen? Let’s consider an architectural
issue that the designers of Java had to consider. When a thread enters a block that is synchronized on some lock,
then no other thread can enter a block that is synchronized on the same lock. However, the same thread can! So
now let’s say virtual thread #V has carrier thread #P, and let’s say #V grabs a synchronized lock and goes to sleep.
Now #P goes to another virtual thread (or is used as a platform thread by the application), and it tries to grab the
same lock. We expect that to block, since it is a different thread. But the synchronized construct sees the same
carrier thread that grabbed the original lock and so allows it in! To prevent this, the designers of Java made
virtual threads pin: whenever they enter a synchronized block, they do not unmount from their carrier.
To see this, go back to the getCarriers() method and uncomment the synchronized block in line 39, and run
the program again. This time we see that there is exactly one platform thread per virtual thread. When a virtual
thread grabs a synchronized lock, it never unmounts as long as it retains that lock.
Summary
2024-07-23 18:20:41 [main]
INFO c.w.r.c.VTS -VirtualThread[#26]: Carriers:
2024-07-23 18:20:41 [main]
INFO c.w.r.c.VTS -ForkJoinPool-1-worker-7
Interthread Communication
A common problem in concurrency is interthread communication, where each thread in a group must do some
job and then signal the next thread to perform a follow‐on action.
Java provides native support for interthread communication. Many threads can synchronize on the same lock and
then wait.
A simple example will help drive home the concept. You are programming a ping‐pong volley. You have three
threads, one just says “Ping,” the second says “Over the net,” and the third just says “Pong.” You want them to
alternate so that the first thread prints Ping and then signals Over the net to print, which then signals Pong,
and back to Over the net and then back to Mr. Ping thread, and so on, for a specified number of volleys. Let’s
prefix a volley number to the printout, so 1 Ping, 1 Over the net, 1 Pong, 1 Over the net, 2 Ping,
2 Over the net, 2 Pong, 2 Over the net, and so forth, continuing as such for N iterations.
The code is as follows:
10: public class PingNetPong{
11: public static void main(String[] args){
12: new PingNetPong().playPingPong(3);
13: }
14:
15: public void playPingPong(int volleys){
16: ReentrantLock lock = new ReentrantLock();
17: Condition ping = lock.newCondition();
18: Condition overTheNet = lock.newCondition();
19: Condition pong = lock.newCondition();
20: boolean[] pingOrPong = {true};
21: try(ExecutorService executor
22: = Executors.newVirtualThreadPerTaskExecutor()){
23: Phaser phaser = new Phaser(3);
24:
25: executor.submit(() -> {
26: lock.lock();
27: IntStream.rangeClosed(1, volleys).forEach((i) -> {
28: try{
29: phaser.arrive();
30: ping.await();
31: System.out.println(i + " Ping");
32: overTheNet.signal();
33: } catch(InterruptedException e){
34: e.printStackTrace();
35: }
36: });
37: lock.unlock();
38: });
39:
40: executor.submit(() -> {
41: lock.lock();
42: IntStream.rangeClosed(1, volleys * 2)
43: .forEach((i) -> {
44: try{
45: phaser.arrive();
46: overTheNet.await();
47: System.out.println((i + 1) / 2 + " Over the net");
48: pingOrPong[0] = !pingOrPong[0];
49: if(pingOrPong[0]){
50: ping.signal();
51: }else{
52: pong.signal();
53: }
54: } catch(InterruptedException e){
55: e.printStackTrace();
56: }
57: });
58: lock.unlock();
59: });
60:
61: executor.submit(() -> {
62: lock.lock();
63: IntStream.rangeClosed(1, volleys)
64: .forEach((i) -> {
65: try{
66: phaser.arrive();
67: pong.await();
68: System.out.println(i + " Pong");
69: overTheNet.signal();
70: } catch(InterruptedException e){
71: e.printStackTrace();
72: }
73: });
74: lock.unlock();
75: });
76:
77: phaser.awaitAdvance(0);
78: lock.lock();
79: ping.signal();
80: lock.unlock();
81: }
82: }
83: }
We start by creating a ReentrantLock in line 16 to manage our synchronization. Since we want to benefit from
virtual threads, we want to avoid the synchronized keyword, so we use ReentrantLock instead.
Lines 17, 18, and 19 create new Condition objects for each of the communication states we want to signal.
Lines 21–22 use try‐with‐resources to create a newVirtualThreadPerTaskExecutor service, which is the
ExecutorService used for creating and executing virtual threads. Unlike the fixed ThreadPoolExecutor,
the virtual thread executor does not pool, since we don’t want to pool virtual threads.
Line 23 creates a Phaser with three permits, and in lines 29, 45, and 66, each of our three threads will signal its
arrival when it is started. The phaser.awaitAdvance() in line 77 awaits all three threads to signal that they
have started so that it can kick off the process by calling ping.signal() in line 79. Before it can call signal
though, it must lock in line 78.
When all three signals arrive, the process begins.
Lines 25–37 configure the Ping runnable, 40–59 configure the “Over the net” thread, and 61–75 configure the
Pong runnable. Let’s drill down into those. The first thing we do is grab a lock. Each thread will hold the lock
while it initializes itself, and when it is ready, it calls await on its Condition instance, allowing the next thread
to configure itself.
Then each thread in turn wakes up, signals the next thread, and awaits again. The only other nuance is the
boolean[] in line 20. The “Over the net” thread must alternate which thread it signals next, either Ping or
Pong. Local variables captured by a lambda must be effectively final, so we cannot reassign a plain boolean
inside our lambdas; instead, we carry the boolean value in a one‐element array whose reference is (effectively)
final, and mutate its contents.
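The same trick in isolation (names ours): the array reference is effectively final, but its contents are free to change:

```java
import java.util.function.IntSupplier;

public class EffectivelyFinalDemo {
    public static void main(String[] args) {
        // A plain "int count = 0;" incremented inside the lambda would not
        // compile: captured local variables must be final or effectively final
        int[] count = {0}; // the array reference itself never changes...
        IntSupplier next = () -> ++count[0]; // ...but its contents may

        next.getAsInt();
        next.getAsInt();
        System.out.println(count[0]); // 2
    }
}
```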
TIP We recently encountered a bug in our code, where a class that was annotated as
@Component was not respecting the annotations we describe in the following sections. The
reason turned out to be that the program was calling the constructor of that class, rather
than having it autowired. Rule of thumb: if Spring is not managing the class, it will not
recognize Spring annotations!
Using @Async/@EnableAsync
Spring provides the @Async annotation, which can be applied above the signature of any method so that when
that method is invoked, it will return immediately and process its functionality in a separate thread. To use the
@Async annotation, you must tell Spring to enable async processing. You can do this by annotating any Spring
managed class (usually the Spring Boot Application class, or perhaps some @Configuration class, or the class
containing the @Async) with the @EnableAsync annotation.
Within any Spring‐managed class you can annotate methods with @Async to have them execute asynchronously.
Such methods can return void to have them “fire‐and‐forget,” in which case they will return immediately while the
job executes in the background. Or they can return a Future or CompletableFuture, and these can wrap a
serializable value. For Spring MVC endpoints, you can return void to fire and forget, or return a
CompletableFuture<ResponseEntity>, to have the caller wait for the result of the call to be computed and returned.
Note that if the @Async method returns anything besides a void or a Future, the method may not execute asyn-
chronously and will result in unexpected behavior.
In our AsyncDemo class, we annotate the following methods as @Async: void performHeads(String
message) and void performTails(String message). These will display message + " Heads" and
message+" Tails", respectively, three times each. We will have all these threads running concurrently, so the
message helps us identify which thread is which.
First let’s do it the wrong way. Can you see what is wrong with this code in the AsyncDemo class?
    taskExecutor.execute(() -> performHeads("using TaskExecutor"));
    taskExecutor.execute(() -> performTails("using TaskExecutor"));
}
You can see from the alternating output and the two thread names in the logs that these two methods are being
called in parallel:
2024‐07‐21 12:44:05 [task‐1]
INFO c.w.r.concurrency.AD ‐using TaskExecutor Heads
2024‐07‐21 12:44:05 [task‐2]
INFO c.w.r.concurrency.AD ‐using TaskExecutor Tails
2024‐07‐21 12:44:06 [task‐1]
INFO c.w.r.concurrency.AD ‐using TaskExecutor Heads
2024‐07‐21 12:44:06 [task‐2]
INFO c.w.r.concurrency.AD ‐using TaskExecutor Tails
2024‐07‐21 12:44:07 [task‐1]
INFO c.w.r.concurrency.AD ‐using TaskExecutor Heads
2024‐07‐21 12:44:08 [task‐2]
INFO c.w.r.concurrency.AD ‐using TaskExecutor Tails
Understanding @Transactional
While @Transactional is not specifically a concurrency annotation, it is crucial in our microservices‐driven
world for managing transactions and ensuring data consistency, and we should at least mention it. In the
programming world, when we say that a multipart call is a transaction, we mean that those parts either
all complete or all roll back.
For example, let’s say you have an order‐entry application that adds an item to the cart, depletes from inventory,
and charges a credit card. Now what happens if each of these steps is invoked in parallel? What if the card is
declined or the cart exceeds our limit or the inventory is not available, and the other methods were all invoked?
The entire transaction must be rolled back. The @Transactional annotation accepts a rollbackFor
argument that specifies the exception types that should trigger a rollback of the transaction.
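As a hedged sketch (the service class, helper methods, and exception type here are hypothetical, not from the book’s code), the annotation might look like this. Note that by default Spring rolls back only on unchecked exceptions, so rollbackFor is needed to roll back on a checked one:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService { // hypothetical names, for illustration only

    // rollbackFor names the exception types that should trigger a rollback
    @Transactional(rollbackFor = PaymentDeclinedException.class)
    public void placeOrder(Order order) throws PaymentDeclinedException {
        addToCart(order);        // these steps either all commit together...
        depleteInventory(order); // ...or all roll back if any step throws
        chargeCard(order);       // may throw PaymentDeclinedException
    }

    private void addToCart(Order order) { /* ... */ }
    private void depleteInventory(Order order) { /* ... */ }
    private void chargeCard(Order order) throws PaymentDeclinedException { /* ... */ }
}
```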
FURTHER REFERENCES
➤➤ Java Concurrency in Practice (Addison‐Wesley, 2006)
➤➤ Concurrent Programming in Java (Addison‐Wesley, 1999)
➤➤ Java Concurrent Animated, a Swing application that demonstrates most of the concurrency utilities
built into Java
https://github.com/vgrazi/JavaConcurrentAnimatedReboot.git
➤➤ Explanation of Skip List algorithm
https://en.wikipedia.org/wiki/Skip_list
SUMMARY
In this chapter, you learned that Java was one of the first languages to introduce concurrency as a native feature,
and you learned how it has evolved to manage the complexity of multithreaded programming. You learned
about fundamental concepts like atomicity and synchronization, thread states, and interthread communication.
You saw the concurrent collections and data structures used for managing concurrency. You learned how
CompletableFuture helps you build reactive programs and how virtual threads allow your multithreaded
applications to scale. Finally, you learned about the concurrency support available in the Spring framework.
10
Pattern Matching with Regular
Expressions
WHAT’S IN THIS CHAPTER?
Regular expressions are a powerful tool for searching text strings. They are also known as regex for short
and processed by a regular expression engine.
You already probably have some experience with pattern matching. For example, when you are searching
for a file, you might type dir R*.pdf or ls R*.pdf to get a listing of any file starting with R of type
pdf. Regex is a more powerful matching system, with a completely different syntax.
Imagine you have a sequence of URIs, for example:
http://spring.io/projects/spring-cloud-contract
https://www.infoq.com/articles/Living-Matrix-Bytecode-Manipulation/
file:///C:/Documents/happy_birthday_sis.pdf
And imagine that for each URI you want to parse out the protocol (http, https, file), the website or
directory, and the specific path.
You could do it the hard way, using string.indexOf() to search for a non-empty string followed by
a colon and a sequence of two or more slashes, followed by a bunch of characters until a slash, followed by
everything else, and then using string.substring() to parse out the relevant text. Regular expressions make
this far easier, and you’ll see this example again later in the chapter.
Regular expression syntax has a reputation for being terse and hard to read, but in fact it is a powerful tool that
you want in your arsenal. In this chapter you will learn how to build regular expressions and how to work with
them in Java to handle most of the parsing requirements you will encounter in enterprise development. We will
also review some tools and frameworks where you are likely to benefit from regular expressions.
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. The code can also be found at https://github.com/realworldjava/
Ch10-Regex. See the README.md file in that repository for details.
Figure 10.1 shows the before and after of this regular expression replacement.
Wait. What’s this \d+: magic? This is the first regular expression we will see in the chapter. \d means “match any
digit.” + means match one or more occurrences of \d. Therefore, \d+ means “match one or more digits.” The colon
is literally the colon (:) character. You are telling IntelliJ to remove any sequence of one or more digits followed
immediately by a colon. The fun begins! You’ll learn how to write many more regular expressions in this chapter.
Search (and replace) is a common use of regular expressions. Validating data and parsing data are also
common use cases. What do all of these have in common? They involve processing text data and matching
against a general pattern.
If you haven’t used regular expressions before, the syntax takes a little getting used to. But once you master it,
you will be able to perform searches and string processing tasks very efficiently.
Regular expressions are similar across the different programming languages and environments, but they may not
always be exactly the same. On the Regex101 site, you can select the Java flavor in the left navigation to get the
exact syntax and options for Java. Figure 10.2 shows the \d+: example on this website.
Let’s review the layout of Regex101. In the top text field, you enter the regular expression pattern. You will
notice a little gm to the right of the regular expression. The g (global) and m (multiline) flags, which you can
toggle off, tell the regex parser to treat each line in your “test string” as an independent input string rather than
one long string. We’ll cover these and other flags later in the chapter. Below the regular expression is where you
enter one or more lines that you’d like to test for matches against the regular expression pattern.
At the top right, the website explains your regular expression. It tells you the mode is greedy, which we will also
explain later in the chapter. It also reminds you that regular expressions are case sensitive (unless you specify
otherwise).
Below the explanation area are the match results, and the corresponding matches are highlighted in the Test
String area. In our example, the first two lines match our pattern because they are digits followed by a colon. The
third line is missing a colon, and the fourth line is missing a number, so those two lines don’t match.
In the following sections, you’ll learn how to write basic regular expressions, and we will cover more advanced
cases as the chapter progresses.
TIP The Javadoc for the Pattern class is an excellent reference when you need to look up
regular expression syntax.
TABLE 10.1: Quantifiers
R?      Zero or one occurrence of the letter R. The pattern BR?AKE matches BRAKE or BAKE.
L*      Zero or more occurrences of the letter L. The pattern JOL*Y matches both JOLLY and JOY.
L+      One or more occurrences of the letter L. The pattern MARVEL+ED matches MARVELED or MARVELLED.
!{3,5}  Three, four, or five occurrences of !. The pattern TERRIFIC!{2,4} matches TERRIFIC!!, TERRIFIC!!!, and TERRIFIC!!!!.
When you are composing a regular expression, you might find there are multiple ways to express the same thing,
so strive for clarity. All of the following are regex patterns for finding one or more occurrences of x. Think about
which one you find clearest.
x+
x{1,}
xx?x*
x{1}x*
x{1,1}x*
xx{0,}
We hope you said x+ is the clearest! What about this set?
xx+
x{2,}
xxx*
x{1}x+
x{2,}x*
xxx{0,}
All of these match two or more occurrences of the letter x. We think you’ll agree xx+ (x followed by one or more x)
is the clearest.
Do take a moment to understand why each one in the set is equivalent. That exercise will help you remember the
syntax for all the quantifiers. While you don’t want to use them all in the same regular expression, they are all
useful to know.
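A small sketch confirming a few of these equivalences with matches():

```java
public class QuantifierEquivalence {
    public static void main(String[] args) {
        String text = "xxx";
        // All of these mean "one or more x" and match the same input.
        System.out.println(text.matches("x+"));     // true
        System.out.println(text.matches("x{1,}"));  // true
        System.out.println(text.matches("xx{0,}")); // true
        // And "two or more x":
        System.out.println(text.matches("xx+"));    // true
        System.out.println("x".matches("xx+"));     // false
    }
}
```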
Table 10.2 lists the four main boundary matchers. First are ^ and $, which match the beginning and end of
the line, respectively. These are also known as anchors, as they mark the start and end of a line.
TABLE 10.2: Boundaries
^    Beginning of a line
$    End of a line
Next is \b, which matches a word boundary. A word boundary is the position at the start or end of a word. By
word we mean a sequence of one or more alphanumeric characters plus the underscore. This is useful for finding
matches at the start and/or end of a word.
Finally, \B matches any position that is not a word boundary. In regular expressions, an uppercase letter is used to
negate the meaning of a special escape sequence. Every position matches either \b or \B.
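A short sketch of \b and \B in action; the input words are made up for illustration:

```java
public class WordBoundaries {
    public static void main(String[] args) {
        String text = "cat concatenate category";
        // \bcat\b matches cat only as a whole word.
        System.out.println(text.replaceAll("\\bcat\\b", "dog"));
        // prints: dog concatenate category
        // \Bcat\B matches cat only in the middle of a word.
        System.out.println(text.replaceAll("\\Bcat\\B", "dog"));
        // prints: cat condogenate category
    }
}
```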
[0123456789]
[0-9]
These both match any character from 0–9, which happen to be all the digits. You can negate a character class
by placing a caret (^) as the first character inside the brackets. For example, these both match anything that is
not a digit:
[^0123456789]
[^0-9]
These use the ^ to negate a list of characters and a range of characters, respectively.
To negate a built-in character class, capitalize it. So, a third way to say “any character except a digit” would be
like this:
\D
Java predefines some character classes that make it easier to write a regular expression. You already saw
\d and \D. Remember that the uppercase version is the inverse of the lowercase version; it matches every-
thing the lowercase one does not. Table 10.3 shows the most useful predefined character classes and what
they match.
TIP To remember the character classes, \d is for digit, \w is for word, and \s is for space.
\d    Digit: matches digits, but not letters, whitespace, or punctuation
\s    Whitespace: matches whitespace, but not letters, digits, or punctuation
The dot (.) is a special character class that matches any character. You will often see code to
match zero or more of any characters expressed as follows:
.*
or code to match one or more of any characters:
.+
Note that the characters matched do not need to be the same. For example, .+ will match any
of the following:
123
abcde
ZZZZ
OK.
Choosing Options
A character class works if you want to pick a sequence of zero or more characters from a predefined set of char-
acters. But what if you want to match either one sequence of characters or another sequence of characters? You
can use the | (pipe) character to provide choices. For example, you can decide what you want for lunch with this:
tuna|turkey
As you might imagine, this will match tuna or turkey. What happens if you want to add the word sandwich
after? This works:
tuna sandwich|turkey sandwich
and so does this:
(tuna|turkey) sandwich
The latter version uses parentheses to specify the alternatives. Then it unconditionally matches sandwich after.
This approach can make the regular expression clearer because it avoids the repetition of sandwich and shows
that only the type of sandwich (in the parentheses) varies.
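A quick sketch of the grouped alternation in action:

```java
public class Alternation {
    public static void main(String[] args) {
        // Only the sandwich type in the parentheses varies.
        String lunch = "(tuna|turkey) sandwich";
        System.out.println("tuna sandwich".matches(lunch));   // true
        System.out.println("turkey sandwich".matches(lunch)); // true
        System.out.println("ham sandwich".matches(lunch));    // false
    }
}
```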
Escaping Characters
You should be familiar with the concept of escaping characters from working with strings. For example, if you
want to include a double quote in a string, you must escape it with a backslash such as the following:
String escapeQuote = "\"Java is Great\"";
Similarly, you use another backslash to escape a backslash:
String escapeBackslash = "\\";
In regular expressions, you use a backslash to indicate that the character following it should be matched as
a literal, rather than apply a regular expression meaning. For example, suppose you want to write a regular
expression that matches a pattern of two single digit numbers being added together. One way to write it is
like this:
String addition = "\\d \\+ \\d";
That may seem like a lot of backslashes, so let’s take a moment to unpack this. The regular expression syntax for
a digit is \d. However, a backslash already has a meaning in Java, so you need to add an extra backslash, making
it \\d. Next comes a single space. That’s straightforward; it matches a space.
After that, we want to match the literal plus sign (+). You can’t just write +, though, because as we saw, that rep-
resents a quantifier, meaning one or more of the previous character. To match a literal plus sign, you have to add
a backslash to escape the +. But guess what? To include a backslash in a Java string, you need to backslash it, so
the backslash needs to be escaped, giving you \\+. To finish up our regular expression, we need another space and
then another \\d combo.
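A short sketch verifying the escaped addition pattern, along with what happens if you forget to escape the +:

```java
public class EscapedAddition {
    public static void main(String[] args) {
        String addition = "\\d \\+ \\d";
        System.out.println("3 + 4".matches(addition)); // true
        System.out.println("3 - 4".matches(addition)); // false
        // Without the escape, + quantifies the preceding space, so the
        // pattern means "digit, two or more spaces, digit" and fails here.
        System.out.println("3 + 4".matches("\\d + \\d")); // false
    }
}
```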
If you want to avoid the backslashes, you can put the character in a character class. For example, both of the
following match a single literal period (.):
String escapePeriod1 = "[.]";
String escapePeriod2 = "\\.";
NOTE While "\\" gives you a backslash when defining a string, it will not match a
backslash in a regular expression. That’s because the regular expression for a literal
backslash is \\, and each of those two backslashes must itself be escaped in a Java string.
As a result, if you want to match a literal backslash character in Java, you need four
backslashes ("\\\\"). Luckily, this doesn’t come up often!
Finding Matches
The matches() method takes a regular expression as a parameter and returns whether the regular expression is
a complete match for the String instance. We intentionally used the phrase complete match. If there are extra
characters in the text, even though the regex might match some portion of the text, nonetheless matches() will
return false. Let’s look at a few examples:
System.out.println("java book".matches("[aeiou]")); // false
The previous regular expression tries to match a single vowel. While there are vowels in the text, there are also
other characters, so it is not a complete match. To fix this, you need to specify that any characters can be before
or after the vowel.
String text = "java book";
System.out.println(text.matches(".*[aeiou].*")); // true
To make it clearer that the whole regular expression needs to match, you can optionally include the beginning (^)
and end ($) anchors.
String text = "java book";
System.out.println(text.matches("^.*[aeiou].*$")); // true
For one more example, do you understand why the following matches?
String text = "java book";
System.out.println(text.matches("\\w+\\s\\w+")); // true
This pattern has three segments. \\w+ matches “one or more” word characters. \\s matches a single whitespace
character. Finally, the \\w+ matches “one or more” word characters again. Many people find this hard to read.
Luckily, it can be written in a clearer way.
String text = "java book";
String word = "\\w+";
String space = "\\s";
String regex = word + space + word;
System.out.println(text.matches(regex)); // true
Notice how the regular expression is defined in plain English. Clearly, the regular expression matches two words
separated by a space. The word and space variables are also easier to understand because they have a clear
name. For example, it should make sense that one or more word characters make up a word. Suppose you made
a typo and wrote the pattern for “zero or more word characters” instead of “one or more”:
String word = "\\w*"; // INCORRECT
By expressing the regex using named variables as we have done, then your intent is clear, and when doing a code
review, a teammate is more likely to ask you why your word can have zero characters in it. The shortest words we
know have one character, not zero, after all.
Replacing Values
There are two methods for replacing parts of a String that match a regular expression. The replaceAll()
method replaces each substring that matches a supplied regular expression with a supplied value. The
replaceFirst() method replaces only the first substring to match, as you might have gathered
from the name.
String text = "java book";
System.out.println(text.replaceAll("[aeiou]", "_")); // j_v_ b__k
System.out.println(text.replaceFirst("[aeiou]", "_")); // j_va book
Notice how replaceAll() replaced all the vowels with underscores, whereas replaceFirst() replaced only
the first vowel. Now let’s try matching more than one character at a time.
String text = "java book";
System.out.println(text.replaceAll("\\w+", "[word]")); // [word] [word]
\\w+ looks for “one or more” word characters. The replacement is the six-character string [word].
Finally, you can use matched portions from the original text in your replacement using the $ symbol, as follows:
String text = "java book";
System.out.println(text.replaceAll( // book java
"(\\w+)\\s(\\w+)", "$2 $1"));
The parentheses in the pattern create capture groups, which you can then reference in your replacement pat-
tern as $1, $2, etc. In this case, there are two sets of parentheses, so there are two reference groups available. By
reversing them in the replacement pattern, the code reverses their order in the replacement string, resulting in the
output book java.
Be sure not to mix up the regular replace() method with the regular expression ones.
String text = "java book";
// does not use a regular expression
System.out.println( // java book
text.replace("\\w+", "[word]"));
Nothing was replaced because the replace() does not recognize regex patterns. Rather it
matches a literal string. Since the \\w+ text does not appear in the text java book, nothing
is replaced.
Splitting
The final regular expression operation you can do on the String class is splitting into a String[]. The
most common way to use split() is to have a regular expression that represents a single separator character
like a space.
String text = "java book";
String[] words = text.split(" ");
System.out.println(words.length); // 2
System.out.println(Arrays.asList(words)); // [java, book]
The separator character, a space in this case, is called a delimiter. Of course, you can use a more involved regular
expression like this:
String text = "123 -456 -789";
String[] words = text.split("[- ]+");
System.out.println(words.length); // 3
System.out.println(Arrays.asList(words)); // [123, 456, 789]
This pattern matches separators consisting of one or more adjacent dashes and spaces. When using the split()
method, take care that your delimiter doesn’t match a zero character string, or you will get a sequence of single
characters, as in this example:
String text = "123 -456 -789";
String[] words = text.split("[- ]*"); // matches zero or more characters
System.out.println(words.length); // 11
System.out.println(Arrays.asList(words)); // [1, 2, 3, , 4, 5, 6, , 7, 8, 9]
“What happened there?” you ask. The regular expression matches the position between each character, so they all
get separated out into the array, even the spaces. Unless that is your intent, be careful that the regular expression
doesn’t match an empty string!
You might be surprised what happens if the regular expression matches the beginning or end of your text. In this
example, it matches both:
String text = "-123 -456 -789 -";
String[] words = text.split("[- ]+");
System.out.println(words.length); // 4
System.out.println(Arrays.asList(words)); // [, 123, 456, 789]
Notice how the first entry in the array is an empty string, but there is no empty string at the end. This is
because Java’s split() removes trailing empty strings by default, but not leading ones.
TIP You might be thinking that split() is an easy way to parse a comma-separated value
(CSV) file. This might produce undesired results because CSV files can use quotes to
surround fields containing commas, to indicate that those commas are not field separators.
The Apache Commons CSV and OpenCSV libraries are easy to use and handle this case.
See “Further References” for links to these libraries.
While split() is most commonly called with just the one regular expression parameter, there is a variant
that accepts two parameters. The second is a number called limit, which customizes the behavior of
split(). While this two-parameter version is less commonly used, it is good to be aware of it. The following
example shows a limit of 0, which is the default behavior:
String text = "-123 -456 -789 -";
String[] limit0 = text.split("[- ]+", 0);
System.out.println(limit0.length); // 4
System.out.println(Arrays.asList(limit0)); // [, 123, 456, 789]
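As a sketch of the other documented limit behaviors of String.split(): a negative limit keeps trailing empty strings, while a positive limit caps the number of resulting elements.

```java
import java.util.Arrays;

public class SplitLimits {
    public static void main(String[] args) {
        String text = "-123 -456 -789 -";
        // A negative limit keeps trailing empty strings.
        String[] limitNegative = text.split("[- ]+", -1);
        System.out.println(Arrays.asList(limitNegative)); // [, 123, 456, 789, ]
        // A positive limit caps the number of elements; the last
        // element receives all remaining unsplit text.
        String[] limit2 = text.split("[- ]+", 2);
        System.out.println(Arrays.asList(limit2)); // [, 123 -456 -789 -]
    }
}
```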
Finding Matches
The matches() method on the String class tells you whether the regular expression matches the entire target.
Oftentimes, however, you will want to match not the whole target, but rather you want to scan text to find all
the sections that match a particular regular expression. You can do that using the matcher.find() method,
as follows:
11: String text = "java book";
12: Pattern pattern = Pattern.compile("\\w+");
13: Matcher matcher = pattern.matcher(text);
14: while (matcher.find()) {
15: System.out.println(matcher.group());
16: }
This code outputs the following:
java
book
Let’s explore this code in more detail. Line 12 creates a Pattern instance that represents the compiled regular
expression. If you are going to use this regular expression in other methods or classes, save it as an instance
variable or static variable for future use.
Line 13 uses that Pattern to create a Matcher for the text you want to search or parse. Line 14 loops through
each match for the regular expressions. You must call matcher.find() before calling group() to access the
matched text. Line 15 uses the group() method to print out those matches, one at a time.
The group() method without any parameters returns the entire match. You can also supply an optional
int parameter to access the specific groups from the match:
String text = "Real World Java";
String twoAdjacentWordChars = "(\\w)(\\w)";
Pattern pattern = Pattern.compile(twoAdjacentWordChars);
Matcher matcher = pattern.matcher(text);
while (matcher.find()) {
String chars = "%s %s".formatted(matcher.group(1), matcher.group(2));
System.out.println(chars);
}
This time our code matches the input text two characters at a time and outputs the following:
Re
al
Wo
rl
Ja
va
Note how the d in World was omitted, since it is not part of a pair of word characters (the character following
the d is a space, not a word character). The group indexes are numbered according to the parentheses in the
regular expression. Wait, why are we starting the count from one? Isn’t this Java, where indexes are always
zero-based? The answer is that group(0) returns the entire match, so the counting of specific groups
starts from one.
Rather than referring to groups by number, you can assign names to the groups and refer to them by name. This
is especially useful if you have a large number of groups. To name a group, add a group-name specifier as the first
entry in the group. A group-name specifier consists of a question mark followed by the name in angled brackets,
and then to grab the matching text associated with that group, use matcher.group(name), as in this example:
String text = "Real World Java";
Pattern pattern = Pattern.compile("(?<first>\\w)(?<second>\\w)");
Matcher matcher = pattern.matcher(text);
while (matcher.find()) {
    System.out.println(matcher.group("first") + " " + matcher.group("second"));
}
TIP To find the start and end indexes of the latest match within the target string, call
matcher.start() and matcher.end().
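A short sketch of start() and end() on the earlier word-matching example; note that start() is inclusive and end() is exclusive:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StartEnd {
    public static void main(String[] args) {
        Pattern pattern = Pattern.compile("\\w+");
        Matcher matcher = pattern.matcher("java book");
        while (matcher.find()) {
            // start() is the inclusive index, end() the exclusive one.
            System.out.println(matcher.group() + " [" + matcher.start()
                    + ", " + matcher.end() + ")");
        }
        // prints:
        // java [0, 4)
        // book [5, 9)
    }
}
```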
Replacing Values
The String methods replaceFirst() and replaceAll() are great when you want to replace the first
match or all matches. But what if you want more specific replacement logic? For example, say you want to
replace every other match. Luckily, you can do this with a Pattern and Matcher as well. This fun example
makes every other word (of one or more letters) uppercase:
20: String text = "-->The---quick---brown---fox---jumped!!";
21: StringBuffer buffer = new StringBuffer();
22: Pattern pattern = Pattern.compile("\\w+");
23: Matcher matcher = pattern.matcher(text);
24: boolean flip = false;
25: for(int i = 0; matcher.find(); i++) {
26: String group = matcher.group();
27: flip = !flip;
28: if(flip) {
29: group = group.toUpperCase();
30: }
31: matcher.appendReplacement(buffer, group);
32: System.out.println(i + "." + buffer);
33: }
34: matcher.appendTail(buffer);
35: System.out.println(buffer);
The output is as follows:
0.-->THE
1.-->THE---quick
2.-->THE---quick---BROWN
3.-->THE---quick---BROWN---fox
4.-->THE---quick---BROWN---fox---JUMPED
-->THE---quick---BROWN---fox---JUMPED!!
Lines 20 to 30 should mostly look familiar; we are simply looping through the matches of the regular expression.
The StringBuffer in line 21 works like StringBuilder, except that it is thread-safe. While you don’t need
thread safety here, StringBuilder didn’t exist when regular expressions were added to Java, so
StringBuffer was used in the API.
The appendReplacement() and appendTail() methods are new; they append the replacement text to the
StringBuffer. The appendReplacement() appends any characters after the previous match and before
the current match, followed by the replacement itself (in this case the uppercase version of the match). The
appendTail() method appends any remaining characters from after the last match until the end of the input
string. To make sure you understand this, let’s go through the flow of our sample:
1. On the first iteration of the loop, group contains The.
2. appendReplacement() starts by appending the text from before the match (in this case -->) to
buffer and then appends the replacement text (THE) to buffer.
3. On the second iteration of the loop, group contains quick.
4. The appendReplacement() method appends the text from before the match (---) to buffer and
then appends the replacement text (quick) to buffer.
5. It continues to find and capitalize every other word until, finally, appendTail() appends the
remaining text after the match (!!) to buffer.
Splitting as a Stream
Pattern provides one useful splitting method that is not available directly on String. With a Pattern, you
can create a Stream<String> instead of a String[]. For example:
String text = "-123 -456 -789 -";
Pattern pattern = Pattern.compile("[- ]+");
Stream<String> stream = pattern.splitAsStream(text);
stream.forEach(a -> System.out.println("*" + a + "*"));
This code outputs the following:
**
*123*
*456*
*789*
This produces the same four results as String.split(), except that it returns them as a Stream of
Strings rather than an array.
TABLE 10.4: Flags
The following example uses the first two flags at the same time:
String text = """
        Real-World Java: Helping You Navigate the Java Ecosystem
        Victor Grazi
        Jeanne Boyarsky""";
Pattern pattern = Pattern.compile("real.*sky",
        Pattern.CASE_INSENSITIVE | Pattern.DOTALL);
Matcher matcher = pattern.matcher(text);
if (matcher.find()) {
System.out.println("I got this book!");
}
else {
System.out.println("I should have gotten this book!");
}
The pattern will match any string containing real followed by any characters (including line feeds), followed by
sky. Since there is a match, the output is as follows:
I got this book!
In the previous code, the bitwise operator (|) combines the two flags. The CASE_INSENSITIVE flag matches
the lowercase r in the regular expression to the capital R in the input. The DOTALL treats the newlines in the
strings as any other character, allowing .* to match them. If either of these flags were removed, there would
be no match.
Rather than passing these flags to the compile method as parameters, you can embed them directly into your
regex pattern. This is especially useful when you are supplying the regex to the String.matches() method,
which does not have a version that accepts separate flag parameters.
The previous example could be written with embedded flags as follows:
String text = """
        Real-World Java: Helping You Navigate the Java Ecosystem
        Victor Grazi
        Jeanne Boyarsky""";
if (text.matches("(?i)(?s)real.*sky")) {
System.out.println("I got this book!");
}
else {
System.out.println("I should have gotten this book!");
}
While it is shorter and more expressive with the embedded flags, it is not necessarily clearer, so choose wisely.
The other flag we will show here is MULTILINE.
String text = """
        Real-World Java: Helping You Navigate the Java Ecosystem
        Victor Grazi
        Jeanne Boyarsky""";
Pattern pattern = Pattern.compile(".*[iy]$", Pattern.MULTILINE);
Matcher matcher = pattern.matcher(text);
while (matcher.find()) {
System.out.println(matcher.group());
}
The regex matches anything that ends in an i or a y. Since it is in MULTILINE mode, it will find all lines that end
in an i or a y. This code outputs the following:
Victor Grazi
Jeanne Boyarsky
The MULTILINE flag makes $ match at each line break, not just at the end of the input. Since Victor Grazi and
Jeanne Boyarsky end with i and y, both lines are output. Without the flag, only Jeanne’s line would be output
because $ would then match only the end of the String.
Positive look-ahead uses the syntax (?=X) where X is the regular expression to use in the look-ahead. Positive
look-behind adds a < to the syntax, making it (?<=X). You can remember this since the less-than symbol points
to the left, which is the direction of “before.” Negative look-ahead and look-behind use a ! after the ? instead of
an = to indicate negation.
Table 10.5 reviews the syntax for each of these constructs. For look-behind, notice that java is in parentheses.
This indicates that we want to look behind the whole word java and not just behind the last a.
Positive look-ahead (?=X) play(?=g) Match the word play if immediately followed
by a g.
Negative look-ahead (?!X) java(?![a-z]) Match the word java if not immediately
followed by a lowercase letter.
Positive look-behind (?<=X) (?<=[.])(java) Match the word java if immediately preceded
by the literal period.
Negative look-behind (?<!X) (?<![0-9])(java) Match the word java if not immediately
preceded by a number.
Let’s look at another example. Suppose you have the following text:
once upon a time
Let’s see how some patterns match:
➤➤ on: This example (without look-ahead or look-behind) matches the letters on with no further
constraints, which matches two instances of two letters each: the on in once and the on in upon.
➤➤ on(?=ce): This positive look-ahead example matches the letters on followed by the letters ce, which
results in one match of two letters: the on in once.
➤➤ on(?!ce): This negative look-ahead example matches the letters on not followed by the letters ce,
which results in one match of two letters: the on in upon.
➤➤ (?<=up)on: This positive look-behind matches the letters on only if they come right after the letters
up, which results in one match of two letters: the on in upon.
➤➤ (?<!up)on: This negative look-behind matches the letters on unless they come right after the letters
up, which results in one match of two letters: the on in once.
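These look-arounds translate directly to Java. A small sketch using two of the patterns above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LookAround {
    public static void main(String[] args) {
        String text = "once upon a time";
        // Positive look-ahead: on followed by ce (the on in once).
        printMatches(text, "on(?=ce)");
        // Negative look-behind: on not preceded by up (the on in once).
        printMatches(text, "(?<!up)on");
    }

    private static void printMatches(String text, String regex) {
        Matcher matcher = Pattern.compile(regex).matcher(text);
        while (matcher.find()) {
            System.out.println(regex + " matched at index " + matcher.start());
        }
    }
}
```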
Differentiating Quantifiers
Early in the chapter you learned about quantifiers. Table 10.1 listed the most common ones. But there are many
more, and Table 10.6 provides the complete list. Note that a reluctant quantifier adds a ?, while possessive adds a +.
Let’s understand what those mean.
Suppose you have the following text and want to match any characters that are followed by row your boat.
How many matches would you expect there to be?
Poem:
Row, row, row your boat,
Gently down the stream,
Merrily, merrily, merrily, merrily
Life is but a dream.
Row, row, row your boat ... dream
The correct answer depends on which quantifier you use! Understanding how each of them processes data will
make this clear. These examples use DOTALL mode since the newline characters need to match.
First look at a greedy quantifier:
.*row your boat
With a greedy quantifier, the regular expression engine reads the entire input at once and checks if there is a
match. Unfortunately here the full text does not match so the engine uses a process called backtracking.
WHAT IS BACKTRACKING?
With backtracking, the regular expression engine realizes it has reached a dead end and cannot
match. It then releases one character from the end and tries again. If there is still no match, the
engine releases yet another character from the end, and so on, until either there is a match or
the engine runs out of text. For example, suppose we are trying to match .*abc in the text
abcdefg. With a greedy quantifier, the process looks like this:
➤➤ Does abcdefg match .*abc? No. Backtrack!
➤➤ Does abcdef match .*abc? No. Backtrack!
➤➤ Does abcde match .*abc? No. Backtrack!
➤➤ Does abcd match .*abc? No. Backtrack!
➤➤ Does abc match .*abc? Yes. There is a match, and the engine can stop looking.
After backtracking through the entire input in this case, you can see from the match (bolded in the following text)
that it found the result to include everything from the beginning and up until the final boat. This is because .*
matches everything up to the last occurrence of row your boat.
Poem:
Row, row, row your boat,
Gently down the stream,
Merrily, merrily, merrily, merrily
Life is but a dream.
Row, row, row your boat ... dream
Now compare that to a reluctant quantifier.
.*?row your boat
With a reluctant quantifier, the algorithm is reversed. The engine starts at the beginning, adding one character at a
time until it finds a match. This gives a match of Row, row, row your boat from the first line, as high-
lighted in the following:
Poem:
Row, row, row your boat,
Gently down the stream,
Merrily, merrily, merrily, merrily
Life is but a dream.
Row, row, row your boat ... dream
After this first match, the reluctant quantifier picks up from where it left off and sees there is more text remaining
in the input, so it tries again, revealing a second match.
Poem:
Row, row, row your boat,
Gently down the stream,
Merrily, merrily, merrily, merrily
Life is but a dream.
Row, row, row your boat ... dream
There are still characters of input that are unmatched at the end. The engine tries one more time and does not
find another match, so it quits.
Finally, compare all that to a possessive quantifier.
.*+row your boat
With a possessive quantifier, the engine reads the entire input once. However, unlike a greedy quantifier, there is
no backtracking. There are no matches for this regular expression since .* uses up the entire string. Notice how
the three quantifiers gave completely different output!
Now take a look at an example where a possessive quantifier does find a match. Suppose you have the following text:
row row your boat
Using the pattern .*+your boat would not work for the same reason described in the previous example. However,
you can make the regular expression more specific and still use a possessive quantifier. First consider the following:
(row )*+your boat
This example matches zero or more of the word row followed by a space. After it runs out, it matches your
boat, which matches the entire text in this example:
row row your boat
It gets more interesting if you move the space to form this pattern:
(row)*+ your boat
The possessive quantifier first grabs the first three letters of the text: row. Since the fourth character of the input
is a space that is not followed by your boat, that can’t be a match, so those three characters are ignored. Then
the next word row is considered. This time it is followed by a space and your boat. Eureka! The possessive
quantifier version found a match.
row row your boat
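The three behaviors above are easy to confirm with a few lines of java.util.regex code. This is a minimal sketch using the short row row your boat text; the comments show the results you should see:

```java
import java.util.regex.Pattern;

public class QuantifierDemo {
    public static void main(String[] args) {
        String text = "row row your boat";

        // Greedy: .* consumes the whole input, then backtracks until "your boat" fits.
        System.out.println(Pattern.compile(".*your boat").matcher(text).matches());       // true

        // Possessive: .*+ consumes the whole input and never gives characters back.
        System.out.println(Pattern.compile(".*+your boat").matcher(text).matches());      // false

        // A more specific possessive pattern can still match the entire text.
        System.out.println(Pattern.compile("(row )*+your boat").matcher(text).matches()); // true

        // With the space moved, find() locates a match starting at the second "row".
        System.out.println(Pattern.compile("(row)*+ your boat").matcher(text).find());    // true
    }
}
```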
If your regular expression isn’t behaving the way you want it to, there are two good techniques
for debugging it. One is to create a smaller regular expression that matches only part of what
you are looking for. Once that works, add on slowly until something breaks, so you know
where things went wrong.
The other is to use an online debugger. https://regex101.com has a debugger, but only if you
have PCRE2 mode selected. The syntax is often the same as Java, though, and it is a great
visualization tool. The debugger lets you step through and see what part of the regular expression
is being handled and what part of the input is being tested. See Figure 10.4 for an example.
Consider this code:
x = x.replaceAll("\\.\\.\\.", ";")
There’s no need for a regular expression at all. This does the same thing and is easier to read:
x = x.replace("...", ";");
SonarQube also has a rule on ensuring your regular expression is not too complicated. Regular expressions are a
tool to have in your toolbox, not the tool to use for every job!
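The equivalence is easy to check. Here is a quick sketch with a hypothetical input string (the x variable is whatever string your code was cleaning up):

```java
public class ReplaceDemo {
    public static void main(String[] args) {
        String x = "wait... then go...";
        // replaceAll treats its first argument as a regex, so the dots must be escaped.
        String viaRegex = x.replaceAll("\\.\\.\\.", ";");
        // replace substitutes every occurrence of a plain literal; no regex involved.
        String viaLiteral = x.replace("...", ";");
        System.out.println(viaRegex.equals(viaLiteral)); // true
        System.out.println(viaLiteral);                  // wait; then go;
    }
}
```

Despite its name, replace() also replaces every occurrence; the difference from replaceAll() is only that it takes a literal rather than a regex.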
String text = "VirtualThread[#22]/runnable@ForkJoinPool-1-worker-1";
Pattern pattern = Pattern.compile("(.*]).*@(.*)");
Matcher matcher = pattern.matcher(text);
while (matcher.find()) {
String virtualThread = matcher.group(1);
String pooledThread = matcher.group(2);
System.out.println(virtualThread);
System.out.println(pooledThread);
}
The regular expression starts by capturing any sequence of characters ending in ]. It then matches any characters
up to and including an @. Finally, it looks for a capturing group of any characters. This could have been split up
into variables for the three pieces if you preferred. The loop is similar to the previous example where the code
gets the capturing group values.
Since you are matching only a single value, an alternative is to use replaceFirst() as shown here:
String text = "VirtualThread[#22]/runnable@ForkJoinPool-1-worker-1";
String virtualThread = text.replaceFirst("].*", "]");
String pooledThread = text.replaceFirst(".*@", "");
System.out.println(virtualThread);
System.out.println(pooledThread);
In this example, you use replaceFirst() to get rid of the part you don’t want to match. For the
virtualThread one, it replaces the ] and everything after it with a ]. For the pooled thread, it removes
everything up to and including the @. Either way, you get the same output.
VirtualThread[#22]
ForkJoinPool-1-worker-1
@ParameterizedTest
@EnumSource(mode = EnumSource.Mode.MATCH_ALL, names = "^.*PROD$")
void monitoring(Env env) {
// assert env up
}
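The names value in MATCH_ALL mode is an ordinary Java regular expression applied to each constant’s name. A standalone sketch (the Env constants here are hypothetical) shows which names the pattern ^.*PROD$ selects:

```java
import java.util.Arrays;
import java.util.List;

public class EnumNameFilter {
    // Hypothetical environments; only names ending in PROD match the pattern.
    enum Env { DEV, QA, UAT, PRE_PROD, PROD }

    public static void main(String[] args) {
        List<Env> selected = Arrays.stream(Env.values())
                .filter(e -> e.name().matches("^.*PROD$"))
                .toList();
        System.out.println(selected); // [PRE_PROD, PROD]
    }
}
```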
As you saw in Chapter 7, “Testing Your Code with Automated Testing Tools,” AssertJ provides custom
matchers that you can use in writing assertions. One of them allows you to match on a regular expression. For example:
@Test
void chess() {
String chessNotationRegex = "[a-hA-H][1-8]";
String actual = "A5";
assertThat(actual).matches(chessNotationRegex);
}
The JUnit Pioneer library also supports regular expressions. For example, you can disable a test based on the
display name. In this case, the code disables the tests for any months with long names:
static List<String> months() {
return List.of("January", "February", "March", "April", "May",
"June", "July", "August", "September", "October",
"November", "December");
}
@Pattern(regexp = "[A-Z]+")
private String uppercaseLetters;
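Bean Validation applies the regexp to the entire field value. You can preview the same semantics with String.matches(), which also anchors the pattern to the whole string:

```java
public class UppercaseCheck {
    public static void main(String[] args) {
        String regex = "[A-Z]+";                   // same pattern as the @Pattern annotation
        System.out.println("ABC".matches(regex));  // true
        System.out.println("AbC".matches(regex));  // false: lowercase b is not in the class
        System.out.println("".matches(regex));     // false: + requires at least one character
    }
}
```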
NOTE You might have noticed the import starts with jakarta. A number of standards were
introduced in Java Enterprise Edition (JEE). To avoid trademark infringement, the JEE was
rebranded to mean Jakarta Enterprise Edition. Conveniently, the acronym remains the same.
@RestController
public class TheController {
@RequestMapping("/book/{bookId:[0-9]+}")
public String getBookById(@PathVariable("bookId") String bookId) {
String result = // build return value
return result;
}
}
In the previous code, Spring takes a URL like /book/469 and stores 469 in the bookId parameter. Using a
regular expression gives you a lot of flexibility in specifying the format that should match.
Now that you are used to seeing regular expressions, it is a good time to talk about other
pieces of code that might look similar at first glance but are not regular expressions.
Do you think this is a regular expression?
try (DirectoryStream<Path> dStream = Files.newDirectoryStream(
path, "**/*.{properties,txt}")) {
}
You probably aren’t surprised, given the title of this section, that it is not a regular expression.
There are a few clues. First, notice the **, which would never be a valid regex. Here it means
traverse into any number of levels of directories. Also, notice how there is a * rather than a .*
to match any characters. This is a big clue that you are not dealing with a regular expression.
The syntax is called a glob. It matches any files ending in .properties or .txt.
Similarly, Ant (an older build tool than Maven and Gradle) uses patterns like
**/*.txt to match text files in any directory. It is important not to mix up patterns like these
with regular expressions.
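Java lets you evaluate glob patterns directly through PathMatcher, which is a handy way to see how globs differ from regexes (the paths here are made up):

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;

public class GlobDemo {
    public static void main(String[] args) {
        // The "glob:" prefix selects glob syntax; "regex:" would select regular expressions.
        PathMatcher matcher = FileSystems.getDefault()
                .getPathMatcher("glob:**/*.{properties,txt}");

        System.out.println(matcher.matches(Path.of("conf/app.properties"))); // true
        System.out.println(matcher.matches(Path.of("a/b/notes.txt")));       // true
        System.out.println(matcher.matches(Path.of("conf/app.xml")));        // false
    }
}
```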
FURTHER REFERENCES
➤➤ https://www.pluralsight.com/courses/playbook-regular-expressions-java-fundamentals
➤➤ Victor’s Pluralsight Course on Regular Expressions
➤➤ https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/util/regex/Pattern.html
➤➤ Java Pattern Documentation
➤➤ https://regex101.com
➤➤ Regular Expression Tester
➤➤ https://commons.apache.org/proper/commons-csv
➤➤ Apache Commons CSV
➤➤ https://opencsv.sourceforge.net
➤➤ Open CSV
➤➤ https://commons.apache.org/proper/commons-validator
➤➤ Apache Commons Validator
➤➤ https://regexcrossword.com
➤➤ Regular expression crossword puzzles for practice
➤➤ Introducing Regular Expressions (O’Reilly, 2012)
➤➤ Regular Expressions Cookbook (O’Reilly, 2012)
SUMMARY
In this chapter, you learned about how to write regular expressions. Key takeaways included the following:
➤➤ Character classes are a shorthand so you don’t have to list every character you are interested in
individually.
➤➤ Greedy, reluctant, and possessive quantifiers allow you to change matching behavior.
➤➤ Positive and negative look-ahead and look-behind allow you to constrain matches based on sur-
rounding text.
➤➤ Built-in Java APIs support regular expressions.
➤➤ Regular expressions are also used in libraries.
11
Coding the Aspect-Oriented
Way
WHAT’S IN THIS CHAPTER?
Not all your Java code is coded in your Java code. While that paradox may sound like some Zen koan, it effec-
tively sets the stage for this chapter on coding with aspects, which are orthogonal functions that can be triggered
to execute when certain conditions are met. We will show the benefit of aspects and how to write your own.
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. The code can also be found at https://github.com/realworldjava/
Ch11-Aspects. See the README.md file in that directory for details on how to run the
examples using Postman and curl.
// do stuff
Now, you can copy and paste that code into every endpoint, but nobody ever won a clean-coding award for copy-
and-paste coding. Can you do it in one place? Yes! You can enforce the requirement and still achieve nirvana
using aspect-oriented programming (AOP). In this chapter, we will cover Spring AOP. AspectJ is more powerful,
but Spring AOP is perfect for most enterprise use cases.
The idea is to create some configuration that defines a pointcut, that is, a pattern describing which members will
be modified.
Before we go any further, let’s cover some vocabulary.
➤➤ Advice: Defines the action to take when a join point is reached
➤➤ Join point: The actual point in the code where the aspect is inserted
➤➤ Pointcut: An expression that defines a pattern that matches where the advice should be applied
An example is worth a thousand words. Let’s create a Spring Boot MVC project with a couple of endpoints. We
will make it a product catalog.
@RestController
public class ProductController {
private final Map<String, Product> productMap = new HashMap<>(); // field declaration assumed
@PostMapping("/product")
public void addProduct(@RequestBody Product product) {
productMap.put(product.getStyleNum(), product);
}
@DeleteMapping("/product")
public Product removeProduct(String styleNumber){
return productMap.remove(styleNumber);
}
@GetMapping("/product")
public Collection<Product> listProducts(){
return productMap.values();
}
}
Let’s add our boss’s required logging using the magic of Spring AOP. The first step is to create an aspect
component, which defines the pointcut patterns describing the methods to intercept, as well as the advice
implementations providing the code to invoke when the pointcut is hit. An aspect class is just a standard
Java class, annotated with both @Aspect and @Component. Remember to import these in your class:
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;
LISTING 11-1:
@Aspect
@Component
public class LoggingAspect1 {
private final Logger logger = LoggerFactory.getLogger(LoggingAspect1.class);
@Before("execution(public * com.wiley.realworldjava.aop.product."
+ "ProductController.addProduct(..))")
public void logAddOperations(JoinPoint joinPoint) {
logger.info("===============> called {}", joinPoint.getSignature().getName());
}
}
To run the application, let’s use Postman or curl. (Remember to see the full instructions in the README.md file.)
Make the POST call with the parameters shown here.
Endpoint:
POST localhost:8081/product
Body (Application/Json):
{
"styleNum": "123",
"description": "IPhone"
}
Run the application and fire that endpoint to add a product; you will see the following output that includes the
log message:
===============> called addProduct
This is a simple example, but it demonstrates the beauty of aspects. You can intercept calls to your code and take
action, without ever touching the code being instrumented!
What happened here? Let’s look at the sequence diagram in Figure 11.1.
Spring saw your @Aspect annotation and introduced an invisible AOP proxy class.
This proxy evaluated the join point and delegated to an appropriate advice. The advice was invoked and, in its
course, invoked the target method.
In addition, there are some important points to observe about the previous aspect in Listing 11.1.
➤➤ The @Before annotation tells the runtime to call this code before the method is called.
➤➤ The method body to be invoked, logAddOperations(), is called the advice.
➤➤ The part beginning with execution defines the pointcut: the pattern to match to methods (the
matched method is called a join point) to apply this advice.
That’s the general flow with AOP. Let’s take a closer look at the APIs. You will notice that the JoinPoint
class has several useful accessors. The one shown here is getSignature().getName(). Let’s look at the rest
of the APIs.
➤➤ getArgs(): Returns an Object[] containing all the arguments. For example:
[Product{styleNum='123', description='IPhone'}]
➤➤ getKind(): Returns a String representation of the kind of call. For example:
method-execution
➤➤ getThis() and getTarget(): Return the instrumented instance (the toString() of the
instance containing the instrumented method). For example:
com.wiley.realworldjava.aop.product.ProductController@29b5747
➤➤ toLongString(): Returns a long representation of the pointcut. For example:
execution(public com.wiley.realworldjava.aop.product.Product com.wiley
.realworldjava.aop.product.ProductController.addProduct(com.wiley
.realworldjava.aop.product.Product))
➤➤ toShortString(): Returns a short-form representation of the pointcut. For example:
execution(ProductController.addProduct(..))
➤➤ toString(): Returns an intermediate String representation of the pointcut. For example:
execution(Product com.wiley.realworldjava.aop.product.
ProductController.addProduct(Product))
➤➤ getSignature(): Signature is a class containing more details about the call. For example:
Product com.wiley.realworldjava.aop.product.ProductController
.addProduct(Product)
➤➤ getStaticPart(): StaticPart contains just the static information about a join point. For example:
execution(Product com.wiley.realworldjava.aop.product.
ProductController.addProduct(Product))
One important gotcha is that Spring AOP will instrument only Spring-managed classes, for example, Component,
Bean, Service, and so on. Your advice can reference other data that is not managed by Spring, if that data
is an argument or a return value. You will see some examples shortly, but pointcuts must refer to Spring-
managed methods.
* Wildcards
You can use * as a wildcard to match a range of characters anywhere in the name of a method, class, or package.
For example:
➤➤ Method wildcard: execution(* add*(..)) matches any method starting with add.
➤➤ Package wildcard: execution(* com.wiley.realworldjava..*.*(..)) matches any method
in com.wiley.realworldjava or any of its subpackages.
➤➤ Class name wildcard: execution(* your.package.name.*Controller*.*(..)) matches any
method on any class with Controller in the class name.
For example, the following will match any ProductController method named addProduct that returns any
class starting with Pr and ending with ct. Therefore, it will match Product, Prospect, and Predict!
execution(com.wiley.realworldjava.aop.product.Pr*ct com.wiley.realworldjava.aop
.product.ProductController.addProduct(..))
.. Wildcards
You saw how to use (..) to match zero or more parameters in a method-call parameter list. You can also use
that notation to match any method in a package or any of its subpackages. In general, .. indicates zero or more
elements in the specified position.
For example, the following matches any zero-parameter method in the package com.wiley.realworldjava
.aop.product or any of its subpackages:
execution(* com.wiley.realworldjava.aop.product..*Controller*.*())
Using @AfterReturning
The @AfterReturning type executes after a method returns successfully, that is, without throwing any
exceptions. This can be useful for cases such as logging, but it is especially useful when you want to post-process
data that was handled by the method.
Start with an execution pointcut exactly as you already learned, except annotate it with @AfterReturning.
@AfterReturning(pointcut =
"execution(public * com.wiley.realworldjava.aop.product."
+"ProductController.addProduct(com.wiley.realworldjava.aop.product.Product))")
public void logAfterReturning(JoinPoint joinPoint){
logger.info("=====> called after returning {}",
joinPoint.getSignature().getName());
}
A call to addProduct() shows in the log.
=====> called after returning addProduct
That will allow you to do things like logging, but what if you need the return value?
For that you can apply some signature magic to the pointcut to include the return value, as follows:
@AfterReturning(value = "execution(public * com.wiley.realworldjava.aop.product."
+ "ProductController.addProduct(com.wiley.realworldjava.aop.product.Product))",
returning = "product")
public void logAfterReturning(JoinPoint joinPoint, Product product){
logger.info("===> called after returning {} returned:{}",
joinPoint.getSignature().getName(), product);
}
A call to addProduct() now displays in the log.
===> called after returning addProduct returned:Product{styleNum='123',
description='IPhone'}
The @AfterReturning advice lets you adjust the return value and return the adjusted value to the user. For
example, let’s say the requirement is to return uppercase descriptions. The advice can be written as follows:
@AfterReturning(value =
"execution(public * com.wiley.realworldjava.aop.product."
+ "ProductController.addProduct(com.wiley.realworldjava.aop.product.Product))",
returning = "product")
public void upperCaseDescription(JoinPoint joinPoint, Product product){
// adjust the returned object in place (advice method name and setter assumed)
product.setDescription(product.getDescription().toUpperCase());
}
Using @AfterThrowing
This is invoked only if the instrumented method ends abnormally by throwing an exception. You can capture the
exception by adding it to the method call using a throwing parameter.
Let’s add the following to our ProductController class to simulate an exception. A call to the
/products-exception endpoint will always throw an IOException.
@GetMapping("/products-exception")
public Collection<Product> listProductsWithException() throws IOException {
throw new IOException("An IOException to handle");
}
Now add the following pointcut to your LoggingAspect1 class:
@AfterThrowing(value = "execution(public * com.wiley.realworldjava.aop.product."
+ "ProductController.listProductsWithException())", throwing = "cause")
public void logAfterThrowing(JoinPoint joinPoint, Throwable cause){
logger.info("===> called after throwing {}. Throwing:{}",
joinPoint.getSignature().toShortString(), cause);
}
In this case, we captured the Throwable cause in the method. Using Postman, let’s make a GET call to our
new endpoint.
localhost:8081/products-exception
You will find the following in the logs, just before the stack trace:
===> called after throwing ProductController.listProductsWithException().
Throwing: java.io.IOException: An IOException to handle
Using @After
@After is similar to the @AfterReturning and @AfterThrowing, except you won’t have access to the
result or any thrown exception. It is useful for things like general end of method cleanup, logging, or
observability capture.
Using @Around
@Around provides more granular access to the instrumented method. With @Around, you can instrument before
and after the method call, modify any results, handle or modify any exceptions, and even bypass the target call
altogether.
A useful use case for this would be to provide caching for a method. Imagine we have a method whose results
are lazy-loaded (which means the results are computed on the first invocation and are saved in a cache for future
calls). An example would be an application that speaks to several databases, and we want to lazy-load the
connection information, which doesn’t change.
return connection;
}
The idea is that we call this endpoint with a JSON payload like the following, containing connection information:
{
"driver": "com.mysql.cj.jdbc.Driver",
"url":"jdbc:mysql://localhost:3306/products",
"username":"vgrazi",
"password":"pa55w0rd"
}
The raw implementation would create a new connection for each call, which would be a resource-intensive
design.
Let’s correct that with AOP, using an @Around advice, to lazy-load the appropriate connection on the first
invocation of this method for each set of Properties and then save it in a cache for subsequent calls.
1: private final Map<String, Connection> connectionCache = new HashMap<>();
2:
3: @Around("execution(public java.sql.Connection com.wiley.realworldjava.aop."
4: + "product.ProductController.getDbConnection(java.util.Properties))")
5: public Connection cacheConnection(ProceedingJoinPoint joinPoint)
6: throws Throwable {
7: Object[] args = joinPoint.getArgs();
8: Properties properties = (Properties) args[0];
9: Connection connection = connectionCache.get(properties.getProperty("database"));
10: if(connection==null) {
11: logger.info("Nothing cached. Aspect proceeds with the original call!");
12: try {
13: connection = (Connection) joinPoint.proceed();
14: logger.info("Aspect caches the connection for future calls");
15: connectionCache.put(
16: properties.getProperty("database"), connection);
17: } catch(Throwable throwable) {
18: logger.error("cacheConnection advice threw exception", throwable);
19: throw throwable;
20: }
21: }
22: else {
23: logger.info("Aspect got connection from cache!");
24: }
25: return connection;
26: }
In line 1 we are creating a local cache to be used by our aspect. Note that this instance is unknown to our
ProductController.
In line 3 we are creating a standard pointcut expression for our getDbConnection method. Note that we used
the @Around advice. In the method signature on line 5, we return the same type as the target method call, in this
case Connection. Also note that we replaced the usual JoinPoint instance with a ProceedingJoinPoint.
This is always required for @Around so that we can call the proceed method to invoke the target method.
In line 8 we are intercepting the call arguments, which we will use to check for an instance in our cache, which
we do in line 9.
In line 10 we found the cache was missing the connection, so in line 13 we call the target method by calling
ProceedingJoinPoint.proceed(). This invokes the target method, which creates a new
connection instance.
We cache that new instance in line 15. Note that any exceptions thrown by the target method call will be thrown
by the joinPoint.proceed() call, which we catch in line 17 and throw back to the caller in line 19.
In line 23 we log a message that a connection was found, so we bypass the actual method call completely and just
return that cached instance.
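The cache-then-proceed logic in this advice is ordinary memoization. Stripped of the AOP machinery, it is equivalent to this sketch, where a hypothetical loader function stands in for joinPoint.proceed():

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class MemoizeDemo {
    private final Map<String, String> cache = new HashMap<>();
    int targetCalls = 0;

    // Mirrors the advice: consult the cache first, call the "target" only on a miss.
    String getConnection(String database, Function<String, String> target) {
        String connection = cache.get(database);
        if (connection == null) {
            targetCalls++;                         // like joinPoint.proceed()
            connection = target.apply(database);
            cache.put(database, connection);
        }
        return connection;                         // cached instance on later calls
    }

    public static void main(String[] args) {
        MemoizeDemo demo = new MemoizeDemo();
        Function<String, String> open = db -> "connection-to-" + db;
        demo.getConnection("products", open);
        demo.getConnection("products", open);      // served from the cache
        System.out.println(demo.targetCalls);      // 1
    }
}
```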
At this point a word of caution is in order. AOP is magical, and it’s perfect for handling cases outside of the
developers’ area of concern, such as capturing observability metrics or logging to a distributed log.
But you should use it with care. If your development team is not aware of it, they could waste frustrating
development cycles trying to understand why their product descriptions are being converted to uppercase, why
their breakpoints are not being hit, or why their method calls are pulling from the cache rather than running
their implementation!
Using @Pointcut
This would be a good time to talk about reuse. Imagine if you wanted to create a @Before advice for logging,
an @After advice to capture time, and an @Around advice for caching. You would have to repeat the execution
pointcut three times! And heaven forbid, if you want to make a slight change in the pointcut, you would need to
remember all the places to make the change. Enter the @Pointcut annotation. @Pointcut allows you to give a
short name to your pointcut expression and then use that name in place of the pointcut expression.
If you are following along in the code, you will notice we have been using the class LoggingAspect1. To
continue the examples, comment out the @Aspect annotation on line 16 of LoggingAspect1.
16: //@Aspect
17: @Component
18: public class LoggingAspect1 {
19: private final Logger logger = LoggerFactory.getLogger(LoggingAspect1.class);
Then uncomment the @Aspect annotation in line 15 of LoggingAspect2.
15: @Aspect
16: @Component
17: public class LoggingAspect2 {
This commenting/uncommenting will replace the AOP proxy from LoggingAspect1 with the revised version in
LoggingAspect2 so you can continue to follow along. The following code lines refer to LoggingAspect2.
Let’s revisit our pointcut expression from the earlier @AfterReturning section.
The expression was as follows:
@AfterReturning(value = "execution(public * com.wiley.realworldjava.aop.product."
+ "ProductController.addProduct(com.wiley.realworldjava.aop.product.Product))",
returning = "product")
public void logAfterReturning(JoinPoint joinPoint, Product product){
System.out.println("=============> called after returning " +
joinPoint.getSignature().getName());
}
You can use the @Pointcut annotation to annotate a method that defines the execution and then refer to that
method wherever it is needed, in place of the actual execution pointcut.
The paradigm is as follows: instead of specifying the execution in the advice annotation, specify the execution on
an empty method. The method is not executed, but it is still important, because that method name can now be
used instead of the explicit execution expression.
In the following example, the original execution expression is moved to the addProductPointcut method.
So, the previous snippet would be changed to the following equivalently named pointcut:
// Define an empty pointcut method named "addProductPointcut"
@Pointcut(value = "execution(public * com.wiley.realworldjava.aop.product."
+ "ProductController.addProduct(com.wiley.realworldjava.aop.product.Product))")
public void addProductPointcut() {}
// Our advice method (@AfterReturning) can now refer to the empty pointcut method,
// instead of inlining the execution
@AfterReturning(value = "addProductPointcut()", returning = "product")
public void logAfterReturning(JoinPoint joinPoint, Product product){
. . .
}
The following REST endpoint will trigger the advice:
@PostMapping("/product")
public Product addProduct(@RequestBody Product product) {
logger.info("Adding product {}", product);
productMap.put(product.getStyleNum(), product);
return product;
}
Now you can reuse that pointcut to redefine your @AfterThrowing and @After advice.
@AfterReturning(value = "addProductPointcut()", returning = "product")
public void logAfterReturningWithReturningValue(JoinPoint joinPoint,
Product product) {
Combining Pointcuts
Now that you know that pointcuts can be named, you can do some interesting things by combining pointcuts
using && (AND), || (OR), and ! (NOT) operators.
So, let’s say you have defined the methods pointcutA() and pointcutB(). You are able to combine them
into a single advice, which will be invoked if either pointcutA() or pointcutB() is matched by using the ||
(OR) operator.
For example:
@Pointcut("pointcutA() || pointcutB()")
public void combinedOrPointcut() {
// Invoked if either pointcutA() or pointcutB() is matched
}
Similarly, you can use the && (AND) operator to be invoked only when both pointcuts are matched.
@Pointcut("pointcutA() && pointcutB()")
public void combinedAndPointcut() {
// Invoked if both pointcutA() and pointcutB() are matched
}
Or you can invoke pointcutA() except when pointcutB() is matched.
@Pointcut("pointcutA() && !pointcutB()")
public void notPointcut() {
// Invoked if pointcutA() is matched and pointcutB() is not
}
Annotation-Based Pointcuts
Have you ever wondered how the Spring “automagic” configuration works? From what you have learned, you
can surmise that products like Spring, Lombok, JCache, and many other frameworks are using AOP behind the
scenes. Yet you never see such pointcut expressions in these applications. Instead, they use @ annotations of their
own. How can we do that?
The answer is that you can define your own annotations and match them in a pointcut using the @annotation designator.
For example, let’s say we want to redo our caching advice to be reusable by any call that cares to be cached. Let’s
create a new annotation interface called @Cache. The first step is to create the annotation class.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Cache {
}
Next, associate the new @Cache annotation with a pointcut.
@Pointcut("@annotation(com.wiley.realworldjava.aop.Cache)")
public void cacheAnnotationMethod(){
}
Then, provide an advice to be invoked.
@Around("cacheAnnotationMethod()")
public Object cacheMethodResults(ProceedingJoinPoint joinPoint) throws Throwable {
// Each unique combination of args will be cached separately.
// Create a MultiKey representing the args
// (MultiKey lets you define keys with arbitrary numbers of components):
Object[] args = joinPoint.getArgs();
Object[] argsLong = new Object[args.length + 1];
System.arraycopy(args, 0, argsLong, 1, args.length);
@Around("timingAnnotationMethod()")
public Object cacheTimingResults(ProceedingJoinPoint joinPoint) throws Throwable {
// grab the start time
long start = System.currentTimeMillis();
// execute the target method...
Object result = joinPoint.proceed();
// grab the end time
long end = System.currentTimeMillis();
// log the timing
logger.info(">>>> Call to {} took {} ms.", joinPoint.toString(), end - start);
// return the call result
return result;
}
Note that this method returns a result. What if it is applied to a void method? The answer is that the
joinPoint.proceed() invocation will return null, and the final return result call will return null,
which will be ignored by the caller since they are expecting a void return.
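Under the hood, Spring implements advice like this by wrapping your bean in a proxy. Stripped of Spring, the same before/after timing idea can be sketched with a JDK dynamic proxy; the Function target here is a hypothetical stand-in for your instrumented bean:

```java
import java.lang.reflect.Proxy;
import java.util.function.Function;

public class TimingProxyDemo {

    // Wraps any Function in a proxy that times each call, playing the role of an
    // @Around advice: code runs before and after the real invocation.
    @SuppressWarnings("unchecked")
    static Function<String, String> timed(Function<String, String> target) {
        return (Function<String, String>) Proxy.newProxyInstance(
                Function.class.getClassLoader(),
                new Class<?>[]{Function.class},
                (proxy, method, args) -> {
                    long start = System.nanoTime();
                    Object result = method.invoke(target, args); // like joinPoint.proceed()
                    System.out.println(">>>> Call to " + method.getName()
                            + " took " + (System.nanoTime() - start) + " ns.");
                    return result;
                });
    }

    public static void main(String[] args) {
        Function<String, String> greeter = name -> "Hello, " + name;
        System.out.println(timed(greeter).apply("world")); // Hello, world
    }
}
```

The proxy never touches the target's own code, which is exactly the property that makes aspects feel magical.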
FURTHER REFERENCES
➤➤ https://docs.spring.io/spring-framework/reference/core/aop.html
Complete documentation for Spring AOP
➤➤ https://eclipse.dev/aspectj
Documentation for AspectJ, a broader aspects framework that works for even non-Spring projects
SUMMARY
The following are the key takeaways from the chapter:
➤➤ Aspects allow you to specify a pointcut that describes which methods to intercept and what action to
take when intercepting method calls.
➤➤ A join point is where the code is inserted.
➤➤ Pointcuts can be named for reuse and can be combined using &&, ||, and !.
➤➤ The major advice types are @Before, @After, @AfterReturning, @AfterThrowing,
and @Around.
12
Monitoring Your Applications:
Observability in the Java
Ecosystem
WHAT’S IN THIS CHAPTER?
➤➤ Introducing Observability
➤➤ Getting Started with Prometheus
➤➤ Adding Alert Manager
➤➤ Dashboarding with Grafana
➤➤ Logging and Tracing
In production systems, things go wrong. When they do, you want to know about it as early as pos-
sible. . .and even earlier! Monitoring your application is crucial for ensuring a pleasant and beneficial
experience for your users. You need visibility into your application’s inner workings to understand what’s
happening, and certain tools are designed to provide that visibility.
In this chapter, we’ll explore some popular open-source tools in the observability space.
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. You can also find the code at https://github.com/realworldjava/
Ch12-Observability. See the README.md file in that repository for details.
INTRODUCING OBSERVABILITY
Monitoring is just one component of “observability.” What is observability? Ask 10 people and you’ll get 12 dif-
ferent answers. To clarify, let’s break it down. Observability involves the following key components:
➤➤ Metrics: Named units of data used to quantify events, monitor system performance, detect anomalies,
and forecast future trends and issues
➤➤ Alerting: Automated notifications to the appropriate teams when predefined thresholds are breached,
enabling rapid response to incidents
➤➤ Dashboarding: Real-time visualization of metrics, logs, and traces through customizable dashboards,
allowing teams to monitor system health and make data-driven decisions quickly
➤➤ Logging: Centralized event snapshots in the form of detailed log messages, providing visibility and
aiding in troubleshooting
➤➤ Tracing: Consolidated logging and responses using a unified “trace-id” to track the flow of a request
across multiple services, enabling reliability engineers to identify failures, anomalies, or bottlenecks more
effectively
For collecting metrics, there are two approaches: pushing and scraping. Pushing means that the applications
you are monitoring are configured to send periodic metrics to a database, usually a time-series database such as
Graphite. Scraping is a pull technique where the monitoring operation is configured to interrogate specific end-
points for metrics, which it saves in a database.
If you already have access to your enterprise Prometheus instance, you can run queries there, but we strongly
recommend you install Prometheus yourself so that you can configure and experiment with it as you like.
TIP Node exporters have no relation to the Node.js framework. Here, node refers
to the servers and containers that host your Linux-based applications.
These exporters are configured to monitor nodes, from which they capture metrics and expose them in a format
that Prometheus can scrape. There are hundreds of such metrics, including CPU utilization, disk usage, memory
consumption, and so forth.
While you can install directly on your computer, we are using a Docker container in this chapter to ensure a consistent setup for readers. If you’d like a refresher on using Docker, see the Jenkins section of Chapter 4, “Automating Your CI/CD Builds with Maven, Gradle, and Jenkins.”
In this section we will set up Node Exporter using Docker with a Linux image. After starting Docker Desktop,
you can pull the image like so:
docker pull prom/node-exporter
That is not strictly required though. The run command will pull the image if it is not already there. To run the
node exporter container, call the following:
docker run --name=node_exporter -p 9100:9100 prom/node-exporter
TIP Remember from Chapter 4 that you use the run command only once, to create
the container. After the first time, you will start the node exporter by calling
docker start node_exporter instead, to run the existing container instance.
This command launches the node exporter using its defaults. (You can change these with flags. Find the full list of
flags in the “Further References” section. Search for “Flags.”)
Node Exporter runs by default on port 9100. To verify that the exporter is running, browse to http://
localhost:9100. You’ll see a web page that includes the version number, like this:
Version: (version=1.8.2, branch=HEAD,
revision=f1e0e8360aa60b6cb5e5cc1560bed348fc2c1895)
The URL http://localhost:9100/metrics exposes all the metrics being exported by the exporter. If this
looks like a lot of gibberish, don’t worry; that is Prometheus scrape format, and we will make sense of it shortly.
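To give a sense of that format before we dig in, here is a representative excerpt of what a scraped metrics page looks like; the metric values shown are illustrative:

```
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 102191.37
node_cpu_seconds_total{cpu="0",mode="system"} 3812.55
```

Each line carries a metric name, a set of labels in curly braces, and the current value; the # HELP and # TYPE comment lines describe the metric.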
Installing Prometheus
The Prometheus binary keeps track of the metrics and includes a UI for queries.
First, run the following:
docker pull prom/prometheus
Copy the prometheus.yml file from the root of the Chapter 12 repository into your working directory. We will explain this file in the next section. Run the following on Windows:
docker run --name prometheus -p 9090:9090 -v .\prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
or the corresponding command for Mac/Linux:
docker run --name prometheus -p 9090:9090 -v ./prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
TIP If you add the -d flag, Docker runs the container in the background.
The Docker -v flag stands for volume. It allows you to map a file or directory from your local computer into the Docker container. In this case, it allows you to supply your own prometheus.yml file, which we copied from the Prometheus distribution with some modifications.
Prometheus runs by default on port 9090. You can browse to http://localhost:9090 to confirm
Prometheus started correctly. As you might imagine, after the initial start, you run
docker start prometheus instead.
Configuring a Scrape
In the previous section, you referenced the YAML configuration file prometheus.yml. (For more about YAML,
see the Appendix). The file configures your Prometheus to recognize your exporters and other applications so that
it knows to scrape them.
Under scrape_configs in prometheus.yml you can see the following code, as supplied by Prometheus. We
added the bold lines.
21: scrape_configs:
22:   # The job name is added as a label `job=<job_name>`
23:   # to any timeseries scraped from this config.
24:   - job_name: "prometheus"
25:
26:     # metrics_path defaults to '/metrics'
27:     # scheme defaults to 'http'.
28:
29:     static_configs:
30:       - targets: ["localhost:9090"]
31:
32:   - job_name: "node_exporter"
33:     static_configs:
34:       - targets: ["host.docker.internal:9100"]
35:
36:
37:   - job_name: "mtg-calc"
38:     static_configs:
39:       - targets: ["host.docker.internal:8081"]
40:     metrics_path: '/actuator/prometheus'
The first scrape config job on line 24 is named prometheus. Prometheus is self-aware, so it exports its own
metrics, and this is where we configure Prometheus to scrape itself.
Lines 32–34 configure Prometheus to scrape the node_exporter, and lines 37–40 tell it to scrape the Spring
Boot application. Lines 34 and 39 use host.docker.internal, which is how Docker refers to the host
machine. Otherwise, it would look for the endpoint internally in the Prometheus container.
Let’s explore the Prometheus interface. From a browser window launch the Prometheus web interface at
http://localhost:9090. Choose Status ➪ Configuration to see all the scrapes configured in this instance, as
shown in Figure 12.1. Notice both job names that we configured in the previous YAML are in this screenshot.
Choose Status ➪ Targets to see the health of the scraped targets, as shown in Figure 12.2.
Let’s get a feel for Prometheus by running some queries. Go back to the Prometheus query window, enter node_cpu_seconds_total in the search box, and hit Execute. You should see something like Figure 12.3.
The result set contains a new line for each mode associated with the query.
NOTE The metrics are operating system-specific. For example, Windows would use
windows_cpu_time_total. Since the Docker container is a Linux one, you get to use
node_cpu_seconds_total regardless of which operating system your machine happens
to be running.
Notice that each metric is annotated with labels in curly braces. You can query for specific labels by appending them to the metric name in curly braces. To enter multiple labels, specify them as a comma-separated list of key="value" pairs between the curly braces. For example, let’s limit our query to just the “idle” CPU time by adding the label {mode="idle"} to our metric. Click Add Panel to start a new query panel below the first one and enter the following:
node_cpu_seconds_total{mode="idle"}
The result is something like Figure 12.4. By comparing that to Figure 12.3, you can see that now we are displaying only the mode="idle" series.
This is how you can query any metric in Prometheus. If you need help, you can type a few characters and the
auto-complete will guide you.
Every metric has an implicit label called __name__ that contains the name
of the metric. You can use this fact to get a full list of metric names by entering the query
{__name__=~".*_.*"}. This will produce a very long list.
If you added multiple query panels to a single page, you’ll notice Prometheus does not provide a way to remove a query panel. We suggest you start a new Prometheus window by opening a new browser tab, entering the URL localhost:9090 again, and entering the new query there.
Continuing from the node_cpu_seconds_total{mode="idle"} query in Figure 12.4, click the Graph
button just below your query to see a graph of CPU idle time, as in Figure 12.5.
An instant vector (the value of every labeled series of the metric at a given time, in this case node_cpu_seconds_total) appears below the graph. You can deselect and reselect series by clicking them in the list. By default, Prometheus graphs the last hour, but you can change that by clicking the – and + surrounding the time selector above the graph on the left side.
Prometheus supports a wide range of functions, including aggregation functions, rate calculations, value adjustments, and so forth.
For example, we can use the metric node_cpu_seconds_total{mode="idle"} to calculate CPU utilization.
CPU utilization in a time window is defined as the percentage of time that the CPU is busy (aka not idle) in that
time window.
The rate function calculates the per-second rate of change of a range vector within a specified time window. A range vector is essentially a sequence of instant vectors over a time range; in mathematical terms, it is a matrix. For example, node_cpu_seconds_total{mode="idle"}[5m] is a range vector that represents all the instant vectors within the last 5 minutes.
rate(node_cpu_seconds_total{mode="idle"}[5m]) calculates the rate of change of idle time between the start and end of each five-minute window. This represents the idle-time fraction, which can never exceed 1 (a value of 1 means the CPU was completely idle). Subtracting that result from 1 gives the CPU utilization. Let’s multiply that by 100 to convert to percent utilization, producing the following formula:
(1 - rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100
You can use an offset to turn back the clock. For example, let’s say we left our laptop off for
five hours; we can set an offset of 5h to tell Prometheus to run the query as if it were executed
five hours ago:
(1 - rate(node_cpu_seconds_total{mode="idle"}[5m] offset 5h)) * 100
This produces the graph of CPU utilization per logical core, as shown in Figure 12.6. This laptop has four cores,
and you can see the utilization of each by clicking the ones you want to show and clicking off the rest.
To get an average across all CPUs, use the aggregation function avg. Aggregation functions combine all the instant vectors at each point in time. So, the avg function produces the average across all cores.
(1 - avg(rate(node_cpu_seconds_total{mode="idle"}[5m]))) * 100
You can see the graph in Figure 12.7 is similar to Figure 12.6, except that there is only one trendline, showing the
average CPU utilization over all cores.
In each of these cases, we showed a graph of an instant vector over time. If you come back to the Table tab, you will see the latest values for each series in the vector. You can change that to a range vector by specifying the range. For example, node_cpu_seconds_total{mode="idle"}[5m] displays the CPU idle time for each core at every 15-second scrape interval (the Prometheus default) within the specified range, in this case five minutes, as shown in Figure 12.8.
We have already seen range vector units of m and h. The full set of range vector units is as follows:
➤➤ ms: Milliseconds
➤➤ s: Seconds
➤➤ m: Minutes
➤➤ h: Hours
➤➤ d: Days
➤➤ w: Weeks
➤➤ y: Years
Prometheus will display a graph for instant vectors but not for range vectors. We have already seen some arithmetic operators such as - and *. The full list of arithmetic operators is as follows:
➤➤ +: Addition
➤➤ -: Subtraction
➤➤ *: Multiplication
➤➤ /: Division
➤➤ %: Modulus
➤➤ ^: Power
We saw the aggregation function avg. The following are the other aggregation functions:
➤➤ avg: The average over the values of an instant vector.
➤➤ count: The count of members of an instant vector.
➤➤ max: The maximum value of an instant vector.
➤➤ min: The minimum value of an instant vector.
➤➤ sum: The sum of all values of an instant vector.
➤➤ count_values: Counts the number of values for each value of the provided label. For example, count_values("code", prometheus_http_requests_total) provides the number of HTTP requests for each value of the label “code.”
➤➤ topk: Specifies how many of the top (largest) values of an instant vector to retain. For example, to retain three: topk(3, cpu_seconds_total).
➤➤ bottomk: Specifies how many of the bottom (smallest) values of an instant vector to retain. For example, to retain three: bottomk(3, cpu_seconds_total).
➤➤ stddev: The standard deviation over the values of an instant vector.
➤➤ stdvar: The standard variance (or simply variance) over the values of an instant vector.
There is a lot more, but this should give you a taste of the power of the PromQL language. We recommend you experiment and check the Prometheus documentation for more information.
91: @GetMapping("/payment")
92: public ResponseEntity<String> calculateMonthlyPayment(
93: @RequestParam double principal, @RequestParam int years,
94: @RequestParam double interest) {
95: hitCounter.increment();
96: double payment = mortgageCalculator.payment(
97: principal, interest, years);
98: String rval = String.format("Principal:%,.2f<br>Interest: %.2f<br>" +
99: "Years: %d<br>Monthly Payment:%.2f", principal, interest,
100: years, payment);
101: HttpHeaders headers = new HttpHeaders();
102: headers.add("Request time", "Call for payment at "
103: + LocalDateTime.now());
104: return new ResponseEntity<>(rval, headers, HttpStatus.OK);
105: }
In lines 38–40 we create hitCounter, which is a Micrometer Counter. In line 54 we increment
the counter.
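A Micrometer Counter is conceptually just a monotonically increasing, thread-safe value that the actuator endpoint exposes for scraping. The following dependency-free sketch illustrates the idea using only the JDK; it is an illustration of the concept, not Micrometer's implementation:

```java
import java.util.concurrent.atomic.LongAdder;

// What a metrics counter boils down to: a thread-safe count that only
// goes up, which a scraper can read at any time.
public class HitCounter {
    private final LongAdder hits = new LongAdder();

    public void increment() {
        hits.increment(); // called once per request
    }

    public long count() {
        return hits.sum(); // read by the metrics endpoint on each scrape
    }

    public static void main(String[] args) {
        HitCounter counter = new HitCounter();
        counter.increment();
        counter.increment();
        System.out.println(counter.count()); // prints 2
    }
}
```

Micrometer adds a name, description, and labels on top of this count, which is how the value surfaces in Prometheus as payments_get_counter_total.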
Make a few calls to this /payment endpoint by calling http://localhost:8081/mtg/payment?principal=100000&years=30&interest=6.1 from your browser (or from Postman or curl, as we did in Chapter 6). Then in Prometheus (http://localhost:9090), enter the metric payments_get_counter_total and hit Execute to see the growth. Try it in both the Prometheus Table tab and the Graph tab.
Call the mtg/payments POST endpoint a few times. Prometheus scrapes every 15 seconds by default, so leave that much time between calls. Your Prometheus graph of payments_query_duration_seconds should look something like Figure 12.10.
Before you run Alert Manager, you must configure it. The root of the chapter repository has an alertmanager.yml file, which looks like the following. Substitute your SMTP server information if you have it.
global:
  smtp_smarthost: 'smtp server'          # Email SMTP server
  smtp_from: 'sender@my_company.com'     # Your email address
  smtp_auth_username: 'username'         # Your username
  smtp_auth_password: 'pa55w0rd'         # SMTP password
route:
  # Default receiver (defined below) to send alerts
  receiver: 'email-receiver'
receivers:
  - name: 'email-receiver'
    email_configs:
      - to: 'notifications@my_company.com'
We need to tell Prometheus about our Alert Manager and rules, as well as how often to check for breaches. The prometheus-with-alerts.yml file adds an alerting section just below, and at the same indent level as, global.
# Alertmanager configuration
global:
  # Set the scrape interval to 15 secs. Default is 1 minute.
  scrape_interval: 15s
  # Evaluate rules every 15 secs. Default is 1 minute.
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager:9093 # Alert Manager IP/port, 9093 by convention
To override the global evaluation_interval for a given rule, open the rule file and specify
an “interval” entry for that rule. For example:
- name: custom-interval-rule
  interval: 1m # Will be evaluated every 1 minute
If you have an SMTP server, you can start Alert Manager and point it to the alertmanager.yml configuration file we defined earlier by running the following:
docker run --name alertmanager -p 9093:9093 \
  -v ./alert.rules.yml:/etc/alertmanager/alert.rules.yml \
  -v ./alertmanager.yml:/etc/alertmanager/alertmanager.yml \
  quay.io/prometheus/alertmanager \
  --config.file=/etc/alertmanager/alertmanager.yml
That should launch Alert Manager in your command shell.
Regardless of whether you configured SMTP, you can see the rule in Prometheus. In Docker Desktop, delete the
existing Prometheus container and run the following:
docker run --name prometheus -p 9090:9090 \
  -v ./prometheus-with-alerts.yml:/etc/prometheus/prometheus-with-alerts.yml \
  -v ./alert.rules.yml:/etc/prometheus/alert.rules.yml prom/prometheus \
  --config.file=/etc/prometheus/prometheus-with-alerts.yml
Go to the URL http://localhost:9090/alerts. You should see the HighCpuUsage alert. Pop that open
and you should see the alert, as in Figure 12.11.
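The repository's alert.rules.yml defines the HighCpuUsage rule. A representative sketch of such a rule, reusing the chapter's CPU utilization formula, looks like the following; the 80% threshold and the label and annotation values are illustrative:

```yaml
groups:
  - name: cpu-alerts
    rules:
      - alert: HighCpuUsage
        # Fire when average CPU utilization stays above 80% for 5 minutes
        expr: (1 - avg(rate(node_cpu_seconds_total{mode="idle"}[5m]))) * 100 > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Average CPU utilization above 80% for 5 minutes"
```

The for clause prevents a momentary spike from paging anyone: the expression must stay true for the whole window before the alert fires.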
TIP If you are on a Mac and you get an unknown developer message, go to Security and Privacy and approve it.
From your browser, navigate to http://localhost:3000 to see the Grafana login page. Log in with the username and password admin/admin. If you are prompted to change the password, you can either do so or click Skip.
Once you have logged in, you’ll need to add Prometheus integration. In the left navigation, click Data Sources
and Add Data Source. Enter http://localhost:9090 as the URL and then scroll all the way down to click
Save & Test. Figure 12.12 shows this configuration.
Then click the option to add a new dashboard. A dashboard consists of rows, and rows consist of visualizations.
Visualizations are the graphs, gauges, and tables that Grafana is known for.
Let’s start a new dashboard: click the gear, set a title and description, and accept the remaining defaults, as shown in Figure 12.13.
Save that to return to the page where you can add visualizations. Click Add and add a row; call it Infrastructure. We will add visualizations to that row for displaying infrastructure performance metrics.
Hover over “Row title” to reveal a gear, which you can click to title your row, as in Figure 12.14.
Next choose Add ➪ Visualization. On the top right, you can select a visualization type. We will use the default
time series and give the panel the title CPU. Below the graph is a place to enter your PromQL query. Click Code
and use the following query:
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
which is the formula for CPU utilization. Then click Run Queries, just above the query expression. You should see
a graph similar to the one in Figure 12.15.
Save that, and click Real World Java at the top to navigate back to the dashboard. Grab the graph from its title
bar and drag it into the Infrastructure row.
In the same way, add a new visualization called Memory, and enter the following query, which expresses used memory as a percentage of total memory:
100 - (avg by (instance) (node_memory_MemAvailable_bytes)
/ avg by (instance) (node_memory_MemTotal_bytes) * 100)
Then save the panel and drag and drop it to the right of the CPU graph. You should see something like
Figure 12.16.
The time selector will default to six hours, but play around with that by selecting various time ranges.
Time series is just one of the many graph types available. We encourage you to play around with the others by
editing a graph and selecting the various types. Then try different metrics and come up with your own.
Introducing Logging
The key players in the distributed logging arena are Elasticsearch (the “E” in the ELK stack), Splunk, and Loki. Splunk and Elasticsearch are the most popular, with Elasticsearch more prevalent in smaller companies. Loki is the newest entry in the field and also enjoys a large following, thanks to its seamless integration with Grafana and its LogQL query language, which is similar to Prometheus’s PromQL.
Whichever flavor your enterprise uses, distributed logging is a critical component in any service-based enterprise.
All the choices we mentioned are scalable and highly available.
Logstash is a forwarding service that can filter, transform, and transport strategic log messages to Elasticsearch.
Kibana is Elasticsearch’s analytics and visualization component, where you can seamlessly explore your data, and
those (Elasticsearch, Kibana, Logstash) comprise the “ELK” stack.
Elasticsearch is a Java application, and the download distribution comes bundled with its own Java runtime. Data is available in near real time. When you perform searches in Elasticsearch, the results are ranked by relevance to the search query you supplied.
In Elasticsearch, an index is the fundamental unit of storage. It roughly corresponds to a table in a relational
database, in that it stores related documents.
For collecting data, Elasticsearch provides the Logstash tool, the L in ELK. Logstash is an ETL tool used to
collect, filter, and transform source data and forward it to Elasticsearch.
TIP Extract, transform, and load (ETL) refers to the fundamental steps in a data engineering pipeline: data is extracted from some upstream source such as a database, REST call, files, or messaging system; transformed, for example by enriching it with missing fields or deleting unneeded fields; and finally loaded into a downstream system, which for our purposes would be Elasticsearch.
You can configure a data pipeline by specifying the appropriate input, filter, and output stages in a Logstash
configuration file.
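For example, a minimal pipeline that tails an application log, parses each line into fields, and forwards the result to Elasticsearch might look like the following sketch; the file path and grok pattern are illustrative:

```
input {
  file {
    path => "/var/log/myapp/*.log"    # log files to tail
  }
}
filter {
  grok {
    # Split "<timestamp> <level> <message>" lines into named fields
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]  # default Elasticsearch port
  }
}
```

The three stages map directly onto ETL: input extracts, filter transforms, and output loads.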
Elasticsearch provides its own analytics and dashboarding tool called Kibana, the “K” in ELK. By default, Kibana
listens on port 5601, so is accessed at localhost:5601 in a browser. We selected “Choose your own” for the
integration and saw the screen in Figure 12.17. Kibana is now connected to Elasticsearch, which it found at the
default port 9200.
Kibana provides some sample data to play around with via “Try sample data” on the home page. The drop-down labeled “Other sample data sets” contains data sets such as the one on the Sample eCommerce orders card.
Introducing Tracing
Similar in concept to distributed logging, distributed tracing is a technique for following the path of requests
through different components of a distributed system. In modern microservices, one request might be handled by
many services, and if something goes wrong, we need a way to explore the logs across all calls related to a failed
request. Tracing tools can provide real-time visualizations as requests flow through your system. They apply a
unique “trace ID” to all calls associated with each request, helping you understand how services interact, and
allowing you to diagnose issues and improve performance.
Tooling
OpenTelemetry is a widely adopted set of observability specifications, as well as an open-source reference implementation, for managing metrics, logs, and traces, with APIs in all popular languages. Other popular implementations include Jaeger and Zipkin, which collect, store, visualize, and analyze your logs and metrics.
FURTHER REFERENCES
➤➤ Prometheus
➤➤ https://prometheus.io/docs/introduction/overview: Prometheus
documentation
➤➤ https://prometheus.io/download: Prometheus and Linux Node Exporter
➤➤ https://github.com/prometheus-community/windows_exporter/
releases: Windows Exporter
➤➤ https://github.com/prometheus/node_exporter: Node Exporter flags
➤➤ https://github.com/prometheus-community/windows_exporter/blob/
master/README.md: Windows Exporter flags
➤➤ https://prometheus.io/download/#alertmanager: Alert Manager
➤➤ Other Tools
➤➤ https://docs.docker.com/engine/install: Download Docker
➤➤ https://grafana.com/grafana/download: Grafana
➤➤ https://www.elastic.co/downloads/elasticsearch: Elasticsearch
➤➤ https://www.elastic.co/downloads/logstash: Logstash
➤➤ https://www.elastic.co/downloads/kibana: Kibana
➤➤ https://www.7-zip.org/download.html: 7-Zip
➤➤ https://opentelemetry.io: OpenTelemetry Documentation
➤➤ https://docs.micrometer.io/tracing/reference/index.html: Micrometer
tracing
➤➤ https://www.jaegertracing.io: Jaeger Documentation
➤➤ https://zipkin.io: Zipkin Documentation
SUMMARY
In this chapter, you learned about the major components of observability, including metrics, logs, alerts, dashboarding, and tracing. You learned how to instrument your applications, middleware, and infrastructure using Prometheus and how to visualize them using Grafana. You also got an introduction to the distributed logging and tracing landscape.
13
Performance Tuning Your
Services
WHAT’S IN THIS CHAPTER?
Imagine you are shopping online or visiting your favorite news site or movie site. How long would you
expect to wait for the web page to load? The answer is probably not more than a few seconds. Online
vendors know that slow performance translates to lost customers. And that’s not just on an average day: websites must also make sure that they can handle large spikes in demand, such as during the Super Bowl or on days like Black Friday or Cyber Monday, when United States retail sites expect massive volumes of online shopping.
When you are learning to program, performance is generally not your primary concern. But in the enterprise, performance is a critical component in customer retention and managing operational costs. After all,
servers cost money, so maximizing server performance translates to fewer servers and therefore a bigger
bottom line. Similarly, when you deploy to cloud environments like AWS, you are billed based on usage, so
again we see that improving performance can reduce costs.
On the other hand, Donald Knuth, one of the pioneers of computer science, famously wrote that “premature optimization is the root of all evil.” Therefore, it is important to identify where the problem hot spots are and the severity of their impact so you can prioritize them and address the important issues first. We like to say “code for correctness and then correct for performance.”
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. The code can also be found at https://github.com/realworldjava/
Ch13-Performance. See the README.md file in that repository for details.
In this chapter, you’ll learn basic performance concepts, how to write performant code in Java, and how to
performance tune your application. You’ll also learn about some popular tools that can help you produce
performant code.
The following are some other terms used when discussing performance:
➤➤ Caching: This means storing data in memory to speed up slow operations.
➤➤ Bottleneck or hot spot: This is the part of the system that causes unacceptable performance.
➤➤ Profiler: This is the class of tools used to identify potential hot spots in your application.
➤➤ Memory leak: This is a situation where the program consumes memory without releasing it.
➤➤ Just-in-time (JIT) compilation: This is a capability of the JVM that discovers patterns in your application
usage and replaces bytecode with optimized native code.
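To make the memory-leak definition concrete, here is a deliberately leaky sketch; the class and allocation size are illustrative, not from the chapter's code:

```java
import java.util.ArrayList;
import java.util.List;

// A classic leak pattern: a static collection that only grows.
// Entries are added on every request but never removed, so the
// garbage collector can never reclaim them.
public class LeakyCache {
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void handleRequest(int requestId) {
        CACHE.add(new byte[1024]); // retained for the life of the JVM
    }

    public static int retainedEntries() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1_000; i++) {
            handleRequest(i);
        }
        System.out.println(retainedEntries() + " entries still retained");
    }
}
```

Run long enough, a pattern like this eventually exhausts the heap; a profiler would show the static CACHE list as the dominant retained object.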
BENCHMARKING
Suppose you identify a section of your code that is slow. Maybe it “pins” your CPU, using so much processing power that your machine is maxed out. Or maybe it is doing operations sequentially when it could be running them in parallel. Benchmarking allows you to measure the time it takes to run specific portions of your code. Running a benchmark both before and after you make a change lets you see if you have made the program faster. You’ll see examples of benchmarking using built-in Java APIs, and then you’ll learn how to use a library for more accurate results.
System.out.println(recursive);
System.out.printf("Took %d seconds", (end - start) / nanosPerSecond);
}
Always think through your approach. Although recursion is appropriate for many situations, in this case it was
the wrong approach because it was repeatedly recalculating the same intermediate results.
12586269025
Took 0 seconds
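The chapter's listing is reproduced only in part here. The following self-contained sketch captures the same experiment: time the work with System.nanoTime() and compare a naive recursive fibonacci, which recomputes subproblems, with an iterative version that computes each value once. The method names are ours, not the book's:

```java
public class FibBench {
    // Naive recursion: exponential time because it recomputes
    // the same intermediate results over and over.
    public static long fibRecursive(int n) {
        return n <= 1 ? n : fibRecursive(n - 1) + fibRecursive(n - 2);
    }

    // Iterative version: linear time, each value computed once.
    public static long fibIterative(int n) {
        long prev = 0, curr = 1;
        for (int i = 0; i < n; i++) {
            long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return prev;
    }

    public static void main(String[] args) {
        long nanosPerSecond = 1_000_000_000L;
        long start = System.nanoTime();
        long result = fibIterative(50); // the recursive version takes minutes
        long end = System.nanoTime();
        System.out.println(result);
        System.out.printf("Took %d seconds%n", (end - start) / nanosPerSecond);
    }
}
```

The iterative run finishes in well under a second, which is exactly why the printed duration rounds down to 0 seconds.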
SYSTEM.NANOTIME() OR SYSTEM.CURRENTTIMEMILLIS()?
Microbenchmarking
The code in the previous example was so inefficient that we didn’t need a detailed benchmark to see that the time improved. But in real life we must often locate and tune portions of code with minor improvements. Microbenchmarking refers to the act of measuring the performance of small, isolated portions of code.
Writing good benchmarks with precision is hard. You must ensure that no competing tasks (like logging, garbage collection, and so forth) are happening at the same time that might affect the benchmark. You also must ensure that the JVM doesn’t optimize the code in a way that makes your benchmark invalid. For example, the application might have portions of code that produce an unused result. The JVM may consider such portions dead code and optimize them away so they don’t run! Additionally, the JVM optimizes code where possible, so it may be faster on later runs than earlier ones in a loop. Or the JIT compiler might convert bytecode to native code. All of these can greatly skew any benchmarks that were produced before the optimization kicked in.
Luckily, Java Microbenchmark Harness (JMH) exists so you don’t have to worry about such optimizations and
can just focus on measuring your code. The JMH maintainers recommend using the JMH archetype
jmh-java-benchmark-archetype to generate a new project. You can add your project as a dependency to
give it access to run your code:
mvn archetype:generate \
  -DinteractiveMode=false \
  -DarchetypeGroupId=org.openjdk.jmh \
  -DarchetypeArtifactId=jmh-java-benchmark-archetype \
  -DgroupId=com.wiley.realworldjava.jmh \
  -DartifactId=jmh-benchmark \
  -Dversion=1.0.0
This runs the archetype:generate goal to generate a new project using the specified Maven archetype. The
first -D system property just says to not prompt for confirmation. The next two properties specify the JMH
archetype. The last three you use to assign a GAV (group ID, artifact ID, and version) for your project.
The generated code is a Maven project. The archetype created one Java class named MyBenchmark.java with one empty method. Let’s replace that with the following code, which calls the sluggish fibonacci() implementation from the previous example:
package com.wiley.realworldjava.jmh;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.infra.Blackhole;

public class MyBenchmark {
    @Benchmark
    public static void benchmark(Blackhole blackhole) {
        // Consume the result via the Blackhole so the JVM cannot
        // eliminate the call as dead code
        blackhole.consume(fibonacci(50)); // fibonacci() from the previous example
    }
}
Result "com.wiley.realworldjava.jmh.MyBenchmark.benchmark":
0.028 ±(99.9%) 0.001 ops/s [Average]
(min, avg, max) = (0.028, 0.028, 0.029), stdev = 0.001
CI (99.9%): [0.028, 0.029] (assumes normal distribution)
At the end of the output, you get a summary. It took almost 30 minutes to run the benchmark. That’s time
to drink a lot of coffee! The output also summarizes the result and variance. Rerunning using the faster
fibonacci() implementation reports far more operations per second:
# Run progress: 80.00% complete, ETA 00:01:40
# Fork: 5 of 5
# Warmup Iteration 1: 3080887445.320 ops/s
# Warmup Iteration 2: 3086292064.630 ops/s
# Warmup Iteration 3: 3088137264.756 ops/s
# Warmup Iteration 4: 3089568664.300 ops/s
# Warmup Iteration 5: 3084161269.692 ops/s
Iteration 1: 3097137124.647 ops/s
Iteration 2: 3080164130.908 ops/s
Iteration 3: 3095501587.777 ops/s
Iteration 4: 3088198606.124 ops/s
Iteration 5: 3084219015.220 ops/s
Result "com.wiley.realworldjava.jmh.MyBenchmark.benchmark":
3054970163.963 ±(99.9%) 129045935.961 ops/s [Average]
(min, avg, max) = (2229602431.413, 3054970163.963, 3110210135.698),
stdev = 172272632.473
CI (99.9%): [2925924228.001, 3184016099.924] (assumes normal distribution)
Using the unit specifiers, all of the following represent the same configuration:
-Xms1g
-Xms1G
-Xms1024m
-Xms1024M
-Xms1048576k
-Xms1048576K
-Xms1073741824
The first two are the clearest since they don’t require you to do math to determine the value is 1 gigabyte!
Table 13.3 shows the most common memory flags. It is more common to use the short form in configuration
even though it is less explicit when reading.
-Xms -XX:InitialHeapSize Minimum and initial size of the heap. It must be greater than
1 megabyte and a multiple of 1024.
-Xmx -XX:MaxHeapSize Maximum heap size. It must be more than 2 megabytes and a
multiple of 1024.
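Putting the table's flags together, a typical server launch pins the initial and maximum heap to the same size so the heap never needs to grow at runtime; the JAR name and sizes here are illustrative:

```
java -Xms1g -Xmx1g -jar myapp.jar
```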
The use of -X and -XX might seem rather arbitrary, but how these designations evolved is
interesting. -X was originally intended to be a nonstandard option, not guaranteed to work on
all JVMs. -XX, on the other hand, represented more stable, more advanced, or more granular
options. However, the -X options listed in Table 13.3 have become firm standards and are not
likely to change anytime soon.
All of the options described in this book, including -X and -XX options, are in Oracle’s JDK
documentation and are widely supported.
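You can also confirm which values the JVM actually picked up from inside your program. The following is a minimal sketch (the class name is ours) using the standard Runtime API:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime runtime = Runtime.getRuntime();
        // maxMemory() reflects -Xmx (or the JVM's ergonomic default);
        // totalMemory() is the currently committed heap, at least -Xms
        System.out.println("Max heap (bytes):       " + runtime.maxMemory());
        System.out.println("Committed heap (bytes): " + runtime.totalMemory());
    }
}
```

Running this with java -Xmx1g HeapInfo should report a maximum heap near 1073741824 bytes, though the exact number can vary slightly by garbage collector.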
from Eden, survivors are moved to the other survivor space (S1), and the survivors from S0 are also moved to S1.
S0 is reclaimed, and S1 becomes the active survivor space, and so on. After a configurable number of movements
between S0 and S1, surviving objects are promoted to Old Generation, also known as Tenured Space.
There is also a Metaspace for things like static data and class objects that must remain available for the entire
life of the program. (Metaspace replaces the permanent generation space, also known as PermGen, from earlier
versions of Java.)
This section explains the common types of garbage collectors provided in Java, as well as their settings. Before
you configure a garbage collector, you want to check if the default meets your needs. The JDK will attempt to pick
the best garbage collection algorithm based on your configuration, machine, and type of JVM available.
The specific garbage collectors have custom settings you can tune. See the Java documentation page’s advanced
garbage collection section for details if you find yourself tuning your garbage collector.
TIP For settings that don’t require multiples of 1024, it is good to use powers of two for
the numbers, e.g., 2, 4, 8, 16, 32, 64, 128, 256, and 512. Why not 1024 you ask? Because
1024 kilobytes is one megabyte. You’d move up to the next unit of measure instead of typing
a larger number.
Again, this stops all work while it does its garbage collection. If you need to explicitly request the parallel
garbage collector, use the option -XX:+UseParallelGC.
Using G1GC
Garbage First Garbage Collector (G1GC) is designed for multicore servers with a lot of available memory. It
works by splitting the heap into logical sections, which allows G1GC to clean up all generations at the same time,
section by section. As a result, pauses are rare, and stop-the-world is not a problem.
While there are still Eden, survivor, and tenured regions, they are located throughout the heap in different logical
sections. The result is that G1GC is going to be a better algorithm for a large heap.
You might be thinking that this sounds great and wonder why G1GC doesn’t get used all the time. The main
reason is that it requires a lot of RAM. A heap size of 6GB or higher is recommended for G1GC. If you aren’t
working with a large memory space, parallel might be better for you.
Oracle/OpenJDK defaults to G1GC. However, if you are relying on this garbage collector, you should still specify
the option -XX:+UseG1GC.
TIP The Z Garbage Collector (ZGC) is a separate low-latency collector designed for very
large heaps. It uses the option -XX:+UseZGC.
Give your test plan a name. You can see “Rec Center” is our test plan name. You can enter an optional comment
to express the purpose of your test.
Next create a thread group, which defines the number of simulated users, how often they should send requests to
your application, and how many requests they should send all together. Right-click your test plan and then select
Add ➪ Threads (Users) ➪ Thread Group.
You can give the thread group a name and configuration options. For example, in Figure 13.2 you can see this
thread group contains five users, each with two requests. The ramp-up period is the default of one second, which
tells JMeter how long to take to start all of the users.
Now that there is a thread group, it is time to tell it what to do! Right-click the thread group and choose Add ➪
Sampler ➪ HTTP Request.
In Figure 13.3, you can see the following information changed from the defaults:
➤➤ Name: This is the display name of the request, in our case “View status.”
➤➤ Server name or IP: We used localhost since this test is running on a laptop. You can use a server name or
DNS alias here.
➤➤ Port number: We used 8080, corresponding to the default port used in our sample web application.
➤➤ HTTP request: We chose GET, the HTTP verb usually associated with a query that does not
change data.
➤➤ Path: This is the remainder of the URL. Be sure to start with a leading slash such as /status in
this case.
TIP If you have more than a few requests, it is easier to configure the server name/IP and
port number as defaults so you don’t have to specify them on each request. This is done by
right-clicking the test plan and choosing Add ➪ Config Element ➪ HTTP Request Defaults.
It is common to do load tests on operations that update data as well, so let’s create another HTTP request, as
shown in Figure 13.4. Like in the GET example, there is a name, server name, and port number. This time the
HTTP request is POST since we are trying to change the state of the borrowed items. The URL again begins with
a slash: /borrowItem. This time we provide the itemId parameter basketball at the bottom, as required by
our POST request.
For this test, we are going to add one more POST request, to the /returnItem endpoint. The configuration is the
same as Figure 13.4 except that it has a different URL.
Since you want to view the results, you need to add one or more listeners by right-clicking the thread group and
choosing Add ➪ Listener. This menu item presents you with a variety of output formats. This example uses
Summary Report and sets the output filename to summary.csv.
No problem. JMeter has lots of settings, such as for passing login credentials or for working
with cookies. Within your tests you can configure different orderings and combinations. You
can add “think time” delays or have JMeter randomly select from a list of URLs. JMeter can
do almost anything you could possibly need. This example is purposely simple so you can
understand the basics. To go deeper, we encourage you to explore the truly excellent JMeter
documentation. See “Further References” to learn what to do when you encounter more
complex test scenarios.
For now, it is helpful to understand the most common elements of a test plan so you know
what to look for in the documentation when the time comes:
➤➤ Thread group: Controls the frequency of the threads being run. Each test plan has one or
more thread groups.
➤➤ Controller: For sending requests. The sampler (HTTP request) you saw earlier is the
simplest type of controller. There is also a variety of logic controllers to customize when
requests are sent. For example, you can add randomization for the order of the requests,
add the length of time tests should run, or even delegate to another test plan.
➤➤ Listener: For accessing information such as the results you saw earlier.
➤➤ Timer: Allows adding delays.
➤➤ Assertion: For adding data-validation tests to your JMeter tests, such as for checking
values in the UI.
Now that you have a test plan, you can save it as a JMX file by going to File ➪ Save Test Plan As and entering
your desired filename and location before saving. (Despite the shared acronym, JMeter's JMX test plan format is
unrelated to Java Management Extensions.)
The .jmx file is saved in src/main/resources in the repository. You can open the JMeter UI and choose
File ➪ Open to view it with all the settings used in this book.
A JMX file is an XML file. You’ll learn about XML in the Appendix, “Reading and Writing XML, JSON, and
YAML.” For now, know that XML is a structured text format, and you can open the file in a text editor if you are
curious about what is inside.
There are also columns for the number of bytes in the request/response, and metrics on idle time and latency. The
following is the beginning of the report for these initial columns, with the elapsed time as the second column:
timeStamp,elapsed,label,responseCode,threadName
1.72279E+12,202,View status,200,Borrowers 1-1
1.72279E+12,37,View status,200,Borrowers 1-2
1.72279E+12,9,View status,200,Borrowers 1-3
1.72279E+12,10,View status,200,Borrowers 1-4
1.72279E+12,11,View status,200,Borrowers 1-5
1.72279E+12,1024,Borrow,200,Borrowers 1-1
1.72279E+12,10,Return,200,Borrowers 1-1
1.72279E+12,9,View status,200,Borrowers 1-1
1.72279E+12,1015,Borrow,200,Borrowers 1-1
1.72279E+12,13,Return,200,Borrowers 1-1
1.72279E+12,5025,Borrow,200,Borrowers 1-2
1.72279E+12,10,Return,200,Borrowers 1-2
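If you want to post-process a JMeter CSV report programmatically rather than in a spreadsheet, a small sketch like the following works (the class and method names are ours, and the column positions are assumed from the header shown above):

```java
import java.util.List;

public class JMeterCsvStats {

    // Average of the "elapsed" column (index 1), skipping the header row
    static double averageElapsed(List<String> csvLines) {
        return csvLines.stream()
                .skip(1)
                .mapToLong(line -> Long.parseLong(line.split(",")[1]))
                .average()
                .orElse(0.0);
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
                "timeStamp,elapsed,label,responseCode,threadName",
                "1722790000000,202,View status,200,Borrowers 1-1",
                "1722790000100,37,View status,200,Borrowers 1-2",
                "1722790000200,9,View status,200,Borrowers 1-3");
        System.out.println("Average elapsed ms: " + averageElapsed(lines));
    }
}
```

In practice you would read the lines with Files.readAllLines() against your summary.csv instead of hard-coding them.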
Using JDK Tools ❘ 365
TIP Add the JDK bin directory to your path to run the commands in this section so that
you don’t have to type the fully qualified path at the command line.
The examples in this section show how the application behaves when hitting the
URL http://localhost:8080/emailSummary in our recreation center
application. This endpoint produces a report of how many emails were received
each month. Figure 13.5 shows the output from loading this page. Judging from
this report we can assume that June (month 6) had an inordinate amount of spam!
We have purposely coded this endpoint to use a very inefficient algorithm, which
is invoked every time the page is hit. The algorithm reads a file and compares it to
the previously loaded file. The code takes less than a second to run the first time
and about 19 seconds on subsequent runs. To make matters worse, it also wastes
memory. The last section in this chapter will show how to improve the situation. For
now, let’s take a look at the tools themselves.
Now let’s look at some tools for investigating performance problems.
Java Flight Recorder logs events to a file named flight.jfr. You can generate a text report by executing the
jfr command from the command line, or you can see the results in a visual format directly in Java Mission
Control. The next section covers Java Mission Control.
Java doesn’t automatically run Java Flight Recorder; you have to turn it on. There are several ways to turn it on.
One option is to pass the -XX:StartFlightRecording option when starting your application. You can also
configure additional options. For example, -XX:StartFlightRecording=delay=5s,maxSize=1G means
ignore any data from the first five seconds of the program and limit the file size to one gigabyte, after which the
file rolls over. The java tool documentation page has the full list of options.
Alternatively, you can use the jcmd program to tell a running JVM application to start profiling it with Java
Flight Recorder. This approach accepts either a process ID (PID) or Java program name as input. So, for example,
if our application is running as process ID 14020, then the following two commands would be equivalent ways of
profiling it under JFR:
jcmd RecCenterApplication JFR.start filename=flight.jfr
jcmd 14020 JFR.start filename=flight.jfr
When you start Flight Recorder from the command line, it kindly displays a command that you can run later to
dump all of the output to a file. You can launch that dump command at any time to get the full dump. Regardless
of whether you ran the command using a process ID or Java program name, the dump command uses a process
ID. For example, it could output the following:
Use jcmd 14020 JFR.dump name=1 to copy recording data to file.
Running that command tells you the size of the Flight Recorder file and where it was written to on disk:
Dumped recording "1", 519.1 kB written to:
The program was running for only a few seconds and generated 500KB of output. As you might imagine, this file
gets large quickly!
Version: 2.1
Chunks: 2
Start: 2024-08-10 14:14:27 (UTC)
Duration: 103 s
Finally, if you want to search for something in particular, you can use the print option to list the results in XML
or JSON format. However, if you don’t know what specifically you are looking for or you just want to explore,
omit the print option and just use Java Mission Control to open the file.
TIP JConsole will introduce some performance overhead on the application it is profiling, so
it is not recommended to run it in production.
Using VisualVM
Like Java Mission Control, VisualVM was originally distributed with the JDK. Originally named jvisualvm, it
is now available as a stand-alone tool that you can download from the link in “Further References.”
TIP IntelliJ, Eclipse, and VS Code all have plugins/extensions for using VisualVM straight
from the IDE.
To launch VisualVM, call visualvm.exe on Windows or visualvm on Linux/Mac. Notice that you don’t pass
in a process ID; VisualVM automatically provides all the Java processes it can find in the left navigation. Just
double-click the one you want information on.
Like JConsole, VisualVM can provide real-time monitoring of your process, as shown in Figure 13.9. You can
toggle between the heap and Metaspace graphs on this Monitoring tab. Clicking Heap Dump shows you which
classes are using the most memory with a single click.
The Threads tab provides information on CPU use and wait time. The Sampler tab shows you CPU and memory
usage details in real time. The Profiler tab allows you to gain insight into your application. Besides profiling a
running app, you can also load JFR files or dump files that you generated at the command line.
You might have noticed some of the tools in this chapter overlap. VisualVM is commonly used for initial or quick
profiling and is the fastest to get started with. If that isn’t enough to find the problem, you will generally find that
moving on to Java Flight Recorder/Java Mission Control or an alternate profiler is worth the time.
OTHER PROFILERS
There are many free and commercial profilers. Like VisualVM, they let you dive into the CPU,
memory, I/O, garbage collection, and any other parts of your application likely to cause a
performance problem. Each works differently, but the ideas are the same. Other common
profilers include JProfiler, YourKit Profiler, Dynatrace, and AppDynamics.
IntelliJ comes with a built-in profiler as well. IDE tools are great for improving performance in
development or against a test server. The more powerful profilers are useful for testing in upper
environments such as QA where you have production-like conditions, or even in production
itself.
GCUtil also displays information about how long it took to run garbage collection, along with information about
Metaspace use. The jstat command has many options about the classloader, heap, and garbage collection. If
you find yourself needing real-time information, the jstat documentation page covers all the options.
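If you would rather gather the same kind of data from inside a running program than from jstat, the standard management API exposes it. A quick sketch (the class name is ours):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // One bean per collector, for example the young and old generation collectors
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + ": collections=" + gc.getCollectionCount()
                    + ", totalTimeMs=" + gc.getCollectionTime());
        }
    }
}
```

The names and counts you see depend on which garbage collector the JVM selected.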
BIG O NOTATION
Big O notation is used to describe the asymptotic growth pattern of an algorithm as a function
of the input size. Asymptotic means what happens as the data gets larger and larger,
approaching infinity.
The idea is that if doubling the input size causes the execution time to grow from 10 seconds
to 100 seconds, that is far more significant than if it grew to only 20 seconds. Big O
notation provides a shorthand to discuss these scenarios. If performance grows linearly with
the input size n, then we say the complexity is O(n). If the performance varies as the square of
the input, then we say the complexity is O(n²).
You should be able to recognize these common ones in increasing cost:
➤➤ O(1) – constant time: The time doesn't depend on the size of the input. Reading a single
item from an ArrayList is an example.
➤➤ O(log n) – logarithmic time: The time increases much more slowly than the input. Searching
for a value in a sorted list is an example.
➤➤ O(n) – linear time: The time is proportional to the size of the input. Searching for a
value in an unsorted list is an example.
➤➤ O(n²) – quadratic time: The time increases as the square of the input. The nested loops in
fileChangedSinceLastRead() are an example.
➤➤ O(2ⁿ) – exponential time: The time doubles with each additional item in the input. The
Fibonacci recursion in the benchmarking section is an example.
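To make the last two categories concrete, here is a sketch of the two Fibonacci implementations discussed in the benchmarking section (the class and method names are ours): a naive exponential recursion and a linear loop that computes the same values.

```java
public class Fibonacci {

    // O(2^n): each call spawns two more recursive calls
    static long slowFib(int n) {
        if (n <= 1) return n;
        return slowFib(n - 1) + slowFib(n - 2);
    }

    // O(n): a single pass with constant extra memory
    static long fastFib(int n) {
        long prev = 0, curr = 1;
        for (int i = 0; i < n; i++) {
            long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return prev;   // prev holds fib(n) after n iterations
    }

    public static void main(String[] args) {
        System.out.println(slowFib(30));   // 832040
        System.out.println(fastFib(30));   // 832040, computed dramatically faster
    }
}
```

Both return the same answers; only the growth pattern of the work differs, which is exactly what the JMH runs earlier in the chapter measured.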
The next problem is that this program wastes memory. There are a million rows in the file. Yet the algorithm
needs only a summary. It would be better to store the results of the calculation rather than the original data since
it is used for only this one purpose.
Finally, calculateTotals() converts the String to an int and then boxes it to an Integer. This isn’t a
problem on its own. However, the result is only used for counting, so it might as well have stayed a String.
Many compilers are smart enough to optimize this type of inefficiency out of existence. However, it is better if
you can avoid unnecessary work in the first place.
The following is the more efficient version of the code:
10: private Map<String, Long> emailSummary = getEmailSummary();
11:
12: @GetMapping("/emailSummary")
13: public String emailSummary(Model model) {
14: model.addAttribute("byMonth", emailSummary);
15: return "emailSummary";
16: }
17: private Map<String, Long> getEmailSummary() {
18: Path path = Path.of("src/main/resources/lastYear.txt");
19: List<String> fileData;
20: try {
21: fileData = Files.readAllLines(path);
22: } catch (IOException e) {
23: throw new UncheckedIOException(e);
24: }
25: return fileData.stream()
26: .collect(Collectors.groupingBy(Function.identity(),
27: TreeMap::new, Collectors.counting()));
28: }
In addition to being faster, the code is also shorter and clearer!
Using SonarQube
As you learned in Chapter 4, “Automating Your CI/CD Builds with Maven, Gradle, and Jenkins,” you can
run SonarLint in your IDE or SonarQube in your build. Both provide a set of rules tagged as performance.
Table 13.4 shows examples of rules that it can detect.
➤➤ entrySet() should be iterated when both the key and value are needed: It is slightly faster
to use entrySet() than to continually call get().
➤➤ Synchronized classes Vector, Hashtable, Stack, and StringBuffer should not be used: These
APIs lock the entire collection during any access and were replaced with more efficient
versions long ago.
➤➤ wait(...) should be used instead of Thread.sleep(...) when a lock is held: Sleeping while
holding a lock can cause scalability issues and potential deadlocks. wait() is preferred
because it wakes up as soon as it is notified, whereas sleep() must finish its sleep time.
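To see why the wait()/sleep() rule matters, consider this sketch (the class name is ours). The waiting thread releases the lock inside wait() and resumes as soon as it is notified; calling Thread.sleep() inside the synchronized block instead would keep the lock held for the full sleep time, blocking every other thread that needs it.

```java
public class ReadySignal {
    private final Object lock = new Object();
    private boolean ready = false;

    // Preferred: wait() releases the lock and returns as soon as it is notified
    public void awaitReady() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {      // loop guards against spurious wakeups
                lock.wait();
            }
        }
    }

    public void markReady() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();     // wakes all waiters immediately
        }
    }
}
```

A thread blocked in awaitReady() proceeds the moment another thread calls markReady(), rather than after some arbitrary sleep interval expires.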
Considering Performance
When you work on performance tuning, you are likely to find the problem lies in one of a few areas. This section
provides a checklist of ideas to consider for each potential bottleneck once you have used a profiler to determine
the cause.
Network/Bandwidth
➤➤ Batch/reduce the number of network calls
➤➤ Avoid returning unnecessary data
➤➤ Cache data
CPU
➤➤ Use a more efficient algorithm
➤➤ Use parallelization (see Chapter 9)
➤➤ Do work asynchronously where possible
Memory
➤➤ Use more efficient data structures
➤➤ Only save data when you need it
➤➤ Increase allocated memory
I/O
➤➤ Cache data
➤➤ Use a logger instead of printlns (see Chapter 5, “Capturing Application State with Logging
Frameworks”)
➤➤ If using a database, use a connection pool
FURTHER REFERENCES
➤➤ Java Performance (O’Reilly, 2020)
➤➤ https://github.com/openjdk/jmh
JMH
➤➤ https://docs.oracle.com/en/java/javase/21/docs/specs/man/java.html
java configuration options
➤➤ https://docs.oracle.com/en/java/javase/21/docs/specs/man/jstat.html
jstat configuration options
➤➤ https://speakerdeck.com/cguntur/
java-garbage-collection-a-journey-until-java-13-darkbg
Detailed presentation on garbage collection
➤➤ https://jmeter.apache.org/download_jmeter.cgi
Download JMeter
➤➤ https://jmeter.apache.org/usermanual/index.html
JMeter documentation
➤➤ https://www.oracle.com/java/technologies/jdk-mission-control.html
Download Java Mission Control
➤➤ https://visualvm.github.io/download.html
Download Java VisualVM
SUMMARY
In this chapter, you learned about performance concepts and tuning. You saw how to create a microbenchmark
and perform low-level comparison. Then you learned about garbage collection and how to configure the JVM
for your application. You used JMeter to run load tests and a variety of tools to gather data about running
applications and identify performance problems. Finally, you saw how to improve applications once a problem
is identified.
14
Getting to Know More
of the Ecosystem
WHAT’S IN THIS CHAPTER?
➤➤ Writing Javadoc
➤➤ Comparing JVM Languages
➤➤ Exploring Jakarta EE
➤➤ Comparing Database Types
➤➤ Learning About Integrations
➤➤ Deploying Java
➤➤ Building REST APIs
➤➤ Picking a Virtual Machine
➤➤ Exploring Libraries
➤➤ Securing Your Applications
➤➤ Staying Informed About Changes
As we mentioned in the introduction, this book introduced you to many of the most common technologies
that you will encounter as you grow in your career as a Java engineer. The Java ecosystem is vast, and it
isn’t possible to cover everything, or learn everything, in one go!
This chapter gives a high-level overview of many other technologies that you’ll hear about or work with at
some point. Understanding what they are for will help you follow conversations and know when to come
back to the further references if you are about to work with them.
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. You can also find the code at https://github.com/realworldjava/
Ch14-More-Ecosystem. See the README.md file in that repository for details.
WRITING JAVADOC
When you are learning about a Java API, for example Executors in Chapter 9, “Parallelizing your Application
Using Java Concurrency,” you would check the Javadoc for information about available methods, parameters,
and more. You might be wondering how that document got created.
In the following example, the code is annotated with special Javadoc comments as shown:
4: /**
5: * An object that contains information about
6: * topics the reader may want to learn.
7: *
8: * <p><b> There is a lot more to learn!</b>
9: * To use:
10: * {@snippet lang = java:
11: * Learning learning = new Learning();
12: * learning.addTopic("Docker");
13: * }</p>
14: */
15: public class Learning {
16: private Set<String> topics;
17: /**
18: * Creates a new Learning object
19: */
20: public Learning() {
21: topics = new HashSet<>();
22: }
23: /**
24: * Stores a topic the reader wishes to learn.
25: *
26: * @param topic the name of the topic to learn
27: * @return {@code true} if this topic was not already stored
28: */
29: public boolean addTopic(String topic) {
30: return topics.add(topic);
31: }
32: }
Lines 4–14, 17–19, and 23–28 show three Javadoc comments. As you can see from line 8, Javadoc comments can
contain a limited set of HTML tags. Lines 10–13 show a feature added in Java 18 that allows your IDE and other
generated documentation to format sections of your Javadoc with Java and other language-specific code examples.
Lines 26 and 27 show the @param and @return tags, which provide information about the method
parameter and return types. Javadoc has many more features such as additional package-level documentation. See
“Further References” for more on writing Javadoc.
Comparing JVM Languages ❘ 379
You can generate Javadoc as part of your build. For example, using the maven-javadoc-plugin in a
<reporting> tag causes Javadoc to be created when you run mvn site.
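Since Java 9 you can also invoke the javadoc tool programmatically, which can be handy in custom build scripts. The following is a sketch (the class name, output directory, and source path are hypothetical):

```java
import java.util.spi.ToolProvider;

public class GenerateJavadoc {
    public static void main(String[] args) {
        // The JDK exposes its command-line tools through the ToolProvider SPI
        ToolProvider javadoc = ToolProvider.findFirst("javadoc")
                .orElseThrow(() -> new IllegalStateException("javadoc not found"));
        // Same arguments you would pass on the command line
        int exitCode = javadoc.run(System.out, System.err,
                "-d", "build/docs",
                "src/main/java/com/wiley/realworldjava/more/Learning.java");
        System.out.println("javadoc exit code: " + exitCode);
    }
}
```

This runs the same tool as the javadoc command in the JDK bin directory, so the generated output is identical.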
Sampling Kotlin
Kotlin is a popular language used for writing Android applications and is becoming a popular alternative to Java.
Java code can call Kotlin code and vice versa. Kotlin allows you to write more concise code and offers handy
features such as catching null pointer errors at compile time instead of runtime. The following example shows
some Kotlin features:
1: package com.wiley.realworldjava.more
2:
3: class Destination(val city: String, val state: String)
4:
5: fun countCitiesInCalifornia(cities: List<Destination>): Int {
6: return cities.count { it.state == "California" }
7: }
8:
9: fun main() {
10: val cities = listOf(Destination("Atlanta", "Georgia"),
11: Destination("San Jose", "California"),
12: Destination("Denver", "Colorado"),
13: Destination("San Diego", "California"))
14:
15: val californiaCityCount = countCitiesInCalifornia(cities)
16: print("Visited ${cities.size} cities " +
17: "starting with ${cities.first().city}");
18: println(" including $californiaCityCount in California")
19: }
Like in Java, the package statement on line 1 is optional. Line 3 shows how to create a data class. These are
similar to Java records, being a one-liner to create a simple data structure. The val in the parameter declarations
means each property is read-only and cannot be reassigned, in contrast to var, which would be mutable. Unlike
Java, the type declaration String comes after the parameter name.
Lines 5–6 show how to create a function in Kotlin; this one takes a single List parameter named cities and
returns an Int. The implementation uses a concise functional programming statement to count the number of
Destination values that have a state of California. Like Java, a lambda expression is used to encapsulate
the logic for matching. However, it is more concise than Java. There is no lambda variable required, since it is
provided by default.
Line 9 declares a main method, which is much shorter than the public static void main(String[]
args) familiar to all Java programmers. The args parameter is optional, and we omit it since it is not used.
Lines 10–13 create an immutable list using the listOf() function. Notice that the new keyword is not used
when instantiating the Destination objects.
Lines 16–18 show concise code for output. System.out is omitted when printing. Additionally, string
interpolation is used: variables prefixed with $ in the string, such as $californiaCityCount, are evaluated
in place. Expressions can also appear inside ${}, allowing methods to be called directly inside the string.
Finally, note that size is used instead of size() since Kotlin provides size as a property to allow for more
concise code than Java.
Another thing that makes the code more concise is the lack of semicolons to end statements. If you noticed the
word “concise” used a lot in the explanation, you understand a key benefit of Kotlin: more concise code!
Sampling Groovy
Groovy is a scripting language, which means you do not have to compile it. As with many other scripting
languages, Groovy does not require types to be specified. The following shows the Kotlin example code translated
to Groovy:
1: class Destination {
2: String city
3: String state
4:
5: Destination(String city, String state) {
6: this.city = city
7: this.state = state
8: }
9: }
10:
11: private static int countCitiesInCalifornia(List<Destination> cities) {
12: cities.count { it.state == "California" }
13: }
14:
15: def cities = [
16: new Destination("Atlanta", "Georgia"),
17: new Destination("San Jose", "California"),
18: new Destination("Denver", "Colorado"),
19: new Destination("San Diego", "California")
20: ]
21:
22: def californiaCityCount = countCitiesInCalifornia(cities)
23: print "Visited ${cities.size()} cities " +
24: "starting with ${cities.first().city}"
25: println " including $californiaCityCount in California"
In Groovy the package declaration is also optional. Lines 1–9 create a class. While properties are generated
automatically from the instance variables, the positional constructor must be coded explicitly in order to be called.
Lines 11–13 declare a function using a signature that should look familiar from Java. The implementation
matches the Kotlin version except the return keyword is missing, even though the method has a return value. If
you omit the return, Groovy automatically returns the value produced in the last statement in the function.
Notice how there is no main method required! Lines 15–20 create a mutable list using brackets ([]) around the
contents. Unlike Kotlin, note that new is required to create an instance of an object.
Line 22 shows that Groovy uses def when you don’t want to code the type of a variable. Lines 23–25 are the
same as Kotlin except for size(), which must use the method name since a shorthand property is not available.
Sampling Scala
Scala is a compiled language that was designed to process large amounts of data efficiently as well as for working
with distributed systems. Additionally, Scala has built-in support for many functional programming patterns. To
get a feel for the syntax, here’s the example you saw in Kotlin and Groovy rewritten for Scala:
1: package com.wiley.realworldjava.more
2:
3: case class Destination(city: String, state: String)
4:
5: object Travel {
6:
7: private def countCitiesInCalifornia(
8: cities: List[Destination]): Int = {
9: cities.count(_.state == "California")
10: }
11:
12: def main(args: Array[String]): Unit = {
13: val cities = List(
14: Destination("Atlanta", "Georgia"),
15: Destination("San Jose", "California"),
16: Destination("Denver", "Colorado"),
17: Destination("San Diego", "California")
18: )
19:
20: val californiaCityCount = countCitiesInCalifornia(cities)
21: print(s"Visited ${cities.size} cities " +
22: s"starting with ${cities.head.city}")
23: println(s" including $californiaCityCount in California")
24: }
25: }
You can see different JVM languages have some elements in common. Scala uses def when creating functions
like Groovy. The val declaration and a return value of Int are like Kotlin.
There are unique elements as well. Line 3 shows case classes, a breed of class used to produce immutable data
structures. Generics are available, but where Java uses < and >, Scala uses square brackets, as you can see in the
type declaration of List[Destination] in line 8. The implementation should look familiar from Groovy
except _ is used as the default lambda variable.
Line 12 uses Unit, the Scala equivalent of void in Java. On lines 21–23, you can see that s precedes a string to
tell the runtime to use string interpolation. Line 22 also shows head is provided instead of using first() to get
the first element.
EXPLORING JAKARTA EE
As we mentioned in Chapter 1, “How We Got Here: History of Java in a Nutshell,” Jakarta EE (Enterprise
Edition) is an alternative to Spring for enterprise applications. Jakarta EE was previously called Java EE before
it was open-sourced.
Some of the concepts you see in Spring depend on Jakarta EE. For example, in Chapter 6, “Getting to Know the
Spring Framework,” you learned that filters are also known as servlet filters. A servlet is a class that processes
a request and returns a response. The most common type is jakarta.servlet.http.HttpServlet, which is
for web requests and responses.
Jakarta EE is organized into specifications, and you can use just the parts you want. For example, the core profile
is meant for microservices while the web profile is meant for web applications.
Table 14.1 gives you a feel for what is available in Jakarta EE. You might notice this table has more rows than the
Spring projects listed in Table 6.1. That’s because Jakarta EE projects are more granular. Jakarta EE and Spring
are excellent competitors and provide most of the common functionality that you’d expect. The organization you
work for is likely to have a preference, so you should use that one!
➤➤ Jakarta EE Platform: Umbrella project for the most common parts of Jakarta EE, including the core profile
and web profile.
➤➤ Jakarta XML Binding: Maps Java objects to and from XML (a competitor of Jackson, which was used in the
Appendix, “Reading and Writing XML, JSON, and YAML”).
Database: Zoo

Table: exhibits
id (integer, primary key)   name (varchar(255))   num_acres (numeric)
1                           African Elephant      7.5
2                           Zebra                 1.2

Table: names
id (integer)   species_id (integer)   name (varchar(255))
1              1                      Elsa
2              2                      Zelda
3              1                      Ester
4              1                      Eddie
5              2                      Zoe
Relational databases use a language called Structured Query Language (SQL) to work with the data. There are four main database operations, which make up the popular acronym CRUD (Create, Read, Update, Delete). Table 14.2 gives you a feel for what SQL looks like using each of the major types of operations.
INSERT INTO exhibits
VALUES (3, 'Asian Elephant', 7.5);
Adds a new row with the provided values. Defaults to the order in which the columns were defined in the table.

SELECT *
FROM exhibits
WHERE ID = 3;
Reads data from the table with an optional WHERE clause to limit the data returned. In the SELECT, you can use * to return all columns, list specific ones to return, or even call functions like COUNT(*) to return the number of matching rows.

UPDATE exhibits
SET num_acres = num_acres + .5
WHERE name = 'Asian Elephant';
Sets a column’s value with an optional WHERE clause to limit the rows updated.

DELETE FROM exhibits
WHERE name = 'Asian Elephant';
Deletes rows with an optional WHERE clause to limit the rows deleted.
An alternative to using SQL would be to use an object-relational mapping (ORM) framework, which maps the
database to Java objects for you behind the scenes. Hibernate, Spring, and the Java Persistence API (JPA) are
options for ORM.
➤➤ Document: Stores data as JSON documents, allowing each piece of data to have different attributes.
Couchbase and MongoDB are the most common implementations.
➤➤ Graph: Stores data as a graph of nodes and relations. Neo4J is the most popular implementation.
➤➤ Key/value: Stores data as key/value pairs like in a map. Redis is the most popular implementation.
➤➤ Time series: Stores data by time. InfluxDB is the most popular implementation.
➤➤ Wide column store: Stores data by column instead of by row. Cassandra is the most popular
implementation.
➤➤ Stateless session beans: For providing a service without having to remember user info between visits.
➤➤ Stateful session beans: For providing a service with the requirement to remember user data. This type is
less common since user data is typically stored in other layers of the application.
➤➤ Singleton session beans: For data that can be shared by the entire application.
➤➤ Message-driven beans: For processing JMS messages.
In the past, entity beans were used for working with data. The industry has moved on to JPA and other object-
relational mapping frameworks.
DEPLOYING JAVA
In this book, all development happened on your computer, and you started up some applications at the command
line. Contrast that to enterprise software, where the software runs in a location where your users can access it,
and that is most certainly not your computer. This section describes the main categories available for hosting your
applications.
TIP A content delivery network (CDN) can also serve static content. CDNs are frequently
used for caching static content, like images and open-source JavaScript libraries, to save
bandwidth costs and provide a faster response.
An application server is able to run Java code packaged as a WAR (web archive) or EAR (enterprise archive). A
WAR consists of JAR files along with a web directory structure with the specifics of how to process web requests.
An EAR file can include a WAR file, JAR files, and Enterprise Java Beans (EJBs).
Common application servers include Apache Tomcat, GlassFish, JBoss, Payara, and IBM WebSphere. Spring Boot
MVC applications run an embedded Apache Tomcat.
TIP Notice there is both an Apache HTTP Server, which is a web server, and an Apache
Tomcat, which is an application server. Developers often reference “Apache” for the web
server and “Tomcat” for the application server. However, make sure you know which
“Apache” is under discussion from the context; otherwise, ask.
Using Containers
Whatever application you are running, whether a simple stand-alone application, some Spring Boot application,
or a high-traffic web server, you need a machine with an operating system to run it on. In large organizations,
another team is responsible for the maintenance and patching of that operating system. To avoid surprises, many
teams use a container like Docker. Containers are executable packages containing not only your application but
the operating system and any software it depends on. The container image is packaged up and hosted, and you
just need to download it and run it, no installation or configuration required.
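As a sketch of what such a container image definition might look like for a Java application (the base image tag and JAR name here are assumptions for illustration, not from this chapter):

```dockerfile
# Hypothetical Dockerfile for packaging a Spring Boot fat JAR
FROM eclipse-temurin:21-jre
COPY target/app-0.0.1-SNAPSHOT.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Building this with docker build and running it with docker run gives everyone the same environment, regardless of what is installed on their machine.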
When distributing software for others to run, vendors have more control of the environment, thereby reducing
support costs. For example, in Chapter 4, “Automating Your CI/CD Builds with Maven, Gradle, and Jenkins,”
installing Jenkins was as simple as running two commands:
docker pull jenkins/jenkins
docker run --name jenkins -p 8080:8080 jenkins/jenkins
The pull command downloaded the image, and the run command set up the initial configuration. Subsequent
installs were even easier with the following:
docker start jenkins
Since the name and port were already set up, all you had to do was start.
Similar to Maven Central for Java artifacts, Docker uses a repository called Docker Hub, which is at
https://hub.docker.com. The configuration is in a file called Dockerfile. The beginning of the Jenkins
Dockerfile for Alpine Linux looks like this:
1: ARG ALPINE_TAG=3.20.3
2:
WHAT IS KUBERNETES?
Kubernetes is sometimes shortened to K8s, with the 8 representing the number of letters
between the first and last. It is used to deploy, scale, and manage containers. This means you
can deploy Docker images not just with Docker but with Kubernetes as well.
Kubernetes is particularly useful if you have complex needs or relationships among your
containers. For example, Kubernetes can restart your application as needed and helps with
fault tolerance.
Server (provides operating system to application): AWS Elastic Compute Cloud (EC2); Azure Virtual Machine; Google Compute Engine (GCE)
Serving static content: AWS Simple Storage Service (S3); Azure Blob Storage; Google Cloud Storage (GCS)
Messaging queues: AWS Simple Queue Service (SQS); Azure Service Bus, Azure Storage Queues; Google Cloud Pub/Sub
It is common for APIs to implement one of PUT or PATCH but not both, depending on the functionality
of the API.
WHAT IS A MICROSERVICE?
Lines 11–17 show another GET. This time a URL parameter is used so the state name can be passed in.
For example:
curl http://localhost:8080/api/citiesByState/NY
Lines 18–29 declare the POST and DELETE methods. Each receives the parameters as JSON in the request body
and is called as follows:
curl -d city='New York&state=NY' http://localhost:8080/api/add
curl -X "DELETE" -d city='New York&state=NY' http://localhost:8080/api/delete
Documenting APIs
Swagger combines documentation and the ability to actually run the REST APIs in a browser. It’s easy
to add to a Spring application using a dependency with the group ID org.springdoc and artifact ID
springdoc-openapi-starter-webmvc-ui. While you could use curl to access the REST APIs without running
a build, Swagger does require building and launching the application. Run the following from the rest-apis directory:
mvn clean verify
java -jar target/rest-api-0.0.1-SNAPSHOT.jar
Once the application is started, going to http://localhost:8080/swagger-ui/index.html in a browser
pulls up Swagger. As you can see in Figure 14.2, all four REST APIs are automatically shown without any
extra work.
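For reference, the springdoc dependency mentioned earlier would be declared in Maven roughly as follows (the version number shown is an assumption; check for the latest release):

```xml
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
    <version>2.6.0</version>
</dependency>
```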
To run an API, expand the one you are interested in and click “Try it out.” After adding any required parameters,
click “Execute.” As you can see in Figure 14.3, there are input fields for the city and state parameters generated
as well.
A curl command gets automatically generated, which is helpful if you want to test at the command line.
Additionally, a 200 response code is returned since the REST API call is successful.
➤➤ OpenJDK: Oracle’s free JDK worked on with a collection of other parties. A number of vendors distribute builds based on OpenJDK, including Amazon Corretto, Eclipse Temurin (Adoptium), Microsoft OpenJDK, and Red Hat OpenJDK.
➤➤ Oracle JDK: Oracle’s commercial JDK, which includes fixes and support.
➤➤ Azul: Makes free and commercial JDKs serving a variety of needs.
➤➤ GraalVM: A JVM made by Oracle that supports both just-in-time and ahead-of-time compilation. This JVM is heavily focused on performance.
EXPLORING LIBRARIES
The Java ecosystem is extensive, and there are many libraries available to help with your development. Table 14.4
lists some of our favorites that haven’t been mentioned in other places! The website for each one is in the
“Further References” section.
Securing Your Applications ❘ 391
Apache Commons: Parent project for many libraries supporting functionality such as CSV reading/writing, CLI argument parsing, I/O, and more. Free and commercial options.
Apache Kafka: Distributed event streaming framework often used for real-time data. Free.
Apache POI: Read and write Word and Excel formats. POI originally stood for Poor Obfuscation Implementation as a joke. Free.
RxJava: Library for asynchronous and event-based programs. Free.
2. Cryptographic Problems: Data should be protected at rest (for example in a database) and in transit.
3. Injection: All data must be sanitized (ideally against an “allow” list) to ensure that user-supplied data
conforms to its expected type and does not change the meaning of any queries or output.
4. Insecure Design: This item pertains to the lack of security controls in the design and architecture.
5. Security Misconfiguration: All framework defaults should be considered and should be changed where
more appropriate values are required. All settings should be configured securely.
6. Vulnerable and Outdated Components: Vulnerable libraries should be upgraded where possible, and
unused libraries should be removed. Several years ago, there was a security vulnerability in the news
concerning the popular Log4J logging framework. And now, years later, there are still applications running
the vulnerable version!
7. Identification and Authorization Failures: Users should not be allowed to impersonate other users or
intercept/reuse their credentials.
8. Security and Data Integrity Failures: All downloads should be verified to ensure they are legitimate.
9. Security Logging and Monitoring Failures: Data should be logged and monitored to detect attacks early.
10. Server-Side Request Forgery (SSRF): Applications should not be able to send data to unexpected
destinations.
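As a minimal sketch of the “allow” list idea from the injection item, plain Java can validate user-supplied data against an expected pattern before it reaches any query or output. The class name, method, and pattern here are illustrative assumptions, not from the chapter:

```java
import java.util.regex.Pattern;

public class InputValidation {

    // Allow list: a U.S. state code is exactly two uppercase letters.
    // Anything outside this pattern is rejected outright.
    private static final Pattern STATE_CODE = Pattern.compile("[A-Z]{2}");

    static boolean isValidState(String input) {
        return input != null && STATE_CODE.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidState("NY"));                   // a legitimate value passes
        System.out.println(isValidState("NY'; DROP TABLE x;--")); // an injection attempt is rejected
    }
}
```

Because the check describes what is allowed rather than trying to enumerate what is forbidden, unexpected input cannot change the meaning of a downstream query.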
The Common Vulnerabilities and Exposures (CVE) database is sponsored by the U.S. government
and operated by the MITRE Corporation. When a security issue is discovered, it gets a CVE
number and appears on the website.
For example, Figure 14.4 shows the Log4J issue. What you may not have known was that
there were four separate CVEs for Log4J within that month. The sequence of events was:
➤➤ 12/10/21, CVE-2021-44228: The original finding, where the fix was to upgrade to 2.15.0.
➤➤ 12/14/21, CVE-2021-45046: It turned out that 2.15.0 did not cover all cases.
➤➤ 12/18/21, CVE-2021-45105: A new problem was found in all Log4J 2 versions.
➤➤ 12/28/21, CVE-2021-44832: Yet another new problem was found.
➤➤ Static Application Security Testing (SAST): Tools that look at the source code to try to find vulnerabil-
ities. SonarQube from Chapter 4 is a SAST tool.
➤➤ Dynamic Application Security Testing (DAST): Tools that try to inject vulnerabilities into an application
while it is running.
➤➤ Interactive Application Security Testing (IAST): Tools placing an agent inside the running application to
test in a non-production environment.
➤➤ Runtime Application Self-Protection (RASP): Tools placing an agent inside an application running in
production that looks for attacks.
➤➤ https://dev.java
➤➤ https://www.infoq.com
➤➤ https://foojay.io
➤➤ https://dzone.com
FURTHER REFERENCES
➤➤ Javadoc
➤➤ https://docs.oracle.com/en/java/javase/21/docs/api/index.html:
Java 21 Javadoc
➤➤ https://docs.oracle.com/en/java/javase/21/javadoc/javadoc.html:
Javadoc Guide including new snippets feature
➤➤ https://www.oracle.com/technical-resources/articles/java/
javadoc-tool.html: How to write Javadoc
➤➤ JVM Languages
➤➤ https://dev.java/playground: To run Java code online
➤➤ https://kotlinlang.org/docs/getting-started.html: Kotlin documentation
➤➤ https://play.kotlinlang.org: To run Kotlin code online
➤➤ https://groovy-lang.org/documentation.html: Groovy documentation
➤➤ https://groovyide.com/playground: To run Groovy code online
SUMMARY
In this chapter, you learned about various technologies in the Java ecosystem. Javadoc is used to document Java
APIs, and Swagger documents REST APIs. Kotlin, Groovy, and Scala are just a few of the many JVM languages.
Jakarta EE is a competitor of Spring with many specialized projects to meet your needs. In fact, Java has extensive
libraries that you can download from Maven Central. Many technologies integrate with Java including databases,
queues, and LDAP. Java applications can be deployed to an application server, made part of a Docker image, or
be deployed to the cloud. Finally, you learned about some security principles and how to keep current on the Java
ecosystem.
APPENDIX
When you think about it, a Java object consists of data and functionality. If you could somehow extract the
data, save it, and then marry it back to its functionality later, you would have effectively achieved a form of
serialization, allowing you to persist the object’s state and restore it when needed.
In this book, we use XML, JSON, and YAML in many of our examples. All three are text formats that can
contain configuration data, or can be used to serialize data, which means converting Java instances to a
format that can be easily saved on disk or transmitted. Conversely, deserialization is the process of
rehydrating these text files back to Java instances. It is important to understand these formats since they
come up frequently in the Java ecosystem.
This appendix will help you learn the basics of these formats and some ways to read and write these files
from your programs. Note that the libraries we cover will have many features. These examples give you the
basics, and we encourage you to explore the excellent documentation supplied with these libraries online.
The source code for this chapter is available on the book page at www.wiley.com. Click the
Downloads link. The code can also be found at https://github.com/realworldjava/
Appendix. See the README.md file in that repository for details.
396 ❘ APPENDIX Reading and Writing XML, JSON, and YAML
TIP Your integrated development environment (IDE) will give you an error if your
XML file isn’t well formed. Additionally, there are many online validators such as
https://onlinexmltools.com/validate-xml.
Working with XML ❘ 397
If you saw any of our Maven POM files, you might have noticed that the pom.xml files start with this:
<?xml version="1.0" encoding="UTF-8"?>
This is an optional feature called an XML prolog, which is used to specify the character encoding. UTF-8 is the
most common encoding in the United States.
You also may have noticed more code at the beginning of each pom.xml file.
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
        https://maven.apache.org/xsd/maven-4.0.0.xsd">
The xmlns attribute stands for “XML namespace.” Namespaces are a standard way of listing what elements are
allowed to be in the XML file. Further, namespaces allow specifying a version number.
The rest specifies the location of the XML schema definition, which in this case is the actual file location with
this data. If you need to use an XML namespace or schema definition, the authors of that data would supply it to
you. For common ones, like the Maven POM, it is automatically generated.
XML-RELATED TECHNOLOGIES
TIP The original Jackson was made by Codehaus but is no longer supported. The current
version, from FasterXML, supports XML, JSON, and YAML.
version number for your build tool. Table A.1 shows what this looks like for each configuration at the time of
this writing.
TOOL SYNTAX
Maven:
<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-xml</artifactId>
    <version>2.17.2</version>
</dependency>

Gradle (Kotlin):
implementation("com.fasterxml.jackson.dataformat:jackson-dataformat-xml:2.17.2")
To include Jackson in your project, grab the appropriate syntax for your build tool and include that in your
dependencies. (We cover build tools in Chapter 4, “Automating Your CI/CD Builds with Maven, Gradle, and
Jenkins.”)
To read a file, create an ObjectMapper and use it to navigate the tree structure.
10: File file = Path.of("src/main/resources/book.xml").toFile();
11: ObjectMapper mapper = new XmlMapper();
12: JsonNode root = mapper.readTree(file);
13:
14: System.out.println("Title: " + root.get("title").asText());
15: System.out.println("Edition: " + root.get("edition").asInt());
16: System.out.println("Paperback? " + (root.get("paperback") != null));
17:
18: JsonNode authors = root.findValues("author");
19: for (JsonNode chars : authors) {
20: System.out.println(chars.asText());
21: }
This code outputs the following:
Title: Real World Java
Edition: 1
Paperback? true
Victor Grazi
Jeanne Boyarsky
Line 11 creates the XML mapper, and line 12 reads the file into a JsonNode instance, a Java object that encapsulates the XML tree structure, returning the root node. Lines 14 and 15 read a tag or attribute as a node, specifying the type of the data returned. Line 16 checks whether the paperback tag is present.
In line 18, the findValues() method returns a JsonNode containing all the author tags. Lines 19–21 loop
through them outputting the text inside the tags. The findValues() method includes both direct children and
descendants, while the get() method looks only at direct children.
The previous code is a brute-force approach for parsing XML, but this approach can get cumbersome for a
complex XML file. Luckily, Jackson can do this much more concisely using annotations.
@JacksonXmlElementWrapper(localName = "authors")
@JacksonXmlProperty(localName = "author")
private List<String> authors;
TIP The localName attribute is useful if you want to use a different instance variable name
than what is specified in the XML. An XML element may contain characters that are not
allowed in Java variables (such as hyphens) or do not follow Java naming conventions.
Once you have a Java object, the code to populate it and read the values is easy.
File file = Path.of("src/main/resources/book.xml").toFile();
ObjectMapper mapper = new XmlMapper();
Book book = mapper.readValue(file, Book.class);
System.out.println(book.getTitle()); // Real World Java
System.out.println(book.getEdition()); // 1
System.out.println(book.isPaperbackBook()); // true
System.out.println(book.getAuthors()); // [Victor Grazi, Jeanne Boyarsky]
This object-mapping approach does precisely the same thing as the previous example, but much more concisely.
Where the readTree() method returned a JsonNode instance, the readValue() method does all the work of
parsing and converting the XML file to the supplied class type (Book.class) behind the scenes.
Additionally, you can tell Jackson to use the lowercase book tag as the root element in the generated XML file with this:
@JacksonXmlRootElement(localName = "book")
public class Book {
Now that the mapping is complete, let’s write code that uses the default values for all tags except for the authors.
Book book = new Book();
book.setAuthors(List.of("Victor & Jeanne"));
TIP Important escape sequences to know for XML are &amp;lt; (<), &amp;gt; (>), and &amp;amp; (&).
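Jackson applies these escapes automatically when serializing. As a quick illustration of what they do, here is a hand-rolled sketch (the class and method names are assumptions; real code should rely on the XML library instead):

```java
public class XmlEscape {

    // Replace the three characters that must always be escaped in XML text.
    // The ampersand is handled first so the other replacements are not double-escaped.
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args) {
        System.out.println(escape("Victor & Jeanne <authors>"));
        // prints: Victor &amp; Jeanne &lt;authors&gt;
    }
}
```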
You can also write an XML file programmatically without a mapper object. It is generally more complicated
and not as flexible. For example, you can’t add attributes using this approach. But you will come across this
kind of code:
20: ObjectMapper mapper = new XmlMapper();
21:
22: ObjectNode root = mapper.createObjectNode();
23: root.put("edition", "1");
24:
25: String xml = mapper
26: .writer()
27: .withRootName("book")
28: .writeValueAsString(root);
29: System.out.println(xml);
Line 20 creates the usual mapper. Line 22 creates a new unnamed XML node; it gets its name in line 27, which is used when the node is written out. In line 23, the put() method creates a child XML node named edition. In this example, we omitted the pretty print so you can see how the output looks on a single line.
<book><edition>1</edition></book>
transformer.setOutputProperty(OutputKeys.INDENT, "yes");
transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
transformer.transform(source, result);
Here you can see that createElement() is used to create a tag and createTextNode() is used to add the
text inside a tag. A Transformer is used to control the output settings, in this case, adding indentation and
removing the XML prologue. Finally, the transform() method actually writes the XML.
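The surrounding setup is not reproduced above. A self-contained sketch of the same approach, using only the JDK's built-in DOM and transformer classes, might look like this (the tag names mirror the book example; the structure of the omitted code is an assumption):

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class DomWriteSketch {

    // Builds <book><edition>1</edition></book> in memory and writes it out.
    static String writeBook() {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element book = doc.createElement("book");          // createElement makes a tag
            Element edition = doc.createElement("edition");
            edition.appendChild(doc.createTextNode("1"));      // createTextNode adds the text inside
            book.appendChild(edition);
            doc.appendChild(book);

            // The Transformer controls output settings: indent and drop the XML prolog.
            Transformer transformer = TransformerFactory.newInstance().newTransformer();
            transformer.setOutputProperty(OutputKeys.INDENT, "yes");
            transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");

            StringWriter writer = new StringWriter();
            transformer.transform(new DOMSource(doc), new StreamResult(writer));
            return writer.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(writeBook());
    }
}
```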
The endElement() method override simply updates a boolean to say we are no longer in a tag. While it could
check to see if it is a tag we care about, the operation is redundant if already false. Therefore, the method is as
simple as possible.
Then there is the startElement() method override, which uses a switch to perform a different operation
based on the tag. Note that localName is relevant only if working with namespaces, so this code uses
qualifiedName. Line 21 sets a boolean if the tag is edition or author. This is the boolean used in
characters(). Line 22 notes if the paperback tag is seen, and lines 23–24 print whether paperback was
seen when getting to the authors tag. Lines 25–26 print the attribute title if on the book tag.
Now that you have the handler, actually reading is easy.
File file = Path.of("src/main/resources/book.xml").toFile();
parser.parse(file, handler);
The only things happening here are creating the objects and calling parse(). All the logic is in the handler class.
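The handler class itself is not reproduced above. The following self-contained sketch shows the same pattern, a boolean toggled in startElement()/endElement() and text collected in characters(), using only JDK classes (the inline XML string and the parse() helper are assumptions for demonstration):

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class SaxSketch {

    // Collects the text of every <edition> and <author> element.
    static List<String> parse(String xml) {
        List<String> values = new ArrayList<>();
        DefaultHandler handler = new DefaultHandler() {
            private boolean inTag;                        // inside a tag we care about?
            private final StringBuilder text = new StringBuilder();

            @Override
            public void startElement(String uri, String localName,
                                     String qualifiedName, Attributes attrs) {
                inTag = qualifiedName.equals("edition") || qualifiedName.equals("author");
                text.setLength(0);
            }

            @Override
            public void characters(char[] ch, int start, int length) {
                if (inTag) {
                    text.append(ch, start, length);       // may be called more than once per node
                }
            }

            @Override
            public void endElement(String uri, String localName, String qualifiedName) {
                if (inTag) {
                    values.add(text.toString());
                }
                inTag = false;                            // no longer in a tag we care about
            }
        };
        try {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            parser.parse(new InputSource(new StringReader(xml)), handler);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return values;
    }

    public static void main(String[] args) {
        String xml = "<book><edition>1</edition><author>Victor</author></book>";
        System.out.println(parse(xml));   // [1, Victor]
    }
}
```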
DO I NEED A LIBRARY?
Technically, you could read the XML as text and parse it using indexOf() or a regular
expression. Please don’t. The resulting code is very difficult to read compared to the XML
parsing libraries.
Writing XML should also use a library if doing anything of reasonable complexity. However,
for short XML, it is viable to write it directly as follows:
String xml = """
<book>
<edition>%d</edition>
</book>""";
System.out.format(xml, 1);
TIP Like XML, your IDE will give you an error if your JSON file isn’t valid. There are also
many online validators such as https://jsonlint.com.
TOOL SYNTAX
Maven:
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.17.2</version>
</dependency>
Grab the appropriate syntax for your build tool and include that in your dependencies.
Our first example will navigate the tree structure.
11: File file = Path.of("src/main/resources/book.json").toFile();
12: ObjectMapper mapper = new JsonMapper();
13: JsonNode root = mapper.readTree(file);
14:
15: System.out.println(root.get("title").asText());
16: System.out.println(root.get("edition").asInt());
17: System.out.println("Paperback? " + root.get("paperback"));
18:
19: JsonNode authors = root.findValue("authors");
20: for (JsonNode chars : authors) {
21: System.out.println(chars.asText());
22: }
To parse JSON, we use a JsonMapper in line 12. Contrast this to the XmlMapper we used for parsing XML.
Additionally, the null check for paperback is gone since the JSON example is using a boolean rather than an
empty tag.
Like XML, you can map the JSON to a Java object such as this:
public class Book {
private String title;
private int edition;
@JsonProperty("paperback")
private boolean paperbackBook;
private List<String> authors;
public boolean isPaperbackBook() {
return paperbackBook;
}
// remaining getters and setters omitted to save space
}
The annotations that were required for our XML parser for mapping nested author tags into a List
are gone, since JSON is using an array, which automatically maps to List. The @JsonProperty is
new and is used to map the field using a different instance variable name than the JSON file. Finally, notice that
isPaperbackBook() is now a normal getter as the specialized logic for the empty XML tag is gone.
Now that you have a Java object, the code to populate it and read the values should look familiar:
File file = Path.of("src/main/resources/book.json").toFile();
ObjectMapper mapper = new JsonMapper();
Book book = mapper.readValue(file, Book.class);
System.out.println(book.getTitle());
System.out.println(book.getEdition());
System.out.println(book.isPaperbackBook());
System.out.println(book.getAuthors());
The only difference in this code besides the filename is that JsonMapper is used instead of XmlMapper.
root.put("edition", "1");
TOOL SYNTAX
Maven:
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.11.0</version>
</dependency>
Grab the appropriate syntax for your build tool and include that in your dependencies.
First you create the Book class.
public class Book {
private String title;
private int edition;
@SerializedName("paperback")
private boolean paperbackBook;
private List<String> authors;
// getters and setters omitted to save space
}
Notice how the only difference from Jackson is that Gson uses @SerializedName, which is used to map the
JSON field name to a different Java field name. The Gson idiom for creating an instance of Book is as follows:
Gson gson = new Gson();
Book book = gson.fromJson(new FileReader(file), Book.class);
The idea is similar to Jackson. Create a class to do the work and call an API, this time fromJson(), to create the
Book instance. The following is the equivalent code for writing:
Book book = new Book();
book.setAuthors(List.of("Victor & Jeanne"));
String json = new Gson().toJson(book);   // serialize the populated object to a JSON string
Unlike XML and JSON, indentation matters in YAML. Spaces are used to show levels of indentation. It doesn’t
matter how many spaces you use for each level of indentation, provided that you always use the same number of
spaces for each level. Tabs are not allowed since IDEs handle them differently, which would throw off the
whitespace count.
The three dashes at the top start the document and are optional. YAML allows you to put multiple YAML
documents in the same file, so the three dashes show the start.
Like JSON, the file consists largely of key/value pairs. There are four keys, title, edition, authors, and
paperback, which represent the four pieces of data in the XML and JSON examples.
Also, like JSON, there can be string, number, and boolean types. In our example, the authors are specified with
both indentation and hyphens to show they are part of a sequence, or ordered list. You can have multiple levels of
indentation for more involved data structures.
YAML supports single-line comments anywhere in the document. Comments begin with the # character and
continue to the end of the line.
A YAML file must comply with the rules to be valid. In particular:
➤➤ Keys are case sensitive.
➤➤ Indentation must be consistent and use spaces, not tabs.
➤➤ Hyphens are used for sequences.
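Putting those rules together, a YAML file for the book example discussed above might look like this (a sketch based on the fields described; the exact file in the book's repository may differ):

```yaml
---
# Comments run from the # character to the end of the line.
title: Real World Java
edition: 1
paperback: true
authors:
  - Victor Grazi
  - Jeanne Boyarsky
```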
TIP Like XML and JSON, your IDE will give you an error if your YAML file isn’t valid.
There are also many online validators such as https://www.yamllint.com.
TOOL SYNTAX
Maven:
<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-yaml</artifactId>
    <version>2.17.2</version>
</dependency>

Gradle (Kotlin):
implementation("com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.17.2")
Grab the appropriate syntax for your build tool and include that in your dependencies.
The Book class is exactly the same as we defined it when we used Jackson to parse JSON. This time the code to
read our YAML file uses a YAMLMapper:
File file = Path.of("src/main/resources/book.yaml").toFile();
ObjectMapper mapper = new YAMLMapper();
Yes, that is all. You read the YAML file using the YAMLMapper. Other than that, it works the same as the JSON
version. Even the @JsonProperty is the same since the Jackson parser shares code.
TOOL SYNTAX
Maven:
<dependency>
    <groupId>org.yaml</groupId>
    <artifactId>snakeyaml</artifactId>
    <version>2.3</version>
</dependency>
Grab the appropriate syntax for your build tool and include that in your dependencies.
First, you’ll see the code for navigating the YAML.
File file = Path.of("src/main/resources/book.yaml").toFile();
Yaml yaml = new Yaml();
Map<String, Object> map = yaml.load(new FileReader(file));
System.out.println(map.get("title"));
System.out.println(map.get("edition"));
System.out.println("Paperback? " + map.get("paperback"));
System.out.println(book.getTitle());
System.out.println(book.getEdition());
System.out.println(book.isPaperback());
System.out.println(book.getAuthors());
To load data into a Java object, you create a Constructor class and then a Yaml object. While this works for
basic classes, it is difficult if you want to change the names from the YAML or specify generic types.
Writing YAML created from a Map is straightforward.
Map<String, String> map = new HashMap<>();
map.put("edition", "1");
String output = new Yaml().dump(map);   // writes the map as a YAML string
FURTHER REFERENCES
➤➤ XML
➤➤ https://www.w3.org/TR/xml: XML specification
➤➤ https://onlinexmltools.com/validate-xml: Online validator
➤➤ https://javadoc.io/doc/com.fasterxml.jackson.core/jackson-databind/
latest/index.html: Jackson Javadoc
➤➤ https://dom4j.github.io: DOM4J
➤➤ https://xerces.apache.org: Xerces
➤➤ JSON
➤➤ https://datatracker.ietf.org/doc/html/rfc8259: JSON specification
➤➤ https://jsonlint.com: Online validator
➤➤ https://json-schema.org: JSON schema
➤➤ https://jqlang.github.io/jq/: JSON query language and processor
➤➤ https://javadoc.io/doc/com.google.code.gson/gson/: Gson Javadoc
➤➤ https://github.com/stleary/JSON-java: JSON-java (also known as org.json)
➤➤ https://javaee.github.io/jsonp/: JSON-P
➤➤ https://javaee.github.io/jsonb-spec/: JSON-B
➤➤ YAML
SUMMARY
XML, JSON, and YAML are common formats for specifying configuration and data. The Jackson library works
with all three formats. JAXP, via DOM and SAX, works with XML. The Gson library works with JSON. Finally,
the SnakeYAML library is used for YAML.
INDEX
Secure Sockets Layer (SSL), 187
securing applications, 187–192, 391–393
security, 394
semantic versioning, 94
Semaphore, 275
serial garbage collection, 359
service account, 120
servlet filters, 188
session scope, 167
session-based authentication, 189
@Setter method, autogenerating, 241–242
setTie() method, 226
7-Zip, 350
SEVERE log level, 132
signatures, changing, 32
Simple API for XML (SAX), reading XML with, 402–403
Simple Logging Façade for Java (SLF4J)
   about, 149
   compared with other logging frameworks, 160
   comparing logging levels, 150
   formatting values, 151
   lazy logging, 151
   omitting logging frameworks, 149–150
   passing basic configuration, 152
   specifying, 150
   using with other logging frameworks, 152–155
   website, 161
singleton scope, 167
Skip List algorithm, 281
skipping tests, 207
@Slf4j annotation, 246
SnakeYAML, reading and writing YAML with, 410–412
Sonar, scanning with, 127
SonarQube, 127, 128, 374
special operators, 82
split() method, 293–294
splitting, 293–294, 298
Spring AOP, 323
Spring Boot, implementing tracing in applications, 349
Spring Boot Actuator, 336–337
Spring framework
   about, 163–164
   coding regular expressions for, 308
   concurrency support in, 278–281
   configuring, 164–170
   customizing with properties, 170–173
   error handling in, 183–185
   improving development with Spring Boot, 173–178
   initializing Spring Boot projects with Spring Initializer, 173–174
   inspecting applications with Actuator, 185–187
   projects, 192
   securing applications with Spring Security, 187–192
   Spring MVC, 178–183, 222–225
   using component scanning, 169–170
   using IntelliJ Spring Initializer integration, 174–178
   using Java configuration classes, 167–169
   using XML configuration files, 165–167
   website, 193
Spring in Action (Walls), 193
sprints, creating in Jira, 76–78
spy objects, 215
SQL (Structured Query Language), 383
SSH (Secure Shell), cloning via, 47–48
SSL (Secure Sockets Layer), 187
stack, 259
staging area, in Git, 42
StampedLock, 275
standards, for coding, 159
startElement() method, 403
starter-dependencies, 173
starting new projects, 13–16
Static Application Security Testing (SAST), 393
statics, mocking, 221
StAX (Streaming API for XML), 397
stop-the-world, 359
Stream, 298
stream() method, 260–261
Streaming API for XML (StAX), 397
string interpolation, 380
String methods, calling, 291–294
Structured Query Language (SQL), 383
stub objects, 215
summaries, 339–341
summarizing thread states, 257
Sun Certified Java Associate (SCJA), 8
Sun Certified Java Professional (SCJP), 8
Sun Certified Master Java Developer (SCMJD), 8
Supplier interface, 265–266
Symantec Visual Café, 12
symmetric encryption, 187
synchronizing, 257–259
syntax, for regular expressions, 285–291
system, 92
system properties
   in JUnit Pioneer, 214
   setting in Maven, 102
System.currentTimeMillis(), 354
System.nanoTime(), 354
T
tagging, in Git, 43
TDD (Test-Driven Development), 196
templated URL, 186
TERMINATED, 257
terminology, for Jenkins, 116
test, 92
test directory, staging, 196
test double, 215
test phase, 92
Test-Driven Development (TDD), 196, 202–205
testImplementation, 110
testing
   with JMeter, 360–365
   tools for (See automated testing tools)
TestMe, creating test boilerplates with, 230
TestNG, 195
text blocks, coding from Java 17, 6–7
then() method, configuring, 218–219
thread access, controlling with Phaser, 262
thread contention, 252–254
thread group, 361, 363
thread priority, 251
thread states, 256–257
ThreadPoolTaskExecutor, launching threads with, 280–281
threads, launching with ThreadPoolTaskExecutor, 280–281
throughput, 352
TIMED_WAITING, 257
timer, 363
TLS (Transport Layer Security), 188
token-based authentication, 189
tokens, generating, 120
toLongString() API, 314
tooling, 349
tools, using regular expressions with, 306–308
toShortString() API, 314
toString() API, 314
@ToString annotation, 243
TRACE log level, 140, 150, 156
tracing, 326, 348–349
@Transactional, 281
TransferQueue, 275
transitive dependencies, 90
Transport Layer Security (TLS), 188
triggers, in Jenkins, 116
trunk-based development, 70
tryLock() method, 261–262

U
UNICODE_CASE flag, 298
UNICODE_CHARACTER_CLASS flag, 298
Uniform Resource Identifier (URI), 47
unwrapping, 33
URI (Uniform Resource Identifier), 47
users, in Jira, 78, 126

V
validate phase, 92
values
   bucketing, 339–341
   capturing fluctuating, 338–339
   formatting
      in Java Util Logging, 132–133
      in Log4j, 140–141
      in SLF4J, 151
   replacing, 292–293, 296–297
variables, specifying, 110
VCSs (version control systems), 42
verify phase, 92
verifying
   calls, 219
   conditions using assume logic, 209–210
version control
   adding projects to, 15
   creating projects from, 15
version control systems (VCSs), 42
version numbers (POM), 94
versions, 4–7
vertical scalability, 352
Victor’s Pluralsight Course on Regular Expressions, 308
viewing issues in Jira, 80
views, in Jenkins, 126
virtual machines, choosing, 390
virtual threads, 7, 250, 271–275
Visual Studio (VS) Code
   about, 37–39
   IDE integration, 225
   installing Lombok in, 237
   using, 104–106
   website, 40
VisualVM, 370–371
void await() method, 261–262
void lock() method, 261–262
void lockInterruptibly() method, 261–262
W
WAITING, 257
Walls, Craig (author)
   Spring in Action, 193
WAR (web archive), 115
WARN logging level, 140, 150, 156
WARNING log level, 132
weaving loggers into codebase, 245–246
web archive (WAR), 115
web servers, 385
web services, creating, 388–389
when() method, configuring, 218–219
.. wildcards, 316
* wildcards, 315–316
Windows Exporter, 349
WIP (work-in-process), 72
workflow (Git), 45–51
working area, in Git, 42
work-in-process (WIP), 72
workspaces
   in Eclipse, 36
   in Jenkins, 116
writer() method, 400
writing
   Javadoc, 378–379
   JSON
      with Gson, 407–408
      with Jackson, 406–407
   XML
      with DOM, 401–402

X
XML. See eXtensible Markup Language (XML)
XML Path Language (XPath), 397
XML Query (XQuery), 397
XML Schema Definition (XSD), 397
XMLLayout class, 158
XPath (XML Path Language), 397
XQuery (XML Query), 397
XSD (XML Schema Definition), 397
@Xslf4j annotation, 246
XSLT (eXtensible Stylesheet Language Transformation), 397

Y
Yet Another Markup Language (YAML)
   about, 148, 408
   comparing libraries, 412
   format for, 408–409
   reading
      with Jackson, 409–410
      with SnakeYAML, 410–412
   websites, 413
   writing
      with Jackson, 410
      with SnakeYAML, 410–412

Z
Zipkin, 350
WILEY END USER LICENSE AGREEMENT
Go to www.wiley.com/go/eula to access Wiley’s ebook EULA.