Getting Started With Hazelcast - Second Edition - Sample Chapter
Mat Johns
Preface
Hazelcast is an innovative new approach to data, in terms of storage, processing, and
distribution; it provides an accessible solution to the age-old problem of application
and data scalability. This book introduces this great open source technology in a
step-by-step, easy-to-follow manner, from the why to the how and the wow.
Chapter 9, From the Outside Looking In, looks at the popular alternative access that we have to our data, rather than using the provided drivers to integrate with a Hazelcast cluster.
Chapter 10, Going Global, discusses how we can explode onto the world stage by using
the public cloud infrastructure and asynchronous remote replication to take our data
all around the globe.
Chapter 11, Playing Well with Others, brings the technology together with popular
companion frameworks to see how we might start to bring the technology to work
with legacy applications.
Appendix, Configuration Summary, provides an overview of the configurations that we
have used throughout the book.
Typical Deployments
So far, we have been looking at Hazelcast in one particular type of deployment;
however, there are a number of configurations that we could use depending on our
particular architecture and application needs. Each deployment strategy tends to be
best suited to a certain type of configuration or application deployment; therefore,
in this chapter, we will look at the following:
Client connectivity, where it's best used and the issues that come with it
This would be rather unsuitable for storing extensive amounts of data within the heap, as our application's storage requirements would drastically increase and the memory available to run our application would be reduced. We would either need to provision more web application containers, or put our application, and other applications running within that container cluster, at risk of excessive garbage collection, or worse still, of running out of heap space altogether.
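The member code referred to next does not appear in this extract; as a minimal sketch, assuming Hazelcast 3.5's default multicast discovery (the MemberExample class name is illustrative):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class MemberExample {
    public static void main(String[] args) {
        // Starts an embedded member; instances started on the same host
        // discover each other via the default multicast configuration
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        System.out.println("Cluster size: "
            + hz.getCluster().getMembers().size());
    }
}
```

Each additional run joins the existing cluster, and every node logs the updated membership list.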
We can run this code a few times to establish a cluster of a number of instances:
Members [3] {
    Member [127.0.0.1]:5701
    Member [127.0.0.1]:5702
    Member [127.0.0.1]:5703 this
}
Now we need to bring in a new dependency, hazelcast-client-3.5.jar from our original downloaded archive, and we can use it to create a ClientExample class that connects to the cluster and performs operations against the data held there. As the client delegates its operations to the wider cluster, the data persisted will outlive the client.
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class ClientExample {
    public static void main(String[] args) {
        ClientConfig conf = new ClientConfig();
        conf.getNetworkConfig().addAddress("127.0.0.1:5701");

        HazelcastInstance hzc = HazelcastClient.newHazelcastClient(conf);

        IMap<String, String> capitals = hzc.getMap("capitals");
        if (capitals.isEmpty()) {
            System.err.println("Empty capitals map, adding entries");
            capitals.put("GB", "London");
            capitals.put("FR", "Paris");
            capitals.put("US", "Washington DC");
            capitals.put("AU", "Canberra");
        }

        System.err.println(
            "Known capital cities: " + capitals.size());
        System.err.println(
            "Capital city of GB: " + capitals.get("GB"));

        hzc.shutdown();
    }
}
By running our client multiple times, we see that the first run initializes the capitals map with our starting set of data before shutting down the client, which allows the JVM instance to complete and exit cleanly. However, when we run the client again, the data has been retained by the still-running cluster, so we don't have to repopulate it a second time. Initially, our client connects to one of the configuration-defined server nodes; however, in doing so, it learns about the existence of the other nodes once it is running and connected. Therefore, if our supporting member node dies, the client will simply connect to one of the other nodes and continue as normal.
The only critical phase is the initial connection, but unlike the member nodes,
we don't have an auto-discovery mechanism in place; therefore, the location of the
cluster needs to be configured explicitly. If the node that we have listed is down at
the time of our client starting up, we will fail to connect to the cluster irrespective of
the state of the other nodes or the cluster as a whole. We can address this by listing a
number of seed nodes within our client's configuration; as long as one of these nodes
is available, we can connect to the cluster and go from there.
ClientConfig conf = new ClientConfig();
conf.getNetworkConfig().addAddress("127.0.0.1:5701");
conf.getNetworkConfig()
    .addAddress("127.0.0.1:5702", "127.0.0.1:5703");
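The same seed list can also be provided declaratively; a sketch of a client configuration file, assuming the Hazelcast 3.x hazelcast-client XML schema:

```xml
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config">
    <network>
        <cluster-members>
            <address>127.0.0.1:5701</address>
            <address>127.0.0.1:5702</address>
            <address>127.0.0.1:5703</address>
        </cluster-members>
    </network>
</hazelcast-client>
```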
Architectural overview
As we have seen, there are a few different types of deployment we could use; which
one you choose really depends on your application's make-up. Each has a number of
trade-offs, but most deployments tend to use one of the first two, with the client and
server cluster approach the usual favorite unless we have a mostly compute-focused
application where the former is a simpler setup.
So, let's have a look at the various architectural setups that we can employ and what
situations they are best suited to.
Peer-to-peer clusters
This is the standard example that we have been mostly using until now: each node
houses both our application and the data persistence and processing. It is most useful
when we have an application that is primarily focused towards asynchronous or
high-performance computing and executes a lot of tasks on the cluster. The greatest
drawback is the inability to scale our application and data capacity separately.
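To make this concrete, here is a minimal sketch of such a compute-focused peer (the PeerTaskExample and WordLength names are illustrative, not part of Hazelcast's API), where each node both holds data as a full member and submits work to a distributed executor:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;
import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

public class PeerTaskExample {
    // Tasks must be Serializable so they can be sent to other members
    static class WordLength implements Callable<Integer>, Serializable {
        private final String word;
        WordLength(String word) { this.word = word; }
        public Integer call() { return word.length(); }
    }

    public static void main(String[] args) throws Exception {
        // This JVM is both our application and a data-holding member
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IExecutorService exec = hz.getExecutorService("tasks");

        // The task may run on any member of the cluster
        Future<Integer> result = exec.submit(new WordLength("Hazelcast"));
        System.out.println("Length: " + result.get()); // Length: 9

        hz.shutdown();
    }
}
```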
Summary
We have seen that we have a number of strategies at our disposal for deploying Hazelcast within our architecture, such as treating it like a clustered standalone product, akin to a traditional data source but with more resilience and scalability. For more complex applications, we can absorb the capabilities directly into our application, but that does come with some strings attached. However, whichever approach we choose for our particular use case, we have easy access to scaling and control at our fingertips.
In the next chapter, we will look beyond just Hazelcast and the alternative methods
of getting access to the data stored in the cluster.