Getting Started With Hazelcast: Chapter No. 7 "Typical Deployments"
Mat Johns
Typical Deployments
So far we have been looking at Hazelcast in one particular type of deployment; however, there are a number of configurations we could use depending on our particular architecture and application needs. Each deployment strategy tends to be best suited to certain types of configuration or application deployment, so in this chapter we will look at:
- The issues of co-locating data too close to the application
- Thin client connectivity, where it's best used and the issues that come with it
- Lite member node (nee super client) as a middle ground option
- Overview of the architectural choices
Now we need to bring in a new dependency: included in our originally downloaded archive is hazelcast-client-2.6.jar, and we can use this to create a ClientExample class that connects to the cluster and performs operations against the data held there. As the client delegates the operations out to the wider cluster, the data persisted will outlive the client.
public class ClientExample {
    public static void main(String[] args) {
        ClientConfig conf = new ClientConfig();
        conf.addAddress("127.0.0.1:5701");

        HazelcastClient hzc = HazelcastClient.newHazelcastClient(conf);
        IMap<String, String> capitals = hzc.getMap("capitals");

        if (capitals.isEmpty()) {
            System.err.println("Empty capitals map, adding entries");
            capitals.put("GB", "London");
            capitals.put("FR", "Paris");
            capitals.put("US", "Washington DC");
            capitals.put("AU", "Canberra");
        }

        System.err.println("Known capital cities: " + capitals.size());
        System.err.println("Capital city of GB: " + capitals.get("GB"));

        hzc.shutdown();
    }
}
In running our client multiple times we can see that the first run will initialize the capitals map with our starting set of data, before shutting down the client, which will allow the JVM instance to complete and exit cleanly. However, when we run the client again, the data has been successfully persisted by the still running cluster, so we won't repopulate it a second time.

Our client is currently connecting through to one of the nodes specifically; however, it learns about the existence of the other nodes once it is running. So should our supporting member node die, the client will simply connect over to one of the other nodes and continue on as normal. The only critical phase is the initial connection, as unlike the member nodes we don't have an auto-discovery mechanism in place, so that needs to be configured explicitly. If the node we have listed is down at the time of our client starting up, we will fail to connect to the cluster irrespective of the state of other nodes or the cluster as a whole. We can address this by listing a number of seed nodes within our client's configuration; as long as one of these nodes is available we can connect to the cluster and go from there.
ClientConfig conf = new ClientConfig();
conf.addAddress("127.0.0.1:5701");
conf.addAddress("127.0.0.1:5702", "127.0.0.1:5703");
By default, the ordering of the nodes we attempt to connect to is consistent, depending on the configuration. Should the first node we try to connect to be down or having networking issues, we might have to wait for the configured connection timeout to be reached before moving on to the next node to try. To prevent a problem with one particular node becoming an ongoing issue for clients starting up, we can set the client to randomly order the target node list from its configuration. In this way we would get a faster connection time, at least for a proportion of the time, which may be preferable to hitting the same problem node consistently.
conf.setShuffle(true);
This will add latency to the requests made to the cluster. Should that latency be too high, there is an alternative method of connecting to the cluster known as a lite member (originally known as a super client). This is effectively a non-participant member of the cluster, in that it maintains connections to all the other nodes in the cluster and will talk directly to partition owners, but does not provide any storage or computation to the cluster. This avoids the double hop required by the standard client, but adds the additional complexity and overhead of participating in the cluster. For most use cases the standard client is preferable, as it is much simpler to configure and use, and can work over higher latency connections; however, should you need higher levels of performance and throughput, you could consider using a lite member. Lite members are set up as you would set up a standard node, hence the additional complexity involved, but with one small addition in the configuration that flags the node as being a non-participant.
Config conf = new Config();
conf.setLiteMember(true);
When a lite member is present in the cluster, the other members will be aware of its presence and of the type of node it is. You will see this reflected in the startup and cluster state logging on the various cluster nodes.
Members [3] {
    Member [127.0.0.1]:5701 this
    Member [127.0.0.1]:5702
    Member [127.0.0.1]:5703 lite
}
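Should we want to confirm this programmatically rather than from the logs, we can inspect the cluster membership from any node. The following is a minimal sketch in the fragment style used above, assuming hz is an already created HazelcastInstance and that the Member.isLiteMember() method is available in the API version in use.

// Sketch: list the cluster members, flagging any lite members
for (Member member : hz.getCluster().getMembers()) {
    System.err.println(member + (member.isLiteMember() ? " lite" : ""));
}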
Architectural overview
As we have seen, there are a number of different types of deployment we could use; which one we choose really depends on our application's make-up. Each has a number of trade-offs, but most deployments tend to use one of the first two, with the client and server cluster approach the usual favorite, unless we have a mostly compute-focused application, where the peer-to-peer approach is a simpler setup. So let's have a look at the various architectural setups we could employ and the situations they are best suited to.
Peer-to-peer cluster
This is the standard example we have mostly been using until now: each node houses both our application itself and the data persistence and processing. It is most useful when we have an application that is primarily focused on asynchronous or high performance computing, and will be executing lots of tasks on the cluster. The greatest drawback is the inability to scale our application and data capacity separately.
[Diagram: peer-to-peer cluster — three Application Nodes, each containing a Cluster Node and the Application]
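To make this arrangement concrete, the following is a minimal sketch of what each application node might look like: the application embeds a full Hazelcast member directly, so the same JVM holds a share of the data and can run operations against it. The class name and map contents here are illustrative rather than taken from the chapter.

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class EmbeddedNodeExample {
    public static void main(String[] args) {
        // Each application JVM starts a full cluster member
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(new Config());

        // The application works directly against data it (partly) owns
        IMap<String, String> capitals = hz.getMap("capitals");
        capitals.put("GB", "London");
        System.err.println("Capital city of GB: " + capitals.get("GB"));

        hz.getLifecycleService().shutdown();
    }
}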
Hybrid cluster
A middle ground between the two previous strategies: we create and manage a primary cluster of nodes, alongside a shadow set of lite member nodes that hold the application's capabilities but none of the data or computation responsibilities. The only real use case for this strategy is where the client option doesn't provide the latency and performance our application demands, due to having to leapfrog through other nodes in the cluster to get at our data.
[Diagram: hybrid cluster — an Application Lite Node connected to three Cluster Nodes]
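As a rough sketch of how an application might sit on a lite member in this hybrid arrangement (again with an illustrative class name and map, reusing the API shown earlier in the chapter), the application joins the cluster as a non-participant member and talks to the partition owners directly:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class LiteMemberExample {
    public static void main(String[] args) {
        // Join the cluster as a lite (non-participant) member
        Config conf = new Config();
        conf.setLiteMember(true);
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(conf);

        // Operations go directly to the partition-owning nodes,
        // avoiding the extra hop a standard client incurs
        IMap<String, String> capitals = hz.getMap("capitals");
        System.err.println("Capital city of GB: " + capitals.get("GB"));

        hz.getLifecycleService().shutdown();
    }
}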
Summary
We have seen that we have a number of strategies at our disposal for deploying Hazelcast within our architecture, be it treating it like a clustered standalone product akin to a traditional data source, but with more resilience and scalability, or, for more complex applications, absorbing its capabilities directly into our application, though that does come with some strings attached. Whichever approach we choose for our particular use case, we have easy access to scaling and control at our fingertips. In the next chapter we will look beyond just Hazelcast at alternative methods of getting access to the data held in the cluster.