Hazelcast is an easy-to-use yet scalable in-memory data grid and distributed executor framework. It enables you to build applications that have large memory requirements or need to scale horizontally.
Today's amounts of collected data show nearly exponential growth: more than 75 percent of all collected data has been collected in the past five years. To store that data and process it within an appropriate time, you need to partition the data and parallelize the processing of reports and analytics. This session demonstrates how to quickly and easily parallelize data processing with Hazelcast and its underlying distributed data structures. With a few quick introductions to different terms and some short live coding sessions, the presentation takes you on a journey through distributed computing.
Distributed Computing in Hazelcast - Geekout 2014 Edition, by Christoph Engelbert
Today's amounts of collected data show nearly exponential growth. More than 75% of all data has been collected in the past 5 years. To store this data and process it within an appropriate time, you need to partition the data and parallelize the processing of reports and analytics.
This talk will demonstrate how to parallelize data processing using Hazelcast and its underlying distributed data structures. With a quick introduction to the different terms and some short live coding examples, we will take the journey into distributed computing.
Source code for the demonstrations is available here:
1. https://github.com/noctarius/hazelcast-mapreduce-presentation
2. https://github.com/noctarius/hazelcast-distributed-computing
The document discusses distributed computing and in-memory computing using Hazelcast. It covers how Hazelcast partitions and distributes data across nodes, allows parallel processing of distributed data, and provides distributed caching capabilities with features like TTL and auto-cleanup. Examples of using Hazelcast for distributed maps, parallel sums, and caching are shown.
Hazelcast is an in-memory data grid that provides a distributed map for fast, reliable storage and access of data in a clustered environment. It offers features such as simple configuration, automatic data partitioning and replication, fail-safety, scalability, and integration with Java interfaces and Spring. Developers can use Hazelcast to store and query data, distribute work across a cluster, and publish and subscribe to cluster-wide events.
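To give a flavor of how little code that takes, here is a minimal sketch of a distributed map, assuming a Hazelcast 4.x/5.x member API (in 3.x, IMap lives in com.hazelcast.core instead of com.hazelcast.map); the map name and values are made up for illustration:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

import java.util.concurrent.TimeUnit;

public class DistributedMapExample {
    public static void main(String[] args) {
        // Starts a cluster member; further members on the same network join automatically
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Entries of the map are partitioned across the cluster members
        IMap<String, Integer> orders = hz.getMap("orders");
        orders.put("order-1", 42);

        // Per-entry TTL: this entry is evicted automatically after 60 seconds
        orders.put("order-2", 7, 60, TimeUnit.SECONDS);

        System.out.println(orders.get("order-1")); // prints 42
        hz.shutdown();
    }
}
```

Running the same program twice on one machine already forms a two-member cluster, which is the simple-configuration point the summary makes.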
Building infrastructure with Terraform (Google), by Radek Simko
Building your infrastructure as a one-off by clicking through the UI of your chosen cloud provider may be easy, but that isn't scalable, nor is it fun in the long term or in a team.
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
Building a highly scalable website requires understanding the core building blocks of your application environment. In this talk we dive into Jahia's core components to understand how they interact, and how, by (1) respecting a few architectural practices and (2) fine-tuning Jahia components and the JVM, you will be able to build a highly scalable service.
Hazelcast is an open source clustering and highly scalable data distribution platform for Java. It provides an in-memory data grid that partitions data across nodes and provides APIs to access and manipulate distributed maps, queues, topics and more. The document discusses how Hazelcast distributes data across partitions and nodes, handles eviction and persistence, forms clusters, and addresses issues like split brains. It also provides an overview of usage patterns and compares member nodes to client nodes.
Slides of my presentation to the AWS User Group Meetup in Montpellier.
Describes our use of terraform at Teads (http://www.teads.tv)
This is the story of a company that had tens of customers and was facing severe scaling issues. They approached us. They had a good product, predicting a few hundred customers within 6 months, and VCs went to them. Infrastructure scaling was the only unknown, so funding went into software-defined data centers. We introduced Terraform for infrastructure creation, Chef for OS hardening, and then Packer to support AWS as well as vSphere. Then, after a few more weeks, when there was a need for faster response from the data center, we went to Serf to trigger chef-clients immediately, and then to Consul for service monitoring.
We want to describe this journey.
Finally, we did exactly the same thing at a Fortune 500 customer to replace 15-year-old scripts. We will also cover sleek ways of dealing with provisioning in different Availability Zones across various AWS regions with Terraform.
This document provides an overview of Hazelcast, an open source in-memory data grid. It discusses what Hazelcast is, common use cases, features, and how to configure and use distributed maps (IMap) and querying with predicates. Key points covered include that Hazelcast stores data in memory and distributes it across a cluster, supports caching, distributed computing and messaging use cases, and IMap implements a distributed concurrent map that can be queried using predicates and configured with eviction policies and persistence.
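As a hedged illustration of the predicate querying mentioned above, a sketch against the Hazelcast 4.x/5.x query API; the Person class, map name, and data are hypothetical:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.query.Predicates;

import java.io.Serializable;
import java.util.Collection;

public class PredicateQueryExample {
    // Values must be serializable so they can live on remote members
    public static class Person implements Serializable {
        private final String name;
        private final int age;
        public Person(String name, int age) { this.name = name; this.age = age; }
        public String getName() { return name; }
        public int getAge() { return age; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Person> people = hz.getMap("people");
        people.put("p1", new Person("Alice", 34));
        people.put("p2", new Person("Bob", 27));

        // The predicate runs on each partition in parallel, close to the data
        Collection<Person> results = people.values(Predicates.greaterThan("age", 30));
        results.forEach(p -> System.out.println(p.getName())); // prints Alice
        hz.shutdown();
    }
}
```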
CloudOps' software developer, Patrick Dubé's slides from his talk at Confoo in Montreal about using Hashicorp's Terraform automation tool to treat your infrastructure as code on cloud.ca.
ClickHouse 2018. How to stop waiting for your queries to complete and start having fun, by Altinity Ltd
ClickHouse 2018. How to stop waiting for your queries to complete and start having fun, by Alexander Zaitsev, Altinity CTO
Presented at Percona Live Frankfurt
Spark and Mesos cluster optimization was discussed. The key points were:
1. Spark concepts like stages, tasks, and partitions were explained to understand application behavior and optimization opportunities around shuffling.
2. Application optimization focused on reducing shuffling through techniques like partitioning, reducing object sizes, and optimizing closures.
3. Memory tuning in Spark involved configuring storage and shuffling fractions to control memory usage between user data and Spark's internal data (see the configuration sketch after this list).
4. When running Spark on Mesos, coarse-grained and fine-grained allocation modes were described along with solutions like using Mesos roles to control resource allocation and dynamic allocation in coarse-grained mode.
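The points above correspond to concrete configuration keys. A minimal Java sketch, assuming the legacy Spark 1.x memory-fraction settings the talk refers to (later Spark versions replaced them with unified memory management); the values are purely illustrative, not recommendations:

```java
import org.apache.spark.SparkConf;

public class SparkTuningSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("tuning-sketch")
                .set("spark.executor.memory", "4g")
                .set("spark.executor.cores", "4")
                // Fewer shuffle partitions reduce task overhead; more reduce spill
                .set("spark.sql.shuffle.partitions", "200")
                // Spark 1.x split: storage fraction vs. shuffle fraction (point 3 above)
                .set("spark.storage.memoryFraction", "0.5")
                .set("spark.shuffle.memoryFraction", "0.3");
        System.out.println(conf.toDebugString());
    }
}
```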
Understanding Spark Tuning: Strata New York, by Rachel Warren
How to design a Spark Auto Tuner.
The first section covers how to set basic Spark settings, e.g. executor memory, driver memory, dynamic allocation, shuffle settings, number of partitions, etc. The second section covers how to collect historical data about a Spark job, and the third section discusses designing an auto-tuner application that programmatically configures Spark jobs using that historical data.
This document discusses programmatically tuning Spark jobs. It recommends collecting historical metrics like stage durations and task metrics from previous job runs. These metrics can then be used along with information about the execution environment and input data size to optimize configuration settings like memory, cores, partitions for new jobs. The document demonstrates using the Robin Sparkles library to save metrics and get an optimized configuration based on prior run data and metrics. Tuning goals include reducing out of memory errors, shuffle spills, and improving cluster utilization.
One of the most sought-after features in PostgreSQL is a scalable multi-master replication solution. While there do exist some tools to create multi-master clusters, such as Bucardo and pgpool-II, they may not be the right fit for an application. In this session, you will learn some of the strengths and weaknesses of these more popular multi-master solutions for PostgreSQL and how they compare to using Slony for your multi-master needs. We will explore the types of deployments best suited to a Slony deployment and the steps necessary to configure a multi-master solution for PostgreSQL.
In the “Sharing is caring” spirit, we came up with a series of internal talks called By Showmaxers, for Showmaxers, and we recently started making them public. There are already talks about networks and Android app building available.
Our latest talk focuses on PostgreSQL Terminology, and is led by Angus Dippenaar. He worked on Showmax projects from South Africa, and moved to work with us in Prague, Czech Republic.
The talk was meant to fill some holes in our knowledge of PostgreSQL. So, it guides you through the basic PostgreSQL terminology you need to understand when reading the official documentation and blogs.
You may learn what all these PostgreSQL terms mean:
Command, query, local or global object, non-schema local objects, relation, tablespace, database, database cluster, instance and its processes like postmaster or backend; session, connection, heap, file segment, table, TOAST, tuple, view, materialized (view), transaction, commit, rollback, index, write-ahead log, WAL record, WAL file, checkpoint, Multi-version concurrency control (MVCC), dead tuples (dead rows), or transaction exhaustion.
The terminology is followed by a demonstration of transaction exhaustion.
Get the complete explanation and see the demonstration of the transaction exhaustion and of tuple freezing in the talk on YouTube: https://youtu.be/E-RkI3Ws7gM.
The document discusses the history of databases and database management systems. It then summarizes some key features of MongoDB, including how to perform basic CRUD (create, read, update, delete) operations with examples. Potential use cases for MongoDB are also listed.
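For readers who have not seen the CRUD operations mentioned, a rough sketch using the modern MongoDB Java driver (com.mongodb.client); the database, collection, and field names are made up:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.set;

public class MongoCrudExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> users =
                    client.getDatabase("demo").getCollection("users");

            users.insertOne(new Document("name", "Alice").append("age", 34)); // create
            Document alice = users.find(eq("name", "Alice")).first();         // read
            users.updateOne(eq("name", "Alice"), set("age", 35));             // update
            users.deleteOne(eq("name", "Alice"));                             // delete
            System.out.println(alice);
        }
    }
}
```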
Infrastructure as Code: Introduction to TerraformAlexander Popov
Terraform is infrastructure as code software that allows users to define and provision infrastructure resources. It is similar to tools like Chef, Puppet, Ansible, Vagrant, CloudFormation, and Heat, but aims to be easier to get started with and more declarative. With Terraform, infrastructure is defined using the HashiCorp Configuration Language and provisioned using execution plans generated from those definitions. Key features include modules, provisioners, state management, and parallel resource provisioning.
The document provides information on migrating to and managing databases on Amazon RDS/Aurora. Some key points include:
- RDS/Aurora handles complexity and makes the database highly available, but it also limits customization options compared to managing your own databases.
- Aurora is a MySQL-compatible database cluster that shares storage across nodes for high availability without replication lag. A cluster has writer and reader endpoints.
- CloudFormation is recommended for creating and managing Aurora clusters due to its native AWS support and ability to integrate with other services.
- Loading large amounts of data into Aurora may require using parallel dump/load tools like Mydumper/Myloader instead of mysqldump, due to their improved performance.
GridSQL is an open source distributed database built on PostgreSQL that allows it to scale horizontally across multiple servers by partitioning and distributing data and queries. It provides significantly improved performance over a single PostgreSQL instance for large datasets and queries by parallelizing processing across nodes. However, it has some limitations compared to PostgreSQL such as lack of support for advanced SQL features, slower transactions, and need for downtime to add nodes.
The document compares and contrasts the SAS and Spark frameworks. It provides an overview of their programming models, with SAS using data steps and procedures while Spark uses Scala and distributed datasets. Examples are shown of common tasks like loading data, sorting, grouping, and regression in both SAS Proc SQL and Spark SQL. Spark MLlib is described as Spark's machine learning library, in contrast to SAS Stats. Finally, Spark Streaming is demonstrated for loading and querying streaming data from Kafka. The key takeaways recommend trying Spark for large data, distributed computing, better control of code, open source licensing, or leveraging Hadoop data.
RDS and Terraform allow managing relational databases on AWS. While RDS defaults and parameter changes can be difficult to manage with Terraform alone, it is highly recommended because Terraform brings abstraction and infrastructure as code benefits. Modules can help organize RDS configuration but may introduce complexity. Overall, Terraform is effective for managing RDS instances and parameters despite some challenges with defaults and replacements.
How to teach an elephant to rock'n'roll, by PGConf APAC
The document discusses techniques for optimizing PostgreSQL queries, including:
1. Using index only scans to efficiently skip large offsets in queries instead of scanning all rows.
2. Pulling the LIMIT clause under joins and aggregates to avoid processing unnecessary rows.
3. Employing indexes creatively to perform DISTINCT operations by scanning the index instead of the entire table.
4. Optimizing DISTINCT ON queries by looping through authors and returning the latest row for each instead of a full sort.
Modern infrastructure can sometimes look like a wedding cake with many different layers. It's no surprise for seasoned users that Terraform has been able to provision the lowest layer - compute - for a long while. Skipping a few layers in between, a workload scheduler like Kubernetes typically represents the top one, exposing high-level APIs for scheduling and scaling pods, managing persistent volumes, and setting restrictions and limits for scheduling.
Terraform 0.10 comes with a Kubernetes provider which supports all stable (v1) Kubernetes resources as of K8S 1.6.
In this talk you'll hear about particular examples of where it's useful to use Terraform for managing K8S resources, what benefits you get compared to other solutions, and, demo gods permitting, you'll also see how to get from zero to an application running on K8S.
https://www.hashiconf.com/talks/radek-simko.html
Recording: https://www.youtube.com/watch?v=-UtqHkrvFro
The way from monolithic to microservice architectures can be hard. Overall, microservices are not the all-holy grail that just solves all your issues. You need to be aware that you need the right developers and the right toolset. Oh, and not to forget: moving state to authorization systems doesn't mean your application is really stateless. :)
Anyhow, microservices are a great architecture, and this deck is a short introduction to why we need to change our application architectures and which pitfalls you face when introducing the idea of microservices.
Hazelcast provides scale-out computing capabilities that allow cluster capacity to be increased or decreased on demand. It enables resilience through automatic recovery from member failures without data loss. Hazelcast's programming model allows developers to easily program cluster applications as if they are a single process. It also provides fast application performance by holding large data sets in main memory.
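The claim that you can program the cluster as if it were a single process is easiest to see with the distributed executor. A minimal sketch against the Hazelcast member API; the task and its input are invented for illustration:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;

import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

public class DistributedExecutorExample {
    // Tasks must be serializable so they can be shipped to other members
    public static class SquareTask implements Callable<Integer>, Serializable {
        private final int value;
        public SquareTask(int value) { this.value = value; }
        @Override public Integer call() { return value * value; }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Looks like java.util.concurrent.ExecutorService, but runs cluster-wide
        IExecutorService executor = hz.getExecutorService("workers");
        Future<Integer> result = executor.submit(new SquareTask(6));
        System.out.println(result.get()); // prints 36
        hz.shutdown();
    }
}
```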
Do you need to scale your application, share data across a cluster, perform massively parallel processing on many JVMs, or maybe consider an alternative to your favorite NoSQL technology? Hazelcast to the rescue! With Hazelcast, distributed development is much easier. This presentation will be useful to those who would like to get acquainted with Hazelcast's top features and see some of them in action, e.g. how to cluster an application, cache data in it, partition in-memory data, distribute workload onto many servers, take advantage of parallel processing, etc.
Presented at the JavaDay Kyiv 2014 conference.
[OracleCode SF] In-memory analytics with Apache Spark and Hazelcast, by Viktor Gamov
Apache Spark is a distributed computation framework optimized to work in-memory, and heavily influenced by concepts from functional programming languages.
Hazelcast, an open source in-memory data grid capable of amazing feats of scale, provides a wide range of distributed computing primitives, including the ExecutorService, M/R, and Aggregations frameworks.
The nature of data exploration and analysis requires that data scientists be able to ask questions that weren't planned to be asked, and get an answer fast!
In this talk, Viktor will explore Spark and see how it works together with Hazelcast to provide a robust in-memory open-source big data analytics solution!
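Of the primitives listed above, the Aggregations framework is the quickest to demonstrate. A sketch assuming Hazelcast's fast-aggregation API (available since 3.8); the map name and data are illustrative:

```java
import com.hazelcast.aggregation.Aggregators;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class ParallelSumExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Integer> values = hz.getMap("values");
        for (int i = 1; i <= 100; i++) {
            values.put("k" + i, i);
        }
        // The sum is computed partition-parallel on the members owning the data
        long sum = values.aggregate(Aggregators.integerSum());
        System.out.println(sum); // prints 5050
        hz.shutdown();
    }
}
```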
From cache to in-memory data grid. Introduction to Hazelcast, by Taras Matyashovsky
This presentation:
* covers the basics of caching and popular cache types
* explains the evolution from a simple cache to a distributed cache, and from distributed caches to an IMDG
* does not describe the usage of NoSQL solutions for caching
* is not intended for product comparison or for promotion of Hazelcast as the best solution
In-Memory Computing - Distributed Systems - Devoxx UK 2015, by Christoph Engelbert
Today's amounts of collected data show nearly exponential growth. More than 75% of all data has been collected in the past 5 years. To store this data and process it within an appropriate time, you need to partition the data and parallelize the processing of reports and analytics. This talk will demonstrate how to parallelize data processing using Hazelcast and its underlying distributed data structures. With a quick introduction to the different terms and some short live coding examples, we will take the journey into distributed computing.
This document provides an introduction and overview of Redis. Redis is described as an in-memory non-relational database and data structure server. It is simple to use with no schema or user required. Redis supports a variety of data types including strings, hashes, lists, sets, sorted sets, and more. It is flexible and can be configured for caching, persistence, custom functions, transactions, and publishing/subscribing. Redis is scalable through replication and partitioning. It is widely adopted by companies like GitHub, Instagram, and Twitter for uses like caching, queues, and leaderboards.
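A quick sketch of those data types through Jedis, one of the common Java clients for Redis; all keys and values are made up:

```java
import redis.clients.jedis.Jedis;

public class RedisDataTypesExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set("greeting", "hello");            // string
            jedis.hset("user:1", "name", "Alice");     // hash
            jedis.lpush("jobs", "job-1", "job-2");     // list (queue-style usage)
            jedis.sadd("tags", "fast", "simple");      // set
            jedis.zadd("leaderboard", 42.0, "alice");  // sorted set (leaderboards)
            jedis.expire("greeting", 60);              // TTL, for cache-style usage
            System.out.println(jedis.get("greeting"));
        }
    }
}
```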
This document discusses Hazelcast, an open source in-memory data grid. It provides an overview of Hazelcast's features such as distributed caching, data structures, and partitioning. It also summarizes several performance tests run on Hazelcast, showing average and maximum operations per second for different workloads including shopping cart simulations, locks, transactions, and entry processors. The presentation concludes by noting that Hazelcast Inc. is hiring.
This document discusses in-memory computing and distributed systems. It provides an overview of Hazelcast, an open-source distributed systems and in-memory data grid platform. Key features highlighted include using standard Java collections and concurrency APIs, transparent data distribution, being a drop-in replacement for caching solutions, and having a disruptively simple design. Distributed computing concepts like data partitioning, parallel processing, and caching evolution are briefly explained.
Peter Veentjer is a senior developer and solution architect for Hazelcast who has 13 years of Java experience. Hazelcast is an open source in-memory data grid that simplifies building scalable and highly available systems. It provides distributed data structures like maps, queues, topics and more through a simple Java API. Hazelcast can be used for caching, messaging, job processing and more.
Hazelcast is an in-memory data grid that allows multiple instances of an application to communicate and share data between each other. It keeps data in main memory for fast processing and provides structures like maps, lists, sets and queues to store distributed data. Hazelcast makes it easy to set up distributed caching and synchronization between nodes with no need to manually discover instances.
WebSockets: The Current State of the Most Valuable HTML5 API for Java Developers, by Viktor Gamov
WebSockets provide a standardized way for web browsers and servers to establish two-way communications channels over a single TCP connection. They allow for more efficient real-time messaging compared to older techniques like polling and long-polling. The WebSocket API defines client-side and server-side interfaces that allow for full-duplex communications that some popular Java application servers and web servers support natively. Common use cases that benefit from WebSockets include chat applications, online games, and real-time updating of social streams.
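As a taste of the server-side interface, a minimal JSR 356 echo endpoint of the kind the Java application servers mentioned can host; the "/echo" path is arbitrary:

```java
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

// Deployed automatically by any JSR 356 compliant container
@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnMessage
    public void onMessage(String message, Session session) throws IOException {
        // Full-duplex: the server pushes a frame back over the same TCP connection
        session.getBasicRemote().sendText("echo: " + message);
    }
}
```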
Functional UI testing of Adobe Flex RIA, by Viktor Gamov
The document discusses functional UI testing of Adobe Flex applications. It covers why testing is important, common testing approaches like unit testing and GUI testing, and automated testing tools for Flex like HP QTP, Selenium, Ranorex, and FlexMonkey. It also discusses best practices for creating test-friendly applications and instrumenting custom components and events to facilitate automated testing.
Creating your own private Download Center with Bintray, by Baruch Sadogursky
This document discusses how to create a private download center using Bintray to automate software distribution. It outlines the requirements of a download center including fast speeds, high uptime, security, usage tracking, and integration with continuous integration processes. It notes that download centers are often neglected non-core projects. Bintray is introduced as a distribution as a service platform that meets all download center requirements and provides a complete, fast, and reliable infrastructure without the need to manage underlying resources. The presenters demonstrate how to quickly set up a download center on Bintray in 10 minutes.
JavaOne 2013: «Java and JavaScript - Shaken, Not Stirred», by Viktor Gamov
There is a perception in the Java community that JavaScript is a second-league interpreted language with the main purpose of making web pages a little prettier. But JavaScript is a powerful, flexible, dynamically typed language, and today the language is experiencing a revival driven by the interest in HTML5. Nashorn is a modern JavaScript engine available on the JVM, and it's already included with JDK 8 builds. This presentation is about building polyglot applications with Java and JavaScript.
DevOps @Scale (Greek Tragedy in 3 Acts) as it was presented at Oracle Code SF..., by Baruch Sadogursky
As in a good Greek tragedy, scaling devops to big teams has 3 stages and usually ends badly. In this play (it's more than a talk!) we'll present you with Pentagon Inc. and their way of scaling devops from a team of 3 engineers to a team of 100 (spoiler: it's painful!).
We aren't sure about you, but working with Java 8 made one of the speakers lose all of his hair and the other lose his sleep (or was it the jetlag?). If you still haven't reached the level of Brian Goetz in mastering lambdas and strings, this talk is for you. And if you think you have, we have some bad news: you should attend as well.
Building modern web apps with HTML5, JavaScript, and Java, by Alexander Gyoshev
This document discusses building modern web apps with HTML5, JavaScript, and Java. It covers managing complexity with templates, data binding, data syncing, and widgets. It recommends using logic-less templates like Mustache and Handlebars for simplicity. Frameworks like Backbone, Kendo, and AngularJS help separate data and logic through data binding and sync data with backends. The document demonstrates these concepts with code examples. It acknowledges Java's role through frameworks like Play, Scala, and Lift that improve on plain Java for web development. The document concludes by wrapping up how frameworks provide modular pieces to build applications like puzzles.
Arquillian - extensions which you have to take with you to a deserted island, by SoftwareMill
Arquillian has plenty of useful extensions. In this talk, Michał will present those that, in his opinion, are most helpful and should be used in most Arquillian-powered Java projects.
1. JBoss Arquillian is a test framework that manages containers and deploys applications and tests to containers. It supports various container types and container adapters.
2. ShrinkWrap is used to bundle dependent classes and resources into deployable archives. The ShrinkWrap Resolver helps to resolve dependencies.
3. Arquillian Drone integrates WebDriver with Arquillian for browser interaction and testing. Arquillian Graphene provides Page Object and other support for WebDriver tests.
4. Arquillian Warp allows executing HTTP requests and server-side tests in the same request cycle. Arquillian Droidium provides Android testing support.
JavaFX is a software platform for creating and delivering desktop applications, as well as rich internet applications (RIAs) that can run across a wide variety of devices. Some key aspects of the JavaFX platform include its base classes like Application, Scene and Stage; the use of FXML for building the user interface with CSS styling and JavaScript capabilities; JavaFX properties and bindings for observing value changes; and support for animation. The JavaFX architecture provides objects, APIs and utilities to help developers create visually-engaging and responsive user experiences.
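The properties-and-bindings aspect fits in a few lines; a sketch using javafx.beans with invented property names:

```java
import javafx.beans.property.IntegerProperty;
import javafx.beans.property.SimpleIntegerProperty;

public class BindingExample {
    public static void main(String[] args) {
        IntegerProperty width = new SimpleIntegerProperty(100);
        IntegerProperty doubled = new SimpleIntegerProperty();

        // doubled is recomputed whenever width changes
        doubled.bind(width.multiply(2));

        // Observing value changes, as described above
        width.addListener((obs, oldV, newV) ->
                System.out.println("width: " + oldV + " -> " + newV));

        width.set(150);
        System.out.println(doubled.get()); // prints 300
    }
}
```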
Are you writing enough tests for your applications? We thought not! Ryan Roemer of Formidable Labs and author of the new book, "Backbone Testing.js", will help us learn how to test your JavaScript applications in a 3 hour workshop at Redfin's beautiful downtown headquarters.
The workshop will be a mixture of lecture and hands on lessons. With the help of our fabulous mentors you'll learn how to craft a frontend test infrastructure using Mocha, Chai, Sinon.JS and PhantomJS.
Java 8 introduced the Stream API as a modern, functional, and very powerful tool for processing collections of data. One of the main benefits of the Stream API is that it hides the details of iteration over the underlying data set, allowing for parallel processing within a single JVM, using a fork/join framework. I will talk about a Stream API implementation that enables parallel processing across many machines and many JVMs. With an explanation of the internals of the implementation, I will give an introduction to the general design behind stream processing using DAG (directed acyclic graph) engines and how an actor-based implementation can provide in-memory performance while still leveraging industry-wide-known frameworks such as the Java Streams API.
https://www.jfokus.se/jfokus/talks.jsp#RidingtheJetStreams
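For contrast, the single-JVM baseline that such a distributed implementation generalizes: a plain JDK parallel stream, where the fork/join framework splits the work across local cores. A distributed Stream implementation keeps this programming model but partitions the source across JVMs and merges the partial results:

```java
import java.util.stream.IntStream;

public class ParallelStreamExample {
    public static void main(String[] args) {
        // The iteration details are hidden; fork/join parallelizes within one JVM
        long sumOfSquares = IntStream.rangeClosed(1, 1_000_000)
                .parallel()
                .mapToLong(i -> (long) i * i)
                .sum();
        System.out.println(sumOfSquares);
    }
}
```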
Batteries included: Advantages of an End-to-end solution, by Juergen Fesslmeier
Creating web applications is challenging. Faced with supporting multiple devices, a patchwork of languages, and various technologies, it requires a team of experts to develop, configure, maintain, and run them. In this increasingly complex mix, we'd like to call simplicity to the rescue, as do developers and their clients.
In this session we tell the story of what "It just works out of the box." means for web and mobile applications, and how "Fewer lines of code produce better apps." relates to business. And best of all, we like to use the same language everywhere: JavaScript.
The document discusses SOAP, describing it as a protocol specification for exchanging structured information in web services using XML format. It outlines the key parts of a SOAP message including the envelope, header, body and optional fault. The document then provides an example SOAP request and discusses how WSDL and XSD describe the structure and data types of a web service. It evaluates different options for working with SOAP in Scala including rolling your own implementation, using JAXB, Apache CXF, and ScalaXB which generates case classes. Finally, it notes some common pitfalls like Sax parsing errors and timeouts when interacting with web services.
Lambda Expressions: Myths and Mistakes - Richard Warburton (jClarity), from jaxLondonConference
Presented at JAX London 2013
tl;dr - How will the everyday developer cope with Java 8’s Language changes?
Java 8 will ship with a powerful new abstraction - Lambda Expressions (aka Closures) and a completely retooled set of Collections libraries. In addition interfaces have changed through the addition of default and static methods. The ongoing debate as to whether Java should include such language changes has resulted in many vocal opinions being espoused. Sadly few of these opinions have been backed up by practical experimentation and experience. - Are these opinions just myths?
- What mistakes does a developer make?
- Can a ‘blue collar’ Java Developer cope with functional programming?
- Can we avoid these mistakes in future?
In London, we’ve been running a series of hackdays trying out Lambda Expressions as part of the Adopt-a-JSR program and have been recording and analysing the results. Huge topics of mailing list discussion have been almost entirely irrelevant problems to developers, and some issues which barely got any coverage at all have proved to be a consistent thorn in people’s side.
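For readers who have not put the language changes side by side, a small sketch contrasting the pre-Java-8 anonymous-class style with a lambda, plus a default method on the interface; the Greeter interface is invented for illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaExample {
    interface Greeter {
        String greet(String name);

        // Since Java 8, interfaces may carry default method implementations
        default String greetAll(List<String> names) {
            return names.stream().map(this::greet).collect(Collectors.joining(", "));
        }
    }

    public static void main(String[] args) {
        // Pre-Java-8 style: anonymous inner class
        Greeter verbose = new Greeter() {
            @Override public String greet(String name) { return "Hello " + name; }
        };

        // Java 8 style: a lambda for the same functional interface
        Greeter concise = name -> "Hello " + name;

        System.out.println(verbose.greet("Ada"));
        System.out.println(concise.greetAll(Arrays.asList("Ada", "Grace")));
    }
}
```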
Nginx Scripting - Extending Nginx Functionalities with Lua, by Tony Fabeen
The document discusses extending Nginx functionalities with Lua. It provides an overview of Nginx architecture and how the lua-nginx-module allows running Lua scripts inside Nginx. This provides a powerful and performant programming environment while taking advantage of Nginx's event-driven architecture. Examples show how to access Nginx variables and APIs from Lua, issue subrequests, and do non-blocking I/O including with cosockets. Libraries like lua-resty-memcached reuse these extensions. In summary, Nginx is excellent for scalable apps and Lua extends its capabilities through embedded scripts and subrequests.
This document provides information about CSS preprocessors like Sass, LESS, and Stylus. It discusses how they extend CSS with features like variables, mixins, functions, and nested rules to make stylesheets more maintainable and reusable. Preprocessors compile code written in their own syntax to regular CSS understood by browsers. While offering powerful features, preprocessors also introduce a learning curve and potential for code bloat if not used properly.
Oracle OpenWorld 2010 - Consolidating Microsoft SQL Server Databases into an ..., by djkucera
The document discusses strategies for consolidating Microsoft SQL Server databases into an Oracle 11g cluster. It covers gaining approval for the migration project, using the Oracle Migration Workbench to migrate database objects to Oracle, and employing views, stored procedures and Oracle Streams to integrate the databases during a staged migration approach. Challenges with each approach like data type mismatches are also addressed.
This document provides an introduction to JavaFX 2. It discusses the history of desktop applications in Java, including AWT, Swing, and issues with the old approaches. It then summarizes the announcement and initial challenges of JavaFX 1. It outlines the core concepts of JavaFX 2, including the architecture with Application, Scene, Stage, and FXML. It also briefly discusses controllers, properties, bindings, collections, charts, animation, effects, media, and tools like SceneBuilder and Scenic View.
This document introduces Seq, a library for Node.js that provides a cleaner way to handle asynchronous flow control and parallel execution. It summarizes Seq's installation, basic usage with examples, handling errors, nested execution, and more advanced features. Seq allows asynchronous functions to be executed sequentially or in parallel using methods like seq(), seqEach(), and parEach() to simplify complex asynchronous code and avoid "boomerang code". The document provides resources to learn more about Seq and asynchronous programming.
Analytics at Speed: Introduction to ClickHouse and Common Use Cases. By Mikha..., Altinity Ltd
ClickHouse is a powerful open source analytics database that provides fast, scalable performance for data warehousing and real-time analytics use cases. It can handle petabytes of data and queries and scales linearly on commodity hardware. ClickHouse is faster than other databases for analytical workloads due to its columnar data storage and parallel processing. It supports SQL and integrates with various data sources. ClickHouse can run on-premises, in the cloud, or in containers. The ClickHouse operator makes it easy to deploy and manage ClickHouse clusters on Kubernetes.
Beginner workshop to AngularJS, presentation at Google, by Ari Lerner
AngularJS workshop to introduce beginner concepts:
The presenter is Ari Lerner from Fullstack.io and teaches AngularJS. The workshop covers tools needed for Angular development like text editors, browsers, and web servers. It demonstrates building a simple greeting app with Angular directives, controllers, expressions, and scopes. Data handling with the $http service and promises is explained. Dependency injection allows services like $http to be passed into controllers. Services are introduced as singleton objects that can persist data beyond a single controller.
SecureSocial - Authentication for Play Framework, by jaliss
This document provides an overview and agenda for SecureSocial, an authentication module for Play!. It discusses main concepts like identity providers and user services. It covers installation, configuration, protecting actions, and customizing views. It also describes extending SecureSocial by creating new identity providers and internationalizing messages. The document aims to explain how SecureSocial works and how developers can customize it for their needs.
Running databases in containers has been the biggest anti-pattern of the last decade. The world, however, moves on and stateful container workloads become more common, and so do databases in Kubernetes. People love the additional convenience when it comes to deployment, scalability, and operation.
With PostgreSQL on its way to becoming the world's most beloved database, there are certainly quite a few things to keep in mind when running it on k8s. Let us evaluate the important dos and especially the don'ts.
Presentation by Chris Engelbert of simplyblock (https://www.simplyblock.io)
For the last two decades, the amount of data we store, process, and analyze has been ever growing. The last decade shows a higher focus on immediate-feedback-loop data pipelines, using technologies such as Complex Event Processing (CEP), Stream Processing, and Change Data Capture (CDC). Services such as Kafka or NATS are to be found in almost every new system (at least to some extent).
To build a data pipeline, the number of technologies, frameworks, and platforms are endless. Getting the initial grasp of it all is much harder than expected, but together we can tackle it!
Messages are everywhere these days. Whether in JavaScript frontends in the form of events, or in backends with Kafka or NATS message queues, we want to achieve two goals: separation of concerns (independent units) and scalability (or, in frontends, the freeing of resources).
Since everything has to be responsive today, we need event-based systems. So let's explore and understand the underlying systems together and work out their areas of application.
Farms are simple. A farm, a building or two, maybe a barn. Done. You wish.
Monitoring farms and barns is a tedious task. No farm looks like another, and water distribution, next to other elements, has grown organically. A little bit like the good old legacy systems we all love. With the additional complication of keeping track of topology changes, typical building automation systems are out of scope.
See how clevabit integrated neo4j, PostgreSQL and TimescaleDB to bring observability to farms and what I learned along the way. And there were a lot of “this time it works” moments.
What I learned about IoT Security ... and why it's so hard!, by Christoph Engelbert
The document discusses some of the challenges of IoT security and provides recommendations. It notes that IoT security is difficult because devices often lack secure boot processes, have undocumented backdoors, and debugging can be done over unencrypted network connections. It recommends hiring engineers trained in security, prioritizing security over features, performing regular penetration testing, and providing indicators if a device becomes hacked. However, it acknowledges that no security is impossible to break, so the focus should be on choosing important battles.
Time-series data, or data being associated with its respective time of occurrence, is everywhere. From the obvious cases, such as metrics, observability, IoT data, all the way to logs, invoicing, or payment records. While storing some of these in relational databases is standard practice, people often reach for specific time-series databases when volume gets high. But imagine if you could have all of them in the same database: PostgreSQL.
With Instana, "classic" observability is not the end of the line. Find out what observability means and how it can help DevOps, developers, and SREs day by day.
The document discusses creating resilient applications and systems. It defines resiliency as the ability to withstand failures from power outages, hardware failures, network issues, human errors, or software bugs. The document outlines some basic rules for resiliency, including having no single point of failure, embracing failures, and using back-off algorithms and idempotency. It also discusses the roles of developers, DevOps, operations, infrastructure, and cloud computing in building resiliency.
Continuous Integration, Continuous Delivery, Continuous Monitoring!
These days, CI and CD are commonly used mechanics to achieve fast turnaround times for high-demand applications. Microservices architectures and highly dynamic environments (based on Kubernetes, Docker, ...), however, come with a whole different set of problems.
Systems that not only appear and disappear dynamically (e.g. through autoscaling), but most commonly tend to be written using multiple different programming languages, are hard to monitor from the point of view that matters: user requests and user experience. But the answer is simple: Continuous Monitoring (CM).
Let's build a polyglot microservices infrastructure. A way to monitor and trace multi-service requests will be demonstrated using Instana’s automatic discovery system.
As we all know, Java is the best language in the world, except there is Go. Go is just so much more, isn't it? The syntax is so concise and meaningful, the compiler is so much more helpful, and the rules are all over it.
We will uncover the bitter truth: the 5 reasons that every Java developer should know about Go. We'll present why Go is just the better programming language and why the hype around Go is all real.
Let your eyes be opened and your brain explode. Sarcasm included.
Everyone knows there isn't just one way of doing things. This is also true for web-administrated embedded devices, and a lot of different implementation approaches were tried before the combination of Go and TypeScript took shape. Plenty of the attempts failed due to missing knowledge, inability, hatred of some programming languages, or plain size requirements. From Java and C/C++ over Go+Lua and Go+JavaScript to the final decision on Go and TypeScript, we follow the adventure of an embedded framework and the problems that arose, covering the pros and cons, but also how it feels for a Java developer, and the new horizons it opens.
JSON has by now become a regular part of most applications and services. Do we, however, really want to transfer human-readable information, or are we looking for a binary protocol that is as debuggable as JSON? CBOR, the Concise Binary Object Representation, offers the best of JSON plus an extremely efficient binary representation.
http://www.cbor.io
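A sketch of the JSON-versus-CBOR trade-off, assuming Jackson with the jackson-dataformat-cbor module on the classpath; the payload is made up:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.cbor.CBORFactory;

import java.util.LinkedHashMap;
import java.util.Map;

public class CborExample {
    public static void main(String[] args) throws Exception {
        // Same Jackson data binding, different wire format underneath
        ObjectMapper json = new ObjectMapper();
        ObjectMapper cbor = new ObjectMapper(new CBORFactory());

        Map<String, Object> payload = new LinkedHashMap<>();
        payload.put("id", 42);
        payload.put("name", "sensor-1");

        byte[] jsonBytes = json.writeValueAsBytes(payload);
        byte[] cborBytes = cbor.writeValueAsBytes(payload);

        // Identical data model, more compact binary representation
        System.out.println("JSON: " + jsonBytes.length
                + " bytes, CBOR: " + cborBytes.length + " bytes");
    }
}
```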
The days of JNI are numbered; Project Panama is on the rise to tear down the walls between Java and C/C++ forever. FFI (Foreign Function Interface) technology finally arrives in the Java world.
This document discusses various approaches to accessing the sun.misc.Unsafe class from outside of the JDK/JRE, as it is an internal class not intended for public use. It presents several options for retrieving an Unsafe instance, such as directly calling Unsafe.getUnsafe() (which only works inside JDK/JRE), accessing the "theUnsafe" field via reflection, or constructing a new Unsafe instance using a private constructor. However, it notes that none of these options feel quite right as sun.misc.Unsafe is an internal class, and its use is discouraged outside of the JDK/JRE.
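The reflective approach described above looks roughly like this; a sketch that works on JDKs where sun.misc.Unsafe is still reachable:

```java
import sun.misc.Unsafe;

import java.lang.reflect.Field;

public class UnsafeAccess {
    public static Unsafe getUnsafe() {
        try {
            // The singleton instance is held in the private static field "theUnsafe"
            Field field = Unsafe.class.getDeclaredField("theUnsafe");
            field.setAccessible(true);
            return (Unsafe) field.get(null);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("sun.misc.Unsafe not available", e);
        }
    }

    public static void main(String[] args) {
        Unsafe unsafe = getUnsafe();
        // Off-heap allocation as a quick smoke test of the acquired instance
        long address = unsafe.allocateMemory(8);
        unsafe.putLong(address, 42L);
        System.out.println(unsafe.getLong(address)); // prints 42
        unsafe.freeMemory(address);
    }
}
```

As the summary notes, none of this is sanctioned use: the class is internal and may be restricted or removed in newer JDKs.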
Reaching critical mass with your application systems becomes harder every day. Caching helps to provide low latency and high availability in front of slow calculations, networks, databases, and any other kind of external resource.
JCache - a caching introduction: what is the idea, where are we coming from, and where do we want to go in the future? Why do we need caching, and why do we want to cache?
Nowadays the amount of collected data grows exponentially. More than 75% of all stored data was collected in the last 5 to 6 years. To store and analyze this ever-faster-growing pile of data, we have to go new ways. The scale-up approach is starting to break apart; partitioning data and parallelizing processing and analysis are the new way.
Hey guys, lemme tell ya a story.
Once upon a time, we're talking about the year 2001, a few people had an amazing idea. They were thinking about something that would change the world. It would make the world easy and give programmers almost unlimited power! It was simply referred to as JSR 107, one of the least things to change in the upcoming future. But those pals were way ahead of their time and nothing really happened. So time passed by and by and by, and over the years it was buried in the deep catacombs of the JCP. Eventually, in 2011, two brave knights took on the fight and worked themselves through all the pathlessness to finalize it in 2014. Lads, you know what I'm talking about: they called it the "Java Caching API", or in short "JCache". Yes, you heard me, a Java standard for caching!
A software system can hardly be imagined without caching today, and it was time for a standard. No matter if you want to cache database queries, generated HTML, or the results of long-running calculations, new systems have to reach a critical mass to be successful. Therefore caching becomes a first-class citizen of the application landscape: the principle of Caching First. JCache has grown over 13 years to its final success and had an amazing co-spec lead, Greg Luck, the inventor of Ehcache.
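A minimal sketch of the standardized API, assuming any JSR 107 provider (Hazelcast, Ehcache, ...) on the classpath; the cache name and expiry are illustrative:

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

public class JCacheExample {
    public static void main(String[] args) {
        // Resolves whichever JSR 107 provider is found on the classpath
        CacheManager manager = Caching.getCachingProvider().getCacheManager();

        MutableConfiguration<String, String> config =
                new MutableConfiguration<String, String>()
                        .setTypes(String.class, String.class)
                        // Entries expire ten minutes after creation
                        .setExpiryPolicyFactory(
                                CreatedExpiryPolicy.factoryOf(Duration.TEN_MINUTES));

        Cache<String, String> cache = manager.createCache("renderedPages", config);
        cache.put("/home", "<html>...</html>");
        System.out.println(cache.get("/home"));
    }
}
```

Swapping the provider, say from Ehcache to Hazelcast, requires no change to this code, which is the point of the standard.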
Book industry standards are evolving rapidly. In the first part of this session, we’ll share an overview of key developments from 2024 and the early months of 2025. Then, BookNet’s resident standards expert, Tom Richardson, and CEO, Lauren Stewart, have a forward-looking conversation about what’s next.
Link to recording, presentation slides, and accompanying resource: https://bnctechforum.ca/sessions/standardsgoals-for-2025-standards-certification-roundup/
Presented by BookNet Canada on May 6, 2025 with support from the Department of Canadian Heritage.
AI Changes Everything – Talk at Cardiff Metropolitan University, 29th April 2025, by Alan Dix
Talk at the final event of Data Fusion Dynamics: A Collaborative UK-Saudi Initiative in Cybersecurity and Artificial Intelligence funded by the British Council UK-Saudi Challenge Fund 2024, Cardiff Metropolitan University, 29th April 2025
https://alandix.com/academic/talks/CMet2025-AI-Changes-Everything/
Is AI just another technology, or does it fundamentally change the way we live and think?
Every technology has a direct impact with micro-ethical consequences, some good, some bad. However more profound are the ways in which some technologies reshape the very fabric of society with macro-ethical impacts. The invention of the stirrup revolutionised mounted combat, but as a side effect gave rise to the feudal system, which still shapes politics today. The internal combustion engine offers personal freedom and creates pollution, but has also transformed the nature of urban planning and international trade. When we look at AI the micro-ethical issues, such as bias, are most obvious, but the macro-ethical challenges may be greater.
At a micro-ethical level AI has the potential to deepen social, ethnic and gender bias, issues I have warned about since the early 1990s! It is also being used increasingly on the battlefield. However, it also offers amazing opportunities in health and educations, as the recent Nobel prizes for the developers of AlphaFold illustrate. More radically, the need to encode ethics acts as a mirror to surface essential ethical problems and conflicts.
At the macro-ethical level, by the early 2000s digital technology had already begun to undermine sovereignty (e.g. gambling), market economics (through network effects and emergent monopolies), and the very meaning of money. Modern AI is the child of big data, big computation and ultimately big business, intensifying the inherent tendency of digital technology to concentrate power. AI is already unravelling the fundamentals of the social, political and economic world around us, but this is a world that needs radical reimagining to overcome the global environmental and human challenges that confront us. Our challenge is whether to let the threads fall as they may, or to use them to weave a better future.
Role of Data Annotation Services in AI-Powered ManufacturingAndrew Leo
From predictive maintenance to robotic automation, AI is driving the future of manufacturing. But without high-quality annotated data, even the smartest models fall short.
Discover how data annotation services are powering accuracy, safety, and efficiency in AI-driven manufacturing systems.
Precision in data labeling = Precision on the production floor.
Mobile App Development Company in Saudi ArabiaSteve Jonas
EmizenTech is a globally recognized software development company, proudly serving businesses since 2013. With over 11+ years of industry experience and a team of 200+ skilled professionals, we have successfully delivered 1200+ projects across various sectors. As a leading Mobile App Development Company In Saudi Arabia we offer end-to-end solutions for iOS, Android, and cross-platform applications. Our apps are known for their user-friendly interfaces, scalability, high performance, and strong security features. We tailor each mobile application to meet the unique needs of different industries, ensuring a seamless user experience. EmizenTech is committed to turning your vision into a powerful digital product that drives growth, innovation, and long-term success in the competitive mobile landscape of Saudi Arabia.
Special Meetup Edition - TDX Bengaluru Meetup #52.pptxshyamraj55
We’re bringing the TDX energy to our community with 2 power-packed sessions:
🛠️ Workshop: MuleSoft for Agentforce
Explore the new version of our hands-on workshop featuring the latest Topic Center and API Catalog updates.
📄 Talk: Power Up Document Processing
Dive into smart automation with MuleSoft IDP, NLP, and Einstein AI for intelligent document workflows.
AI EngineHost Review: Revolutionary USA Datacenter-Based Hosting with NVIDIA ...SOFTTECHHUB
I started my online journey with several hosting services before stumbling upon Ai EngineHost. At first, the idea of paying one fee and getting lifetime access seemed too good to pass up. The platform is built on reliable US-based servers, ensuring your projects run at high speeds and remain safe. Let me take you step by step through its benefits and features as I explain why this hosting solution is a perfect fit for digital entrepreneurs.
The Evolution of Meme Coins A New Era for Digital Currency ppt.pdfAbi john
Analyze the growth of meme coins from mere online jokes to potential assets in the digital economy. Explore the community, culture, and utility as they elevate themselves to a new era in cryptocurrency.
How Can I use the AI Hype in my Business Context?Daniel Lehner
𝙄𝙨 𝘼𝙄 𝙟𝙪𝙨𝙩 𝙝𝙮𝙥𝙚? 𝙊𝙧 𝙞𝙨 𝙞𝙩 𝙩𝙝𝙚 𝙜𝙖𝙢𝙚 𝙘𝙝𝙖𝙣𝙜𝙚𝙧 𝙮𝙤𝙪𝙧 𝙗𝙪𝙨𝙞𝙣𝙚𝙨𝙨 𝙣𝙚𝙚𝙙𝙨?
Everyone’s talking about AI but is anyone really using it to create real value?
Most companies want to leverage AI. Few know 𝗵𝗼𝘄.
✅ What exactly should you ask to find real AI opportunities?
✅ Which AI techniques actually fit your business?
✅ Is your data even ready for AI?
If you’re not sure, you’re not alone. This is a condensed version of the slides I presented at a Linkedin webinar for Tecnovy on 28.04.2025.
Dev Dives: Automate and orchestrate your processes with UiPath MaestroUiPathCommunity
This session is designed to equip developers with the skills needed to build mission-critical, end-to-end processes that seamlessly orchestrate agents, people, and robots.
📕 Here's what you can expect:
- Modeling: Build end-to-end processes using BPMN.
- Implementing: Integrate agentic tasks, RPA, APIs, and advanced decisioning into processes.
- Operating: Control process instances with rewind, replay, pause, and stop functions.
- Monitoring: Use dashboards and embedded analytics for real-time insights into process instances.
This webinar is a must-attend for developers looking to enhance their agentic automation skills and orchestrate robust, mission-critical processes.
👨🏫 Speaker:
Andrei Vintila, Principal Product Manager @UiPath
This session streamed live on April 29, 2025, 16:00 CET.
Check out all our upcoming Dev Dives sessions at https://community.uipath.com/dev-dives-automation-developer-2025/.
DevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptxJustin Reock
Building 10x Organizations with Modern Productivity Metrics
10x developers may be a myth, but 10x organizations are very real, as proven by the influential study performed in the 1980s, ‘The Coding War Games.’
Right now, here in early 2025, we seem to be experiencing YAPP (Yet Another Productivity Philosophy), and that philosophy is converging on developer experience. It seems that with every new method we invent for the delivery of products, whether physical or virtual, we reinvent productivity philosophies to go alongside them.
But which of these approaches actually work? DORA? SPACE? DevEx? What should we invest in and create urgency behind today, so that we don’t find ourselves having the same discussion again in a decade?
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...Aqusag Technologies
In late April 2025, a significant portion of Europe, particularly Spain, Portugal, and parts of southern France, experienced widespread, rolling power outages that continue to affect millions of residents, businesses, and infrastructure systems.
AI and Data Privacy in 2025: Global TrendsInData Labs
In this infographic, we explore how businesses can implement effective governance frameworks to address AI data privacy. Understanding it is crucial for developing effective strategies that ensure compliance, safeguard customer trust, and leverage AI responsibly. Equip yourself with insights that can drive informed decision-making and position your organization for success in the future of data privacy.
This infographic contains:
-AI and data privacy: Key findings
-Statistics on AI data privacy in the today’s world
-Tips on how to overcome data privacy challenges
-Benefits of AI data security investments.
Keep up-to-date on how AI is reshaping privacy standards and what this entails for both individuals and organizations.
TrustArc Webinar: Consumer Expectations vs Corporate Realities on Data Broker...TrustArc
Most consumers believe they’re making informed decisions about their personal data—adjusting privacy settings, blocking trackers, and opting out where they can. However, our new research reveals that while awareness is high, taking meaningful action is still lacking. On the corporate side, many organizations report strong policies for managing third-party data and consumer consent yet fall short when it comes to consistency, accountability and transparency.
This session will explore the research findings from TrustArc’s Privacy Pulse Survey, examining consumer attitudes toward personal data collection and practical suggestions for corporate practices around purchasing third-party data.
Attendees will learn:
- Consumer awareness around data brokers and what consumers are doing to limit data collection
- How businesses assess third-party vendors and their consent management operations
- Where business preparedness needs improvement
- What these trends mean for the future of privacy governance and public trust
This discussion is essential for privacy, risk, and compliance professionals who want to ground their strategies in current data and prepare for what’s next in the privacy landscape.
Procurement Insights Cost To Value Guide.pptxJon Hansen
Procurement Insights integrated Historic Procurement Industry Archives, serves as a powerful complement — not a competitor — to other procurement industry firms. It fills critical gaps in depth, agility, and contextual insight that most traditional analyst and association models overlook.
Learn more about this value- driven proprietary service offering here.
Enhancing ICU Intelligence: How Our Functional Testing Enabled a Healthcare I...Impelsys Inc.
Impelsys provided a robust testing solution, leveraging a risk-based and requirement-mapped approach to validate ICU Connect and CritiXpert. A well-defined test suite was developed to assess data communication, clinical data collection, transformation, and visualization across integrated devices.
2. WHO AM I
Christoph Engelbert (@noctarius2k)
8+ years of professional Java development
5+ years of backend development
Specialized in performance, GC, and traffic topics
Worked for international companies such as Ubisoft and HRS
Official Hazelcast Hacker since November 2013
Apache DirectMemory / Lightning committer and PMC member
Developer of CastMapR - MapReduce on Hazelcast 3
5. USE CASES
Scale your application
Distribute and share data
Partition your data
Distribute messages
Process in parallel on multiple machines
Load balancing
8. FEATURES
Java Collection API
Map, Queue, Set, List
MultiMap
Topic (PubSub)
Java Concurrency API
Lock, Semaphore, CountDownLatch, ExecutorService
Transactions
Custom Serialization
Off-Heap support
Native clients: C#, C++, Java, REST, memcached
9. EASY API

// Creating a new Hazelcast node
HazelcastInstance hz = Hazelcast.newHazelcastInstance();

// Getting a Map, List, Topic, ...
Map map = hz.getMap("mapName");
List list = hz.getList("listName");
ITopic topic = hz.getTopic("topicName");

// Shutting down the node
hz.shutdown();
11. DATA PARTITIONING (1/2)
Multiple partitions per node
Consistent Hashing: hash(key) % partitioncount
Option to control partitioning: "key@partitionkey"
Possibility to find the key owner for every key (see the sketch below)
Support for Near-Caching and executions on key owner
Automatic Fault-Tolerance
Synchronous and Asynchronous backups
Define sync / async backup counts
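A minimal sketch of both ideas, assuming the Hazelcast 3 Config and PartitionService APIs; the map name "users" and the key "Peter" are illustrative only:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Member;
import com.hazelcast.core.Partition;

// One synchronous and one asynchronous backup for the "users" map
Config config = new Config();
config.getMapConfig("users")
      .setBackupCount(1)
      .setAsyncBackupCount(1);

HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

// Every key hashes to a partition; the partition knows its owning member
Partition partition = hz.getPartitionService().getPartition("Peter");
Member owner = partition.getOwner();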
12. DATA PARTITIONING (2/2)
With 4 cluster nodes, every node holds 1/4 of the actual data and 1/4 of the backups
14. HAZELCAST IN NUMBERS
Default partition amount 271
Any partition count possible (see the sketch below)
Biggest cluster 100+ members
Handles 100k+/sec messages using a topic
Max datasize depends on RAM
Off-Heap for low GC overhead
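A minimal sketch of changing the partition count, assuming the standard Hazelcast 3 hazelcast.partition.count property:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

Config config = new Config();
// Default is 271; all members of a cluster must use the same value
config.setProperty("hazelcast.partition.count", "1999");
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);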
15. COMMUNITY VS. ENTERPRISE

Feature                        | Community | Enterprise
Java Collection API            |     X     |     X
Java Concurrency API           |     X     |     X
SSL Socket                     |           |     X
Elastic Memory (Off-Heap)      |           |     X
JAAS Security / Authentication |           |     X
Management Center              |     X     |     X
17. EASY TO UNITTEST

public class SomeTestCase {

    private HazelcastInstance[] instances;

    @Before
    public void before() throws Exception {
        // Multiple instances on the same JVM
        instances = new HazelcastInstance[2];
        instances[0] = Hazelcast.newHazelcastInstance();
        instances[1] = Hazelcast.newHazelcastInstance();
    }

    @After
    public void after() throws Exception {
        Hazelcast.shutdownAll();
    }
}
18. SERIALIZATION

// java.io.Serializable
public class User implements Serializable { }

// or java.io.Externalizable
public class User implements Externalizable { }

// or (com.hazelcast.nio.serialization.) DataSerializable
public class User implements DataSerializable { }

// or new in Hazelcast 3 (multi version support): Portable
public class User implements Portable { }
19. MAP

// interface com.hazelcast.core.IMap<K, V>
// extends java.util.Map, java.util.ConcurrentMap

HazelcastInstance hz = getHazelcastInstance();

IMap<String, User> hzMap = hz.getMap("users");
hzMap.put("Peter", new User("Peter", "Veentjer"));

Map<String, User> map = hz.getMap("users");
map.put("Peter", new User("Peter", "Veentjer"));

ConcurrentMap<String, User> concurrentMap = hz.getMap("users");
concurrentMap.putIfAbsent("Peter", new User("Peter", "Veentjer"));

User peter = map.get("Peter");
23. LOCK (2/3)

HazelcastInstance hz = getHazelcastInstance();

// Distributed reentrant lock
Lock lock = hz.getLock("myLock");

lock.lock();
try {
    // Do something
} finally {
    lock.unlock();
}
24. LOCK (3/3)

HazelcastInstance hz = getHazelcastInstance();

// Map (row-)locks
IMap<String, User> map = hz.getMap("users");

map.lock("Peter");
try {
    // Do something with Peter
} finally {
    map.unlock("Peter");
}
25. TOPIC / PUBSUB

public class Example implements MessageListener<String> {

    public void sendMessage() {
        HazelcastInstance hz = getHazelcastInstance();
        ITopic<String> topic = hz.getTopic("topic");
        topic.addMessageListener(this);
        topic.publish("Hello World");
    }

    @Override
    public void onMessage(Message<String> message) {
        System.out.println("Got message: " + message.getMessageObject());
    }
}
28. ADVANCED TECHNIQUES
Indexing keys, values and value properties
Distributed SQL-like query
Write-Behind / Write-Through persistence
Read-Through (if key not loaded use MapLoader)
Transactions
EntryListeners / EntryProcessors (see the sketch below)
Automatic eviction
Control partitioning (Version 3.1)
and many more ...
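A minimal EntryProcessor sketch, assuming the Hazelcast 3 AbstractEntryProcessor base class; the User.setActive(boolean) setter is hypothetical:

import java.util.Map;
import com.hazelcast.map.AbstractEntryProcessor;

public class DeactivateUserProcessor
        extends AbstractEntryProcessor<String, User> {

    @Override
    public Object process(Map.Entry<String, User> entry) {
        // Executes on the member owning the key, so the value never
        // travels over the network
        User user = entry.getValue();
        user.setActive(false); // hypothetical setter
        entry.setValue(user);  // write the mutated value back
        return null;
    }
}

// Usage, on an IMap<String, User>:
// map.executeOnKey("Peter", new DeactivateUserProcessor());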
31. DISTRIBUTED SQL-LIKE QUERIES

IMap<String, User> map = Hazelcast.getMap("users");

Predicate predicate = new SqlPredicate("active AND age <= 30");

Set<User> users = map.values(predicate);
Set<Entry<String, User>> entries = map.entrySet(predicate);
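Such predicates get considerably faster when the queried property is indexed; a minimal sketch, assuming the Hazelcast 3 IMap.addIndex API and an age property on User:

// Ordered index on "age" speeds up range predicates like "age <= 30"
map.addIndex("age", true);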
32. MAPLOADER / MAPSTORE

public class MapStorage
        implements MapStore<String, User>, MapLoader<String, User> {

    // Some methods missing ...

    @Override public User load(String key) { return loadValueDB(key); }
    @Override public Set<String> loadAllKeys() { return loadKeysDB(); }
    @Override public void delete(String key) { deleteDB(key); }
    @Override public void store(String key, User value) {
        storeToDatabase(key, value);
    }
}

<map name="users">
    <map-store enabled="true">
        <class-name>com.hazelcast.example.MapStorage</class-name>
        <write-delay-seconds>0</write-delay-seconds>
    </map-store>
</map>
33. TRANSACTION (1/2)

HazelcastInstance hz = getHazelcastInstance();

final Map map = hz.getMap("default");
final Queue queue = hz.getQueue("default");

hz.executeTransaction(new TransactionalTask<Void>() {
    @Override
    public Void execute(TransactionalTaskContext context) {
        Tweet tweet = (Tweet) queue.poll();
        processTweet(tweet);
        map.put(buildKey(tweet), tweet);
        return null;
    }
});
34. TRANSACTION (2/2)

HazelcastInstance hz = getHazelcastInstance();

TransactionContext context = hz.newTransactionContext();
context.beginTransaction();

TransactionalMap map = context.getMap("default");
TransactionalQueue queue = context.getQueue("default");

try {
    Tweet tweet = (Tweet) queue.poll();
    processTweet(tweet);
    map.put(buildKey(tweet), tweet);
    context.commitTransaction();
} catch (Exception e) {
    context.rollbackTransaction();
}
35. CONTROL PARTITIONING
Force location of corresponding data in the same partition by providing a special partition key:

HazelcastInstance hz = getHazelcastInstance();

Map users = hz.getMap("users");
users.put("Peter@Peter", new User("Peter", "Veentjer"));

Map friends = hz.getMap("friends");
friends.put("Peter-Chris@Peter", new User("Christoph", "Engelbert"));
friends.put("Peter-Fuad@Peter", new User("Fuad", "Malikov"));
37. SPI (NEW IN HAZELCAST 3)
Possibility to build your own distributed data structures
Hook into data structure events
Implement your own services (like RemoteInvocation, MapReduce) - see the sketch below
React to membership events
Control migrations for your own purposes
Handle split-brain events
and many more ...
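A minimal SPI sketch, assuming the Hazelcast 3 com.hazelcast.spi.ManagedService lifecycle interface; a real service would additionally be registered under <services> in hazelcast.xml:

import java.util.Properties;
import com.hazelcast.spi.ManagedService;
import com.hazelcast.spi.NodeEngine;

public class CounterService implements ManagedService {

    private NodeEngine nodeEngine;

    @Override
    public void init(NodeEngine nodeEngine, Properties properties) {
        // Invoked when the member starts up
        this.nodeEngine = nodeEngine;
    }

    @Override
    public void reset() {
        // Reset the service back to its initial state
    }

    @Override
    public void shutdown(boolean terminate) {
        // Invoked when the member shuts down
    }
}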