After more than 5 years of doing this, I think I managed to capture the essence of the beast quite neatly. Here's what matters about Redis, the open source in-memory data structure store, IMO.
Redis allows running Lua scripts via its embedded Lua engine. Lua scripts have full access to Redis data and commands. Scripts run atomically and block the server during execution. Redis caches compiled scripts to avoid recompilation. Scripts should be parameterized to avoid cache explosions. Lua provides powerful data types like tables and control structures that can be used to build complex logic in scripts.
Have you ever wondered what the relative differences are between two of the more popular open source, in-memory data stores and caches? In this session, we will describe those differences and, more importantly, provide live demonstrations of the key capabilities that could have a major impact on the architecture of your Java applications.
RESTLess Design with Apache Thrift: Experiences from Apache Airavata (smarru)
Apache Airavata is software for providing services to manage scientific applications on a wide range of remote computing resources. Airavata can be used by both individual scientists to run scientific workflows as well as communities of scientists through Web browser interfaces. It is a challenge to bring all of Airavata’s capabilities together in the single API layer that is our prerequisite for a 1.0 release. To support our diverse use cases, we have developed a rich data model and messaging format that we need to expose to client developers using many programming languages. We do not believe this is a good match for REST style services. In this presentation, we present our use and evaluation of Apache Thrift as an interface and data model definition tool, its use internally in Airavata, and its use to deliver and distribute client development kits.
Boosting Machine Learning with Redis Modules and Spark (Dvir Volk)
Redis modules allow for new capabilities like machine learning models to be added to Redis. The Redis-ML module stores machine learning models like random forests and supports operations like model training, evaluation, and prediction directly from Redis for low latency. Spark can be used to train models which are then saved as Redis modules, allowing models to be easily deployed and accessed from services and clients.
This document provides an overview of Postgres clustering solutions and distributed Postgres architectures. It discusses master-slave replication, Postgres-XC/XL, Greenplum, CitusDB, pg_shard, BDR, pg_logical, and challenges around distributed transactions, high availability, and multimaster replication. Key points include the tradeoffs of different approaches and an implementation of multimaster replication built on pg_logical and a timestamp-based distributed transaction manager (tsDTM) that provides partition tolerance and automatic failover.
HornetQ is the new name for JBoss Messaging 2. It is an open source, high performance, multi-protocol asynchronous messaging system designed for usability. Key features include high performance persistence using asynchronous IO, support for huge queues and messages, pluggable transports, high availability through replication and failover, clustering for load balancing, and core bridges and diverts for routing messages.
This document discusses Fluentd, an open source log collector. It provides a pluggable architecture that allows data to be collected, filtered, and forwarded to various outputs. Fluentd uses JSON format for log messages and MessagePack internally. It is reliable, scalable, and extensible through plugins. Common use cases include log aggregation, monitoring, and analytics across multiple servers and applications.
Type safe, versioned, and rewindable stream processing with Apache {Avro, K... (Hisham Mardam-Bey)
This document summarizes a talk on using Apache Kafka and Avro for stream processing at Mate1. It discusses:
1. How Mate1 was using message queues before to address latency and scalability issues, but wanted improvements in type safety, versioning, and being rewindable.
2. How they used Apache Avro for serialization to gain type safety when producing and consuming from Kafka topics.
3. How they developed a simple data format and integrated an Avro schema repository to allow for versioned schemas and backwards compatibility when schemas change.
4. How the integration allows for rewinding by mapping offsets to points in time, and rebuilding state after crashes by reprocessing data from a previous point in time.
Redis Cluster is an approach to distributing Redis across multiple nodes. Key-value pairs are partitioned across nodes using consistent hashing on the key's hash slot. Nodes specialize as masters or slaves of data partitions for redundancy. Clients can query any node, which will redirect requests as needed. Nodes continuously monitor each other to detect and address failures, maintaining availability as long as each partition has at least one responsive node. The redis-trib tool is used to setup, check, resize, and repair clusters as needed.
Postgres & Redis Sitting in a Tree- Rimas Silkaitis, HerokuRedis Labs
Postgres and Redis Sitting in a Tree | In today’s world of polyglot persistence, it’s likely that companies will be using multiple data stores for storing and working with data based on the use case. Typically a company will start with a relational database like Postgres and then add Redis for more high-velocity use cases. What if you could tie the two systems together to enable so much more?
The document discusses the new features introduced in Java versions 8 through 11. It provides 10 examples of features added in Java 9, including private methods in interfaces and collection factory methods. Another section outlines 6 features from Java 10 such as local variable type inference. Finally, Java 11 features like local-variable syntax for lambda parameters and the standardization of the HTTP client are presented along with various API improvements and removed features.
Python Streaming Pipelines on Flink - Beam Meetup at Lyft 2019 (Thomas Weise)
Apache Beam is a unified programming model for batch and streaming data processing that provides portability across distributed processing backends. It aims to support multiple languages like Java, Python and Go. The Beam Python SDK allows writing pipelines in Python that can run on distributed backends like Apache Flink. Lyft developed a Python SDK runner for Flink that translates Python pipelines to native Flink APIs using the Beam Fn API for communication between the SDK and runner. Future work includes improving performance of Python pipelines on JVM runners and supporting multiple languages in a single pipeline.
The document discusses the GlusterFS APIs and libgfapi basics. It describes how libgfapi allows manually creating a context, loading a volume file, and making individual calls like glfs_open and glfs_write. It also provides a Python example of using libgfapi to create a file. The document outlines the basics of the GlusterFS translator including adding functionality from storage bricks to the user, and the translator environment of stacking requests and unwinding responses.
HBaseCon2017 gohbase: Pure Go HBase Client (HBaseCon)
gohbase is an implementation of an HBase client in pure Go: https://github.com/tsuna/gohbase. In this presentation we'll talk about its architecture and compare its performance against the native Java HBase client as well as AsyncHBase (http://opentsdb.github.io/asynchbase/) and some nice characteristics of golang that resulted in a simpler implementation.
Streaming your Lyft Ride Prices - Flink Forward SF 2019 (Thomas Weise)
At Lyft we dynamically price our rides with a combination of various data sources, machine learning models, and streaming infrastructure for low latency, reliability and scalability. Dynamic pricing allows us to quickly adapt to real world changes and be fair to drivers (by say raising rates when there's a lot of demand) and fair to passengers (by let’s say offering to return 10 mins later for a cheaper rate). The streaming platform powers pricing by bringing together the best of two worlds using Apache Beam; ML algorithms in Python and Apache Flink as the streaming engine.
https://sf-2019.flink-forward.org/conference-program#streaming-your-lyft-ride-prices
The document provides an overview of Arnaud Bouchez and his work on mORMot and SynPDF. It discusses mORMot version 1.18 and its features like being an ORM, supporting SOA, MVC, and REST. It then summarizes the results of a survey conducted on refactoring mORMot, including separating it into smaller units, using semantic versioning, dropping old compiler support, and moving to GitHub. It previews the structure and goals of the new mORMot 2 library.
This document discusses distributed Postgres including multi-master replication, distributed transactions, and high availability/auto failover. It explores existing implementations like Postgres-XC and proposes a transaction manager API and time-stamp based approach to enable distributed transactions without a central bottleneck. The document also outlines a multimaster implementation built on logical replication, a transaction replay pool, and Raft-based storage for failure handling and distributed deadlocks. Performance is approximately half of standalone Postgres with the same read speeds and capabilities for node recovery and network partition handling.
This document summarizes the challenges and solutions for maintaining large PostgreSQL databases at Emma, including:
- Maintaining terabytes of data across multiple clusters up to version 9.0
- Facing performance issues when the hardware load was pushed to its limits
- Dealing with huge catalogs containing millions of data points that caused slow performance
- Addressing problems like bloat, backups that took hours, system resource exhaustion, and transaction wraparound issues
- Implementing solutions such as scripts to clean up bloat, sharding to a Linux filesystem, and increasing autovacuum thresholds
Kafka Summit SF 2017 - Shopify Flash Sales with Apache Kafka (confluent)
This document discusses how Shopify uses Apache Kafka in their systems architecture. It describes how Kafka provides reliable asynchronous messaging that allows Shopify to collect logs and events and feed them to their data lake. It outlines how Kafka provides operational decoupling and allows them to deploy application or Kafka changes independently. It then discusses two specific use cases: using Kafka as part of a logs/events pipeline and using it to enable active-active Elasticsearch replication across multiple data centers.
Ceph at Work in Bloomberg: Object Store, RBD and OpenStack (Red_Hat_Storage)
Bloomberg's Chris Jones and Chris Morgan joined Red Hat Storage Day New York on 1/19/16 to explain how Red Hat Ceph Storage helps the financial giant tackle its data storage challenges.
HBaseCon 2015: OpenTSDB and AsyncHBase Update (HBaseCon)
OpenTSDB is an open source distributed time series database for storing large amounts of metrics data and performing fast queries. It is scalable and can store trillions of data points across multiple servers. Data is stored in HBase and queries are performed using a simple query language. OpenTSDB 2.x includes new features like salting to distribute writes across servers, compact storage formats using column appends, and downsampling to fill in missing data during aggregation.
This document provides an overview and introduction to Cassandra, an open source distributed database management system designed to handle large amounts of data across many commodity servers. It discusses Cassandra's origins from influential papers on Bigtable and Dynamo, its properties including flexibility, scalability and high availability. The document also covers Cassandra's data model using keyspaces and column families, its consistency options, API including Thrift and language drivers, and provides examples of usage for an address book app and storing timeseries data.
Fluentd Project Intro at Kubecon 2019 EU (N Masahiro)
Fluentd is a streaming data collector that can unify logging and metrics collection. It collects data from sources using input plugins, processes and filters the data, and outputs it to destinations using output plugins. It is commonly used for container logging, collecting logs from files or Docker and adding metadata before outputting to Elasticsearch or other targets. Fluentbit is a lightweight version of Fluentd that is better suited for edge collection and forwarding logs to a Fluentd instance for aggregation.
Developing a Redis Module - Hackathon Kickoff (Itamar Haber)
Slides deck for kicking off Redis Labs' Modules Hackathon - https://www.hackerearth.com/sprints/redislabs-hackathon-global
Video of the webinar is at: https://youtu.be/LPxx4QPyUPw
Starting with v4, modules hold a promise for changing how Redis is used and developed for. Enabling custom data types and commands, Redis Modules build upon and extend the core functionality to handle any use case.
The video of the webinar given with these slides is at: https://youtu.be/EglSYFodaqw
The document summarizes Redis Modules and provides an overview of how to build and use them. Some key points:
- Redis Modules allow developers to extend Redis with new commands by dynamically loading libraries written in C/C++. This provides new functionality and integrates with existing Redis data types and commands.
- The Modules API provides both high-level and low-level interfaces. The high-level API is similar to Lua but slower, while the low-level API exposes specific Redis commands and data types for better performance.
- Building a simple module involves writing a command handler, validating arguments, making Redis calls, and returning a reply. Modules are initialized via an OnLoad function, and new commands can then be registered with RedisModule_CreateCommand().
Rack provides a simple interface for building web applications in Ruby. This document outlines how to build a basic web framework on top of Rack by leveraging existing Rack middleware and tools. It demonstrates how to add features like routing, controllers, views, ORM, authentication, testing, and a console using middleware like Usher, Tilt, DataMapper, Warden, rack-test, and racksh. The goal is to create a simple but full-featured framework with minimal code by combining existing Rack components.
This document provides an overview of Drupal architecture, including:
- The typical technology stack of OS, web server, PHP, database, and Drupal software.
- How requests are routed through Drupal's bootstrap process and menu system before being returned as HTML.
- Common patterns in Drupal like hooks, structured data arrays, and modules altering output.
- Key concepts like entities, bundles, and fields that make up content types.
- Questions to consider when planning a Drupal site like available functionality and theming.
ESIL - Universal IL (Intermediate Language) for Radare2 (Anton Kochkov)
This document discusses ESIL, an intermediate language used in the reverse engineering tool radare2. It begins by providing context on intermediate languages generally and compares ESIL to other existing intermediate languages. Some key points:
- ESIL stands for Evaluable Strings Intermediate Language and is based on reverse polish notation for speed. It is designed for evaluation and emulation.
- ESIL aims to support a wide range of architectures with infinite memory and registers. It allows external function calls and custom operations.
- Radare2 uses ESIL for tasks like analysis, decompilation, and emulation. The radeco decompiler lifts ESIL to its own intermediate language for decomposition.
- Future work may include
This document introduces Ruby on Rails (RoR) as a web development framework. It discusses key RoR concepts like MVC architecture, Active Record for object-relational mapping, and migrations for managing database changes. It provides resources for learning RoR, including downloading Ruby and Rails, using version control systems like Subversion, and recommended books and articles. The document emphasizes that RoR aims to increase productivity through conventions over configuration and generating code through scaffolds and templates.
This presentation was delivered on 11th May, 2014 at Drupal Camp Pakistan, held in DatumSquare IT Services Islamabad. It contains some basic material for designers, themers and coders.
The document discusses issues with over-engineering and complexity in typical Java web applications and recommends focusing on simplicity, avoiding unnecessary abstractions and frameworks, following principles like the Single Responsibility Principle and YAGNI, and using proven open-source tools instead of too many Java standards and technologies. It also provides suggestions for code style, proper application structure, web UI development, and testing to develop applications in a simpler and more productive way.
A broad introduction to Java.
What is Java and where is it used
Programming Languages in the web development
OOP PRINCIPLES
JAVA SE, JRE, JDK
IDE’s
Where Java used in the “Real World”
Presentation given to NYC Tech Talks Meetup group on June 26 2012. More info here: http://www.meetup.com/NYC-Tech-Talks/events/69478562/
PVS-Studio and static code analysis technique (Andrey Karpov)
What is «static code analysis»? It is a technique that, together with unit tests, dynamic code analysis, code review and others, increases code quality and reliability and decreases development time.
Ruby is designed to make programmers happy by providing simplicity, openness, and an object-oriented yet dynamic programming experience. It aims to focus on humans rather than machines. Ruby promotes productivity through conventions that speed development and testing. Programmers enjoy coding in Ruby due to its immediate feedback and morale boost. Ruby has broad utility across web, text, and GUI applications and is platform agnostic, running on most operating systems.
This document provides an overview of OpenERP/Odoo, an open source suite of business applications. It describes OpenERP's modular, Python-based architecture and Rapid Application Development (RAD) framework. The document then discusses how to build custom modules in OpenERP, including the structure of modules and how to define business objects and fields using the integrated Object-Relational Mapping (ORM) service.
This document contains notes from a PHP extensions workshop. It introduces Julien Pauli, the workshop presenter, and outlines what attendees should bring and know, such as C skills and a Linux environment. The document then covers various topics around PHP extensions, including compiling PHP with debugging, creating an extension skeleton, extension APIs and versions, memory management using Zend Memory Manager, and working with zvals (PHP variables). Attendees will learn how to create, build, and load their first PHP extension.
This document provides an overview and introduction to PHP extensions. It discusses compiling PHP with debugging enabled, creating a basic extension skeleton, configuring and installing extensions, and activating extensions. It also covers extension lifetime, PHP memory management using the Zend Memory Manager, PHP variables called zvals which are containers for data, and zval types. The document is intended to provide attendees with the necessary background knowledge to participate in a workshop about PHP extensions.
How I Implemented the #1 Requested Feature In Redis In Less than 1 Hour with ... (Itamar Haber)
The document provides instructions for building a Redis module that implements a ZPOP command to pop and remove the first element of a sorted set. It includes code for the ZPOP command implementation, registering the module and command, and compiling the module into a shared object library that can be loaded by Redis. The document also suggests some questions to consider before taking on developing a new module, such as whether the capability already exists or an open source module could be reused.
An introduction and status update on Redis' upcoming new data structure - Stream - that is not unlike a log, has some Apache Kafka-like thingamagigs and can be also used for time series data
Leveraging Probabilistic Data Structures for Real Time Analytics with Redis M... (Itamar Haber)
Leveraging Probabilistic Data Structures for Real Time Analytics with Redis Modules
Redis is an in-memory database that can be used for caching, messaging, and more. This document discusses how Redis modules can implement probabilistic data structures like HyperLogLog, Bloom filters, Count-Min sketch to enable analytics on streaming data. These data structures allow estimating metrics like cardinality and frequencies with sublinear space and constant query time, at the cost of some accuracy. The speaker demonstrates how modules for these probabilistic structures can extend Redis' versatility for real-time analytics use cases.
Power to the People: Redis Lua Scripts (Itamar Haber)
Redis is the Sun.
Earth is your application.
Imagine that the Moon is stuck in the middle of the Sun.
You send non-melting rockets (scripts) with robots
(commands) and cargo (data) back and forth…
The slides we used at the first meetup hosted at Redis Labs' TLV offices :)
Touches on some of the more notable user-facing functionality in the newest Redis version, as well as interesting internal optimizations with major gains.
#RedisTLV: www.meetup.com/Tel-Aviv-Redis-Meetup/events/227594422/
A list of all URLs in the deck is at: https://gist.github.com/itamarhaber/87e8c8c7126fbfb3f722
A lightning talk filled to the brim with knowledge and tips about Redis, data structures, performance, and RAM, with ways to take Redis to the max
Recording: https://www.youtube.com/watch?v=qHkXVY2LpwU
External links: https://gist.github.com/itamarhaber/dddc3d4d9c19317b1477
Applications today are required to process massive amounts of data and return responses in real time. Simply storing Big Data is no longer enough; insights must be gleaned and decisions made as soon as data rushes in. In-memory databases like Redis provide the blazing fast speeds required for sub-second application response times. Using a combination of in-memory Redis and disk-based MongoDB can significantly reduce the “digestive” challenge associated with processing high velocity data.
Redis & MongoDB: Stop Big Data Indigestion Before It StartsItamar Haber
Efficiently digesting data in large volumes can prove to be challenging for any database. The challenges are compounded when this influx must be analyzed on the fly, or "tasted", to satisfy the sophisticated palates of modern apps. Luckily, there are several proven remedies you can concoct with Redis to help with potential indigestion.
The URLs from the presentation are also available at: https://gist.github.com/itamarhaber/325e515c1715a12ef132
An overview and discussion on indexing data in Redis to facilitate fast and efficient data retrieval. Presented on September 22nd, 2014 to the Redis Tel Aviv Meetup.
Redis Use Patterns - DevconTLV June 2014 (Itamar Haber)
An introduction to Redis for the SQL practitioner, covering data types and common use cases.
The video of this session can be found at: https://www.youtube.com/watch?v=8Unaug_vmFI
DevOpsDays SLC - Platform Engineers are Product Managers.pptx (Justin Reock)
Platform Engineers are Product Managers: 10x Your Developer Experience
Discover how adopting this mindset can transform your platform engineering efforts into a high-impact, developer-centric initiative that empowers your teams and drives organizational success.
Platform engineering has emerged as a critical function that serves as the backbone for engineering teams, providing the tools and capabilities necessary to accelerate delivery. But to truly maximize their impact, platform engineers should embrace a product management mindset. When thinking like product managers, platform engineers better understand their internal customers' needs, prioritize features, and deliver a seamless developer experience that can 10x an engineering team’s productivity.
In this session, Justin Reock, Deputy CTO at DX (getdx.com), will demonstrate that platform engineers are, in fact, product managers for their internal developer customers. By treating the platform as an internally delivered product, and holding it to the same standard and rollout as any product, teams significantly accelerate the successful adoption of developer experience and platform engineering initiatives.
Hybridize Functions: A Tool for Automatically Refactoring Imperative Deep Lea... (Raffi Khatchadourian)
Efficiency is essential to support responsiveness w.r.t. ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code—supporting symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, imperative DL frameworks encouraging eager execution have emerged but at the expense of run-time performance. Though hybrid approaches aim for the “best of both worlds,” using them effectively requires subtle considerations to make code amenable to safe, accurate, and efficient graph execution—avoiding performance bottlenecks and semantically inequivalent results. We discuss the engineering aspects of a refactoring tool that automatically determines when it is safe and potentially advantageous to migrate imperative DL code to graph execution and vice-versa.
Config 2025 presentation recap covering both days (TrishAntoni1)
Config 2025: What Made Config 2025 Special
Overflowing energy and creativity
Clear themes: accessibility, emotion, AI collaboration
A mix of tech innovation and raw human storytelling
(Background: a photo of the conference crowd or stage)
Original presentation of Delhi Community Meetup with the following topics
▶️ Session 1: Introduction to UiPath Agents
- What are Agents in UiPath?
- Components of Agents
- Overview of the UiPath Agent Builder.
- Common use cases for Agentic automation.
▶️ Session 2: Building Your First UiPath Agent
- A quick walkthrough of Agent Builder, Agentic Orchestration, AI Trust Layer, Context Grounding
- Step-by-step demonstration of building your first Agent
▶️ Session 3: Healing Agents - Deep dive
- What are Healing Agents?
- How Healing Agents can improve automation stability by automatically detecting and fixing runtime issues
- How Healing Agents help reduce downtime, prevent failures, and ensure continuous execution of workflows
Everything You Need to Know About Agentforce? (Put AI Agents to Work) (Cyntexa)
At Dreamforce this year, Agentforce stole the spotlight—over 10,000 AI agents were spun up in just three days. But what exactly is Agentforce, and how can your business harness its power? In this on‑demand webinar, Shrey and Vishwajeet Srivastava pull back the curtain on Salesforce’s newest AI agent platform, showing you step‑by‑step how to design, deploy, and manage intelligent agents that automate complex workflows across sales, service, HR, and more.
Gone are the days of one‑size‑fits‑all chatbots. Agentforce gives you a no‑code Agent Builder, a robust Atlas reasoning engine, and an enterprise‑grade trust layer—so you can create AI assistants customized to your unique processes in minutes, not months. Whether you need an agent to triage support tickets, generate quotes, or orchestrate multi‑step approvals, this session arms you with the best practices and insider tips to get started fast.
What You’ll Learn
Agentforce Fundamentals
Agent Builder: Drag‑and‑drop canvas for designing agent conversations and actions.
Atlas Reasoning: How the AI brain ingests data, makes decisions, and calls external systems.
Trust Layer: Security, compliance, and audit trails built into every agent.
Agentforce vs. Copilot
Understand the differences: Copilot as an assistant embedded in apps; Agentforce as fully autonomous, customizable agents.
When to choose Agentforce for end‑to‑end process automation.
Industry Use Cases
Sales Ops: Auto‑generate proposals, update CRM records, and notify reps in real time.
Customer Service: Intelligent ticket routing, SLA monitoring, and automated resolution suggestions.
HR & IT: Employee onboarding bots, policy lookup agents, and automated ticket escalations.
Key Features & Capabilities
Pre‑built templates vs. custom agent workflows
Multi‑modal inputs: text, voice, and structured forms
Analytics dashboard for monitoring agent performance and ROI
Myth‑Busting
“AI agents require coding expertise”—debunked with live no‑code demos.
“Security risks are too high”—see how the Trust Layer enforces data governance.
Live Demo
Watch Shrey and Vishwajeet build an Agentforce bot that handles low‑stock alerts: it monitors inventory, creates purchase orders, and notifies procurement—all inside Salesforce.
Peek at upcoming Agentforce features and roadmap highlights.
Missed the live event? Stream the recording now or download the deck to access hands‑on tutorials, configuration checklists, and deployment templates.
🔗 Watch & Download: https://www.youtube.com/live/0HiEmUKT0wY
Webinar - Top 5 Backup Mistakes MSPs and Businesses Make.pptx (MSP360)
Data loss can be devastating — especially when you discover it while trying to recover. All too often, it happens due to mistakes in your backup strategy. Whether you work for an MSP or within an organization, your company is susceptible to common backup mistakes that leave data vulnerable, productivity in question, and compliance at risk.
Join 4-time Microsoft MVP Nick Cavalancia as he breaks down the top five backup mistakes businesses and MSPs make—and, more importantly, explains how to prevent them.
On-Device or Remote? On the Energy Efficiency of Fetching LLM-Generated Conte... (Ivano Malavolta)
Slides of the presentation by Vincenzo Stoico at the main track of the 4th International Conference on AI Engineering (CAIN 2025).
The paper is available here: http://www.ivanomalavolta.com/files/papers/CAIN_2025.pdf
Viam product demo_ Deploying and scaling AI with hardware.pdf (camilalamoratta)
Building AI-powered products that interact with the physical world often means navigating complex integration challenges, especially on resource-constrained devices.
You'll learn:
- How Viam's platform bridges the gap between AI, data, and physical devices
- A step-by-step walkthrough of computer vision running at the edge
- Practical approaches to common integration hurdles
- How teams are scaling hardware + software solutions together
Whether you're a developer, engineering manager, or product builder, this demo will show you a faster path to creating intelligent machines and systems.
Resources:
- Documentation: https://on.viam.com/docs
- Community: https://discord.com/invite/viam
- Hands-on: https://on.viam.com/codelabs
- Future Events: https://on.viam.com/updates-upcoming-events
- Request personalized demo: https://on.viam.com/request-demo
Challenges in Migrating Imperative Deep Learning Programs to Graph Execution:... (Raffi Khatchadourian)
Efficiency is essential to support responsiveness w.r.t. ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code that supports symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development tends to produce DL code that is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, less error-prone imperative DL frameworks encouraging eager execution have emerged at the expense of run-time performance. While hybrid approaches aim for the "best of both worlds," the challenges in applying them in the real world are largely unknown. We conduct a data-driven analysis of challenges---and resultant bugs---involved in writing reliable yet performant imperative DL code by studying 250 open-source projects, consisting of 19.7 MLOC, along with 470 and 446 manually examined code patches and bug reports, respectively. The results indicate that hybridization: (i) is prone to API misuse, (ii) can result in performance degradation---the opposite of its intention, and (iii) has limited application due to execution mode incompatibility. We put forth several recommendations, best practices, and anti-patterns for effectively hybridizing imperative DL code, potentially benefiting DL practitioners, API designers, tool developers, and educators.
UiPath Automation Suite – Use case from an international NGO based in Geneva (UiPathCommunity)
We invite you to a new session of the UiPath community in French-speaking Switzerland.
This session will be devoted to an experience report from a non-governmental organization based in Geneva. The team in charge of the UiPath platform for this NGO will present the variety of automations implemented over the years: from managing donations to supporting teams in the field.
Beyond the use cases, this session will also be an opportunity to discover how this organization deployed UiPath Automation Suite and Document Understanding.
This session was broadcast live on May 7, 2025 at 13:00 (CET).
Discover all past and upcoming UiPath community sessions at: https://community.uipath.com/geneva/.
AI 3-in-1: Agents, RAG, and Local Models - Brent Laster (All Things Open)
Presented at All Things Open RTP Meetup
Presented by Brent Laster - President & Lead Trainer, Tech Skills Transformations LLC
Talk Title: AI 3-in-1: Agents, RAG, and Local Models
Abstract:
Learning and understanding AI concepts is satisfying and rewarding, but the fun part is learning how to work with AI yourself. In this presentation, author, trainer, and experienced technologist Brent Laster will help you do both! We’ll explain why and how to run AI models locally, the basic ideas of agents and RAG, and show how to assemble a simple AI agent in Python that leverages RAG and uses a local model through Ollama.
No experience is needed on these technologies, although we do assume you do have a basic understanding of LLMs.
This will be a fast-paced, engaging mixture of presentations interspersed with code explanations and demos building up to the finished product – something you’ll be able to replicate yourself after the session!
Artificial Intelligence is providing benefits in many areas of work within the heritage sector, from image analysis, to ideas generation, and new research tools. However, it is more critical than ever for people, with analogue intelligence, to ensure the integrity and ethical use of AI. Including real people can improve the use of AI by identifying potential biases, cross-checking results, refining workflows, and providing contextual relevance to AI-driven results.
News about the impact of AI often paints a rosy picture. In practice, there are many potential pitfalls. This presentation discusses these issues and looks at the role of analogue intelligence and analogue interfaces in providing the best results to our audiences. How do we deal with factually incorrect results? How do we get content generated that better reflects the diversity of our communities? What roles are there for physical, in-person experiences in the digital world?
Autonomous Resource Optimization: How AI is Solving the Overprovisioning Problem
In this session, Suresh Mathew will explore how autonomous AI is revolutionizing cloud resource management for DevOps, SRE, and Platform Engineering teams.
Traditional cloud infrastructure typically suffers from significant overprovisioning—a "better safe than sorry" approach that leads to wasted resources and inflated costs. This presentation will demonstrate how AI-powered autonomous systems are eliminating this problem through continuous, real-time optimization.
Key topics include:
Why manual and rule-based optimization approaches fall short in dynamic cloud environments
How machine learning predicts workload patterns to right-size resources before they're needed
Real-world implementation strategies that don't compromise reliability or performance
Featured case study: Learn how Palo Alto Networks implemented autonomous resource optimization to save $3.5M in cloud costs while maintaining strict performance SLAs across their global security infrastructure.
Bio:
Suresh Mathew is the CEO and Founder of Sedai, an autonomous cloud management platform. Previously, as Sr. MTS Architect at PayPal, he built an AI/ML platform that autonomously resolved performance and availability issues—executing over 2 million remediations annually and becoming the only system trusted to operate independently during peak holiday traffic.
2. ● A way to extend Redis with native code/libraries
● For compiling dynamically loaded libraries
● Distributed as a C header file (redismodule.h)
● C++ and Rust are easily possible, others perhaps
● ABI backwards compatible
● An isolated interface, decoupled from internals
● Made of a high-level API and the low-level APIs
● Available as of v4
The Redis Module API in a nutshell is
3. Who We Are
Open source. The leading in-memory database platform,
supporting any high performance operational, analytics or
hybrid use case.
The open source home and commercial provider of Redis Enterprise (Redisᵉ) technology, platform, products & services.
hello I am Itamar Haber @itamarhaber, Evangely Technicalist, formerly Chief Developer Advocate & Chief OSS Education Officer
4. The Decision Chart: when to develop a module
● Gave serious thought about what you need to solve? No → do that first
● Is there a core Redis capability that does it? Yes → Do that!
● Can you do it in the app? Yes → Do that!
● Is Lua good enough? Yes → Do that!
● Is there an existing GA module that already does it? Yes → Try that first. Hackers love helping each other 3:-)
● Is it a valid feature request to the core? Yes → Do that! (mebi → keep reading)
● Honestly factored the cost of taking on a new software project? Yes → Roll out your own
5. ctx is the call's context.
argv and argc are the arguments and their count.
A module is a C file with commands
#include "redismodule.h"

int MyCommand(RedisModuleCtx *ctx,
              RedisModuleString **argv, int argc) {
  // My code here
  // ...
  return REDISMODULE_OK;
}
RedisModule_WrongArity() yields the standard arity error when the check fails.
Implementing the ZPOP command
/**
 * ZPOP <key>
 */
int ZPop(RedisModuleCtx *ctx,
         RedisModuleString **argv, int argc) {
  if (argc != 2) {
    RedisModule_WrongArity(ctx);
    return REDISMODULE_OK;
  }
8. Off by default; turn it on by calling it before the return.
Automatically keeps track of things like opened keys and allocated high-level API Redis objects.
Frees everything after the function returns, so you don't have to.
Activate AutomajikMemory
RedisModule_AutoMemory(ctx);
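To make that concrete, here is a minimal sketch of my own (not from the deck) of a hypothetical GETLEN command that leans on automatic memory: note that the opened key is never explicitly closed.

#include "redismodule.h"

/* GETLEN <key> - replies with the length of the value at <key>.
 * Hypothetical example; relies on auto memory for all cleanup. */
int GetLen(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
  if (argc != 2) {
    RedisModule_WrongArity(ctx);
    return REDISMODULE_OK;
  }
  RedisModule_AutoMemory(ctx); /* opened keys are freed for us */

  /* Note: no RedisModule_CloseKey() anywhere below. */
  RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);
  size_t len = key ? RedisModule_ValueLength(key) : 0;
  RedisModule_ReplyWithLongLong(ctx, (long long)len);
  return REDISMODULE_OK;
}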
9. Performing a call to Redis via the high-level API
RedisModuleCallReply *rep = RedisModule_Call(ctx,
    "ZRANGE", "!sllc", argv[1], 0, 0, "WITHSCORES");
if (RedisModule_CallReplyType(rep) == REDISMODULE_REPLY_ERROR) {
  RedisModule_ReplyWithCallReply(ctx, rep);
  return REDISMODULE_OK;
}
10. • vis-à-vis Lua's redis.call()
• Variadic arguments via printf-style format specifiers
• "!..." means also replicate to the AOF and/or slaves
• RedisModule_Replicate() is exactly like RedisModule_Call(),
only that it replicates rather than actually calling the command
RedisModule_Call()
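For illustration, a sketch of the common format specifiers as I read them: "s" passes a RedisModuleString, "c" a NUL-terminated C string, "l" a long long, and a leading "!" turns on replication. The command sequence is contrived purely to show the specifiers.

#include "redismodule.h"

/* A hypothetical command demonstrating format specifiers. */
int FmtDemo(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
  if (argc != 2) return RedisModule_WrongArity(ctx);
  RedisModule_AutoMemory(ctx);

  /* s: RedisModuleString, c: NUL-terminated C string */
  RedisModule_Call(ctx, "SET", "sc", argv[1], "hello");
  /* l: long long */
  RedisModule_Call(ctx, "EXPIRE", "sl", argv[1], (long long)60);
  /* !: also replicate the command to the AOF and/or slaves */
  RedisModule_Call(ctx, "DEL", "!s", argv[1]);

  return RedisModule_ReplyWithSimpleString(ctx, "OK");
}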
11. Extract the element, call ZREM & reply
RedisModuleString *ele =
    RedisModule_CreateStringFromCallReply(
        RedisModule_CallReplyArrayElement(arr, 0));
RedisModule_Call(ctx, "ZREM", "ss", key, ele);
RedisModule_ReplyWithCallReply(ctx, rep);
13. // Registering the module and its commands
int RedisModule_OnLoad(RedisModuleCtx *ctx,
                       RedisModuleString **argv, int argc) {
  if (RedisModule_Init(ctx, "example", 1,
      REDISMODULE_APIVER_1) == REDISMODULE_ERR) {
    return REDISMODULE_ERR;
  }
  if (RedisModule_CreateCommand(ctx,
      "example.zpop", ZPop, "write", 1, 1, 1) ==
      REDISMODULE_ERR) {
    return REDISMODULE_ERR;
  }
  return REDISMODULE_OK;
}
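Putting slides 5 through 13 together, here is a consolidated sketch of what zpop.c might look like end to end. I have reconciled the fragments' variable names and added an empty-set check; the canonical code lives at github.com/itamarhaber/zpop.

#include "redismodule.h"

/* ZPOP <key> - removes and replies with the first (lowest-ranked)
 * element of the sorted set at <key>. */
int ZPop(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
  if (argc != 2) {
    RedisModule_WrongArity(ctx);
    return REDISMODULE_OK;
  }
  RedisModule_AutoMemory(ctx);

  /* Fetch the first element ('!' turns on replication). */
  RedisModuleCallReply *rep = RedisModule_Call(ctx, "ZRANGE", "!sllc",
      argv[1], (long long)0, (long long)0, "WITHSCORES");
  if (RedisModule_CallReplyType(rep) == REDISMODULE_REPLY_ERROR) {
    RedisModule_ReplyWithCallReply(ctx, rep);
    return REDISMODULE_OK;
  }

  /* An empty array reply means there is nothing to pop. */
  if (RedisModule_CallReplyLength(rep) == 0) {
    RedisModule_ReplyWithNull(ctx);
    return REDISMODULE_OK;
  }

  /* Extract the element, remove it with ZREM and reply with it. */
  RedisModuleString *ele = RedisModule_CreateStringFromCallReply(
      RedisModule_CallReplyArrayElement(rep, 0));
  RedisModule_Call(ctx, "ZREM", "!ss", argv[1], ele);
  RedisModule_ReplyWithString(ctx, ele);
  return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx,
                       RedisModuleString **argv, int argc) {
  if (RedisModule_Init(ctx, "example", 1, REDISMODULE_APIVER_1)
      == REDISMODULE_ERR) return REDISMODULE_ERR;
  if (RedisModule_CreateCommand(ctx, "example.zpop", ZPop,
      "write", 1, 1, 1) == REDISMODULE_ERR) return REDISMODULE_ERR;
  return REDISMODULE_OK;
}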
14. # Compile it on Linux:
$ gcc -fPIC -std=gnu99 -c -o zpop.o zpop.c
$ ld -o zpop.so zpop.o -shared -Bsymbolic -lc
# Compile it on OSX:
$ gcc -dynamic -fno-common -std=gnu99 -c -o zpop.o zpop.c
$ ld -o zpop.so zpop.o -bundle -undefined dynamic_lookup -lc
# Run it:
$ redis-server --loadmodule ./zpop.so
# Use it:
$ redis-cli
redis> ZADD z 0 a 1 b 3 c
(integer) 3
redis> EXAMPLE.ZPOP z
"a"
15. That's almost it.
There are several other lower-level functions to deal with
examining, extracting from and iterating over the
RedisModule_CallReply type.
It is easy to use the high-level API, and it can invoke (almost)
any Redis command.
And it is not that slow. Where did you hear that? :)
How high can you get?
16. The low-level APIs provide more capabilities, some of them less exciting.
For example, RedisModule_WrongArity() belongs to the low-level API. It is just a helper for replying with a standard error.
But since it has nothing to do with RedisModule_Call(),
it is considered "low-level".
The Low-level APIs
17. ● RedisModule_ReplyWith*
● Common keyspace, e.g. RedisModule_DeleteKey()
● Memory management, non-automatic
● RedisModuleString API
● Hash operations (RedisModule_HashGet(), HashSet())
● Sorted Set operations, including range iteration
● Replication control
● Module unload hook (experimental)
Some "groups" of the low-level APIs
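As a taste of these groups, here is a sketch of my own that combines the key-opening, hash and reply APIs; the "name" field is hypothetical and chosen just for illustration.

#include "redismodule.h"

/* Reads the (hypothetical) "name" field of a hash key and replies
 * with it, using the low-level key and reply APIs. */
int HGetName(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
  if (argc != 2) return RedisModule_WrongArity(ctx);
  RedisModule_AutoMemory(ctx);

  RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);
  if (key == NULL) return RedisModule_ReplyWithNull(ctx);
  if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_HASH) {
    return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);
  }

  RedisModuleString *val = NULL;
  /* REDISMODULE_HASH_CFIELDS: field names given as C strings;
   * the varargs are field/value-pointer pairs, NULL-terminated. */
  RedisModule_HashGet(key, REDISMODULE_HASH_CFIELDS, "name", &val, NULL);
  if (val == NULL) return RedisModule_ReplyWithNull(ctx);
  return RedisModule_ReplyWithString(ctx, val);
}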
18. ● String Direct Memory Access (DMA)
● Blocking commands (a-la BLPOP)
● Callback on keyspace notifications (triggers, WAT?!?)
● Threading API (put them extra 31 cores to use)
● Key locking API (ATM in experimental state)
● Cluster API (planned)
● @antirez' current goal - make the API robust enough for
implementing Disque as a module <- that's good enough
for me:)
The lower it gets, the cooler it becomes
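For a flavor of String DMA, here is a sketch of a hypothetical command that peeks at a string key's bytes in place instead of copying them out; this is my reading of the API, not code from the deck.

#include "redismodule.h"

/* Replies with 1 if the string key's first byte is 'x', else 0,
 * inspecting the value in place via DMA rather than copying it. */
int FirstIsX(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
  if (argc != 2) return RedisModule_WrongArity(ctx);
  RedisModule_AutoMemory(ctx);

  RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);
  if (key == NULL ||
      RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_STRING) {
    return RedisModule_ReplyWithNull(ctx);
  }

  size_t len;
  char *buf = RedisModule_StringDMA(key, &len, REDISMODULE_READ);
  RedisModule_ReplyWithLongLong(ctx, len > 0 && buf[0] == 'x');
  return REDISMODULE_OK;
}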
19. Do/don't use RedisModule_ReplicateVerbatim()
and/or RedisModule_Replicate()
Call RedisModule_CreateDataType()
The custom data type API
void *MyTypeLoad(RedisModuleIO *rdb, int encver);
void MyTypeSave(RedisModuleIO *rdb, void *value);
void MyTypeRewrite(RedisModuleIO *aof,
RedisModuleString *key, void *value);
size_t MyTypeMemUsage(const void *value);
void MyTypeFree(void *value);
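Wiring those callbacks into a registration might look roughly like this; the 9-character type name and encoding version are illustrative, and RedisModuleTypeMethods is the struct RedisModule_CreateDataType() expects.

#include "redismodule.h"

/* Callback prototypes from the slide above (note that rdb_load
 * returns the loaded value, i.e. void *, in the actual API). */
void *MyTypeLoad(RedisModuleIO *rdb, int encver);
void MyTypeSave(RedisModuleIO *rdb, void *value);
void MyTypeRewrite(RedisModuleIO *aof, RedisModuleString *key, void *value);
size_t MyTypeMemUsage(const void *value);
void MyTypeFree(void *value);

static RedisModuleType *MyType;

int RedisModule_OnLoad(RedisModuleCtx *ctx,
                       RedisModuleString **argv, int argc) {
  if (RedisModule_Init(ctx, "mymodule", 1, REDISMODULE_APIVER_1)
      == REDISMODULE_ERR) return REDISMODULE_ERR;

  RedisModuleTypeMethods tm = {
      .version = REDISMODULE_TYPE_METHOD_VERSION,
      .rdb_load = MyTypeLoad,       /* load one value from the RDB */
      .rdb_save = MyTypeSave,       /* save one value to the RDB */
      .aof_rewrite = MyTypeRewrite, /* emit commands that rebuild it */
      .mem_usage = MyTypeMemUsage,
      .free = MyTypeFree,
  };
  /* Type names must be exactly 9 characters long. */
  MyType = RedisModule_CreateDataType(ctx, "MyType-IH", 0, &tm);
  if (MyType == NULL) return REDISMODULE_ERR;
  return REDISMODULE_OK;
}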
20. ● Deep dive into low-level APIs
● How to debug with gdb/lldb
● How to write unit/integration tests
● Calling module commands from clients
Do not fear though! Most is available in the documentation,
existing modules repos and online.
Also, check out ze module: github.com/itamarhaber/zpop
So much more important stuff left to cover...
21. You can get started with a template HGETSET project, a
skeleton makefile and a bunch of make-my-life-easier
utilities, simply by cloning:
https://github.com/RedisLabs/RedisModulesSDK
The Modules SDK