Git presentation, internals, advanced use and workflow examples.
Presentation by Tommaso Visconti http://www.tommyblue.it for DrWolf srl http://www.drwolf.it
5. Git uses snapshots
• A version is a tree (like a FS) using hashes as nodes
• Every version is a snapshot of the full repository, with all files
• A file is identified by a SHA-1 hash, depending on the file content
• If the content doesn’t change, the hash is the same
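A one-line demonstration of content addressing (the hash matches the blob example on slide 9, because the hash depends only on the content):
# The same content always produces the same hash
$ echo 'test content' | git hash-object --stdin
d670460b4b4aece5915caf5c68d12f560a9fe3e4
$ echo 'test content' | git hash-object --stdin
d670460b4b4aece5915caf5c68d12f560a9fe3e4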
7. The .git folder
Everything is in the .git folder (delete it to delete the repo)
.git/
  hooks/
  info/
  objects/     => repo content
  refs/        => commit objects’ pointers
  config
  description
  index        => stage info
  HEAD         => checked-out branch
8. Git objects
Git works like a key-value datastore
When you save an object in Git, it returns its hash
All objects are identified by a hash
Object types: Blob, Tree, Commit, Tag
9. Blob objects - 1
Essentially the committed file with its content
# Save a simple text file (without -w it calculates the hash)
$ echo 'test content' | git hash-object -w --stdin
d670460b4b4aece5915caf5c68d12f560a9fe3e4
# The created object (notice the folder structure)
$ find .git/objects -type f
.git/objects/d6/70460b4b4aece5915caf5c68d12f560a9fe3e4
# Extract the content using the hash as reference
$ git cat-file -p d670460b4b4aece5915caf5c68d12f560a9fe3e4
test content
10. Blob objects - 2
# New version of the file
$ echo 'version 1' > test.txt
$ git hash-object -w test.txt
83baae61804e65cc73a7201a7252750c76066a30
# Now there are two objects
$ find .git/objects -type f
.git/objects/83/baae61804e65cc73a7201a7252750c76066a30
.git/objects/d6/70460b4b4aece5915caf5c68d12f560a9fe3e4
# Restore the old version
$ git cat-file -p d670460b4b4aece5915caf5c68d12f560a9fe3e4 > test.txt
11. Tree objects
Contains references to its children (trees or blobs), like a UNIX folder
Every commit (snapshot) has a different tree
# Adds a file to the index (staging area)
$ git update-index --add --cacheinfo 100644 a906cb2a4a904a152e80877d4088654daad0c859 README
# Write the tree object (containing indexed/staged files)
$ git write-tree
1f7a7a472abf3dd9643fd615f6da379c4acb3e3a
$ git cat-file -t 1f7a7a472abf3dd9643fd615f6da379c4acb3e3a
tree
# Show the tree object content
$ git cat-file -p 1f7a7a472abf3dd9643fd615f6da379c4acb3e3a
100644 blob a906cb2a4a904a152e80877d4088654daad0c859    README
100644 blob 8f94139338f9404f26296befa88755fc2598c289    Rakefile
040000 tree 99f1a6d12cb4b6f19c8655fca46c3ecf317074e0    lib
12. Commit objects - 1
Commit message, other information (GPG) and a reference to a tree object (through its hash)
# First commit object given a tree object (1f7a7a)
$ echo 'first commit' | git commit-tree 1f7a7a
fdf4fc3344e67ab068f836878b6c4951e3b15f3d
# Let’s check the commit content
$ git cat-file -p fdf4fc3
tree 1f7a7a472abf3dd9643fd615f6da379c4acb3e3a
author Tommaso Visconti <[email protected]> 1243040974 -0700
committer Tommaso Visconti <[email protected]> 1243040974 -0700

first commit
13. Tag objects
Lightweight tag
A simple ref pointing directly to a commit object
Annotated tag
Git creates a tag object (like a commit, but referencing a commit instead of a tree) and the tag ref points to its hash
$ git tag -a v1.1 1a410efbd13591db07496601ebc7a059dd55cfe9 -m 'test tag'

$ cat .git/refs/tags/v1.1
9585191f37f7b0fb9444f35a9bf50de191beadc2

$ git cat-file -p 9585191f37f7b0fb9444f35a9bf50de191beadc2
object 1a410efbd13591db07496601ebc7a059dd55cfe9
type commit
tag v1.1
tagger Tommaso Visconti <[email protected]> Mon Nov 25 15:56:23 2013

test tag
14. References - 1
Useful to avoid using and remembering hashes
Stored in .git/refs/
Text files just containing the commit hash
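For example, a branch ref is just a one-line text file (the hash shown is the commit used elsewhere in this deck, here purely illustrative):
$ cat .git/refs/heads/master
1a410efbd13591db07496601ebc7a059dd55cfe9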
15. References - 2
Branches are references stored in .git/refs/heads
# Update master ref to the last commit
$ git update-ref refs/heads/master 1a410e
Lightweight tags are refs placed in .git/refs/tags
Remotes are refs stored in .git/refs/remotes/<REMOTE>/<BRANCH>
The commit hash of a remote is the commit of that branch the last time the remote was synchronized (fetch/push)
17. HEAD - 1
How does Git know which branch or commit is checked out?
# HEAD is a symbolic reference
$ cat .git/HEAD
ref: refs/heads/master
# Get HEAD value using the proper tool
$ git symbolic-ref HEAD
refs/heads/master
# Set HEAD
$ git symbolic-ref HEAD refs/heads/test
$ cat .git/HEAD
ref: refs/heads/test
18. HEAD - 2
HEAD generally points to the checked-out branch
Committing in this situation means setting a new hash on the current branch and, as a consequence, HEAD points to it
HEAD can be moved with git reset
If HEAD points to a commit and not a branch, it’s the so-called detached HEAD
Committing in this situation means advancing HEAD to the new commit hash, without modifying any branch
20. Detached HEAD - 2
$ cat .git/HEAD
ref: refs/heads/master
$ git checkout 30657817081f6f0808bf37da470973ad12cb7593
Note: checking out '30657817081f6f0808bf37da470973ad12cb7593'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

git checkout -b new_branch_name

HEAD is now at 3065781... message1
$ cat .git/HEAD
30657817081f6f0808bf37da470973ad12cb7593
22. Detached HEAD - 4
$ git checkout master
Warning: you are leaving 1 commit behind, not connected to
any of your branches:

b843940 det commit

If you want to keep them by creating a new branch, this may be a good time
to do so with:

git branch new_branch_name b843940

Switched to branch 'master'
$ git log
commit 33676f027d9c36c66f2a2d5d74ee1cbf3e1ff56b
Author: Tommaso Visconti <[email protected]>
Date:   Mon Nov 25 16:40:31 2013 +0100

test2

commit 30657817081f6f0808bf37da470973ad12cb7593
Author: Tommaso Visconti <[email protected]>
Date:   Mon Nov 25 16:39:53 2013 +0100

message1

The b843940 commit is gone.. :-(
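Not really gone, though: the reflog still knows about it. A minimal recovery sketch (b843940 is the dangling commit from the example above; rescued_commit is an arbitrary branch name):
# The reflog records every position HEAD has been at
$ git reflog
# ...find the dangling commit in the output, then attach a branch to it
$ git branch rescued_commit b843940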
24. Background: refspec
A refspec is a mapping between remote and local branches
An example is the fetch entry in a remote config:
[remote "origin"]
url = [email protected]:schacon/simplegit-progit.git
fetch = +refs/heads/*:refs/remotes/origin/*
Format: +<src>:<dst>
<src>: pattern for references on the remote side
<dst>: where those references will be written locally
+: (optional) update the reference even if it isn’t a fast-forward
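A sketch of using a narrower refspec (the experiment branch name is illustrative):
# Fetch everything matched by the refspecs configured above
$ git fetch origin
# One-off: fetch only the remote experiment branch into a remote-tracking ref
$ git fetch origin refs/heads/experiment:refs/remotes/origin/experiment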
25. Background: ancestry references
Given a ref (e.g. HEAD):
• HEAD^ is the first parent of HEAD
• HEAD^2 is the second parent (and so on…)
• HEAD~ is identical to HEAD^1
• HEAD~2 is the parent of the parent of HEAD
^ is useful for merges, where a commit has two or more parents
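A few illustrative invocations (a sketch; assumes HEAD is a merge commit):
# First parent of the merge (the branch you were on)
$ git show HEAD^
# Second parent of the merge (the branch you merged in)
$ git show HEAD^2
# Grandparent, following first parents
$ git show HEAD~2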
26. Background: ranges
git log <refA>..<refB>
All commits reachable from refB that aren’t reachable from refA
i.e. commits in refB not merged into refA
Synonyms:
git log ^refA refB
git log refB --not refA
git log refA refB ^refC
The last form is useful when using more than two refs
27. Background: ranges
git log <refA>...<refB>
All commits reachable from either refA or refB, but not from both
i.e. the commits to be merged between the two refs
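A quick sketch comparing the two range operators (branch names are illustrative):
# What’s in feature but not yet in master
$ git log --oneline master..feature
# Everything the two branches don’t have in common
$ git log --oneline master...feature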
30. Basic commands - 2
git commit and the commit message
Commit description (visible with --oneline), max 50 chars

Commit full message. Break lines at the 72nd char
Buzzword buzzword buzzword buzzword buzzword buzzword buzzword
buzzword buzzword buzzword buzzword buzzword buzzword buzzword

Commit footer
Commands, e.g.:
fixes #<BUG>
etc.
git commit --amend
31. Basic commands - 3
git log
# Short log
git log --oneline

# Line diff between commits
git log -p

# Diff stats without lines
git log --stat

# Pretty log with tree
git log --pretty=format:'%h %s' --graph

# Word diff
git log -p --word-diff

git diff
# Diff with remote
git diff HEAD..origin/master

.gitignore
# Exact match (absolute path)
/public/README

# All matches
public/README
READ*
32. Useful tools
git blame: show the author of each line
git bisect: find the problematic commit
git alias: create command aliases
git filter-branch: hard rewrite of history, e.g. to delete committed passwords
git format-patch: create a patch file ready to be sent to somebody
git request-pull: implement a pull-request workflow
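Two of these in action (a sketch; the good revision and the alias name are illustrative):
# Binary-search the history for the commit that broke things
$ git bisect start
$ git bisect bad            # current version is broken
$ git bisect good v1.1      # last known good release
# ...test each checkout, answering git bisect good/bad until found
$ git bisect reset

# A handy alias for a compact graph log
$ git config --global alias.lg "log --oneline --graph --decorate"
$ git lg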
33. Git stash
Save uncommitted changes for future reuse
Create a branch from a stash:
git stash branch <BRANCH>
git unstash doesn’t exist, but:
git stash show -p stash@{0} | git apply -R
Create an alias to do it:
git config --global alias.stash-unapply '!git stash show -p | git apply -R'
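Typical day-to-day usage (a sketch; the listed stash entry is illustrative):
# Park work in progress, do something else, then take it back
$ git stash
$ git stash list
stash@{0}: WIP on master: 3065781 message1
$ git stash pop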
34. Git reset
Move HEAD to a specific state (e.g. a commit)
--soft: move only HEAD
--mixed: move HEAD and reset the staging area to it (default)
--hard: move HEAD and reset the staging area and the working tree
git reset [option] <PATH>
Doesn’t move HEAD, but resets the PATH (staging area and working tree, depending on the option)
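The three modes side by side (a sketch; HEAD~ as target is illustrative):
# Undo the last commit but keep everything staged
$ git reset --soft HEAD~
# Undo the last commit and unstage the changes (default)
$ git reset --mixed HEAD~
# Undo the last commit and throw the changes away
$ git reset --hard HEAD~
# Unstage a single file without moving HEAD
$ git reset README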
36. Fetch
Sync the remotes’ status: git fetch <remote>
Uses .git/config to know what to update
[remote "origin"]
url = [email protected]:schacon/simplegit-progit.git
fetch = +refs/heads/master:refs/remotes/origin/master
fetch = +refs/heads/qa/*:refs/remotes/origin/qa/*
After checking a remote’s status we can:
• merge
• rebase
• cherry-pick
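A typical fetch-then-inspect-then-merge flow (a sketch, using the range syntax from slide 26):
$ git fetch origin
# What did the remote branch get that we don’t have?
$ git log --oneline master..origin/master
# Bring it in
$ git merge origin/master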
37. Pull
git pull origin master
=
git fetch origin
git merge origin/master
In both cases the master branch must be checked out
38. Fast-forward merge
When the commit being merged in is a direct descendant of the branch we’re merging into, this is a so-called fast-forward merge
To merge iss53 into master, Git just advances the master pointer to C3 without creating a commit object: this is a fast-forward merge
Use merge --no-ff to force the creation of a commit object
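A sketch of both behaviours (branch name iss53 as in the example above):
$ git checkout master
# Pointer moves forward, no merge commit
$ git merge iss53
# Force a merge commit even when fast-forward is possible
$ git merge --no-ff iss53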
39. 3-way merge
When the commit being merged isn’t a direct descendant of the commit to merge into
C5 is a new commit object, including the merge information
41. Rebase
The experiment’s commits (C3) become a single commit (as a patch), C3’, which is applied onto master
It’s a rebase of experiment, then a FF merge on master
git checkout experiment
git rebase master    # C3’ is created
git checkout master
git merge experiment # FF merge
44. Advanced rebase
git rebase --onto master server client
“Check out the client branch, figure out the patches from the common ancestor of the client and server branches, and then replay them onto master”
45. Advanced rebase
Now perform a simple fast-forward merge on master to advance it to C9’
C8’ and C9’, originally made on top of C3, can break the code when replayed onto C6!
46. Advanced rebase
With rebase the rebased commits become only one,
the commit history is lost
With interactive rebase is possible to edit history,
changing commit messages, squashing, splitting,
deleting and reordering commits
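A sketch of the interactive flow (the commit list shown is illustrative, reusing hashes from earlier slides):
$ git rebase -i HEAD~3
# Git opens the todo list in your editor:
#   pick 3065781 message1
#   squash 33676f0 test2
#   reword b843940 det commit
# pick keeps a commit, squash melds it into the previous one,
# reword lets you change its message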
48. Submodules
Use other repositories inside your own
e.g. an external lib in ./lib/<libname>
git submodule add [-b <BRANCH>] <REPO> <PATH>
Submodules track a commit or a branch (since 1.8.2)
Submodule information is stored in .gitmodules and .git/modules
.gitmodules must be committed at the beginning and after a submodule update
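A minimal sketch (repository URL and path are illustrative):
# Add a library as a submodule tracking its master branch
$ git submodule add -b master https://example.com/libfoo.git lib/libfoo
$ git commit -m 'Add libfoo submodule'
# After cloning a repo that contains submodules:
$ git submodule update --init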
49. Subtree merging
Can be used in place of submodules
It’s a way to track a branch inside a folder of another branch
RepositoryFolder/
|_lib/
  |_libA/
RepositoryFolder is on the master branch and libA/ is the lib_a branch
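A sketch of wiring this up (remote name and URL are illustrative; the recipe follows the classic read-tree approach):
# Track the library’s repo on a local branch
$ git remote add liba_remote https://example.com/liba.git
$ git fetch liba_remote
$ git checkout -b lib_a liba_remote/master
# Read the lib_a branch into a subfolder of master
$ git checkout master
$ git read-tree --prefix=lib/libA/ -u lib_a
# Later, to pull library updates into the subfolder:
$ git merge -s subtree --squash lib_a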
51. Git flow
master contains production code and is tagged with releases
development contains dev code
a new feature is branched from development
the feature merges back into development
development becomes a release
a release merges into master
a hotfix is a branch from master and merges into both master and development
If you write code on master or development, you’re wrong!
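A sketch of one feature cycle with plain Git commands (branch names follow the model above; feature/login is illustrative):
# Start a feature from development
$ git checkout -b feature/login development
# ...work, commit...
# Merge it back, keeping a merge commit in the history
$ git checkout development
$ git merge --no-ff feature/login
$ git branch -d feature/login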