Are you planning to start working with open source and feeling overwhelmed by all the new tools you have to learn? Here you can find a quick overview of Subversion, Git, and Maven.
This document summarizes a Jenkins pipeline for testing and deploying Chef cookbooks. The pipeline is configured to automatically scan a GitHub organization for any repositories containing a Jenkinsfile. It will then create and manage multibranch pipeline jobs for each repository and branch. The pipelines leverage a shared Jenkins global library which contains pipeline logic to test and deploy the Chef cookbooks. This allows for standardized and reusable pipeline logic across all Chef cookbook repositories.
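With this setup, each repository's Jenkinsfile can stay tiny and delegate everything to the shared library. A minimal sketch, assuming a global library named `chef-pipeline-library` that exposes a hypothetical custom step `chefCookbookPipeline` (both names are invented for illustration):

```groovy
// Jenkinsfile in each cookbook repository. The org-folder scan finds this
// file and creates a multibranch job; the logic lives in the library.
@Library('chef-pipeline-library') _

chefCookbookPipeline {
    // Per-repository knobs; everything else is standardized in the library.
    cookbookName = 'my-cookbook'
    uploadToChefServer = true
}
```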
In this presentation, I covered how I migrated an Android project from an old Jenkins instance (Freestyle jobs) to a new Jenkins instance (multibranch pipelines).
It also covers Jenkins Shared Library usage and integration tests on pipeline code.
At the end, I cover the pros and cons of the final result and the difficulties I faced during the migration.
Kurento is a media server for real-time video communication that needed to test its software under many scenarios and environments. Its CI infrastructure originally used many virtual machines, each configured differently for testing. This led to high costs, configuration difficulties, and slow development cycles. By using Docker, Kurento was able to define environments as reusable images and run tests in isolated containers on fewer virtual machines. This simplified infrastructure management and sped up development.
Using Docker to build and test in your laptop and Jenkins - Micael Gallego
Docker is changing the way we create and deploy software. This presentation is a hands-on introduction to using Docker to build and test software, on your laptop and in your Jenkins CI server.
This document discusses Jenkins 2.0 and its new "pipeline as code" feature. Pipeline as code allows automation of continuous delivery pipelines by describing the stages in a textual pipeline script stored in version control. This enables pipelines to be more flexible, reusable and survive Jenkins restarts. The document provides examples of pipeline scripts for common tasks like building, testing, archiving artifacts and running in parallel. It also discusses how pipelines can be made more reusable by defining shared library methods.
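A minimal sketch of what such a pipeline script can look like, using the declarative Jenkinsfile syntax; the stage names and Maven commands are illustrative, not taken from the document:

```groovy
// Jenkinsfile, stored in version control next to the code it builds.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B -DskipTests package' }
        }
        stage('Test') {
            steps { sh 'mvn -B test' }
            post { always { junit 'target/surefire-reports/*.xml' } }
        }
        stage('Archive') {
            steps { archiveArtifacts artifacts: 'target/*.jar', fingerprint: true }
        }
    }
}
```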
Jenkins is a Continuous Integration (CI) server written in Java. It provides continuous integration services for software development and can be started from the command line or run in a web application server. Jenkins Pipeline is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins.
Codifying the Build and Release Process with a Jenkins Pipeline Shared Library - Alvin Huang
These are my slides from my Jenkins World 2017 talk, detailing a war story of migrating 150-200 Freestyle jobs for build and release into ~10-line Jenkinsfiles that heavily leverage Jenkins Pipeline Shared Libraries (https://jenkins.io/doc/book/pipeline/shared-libraries/)
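For context, the shared-library side of such a migration typically lives in a `vars/` directory of the library repository. A minimal sketch with an invented step name (`standardBuild`) and invented parameters:

```groovy
// vars/standardBuild.groovy in the shared library repository.
// Callable from a ~10-line Jenkinsfile as: standardBuild(mavenGoals: 'clean verify')
def call(Map config = [:]) {
    def goals = config.get('mavenGoals', 'clean verify')
    node {
        stage('Checkout') { checkout scm }
        stage('Build')    { sh "mvn -B ${goals}" }
        stage('Archive')  { archiveArtifacts artifacts: 'target/*.jar' }
    }
}
```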
Jenkins Days workshop: pipelines - Eric Long (ericlongtx)
This document provides an overview of a Jenkins Days workshop on building Jenkins pipelines. The workshop goals are to install Jenkins Enterprise, create a Jenkins pipeline, and explore additional capabilities. Hands-on exercises will guide attendees on installing Jenkins Enterprise using Docker, creating their first pipeline that includes checking code out of source control and stashing files, using input steps and checkpoints, managing tools, developing pipeline as code, and more advanced pipeline steps. The document encourages attendees to get involved with the Jenkins and CloudBees communities online and on Twitter.
The document discusses 7 habits of highly effective Jenkins users. It recommends using long-term support releases, breaking up large jobs into smaller modular jobs, and defining Jenkins tasks programmatically using scripts and pipelines rather than manually configuring through the UI. Key plugins are also discussed like Pipeline, Job DSL, and others that help automate Jenkins configuration and integration.
This document discusses Jenkins Pipelines, which allow defining continuous integration and delivery (CI/CD) pipelines as code. Key points (a short sketch follows the list):
- Pipelines are defined using a Groovy domain-specific language (DSL) for stages, steps, and environment configuration.
- This provides configuration as code that is version controlled and reusable across projects.
- Jenkins plugins support running builds and tests in parallel across Docker containers.
- Notifications can be sent to services like Slack on failure.
- The Blue Ocean UI in Jenkins focuses on visualization of pipeline runs.
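A short sketch combining two of the listed points: parallel test stages in Docker containers, plus a Slack notification on failure. It assumes the Docker Pipeline and Slack Notification plugins are installed; the channel name and image tags are illustrative:

```groovy
pipeline {
    agent none
    stages {
        stage('Tests') {
            parallel {
                stage('JDK 8') {
                    agent { docker { image 'maven:3-jdk-8' } }
                    steps { sh 'mvn -B test' }
                }
                stage('JDK 11') {
                    agent { docker { image 'maven:3-jdk-11' } }
                    steps { sh 'mvn -B test' }
                }
            }
        }
    }
    post {
        // Sent only when the run fails.
        failure { slackSend channel: '#ci', message: "Build failed: ${env.BUILD_URL}" }
    }
}
```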
An Open-Source Chef Cookbook CI/CD Implementation Using Jenkins Pipelines - Steffen Gebert
This document discusses implementing continuous integration and continuous delivery (CI/CD) for Chef cookbooks using Jenkins pipelines. It introduces Jenkins pipelines and how they can be used to test, version, and publish Chef cookbooks. Key steps include linting, dependency resolution, test-kitchen testing, version bumping, and uploading to the Chef Server. The jenkins-chefci cookbook automates setting up Jenkins with the necessary tools to run pipelines defined in a shared Groovy library for cookbook CI/CD.
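The stage sequence described above might look roughly like this in scripted Pipeline syntax. The real logic lives in the shared Groovy library; the tool invocations (`cookstyle`, `berks`, `kitchen`, `knife spork`) are the usual Chef ecosystem commands, used here illustratively:

```groovy
node {
    checkout scm
    stage('Lint')         { sh 'cookstyle .' }
    stage('Dependencies') { sh 'berks install' }
    stage('Test')         { sh 'kitchen test --destroy=always' }
    // Version bump and upload only from the main branch.
    if (env.BRANCH_NAME == 'master') {
        stage('Version bump') { sh 'knife spork bump my-cookbook patch' }
        stage('Upload')       { sh 'berks upload' }
    }
}
```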
Voxxed Luxembourg 2016: Jenkins 2.0 and Pipeline as Code - Damien Duportal
Born as Hudson in 2004 (cf. http://kohsuke.org/2011/01/11/bye-bye-hudson-hello-jenkins/), the Jenkins project has just reached a major milestone: Jenkins 2.0 (cf. https://groups.google.com/forum/#!msg/jenkinsci-dev/vbXK7JJekFw/BlEvO0UxBgAJ)!
This major release manages to reconcile support for the legacy way of doing things with a transition towards more modern continuous deployment practices.
Among the new features, Pipeline-as-Code and Docker integration are two elements from which you will be able to draw many benefits.
If you are interested in a concrete example of migrating from Jenkins 1.x to a Docker- and Pipeline-based workflow with Jenkins 2.0, this session is for you!
The example followed will be a "typical" Java/Maven project, stored in a Git repository, with tests and analyses in "chained multi-jobs", which we will move into a "Jenkins Pipeline" configured via a file in the Git repository, in "continuous delivery" mode via Docker.
Building an Extensible, Resumable DSL on Top of Apache Groovy - jgcloudbees
Presented at: https://apacheconeu2016.sched.org/event/8ULR
In 2014, a few Jenkins hackers set out to implement a new way of defining continuous delivery pipelines in Jenkins. Dissatisfied with chaining jobs together configured in the web UI, the effort started with Apache Groovy as the foundation and grew from there. Today the result of that effort, named Jenkins Pipeline, supports a rich DSL with "steps" provided by Jenkins plugins, built-in auto-generated documentation, and execution resumability, which allows Pipelines to continue executing while the master is offline.
In this talk we'll take a peek behind the scenes of Jenkins Pipeline. Touring the various constraints we started with, whether imposed by Jenkins or Groovy, and discussing which features of Groovy were brought to bear during the implementation. If you're embedding, extending or are simply interested in the internals of Groovy this talk should have plenty of food for thought.
The Jenkins open source continuous integration server now provides a “pipeline” scripting language which can define jobs that persist across server restarts, can be stored in a source code repository and can be versioned with the source code they are building. By defining the build and deployment pipeline in source code, teams can take full control of their build and deployment steps. The Docker project provides lightweight containers and a system for defining and managing those containers. The Jenkins pipeline and Docker containers are a great combination to improve the portability, reliability, and consistency of your build process.
This session will demonstrate Jenkins and Docker in the journey from continuous integration to DevOps.
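In scripted Pipeline syntax, the combination boils down to running build steps inside a throwaway container via the Docker Pipeline plugin. A minimal sketch, with an illustrative image tag:

```groovy
// Tool versions are pinned by the image, not by whatever happens
// to be installed on the build agent.
node {
    checkout scm
    docker.image('maven:3-jdk-8').inside {
        sh 'mvn -B clean verify'
    }
    archiveArtifacts artifacts: 'target/*.jar'
}
```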
OpenShift-Build-Pipelines: Build -> Test -> Run! @JavaForumStuttgart - Tobias Schneck
Stable and scalable continuous integration environments have always been hard to set up and maintain. Especially in the age of containers and cloud-native apps, the next step towards a fully automated build pipeline is being demanded. Both setting up automated deployments and running automated integration and UI tests confront DevOps teams with new hurdles. Container-based CI/CD environments that are provisioned dynamically at build time offer an elegant way out. This is where the open-source container platform "OpenShift" comes in. With its infrastructure-as-code approach, both the CI server and the complete build lifecycle, from building the artifacts to testing the application, move into the container cluster.
The talk shows where OpenShift differs from the Kubernetes API and how Jenkins build pipelines can build artifacts, package them into Docker images, test them, and deploy them. Several live demos show how, with clever use of open-source tools, both server APIs and graphical web and rich-client UIs can be black-box tested in container clusters. A concluding critical assessment of the lessons learned shows where the potential of this approach lies, but also which pitfalls (still) have to be mastered.
The document discusses managing dependencies in Gradle multi-module projects. It presents three problems: 1) managing shared dependencies across modules, 2) changing dependencies that impact repeatable builds, and 3) dependency conflicts when modules are updated separately. For problem 1, it recommends using a gradle.properties file to define shared versions. For problem 2, it introduces the nebula-dependency-lock plugin to lock dependencies until explicitly updated. For problem 3, it notes conflicts can occur if modules are updated independently and provides solutions like dependency locking or pinning versions.
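Two of those recommendations sketched in Gradle's Groovy DSL. The property name is invented, and the nebula-dependency-lock plugin id and tasks are quoted from the Nebula project's documentation, so verify them against the plugin version you use:

```groovy
// build.gradle (Groovy DSL) for one module of the multi-module build.
plugins {
    // Problem 2: nebula-dependency-lock pins resolved versions in a
    // dependencies.lock file until explicitly updated (replace <version>
    // with a real plugin version).
    id 'nebula.dependency-lock' version '<version>'
    id 'java-library'
}

dependencies {
    // Problem 1: the version comes from a single shared property,
    // e.g. a gradle.properties line: guavaVersion=28.2-jre
    implementation "com.google.guava:guava:${guavaVersion}"
}

// Generate and commit the lock file:
//   ./gradlew generateLock saveLock
```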
Easy testing with Docker: manage dependencies and unify environments - Micael Gallego
Docker is a technology that lets you package software so that it can be run simply and quickly, without installation and on any operating system. It is like having any program installed in its own virtual machine, but it starts much faster and consumes fewer resources. Docker is changing the way we deploy software, but it is also affecting the development process itself, and testing in particular.
In this workshop we will put into practice how to use Docker to make it easier to implement different kinds of tests and to run them both on your laptop and in the continuous integration environment. Although the techniques shown can be applied in any programming language, the examples will be based on Java and JavaScript.
OpenShift Build Pipelines @ Lightweight Java User Group Meetup - Tobias Schneck
A reliable test infrastructure is half the battle in getting stable tests into your delivery pipeline. Container technologies like Docker can help to build an immutable deployment and test infrastructure. If you think one step further, you will soon reach the point where scalability comes into view. To meet this challenge, something like a container-based CI/CD environment is needed, so we will take a look at the open-source solution "OpenShift". The container platform combines Jenkins build pipelines and Kubernetes concepts into a ready-to-use CI/CD solution. The talk will show which platform components can be used to build up your self-hosted automated build, test, and deployment pipeline. In a live demo session we will build a microservice application, unit test it, deploy it, execute API integration tests, and finally run real UI tests in dockerized desktop containers.
Pipeline as code - new feature in Jenkins 2 - Michal Ziarnik
What pipeline as code is in a continuous delivery/continuous deployment environment.
How to set up a multibranch pipeline to fully benefit from pipeline features.
The Jenkins master/node concept in a Kubernetes cluster.
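With a multibranch pipeline, Jenkins discovers branches and instantiates one job per branch from the same Jenkinsfile, so branch-specific behavior goes into the file itself. A minimal sketch, with a placeholder deploy script:

```groovy
// One Jenkinsfile serves every branch; the multibranch job indexes the
// repository and creates a job per branch. Deployment is gated to master.
pipeline {
    agent any
    stages {
        stage('Build') { steps { sh './gradlew build' } }
        stage('Deploy') {
            when { branch 'master' }          // only runs on the master branch job
            steps { sh './deploy.sh staging' } // illustrative placeholder script
        }
    }
}
```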
Let's go HTTPS-only! - More Than Buying a Certificate - Steffen Gebert
1) The document discusses various ways to secure a website and its users, including getting an SSL certificate, setting up HTTPS, and ensuring strong security practices with headers and configurations.
2) It describes Let's Encrypt as a free and easy way to get SSL certificates with automated renewal, and quality-testing services like Qualys SSL Labs to check the SSL configuration.
3) Additional security best practices discussed include HTTP headers like HSTS, CSP, and HPKP to prevent vulnerabilities and protect against MITM attacks. Regular testing and integrating checks into development processes are recommended.
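For reference, the headers mentioned in point 3 look roughly like this in an HTTPS response. The values are illustrative, and note that HPKP is deprecated in current browsers:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
Content-Security-Policy: default-src 'self'
Public-Key-Pins: pin-sha256="<base64-primary>"; pin-sha256="<base64-backup>"; max-age=5184000
```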
Jenkins vs. AWS CodePipeline (AWS User Group Berlin) - Steffen Gebert
This document summarizes a presentation comparing Jenkins and AWS CodePipeline for continuous integration and delivery. It provides an overview of how to set up and use Jenkins and CodePipeline, including building environments, secrets handling, testing, branching strategies, approvals, and deployments. It also compares features, pricing, access control, and visualization capabilities between the two tools. Finally, it discusses options for integrating Jenkins and CodePipeline together to leverage the strengths of both solutions. The overall message is that the best tool depends on each organization's needs, and combining tools can provide benefits over relying on a single solution.
BSD/macOS Sed and GNU Sed both support additional features beyond POSIX Sed, such as extended regular expressions with -E/-r, but using only POSIX features ensures portability. GNU Sed defaults allow some non-POSIX behaviors, so --posix is recommended for strict POSIX compliance. The most portable Sed scripts use only basic regular expressions and features defined in the POSIX specification.
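A concrete example of that portability point, showing the same substitution with extended regular expressions (-E) and with POSIX basic regular expressions only, plus GNU Sed's --posix switch:

```sh
# Extended regex: supported by GNU Sed and BSD/macOS Sed via -E
# (GNU also accepts -r), but not guaranteed by POSIX.
sed -E 's/(foo)+/bar/' input.txt

# Portable equivalent using only POSIX basic regular expressions.
sed 's/\(foo\)\{1,\}/bar/' input.txt

# GNU Sed only: disable GNU extensions to check strict POSIX behavior.
sed --posix 's/\(foo\)\{1,\}/bar/' input.txt
```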
This document discusses continuous delivery and the new features of Jenkins 2, including pipeline as code. Jenkins 2 introduces pipeline as a new job type that allows defining build pipelines explicitly as code in Jenkinsfiles checked into source control. This enables pipelines to be versioned, made more modular through shared libraries, and resumed if interrupted. The document provides examples of creating pipelines with Jenkinsfiles that define stages and steps for builds, tests, and deployments.
Automate App Container Delivery with CI/CD and DevOps - Daniel Oh
This document discusses how to automate application container delivery with CI/CD and DevOps. It describes building and deploying container images using Source-to-Image (S2I) to deploy source code or application binaries. OpenShift automates deploying application containers across hosts via Kubernetes. The document also discusses continuous integration, continuous delivery, and how OpenShift supports CI/CD with features like Jenkins-as-a-Service and OpenShift Pipelines.
The document discusses Jirayut Nimsaeng's presentation on using Docker for continuous delivery of Joomla projects. Some key points:
- Jirayut is interested in cloud, open source, and has worked with Docker since version 0.6
- Docker allows for reliable, consistent, and efficient deployment across environments like development, testing, and production
- The presentation covers using Dockerfiles to build images, Docker workflows, and examples of using Docker for development environments and continuous delivery pipelines.
[WroclawJUG] Continuous Delivery in OSS using Shipkit - MarcinStachniuk
Shipkit is a framework that helps automate continuous delivery of open source software projects. It allows developers to easily manage releases by automatically bumping versions, generating release notes, creating tags, and publishing releases when code is merged to master. Shipkit integrates with Travis CI to run release tasks like tests and publication. It aims to reduce the manual work of releases so developers can focus on coding.
The document provides an introduction to Gradle, an open source build automation tool. It discusses that Gradle is a general purpose build system with a rich build description language based on Groovy. It supports "build-by-convention" and is flexible and extensible, with built-in plugins for Java, Groovy, Scala, web and OSGi. The presentation covers Gradle's basic features, principles, files and collections, dependencies, multi-project builds, plugins and reading materials.
The document provides an overview of a 3-day open source workshop being conducted by Luciano Resende from the Apache Software Foundation. Day 1 will cover topics on open source, licenses, communities and how to get involved in Apache projects. Day 2 focuses on hands-on development, setting up environments and tools. Day 3 is about mentoring expectations and working on project proposals. The workshop aims to educate participants and help them get involved in open source.
These slides contain all figures from the book: Dirk Draheim. Business Process Technology - A Unified View on Business Processes, Workflows and Enterprise Applications. Springer, 2010.
This document discusses building asynchronous services with Service Component Architecture (SCA). It begins with an overview of SCA and why asynchronous services are important. It then covers facilities that support asynchronous services like concurrency classes, JAX-WS, JMS, and BPEL. The document explains how SCA supports asynchronous service interactions through callbacks and bidirectional interfaces. It provides an example of an asynchronous service, client, and composite. Finally, it compares asynchronous services to using events and provides a demo scenario of an asynchronous travel search application built with SCA.
How mentoring programs can help newcomers get started with open source - Luciano Resende
As adoption of Open Source code and development practices continues to gain momentum, more newcomers have become interested in getting involved and contributing to Open Source. However, it's usually not easy for newcomers to start contributing to open source projects. This session will discuss how mentoring programs can ease the way for newcomers to get started with open source, and will provide an overview of existing mentoring programs such as Google Summer of Code and our in-house Apache Mentoring Programme.
The document discusses the evolution of service-oriented architecture (SOA) infrastructure from SOAP to enterprise service buses (ESBs), and introduces Service Component Architecture (SCA) as a reference architecture that takes SOA from infrastructure to service modeling. SCA defines a model for building distributed applications and services by allowing entities like services, components and references to be assembled independently of their underlying infrastructure. It supports various implementation technologies and programming languages. The status of ongoing SCA standardization efforts is also covered.
Luciano Resende is a staff software engineer at Shutterfly and committer to Apache projects. He presented on using MongoDB and developing a data access layer and schema definition. The schema definition uses a textual domain-specific language to define models for code generation including persistence, DTOs, mapping, validation and more across platforms like Java, RDBMS and MongoDB.
OpenStack Essex uses a component-based architecture with key services like Nova (compute), Swift (object storage), Glance (image service), Keystone (identity), Quantum (networking), Horizon (dashboard), and Melange (API management). These components communicate through standardized APIs and services for identity management, compute, storage, networking and other core cloud functions to deliver the OpenStack cloud operating system.
This document provides examples and explanations of using Maven for building Java projects. It begins with installing Maven and describing its core concepts like the project object model (POM), plugins, goals, lifecycles and dependency management. It then walks through two examples: a simple project built and run with Maven commands, and using Maven to optimize a more complex project structure. The document also explains how to set up a Maven repository for dependency storage and integration with Jenkins for continuous integration builds.
Maven is an open source build automation tool used primarily for Java projects to manage builds, documentation, dependencies, and reports. It uses a project object model (POM) file to manage build configuration and dependencies. Maven has defined build lifecycles consisting of phases that execute plugin goals. It provides standard project layout and dependency management. Maven searches dependencies in local, central, and remote repositories. Build profiles allow customizing builds for different environments. Plugins are used to perform tasks like compiling, testing, packaging, and generating documentation.
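A minimal POM illustrating the pieces described above; the coordinates and the single test dependency are illustrative:

```xml
<!-- pom.xml: coordinates, one managed dependency, and a build profile. -->
<project xmlns="https://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>demo-app</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>jar</packaging>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <!-- Build profiles customize the build per environment, e.g. mvn -Pci verify -->
  <profiles>
    <profile>
      <id>ci</id>
      <properties><skipITs>false</skipITs></properties>
    </profile>
  </profiles>
</project>
```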
The document discusses using Maven for automation builds. It covers quick starting a Maven project, the Maven lifecycle and phases, dependency and plugin management, and integrating Maven with IDEs like Eclipse. Key points include how to create a basic Maven project, the different Maven directories, common Maven commands, using the Surefire plugin to run tests, and configuring test dependencies.
This document provides an overview of source control and version control using Git and GitHub. It discusses setting up projects with Maven and developing projects locally before pushing changes to GitHub repositories. The key points covered include: setting up source control with Git, forking and cloning repositories on GitHub, making changes locally and pushing updates via pull requests. Maven is also introduced for building Java projects and managing dependencies in an automated way.
This document provides an overview of source control and version control using Git and GitHub. It discusses setting up projects using Maven and developing projects locally before pushing changes to repositories on GitHub. Specifically, it covers initializing and committing code to a Git repository, forking and cloning existing projects on GitHub, and using pull requests to share code changes between remote repositories.
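The fork, clone, and pull-request flow described in both summaries, expressed as shell commands with placeholder repository names:

```sh
# Clone your fork and keep the original repository as "upstream".
git clone https://github.com/<you>/<project>.git
cd <project>
git remote add upstream https://github.com/<owner>/<project>.git

# Work on a branch, commit locally, push to your fork.
git checkout -b fix-typo
git commit -am "Fix typo in README"
git push -u origin fix-typo
# Then open a pull request from <you>:fix-typo against <owner>'s repository.
```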
Continuous Delivery in an Open Source Project - Marcin Stachniuk - DevCrowd 2017
This document discusses continuous delivery in an open source project. It begins with an introduction of the speaker and then discusses various tools used in the continuous delivery process like Travis CI for continuous integration. It outlines the build pipeline for the project including deploying to Bintray and updating GitHub pages. It also covers code quality tools like Codecov and promoting the project on the internet through blogs, conferences and other forums.
Apache Maven, a software project management tool - Renato Primavera
This document discusses Apache Maven, an open-source tool for managing software projects and automating common software project tasks such as compiling source code, running tests, packaging, and deploying artifacts. It describes how Maven uses a Project Object Model (POM) file to manage a project's build, reporting and documentation from a central configuration. It also outlines key Maven concepts like the POM, lifecycles, phases, dependencies, repositories, inheritance, and multi-module projects.
The document provides an overview and cheat sheet for using the OpenShift command line interface. It defines what OpenShift is, lists common commands for login/authentication, managing projects and resources, building and deploying applications, and provides an example of creating a new project, adding users, building an application from source code and image, and checking the status of deployed resources.
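A few commands such a cheat sheet typically covers, using standard `oc` subcommands with placeholder cluster, project, user, and application names:

```sh
oc login https://api.cluster.example.com:6443       # authenticate against the cluster
oc new-project demo                                 # create a project
oc policy add-role-to-user edit alice -n demo       # grant a user access to it
oc new-app nodejs~https://github.com/<you>/<app>.git  # S2I build from source
oc status                                           # check the deployed resources
```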
Intelligent Projects with Maven - DevFest Istanbul - Mert Çalışkan
The document discusses Maven, an open source build automation tool used primarily for Java projects. It provides an overview of Maven's key features like dependency management, build lifecycles, and the project object model (POM). The presentation also demonstrates how to create a basic Maven project, configure dependencies and repositories, and manage multi-module builds.
Maven is a build automation tool used primarily for Java projects. It uses a project object model (POM) to manage dependencies and build processes. This document discusses Maven concepts like the build lifecycle, plugins, dependencies, packaging, and creating custom plugins and archetypes. It provides examples of configuring plugins and executions, defining parameters, and best practices for dependency and plugin management.
Introduction to Maven, its configuration, lifecycle, and relationship to the JS world - Dmitry Bakaleinik
The document provides information about Maven, an open source build automation tool. It describes what Maven is used for, how it manages project dependencies and builds through plugins bound to phases in a build lifecycle. Key Maven concepts discussed include the project object model (POM) file, GAV coordinates used to identify projects, and Maven's use of a "super POM" to inherit common configuration.
Maven is a build tool that can manage a project's build process, dependencies, documentation and reporting. It uses a Project Object Model (POM) file to store build configuration and metadata. Maven has advantages over Ant like built-in functionality for common tasks, cross-project reuse, and support for conditional logic. It works by defining the project with a POM file then running goals bound to default phases like compile, test, package to build the project.
Maven is a project management tool that provides conventions for building Java projects, including a standard project structure, dependency management, and lifecycle phases. It simplifies development by standardizing common tasks like compiling, testing, packaging, and deploying. Compared to Ant, Maven takes a more convention-based approach and handles dependencies and lifecycles automatically. The document provides an overview of Maven's key features and how it can help manage Java projects.
Building Agile Infrastructures - Antons Kranga
This document provides an overview of a presentation on building agile infrastructures. It introduces the presenter, Antons Kranga, and his background. It then outlines the goals of DevOps in bringing developers and operations teams together through practices like Agile and ITIL. The presentation will discuss strategies for adopting a DevOps model, including provisioning continuous integration, automating infrastructure testing, and provisioning QA and production environments using tools like Chef, Vagrant, Jenkins, Nexus, and Test Kitchen. It will also cover techniques for automating infrastructure like configuration management with Chef recipes and testing infrastructure with tools like Chaos Monkey.
This document presents an introduction to the open source tool Apache Maven. It discusses that Maven is a software project management and build automation tool used to manage projects and build Java-based software. It describes Maven's phases, plugins, and usage through the command line or with an IDE. The document also provides an example project structure and overview of installing and using Maven for builds. It concludes with a SWOT analysis of Maven's strengths, weaknesses, opportunities, and threats.
The document provides an overview of agile development using JBoss Seam. It discusses various agile methodologies and technologies that will be presented, including TestNG, Groovy, Hudson, Subversion, Cobertura, DBUnit, and Selenium. It provides descriptions and examples of using these technologies for unit testing, integration testing, and acceptance testing in an agile project.
Introduction to Maven for beginners and DevOps - SISTechnologies
Maven is a build tool that uses convention over configuration. It manages dependencies and builds projects based on their structural relationships rather than procedural scripts. The document introduces Maven and how it compares to Ant, describes how to set up a basic Maven project using the archetype plugin, and discusses managing dependencies, multi-project builds, and helpful Maven reports.
JavaEdge 2008: Your next version control system - Gilad Garon
The next generation of VCS has a clear target ahead of it: making branching and merging easier. Until recently, Subversion dominated the world of Version Control Systems, but now Distributed Version Control Systems are growing in popularity, and everywhere you go you hear about Git or Mercurial and how they make branching and merging a breeze. But the Subversion team isn't going down quietly; they have a new weapon: version 1.5. Learn how the next generation of Version Control Systems plans to solve your problems.
A Jupyter kernel for Scala and Apache Spark.pdf - Luciano Resende
Many data scientists are already making heavy use of the Jupyter ecosystem for analyzing data using interactive notebooks. Apache Toree (incubating) is a Jupyter kernel that enables data scientists and data engineers to easily connect to Apache Spark and leverage its powerful APIs from a standard Jupyter notebook to execute their analytics workloads. In this talk, we will go over what's new with the most recent Apache Toree release. We will cover available magics and visualization extensions that can be integrated with Toree to enable better data exploration and data visualization. We will also describe some high-level designs of Toree and how users can extend its functionality through Apache Toree's powerful plugin system. All of this comes with multiple live demos that demonstrate how Toree can help with your analytics workloads in an Apache Spark environment.
In this session, Luciano will walk you through a real use-case pipeline that uses Elyra features to help analyze COVID-19 related datasets. He will introduce Elyra, a project built to extend JupyterLab with AI-centric capabilities, and showcase the extensions that allow you to build Notebook Pipelines and execute them in a Kubeflow environment, execute notebooks as batch jobs, and create, edit, and execute Python scripts directly from JupyterLab.
Elyra - a set of AI-centric extensions to JupyterLab Notebooks - Luciano Resende
In this session Luciano will explore the different projects that compose the Jupyter ecosystem; including Jupyter Notebooks, JupyterLab, JupyterHub and Jupyter Enterprise Gateway. Jupyter Notebooks are the current open standard for data science and AI model development, and IBM is dedicated to contributing to their success and adoption. Continuing the trend of building out the Jupyter ecosystem, Luciano will introduce Elyra. It's a project built to extend JupyterLab with AI-centric capabilities. He'll showcase the extensions that allow you to build Notebook Pipelines, execute notebooks as batch jobs, navigate and execute Python scripts, and tie neatly into Notebook versioning.
From Data to AI - Silicon Valley Open Source projects come to you - Madrid me... - Luciano Resende
The IBM Center for Open Source, Data and AI Technologies "CODAIT" (https://developer.ibm.com/code/open/centers/codait/) works on multiple open-source Data and AI projects. In this section we will introduce these projects around Jupyter Notebooks, reusable Model and Data assets, and Trusted AI, among others.
The Jupyter Notebook has become the de facto platform used by data scientists and AI engineers to build interactive applications and develop their AI/ML models. In this scenario, it’s very common to decompose various phases of the development into multiple notebooks to simplify the development and management of the model lifecycle.
Luciano Resende details how to schedule these multiple notebooks, corresponding to different phases of the model lifecycle, together into notebook-based AI pipelines, and walks you through scenarios that demonstrate how to reuse notebooks via parameterization.
Strata - Scaling Jupyter with Jupyter Enterprise Gateway - Luciano Resende
Born in academia, Jupyter notebooks are prevalent in both learning and research environments throughout the scientific community. Due to the widespread adoption of big data, AI, and deep learning frameworks, notebooks are also finding their way into the enterprise, which introduces a different set of requirements.
Alan Chin and Luciano Resende explain how to introduce Jupyter Enterprise Gateway into new and existing notebook environments to enable a “bring your own notebook” model while simultaneously optimizing resources consumed by the notebook kernels running across managed clusters within the enterprise. Along the way, they detail how to use different frameworks with Enterprise Gateway to meet the needs of data scientists operating within the AI and deep learning ecosystems.
Scaling notebooks for Deep Learning workloads - Luciano Resende
Deep learning workloads are computing intensive, and training these types of models is better done with specialized hardware like GPUs. Luciano Resende outlines a pattern for building deep learning models using the Jupyter Notebook's interactive development on commodity hardware and leveraging platforms and services such as Fabric for Deep Learning (FfDL) for cost-effective full-dataset training of deep learning models.
This document discusses Jupyter notebooks and the Jupyter Enterprise Gateway. It provides an overview of Jupyter notebooks, their functionality, and architecture. It then describes the limitations of running Jupyter notebooks at an enterprise scale. The Jupyter Enterprise Gateway is introduced as a solution that enables sharing of resources across clusters in a scalable, secure, and multi-tenant way. The document outlines the key features and architecture of the Gateway when used with YARN and Kubernetes clusters. Finally, it provides information on deploying and connecting to the Jupyter Enterprise Gateway.
Artificial intelligence, open source, and IBM Call for Code - Luciano Resende
In this talk we will cover some of the trends in artificial intelligence and the difficulties in adopting AI. We will also present some open-source tools that can help simplify AI adoption, and give a brief introduction to the "Call for Code", an IBM initiative to build solutions for natural disaster prevention and response.
IoT Applications and Patterns using Apache Spark & Apache Bahir - Luciano Resende
The Internet of Things (IoT) is all about connected devices that produce and exchange data, and building applications that produce insights from these high volumes of data is very challenging and requires an understanding of multiple protocols, platforms, and other components. In this session, we will start by providing a quick introduction to IoT and some of the common analytic patterns used in IoT, and also touch on the MQTT protocol, how it is used by IoT solutions, and some of the quality-of-service tradeoffs to be considered when building an IoT application. We will also discuss some of the Apache Spark platform components, particularly the ones utilized by IoT applications to process streaming device data.
We will also talk about Apache Bahir and some of its IoT connectors available for the Apache Spark platform. We will also go over the details on how to build, test and deploy an IoT application for Apache Spark using the MQTT data source for the new Apache Spark Structure Streaming functionality.
Getting insights from IoT data with Apache Spark and Apache Bahir - Luciano Resende
The Internet of Things (IoT) is all about connected devices that produce and exchange data, and producing insights from these high volumes of data is challenging. In this session, we will start by providing a quick introduction to the MQTT protocol, and then focus on using AI and machine learning techniques to provide insights from data collected from IoT devices. We will present some common AI concepts and techniques used by the industry to deploy state-of-the-art smart IoT systems. These techniques allow systems to determine patterns in the data, predict and prevent failures, and suggest actions that can be used to minimize or avoid IoT device breakdowns in an intelligent way, beyond rule-based and database-search approaches. We will finish with a demo that puts together all the techniques discussed in an application that uses Apache Spark and Apache Bahir support for MQTT.
This presentation describes some of the Open Source Ai projects we are working at the Center for Open Source, Data and AI Technologies (CODAIT), including Model Asset Exchange (MAX), Fabric for Deep Learning (FfDL) and Jupyter Enterprise Gateway.
Building analytical microservices powered by Jupyter kernels - Luciano Resende
This document discusses building analytical microservices powered by Jupyter kernels. It provides an overview of Jupyter notebooks and their architecture. It then introduces the Jupyter Enterprise Gateway, which allows running Jupyter kernels on a distributed cluster for improved scalability and security. Finally, it demonstrates a use case of a sentiment analysis microservice that leverages PySpark on a Hadoop cluster via Jupyter kernels.
Building IoT applications with Apache Spark and Apache Bahir - Luciano Resende
We live in a connected world where connected devices are becoming part of our day-to-day lives and are providing invaluable streams of data. In this talk, we will introduce you to Apache Bahir and some of its IoT connectors available for Apache Spark. We will also go over the details of how to build, test, and deploy an IoT application for Apache Spark using the MQTT data source for the new Apache Spark Structured Streaming functionality.
An Enterprise Analytics Platform with Jupyter Notebooks and Apache Spark - Luciano Resende
IBM has built a “Data Science Experience” cloud service that exposes Notebook services at web scale. Behind this service, there are various components that power this platform, including Jupyter Notebooks, an enterprise gateway that manages the execution of the Jupyter Kernels and an Apache Spark cluster that power the computation. In this session we will describe our experience and best practices putting together this analytical platform as a service based on Jupyter Notebooks and Apache Spark, in particular how we built the Enterprise Gateway that enables all the Notebooks to share the Spark cluster computational resources.
The Analytic Platform behind IBM's Watson Data Platform - Big Data Spain 2017 - Luciano Resende
IBM has built a “Data Science Experience” cloud service that exposes Notebook services at web scale. Behind this service, there are various components that power this platform, including Jupyter Notebooks, an enterprise gateway that manages the execution of the Jupyter Kernels and an Apache Spark cluster that power the computation. In this session we will describe our experience and best practices putting together this analytical platform as a service based on Jupyter Notebooks and Apache Spark, in particular how we built the Enterprise Gateway that enables all the Notebooks to share the Spark cluster computational resources.
What's new in Apache SystemML - Declarative Machine Learning - Luciano Resende
SystemML was designed with the main goal of lowering the complexity required to maintain and scale machine learning algorithms. It provides a declarative machine learning language (DML) that simplifies the specification of machine learning algorithms using R-like and Python-like syntax, which significantly increases the productivity of data scientists, as it provides flexibility in how custom analytics are expressed as well as data independence from the underlying input formats and physical data representations.
This presentation gives a quick introduction to Apache SystemML, provides an update on the areas recently being developed by the project community, and goes over a tutorial that enables one to quickly get up to speed with SystemML.
Big analytics meetup - Extended Jupyter Kernel Gateway - Luciano Resende
Luciano Resende from IBM's Spark Technology Center presented on building an enterprise/cloud analytics platform with Jupyter Notebooks and Apache Spark. The Spark Technology Center focuses on contributions to open source Apache Spark projects. Resende discussed limitations of the current Jupyter Notebook setup for multi-user shared clusters and demonstrated an Extended Jupyter Kernel Gateway that allows running kernels remotely in a cluster with enhanced security, resource optimization, and multi-user support through user impersonation. The Extended Jupyter Kernel Gateway is planned for open source release.
JupyterCon meetup - Extended Jupyter Kernel Gateway - Luciano Resende
Data scientists are becoming a necessity of every company in today's data-centric world, and with them comes the requirement to make available an elastic and interactive analytics platform. This session will describe our experience and best practices putting together an analytical platform based on the Jupyter stack and different kernels running in a distributed Apache Spark cluster.
Writing Apache Spark and Apache Flink Applications Using Apache Bahir - Luciano Resende
Big Data is all about being to access and process data in various formats, and from various sources. Apache Bahir provides extensions to distributed analytic platforms providing them access to different data sources. In this talk we will introduce you to Apache Bahir and its various connectors that are available for Apache Spark and Apache Flink. We will also go over the details of how to build, test and deploy an Spark Application using the MQTT data source for the new Apache Spark 2.0 Structure Streaming functionality.
HCL Nomad Web - Best Practices and Management of Multi-User Environments - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web is celebrated as the next generation of the HCL Notes client and offers numerous advantages, such as eliminating the need for packaging, distribution, and installation. Nomad Web client updates are installed "automatically" in the background, which considerably reduces the administrative effort compared to traditional HCL Notes clients. However, troubleshooting in Nomad Web poses unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how the troubleshooting process in HCL Nomad Web can be simplified to ensure a smooth and efficient user experience.
In this webinar, we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including:
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder in the browser cache (using OPFS)
- Understanding the differences between single-user and multi-user scenarios
- Using the client clocking feature
Semantic Cultivators: The Critical Future Role to Enable AIartmondano
By 2026, AI agents will consume 10x more enterprise data than humans, but with none of the contextual understanding that prevents catastrophic misinterpretations.
UiPath Community Berlin: Orchestrator API, Swagger, and Test Manager APIUiPathCommunity
Join this UiPath Community Berlin meetup to explore the Orchestrator API, Swagger interface, and the Test Manager API. Learn how to leverage these tools to streamline automation, enhance testing, and integrate more efficiently with UiPath. Perfect for developers, testers, and automation enthusiasts!
📕 Agenda
Welcome & Introductions
Orchestrator API Overview
Exploring the Swagger Interface
Test Manager API Highlights
Streamlining Automation & Testing with APIs (Demo)
Q&A and Open Discussion
👉 Join our UiPath Community Berlin chapter: https://community.uipath.com/berlin/
This session streamed live on April 29, 2025, 18:00 CET.
Check out all our upcoming UiPath Community sessions at https://community.uipath.com/events/.
This is the keynote of the Into the Box conference, highlighting the release of the BoxLang JVM language, its key enhancements, and its vision for the future.
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
Mobile App Development Company in Saudi ArabiaSteve Jonas
EmizenTech is a globally recognized software development company, proudly serving businesses since 2013. With 11+ years of industry experience and a team of 200+ skilled professionals, we have successfully delivered 1200+ projects across various sectors. As a leading Mobile App Development Company in Saudi Arabia, we offer end-to-end solutions for iOS, Android, and cross-platform applications. Our apps are known for their user-friendly interfaces, scalability, high performance, and strong security features. We tailor each mobile application to meet the unique needs of different industries, ensuring a seamless user experience. EmizenTech is committed to turning your vision into a powerful digital product that drives growth, innovation, and long-term success in the competitive mobile landscape of Saudi Arabia.
Enhancing ICU Intelligence: How Our Functional Testing Enabled a Healthcare I...Impelsys Inc.
Impelsys provided a robust testing solution, leveraging a risk-based and requirement-mapped approach to validate ICU Connect and CritiXpert. A well-defined test suite was developed to assess data communication, clinical data collection, transformation, and visualization across integrated devices.
DevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptxJustin Reock
Building 10x Organizations with Modern Productivity Metrics
10x developers may be a myth, but 10x organizations are very real, as proven by the influential study performed in the 1980s, ‘The Coding War Games.’
Right now, here in early 2025, we seem to be experiencing YAPP (Yet Another Productivity Philosophy), and that philosophy is converging on developer experience. It seems that with every new method we invent for the delivery of products, whether physical or virtual, we reinvent productivity philosophies to go alongside them.
But which of these approaches actually work? DORA? SPACE? DevEx? What should we invest in and create urgency behind today, so that we don’t find ourselves having the same discussion again in a decade?
HCL Nomad Web – Best Practices and Managing Multiuser Environmentspanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-and-managing-multiuser-environments/
HCL Nomad Web is heralded as the next generation of the HCL Notes client, offering numerous advantages such as eliminating the need for packaging, distribution, and installation. Nomad Web client upgrades will be installed “automatically” in the background. This significantly reduces the administrative footprint compared to traditional HCL Notes clients. However, troubleshooting in Nomad Web presents unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how to simplify the troubleshooting process in HCL Nomad Web, ensuring a smoother and more efficient user experience.
In this webinar, we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including:
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder within the browser's cache (using OPFS)
- Understanding the differences between single- and multi-user scenarios
- Utilizing the Client Clocking feature
Technology Trends in 2025: AI and Big Data AnalyticsInData Labs
At InData Labs, we have been keeping an ear to the ground, looking out for AI-enabled digital transformation trends coming our way in 2025. Our report will provide a look into the technology landscape of the future, including:
- Artificial Intelligence Market Overview
- Strategies for AI Adoption in 2025
- Anticipated drivers of AI adoption and transformative technologies
- Benefits of AI and Big Data for your business
- Tips on how to prepare your business for innovation
- AI and data privacy: strategies for securing data privacy in AI models, etc.
Download your free copy now and implement the key findings to improve your business.
TrsLabs - Fintech Product & Business ConsultingTrs Labs
Hybrid Growth Mandate Model with TrsLabs
Strategic investments, inorganic growth, and business model pivoting are critical activities that businesses don't undertake every day. In cases like these, it may benefit your business to engage a temporary external consultant.
An unbiased plan, driven by clear-cut deliverables and market dynamics and free from the influence of your internal office politics, empowers business leaders to make the right choices.
Getting things done within budget and within a timeframe is key to growing a business, no matter whether you are a start-up or a big company.
Talk to us & Unlock the competitive advantage
AI Changes Everything – Talk at Cardiff Metropolitan University, 29th April 2...Alan Dix
Talk at the final event of Data Fusion Dynamics: A Collaborative UK-Saudi Initiative in Cybersecurity and Artificial Intelligence funded by the British Council UK-Saudi Challenge Fund 2024, Cardiff Metropolitan University, 29th April 2025
https://alandix.com/academic/talks/CMet2025-AI-Changes-Everything/
Is AI just another technology, or does it fundamentally change the way we live and think?
Every technology has a direct impact with micro-ethical consequences, some good, some bad. However more profound are the ways in which some technologies reshape the very fabric of society with macro-ethical impacts. The invention of the stirrup revolutionised mounted combat, but as a side effect gave rise to the feudal system, which still shapes politics today. The internal combustion engine offers personal freedom and creates pollution, but has also transformed the nature of urban planning and international trade. When we look at AI the micro-ethical issues, such as bias, are most obvious, but the macro-ethical challenges may be greater.
At a micro-ethical level AI has the potential to deepen social, ethnic and gender bias, issues I have warned about since the early 1990s! It is also being used increasingly on the battlefield. However, it also offers amazing opportunities in health and education, as the recent Nobel prizes for the developers of AlphaFold illustrate. More radically, the need to encode ethics acts as a mirror to surface essential ethical problems and conflicts.
At the macro-ethical level, by the early 2000s digital technology had already begun to undermine sovereignty (e.g. gambling), market economics (through network effects and emergent monopolies), and the very meaning of money. Modern AI is the child of big data, big computation and ultimately big business, intensifying the inherent tendency of digital technology to concentrate power. AI is already unravelling the fundamentals of the social, political and economic world around us, but this is a world that needs radical reimagining to overcome the global environmental and human challenges that confront us. Our challenge is whether to let the threads fall as they may, or to use them to weave a better future.
Book industry standards are evolving rapidly. In the first part of this session, we’ll share an overview of key developments from 2024 and the early months of 2025. Then, BookNet’s resident standards expert, Tom Richardson, and CEO, Lauren Stewart, have a forward-looking conversation about what’s next.
Link to recording, presentation slides, and accompanying resource: https://bnctechforum.ca/sessions/standardsgoals-for-2025-standards-certification-roundup/
Presented by BookNet Canada on May 6, 2025 with support from the Department of Canadian Heritage.
Generative Artificial Intelligence (GenAI) in BusinessDr. Tathagat Varma
My talk for the Indian School of Business (ISB) Emerging Leaders Program Cohort 9. In this talk, I discussed key issues around the adoption of GenAI in business: benefits, opportunities and limitations. I also discussed how my research on the Theory of Cognitive Chasms helps address some of these issues.
9. Git
• Distributed source control repository
• Regular project structure
  - Trunk – current development (in Git, typically the master branch)
  - Branches – parallel development
  - Tags – snapshots (e.g. release tags)
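For example, you can inspect this structure in any existing clone (output varies per repository):

  git branch -a   # list local and remote branches (parallel development lines)
  git tag         # list tags (snapshots such as releases)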
10. Git useful commands
• Checkout source code
  - git clone http://www.company.com/repo/project/trunk
  - E.g.: git clone git://git.apache.org/wink.git
• Update current checkout
  - git pull --rebase
• Check status of current checkout
  - git status
• Check differences
  - git diff
  - git diff > changes.patch (create a patch file from the current diff)
• Rollback changes
  - git reset --soft / --hard
  - git checkout file
• Add new file
  - git add file/directory
• Commit files
  - git add file
  - git commit -a -m "Describe the contents of your commit"
  - git push
  - git svn dcommit (for the git-svn bridge)
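A minimal end-to-end session tying these commands together (the edited file name is illustrative):

  git clone git://git.apache.org/wink.git
  cd wink
  git status                        # confirm a clean checkout
  # ...edit README.txt...
  git diff                          # review the change
  git diff > changes.patch          # optionally capture it as a patch
  git add README.txt
  git commit -m "Fix typo in README"
  git push                          # requires write access to the remote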
11. Git branch workflow
• Clone the repository for the first time
  - git clone git://git.apache.org/wink.git
• Update the repository
  - git pull --rebase
  - git svn rebase (for the git-svn bridge)
• Workflow
  - git checkout -b JIRA-101 (create a new branch)
  - git commit -a -m "JIRA-101 – My updates" (commit all changed files)
  - git commit -a --amend (fold new changes into the previous commit)
  - git rebase master (get changes from master)
  - git push (push changes to the remote repo)
  - git svn dcommit (push changes in case of the git-svn bridge)
  - git checkout master (go back to the master branch)
• Updating a branch with changes from master
  - git rebase master (if the rebase hits conflicts, see the sketch below)
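One step the slide glosses over: if git rebase master stops on a conflict, resolve it and continue (a minimal sketch; the conflicting file name is illustrative):

  git rebase master
  # CONFLICT reported in SomeFile.java - edit the file to resolve it
  git add SomeFile.java
  git rebase --continue    # or: git rebase --abort to undo the rebase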
12. Git hands-on
• Find an ASF project using Git
• Play with the basic Git commands
14. What is Maven
• A Java project management and build tool
• Based on the concept of an XML Project Object Model (POM)
• Favors convention over configuration
  - Project structure
  - Repository layout
  - Etc.
• Modular build platform
  - Extensible via plugins (a.k.a. MOJOs)
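A minimal POM illustrates the convention-over-configuration idea: only the coordinates below are required, while the source layout, repository layout and build lifecycle all come from Maven's defaults (the groupId/artifactId are illustrative):

  <project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>sample-app</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
  </project>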
19. Maven Project Configuration
• Configuration is entered in XML format in a Project Object Model, or POM
• Projects are structured in a hierarchy and project configuration is inherited by sub-projects
  - If no parent is specified, the project implicitly inherits from Maven's built-in "Super POM"
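A sketch of how a sub-project declares its parent and inherits configuration from it (the coordinates are illustrative):

  <!-- child module's pom.xml -->
  <project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <parent>
      <groupId>com.example</groupId>
      <artifactId>parent-project</artifactId>
      <version>1.0-SNAPSHOT</version>
    </parent>
    <artifactId>child-module</artifactId>
    <!-- groupId and version are inherited from the parent POM -->
  </project>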
21. Maven useful commands
• Build
  - mvn clean install
• Build in offline mode
  - mvn clean install -o
• Build and force updating/downloading dependencies
  - mvn -U clean install
• Build without executing unit tests
  - mvn clean install -Dmaven.test.skip=true
• Build and report errors only at the end (fail at end)
  - mvn -fae clean install
• Build and never fail on errors (fail never)
  - mvn -fn clean install
• Execute only one test case
  - mvn test -Dtest=ComponentServiceReferenceTestCase
Note: clean is optional, but omitting it might cause unexpected issues.
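These flags can be combined; for example, a hypothetical offline build that skips unit tests and only reports failures at the end:

  mvn -o -fae clean install -Dmaven.test.skip=true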
22. Maven useful commands
• Put Maven in debug mode
  - mvnDebug
• Identify third-party dependencies with the Maven dependency plugin
  - mvn dependency:analyze
  - mvn dependency:copy-dependencies
  - mvn dependency:tree
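As an example, mvn dependency:tree prints the resolved dependency graph; for a small project with a single test dependency the output looks roughly like this (illustrative):

  [INFO] com.example:sample-app:jar:1.0-SNAPSHOT
  [INFO] \- junit:junit:jar:4.12:test
  [INFO]    \- org.hamcrest:hamcrest-core:jar:1.3:test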
23. Maven with Eclipse
• Maven Eclipse plugins
  - The goal of the Eclipse m2eclipse project is to provide Apache Maven support in the Eclipse IDE, making it easier to edit Maven's pom.xml, run a build from the IDE and much more. For Java developers, the very tight integration with JDT greatly simplifies the consumption of Java artifacts.
  - http://download.eclipse.org/technology/m2e/releases
25. GitHub Project
• Create a GitHub Java project
  - Create a new repository
  - Create a Maven Java project (one way: the quickstart archetype shown below)
  - Add some business logic
  - Add tests to verify your business logic
  - Build
    • With tests
    • Without tests
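A sketch of bootstrapping the Maven Java project with the quickstart archetype and building it both ways (the coordinates are illustrative):

  mvn archetype:generate -DgroupId=com.example -DartifactId=sample-app \
      -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
  cd sample-app
  mvn clean install                          # build with tests
  mvn clean install -Dmaven.test.skip=true   # build without tests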
26. GitHub collaboration
• Provide a patch for your colleague's project (see the command sketch below)
  - Make a simple change to your colleague's project
  - Submit a pull request
  - Add a further change (simulating an update based on feedback)
  - Submit a pull request
• The owner should accept the pull request and merge the changes
  - If you can't find an owner, I will create a simple project and you can use that
• My GitHub account: lresende
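A minimal sketch of that flow from the command line, assuming you have forked your colleague's repository (the URL and branch name are illustrative); the pull request itself is opened in the GitHub web UI:

  git clone https://github.com/<you>/colleague-project.git
  cd colleague-project
  git checkout -b small-fix                 # branch for the change
  # ...edit files...
  git commit -a -m "Small fix"
  git push origin small-fix                 # then open a pull request on GitHub
  # after feedback: edit again, commit, and push - the open pull request
  # updates automatically with the new commits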