Enterprise applications written in languages such as COBOL are typically large and increasingly difficult to maintain. Yet they represent a significant investment, as they contain core business rules that deliver key enterprise functions. Today, the demand for mobile platforms, faster business change delivery cycles, and ongoing cost pressures are driving enterprises to consider options for modernizing these applications. CA Gen is a proven, model-based application development environment encapsulating multi-platform delivery capabilities. From mainframe to mobile, and all platforms in between, CA Gen delivers multi-channel capability from a single code base, consolidating development skill sets and delivering productivity above and beyond traditional 3GL development. Come and hear about new capabilities for modernizing your COBOL applications via automated migration of your business logic and processes into CA Gen.
For more information, please visit http://cainc.to/Nv2VOe
Session Description:
An early overview of the new and exciting features and improvements planned for the next major LTS release of CloudStack, 4.19. Abhishek Kumar, who will be acting as the release manager for CloudStack 4.19, gives a quick recap of the major additions in the previous LTS release, 4.18.0, discusses the timeline for the 4.19.0 release, and talks about the planned and expected new features in the upcoming release.
Speaker Bio:
Abhishek is a committer on the Apache CloudStack project and has worked on notable features such as VM ingestion, the CloudStack Kubernetes Service, and IPv6 support. He works as a Software Engineer at ShapeBlue.
---------------------------------------------
On Friday 18th August, the Apache CloudStack India User Group 2023 took place in Bangalore, bringing together CloudStack enthusiasts, experts, and industry leaders from across the country to discuss the open-source project. The meetup served as a vibrant platform to delve into the depths of Apache CloudStack, share insights, and forge new connections.
Mule access management - Managing Environments and Permissions – Shanky Gupta
The Anypoint Platform allows you to create and manage separate environments for deploying, which are independent of each other. This presentation also explains how permissions work across different products and APIs managed from the Anypoint Platform.
DevOps and APIs: Great Alone, Better Together – MuleSoft
DevOps has emerged as a critical enabler of agility in enterprise IT; a DevOps model increases reliability and minimizes disruption, with the added benefit of increasing speed. But that isn’t enough. DevOps must be balanced with a focus on asset consumption and reuse to make sure the organization is extracting maximum value out of all the newly built assets. And that’s where an API strategy comes in. In this session, we'll discuss how organizations use DevOps and API-led connectivity to reduce time to market 3-4x.
What is Jenkins | Jenkins Tutorial for Beginners | Edureka – Edureka!
****** DevOps Training : https://www.edureka.co/devops ******
This DevOps Jenkins tutorial on what is Jenkins (Jenkins Tutorial Blog Series: https://goo.gl/JebmnW) will help you understand what Continuous Integration is and why it was introduced. The tutorial also explains in detail how Jenkins achieves Continuous Integration and includes a hands-on session, by the end of which you will learn how to compile code that is present in GitHub, review that code, and analyse the test cases present in the GitHub repository. The hands-on session also explains how to create a build pipeline using Jenkins and how to add Jenkins slaves.
The hands-on session is performed on a 64-bit Ubuntu machine on which Jenkins is installed.
To learn how Jenkins can be used to integrate multiple DevOps tools, watch the video titled 'DevOps Tools', by clicking this link: https://goo.gl/up9iwd
Check our complete DevOps playlist here: http://goo.gl/O2vo13
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Introduction to IBM Cloud Paks: concept, license and minimum config (public) – Petchpaitoon Krungwong
- IBM Cloud Paks license pricing is based on VPC (Virtual Processor Core) or MVS (Managed Virtual Server) units. The number required depends on the technology, processors, and number of virtual cores/partitions used.
- Sample minimum configurations are provided for IBM Cloud Pak for Applications, Integration, and Multicloud Management. These include the required node types, operating systems, number of VMs, CPU, memory, and storage needed.
- Trade-up licenses allow customers to transition existing software support licenses to IBM Cloud Paks licenses, providing flexibility to use the licenses on-premises or in cloud environments.
From teams struggling with DevOps to experienced professionals trying to make a shift to DevOps, this presentation helps in understanding how DevOps makes deliveries faster and more accurate.
DevOps - an Agile Perspective (at Scale) – Brad Appleton
by Brad Appleton, Agile Day Chicago 2018, October 26 2018;
This presentation gives a comprehensive introduction to DevOps, for Agile development practitioners. In 2018, there are many misunderstandings about Agile & DevOps and how they relate to one another. Too many think of Agile (development) as primarily "Scrum", and that DevOps is Continuous Integration & Delivery (both of which are wrong). This presentation describes the meaning, origin & history of DevOps from an Agile development perspective.
Red Hat Ansible Automation technical deck – Juraj Hantak
This deck can be used to deliver a high-level introduction to Red Hat Ansible Automation. It contains speaker notes and can be used to start discussions with customers. It provides a technical overview but not a deep dive. Follow-on discussions would leverage Red Hat Ansible Automation technical materials.
Easy Setup for Parallel Test Execution with Selenium Docker – Sargis Sargsyan
Parallel execution of test cases is one of the important requirements of a modern test automation framework.
Generally, to run a Selenium suite in parallel, we use Selenium Grid to distribute tests across multiple machines, which reduces the time required for running them. To run tests in parallel, we need to configure Selenium Grid with a hub and nodes, where the hub is the central point that receives test requests along with configurations or capabilities. Based on the request received, the hub distributes tests to the registered nodes.
Selenium provides a set of Docker images which are available on Docker Hub: Selenium Grid and the browser images, Chrome and Firefox. There are also debug versions of the images which let us view the test execution.
In this session, we will go through the Selenium tests parallel run setup and configuration.
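As a rough sketch of such a setup, the current Selenium 4 images from Docker Hub can be wired together on a shared network along these lines (image tags and the network name are illustrative, not taken from the talk):

    docker network create grid

    # the hub receives test requests with their desired capabilities
    docker run -d --name selenium-hub --net grid -p 4444:4444 selenium/hub:4.18.0

    # browser nodes register themselves with the hub over the event bus
    docker run -d --net grid --shm-size=2g \
      -e SE_EVENT_BUS_HOST=selenium-hub \
      -e SE_EVENT_BUS_PUBLISH_PORT=4442 \
      -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 \
      selenium/node-chrome:4.18.0
    docker run -d --net grid --shm-size=2g \
      -e SE_EVENT_BUS_HOST=selenium-hub \
      -e SE_EVENT_BUS_PUBLISH_PORT=4442 \
      -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 \
      selenium/node-firefox:4.18.0

Tests then point their RemoteWebDriver at http://localhost:4444, and the hub spreads them across the registered nodes.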
webMethods 10.5 & webMethods.io integration are the new avenues enterprises must seek to scale their integration topologies. Scroll through our PPT to see what's new and how your business can leverage it.
Podman is an open source tool for managing OCI containers and container images. It allows users to find, run, build, share and deploy applications using containers. Some key points about Podman include:
- It is daemonless, secure, and designed for Linux containers.
- Podman manages the entire container lifecycle from creation to deletion. It handles mounting, networking, and the container runtime.
- When running a container, Podman generates an OCI specification, pulls the image if needed, configures networking using Netavark, and uses Conmon to monitor the container process.
- Podman 4 introduced a new network stack based on Netavark and Aardvark-dns
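For readers new to the tool, day-to-day use mirrors the Docker CLI; a minimal sketch (image and container names are just examples):

    podman pull docker.io/library/nginx:alpine        # fetch an image
    podman run -d --name web -p 8080:80 nginx:alpine  # run it, rootless by default
    podman ps                                         # list running containers
    podman logs web                                   # inspect output
    podman stop web && podman rm web                  # full lifecycle, no daemon involved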
Exploring Cloud Computing with Amazon Web Services (AWS) – Kalema Edgar
In this presentation, I shared:
1. The business value of AWS
2. How businesses can embrace cloud computing
3. What strategies can be used to migrate to the cloud
4. Technical overview of AWS services and how they can be used
Continuous integration involves developers committing code changes daily, which are then automatically built and tested. Continuous delivery takes this further by automatically deploying code changes that pass testing to production environments. The document outlines how Jenkins can be used to implement continuous integration and continuous delivery by automating builds, testing, and deployments, keeping the process fast and repeatable and ensuring quality.
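A minimal declarative Jenkinsfile illustrates the pattern the document describes; the Maven commands and the deploy script are placeholders for your own build steps:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { sh 'mvn -B clean package' }   // runs on every commit
            }
            stage('Test') {
                steps { sh 'mvn -B test' }
                post { always { junit 'target/surefire-reports/*.xml' } }
            }
            stage('Deploy') {
                when { branch 'main' }                // only promote tested changes
                steps { sh './deploy.sh production' } // hypothetical deploy script
            }
        }
    }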
Getting Started with Google's Infrastructure is summarized as follows:
1. Google Cloud Platform provides infrastructure services including virtual machines, networking, and storage hosted on Google's global network of data centers.
2. Google Compute Engine is an infrastructure as a service offering that allows users to launch and manage virtual machine instances.
3. The document provides an overview of Google Compute Engine including machine types, regions, persistent disks, load balancing, and pricing models.
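As a flavour of the service, launching and reaching an instance from the gcloud CLI takes two commands; the machine type, zone and image family below are arbitrary examples:

    gcloud compute instances create demo-vm \
        --machine-type=e2-medium \
        --zone=us-central1-a \
        --image-family=debian-12 \
        --image-project=debian-cloud

    gcloud compute ssh demo-vm --zone=us-central1-a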
What is a platform? What is the role of engineers? How do you improve time-to-market and reduce total cost of ownership by moving from a project to a product mindset?
Those are just some of the questions that platform engineers are answering every day. This is a draft of my next presentation about platforms and software engineering.
Rajnish Kumar presented on Mulesoft and the need for a new delivery model called a Center of Excellence (C4E). The key responsibilities of a C4E include platform enablement, platform architecture, support, deployment and management, API strategy, API best practices, and delivery acceleration. Rajnish discussed Mulesoft's Anypoint Platform which enables digital transformation across customer experience, partner experience, employee experience, new products and services, and operational efficiency. He provided a success story and links to additional resources.
This document discusses DevOps and the movement towards closer collaboration between development and operations teams. It advocates that operations work should start early in the development process, with developers and operations communicating about non-functional requirements, security, backups, monitoring and more. Both developers and operations staff should aim to automate infrastructure and deployments. The goal is reproducible, reliable deployments of applications and their supporting systems.
Productionizing Machine Learning with a Microservices Architecture – Databricks
Deploying machine learning models from training to production requires companies to deal with the complexity of moving workloads through different pipelines and rewriting code from scratch.
Building Kubernetes images at scale with Tanzu Build Service – VMware Tanzu
Building a secure software supply chain
Leveraging Tanzu Build Service
How Build Service fits in the Tanzu portfolio
Modernize your applications
Live demos
Look ma: no Dockerfile!
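The "no Dockerfile" point refers to buildpack-based builds. Assuming a configured Build Service installation, declaring an image from source with the kp CLI looks roughly like this (registry and repository names are placeholders):

    # Build Service assembles the image with buildpacks, no Dockerfile needed,
    # and rebuilds it automatically on new commits or base-image updates.
    kp image create demo-app \
      --tag registry.example.com/team/demo-app \
      --git https://github.com/example/demo-app.git \
      --git-revision main

    kp build list demo-app   # inspect the resulting builds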
Kubeflow is an open-source project that makes deploying machine learning workflows on Kubernetes simple and scalable. It provides components for machine learning tasks like notebooks, model training, serving, and pipelines. Kubeflow started as a Google side project but is now used by many companies like Spotify, Cisco, and Itaú for machine learning operations. It allows running workflows defined in notebooks or pipelines as Kubernetes jobs and serves models for production.
Managing Infrastructure as a Product - Introduction to Platform Engineering – Adityo Pratomo
This is an introduction to platform engineering, the bridge that truly fulfills the potential of DevOps inside a mid-to-large-scale organization. Sure, it's all the rage these days, but I'd argue that to fully develop a platform, a product-thinking mindset is also required.
This talk was presented at Kubernetes Day Indonesia 2022
Take a load off! Load testing your Oracle APEX or JDeveloper web applications – Sage Computing Services
Geeeez, after demanding you unit test, system test, black box test, white box test, test-test-test everything, your manager is now demanding you load test your brand spanking new Oracle web application. How on earth can you do this?
This technical presentation will explain the concepts behind preparing for load testing and the HTTP protocol's request/response model, and will include live demonstrations using Oracle's HTTP Analyzer and Apache's JMeter to stress test your Oracle web application.
The presentation is suitable for anybody, be it DBAs or developers, who is concerned about the performance of any web-based application, possibly an APEX, JDeveloper or 3rd-party web application. Knowledge of APEX or JDeveloper is not mandatory for this presentation, and they will not be covered in any depth.
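For orientation, a typical JMeter flow is to design the test plan in the GUI and then run it headless; the file names and property values here are examples only:

    # -n: non-GUI mode, -t: test plan, -l: results log, -e -o: generate an HTML report
    jmeter -n -t apex_load_test.jmx \
           -l results.jtl \
           -Jusers=50 -Jrampup=60 \
           -e -o report/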
Meetup - Automate your project lifecycle using MuleSoft and Azure DevOps – Renato de Oliveira
This document discusses how to automate the project lifecycle for MuleSoft applications using MuleSoft and Azure DevOps. It covers setting up continuous integration (CI) and continuous delivery (CD) pipelines for building, testing, and deploying MuleSoft applications to different environments. The document provides an overview of the tools and processes used, including configuring notifications, auditing deployment logs, and securely managing application properties and secrets.
DevOps Best Practices with Openshift - DevOpsFusion 2020 – Andreas Landerer
This document discusses DevOps best practices using OpenShift. It describes setting up a CI/CD pipeline with Jenkins on OpenShift to build and deploy a sample application. The pipeline builds a Docker image using OpenShift build configs and deploys the application. It also discusses logging, metrics, distributed tracing and avoiding emulating others' practices without considering your own needs.
This session will cover the development & deployment of containerized ASP.NET Core 6 apps using Docker and Azure and architectural design & implementation approaches using .NET and Docker containers. The different services to deploy on Azure like Azure Container Registry, Azure Container instance, Azure Container Apps, and Azure Kubernetes Services as an orchestrator will be reviewed. We will also create the different resources and explore the different tools and properties if attendees prefer not to use Docker-Compose.yml. Then we will deploy our application that's based on Docker images using Azure App Service. And finally, we will configure continuous deployment for our web app with a webhook that monitors changes to the Docker image.
https://conferences.techwell.com/archives/agiledevopswest-2023/program/concurrent-sessions/build-containerized-applications-using-docker-and-azure-agile-devops-west-2023.html
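A minimal sketch of the kind of multi-stage Dockerfile such a session typically builds on (project and assembly names are placeholders):

    # Build with the full SDK image, run on the slim ASP.NET runtime image
    FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
    WORKDIR /src
    COPY . .
    RUN dotnet publish -c Release -o /app/publish

    FROM mcr.microsoft.com/dotnet/aspnet:6.0
    WORKDIR /app
    COPY --from=build /app/publish .
    EXPOSE 80
    ENTRYPOINT ["dotnet", "MyWebApp.dll"]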
Rome .NET Conference is a free online event organized by the DotNetCode Community for developers. The main topic this year is .NET 8, but many other topics on Microsoft development technologies and products (.NET, ASP.NET, Azure, DevOps, and more...) are also covered.
Link to the session: https://www.youtube.com/watch?v=D5aJnBLf2pQ
Containers #101 Meetup: Building a micro-service using Node.js and Docker - P... – Codefresh
Recording and overview of the meetup posted here: https://codefresh.io/blog/building-a-microservice-using-docker-and-node-js-part-2/
Continuous Integration with Cloud Foundry Concourse and Docker on OpenPOWER – Indrajit Poddar
This document discusses continuous integration (CI) for open source software on OpenPOWER systems. It provides background on CI, OpenPOWER systems, and the Cloud Foundry platform. It then describes using the Concourse CI tool to continuously build a Concourse project from a GitHub repository. Key steps involve deploying OpenStack, setting up a Docker registry, installing BOSH and Concourse, defining a Concourse pipeline, and updating the pipeline to demonstrate the CI process in action. The document emphasizes the importance of CI for open source projects and how it benefits development on OpenPOWER systems.
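For readers unfamiliar with Concourse, a pipeline is plain YAML: resources that are watched, and jobs that react to them. A minimal sketch (repository URL and task file are hypothetical):

    resources:
      - name: source-code
        type: git
        source:
          uri: https://github.com/example/concourse-demo.git
          branch: master

    jobs:
      - name: build
        plan:
          - get: source-code
            trigger: true            # new commits trigger the job
          - task: compile
            file: source-code/ci/build-task.yml

    # upload with: fly -t target set-pipeline -p demo -c pipeline.yml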
Mihai Criveti - PyCon Ireland - Automate Everything – Mihai Criveti
PyCon Ireland - Python DevOps flows with Ansible, Packer & Kubernetes - Mihai Criveti
https://www.youtube.com/watch?v=lO884XAdddQ
1 Packer: Image Build Automation
2 OpenSCAP: Automate Security Baselines
3 Ansible: Provisioning and Configuration Management
4 Molecule: Test your Ansible Playbooks on Docker, Vagrant or Cloud
5 Vagrant: Test images with vagrant
6 Package Python Applications with setuptools
7 Kubernetes: Container Orchestration at Scale
8 DevOps Culture and Practice
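As a taste of item 3 above, a minimal idempotent Ansible playbook of the kind Molecule would then test (host group and package are examples):

    - hosts: webservers
      become: true
      tasks:
        - name: Install nginx
          ansible.builtin.package:
            name: nginx
            state: present
        - name: Ensure nginx is running and enabled
          ansible.builtin.service:
            name: nginx
            state: started
            enabled: true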
Containers #101 Meetup: Building a micro-service using Node.js and Docker - P... – Codefresh
This document summarizes a webinar about building microservices using Node.js and Docker. It discusses creating a base Docker image with Node.js, building a simple Express microservice, running the microservice in a Docker container, building a Docker image from the container, and publishing the image to Docker Hub. The webinar covers Docker terminology and demonstrates each step through code examples to help developers learn how to containerize Node.js applications.
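The usual shape of such a container is a small Dockerfile like the following sketch (entry point and port are placeholders, not the webinar's exact code):

    FROM node:18-alpine
    WORKDIR /usr/src/app
    COPY package*.json ./
    RUN npm ci --omit=dev      # install only production dependencies
    COPY . .
    EXPOSE 3000
    CMD ["node", "server.js"]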
3 years ago, Meetic chose to rebuild its backend architecture using microservices and an event-driven strategy. As we were moving away from our old legacy application, testing features gradually became a pain, especially when those features rely on multiple changes across multiple components. Whatever the number of applications you manage, unit testing is easy, as is functional testing on a microservice. A good Gherkin framework and a set of Docker containers can do the job. The real challenge lies in end-to-end testing, even more so when a feature can involve up to 60 different components.
To solve that issue, Meetic is building a Kubernetes strategy around testing. To do such a thing we need to:
- Be able to generate a Docker container for each pull request on any component of the stack
- Be able to create a full testing environment in the simplest way
- Be able to launch automated tests on this newly created environment
- Have a clean-up process to destroy testing environments after tests
To separate the various testing environments, we chose to use Kubernetes namespaces, each containing a variant of the Meetic stack. But when it comes to Kubernetes, managing multiple namespaces can be hard. YAML configuration files need to be shared in a way that lets each person or automated job access and modify them without impacting others.
This is typically why Meetic chose to develop its own tool to manage namespaces through a CLI tool, or a REST API on which we can plug a friendly UI.
In this talk we will tell you the story of our CI/CD evolution to satisfy the need to create a Docker container for each new pull request. And we will show you how to make end-to-end testing easier using Blackbeard, the tool we developed to manage namespaces, inspired by Helm.
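Blackbeard automates this, but the underlying primitive is plain namespace juggling, roughly (names are illustrative):

    kubectl create namespace pr-1234          # one environment per pull request
    kubectl apply -n pr-1234 -f k8s/          # deploy the stack variant under test
    kubectl get pods -n pr-1234               # wait for the environment to come up
    # ...run the end-to-end suite against the services in pr-1234...
    kubectl delete namespace pr-1234          # clean-up destroys everything at once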
Karthik Gaekwad presented on containers and microservices. He discussed the evolution of DevOps and how containers and microservices fit within the DevOps paradigm by allowing for collaboration between development and operations teams. He defined containers, microservices, and common containerization concepts. Gaekwad also provided examples of how organizations are using containers for standardization, continuous integration and delivery pipelines, and hosting legacy applications.
Priming Your Teams For Microservice Deployment to the Cloud – Matt Callanan
You think of a great idea for a microservice and want to ship it to production as quickly as possible. Of course you'll need to create a Git repo with a codebase that reuses libraries you share with other services. And you'll want a build and a basic test suite. You'll want to deploy it to immutable servers using infrastructure as code that dev and ops can maintain. Centralised logging, monitoring, and HipChat notifications would also be great. Of course you'll want a load balancer and a CNAME that your other microservices can hit. You'd love to have blue-green deploys and the ability to deploy updates at any time through a Continuous Delivery pipeline. Phew! How long will it take to set all this up? A couple of days? A week? A month?
What if you could do all of this within 30 minutes? And with a click of a button soon be receiving production traffic?
Matt introduces "Primer", Expedia's microservice generation and deployment platform that enables rapid experimentation in the cloud, how it's caused unprecedented rates of learning, and explain tips and tricks on how to build one yourself with practical takeaways for everyone from the startup to the enterprise.
Video: https://www.youtube.com/watch?v=Xy4EkaXyEs4
Meetup: http://www.meetup.com/Devops-Brisbane/events/225050723/
This document provides an introduction to Docker presented by Tibor Vass, a core maintainer on Docker Engine. It outlines challenges with traditional application deployment and argues that Docker addresses these by providing lightweight containers that package code and dependencies. The key Docker concepts of images, containers, builds and Compose are introduced. Images are read-only templates for containers which sandbox applications. Builds describe how to assemble images with Dockerfiles. Compose allows defining multi-container applications. The document concludes by describing how Docker improves the deployment workflow by allowing testing and deployment of the same images across environments.
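To make the Compose concept concrete, a multi-container application is declared in a single file and started with one command; a minimal sketch:

    # docker-compose.yml
    version: "3.8"
    services:
      web:
        build: .              # image assembled from the local Dockerfile
        ports:
          - "8000:8000"
        depends_on:
          - redis
      redis:
        image: redis:7-alpine

    # start everything: docker compose up --build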
Continuous Deployment with Amazon Web Services – Julien SIMON
This document summarizes a webinar about continuous deployment with Amazon Web Services. It defines concepts like continuous integration, continuous delivery, and DevOps. It then demonstrates how to set up continuous integration/continuous delivery pipelines on AWS using services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. The pipelines shown include building and deploying a C library and a Java web application. Potential issues that may occur with deployments are also discussed.
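As one concrete piece of such a pipeline, the CodeBuild stage is driven by a buildspec.yml checked into the repository; a minimal sketch (runtime and artifact paths are examples):

    version: 0.2
    phases:
      install:
        runtime-versions:
          java: corretto17
      build:
        commands:
          - mvn -B clean package
    artifacts:
      files:
        - target/*.war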
Containers, microservices and serverless for realists – Karthik Gaekwad
The document discusses containers, microservices, and serverless applications for developers. It provides an overview of these topics, including how containers and microservices fit into the DevOps paradigm and allow for better collaboration between development and operations teams. It also discusses trends in container usage and orchestration as well as differences between platforms as a service (PaaS) and serverless applications.
Tear It Down, Build It Back Up: Empowering Developers with Amazon CloudFormation – James Andrew Vaughn
As a product grows and the infrastructure becomes more complex, the Operations team traditionally shoulders the burden of maintaining this infrastructure while deploying code from Software Engineers. Code is sometimes given to Operations with little to no information regarding how it should run or what the criteria for successful deployment are. This is not due to a lack of caring; Software Engineers often lack the context themselves to provide production deployment instructions. To Software Engineers, production can be like a walled-off city, filled with pathways and rooms not to be explored, guarded by Operations.
This presentation aims to provide a solution to this problem. We will address how the traditional separation of Operations and Software Engineers slows innovation, and redefine their relationship -- blending responsibilities. We will examine the transition of two real teams, an Operations team and Engineering team, from complete isolation, to closer environments through virtual machines, to one cloud environment shared by all and managed with CloudFormation.
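The "tear it down, build it back up" idea rests on the whole environment being declared in a template that both teams can read and version. A minimal sketch (resource names are invented):

    AWSTemplateFormatVersion: "2010-09-09"
    Description: Minimal app environment shared by Dev and Ops
    Resources:
      AppBucket:
        Type: AWS::S3::Bucket
      AppSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Allow HTTP to the app
          SecurityGroupIngress:
            - IpProtocol: tcp
              FromPort: 80
              ToPort: 80
              CidrIp: 0.0.0.0/0

    # aws cloudformation create-stack --stack-name demo --template-body file://stack.yml
    # aws cloudformation delete-stack --stack-name demo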
Docker is a technology that uses lightweight containers to package applications and their dependencies in a standardized way. This allows applications to be easily deployed across different environments without changes to the installation procedure. Docker simplifies DevOps tasks by enabling a "build once, ship anywhere" model through standardized environments and images. Key benefits include faster deployments, increased utilization of resources, and easier integration with continuous delivery and cloud platforms.
A Hitchhiker's Guide to the Cloud Native Stack – QAware GmbH
Devoxx 2017, Poland: Talk by Mario-Leander Reimer (@LeanderReimer, Principal Software Architect at QAware).
Abstract: Cloud native applications are popular these days. They promise superior reliability and almost arbitrary scalability. They follow three key principles: they are built and composed as microservices. They are packaged and distributed in containers. The containers are executed dynamically in the cloud. But which technology is best to build this kind of application? This talk will be your guidebook.
In this hands-on session, we will briefly introduce the core concepts and some key technologies of the cloud native stack and then show how to build, package, compose and orchestrate a cloud native microservice application on top of a cluster operating system such as Kubernetes. To make this session even more entertaining we will be using off-the-shelf MIDI controllers to visualize the concepts and to remote control a Kubernetes cluster.
The document is a presentation on cloud native applications. It discusses key principles like building microservices, packaging in containers, and dynamic execution in the cloud. It also covers containerization, composition using tools like Docker Compose, and orchestration with Kubernetes. The presentation provides demonstrations of these concepts and recommends designing applications for principles like distribution, performance, automation, and delivery for cloud environments.
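The three principles condense into manifests like the following sketch of a Kubernetes Deployment, where scaling is just a field (names and image are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-service
    spec:
      replicas: 3                      # scale by changing a number
      selector:
        matchLabels:
          app: demo-service
      template:
        metadata:
          labels:
            app: demo-service
        spec:
          containers:
            - name: service
              image: registry.example.com/demo-service:1.0.0
              ports:
                - containerPort: 8080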
Complete Guide to Advanced Logistics Management Software in Riyadh.pdf – Software Company
Explore the benefits and features of advanced logistics management software for businesses in Riyadh. This guide delves into the latest technologies, from real-time tracking and route optimization to warehouse management and inventory control, helping businesses streamline their logistics operations and reduce costs. Learn how implementing the right software solution can enhance efficiency, improve customer satisfaction, and provide a competitive edge in the growing logistics sector of Riyadh.
HCL Nomad Web – Best Practices and Managing Multiuser Environments – panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-and-managing-multiuser-environments/
HCL Nomad Web is heralded as the next generation of the HCL Notes client, offering numerous advantages such as eliminating the need for packaging, distribution, and installation. Nomad Web client upgrades will be installed “automatically” in the background. This significantly reduces the administrative footprint compared to traditional HCL Notes clients. However, troubleshooting issues in Nomad Web present unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how to simplify the troubleshooting process in HCL Nomad Web, ensuring a smoother and more efficient user experience.
In this webinar, we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder within the browser’s cache (using OPFS)
- Understand the difference between single- and multi-user scenarios
- Utilizing Client Clocking
Linux Support for SMARC: How Toradex Empowers Embedded Developers – Toradex
Toradex brings robust Linux support to SMARC (Smart Mobility Architecture), ensuring high performance and long-term reliability for embedded applications. Here’s how:
• Optimized Torizon OS & Yocto Support – Toradex provides Torizon OS, a Debian-based easy-to-use platform, and Yocto BSPs for customized Linux images on SMARC modules.
• Seamless Integration with i.MX 8M Plus and i.MX 95 – Toradex SMARC solutions leverage NXP’s i.MX 8 M Plus and i.MX 95 SoCs, delivering power efficiency and AI-ready performance.
• Secure and Reliable – With Secure Boot, over-the-air (OTA) updates, and LTS kernel support, Toradex ensures industrial-grade security and longevity.
• Containerized Workflows for AI & IoT – Support for Docker, ROS, and real-time Linux enables scalable AI, ML, and IoT applications.
• Strong Ecosystem & Developer Support – Toradex offers comprehensive documentation, developer tools, and dedicated support, accelerating time-to-market.
With Toradex’s Linux support for SMARC, developers get a scalable, secure, and high-performance solution for industrial, medical, and AI-driven applications.
Do you have a specific project or application in mind where you're considering SMARC? We can help with a free compatibility check and help you get to market quickly.
For more information: https://www.toradex.com/computer-on-modules/smarc-arm-family
Quantum Computing Quick Research Guide by Arthur Morgan – Arthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
Andrew Marnell: Transforming Business Strategy Through Data-Driven Insights – Andrew Marnell
With expertise in data architecture, performance tracking, and revenue forecasting, Andrew Marnell plays a vital role in aligning business strategies with data insights. Andrew Marnell’s ability to lead cross-functional teams ensures businesses achieve sustainable growth and operational excellence.
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungen – panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web is heralded as the next generation of the HCL Notes client, offering numerous advantages such as eliminating the need for packaging, distribution, and installation. Nomad Web client updates are installed "automatically" in the background, which significantly reduces the administrative overhead compared to traditional HCL Notes clients. However, troubleshooting in Nomad Web presents unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how the troubleshooting process in HCL Nomad Web can be simplified to ensure a smooth and efficient user experience.
In this webinar, we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder within the browser's cache (using OPFS)
- Understanding the differences between single- and multi-user scenarios
- Using the Client Clocking feature
Dev Dives: Automate and orchestrate your processes with UiPath Maestro – UiPath Community
This session is designed to equip developers with the skills needed to build mission-critical, end-to-end processes that seamlessly orchestrate agents, people, and robots.
📕 Here's what you can expect:
- Modeling: Build end-to-end processes using BPMN.
- Implementing: Integrate agentic tasks, RPA, APIs, and advanced decisioning into processes.
- Operating: Control process instances with rewind, replay, pause, and stop functions.
- Monitoring: Use dashboards and embedded analytics for real-time insights into process instances.
This webinar is a must-attend for developers looking to enhance their agentic automation skills and orchestrate robust, mission-critical processes.
👨🏫 Speaker:
Andrei Vintila, Principal Product Manager @UiPath
This session streamed live on April 29, 2025, 16:00 CET.
Check out all our upcoming Dev Dives sessions at https://community.uipath.com/dev-dives-automation-developer-2025/.
Big Data Analytics Quick Research Guide by Arthur Morgan – Arthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
Increasing Retail Store Efficiency How can Planograms Save Time and Money.pptx – Anoop Ashok
In today's fast-paced retail environment, efficiency is key. Every minute counts, and every penny matters. One tool that can significantly boost your store's efficiency is a well-executed planogram. These visual merchandising blueprints not only enhance store layouts but also save time and money in the process.
Noah Loul Shares 5 Steps to Implement AI Agents for Maximum Business Efficien... – Noah Loul
Artificial intelligence is changing how businesses operate. Companies are using AI agents to automate tasks, reduce time spent on repetitive work, and focus more on high-value activities. Noah Loul, an AI strategist and entrepreneur, has helped dozens of companies streamline their operations using smart automation. He believes AI agents aren't just tools—they're workers that take on repeatable tasks so your human team can focus on what matters. If you want to reduce time waste and increase output, AI agents are the next move.
AI and Data Privacy in 2025: Global Trends – InData Labs
In this infographic, we explore how businesses can implement effective governance frameworks to address AI data privacy. Understanding it is crucial for developing effective strategies that ensure compliance, safeguard customer trust, and leverage AI responsibly. Equip yourself with insights that can drive informed decision-making and position your organization for success in the future of data privacy.
This infographic contains:
-AI and data privacy: Key findings
-Statistics on AI data privacy in today's world
-Tips on how to overcome data privacy challenges
-Benefits of AI data security investments.
Keep up-to-date on how AI is reshaping privacy standards and what this entails for both individuals and organizations.
The Evolution of Meme Coins A New Era for Digital Currency ppt.pdf – Abi john
Analyze the growth of meme coins from mere online jokes to potential assets in the digital economy. Explore the community, culture, and utility as they elevate themselves to a new era in cryptocurrency.
Procurement Insights Cost To Value Guide.pptx – Jon Hansen
Procurement Insights' integrated Historic Procurement Industry Archives serve as a powerful complement, not a competitor, to other procurement industry firms, filling critical gaps in depth, agility, and contextual insight that most traditional analyst and association models overlook.
Learn more about this value-driven proprietary service offering here.
Technology Trends in 2025: AI and Big Data Analytics – InData Labs
At InData Labs, we have been keeping an ear to the ground, looking out for AI-enabled digital transformation trends coming our way in 2025. Our report will provide a look into the technology landscape of the future, including:
-Artificial Intelligence Market Overview
-Strategies for AI Adoption in 2025
-Anticipated drivers of AI adoption and transformative technologies
-Benefits of AI and Big data for your business
-Tips on how to prepare your business for innovation
-AI and data privacy: Strategies for securing data privacy in AI models, etc.
Download your free copy now and implement the key findings to improve your business.
AI EngineHost Review: Revolutionary USA Datacenter-Based Hosting with NVIDIA ... – SOFTTECHHUB
I started my online journey with several hosting services before stumbling upon Ai EngineHost. At first, the idea of paying one fee and getting lifetime access seemed too good to pass up. The platform is built on reliable US-based servers, ensuring your projects run at high speeds and remain safe. Let me take you step by step through its benefits and features as I explain why this hosting solution is a perfect fit for digital entrepreneurs.
9. @andreaslanderer | @lehmamic
Which pipeline product should we use?
We have several possible CI/CD pipeline products:
• Jenkins was supported by OpenShift from the beginning (the "jenkinspipeline" build config strategy has since been deprecated)
• Tekton, a new Kubernetes-object-based pipeline (introduced by OpenShift 4.0)
• Unsupported CI/CD products running in OpenShift/Kubernetes (e.g. TeamCity, AppVeyor)
• Unsupported CI/CD products running somewhere else
14. @andreaslanderer | @lehmamic
What we are going to do
• Set up a Jenkins pipeline
• Build a sample app with the Jenkins pipeline
• Build and publish a Docker image with OpenShift build configs
• Deploy the app in OpenShift
16. @andreaslanderer | @lehmamic
Set up a basic Jenkins pipeline
We need to set up the following files for our basic Jenkins pipeline:
• A Jenkinsfile at the root of our repository
• A build config object in OpenShift with the Jenkins pipeline strategy
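A sketch of such a build config, using the (since deprecated) Jenkins pipeline strategy; the repository URL is a placeholder:

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: demo-app-pipeline
    spec:
      source:
        type: Git
        git:
          uri: https://github.com/example/demo-app.git
      strategy:
        type: JenkinsPipeline
        jenkinsPipelineStrategy:
          jenkinsfilePath: Jenkinsfile   # the file at the root of the repository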
19. @andreaslanderer | @lehmamic
Create a custom Pod Template
There are several ways to create a custom pod template in OpenShift:
• Pod templates can be configured through the Jenkins Configuration UI
• OpenShift provides a few ways to create Jenkins agent pod templates:
  - Imagestreams that have the label role set to jenkins-slave.
  - Imagestreamtags that have the annotation role set to jenkins-slave.
  - ConfigMaps that have the label role set to jenkins-slave.
• DSL from the Kubernetes Jenkins plugin
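A sketch of the DSL variant, which the speakers chose; the agent image shown here is an example for a .NET Core build, not necessarily the one used in the talk:

    // Jenkinsfile
    podTemplate(
        cloud: 'openshift',
        label: 'dotnet-agent',
        containers: [
            containerTemplate(
                name: 'dotnet',
                image: 'mcr.microsoft.com/dotnet/core/sdk:3.1',
                ttyEnabled: true,
                command: 'cat'
            )
        ]
    ) {
        node('dotnet-agent') {
            stage('Build') {
                container('dotnet') {
                    sh 'dotnet build'
                }
            }
        }
    }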
20. @andreaslanderer | @lehmamic
Build the demo app
After we have set up a running pipeline, we need to build our application:
• Clone the source code
• Define a proper versioning
• Build and test the application
21. @andreaslanderer | @lehmamic
Build the docker image
We are going to build a docker image with an OpenShift build config:
• Create an image pull secret
• Create a template with the build config and apply it to the OpenShift cluster
• Build the docker image with the build config
23. @andreaslanderer | @lehmamic
Deploy
Finally we need to deploy the docker image for our demo app:
• Create a template containing the deployment config, service and route, and apply it to the OpenShift cluster
• Trigger the deployment config rollout
31. @andreaslanderer | @lehmamic
DevOps Best Practices
• Don’t try to emulate others
• Take inspiration from what others did
• But don’t assume what worked for them will work for you
#10: There are several possible pipeline products we can use together with OpenShift
First, OpenShift supports Jenkins and has a two-way sync mechanism in place.
With OpenShift 4.0 a new build system has been introduced – Tekton. Tekton is a build pipeline running natively in Kubernetes. The OpenShift build configuration strategy "jenkinspipeline" and the sync plugin have been deprecated since then. We still show it, because we did our projects with that and the OpenShift/Kubernetes Jenkins integration is still valuable and in place.
There are other build products which also play well together with Kubernetes, e.g. TeamCity with the Cloud Plugin or AppVeyor.
And of course, you can also use a build server hosted outside of the cluster. In my current project we use a Jenkins hosted outside because it is a managed Jenkins and we don't need to maintain it ourselves. But it makes it a bit more difficult to connect to the cluster (firewalls, auth, etc.)
#11: As mentioned earlier, OpenShift has an integrated Jenkins.
A Jenkins server can be set up with a few clicks from the OpenShift Developer Catalog
#12: Let's talk about how OpenShift integrates with Jenkins.
There are basically three Jenkins plugins which come into play:
As Andreas mentioned, there is a special OpenShift build config with a Jenkins pipeline strategy. The Jenkins Sync Plugin synchronizes this build config with the Jenkins pipeline automatically.
The Kubernetes Plugin is a Jenkins cloud plugin which allows builds to run in Jenkins agent pods on Kubernetes.
The OpenShift Client Plugin adds an OpenShift DSL to the Jenkins pipeline syntax, allowing commands from the CLI to be executed directly with the DSL.
#13: Everything is in the code (infrastructure as code)
One pipeline for building, testing and deploying until prod
Build once (deploy the same tested artifact to all stages)
Apply (configure) all required Kubernetes objects together with the deployment
#14: Great, let's get our hands dirty
#18: We have a Jenkinsfile with some dummy stages, just to verify that the pipeline basically works.
#22: After we created our pipeline, we can see and interact with the pipeline in the OpenShift dashboard or in the Jenkins dashboard.
#23: The Jenkins integration in OpenShift provides three default Jenkins agent pod templates:
Basic, with only JNLP
Node.js
and Maven
Unfortunately these pod templates don't fit for us; we have a .NET Core application to build. Let's see how we can define our own pod templates in Jenkins
#24: There are several ways we can define pod templates for Jenkins agents:
Pod templates can be configured in the cloud settings UI in Jenkins (show quickly)
OpenShift provides a few ways to define Jenkins agent pod templates:
Imagestreams that have the label role set to jenkins-slave.
Imagestreamtags that have the annotation role set to jenkins-slave.
ConfigMaps that have the label role set to jenkins-slave.
DSL from the Kubernetes Plugin.
We have chosen the DSL from the Kubernetes Plugin, because this way we have it close to our pipeline, which makes it more understandable, and it is checked into the git repo (infrastructure as code)
#28: Having proper versioning is very important. Never use Docker images with the "latest" tag, since it introduces a non-deterministic version.
I know more or less three ways to introduce versioning:
Manual versioning
Versioning based on the build number
Versioning based on the git history
The most deterministic versioning is based on git, because it can be reproduced any time and anywhere and is completely deterministic.
Usually we use a tool called GitVersion to produce a semantic version based on the commit history. GitVersion is a dotnet core tool and requires us to use a pod template; because of that we introduced our own simplified, commit-sha-based versioning.
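A sketch of such commit-sha-based versioning inside the pipeline (the base version is an invented example):

    // derive a deterministic version from the commit history
    def shortSha = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
    def version  = "1.0.0-${shortSha}"   // never 'latest'
    echo "Building version ${version}"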
#29: There are basically two ways we can build the Docker image running our app:
Multi-stage Docker builds (this has the advantage that a Docker image cannot exist if something is not OK, but we don't really have access to our binaries, test results etc.)
Build the app in the pipeline and build the Docker image with the prebuilt binaries
The community standard points more in the direction of multi-stage Docker builds, but I prefer prebuilding the binaries in a pipeline. This way we have a bit more control over the flow, can parallelize it and have access to the resulting binaries, test results etc.
And in OpenShift Online it is difficult to implement a multi-stage Docker build since we don't have access to a direct Docker build.
Since we cannot use pod templates anymore, we committed the binaries for the demo and no build is required. But we still need to zip them, which is required for the next step.
#31: We are going to build the Docker image with a binary-to-image build config strategy. This build config uses a dedicated Docker base image hosted in the Red Hat Docker registry.
In order to access it, we need to log in to the Docker registry. This is done with Docker pull or push secrets. In our case we need a Docker pull secret.
In order to create a Docker pull secret for the Red Hat Docker registry you need a Red Hat account with a user name / password (you can add a password when you registered with a 3rd-party auth provider)
Please execute the following command. I did this already (you could see my password).
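The command itself is not reproduced in these notes; a pull secret for registry.redhat.io is typically created along these lines (secret name and credentials are placeholders):

    oc create secret docker-registry redhat-pull-secret \
      --docker-server=registry.redhat.io \
      --docker-username=<redhat-user> \
      --docker-password=<redhat-password>

    # make the secret available to builds in the project
    oc secrets link builder redhat-pull-secret --for=pull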
#32: And now we are going to build our Docker image.
There are several ways to build a Docker image in OpenShift:
Mount the Docker socket into the container in the pod template and use "docker build". This is a bit difficult in OpenShift, because OpenShift restricts access to the Docker socket. If you have your own cluster you can grant the service account running the Jenkins agent permissions to do so.
BuildConfig with the docker build strategy. This would be one of the best solutions. You write your own Dockerfile and pass in the required context. The build config will run docker build and push the image for you. Unfortunately this strategy is not permitted in OpenShift Online.
BuildConfig with the source-to-image strategy. The build config is configured with a git repo and it will build and push the image for you. A special base image is used, depending on the required technology.
BuildConfig with the binary-to-image strategy. This works similarly to the source-to-image strategy, but instead of a git repo we pass in the already-built binaries.
We used the binary-to-image strategy since this is a working approach with OpenShift Online which allows us to demonstrate the pipeline interaction with OpenShift. We pack everything into an OpenShift template, which allows us to pass in parameters.
In the source specification we pass in a context of type "binary". These binaries are passed in later in the command that triggers a build.
In the strategy, we configure which Docker image to use and which startup assembly we need.
In the output we configure into which Docker registry we push our image. We use the OpenShift internal registry.
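A sketch of what such a templated binary-to-image build config can look like; the base image and secret name are examples, not the exact ones from the demo:

    apiVersion: template.openshift.io/v1
    kind: Template
    metadata:
      name: demo-app-build
    parameters:
      - name: VERSION
        required: true
    objects:
      - apiVersion: build.openshift.io/v1
        kind: BuildConfig
        metadata:
          name: demo-app
        spec:
          source:
            type: Binary
            binary: {}                       # binaries are passed in at build time
          strategy:
            type: Source
            sourceStrategy:
              from:
                kind: DockerImage
                name: registry.redhat.io/dotnet/dotnet-31-rhel7:latest
              pullSecret:
                name: redhat-pull-secret
          output:
            to:
              kind: ImageStreamTag           # the OpenShift internal registry
              name: demo-app:${VERSION}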
#33: Now we need to apply the build config we just had a look at. We are going to use the OpenShift Client DSL for that.
We need to specify the cluster and project to use (configurable in Jenkins; in this case it is the default, which is the current cluster)
"oc process" takes a template, replaces the template parameters and gets the kube objects out of it.
"oc apply" applies the kube objects on the cluster (our build config)
We get a reference to the build config and trigger a build with the last command (important: we wait until the build has finished)
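Put together, the steps described here look roughly like this in the pipeline (project, file and archive names are placeholders):

    openshift.withCluster() {
        openshift.withProject('demo-project') {
            // oc process: substitute the template parameters, yielding kube objects
            def objects = openshift.process(
                readFile('openshift/build-template.yml'),
                '-p', "VERSION=${version}")
            // oc apply: create or update the build config on the cluster
            openshift.apply(objects)
            // trigger the binary build and wait for it to finish
            openshift.selector('bc', 'demo-app')
                     .startBuild('--from-archive=app.zip', '--wait')
        }
    }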
#35: First the deployment config.
The deployment config is similar to the standard kube deployment, but it will not roll out automatically like the kube deployment does.
We define here a label which is required to select the deployment config afterwards.
And we specify the pod template of this deployment config. A pod can have multiple containers. We suggest having only one container per pod except in some very specific cases where it really fits to have multiple containers, for example with the so-called side-car pattern.
We specify here the container image, port, volumes etc.
#36: A pod gets a cluster-internal, dynamic IP which can change between deployments. A pod is not accessible from outside of the cluster by default.
In general we create a service which makes the pods accessible over a cluster-internal DNS name and introduces load balancing. The service is also not accessible from outside of the cluster by default.
We specify the name of the service, a selector which selects the specific pods, the pod port and the port exposed by the service.
oc port-forward service/demo-app 8080:8080
http://localhost:8080
#37: In the end we want to have a publicly accessible service without doing a manual port forwarding. For that we need to specify an Ingress object, or, with OpenShift, a route.
OpenShift has an internal certificate management which we can configure with a route. With Ingress we need to do this ourselves.
We configure a selector to select our service, the host name (you need to replace it with your own host name and make sure the cluster domain is correct), the target port of the service and the TLS termination
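A sketch of such a route with edge TLS termination (host name and service are placeholders):

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: demo-app
    spec:
      host: demo-app.apps.example-cluster.com   # replace with your own host name
      to:
        kind: Service
        name: demo-app
      port:
        targetPort: 8080
      tls:
        termination: edge                       # certificates managed by OpenShift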
#38: The only thing which is left in our demo is applying the template to the cluster and triggering a deployment config rollout.
Again we select the cluster and project to use
We call "oc process" to replace the parameters in the template and get the kube objects out of it
We call "oc apply" to apply the kube objects in the cluster
And we use the rollout command to trigger the deployment.
latest() and status() make the DSL wait until the rollout completes
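In the pipeline, that final step looks roughly like this (again with placeholder names):

    openshift.withCluster() {
        openshift.withProject('demo-project') {
            def objects = openshift.process(
                readFile('openshift/deploy-template.yml'),
                '-p', "VERSION=${version}")
            openshift.apply(objects)

            def rollout = openshift.selector('dc', 'demo-app').rollout()
            rollout.latest()    // trigger a new deployment
            rollout.status()    // block until the rollout completes
        }
    }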
#39: And now we have finished our demo. The application is accessible through the internet.
And with that, I'll hand back over to Andreas.