The presentation was showcased at the Open Source Summit North America 2018 in Vancouver, BC. It covers lessons learned from transitioning the MSDN site functionality and content to docs.microsoft.com.
Ursula Sarracini - When Old Meets New: Codebases - Anton Caceres
Presented at FrontConf 2017 in Munich by Ursula Sarracini
When your codebase has 13 million lines of code, is written in C++/XUL, and dates back to 1998, it may seem like an impossible task to write a modern web app using technologies like React and tools like GitHub, while still managing a graceful integration with the existing codebase. Developing new functionality in a legacy codebase which wasn’t originally built for the modern web can introduce a bunch of new and exciting challenges. This talk will categorize the problems that arise when old code meets new code into four areas: testing challenges, source control and issue tracking challenges, human and cultural challenges, and technical challenges.
At Mozilla, I’m working on a major feature which was developed independently from the rest of the Firefox codebase, and has since been tightly integrated into the rest of the source tree. Along the way we have faced challenges in each of these categories. I will identify some of the pitfalls that developers can run into when trying to merge old and new codebases, and will provide practical advice for avoiding these problems down the road.
This document discusses setting up a cloud computing site like OpenPlay. It outlines the traditional stages of initial setup on physical hardware in a data center, running and maintaining the site, scaling it up, and eventually recycling the hardware. It also provides technical details on OpenPlay's development including issues addressed between versions 1.0 and 2.0, software used like Ubuntu, MongoDB, and Redis, and VPN configurations. The document ends with reference materials on related topics like data centers, cloud computing, networking protocols, and virtualization.
I have evidence that using git and GitHub for documentation, together with community doc techniques, can give us 300 doc changes in a month. I’ve bet my career on these methods and I want to share them with you.
Configuration As Code - Adoption of the Job DSL Plugin at Netflix - Justin Ryan
The Jenkins Job DSL plugin allows programmers to express job configurations as code. Learn about the benefits, from the obvious (store your configurations in the SCM of your choice) to the not-so-obvious (focus on intent, instead of succumbing to the distraction of multiple, complex job configuration options). We will share our experience adopting the plugin over the past year to create and maintain more complex job pipelines at Netflix.
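The Job DSL plugin itself uses a Groovy DSL, but the configuration-as-code idea the abstract describes can be sketched in plain Python: jobs are declared as data, rendered to Jenkins-style config XML, and stored in SCM like any other code. All names and the XML shape here are illustrative, not the plugin's actual output.

```python
# A hedged sketch of "configuration as code": jobs are described as plain
# data and rendered to a Jenkins-style config.xml, so definitions live in
# SCM and can be reviewed and reused like any other code. (The real Job
# DSL plugin uses a Groovy DSL; this Python analogue is illustrative.)
import xml.etree.ElementTree as ET

def render_job(description, git_url, shell_command):
    """Render a minimal freestyle-job config.xml from a job description."""
    project = ET.Element("project")
    ET.SubElement(project, "description").text = description
    scm = ET.SubElement(project, "scm", {"class": "hudson.plugins.git.GitSCM"})
    ET.SubElement(scm, "url").text = git_url
    builders = ET.SubElement(project, "builders")
    shell = ET.SubElement(builders, "hudson.tasks.Shell")
    ET.SubElement(shell, "command").text = shell_command
    return ET.tostring(project, encoding="unicode")

# One loop stamps out many similar jobs -- the "focus on intent" benefit:
# the script states what the pipeline should be, not how to click it in.
jobs = {f"service-{name}-build": f"make -C {name} test"
        for name in ("billing", "search")}
for job_name, cmd in jobs.items():
    print(job_name, "->", len(render_job(f"CI build for {job_name}",
                                         "https://example.com/repo.git", cmd)), "bytes of XML")
```

Because the job definitions are ordinary code, refactoring (extracting a shared template, reviewing a diff) works exactly as it would for application source.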
Provisioning environments. A simplistic approach - Eder Roger Souza
This document discusses provisioning development environments using DevOps tools like Vagrant and Puppet. It introduces Vagrant as a way to create reproducible and portable virtual environments. Puppet is then presented as an automation tool to configure and manage the resources and applications within those environments. The document provides a high-level example of using these tools together to provision a load-balanced web application backed by MongoDB. It aims to demonstrate how DevOps practices like automation and cooperation between development and operations can help address challenges like frequent deployment and onboarding new team members.
An introduction to the Job DSL plugin for the Jenkins continuous integration server. Learn how to treat job and view configuration as code, how to store the configuration in SCM, and how to apply code reuse and refactoring. Learn how to extend the Job DSL for your favorite plugins.
The Job DSL Plugin: Introduction & What’s New - Daniel Spilker
Learn how to practice configuration as code by using the Job DSL plugin for Jenkins. Find out how to organize Job DSL scripts and apply code reuse and refactoring to your Jenkins configuration. This talk will cover advanced techniques for large scale installations and show how to extend the Job DSL for your favorite plugins.
JUC Europe 2015: Continuous Integration and Distribution in the Cloud with DE... - CloudBees
By Mark Galpin, JFrog
Correct this if it's wrong, but as a software developer you have two main dreams - to enjoy your coding and to not have to care about anything else but code. Setting up an environment and maintaining a CI/CD cycle for your software can be complicated and painful. The good news is, it doesn't have to be! In this talk, Mark will demo some of the most popular alternatives for a cloud-based development life cycle: from CI builds with DEV@cloud, through artifact deployment to a binary repository and finally, rolling out your release on a truly modern distribution platform.
JUC Europe 2015: Jenkins-Based Continuous Integration for Heterogeneous Hardw... - CloudBees
By Oleg Nenashev, CloudBees, Inc.
This talk will address Jenkins-based continuous integration (CI) in the area of embedded systems, which include both hardware and software components. An overview of common automation cases, challenges and their solutions based on Jenkins CI services will be presented. The specifics of Jenkins usage in the hardware area (available plugins and workarounds, environment and desired high availability features) will also be discussed. The session will cover several automation examples and case studies.
Build your application in seconds and optimize workflow as much as you can us... - Alex S
Building an application is a very intense and complicated process. Sometimes it leads to unacceptable results, where you can wait an eternity for an interim product. Tools may differ and applications may differ, but the techniques stay the same.
Optimization is a very important thing even when your process is standardized and strong. During this session I'll talk about:
- The build is the most valuable product in DevOps
- Tests, sniffers, performance tests and other checks are minor in comparison to builds
- How to get rid of long waits for small changes or fixes
- How to avoid wasting time waiting for builds
- How to incorporate measurement tools
- How to escape feature branch hell without spending tons of time on merge conflicts
- Making builds for enterprise and big data databases
- Other interesting things from DevOps life :)
An optimization strategy shouldn’t be strict, and it shouldn’t ruin the current process or block the team from performing operations. With those answers in hand, we can move forward like thunder and achieve whatever we want.
Azure Functions allow developers to write small pieces of code, or "functions", that are triggered by events like HTTP requests or messages in Azure services like Storage Queues or Event Hubs. Functions can be used to integrate apps and services, build backends for mobile and web apps, and perform offline data processing. Functions support triggers from various Azure services and other sources, and can be written in C#, F#, Node.js, Python or Java. Functions provide a serverless compute experience and scale automatically based on demand.
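The trigger model described above can be illustrated generically: handlers register for an event source, and the runtime dispatches incoming events to the matching function. This is a minimal sketch in plain Python; real Azure Functions declare bindings via function.json or the azure-functions SDK, and all names here are invented.

```python
# Illustrative sketch of the event-trigger model behind serverless
# functions: code registers for an event type, and a tiny "runtime"
# routes each incoming event to the matching handler. This is NOT the
# Azure Functions API; it only models the idea from the abstract above.
from typing import Callable, Dict

_handlers: Dict[str, Callable[[dict], str]] = {}

def trigger(event_type: str):
    """Register a function to run when an event of this type arrives."""
    def register(fn):
        _handlers[event_type] = fn
        return fn
    return register

@trigger("http_request")
def hello(event: dict) -> str:
    # A handler for HTTP-style events.
    return f"Hello, {event.get('name', 'world')}!"

@trigger("queue_message")
def process(event: dict) -> str:
    # A handler for queue-style events (e.g. a Storage Queue message).
    return f"processed message: {event['body']}"

def dispatch(event_type: str, payload: dict) -> str:
    """The 'runtime': look up and invoke the matching handler."""
    return _handlers[event_type](payload)

print(dispatch("http_request", {"name": "Azure"}))   # Hello, Azure!
print(dispatch("queue_message", {"body": "42"}))     # processed message: 42
```

The serverless platform adds what this sketch omits: automatic scaling of handler instances based on event volume, and billing only for the time handlers actually run.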
The document discusses Camunda's transition from a traditional Jenkins setup with virtual machines to a containerized continuous integration infrastructure using Docker and Jenkins. Some of the key problems with the previous setup included a lack of isolation between jobs, limited scalability, and difficulties maintaining the infrastructure. The new system achieves isolated and reproducible jobs through one-off Docker containers, scalability through Docker Swarm on commodity hardware, and infrastructure maintenance through immutable Docker images and infrastructure as code definitions. Lessons learned include automating as much as possible, designing for scale, testing all aspects of the new system, and controlling dependencies.
.Net OSS CI & CD with Jenkins - JUC ISRAEL 2013 - Tikal Knowledge
This document discusses using Jenkins for continuous integration (CI) and continuous delivery (CD) of .NET open source projects. It covers how to achieve CI using Jenkins by automating builds, testing on each commit, and more. It also discusses using NuGet for dependency management and Sonar for code quality analysis. Finally, it provides examples of using Jenkins to deploy builds to platforms like AWS Elastic Beanstalk for CD after builds pass testing.
The document provides an overview of containers and Docker for busy developers. It introduces containers and Docker, explaining how containers provide isolation compared to virtual machines with less overhead. It then covers how to use Docker to package applications, ensure consistent environments, and share containers between developers. The document also provides useful Docker commands for building, running, and managing containers.
Managing Jenkins with Jenkins (Jenkins User Conference Palo Alto, 2013) - Gareth Bowles
Gareth from Netflix is giving a talk about how they use Jenkins' System Groovy Scripts feature to automate management of their large Jenkins infrastructure. Some examples of tasks they automate include disabling jobs that haven't built successfully in 90 days, relabeling slaves when updating EC2 configurations, monitoring slaves for remoting errors and notifying owners about disconnected custom slaves. System Groovy Scripts provide full access to Jenkins like a plugin but are easier to write and maintain.
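The cleanup task mentioned above ("disable jobs that haven't built successfully in 90 days") is easy to model. The real version is a Jenkins System Groovy Script with full access to the Jenkins object model; this standalone Python sketch only shows the selection logic, with made-up job data.

```python
# Sketch of the stale-job cleanup described above: find jobs whose last
# successful build is older than 90 days (or that never succeeded) so
# they can be disabled. Job names and dates are invented examples; in
# Jenkins this would be a System Groovy Script iterating real jobs.
from datetime import datetime, timedelta

def stale_jobs(jobs, now, max_age=timedelta(days=90)):
    """Return names of jobs with no successful build within max_age."""
    return [name for name, last_success in jobs.items()
            if last_success is None or now - last_success > max_age]

now = datetime(2013, 10, 1)
jobs = {
    "frontend-build": datetime(2013, 9, 28),   # recent, stays enabled
    "legacy-report":  datetime(2013, 5, 1),    # stale, gets disabled
    "never-green":    None,                    # never succeeded, disabled
}
for name in stale_jobs(jobs, now):
    print(f"disabling {name}")
```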
Containers provide an efficient application delivery mechanism where applications can be built once and run anywhere. The document discusses using containers with a continuous integration and continuous delivery (CI/CD) workflow where source code is built into container images using tools like Jenkins and Docker, and the images are deployed to environments like AWS, Azure, or bare metal using Calm.io. It also describes setting up a microservices architecture with services, backends, and monitoring containers, and automatically scaling infrastructure using tools like Docker swarm based on monitoring information to ensure high availability with zero connection drops during maintenance. The key takeaways are to automate everything, use small container images, be cloud agnostic, and quickly recover from failures.
JUC Europe 2015: Plugin Development with Gradle and Groovy - CloudBees
By Daniel Spilker, CoreMedia
Learn how to use the Gradle JPI plugin to enable a 100% Groovy plugin development environment. We will delve into Groovy as the primary programming language, Spock for writing tests and Gradle as the build system.
This document discusses Jenkins best practices, including using plugins to simplify the UI, manage configuration history, rebuild jobs, and mask passwords. It also covers using folders to organize jobs by branch, the job DSL plugin to define jobs programmatically, and artifact repositories to share artifacts between Jenkins instances. More advanced topics include using the multijob plugin for parallel test runs, the pretested integration plugin for branch setup, and integrating pipelines in Jenkins 2.0. The document concludes with bonus techniques like provisioning build slaves with Packer/Vagrant/Docker and load balancing Jenkins slaves.
SenchaCon 2016: Being Productive with the New Sencha Fiddle - Mitchell Simoens - Sencha
Would you like to share code or quickly test some code? Before Sencha Fiddle, there was no good way to quickly run Ext JS code. Since its launch, Sencha Fiddle has changed the way we save code in the cloud and share it. In this session, you'll learn what Fiddle is, its new features, and how you can use it to be more productive.
Demo of how to dockerise and deploy your microservices application to the test environment, how to run Selenium tests inside Docker, and how to put this all together to integrate your tests in your CI/CD pipeline using Jenkins.
Presented at ATA GTR 2016 in Pune.
The document discusses various tools for productive front-end development. It covers tools for getting scripts like Bower and NPM, searching for packages on sites like npmjs.com, transpiling languages like TypeScript and JSX with Babel, running tasks with tools like Gulp and Webpack, minifying code with Uglify and Clean CSS, bundling code, testing with tools like Mocha, ESLint and Selenium, generating code with Angular CLI and Create React App, and using GitHub with tools like Travis CI.
JUC Europe 2015: Bringing CD at Cloud-Scale with Jenkins, Docker and "Tiger" - CloudBees
By Kohsuke Kawaguchi and Harpreet Singh, CloudBees, Inc.
Continuous delivery (CD) is a competitive differentiator and development and operations teams are under pressure to deliver software faster. The DevOps world is going through a storm of changes - Docker being the key one. This session by Kohsuke and Harpreet will introduce a set of plugins that address various aspects of CD with Docker.
This document discusses different types of continuous integration (CI) pipelines. It begins by describing staging CI, where jobs are triggered on new commits, and issues can arise if the build breaks. It then covers gating CI, used by OpenStack, where code is reviewed and tested before being merged without broken builds. Finally, it discusses doing CI yourself using open source tools like Gerrit, Zuul and Jenkins, alone or via the pre-built Software Factory project. The conclusion is that gating CI prevents broken masters and these techniques can be reused for one's own projects.
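The difference between staging CI and gating CI boils down to when tests run relative to the merge. This toy model, with invented names and a fake test predicate, shows the gating rule: a change lands only if the test suite passes on the merged result, so the main branch can never break.

```python
# Toy model of gating CI as described above: merge a change only if
# tests pass on the *candidate* merged tree, so main stays green.
# The branch contents and the run_tests predicate are illustrative.
def gate(main_state: set, change: set, run_tests) -> set:
    """Merge `change` into `main_state` only if tests pass on the result."""
    candidate = main_state | change       # build/test the merged result,
    if run_tests(candidate):              # not the change in isolation
        return candidate                  # gate open: the change lands
    return main_state                     # gate closed: main is untouched

# A fake test suite: the tree is "broken" if it contains a known-bad change.
run_tests = lambda tree: "buggy-patch" not in tree

main = {"base"}
main = gate(main, {"good-patch"}, run_tests)    # accepted
main = gate(main, {"buggy-patch"}, run_tests)   # rejected, main unchanged
print(sorted(main))
```

In staging CI the buggy patch would have merged first and broken the build for everyone; the gate inverts that ordering, which is exactly why projects like OpenStack adopted it.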
The document discusses the concept of "Docs Like Code", which treats documentation like code by storing docs in version control systems, using plain text formats, and integrating doc writing and publishing into the same workflow as software development. It provides the case study of Apache Pulsar, which uses GitHub and other tools to collaborate effectively on docs between developers, writers and users. Benefits include better doc quality and syncing with code through continuous integration/deployment of docs.
This document summarizes Anne Gentle's presentation on treating documentation like code. Some key points include:
- Documentation should be stored and managed in a version control system like code to enable features like automatic builds, continuous integration, testing, and review processes.
- Goals of treating docs like code include improving quality, trust, workflows, ability to scale collaboration, and giving documentation ownership.
- Plans should consider users, contributors, deliverables, and business needs when setting up documentation processes and tools.
- Automating builds, publishing, and other processes through continuous integration/delivery helps improve efficiency and accuracy of documentation.
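The automation point above can be made concrete with a small CI-style docs check: fail the build when pages contain leftover TODO markers or link to local files that don't exist. The file names and repo layout here are invented for illustration.

```python
# A small docs-as-code check: lint Markdown files for unresolved TODOs
# and broken relative links, the kind of gate a CI pipeline would run on
# every doc change. The sample files below are invented for the demo.
import re
import tempfile
from pathlib import Path

def lint_doc(path: Path) -> list:
    """Return a list of problems found in one Markdown file."""
    problems = []
    text = path.read_text()
    if "TODO" in text:
        problems.append(f"{path.name}: unresolved TODO")
    # Match [text](target) links, skipping absolute http(s) URLs.
    for target in re.findall(r"\]\((?!https?://)([^)#]+)\)", text):
        if not (path.parent / target).exists():
            problems.append(f"{path.name}: broken link to {target}")
    return problems

docs = Path(tempfile.mkdtemp())
(docs / "intro.md").write_text("See [setup](setup.md) and [api](missing.md). TODO: add diagram")
(docs / "setup.md").write_text("All good, see [home](intro.md)")
problems = sorted(p for f in docs.glob("*.md") for p in lint_doc(f))
for problem in problems:
    print(problem)
```

Wired into CI, a non-empty problem list fails the docs build, giving documentation the same automated review gate that code already has.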
When you treat docs like code, you multiply everyone’s efforts and streamline processes through collaboration, automation, and innovation. The benefits are real, but these efforts are complex. The ways you can leverage developer process and tools vary widely. Let’s unpack the absolute best situation for using a docs as code model.
Then, we can walk through multiple considerations that may point you in one direction or another. We can talk about version control, publishing, REST API considerations, source formats, automation, quality controls and testing, and lessons learned. Let's study best practices that are outcome-dependent and situational, so you can shape a strategic effort.
The web has changed! Users spend more time on mobile than on desktops and expect an amazing user experience on both. APIs are the heart of the new web as the central point of access to data, encapsulating logic and providing the same data and the same features for desktops and mobiles. In this workshop, Antonio will show you how to create complex APIs in an easy and quick way using API Platform, built on Symfony.
Markup languages and warp-speed documentation - Lois Patterson
The presentation discusses how software development has moved towards more frequent releases through DevOps practices. This requires documentation to also be updated quickly. Markup languages can help by allowing many contributors to collaborate easily on documentation. Specific markup languages mentioned include reStructuredText and Markdown, which can be processed by tools like Sphinx to generate documentation from plain text files. The presentation demonstrates how to use reStructuredText and emphasizes that markup languages, collaborative tools like GitHub, and automation are key to supporting modern rapid software development practices.
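The reason plain-text markup works so well with collaborative tools is that it can be diffed, reviewed, and processed programmatically, which is what Sphinx does at scale. As a tiny illustration, this sketch finds reStructuredText section titles, which are written as a line of text "underlined" by a row of punctuation characters.

```python
# A toy reStructuredText parser: find section titles, which reST marks
# as a text line followed by an underline of a single punctuation
# character at least as long as the title. This only illustrates why
# plain-text markup is easy to process; Sphinx/docutils do the real work.
def rst_titles(source: str) -> list:
    """Return section titles: a text line followed by a matching underline."""
    titles = []
    lines = source.splitlines()
    for text, underline in zip(lines, lines[1:]):
        if (text.strip() and underline
                and set(underline) <= set('=-~^"#*')
                and len(set(underline)) == 1
                and len(underline) >= len(text.rstrip())):
            titles.append(text.strip())
    return titles

doc = """\
Release Notes
=============

Upgrading
---------
Follow the steps below.
"""
print(rst_titles(doc))
```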
Lois Patterson: Markup Languages and Warp-Speed Documentation - Jack Molisani
This document discusses applying lean and agile principles to SharePoint development. It introduces lean concepts like Kanban and agile frameworks like Scrum and XP. It emphasizes techniques like automated testing, test-driven development (TDD), and continuous integration. The document acknowledges challenges in applying these techniques to SharePoint but provides examples of how to address them through practices like unit testing with isolation frameworks and continuous integration with PowerShell scripts.
How and Why you can and should Participate in Open Source Projects (AMIS, Sof... - Lucas Jellema
For a long time I was reluctant to actively contribute to an open source project. I thought it would be rather complicated and demanding, that I didn't have the knowledge or skills for it, or at the very least that the project team wasn't waiting for me.
In December 2021, I decided to make a serious contribution to the Dapr.io project, and to finally find out how it works and whether it really is that complicated. In this session I want to tell you about my experiences: how Fork, Clone, Branch, Push (and PR) is the rhythm of contributing to an open source project, and how you do that (these are all Git actions against GitHub repositories); how to learn how such a project functions and how to connect to it; which tools are needed and which communication channels are used. I tell how the standards of the project, largely automatically enforced, help me become a better software engineer, with an eye for readability and testability of the code.
I also cover how the review process is quite exciting once you have offered your contribution, and how the final "merge to master" of my contribution and then the actual release (Dapr 1.6 contains my first contribution) are nice milestones.
I hope to motivate participants in this session to take the step yourself and contribute to an open source project in the form of issues, samples, documentation or code. It's valuable to the community and to the specific project, and I think it's definitely a valuable experience for the contributor. I used to look up to it, and now that I've done it, it gives me confidence and leaves me wanting more (I could still use some help with the work on Dapr.io, by the way).
Publishing strategies for API documentation - Tom Johnson
Most of the common tools for publishing help material fall short when it comes to API documentation. Much API documentation (such as for Java, C++, or .NET APIs) is generated from comments in the source code, and the generated output doesn't usually integrate with other help material, such as programming tutorials or scenario-based code samples.
REST APIs are a breed of their own, with almost no standard tools for generating documentation from the source. The variety of outputs for REST APIs are as diverse as the APIs themselves, as you can see by browsing the 11,000+ web APIs on programmableweb.com.
As a technical writer, what publishing strategies do you use for API documentation? Do you leave the reference material separate from the tutorials and code samples? Do you convert everything to DITA and merge it into a single output? Do you build your own help system from scratch that imports your REST API information?
There’s not a one-size-fits-all approach. In this presentation, you’ll learn a variety of publishing strategies for different kinds of APIs, with examples of what works well for developer audiences. No matter what kind of API you’re working with, you’ll benefit from this survey of the API doc publishing scene.
- See more at: https://idratherbewriting.com
This document discusses semantic annotation using custom vocabularies. It introduces Gabriel Dragomir and provides background on semantic web and linked data. It then describes Apache Stanbol, a framework for semantic annotation of documents. Stanbol allows modular processing of documents using configurable workflows and vocabularies. The document outlines Stanbol's architecture and components. It also discusses integrating Stanbol with Drupal for semantic indexing and annotation of content. A demo is proposed to index Drupal data in Stanbol and annotate entities using DBPedia and a custom semantic web vocabulary.
Managing Changes to the Database Across the Project Life Cycle (presented by ...eZ Systems
In this talk we will cover the different strategies for managing changes to the database content structure, during both the development and maintenance phases. The Kaliop Migrations Bundle will be introduced as the current best-in-breed solution for automating changes after the go-live of a site.
DevOps Friendly Doc Publishing for APIs & MicroservicesSonatype
Mandy Whaley, CISCO
Microservices create an explosion of internal and external APIs. These APIs need great docs. Many organizations end up with a jungle of wiki pages, Swagger docs, and API consoles, and maybe a few secret documents trapped in a chat room somewhere… Keeping docs updated and in sync with code can be a challenge.
We’ve been working on a project at Cisco DevNet to help solve this problem for engineering teams across Cisco. The goal is to create a forward looking developer and API doc publishing pipeline that:
Has a developer friendly editing flow
Accepts many API spec formats (Swagger, RAML, etc)
Supports long form documentation in markdown
Is CI/CD pipeline friendly so that code and docs stay in sync
Flexible enough to be used by a wide scope of teams and technologies
We have many interesting lessons learned about tooling and how to solve documentation challenges for internal and external facing APIs. We have found that solving this doc publishing flow is a key component of building modern infrastructure. This is most definitely a culture + tech + ops + dev story, and we look forward to sharing it with the DevOps Days community.
DevOps is an approach that aims to increase an organization's ability to deliver applications and services at high velocity by combining cultural philosophies, practices, and tools that align development and operations teams. Under a DevOps model, development and operations teams work closely together across the entire application lifecycle, from development through deployment to operations. They use automation, monitoring, and collaboration tools to accelerate delivery while improving quality and security. Popular DevOps tools include Git, Jenkins, Puppet, Chef, Ansible, Docker, and Nagios.
An overview of the way developers approach problems, for entrepreneurs, managers, and designers, to facilitate discussion and understanding. Developers are creative problem solvers who use words and logic to “model” stuff with objects, properties, methods, inheritance, composition, APIs, and frameworks, to build websites, web apps, mobile apps, and IoT in a repository on a stack with tools and tests at scale for our users.
This document provides a case study on a project created using open source technology. It discusses analyzing project goals and resources, evaluating open source options based on total cost of ownership, implementing a solution using LAMP stack, and lessons learned. The project was developed using Linux, Apache, MySQL, and PHP based on the needs of a low budget, ability to invest in internal skills, and reduce dependency on external trends. Key steps included preparing the Linux server, using version control and local testing, and engaging the open source community for support.
This document provides an overview of Splunk's developer platform. It introduces Jon Rooney, Director of Developer Marketing at Splunk, and Damien Dallimore, Developer Evangelist. It discusses how Splunk can help with application development challenges like visibility across the development lifecycle. It also demonstrates how Splunk can integrate with the development process using tools like its REST API and SDKs. The document highlights Splunk's modular inputs, web framework, and opportunities for custom visualizations and search commands. Overall, it aims to showcase Splunk's powerful platform for developers.
DocOps: Documentation at the Speed of AgileMary Connor
Presented at Keep Austin Agile 2016: How do we make documentation "Agile", given the Agile Manifesto? How do you get into the Definition of Done? What does "DocOps" mean, in the simplest and broadest terms? What should your requirements be for a DocOps transformation, and how do you find a tool stack that fits them? Where do you start, and how do you escape a waterfall reengineering of your legacy docs?
Pat Farrell, Migrating Legacy Documentation to XML and DITAfarrelldoc
Pat Farrell is a technical information developer who has built a variety of custom solutions to increase productivity. This presentation is an overview of Pat's technical innovations, followed by a closer look at a conversion project he managed: migrating documentation to XML and DITA. Learn what you need to begin such a conversion project: workflow, considerations, and the benefits and drawbacks of using in-house or external resources for your XML or DITA conversion project.
DNS Resolvers and Nameservers (in New Zealand)APNIC
Geoff Huston, Chief Scientist at APNIC, presented on 'DNS Resolvers and Nameservers in New Zealand' at NZNOG 2025 held in Napier, New Zealand from 9 to 11 April 2025.
APNIC Update, presented at NZNOG 2025 by Terry SweetserAPNIC
Terry Sweetser, Training Delivery Manager (South Asia & Oceania) at APNIC presented an APNIC update at NZNOG 2025 held in Napier, New Zealand from 9 to 11 April 2025.
APNIC -Policy Development Process, presented at Local APIGA Taiwan 2025APNIC
Joyce Chen, Senior Advisor, Strategic Engagement at APNIC, presented on 'APNIC Policy Development Process' at the Local APIGA Taiwan 2025 event held in Taipei from 19 to 20 April 2025.
Docs as Part of the Product - Open Source Summit North America 2018
1. Docs as Part of the Product
Den Delimarsky, PM
@denniscode
2. Who am I?
•Program Manager at Microsoft
•Building docs.microsoft.com
•Based in Vancouver
•Helping drive:
• API Documentation
• Samples
• Interactive Experiences
4. State of Documentation
•What we expect from doc experiences
• Up-to-date and always reflecting the true state of the product.
• Comprehensive.
• Easy to edit.
• Intuitive search and discovery.
• Connected to the communities inside and outside the company.
• Rich, interactive presentation.
5. State of Documentation
•What we get from doc experiences
• Out-of-date – generated once and forgotten.
• Inaccurate API docs written by hand.
• Maintained in content silos.
• Scattered – every team has their own site with their own format and publishing pipeline.
• Search is bad due to fragmentation.
• Text and basic media (images).
8. Looking Back
•Started with the goal to be the one true place for all Microsoft developer resources.
•Powered by a closed, proprietary publishing system.
•Content stored in an internal XML flavor.
9. Looking Back
• Brittle code base not designed for the cloud.
• Everything is manually written – almost zero automation.
• Complicated process to update and publish content – sometimes it took days, if not weeks.
• Teams outgrew MSDN, held back by its update velocity – new sites started appearing.
11. Docs: The New Hope
• One doc site to rule them all – unify documentation for the whole company.
• Start from zero, for the cloud, from the cloud.
• Automate all the things.
• Open, using standard open-source tools and formats.
• Global by default – 64 locales built-in.
• We don’t know the right way – but we can experiment.
12. Docs: The New Hope
•Consistent editing experience – Markdown is the gold standard.
•Integrated into API reference sources (Javadoc comments, Swagger specs, and Python docstrings).
•Edit directly in GitHub or your favorite editor.
•Easily preview changes.
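The docstring-driven flow above can be illustrated with a plain Python sketch: reference tooling of the kind described reads structured docstrings straight from the source. The function and its parameters below are invented for illustration, not part of any real API.

```python
import inspect

# Hypothetical API function whose docstring doubles as its reference page.
# A pipeline like the one described can extract this text automatically
# and render it alongside hand-written Markdown tutorials.
def connect(host: str, timeout: float = 30.0) -> bool:
    """Open a connection to *host*.

    :param host: DNS name or IP address of the server.
    :param timeout: Seconds to wait before giving up.
    :returns: True if the connection succeeded.
    """
    return bool(host)  # placeholder body, for illustration only

# What a doc generator would pick up:
doc = inspect.getdoc(connect)
```

Because the reference text lives next to the code, it is reviewed in the same pull request as the change it documents.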
16. Docs: The New Hope
•Automation at the heart of the publishing process
• API Doc Tooling (Node, Java, Python, .NET, REST, PowerShell, CLI)
• Content Build and Validation
• Content Testing Suite (404s, orphaned pages, SEO compliance)
• GitHub Bots (automatically merge PRs, channel external feedback to the internal bug tracker)
• Sample Code Testing
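A content testing suite of the kind listed above can be sketched in a few lines. This toy checker (an illustration, not the actual docs.microsoft.com tooling) flags broken internal links and orphaned pages:

```python
# Minimal content test: given a map of page -> internal links it contains,
# report links pointing at missing pages and pages nothing links to.
def check_content(pages):
    known = set(pages)
    broken = {(src, dst) for src, links in pages.items()
              for dst in links if dst not in known}
    linked_to = {dst for links in pages.values() for dst in links}
    orphans = known - linked_to - {"index"}  # the entry page is never "orphaned"
    return broken, orphans

broken, orphans = check_content({
    "index": ["setup", "api"],
    "setup": ["api"],
    "api": [],
    "old-page": [],   # nothing links here, so it is reported as an orphan
})
```

A real suite would crawl rendered HTML and also check external 404s and SEO metadata, but the pass/fail logic is the same shape.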
17. Docs: The New Hope
•Making URLs readable
https://msdn.microsoft.com/en-us/library/8ehhxeaf(v=vs.110).aspx
https://docs.microsoft.com/dotnet/api/system.collections.generic.icomparer-1
18. Docs: The New Hope
•Convention over configuration – we infer content structure from folders in GitHub.
•/content/test.md becomes docs.microsoft.com/cloud/content/test
•Easy to set up redirects when things change, directly from the repo – broken links are much easier to fix.
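The convention-over-configuration mapping can be sketched as a tiny path-to-URL function with an optional redirect map. The base URL and the redirect format are illustrative assumptions, not the real pipeline's configuration:

```python
# Sketch: infer a public URL from a repo file path, applying redirects
# first so that moved pages keep resolving. Base URL is an example value.
def path_to_url(repo_path, base="docs.microsoft.com/cloud", redirects=None):
    path = repo_path.removesuffix(".md")      # drop the Markdown extension
    if redirects:
        path = redirects.get(path, path)      # honor repo-defined redirects
    return f"{base}/{path.strip('/')}"

url = path_to_url("/content/test.md")
# -> docs.microsoft.com/cloud/content/test, matching the slide's example
```

Keeping the redirect map in the same repo as the content means a page move and its redirect land in one reviewable commit.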
19. Docs: The New Hope
•Content Versioning
• No version “burned into” the URL.
• Ensures URL consistency even when new versions are released.
• Easily discoverable.
• Reduces friction and broken links.
• Using query param - ?view={moniker}
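The version-selection scheme can be sketched as follows. The moniker value below is hypothetical; the point is that the path stays stable and only the query string varies:

```python
# Sketch: the version never appears in the URL path, only in an optional
# ?view= query parameter, so existing links survive new releases.
def versioned_url(path, moniker=None):
    url = f"https://docs.microsoft.com/{path.strip('/')}"
    return f"{url}?view={moniker}" if moniker else url

latest = versioned_url("dotnet/api/system.string")
pinned = versioned_url("dotnet/api/system.string", moniker="netframework-4.7")
```

Dropping the parameter falls back to the latest version, which is why links never rot when a new release ships.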
20. Docs: The New Hope
• API documentation discoverable from one place – the API Browser.
• No need to hop between N+1 sites to find the API.
• Semantic understanding of the APIs.
• Reduce discovery and documentation friction.
• Provide the artifacts (npm, pypi, source) and the docs are staged automatically.
• Intertwined with human-edited content.
22. Docs: The New Hope
•28K+ API documentation CI runs executed in the past year.
•10MM+ lines of auto-generated docs dropped into GitHub.
23. Docs: The New Hope
•This powers:
• 9.5K+ JavaScript API documentation pages
• 55K+ Java API documentation pages
• 16K+ Python API documentation pages
• 15K+ REST API documentation pages
• 499K+ .NET API documentation pages
24. Docs: The New Hope
•Builds run multiple times a day.
•Always documenting the latest public versions of APIs, in addition to secondary (supported) versions.
25. Docs: The New Hope
•All API docs have standard URL patterns
• /python/api/{package-name}/{entity}
• /java/api/{entity-qualified-name}
• /javascript/api/{package-name}/{entity}
• /rest/api/{product}/{op-group}/{operation}
• /cli/{product}/{command}
26. Docs: The New Hope
• Documentation linked to source code.
• Switch between versions on the fly.
• Logically grouped API entities in the table of contents.
• Grouping generated automatically – no human ever does that.
• Allows us to scale to 10K+ APIs in minutes.
27. Docs: The New Hope
•Contracts over hand-crafted documents.
•Schema defines entities and overall hierarchy.
•Template globally applied.
•Driving consistency in presentation.
•Updates don’t break existing documentation.
28. Docs: The New Hope
•Generate any post-processing artifacts after build – IntelliSense and cross-reference files.
•Artifacts can be used by product teams (Javadoc to be shipped with the product).
29. Docs: The New Hope
• Structured documentation enables us to power rich API discovery experiences.
• Find the necessary API in seconds.
• Search across all products in a platform.
• IDE “auto-suggest” – in a search experience.
32. Beyond Text
•Good documentation is not a wall of text.
•Reducing friction from reading to trying – how can we allow you to see how things work in seconds?
•Structured content allows us to understand where we can enable interactivity.
33. Beyond Text
•REST “Try It”
•Powered by Swagger specs.
•Run REST calls from a documentation page.
•Instantly see output, with no apps involved.
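Conceptually, a Swagger-driven “Try It” console only needs the spec to know what request to send. This toy sketch (a hypothetical spec fragment; no real endpoint is called) builds the request that an in-page console would execute:

```python
# Hypothetical OpenAPI/Swagger fragment. Host and operation are made up.
spec = {
    "servers": [{"url": "https://api.example.com"}],
    "paths": {"/widgets/{id}": {"get": {"operationId": "getWidget"}}},
}

# Find the operation by id and render the concrete request to execute.
def build_request(spec, op_id, **params):
    base = spec["servers"][0]["url"]
    for path, ops in spec["paths"].items():
        for method, op in ops.items():
            if op.get("operationId") == op_id:
                return method.upper(), base + path.format(**params)
    raise KeyError(op_id)

method, url = build_request(spec, "getWidget", id=42)
# -> ("GET", "https://api.example.com/widgets/42")
```

The real experience adds auth, parameter forms, and response rendering, but everything it needs is derivable from the same spec that generates the reference pages.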
34. Beyond Text
•.NET REPL
•Run C# code in a stateless container.
•Zero friction to get started – no auth required.
•Any C# snippet can integrate it.
35. Beyond Text
•Azure Cloud Shell
•Linux in the browser.
•Works with Bash and PowerShell Core.
•Stateful container connected to an Azure subscription.
37. Focus on Community
•2.5K+ repositories
• 1.1K+ public
•4.3K+ internal members
•A huge shift in how the entire company sees documentation and contributions to open source.
38. Focus on Community
•A lot of our projects were moved over to GitHub (VSCode, TypeScript, .NET, Monaco Editor).
•Natural place to have documentation, with a huge community of passionate developers.
(stats courtesy of GitHub)
39. Focus on Community
•Shifting feedback from silo-ed platforms to be open.
•GitHub Issues – for content and site feedback.
•Documentation is treated like a product – doc issue = bug.
40. Focus on Community
•Key learning – transparency matters.
•Your customers know their needs better than you do – talk to them. All the time.
•Working with your community is not the same as asking them to do the work for you.
•Fostering the community and building trust takes time – coaching them on best practices and approaches is important.
41. Focus on Community
•Automation is your friend (again)
• Contribution License Agreements (CLAs)
• PR reviews (“Is my PR changing the right things?”)
• Content build validation (“Is what I added causing issues?”)
• Test any inserted code (“Does it build?”)
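The “does it build?” check can be approximated by extracting fenced code blocks from a Markdown page and compiling each one. This simplified, regex-based sketch is one plausible approach, not the actual bot:

```python
import re

# Pull ```python fenced blocks out of a Markdown page so CI can try to
# compile each one. A real pipeline would use a proper Markdown parser.
def extract_code_blocks(markdown, lang="python"):
    pattern = rf"```{lang}\n(.*?)```"
    return [m.strip() for m in re.findall(pattern, markdown, re.DOTALL)]

page = "Intro text\n\n```python\nprint('hello docs')\n```\n\nMore text.\n"
blocks = extract_code_blocks(page)
for snippet in blocks:
    compile(snippet, "<doc-sample>", "exec")  # raises SyntaxError if it doesn't build
```

Running this on every PR catches samples that were edited by hand and quietly stopped compiling, which is exactly the drift the slide describes.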
42. Overview
(Before → After)
Open Source Docs: No → Yes
Localization: Poor → 64 Languages
Mobile Support: None → Major platforms
Accessibility: Varied → Built-in
Content Location: Fragmented → Unified
Sample Testing: Sparse → Automated CI
API Docs: Manual → Automatic
Feedback: Varied, closed → GitHub
Analytics: Fragmented → Unified
Engineering: Duplicated → Shared
45. Handling Legacy Resources
• Mo’ sites, mo’ problems.
• Not as simple as shutting the old site down in favor of the new one.
• Content migration takes time – you will discover problems. A lot of problems.
• Redirection is important – customers don’t like broken links. Neither do search engines.
• Links are “baked into” products over years – you don’t want to break those.
46. Handling Legacy Resources
•You will inevitably get feedback that “old was better” – that’s not a cue to rebuild the old experience on the new site.
•Communication is important – set expectations.
•Habits die hard – it will take time for people to rely on new workflows.
#7: Way too often, documentation is treated as an afterthought: the product goes in a different direction, and then the documentation teams have to catch up.
#10: The user has to go to three different places to learn something basic – say, I want to build a website with a database. I have to go to asp.net for some fundamental web docs, then to the SQL site to learn about databases, then to some library sites to get more info about how to connect them together.
#18: URLs became easily “hackable” and discoverable.
#26: We are no longer just a Windows company – we want to make sure that we empower developers across different platforms, such as Python, Java, JavaScript and others. We follow the standard URL conventions to make sure that developers feel confident they can find the APIs they need quickly.
#27: We are no longer just a Windows company – we want to make sure that we empower developers across different platforms, such as Python, Java, JavaScript and others. We follow the standard URL conventions to make sure that developers feel confident they can find the APIs they need quickly.
#38: We were using GitHub before we bought GitHub.
#41: Customers want to see that their feedback is being acted upon. They want to see that we are taking action and it’s not just going to some noreply mailbox.