As presented at the San Francisco Drupal Users Group: https://www.meetup.com/SFDUG-San-Francisco-Drupal-Users-group/events/241098139/
Mobile & Desktop Cache 2.0: How To Create A Scriptable Cache (Blaze Software Inc.)
This document provides an overview of how to build a scriptable cache for mobile and desktop applications. It discusses:
- The benefits of a scriptable cache, such as improved performance and the ability to implement advanced optimizations.
- A six-step process for building a basic scriptable cache using localStorage and dynamically loading/storing resources.
- Additional techniques like handling errors, tracking cache state and size, and implementing an LRU cache.
The document is intended to introduce the concept of a scriptable cache, but it notes that implementing one is not trivial and requires modifications to HTML and resources. Pseudocode is provided but may contain errors and does not cover all cases.
There are many ways to optimize your website, and it’s hard to know where to start. In this webinar we’ll show you five top performance optimizations and explain how each will impact your load time and load order. We’ll also share tips and tricks on how to apply each, since the devil’s in the details. We’ll focus on the following five optimizations:
* Domain Sharding
* Consolidation
* Inlining
* Predict Head
* Asynchronous JavaScript Loading
This document discusses caching strategies and techniques. It covers when and what to cache, including entire pages, page fragments, and data. It also discusses different caching mechanisms like file system, database, and in-memory caching and their pros and cons. It provides guidance on managing cache expiration policies and invalidating cached content.
Tulsa Tech Fest 2010 - Web Speed and Scalability (Jason Ragsdale)
This document provides an overview of techniques for building scalable and high performance websites, including definitions of scalability, approaches to avoiding failure, load balancing, caching, and tools for analyzing website speed such as YSlow and PageSpeed. Specific techniques discussed include horizontal and vertical scalability, monitoring, release cycles, fault tolerance, static content delivery, memcached, and APC caching.
Reverse Proxy & Web Cache with NGINX, HAProxy and Varnish (El Mahdi Benzekri)
Discover the wide world of web servers. In addition to basic web content delivery, we will cover reverse proxying, resource caching, and load balancing.
Nginx and Apache HTTPD will be used as web servers and reverse proxies, and to illustrate some caching features we will also present Varnish, a powerful caching server.
To introduce load balancers, we will compare Nginx and HAProxy.
Cache Sketches: Using Bloom Filters and Web Caching Against Slow Load Times (Felix Gessert)
The document discusses using Bloom filters and cache sketches to enable caching of dynamic content across the web caching hierarchy. A cache sketch is a compact probabilistic data structure that allows clients and servers to track cache invalidations and revalidate cached data. This approach aims to keep cached data fresh while minimizing network requests. It could enable low-latency delivery of dynamic content from ubiquitous caches like content delivery networks and browsers.
Web caching provides several benefits including bandwidth savings, reducing server load, and decreasing network latency. It works by intercepting HTTP requests and checking a local cache for the requested object before going to the origin server. Different caching approaches include proxy caching, reverse proxy caching, transparent proxy caching, and hierarchical caching. New techniques like adaptive caching and push caching aim to dynamically optimize cache placement near popular content or users.
Scale your PHP web app to get ready for the peak season.
Useful information you might want to consider before scaling your application.
Slides as presented in my talk at PHP conference Australia in April 2016
The document discusses techniques for improving web performance, including reducing time to first byte, using content delivery networks and HTTP compression, caching resources, keeping connections alive and reducing request sizes. It also covers optimizing images, loading JavaScript asynchronously to avoid blocking, and prefetching content. The overall goal is to reduce page load times and improve user experience.
Content caching is one of the most effective ways to dramatically improve the performance of a web site. In this webinar, we’ll deep-dive into NGINX’s caching abilities and investigate the architecture used, debugging techniques and advanced configuration. By the end of the webinar, you’ll be well equipped to configure NGINX to cache content exactly as you need.
View full webinar on demand at http://nginx.com/resources/webinars/content-caching-nginx/
Building Lightning Fast Websites (for Twin Cities .NET User Group) (strommen)
1. A website is loaded by a browser through a multi-step process involving DNS lookups, TCP connections, downloading resources like HTML, CSS, JS, and images. This process can be slow due to the number of individual requests and dependencies between resources.
2. Ways to optimize the loading process include making the server fast, inlining critical resources, gzip compression, an optimized caching strategy, optimizing file delivery through techniques like CDNs and HTTP/2, bundling resources, optimizing images, avoiding unnecessary domains, minimizing web fonts, and JavaScript techniques like PJAX. Minifying assets can also speed up loading.
This document discusses improving the performance of a Magento e-commerce site. It identifies several key issues affecting performance, including slow PHP execution, unused modules, and inefficient image delivery. It also outlines changes made to address these problems, such as updating PHP, removing unnecessary modules, improving caching, and implementing performance testing. With these changes, page load times were significantly reduced and conversion rates increased.
These slides show how to reduce latency on websites and reduce bandwidth for improved user experience.
Covering network, compression, caching, etags, application optimisation, sphinxsearch, memcache, db optimisation
A common request sent from your web browser to a web server goes quite a long way and it can take a great deal of time until the data your browser can display are fetched back. I will talk about making this great deal of time significantly less great by caching things on different levels, starting with client-side caching for faster display and minimizing transferred data, storing results of already performed operations and computations and finishing with lowering the load of database servers by caching result sets. Cache expiration and invalidation is the hardest part so I will cover that too. Presentation will be focused mainly on PHP, but most of the principles are quite general work elsewhere too.
Nginx is a web server that is faster, uses less memory and is more stable than Apache under load. It is better suited for Rails applications and cloud computing. Nginx acts as a proxy, routing requests to application servers. It can perform request filtering, like caching requests, and authentication checks without modifying Rails application code using custom Nginx modules. This allows separating infrastructure concerns from application logic.
Scaling and Hardware Provisioning for Databases (Lessons Learned at Wikipedia) (Jaime Crespo)
At the Wikimedia Foundation (host of Wikipedia and many other open collaborative projects) we work on a limited budget, donated by our many generous donors. Like many other companies that are not Facebook- or Google-sized, we have to do more with less, both in budget and in our small number of Ops, in order to serve over 400,000 requests per second and 1.2 billion monthly users. We made several mistakes (and had a few successes) along the road regarding architecture and hardware decisions, especially for the distributed database components: storage model, hardware chosen, server size, technology adoption, etc. Now we want to share those with you.
Drupal performance optimization best practices include:
- Disabling unused modules and cron on production to reduce overhead
- Configuring caching at the application level with modules like Boost and Memcache
- Optimizing server configuration through APC caching, CDN integration, browser caching, and cron job configuration
- Improving database performance by optimizing InnoDB settings and enabling the query cache
The document provides best practices for optimizing Drupal performance at the application, server, and database levels to reduce bottlenecks and improve load times.
Website & Internet + Performance Testing (Roman Ananev)
This presentation covers how a site works on the Internet and what happens when you open it in your browser: what happens under the hood of the server and the browser.
How to measure the performance of a CS-Cart project simply and without technical knowledge :) And of course, why all the online performance-testing services lie, or don't provide a clear view ;)
https://www.simtechdev.com/cloud-hosting
---
Cloud hosting for CS-Cart, Multi-Vendor, WordPress, and Magento
by Simtech Development - AWS and CS-Cart certified hosting provider
free installation & migration | free 24/7 server monitoring | free daily backups | free SSL | and more...
Optimizing web page performance involves minimizing round trips, request sizes, and payload sizes. This includes leveraging browser caching, combining and minifying assets, gzip compression, and optimizing images. Developer tools can identify optimization opportunities like unused resources and suggest techniques for faster loading and rendering.
In this short presentation, Subhash Yadav of Valuebound explains “Caching in Drupal 8.” A cache is a collection of data of the same type stored in a device for future use. Caches are found at every level of a content's journey from the original server to the browser.
This document discusses various techniques for optimizing frontend performance, including:
1. Using hardware, backend, and frontend optimizations like combined and minified files, CSS sprites, browser caching headers, and content delivery networks.
2. Analyzing performance with tools like Firebug, YSlow, and Google Page Speed to identify opportunities.
3. Specific techniques like gzipping, avoiding redirects, placing scripts at the bottom, and making Ajax cacheable can improve performance.
Did you know that 80% to 90% of the user's page-load time comes from components outside the firewall? Optimizing performance on the front end (i.e., from the client side) can enhance the user experience by reducing the response times of your web pages and making them load and render much faster.
This document provides an overview of Memcached, a simple in-memory caching system. It discusses what Memcached is, how and when it should be used, best practices, and an example usage. Memcached stores data in memory for fast reads and can distribute data across multiple servers. It is not meant as a database replacement but can be used to cache database query results and other computationally expensive data to improve performance. The document outlines how Memcached was used by one company to cache large amounts of data and speed up processing to under 50ms by moving from MySQL to a Memcached distributed cache.
Make Drupal Run Fast - Increase Page Load Speed (Promet Source)
What does it mean when someone says “My Site is slow now”? What is page speed? How do you measure it? How can you make it faster? We’ll try to answer these questions, provide you with a set of tools to use and explain how this relates to your server load.
We will cover:
- What is page load speed? Tools used to measure the performance of your pages and site. Six key improvements to make Drupal “run fast”:
++ Performance module settings and how they work
++ Caching – the biggest gainer – and how to implement Boost
++ Other quick hits: offloading search, tweaking settings & why running cron is important
++ Ask your host about APC and how to make sure it’s set up correctly
++ Dare we look at the database? Easy changes that will help a lot!
- Monitoring best practices – what to set up to make sure you know what is going on with your server. What if you get slashdotted? Recommendations on how to quickly take cover from a rhino.
JavaScript news in December 2017 edition:
+ Kill Internet Explorer
+ Google Chrome 63 Released
+ How to Cancel Your Promise
+ Parcel
+ Turbo
+ Average Page Load Times for 2018
+ Vulnerable JavaScript Libraries
+ New theming API in Firefox
+ Bower is dead
+ Extension Tree Style Tab: Reborn
+ React v16.2.0
+ WebStorm 2017.3.1
+ The Best JavaScript and CSS Libraries for 2017
Web Performance Optimization - MercadoLibre (Pablo Moretti)
The document provides techniques and tools for improving web performance. It discusses how reducing response times can directly impact revenues and user experience. It then covers various ways to optimize the frontend, including reducing time to first byte through DNS optimization and caching, using content delivery networks, HTTP compression, keeping connections alive, parallel downloads, and prefetching. It also discusses optimizing images, JavaScript loading, and introducing new formats like WebP. The overall document aims to educate on measuring and enhancing web performance.
The 5 most common reasons for a slow WordPress site and how to fix them – ext... (Otto Kekäläinen)
Presentation given in WP Meetup in October 2019.
Includes fresh new tips from summer/fall 2019!
A must-read for all WordPress site owners and developers.
This document provides tips for optimizing website performance in order to improve loading speeds. It recommends using tools to analyze site speeds and calculate an optimization budget. Key optimizations include image optimization by choosing the right size and format, optimizing HTML, reducing HTTP requests by inlining JavaScript and combining files, minifying CSS and JavaScript, using a CDN, reducing time to first byte, avoiding redirects and errors, implementing caching, prefetching and preconnecting, optimizing web fonts, using GZIP compression, choosing a fast infrastructure, adopting HTTP/2, implementing hotlink protection, and serving scaled images. The document stresses that website speed is crucial because visitors have little patience and will leave slow sites.
What is Nginx and Why You Should Use It with WordPress Hosting (WPSFO Meetup Group)
Floyd Smith and the team from NGINX presented at the WordPress San Francisco MeetUp group in June 2016. In this presentation, he illustrated how NGINX can vastly improve your WordPress hosting performance.
In today’s systems, the time it takes to bring data to the end user can be very long, especially under heavy load. An application can often increase performance by using an appropriate caching system. There are many caching levels you can use in your application today: CDN, in-memory/local cache, distributed cache, output cache, browser cache, HTML cache.
Supercharge Application Delivery to Satisfy Users (NGINX, Inc.)
Users expect websites and applications to be quick and reliable. A slow user experience can have a significant impact on your business. Join us for this webinar where we will show you a number of ways you can use NGINX and other tools and techniques to supercharge your application delivery, including:
- Client Caching
- Content Delivery Networks (CDN)
- OCSP stapling
- Dynamic Content Caching
View full webinar on demand at http://bit.ly/nginxsupercharge
Spreadshirt Techcamp 2018 - Hold until Told (Martin Breest)
The document discusses using content tagging and purging to improve caching strategies for dynamic content at the edge network. It describes how caching everything can lead to serving stale content. Instead, tagging content with surrogate keys allows caching both dynamic and static content, while purging specific resources by tag when they change. This provides better performance than low expiry caching while maintaining freshness. Purging is fast through the Fastly API. Tag-based purging allows invalidating multiple related resources at once from the edge cache.
This document provides tips and best practices for optimizing Magento performance. It discusses the importance of caching, both full page caching and object caching using Redis or Memcache. It also recommends using a CDN, PHP accelerators like OpCache, and monitoring tools like New Relic and Google Analytics to analyze site performance. The key sections discuss optimizing categories, product pages, and checkout through extensive caching and techniques like image optimization.
This document discusses best practices for using WordPress in an enterprise setting. It covers topics like caching, database queries, browser performance, maintainability, security, third party code, and team workflows. The presentation was given by Taylor Lovett, who is the Director of Web Engineering at 10up and a WordPress plugin creator and core contributor.
This document discusses various performance-related topics in SharePoint including latency, throughput, resource throttling, monitoring, and hardware requirements. It provides definitions of latency and throughput. It discusses tools for monitoring like the SharePoint Log Viewer. It also lists minimum hardware requirements for SharePoint 2010 and SQL Server.
Migration Best Practices - SEOkomm 2018 (Bastian Grimm)
My talk from SEOkomm 2018 in Salzburg covering best practices on how to successfully navigate the various types of migrations (protocol migrations, frontend migrations, etc.) from an SEO perspective, mainly focusing on all things tech.
Service workers can improve network resilience by caching content to reduce trips to the server. Workbox is a set of libraries and tools that help generate service workers using best practices. It can integrate with build tools like Webpack. Caching layers like Redis can also be used in front of databases to provide redundancy and faster requests while protecting databases. Redis is an open source in-memory data store that can be used as a cache. The demos showed how to implement these caching strategies to improve performance.
Demystifying web performance tooling and metricsAnna Migas
Web performance has been one of the most talked about web development topics in the recent years. Yet if you try to start your journey with the speed optimisations, you might find yourself in a pickle. With the tooling, you might feel overwhelmed—it looks complex and hard to comprehend. With the metrics: at first glance all of them seem similar, not to mention that they change over time and you cannot figure out which of them to take into account.
Capacity Planning Infrastructure for Web Applications (Drupal) (Ricardo Amaro)
In this session we will try to solve a couple of recurring problems:
Site Launch and User expectations
Imagine a customer who provides a set of hardware requirements, sets a date, and launches the site, but then forgets to mention that they have sent out thousands of emails to half the world announcing the new website launch! What do you think will happen?
Of course launching a Drupal Site involves a lot of preparation steps and there are plenty of guides out there about common Drupal Launch Readiness Checklists which is not a problem anymore.
What we are really missing here is a Plan for Capacity.
Varnish Cache is a web application accelerator that can speed up websites. It works by caching content and serving it to subsequent requests, reducing load on backend servers. The document outlines 9 steps to implement Varnish Cache, starting with easy steps like caching static assets and compression, then progressing to more complex techniques like caching semi-static content, graceful degradation, and advanced invalidation methods using custom headers. Implementing the initial steps provides minor speed improvements, while fully utilizing Varnish Cache through techniques like content composition and invalidation can yield high performance gains.
Optimizing a WordPress site can improve page speed and user experience. A speed test identifies issues like large images, unnecessary JavaScript, and third-party plugins as potential problems. Solutions include image optimization and sprites, JavaScript consolidation and proper placement, code compression, caching, and reducing third-party assets. With these optimizations, a site can improve its speed grade from a D to an A.
Scalable caching in Drupal is broken. Once cache access saturates a network link, the main options are Memcache sharding (which has broken coherency during and after network splits) and Redis clustering (immature in multi-master and as complex as MySQL replication in master/replica modes).
We can do better. We can have better performance, scale, and operational simplicity. We just need to take a lesson from multicore processor architectures and their use of L1/L2 caches. Drupal doesn't even need full-scale coherency management; it just needs the cache writes on an earlier request to be guaranteed readable on a later request.
Container Security via Monitoring and Orchestration - Container Security Summit (David Timothy Strauss)
Security is a basic requirement of modern applications, and developers are increasingly using containers in their development work. In this presentation, we explore the basic components of secure design (preparation, detection, and containment), how containers facilitate that work today (verification), and how container orchestration ought to support models of the future, especially ones that are hard to roll manually (PKI).
How vulnerable are your systems after the first line of defense? Do attackers get a stronger foothold after each compromise? How valuable is the data your systems can leak?
“Death Star” security describes a system that relies entirely on an outermost security layer and fails catastrophically when breached. As services multiply, they shouldn’t all run in a single, trusted virtual private cloud. Sharing secrets doesn’t scale either, as systems multiply and partners integrate with your product and users.
David Strauss explores security methods strong enough to cross the public Internet, flexible enough to allow new services without altering existing systems, and robust enough to avoid single points of failure. David covers the basics of public key infrastructure (PKI), explaining how PKI uniquely supports security and high availability, and demonstrates how to deploy mutual authentication and encryption across a heterogeneous infrastructure, use capability-based security, and use federated identity to provide a uniform frontend experience while still avoiding monolithic backends. David also explores JSON Web Tokens as a solution to session woes, distributing user data and trust without sharing backend persistence.
A good written summary of the key talking points: https://www.infoq.com/news/2016/04/oreilysacon-day-one
This document provides an overview of using systemd to manage services effectively. It discusses defining service behavior and types, handling timeouts and failures, tightening security, and automating monitoring and management. The key steps outlined are to define expected behavior, plan for the unexpected, tighten security early, and automate monitoring. Various systemd directives and options are explained for tasks like controlling resources, dependencies, reloading, and failure handling.
Historically, sharing a Linux server entailed all kinds of untenable compromises. In addition to the security concerns, there was simply no good way to keep one application from hogging resources and messing with the others. The classic “noisy neighbor” problem made shared systems the bargain-basement slums of the Internet, suitable only for small or throwaway projects.
Serious use-cases traditionally demanded dedicated systems. Over the past decade virtualization (in conjunction with Moore’s law) has democratized the availability of what amount to dedicated systems, and the result is hundreds of thousands of websites and applications deployed into VPS or cloud instances. It’s a step in the right direction, but still has glaring flaws.
Most of these websites are just piles of code sitting on a server somewhere. How did that code get there? How can it be scaled? Secured? Maintained? It’s anybody’s guess. There simply isn’t enough SysAdmin talent in the world to meet the demands of managing all these apps with anything close to best practices without a better model.
Containers are a whole new ballgame. Unlike VMs, you skip the overhead of running an entire OS for every application environment. There’s also no need to provision a whole new machine to have a place to deploy, meaning you can spin up or scale your application with orders of magnitude more speed and accuracy.
Mixing performance, configurability, density, and security at scale has, historically, been hard with PHP. Early approaches have involved CGIs, suhosin, or multiple Apache instances. Then came PHP-FPM. At Pantheon, we've taken PHP-FPM, integrated it with cgroups, namespaces, and systemd socket activation. We use it to deliver all of our goals at unheard-of densities: thousands and thousands of isolated pools per box.
Watch how it's configured and see PHP-FPM pools start real-time to serve different Drupal sites as requests come into a server.
All of our tools for this are open-source and usable on your own virtual machines and hardware.
This document discusses PHP performance and security at scale on Pantheon. Key topics covered include socket activation to improve performance by starting services on demand, automated file system mounting to lazily load files, and cgroups to control resource usage. Pantheon also uses a customer experience monitor, non-disruptive migrations, and layers of isolation like users and namespaces for security. Demonstrations are provided of socket activation, automated mounting, handling contention with cgroups, and performing a non-disruptive OpenSSL fix.
Learn more about Pantheon at the Developer Open House
Presented by Kyle Mathews and Josh Koenig
Thursday, February 14th, 12PM PST
Sign up: http://tinyurl.com/a3ofpc2
(Title background is "View of the Valhalla near Regensburg" from the Hermitage Museum.)
The document discusses using Apache Cassandra as a highly available backend for DNS and request routing. It describes Cassandra's data replication capabilities and how its data model can be used to store DNS records in a way that provides for efficient lookups and eventual consistency. Code examples show how to model DNS records in Cassandra, insert, lookup, and delete records, and build a DNS server using Twisted that uses Cassandra as its backend data store.
This document provides an overview of designing and configuring scalable Drupal infrastructure. It discusses load distribution, analyzing traffic patterns, throughput methods, tools for scalability like Apache Solr and Varnish, planning infrastructure goals around redundancy, performance and manageability, and managing the cluster ongoing including deployment, system configuration, and monitoring.
The document summarizes a presentation by David Strauss on designing, scoping, and configuring scalable LAMP infrastructure. The presentation covers analyzing traffic patterns to predict peak loads, understanding how to distribute load across servers, and making assumptions about infrastructure such as having root access and separate web and database servers.
Cassandra can be used for queuing in situations where:
1) Messages have different delivery importance and most need to reach consumers at least once.
2) The volume of messages is too high for a single node queue to handle.
3) Latency can be high since queues require polling rather than push delivery.
Cassandra allows specifying consistency levels to indicate delivery requirements and shards queues across nodes for high throughput. However, it only provides optimistic locking and polling is needed rather than push delivery.
2. Pantheon.io
Defining Measurable Success
❏ Meet project requirements (e.g. blogging, ecommerce, HTTPS)
❏ Have a good time to first byte (TTFB)
❏ Accelerates requests for other resources
❏ Better search ranking
❏ Have a good time to first paint (TTFP)
❏ Better user experience and conversion rates
❏ Stay online during load spikes (no timeouts or errors)
3. “Are you sure you have a problem?”
Step One: Triage
4. Pantheon.io
Meeting Project Requirements
It’s important to establish project goals early. These needs can affect performance as well as which optimizations are possible.
❏ HTTPS
❏ To browser or end-to-end?
❏ Needs an EV certificate?
❏ Compliance
❏ Where can data be cached?
❏ Dynamic Pages
❏ Which features require them?
❏ How often are they used?
5. Pantheon.io
Know When Performance is Good Enough
The more abundant and complex your sites, the more you need to pick your battles.
“...a clear correlation was identified between decreasing search rank and increasing time to first byte.” —“How Website Speed Actually Impacts Search Ranking,” Moz, 2013
Good enough: TTFB < 400 ms
“If your website takes longer than three seconds to load, you could be losing nearly half of your visitors...” —“How Page Load Time Affects Conversion Rates: 12 Case Studies,” HubSpot, 2017
Good enough: TTFP < 2.4 s
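Each threshold is easy to spot-check from the command line. A minimal sketch using PHP’s cURL bindings to measure TTFB (the URL is a placeholder; this is no substitute for WebPageTest or New Relic):
$ch = curl_init('https://example.com/'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // don't echo the body
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
curl_exec($ch);
// Seconds from request start until the first byte arrived.
$ttfb = curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME);
curl_close($ch);
printf("TTFB: %.0f ms (target: under 400 ms)\n", $ttfb * 1000);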
8. Pantheon.io
Revisit Measures of Success
❏ Does the site meet business value requirements?
❏ Is the TTFB good enough?
❏ Is the TTFP good enough?
❏ Is the site staying online?
Don’t create unnecessary work for yourself.
9. “So, I take it that things aren’t great?”
Step Two: Diagnosis
10. Pantheon.io
Let’s Assume You Have a Basic Stack
[Diagram: Site visitor → web server (with Drupal page cache) → database and object cache]
Q: How do we know what to add or optimize?
A: With science!
12. Pantheon.io
Where’s the Bottleneck?
From frontend to deep in the backend:
❏ Review scores in WebPageTest.org.
❏ Review regional load times in Pingdom.
❏ Review slow transactions in New Relic.
❏ Configure and download PHP slow logs.
❏ Profile slow pages using New Relic, APD, Xdebug, XHProf, or BlackFire.
❏ Configure and download MySQL slow query logs.
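As an example of the last profiling step, with the XHProf extension installed a slow page can be profiled by wrapping the code under test; a minimal sketch (the profiled call is a stand-in, not Drupal itself):
xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY); // assumes ext-xhprof
usleep(100000); // stand-in for the code under test (e.g. Drupal's index.php)
$profile = xhprof_disable();
// Entries are "parent==>child" pairs with wall time ('wt') in microseconds.
uasort($profile, function ($a, $b) { return $b['wt'] - $a['wt']; });
foreach (array_slice($profile, 0, 10, true) as $call => $stats) {
  printf("%-50s %8d us\n", $call, $stats['wt']);
}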
13. Pantheon.io
Symptoms: Resource Bottleneck
● Good TTFB, Bad TTFP
● Recursive Resource Dependencies
⌾ Look for: Cascading bars on WebPageTest.org before the “Start Render” marker
⌾ Example: JavaScript files with multiple chained dependencies
● Huge Resources
⌾ Look for: Long bars for items on WebPageTest.org before the “Start Render” marker
⌾ Example: Multi-megabyte images
● Blocking, External Resources
⌾ Look for: Many domains for items on WebPageTest.org before the “Start Render” marker
⌾ Examples: Analytics Tools, Web Fonts, Chat Tools, Marketing Optimization Tools
[Screenshot: cascading bars in a WebPageTest waterfall]
14. Pantheon.io
Symptoms: Database Bottleneck
● Bad TTFB
● Database Timeout Errors
● Slow Page Loads for Authenticated Users
● Slow Queries
⌾ Look for: Queries to non-cache tables in the MySQL Slow Query Log
⌾ Example: Uncached Views
15. Pantheon.io
Symptoms: Object Cache Bottleneck
● Bad TTFB
● Timeout Errors
● Slow Page Loads for Anyone
● Heavy Object Cache Queries
⌾ Look for: Heavy aggregate queries to the non-page cache tables in New Relic
● Heavy Network Egress from the Database Server
16. Pantheon.io
Symptoms: Page Cache Bottleneck
● Consistently Bad TTFB
⌾ Look for: On the “Summary” tab of WebPageTest.org, even second and later runs have a long bar for request #1.
● Slow Page Loads for Anonymous Users
● Heavy Page Cache Queries
⌾ Look for: Heavy aggregate queries to the page cache tables in New Relic
● Overloading with Cacheable Requests
⌾ Look for: Many GET requests to the same URLs in web server logs from different IPs
17. “What do I do about my bottleneck?”
Step Three: Treatment
18. Pantheon.io
Treatment: Resource Bottleneck
● Cache-Based Treatments
⌾ Deploy a CDN to cache resources closer to site visitors.
⌾ Optimize Drupal’s image styles to create files optimized for their use. (Drupal’s image style system is, at heart, a cache of images processed in various ways.)
● Non-Cache Treatments
⌾ Deploy HTTP/2 (easiest via CDN) to improve parallelism.
⌾ If no HTTP/2, aggregate CSS and JS to allow fewer round trips.
⌾ Move where resources load to make them non-blocking (and loaded after first paint).
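In Drupal 8, the aggregation treatment is a one-line override per asset type in settings.php; a minimal sketch:
// Force CSS/JS aggregation on: fewer, larger files mean fewer round trips
// when HTTP/2 is unavailable.
$config['system.performance']['css']['preprocess'] = TRUE;
$config['system.performance']['js']['preprocess'] = TRUE;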
19. Pantheon.io
Treatment: Database Bottleneck
● Cache-Based Treatments
⌾ Move object caching out of the database (or otherwise reduce the load).
⌾ Move page caching to a layer in front of the web server (as a proxy or CDN).
⌾ Get the InnoDB buffer pool as big as possible.
⌾ MySQL’s query cache can actually be too big. The bigger it is, the more overhead there is for changing data. While Drupal 7 relied heavily on this cache (for the “system” table), Drupal 8 does not.
● Non-Cache Treatments
⌾ Out of scope for today
20. Pantheon.io
Treatment: Object Cache Bottleneck
● Drupal 8 ships a “null” backend. It’s sometimes useful in production:
$settings['container_yamls'][] = DRUPAL_ROOT . '/sites/development.services.yml';
● If you use a CDN or proxy cache, don’t cache pages:
$settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.null';
● If the site mostly has anonymous users and certain bins mostly get used to generate pages-that-will-be-cached, don’t cache those bins:
$settings['cache']['bins']['render'] = 'cache.backend.null';
● If using an external cache (Redis/memcached), use a sensible size:
⌾ In Redis, using too large of a cache size will cause snapshots to bottleneck.
⌾ Drupal shouldn’t need more than 1GB of cache. Going larger can be less efficient.
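For completeness, moving the object cache out to Redis looks roughly like this in settings.php, assuming the contrib redis module and the PhpRedis extension are installed (the host value is illustrative):
$settings['redis.connection']['interface'] = 'PhpRedis';
$settings['redis.connection']['host'] = '127.0.0.1';
$settings['cache']['default'] = 'cache.backend.redis';
On the Redis side, capping memory near the 1GB mark (maxmemory with an LRU eviction policy) matches the sizing advice above.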
21. Pantheon.io
Treatment: Page Cache Bottleneck
● Move page caching in front of the web server, ideally to a CDN.
⌾ Deploy Varnish in front of Drupal or use a CDN with an origin shield.
● Configure Drupal to allow page caching for at least 10 minutes.
● Ensure repeated, anonymous requests for the same page start “hitting.”
⌾ Look for: Responses with “Cache-Control” headers having a defined “max-age” without “private” or “no-store.”
⌾ Look for: Responses with “Age” headers with numbers more than zero.
⌾ Look for: Responses with CDN-specific headers showing a “hit.”
⌾ Look for: Responses without “Set-Cookie” headers.
⌾ Look for: Responses with “Vary” containing no more than “cookie,” “accept-encoding,” and “accept-language.” Other things can be very harmful to cache hit rates.
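The ten-minute floor is a single override in settings.php; a minimal sketch:
// Anonymous page-cache lifetime of 10 minutes. Drupal emits this as
// "Cache-Control: max-age=600, public", which proxies and CDNs honor as a TTL.
$config['system.performance']['cache']['page']['max_age'] = 600;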
23. “What if that’s not enough?”
Step Four: Advanced Page Caching
24. Pantheon.io
Does Your Site Suffer From...
❏ Downtime when the entire CDN or proxy cache gets cleared?
❏ Frustrating tradeoffs between delivering pages that are fast versus fresh?
❏ Do you want to crank Drupal’s page cache time up but fear the consequences?
❏ Frequent, manual cache clearing to get new content out?
❏ Inconsistent content: Some pages show what’s new but other pages don’t?
❏ Load times that are sometimes great but awful when the cache misses?
❏ Good control of your CDN or proxy but stale browser caches?
❏ Heavy loads while different proxies or CDN POPs warm themselves after some cache clearing?
25. Pantheon.io
...Then You Need Smarter Page Caching
In the world of Varnish (and Fastly):
● stale-while-revalidate
● stale-if-error
● Surrogate-Control
● Surrogate-Key
26. Pantheon.io
C-C: stale-while-revalidate=SECONDS
● Semi-Standardized: Part of Informational RFC 5861
● Directive goes into the Cache-Control header.
⌾ SECONDS sets the time it’s usable after it expires.
● Built on the “grace mode” capabilities of Varnish.
● Allows the page cache to “hit” stale content.
● Triggers an asynchronous refresh of the content in the background.
27. Pantheon.io
C-C: stale-if-error=SECONDS
● Semi-Standardized: Part of Informational RFC 5861
● Mostly similar to stale-while-revalidate.
● Used to return stale content instead of an error when the backend is inaccessible or returning errors.
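Drupal core doesn’t emit either directive, but a small response subscriber can append them to cacheable responses. A hedged sketch for a custom module (the module name, priority, and TTLs are illustrative, and the class still needs a services.yml entry tagged as an event_subscriber):
namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

class StaleCacheSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    // Run late so we amend the Cache-Control header core has already set.
    return [KernelEvents::RESPONSE => ['onResponse', -100]];
  }

  public function onResponse(FilterResponseEvent $event) {
    $headers = $event->getResponse()->headers;
    // Only touch responses the page cache would store anyway.
    if ($headers->hasCacheControlDirective('max-age')) {
      // Serve stale for up to an hour while refreshing in the background...
      $headers->addCacheControlDirective('stale-while-revalidate', 3600);
      // ...and for up to a day when the backend is down or erroring.
      $headers->addCacheControlDirective('stale-if-error', 86400);
    }
  }

}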
28. Pantheon.io
Surrogate-Control: max-age=SECONDS
● Semi-Standard: Part of W3C’s Edge Architecture Specification
● Same syntax as Cache-Control
● Used instead of Cache-Control by some CDNs when present
● Stripped before the response leaves the CDN
● Allows storing things for different durations in the CDN and browser cache
⌾ Mostly useful for retaining things a long time in the CDN and explicitly invalidating them
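For instance, a page can be kept in the CDN for a day while browsers revalidate every minute; a sketch of setting both headers on a Symfony response (values are illustrative):
use Symfony\Component\HttpFoundation\Response;

$response = new Response('<html>...</html>');
// Browsers: cache for only 60 seconds.
$response->headers->set('Cache-Control', 'public, max-age=60');
// CDN: cache for 24 hours; the CDN strips this header before responding.
$response->headers->set('Surrogate-Control', 'max-age=86400');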
29. Pantheon.io
Surrogate-Key: frontpage node-1
● Non-Standard: Only in Varnish (with xkey) and Fastly
⌾ Equivalents exist for Akamai, Cloudflare (Enterprise-only), and KeyCDN
● Space-delimited list of keys identifying ingredients of the page
● Allows later, explicit invalidation of cached pages with updated content.
● Drupal 8 makes this easy because it has widespread cache tags we can repurpose as page keys.
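A hedged sketch of that repurposing, reusing the onResponse() subscriber pattern shown earlier (contrib modules such as the Fastly and Purge integrations do this more completely):
use Drupal\Core\Cache\CacheableResponseInterface;

$response = $event->getResponse();
if ($response instanceof CacheableResponseInterface) {
  // Drupal 8 cache tags, e.g. ["node:1", "node_list", "user:3"].
  $tags = $response->getCacheableMetadata()->getCacheTags();
  // Space-delimited page keys; later, purge by key (e.g. "node:1") when that
  // node changes, and every cached page built from it is invalidated at once.
  $response->headers->set('Surrogate-Key', implode(' ', $tags));
}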