Metrics, metrics everywhere (but where the heck do you start?) - Tammy Everts
You want a single, unicorn metric that magically sums up the user experience, business value, and numbers that DevOps cares about, but so far, you're just not getting it. So where do you start? In this talk at the 2015 Velocity conference in Santa Clara, Cliff Crocker and I walked through various metrics that answer performance questions from multiple perspectives -- from designer and DevOps to CRO and CEO.
Measuring What Matters - Fluent Conf 2018 - Cliff Crocker
Cliff Crocker discusses best practices for measuring what matters and applying an understandable methodology that achieves what we are all after: happier users.
This document discusses ways to improve web performance for mobile users. It outlines goals like achieving a speed index between 1,100-2,500 and first meaningful paint within 1-3 seconds. Various techniques are presented for hacking first load times, data transfer, resource loading, images and user experience. These include avoiding redirects, using HTTP/2 and service workers, modern cache controls, responsive images, preloading resources, and ensuring consistent frame rates. The overall message is that mobile performance needs more attention given average load times and high bounce rates on slow mobile sites.
Metrics, metrics everywhere (but where the heck do you start?) - Tammy Everts
There’s no one-size-fits-all approach to metrics. In this session, Cliff Crocker and I walk through various metrics that answer performance questions from multiple perspectives — from designer and DevOps to CRO and CEO. You’ll walk away with a better understanding of your options, as well as a clear understanding of how to choose the right metric for the right audience.
Metrics, Metrics Everywhere (but where the heck do you start?) - SOASTA
Not surprisingly, there’s no one-size-fits-all performance metric (though life would be simpler if there were). Different metrics will give you different critical insights into whether or not your pages are delivering the results you want — both from your end user’s perspective and ultimately from your organization’s perspective. Join Tammy Everts, and walk through various metrics that answer performance questions from multiple perspectives. You’ll walk away with a better understanding of your options, as well as a clear understanding of how to choose the right metric for the right audience.
A Modern Approach to Performance Monitoring by Cliff Crocker, VP of Product Management, SOASTA
"How fast are you? How fast should you be? How do you get there? In this talk Cliff will discuss traditional approaches to performance measurement and introduce a ""RUM First"" methodology. This approach begins with capturing performance directly from the end user as the single source of truth for cross-functional organizations focused on performance.
Along the way, you will discover the relationship between RUM and synthetic monitoring, learn what to measure and how to capture it and finally how perceived performance impacts human behavior and your bottom line.
Akamai Edge is the premier event for Internet innovators, tech professionals and online business pioneers who together are forging a Faster Forward World. At Edge, the architects, experts and implementers of the most innovative global online businesses gather face-to-face for an invaluable three days of sharing, learning and together pushing the limits of the Faster Forward World. Learn more at: http://www.akamai.com/edge
17 Web Performance Metrics You Should Care About - Evgeny Tsarkov
This document discusses 17 key web performance metrics across four categories: front-end user experience metrics, backend performance metrics, content complexity metrics, and advanced monitoring tips. It provides descriptions and average metrics for each, including time to title, time to start render, DNS time, connection time, asset weights, counts, and number of domains. The document emphasizes that measuring these metrics through continuous monitoring provides knowledge to optimize performance and improve the user experience. Advanced monitoring tips include setting service level agreements, defining performance issues, and automating alerts.
Velocity NY - How to Measure Revenue in Milliseconds - Cliff Crocker
Cliff from SOASTA and Steve from Staples discuss the three questions: How fast are you? How fast should you be? How do you get there? An overview of real world performance optimization and RUM.
Raiders of the Fast Start: Frontend Performance Archaeology - Performance.now... - Katie Sylor-Miller
Raiders of the Fast Start: Frontend Performance Archeology
There are a lot of books, articles, and online tutorials out there with fantastic advice on how to make your websites performant. It all seems easy in theory, but applying best practices to real-world code is anything but straightforward. Diagnosing and fixing frontend performance issues on a large legacy codebase is like being an archaeologist excavating the remains of a lost civilization. You don’t know what you will find until you start digging!
Pick up your trowels and come along with Etsy’s Frontend Systems team as we become archaeologists digging into frontend performance on our large, legacy mobile codebase. I’ll share real-life lessons you can use to guide your own excavations into legacy code:
What tools and metrics we used to diagnose issues and track progress.
How we went beyond server-driven best practices to focus on the client.
Which fixes successfully increased conversion, and which didn’t.
Our work, like all good archaeology, went beyond artifacts and unearthed new insights into our culture. We at Etsy pride ourselves on our culture of performance, but, like all cultures, it needs to adapt and reinvent itself to account for changes to the landscape. Based on what we’ve learned, we are making the case for a new, organization-wide, frontend-focused performance culture that will solve the problems we face today.
Edge 2016 what slows you down - your network or your device - akamaidevrel
For many years we have concentrated on back-end and network latency and ignored processing time on the user device. Today, we spend most of our time on small mobile devices; compared to laptops they are underpowered, and web performance suffers. While handheld devices get more powerful, web pages get more and more complex. In particular, the use of JavaScript libraries and CSS frameworks is computationally expensive. Based on data from real users, we quantify the relative importance of network and device for web performance. We also benchmark mobile devices and correlate their power to the web performance they achieve.
Edge 2016 service workers and other front end techniques - akamaidevrel
This document discusses using service workers and other front-end techniques to create a secure and optimal site. It describes how service workers can be used to control third-party content, such as by implementing client reputation strategies to block requests from untrusted sources. Examples are given of how service workers could maintain counters to throttle requests to third-party domains that exceed timeout thresholds, and serve cached or error responses when thresholds are exceeded. The document also discusses how service workers could be leveraged for offline analytics reporting and metric monitoring to reduce risks compared to traditional third-party JavaScript techniques.
Edge 2014: Increasing Control with Property Manager with eBay - Akamai Technologies
Increasing Control with Property Manager by Steve Lerner
Senior Member of Technical Staff, Network Engineering, eBay & Jay Sikkeland, Senior Technical Project Manager, Akamai Technologies
Property Manager, Akamai's next generation configuration tool, gives engineers a new level of control and visibility for Akamai configurations. Steve Lerner, Sr. Member of Technical Staff from eBay Network Engineering, will demonstrate advanced techniques available in Property Manager designed for new use cases in Content Delivery Network monitoring and control. He will also demonstrate how key Property Manager settings and Akamai services impact eBay's own metrics and performance.
Akamai Edge is the premier event for Internet innovators, tech professionals and online business pioneers who together are forging a Faster Forward World. At Edge, the architects, experts and implementers of the most innovative global online businesses gather face-to-face for an invaluable three days of sharing, learning and together pushing the limits of the Faster Forward World. Learn more at: http://www.akamai.com/edge
Real User Monitoring: Getting Real Data from Real Users in the Real World - S... - Akamai Technologies
Improvements to user experience translate directly to real business metrics and the bottom line. To guide the business to making wise choices on user experience, you need an accurate picture of site performance for real users. In this talk, Steve Lerner will describe how eBay’s performance monitoring strategy has evolved, how the insights gained from real user monitoring have impacted eBay’s business, and some of the considerations that have shaped their in house implementation of Real User Monitoring to serve eBay’s massive global scale. See Steve Lerner's Edge Presentation: http://www.akamai.com/html/custconf/edgetv-commerce.html#real-user-monitoring
The Akamai Edge Conference is a gathering of the industry revolutionaries who are committed to creating leading edge experiences, realizing the full potential of what is possible in a Faster Forward World. From customer innovation stories, industry panels, technical labs, partner and government forums to Web security and developers' tracks, there’s something for everyone at Edge 2013.
Learn more at http://www.akamai.com/edge
The document discusses Selendroid, a tool for automating tests on mobile web and native Android applications. It begins with an overview of Selenium and the need for mobile test automation. Selendroid is introduced as an open source tool that allows controlling Android devices and applications using the WebDriver protocol. It supports testing native and hybrid mobile applications as well as mobile web. Key features highlighted include compatibility with the JSON wire protocol, no app modification requirement, and support for gestures, grid infrastructure and an inspector tool.
Making Single Page (SPA) Faster was a presentation done at Velocity NY 2016
It covers 3 main points:
- selecting the right framework (performance oriented)
- best practices and optimizations
- monitoring
This document discusses re-using WebDriver-based tests for client-side performance analysis (CSPA). It covers the basics of CSPA, when to initiate CSPA, how CSPA relates to WebDriver tests, tools that can be used for CSPA including BrowserMob Proxy and dynaTrace, and examples of online services and desktop tools. References are provided for further information on CSPA and tools like YSlow, PageSpeed, GTmetrix and webpagetest.
Service workers your applications never felt so good - Chris Love
If you have not heard of service workers, you must attend this session. Service workers encompass new browser capabilities, along with a shiny new version of AJAX called Fetch. If you have ever wanted your web applications to offer native-app features, such as push notifications, service workers are the gateway to your happiness. Have you felt confused by application cache and going offline? Service workers enable offline experiences in a much cleaner way. But that is not all! If you want to see some of the cool new, advanced web platform features that you will actually use, come to this session!
https://love2dev.com/blog/what-is-a-service-worker/
Metrics are everywhere! We’ve done a great job of keeping pace with measuring the output of our applications, but how are we doing with measuring what really matters? This talk will explore the various metrics available to application owners today, highlight what’s coming tomorrow and level-set on the relative importance as it relates to the user experience.
Images have a high correlation to page load time. Optimizing image delivery through compression alone is a daunting task. Using HTTP2's superpowers, we can optimize images to ship faster, increasing the perceived performance and initiating users' emotional responses to visuals earlier. HTTP2-powered image delivery leads to lower bounce rates and higher conversions.
This document discusses the browser performance analysis tool dynaTrace. It provides an overview of dynaTrace's capabilities such as cross-browser diagnostics, code-level visibility, and deep JavaScript and DOM tracing. It also covers key performance indicators (KPIs) like load time, resource usage, and network connections that dynaTrace measures. Best practices for improving performance, such as browser caching, network optimization, JavaScript handling and server-side performance are outlined. The document aims to explain why and how dynaTrace can help users find and address web performance issues.
Velocity EU 2012 - Third party scripts and you - Patrick Meenan
The document discusses strategies for loading third-party scripts asynchronously to improve page load performance. It notes that the frontend accounts for 80-90% of end user response time and recommends loading scripts asynchronously using techniques like async, defer, and loading scripts at the bottom of the page. It also discusses tools for monitoring performance when third-party scripts are blocked.
Velocity NYC: Metrics, metrics everywhere (but where the heck do you start?) - Cliff Crocker
This document discusses various metrics for measuring website performance. It begins by noting that there are many metrics to consider and no single metric tells the whole story. It then discusses several key metrics for measuring different aspects of performance, including:
- Front-end metrics like start render, DOM loading/ready, and page load that can isolate front-end from back-end performance.
- Network metrics like DNS and TCP timings that provide insight into connectivity issues.
- Resource timing metrics that measure individual assets to understand impacts of third parties and CDNs.
- User timing metrics like measuring above-the-fold content that capture user experience.
It emphasizes the importance of considering real user monitoring data alongside synthetic tests.
Metrics, metrics everywhere (but where the heck do you start?) - SOASTA
This document discusses various metrics for measuring website performance and user experience. It outlines different types of metrics including:
- Network metrics like DNS resolution, TCP connection times, and time to first byte.
- Browser metrics like start render time, DOM loading/ready times, and page load times.
- Resource-level metrics obtained from the Resource Timing API like individual asset load times and response sizes.
- User-centric metrics like Speed Index, time to visible content, and metrics for single-page applications without traditional page loads.
It emphasizes the importance of measuring real user monitoring data alongside synthetic tests, and looking at higher percentiles rather than just averages due to variability in user environments and network conditions
Measuring CDN performance and why you're doing it wrong - Fastly
Integrating content delivery networks into your application infrastructure can offer many benefits, including major performance improvements for your applications. So understanding how CDNs perform — especially for your specific use cases — is vital. However, testing for measurement is complicated and nuanced, and results in metric overload and confusion. It's becoming increasingly important to understand measurement techniques, what they're telling you, and how to apply them to your actual content.
In this session, we'll examine the challenges around measuring CDN performance and focus on the different methods for measurement. We'll discuss what to measure, important metrics to focus on, and different ways that numbers may mislead you.
More specifically, we'll cover:
Different techniques for measuring CDN performance
Differentiating between network footprint and object delivery performance
Choosing the right content to test
Core metrics to focus on and how each impacts real traffic
Understanding cache hit ratio, why it can be misleading, and how to measure for it
RUM and synthetic monitoring each provide valuable but different performance data. RUM captures real user behavior but numbers vary greatly depending on user environments, while synthetic provides consistent baseline data but doesn't reflect real users. Both data sets are needed to understand a site's true performance across different user scenarios. There is no single performance number, and the right metrics depend on the intended audience and business goals.
improving the performance of Rails web Applications - John McCaffrey
This presentation is the first in a series on Improving Rails application performance. This session covers the basic motivations and goals for improving performance, the best way to approach a performance assessment, and a review of the tools and techniques that will yield the best results. Tools covered include: Firebug, yslow, page speed, speed tracer, dom monster, request log analyzer, oink, rack bug, new relic rpm, rails metrics, showslow.org, msfast, webpagetest.org and gtmetrix.org.
The upcoming sessions will focus on:
Improving sql queries, and active record use
Improving general rails/ruby code
Improving the front-end
And a final presentation will cover how to be a more efficient and effective developer!
This series will be compressed into a best of session for the 2010 http://windycityRails.org conference
The document discusses client side performance testing. It defines client side performance as how fast a page loads for a single user on a browser or mobile device. Good client side performance is important for user experience and business metrics like sales. It recommends rules for faster loading websites, and introduces the WebPageTest tool for measuring client side performance metrics from multiple locations. WebPageTest provides waterfall views, filmstrip views, packet captures and reports to analyze page load times and identify optimization opportunities.
Keys To World-Class Retail Web Performance - Expert tips for holiday web read... - SOASTA
As Walmart.com’s former head of Performance and Reliability, Cliff Crocker knows large scale web performance. Now SOASTA’s VP of products, Cliff is pouring his passion and expertise into cloud testing to solve the biggest challenges in mobile and web performance.
The holiday rush of mobile and web traffic to your web site has the potential for unprecedented success or spectacular public failure. The world’s leading retailers have turned to the cloud to assure that no matter what load, mobile and web apps will delight customers and protect revenue.
Join us as Cliff explores the key criteria for holiday web performance readiness:
Closing the gap in front- and back-end web performance and reliability
Collecting real user data to define the most realistic test scenarios
Preparing properly for the virtual walls of traffic during peak events
Leveraging CloudTest technology, as have 6 of 10 leading retailers
MeasureWorks - Why your customers don't like to wait! - MeasureWorks
My presentation at the Zycko breakfast session... About why your users don't like to wait and why you should care as a site owner. This presentation covers the importance of perception of speed, navigation and how to do proper performance monitoring...
Микола Ковш “Performance Testing Implementation From Scratch. Why? When and H... - Dakiry
This document discusses the importance of performance testing and provides an introduction to the topic. It notes that performance testing determines how a system behaves under different loads and helps identify bottlenecks. The document outlines why performance testing is important from a user experience perspective, discussing metrics like page load times and the financial costs of poor performance. It then covers various performance testing approaches, targets, levels, and common metrics used to evaluate performance.
Web Performance Internals explained for Developers and other stake holders.Sreejesh Madonandy
Web Performance Internals explained for Developers and others
1. Starting with how the Internet works
2. How the browser works
3. How to measure web performance
4. Concluding with tips for developers and power users on improving web performance
The document provides an overview of Daniel Austin's Web Performance Boot Camp. The class aims to (1) provide an understanding of web performance, (2) empower attendees to identify and resolve performance issues, and (3) demonstrate common performance tools. The class covers topics such as the impact of performance on business, definitions of performance, statistical analysis, queuing theory, the OSI model, and the MPPC model for analyzing the multiple components that determine overall web performance. Attendees will learn tools like Excel, web testing tools, browser debugging tools, and optional tools like R and Mathematica.
ThousandEyes provides network intelligence and monitoring of web performance. It offers different test types - HTTP server tests measure server response times, page load tests measure loading of full web pages in a browser, and web transaction tests measure performance of specific user interactions on a site. The tests provide metrics on response times, throughput, errors and performance of individual page components from different network locations and internet providers. The document recommends tips for optimizing web transactions such as adjusting timeouts, configuring start/stop steps, using XPath locators, and inserting wait conditions. It demonstrates creating and running page load, HTTP server and web transaction tests to monitor web performance.
From Zero to Performance Hero in Minutes - Agile Testing Days 2014 Potsdam - Andreas Grabner
As a tester, you need to level up. You can do more than functional verification or reporting response times.
In my Performance Clinic workshops I show you real-life examples of why applications fail and what you can do to find these problems when you are testing those applications.
I am using free tools for all of these exercises - especially Dynatrace, which gives full end-to-end visibility (browser to database). You can test and download Dynatrace for free @ http://bit.ly/atd2014challenge
Using Modern Browser APIs to Improve the Performance of Your Web Applications - Nicholas Jansma
This document discusses modern browser APIs that can improve web application performance. It covers Navigation Timing, Resource Timing, and User Timing which provide standardized ways to measure page load times, resource load times, and custom events. Other APIs discussed include the Performance Timeline, Page Visibility, requestAnimationFrame for script animations, High Resolution Time for more precise timestamps, and setImmediate for more efficient script yielding than setTimeout. These browser APIs give developers tools to assess and optimize the performance of their applications.
Thinking Beyond Core Web Vitals - Web performance optimisations for the harsh... - Anna Migas
Small web performance tweaks and optimisations might not make a difference for some users: there can be physical barriers that make it impossible to achieve a fast page load and smooth browsing. After working for over a year on a project directed towards emerging markets (namely Nigeria and Kenya), I came to realise that the popular web performance metrics are all centred around a specific type of person: someone who is used to a fast and reliable connection. But when the conditions are not ideal on a daily basis, what are our choices? In my talk we will chat about the improvements that will have real impact on the user experience for users browsing the web in harsh conditions. I will also share details about the background of users in Africa and how their perception might differ from the users we typically develop for.
MeasureWorks - Why people hate to wait for your website to load (and how to f... - MeasureWorks
My slides from DrupalJam 2014... About why users abandon your website and best practices to align content and speed to create a fast user experience, and continue to keep it aligned for every release
5. Synthetic 101
Synthetic monitoring (for purposes of this discussion) refers to the use of automated agents (bots) to measure your website from different physical locations.
• A set ‘path’ or URL is defined
• Tests are run either ad hoc or scheduled, and data is collected
6. RUM 101
Real User Measurement (RUM) is a technology for collecting performance metrics directly from the browser of an end user.
• Involves instrumentation of your website via JavaScript
• Measurements are fired across the network to a collection point through a small request object (beacon)
<JS> <beacon>
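To make the instrument-and-beacon idea concrete, here is a minimal sketch in plain JavaScript. The /rum-collector endpoint and the field names are hypothetical placeholders; real libraries such as boomerang do far more (SPA support, bandwidth estimation, error handling).

// Minimal RUM sketch: read timings from the browser and beacon them home.
window.addEventListener('load', function () {
  // Wait one tick so loadEventEnd is populated.
  setTimeout(function () {
    var t = performance.timing;
    var payload = JSON.stringify({
      url: location.href,
      ttfb: t.responseStart - t.navigationStart,
      load: t.loadEventEnd - t.navigationStart
    });
    // sendBeacon survives page unload; fall back to an image beacon.
    if (navigator.sendBeacon) {
      navigator.sendBeacon('/rum-collector', payload);
    } else {
      new Image().src = '/rum-collector?d=' + encodeURIComponent(payload);
    }
  }, 0);
});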
7. RUM
Cast a wide net
• Identify key areas of concern
• Understand real user impact
• Map performance to human behavior & $$
Synthetic
Diagnostic tool
• Identify issues in a ‘lab’/remove variables
• Reproduce a problem found with RUM
http://www.flickr.com/photos/84338444@N00/with/3780079044/
http://www.flickr.com/photos/ezioman/
8. The Early Days of RUM
• Round-trip time
• Start/stop timers via JavaScript
• Early contributors:
• Steve Souders/Episodes
• Philip Tellis/Boomerang.js
• Both widely in use today
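For illustration, the early round-trip approach can be sketched with nothing more than two timestamps; this shows the general pattern only, not the actual Episodes or boomerang code.

// Pre-Navigation-Timing round-trip timing: stamp a start time as early as
// possible in the page, then measure again at onload. Only covers what
// JavaScript can observe after it starts running.
var timerStart = Date.now();   // ideally inline at the top of <head>
window.addEventListener('load', function () {
  var roundTrip = Date.now() - timerStart;
  console.log('Approximate page time (ms):', roundTrip);
});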
17. Measuring Assets
• Strength of synthetic
• Full visibility into asset performance
• Images
• JavaScript
• CSS
• HTML
• A lot of which is served by third parties
• CDN!
20. CORS: Cross-Origin Resource Sharing
Timing-Allow-Origin = "Timing-Allow-Origin" ":" origin-list-or-null | "*"
Start/End time only unless Timing-Allow-Origin HTTP Response Header defined
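A rough way to see this restriction from JavaScript is to check whether the detailed timestamps came back as zero; the header shown in the comment is what the third-party host would need to send (the wildcard is just one possible value).

// Cross-origin resources without a Timing-Allow-Origin header expose only
// startTime and duration; the detailed timestamps are reported as 0.
// The third-party host would need to respond with, for example:
//   Timing-Allow-Origin: *
performance.getEntriesByType('resource').forEach(function (r) {
  var restricted = r.requestStart === 0 && r.responseStart === 0;  // heuristic
  console.log(r.name, restricted ? 'start/duration only' : 'full timings');
});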
21. Resource Timing
// Timing breakdown for a single resource (the name must match the entry exactly)
var url = 'http://www.akamai.com/images/img/cliff-crocker-dualtone-150x150.png';
var me = performance.getEntriesByName(url)[0];
var timings = {
  loadTime: me.duration,
  dns: me.domainLookupEnd - me.domainLookupStart,
  tcp: me.connectEnd - me.connectStart,
  waiting: me.responseStart - me.requestStart,
  fetch: me.responseEnd - me.responseStart
};
22. Resource Timing
• Slowest resources
• Time to first image
• Response time by domain
• Time a group of assets
• Response time by initiator type (element type)
• Cache-hit ratio for resources
For examples see: http://www.slideshare.net/bluesmoon/beyond-page-level-metrics
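As a concrete example of the first bullet above, a sketch that pulls the N slowest resources from the Resource Timing buffer might look like this (the function name and output shape are arbitrary):

// Slowest resources on the page, sorted by total duration.
function slowestResources(n) {
  return performance.getEntriesByType('resource')
    .sort(function (a, b) { return b.duration - a.duration; })
    .slice(0, n)
    .map(function (r) {
      return { name: r.name, duration: Math.round(r.duration) };
    });
}
console.table(slowestResources(5));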
23. Resource Timing
• PerfMap - https://github.com/zeman/perfmap
• Mark Zeman
• Waterfall.js - https://github.com/andydavies/waterfall
• Andy Davies
24. 1. How fast am I?
2. How fast should I be?
3. How do I get there?
25. Picking a Number
• Industry benchmarks?
• Apdex?
• Analyst report?
• Case studies?
26. “Synthetic monitoring shows you how you relate to your competitors, RUM shows you how you relate to your customers.”
– Buddy Brewer
31. Yahoo! - 2008
Increase of 400ms causes 5-9% increase in user abandonment
http://www.slideshare.net/stubbornella/designing-fast-websites-presentation
32. Shopzilla - 2009
A reduction in Page Load time of 5s increased site conversion 7-12%!
http://assets.en.oreilly.com/1/event/29/Shopzilla%27s%20Site%20Redo%20-%20You%20Get%20What%20You%20Measure%20Presentation.ppt
33. Walmart - 2012
http://minus.com/msM8y8nyh#1e
SF WebPerf – 2012
Up to 2% conversion drop for every additional second
38. How Fast Should You Be?
• Use synthetic measurement for benchmarking your competitors
• Understand how fast your site needs to be to reach business goals/objectives with RUM
• You must look at your own data
39. 1. How fast am I?
2. How fast should I be?
3. How do I get there?
43. Page Load Times
Platform    Median    95th Percentile    98th Percentile
Mobile      3.62s     12.53s             20.04s
Desktop     2.44s     9.31s              15.86s
44. Page Load Times
OS             Median    95th Percentile    98th Percentile
Windows 7      2.41s     9.29s              15.89s
Mac OS X/10    2.30s     8.11s              13.45s
iOS7           3.27s     10.64s             15.79s
Android 4      4.06s     14.30s             27.93s
iOS8           3.53s     11.54s             19.72s
Windows 8      2.67s     10.75s             18.74s
45. Other Factors
• Geography
• User Agent
• Connection Type
• Carrier/ISP
• Device Type
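Slicing RUM data along dimensions like these is mostly a grouping exercise. A sketch, assuming each beacon has been reduced to a record with a loadTime plus whatever dimensions you collect (the field names here are hypothetical):

// Group RUM samples by a dimension and report a rough median per group.
function medianBy(samples, dimension) {
  var groups = {};
  samples.forEach(function (s) {
    (groups[s[dimension]] = groups[s[dimension]] || []).push(s.loadTime);
  });
  return Object.keys(groups).map(function (key) {
    var sorted = groups[key].sort(function (a, b) { return a - b; });
    return { group: key, median: sorted[Math.floor(sorted.length / 2)] };
  });
}
// e.g. medianBy(samples, 'connectionType') or medianBy(samples, 'geo')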
46. Not All Pages are Created Equal
For a typical eCommerce site, conversion rate drops by up to 50% when “browse” pages increase from 1 to 6 seconds
47. Not All Pages are Created Equal
However, there is much less impact to conversion when “checkout” pages degrade
48. How Do I Get There?
• Focus on the highest value opportunities/demographics
• Identify key pages that have the most impact on your KPIs
• Prioritize based on reducing friction within the funnel or critical path
#3: Who am I?
Cliff Crocker
SOASTA as VP of Product where I focus on a Real User Measurement product called mPulse.
Working in performance for the last 10+ years.
Tools vendor, focused on synthetic monitoring and front-end performance consulting
Switched to the private sector to build the Performance and Reliability team at @walmartlabs
After getting a taste for RUM, sought out SOASTA to build a real user monitoring product called mPulse.
I work with a lot of companies, helping them transition to a new approach to performance monitoring. I’ve found that it’s helpful to break the conversation down into three parts.
#4: How fast am I?
- What metrics should I look at?
- What do these metrics mean?
- What is a good default metric?
- What additional components should I measure?
- What is the best reflection of the user experience?
How fast should I be?
- How fast is fast enough?
- How do I compare to others in my industry?
- How fast do I need to be to achieve my business goals?
How do I get there?
- Where do I focus to get the most value?
- Which demographics have the most impact on my business? Where should I start?
#5: For the purposes of simplifying this talk, when I discuss performance monitoring I’m looking specifically at measuring the front-end performance of an application.
Today, this is done two ways:
Synthetically: A simulation of the user experience
Real User: Measurement of the user through their browser or device.
There has been debate around which is “better” – I’m of the opinion you need both.
#6: A few quick definitions for this talk:
When I talk about synthetic monitoring, I’m speaking specifically about “Real Browser” technology.
The use of agents to ‘puppet’ a real browser, executing a defined set of actions and capturing/recording performance timing data for those targets.
#7: A very quick primer:
For those of you who need a definition, this is a description of what RUM is.
RUM aka:
Real User Monitoring
Real User Measurement
Real User Metrics
JavaScript executed in the browser to collect performance timing information about your website.
This data is then fired back to a collection point for processing in the form of a beacon (a request object used for transferring the payload).
#8: This is how I tend to think about RUM vs. Synthetic. Clearly key to have both.
#9: The Origins of RUM:
~2008 Steve Souders open sourced Episodes
~2010 Yahoo! open sourced Boomerang
Boomerang is the most widely used and supported library for RUM today
Supported by Philip Tellis (now of SOASTA)
Used on thousands of sites worldwide, and also the preferred library for tool vendors (such as SOASTA).
Prior to the IE9 release in 2010, only round-trip timing through use of JavaScript was supported. This was a great indication of perceived user experience, but lacked key metrics like network timing, Time to First Byte, DOM Complete, etc.
#10: On the same day that boomerang was open-sourced, IE9 Beta announced support for a new browser API called Navigation Timing.
This recommendation was introduced by the newly formed W3C Performance Working Group.
This now gave us the ability to get precise measurements for a number of performance timers, directly from the browser across the entire user population.
#11: Other browser vendors quickly followed suit in providing access to the Navigation Timing API.
With the exception of one…..
Safari (Desktop and iOS) was silent for the last 4 years.
This caused serious problems for the performance community.
The issue was not so much desktop Safari, since we had good representation from other browsers.
The glaring hole has been for mobile web sites due to iOS market share. Navigation timing metrics have only been available for Android 4….until now!
As of the release of iOS8 a few weeks ago and Safari 8 (any day now) Navigation timing is supported by all major browser vendors!
#12: The following are a few examples of what is now possible with RUM through use of the performance.timing object
DNS – domain lookup time
#13: TCP: The time it takes to establish a connection with the server
#14: Time to First Byte – sometimes referred to as server-time or backend time or time to first packet.
#15: Base Page: The time it takes to serve the base html document
#16: Front-End Time: Time FROM last byte to fully loaded
#17: Page Load time: Time from the start of navigation to the onload event (start or end, take your pick ;))
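One plausible mapping of notes #12 through #17 onto the (legacy) Navigation Timing timestamps, as a sketch; exact definitions vary by tool, so treat the arithmetic as illustrative rather than canonical. Run it after onload so the later timestamps are populated.

var t = performance.timing;
var metrics = {
  dns:      t.domainLookupEnd - t.domainLookupStart,   // #12 domain lookup
  tcp:      t.connectEnd - t.connectStart,             // #13 connection setup
  ttfb:     t.responseStart - t.navigationStart,       // #14 time to first byte
  basePage: t.responseEnd - t.responseStart,           // #15 base HTML delivery
  frontEnd: t.loadEventStart - t.responseEnd,          // #16 last byte to loaded
  pageLoad: t.loadEventStart - t.navigationStart       // #17 onload (start)
};
console.log(metrics);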
#18: OK… thanks for the history lesson, but I already knew all of that from 2010!
One of the major advantages that synthetic monitoring has had is the ability to capture details at the object level.
Your page is composed of many parts.
Assets include:
Images
JavaScript
CSS
Etc.
Third-party contributors can have significant impact on performance
CDN’s (like Akamai) are largely responsible for this content
#19: Enter resource timing.
Second(?) recommendation introduced by the newly formed W3C Performance Working Group.
Visibility (with component breakdown) into performance of assets
#20: Now supported by all major browser vendors…..except one….4 more years?
#21: One important note:
Start/End time only for assets served from a different origin. (including third-parties of course)
Unless that host defines the Timing-Allow-Origin HTTP response header -> encourage your partners to do this!
#22: Here is an example of how you would measure a single asset/resource from the browser using resource timing.
For examples see: http://www.slideshare.net/bluesmoon/beyond-page-level-metrics
#23: Here are some of the other examples discussed at Velocity and WebPerf days in NY a few weeks ago:
Slowest resources for a page
Time to first image
Time to product image for retailers (heavily used by our customers)
Response time by domain
Timing a group of assets (for timing twitter, facebook, etc.)
Response time by asset type
Cache-hit ratio for resources
The main take away is that with Navigation timing and resource timing together, there is a lot that RUM provides you to answer the question of “How Fast am I?”.
Demo of PerfMap:
#25: Now that you have what you need to understand how fast you are, the question comes of ‘how fast should you be?’.
#26: (Poll audience)
How many of you are setting front-end performance SLAs or goals today?
I’ve seen very few organizations that are doing this in practice, at least officially.
Those that are often look at these resources:
Industry benchmarks provided by Synthetic Vendors (great insight)
Apdex (http://en.wikipedia.org/wiki/Apdex): Open standard created by a collaboration of companies used to report and compare a standard of reporting between applications.
Analysts reports: Forrester and Akamai (great paper)
Other case studies from Velocity Conference, THIS conference or tools vendors
#27: Starting with competitive benchmarking – here’s a quote from a friend and co-worker Buddy Brewer
#28: Synthetic offerings like Keynote, Gomez and SpeedCurve have specialized competitive indices that show you just how you relate to your competition.
This is a great use of synthetic measurement; until we can find a way to hack into all of our competitors’ websites to collect RUM data, it’s really the only viable option.
#29: That said, we collect and report high-level statistics based on RUM that can be found at http://soasta.com/mpulseux
#30: One thing to take note of however, is that it isn’t just about comparing page load time.
Focus instead on other metrics like http request count, image count and size, JavaScript count
Other great metrics you can gather are:
Speedindex, introduced in WebPagetest which allows you to compare the visual completeness of a page
Start Render also in WPT
PageSpeed Score/Yslow score
Don’t just make it about speed; try to understand what your competitors are doing differently than you are.
Also – note the free options for benchmarking such as WebPagetest and HTTPArchive, which is a great source of data for 1M+ websites measured globally
#31: The key concept I find to understanding how fast you should be is how it relates to your business.
Performance most certainly is a business problem.
#32: Here are some examples of case studies provided that demonstrate the impact of performance on human-behavior.
In 2008, Yahoo! shared findings showing a substantial increase in user abandonment when page load times increased by 400ms.
#33: The following year, Shopzilla shared how a 5s reduction in load times (synthetic, I believe) increased site conversion by 7-12%.
#34: In 2012, I, along with my colleagues at @Walmartlabs, published a study that showed substantial drops in conversion when item pages slowed from 1 to 6 seconds.
#35: So what?
These are not your businesses. These are not your stories. You have to look at your own data.
#36: In the work that I’m lucky enough to do on a daily basis, we help organizations determine ‘what if’?
What if they improved the page load time by 1sec? What would that do for them?
How fast do they need to be to achieve business targets?
This type of discovery and experimentation is what is needed to determine: HOW FAST DO YOUR CUSTOMERS NEED YOU TO BE?
#37: In this example, I’ve shown the impact of performance (Page Load time) on the Bounce rate for two different groups of sites.
Site A: A collection of user experiences for Specialty Goods eCommerce sites (luxury goods)
Site B: A collection of user experiences for General Merchandise eCommerce sites (commodity goods)
Notice the patience levels of the users after 6s for each site. Users for a specialty goods site with fewer options tend to be more patient. Meanwhile users that have other options for a GM site continue to abandon at the same rate.
#38: The relationship between speed and behavior is even more noticeable when we look at conversion rates between the two sites. Notice how quickly users will decide to abandon a purchase for Site A, versus B.
#40: Finally, step three. “How do you get there?”
Obviously, this is where our community has spent most of its time discussing technology, best practices and techniques to get you there.
This is not quite what we dive into for this discussion. How we get there, when discussing RUM, is more about where we start and where we focus our efforts.
#41: First and foremost, we need to understand that Real Users are NOT Normal.
#42: Here is an example of a normal distribution. Similar to what you might see in a controlled environment where there is little variation in data.
#43: When looking at RUM data, we see the distribution is log normal.
The ‘long tail’ is due to many factors that introduce variability into the data set.
These are factors both inside and outside of our control. This data set represents a number of different user experiences for a multitude of sites over the month of September.
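Because of that long tail, the tables on slides 43 and 44 report medians and high percentiles rather than means. A nearest-rank percentile over raw beacon load times is enough to reproduce that kind of summary; the loadTimes array below is a stand-in for your own data.

// Nearest-rank percentile over an array of page load times (ms).
function percentile(values, p) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  var rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}
// percentile(loadTimes, 50), percentile(loadTimes, 95), percentile(loadTimes, 98)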
#44: When segmenting into two groups (mobile vs. desktop) the differences aren’t as profound as you might think...
#45: However, when looking at different OS versions, a different story starts to take shape
#46: The great thing about RUM is that you can take a multi-dimensional approach and recognize hot-spots and key areas you should focus for your business.
#47: Another important aspect is to realize that all pages are not created equal.
In this study of retail, we found that pages at the top of the funnel (browse pages) such as Home, Search, Product were extremely sensitive to user dissatisfaction.
As these pages slowed from 1-6s, we saw a 50% decrease in the conversion rate!
#48: However, when we looked at pages deeper in the funnel like Login, Billing (checkout pages) – users were more patient and committed to the transaction.