SplunkLive! Frankfurt 2018 - Data Onboarding Overview (Splunk)
Presented at SplunkLive! Frankfurt 2018:
Splunk Data Collection Architecture
Apps and Technology Add-ons
Demos / Examples
Best Practices
Resources and Q&A
SplunkLive! Frankfurt 2018 - Legacy SIEM to Splunk, How to Conquer Migration ... (Splunk)
Presented at SplunkLive! Frankfurt 2018:
Introduction
SIEM Migration Methodology
Use Cases
Datasources & Data Onboarding
ES Architecture
Third-Party Integrations
You Got This!
SplunkLive! Frankfurt 2018 - Get More From Your Machine Data with Splunk AI (Splunk)
Presented at SplunkLive! Frankfurt 2018:
Why AI & Machine Learning?
What is Machine Learning?
Splunk's Machine Learning Tour
Use Cases & Customer Stories
Wrap Up
SplunkLive! Zurich 2018: Legacy SIEM to Splunk, How to Conquer Migration and ... (Splunk)
This document provides an overview of best practices for migrating from a legacy SIEM to Splunk Enterprise Security. It discusses identifying high-value use cases to prioritize for migration. Proper data source onboarding using technologies like the Universal Forwarder and Technology Add-ons is also covered. The presentation recommends planning the target architecture and identifying any necessary third-party integrations. Some preparatory steps customers can take today to get ready for the replacement are also listed.
SplunkLive! Frankfurt 2018 - Intro to Security Analytics Methods (Splunk)
The document discusses an introductory presentation on security analytics methods. It includes an agenda covering an introduction to analytics methods, an example scenario, and next steps. It also discusses common security challenges, different analytics methods and types of use cases, and how analytics can be applied to different stages of an attack.
SplunkLive! Munich 2018: Intro to Security Analytics Methods (Splunk)
The document provides an introduction and agenda for a presentation on security analytics methods. The agenda includes an intro to analytics methods from 11:40-12:40, followed by a lunch break from 12:40-13:40. The presentation may include forward-looking statements, and the usual disclaimers are provided: the information presented is subject to change, and any information about product roadmaps is for informational purposes only.
SplunkLive! Munich 2018: Get More From Your Machine Data with Splunk & AI (Splunk)
Presented at SplunkLive! Munich 2018:
- Why AI & Machine Learning?
- What is Machine Learning?
- Splunk's Machine Learning Tour
- Use Cases & Customer Stories
SplunkLive! Munich 2018: Getting Started with Splunk Enterprise (Splunk)
The document provides an agenda for a SplunkLive! presentation on installing and using Splunk. It includes downloading required files, importing sample data, conducting searches on the data, and exploring various Splunk features through a live demonstration. Common installation problems are also addressed. The presentation aims to provide attendees with the knowledge and skills to get started using Splunk through hands-on learning and a question and answer session.
SplunkLive! Frankfurt 2018 - Getting Hands On with Splunk Enterprise (Splunk)
This presentation introduces Splunk software. It provides an overview of Splunk capabilities including indexing and searching machine data from various sources. The presentation demonstrates how to install Splunk, onboard sample data, and perform searches including field extractions, dashboards and alerts. It concludes with information on Splunk documentation, support and community resources.
SplunkLive! Munich 2018: Predictive, Proactive, and Collaborative ML with IT ... (Splunk)
This document discusses how machine learning (ML) can be used with IT service intelligence (ITSI) to enable predictive, proactive, and collaborative IT operations. It describes how ML can be applied to analyze machine data using ITSI to predict failures and other notable events. This allows operations teams to be notified earlier of potential issues. The document provides an example of using ITSI's built-in ML and event analytics to cluster similar alerts from thousands of events into meaningful, actionable alerts to improve response time. It also discusses integrating ITSI with chat tools like Slack to immediately notify teams to further reduce resolution times.
SplunkLive! Zurich 2018: Monitoring the End User Experience with Splunk (Splunk)
This document discusses using Splunk to gain insights into end user experience and the factors that influence experience. Splunk provides a platform approach to monitor applications across the full technology stack from networks to databases. It can ingest data from various sources, including APM tools, and provide visibility into both instrumented and non-instrumented applications and environments. Splunk also offers predictive analytics capabilities and allows various stakeholders like operations and business teams to access and analyze data. The document demonstrates how Splunk can help organizations improve user experience, application performance, and collaboration between teams.
Presented by Bosch Cyber Defense Center at SplunkLive! Frankfurt 2018:
Introduction / Who am I?
Bosch Cyber Defense Center
SIEM@Manufacturing
SIEM Workbench
Splunk Automation with Ansible
This document summarizes information about the Splunk Usergroup Zurich. It mentions that the group has regular Splunk user get-togethers throughout major German-speaking cities, not just Zurich. It hosts frequent Splunk presentations in German and English. The group is not a sales-focused organization and provides a space for users to meet and learn from each other. Interested users can join the group by visiting the listed URL.
Splunk Discovery: Warsaw 2018 - Legacy SIEM to Splunk, How to Conquer Migrati... (Splunk)
Presented at Splunk Discovery Warsaw 2018:
SIEM Replacement Methodology
Use Cases
Data Sources & Data Onboarding
Architecture
Third Party Integration
You Got This!
SplunkLive! Zurich 2018: Integrating Metrics and Logs (Splunk)
This document discusses integrating metrics and logs in Splunk for enhanced troubleshooting and monitoring. It provides an overview of metrics and how they are defined, compared to events. Metrics support in Splunk allows for more efficient aggregation, storage, and analysis of time-series data. Example use cases mentioned include IT operations, application performance monitoring, and IoT. Pricing is still based on uncompressed data volume ingested, with each metrics measurement licensed at around 150 bytes.
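For illustration only, here is a minimal Python sketch of sending a single metric measurement to Splunk over the HTTP Event Collector, assuming a reachable HEC endpoint bound to a metrics index. The host, token, metric name, and dimensions below are placeholders, and the payload layout ("event": "metric" with metric_name and _value under fields) follows the HEC metrics format of that Splunk generation, so verify it against your version before relying on it.

import json
import time
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector"  # placeholder host and port
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token tied to a metrics index

# One measurement: a metric name, a numeric value, and a few dimensions as key/value pairs.
measurement = {
    "time": time.time(),
    "event": "metric",
    "host": "web-01",  # hypothetical host
    "fields": {
        "metric_name": "cpu.user_percent",  # hypothetical metric name
        "_value": 42.5,
        "region": "eu-central-1",  # dimensions give context for later aggregation
    },
}

# POST the measurement; verify=False skips TLS verification, as you might in a lab setup.
response = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    data=json.dumps(measurement),
    verify=False,
)
print(response.status_code, response.text)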
Splunk Discovery: Warsaw 2018 - Reimagining IT with Service Intelligence (Splunk)
Presented at Splunk Discovery Warsaw 2018:
What's Service Intelligence and Why You Should Care
Introduction to Splunk IT Service Intelligence
IT Service Intelligence Key Concepts
Demo
SplunkLive! Paris 2018: Use Splunk for Incident Response, Orchestration and A... (Splunk)
Presented at SplunkLive! Paris 2018:
- Challenges with Security Operations Today
- Overview of Splunk Adaptive Response Initiative
- Technology behind the Adaptive Response Framework
- Demonstrations
- How to build your own AR Action
- Resources
SplunkLive! London 2017 - Happy Apps, Happy Users (Splunk)
No matter what business you’re in, your web applications are front-and-center for your customers. Downtime, or even poor performance, not only creates a spike in costs; it often translates into lost customers and revenue. You need immediate insight into the availability, performance and usage of your applications and the infrastructure your applications run on. In this session, you will learn why you need to take a platform approach to full stack application management, whether your applications reside on-premises or in the cloud. Second, we will show you how you can use Splunk to monitor the usage and performance of your applications, and quickly troubleshoot faults by stepping through some of the most common issues our customers experience. Third, we’ll contrast what Splunk does relative to other APM tools you may already have deployed, and even show you how you can bring APM data into Splunk to gain more insight into application performance.
How to Move from Monitoring to Observability, On-Premises and in a Multi-Clou... (Splunk)
With the acceleration of customer and business demands, site reliability engineers and IT Ops analysts now require operational visibility into their entire architecture, something that traditional APM tools, dev logging tools, and SRE tools aren’t equipped to provide. Observability enables you to inspect and understand your IT stack on-premises and in the cloud(s); it’s no longer just about whether your system works (monitoring), but about being able to ask why it is not working (observability). This presentation will outline key steps to take to move from monitoring to observability.
This document discusses new capabilities in Splunk's App for Stream and Splunk MINT products. It begins with an introduction and overview of each product. It then discusses key benefits like real-time insights, efficient cloud data collection, and fast time to value. Example use cases are provided for IT operations, security, and applications visibility. Supported protocols, platforms, and architecture options are also outlined. The document concludes by discussing challenges in mobile app delivery and how Splunk MINT addresses them through mobile data collection and correlation with other data sources.
Monitoring End User Experiences with New Relic & Splunk (Abner Germanow)
When your digital experience is your brand experience, understanding what your customers go through is critical. Troubleshooting and optimizing their experiences requires visibility into metrics, traces and logs. In this session, we'll demonstrate how to use the combined power of New Relic's real-user monitoring and application performance monitoring with Splunk to keep teams focused on identifying issues before customers tweet, fixing problems fast and knowing what to tackle next.
Splunk Webinar: IT Operations Demo für Troubleshooting & Dashboarding (Georg Knon)
This document provides an overview of Splunk's IT operations software. It discusses the challenges facing IT operations, including siloed tools and reactive problem solving. It presents Splunk as a solution, with its ability to index and analyze machine data from any source in real-time. Key benefits highlighted include faster troubleshooting to reduce downtime, proactive monitoring to address issues before they become problems, and increased operational visibility across the IT environment. The document concludes with a demonstration of Splunk's IT service intelligence capabilities.
Splunk MINT for Mobile Intelligence and Splunk App for Stream for Enhanced Op... (Splunk)
Learn what is new in Splunk App for Stream and how it can help you utilize wire/network data analytics to proactively resolve applications and IT operational issues and to efficiently analyze security threats in real-time, across your cloud and on-premises infrastructures. Additionally, you will learn about Splunk MINT, which allows you to gain operational intelligence on the availability, performance, and usage of your mobile apps. You’ll learn how to instrument your mobile apps for operational insight, and how you can build the dashboards, alerts, and searches you need to gain real-time insight on your mobile apps.
SplunkLive! London 2017 - DevOps Powered by Splunk (Splunk)
DevOps is powering the computing environments of tomorrow. When properly configured, the Splunk platform allows us to gain real-time visibility into the velocity, quality, and business impact of DevOps-driven application delivery across all roles, departments, process, and systems. Splunk can be used by DevOps practitioners to provide continuous integration/deployment and the real-time feedback to help the organisation with their operational intelligence. Join us for an exciting talk about Splunk’s current approach to DevOps, and for examples of how Splunk is being used by customers today to transform DevOps initiatives.
The document discusses how Staples uses Splunk for operational support, application insights, and business intelligence across their infrastructure. Staples relies on Splunk for real-time visibility into the health of their Advantage website and business/operational analytics. Splunk provides comprehensive insights into Staples' infrastructure and helps map application performance to user experience. It has saved Staples numerous times by quickly detecting issues. Adoption of Splunk at Staples has grown organically as more teams see its benefits.
Splunk Data Onboarding Overview - Splunk Data Collection Architecture (Splunk)
Splunk's Naman Joshi and Jon Harris presented the Splunk Data Onboarding overview at SplunkLive! Sydney. This presentation covers:
1. Splunk Data Collection Architecture
2. Apps and Technology Add-ons
3. Demos / Examples
4. Best Practices
5. Resources and Q&A
What’s New: Splunk App for Stream and Splunk MINT (Splunk)
Join us to learn what is new in Splunk App for Stream and how it can help you utilize wire/network data analytics to proactively resolve applications and IT operational issues and to efficiently analyze security threats in real-time, across your cloud and on-premises infrastructures. Additionally, you will learn about Splunk MINT, which allows you to gain operational intelligence on the availability, performance, and usage of your mobile apps. You’ll learn how to instrument your mobile apps for operational insight, and how you can build the dashboards, alerts, and searches you need to gain real-time insight on your mobile apps.
Here’s your chance to get hands-on with Splunk for the first time! Bring your modern Mac, Windows, or Linux laptop and we’ll go through a simple install of Splunk. Then, we’ll load some sample data, and see Splunk in action – we’ll cover searching, pivot, reporting, alerting, and dashboard creation. At the end of this session you’ll have a hands-on understanding of the pieces that make up the Splunk Platform, how it works, and how it fits in the landscape of Big Data. You’ll experience practical examples that differentiate Splunk while demonstrating how to gain quick time to value.
Introducing the E.P.I.C. APM: Stimulate User-Loyalty and Differentiation (CA Technologies)
In a time when businesses are literally being re-coded by software, applications have now become the face of your business. In the age of rapid adoption and rapid rejection, you have mere seconds to impress your app users. This is the reality of the App Economy.
Despite the enormous complexity of today’s application delivery chain, your end-users expect a flawless app experience, regardless of how, when or where they access your app. This means app issues aren’t IT issues, they’re customer satisfaction and retention issues.
With the APM 9.7 release, CA introduces its E.P.I.C. APM strategy, a solution that creates a competitive advantage in the App Economy by proactively managing the user experience. E.P.I.C. APM delivers a solution that is Easy, Proactive, Intelligent and Collaborative (E.P.I.C.) across the application lifecycle. CA APM 9.7 is the first proof point in our E.P.I.C. APM Strategy, starting an E.P.I.C. trend that will build with each new release.
Anand Akela, Head of Product Marketing for CA APM at CA Technologies and Mike Sydor, Engineering Services Architect used these slides in a recent webinar to introduce E.P.I.C APM and provide an overview of CA APM 9.7 as a proof point of this strategy.
Learn more about APM: http://bit.ly/1Be3e4S
Bengaluru Splunk User Group kick-off.
Introduction to User Group Leaders,
Session 1 on Splunk Remote Work Insights
Session 2 on Splunk Dashboard Journey
Splunk is a powerful platform for understanding your data. This session will provide an overview of machine learning capabilities available across Splunk’s portfolio. We'll dive deeply into Splunk's Machine Learning Toolkit App, which extends Splunk Enterprise with a rich suite of advanced analytics, machine learning algorithms, and rich visualizations. It also provides customers with a guided model-building and operationalization environment. The demonstration will include the guided model-building UI for tasks such as predictive analytics, outlier detection, event clustering, and anomaly detection. We’ll also review typical use cases and real-world customers who are using the Toolkit to drive business results.
The document provides an overview of the Splunk data platform. It discusses how Splunk helps organizations overcome challenges in turning real-time data into action. Splunk provides a single platform to investigate, monitor, and take action on any type of machine data from any source. It enables multiple use cases across IT, security, and business domains. The document highlights some of Splunk's products, capabilities, and customer benefits.
Presented at SplunkLive! Paris 2018: Get More From Your Machine Data With Splunk AI
- Why AI & Machine Learning?
- What is Machine Learning?
- Splunk's Machine Learning Tour
- Use Cases & Customer Stories
SplunkLive! London 2017 - Splunk Enterprise for IT Troubleshooting (Splunk)
If you’re just getting started with Splunk, this session will help you understand how to use Splunk software to turn your silos of data into insights that are actionable. In this session, we’ll dive right into a Splunk environment and show you how to use the simple Splunk search interface to quickly find the needle-in-the-haystack or multiple needles in multiple haystacks. We’ll demonstrate how to perform rapid ad hoc searches to conduct routine investigations across your entire IT infrastructure in one place, whether physical, virtual or in the cloud. We’ll show you how to then convert these searches into real-time alerts and dashboards, so you can proactively monitor for problems before they impact your end user. We’ll also demonstrate how you can use Splunk to connect the dots across heterogeneous systems in your environment for cross-tier, cross-silo visibility. Don’t forget to bring your laptop and install Splunk Enterprise before you join us.
SplunkLive! Splunk Enterprise 6.3 - Data On-boarding (Splunk)
This document discusses Splunk Enterprise 6.3, a platform for machine data that provides breakthrough performance, scale, and total cost of ownership reductions. Key features highlighted include doubling search and indexing speed, increasing capacity by 20-50%, and reducing TCO by over 20%. Advanced analysis and visualization capabilities are improved, along with support for high-volume event collection, enterprise-scale requirements, and development tools. Demo apps showcase custom visualizations and machine learning functionality.
IT-Lagebild: Observability for Resilience (SVA) (Splunk)
Splunk Public Sector Summit Germany, April 2025
Presentation: IT-Lagebild: Observability for Resilience
Speakers:
Giscard Venn - Specialist Sales, Big Data & AI
Sebastian Kramp - Team Lead Technical Business Analytics
Nach dem SOC-Aufbau ist vor der Automatisierung (OFD Baden-Württemberg) (Splunk)
Splunk Public Sector Summit Germany, April 2025
Presentation: Nach dem SOC-Aufbau ist vor der Automatisierung
Speaker: Sven Beisel, SOC Specialist, Oberfinanzdirektion Baden-Württemberg
Security - Mit Sicherheit zum Erfolg (Telekom) (Splunk)
Splunk Public Sector Summit 2025
Presentation from Telekom: "Security - Mit Sicherheit zum Erfolg"
Speakers:
Thomas Beinke - Senior Sales Expert
Lars Fürle - Senior Sales Expert
One Cisco - Splunk Public Sector Summit Germany April 2025 (Splunk)
Splunk Public Sector Summit Germany, April 2025
Presentation: Cisco & Splunk Stronger Together ...gemeinsam noch stärker
Speaker: Philipp Behre - Field CTO & Strategic Advisor, Technology & Innovation, Splunk
.conf Go 2023 - Raiffeisen Bank International (Splunk)
This document discusses standardizing security operations procedures (SOPs) to increase efficiency and automation. It recommends storing SOPs in a code repository for versioning and referencing them in workbooks, which are lists of standard tasks to follow for investigations. The goal is to have investigation playbooks in the security orchestration, automation and response (SOAR) tool perform the predefined investigation steps from the workbooks to automate incident response. Standard, vendor-agnostic procedures help analysts automate faster without wasting time.
.conf Go 2023 - Das passende Rezept für die digitale (Security) Revolution zu... (Splunk)
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?"
Speaker: Stefan Stein - Team Lead CERT | gematik GmbH, M.Eng. IT Security & Forensics, doctoral student at TH Brandenburg & Universität Dresden
The document describes Cellnex's transition from a Security Operations Center (SOC) to a Security Incident Response Team (CSIRT). The transition was driven by Cellnex's growth and the need to automate processes and tasks to improve efficiency. Cellnex implemented Splunk SIEM and SOAR to automate the creation, remediation, and closure of incidents. This allowed staff to concentrate on strategic tasks and improve KPIs such as resolution times and analyzed emails.
.conf Go 2023 - El camino hacia la ciberseguridad (ABANCA) (Splunk)
This document summarizes ABANCA's journey toward cybersecurity with Splunk, from bringing in dedicated roles in 2016 to becoming a monitoring and response center with more than 1 TB of daily ingest and 350 use cases aligned with MITRE ATT&CK. It also describes mistakes made and the solutions implemented, such as normalizing data sources and training operators, as well as the current pillars such as automation, visibility, and alignment with MITRE ATT&CK. Finally, it points out remaining challenges.
Splunk - BMW connects business and IT with data driven operations SRE and O11y (Splunk)
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
Procurement Insights Cost To Value Guide.pptx (Jon Hansen)
Procurement Insights' integrated Historic Procurement Industry Archives serve as a powerful complement, not a competitor, to other procurement industry firms. They fill critical gaps in depth, agility, and contextual insight that most traditional analyst and association models overlook.
Learn more about this value-driven proprietary service offering here.
AI EngineHost Review: Revolutionary USA Datacenter-Based Hosting with NVIDIA ... (SOFTTECHHUB)
I started my online journey with several hosting services before stumbling upon Ai EngineHost. At first, the idea of paying one fee and getting lifetime access seemed too good to pass up. The platform is built on reliable US-based servers, ensuring your projects run at high speeds and remain safe. Let me take you step by step through its benefits and features as I explain why this hosting solution is a perfect fit for digital entrepreneurs.
What is Model Context Protocol (MCP) - The new technology for communication bw... (Vishnu Singh Chundawat)
The MCP (Model Context Protocol) is a framework designed to manage context and interaction within complex systems. This SlideShare presentation will provide a detailed overview of the MCP Model, its applications, and how it plays a crucial role in improving communication and decision-making in distributed systems. We will explore the key concepts behind the protocol, including the importance of context, data management, and how this model enhances system adaptability and responsiveness. Ideal for software developers, system architects, and IT professionals, this presentation will offer valuable insights into how the MCP Model can streamline workflows, improve efficiency, and create more intuitive systems for a wide range of use cases.
AI Changes Everything – Talk at Cardiff Metropolitan University, 29th April 2... (Alan Dix)
Talk at the final event of Data Fusion Dynamics: A Collaborative UK-Saudi Initiative in Cybersecurity and Artificial Intelligence funded by the British Council UK-Saudi Challenge Fund 2024, Cardiff Metropolitan University, 29th April 2025
https://alandix.com/academic/talks/CMet2025-AI-Changes-Everything/
Is AI just another technology, or does it fundamentally change the way we live and think?
Every technology has a direct impact with micro-ethical consequences, some good, some bad. However more profound are the ways in which some technologies reshape the very fabric of society with macro-ethical impacts. The invention of the stirrup revolutionised mounted combat, but as a side effect gave rise to the feudal system, which still shapes politics today. The internal combustion engine offers personal freedom and creates pollution, but has also transformed the nature of urban planning and international trade. When we look at AI the micro-ethical issues, such as bias, are most obvious, but the macro-ethical challenges may be greater.
At a micro-ethical level AI has the potential to deepen social, ethnic and gender bias, issues I have warned about since the early 1990s! It is also being used increasingly on the battlefield. However, it also offers amazing opportunities in health and education, as the recent Nobel prizes for the developers of AlphaFold illustrate. More radically, the need to encode ethics acts as a mirror to surface essential ethical problems and conflicts.
At the macro-ethical level, by the early 2000s digital technology had already begun to undermine sovereignty (e.g. gambling), market economics (through network effects and emergent monopolies), and the very meaning of money. Modern AI is the child of big data, big computation and ultimately big business, intensifying the inherent tendency of digital technology to concentrate power. AI is already unravelling the fundamentals of the social, political and economic world around us, but this is a world that needs radical reimagining to overcome the global environmental and human challenges that confront us. Our challenge is whether to let the threads fall as they may, or to use them to weave a better future.
Big Data Analytics Quick Research Guide by Arthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
HCL Nomad Web – Best Practices and Managing Multiuser Environments (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-and-managing-multiuser-environments/
HCL Nomad Web is heralded as the next generation of the HCL Notes client, offering numerous advantages such as eliminating the need for packaging, distribution, and installation. Nomad Web client upgrades will be installed “automatically” in the background. This significantly reduces the administrative footprint compared to traditional HCL Notes clients. However, troubleshooting issues in Nomad Web present unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how to simplify the troubleshooting process in HCL Nomad Web, ensuring a smoother and more efficient user experience.
In this webinar, we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder within the browser’s cache (using OPFS)
- Understand the difference between single- and multi-user scenarios
- Utilizing Client Clocking
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
Complete Guide to Advanced Logistics Management Software in Riyadh.pdf (Software Company)
Explore the benefits and features of advanced logistics management software for businesses in Riyadh. This guide delves into the latest technologies, from real-time tracking and route optimization to warehouse management and inventory control, helping businesses streamline their logistics operations and reduce costs. Learn how implementing the right software solution can enhance efficiency, improve customer satisfaction, and provide a competitive edge in the growing logistics sector of Riyadh.
Special Meetup Edition - TDX Bengaluru Meetup #52.pptx (shyamraj55)
We’re bringing the TDX energy to our community with 2 power-packed sessions:
🛠️ Workshop: MuleSoft for Agentforce
Explore the new version of our hands-on workshop featuring the latest Topic Center and API Catalog updates.
📄 Talk: Power Up Document Processing
Dive into smart automation with MuleSoft IDP, NLP, and Einstein AI for intelligent document workflows.
TrustArc Webinar: Consumer Expectations vs Corporate Realities on Data Broker... (TrustArc)
Most consumers believe they’re making informed decisions about their personal data—adjusting privacy settings, blocking trackers, and opting out where they can. However, our new research reveals that while awareness is high, taking meaningful action is still lacking. On the corporate side, many organizations report strong policies for managing third-party data and consumer consent yet fall short when it comes to consistency, accountability and transparency.
This session will explore the research findings from TrustArc’s Privacy Pulse Survey, examining consumer attitudes toward personal data collection and practical suggestions for corporate practices around purchasing third-party data.
Attendees will learn:
- Consumer awareness around data brokers and what consumers are doing to limit data collection
- How businesses assess third-party vendors and their consent management operations
- Where business preparedness needs improvement
- What these trends mean for the future of privacy governance and public trust
This discussion is essential for privacy, risk, and compliance professionals who want to ground their strategies in current data and prepare for what’s next in the privacy landscape.
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungen (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web is heralded as the next generation of the HCL Notes client, offering numerous advantages such as eliminating the need for packaging, distribution, and installation. Nomad Web client updates are installed "automatically" in the background, which significantly reduces the administrative footprint compared to traditional HCL Notes clients. However, troubleshooting issues in Nomad Web presents unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how the troubleshooting process in HCL Nomad Web can be simplified to ensure a smooth and efficient user experience.
In this webinar, we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder within the browser's cache (using OPFS)
- Understanding the differences between single- and multi-user scenarios
- Using the Client Clocking feature
UiPath Community Berlin: Orchestrator API, Swagger, and Test Manager APIUiPathCommunity
Join this UiPath Community Berlin meetup to explore the Orchestrator API, Swagger interface, and the Test Manager API. Learn how to leverage these tools to streamline automation, enhance testing, and integrate more efficiently with UiPath. Perfect for developers, testers, and automation enthusiasts!
📕 Agenda
Welcome & Introductions
Orchestrator API Overview
Exploring the Swagger Interface
Test Manager API Highlights
Streamlining Automation & Testing with APIs (Demo)
Q&A and Open Discussion
Perfect for developers, testers, and automation enthusiasts!
👉 Join our UiPath Community Berlin chapter: https://community.uipath.com/berlin/
This session streamed live on April 29, 2025, 18:00 CET.
Check out all our upcoming UiPath Community sessions at https://community.uipath.com/events/.
Increasing Retail Store Efficiency How can Planograms Save Time and Money.pptx (Anoop Ashok)
In today's fast-paced retail environment, efficiency is key. Every minute counts, and every penny matters. One tool that can significantly boost your store's efficiency is a well-executed planogram. These visual merchandising blueprints not only enhance store layouts but also save time and money in the process.
SplunkLive! Frankfurt 2018 - Monitoring the End User Experience with Splunk
1. Monitoring the End User Experience with Splunk
Gain insight on both the experience, and the “why” behind the experience
Dirk Nitschke | Senior Sales Engineer
10 April 2018 | Frankfurt
4. Complexity – Difficult Issues for Everyone
APP MANAGERS / OPERATIONS
▶ Is the problem with the app, the network or the backend system?
▶ Why are my specialists all saying “it works” but the application is down?
▶ How does performance compare mobile vs. web vs. desktop?
DEVELOPERS
▶ How can I deliver new releases faster?
▶ How can I see how my applications are working in production?
▶ How can other developer, test and monitoring tools improve my coding?
DEVOPS, SRE, PERF MANAGER
▶ How do I ensure new releases don’t break critical apps?
▶ How can I do “full stack” monitoring easily?
▶ What changes will optimize application and infrastructure performance?
LINE OF BUSINESS
▶ How are customers using my app? How is it impacting my business?
▶ Which features should I prioritize for future versions?
▶ Are my customers impacted by outages and performance issues?
5. Infrastructure and Application Silos
End Users
Network / Load Balancing
Web Servers
App Servers
Java, .NET, PHP, etc.
Messaging
Databases
Legacy Systems
Security
Virtualization, Containers, Servers, Storage
6. What Is Needed?
End Users
Network / Load Balancing
Web Servers
App Servers
Java, .NET, PHP, etc.
Messaging
Databases
Legacy Systems
Security
Virtualization, Containers, Servers, Storage
KPIs, SLOs, service visualization, notable events affecting SLAs
Mobile intelligence, wire data, deep integration w/ AWS
Correlation with business data to enable context
Platform: Universal indexing + analytics of data across silos
7. Reliability Requires a Platform Approach
▶ Ingest data once – single source of truth across teams
▶ Analyze machine data across entire stack
▶ Integrate data from other management tools
▶ Connect machine data to business services
▶ Identify root cause of problems quickly
▶ Apply best practices in analytics to predict changes in reliability and service usage
OTHER TEAMS
PRODUCT MANAGERS / BUSINESS OWNERS
DEVOPS, SRE, PERF MANAGER
APP MANAGERS / OPERATIONS
DEVELOPERS
8. A Platform Approach for Application Performance Analytics
Infrastructure Layer
- Network: Packet, Payload, Traffic, Utilization, Perf
- Storage: Utilization, Capacity, Performance
- Server: Performance, Usage, Dependency
Application Layer
- User Experience: Usage, Response Time, Failed Interactions
- Byte Code Instrumentation: Usage, Experience, Performance, Quality
- Business Performance: Corporate Data, Intake, Output, Throughput
Splunk Approach:
▶ Single repository for ALL data
▶ Data in original raw format
▶ Machine learning
▶ Simplified architecture
▶ Fewer resources to manage
▶ Collaborative approach
MACHINE DATA
9. Apps for Application Monitoring
*nix
Splunk Stream, Real User Monitoring
300+ IT Ops and App Delivery Apps and Add-Ons
Splunk for Mobile Intelligence
Splunk Apps for Amazon Web Services and Microsoft Exchange
10. Gaining Transaction Insight From Your Network
Splunk Stream
▶ Gain real-time insight into application performance and customer experience
▶ Attain visibility into cloud services
▶ Deliver immediate insights from streaming network data
▶ Network-based packet capture does not require DBA or other admin tools and doesn’t affect performance
11. HTTP Event Collector – Agentless Fast Insight
▶ Immediate visibility to mobile app crashes
▶ Insight into mobile app use – MAU/DAU, device usage, network insight
▶ Transaction performance insight
curl -k https://<host>:8088/services/collector -H 'Authorization: Splunk <token>' -d '{"event":"Hello Event Collector"}'
Applications, IoT Devices
Agentless, direct data onboarding via a standard API
Scales to Millions of Events/Second
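As a companion to the curl call above, here is a minimal Python sketch of the same request, assuming a reachable HEC endpoint and a valid token; the host, port, and token are placeholders, and verify=False mirrors curl -k for a self-signed certificate.

import json
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector"  # placeholder host and port
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

# Same payload as the curl example; "event" carries the raw event data.
payload = {"event": "Hello Event Collector"}

# POST to the collector endpoint; verify=False skips TLS verification like curl -k.
response = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    data=json.dumps(payload),
    verify=False,
)
print(response.status_code, response.text)  # a successful request returns a small JSON acknowledgement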
12. Gaining Insight on Your Mobile Apps
▶ Immediate visibility to mobile app crashes
▶ Insight into mobile app use – MAU/DAU, device usage, network insight
▶ Transaction performance insights
▶ Correlate mobile with other data types for complete insight
13. Splunk IT Service Intelligence
Data-driven service monitoring and analytics
Splunk IT Service Intelligence:
Dynamic Service Models
At-a-Glance Problem Analysis
Early Warning on Deviations
Event Analytics
Simplified Incident Workflows
Platform for Operational Intelligence:
Time-Series Index
Schema-on-Read Data Model
Common Information Model
14. Splunk: Application Performance Analytics
End Users
Networking / Load-balancing
Web Servers
App Servers
Java, .NET, PHP, etc.
Legacy Systems
Messaging
Databases
Security
Virtualization, Containers, Servers, Storage
Manage to KPIs, SLOs – isolate root cause and service impact
Analytics for hybrid and cloud environments + microservices stacks
Full stack monitoring that integrates your APM tool’s data
Platform approach that spans technology and team silos
16. APM Tools – Valuable, But Not Enough
Traditional APM tools excel at…
▶ End user response time (and alerting when performance is slow)
▶ Byte code instrumentation (detecting what code causes bottlenecks)
▶ App server metrics
▶ Application mapping and transaction profiling
▶ Deploying quickly for base-level use cases
… but have critical limitations
▶ “Full stack” monitoring (including networks, load balancers, etc.)
▶ Finding the root cause (that’s usually found in logs)
▶ Reactive (not predictive)
▶ Usually don’t store raw data indefinitely
▶ Advanced analytics (prediction, anomalies, ML, etc.)
▶ Data access for multiple stakeholders (LOBs, security, etc.)
17. Covering APM “Blind Spots”
Without Splunk:
▶ Some, but not all of your apps are instrumented
▶ Other “off-the-shelf” apps can’t be instrumented with traditional APM
▶ Non-instrumented parts of your stack can’t be “seen”
With Splunk:
▶ End-to-end, holistic visibility to the complete service
▶ Insight across ALL data sources and applications
▶ PREDICTIVE analysis, before issues occur
(Diagram: the full technology stack, instrumented and non-instrumented applications alike)
APM Instrumented - Application A
APM Instrumented - Application B
Application C (not APM Instrumented)
Application D (not APM Instrumented)
Applications, business/mission services
App Server (WebLogic, JBoss EAP, WebSphere)
Web Server (Apache, Tomcat)
Packaged Apps (SAP, PeopleSoft, etc.)
Legacy Environments (AS400, Mainframe, ESBs, others)
Database (Oracle, SQL Server, MySQL)
Guest OS (Windows/Linux/*nix)
Hypervisor (ESX, Hyper-V, Citrix)
Physical Server (Dell, HP, Cisco blades or servers)
SAN/NAS Storage (EMC, NetApp)
Network
Load Balancers
Firewalls
AWS
Akamai
Log Analysis (System, Application, Security, etc.)
18. APM as a Data Source for Splunk
▶ Pull data from APM tools and provide events to APM tools
▶ Gain insight into EUM, application requests, app errors and correlate with logs all in one platform
▶ Reduce the “clicks” between spotting problems and finding root cause
▶ Forecast, predict and detect anomalies in APM data
▶ Integrate triage with non-application layers of the stack
19. Splunk Apps for APM
APM Tools
▶ Splunk Add-on and App for New Relic
▶ Splunk Add-on and App for AppDynamics
▶ Dynatrace App (provided by Dynatrace)
Other Notable APM Apps
▶ Web Performance (based on boomerang.js)
▶ Splunk Mobile Intelligence (Splunk MINT)
▶ Splunk Stream
splunkbase.splunk.com
24. Thank You!
Don't forget to rate this session on Pony Poll
https://ponypoll.com/frankfurt
Editor's Notes
#2: Hello, my name is Dirk Nitschke and I work as a Sales Engineer at Splunk.
This talk is titled "Monitoring the End User Experience with Splunk".
It is not only about gaining insight into the user experience, but also about investigating the causes, i.e. asking why an end user experiences, for example, an unpleasantly slow response time when using an application. Based on these findings you can then try to improve the user experience and, where possible, act proactively before the end user even notices a degradation of a service.
Why do this? The same service is often offered by several different companies, and I will use the service that is easiest for me to use.
#4: What do we need for this? First of all, of course, data about the user's experience. And when we then ask why the experience is the way it is, we naturally also need data about the application itself.
But what information and insights do we actually expect from this data?
#5: As usual, it depends on your point of view. Here we have listed four different groups of people for whom application performance is of interest:
As an application manager, responsible for running an application, it is important for me to ensure that the application does what it is supposed to do. And when that is not the case, I want to detect as quickly as possible that there is a problem and who is affected and how, then identify the cause and, of course, quickly find a solution and restore the application to normal operation.
As an application developer, I want to finish new versions quickly, identify bugs quickly, and make sure test/build cycles run smoothly. On top of that, I may also be interested in how current versions actually behave in production, not just in my usually rather limited test environment. Or do you test new software versions in your production environment?
As a site reliability engineer, I look at the entire technology stack. In my decisions I have to consider what impact a new release has on the whole production environment, and which code changes deliver which improvements in performance and experience. So I need visibility not just into a single application, but also into the other applications it depends on, the endpoints where the users sit, and the infrastructure, both the hardware and the small helpers such as DNS. What happens when DNS is slow?
As a business owner, it is about knowing how many users are using my service (not the application, the whole service!) and how the service is being used. Are there features that are not used at all? What financial impact does poor response behaviour, or even an outage of the service, have on my business?
#6: The complexity of IT environments has always been a challenge. Current developments such as the use of containers, or environments that combine on-premises and cloud components, do not necessarily make it easier. Your IT environment is surely only as tidy as on this slide if you abstract it heavily and look at it from cruising altitude.
What matters, though, is that an end user who uses an application or a service today interacts directly or indirectly with components from all of these areas.
Operating and monitoring these components is often organized in silos, each with its own tools, and with big challenges when it comes to solving problems: identifying the root cause, exchanging information between the teams, and the lack of a shared view of the overall environment, including the components in the cloud.
#7: So what is needed? What would we wish for?
First, a platform that makes it possible to process and analyze any kind of machine data, across the old silo boundaries.
Based on this machine data, the health of entire services is assessed, and deviations from the target state as well as outliers are reported. Direct access to the machine data enables root cause analysis when problems occur.
The solution allows the integration of data collected on-premises as well as data from cloud environments or mobile devices.
And in addition, relationships to business data can be established, putting the data collected in IT systems into a business context (WEB STORE EXAMPLE: the price comes from the database -> based on IT data you can see how much revenue was made, but also how much revenue is sitting in filled shopping carts that were never checked out!).
#8: Such a platform approach brings several advantages:
* Data is ingested only once instead of being sent to multiple systems, so there is a single source of truth for all teams.
* Data can be analyzed across the entire technology stack, and data from other tools that already exist today can be integrated as well.
* A central view usually makes it faster to analyze root causes than using several different tools.
#9: Applied to application performance analytics, this means the following: in the application layer we need data about the user experience, i.e. how the application is used, what response times occur, and which interactions fail.
Information captured via byte code instrumentation allows a deeper look into the usage and runtime of individual methods.
From the infrastructure side we are talking about data from servers, storage, and network components.
All of this is collected centrally in Splunk. Splunk stores data in its original format and keeps it for as long as you like. The data can then be used and analyzed for different use cases by different groups, and different users get the view of the shared data that is relevant to them.
This can mean simple queries, such as the number of users who visited my web shop in the last hour. But more complex things are also possible, such as forecasting values, for example the expected number of users based on historical trends, or classifying users based on their purchasing behaviour.
Overall this leads to a reduction in the number of tools in use and thus, of course, to a simplification of the overall architecture.
#10: So which tools can I use to do application monitoring with Splunk, or rather, how do we get the required data into Splunk?
We want to monitor the whole technology stack, not just a single application, but also the components the application depends on: typically databases and middleware, but also infrastructure components such as the operating system, virtualization, network, and storage, and, not to forget, information about any cloud service in use.
For many of these components, Splunk offers prebuilt extensions, so-called apps and add-ons, which simplify both collecting and analyzing the data.
On the left, for example, we see the Splunk Add-on for Amazon Web Services and the accompanying app, which are used to collect and visualize data from AWS.
On the far right we see examples of extensions for VMware, databases, Windows and UNIX operating systems, and the typical web servers and application servers. And yes, it is also possible to use data produced by specialized APM tools in Splunk.
For applications where you have access to the source code, the Splunk HTTP Event Collector helps, and specifically for your mobile apps there is also Splunk MINT, Splunk for Mobile Intelligence.
It is not always possible to install software such as the Splunk Universal Forwarder on a device or to query data remotely. Not every application can or should be instrumented, and you may prefer to capture data passively rather than touching the application. In that case, Splunk Stream can be of interest.
#11: Who already knows Splunk Stream?
Splunk Stream lets you use the content of network packets as a data source. Network traffic is arguably the ultimate source when you want to investigate how components communicate with each other. And sometimes it is the only data source we have, for example when it is not possible to install software such as the Splunk Universal Forwarder on a system.
Network data contains a wealth of information. Looking at HTTP connections, we can extract information that is useful for operations, such as performance metrics like round-trip time and request response times.
Developers of a web application, in turn, are interested in which pages are called and in what order.
The person responsible for the web shop as a whole cares more about which products are (or are not) being sold, abandoned shopping carts, and which customers are browsing the shop.
#12: The Splunk HTTP Event Collector makes it easy to send data to Splunk over HTTP or HTTPS without an additional agent or forwarder, and developers can integrate it into their applications with little effort. This option is not only simple to use but also efficient, secure, and highly scalable.
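A minimal sketch of sending a single event to the HTTP Event Collector from Python; the endpoint URL, token, sourcetype and event fields are placeholders:

```python
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # assumed endpoint
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder token

payload = {
    "event": {"action": "purchase", "status": "failed", "response_ms": 812},
    "sourcetype": "webshop:json",   # assumed sourcetype
    "host": "shop-app-01",
}

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=payload,
    timeout=5,
)
resp.raise_for_status()
print(resp.json())  # HEC acknowledges successful ingestion in the response body
```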
#13: Suppose we distribute a mobile app. Then we care about the user experience: the performance of the app, the network latency, how users navigate through the app, what the crash reports look like, and whether problems correlate with the app version, the device, the firmware on the device, or the carrier.
With Splunk MINT we provide an SDK for Android and iOS that makes it easy to send data from mobile apps to Splunk.
#14: OK, so now the data is in Splunk. What do we do with it?
As mentioned before, an application does not exist in isolation; it is part of one or more business services, and it makes sense to monitor those end to end.
Splunk IT Service Intelligence, an extension built on top of Splunk, gives us exactly these capabilities:
We create a service model with the individual components of a service, their dependencies, and key performance indicators. Based on these KPIs we then calculate the health, or quality, of the service (a conceptual sketch follows below).
Thresholds can then trigger notifications; on top of that there are adaptive thresholds, outlier detection, and event grouping by service for better prioritization of notable events.
And with Splunk as the underlying platform we still have seamless access to the raw events for root cause analysis when problems occur.
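To make the idea of a KPI-based health score concrete, here is a conceptual sketch in Python. It is not ITSI's actual scoring algorithm; the KPI names, thresholds, weights and severity values are invented for illustration:

```python
# Conceptual sketch: derive a simple service health score from weighted KPI severities.
KPI_SEVERITY = {"normal": 0, "medium": 4, "high": 6, "critical": 10}

def kpi_severity(value, thresholds):
    """Map a KPI value to a severity label using ascending threshold bounds."""
    for bound, label in thresholds:
        if value <= bound:
            return label
    return "critical"

def service_health(kpis):
    """Weighted score: 100 = fully healthy, 0 = everything critical."""
    total_weight = sum(k["weight"] for k in kpis)
    penalty = sum(KPI_SEVERITY[k["severity"]] * k["weight"] for k in kpis)
    return 100 - 100 * penalty / (10 * total_weight)

kpis = [
    {"name": "avg_response_ms", "weight": 7,
     "severity": kpi_severity(850, [(300, "normal"), (800, "medium"), (1500, "high")])},
    {"name": "error_rate_pct", "weight": 9,
     "severity": kpi_severity(1.2, [(0.5, "normal"), (2.0, "medium"), (5.0, "high")])},
]
print(service_health(kpis))  # roughly 51: degraded, but not yet critical
```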
#15: Let's summarize: Splunk provides a platform on which we can collect and analyze all kinds of machine data centrally, across different teams.
Key performance indicators and service level targets, including dependencies and their impact on services, can be modeled, while you keep direct access to the collected raw events for root cause analysis.
Data can be collected on-premises and from cloud environments, giving you visibility into hybrid environments as well.
Collecting data centrally gives you a view across the entire technology stack, including the data gathered by APM tools or other systems.
#17: APM tools are good at byte code instrumentation, application mapping, and measuring end-user response times.
On the other hand, they do not cover the entire technology stack, and that coverage matters: only about 40% of outages are caused by application errors. Another 40% originate in the infrastructure, and the remaining 20% have other causes, such as power outages, DDoS attacks, or failures of critical services like DNS.
#18: Not all of your applications are instrumented, or can be instrumented at all, and so they remain blank spots on the map.
This is where Splunk helps you cover those blank spots and gain an end-to-end view across the entire technology stack, with a central view of all data sources.
And we can use this data for more than showing the current health of services or supporting root cause analysis. Splunk keeps historical data for as long as you want, at full granularity, which also lets you become proactive: run predictive analyses on the historical data and address problems before end users notice them (see the forecast sketch below).
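A minimal sketch of such a forecast using the Splunk SDK for Python together with the SPL predict command; the connection details, index and sourcetype are assumptions:

```python
# Forecast the hourly visitor count for the next 24 hours from 30 days of history.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="splunk.example.com", port=8089,      # placeholder connection details
    username="admin", password="changeme",
)

query = (
    "search index=web sourcetype=access_combined earliest=-30d@d "
    "| timechart span=1h dc(clientip) AS visitors "
    "| predict visitors future_timespan=24"
)

for row in results.ResultsReader(service.jobs.oneshot(query)):
    print(row)   # hourly values plus predicted visitors with confidence bounds
```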
#19: For the complete picture it therefore makes sense to bring data from APM tools into Splunk as well. Most of these tools have an interface that allows their data to be queried or exported directly. That data is then indexed in Splunk and can be correlated with other data, for example during root cause analysis. Or you analyze the APM data in more depth, make predictions, or detect outliers in it.
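As an illustration of this pull-and-index pattern (the prebuilt Splunkbase integrations mentioned on the next slide are the usual route), here is a hypothetical sketch that polls an APM REST endpoint and forwards the records to the HTTP Event Collector; every URL, token and field name is an assumption:

```python
import requests

APM_URL = "https://apm.example.com/api/v1/metrics"                    # hypothetical APM endpoint
HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # assumed HEC endpoint
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder token

# Pull the most recent metric records from the APM tool's REST API.
metrics = requests.get(APM_URL, params={"timeRange": "last_5_minutes"}, timeout=10).json()

# Forward each record to Splunk so it can be correlated with the other data sources.
for record in metrics.get("records", []):
    requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        json={"event": record, "sourcetype": "apm:metrics"},   # assumed sourcetype
        timeout=5,
    )
```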
#20: For APM tools such as New Relic, AppDynamics or Dynatrace, ready-made integrations already exist that make it easy to bring their data into Splunk. The corresponding apps and add-ons are available free of charge on splunkbase.splunk.com.
As described earlier, Splunk Stream, Splunk MINT, or web performance monitoring based on boomerang also deliver useful information.
#21: Now let's move on to a demonstration. What could monitoring a web store with Splunk look like?
This web store is currently being migrated to the cloud, and the person responsible for it is rather nervous about that.
He looks at his executive view and sees a low number of successful purchases, poor revenue figures, and a mediocre Apdex score (who knows what Apdex is?).
Apdex = (#satisfied + 0.5 × #tolerating) / #total samples
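The score can be computed directly from response-time samples; a minimal sketch with an assumed threshold of 500 ms:

```python
# Apdex with threshold T: samples <= T are "satisfied", samples <= 4T are
# "tolerating", everything slower counts as "frustrated".
def apdex(response_times_ms, t_ms=500):
    satisfied = sum(1 for r in response_times_ms if r <= t_ms)
    tolerating = sum(1 for r in response_times_ms if t_ms < r <= 4 * t_ms)
    return (satisfied + 0.5 * tolerating) / len(response_times_ms)

print(apdex([120, 340, 900, 2600, 450]))  # 0.7, i.e. only a "fair" user experience
```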
Since we are in the middle of a migration, let's first check whether anything stands out between on-premises and cloud. The IT colleagues take a look. Everything actually looks fine; there are no differences between the cloud and the VMware environment, so we rule out the migration as the cause.
How is the web shop doing? Long response times... above the previous day's average in all tiers. I see errors in the Tomcat server's database connections, and on the database I see errors saying that log files could not be written. Now I can take a closer look at the database (click on the Database tier!). Here I can confirm that there is a problem with free disk space. The logs should actually be deleted regularly, but I can see that the MySQL server has a problem with a locked account.
Hmm, but the most recent errors were a while ago. Is there anything else?
Let's also take a look at the mobile app. How do things look there?
End User Performance Metrics (MINT)
Error Rate by App Version -> only 6.0!
Latency per Platform -> android
Latency per App Version -> 6.0!!!
At the end: Mobile App Health, Latency by App Version -> version 6.0 shows long response times.
#22: Industry
Online services
Real estate
Splunk Use Cases
• Business analytics
• IT operations
• Application delivery
Challenges
Third-party and homegrown open-source solutions could not keep up with data volume
Needed to ensure uptime and maintain SLAs for issue resolution
Log files were not standardized and contained unnecessary information
Required robust monitoring and reporting solution
Lacked visibility into vast volumes of siloed log data
Needed the ability to create ad hoc reporting and provide visibility into the health of key transactions, end-to-end, in real time
Additional Business Impact:
Provides self-service to teams across the enterprise to create their own solutions
Faster incident isolation and mitigation
Correlates user experience metrics with application performance for improved customer website experience
Splunk Products
• Splunk Enterprise
• Splunk Cloud (Planned: Trulia®, Retsly®)
• Splunk SDK
Data Sources
• Application logs
• Server logs
• Website logs including property listings
• Data from API endpoints (JSON)
• Mobile application data
• Website performance data
Case Study
http://www.splunk.com/en_us/customers/success-stories/zillow.html
Video
http://www.splunk.com/en_us/resources/video.psbW41MzE6QgFDBeMDL0VtdskHezTBDw.html
Blog Post:
http://blogs.splunk.com/2016/05/10/zillow-finds-its-way-home-with-splunk/?awesm=splk.it_w0S
Sales Email template:
https://splunk.my.salesforce.com/06933000001O5t0
SplunkLive! Seattle presentation:
http://www.slideshare.net/Splunk/zillow-35018327
Splunk blog by Grigori Melnick:
http://blogs.splunk.com/2015/05/13/zillow-developing-on-splunk/
#23: Industry
Technology
Splunk Use Cases
IT operations
Application delivery
Business analytics
Challenges
Difficulty accessing and managing data across the enterprise
Open source platform lacked stability and scalability needed to accommodate large and growing data volume
Accessing data to make actionable decisions took up to weeks
Developers lacked infrastructure visibility needed to ensure smooth application delivery
Splunk Products
Splunk Enterprise
Splunk App for Unix and Linux
Splunk Machine Learning Toolkit
Splunk App for AWS
Data Sources
Application
Database
Third-party
Case Study
https://www.splunk.com/en_us/customers/success-stories/yelp.html
#24: To summarize:
Monitoring the user experience takes more than watching a single application. Break down the silos in your monitoring landscape and collect your data centrally in Splunk; that way you can exploit the full information content of your machine data. This also applies to data you currently collect with other tools: feed it into Splunk as well.
#25: Thank you very much! As with the other sessions, you can give us feedback via our so-called Pony Poll. The corresponding URL is hidden in this QR code.