How to evaluate, implement, and maintain the Kafka message broker in a high-throughput production environment.
Kafka security includes SSL for wire encryption, SASL (Kerberos) for authentication, and authorization controls. SSL uses certificates for encryption during network communication. SASL performs authentication using Kerberos credentials. Authorization is provided by pluggable authorizers that define access control lists controlling permissions for principals to perform operations on resources and hosts. Securing Zookeeper with ACLs and SASL is also important as Kafka stores metadata there.
This document discusses security features in Apache Kafka including SSL, SASL authentication using Kerberos or plaintext, and authorization controls. It provides an overview of how SSL and SASL authentication work in Kafka as well as how the Kafka authorizer controls access at a fine-grained level through ACLs defined on topics, operations, users and hosts. It also briefly mentions securing Zookeeper which stores Kafka metadata and ACLs.
2. Outline
• Kafka and security overview
• Authentication
• Identify the principal (user) associated with a connection
• Authorization
• What permissions a principal has
• Secure Zookeeper
• Future stuff
5. Security Overview
• Supported since 0.9.0
• Wire encryption between client and broker
• For cross-data-center mirroring
• Access control on resources such as topics
• Enables sharing Kafka clusters
6. Authentication Overview
• Brokers support multiple ports
• Plaintext (no wire encryption/authentication)
• SSL (for wire encryption/authentication)
• SASL (for Kerberos authentication)
• SSL + SASL (SSL for wire encryption, SASL for authentication)
• Clients choose which port to use
• Need to provide the required credentials through configs (config sketch below)
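A rough illustration of how a broker exposes ports per security protocol and how a client picks one and supplies credentials; host names, ports, file paths and passwords are placeholders, not values from this deck:

# broker server.properties: one listener per security protocol
listeners=PLAINTEXT://host.name:9092,SSL://host.name:9093,SASL_SSL://host.name:9094
ssl.keystore.location=/var/private/ssl/server.keystore.jks
ssl.keystore.password=keystore-secret
ssl.key.password=key-secret

# client config: choose the port by choosing the matching security protocol
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=truststore-secret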
7. Why is SSL useful
• 1-way authentication
• Secures wire transfer through encryption
• 2-way authentication
• Broker also knows the identity of the client
• Easy to get started
• Only involves the client and the server
9. Subsequent transfer over SSL
• Data is encrypted with the agreed-upon cipher suite
• Encryption overhead
• Zero-copy transfer is lost on the consumer path
10. Performance impact with SSL
• r3.xlarge instance
• 4 cores, 30 GB RAM, 80 GB SSD, moderate network (~90 MB/s)
• Most overhead from encryption
                      throughput (MB/s)   CPU on client   CPU on broker
producer (plaintext)         83                12%              30%
producer (SSL)               69                28%              48%
consumer (plaintext)         83                 8%               2%
consumer (SSL)               69                27%              24%
11. Preparing SSL
1. Generate a certificate (X.509) in the broker keystore
2. Generate a certificate authority (CA) for signing
3. Sign the broker certificate with the CA
4. Import the signed certificate and the CA into the broker keystore
5. Import the CA into the client truststore
6. For 2-way authentication: generate a client certificate in a similar way (command sketch below)
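A minimal command sketch of steps 1-6, roughly following the Apache Kafka security documentation; keystore names, aliases and validity periods are placeholders:

# 1-2. Broker key pair and a CA for signing
keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
# 3-4. Sign the broker certificate, then import the CA and the signed certificate into the broker keystore
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
# 5. Import the CA into the client truststore
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
# 6. For 2-way authentication, repeat the keystore steps for the client and set ssl.client.auth=required on the broker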
13. SSL Principal Name
• By default, the distinguished name of the certificate
• CN=host1.company.com,OU=organization unit,O=organization,L=location,ST=state,C=country
• Can be customized through principal.builder.class (example below)
• Has access to the X509Certificate
• Makes setting the broker principal and application principal convenient
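For illustration only (com.example.CustomPrincipalBuilder is a hypothetical class; without this setting, the full distinguished name shown above becomes the principal):

principal.builder.class=com.example.CustomPrincipalBuilder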
14. What is SASL
• Simple Authentication and Security Layer
• Challenge/response protocols
• Server issues challenge and client sends response
• Continue until server is satisfied
• Different mechanisms
• Plain: cleartext username/password
• DIGEST-MD5
• GSSAPI: Kerberos
• Kafka 0.9.0 only supports Kerberos
15. Why Kerberos
• Secure single sign-on
• An organization may provide multiple services
• Users only need to remember a single Kerberos password to use all services
• More convenient when there are many users
• Needs a Key Distribution Center (KDC)
• Each service/user needs a Kerberos principal in the KDC
16. How Kerberos Works
• Create service and client principals in the KDC
• Client authenticates with the Authentication Server (AS) on startup
• Client obtains a service ticket from the Ticket Granting Server (TGS)
• Client authenticates with the service using the service ticket
19. Preparing Kerberos
• Create a Kafka service principal in the KDC
• Create a keytab for the Kafka principal
• A keytab contains the principal and its encrypted Kerberos password
• Allows authentication without typing a password
• Create an application principal for the client in the KDC
• Create a keytab for the application principal (kadmin/JAAS sketch below)
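A rough sketch of the kadmin commands and the JAAS configuration that tie the keytabs to the broker and client; the realm, host, keytab paths and the application principal are placeholders loosely modeled on the Kafka security docs:

# On the KDC: create principals and export keytabs
kadmin: addprinc -randkey kafka/[email protected]
kadmin: ktadd -k /etc/security/keytabs/kafka.keytab kafka/[email protected]
kadmin: addprinc -randkey [email protected]
kadmin: ktadd -k /etc/security/keytabs/app.keytab [email protected]

// Broker JAAS file (passed via -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf)
KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka.keytab"
  principal="kafka/[email protected]";
};

// Client JAAS file
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/app.keytab"
  principal="[email protected]";
};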
21. Kerberos principal name
• Kerberos principal
• Primary[/Instance]@REALM
• kafka/[email protected]
• [email protected]
• The primary is extracted as the default principal name
• Can customize the principal name through sasl.kerberos.principal.to.local.rules (example below)
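An illustrative rule in the broker property's RULE/DEFAULT syntax; the realm and the mapped name are placeholders. This maps the two-component kafka service principal in EXAMPLE.COM to the short name "kafka" and falls back to the default extraction for everything else:

sasl.kerberos.principal.to.local.rules=RULE:[2:$1@$0](kafka@EXAMPLE.COM)s/.*/kafka/,DEFAULT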
22. Authentication Caveat
• Authentication (SSL or SASL) happens once, when the socket connection is established
• No re-authentication
• If a certificate needs to be revoked, use authorization to remove its permissions
25. Operations and Resources
• Operations
• Read, Write, Create, Describe, ClusterAction, All
• Resources
• Topic, Cluster and ConsumerGroup
Operations per resource:
• Topic: Read, Write, Describe (Read and Write imply Describe)
• ConsumerGroup: Read
• Cluster: Create, ClusterAction (communication between controller and brokers)
26. SimpleAclAuthorizer
• Out-of-the-box authorizer implementation
• CLI tool for adding/removing ACLs (example below)
• ACLs are stored in Zookeeper and propagated to brokers asynchronously
• ACLs are cached in the broker for better performance
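For example, listing the ACLs currently set on a topic with the bundled CLI (the Zookeeper address and topic name are placeholders):

bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --list --topic t1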
27. Authorizer Flow
(Diagram: request flow across Client, Broker, Authorizer and Zookeeper)
• The broker configures the authorizer, which reads ACLs from Zookeeper and loads them into a cache
• Each client request is passed to the authorizer for an authorization check
• The request is matched against the cached ACLs (or the super-user list) and is allowed or denied
28. Configure broker ACL
• authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
• Make the Kafka principal a super user (sketch below)
• Or grant ClusterAction on the cluster and Read on all topics to the Kafka principal
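A minimal broker-side sketch; the principal name is a placeholder, and super.users accepts a semicolon-separated list:

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:kafka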
29. Configure client ACL
• Producer
• Grant Write on the topic and Create on the cluster (for auto-creation)
• Or use the --producer option in the CLI
bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --producer --topic t1
• Consumer
• Grant Read on the topic and Read on the consumer group
• Or use the --consumer option in the CLI (explicit equivalents below)
bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --consumer --topic t1 --group group1
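The --producer and --consumer shortcuts roughly correspond to explicit ACLs like the following (principal, topic and group names are placeholders):

bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --operation Write --topic t1
bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --operation Create --cluster
bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --operation Read --topic t1
bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --operation Read --group group1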
30. Secure Zookeeper
• Zookeeper stores
• critical Kafka metadata
• ACLs
• Need to prevent untrusted users from modifying it
31. Zookeeper Security Integration
• ZK supports authentication through SASL
• Kerberos or DIGEST-MD5
• Set zookeeper.set.acl to true on every broker
• Configure the ZK user through a JAAS config file (sketch below)
• Each ZK path is writable by its creator and readable by all
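A rough broker-side sketch reusing the Kafka keytab from earlier; paths and the principal are placeholders, and the Client section is the login context Zookeeper's client library uses:

# server.properties
zookeeper.set.acl=true

// JAAS file used by the broker when talking to Zookeeper
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka.keytab"
  principal="kafka/[email protected]";
};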
32. Migrating from non-secure to secure Kafka
• Configure brokers with multiple ports
• listeners=PLAINTEXT://host.name:port,SSL://host.name:port
• Gradually migrate clients to the secure port
• When done, turn off the PLAINTEXT port on the brokers (sketch below)
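A sketch of the two stages on a broker; the host name and ports are placeholders:

# Stage 1: serve both ports while clients move over
listeners=PLAINTEXT://host.name:9092,SSL://host.name:9093

# Stage 2: once all clients (and inter-broker traffic) use SSL, drop the PLAINTEXT listener
security.inter.broker.protocol=SSL
listeners=SSL://host.name:9093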
33. Migrating from non-secure to secure Zookeeper
• Follow the Zookeeper authorization migration steps in the Kafka documentation: http://kafka.apache.org/documentation.html#zk_authz_migration
34. Future work
• More SASL mechanisms: plain username/password, DIGEST-MD5
• Performance improvements
• Integration with the admin API
35. Thank you
Jun Rao | [email protected] | @junrao
Meet Confluent at the booth
Confluent University ~ Kafka training ~ confluent.io/training
Download Apache Kafka & Confluent Platform: confluent.io/download