There are many advantages to running Perforce Helix on Linux servers. See the process and pitfalls encountered when converting a distributed Perforce infrastructure from Windows to Linux.
Git Fusion manages two inherently different branching models. Learn the ramifications of changing branch mappings, using fully populated or lightweight branches in Git Fusion and the purpose of “ghost” changes.
Leveraging Structured Data To Reduce Disk, IO & Network Bandwidth (Perforce)
Most of the data that is pulled out of an SCM like Perforce Helix is common across multiple workspaces. Leveraging this fact means only fetching the data once from the repository. By creating cheap copies or clones of this data on demand, it is possible to dramatically reduce the load on the network, disks and Perforce servers, while making near-instant workspaces available to users.
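The clone step can be as simple as a tree of hard links over a single cached sync. The sketch below is a minimal illustration of that idea, not Perforce's implementation; the cache and workspace paths are hypothetical.

```python
import os
from pathlib import Path

def clone_workspace(cache_root: str, workspace_root: str) -> None:
    """Create a near-instant 'copy' of a cached sync by hard-linking every
    file, so the file contents are fetched from the server exactly once."""
    cache = Path(cache_root)
    for src in cache.rglob("*"):
        dst = Path(workspace_root) / src.relative_to(cache)
        if src.is_dir():
            dst.mkdir(parents=True, exist_ok=True)
        else:
            dst.parent.mkdir(parents=True, exist_ok=True)
            # A hard link shares the file's data blocks: no per-file network
            # traffic and almost no disk I/O, regardless of file size.
            os.link(src, dst)

# clone_workspace("/cache/depot@12345", "/ws/alice")  # hypothetical paths
```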
Gears of Perforce: AAA Game Development Challenges (Perforce)
How does Vancouver-based Xbox team, The Coalition, use Perforce to build Gears of War? By pulling UE4 source from Epic Games, sharing source with other Microsoft Studios, and supporting outsourcers, all while delivering 100 GB/day inside the studio. Learn how and why we do what we do.
Tips for Administering Complex Distributed Perforce Environments (Perforce)
Most users do not have administrator privileges, so how do you allow selected users to forcefully delete their changes and clients? What can be automated to proactively prevent database growth before it affects performance? How do you handle controlled failover in a hierarchical server system between dozens of servers? How do you work around the limitations of shelves in distributed environments? Learn several top tips and tricks we use to handle Perforce servers and how to use the broker’s filtering functionality to your advantage to administer complex Perforce environments.
Perforce BTrees: The Arcane and the Profane (Perforce)
"Get a tour of Perforce BTree history, its behaviors and configuration. Learn about performance alternatives, space management tools and future projects, too."
The Helix Broker is underrated. With it, you can extend Helix functionality, allow users limited superpowers, limit admin superpowers, provide shortcuts, work around issues, augment or replace triggers, easily extend or modify help and more. It's like having a rocket pack, only you don't end up in the hospital if it fails!
Redis replication allows slave servers to mirror the data and state of a master server. It supports asynchronous replication where slaves acknowledge processed data from the master. A master can have multiple slaves in a graph-like structure. Replication is automatic and helps with scalability, high availability, and avoiding costly master disk writes. Partial resynchronization allows replication to resume after a temporary disruption without a full resync.
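As a concrete illustration (not tied to any particular deck), here is a minimal redis-py sketch, assuming a master on port 6379 and a second server on port 6380 that we turn into its replica:

```python
import redis

master = redis.Redis(port=6379)
replica = redis.Redis(port=6380)

# Point the second server at the first. Replication then proceeds
# asynchronously; the replica acknowledges how much of the stream it has
# processed, and partial resync resumes it after a short disruption.
replica.execute_command("REPLICAOF", "127.0.0.1", "6379")

master.set("greeting", "hello")             # writes go to the master only
print(replica.get("greeting"))              # b'hello' once it has propagated
print(replica.info("replication")["role"])  # 'slave' (the legacy name)
```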
Perforce Administration: Optimization, Scalability, Availability and Reliability (Perforce)
In this session, Michael Mirman of MathWorks describes the infrastructure and maintenance procedures that the company uses to provide disaster recovery mechanisms, minimize downtime and improve load balance.
This document discusses virtualization, monitoring, and replication of Perforce software. It provides recommendations for virtualizing Perforce, such as using vSphere 5 with certain network drivers. It also discusses monitoring Perforce processes and logs to detect performance issues. New replication features in upcoming Perforce releases are outlined, including commit/edge servers that allow work to be distributed across multiple VMs.
DevopsItalia2015 - DHCP at Facebook - Evolution of an infrastructure (Angelo Failla)
Facebook is one of the largest sites in the world, with datacenters and POPs all over the globe and a huge number of machines.
In this talk we will use DHCP as an example to discuss why it is good to design stateless systems, and to examine the fine line between using an open-source product and taking a "Not Invented Here" approach.
Granular Protections Management with Triggers (Perforce)
Managing the Perforce Helix protections table can be unwieldy at best. Learn how we implemented a trigger-based system that removes the need for an administrator to manually edit the protections table. By granting ownership of individual projects or codelines in the protections table, we can allow project managers to control permissions to a path without worrying about mistakes that could affect the entire company.
ProxySQL - High Performance and HA Proxy for MySQL (René Cannaò)
A high-availability proxy designed to solve real issues of MySQL setups, from small to very large production environments.
Presentation at Percona Live Amsterdam 2015
Jakob Lorberblatt is an open source database consultant who loves to talk about software and MySQL. The document discusses the confusion around MySQL versions, potential issues when upgrading versions like deprecated parameters or syntax, and strategies for upgrading versions safely such as backing up data, testing on a clone, and using tools like Percona Toolkit to analyze differences. It also covers techniques for gradually moving to a newer version like using ProxySQL for real-time mirroring or black hole relays for multi-version replication.
The talk introduces the JBOD setup for Apache Kafka and shows how LinkedIn saves more than 30% in Kafka storage cost by adopting JBOD. The talk was given at the LinkedIn Streaming meetup in May 2017.
Building Linux IPv6 DNS Server (Complete Soft Copy) (Hari)
The document discusses building a Linux IPv6 DNS server. It provides an overview of the project which aims to configure a DNS server in Linux with IPv6 name resolution. It discusses the hardware and software requirements including using Red Hat Linux, kernel version 2.4 or higher, and BIND version 9. It also summarizes the key steps in building the DNS server such as creating a new kernel, making the DNS server support IPv6, and providing a backup of the existing kernel.
Minerva is a storage plugin for Drill that connects IPFS's decentralized storage with Drill's flexible query engine. Any data file stored on IPFS can be easily accessed from Drill's query interface, just like a file stored on a local disk.
Visit https://github.com/bdchain/Minerva to learn more and try it out!
Apache Jackrabbit Oak is a new JCR implementation with a completely new architecture. Based on concepts like eventual consistency and multi-version concurrency control, and borrowing ideas from distributed version control systems and cloud-scale databases, the Oak architecture is a major leap ahead for Jackrabbit. This presentation describes the Oak architecture and shows what it means for the scalability and performance of modern content applications. Changes to existing Jackrabbit functionality are described and the migration process is explained.
Bitsy 1.5 features improvements to memory efficiency through more compact data structures and lock-free reading algorithms. Benchmarks show the read throughput exceeds 10M reads/sec and is comparable to Neo4J when the graph fits in memory. Bitsy continues to outperform in write throughput due to its "No Seek" writing principle. The release is available with AGPL or commercial licensing options.
This document discusses the architecture of the CRX and Granite platforms and the repository bundle. It outlines how the repository is packaged and deployed as an OSGi bundle, exposing the JCR API and services. It also describes adding JMX support to provide monitoring and diagnostics of the repository via MBeans instead of JSP pages. Future ideas mentioned include additional extension points and potentially breaking the repository into smaller modular bundles.
Speaker: Jean-Daniel Cryans (Cloudera)
HBase Replication has come a long way since its inception in HBase 0.89 almost four years ago. Today, master-master and cyclic replication setups are supported; many bug fixes and new features like log compression, per-family peers configuration, and throttling have been added; and a major refactoring has been done. This presentation will recap the work done during the past four years, present a few use cases that are currently in production, and take a look at the roadmap.
Apache Kafka is a distributed publish-subscribe messaging system that was originally created by LinkedIn and contributed to the Apache Software Foundation. It is written in Scala and provides a multi-language API to publish and consume streams of records. Kafka is useful for both log aggregation and real-time messaging due to its high performance, scalability, and ability to serve as both a distributed messaging system and log storage system with a single unified architecture. To use Kafka, one runs Zookeeper for coordination, Kafka brokers to form a cluster, and then publishes and consumes messages with a producer API and consumer API.
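A minimal produce/consume round trip with the kafka-python client, assuming a broker on localhost:9092 and a topic named "logs" (both illustrative):

```python
from kafka import KafkaProducer, KafkaConsumer

# Publish a few records to a topic; the broker assigns offsets per partition.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
for i in range(3):
    producer.send("logs", f"event {i}".encode("utf-8"))
producer.flush()  # block until the brokers have acknowledged the sends

# Independently, a consumer reads the same records back from the log.
consumer = KafkaConsumer(
    "logs",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",   # start from the beginning of the log
    consumer_timeout_ms=5000,       # stop iterating once caught up
)
for record in consumer:
    print(record.offset, record.value.decode("utf-8"))
```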
January OpenNTF Webinar - Backup your Domino Server - New Options in V12 (Howard Greenberg)
Domino 12 introduced a new and very flexible Backup solution to bridge the gap between Domino and backup applications.
This session provides a jumpstart into this new functionality and technical background to understand the different types of integration options. Learn about the new backup feature in Domino 12 and discover how to integrate widely used backup solutions like Veeam. Watch the new backup feature in use with a live demo.
This will be a great session if you haven't been backing up your Domino server or are already using other backup solutions and want to integrate them better with Domino.
Your presenter will be Daniel Nashed from Nash!Com. He will answer your questions at the end.
For video go to openntf.org/webinars
[Tel Aviv Merge World Tour] Perforce Server Update (Perforce)
This document outlines Perforce's distributed development roadmap. It discusses upcoming improvements to replication including filtered replication, chained replicas, and Git replication. A new "100X initiative" aims to optimize failover, reduce network load, enable horizontal scaling, and improve concurrency. Key goals for 2014 include horizontal scaling of read operations, high availability and failover capabilities, and improved replication throughput. The roadmap emphasizes support for remote sites through techniques like commit/edge servers and filtering replicas.
Apache Kafka is an open-source message broker project developed by the Apache Software Foundation and written in Scala. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.
This document provides an overview of version control and the distributed version control system Git. It discusses the history and benefits of version control, including backup and recovery, synchronization, undo capabilities, and tracking changes. Key aspects of Git are explained, such as branching and merging, the fast and efficient nature of Git, and how it allows for cheap local experimentation through branches. The document demonstrates Git workflows and commands and provides resources for further information.
SREConEurope15 - The evolution of the DHCP infrastructure at Facebook (Angelo Failla)
The document describes the evolution of Facebook's DHCP infrastructure. It discusses how Facebook moved from a traditional DHCP architecture with dedicated hardware load balancers to a stateless architecture using the open source DHCP server KEA. With KEA, Facebook is able to distribute DHCP configuration dynamically from an inventory system and extend KEA's functionality through a hook API to integrate DHCP with other internal systems. This improved architecture provides better reliability, scalability, and instrumentation.
Fabric8 - Being devOps doesn't suck anymore (Henryk Konsek)
Fabric8 is a tool that aims to reduce the gap between development and operations by allowing developers to deploy and manage applications from development through production. It uses profiles and containers to deploy applications in a unified way across environments. Fabric8 supports various containers like Tomcat, Docker, and OpenShift. It includes integrated monitoring via Hawt.io and a shell to help with scripting and diagnostics. The goal is to make developers more self-sufficient by enabling them to take on more operations tasks in their own sandboxes.
1049: Best and Worst Practices for Deploying IBM Connections - IBM Connect 2016 (panagenda)
Depending on deployment size, operating system and security considerations, you have different options to configure IBM Connections. This session shows good and bad examples of how to do it, drawn from multiple customer deployments. Christoph Stoettner describes things he found and how you can optimize your systems. Main topics include simple (documented) tasks that should be applied, missing documentation, automated user synchronization, TDI solutions and user synchronization, performance tuning, security optimizing and planning Single Sign On for mail, IBM Sametime and SPNEGO. This is valuable information that will help you to be successful in your next IBM Connections deployment project.
A presentation from Christoph Stoettner (panagenda).
Best And Worst Practices Deploying IBM Connections (LetsConnect)
Depending on deployment size, operating system and security considerations, you have different options to configure IBM Connections. This session will show examples from multiple customer deployments of IBM Connections. I will describe things I found and how you can optimize your systems. Main topics include: simple (documented) tasks that should be applied, missing documentation, automated user synchronization, TDI solutions and user synchronization, performance tuning, security optimizing and planning Single Sign On
This document discusses application delivery in a container world. It summarizes using Docker from development to production, including local development, continuous integration, deploying to servers using schedulers like Kubernetes and ECS, service discovery using tools like Consul, and updating applications safely using blue-green deployments and feature toggling. It then demonstrates these concepts using Docker, AWS ECS, Consul, and Consul Template to deploy a voting application.
Running Production CDC Ingestion Pipelines With Balaji Varadarajan and Pritam... (HostedbyConfluent)
Running Production CDC Ingestion Pipelines With Balaji Varadarajan and Pritam K Dey | Current 2022
Robinhood’s mission is to democratize finance for all. Data-driven decision making is key to achieving this goal. The data needed is hosted in various OLTP databases. Replicating this data to a data lakehouse in near real time, in a reliable fashion, powers many critical use cases for the company. In Robinhood, CDC is not only used for ingestion into the data lake but is also being adopted for inter-system message exchange between different online microservices.
In this talk, we will describe the evolution of change data capture based ingestion in Robinhood, not only in terms of the scale of data stored and queries made, but also the use cases that it supports. We will go in-depth into the CDC architecture built around our Kafka ecosystem using the open-source systems Debezium and Apache Hudi. We will cover online inter-system message exchange use cases, our experience running this service at scale at Robinhood, and lessons learned.
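As a sketch of what the consuming side of such a pipeline can look like (illustrative only, not Robinhood's code), here is a kafka-python loop over Debezium's default JSON envelope; the topic name and row fields are assumptions:

```python
import json
from kafka import KafkaConsumer

# Debezium emits one enveloped event per row change; "op" distinguishes
# create/update/delete, and "after" holds the new row image (None on delete).
consumer = KafkaConsumer(
    "pg.public.users",                      # hypothetical CDC topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
)

for event in consumer:
    if event.value is None:                 # tombstone for log compaction
        continue
    payload = event.value["payload"]
    op = payload["op"]
    if op in ("c", "r"):                    # insert, or initial snapshot read
        print("upsert", payload["after"])
    elif op == "u":
        print("update", payload["after"])
    elif op == "d":
        print("delete", payload["before"])
```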
The document discusses migrating a Novell Open Enterprise Server from NetWare to Linux using Novell's migration tool. It provides an agenda for a lab demonstrating the migration, including an overview of the lab scenario, migration options, prerequisites for a successful consolidation, and steps for preparing, building the target server, and managing services on Open Enterprise Server Linux.
This document discusses automated deployment strategies for web applications. It recommends using source code control and branching features to keep the codebase organized. Database migrations and configuration management allow deployment to different environments. Tools like Phing can automate the deployment process through tasks like exporting code, uploading files, and database migrations. Rollbacks are important and can be facilitated by changing symlinks or deleting deployed directories. Overall, automated deployment prevents mistakes and makes rollbacks easy.
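A minimal sketch of the symlink-switch rollback pattern mentioned above; the directory layout and release names are assumptions:

```python
import os
from pathlib import Path

RELEASES = Path("/srv/app/releases")   # assumed layout: one dir per release
CURRENT = Path("/srv/app/current")     # the web server serves via this link

def activate(release: str) -> None:
    """Atomically point 'current' at a release. A rollback is the same
    call with an older release name, so it is just as fast and safe."""
    tmp = CURRENT.with_name(CURRENT.name + ".tmp")
    if tmp.is_symlink() or tmp.exists():
        tmp.unlink()
    tmp.symlink_to(RELEASES / release)
    os.replace(tmp, CURRENT)           # rename over the old link: atomic on POSIX

# activate("2013-06-01_1432")   # deploy the new build
# activate("2013-05-28_0915")   # instant rollback
```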
CollabSphere 2019 - Dirty Secrets of the Notes Client (Christoph Adler)
Fast. Dangerous. Always in control. Learn the dirty secrets of the Notes Client and how you can turn them into golden features that will make you shine. You will leave the workshop equipped with new knowledge for your next Notes Client deployment and/or optimization project. You will be able to get better Notes client performance and stability by using less of the system resources, like CPU, Memory and File I/O – just because of the right tailor-made configuration of the Notes client for your very own system requirements. Get geared up for your next Notes V11 deployment with the best-practice tips to get Notes Clients deployed, configured, maintained and ‘finally’ loved by your users. Don’t forget, IBM Notes V11 is not far away from being released.
Schema migration (DB migration) with Phinx (Hadi Ariawan)
Schema migration (DB migration) with Phinx.
What is schema migration? Why should you use schema migration? How do you do schema migration using Phinx, written in PHP?
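Phinx itself is PHP; purely to illustrate the up/down migration pattern it implements, here is a toy language-neutral version in Python using sqlite3 (table names and steps are made up):

```python
import sqlite3

# Each migration has a version, an "up" step, and (ideally) a reversing
# "down" step, so the schema can move forward and backward predictably.
MIGRATIONS = [
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
        "DROP TABLE users"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT",
        None),  # older SQLite cannot drop columns; irreversible here
]

def migrate(db: sqlite3.Connection) -> None:
    db.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    applied = db.execute("SELECT MAX(v) FROM schema_version").fetchone()[0] or 0
    for version, up, _down in MIGRATIONS:
        if version > applied:                 # apply only pending migrations
            db.execute(up)
            db.execute("INSERT INTO schema_version VALUES (?)", (version,))
    db.commit()

migrate(sqlite3.connect(":memory:"))
```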
The document discusses new features in version 0.9.4 of the DivConq file transfer software, including file tasks that can be triggered by uploads, scheduling, or file system events. It introduces dcScript, the scripting language that allows users to string together various file operations and tasks. Key points include that dcScript scripts can run asynchronously, optimize file operations through in-memory streaming rather than disk reads/writes, and offer features to simplify complex multi-step file tasks. The document provides examples of using dcScript to encrypt, compress, split and transfer files with just a few lines of code.
So, you know how to deploy your code, but what about your database? This talk will go through deploying your database with LiquiBase and DBDeploy, a non-framework-based approach to handling migrations of DDL and DML.
In this session (reloaded and remastered for HCL Notes V11), you will learn how easy it can be to maximize Notes client performance. Let Christoph show you what can be tuned and how to achieve the best possible performance for your HCL Notes client infrastructure. Discover tips and tweaks - how to debug your Notes client, deal with outdated ODS, network latency and application performance issues, and the measurable benefit that provides to your users. You’ll discover the current best practices for streamlining location and connection documents and why the catalog.nsf is still so important. You will leave the session with the knowledge you need to improve your HCL Notes V11 client installations and to provide a better experience for happier administration and happier end-users!
Christoph Adler presented on performance tuning HCL Notes clients. He discussed how to optimize local databases to the latest ODS format, configure hardware and Java settings for optimal performance, and measure client performance using tools like client clocking logs. Regular maintenance like keeping software updated, cleaning obsolete files, and optimizing network settings can boost client speeds. Understanding how the cache works is also important to avoid unnecessary traffic increases from deleting it.
Replicate from Oracle to data warehouses and analytics (Continuent)
Analyzing transactional data residing in Oracle databases is becoming increasingly common, especially as data sizes and complexity increase and transactional stores are no longer able to keep pace with the ever-increasing storage. Although there are many techniques available for loading Oracle data, getting up-to-date data into your data warehouse store is a more difficult problem. VMware Continuent provides data replication from Oracle to data warehouses and analytics engines, to derive insight from big data for better business decisions. Learn practical tips on how to get your data warehouse loading projects off the ground quickly and efficiently when replicating from Oracle into Hadoop, Amazon Redshift, and HP Vertica.
Syntergy - Upgrade OpenText Content Server with Replicator - 7-3-2016 (Vijay Sharma)
The Smart Way to Upgrade Content Server - Zero Downtime, Single Hop - In our experience, production outages, disruptions and resource availability are major barriers to organizations performing a timely Livelink or Content Server upgrade to the latest releases of OpenText Content Server 10.X & 16.
Over a number of years, Syntergy has developed a proven upgrade methodology which includes the use of Syntergy's Replicator for OpenText Content Server software. This approach gives you the capability to perform upgrades directly from older versions of Livelink and Content Server to the latest releases in a single hop (no need to "hop" through multiple version upgrades) with zero downtime.
Yohei Sasaki leads Rakuten Platform as a Service (RPaaS), which provides a private Platform as a Service built on Cloud Foundry version 1 for over 1000 developers across 70+ teams at Rakuten. Rakuten adopted Cloud Foundry to reduce operational costs and make infrastructure transparent for developers. Over time, Rakuten has focused on adding features like mod_rewrite and improving database integration while filling gaps in Cloud Foundry version 2.
Red Teaming macOS Environments with Hermes the Swift Messenger (Justin Bui)
1. The document introduces Hermes, a Swift payload for the Mythic framework that provides post-exploitation functionality on macOS systems. It discusses the development of Hermes, including cross-compiling Swift from Linux, and its key capabilities like file operations, process interaction, and screenshotting.
2. It also covers considerations for detecting Hermes using Apple's Endpoint Security Framework, which allows monitoring of process execution, file access, and other events.
New VMware Continuent 5.0 - A powerful and cost-efficient Oracle GoldenGate a... (Continuent)
VMware Continuent 5.0 is a complete data replication solution that includes all the functionality you need at one low price. In this webinar, you’ll see how VMware Continuent delivers:
- Migration. Replicate from an old version of Oracle, often running on a non-Linux platform (Windows, AIX, HP-UX, Solaris), to a new version of Oracle (often running on Linux). VMware Continuent supports heterogeneous environments.
- On-boarding to Cloud and Service Providers' data centers. Replicate from an old version of Oracle, often running on a non-Linux platform, into a virtual or cloud-hosted environment. VMware Continuent is “Cloud-ready”.
- Replication into Analytics (Hadoop, HP Vertica, Amazon Redshift). VMware Continuent offers real-time data loading into analytics and Big Data. This often includes Oracle running on Linux replicating into Hadoop/Vertica analytics.
- Replication to MySQL (and PostgreSQL). There is a lot of interest by customers to save money with open source databases. VMware Continuent supports two-way replication between Oracle and MySQL, and allows off-loading workloads to cost-saving MySQL databases. VMware Continuent will soon allow migration from Oracle to PostgreSQL.
Don’t miss this opportunity to learn about the alternative to Oracle’s tools!
In this session (re-reloaded and remastered for HCL Notes 11.0.1 FP2), you will learn how easy it can be to maximize Notes client performance. Let Christoph show you what can be tuned and how to achieve the best possible performance for your HCL Notes client infrastructure. Discover tips and tweaks - how to debug your Notes client, deal with outdated ODS, network latency and application performance issues, and the measurable benefit that provides to your users. You’ll discover the current best practices for streamlining location and connection documents and why the catalog.nsf is still so important. You will leave the session with the knowledge you need to improve your HCL Notes 11.0.1 FP2 client installations and to provide a better experience for happier administration and happier end-users!
Data Diffing Based Software Architecture Patterns (Huahai Yang)
Clojure has been heralded as a pioneer in data-oriented functional programming. In this talk, Huahai will explore the use of a Clojure data diffing/patching library as a tool to simplify software architecture and solve complex engineering problems. After briefly describing EditScript, a Clojure data diffing/patching library, he will detail several usage patterns by drawing from code examples in our production system.
Huahai will discuss how diffing improves system modularization by reducing namespace dependencies; how it drastically simplifies client-server communication to drive much faster UI iterations; how it enables massive scaling by turning stateful applications into stateless ones; and how it powers collaborative editing of online documents.
This talk is for everyone who is interested in expanding their data-oriented functional programming toolbox.
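EditScript is a Clojure library; to make the diff-then-patch idea concrete without Clojure, here is a toy Python equivalent for flat dictionaries (illustrative only — a real library also diffs nested structures):

```python
def diff(old: dict, new: dict) -> dict:
    """Describe how to turn `old` into `new` as a small set of edits."""
    edits = {}
    for k in old.keys() | new.keys():
        if k not in new:
            edits[k] = ("-",)                  # delete
        elif k not in old or old[k] != new[k]:
            edits[k] = ("+", new[k])           # add or replace
    return edits

def patch(state: dict, edits: dict) -> dict:
    """Apply the edits; the receiver never needs the full new state."""
    out = dict(state)
    for k, op in edits.items():
        if op[0] == "-":
            out.pop(k, None)
        else:
            out[k] = op[1]
    return out

old = {"title": "Draft", "tags": ["db"]}
new = {"title": "Final", "tags": ["db"], "author": "hy"}
assert patch(old, diff(old, new)) == new       # diffs round-trip
```

Shipping only the edit set, rather than the whole new state, is what makes the client-server and collaborative-editing patterns in the talk cheap.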
How to Organize Game Developers With Different Planning Needs (Perforce)
Different skills have different needs when it comes to planning. For a coder it may make perfect sense to plan work in two-week sprints, but for an artist, an asset may take longer than two weeks to complete.
How do you allow different skills to plan the way that works best for them? Some studios may choose to open up for flexibility: do whatever you like! But that tends to cause issues with alignment and silos of data, resulting in a loss of vision. Lost vision in the sense that the project becomes difficult to understand, but also, and maybe more importantly, in the sense of losing the vision of what the game will be.
With the right approach, however, you can avoid these obstacles. Join backlog expert Johan Karlsson to learn:
-The balance of team autonomy and alignment.
-How to use the product backlog to align the project vision.
-How to use tools to support the flexibility you need.
Looking for a planning and backlog tool? You can try Hansoft for free.
Regulatory Traceability: How to Maintain Compliance, Quality, and Cost Effic... (Perforce)
How do regulations impact your product requirements? How do you ensure that you identify all the needed requirements changes to meet these regulations?
Ideally, your regulations should live alongside your product requirements, so you can trace among each related item. Getting to that point can be quite an undertaking, however. Ultimately you want a process that:
-Saves money
-Ensures quality
-Avoids fines
If you want help achieving these goals, this webinar is for you. Watch Tom Totenberg, Senior Solutions Engineer for Helix ALM, show you:
-How to import a regulation document into Helix ALM.
-How to link to requirements.
-How to automate impact analysis from regulatory updates.
Efficient Security Development and Testing Using Dynamic and Static Code Anal... (Perforce)
Be sure to register for a demo, if you would like to see how Klocwork can help ensure that your code is secure, reliable, and compliant.
https://www.perforce.com/products/klocwork/live-demo
If it’s not documented, it didn’t happen.
When it comes to compliance, if you’re doing the work, you need to prove it. That means having well-documented SOPs (standard operating procedures) in place for all your regulated workflows.
It also means logging your efforts to enforce these SOPs. They show that you took appropriate action in any number of scenarios, which can be related to regulations, change requests, the firing of an employee, logging an HR complaint, or anything else that needs a structured workflow.
But when do you need to do this, and how do you go about it?
In this webinar, Tom Totenberg, our Helix ALM senior solutions engineer, clarifies workflow enforcement SOPs, along with a walkthrough of how Perforce manages GDPR (General Data Protection Regulation) requests. He’ll cover:
-What are SOPs?
-Why is it important to have this documentation?
-Example: walking through our internal Perforce GDPR process.
-What to beware of.
-Building the workflow in ALM.
Branching Out: How To Automate Your Development Process (Perforce)
If you could ship 20% faster, what would it mean for your business? What could you build? Better question, what’s slowing your teams down?
Teams struggle to manage branching and merging. For bigger teams and projects, it gets even more complex. Tracking development using a flowchart, team wiki, or a whiteboard is ineffective. And attempts to automate with complex scripting are costly to maintain.
Remove the bottlenecks and automate your development your way with Perforce Streams, the flexible branching model in Helix Core.
Join Brad Hart, Chief Technology Officer, and Brent Schiestl, Senior Product Manager for Perforce version control, to learn how Streams can:
-Automate and customize development and release processes.
-Easily track and propagate changes across teams.
-Boost end user efficiency while reducing errors and conflicts.
-Support multiple teams, parallel releases, component-based development, and more.
How to Do Code Reviews at Massive Scale For DevOps (Perforce)
Code review is a critical part of your build process. And when you do code review right, you can streamline your build process and achieve DevOps.
Most code review tools work great when you have a team of 10 developers. But what happens when you need to scale code review to 1,000s of developers? Many will struggle. But you don’t need to.
Join our experts Johan Karlsson and Robert Cowham for a 30-minute webinar. You’ll learn:
-The problems with scaling code review from 10s to 100s to 1,000s of developers along with other dimensions of scale (files, reviews, size).
-The solutions for dealing with all dimensions of scale.
-How to utilize Helix Swarm at massive scale.
Ready to scale code review and streamline your build process? Get started with Helix Swarm, a code review tool for Helix Core.
By now many of us have had plenty of time to clean and tidy up our homes. But have you given your product backlog and task tracking software as much attention?
To keep your digital tools organized, it is important to avoid hoarding on to inefficient processes. By removing the clutter in your product backlog, you can keep your teams focused.
It’s time to spark joy by cleaning up your planning tools!
Join Johan Karlsson — our Agile and backlog expert — to learn how to:
-Apply digital minimalism to your tracking and planning.
-Organize your work by category.
-Motivate teams by transitioning to a cleaner way of working.
TRY HANSOFT FREE
Going Remote: Build Up Your Game Dev Team (Perforce)
Everyone’s working remote as a result of the coronavirus (COVID-19). And while game development has always been done with remote teams, there’s a new challenge facing the industry.
Your audience has always been mostly at home – now they may be stuck there. And they want more games to stay happy and entertained.
So, how can you enable your developers to get files and feedback faster to meet this rapidly growing demand?
In this webinar, you’ll learn:
-How to meet the increasing demand.
-Ways to empower your remote teams to build faster.
-Why Helix Core is the best way to maximize productivity.
Plus, we’ll share our favorite games keeping us happy in the midst of a pandemic.
Shift to Remote: How to Manage Your New Workflow (Perforce)
The spread of coronavirus has fundamentally changed the way people work. Companies around the globe are making an abrupt shift in how they manage projects and teams to support their newly remote workers.
Organizing suddenly distributed teams means restructuring more than a standup. To facilitate this transition, teams need to update how they collaborate, manage workloads, and maintain projects.
At Perforce, we are here to help you maintain productivity. Join Johan Karlsson — our Agile expert — to learn how to:
-Keep communication predictable and consistent.
-Increase visibility across teams.
-Organize projects, sprints, Kanban boards and more.
-Empower and support your remote workforce.
Hybrid Development Methodology in a Regulated World (Perforce)
In a regulated industry, collaboration can be vital to building quality products that meet compliance. But when an Agile team and a Waterfall team need to work together, it can feel like mixing oil with water.
If you're used to Agile methods, Waterfall can feel slow and unresponsive. From a Waterfall perspective, pure Agile may lack accountability and direction. Misaligned teams can slow progress, and expose your development to mistakes that undermine compliance.
It's possible to create the best of both worlds so your teams can operate together harmoniously. This is how to develop products quickly, and still make regulators happy.
Join ALM Solutions Engineer Tom Totenberg in this webinar to learn how teams can:
- Operate efficiently with differing methodologies.
- Glean best practices for their tailored hybrid.
- Work together in a single environment.
Watch the webinar, and when you're ready for a tool to help you with the hybrid, know that you can try Helix ALM for free.
Better, Faster, Easier: How to Make Git Really Work in the Enterprise (Perforce)
There are a lot of reasons to love Git. (Git is awesome at what it does.) Let’s look at the 3 major use cases for Git in the enterprise:
1. You work with third party or outsourced development teams.
2. You use open source in your products.
3. You have different workflow needs for different teams.
Making the best of Git can be difficult in an enterprise environment. Trying to manage all the moving parts is like herding cats.
So, how do you optimize your teams’ use of Git — and make it all fit into your vision of the enterprise SDLC?
You’ll learn about:
-The challenges that accompany each use case — third parties, open source code, different workflows.
-Ways to solve these problems.
-How to make Git better, faster, and easier, with Perforce.
Easier Requirements Management Using Diagrams In Helix ALM (Perforce)
Sometimes requirements need visuals. Whether it’s a diagram that clarifies an idea or a screenshot to capture information, images can help you manage requirements more efficiently. And that means better quality products shipped faster.
In this webinar, Helix ALM Professional Services Consultant Gerhard Krüger will demonstrate how to use visuals in ALM to improve requirements. Learn how to:
-Share information faster than ever.
-Drag and drop your way to better teamwork.
-Integrate various types of visuals into your requirements.
-Utilize diagram and flowchart software for every need.
-And more!
Immediately apply the information in this webinar for even better requirements management using Helix ALM.
It’s common practice to keep a product backlog as small as possible, probably just 10-20 items. This works for single teams with one Product Owner and perhaps a Scrum Master.
But what if you have 100 Scrum teams managing a complex system of hardware and software components? What do you need to change to manage at such a massive scale?
Join backlog expert Johan Karlsson to learn how to:
-Adapt Agile product backlog practices to manage many backlogs.
-Enhance collaboration across disciplines.
-Leverage backlogs to align teams while giving them flexibility.
Achieving Software Safety, Security, and Reliability Part 3: What Does the Fu... (Perforce)
In Part 3, we will look at what the future might hold for embedded programming languages and development tools. And, we will look at the future for software safety and security standards.
How to Scale With Helix Core and Microsoft Azure (Perforce)
This document discusses how to scale Helix Core using Microsoft Azure. It begins by explaining the benefits of using Helix Core and Azure together, such as high performance, scalability, security integration, and availability. It then covers computing, storage, and security options on Azure, including virtual machine types and operating system choices. Next, it describes how to set up global deployments with Helix Core on Azure using techniques like proxies, replicas, and the Perforce federated architecture. It concludes with examples of advanced topologies like build servers, hybrid cloud/on-premises implementations, and multi-cloud considerations.
Achieving Software Safety, Security, and Reliability Part 2 (Perforce)
In Part 2, we will focus on the automotive industry, as it leads the way in enforcing safety, security, and reliability standards as well as best practices for software development. We will then examine how other industries could adopt similar practices.
Modernizing an application’s architecture is often a necessary multi-year project in the making. The goal: to stabilize code, detangle dependencies, and adopt a toolset that ignites innovation.
Moving your monolith repository to a microservices/component-based development model might be on trend. But is it right for you?
Before you break up with anything, it is vital to assess your needs and existing environment to construct the right plan. This can minimize business risks and maximize your development potential.
Join Tom Tyler and Chuck Gehman to learn more about:
-Why you need to plan your move with the right approach.
-How to reduce risk when refactoring your monolithic repository.
-What you need to consider before migrating code.
Achieving Software Safety, Security, and Reliability Part 1: Common Industry ... (Perforce)
In part one of our three-part webinar series, we examine common software development challenges, review the safety and security standards adopted by different industries, and examine the best practices that can be applied to any software development team.
The features you’ve been waiting for! Helix ALM’s latest update expands usability and functionality to bring solid improvements to your processes.
Watch Helix ALM Senior Product Manager Paula Rome demonstrate how new features:
-Simplify workflows.
-Expand report analysis.
-Boost productivity in the Helix ALM web client.
All this and MORE packed into an exciting 30 minutes! Get inspired. Be extraordinary with the new Helix ALM.
Companies that track requirements, create traceability matrices, and complete audits - especially for compliance - run into many problems using only Word and Excel to accomplish these tasks.
Most notably, manual processes leave employees vulnerable to making costly mistakes and wasting valuable time.
These outdated tracking procedures rob organizations of benefiting from four keys to productivity and efficiency:
-Automation
-Collaboration
-Visibility
-Traceability
However, modern application lifecycle management (ALM) tools solve all of these problems, linking and organizing information into a single source of truth that is instantly auditable.
Gerhard Krüger, senior consultant for Helix ALM, explains how the right software supports these fundamentals, generating improvements that save time and money.
This presentation explores code comprehension challenges in scientific programming based on a survey of 57 research scientists. It reveals that 57.9% of scientists have no formal training in writing readable code. Key findings highlight a "documentation paradox" where documentation is both the most common readability practice and the biggest challenge scientists face. The study identifies critical issues with naming conventions and code organization, noting that 100% of scientists agree readable code is essential for reproducible research. The research concludes with four key recommendations: expanding programming education for scientists, conducting targeted research on scientific code quality, developing specialized tools, and establishing clearer documentation guidelines for scientific software.
Presented at: The 33rd International Conference on Program Comprehension (ICPC '25)
Date of Conference: April 2025
Conference Location: Ottawa, Ontario, Canada
Preprint: https://arxiv.org/abs/2501.10037
Copy & Past Link 👉👉
http://drfiles.net/
When you say Xforce with GTA 5, it sounds like you might be talking about Xforce Keygen — a tool that's often mentioned in connection with cracking software like Autodesk programs.
BUT, when it comes to GTA 5, Xforce isn't officially part of the game or anything Rockstar made.
If you're seeing "Xforce" related to GTA 5 downloads or cracks, it's usually some unofficial (and risky) tool for pirating the game — which can be super dangerous because:
Landscape of Requirements Engineering for/by AI through Literature Review (Hironori Washizaki)
Hironori Washizaki, "Landscape of Requirements Engineering for/by AI through Literature Review," RAISE 2025: Workshop on Requirements engineering for AI-powered SoftwarE, 2025.
🌍📱👉COPY LINK & PASTE ON GOOGLE http://drfiles.net/ 👈🌍
Adobe Illustrator is a powerful, professional-grade vector graphics software used for creating a wide range of designs, including logos, icons, illustrations, and more. Unlike raster graphics (like photos), which are made of pixels, vector graphics in Illustrator are defined by mathematical equations, allowing them to be scaled up or down infinitely without losing quality.
Here's a more detailed explanation:
Key Features and Capabilities:
Vector-Based Design:
Illustrator's foundation is its use of vector graphics, meaning designs are created using paths, lines, shapes, and curves defined mathematically.
Scalability:
This vector-based approach allows for designs to be resized without any loss of resolution or quality, making it suitable for various print and digital applications.
Design Creation:
Illustrator is used for a wide variety of design purposes, including:
Logos and Brand Identity: Creating logos, icons, and other brand assets.
Illustrations: Designing detailed illustrations for books, magazines, web pages, and more.
Marketing Materials: Creating posters, flyers, banners, and other marketing visuals.
Web Design: Designing web graphics, including icons, buttons, and layouts.
Text Handling:
Illustrator offers sophisticated typography tools for manipulating and designing text within your graphics.
Brushes and Effects:
It provides a range of brushes and effects for adding artistic touches and visual styles to your designs.
Integration with Other Adobe Software:
Illustrator integrates seamlessly with other Adobe Creative Cloud apps like Photoshop, InDesign, and Dreamweaver, facilitating a smooth workflow.
Why Use Illustrator?
Professional-Grade Features:
Illustrator offers a comprehensive set of tools and features for professional design work.
Versatility:
It can be used for a wide range of design tasks and applications, making it a versatile tool for designers.
Industry Standard:
Illustrator is a widely used and recognized software in the graphic design industry.
Creative Freedom:
It empowers designers to create detailed, high-quality graphics with a high degree of control and precision.
Get & Download Wondershare Filmora Crack Latest [2025]saniaaftab72555
Copy & Past Link 👉👉
https://dr-up-community.info/
Wondershare Filmora is a video editing software and app designed for both beginners and experienced users. It's known for its user-friendly interface, drag-and-drop functionality, and a wide range of tools and features for creating and editing videos. Filmora is available on Windows, macOS, iOS (iPhone/iPad), and Android platforms.
Who Watches the Watchmen (SciFiDevCon 2025) (Allon Mureinik)
Tests, especially unit tests, are the developers’ superheroes. They allow us to mess around with our code and keep us safe.
We often trust them with the safety of our codebase, but how do we know that we should? How do we know that this trust is well-deserved?
Enter mutation testing – by intentionally injecting harmful mutations into our code and seeing if they are caught by the tests, we can evaluate the quality of the safety net they provide. By watching the watchmen, we can make sure our tests really protect us, and we aren’t just green-washing our IDEs to a false sense of security.
Talk from SciFiDevCon 2025
https://www.scifidevcon.com/courses/2025-scifidevcon/contents/680efa43ae4f5
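As a toy illustration of the mutation-testing idea described above (not the tooling from the talk), this Python sketch injects a single mutation with the standard ast module and checks whether the test notices:

```python
import ast

SRC = "def price_with_tax(p):\n    return p + p * 0.2\n"

def tests_pass(fn) -> bool:
    return fn(100) == 120          # the entire 'test suite'

class FlipAddToSub(ast.NodeTransformer):
    """Inject a deliberate bug: turn '+' into '-'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

def load(code):
    ns = {}
    exec(code, ns)
    return ns["price_with_tax"]

original = load(SRC)
tree = FlipAddToSub().visit(ast.parse(SRC))
mutant = load(compile(ast.fix_missing_locations(tree), "<mutant>", "exec"))

print("original passes:", tests_pass(original))    # True
print("mutant killed:  ", not tests_pass(mutant))  # True means the test has teeth
```

A surviving mutant (one the tests fail to catch) is the signal that the safety net has a hole.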
Join Ajay Sarpal and Miray Vu to learn about key Marketo Engage enhancements. Discover improved in-app Salesforce CRM connector statistics for easy monitoring of sync health and throughput. Explore new Salesforce CRM Synch Dashboards providing up-to-date insights into weekly activity usage, thresholds, and limits with drill-down capabilities. Learn about proactive notifications for both Salesforce CRM sync and product usage overages. Get an update on improved Salesforce CRM synch scale and reliability coming in Q2 2025.
Key Takeaways:
Improved Salesforce CRM User Experience: Learn how self-service visibility enhances satisfaction.
Utilize Salesforce CRM Synch Dashboards: Explore real-time weekly activity data.
Monitor Performance Against Limits: See threshold limits for each product level.
Get Usage Over-Limit Alerts: Receive notifications for exceeding thresholds.
Learn About Improved Salesforce CRM Scale: Understand upcoming cloud-based incremental sync.
Meet the Agents: How AI Is Learning to Think, Plan, and CollaborateMaxim Salnikov
Imagine if apps could think, plan, and team up like humans. Welcome to the world of AI agents and agentic user interfaces (UI)! In this session, we'll explore how AI agents make decisions, collaborate with each other, and create more natural and powerful experiences for users.
From Windows to Linux: Converting a Distributed Perforce Helix Infrastructure
1. From Windows to Linux:
Converting a Distributed
Perforce Helix
Infrastructure
David Foglesong
Senior Systems Software Engineer
Tableau Software
2. 2
About Tableau Software
Tableau Software (NYSE: DATA) helps people see and
understand data. Tableau helps anyone quickly analyze,
visualize and share information. More than 35,000 customer
accounts get rapid results with Tableau in the office and
on-the-go. And tens of thousands of people use Tableau Public to
share data in their blogs and websites. See how Tableau can
help you by downloading the trial at www.tableau.com/trial.
3. 3
Perforce At Tableau
Started with CVS.
Converted to Perforce in 2007 (on Windows).
Infrastructure has evolved over time: One server > commit
+ replicas > commit + replicas + edge servers.
650K+ changes, 900+ users, commit = 500G DB, edge =
800G DB.
3 main dev offices, several smaller offices.
4. 4
Why Change?
Tableau product change from Win-only to Win/Mac to
Win/Mac/mobile (iOS/Android).
Run other parts of build infrastructure (Artifactory,
ReviewBoard, OpenGrok) on Linux.
Dev management wants “best” systems = performance,
stability, scalability.
Perforce admins prefer Linux.
5. 5
Before
1x commit server
1x RO replica (backup and reporting)
3x fwd replica (at main dev offices)
Proxies at smaller dev offices.
1x edge server (for build farm) + RO replica (backup)
Brokers in front of everything.
All on Windows.
7. 7
After
1x commit server
2x RO replicas (backup and reporting)
5x edge servers (3x for offices, 2x for build farm)
5x RO replicas (backups for edge servers)
Brokers in front of everything.
All on Linux.
Most in central data center.
Proxies in dev offices.
9. 9
Edge Servers vs. Forwarding Replicas
Why switch offices to edge servers?
• Remote users (Palo Alto) see a small lag on commands.
• Want to move most user data (db.have) off the commit server; the goal is to
have minimal (ideally no) connections from human users on the commit
server.
Move to cluster?
• Not at this time. Want to preserve option of moving edge server to dev
office if needed.
10. 10
Process: Old vs. New
Old way = checkpoint manipulation
• Either via scripts or p4migrate.
• Need to fix case inconsistencies in metadata. E.g.,
- //depot/source/foo.c#1 vs. //depot/SOURCE/bar.c#1
- User “bob” vs. user “BOB”.
• Transfer and convert archive files.
11. 11
Process: Old vs. New
New way = replication
• Create Linux replicas and edge servers connected to Windows commit
server.
• Use “verify -t” to transfer and convert archive files.
• Once everything but the commit server is on Linux, fail over the commit
server to a Linux RO replica.
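For illustration, the archive-transfer step amounts to running the verify against each new Linux replica so it pulls and converts archives from the Windows master; the replica address and depot path here are examples, not our actual configuration:
p4 -p linux-replica:1666 verify -t //depot/...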
14. 14
Challenges – RHEL7
IT standardized on RHEL7.
RHEL7 uses systemd (vs. init scripts) to control services.
Worked with John Halbig in support to create service unit
files for p4d (and p4p and p4broker); he created a KB article with a
sample: http://answers.perforce.com/articles/KB/10832
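A minimal unit file along these lines (paths, port, and service account are assumptions; see the KB article for Perforce’s own sample):
[Unit]
Description=Perforce Helix server (p4d)
After=network.target

[Service]
Type=forking
User=perforce
ExecStart=/usr/local/bin/p4d -r /p4/root -p 1666 -d
# Example hardening: keep the OOM killer away from p4d (see OOMKiller note later).
OOMScoreAdjust=-1000

[Install]
WantedBy=multi-user.target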
15. 15
Challenges – CVS2P4
Original Perforce system created with cvs2p4 script.
cvs2p4 works by using CVS change history to create
checkpoint to build DB files.
The db entries refer to original CVS (RCS) archive files
stored in special “import” dir.
CVS/RCS stores branches in the archive files, with revs like
1.1.1.1, but P4 uses separate files for branches.
16. 16
Challenges – CVS2P4
P4D resolves the CVS 1.1.1.1 branch revs OK, but “verify -t”
wouldn’t transfer them.
Solution = Use smbclient + dos2unix to transfer the cvs2p4
import dir.
This brought over ~70K changes, then used “verify -t” to
get the rest.
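Roughly, the transfer looked like this (share name and target dir are hypothetical); smbclient’s lowercase mode folds the file names, and dos2unix fixes line endings in the ,v archives:
smbclient //winserver/p4import -U p4admin -c 'lowercase; recurse; prompt; mget *'
find /p4/import -type f -exec dos2unix {} +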
17. 17
Challenges – Platform-specific entries
Because we need to run in a mixed Win/Linux environment
during transition, can’t have platform-specific entries
anywhere.
Specific examples:
• Depot dir location.
• Trigger table entries.
18. 18
Challenges – Depot dir location
Win servers had “flat” P4ROOT layout where depot dirs were
nested in the D:\p4d dir (e.g., D:\p4d\depot).
Short path length was partly to mitigate the 260-character
Windows max-path issue.
Didn’t want depots in P4ROOT on Linux, so set
“server.depot.root” configurable to put dirs in other location.
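For example (the path and server ID are assumptions), the configurable can be set per server, so Windows and Linux hosts can differ during the transition:
p4 configure set linux-edge#server.depot.root=/p4/depots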
19. 19
Challenges – Trigger table entries
Single trigger table shared by all servers.
As a result, can’t have:
• OS-specific paths in table.
• OS-specific binaries in table.
• OS-specific references in triggers.
20. 20
Challenges – Trigger table paths
Can’t use OS-specific paths in table.
Solution = %serverroot% var + make sure tools (Perl,
Python, etc.) are present in base system path.
Old =
C:\python27\python.exe C:\bin\trigger.py args
New =
python2 %serverroot%/triggers/trigger.py args
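Put together, a full trigger table entry might look like this (the trigger name and type are hypothetical):
example-trig change-submit //depot/... "python2 %serverroot%/triggers/trigger.py %changelist%"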
21. 21
Challenges – Trigger table binaries
Can’t use OS-specific binaries in table.
Solution = Wrappers and/or rename via client.
Example: p4auth_ad.exe AD auth trigger.
Old =
C:\bin\p4auth_ad.exe args
New =
%serverroot%/triggers/p4auth_ad.exe args
Linux = p4auth_ad.pl gets synced as p4auth_ad.exe
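The rename itself can be done with a view mapping in the workspace that syncs the trigger files (depot path and workspace name are assumptions):
//depot/triggers/p4auth_ad.pl //trigger-sync-ws/triggers/p4auth_ad.exe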
22. 22
Results
Speed = Syncs, checkpoint/DB rebuild process, p4todb
rebuild are all faster on Linux servers.
Load = Linux edge server for build farm handled 200+
“sync -f” at one time.
23. 23
Summary
Time
• It will take longer than expected.
Effort
• It is easier than expected.
Work with support.
#3: Tableau started in 2003 and is another “Stanford spinoff” company. Started in Seattle, now has a number of offices. Has grown very quickly.
About me: Have used and administered Perforce for 10+ years at multiple companies. At Tableau I work on the “continuous delivery systems” team that manages Perforce, TeamCity, Artifactory, and similar systems.
This is one of two presentations from Tableau staff at the conference.
Why this presentation: Have attended many Perforce conferences, and always like to see presentations where people are doing unusual things with Perforce.
#4: Also use other Perforce products: P4Web, p4todb, Swarm, GitFusion, GitSwarm (soon).
Many other systems connect to Perforce: TeamCity, ReviewBoard, OpenGrok, etc.
Other presentation has some examples of how data in Perforce is being used.
Main offices in WA (Seattle, Kirkland) and CA (Palo Alto). Also have offices in Austin, Vancouver (CA), and UK/Germany (HyPer).
#5: “best” solution = Perforce generally works best on Linux. Linux is the most widely used platform for Perforce; it’s what gets the most dev attention.
Helps that we have IT staff that understand Linux.
No monthly reboot for updates.
To the last point, want to note that I’ve administered Perforce for many years on Windows, and some of the systems were reasonably large at 2500+ users, so I don’t really have any issues with running Perforce on Windows.
#7: Tech writer said I needed to put some graphics in the presentation…
#8: RO replicas to back up office edge servers are located in remote offices for reporting and DR.
#9: Edge servers = 3x for offices, 2x for build farms.
Not shown: RO replicas to back up edge servers are located in offices (on same host as proxy).
#11: At previous job (10+ years back) wrote Perl toolkit to do Win to Solaris migration. It can be done, but it’s a lot of work.
Have played with p4migrate, but haven’t used it to convert a production system. Think it only does file metadata, so you might still need to fix user names, client names, etc. Know it doesn’t like unloaded clients; there may be other limitations.
#12: Our process:
Start with Linux edge server for build farm (stress test) and RO replica for p4todb/reporting.
Next set up edge servers to replace dev office fwd replicas.
Final step is to migrate commit server.
Tableau is not the only site using this approach.
#13: Stages = If you have large enough metadata, the migration scripts can take hours or days to run, and you have to do EVERY server at the SAME TIME – which can be an issue when you have 5+ servers to convert all at once. Tableau (like other companies) is doing “continuous deployment” where we release updates on a regular cadence. Shutting down the SCM system for days is a difficult sell.
#15: IT standard = need to use this on all “production” servers unless there’s a really good reason not to.
Systemd = A year or more ago, when I started working on this project, there weren’t many references to running P4D with systemd. Now there is the KB article, and the SDP has an example.
Mention OOMKiller setting.
RHEL7 has full support for XFS now.
Another challenge for RHEL7: We also run GitFusion, which doesn’t (at last check) support https:// on RHEL7, but it can be set up manually.
#16: Cvs2p4 is not the “p4convert” CVS import tool that Perforce has now.
#17: There’s a KB article about the proxy not working right with CVS 1.1.1.1 format archives too.
smbclient has an option to automatically lowercase all files it transfers.
Why not use smbclient to transfer all archives? Because we have a few files that have extended chars in the names, and smbclient won’t convert the chars but “verify -t” will. (Probably a Win vs. Linux codepage issue.)
CVS stores binary files inside ,v files, so even binaries had to have dos2unix run over them.
Iterated across each change via “p4 -s verify -qtz //...@1,@1”, etc., to make sure everything gets transferred.
When using “verify -t” to transfer files, don’t forget the spec depot (no changes or lazy copies there) and the unload depot (-U).
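A sketch of that per-change loop (depot path and shell details are assumptions):
for c in $(p4 changes -s submitted //... | awk '{print $2}'); do
 p4 -s verify -qtz "//...@$c,@$c"
done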
#19: Depot dirs = Fortunately, we did not have hard-coded paths in the depot definition, which would have made this harder.
260 char max path = now fixed with lfn configurable.
#20: What about running from depot?
Won’t work for us because we have support files that need to be alongside the trigger.
Also wouldn’t work during period when both Win and Linux servers are running side-by-side.
OS-specific references = Can’t have “C:\logs” (or similar) hard-coded in trigger scripts. Can’t (or at least shouldn’t) assume specific user context.
#21: Perforce server on Win will work with Unix path separator (forward slash).
#22: Linux doesn’t care about extensions, so as long as a file has the +x bit and a “#!/usr/bin/env perl” (or similar) line at the start, you can call a Bash script with a .bat extension or a Perl script with a .exe extension (see the sketch after this note).
Similar problems with Swarm trigger (VBS vs. sh) although that’s been replaced by a single Perl script now.
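For example, a minimal Perl trigger synced under a .exe name just needs the executable bit and a shebang line (contents hypothetical):
#!/usr/bin/env perl
use strict;
use warnings;
# ...trigger logic here; Linux ignores the .exe extension.
exit 0;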
#23: Not going to put up a bunch of spreadsheet numbers; it’s not really an apples-to-apples comparison, since the Linux servers are better-spec’d hardware (and are using XFS), and as we were doing this transition the codebase was getting smaller (moving third-party items to Artifactory) while the number of build agents was growing.
#24: Time = Other projects/issues/interruptions, time to migrate users from fwd replicas to edge servers, moving around systems during process, adding another edge server, etc.
Effort = Old way with checkpoint surgery is a LOT of work. (One of Ed’s original tasks when he was hired 6 years ago was to do conversion to Linux, and it was just too much time.)
Support = Lots of questions during this process – systemd, issues with integrity logs, etc.
Side-effect of change: All Linux hosts are identical configuration in terms of hardware, drive layout, OS, scripts, etc., which makes it easier for a team to support. Win hosts were built up over several years, so they’re not in sync.