The document discusses IBM Spectrum Scale's unified file and object access feature. It allows data to be accessed as both files and objects within the same namespace without data copies. This enables use cases like running analytics directly on object data using Hadoop/Spark without data movement. It also allows publishing analytics results back as objects. The feature supports common user authentication for both file and object access and flexible identity management modes. A demo is shown of uploading a file as object, running analytics on it, and downloading the results as object.
Spectrum Scale Unified File and Object with WAN Caching (Sandeep Patil)
This document provides an overview of IBM Spectrum Scale's Active File Management (AFM) capabilities and use cases. AFM uses a home-and-cache model to cache data from a home site at local clusters for low-latency access. It expands GPFS' global namespace across geographical distances and provides automated namespace management. The document discusses AFM caching basics, global sharing, use cases like content distribution and disaster recovery. It also provides details on Spectrum Scale's protocol support, unified file and object access, using AFM with object storage, and configuration.
IBM Spectrum Scale object deep dive training (Smita Raut)
This document provides an overview and agenda for a presentation on object storage capabilities in IBM Spectrum Scale. The summary includes:
1. The agenda covers object protocol, administration including installation methods, object authentication, storage policies, unified file and object, multiregion, S3, creating containers/buckets and objects, and problem determination.
2. Administration of object protocol can be done through the Spectrum Scale installation toolkit or CLI commands. This includes enabling features like S3 and multiregion.
3. Authentication for object access can be configured with options like Active Directory, LDAP, local authentication, or an external Keystone service.
In Place Analytics For File and Object Data (Sandeep Patil)
The document discusses IBM Spectrum Scale's unified file and object access feature. It introduces Spectrum Scale and its support for file and object access. The unified file and object access feature allows data to be accessed as both files and objects without copying, through a single management plane. Use cases like in-place analytics for object data and common identity management across file and object access are enabled. A demo is presented where a file is uploaded as an object, analytics is run on it, and the result downloaded as an object, without data movement.
Secure Hadoop clusters on Windows platform (Remus Rusanu)
This document discusses securing Hadoop clusters on Windows platforms. Key points include:
- Integrating Hadoop clusters with Active Directory for single sign-on and using Windows domain users and groups for access control instead of local users.
- Services like HDFS, YARN, and HTTP interfaces can leverage Active Directory-based access control.
- "Kerberizing" the cluster so users authenticate with Kerberos and services authenticate each other with Kerberos provides data encryption in traffic.
- The Windows Secure Container Executor (WSCE) leverages Kerberos self-service extension to create isolated containers that run processes impersonating users, similar to Linux containers.
A comprehensive overview of the security concepts in the open source Hadoop stack in mid 2015 with a look back into the "old days" and an outlook into future developments.
- IBM Cloud Object Storage (ICOS) is a scalable object storage service that supports objects up to 10 TB and 100 buckets maximum. It provides S3 API compatibility and is IAM enabled.
- ICOS offers four storage classes - Standard, Vault, Cold Vault, and Flex - with different access frequencies and retrieval fees. Resiliency can be achieved through cross-region, regional, or single datacenter replication.
- Access to ICOS can be through public or private endpoints. Security features include firewalls, automatic server-side encryption, and optional customer-managed keys or Key Protect. Aspera provides high-speed transfer through desktop agents.
- Lifecycle rules can automate object expiration
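Because ICOS is S3 API compatible, a lifecycle rule for automatic object expiration can be expressed with the standard S3 lifecycle schema. A minimal sketch; the rule ID, prefix, and 30-day window are illustrative values, not taken from the deck:

```python
import json

# S3-style lifecycle configuration: expire objects under "logs/" after 30 days.
# Rule ID, prefix, and day count are hypothetical examples.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-old-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Expiration": {"Days": 30},
        }
    ]
}

# With an S3-compatible client this payload would be applied to a bucket
# (e.g. via a put-lifecycle call); here we just render it.
print(json.dumps(lifecycle, indent=2))
```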
Implementing Security on a Large Multi-Tenant Cluster the Right Way (DataWorks Summit)
Raise your hands if you are deploying Kerberos and other Hadoop security components after deploying Hadoop to the enterprise. We will present the best practices and challenges of implementing security on a large multi-tenant Hadoop cluster spanning multiple data centers. Additionally, we will outline our authentication & authorization security architecture, how we reduced complexity through planning, and how we worked with multiple teams and organizations to implement security the right way the first time. We will share lessons learned and takeaways for implementing security at your company.
We will walk through the implementation and its impacts to the user, development, support and security communities and will highlight the pitfalls that we navigated to achieve success. Protecting your customers and information assets is critical to success. If you are planning to introduce Hadoop security to your ecosystem, don’t miss this in depth discussion on a very important and necessary component to enterprise big data.
Hadoop Security in Big-Data-as-a-Service Deployments - Presented at Hadoop Su... (Abhiraj Butala)
The talk covers limitations of current Hadoop eco-system components in handling security (Authentication, Authorization, Auditing) in multi-tenant, multi-application environments. Then it proposes how we can use Apache Ranger and HDFS super-user connections to enforce correct HDFS authorization policies and achieve the required auditing.
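The super-user connection approach builds on Hadoop's standard proxy-user (impersonation) mechanism: a trusted service account is allowed to act on behalf of end users, so Ranger policies and audit logs see the effective user rather than the service account. A minimal core-site.xml sketch, assuming a hypothetical service account named `bdaas`:

```xml
<!-- Allow the (hypothetical) "bdaas" service account to impersonate
     members of the "tenants" group when connecting from these hosts. -->
<property>
  <name>hadoop.proxyuser.bdaas.hosts</name>
  <value>gateway1.example.com,gateway2.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.bdaas.groups</name>
  <value>tenants</value>
</property>
```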
Ozone: Evolution of HDFS scalability & built-in GDPR compliance (Dinesh Chitlangia)
This talk was delivered at ApacheCON, Las Vegas USA, September 2019.
Audio Recording: https://feathercast.apache.org/2019/09/12/ozone-evolving-hdfs-scalability-to-new-heights-built-in-gdpr-compliance-dinesh-chitlangia/
Speakers:
Dinesh Chitlangia: https://www.linkedin.com/in/dineshchitlangia/
Ajay Kumar aka Ajay Yadav: https://www.linkedin.com/in/ajayydv/
Abstract:
https://www.apachecon.com/acna19/s/#/scheduledEvent/1176
Apache Hadoop Ozone is a robust, distributed key-value object store for Hadoop with a layered architecture and strong consistency. It separates namespace management from the block and node management layer, which allows users to scale independently on both axes. Ozone is interoperable with the Hadoop ecosystem: it provides OzoneFS (a Hadoop-compatible file system API), data locality, and plug-and-play deployment alongside HDFS, as it can be installed in an existing Hadoop cluster and can share storage disks with HDFS. Ozone solves the scalability challenges of HDFS by being size agnostic, allowing users to store trillions of files in Ozone and access them as if they were on HDFS. Ozone plugs into existing Hadoop deployments seamlessly, and programs like YARN, MapReduce, Spark, and Hive work without any modifications. In an era of increasing data privacy needs and regulation, Ozone also aims to provide built-in support for GDPR compliance, with a strong focus on the Right to be Forgotten, i.e., data erasure. At the end of this presentation the audience will be able to understand: 1. the current challenges with HDFS scalability; 2. how Ozone's architecture solves these challenges; 3. an overview of GDPR; 4. Ozone's built-in support for GDPR.
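The layering the abstract describes can be illustrated with a toy sketch (not Ozone's actual code): one component maps keys to block IDs, a separate component maps block IDs to datanodes, and the two can be scaled independently. All names and values below are made up for illustration:

```python
class NamespaceLayer:
    """Maps volume/bucket/key names to lists of block IDs (the Ozone Manager's role)."""
    def __init__(self):
        self.keys = {}

    def put_key(self, name, block_ids):
        self.keys[name] = list(block_ids)

    def lookup(self, name):
        return self.keys[name]


class BlockLayer:
    """Maps block IDs to the datanodes holding them (the Storage Container Manager's role)."""
    def __init__(self):
        self.locations = {}

    def place_block(self, block_id, nodes):
        self.locations[block_id] = list(nodes)

    def locate(self, block_id):
        return self.locations[block_id]


# Reading a key touches both layers, but through independent lookups,
# which is what lets namespace and block management scale separately.
ns, blocks = NamespaceLayer(), BlockLayer()
blocks.place_block("b1", ["dn1", "dn2", "dn3"])
ns.put_key("/vol1/bucket1/key1", ["b1"])
replicas = [blocks.locate(b) for b in ns.lookup("/vol1/bucket1/key1")]
print(replicas)  # [['dn1', 'dn2', 'dn3']]
```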
Apache Knox Gateway "Single Sign On" expands the reach of the Enterprise UsersDataWorks Summit
Apache Knox Gateway is a proxy for interacting with Apache Hadoop clusters in a secure way, providing authentication, service-level authorization, and many other extensions to secure any HTTP interactions in your cluster. One main feature of Apache Knox Gateway is the ability to extend the reach of your REST APIs to the internet while still securing your cluster and working with Kerberos. Recent contributions to the Apache Knox community have added support for Single Sign On (SSO) based on Pac4j 1.8.9, a powerful security engine that provides SSO support through SAML2, OAuth, OpenID, and CAS. In addition, through recent community contributions, Apache Ambari and Apache Ranger can now also provide SSO authentication through Knox. This paper will discuss the architecture of Knox SSO, explain how enterprise users could benefit from this feature, and present enterprise use cases for Knox SSO and its integration with open source Shibboleth, ADFS Windows Server IdP support, and the Okta cloud IdP.
Introduction to Windows Azure Data Services (Robert Greiner)
This document provides an overview of using Azure for data management. It discusses using PartitionKey and RowKey to organize data into partitions in Azure table storage. It also recommends using the Azure Storage Client library for .NET applications and describes retry policies for handling errors. Links are provided for additional documentation on Azure table storage and messaging between Azure services.
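The two ideas in that summary, addressing entities by a (PartitionKey, RowKey) pair and retrying failed operations, can be sketched without the Azure client library. This is an illustrative stand-in, not the Azure Storage Client API; the table contents and backoff delays are made up:

```python
import time

# Entities in table storage are addressed by (PartitionKey, RowKey);
# entities sharing a PartitionKey live in the same partition.
table = {}

def insert_entity(partition_key, row_key, entity):
    table[(partition_key, row_key)] = entity

def query_partition(partition_key):
    # Scanning a single partition is the efficient access pattern.
    return {rk: e for (pk, rk), e in table.items() if pk == partition_key}

insert_entity("orders-2024", "0001", {"total": 42})
insert_entity("orders-2024", "0002", {"total": 7})
insert_entity("orders-2023", "0900", {"total": 3})

# A simple exponential-backoff retry policy, in the spirit of the ones the
# client library ships; attempt counts and delays are illustrative.
def with_retries(op, attempts=3, base_delay=0.01):
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)

result = with_retries(lambda: query_partition("orders-2024"))
print(sorted(result))  # ['0001', '0002']
```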
The document discusses Hadoop security today and tomorrow. It describes the four pillars of Hadoop security as authentication, authorization, accountability, and data protection. It outlines the current security capabilities in Hadoop like Kerberos authentication and access controls, and future plans to improve security, such as encryption of data at rest and in motion. It also discusses the Apache Knox gateway for perimeter security and provides a demo of using Knox to submit a MapReduce job.
Protect your private data with ORC column encryption (Owen O'Malley)
Fine-grained data protection at a column level in data lake environments has become a mandatory requirement to demonstrate compliance with multiple local and international regulations across many industries today. ORC is a self-describing type-aware columnar file format designed for Hadoop workloads that provides optimized streaming reads but with integrated support for finding required rows quickly.
Owen O’Malley dives into the progress the Apache community made for adding fine-grained column-level encryption natively into ORC format, which also provides capabilities to mask or redact data on write while protecting sensitive column metadata such as statistics to avoid information leakage. The column encryption capabilities will be fully compatible with Hadoop Key Management Server (KMS) and use the KMS to manage master keys, providing the additional flexibility to use and manage keys per column centrally.
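The mask-or-redact-on-write behavior described above can be sketched in miniature: a protected column is stored encrypted, and unauthorized readers get back a masked copy instead of the raw values. This is an illustrative sketch, not ORC's implementation; the "keep the last four characters" masking rule and the sample rows are invented:

```python
def redact_ssn(value):
    """Mask all but the last four characters, preserving the length."""
    return "x" * (len(value) - 4) + value[-4:]

rows = [
    {"name": "alice", "ssn": "123-45-6789"},
    {"name": "bob", "ssn": "987-65-4321"},
]

# On write, the sensitive column would be encrypted with its per-column key;
# the masked copy is what a reader without that key would see.
masked = [{**r, "ssn": redact_ssn(r["ssn"])} for r in rows]
print(masked[0]["ssn"])  # xxxxxxx6789
```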
[DevDay 2016] OpenStack and approaches for new users - Speaker: Chi Le – Head... (DevDay Da Nang)
OpenStack is an open source cloud computing platform providing infrastructure as a service (IaaS). The presentation encapsulates the core concepts of OpenStack, supported by a practical demo and simple but effective guidelines for accessing OpenStack.
———
Speaker: Chi Le – Head of Infrastructure System at Da Nang ICT Infrastructure Development Center
This document provides an overview of Azure SQL DB environments. It discusses the different types of cloud platforms including IaaS, PaaS and DBaaS. It summarizes the key features and benefits of Azure SQL DB including automatic backups, geo-replication for disaster recovery, and elastic pools for reducing costs. The document also covers pricing models, performance monitoring, automatic tuning capabilities, and security features of Azure SQL DB.
Hadoop security overview discusses Kerberos and LDAP configuration and authentication. It outlines Hadoop security features like authentication and authorization in HDFS, MapReduce, and HBase. The document also introduces Etu appliances and their benefits, as well as troubleshooting Hadoop security issues.
This document introduces Apache Sentry, an open source authorization module for Hadoop. It provides fine-grained, role-based authorization across Hadoop components like Hive, Impala and Solr. Sentry uses a centralized policy store to manage permissions for resources like databases, tables and collections. It evaluates rules to determine if a user's group has privileges to access resources based on their roles defined in the Sentry policy. Future work aims to introduce Sentry to more Hadoop components and provide a centralized authorization service for all protected resources and metadata.
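The evaluation order that summary describes, user to groups to roles to privileges, can be sketched as a tiny rule evaluator. This is an illustrative sketch, not Sentry's actual policy engine; the users, groups, roles, and resources are hypothetical:

```python
# Policy data: group membership, role grants per group, privileges per role.
groups = {"alice": ["analysts"]}
roles_by_group = {"analysts": ["reporting_role"]}
privileges_by_role = {"reporting_role": [("db.sales", "SELECT")]}

def has_privilege(user, resource, action):
    """Walk user -> groups -> roles -> privileges and look for a match."""
    for group in groups.get(user, []):
        for role in roles_by_group.get(group, []):
            if (resource, action) in privileges_by_role.get(role, []):
                return True
    return False

print(has_privilege("alice", "db.sales", "SELECT"))  # True
print(has_privilege("alice", "db.sales", "INSERT"))  # False
```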
OpenStack is an open source cloud computing platform that provides infrastructure as a service. It consists of interrelated components that control hardware resources like processing, storage, and networking. The key components include Nova for compute, Glance for images, Cinder for block storage, Swift for object storage, Keystone for identity, Horizon for the dashboard, Ceilometer for metering, and Neutron for networking. OpenStack provides APIs and dashboards to allow users to provision resources on demand.
Deploying Enterprise-grade Security for Hadoop (Cloudera, Inc.)
Deploying enterprise grade security for Hadoop or six security problems with Apache Hive. In this talk we will discuss the security problems with Hive and then secure Hive with Apache Sentry. Additional topics will include Hadoop security, and Role Based Access Control (RBAC).
This document provides an overview of Apache Hadoop security, both historically and what is currently available and planned for the future. It discusses how Hadoop security is different due to benefits like combining previously siloed data and tools. The four areas of enterprise security - perimeter, access, visibility, and data protection - are reviewed. Specific security capabilities like Kerberos authentication, Apache Sentry role-based access control, Cloudera Navigator auditing and encryption, and HDFS encryption are summarized. Planned future enhancements are also mentioned like attribute-based access controls and improved encryption capabilities.
IBM's Watson is a question answering computer system developed by IBM to answer questions posed in natural language. It was named after IBM's founder Thomas J. Watson and was initially created to compete on the game show Jeopardy! where it defeated human champions in 2011. Watson uses advanced natural language processing, semantic analysis, and machine learning to defeat human opponents. It is capable of answering complex questions with nuanced language and is being developed by IBM for commercial applications in fields like healthcare, finance and education.
Late-term abortion and neonatal infanticide in Europe, ECLJ, 26 June 2015 (fpaspousser)
Late-term abortion and neonatal infanticide in Europe, ECLJ, 26 June 2015
http://9afb0ee4c2ca3737b892-e804076442d956681ee1e5a58d07b27b.r59.cf2.rackcdn.com/ECLJ%20Docs/L%27avortement%20tardif%20et%20les%20infanticides%20n%C3%A9onataux%20en%20Europe%2C%20ECLJ%2C%2026%20juin%202015.pdf
Electric power supply for Qualified Users in Mexico (Gabriel Neuman)
To choose an electricity supplier other than CFE, it is necessary to be a qualified user. Among the requirements is having a contracted load of at least 1,000 kW.
This document appears to be a collection of notes on various technology trends and economic concepts ranging from the sharing economy and Uber to customized experiences, virtual and augmented reality, autonomous vehicles, artificial intelligence, industry 4.0, and the Internet of Things. The notes cover topics like globalization, aging populations, mobile technology usage, and connectivity in the home.
2017 marks the end of a cycle and the opening of a new five-year period. Ifsttar's research structures have been heavily solicited since 2014: preparing their evaluation dossiers for HCERES at the end of 2014, the institute-wide dossier covering governance and management at the end of 2015, the now-completed reflection on Ifsttar's priority themes, and the drafting of a new Contract of Objectives and Performance for the 2017-2021 period.
In 2017 the teams will have a year relatively light in strategic solicitations - the evaluations are over, and a ten-year scientific strategy has been validated by the scientific board and the board of directors - and can concentrate on their core work. This will obviously not prevent close follow-up of the recommendations made by the expert committees mandated by HCERES, with a mid-term review by Ifsttar's Scientific Council.
2017 should be notable on two fronts:
1. The implementation of Ifsttar's new Contract of Objectives and Performance 2017-2021.
2. The follow-up to the structuring of the large teaching and research groupings arising from, or supported by, the responses to the calls launched under PIA2, in particular the IdEx and I-Site projects, in connection with Ifsttar's various sites.
(See also the annexes of the 2017 research programme.)
QR codes for autonomous reading practices, ExplorCamp Ludovia 2013 (vpaillas)
Slides used during the workshop given at Ludovia 2013 on using QR codes to create enriched books and to support autonomous questioning in reading.
Check out these marketing jokes. If you get them all: hats off, you are a 100% marketer!
Don't forget to share your Marketing Jokes in a comment!
7 deadly mistakes that could kill your next app idea (BizSmart Select)
Paul explains, from first-hand experience, the challenges of developing new product ideas for web and mobile platforms.
Over the past 15 years, Paul has developed new products with varying degrees of success, and he will share the lessons learned from idea to concept to marketing. He will explain key concepts to save time, how to get the idea out of your head, how to quickly develop a proof of concept, and how to validate your assumptions.
The Views module is a powerful tool. It gives site builders and site managers enormous control over displaying their content. But flexing the full power of views can be daunting.
In this session, we'll take a practical look at the difference between Views in Drupal 7 as a contributed module, and views in Drupal 8, now that it is in core.
We'll look at some of the advanced features in Views, and find out how we can get more out of our content using the full slice-and-dice capacity of the Views module, with help from some of its friends.
We'll conclude with an overview of the Views ecosystem, with pointers on where to go next to go from Views beginner, to Views master.
@MarketaAdamova Slides from 'Road Trip To Component' talk (Dutch Clojure Days 2017)
'Few months ago our company NomNom decided to move all its backend services from Ruby to Clojure. And I think a road trip is best comparison for this migration. There was excitement at start, then panic a few hours down the road wondering what was left behind, but now a constant joy of discovering new things. In this talk I’d like to share how we eventually arrived at Stuart Sierra’s Component. Let’s take a look at how components improved our quickly growing codebase and testing, as well as some of the trade-offs we had to make. Finally I’ll show how components can help with managing running code in production.'
Presentation for RTD 2017 to illustrate paper Design Fiction as World Building https://figshare.com/articles/Design_Fiction_as_World_Building/4746964
The document discusses growing a company without traditional bosses or top-down management. It argues that most companies are overly managed, which limits employee engagement, productivity, and innovation. As an alternative, it proposes that companies can function like ecosystems or self-organizing machines, with distributed authority, self-management, and holacracy. This allows for purpose and values, information sharing, role design, decision making, and other processes to be determined collaboratively without hierarchical control. Several examples of companies successfully using this approach are provided.
Common Data Model - A Business Database! (Pedro Azevedo)
In this session I presented how the Common Data Service will be the future of the Business Application Platform and how it will help Dynamics 365 grow.
This slide briefs about various tools & techniques used to extract unprotected data from iOS apps. You can extract resource files, database files, get data in runtime using various methods. In my next slides I will brief about the ways to secure your iOS apps.
The "Internet of Things" (IoT) refers to an Internet like structure consisting of uniquely identified objects that expose services. The IoT is a relatively new field with all more and more connected devices being developed monthly. This presentation discusses the current state of the IoT, what it is lacking and offers up some solutions to those problems.
Vanya Sehgal is seeking a full-time position as a software developer. She has a Master's degree in Computer Science from Rochester Institute of Technology with a 3.75 GPA and relevant coursework including Java, C++, data management, and Android development. She has work experience as a software engineer intern at Intuit and Amazon where she worked on web and mobile applications, gained AWS experience, and performed testing. Her technical skills include Java, C++, SQL, Linux, and Android development and she has completed projects involving distributed systems, security evaluation, and mobile applications.
Choosing the right Technologies for your next unicorn.Gladson DSouza
Startup India had its 5th Meetup on "Choosing The Right Technologies For Your Next Unicorn” on August, 05 2017!
We’re had the coolest techies speak about the latest and tested technologies used by the Best global tech enterprises.
From what to why for you to reach your ultimate Goal – to become a Unicorn!
We cover a range of topics like:
• Technologies that aren't disrupted
• Languages and frameworks
• Agile Project Management
• Web Security
• Automated Testing
• Tech stacks of Whatsapp, Uber, Facebook and other large enterprises
Why should you have attend?
• To understand the future of technology and which direction is recommended as of 2017
• Great Insights on present Latest Technologies
• Growing tech adoption trend globally and in India
• Ready case studies of present Unicorns.
Analytics with unified file and object Sandeep Patil
Presentation takes you through on way to achive in-place hadoop based analytics for your file and object data. Also give you example of storage integration with cloud congnitive services
Common Data Service – A Business Database!Pedro Azevedo
In this session I tried to explain to SQL Community what is Common Data Service, it's a new Database or only a service to allow Power Users to create applications.
So you might have heard of Project Cortex, allowing you to auto-tag information in SharePoint and extract knowledge from your content. But what if you can't wait for the preview? Or you are in an on-premises scenario? You can use the Azure Cognitive services directly from your SharePoint on-premises environment! In this session, you will learn how you can extend your on-premises data in SharePoint with the different cognitive services Azure offers, including Azure Text Analytics and LUIS.
Big data is high volume, high velocity, and high variety information that requires innovative processing for business insights. It can include structured and unstructured data like text, images, videos and sensor data. Handling big data requires new technologies for capturing, storing, integrating, analyzing and presenting insights. Key benefits include understanding customers and processes, fraud detection, and customizing websites in real time. Characteristics of big data include large volumes, many data types, rapid data production, and inconsistent data loads.
Systems, processes & how we stop the wheels falling offWellcome Library
Presentation from Digital Curator Dave Thompson on systems and processes for digitisation at the Wellcome Library for our second Digitisation Open Day.
This document discusses iOS application penetration testing from the perspective of a penetration tester. It begins with an overview of iOS applications and the iOS monoculture, covering code signing, sandboxing, and encryption. It then discusses various techniques a penetration tester may use, including checking compile options, exploiting URL schemes, analyzing insecure data storage in databases, property lists, keyboard caches, image caches, and error logs. It also covers runtime analysis using tools like Clutch, Class-Dump-Z, and Cycript to decrypt binaries, dump classes, and interact with running apps. Examples are provided of potential attacks against apps that involve bypassing locks, extracting hardcoded keys, or injecting malicious code. Defense techniques are also briefly explained.
KeepIt Course 4: Putting storage, format management and preservation planning...JISC KeepIt project
1) The document discusses a practical course on digital preservation tools for repository managers presented by the KeepIt project.
2) The course covers organizational issues, costs, description standards, and preservation workflow tools like EPrints and Plato.
3) Module 4 focuses on format management, risk assessment, storage, and linking preservation planning with tools like EPrints and Plato.
Understanding the Critical Relationship Between Hadoop, Big Data, and Deep Le...Vicky Tyagi
Hadoop, Big Data, and Deep Learning are closely connected in the modern data ecosystem. Big Data refers to the massive volume of structured and unstructured data generated daily. Hadoop is an open-source framework that allows for the distributed storage and processing of this big data across clusters of computers. Deep Learning, a subset of machine learning, uses large neural networks and thrives on vast amounts of data—often stored and processed using tools like Hadoop. Together, they enable scalable, data-driven insights and intelligent decision-making.
This presentation has been uploaded by Public Relations Cell, IIM Rohtak to help the B-school aspirants crack their interview by gaining basic knowledge on IT.
Object Storage promises many things - unlimited scalability, both in terms of capacity and file count, low cost but highly redundant capacity and excellent connectivity to legacy NAS. But, despite these promises object storage has not caught on in the enterprise like it has in the cloud. It seems like, for the enterprise object storage just isn’t a good fit. The problem is that most object storage system’s starting capacity is too large. And while connectivity to legacy NAS systems is available, seamless integration is not. Can object storage be sized so that it is a better fit for the enterprise?
The document discusses approaches to integrated multimedia indexing and retrieval. It describes integrating content-based approaches with structured attributes and annotations to leverage each approach's strengths. Different media types are combined in multimedia objects and various techniques are used to integrate retrieval of audio, images, video and other media. The document also outlines a general architecture for a multimedia information management system.
Mieke Jans is a Manager at Deloitte Analytics Belgium. She learned about process mining from her PhD supervisor while she was collaborating with a large SAP-using company for her dissertation.
Mieke extended her research topic to investigate the data availability of process mining data in SAP and the new analysis possibilities that emerge from it. It took her 8-9 months to find the right data and prepare it for her process mining analysis. She needed insights from both process owners and IT experts. For example, one person knew exactly how the procurement process took place at the front end of SAP, and another person helped her with the structure of the SAP-tables. She then combined the knowledge of these different persons.
Just-in-time: Repetitive production system in which processing and movement of materials and goods occur just as they are needed, usually in small batches
JIT is characteristic of lean production systems
JIT operates with very little “fat”
Thingyan is now a global treasure! See how people around the world are search...Pixellion
We explored how the world searches for 'Thingyan' and 'သင်္ကြန်' and this year, it’s extra special. Thingyan is now officially recognized as a World Intangible Cultural Heritage by UNESCO! Dive into the trends and celebrate with us!
By James Francis, CEO of Paradigm Asset Management
In the landscape of urban safety innovation, Mt. Vernon is emerging as a compelling case study for neighboring Westchester County cities. The municipality’s recently launched Public Safety Camera Program not only represents a significant advancement in community protection but also offers valuable insights for New Rochelle and White Plains as they consider their own safety infrastructure enhancements.
This comprehensive Data Science course is designed to equip learners with the essential skills and knowledge required to analyze, interpret, and visualize complex data. Covering both theoretical concepts and practical applications, the course introduces tools and techniques used in the data science field, such as Python programming, data wrangling, statistical analysis, machine learning, and data visualization.
2. #ibmedge
Introduction to SWIFT Object Store
• Object storage is highly available, distributed, eventually consistent storage
• Data is stored as individual objects, each with a unique identifier
• A flat addressing scheme allows for greater scalability
• Simpler data management and access
• REST-based data access
• Simple atomic operations:
– PUT, POST, GET, DELETE
• Usually software-based, running on commodity hardware
• Capable of scaling to 100s of petabytes
• Uses replication and/or erasure coding for availability instead of RAID
• Access over a RESTful API over HTTP, which is a great fit for cloud and mobile applications
• APIs: Amazon S3, Swift, CDMI
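As a hedged sketch (the endpoint, account, and token values here are invented for illustration, not taken from the slides), the flat addressing scheme and atomic operations above look like this in Python:

```python
# Sketch of Swift-style flat addressing and atomic object operations.
# All endpoint/account/container names below are illustrative assumptions.

def swift_url(endpoint, account, container, obj):
    """Flat addressing: every object lives at account/container/object,
    with no nested directory tree to traverse."""
    return f"{endpoint}/v1/{account}/{container}/{obj}"

def swift_request(method, endpoint, account, container, obj, token):
    """Describe one atomic REST operation on a single object."""
    if method not in ("PUT", "POST", "GET", "DELETE"):
        raise ValueError("Swift objects support only simple atomic operations")
    return method, swift_url(endpoint, account, container, obj), {"X-Auth-Token": token}

# Example: the request tuple for downloading one object.
req = swift_request("GET", "http://swift.example.com:8080",
                    "AUTH_demo", "photos", "taj1234.jpg", "token123")
```

An HTTP client such as `http.client` or `requests` would then issue the request; authentication against Keystone is omitted here.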
4. #ibmedge
Spectrum Scale Object Store – Additional Features
• Unified file and object support with Hadoop connectors
• Support for encryption
• Support for compression
• The only object store with tape support for backup
• Object store with integrated transparent cloud tiering support
• Multi-region support
• AD/LDAP support for authentication
• ILM support for objects
• Movement of objects across storage tiers based on access heat
• Spectrum Scale Object with IBM DeepFlash provides an all-flash object store for newer, faster workloads
• Spectrum Scale Object with WAN caching support (AFM)
5. #ibmedge
Introduction to Object Store Metadata
• Data is stored as individual objects, each with a unique identifier
• Typically, objects consist of an object identifier (OID), data, and metadata
• Object data is unstructured – images, text, audio, video
• Metadata consists of system metadata and user-defined custom metadata, which can be extensive
Example: Data + Metadata = Object
• System metadata: Filename: taj1234.jpg; Created: 01 Aug 2016; Last Modified: 03 Aug 2016; Object type: image
• Custom metadata: Subject: Taj Mahal; Place taken: India; Category: Travel; Allow Sharing: yes
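In Swift-based object stores, custom metadata like the example above travels as `X-Object-Meta-*` request headers. A minimal sketch (the key names come from the example; the helper names are assumptions):

```python
# Sketch: mapping user-defined metadata to and from Swift's
# X-Object-Meta-* headers. Helper names are illustrative.

META_PREFIX = "X-Object-Meta-"

def to_meta_headers(custom_metadata):
    """Turn user-defined key/value pairs into Swift metadata headers."""
    return {META_PREFIX + key: str(value) for key, value in custom_metadata.items()}

def from_meta_headers(headers):
    """Recover the custom metadata from object response headers."""
    return {k[len(META_PREFIX):]: v
            for k, v in headers.items() if k.startswith(META_PREFIX)}

headers = to_meta_headers({"Subject": "Taj Mahal", "Place-Taken": "India",
                           "Category": "Travel", "Allow-Sharing": "yes"})
```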
6. #ibmedge
Object Metadata – Usage
One can assign object metadata such as:
• Tags indicating the contents of the object and the type of application the object is associated with
• The level of data protection / ACLs; replication / deletion control for the object; movement of the object to a different tier of storage or geography
The possibilities are limitless.
Indexing/Searching
• Metadata tags are used to categorize data
• Example: Object repository for Sports – e.g. sport, indoor sport, chess
• Objects can then be searched by category, e.g.:
• Search all articles related to outdoor sports
• Search images of all indoor sports
• Search videos related to chess
[Diagram: object metadata for categorizing – Sports → Indoor (Chess), Outdoor (Soccer)]
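As a small illustrative sketch (the object names and tags are invented, not from the slides), the category-based search above amounts to a tag filter:

```python
# Sketch: searching objects by metadata category tags.
# The object list and tag vocabulary below are illustrative assumptions.

OBJECTS = [
    {"name": "final_match.mp4", "tags": {"sport", "outdoor", "soccer"}},
    {"name": "opening.jpg",     "tags": {"sport", "indoor", "chess"}},
    {"name": "endgame.mp4",     "tags": {"sport", "indoor", "chess"}},
]

def search(objects, *required_tags):
    """Return the names of objects carrying every requested tag."""
    wanted = set(required_tags)
    return [o["name"] for o in objects if wanted <= o["tags"]]

indoor_sports = search(OBJECTS, "sport", "indoor")   # both chess objects
```

A production system would of course push this filter into a metadata index (e.g. Elasticsearch) rather than scanning objects.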
7. #ibmedge
Object Metadata – Usage (continued…)
Smart Tiering
• Objects are placed in different tiers of storage pools based on metadata tags
• Allows objects of different categories to be placed in different tiers based on needs
– Example: it's time for the soccer world cup, so soccer-related objects will potentially be highly accessed.
– Place all the soccer-related objects on a faster tier.
• Allows independent tiering of objects within the same category
– Sports analysts want to run analytics over indoor games only, which means running a Hadoop job on this data.
– Within the "Sport" category, tier only objects tagged as "Indoor" to the faster storage pool for analytics.
[Diagram: Tier 0 (SSD) holds trending objects such as soccer world cup event photos and videos; Tier 1 (HDD) and Tier 2 (Tape) hold non-trending event data]
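A hedged sketch of such a placement rule (the tier names and the "trending" rule are assumptions for illustration; a real deployment would express this as a Spectrum Scale ILM policy rather than application code):

```python
# Sketch: choosing a storage tier from an object's metadata tags.
# Tier names and the trending rule are illustrative assumptions.

def place(tags, trending=frozenset({"soccer"})):
    """Map a set of tags to a tier, hot categories first."""
    if tags & trending:
        return "tier0-ssd"    # e.g. soccer content during the world cup
    if "sport" in tags:
        return "tier1-hdd"    # ordinary sports content
    return "tier2-tape"       # cold, non-trending event data
```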
8. #ibmedge
Now we know object metadata is very valuable and its uses are limitless... but then, where is the problem?
9. #ibmedge
Meaningful Tagging of Objects with Metadata?
To leverage the power of the user-defined metadata associated with an object, the object has to be appropriately tagged; otherwise the metadata is of little use.
The following are inhibitors of meaningful tagging of object metadata:
• Typical metadata generation processes are:
• Device-based (e.g. a camera tagging basic info onto pictures)
– Most of the time these attributes are primitive and low-level in nature, providing only raw data about the object.
– Since the tags are limited or specific, they are of less or constrained value for analytics.
• Manual – given by the user or applications
– User- or application-defined metadata is sometimes seen as unnecessary overhead by the end user at the time of object generation, so the data is not tagged.
– Users might add unnecessary or misleading metadata attributes that add no value for further processing.
• An object can have many dimensions, and not all may be added when the object is generated. This too constrains its value, e.g.:
• An image may have many faces, but the user might tag only a few.
• A song might be a fusion of two genres, but only one is captured in the metadata.
Need for objects to be accurately auto-tagged by object stores!
10. #ibmedge
First of a Kind
We need a provision to cognitively auto-tag heterogeneous unstructured data in the form of objects, to leverage its benefits...
Solution: integration of cognitive computing services with object storage, for auto-tagging of unstructured data in the form of objects.
11. #ibmedge
What is IBM Watson Cognitive Computing Services
• Cognitive services are based on technology that encompasses machine learning, reasoning, natural language processing, speech, vision, and more
• IBM Watson Developer Cloud enables cognitive computing features in your app using IBM Watson's Language, Vision, Speech and Data APIs
Watson Services
• Alchemy Language – text analysis to extract the sentiment of a document
• Language Translation – translate and publish content in multiple languages
• Tone Analyzer – discover, understand, and revise the language tones in text
• Visual Recognition – understand the contents of images
• Personality Insights – uncover a deeper understanding of people's personalities
• Retrieve and Rank – enhance information retrieval with machine learning
• Natural Language Classifier – interpret and classify natural language with confidence
• And many more ...
12. #ibmedge
Example of IBM Watson Cognitive Service
• Visual Recognition Service
• Allows users to understand the contents of an image or video frame.
• Provides an answer to "What is in this image?"
• Results are scores for relevant classifiers, representing things such as objects, events and settings
– e.g. dog (relevance 0.7), mountain (relevance 0.5)
[Figure: sample input image and Watson Visual Recognition results]
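A sketch of how such a classification result could be flattened into metadata tags (the response layout mirrors the slide's example, but the field names are assumptions, not the exact Watson wire format):

```python
# Sketch: flattening a Visual Recognition style result into metadata tags.
# The field names mirror the slide's example but are assumptions, not the
# exact Watson wire format.

def to_tags(classify_result, threshold=0.5):
    """Keep classifier scores at or above the threshold, as percentages."""
    return {c["class"]: f"{c['score'] * 100:.2f}%"
            for c in classify_result["classifiers"]
            if c["score"] >= threshold}

result = {"classifiers": [{"class": "dog", "score": 0.7},
                          {"class": "mountain", "score": 0.5},
                          {"class": "cat", "score": 0.1}]}
tags = to_tags(result)   # the low-confidence "cat" class is dropped
```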
13. #ibmedge
Example of IBM Watson Cognitive Service (cont.)
Speech to Text
The Speech to Text service converts the human voice/audio into the written word.
Tone Analyzer
This service uses linguistic analysis to detect and interpret emotions, social tendencies, and language style cues found in text.
[Figure: input audio/voice flows through Speech to Text, then Tone Analyzer, to produce Watson results]
14. #ibmedge
How do cognitive insights solve the problem of missing meaningful object metadata tags?
• Cognitive services, after deep analysis, give insights into unstructured data (images, audio, video, text, etc.)
• These insights directly relate to the content of the unstructured data and can be used for:
• Further analytics of the data
• Categorization of the data, and subsequent use of the categorization in different use cases
• Index & search of data, etc.
• When unstructured data is stored in the form of objects, these cognitive insights can be defined and stored as user-defined metadata (tags) for the objects:
• Helps address the problem of meaningful tagging of objects
• Opens a realm of newer possibilities and opportunities for analytics
15. #ibmedge
Deriving Object Tags Using Cognitive Services
Cognitive services are run asynchronously by the object store to auto-tag objects.
Example:
• The Visual Recognition service is used to tag and categorize images
• Alchemy API entity and concept tagging for text objects like blogs and text feeds
• Speech-to-text conversion, in conjunction with the Alchemy API, for tagging audio/video objects
• Tone Analyzer can likewise be used to tag audio objects
Cognitive services categorize the objects into specific group or categories.
• Example:
– Animal, canine, tiger
– Sport, outdoor sport, football
Based on the categories, object relations can be derived, such as
• All articles on indoor sports
• All images of animals
• All nature songs
16. #ibmedge
High Level Flow
[Diagram: a photo clicked on a phone is uploaded to IBM Spectrum Scale Object Storage; a service feeds the object to Watson, which returns tags such as:]
{
  "WatsonTags": {
    "Eiffel": "94.78%",
    "Tower": "85.81%",
    "Tour": "66.82%"
  }
}
17. #ibmedge
What Makes it Possible: OpenStack Swift
Middleware Framework
• OpenStack Swift is based on the WSGI specification, and middleware is a WSGI feature that extends the functionality of any WSGI application.
• Middleware is used heavily in Swift, for purposes such as logging, tempurl, tempauth, and quotas.
• If there is a middleware in an application pipeline, every request and response passes through the middleware when a request is served.
• Can I write custom middleware for Spectrum Scale Object?
• Yes (it needs to be reviewed by development before deployment)
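The middleware idea can be sketched as a standard WSGI wrapper (the queue here is an in-memory stand-in for the real STOMP queue, and the header name follows the one used later in this deck):

```python
# Sketch of a Swift-style WSGI middleware: it wraps the next app in the
# pipeline, so every request and response passes through it.

queued = []  # in-memory stand-in for the STOMP message queue

class CognitiveMiddleware:
    def __init__(self, app):
        self.app = app  # next WSGI application in the proxy pipeline

    def __call__(self, environ, start_response):
        wants_tagging = environ.get("HTTP_X_VISUAL_INSIGHTS_ENABLE") == "true"
        response = self.app(environ, start_response)  # delegate downstream
        if wants_tagging:                             # act on the return path
            queued.append(environ["PATH_INFO"])
        return response

def upload_app(environ, start_response):
    """Dummy stand-in for the rest of the Swift pipeline."""
    start_response("201 Created", [])
    return [b""]

app = CognitiveMiddleware(upload_app)
body = app({"PATH_INFO": "/v1/AUTH_demo/photos/taj1234.jpg",
            "HTTP_X_VISUAL_INSIGHTS_ENABLE": "true"},
           lambda status, headers: None)
```

In a real deployment the class would be registered as a filter factory in `proxy-server.conf` so Swift's paste pipeline can load it.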
18. #ibmedge
How it Works
[Diagram: data is ingested into Spectrum Scale Unified File and Object (OpenStack Swift); the Cognitive Middleware sits in the request/response pipeline, sends objects to Watson, receives cognitive tags, and updates the objects. The cognitive tags are associated with objects as user-defined metadata for applications to leverage.]
19. #ibmedge
How OpenStack Swift is integrated with Spectrum Scale
• At a very high level, a Spectrum Scale cluster contains nodes that share a common filesystem namespace, which is cross-mounted on all of its nodes.
• An OpenStack Swift cluster consists of multiple types of processes: the object, container and account servers form the backend, and the proxy server acts as the interface to the cluster.
• A few designated nodes in a Spectrum Scale cluster, known as "protocol nodes", host the protocol stack. OpenStack Swift is installed on these protocol nodes.
• All the OpenStack Swift processes are installed on every protocol node of the cluster.
• Depending on the request type (object, container or account), the proxy server chooses the responsible backend server based on a distributed circular hash, called the ring.
• So a proxy server can contact a backend server running on another protocol node to fulfill the request.
20. #ibmedge
Approach chosen
Developed a custom Watson cognitive middleware for Spectrum Scale Object
• The user uploads the object with the header X-Visual_Insights_Enabled to enable it for cognitive analysis.
• The Watson cognitive middleware in the proxy server pipeline intercepts the request on the return path, if successful.
• The middleware appends the object path and object timestamp to a local queue, which contains the list of objects that need to be tagged by cognitive computing.
Developed a Watson background service for Spectrum Scale
• Runs on every protocol node and monitors the configured queue.
• When a STOMP (Simple Text Oriented Messaging Protocol) message is added to the queue by the middleware, the Watson service checks the timestamp of the message against the file's. If the queue timestamp is older than the file, it ignores the request.
• If the message timestamp matches the file, it processes the object with the Watson cognitive service and updates the metadata through the file interface.
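The staleness check above can be sketched as follows (paths and timestamps are illustrative; the real service compares the queued STOMP message against the file's timestamp):

```python
# Sketch of the background service's staleness check: a queued message is
# processed only if its timestamp still matches the object's; an older
# timestamp means the object was overwritten after queueing, so it is ignored.

def should_process(message_ts, object_ts):
    return message_ts == object_ts

def drain(queue, current_ts_by_path):
    """Keep only fresh messages from the queue."""
    return [path for path, ts in queue
            if should_process(ts, current_ts_by_path.get(path))]

# b.jpg was rewritten (ts 3.0) after its message (ts 2.0) was queued,
# so only a.jpg survives the drain.
fresh = drain([("/v1/a/c/a.jpg", 1.0), ("/v1/a/c/b.jpg", 2.0)],
              {"/v1/a/c/a.jpg": 1.0, "/v1/a/c/b.jpg": 3.0})
```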
22. #ibmedge
Use Case 1: Smart Object Tiering Based on Cognitive Tags
• Objects are tagged by cognitive services based on their content.
• Tags can be accessed as xattrs by Spectrum Scale placement and migration policies.
• Admins can write placement/movement rules for objects based on the cognitive tags associated with the objects as user-defined metadata.
• Example: a sports media portal hosts sports images and videos for its end users. With the soccer world cup coming in a month, end users will access more soccer-related content, which the portal would like to serve with better response times. With cognitive object tagging on Spectrum Scale, one can move all images tagged as soccer by the cognitive service to a fast tier for better end-user response times.
[Diagram: Spectrum Scale Unified File and Object auto-tags objects with user-defined metadata; objects are tiered between a silver pool (SATA) and a gold pool (SAS) by business rules leveraging the cognitive tags]
23. #ibmedge
Use Case 2: Chat Center Interaction Analysis
• Ongoing chat center analysis is key to any business that needs to know:
• % of good chats
• % of bad chats
• Analysis over the chats (using Hadoop):
– demographic relation to good/bad chats
– specific product relation to good/bad chats
– etc.
• Chat interaction files stored as objects are tagged by cognitive services as good and bad interactions, or with tags based on the emotions identified by the cognitive services.
• These tags help categorize the data based on type and genre.
• One can then run in-place analytics, leveraging the Spectrum Scale Hadoop connectors, on the same data to derive more insights.
• Example: run analytics over all objects marked "anger emotion chat" to derive the timeline and the product being discussed, and present a word chart showing which products generate anger and when it happened.
[Diagram: analytics with unified file and object access – objects to be analyzed (chat logs auto-tagged with emotions) are identified via the auto-tags and moved from the slower tier to the faster tier; Spark or Hadoop MapReduce then runs in-place analytics over the good/bad chats and the results are returned]
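The good/bad breakdown described above reduces to a count over cognitive tags (tag names here are illustrative stand-ins for the emotions a cognitive service would return):

```python
# Sketch: the good/bad chat breakdown, computed from the cognitive tags
# attached to chat-log objects. Tag names are illustrative assumptions.

from collections import Counter

def chat_breakdown(chat_tags):
    """Percentage of chats per cognitive tag."""
    total = len(chat_tags)
    return {tag: 100.0 * n / total for tag, n in Counter(chat_tags).items()}

stats = chat_breakdown(["good", "good", "bad", "good"])
```

The deeper demographic and product correlations would run as Spark or Hadoop MapReduce jobs over the same files via the Hadoop connectors.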
24. #ibmedge
How to Use This Concept: It's Easy and Simple – Do It Yourself
• We have provided a sample and open-sourced the middleware and service code that allows you to auto-tag objects in the form of images (hosted over IBM Spectrum Scale Object).
• Based on your business needs and types of objects, you can reuse it and develop the required middleware.
• Spectrum Scale Object, which is based on OpenStack Swift, supports custom middleware like this, post review with development.
• The code and instructions are available on GitHub under the Apache license.
• https://github.com/SpectrumScale/watson-spectrum-scale-object-integration
27. #ibmedge
Spectrum Scale User Group
• The Spectrum Scale User Group is free to join and open to everyone using, interested in using, or integrating Spectrum Scale.
• Join the User Group activities to meet your peers and get access to experts from partners and IBM.
• Next meetings:
- APAC: October 14, Melbourne
- Global at SC16 : November 13 1pm to 5pm, Salt Lake City
• Web page: http://www.spectrumscale.org/
• Presentations: http://www.spectrumscale.org/presentations/
• Mailing list: http://www.spectrumscale.org/join/
• Contact: http://www.spectrumscale.org/committee/
• Meet Bob Oesterlin (US Co-Principal) at Edge2016: [email protected]
28. #ibmedge
Session : How to apply Flash benefits to big data
analytics and unstructured data
NDA & Customers ONLY
• Who: IBM Elastic Storage Server Offering Management
• Alex Chen
• When: Thursday, September 22, 2016
• 1:15pm to 2:15pm
• Where: Grand Garden Arena, Lower Level, MGM, Studio 10
• Contact (if any questions):
• [email protected], [email protected]
29. #ibmedge
Spectrum Scale Trial VM
• Download the IBM Spectrum Scale Trial VM from :
• http://www-03.ibm.com/systems/storage/spectrum/scale/trial.html
31. #ibmedge
Notices and Disclaimers (cont.)
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not
tested those products in connection with this publication and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM does not warrant the quality of any third-party products, or the
ability of any such third-party products to interoperate with IBM’s products. IBM EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING BUT
NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents, copyrights, trademarks or other intellectual property right.
IBM, the IBM logo, ibm.com, Aspera®, Bluemix, Blueworks Live, CICS, Clearcase, Cognos®, DOORS®, Emptoris®, Enterprise Document Management System™, FASP®,
FileNet®, Global Business Services ®, Global Technology Services ®, IBM ExperienceOne™, IBM SmartCloud®, IBM Social Business®, Information on Demand, ILOG,
Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON, OpenPower, PureAnalytics™, PureApplication®, pureCluster™, PureCoverage®, PureData®,
PureExperience®, PureFlex®, pureQuery®, pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, Smarter Commerce®, SoDA, SPSS, Sterling Commerce®,
StoredIQ, Tealeaf®, Tivoli®, Trusteer®, Unica®, urban{code}®, Watson, WebSphere®, Worklight®, X-Force® and System z® Z/OS, are trademarks of International Business
Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM
trademarks is available on the Web at "Copyright and trademark information" at: www.ibm.com/legal/copytrade.shtml.
33. #ibmedge
How to Deploy and Use the Sample Middleware with IBM
Spectrum Scale
Prerequisites
a) Get an IBM Bluemix Visual Recognition account
b) Install the following packages on the protocol nodes:
I. stompest https://pypi.python.org/pypi/stompest/
II. Apache ActiveMQ
III. Watson Developer Cloud SDK https://pypi.python.org/pypi/watson-developer-cloud
c) Ensure connectivity to the server gateway-a.watsonplatform.net from the protocol nodes
Deployment
a) Install the Watson middleware dist/watsonintegration-0.1-1.noarch.rpm on all protocol nodes (from GitHub)
b) Update proxy-server.conf to include the middleware, and restart the proxy servers
c) Start the watsonintegration service on all protocol nodes
d) Create a SwiftOnFile policy and create a container with it
e) Upload an image: $ swift upload -H "X-Visual_Insights_Enable:true" <container_name> <object_name>
f) Check the Watson metadata tags: $ swift stat <container_name> <object_name>