Core Security Patterns: Best Practices and Strategies for J2EE, Web Services, and Identity Management
By Christopher Steel, Ramesh Nagappan, Ray Lai
Publisher: Prentice Hall PTR / Sun Microsystems
Pub Date: October 14, 2005
ISBN: 0-13-146307-1
Pages: 1088
Praise for Core Security Patterns

Java provides the application developer with essential security mechanisms and support in avoiding critical security bugs common in other languages. A language, however, can only go so far. The developer must understand the security requirements of the application and how to use the features Java provides in order to meet those requirements. Core Security Patterns addresses both aspects of security and will be a guide to developers everywhere in creating more secure applications.
--Whitfield Diffie, inventor of Public-Key Cryptography

A comprehensive book on Security Patterns, which are critical for secure programming.
--Li Gong, former Chief Java Security Architect, Sun Microsystems, and coauthor of Inside Java 2 Platform Security

As developers of existing applications, or future innovators that will drive the next generation of highly distributed applications, the patterns and best practices outlined in this book will be an important asset to your development efforts.
--Joe Uniejewski, Chief Technology Officer and Senior Vice President, RSA Security, Inc.

This book makes an important case for taking a proactive approach to security rather than relying on the reactive security approach common in the software industry.
--Judy Lin, Executive Vice President, VeriSign, Inc.

Core Security Patterns provides a comprehensive patterns-driven approach and methodology for effectively incorporating security into your applications. I recommend that every application developer keep a copy of this indispensable security reference by their side.
--Bill Hamilton, author of ADO.NET Cookbook, ADO.NET in a Nutshell, and NUnit Pocket Reference

As a trusted advisor, this book will serve as a Java developer's security handbook, providing applied patterns and design strategies for securing Java applications.
--Shaheen Nasirudheen, CISSP, Senior Technology Officer, JPMorgan Chase

Like Core J2EE Patterns, this book delivers a proactive and patterns-driven approach for designing end-to-end security in your applications. Leveraging the authors' strong security experience, they created a must-have book for any designer/developer looking to create secure applications.
--John Crupi, Distinguished Engineer, Sun Microsystems, coauthor of Core J2EE Patterns

Core Security Patterns is the hands-on practitioner's guide to building robust end-to-end security into J2EE enterprise applications, Web services, identity management, service provisioning, and personal identification solutions. Written by three leading Java security architects, the patterns-driven approach fully reflects today's best practices for security in large-scale, industrial-strength applications. The authors explain the fundamentals of Java application security from the ground up, then introduce a powerful, structured security methodology; a vendor-independent security framework; a detailed assessment checklist; and twenty-three proven security architectural patterns. They walk through several realistic scenarios, covering architecture and implementation and presenting detailed sample code. They demonstrate how to apply cryptographic techniques; obfuscate code; establish secure communication; secure J2ME applications; authenticate and authorize users; and fortify Web services, enabling single sign-on, effective identity management, and personal identification using Smart Cards and Biometrics.
Core Security Patterns covers all of the following, and more:

- What works and what doesn't: J2EE application-security best practices, and common pitfalls to avoid
- Implementing key Java platform security features in real-world applications
- Establishing Web Services security using XML Signature, XML Encryption, WS-Security, XKMS, and WS-I Basic Security Profile
- Designing identity management and service provisioning systems using SAML, Liberty, XACML, and SPML
- Designing secure personal identification solutions using Smart Cards and Biometrics
- Security design methodology, patterns, best practices, reality checks, defensive strategies, and evaluation checklists
- End-to-end security architecture case study: architecting, designing, and implementing an end-to-end security solution for large-scale applications
Table of Contents

Copyright
Praise for Core Security Patterns
Prentice Hall Core Series
Foreword
Foreword
Preface
  What This Book Is About; What This Book Is Not; Who Should Read This Book?; How This Book Is Organized; Companion Web Site; Feedback
Acknowledgments
  Chris Steel; Ramesh Nagappan; Ray Lai
About the Authors

Part I: Introduction
  Chapter 1. Security by Default
    Business Challenges Around Security; What Are the Weakest Links?; The Impact of Application Security; The Four W's; Strategies for Building Robust Security; Proactive and Reactive Security; The Importance of Security Compliance; The Importance of Identity Management; Secure Personal Identification; The Importance of Java Technology; Making Security a "Business Enabler"; Summary; References
  Chapter 2. Basics of Security
    Security Requirements and Goals; The Role of Cryptography in Security; The Role of Secure Sockets Layer (SSL); The Importance and Role of LDAP in Security; Common Challenges in Cryptography; Threat Modeling; Identity Management; Summary; References

Part II: Java Security Architecture and Technologies
  Chapter 3. The Java 2 Platform Security
    Java Security Architecture; Java Applet Security; Java Web Start Security; Java Security Management Tools; J2ME Security Architecture; Java Card Security Architecture; Securing the Java Code; Summary; References
  Chapter 4. Java Extensible Security Architecture and APIs
    Java Extensible Security Architecture; Java Cryptography Architecture (JCA); Java Cryptographic Extensions (JCE); Java Certification Path API (CertPath); Java Secure Socket Extension (JSSE); Java Authentication and Authorization Service (JAAS); Java Generic Secure Services API (JGSS); Simple Authentication and Security Layer (SASL); Summary; References
  Chapter 5. J2EE Security Architecture
    J2EE Architecture and Its Logical Tiers; J2EE Security Definitions; J2EE Security Infrastructure; J2EE Container-Based Security; J2EE Component/Tier-Level Security; J2EE Client Security; EJB Tier or Business Component Security; EIS Integration Tier - Overview; J2EE Architecture - Network Topology; J2EE Web Services Security - Overview; Summary; References

Part III: Web Services Security and Identity Management
  Chapter 6. Web Services Security - Standards and Technologies
    Web Services Architecture and Its Building Blocks; Web Services Security - Core Issues; Web Services Security Requirements; Web Services Security Standards; XML Signature; XML Encryption; XML Key Management System (XKMS); OASIS Web Services Security (WS-Security); WS-I Basic Security Profile; Java-Based Web Services Security Providers; XML-Aware Security Appliances; Summary; References
  Chapter 7. Identity Management Standards and Technologies
    Identity Management - Core Issues; Understanding Network Identity and Federated Identity; Introduction to SAML; SAML Architecture; SAML Usage Scenarios; The Role of SAML in J2EE-Based Applications and Web Services; Introduction to Liberty Alliance and Their Objectives; Liberty Alliance Architecture; Liberty Usage Scenarios; The Nirvana of Access Control and Policy Management; Introduction to XACML; XACML Data Flow and Architecture; XACML Usage Scenarios; Summary; References

Part IV: Security Design Methodology, Patterns, and Reality Checks
  Chapter 8. The Alchemy of Security Design - Methodology, Patterns, and Reality Checks
    The Rationale; Secure UP; Security Patterns; Security Patterns for J2EE, Web Services, Identity Management, and Service Provisioning; Reality Checks; Security Testing; Adopting a Security Framework; Refactoring Security Design; Service Continuity and Recovery; Conclusion; References

Part V: Design Strategies and Best Practices
  Chapter 9. Securing the Web Tier - Design Strategies and Best Practices
    Web-Tier Security Patterns; Best Practices and Pitfalls; References
  Chapter 10. Securing the Business Tier - Design Strategies and Best Practices
    Security Considerations in the Business Tier; Business Tier Security Patterns; Best Practices and Pitfalls; References
  Chapter 11. Securing Web Services - Design Strategies and Best Practices
    Web Services Security Protocols Stack; Web Services Security Infrastructure; Web Services Security Patterns; Best Practices and Pitfalls; References
  Chapter 12. Securing the Identity - Design Strategies and Best Practices
    Identity Management Security Patterns; Best Practices and Pitfalls; References
  Chapter 13. Secure Service Provisioning - Design Strategies and Best Practices
    Business Challenges; User Account Provisioning Architecture; Introduction to SPML; Service Provisioning Security Pattern; Best Practices and Pitfalls; Summary; References

Part VI: Putting It All Together
  Chapter 14. Building End-to-End Security Architecture - A Case Study
    Overview; Use Case Scenarios; Application Architecture; Security Architecture; Design; Development; Testing; Deployment; Summary; Lessons Learned; Pitfalls; Conclusion; References

Part VII: Personal Identification Using Smart Cards and Biometrics
  Chapter 15. Secure Personal Identification Strategies Using Smart Cards and Biometrics
    Physical and Logical Access Control; Enabling Technologies; Smart Card-Based Identification and Authentication; Biometric Identification and Authentication; Multi-factor Authentication Using Smart Cards and Biometrics; Best Practices and Pitfalls; References

Index
Copyright
Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals.

The authors and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

The publisher offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales, which may include electronic versions and/or custom covers and content particular to your business, training goals, marketing focus, and branding interests. For more information, please contact: U.S. Corporate and Government Sales, (800) 382-3419, [email protected]. For sales outside the U.S., please contact: International Sales, [email protected]

Visit us on the Web: www.phptr.com

Library of Congress Cataloging-in-Publication Data

Steel, Christopher, 1968-
Core security patterns : best practices and strategies for J2EE, Web services and identity management / Christopher Steel, Ramesh Nagappan, Ray Lai.
p. cm.
Includes bibliographical references and index.
ISBN 0-13-146307-1 (pbk. : alk. paper)
1. Java (Computer program language) 2. Computer security. I. Nagappan, Ramesh. II. Lai, Ray. III. Title.
QA76.73.J3S834 2005
005.8-dc22
2005020502

Copyright 2006 Pearson Education, Inc. All rights reserved. Printed in the United States of America. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. For information regarding permissions, write to: Pearson Education, Inc., Rights and Contracts Department, One Lake Street, Upper Saddle River, NJ 07458.

Text printed in the United States on recycled paper at Courier in Westford, Massachusetts.

First printing, September 2005
Dedication
To Kristin, Brandon, Ian, and Alec: For your love and support in making this possible
C. S.

To Joyce, Roger, and Kaitlyn: For all your love, inspiration, and sacrifice
R. N.

To Johanan and Angela: Love never fails (1 Corinthians 13:8)
R. L.
Foreword
On May 10, 1869, the tracks of the Union Pacific and Central Pacific Railroads were joined to create the Transcontinental Railroad. The first public railway, the Liverpool and Manchester railway, had opened less than forty years earlier on a track only thirty-five miles long. A journey from New York to San Francisco could now be completed in days rather than months. The railroad was the Internet of its day. The benefits of fast, cheap, and reliable transport were obvious to all, but so was the challenge of building an iron road thousands of miles long through mountains and over rivers. Even though few people doubted that a railroad spanning the North American continent would eventually be completed, it took real vision and courage to make the attempt.

The railway age is often compared to the Internet, since both technologies played a key role in transforming the economy and society of their day. Perhaps for this reason both began with public skepticism that rapidly changed into enthusiasm, a speculative mania, and despair. If the railway mania of 1845 had been the end of railway building, the transcontinental would never have been built.

The building of an Internet infrastructure for business applications represents the transcontinental railroad of the Internet age. Every day millions of workers sit at computers generating orders, invoices, and the like that are then sent on to yet more millions of workers who spend most of their time entering the information into yet more computers. Ten years ago practically all businesses worked that way. Today a significant proportion of business takes place through the Web. Consumers are used to ordering books, clothes, and travel online and expect the improvements in customer service that this makes possible.

During the dotcom era it was fashionable to speak of "e-commerce" as if the Internet would create a completely new and separate form of commerce that would quickly replace traditional businesses. Despite the failure of many companies founded on this faulty premise, Internet commerce has never been stronger. In the peak year of the dotcom boom, 2000, VeriSign issued 275,000 SSL certificates and processed 35 million payments in transactions totaling $1.3 billion. In 2003 VeriSign issued 390,000 certificates and processed 345 million payments in transactions totaling $25 billion. It is now understood that defining a class of "e-commerce" businesses is as meaningless as trying to distinguish "telephone commerce" or "fax commerce" businesses. Businesses of every size and in every sector of the economy are now using the Internet and the Web. It is no longer unusual to find a plumber or carpenter advertising their services through a Web site.

It is clear that the emerging Internet business infrastructures will eventually connect, and electronic processes that are largely automated will replace the current fax gap. It is also apparent that when this connection is finally achieved it will enable a transformation of commerce as fundamental as the railroads. Before this can happen, however, two key problems must be solved.

The first problem is complexity. Despite the many practical difficulties that had to be overcome to make online retail successful, a standard business process for mail order sales had been established for over a century. Allowing orders to be placed through a Web site rather than by mail or telephone required a modification of an established process that was already well understood.
Coding systems to support business-to-business transactions is considerably more challenging than producing a system to support online retail. Business-to-business transactions vary greatly, as do the internal processes that enterprises have established to support them.

The railroad engineers addressed a similar problem through standardization. An engine built to a bespoke design had to be maintained by a specialist. If a part broke it might be necessary for a replacement to be made using the original jigs in a factory a thousand miles away. The theory of interchangeable parts meant that an engine that broke down could be repaired using a part from standard stock. Software reuse strategies that are desirable in building any application become essential when coding business-to-business applications. In addition to taking longer and costing more to build, a system that is built using bespoke techniques will be harder to administer, maintain, and extend. Software patterns provide a ready-made template for building systems. Industry standards ensure that software will interoperate across implementation platforms. Interoperability becomes essential when the business systems of one enterprise must talk to those of a partner.

The second challenge that must be addressed is security. A business will not risk either its reputation or its assets to an online transaction system unless it is convinced that it fully understands the risks. It is rarely sufficient for an electronic system to provide security that is merely as good as existing systems deliver. Whether fairly or unfairly, electronic systems are invariably held to a higher standard of security than the paper processes they replace.

Today, rail is one of the safest ways to travel. This has not always been the case. Railway safety was at one time considered as intractable a problem as some consider Internet safety today. Victorian trains came off their rails with
alarming frequency, bridges collapsed, cargoes caught fire; almost anything that might go wrong did. Eventually engineers began to see these accidents as failures of design rather than merely the result of unlucky chance. Safety became a central consideration in railway engineering rather than an afterthought.

The use of standardized, interchangeable parts played a significant role in the transformation of railway safety. Whether an engineer was designing a trestle for a bridge or a brake for a carriage, the use of a book of standard engineering design patterns was faster and less error prone. The security field has long accepted the argument that a published specification that has been widely reviewed is less likely to fail than one that has had little external review. The use of standard security protocols such as SSL or SAML represents the modern-day software engineering equivalent of the books of standard engineering parts of the past.

This book is timely because it addresses the major challenges facing deployment of Internet business infrastructure. Security patterns describe a means of delivering security in an application that is both repeatable and reliable. It is by adopting the principle of standardized software patterns that the engines to drive Internet business will be built. In particular, this book makes an important case for taking a proactive approach to security rather than relying on the reactive security approach common in the software industry. Most security problems are subject to a "last mover advantage"; that is, whichever side made the last response is likely to win. Relying on the reactive approach to security cedes the initiative, and thus the advantage, to the attacker. A proactive approach to security is necessary to ensure that problems are solved before they become serious.
The recent upsurge in spam and spam-related frauds (so-called spam-scams) shows the result of relying on a reactive approach to security. By the time the system has grown large enough to be profitably attacked, any measures intended to react to the challenge must work within the constraints set by the deployed base. Despite the numerous applications of email and the Web, email remains at base a messaging protocol and the Web a publishing protocol. The security requirements are well understood and common to all users. The challenges faced in creating an Internet business infrastructure are considerably more complex and difficult.

It is clear that every software developer must have access to the best security expertise, but how can that be possible when the best expertise is by definition a scarce resource? Reuse of well-understood security components in an application design allows every application designer to apply the experience and knowledge of the foremost experts in the industry to control risk. In addition, it allows a systematic framework to be applied, providing better predictability and reducing development cost.

Computer network architectures are moving beyond the perimeter security model. This does not mean that firewalls will disappear or that perimeter security will cease to have relevance. But at the same time, what happens across the firewall between enterprises will become as important from a security perspective as what takes place within the security perimeter. It is important, then, that the end-to-end security model and its appropriate application are understood.

In the railway age businesses discovered that certain important security tasks, such as moving money from place to place, were better left to specialists. The same applies in the computer security world: it is neither necessary nor desirable for every company to connect to payments infrastructures, operate a PKI or other identity infrastructure, manage its own security infrastructure, or perform one of hundreds of other security-related tasks itself. The use of standard Web Services infrastructure makes it possible to delegate these security-sensitive tasks to specialists.

The benefits of an infrastructure for business applications are now widely understood. I believe that this book will provide readers with the tools they need to build that infrastructure with security built into its fabric.

Judy Lin
Executive Vice President, VeriSign
Foreword
The last twenty years have brought dramatic changes to computing architectures and technologies, at both the network level and the application level. Much has been done at the network infrastructure layer, with intrusion detection, antivirus, firewalls, VPNs, Quality of Service, policy management and enforcement, Denial of Service detection and prevention, and endpoint security. This is necessary, but not sufficient: more emphasis must now be placed on designing security into applications and on deploying application security infrastructure. While network security focuses on detecting, defending, and protecting, application security is more concerned with enablement and potentially with regulatory compliance (Sarbanes-Oxley, HIPAA, GLB, and so on).

Application security is an imperative not only for technology, but also for business. Companies with better security will gain competitive advantage, reduce costs, reach new markets, and improve the end-user experience. This is true for both B2B (e.g., supply chain) and B2C (e.g., financial services, e-tail) applications. With global network connectivity and ever-increasing bandwidth, we have seen business transformation and unprecedented access to information and resources. Security is now at the forefront of the challenges facing users, enterprises, governments, and application providers and developers.

Loosely coupled distributed applications based on J2EE and Web Services have become the preferred model for multivendor, standards-based application development, and the leading application development and deployment platforms have evolved to provide increasing levels of security. Security can no longer be viewed as a layer (or multiple layers) that is added as problems or vulnerabilities arise; it must be an initial design consideration and an essential application development priority. Security should be a first thought, not an afterthought.

This book provides a comprehensive description of the various elements and considerations that must be accounted for in developing an overall application security strategy and implementation. Sun Microsystems, as a pioneer and leader in network computing, distributed applications, and the Java system, is uniquely positioned to cover this important area. As developers of existing applications, or future innovators that will drive the next generation of highly distributed applications, the information and best practices outlined in this book will be an important asset to your development efforts. We are counting on you to ensure that businesses and end users can confidently, and securely, experience the promise and power of the Internet.

Joseph Uniejewski
CTO and Senior Vice President of Corporate Development, RSA Security
Preface
"T he problems that exist in the w orld today cannot be solved by the level of thinking that created them." Albert Einstein Security now has unprecedented importance in the information industry. It compels every business and organization to adopt proactive or reactive measures that protect data, processes, communication, and resources throughout the information lifecycle. In a continuous evolution, every day a new breed of business systems is finding its place and changes to existing systems are becoming common in the industry. These changes are designed to improve organizational efficiency and cost effectiveness and to increase consumer satisfaction. These improvements are often accompanied by newer security risks, to which businesses must respond with appropriate security strategies and processes. At the outset, securing an organization's information requires a thorough understanding of its security-related business challenges, potential threats, and best practices for mitigation of risks by means of appropriate safeguards and countermeasures. More importantly, it becomes essential that organizations adopt trusted proactive security approaches and enforce them at all levelsinformation processing, information transmittal, and information storage.
Part I: Introduction
Part I introduces the current state of the industry, business challenges, and various application security issues and strategies. It then presents the basics of security.
Part III concentrates on the industry-standard initiatives and technologies used to enable Web services security and identity management.
Chapter 8: The Alchemy of Security Design - Methodology, Patterns, and Reality Checks
This chapter begins with a high-level discussion of the importance of using a security design methodology and then details a security design process for identifying and applying security practices throughout the software life cycle, including architecture, design, development, deployment, production, and retirement. The chapter describes various roles and responsibilities and explains the core security analysis processes required for the analysis of risks, trade-offs, effects, factors, tier options, threat profiling, and trust modeling. This chapter also introduces the security design patterns catalog and security assessment checklists that can be applied during application development to address security requirements or provide solutions.
Chapter 10: Securing the Business Tier - Design Strategies and Best Practices
This chapter presents seven security patterns that pertain to designing and deploying J2EE Business-tier components such as EJBs and JMS. Each pattern addresses a set of security problems associated with the Business tier and describes a design solution illustrating numerous implementation strategies along with the consequences of using the pattern. It highlights security factors and associated risks of using each Business-tier security pattern and finally verifies pattern applicability through the use of reality checks. The chapter also provides a comprehensive list of best practices and pitfalls in securing J2EE business components.
Chapter 11: Securing Web Services - Design Strategies and Best Practices

This chapter presents three security patterns that pertain to designing and deploying Web services. The chapter begins with a discussion of the Web services security infrastructure and the key components that contribute to security. It then describes each pattern, addressing the security problems associated with Web services and presenting a design solution that illustrates numerous implementation strategies and the consequences of using the pattern. It also highlights security factors and associated risks of using each pattern and verifies pattern applicability using reality checks. Finally, the chapter provides a comprehensive list of best practices and pitfalls in securing Web services.
Chapter 15: Secure Personal Identification Using Smart Cards and Biometrics
This chapter explores the concepts, technologies, architectural strategies, and best practices for implementing secure Personal Identification and authentication using Smart Cards and Biometrics. The chapter begins with a discussion of the importance of converging physical and logical access control and the role of Smart Cards and Biometrics in Personal Identification. This chapter illustrates the architecture and implementation strategies for enabling Smart Card- and Biometrics-based authentication in J2EE-based enterprise applications, UNIX, and Windows environments, as well as how to combine these in multi-factor authentication. Finally, the chapter provides a comprehensive list of best practices for using Smart Cards and Biometrics in secure Personal Identification.
Feedback
The authors would like to receive reader feedback, so we encourage you to post questions using the discussion forum linked to the Web site. You can also contact the authors at their respective email addresses. Contact information can be found at www.coresecuritypatterns.com. The Web site also includes a reader's forum for public subscription and participation, where readers may post questions, share their views, and discuss related topics.

Welcome to Core Security Patterns. We hope you enjoy reading this book as much as we enjoyed writing it. We trust that you will be able to adopt the theory, concepts, techniques, and approaches that we have discussed as you design, deploy, and upgrade the security of your IT systems, and keep those systems immune from security risks and vulnerabilities in the future.

Chris, Ramesh, and Ray
www.coresecuritypatterns.com
Acknowledgments
"T he learning and know ledge that w e have, is, at the most, but little compared w ith that of w hich w e are ignorant." Plato (427347 B.C.) The authors would like to extend their thanks to the Prentice Hall publishing team, including Greg Doench, Ralph Moore, Bonnie Granat, and Lara W ysong for their constant help and support in the process of creating this work. W e also thank Judy Lin, Joe Uniejewski, W hitfield Diffie, Li Gong, John Crupi, Danny Malks, Deepak Alur, Radia Perlman, Glenn Brunette, Bill Hamilton, and Shaheen Nasirudheen for their initial feedback and for sharing their best thoughts and advice. Numerous others furnished us with in-depth reviews of the book and supported us with their invaluable expertise. W ithout their assistance, this book would not have become a reality. Our gratitude extends to Seth Proctor, Anne Anderson, Tommy Szeto, Dwight Hare, Eve Maler, Sang Shin, Sameer Tyagi, Rafat Alvi, Tejash Shah, Robert Skoczylas, Matthew MacLeod, Bruce Chapman, Tom Duell, Annie Kuo, Reid W illiams, Frank Hurley, Jason Miller, Aprameya Puduthonse, Michael Howard, Tao Huang, and Sen Zhang. W e are indebted to our friends at Sun Microsystems, RSA Security, VeriSign, Microsoft, Oracle, Agilent Technologies, JPMorganChase, FortMoon Consulting, AC Technology, Advanced Biometric Controls, and the U. S. Treasury's Pay.Gov project for all their direct and indirect support and encouragement.
Chris Steel
I wish to thank all of the many people who contributed to my effort. First, I would like to thank the individuals who directly contributed content to my work: Frank Hurley, who single-handedly wrote Chapter 2 and who contributed a lot of material and references to the discussion of security fundamentals. Without Frank, I would have missed a lot of the security basics. Aprameya Paduthonse, who contributed several patterns across the Web and Business tiers. He also reviewed several chapters and was able to add content and fill in a lot of gaps quickly. Without Aprameya, I would have been even further behind schedule. Jason Miller, who contributed vast amounts of knowledge about the Web tier and who was responsible for the technical details about how the Web-tier patterns fit together. His understanding of Struts and Web-tier frameworks is unsurpassed.

I also wish to express my deepest gratitude to our many reviewers. Their time and dedication in reviewing the book is what has kept this book on track. In particular, my thanks go to Robert Skoczylas, whose thorough reviews and many suggestions for the chapters about Web-tier and Business-tier patterns have made my work much more cohesive and understandable. I have not had a better reviewer than Robert.
Ramesh Nagappan
Security has been one of my favorite subjects ever since I started working at Sun Microsystems. Although I worked mostly on Java distributed computing, I had plenty of opportunities to experiment with security technologies. With my passion for writing, a book on security has always been one of my goals, and now it has become a reality with the completion of this mammoth project.

It is always fun to look back and recall the genesis of this book. It was Sun's JavaSmart Day developer conference in Boston (September 16, 2002), and after presenting to a huge audience on Web services security, Chris and I came out, tired and hungry. We sat down at The Cheesecake Factory, and while we refreshed ourselves, we came up with the idea of writing an applied security book for Java developers that would allow us to share the best-kept secrets, tips, and techniques we'd been hiding up our sleeves. Over the course of the next few days, we created the proposal for this book. Greg Doench at Prentice Hall readily accepted our proposal, but Chris and I had a tough time keeping pace with the schedule. At one point, Greg asked me, "Will the manuscript be ready before the Red Sox win the World Series, again?" Because Chris and I wanted to cover additional relevant topics in the book, it soon became an effort of much greater scope than initially planned. After a few months of increasing the scope of the book, Chris and I decided to invite Ray Lai to contribute to this book. That's how our writing journey began. During the course of writing, it's been great fun having a midnight conference call to discuss and share our thoughts and resolve issues. After more than two years of work on this book, I'm actually a bit surprised that it's done. It's a great feeling to see it turn out to be much more than what we envisioned back at The Cheesecake Factory.

First, I would like to thank and recognize the people who have directly or indirectly influenced me by providing me with opportunities to learn and to gain experience in working with security technologies. I would not have been able to gain the expertise necessary for the writing of this book without those opportunities. Thus, my thanks are extended to: Gary Lippert, Dave DiMillo, Li Gong, and Chris Steel, for giving me the opportunity to work with Java security technologies and J2EE application security projects. Sunil Mathew and William Olsen, for introducing me to real-world Web services projects and providing me with opportunities to test-drive my Web services security prototypes. Doug Bunting, for introducing me to participation in Web services standards initiatives, particularly the OASIS WS-CAF and WS-Security working groups. Wayne Ashworth and Dan Fisher, for giving me access to the world of Smart Cards and opportunities to work on Smart Card application prototypes. Art Sands, Chris Sands, Tuomo Lampinen, Jeff Groves, and Travis Hatmaker, for allowing me to play with Biometric technologies and for providing opportunities to work on biometrics integration with Sun Identity Management products. Luc Wijns, Charles Andres, and Sujeet Vasudevan, for all their trust and confidence in my expertise and for giving me an opportunity to prototype the Java Card-based Identity Management solution for a prestigious national ID project.

Second, I was fortunate enough to have an excellent team of reviewers whose insightful comments and suggestions considerably increased the quality of my work.
My sincere thanks go to Glenn Brunette, Shaheen Nasirudeen, Tommy Szeto, Sang Shin, Robert Skoczylas, Tejash Shah, Eve Maler, Rafat Alvi, Sameer Tyagi, Bruce Chapman, Tom Duell, Annie Kuo, and Reid Williams for all the excellent review comments that I incorporated into the chapters. My special thanks go to Patric Chang and Matthew MacLeod for all their encouragement and recognition during my work on this book.

Finally, the largest share of credit goes to my loving wife Joyce, my son Roger, my little girl Kaitlyn 'Minmini,' and my parents for all their love, inspiration, and endless support. Only through their love and support was I able to accomplish this goal.
Ray Lai
I want to give thanks to God, who answered my prayer to complete this book, and to my family, who waited for me every night and weekend while I was writing. I would like to also express thanks to the following individuals for their support: Dr. Glen Reece, Kumar Swaminathan, and Samir Patel, for their management and moral support. Rafat Alvi, Glenn Brunette, Dwight Hare, Eve Maler, and Seth Proctor, for their critical and honest review to ensure the technical accuracy of the manuscript. Anne Anderson, for her critical review and suggestions for Chapter 7.
Part I: Introduction
Chapter 1. Security by Default
Chapter 2. Basics of Security
How do we enforce authentication and authorization? How do we prevent identity theft? How do we establish access control policies? How do we resist internal and external attacks? How do we detect malicious code? How do we overcome service interruptions? How do we assess and test countermeasures? How do we monitor and audit for threats and vulnerabilities?

This book introduces a radical approach called Security by Default that delivers robust security architecture from the ground up and proactively assists in implementing appropriate countermeasures and safeguards. This approach adopts security as a key component of the software development life cycle, from design and development through post-production operations. It is based on a structured security design methodology, is pattern-driven, and adopts industry best practices that help security architects and developers identify what, why, when, where, and how to evolve and apply end-to-end security measures during the application design process as well as in the production or operations environment.

This chapter discusses current business challenges, the weakest links in security, and critical application flaws and exploits. It then introduces the basic concepts behind Security by Default and addresses the importance of a security design methodology, pattern-driven security development, best practices, and reality checks. Because this book focuses on Java platform-based applications and services, this chapter presents an overview of Java platform security. It also highlights the importance of identity management and other emerging security technologies. Finally, it discusses how to make a case for security as a business enabler and reviews the potential benefits of approaching security in this way.
Output Sanitization
Re-displaying or echoing data values entered by users is a potential security threat, because it gives a hacker a means to match given input against its output and thereby craft malicious data inputs. With Web pages, if the page generated in response to a user's request is not properly sanitized before it is displayed, a hacker may be able to identify a weakness in the generated output. The hacker can then inject malicious HTML tags to create pop-up banners; at worst, hackers may be able to change the content originally displayed by the site. To prevent these issues, the generated output must be verified against all known values, any unknown values not intended for display must be eliminated, and all comments and identifiers in the output response must be removed.
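To make this concrete, here is a minimal, hypothetical Java sketch (the class and method names are illustrative, not from the book) showing one way to sanitize output before echoing it back to a browser. It escapes the HTML metacharacters that attackers rely on, so user-supplied data is rendered as inert text rather than interpreted as markup:

import java.io.PrintWriter;

public class OutputSanitizer {

    // Escape characters that carry meaning in HTML so that user-supplied
    // data is rendered as inert text rather than interpreted as markup.
    public static String escapeHtml(String input) {
        if (input == null) {
            return "";
        }
        StringBuilder sb = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            switch (c) {
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '&':  sb.append("&amp;");  break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    // Echo a value back to the client only after sanitizing it.
    public static void writeEcho(PrintWriter out, String userValue) {
        out.println("<p>You entered: " + escapeHtml(userValue) + "</p>");
    }
}

A servlet or JSP would apply escapeHtml() to every user-derived value before writing it into the response, in addition to the output verification steps described above.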
Buffer Overflow
When an application or process tries to store more data in a fixed-length data storage or memory buffer than its capacity can handle, the extra information spills into adjacent buffers, corrupting or overwriting the valid data they hold, and can abruptly end the process, causing the application to crash. To mount this kind of attack, a hacker passes malicious input, tampering with or manipulating the input parameters to force an application buffer overflow. Such an act usually leads to denial-of-service attacks. Buffer overflow attacks typically exploit application weaknesses related to input validation, output sanitization, and data injection flaws.
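Java bounds-checks array accesses at runtime, so classic overflows arise mostly in native code that a Java application calls; the defensive habit, however, is the same. The hypothetical sketch below (names and the length limit are illustrative assumptions) validates the size of incoming data before copying it into a fixed-capacity buffer, rejecting oversized input instead of letting it propagate toward less forgiving code:

public final class BoundedCopy {

    // Fixed capacity of the target buffer; an assumed application limit.
    private static final int MAX_FIELD_LENGTH = 256;

    // Reject input that exceeds the capacity of the buffer it is destined
    // for, instead of silently truncating it or passing it on unchecked.
    public static byte[] copyIntoBuffer(byte[] input) {
        if (input == null || input.length > MAX_FIELD_LENGTH) {
            throw new IllegalArgumentException(
                "Input exceeds the maximum allowed length of "
                + MAX_FIELD_LENGTH + " bytes");
        }
        byte[] buffer = new byte[MAX_FIELD_LENGTH];
        System.arraycopy(input, 0, buffer, 0, input.length);
        return buffer;
    }
}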
Weak Encryption
Encryption allows the scrambling of data from plaintext to ciphertext by means of cryptographic algorithms. Attackers with access to large amounts of processing power can compromise weaker algorithms by brute force. Key lengths exceeding 56 bits are considered strong encryption, but in most cases using 128 bits or more is recommended.
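As a minimal illustration using the Java Cryptographic Extensions (JCE, covered in Chapter 4), the sketch below generates a 128-bit AES key and encrypts a sample message. The choice of cipher mode and the handling of the initialization vector here are illustrative assumptions, not prescriptions from the book:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class StrongEncryptionDemo {
    public static void main(String[] args) throws Exception {
        // Generate a 128-bit AES key; keys of this length are considered
        // strong, unlike 56-bit DES keys.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        // CBC mode with padding; the cipher generates a random IV that must
        // be stored or transmitted alongside the ciphertext for decryption.
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] iv = cipher.getIV();
        byte[] ciphertext = cipher.doFinal("sensitive data".getBytes("UTF-8"));

        System.out.println("IV length: " + iv.length
                + ", ciphertext length: " + ciphertext.length);
    }
}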
Session Theft
Also referred to as session hijacking, session theft occurs when attackers create a new session or reuse an existing session, taking over a client-to-server or server-to-server session and bypassing authentication. Hackers do not need to intercept or inject data into the communication between hosts. Web applications that use a single SessionID for multiple client-server sessions are also susceptible to session theft, which can occur at the Web application session level, the host session level, or the TCP protocol level. In a TCP communication, session hijacking is done via IP spoofing techniques, where an attacker uses source-routed IP packets to insert commands into an active TCP communication between the two communicating systems and disguises himself as one of the authenticated users. In Web-based applications, session hijacking is done by forging or guessing SessionIDs and by stealing SessionID cookies. Preventing session hijacking is one of the first steps in hardening Web application security, because session information usually carries sensitive data such as credit card numbers, PINs, and passwords. To prevent session theft, always invalidating a session after logout, adopting PKI solutions for encrypting session information, and adopting a secure communication channel (such as SSL/TLS) are often considered best practices. Refer to [SessionHijack] for details.
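As one small piece of this defense, the hypothetical servlet below (using the J2EE servlet API; the class name and redirect target are illustrative) invalidates the server-side session at logout so that a stolen or guessed SessionID cannot be replayed afterward:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class LogoutServlet extends HttpServlet {

    // Invalidate the session on logout so the old SessionID can no longer
    // be replayed after the user has left.
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        HttpSession session = request.getSession(false); // do not create one
        if (session != null) {
            session.invalidate();
        }
        response.sendRedirect("login.jsp"); // hypothetical login page
    }
}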
Broken Authentication
Broken authentication is caused by improper configuration of authentication mechanisms and flawed credential management that compromise application authentication through password change, forgotten-password handling, account updates, digital certificate issues, and so on. Attackers compromise vulnerable applications by manipulating credentials such as user passwords, keys, session cookies, or security tokens and then impersonating a user. To prevent broken authentication, the application must verify its authentication mechanisms and enforce reauthentication, verifying the requesting user's credentials prior to granting access to the application. Refer to [BrokenAuth] for details.
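One common countermeasure is to force reauthentication before a sensitive operation such as a password change, rather than trusting the existing session alone. The sketch below does this with JAAS (covered in Chapter 4); the "AppLogin" configuration entry name and the class structure are assumptions for illustration:

import java.io.IOException;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class Reauthenticator {

    // Supplies the collected user name and password to the login module.
    private static class CredentialHandler implements CallbackHandler {
        private final String user;
        private final char[] password;

        CredentialHandler(String user, char[] password) {
            this.user = user;
            this.password = password;
        }

        public void handle(Callback[] callbacks)
                throws IOException, UnsupportedCallbackException {
            for (int i = 0; i < callbacks.length; i++) {
                if (callbacks[i] instanceof NameCallback) {
                    ((NameCallback) callbacks[i]).setName(user);
                } else if (callbacks[i] instanceof PasswordCallback) {
                    ((PasswordCallback) callbacks[i]).setPassword(password);
                } else {
                    throw new UnsupportedCallbackException(callbacks[i]);
                }
            }
        }
    }

    // Re-verify the credential before a sensitive operation such as a
    // password change. "AppLogin" is an assumed entry name in the JAAS
    // login configuration file.
    public static boolean reauthenticate(String user, char[] password) {
        try {
            LoginContext lc = new LoginContext("AppLogin",
                    new CredentialHandler(user, password));
            lc.login();   // throws LoginException if the credentials fail
            lc.logout();  // only the credential check was needed
            return true;
        } catch (LoginException le) {
            return false;
        }
    }
}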
Access Control Failures

Access control determines an authenticated user's rights and privileges for access to an application or data. Any access control failure leads to loss of confidential information and unauthorized disclosure of protected resources such as application data, functions, files, folders, and databases. Access control problems are directly related to the failure to enforce application-specific security policies and the lack of policy enforcement in application design. To prevent access control failures, it is important to verify the application-specific access control lists for all known risks and to run a penetration test to identify potential failures. Refer to [ACLFailure] for details.
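Declarative access control in the deployment descriptor should be backed by programmatic checks, so that a missing or misconfigured constraint does not silently expose a resource. Here is a hypothetical servlet-level check (the "auditor" role name and servlet are illustrative, not from the book):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ReportServlet extends HttpServlet {

    // Enforce the access control decision in code as well as in the
    // deployment descriptor, so a missing declarative constraint does
    // not silently expose the resource.
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        if (request.getUserPrincipal() == null
                || !request.isUserInRole("auditor")) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN, "Access denied");
            return;
        }
        response.getWriter().println("Confidential report content...");
    }
}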
Policy Failures
A security policy provides rules and conditions that determine what actions should be taken in response to defined events. In general, businesses and organizations adopt security policies to enforce access control in IT applications, firewalls, anti-spam processing, message routing, service provisioning, and so on. If rules are insufficient or missing, if conditions or prerequisites are invalid, or if rules conflict, policy processing will fail to enforce the intended security rules. Applications can thus be vulnerable due to policy failures, and hackers can discover and exploit any resulting resource loophole. Policy failure is a security issue for both application design and policy management.
Man-in-the-Middle (MITM)
An MITM attack is a security attack in which the hacker is able to read or modify business transactions or messages between two parties without either party knowing about it. Attackers may execute man-in-the-middle attacks by spoofing business transactions, stealing user credentials, or exploiting a flaw in the underlying public key infrastructure or Web browser. For example, Krawczyk illustrates a man-in-the-middle attack on a user running Microsoft Internet Explorer while connecting to an SSL server. Man-in-the-middle is a security issue in both application design and application infrastructure. Refer to [Krawczyk] for details.
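On the client side, one basic safeguard is to rely on JSSE's default certificate-chain and host name validation when connecting over SSL/TLS, and to inspect the certificates the server actually presented. The sketch below is a minimal, hypothetical illustration (the URL is a placeholder); the default trust manager and host name verifier reject certificates that are untrusted or issued to a different host, which blocks the simplest MITM attempts:

import java.net.URL;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;

public class ServerCertCheck {
    public static void main(String[] args) throws Exception {
        // Connect over SSL/TLS; JSSE validates the certificate chain and
        // host name before the handshake completes.
        HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://www.example.com/").openConnection();
        conn.connect();

        // Log the presented chain so unexpected issuers can be spotted.
        Certificate[] chain = conn.getServerCertificates();
        for (int i = 0; i < chain.length; i++) {
            if (chain[i] instanceof X509Certificate) {
                X509Certificate cert = (X509Certificate) chain[i];
                System.out.println("Subject: " + cert.getSubjectDN());
                System.out.println("Issuer:  " + cert.getIssuerDN());
            }
        }
        conn.disconnect();
    }
}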
Deployment Problems
Many security exposures and vulnerabilities arise from application deployment problems. These include inconsistencies within, and conflicts between, application configuration data and the deployment infrastructure (hosts, network environment, and so on). Human error in policy implementation also contributes to these problems, and in some cases deployment problems stem from application design flaws and related issues. To prevent these problems, it is important to review and test all infrastructure security policies and to make sure application-level security policies reflect the infrastructure security policies, and vice versa. Where there are conflicts, the two policies will need to be reconciled, which may require trade-offs in constraints and restrictions related to OS administration, services, protocols, and so on.
Coding Problems
Coding practices greatly influence application security. Poor coding introduces flaws and erroneous conditions in program logic and application flow, along with issues related to input validation, race conditions, exceptions, and runtime failures. To ensure that better coding practices are followed, it is recommended to adopt a code review methodology followed by source code scanning so that all potential risks and vulnerabilities can be identified and corrected.
A fault in the system security of business applications may cause great damage to an organization or to individual clients. Understanding the potential for damage from security breaches will help security architects and developers protect business applications and resources properly. Thus, it is important to understand the threat levels and vulnerabilities and then plan and establish a service recovery and continuity program for all potential failures.
Design Patterns
A design pattern is a reusable solution to a recurring design problem. Design patterns are usually considered successful solution strategies and best practices for resolving common software design problems. In security solutions, they enable application-level security design built from reusable security components and frameworks. In a typical security design scenario, patterns help architects and developers to communicate security knowledge, to define a new design paradigm or architectural style, and to identify risks that have traditionally been identified only through prototyping or experience.
Best Practices
Best practices are selected principles and guidelines, derived from real-world experience, that industry experts have identified as broadly applicable. They are considered exceptionally well suited to improving design and implementation techniques and are promoted for adoption in the performance of a process or an activity within a process. They are usually expressed as do's and don'ts.
Reality Checks
Reality checks are a collection of review items used to identify specific application behavior. They assist in the analysis of whether the applied design principles are practicable, feasible, and effective under all required circumstances. There are many grand design principles and theories in the application security area, but some of them may not be practical. Reality checks can help identify alternatives that have fewer penalties but achieve the same goals.
Proactive Assessment
Proactive assessment is a process of using existing security knowledge and experience and then applying it in order to prevent the same problems from recurring. It also predicts what is likely to occur if preventive measures are not implemented.
Profiling
A complementary strategy to proactive assessment is security profiling and optimization. Using dedicated tools, profiling helps identify risks and vulnerabilities and verify mandated regulatory or compliance requirements on an ongoing basis. These tools execute a set of scripts that detect existing vulnerabilities and mitigate risks by means of required changes or patches.
Defensive Strategies
Defensive strategies are a set of proactive and reactive actions that thwart security breaches. They are usually represented by a plan of action that helps to identify and restrict a security violation early, while it is still at a low level. These strategies should present explicit instructions for their use, including what to do when a low-level breach is missed and the attack has progressed to a higher level.
Sarbanes-Oxley Act
The Sarbanes-Oxley Act of 2002 (SOX) is a United States federal law designed to rebuild public trust in corporate business and reporting practices and to prevent corporate ethics scandals and governance problems like those of recent years from recurring. SOX requires all public U.S. companies to comply with a set of mandatory regulations dealing with financial reporting and corporate accountability. Any failure to comply with this law can result in federal penalties.

While SOX does not prescribe a solution to the compliance issue, it does make clear what obligations a company is under in order to be compliant. Section 404(a) of the Act requires establishing "adequate internal controls" around financial reporting and its governance. The term "internal controls" refers to a series of processes that companies must adhere to in the preparation of financial reports as well as in the protection of the financial information that goes into making the reports. This financial information must also be protected as it is stored in various locations throughout the enterprise (including enterprise applications, database tools, and even accounting spreadsheets). Information technology and its related processes generate the majority of the data that makes up financial reports, so it is critical that the effectiveness of these processes can be verified. The security and identity management aspects of IT play a critical part in ensuring that a company is in compliance with the law. If these controls do not work properly, the risk to the corporation and the potential personal liability of its executives can be significant.

From an IT security perspective, the SOX Act does not explicitly contain any prescriptive processes and definitions. It also does not articulate what "adequate internal controls" means or what solutions must be implemented in order to create them. However, by drawing from industry best practices for the security and control of other types of information, several inferences can be made. According to industry experts, a quick review of the legislation reveals the following common requirements for internal control:

- A readily available, verifiable audit trail and auditable evidence of all events, privileges, and so on should be established.
- Immediate notification of audit policy violations, exceptions, and anomalies must be made.
- Real-time and accurate disclosure must be made for all material events within 48 hours.
- Access rights in distributed and networked environments should be effectively controlled and managed.
- Companies should be able to remove terminated employees' or contractors' access to applications and systems immediately.
- Companies should be able to confirm that only authorized users have access to sensitive information and systems.
- Control over access to multiuser information systems should be put in place, including the elimination of multiple user IDs and accounts for individual persons.
- The allocation of passwords should be managed, and password security policies must be enforced.
- Appropriate measures must be taken to prevent unauthorized access to computer system resources and the information held in application systems.

The SOX Act has certainly raised the bar and the level of interest in the role of information security in improving application and system capabilities. Refer to [SOX1] and [SOX2] for details.
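As an illustration of the first requirement above, a verifiable audit trail starts with consistently recording who performed what action on which resource and whether it was allowed. The sketch below is a minimal, hypothetical example using JDK logging (all names are illustrative); a production audit facility would also need tamper-evident, access-controlled storage and alerting on policy violations:

import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class AuditTrail {

    private static final Logger AUDIT = Logger.getLogger("audit");

    static {
        try {
            // Append to a dedicated audit log; in production this file would
            // be write-protected and shipped to tamper-evident storage.
            FileHandler handler = new FileHandler("audit.log", true);
            handler.setFormatter(new SimpleFormatter());
            AUDIT.addHandler(handler);
        } catch (java.io.IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Record who did what to which resource, with the outcome; the JDK
    // logging timestamp supplies the "when".
    public static void record(String user, String action,
                              String resource, boolean allowed) {
        AUDIT.info("user=" + user + " action=" + action
                + " resource=" + resource + " allowed=" + allowed);
    }

    public static void main(String[] args) {
        record("jdoe", "VIEW", "/reports/q3-financials", true);
        record("guest", "EXPORT", "/reports/q3-financials", false);
    }
}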
Gramm-Leach-Bliley Act
The Gramm-Leach-Bliley Act (GLB), previously known as the Financial Services Modernization Act, is a United States federal law that was passed in 1999. The GLB Act was established primarily to repeal restrictions on banks affiliated with securities firms, but it also requires financial institutions to adopt strict privacy measures relating to customer data. The law applies to any organization that works with people who prepare income tax returns, consumer credit reporting agencies, real estate transaction settlement services, debt collection agencies, and people who receive protected information from financial institutions. From an IT security perspective, there are three provisions of the GLB Act that restrict the collection and use of consumer data. The first two, the Financial Privacy Rule and the Pretexting Provisions, detail responsible business practices and are mainly outside the scope of information security duties. The third provision, the Safeguards Rule, went into effect during 2003 and requires subject institutions to take proactive steps to ensure the security of customer information. While financial institutions have traditionally been more security-conscious than institutions in other industries, the GLB Act requires financial institutions to reevaluate their security policies and take action if deficiencies are discovered. The following are key information security actions that financial institutions must perform under the GLB Act:
- Evaluate IT environments and understand their security risks; define internal and external risks to the organization.
- Establish information security policies to assess and control risks; these include authentication, access control, and encryption systems.
- Conduct independent assessments, that is, third-party testing of the institution's information security infrastructure.
- Provide training and security awareness programs for employees.
- Scrutinize business relationships to ensure they have adequate security.
- Establish procedures to upgrade the security programs that are in place.
From a technical perspective, the security requirements set forth in the GLB Act seem enormous, but they can be met by a robust security policy that is enforced across the enterprise. Refer to [GrammLeach1] and [GrammLeach2] for details.
HIPAA
HIPAA refers to the Health Insurance Portability and Accountability Act of 1996. HIPAA requires that institutions take steps to protect the confidentiality of patient information. Achieving HIPAA compliance means implementing security standards that govern how healthcare plans, providers, and clearinghouses transmit, access, and store protected health information in electronic form. HIPAA privacy regulations require that the use of personal health information (PHI) be limited to that which is minimally necessary to administer treatment. Such limitations must be based on the requirements of various HIPAA provisions regarding parents and minors; information used in marketing, research, and payment processes; and government access to authorization decisions. HIPAA security regulations further impose requirements to develop and enforce "formal security policies and procedures for granting different levels of access to PHI." This includes authorization to access PHI, the establishment of account access privileges, and modifications to account privileges. Furthermore, the HIPAA security regulations require the deployment of mechanisms for obtaining consent to use and disclose PHI. With regard to security, HIPAA defines technical security services in terms of the following:
- Entity authentication: Proving your identity to gain access.
- Access control: What you can access.
- Audit control: What you have accessed.
- Authorization control: What you can do once you have access.
- Message authentication: Ensuring the integrity and confidentiality of data.
- Alarms/Notifications: Notification when security policy enforcement falls out of compliance.
- Availability of PHI: Ensuring high availability of PHI within a secure infrastructure.
These mandatory security requirements are intended to prevent deliberate or accidental access to PHI and to address concerns over the privacy of patient data. While most organizations that deal with patient records have implemented HIPAA in one form or another, the recent acceleration of e-mail viruses, spyware, and personal data theft should prompt security architects and developers to reexamine their applications and systems. Refer to [HIPAA] for details.
Federation Services
Identity management federation services incorporate industry standards such as Liberty to provide a federated framework and authentication-sharing mechanism that is interoperable with existing enterprise systems. This allows an authenticated identity to be recognized and enables the user associated with that identity to participate in personalized services across multiple domains. Employees, customers, and partners can seamlessly and securely access multiple applications and outsourced services without interruption, which enhances the user experience.
Directory Services
In the quest for secure consolidation of processes and resources, companies are increasingly adopting centralized or decentralized directories to enhance security, improve performance, and enable enterprise-wide integration. The right directory-based solution should deliver benefits above and beyond a basic LDAP directory: externalizing identity information, building "globally centric" information from key sources of authority, and making relevant information centrally available to both off-the-shelf applications and custom-written corporate applications. An effective directory solution must provide, at a minimum, high performance, security, and availability; full interoperability with other vendor directories; and ease of management and administration.
Comprehensive audit and reporting of user profile data, change history, and user permissions ensure that security risks are detected so that administrators can respond proactively. The ability to review the status of access privileges at any time improves audit performance and helps achieve compliance with governmental mandates. Finally, reporting on items such as usage of self-service password resets and time to provision or de-provision users provides visibility into key operational metrics and possible operational improvements.
Microprocessor cards, as the name implies, contain a processor. The microprocessor performs data handling, processing, and memory access according to a given set of conditions (PINs, encryption, and so on). Microprocessor-based cards are widely used for access control, banking, wireless telecommunication, and so on. Memory cards contain memory chips with non-programmable logic and do not contain a processor. Memory cards are typically used for prepaid phone cards, for gift cards, and for buying goods sold on a prepayment basis. Since memory cards do not contain a processor, they cannot be reprogrammed or reused. Contact cards must be inserted in a CAD reader to communicate with a user or an application. Contactless cards make use of an antenna; power can be provided by an internal source or collected by the antenna. Typically, contactless cards transmit data to the CAD reader through electromagnetic fields. One limitation of contactless cards is the requirement that they be used within a certain distance of the CAD reader.
Biometric Identity
Biometric identity refers to the use of physiological or behavioral characteristics of a human being to identify a person. It verifies a person's identity based on his or her unique physical attributes, referred to as biometric samples, such as fingerprints, face geometry, hand geometry, retinal information, iris information, and so on. The biometric identification system stores the biometric samples of the identity during registration and then matches them every time the identity is claimed. Biometric identification systems typically work using pattern-recognition algorithms that determine the authenticity of the provided physiological or behavioral characteristic sample. Popular biometric identification systems based on physical and behavioral characteristics are as follows:
- Fingerprint Verification: Based on the uniqueness of the series of ridges and furrows found on the surface of a human finger, as well as its minutiae points. A minutiae point occurs at either a ridge bifurcation or a ridge ending.
- Retinal Analysis: Based on the blood vessel patterns in the back of the eye, such as the vessels' thickness and the number of branching points.
- Facial Recognition: Based on the spatial geometry of the key features of the face, measured as the distances between the eyes, nose, jaw edges, and so on.
- Iris Verification: Based on the pattern of the iris, the colored part of the eye.
- Hand Geometry: Based on the measurement of the dimensions of the hand, including the fingers, and examination of the spatial geometry of its distinctive features.
- Voice Verification: Based on vocal characteristics used to identify individuals via a pass-phrase.
- Signature Verification: Based on the shape, stroke, pen pressure, speed, and time taken during the signature.
Figure 1-2. BiObex: biometric identification using facial recognition (Courtesy: AC Technology, Inc.)
Industry standards and specifications are available for developing and representing biometric information. The key standards are as follows:
- BioAPI: The BioAPI is an industry consortium effort to define a standardized application programming interface for developing compatible biometric solutions. It provides the BioAPI specifications and a reference implementation to support a wide range of biometric technology solutions. For more information, refer to the BioAPI web site at http://www.bioapi.org/.
- OASIS XCBF: The OASIS XML Common Biometric Format (XCBF) is an industry effort defining an XML representation of descriptive biometric information for verifying an identity based on human characteristics such as DNA, fingerprints, iris scans, and hand geometry. For more information, refer to [XCBF].
- CBEFF (Common Biometric Exchange File Format): CBEFF is a standard defined by NIST (National Institute of Standards and Technology) for handling different biometric techniques, versions, and data structures in a common way to facilitate ease of data exchange and interoperability. For more information, refer to [CBEFF].
In a Security by Default strategy, secure personal identification using smart cards and biometrics plays a vital role in providing highly secure logical access control to security-sensitive applications. For more information about the architecture and implementation of secure personal identification solutions using smart card and biometric technologies, refer to Chapter 15, "Secure Personal Identification Using Smart Cards and Biometrics."
RFID-Based Identity
Radio Frequency Identification (RFID) provides a mechanism for identification services using radio frequencies. It makes use of an RFID tag, comprising an integrated circuit chip and an antenna. The RFID tag stores a unique Electronic Product Code (EPC) and uses the antenna to receive radio frequencies and to emit and transmit the EPC data as signals to RFID readers. When an RFID tag passes through an electromagnetic zone, the tag is activated and sends signals to the RFID reader. The RFID reader receives and decodes the signals and then communicates the EPC to the RFID server that provides the Object Name Service (ONS). The ONS identifies the object by interpreting the EPC and sends it for further processing. The EPC data may provide information related to the identification, location, and other specifics of the identified object. The standards and specifications related to RFID systems are defined by EPCglobal (www.epcglobalinc.org), an industry standards initiative with participation by leading firms and industries promoting RFID technologies. Figure 1-3 illustrates the transmission of EPC data from the RFID tag to the RFID reader.
RFID tags are available in a wide variety of shapes and sizes, and they can be categorized as active or passive. Active RFID tags have their own power source, which supports read/write capability, longer read-range frequencies, and larger storage (up to 1 Mb). Passive RFID tags do not contain a power source; they generate power by using the incoming radio frequency energy induced in the antenna to transfer the EPC from the RFID tag to the RFID reader. Passive RFID tags are usually less expensive than active tags and are commonly adopted to support shorter read ranges.
Risk Exposure (Cost)

A. Security Attacks (Quantitative)
   Security attacks by spoofing business transactions in the network, or replaying business transactions with tampered transaction information: N/A
B. Inefficient Processes (Quantitative)
   The cost of resetting user passwords or user administration as a result of not having single sign-on capability: $25,000
C. Intangible Cost (Qualitative)
   Loss of confidence or reputation due to a publicized security breach: $25,000
D. Total risk exposure (A + B + C): $1,450,000

Security Investment
   Intrusion detection: Cost of implementing and executing an intrusion detection system to monitor any suspicious network activities: N/A
   Antivirus protection: Cost of antivirus software to protect network and system resources against viruses: N/A
   Internal cost of implementing J2EE and Web services security: $1,500,000
   Additional hardware and software cost of implementing the single sign-on architecture: $1,000,000
   Internal cost of addressing the inefficient security administration processes: $100,000
E. Total one-time investment (this includes the single sign-on architecture only): $2.5 million
   Total investment, E (one-time) + F (annual) x 3 years: $2,300,000

H. Estimated Return
   First year cost: $766,666
   Second year cost: $766,666
   Third year cost: $766,666
   ROI per year, where ROI = D - (E/3) - F: $683,333
Assumptions
- Only the denial-of-service attack is included in this ROI estimate. The cost estimate of the denial-of-service attack assumes the average cost per incident (refer to the CSI report in [CSI2003], p. 20, for details).
- Most of the investment in security is already in place. This includes the infrastructure platform, intrusion detection, and virus protection.
- Ten percent of the workforce requires a password reset (1,000 cases per year), assuming a cost of $25 per password-reset incident incurred by the outsourcing data center.
- The intangible security cost for loss of public image, loss of reputation, and denial of network resources assumes five days of lost sales, amounting to $25,000.
- The maintenance cost assumes 10 percent of the hardware and software cost.
- As a simple illustration of the ROI concept, this example does not calculate and display the present value of the security investment and returns. In a real-life scenario, the present value would be used.
Summary
Security has taken on unprecedented importance in many industries today, and every organization must adopt proactive security measures for data, processes, and resources throughout the information life cycle. Thus, an organization must have a thorough understanding of the business challenges related to security, of critical security threats and exploits, and of how to mitigate risk and implement safeguards and countermeasures. Adopting proactive approaches to security becomes essential to organizational health and well-being. Such approaches may well also increase operational efficiency and cost effectiveness. In this chapter, we presented an overview of security strategies and key technologies as well as the importance of delivering end-to-end security to an IT system. In particular, we discussed the key constituents that contribute to achieving "Security by Default," such as:
- Understanding the weakest links in an IT ecosystem
- Understanding the boundaries of end-to-end security
- Understanding the impact of application security
- Strategies for building a robust security architecture
- Understanding the importance of security compliance
- Understanding the importance of identity management
- Understanding the importance of secure personal identification
- Understanding the importance of Java technology
- How to justify security as a business enabler
We've just looked at the importance of proactive security approaches and strategies. Now we'll start our detailed journey with a closer look at key security technologies. Then we'll look at how to achieve Security by Default by adopting radical approaches based on well-defined security design methodology, pattern catalogs, best practices, and reality checks.
References
[1798] California Office of Privacy Protection. "Notice of Security Breach: Civil Code Sections 1798-29, 1798-82 and 1798-84." http://www.privacy.ca.gov/code/cc1798.291798.82.htm
[ACLFailure] The Open Web Application Security Project. "A2. Broken Access Control." http://www.owasp.org/documentation/topten/a2.html
[AMNews] "Security Breach: Hacker Gets Medical Records." http://www.ama-assn.org/amednews/2001/01/29/tesa0129.htm
[BrokenAuth] The Open Web Application Security Project. "A3. Broken Authentication and Session Management." http://www.owasp.org/documentation/topten/a3.html
[CanadaPrivacy] Department of Justice, Canada. "Privacy Act: Chapter P-21." http://laws.justice.gc.ca/en/P21/94799.html
[Caslon] Caslon Analytics. Caslon Analytics Privacy Guide. http://www.caslon.com.au/privacyguide6.htm
[CBEFF] Common Biometric Exchange File Format. http://www.itl.nist.gov/div895/isis/bc/cbeff/
[CNET] Matt Hines. "Gartner: Phishing on the Rise." http://news.com.com/2100-7349_3-5234155.html
[ComputerWeek134554] "IBM Offers Companies Monthly Security Report." http://www.computerweekly.com/Article134554.htm
[COPPA] Children's Online Privacy Protection Act. http://www.ftc.gov/os/1999/10/64fr59888.htm
[CSI2003] Robert Richardson. "2003 CSI/FBI Computer Crime and Security Survey." Computer Security Institute, 2003. http://www.gocsi.com/forms/fbi/pdf.jhtml
[CSI2004] Lawrence A. Gordon, Martin P. Loeb, William Lucyshyn, and Robert Richardson. "2004 CSI/FBI Computer Crime and Security Survey." Computer Security Institute, 2004. http://www.gocsi.com
[CSO Online] Richard Mogul. "Danger Within: Protecting Your Company from Internal Security Attacks (Gartner Report)." http://www.csoonline.com/analyst/report400.html
[DataMon2003] Datamonitor. "Financial Sector Opts for J2EE." The Register, June 4, 2003. http://theregister.com/content/53/31021.html
[DOS] The Open Web Application Security Project. "A9. Denial of Service." http://www.owasp.org/documentation/topten/a9.html
[EU95] European Parliament. Data Protection Directive 95/46/EC. October 24, 1995. http://europa.eu.int/comm/internal_market/privacy/index_en.htm
[ExpressComputer] "Identity Management Market at Crossroads." April 19, 2004. http://www.expresscomputeronline.com/20040419/securespace01.shtml
[FTC] Gramm-Leach-Bliley Act. Federal Trade Commission. http://www.ftc.gov/privacy/glbact/glbsub1.htm
[FTC findings] "FTC Releases Survey of Identity Theft." http://www.ftc.gov/opa/2003/09/idtheft.htm
[Gartner Reports] Security reports from Gartner at http://www.gartner.com/security
[GrammLeach1] Federal Trade Commission. "Gramm-Leach-Bliley Act." 1999. http://www.ftc.gov/privacy/glbact/glbsub1.htm
[GrammLeach2] U.S. Senate Committee on Banking, Housing, and Urban Affairs. "Information Regarding the Gramm-Leach-Bliley Act of 1999." http://banking.senate.gov/conf/
[Hewitt] Tim Hilgenberg and John A. Hansen. "Building a Highly Robust, Secure Web Services Architecture to Process 4 Million Transactions per Day." IBM developerWorks Live! Conference, 2002.
[HIPAA] "Achieving HIPAA Compliance with Identity Management from Sun." http://www.sun.com/software/products/identity/wp_HIPPA_identity_mgmt.pdf
[ImproperDataHandling] The Open Web Application Security Project. "A7. Improper Data Handling." http://www.owasp.org/documentation/topten/a7.html
[InputValidation] SecurityTracker. "Lotus Notes/Domino Square Bracket Encoding Failure Lets Remote Users Conduct Cross-Site Scripting Attacks." http://securitytracker.com/alerts/2004/Oct/1011779.html
[InjectionFlaw] Secunia. "Multiple Browsers Window Injection Vulnerability Test." http://secunia.com/multiple_browsers_window_injection_vulnerability_test/
[InsecureConfig] The Open Web Application Security Project. "A10. Insecure Configuration Management." http://www.owasp.org/documentation/topten/a10.html
[KMPG] KPMG. "Comparison of U.S. and Canadian Regulatory Changes." http://www.kpmg.ca/en/services/audit/documents/USCDNRegulatory.pdf
[Krawczyk] Pawel Krawczyk. "Practical Demonstration of the MSIE6 Certificate Path Vulnerability." IPSec.pl. http://www.ipsec.pl/msiemitm/msiemitm.en.php
[Lai] Ray Lai. J2EE Platform Web Services. Prentice Hall, 2003.
[LiGong] Li Gong. "Java Security Architecture." In Java 2 SDK, Standard Edition Documentation Version 1.4.2. Sun Microsystems, 2003. http://java.sun.com/j2se/1.4.2/docs/guide/security/spec/security-spec.doc1.html and http://java.sun.com/j2se/1.4.2/docs/guide/security/spec/security-spec.doc2.html
[McLeanBrown] Greg McLean and Jason Brown. "Determining the ROI in IT Security." April 2003. http://www.cica.ca/index.cfm/ci_id/14138/la_id/1.htm
[Online-Kasino] Online Kasinos Info. http://www.onlinekasinos.info/
[PasswordExploit] Esther Shein, editor. "Worm Targets Network Shares with Weak Passwords." eSecurityPlanet.com. http://www.esecurityplanet.com/alerts/article.php/3298791
[PHP3_errorLog] Security Advisory. "FreeBSD: 'PHP' Ports Vulnerability." LinuxSecurity.com. November 20, 2000. http://www.linuxsecurity.com/content/view/102698/103/
[PICC] IDC. "People's Insurance Company of China: eBusiness Portal Attracts New Customers and Reduces Costs." IDC eBusiness Case Study. http://www.sun.com/service/about/success/recent/PICC_English_IDC.pdf
[SDTimes057] Alan Zeichick. ".NET Advancing Quickly on J2EE, but Research Shows Java Maintains Strong Position." SD Times, July 1, 2002. http://www.sdtimes.com/news/057/story7.htm
[SessionHijack] Kevin Lam, David LeBlanc, and Ben Smith. "Theft on the Web: Prevent Session Hijacking." Microsoft TechNet Magazine, Winter 2005. http://www.microsoft.com/technet/technetmag/issues/2005/01/sessionhijacking/default.aspx
[SOX1] U.S. Congress. Sarbanes-Oxley Act. H.R. 3763. July 30, 2002. http://www.law.uc.edu/CCL/SOact/soact.pdf
[SOX2] "The Role of Identity Management in Sarbanes-Oxley Compliance." http://www.sun.com/software/products/identity/wp_identity_mgmt_sarbanes_oxley.pdf
[SQLInjection] Shawna McAlearney. "Automated SQL Injection: What Your Enterprise Needs to Know." SearchSecurity.com. July 26, 2004. http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci995325,00.html
[XCBF] OASIS XCBF Technical Committee Web Site. http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xcbf
[XSiteScript] The Open Web Application Security Project. "A4. Cross-Site Scripting (XSS) Flaws." http://www.owasp.org/documentation/topten/a4.html
Confidentiality
Confidentiality is the concept of protecting sensitive data from being viewed by an unauthorized entity. A wide variety of information falls under the category of sensitive data. Some sensitive data may be illegal to compromise, such as a patient's medical history or a customer's credit card number. Other sensitive data may divulge too much information about the application. For example, a Web application that keeps a user's state via a cookie value may be more susceptible to compromise if a malicious adversary can derive certain information from the cookie value. With the increased use of legacy systems exposed via Web services over the Internet, ensuring the confidentiality of sensitive data becomes an especially high priority. However, communication links are not the only area that needs solutions to ensure confidentiality. An internal database holding thousands of medical histories and credit card numbers is an enticing target to a malicious adversary. Securing the confidentiality of this data reduces the probability of exposure in the event the application itself is compromised. To protect the confidentiality of sensitive data during its transit or in storage, one needs to render the data unreadable except by authorized users. This is accomplished by using encryption algorithms, or ciphers. Ciphers are secret ways of representing messages. There is a wide variety of ciphers at the software developer's disposal; these are discussed later in this chapter.
Integrity
Integrity is the concept of ensuring that data has not been altered by an unknown entity during its transit or storage. For example, it is possible for an e-mail containing sensitive data such as a contractual agreement to be modified before it reaches the recipient. Similarly, a purchase request sent to a Web service could be altered en route to the server, or a software package available for downloading could be altered to introduce code with malicious intent (a "Trojan horse"). Checking data integrity ensures that data has not been compromised. Many communications protocols, including TCP/IP, employ checksum or CRC (cyclic-redundancy check) algorithms to verify data integrity, but an intelligent adversary easily overcomes these. For example, suppose a downloadable software package has a published CRC associated with it, and this package is available at many mirror sites. An adversary with control over one of the mirrors installs a Trojan horse in the program; now the CRC has changed. The attacker can alter other insignificant parts of the code so that the CRC calculates to what it was before the alteration. To counter this threat, cryptographically strong one-way hash functions have been developed that make it computationally infeasible to create the same hash value from two different inputs. There are quite a few such hash functions available in the public domain; details are discussed later in this chapter.
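As a simple illustration, the following minimal Java sketch uses the JCA MessageDigest class to compare the published hash of a download against the hash of what was actually received; the file contents shown are hypothetical stand-ins for real files:

    import java.security.MessageDigest;

    public class DownloadCheck {
        // Hex-encode a digest for display and comparison.
        static String toHex(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            byte[] original = "product-1.0.tar.gz contents".getBytes("UTF-8");
            byte[] downloaded = "product-1.0.tar.gz contents plus trojan".getBytes("UTF-8");

            MessageDigest md = MessageDigest.getInstance("MD5");
            String published = toHex(md.digest(original));  // hash posted on the main site
            String received = toHex(md.digest(downloaded)); // hash of the file from a mirror

            // Unlike a CRC, an attacker cannot adjust the altered file so that
            // its MD5 hash still matches the published value.
            System.out.println("hashes match: " + published.equals(received));
        }
    }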
Authentication
Authentication is the concept of ensuring that a user's identity is truly what the user claims it to be. This is generally accomplished by having the user first state his identity and then present a piece of information that could only be produced by that user. The oldest and most common form of user authentication is password authentication. However, passwords may be intercepted if sent on an unsecured line, where their confidentiality is not ensured. Passwords can be exposed by a variety of other means as well. Passwords are often written down and left in places too easily discovered by others. Also, people often use the same password for different services, so exposure of one password opens up multiple targets for potential abuse. Passwords are an example of authentication based on "what you know." In response to the issues with password-type authentication, alternatives have been developed that are based on other
factors. One approach requires the user to enter a different password for each login. This can be accomplished by giving the user a device that is used during the authentication process. When the user logs in, the server sends a "challenge" string that the user keys into her security device. The device displays a response to the challenge string, which the user sends back to the server. If the response is correct, the user has been successfully authenticated. Unlike passwords, this authentication is based on "what you have," rather than "what you know." Alternatively, instead of sending a challenge string, the server and the security device may be time-synchronized so that the user only needs to type in the display on the security device. Generally, such security devices use cryptographic techniques; for example, the device will have a unique internal key value that is known by the server, and the user's response will be derived from an encryption function of the challenge string or current time. One of the more popular security device-based authentication solutions is SecurID, which uses a time-synchronized device to display a one-time password. Additionally, SecurID requires the user to prepend the one-time password with a regular password of the user's choosing, combining "what you have" with "what you know" to create a strong authentication solution. Along with "what you know" and "what you have," there are authentication methods based on "what you are": biometrics. Biometric authentication products check fingerprints, retina patterns, voice patterns, and facial infrared patterns, among others, to verify the identity of the user. However, even these methods can be fooled [Schneier02], so a best practice is to combine biometrics with additional authentication (such as a password). The role of biometrics in personal identification and authentication is discussed in Chapter 15, "Secure Personal Identification Strategies Using Smart Cards and Biometrics."
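The following minimal Java sketch illustrates the challenge-response idea. The shared device key and the HMAC-based response derivation are illustrative assumptions only; commercial tokens such as SecurID use their own vendor-specific algorithms:

    import java.security.SecureRandom;
    import java.util.Arrays;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class ChallengeResponse {
        // Both the server and the user's token are provisioned with this key
        // (a placeholder value for illustration).
        private static final SecretKeySpec DEVICE_KEY =
                new SecretKeySpec("per-device-secret".getBytes(), "HmacSHA1");

        // The token derives its response from the challenge and its internal key.
        static byte[] respond(byte[] challenge) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(DEVICE_KEY);
            return mac.doFinal(challenge);
        }

        public static void main(String[] args) throws Exception {
            // Server side: issue a fresh random challenge for this login attempt.
            byte[] challenge = new byte[16];
            new SecureRandom().nextBytes(challenge);

            // Client side: the security device computes the response.
            byte[] response = respond(challenge);

            // Server side: recompute and compare. A response replayed from an
            // earlier login will not match a new challenge.
            System.out.println("authenticated: " + Arrays.equals(respond(challenge), response));
        }
    }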
Authorization
Authorization is the concept of determining what actions a user is allowed to perform after being granted access to the system. Authorization methods and techniques vary greatly from system to system. One common method employs access control lists (ACLs), which list all users and their access privileges, such as read-only, read and modify, and so forth. Another technique is to assign each user a role or group identification, and the rest of the application checks this to determine what actions the user may perform. On UNIX operating systems, the owner of each file determines access to that file by others. Each system presents unique requirements that affect the design of authorization methods for that system.
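As a sketch of the ACL technique, the following toy Java class maps each resource to the users and actions permitted on it and denies by default; it is illustrative only, not a substitute for the container-based security discussed elsewhere in this book:

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class SimpleAcl {
        // resource -> (user -> set of permitted actions)
        private final Map<String, Map<String, Set<String>>> acl =
                new HashMap<String, Map<String, Set<String>>>();

        public void grant(String resource, String user, String action) {
            Map<String, Set<String>> users = acl.get(resource);
            if (users == null) {
                users = new HashMap<String, Set<String>>();
                acl.put(resource, users);
            }
            Set<String> actions = users.get(user);
            if (actions == null) {
                actions = new HashSet<String>();
                users.put(user, actions);
            }
            actions.add(action);
        }

        // Deny by default: access is allowed only if explicitly granted.
        public boolean isAuthorized(String resource, String user, String action) {
            Map<String, Set<String>> users = acl.get(resource);
            if (users == null) return false;
            Set<String> actions = users.get(user);
            return actions != null && actions.contains(action);
        }

        public static void main(String[] args) {
            SimpleAcl acl = new SimpleAcl();
            acl.grant("/reports/q3.pdf", "alice", "read");
            System.out.println(acl.isAuthorized("/reports/q3.pdf", "alice", "read"));  // true
            System.out.println(acl.isAuthorized("/reports/q3.pdf", "alice", "write")); // false
            System.out.println(acl.isAuthorized("/reports/q3.pdf", "bob", "read"));    // false
        }
    }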
Non-Repudiation
Non-repudiation is the concept that when a user performs an action on data, such as approving a document, that action must be bound to the user in such a way that the user cannot deny performing the action. Non-repudiation is generally associated with digital signatures; more details are presented later in this chapter.
Cryptographic Algorithms
Although cryptography has been studied for years, its value has been widely recognized only recently, with the tremendous increase in the use of networking. One normally associates cryptography with confidentiality via data encryption, but some cryptographic algorithms, such as one-way hash functions and digital signatures, are more concerned with data integrity than confidentiality. This chapter will introduce you to the following cryptographic algorithms: one-way hash functions, symmetric ciphers, asymmetric ciphers, digital signatures, and digital certificates. For more information about understanding and implementing cryptographic algorithms in Java, refer to Chapter 4.
As an example of using a hash function, suppose an open-source development project posts its product, which is available for download, on the Web at several mirror sites. On their main site, they also make available the result of an MD5 hash performed on the whole download file. If an attacker breaks into one of the mirror sites and inserts some malicious code into the product, he would need to be able to adjust other parts of the code so that the output of the MD5 would be the same as it was before. With a checksum or CRC, the attacker could do it, but MD5 is specifically designed to prevent this. Anyone who downloads the altered file and checks the MD5 hash will be able to detect that the file is not the original. Another example: Suppose two parties are communicating over a TCP/IP connection. TCP uses a CRC check on its messages, but as discussed earlier, a CRC can be defeated. So, for additional security, suppose that the two parties are using an application protocol on top of TCP that attaches an MD5 hash value at the end of each message. Suppose an attacker sits at a point in between the two communicating parties in such a way that he can change the contents of the TCP stream. Would he be able to defeat the MD5 check? It turns out he can. The attacker simply alters the data stream, then recalculates the MD5 hash on the new data and attaches that. The two communicating parties have no other reference against which to check the MD5 value, because the communicated data could be anything, such as an on-the-fly conversation over an instant message channel. To prevent this, one combines a hash function with a secret key value. A standard way to do this is defined as the HMAC [RFC2104]. With hash functions, as with any cryptographic algorithm, the wise developer uses a tried-and-true published algorithm instead of developing one from scratch. The tried-and-true algorithms have undergone much scrutiny, and for every MD5 and SHA-1 there are many others that have fallen because of vulnerabilities and weaknesses. We will discuss ciphers next. There are two types of ciphers, symmetric and asymmetric. We will start with symmetric ciphers, which have been around for centuries and are the cornerstone of data privacy.
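Returning to the keyed-hash idea for a moment: in Java, an HMAC can be computed with the JCE Mac class. The sketch below is minimal and illustrative; the key string is a placeholder for a properly generated secret already shared by both parties:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class HmacExample {
        public static void main(String[] args) throws Exception {
            // Placeholder for a secret key exchanged out of band.
            SecretKeySpec key = new SecretKeySpec("a-shared-secret".getBytes("UTF-8"), "HmacSHA1");

            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(key);
            byte[] tag = mac.doFinal("transfer $100 to account 42".getBytes("UTF-8"));

            // An attacker who alters the message cannot recompute a valid tag
            // without the secret key, unlike a plain hash or CRC.
            System.out.println("HMAC tag length: " + tag.length + " bytes");
        }
    }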
Symmetric Ciphers
Symmetric ciphers are mechanisms that transform text in order to conceal its meaning. Symmetric ciphers provide two functions: message encryption and message decryption. They are referred to as symmetric because both the sender and the receiver must share the same key to encrypt and then decrypt the data. The encryption function takes as input a message and a key value. It then generates as output a seemingly random sequence of bytes roughly the same length as the input message. The decryption function is just as important as the encryption function. The decryption function takes as input the same seemingly random sequence of bytes output by the first function, along with the same key value, and generates as output the original message. The term "symmetric" refers to the fact that the same key value used to encrypt the message must be used to successfully decrypt it. The purpose of a symmetric cipher is to provide message confidentiality. For example, if Alice needs to send Bob a confidential document, she could use e-mail; however, e-mail messages have about the same privacy as a postcard. To prevent the message from being disclosed to parties unknown, Alice can encrypt the message using a symmetric cipher and an appropriate key value and e-mail that. Anyone looking at the message en route to Bob will see the aforementioned seemingly random sequence of bytes instead of the confidential document. When Bob receives the encrypted message, he feeds it and the same key value used by Alice into the decrypt function of the same symmetric cipher used by Alice, which will produce the original message, the confidential document (see Figure 2-1).
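In Java, this scenario maps directly onto the JCE Cipher API. The following minimal sketch assumes Alice and Bob already share the generated key, and it leaves the cipher mode and padding at the provider defaults for brevity; production code would choose these explicitly:

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class SymmetricExample {
        public static void main(String[] args) throws Exception {
            // Alice and Bob are assumed to already share this key.
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey sharedKey = keyGen.generateKey();

            // Alice encrypts the document with the shared key.
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, sharedKey);
            byte[] ciphertext = cipher.doFinal("the confidential document".getBytes("UTF-8"));

            // Bob decrypts with the same key; that is what makes the cipher symmetric.
            cipher.init(Cipher.DECRYPT_MODE, sharedKey);
            System.out.println(new String(cipher.doFinal(ciphertext), "UTF-8"));
        }
    }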
An example of a simple symmetric key cipher is the rotate, or Caesar, cipher. With the rotate cipher, a message is encrypted by substituting one letter at a time with a letter n positions ahead in the alphabet. If, for example, the value of n (the "key value" in a loose sense) is 3, then the letter A would be substituted with the letter D, B with E, C with F, and so on. Letters at the end would "wrap around" to the beginning: W would be substituted with Z, X would be substituted with A, Y with B, Z with C, and so on. So, a plaintext message of "WINNERS USE JAVA" encrypted with the rotate cipher with a key value of 7 would result in the ciphertext "DPUULYZ BZL QHCH." Even without the aid of a computer, the rotate cipher is quite easily broken; one need only try all possible key values, of which there are 26, to crack the code. However, there are plenty of published symmetric ciphers from which to choose that have held up to a great deal of scrutiny. Some examples include DES, IDEA, AES (Rijndael), Twofish, and RC2. For references to these and other symmetric ciphers, see [WeiDai01], which is also a great starting point for other cryptographic references. With symmetric ciphers, as with any cryptographic algorithm, the wise developer uses a tried-and-true published algorithm instead of developing one from scratch. The tried-and-true algorithms have undergone much scrutiny, and for every Rijndael and Twofish, there are many others that have fallen because of vulnerabilities and weaknesses [RSA02]. Symmetric ciphers are available in two types: block ciphers and stream ciphers. Block ciphers encrypt blocks of data (typically 8 or 16 bytes) at a time. Stream ciphers are relatively new and are generally faster than block ciphers. However, block ciphers seem to be more popular, probably because they have been around longer and there are many free choices available [RSA02]. Examples of block ciphers include DES, IDEA, AES (Rijndael), and Blowfish. Examples of stream ciphers are RC4 and WAKE. Ron Rivest's RC4 leads the stream cipher popularity contest, because it is used with SSL in all Web browsers, but it can only be used under license from RSA. Also, block ciphers can be used in modes where they emulate stream cipher behavior [FIPS81]. An excellent free reference on the use of these modes, as well as cryptography in general, is available at [RSA01].
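For illustration, the rotate cipher can be implemented in a few lines of Java, reproducing the "WINNERS USE JAVA" example above:

    public class RotateCipher {
        // Shift each letter n positions ahead, wrapping from Z back to A.
        static String encrypt(String plaintext, int n) {
            StringBuilder out = new StringBuilder();
            for (char c : plaintext.toCharArray()) {
                if (c >= 'A' && c <= 'Z') {
                    out.append((char) ('A' + (c - 'A' + n) % 26));
                } else {
                    out.append(c); // leave spaces and punctuation alone
                }
            }
            return out.toString();
        }

        public static void main(String[] args) {
            System.out.println(encrypt("WINNERS USE JAVA", 7)); // DPUULYZ BZL QHCH
        }
    }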
Asymmetric Ciphers
Asymmetric ciphers provide the same two functions as symmetric ciphers: message encryption and message decryption. There are two major differences, however. First, the key value used in message decryption is different from the key value used for message encryption. Second, asymmetric ciphers are thousands of times slower than symmetric ciphers. But asymmetric ciphers offer a phenomenal advantage in secure communications over symmetric ciphers. To explain this advantage, let's review the earlier example of using a symmetric cipher. Alice encrypts a message using key K and sends it to Bob. When Bob receives the encrypted message, he uses key K to decrypt the encrypted message and recover the original message. This scenario introduces the question of how Alice sends the key value used to encrypt the message to Bob. The answer is that Alice must use a separate communication channel, one that is known to be secure (that is, no one can listen in on the communication), when she sends the key value to Bob. The requirement for a separate, secure channel for key exchanges using symmetric ciphers invites even more questions. First, if a separate, secure channel exists, why not send the original message over that? The usual answer is that the secure channel has limited bandwidth, such as a secure phone line or a trusted courier. Second, how long can Alice and Bob assume that their key value has not been compromised (that is, become known to someone other than themselves), and when should they exchange a fresh key value? Dealing with these questions and issues falls within the realm of key management. Key management is the single most vexing problem in using cryptography. Key management involves not only the secure distribution of key values to all communicating parties, but also management of the lifetime of the keys, determination of what actions to take if a key is compromised, and so on. Alice and Bob's key management needs may not be too complicated; they could exchange a password over the phone (if they were certain that no one was listening in) or via registered mail. But suppose Alice needed to communicate securely not just with Bob but with hundreds of other people. She would need to exchange (via trusted phone or registered mail) a key value with each of these people and manage this list of keys, including keeping track of when to exchange a fresh key, handling key compromises, handling key mismatches (when the receiver cannot decrypt the message because he has the wrong key), and so on. Of course, these issues would apply not just to Alice but to Bob and everyone else; they all would need to exchange keys and endure these key management headaches (there actually exists an ANSI standard, X9.17 [ANSIX9.17], on key management for DES). To make matters worse, if Alice needs to send a message to hundreds of people, she will have to encrypt each message with its own key value. For example, to send an announcement to 200 people, Alice would need to encrypt the message 200 times, one encryption for each recipient. Obviously, symmetric ciphers for secure communications require quite a bit of overhead. The major advantage of the asymmetric cipher is that it uses two key values instead of one: one for message encryption and one for message decryption. The two keys are created during the same process and are known as a key pair. The one for message encryption is known as the public key; the one for message decryption is known as the private key.
Messages encrypted with the public key can only be decrypted with its associated private key. The private key is kept secret by the owner and shared with no one. The public key, on the other hand, may be given out over an unsecured communication channel or published in a directory. Using the earlier example of Alice needing to send Bob a confidential document via e-mail, we can show how the exchange works with an asymmetric cipher. First, Bob e-mails Alice his public key. Alice then encrypts the document with Bob's public key, and sends the encrypted message via e-mail to Bob. Because any message encrypted with Bob's public key can only be decrypted with Bob's private key, the message is secure from prying eyes, even if those prying eyes know Bob's public key. When Bob receives the encrypted message, he decrypts it using his private key and recovers the original document. Figure 2-2 illustrates the process of encrypting and decrypting with the public and private keys.
If Bob needs to send some edits on the document back to Alice, he can do so by having Alice send him her public key; he then encrypts the edited document using Alice's public key and e-mails the secured document back to Alice. Again, the message is secure from eavesdroppers, because only Alice's private key can decrypt the message, and only Alice has her private key. Note the very important difference between using an asymmetric cipher and a symmetric cipher: No separate, secure channel is needed for Alice and Bob to exchange a key value to be used to secure the message. This solves the major problem of key management with symmetric ciphers: getting the key value communicated to the other party. With asymmetric ciphers, the key value used to send someone a message is published for all to see. This also solves another symmetric key management headache: having to exchange a key value with each party with whom one wishes to communicate. Anyone who wants to send a secure message to Alice uses Alice's public key.
Recall that one of the differences between asymmetric and symmetric ciphers is that asymmetric ciphers are much slower, up to thousands of times slower [WeiDai02]. This issue is resolved in practice by using the asymmetric cipher to communicate an ephemeral symmetric key value and then using a symmetric cipher with the ephemeral key to encrypt the actual message. The symmetric key is referred to as ephemeral (meaning it lasts only a brief time) because it is used only once, for that exchange. It is not persisted or reused, the way traditional symmetric key mechanisms require. Going back to the earlier example of Alice e-mailing a confidential document to Bob, Alice would first create an ephemeral key value to encrypt the document with a symmetric cipher. Then she would create another message, encrypting the ephemeral key value with Bob's public key, and then send both messages to Bob. Upon receipt, Bob would first decrypt the ephemeral key value with his private key and then decrypt the secured document with the ephemeral key value (using the symmetric cipher) to recover the original document. Figure 2-4 depicts using a combination of asymmetric and symmetric ciphers.
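A minimal Java sketch of this hybrid approach uses the JCE key-wrapping modes; key sizes, cipher modes, and paddings are left at provider defaults for brevity:

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class HybridExample {
        public static void main(String[] args) throws Exception {
            // Bob's long-lived RSA key pair; Alice knows only the public half.
            KeyPair bob = KeyPairGenerator.getInstance("RSA").generateKeyPair();

            // Alice: create an ephemeral AES key and encrypt the document with it.
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey ephemeral = kg.generateKey();
            Cipher aes = Cipher.getInstance("AES");
            aes.init(Cipher.ENCRYPT_MODE, ephemeral);
            byte[] encryptedDoc = aes.doFinal("the confidential document".getBytes("UTF-8"));

            // Alice: wrap (encrypt) the ephemeral key with Bob's public key.
            Cipher rsa = Cipher.getInstance("RSA");
            rsa.init(Cipher.WRAP_MODE, bob.getPublic());
            byte[] wrappedKey = rsa.wrap(ephemeral);

            // Bob: unwrap the ephemeral key with his private key, then decrypt.
            rsa.init(Cipher.UNWRAP_MODE, bob.getPrivate());
            SecretKey recovered = (SecretKey) rsa.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);
            aes.init(Cipher.DECRYPT_MODE, recovered);
            System.out.println(new String(aes.doFinal(encryptedDoc), "UTF-8"));
        }
    }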
Some examples of asymmetric ciphers are RSA, Elgamal, and ECC (elliptic-curve cryptography). RSA is by far the most popular in use today. Elgamal is another popular asymmetric cipher. It was developed in 1985 by Taher Elgamal and is based on the Diffie-Hellman key exchange, which allows two parties to communicate publicly yet derive a secret key value known only to them [Diffie-Hellman]. Diffie-Hellman, developed by Whitfield Diffie and Martin Hellman in 1976, is considered the first asymmetric cipher, though the concept of an asymmetric cipher may have been invented in the U.K. six years earlier. Diffie-Hellman is different from RSA in that it is not an encryption method; it creates a secure numeric value that can be used as a symmetric key. In a Diffie-Hellman exchange, the sender and receiver each generate a random number (kept private) and a value derived from the random number (made public). The two parties then exchange the public values. The power behind the Diffie-Hellman algorithm is its ability to generate a shared secret. Once the public values have been exchanged, each party can use its private number and the other's public value to generate a symmetric key, known as the shared secret, which is identical to the other's. This key can then be used to encrypt data using a symmetric cipher. One advantage Diffie-Hellman has over RSA is that every time keys are exchanged, a new set of values is used. With RSA, if an attacker managed to capture your private key, they could decrypt all your future messages as well as any message exchange captured in the past. However, RSA keys can be authenticated (as with X.509 certificates), preventing man-in-the-middle attacks, to which a Diffie-Hellman exchange is susceptible.
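The JCE exposes Diffie-Hellman through the KeyAgreement class. In the sketch below, both parties must agree on the same DH parameters (p and g), so Bob reuses the parameters carried in Alice's public key; note that in this unauthenticated form the exchange remains open to the man-in-the-middle attack mentioned above:

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.util.Arrays;
    import javax.crypto.KeyAgreement;
    import javax.crypto.interfaces.DHPublicKey;
    import javax.crypto.spec.DHParameterSpec;

    public class DhExample {
        public static void main(String[] args) throws Exception {
            // Alice generates a DH key pair (and, with it, the shared parameters).
            KeyPairGenerator apg = KeyPairGenerator.getInstance("DH");
            apg.initialize(1024);
            KeyPair alice = apg.generateKeyPair();

            // Bob generates his key pair using the same p and g.
            DHParameterSpec params = ((DHPublicKey) alice.getPublic()).getParams();
            KeyPairGenerator bpg = KeyPairGenerator.getInstance("DH");
            bpg.initialize(params);
            KeyPair bob = bpg.generateKeyPair();

            // Each side combines its private key with the other's public value.
            KeyAgreement ka = KeyAgreement.getInstance("DH");
            ka.init(alice.getPrivate());
            ka.doPhase(bob.getPublic(), true);
            byte[] aliceSecret = ka.generateSecret();

            KeyAgreement kb = KeyAgreement.getInstance("DH");
            kb.init(bob.getPrivate());
            kb.doPhase(alice.getPublic(), true);
            byte[] bobSecret = kb.generateSecret();

            // Both arrive at the same shared secret without ever transmitting it.
            System.out.println("secrets match: " + Arrays.equals(aliceSecret, bobSecret));
        }
    }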
Digital Signature
Digital signatures are used to guarantee the integrity of a message and the identity of its sender. This is done by signing the message using a digital signature, which is a unique by-product of asymmetric ciphers. Although the public key of an asymmetric cipher generally performs message encryption and the private key generally performs message decryption, the reverse is also possible. The private key can be used to encrypt a message, which would require the public key to decrypt it. So, Alice could encrypt a message using her private key, and that message could be decrypted by anyone with access to Alice's public key. Obviously, this behavior does not secure the message; by definition, anyone has access to Alice's public key (it could be posted in a directory), so anyone can decrypt it. However, Alice's private key, by definition, is known to no one but Alice; therefore, a message that is decrypted with Alice's public key could not have come from anyone but Alice. This is the idea behind digital signatures, which make it possible to ascertain the source of a message using an asymmetric cipher. Encrypting a message with a private key is a form of digital signature. However, as we discussed before, asymmetric ciphers are quite slow. Alice could use the technique presented in the previous section, creating an ephemeral key to encrypt the message and then encrypting the ephemeral key with her private key. But encrypting the message is a wasted effort, because anyone can decrypt it. Besides, the point of the exercise is not to secure the message but to prove it came from Alice. The solution is to perform a one-way hash function on the message and encrypt the hash value with the private key. For example, suppose Alice wants to confirm a contract with Bob. Alice can add "I agree" on the contract's dotted line, perform an MD5 hash on the document, encrypt the MD5 hash value with her private key, and send the document with the encrypted hash value (the digital signature) to Bob. Bob can verify that Alice has agreed to the document by checking the digital signature: he also performs an MD5 hash on the document, and then he decrypts the digital signature with Alice's public key. If the MD5 hash value computed from the document contents equals the decrypted digital signature, then Bob has verified that it was Alice who digitally signed the document. Figure 2-5 shows how a digital signature is created.
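In Java, the hash-then-encrypt steps are bundled into the JCA Signature class. The sketch below uses MD5withRSA to mirror the MD5 example in the text; in practice a stronger digest algorithm would normally be chosen:

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    public class SignatureExample {
        public static void main(String[] args) throws Exception {
            KeyPair alice = KeyPairGenerator.getInstance("RSA").generateKeyPair();
            byte[] contract = "I agree. (signed) Alice".getBytes("UTF-8");

            // Alice: hash the document and encrypt the hash with her private key.
            Signature signer = Signature.getInstance("MD5withRSA");
            signer.initSign(alice.getPrivate());
            signer.update(contract);
            byte[] signature = signer.sign();

            // Bob: recompute the hash and check it against the signature
            // using Alice's public key.
            Signature verifier = Signature.getInstance("MD5withRSA");
            verifier.initVerify(alice.getPublic());
            verifier.update(contract);
            System.out.println("signature valid: " + verifier.verify(signature));
        }
    }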
Moreover, Alice cannot say that she never signed the document; she cannot refute the signature, because only she holds the private key that could have produced the digital signature. This ensures non-repudiation.
Digital Certificates
A digital certificate is a document that uniquely identifies information about a party. It contains a party's public key plus other identification information that is digitally signed and issued by a trusted third party, also referred to as a Certificate Authority (CA). A digital certificate is also known as an X.509 certificate and is commonly used to solve problems associated with key management. As explained earlier in this chapter, the advent of asymmetric ciphers has greatly reduced the problem of key management. Instead of requiring that each party exchange a different key value with every other party with whom they wish to communicate over separate, secure communication channels, one simply exchanges public keys with the other parties or posts public keys in a directory. However, another problem arises: How is one sure that the public key really belongs to Alice? In other words, how is the identity of the public key's owner verified? Within a controlled environment, such as within a company, a central directory may have security controls that ensure that the identities of public keys' owners have been verified by the company. But what if Alice runs a commerce Web site, and Bob wishes to send Alice his credit card number securely? Alice may send Bob her public key, but Mary (an adversary sitting on the communication between Alice and Bob) may intercept the communication and substitute her public key in place of Alice's. When Bob sends his credit card number using the received public key, he is unwittingly handing it to Mary, not Alice. One method to verify Alice's public key is to call Alice and ask her directly to verify her public key, but because public keys are large (typically 1024 bits, or 128 bytes), for Alice to recite her public key value would prove too cumbersome and is prone to error. Alice could instead verify her public key fingerprint, which is the output of a hash function performed on her public key. If one uses the MD5 hash function for this purpose, the hash value is 128 bits, or 16 bytes, which is a little more manageable. But suppose Bob does not know Alice personally and therefore could not ascertain her identity with a phone call? Bob needs a trusted third party to vouch for Alice's public key. This need is met, in part, by a digital certificate. For example, assume Charlie is a third party that both Alice and Bob trust. Alice sends Charlie her public key, plus other identifying information such as her name, address, and Web site URL. Charlie verifies Alice's public key, perhaps by
calling her on the phone and having her recite her public key fingerprint. Then Charlie creates a document that includes Alice's public key and identification, digitally signs it using his private key, and sends it back to Alice. This signed document is the digital certificate of Alice's public key and identification, vouched for by Charlie. Now, when Bob goes to Alice's Web site and wants to securely send his credit card number, Alice sends Bob her digital certificate. Bob verifies Charlie's signature on the certificate using Charlie's public key (assume Bob has already verified Charlie's public key), and if the signature is good, Bob can be assured that, according to Charlie, the public key within the certificate is associated with the identification within the certificate, namely, Alice's name, address, and Web site URL. Bob can encrypt his credit card number using the public key with confidence that only Alice can decrypt it. Figure 2-7 illustrates how a digital certificate is used to verify Alice's identity.
Suppose Mary (an adversary) decides to intercept the communication between Alice and Bob, and replaces Alice's public key with her own within the digital certificate. When Bob verifies Charlie's signature, the verification will fail because the contents of the certificate have changed. Recall that to check a digital signature, one decrypts the signature with the signer's public key, and the result should equal the output of the hash function performed on the document. Because the document has changed, the hash value will not be the same as the one Charlie encrypted with his private key. Figure 2-8 shows what happens if an adversary (Mary) tries to alter Alice's certificate.
Verification of a digital certificate can also be a multilevel process; this is known as verifying a certificate chain. In the previous example, it was assumed that Bob had already verified Charlie's public key. Let's now assume that Bob does not know Charlie or Alice but does have in his possession the pre-verified public key of Victor, and that Charlie has obtained a digital certificate from Victor. When Bob needs to secure information being sent to Alice, Alice sends Bob not only her digital certificate signed by Charlie, but also Charlie's certificate signed by Victor. Bob verifies Charlie's signature on Alice's certificate using Charlie's public key, and then verifies Victor's signature on Charlie's certificate using Victor's public key. If all signatures are good, Bob can be assured that Victor vouches for Charlie and that Charlie vouches for Alice. In practice, Victor's public key will be distributed as a certificate that is self-signed. A self-signed certificate is known as a root certificate. So in the example, there are really three certificates involved: Alice's certificate signed by Charlie, Charlie's certificate signed by Victor, and Victor's certificate, also signed by Victor. These three certificates make up the certificate chain. Figure 2-9 shows how certificates can be chained together to verify identity.
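Java's CertPath API, covered in Chapter 4, automates exactly this kind of chain verification. In the sketch below, the certificate file names are hypothetical stand-ins for the chain in the example, with Victor's self-signed root certificate serving as the trust anchor:

    import java.io.FileInputStream;
    import java.security.cert.CertPath;
    import java.security.cert.CertPathValidator;
    import java.security.cert.Certificate;
    import java.security.cert.CertificateFactory;
    import java.security.cert.PKIXParameters;
    import java.security.cert.TrustAnchor;
    import java.security.cert.X509Certificate;
    import java.util.Arrays;
    import java.util.Collections;

    public class ChainValidation {
        public static void main(String[] args) throws Exception {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");

            // Hypothetical files: alice.cer is signed by charlie.cer, which is
            // signed by the self-signed root victor.cer.
            X509Certificate alice =
                    (X509Certificate) cf.generateCertificate(new FileInputStream("alice.cer"));
            X509Certificate charlie =
                    (X509Certificate) cf.generateCertificate(new FileInputStream("charlie.cer"));
            X509Certificate victor =
                    (X509Certificate) cf.generateCertificate(new FileInputStream("victor.cer"));

            // The chain to validate, ordered from the end entity toward the root.
            CertPath path = cf.generateCertPath(Arrays.asList(new Certificate[] { alice, charlie }));

            // Victor's root certificate is the trust anchor.
            PKIXParameters params =
                    new PKIXParameters(Collections.singleton(new TrustAnchor(victor, null)));
            params.setRevocationEnabled(false); // no CRL checking in this sketch

            CertPathValidator validator = CertPathValidator.getInstance("PKIX");
            validator.validate(path, params); // throws an exception if any signature fails
            System.out.println("certificate chain is valid");
        }
    }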
In this example, Victor acts as a CA. He is in the business of being a trusted authority who verifies an individual's identification, verifies that individual's public key, and binds them together in a document that he digitally signs. CAs play an important part in the issuance and revocation of digital certificates.
should not be trusted by any system or users. It is important to note that when a revoked certificate reaches its expiration date, it is automatically removed from the CRL.
When the client wants to create a new network connection with the same server, the overhead of exchanging certificates and key values can be skipped. When sending the ClientHello message, the client can specify the session identifier returned by the ServerHello during the initial connection setup. If the server still has the session information associated with that session identifier, it will send a ChangeCipherSpec to the client, indicating that it still has the session information and has created the pertinent key values to secure this new connection. The client will then send a ChangeCipherSpec message. The two parties can then continue sending secured application data. (See Figure 2-11.)
The TLS specification was authored in 1999 by Certicom, mostly as an attempt to create an official IETF standard protocol out of SSL 3.0. However, TLS and SSL are not exactly the same. The TLS specification [RFC2246] describes TLS as being based on the SSL 3.0 Protocol Specification as published by Netscape. The differences between this protocol and SSL 3.0 are not dramatic, but they are significant enough that TLS 1.0 and SSL 3.0 do not interoperate (although TLS 1.0 does incorporate a mechanism by which a TLS implementation can back down to SSL 3.0). As the saying goes, the great thing about standards is that there are so many to choose from. Fortunately, the two protocols are so alike that, despite the fact that they do not interoperate, the mechanism by which a TLS implementation can back down to SSL 3.0 is pretty simple. In fact, TLS identifies itself as version 3.1 to differentiate itself from SSL 3.0 and the older, broken SSL 2.0. The differences between the two protocols include the following:
- TLS and SSL 3.0 use a slightly different keyed MAC algorithm (TLS uses the newer HMAC [RFC2104], which is very similar to the keyed MAC used in SSL 3.0).
- TLS includes more data in the MAC (the protocol version along with the data and data type).
- TLS does not support FORTEZZA, a hardware-based security token based on the ill-received and mostly unused Skipjack security infrastructure.
All of the basic principles of the SSL protocol described in the previous section, such as ClientHello messages and ChangeCipherSpec messages, are the same in both protocols.
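From a Java application's point of view, both protocols are reached through JSSE. The following minimal client sketch, in which the host name is a placeholder, lets the default socket factory negotiate TLS and back down to SSL 3.0 if that is all the peer supports:

    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class TlsClient {
        public static void main(String[] args) throws Exception {
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            SSLSocket socket = (SSLSocket) factory.createSocket("www.example.com", 443);

            // The hello messages, certificate exchange, and ChangeCipherSpec
            // happen here; application data below is encrypted on the wire.
            socket.startHandshake();
            System.out.println("negotiated protocol: " + socket.getSession().getProtocol());

            OutputStream out = socket.getOutputStream();
            out.write("GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n".getBytes("US-ASCII"));
            out.flush();

            InputStream in = socket.getInputStream();
            System.out.println("first byte of response: " + in.read());
            socket.close();
        }
    }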
Note that at the top of the hierarchy is the certificate authority, which has no parent node. Its digital certificate is signed by itself and therefore is the root certificate. The root certificate is widely distributed, and its public key value is well-known; from it, all other certificates can be verified. As an example, suppose Q. McCluskey in Corporate needs to send a secure e-mail to L. Gilroy in Government Services. Q. McCluskey verifies L. Gilroy's certificate by obtaining a certificate chain that includes L. Gilroy's certificate, Government Services' certificate, Washington's certificate, and the CA master certificate. If each certificate signature checks out, L. Gilroy's certificate is validated.
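In Java, this kind of chain verification can be performed with the Java Certification Path API (CertPath), described in Chapter 4. The sketch below validates a chain like the one just described; the certificate file names are hypothetical, and revocation checking is disabled for brevity:

import java.io.FileInputStream;
import java.security.cert.*;
import java.util.*;

public class ChainValidator {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        // Hypothetical file names, ordered from the end entity upward.
        X509Certificate gilroy = (X509Certificate)
                cf.generateCertificate(new FileInputStream("gilroy.cer"));
        X509Certificate govServices = (X509Certificate)
                cf.generateCertificate(new FileInputStream("govservices.cer"));
        X509Certificate washington = (X509Certificate)
                cf.generateCertificate(new FileInputStream("washington.cer"));
        X509Certificate root = (X509Certificate)
                cf.generateCertificate(new FileInputStream("rootca.cer"));

        // The chain excludes the root; the root serves as the trust anchor.
        CertPath path = cf.generateCertPath(Arrays.asList(
                new X509Certificate[] { gilroy, govServices, washington }));
        PKIXParameters params = new PKIXParameters(
                Collections.singleton(new TrustAnchor(root, null)));
        params.setRevocationEnabled(false); // no CRL/OCSP check in this sketch

        CertPathValidator validator = CertPathValidator.getInstance("PKIX");
        validator.validate(path, params); // throws CertPathValidatorException on failure
        System.out.println("Certificate chain validated");
    }
}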
Key Management
As mentioned earlier in this chapter, key management is the single most vexing problem in cryptography use. To secure data via encryption, one must exchange a key value to be used with the encryption process. If the communicating parties only have a publicly accessible communications link, exchanging this key value securely is impossible. Fortunately, asymmetric ciphers, also known as public key encryption, resolve this issue by providing a mechanism for exchanging key values for use with a symmetric cipher. However, with asymmetric ciphers, a new problem arises: how to verify the identities of the communicating parties. This is resolved through trust models, most notably digital certificates. X.509 digital certificates offer a hierarchical trust model, and X.509 is the most popular standard due to its use in SSL for securing Web transactions. But other types of trust models exist, each with their own advantages and disadvantages.
Because CRLs are published only periodically, a user may unknowingly accept a certificate that has in fact been revoked but just has not yet been put on the list. However, Cisco and VeriSign have developed a certificate protocol that includes checking certificate revocations; it is currently a proposed IETF standard [SCEP01].
Trust Models
A trust model is the mechanism used by a security architecture to verify the identity of an entity and its associated data, such as name, public key, and so on. An example of a trust model is the X.509 certificate structure discussed earlier in this chapter. Identities are vouched for through a hierarchical structure that culminates at the root certificate, which is self-signed by a well-known certificate authority (see Figure 2-13). Other trust models exist, including the PGP trust model.
PGP (Pretty Good Privacy) uses a "web of trust" model instead of a hierarchical trust model. Developed by Phil Zimmermann in 1991, PGP is a program that uses a combination of the RSA asymmetric cipher and the IDEA symmetric cipher, specifically to secure e-mail. PGP supports the concept of a digital certificate, but a PGP certificate may contain many signatures, unlike the X.509 certificate that contains exactly one signature. As with X.509 certificates, a signature represents an entity that vouches for the identity and the associated public key within the certificate. With X.509 certificates, this entity is a single authority; with PGP, this entity can be any person or organization. The information in the PGP certificate can be verified using any one of the many signatures, and the verifier can choose which signature is trusted the most. For example, suppose Alice creates a PGP key pair. She then creates a digital certificate containing her public key and identification information, typically her name and e-mail address. Alice then immediately signs the certificate herself; this is to protect the information within the certificate from being altered before being signed by anyone else. Alice then calls her good friends Andy, Aaron, and Albert, tells them her public key fingerprint, and asks them to sign her public key certificate, which she mails to each of them. All three verify Alice's public key by checking the fingerprint, and then each signs her public key certificate. Figure 2-14 illustrates a PGP certificate with multiple signatures.
If Bob wants to send a secure e-mail to Alice, he looks up Alice's public key certificate. Bob then checks the signatures to see if any of them are from entities that he trusts. If Bob knows Andy and has verified Andy's public key, and if he trusts Andy to correctly verify someone else's public key certificate, then if Andy's signature checks out on Alice's public key certificate, Bob can be assured that he indeed has Alice's public key, and can use it to send Alice secure e-mail. Furthermore, even if Bob does not trust any of the signatures on Alice's public key certificate, he may still be able to verify Alice's information. Bob can look up the certificates of Alice's signatories and check if any of their certificates are signed by someone he trusts. So, if Bob looks up Aaron's certificate, which is signed by Bart, whom Bob knows and trusts, then Bob can verify Aaron's certificate with Bart's signature, and then in turn verify Alice's certificate with Aaron's signature. This can be seen in Figure 2-15.
As with the X.509 certificate structure, the PGP web of trust assumes that the signer of a certificate performs due diligence in verifying the information contained in the certificate. However, X.509 offers a controlled hierarchy of signatories; in the case where the information in an X.509 certificate is incorrect or falsified, the signatory of the certificate can be pinpointed and held responsible for repercussions (for example, VeriSign could have legal action taken against it if it signed a certificate for a bogus company's secure Web site). The PGP web of trust model is more "loose" in that no organizational or legal relationship between signatories and certificate holders is required. There is no concept of a certificate authority in the PGP web of trust. Because of this, PGP allows the user to label the "trust level" of a public key certificate. Thus, referring to the earlier scenario, if Bob really doesn't know Andy all that well, he may label Alice's public key as "marginal." Obviously, for business purposes, a key must be trusted or not trusted, which explains the widespread use of X.509 certificates for situations like e-commerce. Newer versions of PGP now support use of X.509 certificates. This allows better interoperability for security providers.
Threat Modeling
Threat modeling involves determining who would be most likely to attack a system and what possible ways an attacker could compromise the system. The results of a threat modeling exercise help determine the risk each threat poses to a system and what security precautions should be taken to protect the system. In Chapter 8 we will look at how threat profiling plays an integral part in the security development methodology. Web applications and all computing systems use security procedures and tools to provide access to the system for legitimate users and to prevent unauthorized users from accessing the system. Different users are often allowed different types of access to a system. For example, a Web application may allow read-only access to the public and allow only authorized users to perform updates to data. A potential attacker wanting to compromise a system must look at what avenues are available for accessing the system, and then he must try to exploit any vulnerabilities. For example, if an attacker has only public read-only access, he can examine as much of the Web site as possible and try various attacks, such as malformed URLs, various FORM attacks, and so forth. Perhaps, after a thorough search, he may find the only attack he can perform is to fill up some customer service rep's mailbox. But if the attacker discovers a legitimate logon account, further possibilities for attack open up to him. Once logged in as an authorized user, he is allowed access to more of the Web site, which allows him to try attacks at more points in the application. Perhaps he can now deface the Web site or alter data in a database. A threat model allows an assessment of what damage can be inflicted by an attacker who has specific types of access. Attack trees [Schneier01] can assist in modeling threats to a system. The root node of an attack tree is the goal of the attack; child nodes represent direct actions that can attain the attack's goal. These actions may or may not be directly achievable. If they are achievable, they are leaf nodes in the tree. If they are not achievable, the necessary preliminary actions are listed as child nodes under them. Figure 2-16 shows a simple example of an attack tree for reading someone's e-mail.
Note that the root node represents the goal of the attack, to read some other user's e-mail. The nodes directly under the root node describe direct actions that can accomplish the goal. For example, the middle node under the root indicates that one can monitor the network on which the user is sending and receiving the e-mail. The nodes directly under that node describe possible methods of monitoring the target user's network traffic. The process continues until a leaf node is reached, which is an action that requires no prerequisite action. Ideally, the attack tree illustrates the possible avenues an attacker can follow, starting from any one of the leaf nodes and moving on up until the goal of the attack at the root node is accomplished. Once an attack tree's nodes have been identified, each node can be assigned a cost. Looking over the shoulder of the target, for example, is less expensive than installing a hidden camera. However, the cost does not need to be monetary; perhaps looking over the target's shoulder carries too much risk of being caught, and avoiding that risk is worth the price of installing a hidden camera. Various costs, or weightings, can be applied to each node. These include ease of execution, legality, intrusiveness, probability of success, and so forth [Schneier01]. As one applies costs to each of the nodes in the attack tree, one can start to see the path that has the least cost and is therefore more likely to be used by an attacker. The probability of likely paths and the risk value associated with the goal of the attack determine how much effort needs to be directed at securing those paths.
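As a rough illustration of this weighting exercise (not an excerpt from [Schneier01]), the sketch below models an attack tree in which each node carries a single numeric cost and a node is achieved through its cheapest child; the node names and costs are invented, and a real model would track multiple weightings such as legality and probability of success:

import java.util.ArrayList;
import java.util.List;

class AttackNode {
    final String action;
    final int cost;                                    // one illustrative weighting
    final List<AttackNode> children = new ArrayList<AttackNode>();

    AttackNode(String action, int cost) {
        this.action = action;
        this.cost = cost;
    }

    // Cost of achieving this node: its own cost plus the cheapest child.
    // A leaf is directly achievable with no prerequisite actions.
    int cheapestCost() {
        if (children.isEmpty()) return cost;
        int best = Integer.MAX_VALUE;
        for (AttackNode child : children) {
            best = Math.min(best, child.cheapestCost());
        }
        return cost + best;
    }
}

public class AttackTreeDemo {
    public static void main(String[] args) {
        AttackNode goal = new AttackNode("Read target's e-mail", 0);
        goal.children.add(new AttackNode("Monitor the network", 40));
        goal.children.add(new AttackNode("Look over the target's shoulder", 60));
        goal.children.add(new AttackNode("Install a hidden camera", 100));
        System.out.println("Cheapest cost to '" + goal.action + "': "
                + goal.cheapestCost());
    }
}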
Identity Management
An increasingly important issue in today's online world is identity management. How do we protect and manage our credentials across a plethora of network applications that have different authentication methods and requirements? If you have ever been locked out of one application because you continuously tried a password that you later realized was your password for a different application, you understand why there is a need for identity management. Identity management is a problem that confronts security professionals because users tend to circumvent security measures when they become annoyed with the necessity of managing an increasing number of identities across applications. Ease of use has always been a corollary to a good security architecture. The fewer identities the user has to manage, or the easier it is for the user to manage those identities, the more the user will voluntarily adhere to the specified security requirements of the applications owning those identities. Several approaches to identity management exist. Modern Web browsers have rudimentary identity management built in or available as add-ons. Features such as Google's AutoFill on a user's toolbar allow that user to store profile information and manage passwords across multiple applications. By allowing the user to remember and manage one centralized password, this approach reduces the tendency for users to reuse passwords across applications and thereby reduces the risk of one compromised password being used to impersonate the user across many applications. A common buzz term in the media today is "identity theft." It refers to the theft of a user's credentials in order to impersonate the user, or steal the identity of that user. Popular sites such as eBay, Amazon, and PayPal are common targets for identity thieves. An attacker will send out an e-mail supposedly from a PayPal administrator requesting that the user follow an embedded URL to log in to PayPal and perform some sort of account maintenance. In reality, however, the URL directs the unknowing user to the attacker's PayPal look-alike site, where the attacker is able to capture the user's id and password. The attacker can then use that id and password to log in to PayPal and transfer money from the user to his own account through some laundering scheme. A variant of that attack uses a Web site that sells items to redirect users to a phony payment site, capturing the user's id and password for the legitimate payment site.
For now, third-party products are using a mixture of standard and proprietary approaches to providing SSO.
Cross-Domain SSO
As discussed earlier, traditional Web application SSO is accomplished using cookies. One drawback to this approach is that applications can only set and read cookies from their own domains. Obviously, this prevents straightforward SSO between services in a.com and services in b.com. Cross-domain SSO (CDSSO) enables SSO across domains, allowing users to authenticate in one domain and then use applications in other domains without having to reauthenticate. This is a critical issue today for Web applications that require SSO for their browser-based clients. When Web services become dominant, and SAML is king, this will no longer be an issue. Until then, it is important to understand the problem and the technology that's available to solve it. CDSSO provides both cost and management savings across the enterprise. These savings are realized when the enterprise uses cross-domain SSO that doesn't require implementation of homogeneous platforms or authentication products. Existing systems can be aggregated without reengineering them to use proprietary mechanisms or even, in the radical case, to share a domain.
How It Works
The various implementations of CDSSO use one basic high-level approach that contains the following major components:

Cross-Domain Controller
Cross-Domain Single Sign-On (CDSSO) Component

The controller is responsible for redirecting a request to the authentication service or to the CDSSO component. The CDSSO component is responsible for setting the cookie information for its particular domain. A CDSSO component must be present in every domain that participates in the SSO federation.
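To make this concrete, here is a simplified servlet sketch of the CDSSO component's role in one participating domain. The host names, parameter names, and cookie name are all hypothetical, and a real product would verify the token cryptographically rather than with the placeholder shown:

import java.io.IOException;
import javax.servlet.http.*;

public class CdssoComponentServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        String token = req.getParameter("ssoToken"); // issued after authentication
        if (token == null || !isValid(token)) {
            // No valid token: send the user to the cross-domain controller,
            // which redirects to the authentication service.
            res.sendRedirect("https://controller.example.com/cdsso?goto="
                    + req.getRequestURL());
            return;
        }
        // Token is valid: establish SSO for this domain via a local cookie.
        Cookie sso = new Cookie("SSO_SESSION", token);
        sso.setDomain(".b.com"); // this CDSSO component's own domain
        sso.setPath("/");
        res.addCookie(sso);
        res.sendRedirect(req.getParameter("goto")); // back to the requested page
    }

    private boolean isValid(String token) {
        // Placeholder: verify the token's signature or look it up
        // at the controller before trusting it.
        return true;
    }
}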
Federated SSO
The goal of the enterprise today is to provide seamless employee access to the multitude of internal and externally hosted Web applications that exist in the corporate world. But not all of these applications are managed or maintained by a central authority. That fact, though, should not require employees to know whether an application is internal or external. They should be able to access a service without being required to log in again or having to maintain an additional account. Without a way to federate SSO across services and domains, users will be forced to manage an increasing number of identities and perform more and more logins as they jump from service to service. The solution to this problem is federation. An SSO federation provides a means for unifying SSO across a number of services without requiring each service provider to participate directly in the authentication process. Previously, we discussed SSO and CDSSO, where we assumed the participating parties were each responsible for authenticating users for each other. That burden upon each service provider increases at a nonlinear rate as the number of services grows and the number of users increases. To alleviate the burden on each service provider to authenticate and authorize participating users, a centralized Identity Provider is used to off-load the work. The Identity Provider acts as a central authority for authentication and authorization. It authenticates users and then provides them with tokens that can be verified by participating service providers. This method frees service providers from performing authentication and authorization of users themselves. Some users are known to a service provider directly; others are known only to partner service providers and gain access to a particular service provider through some temporary partnering agreement. In Figure 2-17, we see how service providers can share identity information through a common identity provider and thus form a federation.
Organizations have struggled with the need to aggregate their users into a single repository in order to reduce costs related to the management and complexity of multiple user stores. Applications have also moved to a centralized user repository in order to implement centralized authentication and authorization. Organizations that are consolidating services via a portal have done the same. Identity Provider federations provide a centralized aggregation of users and the capability of acting as a centralized authentication service. Service providers can delegate authentication and authorization to the Identity Provider. Doing this reduces the burden on a service to provide user management and reduces system complexity.
Cross-Domain Federations
Cross-domain federations are the most complex and most powerful architectures. They allow each Identity Provider to act both as a producer and consumer of identity assertions. Cross-domain federations allow a virtually unlimited topological range of identity propagation, not just across the enterprise but globally. Someday you may see one central digital warehouse that propagates and verifies your identity information to any service, anywhere, that you want to access. That is the goal of identity management. Figure 2-18 shows how Identity Providers can communicate with each other to create cross-domain federations.
For more information about understanding and implementing identity management, refer to Chapter 7, "Identity Management Standards and Technologies," and Chapter 12, "Securing the Identity: Design Strategies and Best Practices." We will discuss in those chapters the technologies, patterns, and best practices for enabling identity management in enterprise applications.
Summary
This chapter has provided some of the basics of security, from the fundamentals of cryptography to the challenges of identity management. We have explored the security goals and challenges of securing applications and the different cryptographic mechanisms that are used. We have also discussed other problems that arise from these approaches, such as identity management, single sign-on, and federated sign-on. In the following chapters, we will illustrate how these fundamental requirements are met using Java technologies that provide an end-to-end security architectural foundation. We will also discuss security methodology, patterns, and best practices that contribute to secure enterprise computing. We will wrap up by putting it all together into a set of case studies that demonstrate an end-to-end security architecture using patterns and best practices.
References
[DiffieHellman] W. Diffie and M. E. Hellman. "New Directions in Cryptography," IEEE Transactions on Information Theory 22, 1976.
[FIPS81] NIST. "DES Modes of Operation." Federal Information Processing Standards Publication 81. http://www.itl.nist.gov/fipspubs/fip81.htm
[Schneier01] B. Schneier. "Attack Trees: Modeling Security Threats." Dr. Dobb's Journal 306, December 1999. http://www.schneier.com/paper-attacktrees-ddj-ft.html
[Schneier02] B. Schneier. Crypto-Gram, May 2002. http://www.schneier.com/crypto-gram-0205.html#5
[Schneier03] B. Schneier. "Yarrow: A Secure Pseudorandom Number Generator." http://www.schneier.com/yarrow.html
[Schneier04] B. Schneier. "Declassifying Skipjack." Crypto-Gram, July 1998. http://www.schneier.com/cryptogram-9807.html#skip
[Hashfunc01] H. Lipmaa. "Hash Functions." http://www.cs.ut.ee/~helger/crypto/link/hash/
[RSA01] RSA Security. "Frequently Asked Questions About Today's Cryptography." http://www.rsasecurity.com/rsalabs/faq/sections.html
[RSA02] RSA Security. "What Are Some Other Block Ciphers?" http://www.rsasecurity.com/rsalabs/faq/3-67.html
[RSA03] RSA Security. "What Is Capstone?" http://www.rsasecurity.com/rsalabs/faq/6-2-3.html
[AES01] E. Roback and M. Dworkin. "First AES Candidate Conference Report," August 1998. http://csrc.nist.gov/CryptoToolkit/aes/round1/conf1/j41ce-rob.pdf
[WeiDai01] Wei Dai. Crypto++ Library 5.0 Reference Manual. http://cryptopp.sourceforge.net/docs/ref5/
[WeiDai02] Wei Dai. Crypto++ 5.1 Benchmarks, July 2003. http://www.eskimo.com/~weidai/benchmarks.html
[Kaufman] Charlie Kaufman, Radia Perlman, and Mike Speciner. Network Security: Private Communication in a Public World. Prentice Hall, 2002.
[Goldberg01] Ian Goldberg and David Wagner. "Randomness and the Netscape Browser," Dr. Dobb's Journal, January 1996. http://www.cs.berkeley.edu/~daw/papers/ddj-netscape.html
[RFC1750] D. Eastlake, et al. "Randomness Recommendations for Security," December 1994. http://www.ietf.org/rfc/rfc1750.txt
[SCEP01] Cisco. Simple Certificate Enrollment Protocol. http://www.cisco.com/warp/public/cc/pd/sqsw/tech/scep_wp.html
[RFC2459] R. Housley, W. Ford, W. Polk, and D. Solo. "Internet X.509 Public Key Infrastructure Certificate and CRL Profile." The Internet Society, January 1999. http://www.ietf.org/rfc/rfc2459.txt
[Moraes] Ian Moraes, Ph.D. "The Use of JNDI in Enterprise Java APIs." JDJ, SYS-CON Media, August 1, 2000. http://jdj.sys-con.com/read/36454.htm
[Netscape01] A. Frier, P. Karlton, and P. Kocher. The SSL Protocol, Version 3.0, 1996. http://wp.netscape.com/eng/ssl3/ssl-toc.html
[RFC2246] T. Dierks and C. Allen. "The TLS Protocol Version 1.0," January 1999. http://www.ietf.org/rfc/rfc2246.txt
[RFC2104] H. Krawczyk, et al. "HMAC: Keyed-Hashing for Message Authentication," February 1997. http://www.ietf.org/rfc/rfc2104.txt
[ANSIX9.17] American National Standards Institute. American National Standard X9.17: Financial Institution Key Management (Wholesale), 1985.
[SSL01] Alan O. Freier, et al. "The SSL Protocol Version 3.0," November 18, 1996. http://wp.netscape.com/eng/ssl3/draft302.txt
[RFC2560] M. Myers, et al. "X.509 Internet Public Key Infrastructure Online Certificate Status Protocol (OCSP)," June 1999. http://www.ietf.org/rfc/rfc2560.txt
The Java object encapsulation supports "programming by contract," which allows the reuse of code that has already been tested. The Java language is a strongly typed language. During compile time, the Java compiler does extensive type checking for type mismatches; this mechanism guarantees that the runtime data type variables are compatible and consistent with the compile-time information. The language allows declaring classes or methods as final. Any classes or methods that are declared as final cannot be overridden, which helps to protect the code from malicious attacks such as creating a subclass, substituting it for the original class, and overriding its methods. The Java garbage collection mechanism contributes to secure Java programs by providing transparent storage allocation and recovering unused memory rather than requiring manual deallocation. This ensures program integrity during execution and prevents the accidental or incorrect freeing of memory that could result in a JVM crash. With these features, Java fulfills the promise of providing a secure programming language that gives the programmer the freedom to write and execute code locally or distribute it over a network.
The release of J2SE [J2SE] introduced a number of significant enhancements to JDK 1.1 and added such features as security extensions providing cryptographic services, digital certificate management, PKI management, and related tools. Some of the major changes in the Java 2 security architecture are as follows:

Policy-driven restricted access control to JVM resources
Rules-based class loading and verification of byte code
A system for signing code and assigning levels of capability
Policy-driven access to Java applets downloaded by a Web browser

In the Java 2 security architecture, all code, regardless of whether it is run locally or downloaded remotely, can be subjected to a security policy configured by a JVM user or administrator. All code is configured to use a particular domain (equivalent to a sandbox) and a security policy that dictates whether the code can be run on a particular domain or not. Figure 3-2 illustrates the J2SE security architecture and its basic elements.
Let's take a more detailed look at those core elements of the Java 2 security architecture. Protection Domains ( java.security.ProtectionDomain): In J2SE, all local Java applications run unrestricted as trusted applications by default, but they can also be configured with access-control policies similar to what is defined in applets and remote applications. This is done by configuring a ProtectionDomain, which allows grouping of classes and instances and then associating them with a set of permissions between the resources. Protection domains are generally categorized as two domains: "system domain" and "application domain." All protected external resources, such as the file systems, networks, and so forth, are accessible only via system domains. The resources that are part of the single execution thread are considered an application domain. So in reality, an application that requires access to an external resource may have an application domain as well as a system domain. While executing code, the Java runtime maintains a mapping from code to protection domain and then to its permissions. Protection domains are determined by the current security policy defined for a Java runtime environment. The domains are characterized using a set of permissions associated with a code source and location. The java.security.ProtectionDomain class encapsulates the characteristics of a protected domain, which encloses a set of classes and its granted set of permissions when being executed on behalf of a user.
Permissions ( java.security.Permission): In essence, permissions determine whether access to a resource of the JVM is granted or denied. To be more precise, they give specified resources or classes running in that instance of the JVM the ability to permit or deny certain runtime operations. An applet or an application using a security manager can obtain access to a system resource only if it has permission. The Java Security API defines a hierarchy for Permission classes that can be used to configure a security policy. At the root, java.security.Permission is the abstract class, which represents access to a target resource; it can also include a set of operations to construct access on a particular resource. The Permission class contains several subclasses that represent access to different types of resources. The subclasses belong to their own packages that represent the APIs for the particular resource. Some of the commonly used Permission classes are as follows:
java.security.AllPermission - for wildcard permissions
java.security.BasicPermission - for named permissions
java.io.FilePermission - for the file system
java.net.SocketPermission - for the network
java.util.PropertyPermission - for properties
java.lang.RuntimePermission - for runtime resources
java.net.NetPermission - for network authentication resources
java.awt.AWTPermission - for graphical resources
Example 3-1 shows how to protect access to an object using permissions. The code shows the caller application with the required permission to access an object.
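A minimal sketch of this approach, using an illustrative class and a hypothetical RuntimePermission name, looks like the following; the caller's protection domain must be granted the permission in the security policy for the call to succeed:

public class GuardedAccount {
    public void transfer() {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            // Throws SecurityException unless the caller has been granted
            // this (hypothetical) permission in the security policy.
            sm.checkPermission(new RuntimePermission("guardedAccount.transfer"));
        }
        System.out.println("transfer permitted");
    }
}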
Permissions can also be defined using security policy configuration files (java.policy). For example, to grant access to read a file in "c:\temp\" (on Windows), the FilePermission can be defined in a security policy file (see Example 3-2).
"c:\\temp\\testFile", "read"; };
Policy: The Java 2 security policy defines the protection domains for all running Java code with access privileges and a set of permissions, such as read and write access or making a connection to a host. The policy for a Java application is represented by a Policy object, which provides a way to declare permissions for granting access to its required resources. In general, all JVMs have security mechanisms built in that allow you to define permissions through a Java security policy file. A JVM makes use of a policy-driven access-control mechanism by dynamically mapping a static set of permissions defined in one or more policy configuration files. These entries are often referred to as grant entries. A user or an administrator externally configures the policy file for a J2SE runtime environment using an ASCII text file or a serialized binary file representing a Policy class. In a J2SE environment, the default system-wide security policy file java.policy is located in the <JRE_HOME>/lib/security/ directory. The policy file location is defined in the security properties file java.security, which is located at <JRE_HOME>/lib/security/java.security. Example 3-3 is a policy configuration file that specifies the permission for a signed JAR file loaded from "http://coresecuritypatterns.com/*" and signed by "javaguy," and then grants read/write access to all files in /export/home/test.
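A grant entry matching that description would look like the following sketch (the exact listing is not shown here; only the shape of the entry is illustrated):

grant signedBy "javaguy", codeBase "http://coresecuritypatterns.com/*" {
    permission java.io.FilePermission "/export/home/test/*", "read,write";
};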
The J2SE environment also provides a GUI-based tool called "policytool" for editing a security policy file, which is located at "<JAVA_HOME>/bin/policytool." By default, the Java runtime uses the policy files located in:
${java.home}/jre/lib/security/java.policy ${user.home}/.java.policy
The locations of these policy files are themselves configured in the Java security properties file: ${java.home}/jre/lib/security/java.security
The effective policy of the JVM runtime environment will be the union of all permissions in all policy files. To specify an additional policy file, you can set the java.security.policy system property at the command line:

java -Djava.security.policy=mypolicy.policy MyApplication
To ignore the policies in the java.security file and only use the custom policy, use '==' instead of '=':

java -Djava.security.policy==mypolicy.policy MyApplication
SecurityManager ( java.lang.SecurityManager): Each Java application can have its own security manager that acts as its primary security guard against malicious attacks. The security manager enforces the required security policy of an application by performing runtime checks and authorizing access, thereby protecting resources from malicious operations. Under the hood, it uses the Java security policy file to decide which set of permissions are granted to the classes. However, when untrusted classes and third-party applications use the JVM, the Java security manager applies the security policy associated with the JVM to identify malicious operations. In many cases, where the threat model does not
include malicious code being run in the JVM, the Java security manager is unnecessary. In cases where the SecurityManager detects a security policy violation, the JVM will throw an AccessControlException or a SecurityException. In a Java application, the security manager is set via the setSecurityManager method in class System, and the current security manager is obtained via the getSecurityManager method (see Example 3-4).
The class java.lang.SecurityManager consists of a number of checkXXXX methods, such as checkRead(String file), which determines access privileges to a file. The check methods call the SecurityManager.checkPermission method to find out whether the calling application has permission to perform the requested operation, based on the security policy file; if not, it throws a SecurityException. If you wish to have your applications use a SecurityManager and security policy, start up the JVM with the -Djava.security.manager option; you can also specify a security policy file using the -Djava.security.policy option as a JVM argument. If you enable the Java Security Manager in your application but do not specify a security policy file, then the Java Security Manager uses the default security policies defined in the java.policy file in the $JAVA_HOME/jre/lib/security directory. Example 3-5 programmatically enables the security manager.
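Programmatically enabling the default security manager amounts to a call like the following minimal sketch (the class name is illustrative):

public class EnableSecurityManager {
    public static void main(String[] args) {
        // Install the default security manager if none is present.
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new SecurityManager());
        }
        System.out.println("Security manager installed: "
                + System.getSecurityManager());
    }
}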
The security manager can also be installed from the command-line interface:

java -Djava.security.manager MyApplication
AccessController ( java.security.AccessController): The access controller mechanism performs a dynamic inspection and decides whether the access to a particular resource can be allowed or denied. From a programmer's standpoint, the Java access controller encapsulates the location, code source, and permissions to perform the particular operation. In a typical process, when a program executes an operation, it calls through the security manager, which delegates the request to the access controller, and then finally it gets access or denial to the resources. In the java.security.AccessController class, the checkPermission method is used to determine whether the access to the required resource is granted or denied. If the requested access is granted, the checkPermission method returns silently; otherwise, it throws an AccessControlException. For example, to check read and write permission for a directory in the file system, you would use the code shown in Example 3-6.
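A minimal sketch along these lines, checking read/write access to a hypothetical directory, might look like this:

import java.io.FilePermission;
import java.security.AccessControlException;
import java.security.AccessController;

public class DirAccessCheck {
    public static void main(String[] args) {
        try {
            // Returns silently when access is granted.
            AccessController.checkPermission(
                    new FilePermission("/var/temp/*", "read,write"));
            System.out.println("Read/write access to /var/temp granted");
        } catch (AccessControlException e) {
            System.out.println("Access denied: " + e.getMessage());
        }
    }
}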
Codebase: A codebase specifies the URL location from which class or JAR files are loaded. The URL may refer to a directory in the local file system or to a location on the Internet. Example 3-7 retrieves all the permissions granted to a particular class that has been loaded from a codebase; the permissions are effective only if the security manager is installed. The loaded class obtains those permissions by executing Class.getProtectionDomain() and Policy.getPermissions().
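The following sketch shows the same idea of querying a class's code source and granted permissions; the class name is illustrative, and note that calling Policy.getPolicy requires the getPolicy security permission when a security manager is active:

import java.security.PermissionCollection;
import java.security.Policy;
import java.security.ProtectionDomain;

public class ShowPermissions {
    public static void main(String[] args) {
        ProtectionDomain pd = ShowPermissions.class.getProtectionDomain();
        System.out.println("CodeSource: " + pd.getCodeSource());
        // Ask the current policy for the permissions granted to this code source.
        PermissionCollection pc =
                Policy.getPolicy().getPermissions(pd.getCodeSource());
        System.out.println("Permissions: " + pc);
    }
}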
To test Example 3-7, Example 3-8 is the policy file (test.policy), which provides permission to read all system properties.
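A policy file matching that description (read access to all system properties) would contain a single grant entry along these lines:

grant {
    permission java.util.PropertyPermission "*", "read";
};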
To ignore the default policies in the java.security file and only use the specified policy, use '==' instead of '='. With the policy just presented, you may run the following (substituting your own main class):

java -Djava.security.manager -Djava.security.policy==test.policy MyApplication
CodeSource: The CodeSource allows representation of a URL from which a class was loaded and the certificates that were used to sign that class. It provides the same notion as codebase, but it encapsulates both the codebase (URL) from which the code is loaded and the certificates that were used to verify the signed code. The CodeSource class constructor takes two arguments to specify the code location and its associated certificates:

CodeSource(URL url, java.security.cert.Certificate[] certs)
To construct a code source with the codebase and without using certificates, you would use the following:

CodeSource cs = new CodeSource(new URL("http://coresecuritypatterns.com/"), (java.security.cert.Certificate[]) null);
Bytecode verifier: The Java bytecode verifier is an integral part of the JVM that plays the important role of verifying code prior to execution. It ensures that the code was produced by a trustworthy compiler consistent with the specifications, confirms the format of the class file, and proves that the series of Java bytecodes are legal. With bytecode verification, the code is proved to be internally consistent, following many of the rules and constraints defined by the Java language compiler. The bytecode verifier may also detect inconsistencies related to certain cases of array bound-checking and object-casting through runtime enforcement. To manually control the level of bytecode verification, the following options to the java command are available with the V1.2 JRE:

-Xverify:remote runs the verification process on classes loaded over the network (the default)
-Xverify:all verifies all classes loaded
-Xverify:none does no verification
ClassLoader: The ClassLoader plays a distinct role in Java security, because it is primarily responsible for loading the Java
classes into the JVM and then converting the raw data of a class into an internal data structure representing the class. From a security standpoint, class loaders can be used to establish security policies before executing untrusted code, to verify digital signatures, and so on. To enforce security, the class loader coordinates with the security manager and access controller of the JVM to determine the security policies of a Java application. The class loader further enforces security by defining namespace separation between classes that are loaded from different locations, including networks. This ensures that classes loaded from multiple hosts will not communicate within the same JVM space, thus making it impossible for untrusted code to get information from trusted code. The class loader determines a Java application's access privileges using the security manager, which applies the required security policy based on the requesting context of the caller application. With the Java 2 platform, all Java applications have the capability of loading bootstrap classes, system classes, and application classes initially using an internal class loader (also referred to as the primordial class loader). The primordial class loader uses a special class loader, SecureClassLoader, to protect the JVM from loading malicious classes. The java.security.SecureClassLoader class has a protected constructor that associates a loaded class with a protection domain. The SecureClassLoader also makes use of the permissions set for the codebase. For instance, URLClassLoader is a subclass of SecureClassLoader that allows loading a class from a location specified with a URL. Refer to Example 3-9, which shows how a URLClassLoader can be used to load classes from a directory.
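A minimal sketch of the technique, with a hypothetical directory and class name, follows:

import java.net.URL;
import java.net.URLClassLoader;

public class DirClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        // The trailing slash marks the URL as a directory of classes.
        URL classDir = new URL("file:/export/classes/");
        URLClassLoader loader = new URLClassLoader(new URL[] { classDir });
        Class loaded = loader.loadClass("MyClass"); // hypothetical class name
        System.out.println("Loaded " + loaded.getName()
                + " with " + loaded.getClassLoader());
    }
}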
Keystore and Keytool: The Java 2 platform provides a password-protected database facility for storing trusted certificate entries and key entries. The keytool allows the users to create, manage, and administer their own public/private key pairs and associated certificates that are intended for use in authentication services and in representing digital signatures. We will take a look in greater detail at the usage of the Java keystore and keytool and how these tools help Java security in the section entitled "Java Security Management Tools," later in this chapter.
To run the applet, you need to compile the source code using javac and then you may choose to deploy this applet class along with an HTML page on a Web server. To do so, create an HTML file (see Example 3-11) called WriteFileApplet.html.
appletviewer http://coresecuritypatterns.com/WriteFileApplet.html
When executing this applet, you should receive the SecurityException in the applet window. This applet shouldn't be able to write the file, because it does not have a security policy with a file permission to write in the user's home directory. Now, let's use the following policy file WriteAppletPolicy, which grants a write permission. To do so, create a policy file (see Example 3-12) called WriteAppletPolicy.policy in the working directory:
To test the applet using an appletviewer, you may choose to use the -J-Djava.security.policy=WriteAppletPolicy.policy option on the JVM command line, or you can explicitly specify your policy file in the JVM security properties file in the <JAVA_HOME>/jre/lib/security directory:
policy.url.3=file:/export/xyz/WriteAppletPolicy.policy
Example 3-13 shows running the WriteFileApplet applet with the WriteAppletPolicy policy file from the command-line interface.
You should be able to run the WriteFileApplet applet successfully without a SecurityException, and it should also be able to create and write the file AppletGenrtdFile in the client's local directory. Now let's explore the concept of signed applets.
Signed Applets
The Java 2 platform introduced the notion of signed applets. Signing an applet ensures that an applet's origin and its
integrity are guaranteed by a certificate authority (CA) and that it can be trusted to run with the permissions granted in the policy file. The J2SE bundle provides a set of security tools that allows end users and administrators to sign applets and applications, and also to define local security policy. This is done by attaching a digital signature to the applet that indicates who developed the applet and by specifying a local security policy in a policy file mentioning the required access to local system resources. The Java 2 platform requires an executable applet class to be packaged into a JAR file before it is signed. The JAR file is signed using the private key of the applet creator. The signature is verified using the corresponding public key by the client user of the JAR file. The public key certificate is sent along with the JAR file to any client recipients who will use the applet. The client who receives the certificate uses it to authenticate the signature on the JAR file. To sign the applet, we need to obtain a certificate that is capable of code signing. For all production purposes, you must always obtain a certificate from a CA such as VeriSign, Thawte, or some other CA. The Java 2 platform introduced new key management tools to facilitate support for creating signed applets:

The keytool is used to create pairs of public and private keys, to import and display certificate chains, to export certificates, and to generate X.509 v1 self-signed certificates.
The jarsigner tool is used to sign JAR files and also to verify the authenticity of the signature(s) of signed JAR files.
The policytool is used to create and modify the security policy configuration files.

Let's take a look at the procedure involved in creating a signed applet using our previous WriteFileApplet applet example. The following steps are involved on the originating host environment responsible for developing and deploying the signed applet:

1. Compile the applet source code to an executable class. Use the javac command to compile the WriteFileApplet.java class; the output is the WriteFileApplet.class file:
javac WriteFileApplet.java
2. Package the compiled class into a JAR file. Use the jar utility with the cvf option to create a new JAR file in verbose mode (v), specifying the archive file name (f):

jar cvf WriteFileApplet.jar WriteFileApplet.class

3. Generate a key pair. Use the keytool utility to generate the signer's key pair:
keytool -genkey -alias signapplet -keystore mykeystore -keypass mykeypass -storepass mystorepass
This keytool -genkey command generates a key pair that is identified by the alias signapplet. Subsequent keytool commands are required to use this alias and the key password (-keypass mykeypass) to access the private key in the generated pair. The generated key pair is stored in a keystore database called mykeystore (-keystore mykeystore) in the current directory and is accessed with the mystorepass password (-storepass mystorepass). The command also prompts the signer to input information about the certificate, such as name, organization, location, and so forth.

4. Sign the JAR file. Using the jarsigner utility (see Example 3-14), sign the JAR file and verify the signature on the JAR file:
jarsigner -keystore mykeystore -storepass mystorepass -keypass mykeypass -signedjar SignedWriteFileApplet.jar WriteFileApplet.jar signapplet
The -storepass mystorepass and -keystore mykeystore options specify the keystore database and password where the private key for signing the JAR file is stored. The -keypass mykeypass option is the password to the private key, SignedWriteFileApplet.jar is the name of the signed JAR file, and signapplet is the alias to the private key. jarsigner extracts the certificate from the keystore and attaches it to the generated signature of the signed JAR file.

5. Export the public key certificate. The public key certificate will be sent with the JAR file to the end user, who will use it to authenticate the signed applet. To have trusted interactions, the end user must have a copy of the public key in its keystore. This is accomplished by exporting the public key certificate from the originating JAR signer's keystore as a binary certificate file and then importing it into the client's keystore as a trusted certificate. Using the keytool, export the certificate from mykeystore to a file named mycertificate.cer as follows:
keytool -export -keystore mykeystore -storepass mystorepass -alias signapplet -file mycertificate.cer
6. Deploy the JAR and certificate files. They should be deployed to a distribution directory on a Web server. Additionally, create a Web page embedding the applet and the JAR. As shown in Example 3-15, the applet tag must use the following syntax.
In addition to the previous steps, the following steps are involved in the client's environment:

7. Import the certificate as a trusted certificate. To download and execute the signed applet, you must import the trusted public key certificate (provided by the issuer) into a keystore database. The Java runtime will use this client-side keystore to store its trusted certificates and to authenticate the signed applet. Using the keytool utility, import the trusted certificate provided by the issuer (see Example 3-16).
keystore "/export/home/clientstore"; grant SignedBy "clientcer" { permission java.io.FilePermission "<<ALL FILES>>", "write"; };
9. Run and test the applet using appletviewer. The appletviewer tool runs the HTML document specified in the URL, which displays the applet in its own window. To run the applet using the client policy file, enter the following at the command line (see Example 3-18).
    <resources>
        <j2se version="1.2+" />
        <jar href="SignedClientApp.jar"/>
    </resources>
    <application-desc main-class="SignedClientApp" />
</jnlp>
Java Keystore
The keystore is a protected database that stores keys and trusted certificate entries for those keys. A keystore stores all the certificate information related to verifying and proving the identity of a person or an application. It contains a private key and a chain of certificates that allows establishing authentication with corresponding public keys. Each entry in the keystore is identified by a unique alias. All stored key entries can also be protected using a password. The Java keystore supports the RSA cryptographic standard known as PKCS#12, which provides a way to securely store multiple keys and certificates in a password-protected file; the default keystore type, JKS, is a Sun-specific format. Unless specified otherwise, the key entries are stored in a .keystore file and the trusted CA certificate entries are stored in a cacerts file, which resides in the JRE security directory.
Keytool
Keytool is a key and certificate management tool that allows users and administrators to administer their own private/public key pairs and associated certificates. This tool is intended for use with authentication services and verifying data integrity using digital signatures. The keytool is provided with the J2SE bundle as a command-line utility, which can be used to create JKS (Java keystore) and JCEKS (Java Cryptographic Extensions Keystore) keystores, generate and store keys and their associated X.509v1 certificates, generate Certificate Signing Requests (CSR), import and store trusted certificates, and perform maintenance on keystore entries. The keytool utility uses the X.509 certificate standard, which is encoded using the Abstract Syntax Notation 1 (ASN.1) standard to describe data and the Definite Encoding Rules (DER) standard to identify how the information is to be stored and transmitted. The X.509 certificate takes the values of subject and issuer fields from the X.500 Distinguished Name (DN) standard. Let's take a look at the most common operations performed using the keytool utility: Creating a keystore database. A keystore is created whenever you use keytool with options to add entries to a nonexistent keystore. The following options automatically create a keystore when the specified keystore does not exist in the user's directory:
The -genkey option is used to generate private/public key pairs.
The -import option is used to import a trusted certificate.
The -identitydb option is used to import data from a legacy JDK 1.1 identity database.
By default, the keytool creates a keystore as a file named .keystore in the user's home directory, but a name can be specified using the -keystore option. Generating private/public key pairs. When keytool is used to generate a private/public key pair, each entry contains a private key and an associated certificate "chain." The first certificate in the chain contains the public key corresponding to the private key. A pair of public and private keys can be generated and added to the keystore using the keytool -genkey command. The -genkey option creates a public/private key pair and then wraps the public key in a self-signed certificate. The following example will generate a key pair wrapped in an X.509 self-signed certificate and stored in a single-element certificate chain. In this command, we also need to specify passwords for the keys and the keystore, the algorithm to use (RSA), and the alias (see Example 3-20).
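A command matching that description, using the alias and passwords discussed below, would look like the following sketch:

keytool -genkey -alias myalias -keyalg RSA -keypass mykeypass -keystore mykeystore -storepass mystorepass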
In the command shown in Example 3-20, the -genkey option is used to generate the key pair, and all other options are used in support of this command. The key pair is identified with an alias myalias with a password of mykeypass. Both the alias and the keypass are required for all the subsequent commands and operations when we access the particular key pair in the keystore. The other options that can be used are as follows:
-keyalg specifies the encryption algorithm used for the key (for example, RSA). An additional -keysize option allows specifying the bit size of the key; if not specified, keytool uses the default value of 1024 bits.
-keypass specifies the password for the generated key.
-keystore specifies the name of the keystore in which to store the keys, which is a binary file. If it is not specified, a new file will be created and saved as a .keystore file.
-storepass specifies the password used to control access to the keystore. After keystore creation, all modifications to the keystore will require you to use this password whenever accessing the keystore.

When you execute this command and its options, you will also be prompted to supply the following names for creating subcomponents of the X.500 Distinguished Name standard:
CN - First and last name
OU - Organizational unit
O - Organization
L - City or locality
ST - State or province
C - Country code
Example 3-21 is a sample output generated using this command and its options.
With all the questions answered, the keytool generates the keys and the certificate and stores them in the specified keystore file. Listing the entries of a keystore. The keytool with the -list option is used to list all the entries of a keystore, and also to look at the contents of an entry associated with an alias. Example 3-22 is a command that lists all the entries of a keystore named mykeystore. This command also requires us to enter the keystore password.
$ keytool -list -keystore mykeystore
Enter keystore password: mystorepass

Keystore type: jks
Keystore provider: SUN

Your keystore contains 1 entry

myalias, Sep 5, 2003, keyEntry,
Certificate fingerprint (MD5): 68:A2:CA:0C:D5:C6:D2:96:D5:DC:EA:8D:E3:A1:AB:9B
When displaying the contents of a keystore entry identified by an alias, the -list command prints the MD5 fingerprint of the certificate. If the -v option is specified, the certificate is printed in a human-readable format. If the -rfc option is specified, the certificate is output in the Base64 encoding format. The following command (see Example 3-23) lists the contents of a keystore entry in human-readable format for the alias "myalias."
Exporting a certificate entry from a keystore. To have trusted interactions, the communicating client peer needs to have a copy of the public keys from the original signer in the keystore. This is done by exporting the certificate (containing the public key and signer information) to a binary certificate file and then importing them as a trusted certificate into the client peer's keystore. To export the certificate to a binary certificate file (see Example 3-24), the keytool -export and -file options are used. The following command exports a certificate entry identified with alias myalias in keystore mykeystore to a file mycertificate.cer. This command requires entering the keystore password.
Importing a trusted certificate. The keytool -import option is used to import a trusted certificate into a keystore database and to associate it with a unique alias. This is executed in the environment of a client who wishes to trust this certificate and to have trusted client interactions with that communicating peer. When a new certificate is imported into the keystore, the keytool utility verifies that the certificate has integrity and authenticity. The keytool utility attempts this verification by building a chain of trust from that certificate to the self-signed certificate that belongs to the issuer. The lists of trusted certificates are stored in the cacerts file.
To execute the import in a keystore (see Example 3-25), you need to provide the certificate entry with a unique alias and key password. For example, the following command imports a certificate entry from a file mycertificate.cer and identifies the entry with myclientalias and key password clientkeypass in keystore clientkeystore with keystore password clientpass. As a last step, the command displays the owner and issuer information of the certificate and prompts the user to trust the certificate:
Printing certificate information. The keytool -printcert option is used to display the contents of a certificate that has been exported from a keystore and made available as a file. To execute this command, no associated keystore database or password is required, because the certificate is available as a file (.cer) and has not been imported into the keystore. You would use Example 3-26 to display the contents of a binary certificate file.
Creating a Certificate Signing Request (CSR). The keytool -certreq option allows you to generate a certificate signing request for a certificate from a Certificate Authority (CA). The -certreq option (see Example 3-27) creates a CSR for the certificate and places the CSR in a file named certreq_file.csr, where certreq_file.csr is the name of the file that is to be sent to the CA for authentication. If the CA considers the certificate to be valid, it issues a certificate reply and places the reply in a file named cert_reply.cer, where cert_reply.cer is the file returned by the CA that holds the results of the CSR authorization that was submitted in the certreq_file.csr file.
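A CSR request matching that description, reusing the myalias entry from the earlier examples, would look like the following sketch:

keytool -certreq -alias myalias -keystore mykeystore -storepass mystorepass -file certreq_file.csr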
Deleting a keystore. To delete a keystore, use an operating system delete command to delete the keystore files. Changing the keystore password. To change the keystore password, use the keytool -storepasswd option with the -new option to set a new password.
Example 3-28 shows how the password for the keystore mykeystore is changed from mystorepass to newstorepass.
Policytool
The policytool is a utility that provides a menu-driven user-friendly interface for creating and viewing Java security policy configuration files. The policytool menu options enable you to read and edit policy files by adding policy and permission entries, assigning a keystore, and creating a new policy configuration file. To start the Policy Tool utility, simply type the following at the command line:
policytool
For more information about using policytool menu options, refer to the policytool documentation provided with the J2SE bundle.
Jarsigner
The jarsigner tool is used to digitally sign Java archives (JAR files) and to verify a signature and its integrity. The jarsigner tool can sign and verify only JAR files created with the jar utility provided in the J2SE bundle.
For example, the following command signs myJar.jar using the private key identified by the alias myPrivateKeyalias in the default keystore:

jarsigner myJar.jar myPrivateKeyalias
The J2SE environment also provides support for cryptographic services, secure communication services using the SSL and TLS protocols, and certificate management services. J2SE 5.0 is the newest release of the Java platform. It includes numerous feature and security updates to the Java language and the JVM, and it offers significant security enhancements compared to the previous J2SE 1.4.x release. These will be discussed in Chapter 4, "Java Extensible Security Architecture and APIs."
J2ME defines the notion of configurations and profiles to represent the characteristics of supported devices. These configurations and profiles are developed by the industry groups participating in the Java community process.
J2ME Configurations
A J2ME configuration defines Java runtime and API technologies that satisfy the needs of a broad range of devices. Configurations are defined based on the device limitations and the characteristics of memory, display, processing power, network connectivity, and so forth. The current J2ME specification defines two types of configurations: Connected Device Configuration (CDC) and Connected Limited Device Configuration (CLDC).
CDC
CDC targets high-end consumer devices with TCP/IP network connectivity and higher bandwidth, and it requires at least 2 MB of memory to be available for the Java platform. It defines a full-featured JVM that includes all the functionality of a Java runtime environment residing on a standard desktop system. The low-level interfaces for calling native code (JNI), connecting to debuggers (JVMDI), and profiling code (JVMPI) are optional; vendors may adopt them based on device requirements.
CDC provides the full-featured Java security model and associated mechanisms of a J2SE environment:
- All code runs in a sandbox without exposing the user's system to risk.
- All classes are loaded with full byte-code verification and Java language features.
- Signed classes allow the JVM to verify the integrity and originating source of Java classes when it loads them.
- The security policy provides fine-grained access control over resources using a user-defined set of permissions and policies.
- Java cryptography is supported to secure programs, data, communication, and retrieval.
In short, CDC offers all the security benefits of a standard J2SE environment and gives architects and developers the flexibility to use the different Java security API capabilities for building secure applications. J2ME runtime implementations built on the CDC may utilize the standard JVM bundled with J2SE or the Compact Virtual Machine (CVM), depending on the size of the device for which the implementation is being developed.
CLDC
CLDC targets low-end consumer devices, with only 128-512 kilobytes of memory required for the Java platform and running applications. It features a subset of a standard JVM with limited APIs and supporting libraries. When compared to the J2SE implementation, J2ME differs as follows:
- Limited security model
- New class verification mechanism
- No user-defined class loaders
- No support for thread groups or daemon threads
- No support for weak references
- Limited error handling
- No finalization
- No reflection support
- New connection framework for networking
CLDC runs on top of Sun's K Virtual Machine (KVM), a JVM designed specifically for supporting resource-limited devices. KVM is the core component of the J2ME platform for CLDC devices. CLDC defines two levels of security: low-level KVM security and application-level security.
Low-level KVM security: An application running in the KVM must not be able to harm the device in any way. Such security is guaranteed by a pre-verification process that rejects invalid class files and ensures that a class does not contain any references to invalid memory locations. The preverify tool is responsible for the verification process, and it inserts special attributes into the Java class file. After pre-verification, the KVM performs an in-device verification process, which ensures that the class has been pre-verified. Figure 3-5 illustrates the CLDC verification process.
Application-level security: The KVM defines a sandbox model that is quite different from the J2SE sandbox model. The sandbox requires that all Java classes are verified and guaranteed to be valid Java applications. It limits the APIs available to an application to the predefined set required by the CLDC specification and its supporting profiles. The downloading and management of applications take place at the native code level, and application programmers cannot define their own classloader or override the classloader, system classes, or associated packages of the KVM. Application programmers also cannot download or add any native libraries containing code and functionality that are not part of the CLDC-supported libraries.
J2ME Profiles
J2ME profiles define a broader set of Java API technologies suited to a specific device class or a targeted class of devices. Profiles are built on top of J2ME configurations and define API libraries that enable specific types of applications suitable for the target devices. Each J2ME configuration supports one or more J2ME profiles. For the smallest devices, the Mobile Information Device Profile (MIDP) is one such profile, built on top of the CLDC configuration. The other class of devices, based on the CDC configuration, uses the Foundation Profile. The Foundation Profile and its API libraries are based on J2SE 1.3.
The J2ME Wireless Toolkit (WTK) is available for free from http://java.sun.com/j2me. A MIDlet suite consists of one or more MIDlets packaged together as a JAR file. A packaged MIDlet will generally consist of compiled and pre-verified Java classes and other files, including images and application-related data. In addition to those files, a manifest file (manifest.mf) is also stored as part of the JAR file. The manifest file stores the MIDlet name, version, and MIDlet vendor-specific attributes. When running the jar utility, it is important to use the -m option to include the manifest file in the arguments. Example 3-31 shows a manifest file.
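A minimal manifest along these lines (MIDlet, icon, and vendor names are illustrative) would be:

MIDlet-Name: MyTestMIDlet
MIDlet-Version: 1.0.0
MIDlet-Vendor: CSP
MIDlet-1: MyTest, /images/MyTest.png, com.csp.MyTestMIDlet
MicroEdition-Profile: MIDP-2.0
MicroEdition-Configuration: CLDC-1.0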
In addition to a JAR file, a Java Application Descriptor (JAD) file is also required to be available as part of the MIDlet suite to provide information about the MIDlet(s) bundled in the JAR file. The JAD file provides information to the mobile device's application manager about the contents of the JAR file and also provides a way to pass parameters required by the MIDlet(s). The application manager requires that the JAD file have an extension of .jad. Example 3-32 shows an example of a JAD file.
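A minimal JAD file (the URL and size values are illustrative) might look like this:

MIDlet-Name: MyTestMIDlet
MIDlet-Version: 1.0.0
MIDlet-Vendor: CSP
MIDlet-Jar-URL: http://www.csp.com/midlets/MyTestMIDlet.jar
MIDlet-Jar-Size: 17043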
Use the ktoolbar GUI tool provided with the J2ME WTK to load the JAD file and the JAR file containing the MIDlet suite from a local filesystem.
MIDlet Security
The MIDP 1.0 specification introduced a basic security feature that restricts all MIDlet suites to operating within a sandbox-based security model. This was done primarily to prevent MIDlets from accessing sensitive APIs and functions of devices, and to avoid any risks to device resources. MIDP 2.0 introduced the notion of trusted MIDlets to provide a flexible and consistent security model with access-control mechanisms defined by a domain policy. The device enforces access control on a MIDlet suite as trusted MIDlets in accordance with the defined domain policy. A MIDlet suite is identified as untrusted when the origin and integrity of the MIDlet suite's JAR file cannot be verified and trusted by the device. MIDP 2.0 also specifies how MIDlet suites can be cryptographically signed so that their authenticity and originating source can be validated.
Trusted MIDlets
The MIDP defines a security model for trusted MIDlets based on a notion referred to as Protection Domains. Each protection domain associates a MIDlet with a set of permissions and related interaction modes, which allow the MIDlet to access protected functions based on the permissions granted. A protection domain contains allowed and user permissions.
The allowed permissions define a set of actions that are allowed without any user interaction. The user permissions define a set of permissions that require explicit user approval. The MIDlet is bound by the protection domain and allows or denies access to functions after prompting the user and obtaining permission. In the case of user permissions, a MIDlet needs permission at the first time of access and asks the user whether the permission should be granted or denied. A user permission grants or denies access to specific API functions with one of the following three interaction modes:
- Blanket: The permission is valid for every invocation until it is revoked by the user or the MIDlet is deleted from the device.
- Session: The permission is valid for every invocation of the MIDlet suite until it terminates; the user is prompted for each initiated session, on or before its first invocation.
- Oneshot: The permission is valid for a single invocation of a restricted method; the user is prompted on each such invocation.
Each user permission has a default interaction mode and an optional set of available interaction modes. These interaction modes are determined by the security policy. A policy consists of the definitions of domains and aliases. Each domain consists of the definition of granted permissions and user permissions. Aliases permit groups of named permissions to be reused in more than one domain and help keep the policy compact; aliases may only be defined and used within a single policy file. A domain is defined with a domain identifier and a sequence of permissions. The domain identifier is implementation-specific. Each permission line begins by allowing or denying user permissions and indicates the interaction modes, such as blanket, session, and oneshot, for the list of permissions that follows. Example 3-33 shows a policy file.
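The following sketch approximates such a policy file from the description above (domain names and permission lists are illustrative, and the exact syntax varies by implementation):

alias: net_access javax.microedition.io.Connector.http, javax.microedition.io.Connector.socket
domain: TrustedOperatorDomain
allow: net_access
domain: UntrustedDomain
oneshot: javax.microedition.io.Connector.http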
To request permissions for executing a MIDlet suite, we make use of the attributes MIDlet-Permissions and MIDlet-Permissions-Opt in a JAD descriptor file, which signal a MIDlet suite's dependence on certain permissions. These special JAD attributes represent the MIDlet suite and provide the device, at installation time, with access-control information about which particular operations the MIDlet suite will attempt. For example, suppose the MIDlet suite will attempt to make an HTTP connection and optionally make socket connections. The attributes in the JAD descriptor file would look like this:
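Based on that scenario, the JAD attributes would be:

MIDlet-Permissions: javax.microedition.io.Connector.http
MIDlet-Permissions-Opt: javax.microedition.io.Connector.socket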
If a device attempts to install a MIDlet suite into a protection domain that doesn't allow the required permissions specified in the MIDlet suite's JAD file, the installation fails automatically. Thus, the trusted MIDlet provides the mechanisms that protect the device from poorly or maliciously written Java code that can render a device inoperable.
MIDlet-Certificate: <Base64 encoded value of certificate>
MIDlet-Jar-RSA-SHA1: <Base64 encoded value of signatures>
The J2ME W ireless Toolkit enables a signer to either sign a MIDlet suite with an existing public and private key pair obtained from a certificate authority or with a new key pair that can be generated as a self-signed certificate for testing purposes only. Each key pair is associated with a certificate. Assigning a security domain to the certificate designates the level of trust that the certificate holder has to access protected APIs and the level of access to those APIs. The JADTool is a command-line interface provided with the J2ME toolkit for signing MIDlet suites. The JADTool only uses certificates and keys from J2SE keystores, discussed earlier in this chapter. As we discussed previously, the J2SE bundle provides key management tools for managing keystores. The JADTool utility is packaged in a JAR file. To run it from the command line interface, change your current directory to {j2metoolkit-dir}\bin, and execute the following command:
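A sketch of the invocation (the JAR file name may vary slightly across toolkit versions; running it without arguments prints its usage):

java -jar JADTool.jar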
Example 3-34 would add the certificate of the key pair from the given J2SE keystore to the specified JAD file.
Example 3-35 adds the digital signature of the given JAR file to the specified JAD file.
If you don't wish to use the JADTool command-line utility, the J2ME toolkit also provides a GUI-based utility (Ktoolbar) that enables you to complete the entire signing process without using the command-line options. Finally, you may also choose to use the Ktoolbar tool to publish and then directly install the MIDlet suite on the device using Over-The-Air (OTA) provisioning techniques commonly adopted in wireless service deployment. OTA provisioning mechanisms allow deploying applications wirelessly, the same way devices send and receive messages or browse the Internet. The J2ME platform [J2ME] also provides mechanisms for using cryptographic algorithms, securing network communication with the SSL/TLS protocols, and ensuring source authentication, integrity, and confidentiality. This will be discussed in Chapter 4, "Java Extensible Security Architecture and APIs."
Smart card technology is quickly becoming a replacement for several current technologies, from ID badges to credit cards. Financial companies are considering smart cards as a mechanism for delivering services at a lower cost to businesses and consumers. A common service would be an electronic purse that allows bank customers to transfer money from their bank accounts to their smart cards in the form of secure electronic cash. The electronic cash can then be used to purchase products, pay bills, or pay bridge tolls. Smart cards can also be used as a form of ID in a variety of industries. Smart cards can hold information commonly found on an ID card, driver's license, or in a patient's medical records. For example, a doctor can create a record of a patient's treatment history by writing information to his or her smart card. This allows the patient, or another doctor, to have medical information available at any time. A smart card can also act as an employee access badge (by containing encrypted security information such as user names and passwords) that allows an employee access into a company's building or computer network.
A smart card requires a user PIN (Personal Identification Number) to get access to a system.
This means that applets installed after issuance of the card are restricted from running native methods. The Java Card technology embraces techniques using compressed archive files with cryptographic signatures to provide tamper-proof distribution and installation procedures for Java class files and Java Card applets.
The development of a Java Card applet typically starts like developing any other Java program and compiling the source file to produce Java class files. The resulting class files are tested and debugged using a Java Card simulator environment that simulates the applet using the Java Card runtime environment running on a development workstation. The simulator helps the developer to study the behavior and results prior to deploying on a Java Card. Then the class files that make up the applet are converted to a Converted Applet (CAP) file using a Java Card CAP converter tool. As a package, the resulting CAP files represent a Java Card applet. These CAP files are further tested using a Java Card emulator tool in the development environment to identify the expected behavior of the applet in a real Java Card. Finally, the tested applet comprised of all the CAP files is downloaded into the Java Card using a proprietary tool provided by the Java Card vendor. To secure the applets (using vendor-provided tools), it is possible to sign the applet code and allow the Java Card to verify the signatures.
Code Obfuscation
Code obfuscation is the process of transforming an executable in a manner that hinders reverse engineering by making the generated code more complex and harder to understand. It decouples the relationship between the executable code and its original source, which ultimately makes decompiled code ineffective. Despite all these changes, the obfuscated program still works in a functionally identical way to the original executable. Several transformation mechanisms allow obfuscation of Java code; the most common techniques [Obfuscation] are as follows:
- Structural or layout transformation: Transforms the lexical structure of the code by scrambling and renaming the identifiers of methods and variables (see the small illustration after this list).
- Data transformation: Affects the data structures represented in the program. For example, it changes data represented in memory from a local to a global variable, converts a two-dimensional array into a one-dimensional array and vice versa, changes the order of data in a list, and so forth.
- Control transformation: Affects the flow control represented in the program. For example, it changes the grouping of statements as inline procedures, the order of execution, and so forth.
- Tamper-proofing and preventive transformation: Makes the decompiler fail to extract the actual program, so the generated code is unusable. Refer to [PrevTransform] for more detailed information.
- String encryption: Encrypts all string literals within the executable code and decrypts them at runtime upon invocation.
- Watermarking: Embeds a secret message in the executable that identifies the copy of the executable and allows you to trace the attacker who exploited the generated code.
With little performance overhead, the code obfuscation process restricts the abuse of decompilation mechanisms and offers portability without affecting the deployment platforms. Adopting code obfuscators is a good choice in the attempt to reduce the risks of reverse engineering. They prevent loss of intellectual property and offer protection of Java code from malicious attacks. Java code obfuscators are publicly available in the form of freeware, shareware, and commercial applications.
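As a tiny illustration of a layout transformation (the method and identifiers here are invented for the example), an obfuscator might rewrite:

// Original source
public double computeInterest(double principal, double rate) {
    return principal * rate;
}

// After layout transformation: identifiers scrambled, intent lost to a reader
public double a(double a, double b) {
    return a * b;
}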
Summary
This chapter explained the Java 2 platform architecture and its security features as they apply to building Java applications. In particular, it described the various Java platforms and the core security features that contribute to the end-to-end security of Java-based applications running on various systems: from servers to stand-alone computers, computers to devices, and devices to smart cards. It discussed securing Java applets, JNLP-based Java Web Start applications, and code obfuscation strategies. The chapter also described how to use the different security mechanisms, tools, and strategies for implementing the following:
- Java application security
- Java applet security
- Java Web Start security
- J2ME platform security
- Java Card platform security
- Java code obfuscation
In the next chapter, we will explore the Java extensible security architecture and API mechanisms that allow preserving confidentiality, integrity, authentication, and non-repudiation in Java-based applications.
References
[J2SE] Li Gong. "Java Security Architecture." Java 2 SDK, Standard Edition Documentation Version 1.4.2. Sun Microsystems, 2003. http://java.sun.com/j2se/1.4.2/docs/guide/security/spec/security-spec.doc1.html and http://java.sun.com/j2se/1.4.2/docs/guide/security/spec/security-spec.doc2.html
[J2SE5] "J2SE 5.0 Platform Specifications." Sun Microsystems, 2004. http://java.sun.com/j2se/1.5.0/docs/api/
[J2ME] "J2ME Platform Specifications." Sun Microsystems, 2003. http://java.sun.com/j2me/
[JavaCard] "Java Card Platform Specifications." Sun Microsystems, 2003. http://java.sun.com/products/javacard/
[JWS] "Java Web Start Technology Specifications." Sun Microsystems, 2003. http://java.sun.com/products/javawebstart/
[Obfuscation] Christian Collberg and Clark Thomborson. "Watermarking, Tamper-Proofing, and Obfuscation - Tools for Software Protection." IEEE Transactions on Software Engineering 28:8, 735-746, August 2002.
[PrevTransform] Christian Collberg, Clark Thomborson, and Douglas Low. "A Taxonomy of Obfuscating Transformations." Technical Report 148, Department of Computer Science, University of Auckland, New Zealand, July 1997. http://www.cs.auckland.ac.nz/~collberg/Research/Publications/CollbergThomborsonLow97a/index.html
Figure 4-1. Java extensible security architecture and its core APIs
As part of the J2SE bundle, the Java extensible security architecture provides the following set of API frameworks and their implementations, which contribute to the end-to-end security of Java-based applications:
- Java Cryptography Architecture (JCA): Provides basic cryptographic services and algorithms, which include support for digital signatures and message digests.
- Java Cryptographic Extension (JCE): Augments JCA functionality with added cryptographic services that are subject to U.S. export control regulations, including support for encryption and decryption operations, secret key generation and agreement, and message authentication code (MAC) algorithms.
- Java Certification Path API (CertPath): Provides the functionality of checking, verifying, and validating the authenticity of certificate chains.
- Java Secure Socket Extension (JSSE): Facilitates secure communication by protecting the integrity and confidentiality of data exchanged using SSL/TLS protocols.
- Java Authentication and Authorization Service (JAAS): Provides the mechanisms to verify the identity of a user or a device, determine its accuracy and trustworthiness, and then grant access rights and privileges depending on the requesting identity. It facilitates the adoption of pluggable authentication mechanisms and user-based authorization.
- Java Generic Secure Services (JGSS): Provides functionality for developing applications using a unified API that supports a variety of authentication mechanisms, such as Kerberos-based authentication, and also facilitates single sign-on.
These Java security APIs are made available as part of J2SE 1.4 and later. They were also made available as optional security API packages for use with earlier versions of J2SE. We will take a closer look at each of these API mechanisms in the next sections.
- Certificate path builder and validator for X.509 certificates
- Certificate factory for X.509 certificates and revocation lists
- Keystore implementation named JKS, which allows managing a repository of keys and certificates
- CertificateFactory ( java.security.CertificateFactory): Defines the functionality for generating certificate, certification path (CertPath), and certificate revocation list (CRL) objects.
- KeyStore ( java.security.KeyStore): Defines the functionality for creating and managing a keystore. A keystore represents an in-memory collection of keys and certificates, stored as key and certificate entries.
- AlgorithmParameters ( java.security.AlgorithmParameters): Used to manage the parameters of a particular algorithm, including its encoding and decoding.
- AlgorithmParameterGenerator ( java.security.AlgorithmParameterGenerator): Defines the functionality for generating a set of parameters suitable for a specified algorithm.
- SecureRandom ( java.security.SecureRandom): Defines the functionality for generating cryptographically strong random or pseudo-random numbers.
Now, let's take a look at the programming model for some of these classes.
Message Digests
A message digest is a one-way secure hash function. Its computed values are referred to as message digests or hash values and act as fingerprints of messages. Message digest values are computationally infeasible to reverse, which protects the original message from being derived. As a cryptographic technique, message digests are applied to protect the integrity of messages, files, and objects. In conjunction with digital signatures, message digests are used to support integrity, authentication, and non-repudiation of messages during transmission or storage. Message digest functions are publicly available and use no keys. In J2SE, the JCA provider supports two message digest algorithms: Message Digest 5 (MD5) and Secure Hash Algorithm (SHA-1). MD5 produces a 128-bit (16-byte) hash, and SHA-1 produces a 160-bit message digest value.
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
//...
try {
    // Create a SHA-1 digest instance from the JCA provider
    MessageDigest sha = MessageDigest.getInstance("SHA-1");
    byte[] testdata = { 1, 2, 3, 4, 5 };
    // Feed in the data and compute the 160-bit (20-byte) digest
    sha.update(testdata);
    byte[] myhash = sha.digest();
} catch (NoSuchAlgorithmException e) {
    // Every J2SE implementation is required to support SHA-1
    e.printStackTrace();
}
To generate a public/private key pair, use the KeyPairGenerator.getInstance (algorithm) to create an instance of the KeyPairGenerator object implementing the given algorithm:
KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
Use the KeyPairGenerator.initialize(bitSize) method to initialize the key generator, specifying the key size in bits (for DSA, the size should be between 512 and 1024 bits and a multiple of 64), and to securely randomize the key generation in an unpredictable fashion:
kpg.initialize(1024);
To generate the key pair, use KeyPairGenerator.genKeyPair() to create the KeyPair object. To obtain the private and public keys, use the KeyPair.getPrivate() and KeyPair.getPublic() methods, respectively.
Example 4-4 is a code fragment showing generation of a key pair for public/private key algorithms such as "DSA" and "DH".
try {
    // 576-bit Diffie-Hellman key pair
    keyGen = KeyPairGenerator.getInstance("DH");
    keyGen.initialize(576);
    keypair = keyGen.genKeyPair();
    privateKey = keypair.getPrivate();
    publicKey = keypair.getPublic();
} catch (java.security.NoSuchAlgorithmException e) { }
To verify a digital signature object using a public key, use the Signature.getInstance(algorithm), which creates an instance of Signature implementing the algorithm, if such an algorithm is available with the provider. To verify the message, the Signature.initVerify(key) method is used with a public key as an input, Signature.update(message) takes the signed message to be verified using the specified byte array, and finally, Signature.verify(signature) takes the signature as a byte array for verification and results in a boolean value indicating success or failure. The signature verification will be successful only if the signature corresponds to the message and its public key. Example 4-6 shows how to verify a digital signature on a message using a public key.
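A minimal sketch along the lines of Example 4-6 (assuming the signer's publicKey, the signed message bytes, and the signature bytes sigBytes are already available; exception handling omitted):

// Create a Signature instance for the DSA algorithm with SHA-1
Signature sig = Signature.getInstance("SHA1withDSA");
// Initialize for verification with the signer's public key
sig.initVerify(publicKey);
// Supply the message that was signed
sig.update(message);
// Returns true only if the signature matches the message and key
boolean valid = sig.verify(sigBytes);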
So far we have looked at JCA and the use of standard-based cryptographic services. Now, let's explore Java Cryptographic Extensions (JCE), an enhancement to JCA that has additional features and capabilities.
Let's take a closer look at the JCE provider architecture, core API classes, and its programming model.
KeyGenerator ( javax.crypto.KeyGenerator): Provides the functionality of a symmetric key (secret key) generator. SecretKeyFactory ( javax.crypto.SecretKeyFactory): Acts as a factory class for SecretKey; operates only on symmetric keys. SealedObject ( javax.crypto.SealedObject): Allows creating a serialized object and protects its confidentiality using a
cryptographic algorithm.
KeyAgreement ( javax.crypto.KeyAgreement): Provides the functionality of using KeyAgreement protocols; allows the
creation of a KeyAgreement object for each party involved in the key agreement. Additionally, the javax.crypto.interfaces package provides interfaces for Diffie-Hellman keys, and the javax.crypto.spec provides the key and parameter specifications for the different algorithms such as DES, BlowFish, and Diffie-Hellman.
Generate a DES Key: To create a DES key, instantiate a KeyGenerator using the getInstance("DES"), and then, to generate the key, use the generateKey() method:
KeyGenerator kg = KeyGenerator.getInstance("DES"); SecretKey sKey = kg.generateKey();
Create the Cipher: Use the getInstance() factory method of the Cipher class and specify the name of the requested transformation (algorithm/mode/padding) as input. In the example shown here, we use the DES algorithm, ECB (electronic codebook) mode, and PKCS#5 padding ("PKCS5Padding"):
Cipher myCipher = Cipher.getInstance("DES/ECB/PKCS5Padding");
Initialize the Cipher for encryption: Use the init() method of the Cipher class to initialize the cipher object for encryption with ENCRYPT_MODE and the secret key. For this example, we use a String object as test data. Use doFinal() to finish the encrypt operation:
myCipher.init(Cipher.ENCRYPT_MODE, sKey); // Test data String testdata = "Understanding Encryption & Decryption"; byte[] testBytes = testdata.getBytes(); // Encrypt the testBytes byte[] myCipherText = myCipher.doFinal(testBytes);
Initialize the Cipher for decryption: Use the init() method of the Cipher class to initialize the cipher object for decryption with DECRYPT_MODE and the secret key. Use doFinal() to finish the decrypt operation:

myCipher.init(Cipher.DECRYPT_MODE, sKey);
// Decrypt the encrypted byte array
byte[] decryptedText = myCipher.doFinal(myCipherText);
Example 4-7 is a full code example (EncryptDecryptWithBlowFish.java) showing encryption and decryption using the Blowfish algorithm.
import java.security.Key;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;

public class EncryptDecryptWithBlowFish {
    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.err.println("Usage: java EncryptDecryptWithBlowFish <Enter text>");
            System.exit(1);
        }
        String testData = args[0];
        System.out.println("Generating a Blowfish key...");
        // Create a key using "Blowfish"
        KeyGenerator myKeyGenerator = KeyGenerator.getInstance("Blowfish");
        myKeyGenerator.init(128); // specifying keysize as 128
        Key myKey = myKeyGenerator.generateKey();
        System.out.println("Key generation Done.");
        // Create a cipher using the key
        Cipher myCipher = Cipher.getInstance("Blowfish/ECB/PKCS5Padding");
        myCipher.init(Cipher.ENCRYPT_MODE, myKey);
        byte[] testBytes = testData.getBytes();
        // Perform encryption
        byte[] encryptedText = myCipher.doFinal(testBytes);
        // Printing out the encrypted data
        System.out.println("Encryption Done:" + new String(encryptedText));
        // Initialize the cipher for DECRYPTION mode
        myCipher.init(Cipher.DECRYPT_MODE, myKey);
        // Perform decryption
        byte[] decryptedText = myCipher.doFinal(encryptedText);
        // Printing out the decrypted data
        System.out.println("Decryption Done:" + new String(decryptedText));
    }
}
// Encryption using DES, ECB Mode and PKCS5Padding Scheme
try {
    ...
    // 1. Create the cipher using DES algorithm,
    //    ECB mode and PKCS5Padding scheme
    Cipher myCipher = Cipher.getInstance("DES/ECB/PKCS5Padding");
    // 2. Initialize the Cipher
    myCipher.init(Cipher.ENCRYPT_MODE, key);
    // 3. Represent the Plaintext
    byte[] plaintext = "Eye for an Eye makes the Whole world blind".getBytes();
    // 4. Encrypt the Plaintext
    byte[] myciphertext = myCipher.doFinal(plaintext);
    // 5. Return the cipher text as String
    return getString( myciphertext );
} catch( Exception e ) {
    e.printStackTrace();
}
...
// Decryption using DES, ECB Mode and PKCS5Padding Scheme
try {
    ...
    // 1. Create the cipher using DES algorithm,
    //    ECB mode and PKCS5Padding scheme
    Cipher myCipher = Cipher.getInstance("DES/ECB/PKCS5Padding");
    // 2. Get the ciphertext
    byte[] ciphertext = getBytes( myciphertext );
    // 3. Initialize the cipher for decryption
    myCipher.init(Cipher.DECRYPT_MODE, key);
    // 4. Decrypt the ciphertext
    byte[] plaintext = myCipher.doFinal(ciphertext);
    // 5. Return the plaintext as string
    return new String( plaintext );
} catch( Exception ex ) {
    ex.printStackTrace();
}
FileOutputStream outputFile = new FileOutputStream(cipherTextFile);
CipherOutputStream cipherOutputStream =
    new CipherOutputStream(outputFile, myCipher);
int i = 0;
// Read the input file byte by byte and write the encrypted bytes
while ((i = inputFile.read()) != -1) {
    cipherOutputStream.write(i);
}
cipherOutputStream.close();
outputFile.close();
inputFile.close();
Sealed Object
JCE introduced the notion of creating sealed objects. A sealed object is a serializable object that has been encrypted using a cipher. Sealed objects provide confidentiality and help prevent unauthorized viewing of an object's contents by restricting de-serialization. From a programming standpoint, the sealed object makes a copy of the given object by serializing it to an embedded byte array, and then encrypts that array using a cipher. To retrieve the original object, the object can be unsealed using the cipher that was used for sealing it. Example 4-10 shows how to create a sealed object.
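A minimal sketch along those lines (reusing a DES secret key sKey such as the one generated earlier; exception handling omitted):

import javax.crypto.Cipher;
import javax.crypto.SealedObject;
//...
// Seal (encrypt) a serializable object using the cipher
Cipher sealingCipher = Cipher.getInstance("DES/ECB/PKCS5Padding");
sealingCipher.init(Cipher.ENCRYPT_MODE, sKey);
SealedObject sealed = new SealedObject("Confidential data", sealingCipher);
// Unseal (decrypt) with the same key; the key selects the matching cipher
String original = (String) sealed.getObject(sKey);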
import java.security.*;
import javax.crypto.*;
import javax.crypto.spec.*;
import java.util.*;
// This is for BASE64 encoding and decoding
import sun.misc.*;
//...
try {
    // Compute an HMAC-MD5 message authentication code (MAC) digest
    // Generate a key using HMAC-MD5
    KeyGenerator keyGen = KeyGenerator.getInstance("HmacMD5");
    SecretKey mySecretkey = keyGen.generateKey();
    // Create a MAC object and initialize it with the secret key
    Mac mac = Mac.getInstance(mySecretkey.getAlgorithm());
    mac.init(mySecretkey);
    String testString = "This is a test message for MAC digest";
    // Encode the string into bytes and digest it
    byte[] testBytes = testString.getBytes();
    byte[] macDigest = mac.doFinal(testBytes);
    // Convert the digest into a string
    String digestB64 = new sun.misc.BASE64Encoder().encode(macDigest);
    System.out.println("Printing MAC Digest as String:" + digestB64);
} catch (InvalidKeyException e) {
    e.printStackTrace();
} catch (NoSuchAlgorithmException e) {
    e.printStackTrace();
}
If two or more parties are involved in a key agreement, all corresponding parties must create and initialize a KeyAgreement object. After creating and initializing the KeyAgreement object, the corresponding parties execute the different phases specific to the KeyAgreement protocol. The DHParameterSpec constructs a parameter set for Diffie-Hellman, using a prime modulus p, a base generator g, and the size in bits l, of the random exponent (private value). Example 4-14 is a code fragment showing the steps involved in using the Diffie-Hellman KeyAgreement protocol.
try {
    //1. Create a DH key pair generator and initialize it
    //   with the parameter set (p, g, l)
    KeyPairGenerator keyGen = KeyPairGenerator.getInstance("DH");
    DHParameterSpec dhSpec = new DHParameterSpec(p, g, l);
    keyGen.initialize(dhSpec);
    KeyPair keypair = keyGen.generateKeyPair();
    //2. Get the generated public and private keys
    PrivateKey privateKey = keypair.getPrivate();
    PublicKey publicKey = keypair.getPublic();
    //3. Send the public key bytes to the other party...
    byte[] publicKeyBytes = publicKey.getEncoded();
    //4. Retrieve the public key bytes of the other party
    publicKeyBytes = ...;
    //5. Convert the public key bytes into a X.509 PublicKey object
    X509EncodedKeySpec x509KeySpec = new X509EncodedKeySpec(publicKeyBytes);
    KeyFactory keyFact = KeyFactory.getInstance("DH");
    publicKey = keyFact.generatePublic(x509KeySpec);
    //6. Prepare to generate the secret key with the private key
    //   and the public key of the other party
    KeyAgreement ka = KeyAgreement.getInstance("DH");
    ka.init(privateKey);
    ka.doPhase(publicKey, true);
    //7. Specify the type of key to generate
    String algorithm = "DES";
    //8. Generate the secret key
    SecretKey secretKey = ka.generateSecret(algorithm);
    //9. Use the secret key to encrypt/decrypt data
    //...
} catch (java.security.InvalidKeyException e) {
} catch (java.security.spec.InvalidKeySpecException e) {
} catch (java.security.InvalidAlgorithmParameterException e) {
} catch (java.security.NoSuchAlgorithmException e) {
}
Installing PKCS#11
To install the PKCS#11 provider statically, edit the Java security properties file located at <JAVA_HOME>/jre/lib/security/java.security. For example, to install the Sun PKCS#11 provider with the configuration file /opt/hwcryptocfg/pkcs11.cfg, add the following in the Java security properties file:
security.provider.7=sun.security.pkcs11.SunPKCS11 \ /opt/hwcryptocfg/pkcs11.cfg
To install the provider dynamically (see Example 4-15), create an instance of the provider with the appropriate configuration filename and then install it.
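A minimal sketch along the lines of Example 4-15 (using the same configuration path as the static example above):

// Instantiate the SunPKCS11 provider with its configuration file
java.security.Provider p =
    new sun.security.pkcs11.SunPKCS11("/opt/hwcryptocfg/pkcs11.cfg");
// Install the provider dynamically at runtime
java.security.Security.addProvider(p);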
1. Add the OpenSC PKCS#11 provider entry to the Java security properties file:
security.provider.8=sun.security.pkcs11.SunPKCS11 \ /opt/openSC/openscpkcs11-solaris.cfg
2. Create the OpenSC PKCS#11 configuration file. For example, openscpkcs11-solaris.cfg looks as follows (the library path is illustrative):
name = OpenSC-PKCS11
library = /usr/lib/pkcs11/opensc-pkcs11.so
With the above settings, it is possible to use the smart card as a keystore and retrieve information about its certificates. For example, using keytool to list the certificate entries looks as follows (see Example 4-16).
Example 4-16. Using keytool to list certificate entries from a smart card
$ keytool -keystore NONE -storetype PKCS11 \ -providerName SunPKCS11-OpenSC -list -v Enter keystore password: <PIN> Keystore type: PKCS11 Keystore provider: SunPKCS11-OpenSC Your keystore contains 4 entries Alias name: Signature Entry type: keyEntry Certificate chain length: 1
Certificate[1]: Owner: SERIALNUMBER=79797900036, GIVENNAME=Nagappan Expir?e1779, SURNAME=R, CN=Nagappan (Signature), C=US Issuer: CN=Nagappan OpenSSL CA, C=BE Serial number: 1000000000102fdf39941 Valid from: Fri Apr 01 15:29:22 EST 2005 until: Wed Jun 01 15:29:22 EST 2005 Certificate fingerprints: MD5: 12:20:AC:2F:F2:F5:5E:91:0A:53:7A:4B:8A:F7:39:4F SHA1: 77:76:48:DA:EC:5E:9C:26:A2:63:A9:EC:A0:14:42:BF:90:53:0F:BC Alias name: Root Entry type: trustedCertEntry Owner: CN=Nagappan OpenSSL Root CA, C=US Issuer: CN=Nagappan OpenSSL Root CA, C=US Serial number: 11111111111111111111111111111112 Valid from: Wed Aug 13 11:00:00 EST 2003 until: Mon Jan 27 00:00:00 EST 2014 Certificate fingerprints: MD5: 5A:0F:FD:DB:4F:FC:37:D4:CD:95:17:D5:04:01:6E:73 SHA1: 6A:5F:FD:25:7E:85:DC:60:81:82:8D:D1:69:AA:30:4E:7E:37:DD:3B Alias name: Authentication Entry type: keyEntry Certificate chain length: 1 Certificate[1]: Owner: SERIALNUMBER=79797900036, GIVENNAME=Nagappan Expir?e1779, SURNAME=R, CN=NAGAPPAN, C=US Issuer: CN=Nagappan OpenSSL CA, C=US Serial number: 1000000000102fd10d2d9 Valid from: Fri Apr 01 11:21:40 EST 2005 until: Wed Jun 01 11:21:40 EST 2005 Certificate fingerprints: MD5: 29:7E:8A:5C:91:34:9B:05:52:21:4E:49:5B:45:F8:C4 SHA1: 15:B7:EA:27:E1:0E:9D:94:4E:7B:3B:79:00:48:A2:31:7E:9D:72:1A
The smart-card token PIN can be specified using the -storepass option. If it is not specified, then the keytool and jarsigner tool will prompt the user for the PIN. If the token has a protected authentication path (such as a dedicated PIN-pad or a biometric reader), then the -protected option must be specified, and no password options can be specified. For more information about installing PKCS#11 providers, refer to the JCE PKCS#11 documentation available at http://java.sun.com/j2se/1.5.0/docs/guide/security/p11guide.html.
U.S. export laws restrict the export and use of JCE with unlimited strength cryptography. Those living in eligible countries may download the unlimited strength version and replace the strong cryptography jar files with the unlimited strength files. Using JCE with unlimited strength cryptography is also subject to the import control restrictions of certain countries. For more information on U.S. export laws related to cryptography, refer to the Bureau of Industry and Security's Web site (U.S. Department of Commerce) at http://www.bxa.doc.gov and the Java Cryptography Extension (JCE) Web site at http://java.sun.com/products/jce/. To summarize, the JCE implementation and API framework feature an enhancement to JCA that provides a full range of cryptographic services, including support for encryption and decryption and key agreement protocols. It also maintains interoperability with other cryptographic provider implementations. In the next section, we will explore the Java CertPath API, which defines an API framework for creating, building, and validating digital certification paths.
The following is an example code fragment (see Example 4-17) that demonstrates the steps involved in creating a CertPath object for a specified list of certificates:
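A sketch of the creation step (assuming a java.util.List of certificates named certList; exception handling omitted):

// Create an X.509 certificate factory and build the certification path
CertificateFactory cf = CertificateFactory.getInstance("X.509");
CertPath certPath = cf.generateCertPath(certList);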
try {
    // ... (earlier steps create the certPath, the PKIX parameters
    //      'params', and the certPathValidator instance)
    // 5. Validate the certification path
    CertPathValidatorResult result = certPathValidator.validate(certPath, params);
    // 6. Get the CA used to validate this path
    PKIXCertPathValidatorResult pkixResult = (PKIXCertPathValidatorResult) result;
    TrustAnchor ta = pkixResult.getTrustAnchor();
    X509Certificate cert = ta.getTrustedCert();
} catch (CertificateException ce) {
} catch (KeyStoreException ke) {
} catch (NoSuchAlgorithmException ne) {
} catch (InvalidAlgorithmParameterException ie) {
} catch (CertPathValidatorException cpe) {
    // Validation failed
}
In the next section, we will explore Java Secure Socket Extensions (JSSE), which defines an API framework to secure communications over the network using standardized protocols.
With JSSE, it is possible to develop client and server applications that use secure transport protocols, which include:
- Secure HTTP (HTTP over SSL)
- Secure Shell (Telnet over SSL)
- Secure SMTP (SMTP over SSL)
- IPSEC (Secure IP)
- Secure RMI or RMI/IIOP (RMI over SSL)
Like other security packages, JSSE also features a provider architecture and service-provider interface that enables different JSSE-compliant providers to be plugged into the J2SE environment.
- X.509-based key manager for managing keys in a JCA KeyStore.
- X.509-based trust manager, implementing support for verifying and validating certificate chains.
- Support for Kerberos cipher suites, if the underlying OS provides it (J2SE 5.0 and later).
- Support for hardware acceleration and smart-card tokens using the JCE PKCS#11 provider (J2SE 5.0 and later).
Let's take a closer look at the JSSE API mechanisms, core classes, interfaces, and the programming model.
HostnameVerifier ( javax.net.ssl.HostnameVerifier): An interface used for hostname verification, verifying the authenticity of requests from the originating host. During an SSL handshake, if the URL's hostname and the server's identification hostname mismatch, the verification mechanism uses this interface to verify the authenticity of the connection and its originating host.
// Specify the client truststore and its password
static {
    Security.addProvider(new com.sun.net.ssl.internal.ssl.Provider());
    System.setProperty("javax.net.ssl.trustStore", "cacerts");
    System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
}
try {
    // Create an instance of a SocketFactory
    SSLSocketFactory sslSocketFactory =
        (SSLSocketFactory) SSLSocketFactory.getDefault();
    // Create a socket to the server (servername and sslport defined elsewhere)
    SSLSocket sslSocket =
        (SSLSocket) sslSocketFactory.createSocket(servername, sslport);
    System.out.println("SSL Connection Established with server");
    // Create the streams to send and receive data using the socket
    OutputStream out = sslSocket.getOutputStream();
    InputStream in = sslSocket.getInputStream();
    // Send messages to the server using the OutputStream
    // Receive messages from the server using the InputStream
    //...
} catch (Exception e) {
    e.printStackTrace();
}
}
}
Mutual Authentication
The mutual authentication in a secure communication adds the value of a client being able to authenticate a server. This provides a means whereby a client can verify a server's authenticity and trust the data exchanged from the server. In a mutual authentication process, both client and server exchange their certificates and thereby create a trusted
communication channel between them. When an SSL client socket connects to an SSL server, it receives a certificate of authentication from the server. The client socket validates the certificate against a set of certificates in its truststore. Then the client sends its certificate of authentication to the server, which the server validates against a set of certificates in its truststore. Upon successful validation, a secure communication is established. To validate the server's certificate on the client side and the client's certificate on the server side, the server's certificate must be imported beforehand into the client's truststore, and the client's certificate must be imported into the server's truststore. In JSSE, a server can require client authentication, so that the handshake fails if the client does not present a valid certificate, by setting SSLServerSocket.setNeedClientAuth(true); to merely request, but not require, the peer client's certificate, set SSLServerSocket.setWantClientAuth(true). The following code fragment (see Example 4-21) shows how to force SSL server socket connections that request client certificate authentication (mutual authentication).
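A minimal sketch along the lines of Example 4-21 (the port number is illustrative; exception handling omitted):

// Create an SSL server socket that requires client certificates
SSLServerSocketFactory ssf =
    (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
SSLServerSocket serverSocket =
    (SSLServerSocket) ssf.createServerSocket(8443);
// Force mutual authentication: the handshake fails without a valid client cert
serverSocket.setNeedClientAuth(true);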
Additionally, we need to specify KeyStore and TrustStore properties as command-line options or system properties (see Example 4-22) in both client and server environments. For example (in the client environment):
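The options would look something like this (file names, passwords, and the client class name are illustrative):

java -Djavax.net.ssl.keyStore=clientkeystore \
     -Djavax.net.ssl.keyStorePassword=clientpass \
     -Djavax.net.ssl.trustStore=clienttruststore \
     -Djavax.net.ssl.trustStorePassword=trustpass MySSLClient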
Proxy Tunneling
Proxy tunneling provides an additional level of communication security when two parties decide on communicating across the Internet. When the communication layer and the data exchanged are not encrypted, it becomes easy for an attacker to identify the communication endpoints, sender/receiver information, and the conversation from the packets. Proxy tunneling provides a mechanism that allows access to a resource behind a firewall via a proxy server. The proxy server hides the addresses of the communicating hosts on its subnet from outside attackers and protects the communication from those attacks. JSSE provides proxy tunneling support for accessing applications behind a firewall; this allows access using HTTP only via a proxy server. To enable proxy tunneling, JSSE requires the application to specify https.proxyHost and https.proxyPort as system properties. To allow selected hosts to connect without using a proxy, add http.nonProxyHosts as a system property. Let's take a look at the following code example (HTTPSClientUsingProxyTunnel.java), which walks through the steps involved in tunneling through a proxy server using HttpsURLConnection (HTTP over SSL connection) with an SSL-enabled HTTP server:
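A minimal sketch under the same assumptions (the proxy host, port, and URL are illustrative; exception handling omitted):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;
//...
// Set the proxy system properties before opening the connection
System.setProperty("https.proxyHost", "myproxyhost");
System.setProperty("https.proxyPort", "8080");
URL url = new URL("https://www.example.com/index.html");
HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
// Read the response over the tunneled SSL connection
BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
String line;
while ((line = in.readLine()) != null) {
    System.out.println(line);
}
in.close();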
Depending on the state of the SSLEngine, a wrap() call may consume application data from the source buffer and may produce network data in the destination buffer. The outbound data may contain application and/or handshake data. A call to unwrap() will examine the source buffer and may advance the handshake if the data is handshaking information, or may place application data in the destination buffer if the data is application information. The state of the underlying SSL/TLS algorithm determines when data is consumed and produced. Example 4-26 shows the steps in typical usage.
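A minimal sketch of the wrap() side (the host name and data are illustrative; transport I/O, the full handshake loop, and exception handling are omitted):

// Create an engine from a TLS context
SSLContext ctx = SSLContext.getInstance("TLS");
ctx.init(null, null, null); // default key and trust managers
SSLEngine engine = ctx.createSSLEngine("hostname", 443);
engine.setUseClientMode(true);
// Size the buffers from the session, as recommended below
SSLSession session = engine.getSession();
ByteBuffer appData = ByteBuffer.wrap("request".getBytes());
ByteBuffer netData = ByteBuffer.allocate(session.getPacketBufferSize());
engine.beginHandshake();
// Encrypt outbound application data into network data
SSLEngineResult result = engine.wrap(appData, netData);
// Run any delegated tasks before the handshake can proceed
Runnable task;
while ((task = engine.getDelegatedTask()) != null) {
    task.run();
}
// ... send netData over the transport; unwrap() inbound data similarly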
The SSLEngine produces and consumes complete SSL/TLS packets only, and it does not store application data internally between calls to wrap() or unwrap(). Thus, the input and output ByteBuffers must be sized appropriately to hold the maximum record that can be produced. Calls to SSLSession.getPacketBufferSize() and SSLSession.getApplicationBufferSize() should be used to determine the appropriate buffer sizes. The size of the outbound application data buffer generally does not matter. If buffer conditions do not allow for the proper consumption or production of data, the application must determine the problem using SSLEngineResult, correct it, and then retry the call. Unlike SSLSocket, all methods of SSLEngine are non-blocking. Some operations may take an extended period of time or would otherwise block; for any such potentially blocking operation, the SSLEngine creates a Runnable delegated task that the application can run using a thread of its choosing, depending on its design strategy. To shut down an SSL/TLS connection, the SSL/TLS protocols require the transmission of close messages. Therefore, when an application is done with the SSL/TLS connection, it should first obtain the close messages from the SSLEngine, transmit them to the communicating peer using its transport mechanism, and then finally shut down the transport mechanism. So far we have looked at JSSE and how to use its secure communication services. Now, let's explore the Java Authentication and Authorization Service (JAAS), which provides API mechanisms for authentication and authorization services.
In an end-to-end application security model, JAAS provides authentication and authorization mechanisms to Java applications while keeping them independent of JAAS provider implementations. The JAAS API framework features can be categorized into two concepts:
- Authentication: JAAS provides reliable and secure API mechanisms to verify and determine the identity of who is executing the code.
- Authorization: Based on an authenticated identity, JAAS applies the access control rights and privileges to execute the required functions. JAAS extends the Java platform access control, based on code signers and codebases, with fine-grained access control mechanisms based on identities.
Like other security packages, JAAS also features a provider architecture and service-provider interface that allows different JAAS-based authentication and authorization provider modules to be plugged into a J2SE environment.
javax.security.auth.spi.*: Contains interfaces for a JAAS provider for implementing JAAS modules. The classes and interfaces are further classified into three categories: Common, Authentication, and Authorization. Let's take a look at some of the important classes and interfaces from these categories.
Common Classes
Subject ( javax.security.auth.Subject): Represents a group of related entities, such as people, organizations, or services with a set of security credentials. Once authenticated, a Subject is populated with associated identities, or Principals. The authorization actions will be made based on the Subject. Principal ( java.security.Principal): An interface that represents an authenticated entity, such as an individual, organization, service, and so forth.
Authentication Classes
LoginContext ( javax.security.auth.login.LoginContext): Provides the basic methods to authenticate Subjects. Once the caller has instantiated a LoginContext, the LoginContext invokes the login method to authenticate a Subject. It is also responsible for loading the Configuration and instantiating the appropriate LoginModules. LoginModule ( javax.security.auth.spi.LoginModule): This interface is primarily meant for JAAS providers. It allows JAAS providers to implement and plug in authentication mechanisms as login modules. LoginModules are plugged into an application environment to provide a particular type of authentication. In an authentication process, each LoginModule is initialized with a Subject, a CallbackHandler, shared LoginModule state, and LoginModule-specific options. The LoginModule uses the CallbackHandler to communicate with the user. J2SE 1.4 provides a number of LoginModules bundled under the com.sun.security.auth.module package. Configuration ( javax.security.auth.login.Configuration): Represents the configuration of LoginModule(s) for use with a particular login application. CallbackHandler ( javax.security.auth.callback.CallbackHandler): Defines an interface that allows interaction with a user identity to retrieve authentication-related data such as username/password, biometric samples, smart card-based credentials, and so forth. Applications implement the CallbackHandler and pass it to the LoginContext, which forwards it directly to the underlying LoginModule.
Authorization Classes
Policy ( java.security.Policy): Represents the system-wide access control policy for authorization based on an authenticated subject. AuthPermission ( javax.security.auth.AuthPermission): Encapsulates the basic permissions required for a JAAS authorization and guards the access to the Policy, Subject, LoginContext, and Configuration objects. PrivateCredentialsPermission ( javax.security.auth.PrivateCredentialsPermission): Encapsulates the permissions for accessing the private credentials of a Subject.
JAAS Authentication
In a JAAS authentication process, the client applications initiate authentication by instantiating a LoginContext object. The LoginContext then communicates with the LoginModule, which performs the actual authentication process. As the LoginContext uses the generic interface provided by a LoginModule, changing authentication providers during runtime becomes simpler without any changes in the LoginContext. A typical LoginModule will prompt for and verify a username and password or interface with authentication providers such as RSA SecureID, smart cards, and biometrics. LoginModules use a
CallbackHandler to communicate with the client, performing the user interaction needed to obtain authentication information and to notify the client of login-process and authentication events.
A LoginModule implements the following life-cycle methods:
1. initialize(): The initialize() method initializes the authentication scheme and its state information (see Example 4-28).
2. login(): The login() method performs the actual authentication, prompting for and verifying the user's credentials; if the authentication fails, it throws a LoginException such as FailedLoginException.
3. commit(): If the user is successfully authenticated (see Example 4-30), the commit() method adds the Principal from the corresponding authentication state information and populates the Subject with the Principal (the example finishes by clearing the cached password: myPassword = null; return loginVerification;). If the authentication fails, the commit() method returns false and destroys the authentication state information.
4. abort(): If the authentication fails, the abort() method exits the LoginModule and cleans up all the corresponding user state information (see Example 4-31).
login.configuration.provider
login.config.url.n
The login.configuration.provider identifies the JAAS LoginModule provider, and the login.config.url.n identifies the JAAS LoginModule configuration file. For example (see Example 4-33), the JAAS provider can be represented in the java.security properties.
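A sketch of such entries (the configuration file URL is illustrative; ConfigFile is Sun's default provider implementation):

login.configuration.provider=com.sun.security.auth.login.ConfigFile
login.config.url.1=file:/export/config/jaas.config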
To enable the JAAS LoginModule at runtime, the JAAS configuration file can also be specified as Java command-line options (see Example 4-34).
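The option would look like this (the file and class names are illustrative):

java -Djava.security.auth.login.config=jaas.config MyApplication

The BiometricLogin configuration entry described next would, reconstructed from its description, look something like this (the flags follow standard JAAS configuration syntax):

BiometricLogin {
    com.csp.jaasmodule.BioLoginModule required;
    com.csp.jaasmodule.JavaCardLoginModule required matchOnCache="true";
};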
The preceding example specifies that an application named BiometricLogin requires users to first authenticate using the com.csp.jaasmodule.BioLoginModule, which is required to succeed. Even if the BioLoginModule authentication fails, the com.csp.jaasmodule.JavaCardLoginModule still gets invoked. This helps hide the source of failure during authentication. Also note that the LoginModule-specific option, matchOnCache="true" is passed to the JavaCardLoginModule as a custom attribute required for processing.
The following JAAS configuration file MyTestLoginModule.conf specifies the authentication module MyTestLoginModule from the com.csp.jaasmodule package that implements the authentication mechanisms.
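Reconstructed from that description, the file would contain an entry such as:

MyTestLoginModule {
    com.csp.jaasmodule.MyTestLoginModule required;
};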
Logging In: After instantiating the LoginContext, the login process is performed by invoking the login() method (see Example 4-37).
The LoginContext's login method calls methods in the MyTestLoginModule to perform the login and authentication. Implement the CallbackHandler: The LoginModule invokes a CallbackHandler to perform the user interaction and to obtain the authentication credentials (such as username/password, smart-card tokens). In the following example (see Example 4-38), the MyTestLoginModule utilizes the handle() method of TextCallbackHandler to obtain the username and password. The MyTestLoginModule passes the CallbackHandler handle() method with an array of appropriate Callbacks such as NameCallback for the username and a PasswordCallback for the password, which allows the CallbackHandler to perform the client interaction for authentication data and then set appropriate values in the Callbacks.
The MyTestLoginModule then authenticates the user by verifying the user inputs. If authentication is successful, MyTestLoginModule populates the Subject with a Principal representing the user. The calling application can retrieve the authenticated Subject by calling the LoginContext's getSubject method. Logging Out: It is always good practice to log out the user as soon as the user-specific actions have been performed, or at least before exiting the application. To perform a logout, invoke the logout() method of the LoginContext, which in turn calls the logout() method of the login module being used (see Example 4-39).
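A minimal sketch of the overall client flow (the configuration entry name and handler are taken from the surrounding text; error handling simplified):

import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import com.sun.security.auth.callback.TextCallbackHandler;
//...
try {
    // Instantiate the LoginContext with the configuration entry name
    LoginContext lc = new LoginContext("MyTestLoginModule",
                                       new TextCallbackHandler());
    lc.login();                        // drives MyTestLoginModule via the callbacks
    Subject subject = lc.getSubject(); // the authenticated Subject
    // ... perform Subject-specific actions ...
    lc.logout();                       // log out before exiting
} catch (LoginException le) {
    le.printStackTrace();
}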
To run the client application, it is necessary to specify the login module configuration with the following:
-Djava.security.auth.login.config==MyTestLoginModule.conf
This option can be set as a command-line option or a system property. For authorization policies, it is necessary to specify a policy file that defines a security policy. For example:
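For instance (the policy file name is illustrative; the JAAS principal-based policy is specified via the java.security.auth.policy property):

-Djava.security.auth.policy==mytestaction.policy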
JAAS Authorization
JAAS authorization enhances the Java security model by adding user-, group-, and role-based access control mechanisms. It allows setting user- and operation-level privileges for enforcing access control on who is executing the code. When a Subject is created as a result of an authentication process, the Subject represents an authenticated entity. A Subject usually contains a set of Principals, where each Principal represents a caller of an application. Permissions are granted in the policy to selected Principals. Once the logged-in user is authenticated, the application associates the Subject with the Principal based on the user's access control context.
Configuring a Principal-based policy file: The JAAS Principal-based policy file extends grant statements to include one or more Principal fields. Adding Principal fields to the policy file defines the users or entities with designated permissions to execute the specific application code or other privileges associated with the application or resources. The basic format of a grant statement (see Example 4-40) is as follows.
Example 4-40. JAAS authorization policy filegrant statement
grant <signer(s) field>, <codeBase URL> <Principal field(s)> { permission class_name "target_name", "action1"; .... permission class_name "target_name", "action2"; };
The signer field usually defines the application codebase and is followed by a codebase location. The Principal field defines the Principal_class defined by the authentication module and the associated user name. If multiple Principal fields are provided, then the Permissions in that grant statement are granted only if the Subject associated with the current access control context contains all of those as authenticated Principals. To grant the same set of Permissions to multiple Principals, create multiple grant statements listing all the permissions and containing a single Principal field designating one of the Principals. Example 4-41 is a sample JAAS Principal-based policy file defining the access control policies.
Example 4-41. Sample JAAS authorization policy file
grant codebase "file:/export/rn/MyTestLoginModule.jar" {
    permission javax.security.auth.AuthPermission "modifyPrincipals";
};

grant codebase "file:/export/rn/MyTestAction.jar" {
    permission javax.security.auth.AuthPermission "MyTestAction.class";
    permission javax.security.auth.AuthPermission "doAsPrivileged";
};

/* User-Based Access Control Policy */
grant codebase "file:/export/rn/MyTestAction.jar",
    Principal sample.principal.SamplePrincipal "testUser" {
    permission java.util.PropertyPermission "java.home", "read";
    permission java.util.PropertyPermission "user.home", "read";
    permission java.io.FilePermission "cspaction.txt", "read";
};
Associating the Subject with access control: To associate a Subject with the access control context (see Example 4-42), we first need to authenticate the user with JAAS authentication. Example 4-42 shows how to obtain the Principals that are part of the authenticated Subject.
Example 4-42. Obtaining the user principal information from the subject
Iterator principalIterator = mySubject.getPrincipals().iterator();
System.out.println("Authenticated user has Principals:");
while (principalIterator.hasNext()) {
    Principal p = (Principal) principalIterator.next();
    System.out.println("\t" + p.toString());
}
Then we can call the doAs() method available in the Subject class, passing the authenticated Subject and either a java.security.PrivilegedAction or java.security.PrivilegedExceptionAction object. The doAs() method associates the Subject with the current access control context and invokes the run() method of the action, which contains all the code to be executed. The doAsPrivileged() method of the Subject class can be called instead of the doAs() method, with an AccessControlContext as an additional parameter. This causes the Subject to be associated with only the AccessControlContext provided.
Example 4-44 is sample code for the PrivilegedAction to be executed after associating the Subject with the current access control context. Upon successful authentication, the action defined in the MyTestAction class will be executed according to the user's access privileges granted in the policy file.
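As a minimal sketch, such a MyTestAction might exercise only the properties and file that the policy in Example 4-41 grants to testUser; the printed output is an assumption:

import java.io.FileReader;
import java.security.PrivilegedAction;

public class MyTestAction implements PrivilegedAction {
    public Object run() {
        // These reads succeed only if the policy grants the
        // authenticated Principal the corresponding permissions.
        System.out.println("java.home = " + System.getProperty("java.home"));
        System.out.println("user.home = " + System.getProperty("user.home"));
        try {
            new FileReader("cspaction.txt").close();
            System.out.println("Read access to cspaction.txt verified");
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }
}

// Invocation after authentication, associating the Subject:
//     Subject.doAs(lc.getSubject(), new MyTestAction());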
To run the client application using JAAS-based authentication and authorization, it is necessary to include the CLASSPATH containing the LoginModule and to specify the login module configuration file and JAAS principal-based policy file as command-line options or as system properties (see Example 4-45).
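As an illustration of Example 4-45, the command line might look like the following on a UNIX system; the policy file name MyPolicy.policy and client class name MyTestClient are assumptions:

java -classpath .:MyTestLoginModule.jar \
     -Djava.security.auth.login.config==MyTestLoginModule.conf \
     -Djava.security.policy==MyPolicy.policy \
     MyTestClient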
The Subject is populated with Principal and credential information by the login module. The CallbackHandler is used by the login module for capturing user credential information (such as username/password), the sharedStateMap is used for passing user security information between login modules, and the options are additional name/value pairs defined in the configuration file that are meaningful only to that particular login module. The LoginContext determines the authentication result by consulting the configuration file and combining the returned results of each LoginModule. For example (see Example 4-47), a JAAS configuration file that lists multiple LoginModules will look like the following.
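The following sketch suggests what such a configuration file might contain; the application name and the custom module's package are assumptions, while Krb5LoginModule is the standard Kerberos module shipped with the JDK:

MyTestApp {
    com.csp.security.MyTestLoginModule required debug=true;
    com.sun.security.auth.module.Krb5LoginModule optional
        useTicketCache=true;
};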
If the useSharedState attribute is specified, the LoginModule stores and retrieves the username and password from the shared state, using javax.security.auth.login.name and javax.security.auth.login.password as the respective keys. The retrieved values, such as username and password, can be used again by other listed LoginModules.
So far we have looked at JAAS and how to use its authentication and authorization services. Now, let's explore Java Generic Secure Services (JGSS), which enables uniform access to security services over a variety of underlying authentication mechanisms.
Java SASL
Java SASL was introduced in the J2SE 5.0 release. It defines a Java API with an authentication mechanism-neutral design, so an application that uses the API need not be hard-wired to any particular SASL mechanism. The API facilitates both client and server applications. It allows applications to select the mechanism to use based on desired security features, such as whether they are susceptible to passive dictionary attacks or whether they accept anonymous authentication. The Java SASL API also supports developers creating their own custom SASL mechanisms. SASL mechanisms are installed using the JCA. SASL provides a pluggable authentication solution and security layer for network applications. It works together with other API solutions such as JSSE and Java GSS. For example, an application can use JSSE for establishing a secure channel and then use SASL for client, username/password-based authentication. Similarly, SASL mechanisms can be layered on top of GSS-API mechanisms to support the SASL GSS-API/Kerberos v5 mechanism that is used with LDAP.
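Before authentication can begin, the client obtains a SaslClient instance. A minimal sketch follows; the mechanism list, server name, and callback handler are assumptions:

import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;

public class SaslClientSetup {
    static SaslClient create(CallbackHandler cbh) throws Exception {
        String[] mechanisms = { "DIGEST-MD5", "CRAM-MD5" };
        return Sasl.createSaslClient(
                mechanisms,          // mechanisms to try, in order
                null,                // authorization id
                "ldap",              // protocol
                "ldap.example.com",  // server name (assumption)
                null,                // properties, e.g., requested QOP
                cbh);                // supplies the username/password
    }
}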
Then the SASL client can proceed with LDAP authentication (see Example 4-49).
while (!sc.isComplete() &&
       (res.status == SASL_BIND_IN_PROGRESS || res.status == SUCCESS)) {
    response = sc.evaluateChallenge(res.getBytes());
    if (res.status == SUCCESS) {
        // We're done here;
        // don't expect to send another BIND
        if (response != null) {
            throw new SaslException("Protocol error");
        }
        break;
    }
    res = ldap.sendBindRequest(dn, sc.getName(), response);
}
if (sc.isComplete() && res.status == SUCCESS) {
    String qop = (String) sc.getNegotiatedProperty(Sasl.QOP);
    if (qop != null
            && (qop.equalsIgnoreCase("auth-int")
                || qop.equalsIgnoreCase("auth-conf"))) {
        // Use SaslClient.wrap() and SaslClient.unwrap()
        // for future communication with the server
        ldap.in = new SecureInputStream(sc, ldap.in);
        ldap.out = new SecureOutputStream(sc, ldap.out);
    }
}
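On the server side, a SaslServer must first be created before responses can be evaluated; a minimal sketch under the same assumptions:

import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslServer;

public class SaslServerSetup {
    static SaslServer create(String mechanism, CallbackHandler cbh)
            throws Exception {
        return Sasl.createSaslServer(
                mechanism,           // e.g., "DIGEST-MD5", from the BIND request
                "ldap",              // protocol
                "ldap.example.com",  // this server's name (assumption)
                null,                // properties
                cbh);                // used to verify the client's credentials
    }
}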
The SASL server can proceed with authentication (assuming the LDAP server received an LDAP BIND request containing the name of the SASL mechanism and an optional initial response). The server initiates authentication as follows (see Example 4-51).
Example 4-51. SASL server for authentication after LDAP BIND request
while (!ss.isComplete()) {
    try {
        byte[] challenge = ss.evaluateResponse(response);
        if (ss.isComplete()) {
            status = ldap.sendBindResponse(mechanism, challenge, SUCCESS);
        } else {
            status = ldap.sendBindResponse(mechanism, challenge,
                                           SASL_BIND_IN_PROGRESS);
            response = ldap.readBindRequest();
        }
    } catch (SaslException e) {
        status = ldap.sendErrorResponse(e);
        break;
    }
}
if (ss.isComplete() && status == SUCCESS) {
    // Note: the negotiated property is read from the SaslServer (ss)
    String qop = (String) ss.getNegotiatedProperty(Sasl.QOP);
    if (qop != null
            && (qop.equalsIgnoreCase("auth-int")
                || qop.equalsIgnoreCase("auth-conf"))) {
        // Use SaslServer.wrap() and SaslServer.unwrap()
        // for future communication with the client
        ldap.in = new SecureInputStream(ss, ldap.in);
        ldap.out = new SecureOutputStream(ss, ldap.out);
    }
}
The SunSASL provider is registered as a security provider in the java.security properties file, for example:

security.provider.7=com.sun.security.sasl.Provider
The Sun Java SASL provider (SunSASL) provides support for several SASL mechanisms used in popular protocols such as LDAP, IMAP, and SMTP. This includes support for the following client and server authentication mechanisms:
Client Mechanisms
PLAIN (RFC 2595): Supports cleartext username/password authentication.
CRAM-MD5 (RFC 2195): Supports a hashed username/password authentication scheme.
DIGEST-MD5 (RFC 2831): Defines how HTTP Digest Authentication can be used as a SASL mechanism.
GSSAPI (RFC 2222): Uses the GSS-API for obtaining authentication information; supports Kerberos v5 authentication.
EXTERNAL (RFC 2222): Obtains authentication information from an external channel (such as TLS or IPsec).
Server Mechanisms
CRAM-MD5
DIGEST-MD5
GSSAPI (Kerberos v5)

For more information about using Java SASL, refer to http://java.sun.com/j2se/1.5.0/docs/guide/security/sasl/saslrefguide.html.
Summary
This chapter offered a tour of the Java extensible security architecture and its core API technologies that contribute to building an end-to-end security infrastructure for Java-based application solutions. We studied the various Java security API technologies that provide support for the following:

Using cryptographic services in Java
Using certificate interfaces and classes for managing digital certificates
Using Public Key Infrastructure (PKI) interfaces and classes to manage the key repository and certificates
Using secure socket communication to protect the privacy and integrity of data transmitted over the network
Using hardware accelerators and smart card-based keystores
Using authentication and authorization mechanisms for enabling single sign-on access to underlying applications

We also looked at the security enhancements available from J2SE 5.0. In particular, we looked at the API mechanisms and programming techniques of the following Java extensible security technologies:

The Java Extensible Security Architecture
Java Cryptographic Architecture (JCA)
Java Cryptographic Extensions (JCE)
Java Certification Path API (Java CertPath)
Java Secure Socket Extension (JSSE)
Java Authentication and Authorization Service (JAAS)
Java Generic Secure Services (JGSS)
Java Simple Authentication and Security Layer (Java SASL)

It is important to know these technologies because they serve as the foundation for delivering end-to-end security to Java-based applications and Web services. In the next chapter, we will explore the security techniques and mechanisms available for securing J2EE-based applications and Web services.
References
"Java Security Architecture," in "Java 2 SDK, Standard Edition Documentation Version 1.4.2." Sun Microsystems, 2003. http://java.sun.com/j2se/1.4.2/docs/guide/security/spec/security-spec.doc1.html and http://java.sun.com/j2se/1.4.2/docs/guide/security/spec/security-spec.doc2.html. Java Security Guide for "Java 2 SDK, Standard Edition Documentation Version 1.4.2." Sun Microsystems, 2003. http://java.sun.com/j2se/1.4.2/docs/guide/security/ "Security Enhancements in the Java 2 Platform Standard Edition 5.0," Sun Microsystems, 2004. http://java.sun.com/j2se/1.5.0/docs/guide/security/enhancements15.html [RSASecurity] CryptoFAQ. http://www.rsasecurity.com/rsalabs/node.asp?id=2168
The J2EE platform is generally represented with the following logical tiers, shown in Figure 5-1:

Client Tier: The client tier represents the J2EE platform's user interface or its application clients that interact with the J2EE platform-based application or system. A typical client can be a Java application (J2SE/J2ME), Java applet, Web browser, Web service, Java-enabled device, or Java-based network application.

Web or Presentation Tier: The presentation tier represents the presentation-logic components required to access the J2EE application and its business services. It handles requests and responses, session management, device-independent content delivery, and invocation of business components. From a security standpoint, it delivers client login sessions and establishes single sign-on access control to underlying application components. J2EE components such as JSPs, Servlets, and JavaServer Faces (JSF) reside in the Web container, which delivers user interfaces to clients. In J2EE 1.4, Web services-based communication can also be delivered using Web components such as Servlets.

Business or Application Tier: The Business Tier represents the core business-logic processing required. It typically deals with business functions, transactions with back-end resources, or workflow automation. In some cases, it acts as a business wrapper when the underlying resources handle the actual business logic. EJB components such as Session Beans, Entity Beans, and Message-driven Beans reside in this tier. In J2EE 1.4, stateless EJB components can be exposed as Web services and can be invoked using SOAP-based Web services communication.

Integration or EIS Tier: The Integration Tier represents the connection and communication with back-end resources such as enterprise information systems (EISs), database applications, and legacy or mainframe applications. The business-tier components are tightly coupled with the Integration Tier in order to facilitate data query and retrieval from back-end resources. J2EE components such as JMS, J2EE Connectors, and JDBC components reside in this tier.

Resources Tier: The resources tier represents the back-end application resources that contain data and services. These resources can be database applications, EIS systems, mainframe applications, and other network-based services.

The components in these tiers are executed inside component-specific containers such as a Web container or an EJB container. Containers provide the environment in which the components can be executed in a controlled and managed way. They also provide an abstraction layer through which the components see the underlying services and the architecture. The J2EE platform provides a full-fledged security infrastructure and container-based security services that address the end-to-end security requirements of the different application tiers and their resources. To define security at all levels, the J2EE platform defines contracts that establish the security roles and responsibilities involved in the development, deployment, and management of a J2EE application. In accordance with the J2EE specifications, the security roles and responsibilities involve the following:

J2EE Platform Provider: The J2EE server vendor, who is responsible for implementing the J2EE security infrastructure and mechanisms.
J2EE Application Developer: The application developer, who is responsible for specifying the application roles and role-based access restrictions to the components.
J2EE Application Assembler: The component builder, who is responsible for assembling the components and defining the security view identifying the security dependencies in the components.
J2EE Application Deployer: The component deployer, who is responsible for assigning users and groups to the roles and who establishes the security deployment scenarios.

These roles and responsibilities ensure security at every stage of the development and deployment of all components and their residing application tiers. Before we delve into the J2EE security mechanisms, let's take a look at the J2EE security definitions, which are used to describe the security of a J2EE environment.
Declarative Security
In a declarative security model, the application security is expressed using rules and permissions in a declarative syntax specific to the J2EE application environment. The security rules and permissions are defined in a deployment descriptor document packaged along with the application component. The application deployer is responsible for assigning the required rules and permissions granted to the application in the deployment descriptor. Figure 5-2 shows the deployment descriptors meant for the different J2EE components.
Example 5-1 is an XML snippet from a Web application deployment descriptor (web.xml) that represents a security constraint defining an access control policy for a Web application resource (/products/apply-discount) and specifies the access privileges for a role (employee).
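A sketch of what such an Example 5-1 constraint would look like, using the resource and role named above; the web-resource-name and protected HTTP methods are assumptions:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>ApplyDiscount</web-resource-name>
        <url-pattern>/products/apply-discount</url-pattern>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>employee</role-name>
    </auth-constraint>
</security-constraint>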
Programmatic Security
In a programmatic security model, the J2EE container makes security decisions based on the invoked business methods to determine whether the caller has been granted the privilege to access a resource or should be denied. This determination is based on the parameters of the call, its internal state, or other factors such as the time of the call or its processed data. For example, an application component can perform fine-grained access control on the identity of its caller by using EJBContext.getCallerPrincipal (EJB component) or HttpServletRequest.getUserPrincipal (Web component) and by using EJBContext.isCallerInRole (EJB component) and HttpServletRequest.isUserInRole (Web component). This allows determining whether the identity of the caller has the privileged role to execute a method for accessing a protected resource. Programmatic security helps when declarative security is not sufficient to express the security requirements of the application component and where the component's access control decisions need to use complex and dynamic rules and policies.
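As a minimal sketch of this approach in a servlet; the class name, the "manager" role, and the response logic are assumptions:

import java.io.IOException;
import java.security.Principal;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DiscountServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        Principal caller = req.getUserPrincipal();
        if (caller == null || !req.isUserInRole("manager")) {
            res.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        // The caller is authenticated and holds the "manager" role;
        // apply any fine-grained, dynamic rule here.
        res.getWriter().println("Discount applied by " + caller.getName());
    }
}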
J2EE Authentication
When a client interacts with a J2EE application, depending upon the application component architecture, it accesses a set of underlying components and resources such as JSPs, Servlets, EJBs, Web services endpoints, and other back-end applications. Because processing a client request involves a chain of invocations across subsequent components and resources, the J2EE platform allows client authentication to be introduced at the initial request. After initial authentication, the client identity and its credentials can be propagated to the subsequent chain of calls. The J2EE platform allows establishing user authentication in all application tiers and components. The J2EE environment provides support for the following three types of authentication services:

Container-based authentication
Application-based authentication
Agent-based authentication
Container-Based Authentication
This is the standard authentication service provided by the J2EE server infrastructure. It allows the J2EE environment to authenticate users for access to its deployed applications. The J2EE specification mandates Web container support for four authentication types: HTTP Basic Authentication, Form-based Authentication, HTTP Digest Authentication, and HTTPS Client Certificate Authentication.
Form-Based Authentication
Similar to Basic Authentication, but the login dialog is customized as a form to pass the username and password to the Web container.
Application-Based Authentication
In application-based authentication, the application relies on a programmatic security approach to collect the user credentials and verify the identity against the security realm. In a Web-component-based application, the servlet adopts the authentication mechanisms from the J2EE container services and then uses declarative security mechanisms to map the user principal to a security role defined in the deployment descriptor.
Agent-Based Authentication
This allows J2EE applications to use third-party security providers for authentication. These security providers typically supply pluggable agents that provide a single sign-on solution to portals, J2EE-managed business applications, and so forth. The agent usually resides as a proxy that intercepts user requests to the J2EE server. Typically, to support agent-based authentication, the J2EE server infrastructure uses JAAS-based authentication modules to integrate the custom authentication technologies.
Protection Domains
In the J2EE platform, the container provides an authentication boundary between the external callers and the deployed components. It is the container's responsibility to enforce the security within its boundary and ensure that calls entering are authenticated and identified within the boundary for all interactions. Interactions within the container-managed boundary are managed as protection domains, which maintain the identity proof for the interacting components to trust each other. Figure 5-3 illustrates the notion of protection domains.
When a user makes an inbound request to the container to invoke a J2EE component, it is the container's responsibility to ensure that the authentication information is available to the component as a credential. Similarly, in the case of outbound calls, it is the container's responsibility to maintain the caller's identity in order to extend the protection domain to the called components. In a J2EE component deployment, this is done by declaring the resource references (e.g., the resource-ref element) in the deployment descriptor of the J2EE component that interacts with other components and external resources managed by the container.
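A sketch of such a declaration; the reference name and resource type are assumptions:

<resource-ref>
    <res-ref-name>jdbc/OrdersDB</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>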
J2EE Authorization
J2EE uses a role-based authorization model to restrict access to components and resources. A role is a logical grouping of users defined by the application assembler. The application deployer maps the users to roles in the target environment. In a J2EE environment, the container serves as the authentication boundary between the components and their caller clients. When a client request has been successfully authenticated, the container verifies the security attributes from the client's credentials and identifies the access control rules for the target resource. If the rules are satisfied, the container allows the caller to access the resource; otherwise, it denies the request. The J2EE platform provides the following two types of authorization mechanisms:
Declarative Authorization
In a J2EE environment, the application deployer specifies the enforced rules and permissions associated with an application. The rules and resources are listed in the application deployment descriptor along with a list of roles that are able to access the resource. These roles are mapped to specific users by the application deployer. In Web components such as JSPs and Servlets, access can be protected at the URL level, and it can be further restricted down to the GET or POST methods. In EJB components, permissions can be specified down to specific class methods. Because declarative authorization is based on a static policy, it has limitations where dynamic access rules, multi-role access, or content-level authorization must be enforced. Unless such requirements are demanded, declarative authorization is usually preferred and easier to deploy.
Programmatic Authorization
By default, the J2EE container decides on access control before forwarding requests to a component. In programmatic authorization, the access control rules and associated logic are implemented directly in the application. For example, using EJBContext.isCallerInRole() and EJBContext.getCallerPrincipal() in EJB components and using HttpServletRequest.isUserInRole() and HttpServletRequest.getUserPrincipal() in Web components provides finer-grained access control than declarative authorization provides. Programmatic authorization allows implementing security-aware J2EE applications that can enforce access control mechanisms such as dynamic access rules, multi-role access, and content-level authorization.
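A minimal sketch on the EJB side; the bean class, role name, and business method are assumptions:

import java.security.Principal;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

public class PayrollBean implements SessionBean {
    private SessionContext ctx;

    public void setSessionContext(SessionContext ctx) { this.ctx = ctx; }

    public void adjustSalary(String employeeId, double amount) {
        // A dynamic, content-level rule layered on top of
        // declarative security
        if (!ctx.isCallerInRole("payroll-admin")) {
            throw new SecurityException("Caller lacks payroll-admin role");
        }
        Principal caller = ctx.getCallerPrincipal();
        // ... perform the adjustment, auditing caller.getName() ...
    }

    public void ejbCreate() {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
}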
calls whose parameters or return values should be protected for integrity or confidentiality. The component's deployment descriptor is used to represent this information. To secure communication with Web components such as Servlets and JSP pages, the transport-guarantee sub-element of the user-data-constraint sub-element of a security-constraint is used. In cases where a component's interactions with an external resource are known to carry sensitive information, these sensitivities should be described in the description sub-element of the corresponding resource-ref. In EJB components, this is done in a description sub-element of the target EJB component. Example 5-3 illustrates a Web deployment descriptor snippet (web.xml) showing the <transport-guarantee> sub-element.
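A sketch of what such an Example 5-3 snippet would contain; the resource name and URL pattern are assumptions:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>SecureArea</web-resource-name>
        <url-pattern>/secure/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>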
In the previous sections, we briefly looked at the different security mechanisms and services made available by the J2EE environment. Now, we will look over the different component-level security mechanisms that encompass all the logical tiers and components in order to understand how they contribute to the end-to-end security of a J2EE-based application.
The realm name applies only when the authentication method is BASIC. It provides the name of the security realm in which the user is required to log in and authenticate.
Form-Based Authentication
This allows specifying a custom login interface using a JSP/Servlet/HTML page to authenticate a user who is trying to access a protected Web application. It also allows configuring a custom login error page to display on an invalid authentication request. To set up Form-based Authentication, it is necessary to configure the login-config and form-login-config elements in the Web component deployment descriptor. The form-login-config element includes the form-login-page and form-error-page sub-elements, which contain the element values referring to the URL pages to be displayed for login and error. Example 5-5 is a code snippet of the deployment descriptor illustrating the declaration of Form-based Authentication.
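A sketch of such an Example 5-5 declaration, using the login and error pages named in the following paragraphs:

<login-config>
    <auth-method>FORM</auth-method>
    <form-login-config>
        <form-login-page>/myLogin.jsp</form-login-page>
        <form-error-page>/myError.jsp</form-error-page>
    </form-login-config>
</login-config>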
In addition to configuring the Web deployment descriptor, the JSP and Servlet specifications mandate that the custom login interface (using JSP, Servlet, or HTML) be implemented with the special action attribute j_security_check as well as the name attributes j_username (username field) and j_password (password field) when obtaining the username and password inputs. Example 5-6 is a JSP example of myLogin.jsp, illustrating a Form-based Authentication login page.
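A minimal sketch of such a myLogin.jsp form, using the mandated attribute names; the page layout is an assumption:

<form method="POST" action="j_security_check">
    Username: <input type="text" name="j_username"/> <br/>
    Password: <input type="password" name="j_password"/> <br/>
    <input type="submit" value="Login"/>
</form>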
To display login failures and error conditions, it is also necessary to define an error page. Example 5-7 is a JSP example of myError.jsp that illustrates an error page displayed after an authentication request is made by a user who is not authorized to access the protected Web application.
Form-based Authentication mechanisms can also be used for GUI clients, including Swing- and AWT-based applet clients. The implementation steps are the same, except that the Swing- or AWT-based applets are required to use a client application component that provides Form-based Authentication. In the case of rich clients using RMI/IIOP communication, the client may choose to use a JNDI lookup for creating an InitialContext and then authenticate by passing the username and password information. Alternatively, use JAAS authentication and a custom CallbackHandler for login, and then use the Security.runAs() method inside the Swing event thread and its children.
Both HTTP Basic and Form-based Authentication transmit credentials in cleartext (Base64-encoded in the Basic case), which exposes username and password information if someone intercepts the communication and decodes it. HTTP Basic and Form-based Authentication over SSL (HTTPS) is considered the best approach, because it ensures secure communication using digital certificates (encrypting the data sent and decrypting the data upon receipt). HTTPS establishes a secure communication channel using the SSL/TLS protocol before initiating the HTTP authentication request. This allows establishing confidentiality and data integrity during communication using public-key certificates and the SSL/TLS configuration done at the J2EE server or Web container provider.

To configure HTTP Basic or Form-based Authentication over SSL, it is necessary to use the transport-guarantee element in the Web deployment descriptor. In the transport-guarantee element, specify the value CONFIDENTIAL when the Web application requires that the transmitted data be secured from viewing during transmission, or specify INTEGRAL when the Web application requires that the data be sent between client and server in such a way that it cannot be tampered with in transit. Example 5-8 is an example Web deployment descriptor configuring HTTPS for Form-based Authentication.
Digest Authentication
This allows a Web client to authenticate to a Web container by sending a message digest along with its HTTP request message. Digest-based authentication is also deemed insecure, because an attacker can intercept and capture the hashed password and resend it as a replay attack. Using digests over SSL should be considered so that the communication remains secure and tamperproof. Digest Authentication is quite similar to HTTP Basic Authentication, except that the user's password is represented as scrambled text produced by a one-way hash algorithm (message digest). To secure a Web application using Digest Authentication-based login access, configure the login-config element in the Web component deployment descriptor. The login-config element includes auth-method, which contains the element value DIGEST, which tells the client to transmit the username and password protected with a message digest (see Example 5-10).
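A sketch of such an Example 5-10 declaration; the realm name is an assumption:

<login-config>
    <auth-method>DIGEST</auth-method>
    <realm-name>MySecurityRealm</realm-name>
</login-config>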
It is very important to note that very few Web browsers support Digest Authentication and that the JSP and Servlet specifications do not mandate this method either.
authentication provider. All login modules are stacked together and configured using a JAAS configuration file. From a programming standpoint, the JAAS LoginContext executes all configured authentication provider LoginModule instances and is responsible for managing those configured authentication providers. Once the caller has instantiated a LoginContext, it invokes the login method to authenticate a Subject. This login method iterates through all the configured LoginModules and invokes the login method of each LoginModule assigned to the application. This allows the overall authentication result to be determined by combining the login results returned from each login module. Each LoginModule keeps track of whether its login or commit method previously succeeded or failed. To establish single sign-on, JAAS uses the shared-state mechanism, which allows passing the authentication information between login modules and results in a unified sign-on for the user, who now has access to all configured applications. To log out, the caller client invokes the logout method, which in turn invokes the logout method of the LoginModules configured for the applications. Refer to the section "Java Authentication and Authorization Service" in Chapter 4, "Java Extensible Security Architecture and APIs," for information about JAAS programming and login module configuration.
In a typical usage scenario, the Web agent installed in the Web server intercepts the caller's request before forwarding it to a J2EE server or its Web container. After interception, it verifies the caller's request for authentication credentials. If no authentication credentials are present or the existing credentials are found to be insufficient, the security provider's authentication service presents a login page or a login challenge window. The login page prompts the user for credentials such as username and password. After proper authentication, the agent examines all the roles assigned to the user. Based on the policies assigned to the user, the client will be either allowed or denied access to the Web application or its protected URL.
session state is a responsibility of the client applications because they can cache and manipulate substantial amounts of session state in memory. Session state maintenance has an enormous impact on application security, performance, availability, and scalability. When designing a Web application, it is quite important to identify the risks and associated mitigation options in order to manage the state and effectively track the current saved state.

From a Web-tier security infrastructure standpoint, HTTP sessions play a vital role in maintaining security-specific information in the user's session state and in propagating the security context in Web-based single sign-on scenarios. It is necessary to maintain a relatively high level of control and tight security over session information in both server and client environments. In J2EE application servers, Web-tier HTTP sessions can be tracked, maintained, and managed via cookies, URL rewriting, hidden fields in HTML forms, or a combination of cookies and URL rewriting. In the case of cookie-disabled Web browsers, it is the responsibility of the Web application or the server infrastructure to enable session tracking via URL rewriting. With URL rewriting, the client appends additional data to the end of each URL to identify the session, and the server associates that data with the stored session. To ensure session information security, make sure the cookies are encrypted and, in the case of URL rewriting, the URLs are encoded. Refer to the J2EE vendor security administration guide for configuration procedures.

Using HTTP sessions in Web applications is relatively simple. It involves looking up attributes associated with an HTTP request, creating a session object, looking up and storing user-specific information with a session, and terminating completed or abandoned sessions.
The getSession() method returns the valid session object associated with the user's HTTP request, which is identified in the session cookie that is encapsulated in the request object. Calling the method with no arguments creates a new HTTP session if one does not already exist. Calling the method with a Boolean argument creates a session only if the argument is true. Example 5-12 is a snippet showing a doPost() method from a servlet that performs the servlet's functions only if an HTTP session is present; the false parameter to getSession() prevents the servlet from creating a new session if one does not already exist.
Example 5-12. Servlet showing how to prevent creating new session if one exists
public void doPost(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException {
    HttpSession session = req.getSession(false);
    if (session != null) {
        // HTTP session present; continue
        // servlet operations
    } else {
        // HTTP session not available;
        // return an error page
    }
}
With this setting, the user's session will automatically be deactivated after 30 minutes of inactivity. The timeout period can also be controlled by using the HttpSession getMaxInactiveInterval and setMaxInactiveInterval methods, which allow the timeout to be specified programmatically in seconds. The snippet in Example 5-16 will invalidate a user's session after timeoutInSeconds seconds of inactivity.
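A minimal sketch of such an Example 5-16 snippet, assuming timeoutInSeconds holds the desired interval:

HttpSession session = req.getSession();
// Invalidate the session after the given period of inactivity
session.setMaxInactiveInterval(timeoutInSeconds);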
            /export/home/jsp/security/mysecure-resource/*
        </url-pattern>
        <!-- Define the HTTP methods to be protected -->
        <http-method>DELETE</http-method>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
        <http-method>PUT</http-method>
    </web-resource-collection>
    <!-- Define the roles that may access this resource -->
    <auth-constraint>
        <role-name>authors</role-name>
        <role-name>readers</role-name>
        <role-name>publishers</role-name>
    </auth-constraint>
</security-constraint>
...
</web-app>
getRemoteUser(): This method returns the name of the authenticated user who is making the request to the application (see Example 5-18).
isUserInRole(): This method determines whether an authenticated user belongs to the specified role. It returns true or false, indicating whether the user is included in the role (see Example 5-19).
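A minimal sketch combining these two methods in a servlet method; the "author" role name matches the <security-role-ref> mapping shown in Example 5-20:

protected void doGet(HttpServletRequest request,
                     HttpServletResponse response)
        throws ServletException, IOException {
    String user = request.getRemoteUser(); // null if unauthenticated
    if (user != null && request.isUserInRole("author")) {
        response.getWriter().println("Welcome, author " + user);
    } else {
        response.sendError(HttpServletResponse.SC_FORBIDDEN);
    }
}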
When using the isUserInRole(role) method, the string role is mapped to the role name defined in the <role-name> element nested within the <security-role-ref> element of a Web deployment descriptor. It is also important to note that the <role-link> element must match a <role-name> defined in the <security-role> element of the Web deployment descriptor. The web.xml will be as shown in Example 5-20.
...
<servlet>
    ...
    <security-role-ref>
        <role-name>author</role-name>
        <role-link>cspAuthor</role-link>
    </security-role-ref>
    ...
</servlet>
<security-role>
    <role-name>cspAuthor</role-name>
</security-role>
...
</web-app>
getUserPrincipal(): This method returns a java.security.Principal object for the current authenticated user. This method is used to check whether the user has logged in to the Web application and launched a specific action (see Example 5-21).
HTTPS Connection
Using JSSE mechanisms, clients can create HTTP/SSL-based URL connections with J2EE-deployed components such as JSPs and Servlets. Using two-way SSL allows the client's identity to be confirmed by the J2EE component through verification of the client's certificate, issued by a CA listed in the server's list of trusted CAs. To use JSSE for HTTPS connections, it is necessary to import the trusted certificates into the respective client and server keystores. The code snippet in Example 5-22 shows how to use HTTP/SSL in Java client code.
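A minimal sketch of such a client; the URL is an assumption, and the keystores are assumed to be already configured:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class HttpsClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://myserver:8443/secure/index.jsp");
        HttpsURLConnection conn =
                (HttpsURLConnection) url.openConnection();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}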
If the client uses a proxy server, the proxy-specific properties can be set as shown in Example 5-23.
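A sketch of such proxy settings; the host and port values are assumptions:

// Route HTTPS traffic through the proxy
System.setProperty("https.proxyHost", "myproxy.example.com");
System.setProperty("https.proxyPort", "8080");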
For more information about JSSE API mechanisms, refer to Chapter 4, "Java Extensible Security Architecture and APIs."
For more information about implementing JAAS LoginModules and JAAS Clients, refer to Chapter 4, "Java Extensible Security Architecture and APIs."
application client container, it is important to note that the provisioned component must comply with the appropriate J2ME profile. The snippet in Example 5-25 illustrates how a J2EE client establishes an SSL connection and obtains the server certificate information.
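A minimal sketch of such a client using JSSE; the host and port are assumptions:

import java.security.cert.Certificate;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class ShowServerCert {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory =
                (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket =
                (SSLSocket) factory.createSocket("myserver", 443);
        socket.startHandshake();
        // Print the server's certificate chain
        for (Certificate cert
                : socket.getSession().getPeerCertificates()) {
            System.out.println(cert);
        }
        socket.close();
    }
}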
MIDP 2.0 also introduced the concept of trusted MIDlets that can be digitally signed and verified. With signed MIDlets, it is possible to authenticate and verify the integrity of the MIDlet suite. For more information about MIDlet security features, refer to Chapter 3, "The Java 2 Platform Security."
... </ejb-jar>
In addition to the <role-name> element, a <role-link> element can be defined within a <security-role-ref> element. As with Web components, this element value can be defined to establish EJB relationships using role names from one EJB to another. There may be cases where you need to restrict a client from executing a list of methods of an EJB. In such cases, you can indicate the methods that should not be invoked using the <exclude-list> element. The methods listed under the <exclude-list> element are not callable regardless of the role, and in cases where a method is specified in both the <exclude-list> and <method-permission> elements, the <exclude-list> takes precedence. Example 5-27 shows two methods that cannot be called because they are specified within an <exclude-list>.
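A sketch of such an Example 5-27 fragment; the bean and method names are assumptions:

<assembly-descriptor>
    <exclude-list>
        <method>
            <ejb-name>MyAccountEJB</ejb-name>
            <method-name>deleteAccount</method-name>
        </method>
        <method>
            <ejb-name>MyAccountEJB</ejb-name>
            <method-name>resetAuditLog</method-name>
        </method>
    </exclude-list>
</assembly-descriptor>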
isCallerInRole(): This method determines whether the caller of the EJB belongs to the specified role. It returns true or false, indicating whether or not the user is included in the role. The code in Example 5-28 checks whether the bean caller belongs to the role "admin."
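A minimal sketch of such a check, assuming ejbContext is the bean's EJBContext:

if (ejbContext.isCallerInRole("admin")) {
    // proceed with the administrative operation
} else {
    throw new SecurityException("Caller is not in role admin");
}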
getCallerPrincipal(): This method returns a java.security.Principal object that contains the name of the current authenticated user making the call. This method is used to verify the caller's identity; if the verified identity is equivalent to the caller's identity, the container allows the caller to proceed with the invocation. If the verified identity is not equivalent to the caller's identity, the container denies further interaction. The code in Example 5-29 shows how to obtain the caller identity in the EJB using the ejbContext.getCallerPrincipal().getName() method.
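A minimal sketch of such a lookup:

// Obtain the name of the authenticated caller
String callerName = ejbContext.getCallerPrincipal().getName();
System.out.println("Invoked by: " + callerName);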
In general, the client principal of an invoked EJB method is associated with the subsequent invocations across other EJBs. If the EJB component in the other container has to use the caller's identity from the originating EJB container, the <use-caller-identity> option has to be specified to instruct the container. If the EJB method makes a call to another EJB and its defined principal is not the original caller, then to delegate the principal, each EJB has to be assigned a <run-as> role and principal. To better understand identity propagation (see Figure 5-5), let's consider a scenario where a client with principal identity roleA calls EJB MyEJB1.methodA() as principal roleA in EJB Server A, which then calls EJB MyEJB2.methodB() on EJB Server B. If both MyEJB1 and MyEJB2 have their security identity set to <use-caller-identity>, then both MyEJB1.methodA() and MyEJB2.methodB() will execute using the caller's principal roleA as their identity. To support this scenario, the EJB deployment descriptor will look like Example 5-31.
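A sketch of the relevant Example 5-31 fragment; the surrounding descriptor structure is abbreviated:

<enterprise-beans>
    <session>
        <ejb-name>MyEJB1</ejb-name>
        <security-identity>
            <use-caller-identity/>
        </security-identity>
    </session>
    <session>
        <ejb-name>MyEJB2</ejb-name>
        <security-identity>
            <use-caller-identity/>
        </security-identity>
    </session>
</enterprise-beans>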
Run-As
To make the originating EJB call on other EJB components use a different principal identity, it is necessary to use the <run-as> identity option. If the <run-as> identity option is specified, the container establishes the identity of the bean using the specified role name and propagates the <run-as> principal identity when it calls on other EJBs as a whole, including all methods of the home and the remote interfaces. Example 5-32 illustrates <run-as> identity in the EJB deployment descriptor in order to execute MyEJB2 using "roleB."
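A sketch of the relevant Example 5-32 fragment:

<session>
    <ejb-name>MyEJB2</ejb-name>
    <security-identity>
        <run-as>
            <role-name>roleB</role-name>
        </run-as>
    </security-identity>
</session>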
Using <run-as> identity is very useful when the business functionality requires delegation of certain operations without transferring complete access privileges to the caller principal. An example of its use is when an EJB is required to run privileged administrative tasks that make use of methods from another administrative EJB. Assigning the EJB a <run-as> identity enables use of the administrative EJB without compromising security and access privileges.
Figure 5-6. Security context propagation from Web tier to EJB tier
When a Web client invokes an EJB method, the container propagates the security context via the EJB stubs and skeletons. The security context propagation is initiated from the Web container as part of the inbound call, which interacts with the EJBs. It is the EJB container's responsibility to make the representation of the caller's principal identity available to the invoked EJB component. This means that once the user is authenticated in the Web Tier, the authenticated principal identity is applied to the protection domain that manages the container authentication boundary, which in turn makes the principal identity available to the deployed Web and EJB components. In the case of CORBA- and RMI/IIOP-based clients, the security context propagation among the EJB components and CORBA applications occurs via interoperability, because J2EE-compliant containers support all the requirements of Conformance Level 0 of the Common Secure Interoperability version 2 (CSIv2) specification from the Object Management Group (OMG).
Figure 5-7. J2EE Connector security contract with the J2EE platform
In a typical usage scenario, the application component makes a request to establish a connection with the underlying EIS layer. To serve the request, the resource adapter security services authenticate the caller using the credentials and then establish a connection with the underlying EIS layer. After authentication, the security service determines the access privileges for the authenticated user in order to determine whether the user is permitted to access the EIS resource. All subsequent invocations that the application component makes to the EIS instance occur using the security context of the caller's principal identity.
Container-Managed Sign-On
With container-managed sign-on, the application developer delegates the responsibility of managing the EIS sign-on to the J2EE application server, and the application deployer is responsible for configuring it. To represent this, the application developer sets the res-auth element in the Connector deployment descriptor to Container. The deployer sets up and configures the EIS sign-on configuration with the required username and password for establishing the connection. The snippet in Example 5-33 represents the res-auth element in the Connector module deployment descriptor.
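A sketch of such a resource reference; the reference name and type are assumptions:

<resource-ref>
    <res-ref-name>eis/MyEIS</res-ref-name>
    <res-type>javax.resource.cci.ConnectionFactory</res-type>
    <res-auth>Container</res-auth>
</resource-ref>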
In the application code, when the component invokes the getConnection method on the ConnectionFactory instance, it does not need to pass any security credentials. An application using container-managed sign-on will look like Example 5-34.
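A minimal sketch of such code, assuming the JNDI name from the resource reference above:

import javax.naming.InitialContext;
import javax.resource.cci.Connection;
import javax.resource.cci.ConnectionFactory;

public class EisClient {
    Connection connect() throws Exception {
        InitialContext ic = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory)
                ic.lookup("java:comp/env/eis/MyEIS");
        // No credentials passed; the container signs on to the EIS
        return cf.getConnection();
    }
}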
Component-Managed Sign-On
With component-managed sign-on, the application developer includes the code that is responsible for managing the EIS sign-on. To represent this, the application developer sets the res-auth element in the Connector deployment descriptor to Application. This indicates that the component code is designed to perform a programmatic sign-on to the EIS. The application developer must pass the required credentials, such as username and password, to establish the connection. The snippet in Example 5-35 represents the use of the res-auth element in the Connector module deployment descriptor.
In the application code, when the component invokes the getConnection method on the ConnectionFactory instance, it is necessary to pass the required security credentials. The application using the component-managed sign-on will look like Example 5-36.
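A minimal sketch of such code; the ConnectionSpec implementation is vendor-specific, so MyEisConnectionSpec and the credentials are assumptions:

import javax.resource.cci.Connection;
import javax.resource.cci.ConnectionFactory;
import javax.resource.cci.ConnectionSpec;

public class EisClientManagedSignOn {
    Connection connect(ConnectionFactory cf) throws Exception {
        // Vendor-specific ConnectionSpec carrying the credentials
        ConnectionSpec spec =
                new MyEisConnectionSpec("testUser", "testPassword");
        return cf.getConnection(spec);
    }
}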
Establishing an EIS sign-on involves the following steps:

Determine the resource principal (the identity of the initiating caller) under whose security context a new connection to the EIS will be established.
Authenticate the resource principal if the connection is not already authenticated.
Establish a secure association between the application server and the EIS. Additional mechanisms such as SSL or Kerberos can also be deployed.
Once the EIS sign-on is established, the connection is associated with the security context of the initiating user. Subsequently, all application-level invocations of an EIS instance occur under the security context of that principal identity. When deploying an application that uses a J2EE Connector, the deployer configures the security credentials required to create the connections to the underlying EIS systems. The deployer performs the principal-mapping configuration to ensure that all connections are established under the security context of the EIS user who is the resource principal of the underlying EIS. The J2EE application server takes the responsibility of handling the principal mapping for all the authenticated caller principals. Thus, a user accesses the EIS under the security context of the configured resource principal.
Securing JMS
Java Message Service (JMS) is an integral part of the J2EE platform. It provides a standard set of Java APIs that allow J2EE applications to send and receive messages asynchronously. It also allows access to the common features of any JMS-compliant enterprise messaging system (typically message brokers). JMS defines a loosely coupled, reliable application communication mechanism for enabling J2EE components to send and receive messages with enterprise applications and legacy systems. The JMS specification primarily aims at defining a Java-based messaging API designed to support a wide range of enterprise messaging vendor products. It leaves the responsibility of adding security features to the JMS provider vendors. From a security viewpoint, a JMS-based application security solution requires support for authentication, authorization, encryption, confidentiality, data integrity, and non-repudiation. Most messaging vendors provide support for some of these features:

JMS provider authentication and access control
Protection of JMS queues, so that the destinations are available only to privileged applications
JMS message and transport security

It is also important to note that the JMS specification does not address these security requirements but leaves it to the JMS provider vendors to implement them. So the features discussed in the following sections may differ among vendor implementations.
Securing JDBC
JDBC technology is an API (included in both the J2SE and J2EE releases) that provides cross-DBMS connectivity supporting a wide range of data sources, including SQL databases and tabular data sources such as spreadsheets or flat files. With the JDBC API mechanisms, Java-based applications are able to send SQL or other statements to data sources running on heterogeneous platforms. To access these data sources, JDBC makes use of appropriate JDBC-enabled drivers provided by the database vendor. The JDBC specification leaves the responsibility of providing security features to the JDBC drivers and the database implementation. With the introduction of the JDBC 3.0 specification, JDBC provides compatibility with the J2EE Connector architecture. This means that JDBC drivers can be implemented as J2EE Connectors (Resource Adapters). They are packaged and deployed as a resource adapter that allows a J2EE container to integrate its connection, transaction, and security management services with the underlying data source.
So far there are no standard JDBC security mechanisms, but most of the major database and JDBC driver vendors offer custom security mechanisms to provide the following:

Secure communication between the application and the underlying database system using HTTP over SSL/TLS protocols
Data encryption mechanisms
Support for data-integrity checking mechanisms
Support for secure rich-client communication to databases through firewalls
Support for security auditing and reporting on data access

These mechanisms are represented in the JDBC driver properties by setting the appropriate configuration parameters; there is no need to change code.
To decouple the Web tier from applications, the Web server is configured with a reverse proxy, which receives the HTTP requests from a client on the incoming network side and then opens another socket connection on the application server side to perform business application processing. This architectural model is suitable for applications that have relatively intensive servlet-to-EJB communications and less stringent security requirements.
To decouple the Web tier from applications, the Web server is configured with a reverse proxy, which receives the HTTP requests from a client on the incoming network side and then forwards the request to the application server side to perform business application processing.
implementation. At the time of writing this book, JAX-RPC supported OASIS WSS 1.0, also referred to as the WS-Security standard. For more information and further details about Web services security and applied techniques, refer to Chapter 6, "Web Services Security: Standards and Technologies," and Chapter 11, "Securing Web Services: Design Strategies and Best Practices."
Summary
In this chapter, we discussed the J2EE architecture's security concepts and applied mechanisms. We took an in-depth look at the different security mechanisms facilitated by the J2EE architecture and how they contribute to the end-to-end security of an overall J2EE application solution. In particular, we saw how the J2EE architecture facilitates end-to-end security mechanisms spanning all logical tiers, from the presentation tier to the Business tier, and from the Business tier to the back-end resources. We also looked at how to enforce security mechanisms at the J2EE application level as well as for Java and J2ME clients. We studied the different security mechanisms available for the different J2EE components, including JSPs, Servlets, EJBs, J2EE Connectors, JMS, and JDBC. We discussed the security mechanisms for enforcing authentication, authorization, secure communication, integrity, and confidentiality, and how they can be applied to the tiers and components during the application development and deployment phases. In particular, we focused on the following:

J2EE architecture and its logical tiers
J2EE security infrastructure and mechanisms
J2EE authentication and authorization
Users, groups, and realms
J2EE Web-tier security mechanisms
J2EE Business-tier security mechanisms
J2EE Integration-tier security mechanisms
J2EE application deployment and network topologies
J2EE Web services security overview

In general, this chapter provided a J2EE security reference that discussed the architecture's security details and the available mechanisms for building end-to-end security in a J2EE-based solution. For more information about J2EE security design strategies and best practices, refer to Chapter 9, "Securing the Web Tier: Design Strategies and Best Practices," and Chapter 10, "Securing the Business Tier: Design Strategies and Best Practices." In the next chapter, we will explore Web services security standards and technologies.
References
[LiGong] Li Gong. "Java Security Architecture," in Java 2 SDK, Standard Edition Documentation Version 1.4.2. Sun Microsystems, 2003. http://java.sun.com/j2se/1.4.2/docs/guide/security/spec/security-spec.doc1.html and http://java.sun.com/j2se/1.4.2/docs/guide/security/spec/security-spec.doc2.html
[J2EE-WS-Blueprints] J2EE Blueprints: Designing Web Services with the J2EE Platform, 2nd Edition: Guidelines, Patterns, and Code for Java Web Services. http://java.sun.com/blueprints/guidelines/designing_webservices/
[J2EE-Blueprints] J2EE Blueprints: Designing Enterprise Applications with the J2EE Platform, 2nd Edition: Guidelines, Patterns, and Code for End-to-End Java Applications. http://java.sun.com/blueprints/guidelines/designing_enterprise_applications_2e/
[JWS] Ramesh Nagappan, Robert Skoczylas, et al. Developing Java Web Services: Architecting and Developing Java Web Services. Wiley, 2002.
[CJP] Deepak Alur, John Crupi, Dan Malks. Core J2EE Patterns: Best Practices and Design Strategies. Sun Microsystems Press, 2003.
[EJBTier] Pravin V. Tulachan. Developing EJB 2.0 Components. Sun Microsystems Press, 2002.
[WebTier] Marty Hall. More Servlets and JavaServer Pages. Sun Microsystems Press, 2002.
[EJBTier2] Kevin Boone. Applied Enterprise JavaBeans Technology. Sun Microsystems Press, 2003.
Before jumping into the core standards and technologies of the technology stack, let's take a look at the fundamental operational model of Web services.
The operational roles and relationships are defined as follows:

Service provider: The service provider hosts the Web services and is primarily responsible for developing and deploying the Web services. The provider also defines the services and publishes them with the service registry.

Service registry: The service registry hosts the lookup information and descriptions of published services and is primarily responsible for service registration and discovery of the Web services. The registry stores and lists the various service types, descriptions, and locations of the services that help the service requesters find and subscribe to the required services.

Service requester: The service requester acts as the Web services client, who is responsible for the service invocation. The requester locates the Web service using the service registry, invokes the required services, and executes them from the service provider.
Figure 6-3 represents the structure of a SOAP message with attachments. Typically, a SOAP message is represented by a SOAP envelope with zero or more attachments. The SOAP message envelope contains the header and body of the message, and the SOAP message attachments enable the message to contain data such as XML and non-XML data (like text or binary files). In a SOAP message, the SOAP header represents the processing semantics and provides mechanisms for adding features and defining high-level functionalities such as security, transactions, priority, and auditing. The SOAP body contains information defining an RPC call or business documents in XML, and any XML data required to be part of the message during communication. It is important to note that a SOAP message package is constructed using the MIME Multipart/Related structure to separate and identify the different parts of the message. SOAP is endorsed by the W3C and key industry vendors such as Sun Microsystems, IBM, HP, SAP, Oracle, and Microsoft. These vendors have announced their support by participating in the W3C's XML Protocol Working Group. To find out the current status of SOAP from the activities of this group, refer to the W3C Web site at http://www.w3.org/2000/xp/Group/.
The WSDL description sets forth the service interfaces of a Web services provider. Using the WSDL description, the service requester can construct a SOAP client interface that can communicate with the service provider. By communicating with UDDI registries, the service requesters query for services, locate services, and then invoke them by sending SOAP messages. The UDDI registries can be either private (within an organization) or public (servicing the whole Internet community). UDDI is endorsed by OASIS as a standard. To find out more information about UDDI and its current status, refer to the official OASIS Web site for UDDI at http://www.oasis-open.org/committees/uddi-spec/tcspecs.
Man-in-the-Middle
Man-in-the-Middle (MITM) is an attack where the hacker acts as a Web-service intermediary that intercepts the communication and then accesses and modifies the messages between two communicating parties without the communicating parties knowing that the messages have been intercepted.
In a session-hijacking attack, the hijacker uses packet-capturing capabilities to obtain the session information from the communicating client peer. Based on the session identifier, the hijacker constructs forged service requests that affect the operational efficiency of a Web-services provider or a requester.
Identity Spoofing
Identity spoofing is an attack where a hacker uses the identity of a trusted service requester and sabotages the security of the services provider using forged service requests with malicious information. In this case, the services provider sees normal status and detects no security breach in the system. Although it is not trivial, from a business perspective, spoofing can cause significant losses due to false identity claims, refund fraud, and related issues.
Message Confidentiality
The threat to message confidentiality comes from eavesdroppers or after an intrusion attack by unauthorized entities. It is very important to use appropriate mechanisms to protect message confidentiality throughout the life cycle of Web services operations, including messages in transit or in storage. If these mechanisms are not used, messages will be available for viewing and interception by unintended recipients and intermediaries.
Replay Attacks
A replay attack is a form of DoS attack where an intruder replays a service request that has been previously sent to the service provider. In this case, the intruder fraudulently duplicates a previously sent request and repeatedly sends it for the purpose of causing the target Web services endpoint to generate faults that can cause failure and shutdown of the target's operations. Hackers usually use this attack as a first step in accessing the services provider in order to generate a fake session or to obtain critical information required for accessing services.
Authentication
Authentication enforces the verification and validation of the identities and credentials exchanged between the Web-services provider and the consumer. The initiating service requester must be authenticated to prove its identity with reliable credentials. The credentials may be X.509 digital certificates, Kerberos tickets, or any security token used to validate the identity of the service requester. Depending upon the security requirements, it is also possible to deploy mutual authentication mechanisms, where both the service requester and the service provider exchange their credentials and validate them before initiating communication. Using authentication mechanisms mitigates the risks associated with man-in-the-middle, identity-spoofing, and message-replay attacks.
Data Integrity
Data integrity plays a vital role in ensuring that messages exchanged between the communicating parties are accurate, complete, and not modified or altered during transit or while in storage. The use of digital signature mechanisms ensures data integrity by securing Web services-based business transactions from modification. Ensuring data integrity guards Web-services communication across endpoints and intermediaries from MITM intrusions and interference that may damage data.
Data Confidentiality
Data privacy and confidentiality assure that the data transmitted is protected from the prying eyes of unintended recipients. They are achieved through cryptographic algorithms that convert the data into an encrypted form that unauthorized viewers cannot understand. Ensuring confidentiality guarantees that transmitted data is not accessible for viewing by interception or interference during transmission between endpoints and through intermediaries.
Non-repudiation
Non-repudiation ensures that the communicating parties accept a committed transaction. This prevents the service
requesters from wrongfully claiming that the transaction never occurred. Non-repudiation can be ensured using many approaches, such as logging and recording trails of the transactions exchanged, using timestamps on message requests and responses, and using digital signatures to ensure that the credentials of the communicating parties are authentic.
Security Interoperability
Ensuring and demonstrating security interoperability is another core Web services requirement to guarantee that the adopted security mechanisms and countermeasures seamlessly work together during communication. This means that the Web service providers and consumers are making use of standards-based protocols following the security interoperability guidelines defined by the WS-I Basic Security Profile. The Web services and their security providers must allow security interoperability at all levels, including transport-level security, message-level security, and other supporting security infrastructures.
XML Signature
The XML signature specification forms the basis for securely exchanging XML documents and conducting secure business transactions. The goal of XML signature is to ensure data integrity, message authentication, and non-repudiation of services. It is an evolving standard for creating and representing digital signatures using XML syntax and processing for XML-based data communication. XML signature evolved from a joint effort of the W3C and IETF working groups. To find out the current status of the XML signature specification from the W3C working group activities, refer to the W3C Web site at www.w3.org/Signature.
Enveloping signatures: The original XML content is embedded within the XML signature, where the XML content is represented as a child element within an <Object> element or identified via a URI in a <Reference> element of the parent XML signature.
  ...
  <Reference URI="#xd001"/>
  ...
  <Object Id="xd001">
    <xmldocument>
      <business-element/>
    </xmldocument>
  </Object>
</Signature>
Detached signatures: The XML content resides external to the signature and is identified via a URI or transform. The signature applies to data objects external to the signature document, or to data objects residing within the original XML document as sibling elements.
Let's take a closer look at how to represent an XML signature, its structural elements, and its features.
<Signature>
The <Signature> element is a parent element that identifies a complete XML signature within a given context. It contains the sequence of child elements: <SignedInfo>, <SignatureValue>, <KeyInfo>, and <Object>. Also, an optional Id attribute can be applied to the <Signature> element as an identifier. This is useful in the case of multiple <Signature> instances within a single context.
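In outline, a complete <Signature> element assembles these children as follows (the Id value is illustrative and element contents are elided):

  <Signature Id="MySignature" xmlns="http://www.w3.org/2000/09/xmldsig#">
    <SignedInfo>
      <CanonicalizationMethod Algorithm="..."/>
      <SignatureMethod Algorithm="..."/>
      <Reference URI="...">
        <Transforms>...</Transforms>
        <DigestMethod Algorithm="..."/>
        <DigestValue>...</DigestValue>
      </Reference>
    </SignedInfo>
    <SignatureValue>...</SignatureValue>
    <KeyInfo>...</KeyInfo>
    <Object>...</Object>
  </Signature>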
<SignatureValue>
The <SignatureValue> element contains the actual value of the digital signature, computed by signing the digest of the canonicalized <SignedInfo> element. The value is base64 encoded.
<SignedInfo>
The <SignedInfo> element contains the information that is actually signed. The contents of this element include a sequence of elements: <CanonicalizationMethod>, <SignatureMethod>, and one or more <Reference> elements. The <CanonicalizationMethod> and <SignatureMethod> elements describe the type of canonicalization and signature algorithms used in the generation of the <SignatureValue>. Each <Reference> element identifies the actual data, via a URI, as a data stream that is transformed and then hashed.
<CanonicalizationMethod>
The <CanonicalizationMethod> element defines the representation of the physical structure by specifying the canonicalization algorithm applied to the <SignedInfo> element. To support security and interoperability, the XML signature specification recommends the use of XML-based canonicalization algorithms instead of text-based canonicalization algorithms (such as CRLF and charset normalization). It also mandates that the <SignedInfo> element be presented to the XML canonicalization methods as an XPath node set that includes the <SignedInfo> element together with its descendants, attributes, and namespace nodes.
<SignatureMethod>
The <SignatureMethod> element specifies the cryptographic algorithm used for generating the signature. The algorithm also identifies other cryptographic functions involved in the signature operation, such as hash, public-key algorithms, MACs, and padding.
<Reference>
The <Reference> element contains the digest value of the data object. It optionally carries identifiers (URI) to the original data objects, including the list of transforms specifying transformations applied prior to computing the digest.
<Transforms>
The optional <Transforms> element contains an ordered list of <Transform> elements. It defines the steps required for obtaining the original data object that was digested. Each <Transform> serves as a transformation input to the next <Transform>. The input to the first <Transform> is the result of dereferencing the URI attribute of the <Reference> element. The output of the last <Transform> is the input for the <DigestMethod> algorithm.
<DigestMethod>
The <DigestMethod> contains the digest algorithm to be applied to the signed object. URIs identify the algorithms.
<DigestValue>
The <DigestValue> element contains the base64-encoded value of the digest.
<KeyInfo>
The optional <KeyInfo> element provides the ability to verify the signature using the packaged verification key. It contains keys, key names, certificates, and related information. This element also enables the integration of trust semantics within an application that utilizes XML signatures. The <KeyInfo> element may contain a child element named <KeyValue>, which carries a raw RSA or DSA public key in an <RSAKeyValue> or <DSAKeyValue> child element, respectively. All information represented in the <KeyValue> element is base64 encoded.
<Object>
The optional <Object> element is used mostly in enveloping signatures, where the data object is part of the <Signature> element. The digest of the data object in this case covers the <Object> element along with its associated data objects. The <Object> elements also include optional MIME type, ID, and encoding attributes.
<Manifest>
The optional <Manifest> element is quite similar to the <SignedInfo> element in that it contains a list of <Reference> elements. In the case of the <Manifest> element, the processing of the <Reference> element is defined by the application.
<SignatureProperties>
The optional <SignatureProperties> element can contain additional information about the signature. This may include date, timestamp, serial number of cryptographic hardware, and other application-specific attributes.
Algorithms
In XML signature, algorithms are associated with an identifier attribute carrying a URI for <DigestMethod>, <SignatureMethod>, <CanonicalizationMethod>, and <Transform> elements. Most algorithms use implicit parameters such as key information for <SignatureMethod>. Some algorithms use explicit parameters with descriptive element names specific to the algorithm and within the XML signature or algorithm-specific namespace. Let's take a brief look at the algorithms and their URIs discussed in the XML signature specification.
Signature Algorithms
Signature algorithms are used for creating the XML signature for given data objects. The algorithms used are a combination of message digests and public-key cryptography algorithms. The XML signature specification defines two signature algorithms, DSA and PKCS1 (RSA-SHA1), and their associated URIs. Both DSA and PKCS1 (RSA-SHA1) take no explicit parameters. DSA: DSA-SHA1, also referred to as DSA algorithm, is specified with a URI identifier, http://www.w3.org/2000/09/xmldsig#dsa-sha1. For example, DSA is represented in <SignatureMethod> element as shown in Example 6-5.
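In essence, the <SignatureMethod> element for the DSA algorithm is simply:

  <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#dsa-sha1"/>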
The output of the DSA algorithm consists of a pair of integers referred to as the r,s pair, and the signature value contains the base64-encoded concatenation of the octet encodings of r and s. The integer-to-octet-stream conversion is done according to the RFC 2437 (PKCS1) specification. The resulting <SignatureValue> element of the DSA algorithm will look as shown in Example 6-6.
RSA-SHA1: The RSA-SHA1, also referred to as PKCS1, algorithm is specified with a URI identifier, http://www.w3.org/2000/09/xmldsig#rsa-sha1. For example, RSA is represented in <SignatureMethod> element as shown in Example 6-7.
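Similarly, the <SignatureMethod> element for the RSA-SHA1 algorithm is:

  <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>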
The <SignatureValue> element of the RSA-SHA1 is represented using base64 encoding, and the octet string is computed according to RFC 2437 [PKCS1, section 8.1.1: Signature generation for the RSASSA-PKCS1-v1_5 signature scheme].
Canonicalization Algorithms
Two equivalent XML documents can differ in representation details such as physical structure, attribute ordering, character encoding, or insignificant placement of white space. In an XML signature, it is extremely important to prove the equivalence of XML documents while representing digital signatures, checksums, identifiers, version control, and conformance. XML canonicalization algorithms generate the canonical form of an XML document, which can be correctly compared, byte-by-byte, with the canonical forms of other documents. In XML signature, the XML documents need to be canonicalized before they are signed to ensure that the representation is logically byte-by-byte identical with equivalent XML documents. Without canonicalization, the validation of an XML signature can fail because of differences in physical structure or representation. The XML signature specification defines two canonicalization algorithms:

Canonical XML (omits comments) Identifier: http://www.w3.org/TR/2001/REC-xml-c14n-20010315

Canonical XML with comments Identifier: http://www.w3.org/TR/2001/REC-xml-c14n-20010315#WithComments

For example, the representation of the <CanonicalizationMethod> element in the signature will look as shown in Example 6-8.
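In its minimal form, such a <CanonicalizationMethod> element, using the first identifier above, is:

  <CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>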
Transform Algorithms
Applying transformations is mostly used to support canonicalization and to make sure the actual data object is processed, filtered, and represented in the right fashion before it is signed. Using transform algorithms, the XML signature can take an ordered list of transformations for a data object as required. Transform algorithms can be applied to the data objects referred to in the <Reference> element or to the output of a previous <Transform> element. The XML signature specification defines three transform algorithms:

XSLT Transform Identifier: http://www.w3.org/TR/1999/REC-xslt-19991116

XPath Transform Identifier: http://www.w3.org/TR/1999/REC-xpath-19991116

Enveloped Signature Identifier: http://www.w3.org/2000/09/xmldsig#enveloped-signature

For example, the representation of the <Transforms> element in the XML signature will look as shown in Example 6-9.
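In its minimal form, a <Transforms> list applying the XPath transform identified above is:

  <Transforms>
    <Transform Algorithm="http://www.w3.org/TR/1999/REC-xpath-19991116"/>
  </Transforms>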
Enveloped Signature
In the enveloped signature, the XML signature resides within the signed document. Example 6-11 represents the enveloped signature, where the XML signature is embedded as part of the signed XML document.
In Example 6-11, the data object signed is the <BusinessAccountSummary> element, which is identified by the URI attribute of the <Reference> element. Adding the XML signature changes the original document by embedding the <Signature> element in it. To verify the signature, it therefore becomes necessary to process the document as it existed without the signature. The XML digital signature recommendation defines an enveloped signature transform algorithm for removing the <Signature> element from the original document. The enveloped signature transform algorithm is defined in the specification with the URI identifier http://www.w3.org/2000/09/xmldsig#enveloped-signature.
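In outline, an enveloped signature follows this shape: a <Reference> with URI="" covers the whole document, and the enveloped signature transform excludes the <Signature> element itself (digest and signature values are elided):

  <BusinessAccountSummary>
    <!-- original business content ... -->
    <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
      <SignedInfo>
        <CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
        <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#dsa-sha1"/>
        <Reference URI="">
          <Transforms>
            <Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
          </Transforms>
          <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
          <DigestValue>...</DigestValue>
        </Reference>
      </SignedInfo>
      <SignatureValue>...</SignatureValue>
    </Signature>
  </BusinessAccountSummary>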
Enveloping Signature
In the enveloping signature, the XML signature encloses the signed XML document as its child element. Example 6-12 represents the enveloping signature, where the XML document is contained within the <Object> element of the XML signature.
Detached Signature
In the detached signature, both the XML document and the XML signature reside independently; they are detached, with external references or residing within the same document as sibling elements. The URI attribute of the <Reference> element holds the identifier of the external reference, pointing either to an external resource or to the element id of a sibling XML fragment residing in the same document. Example 6-13 represents a detached signature, where the XML signature <Signature> and the signed XML document <BusinessAccountSummary> are siblings within the main document.
      <Address>1 ABZ Drive, Newton, CA</Address>
      <PrimaryContact>R Nagappan</PrimaryContact>
      <BusinessAccount id="BS-12345">
        <AccountBalance>950000.00</AccountBalance>
      </BusinessAccount>
      <CreditCard no="1233-3456-4567">
        <CreditBalance>45000.00</CreditBalance>
      </CreditCard>
      <CreditCard no="4230-3456-9877">
        <CreditBalance>6000.00</CreditBalance>
      </CreditCard>
    </Customer>
  </BusinessAccountSummary>
  <Signature Id="xyz7802370" xmlns="http://www.w3.org/2000/09/xmldsig#">
    <SignedInfo>
      <CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
      <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#dsa-sha1"/>
      <Reference URI="#ABCD54321">
        <Transforms>
          <Transform Algorithm="http://www.w3.org/TR/1999/REC-xpath-19991116"/>
        </Transforms>
        <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
        <DigestValue>jav7lwx3rvLPO0vKVu8nk===</DigestValue>
      </Reference>
    </SignedInfo>
    <SignatureValue>MC0E~LE=</SignatureValue>
    <KeyInfo>
      <X509Data>
        <X509SubjectName>CN=RRN,O=CS,ST=BOSTON,C=MA</X509SubjectName>
        <X509Certificate>
          MIID5jCCA0+gA...lYZ==
        </X509Certificate>
      </X509Data>
    </KeyInfo>
  </Signature>
</Document>
In Example 6-13, the data object signed is the <BusinessAccountSummary>, and it is identified by the URI attribute of the <Reference> element.
5. Canonicalize the <SignedInfo> element. If canonicalization is not applied, the validation of an XML signature may fail due to possible differences in the XML structure or its representation.

6. Calculate the digest of the <SignedInfo> element and sign it by applying the signature algorithm identified by the <SignatureMethod> element. The resulting signed value is represented under the <SignatureValue> element.

7. Optionally, include the <KeyInfo> element, carrying information such as X.509 certificates or a public key required for validating the signature.

8. Finally, construct the <Signature> element, including the <SignedInfo>, <SignatureValue>, and <KeyInfo> elements that represent an XML signature of the given XML document or data objects.
XML Encryption
XML Encryption specifications form the basis of securing data and communication in order to conduct secure business transactions between partners. The goal of XML encryption is to provide data confidentiality and to ensure end-to-end security of messages transmitted between communicating parties. It is an evolving standard for encrypting and decrypting data and then representing that data using XML. XML encryption has emerged from the W3C as an industry-standard initiative for expressing encryption and decryption of digital content in XML. To find out the current status of the XML encryption specifications from the W3C working group, refer to the W3C Web site at http://www.w3.org/Encryption.
<EncryptedData>
The <EncryptedData> element is the root element that contains all child elements, including the <CipherData> element that holds the encrypted data. When XML content is encrypted, the <EncryptedData> element replaces that content in the document; the exception is an encrypted key, which is represented by the <EncryptedKey> element. The <EncryptedData> element carries four optional attributes: an Id attribute identifying the encrypted data with a unique id; a Type attribute telling the decrypting application whether the encrypted data is an element or element content; a MimeType attribute defining the MIME type of the content; and an Encoding attribute specifying the transfer encoding (e.g., base64) of the encrypted data. See Example 6-15.
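In outline, an <EncryptedData> element carrying these attributes looks as follows (the Id and cipher value are illustrative):

  <EncryptedData Id="ED1"
      xmlns="http://www.w3.org/2001/04/xmlenc#"
      Type="http://www.w3.org/2001/04/xmlenc#Element"
      MimeType="text/xml">
    <CipherData>
      <CipherValue>A23B45C56...</CipherValue>
    </CipherData>
  </EncryptedData>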
<EncryptionMethod>
The optional <EncryptionMethod> element specifies the applied encryption algorithm of the encrypted data. If it is not
specified, the recipient would not be aware of the applied encryption algorithm and the decryption may fail. See Example 6-16.
<ds:KeyInfo>
The <ds:KeyInfo> element specifies information about the key used for encrypting the data. It may contain <ds:KeyName>, <ds:KeyValue>, and <ds:RetrievalMethod> as its child elements. The <ds:KeyName> element specifies a reference to the key by name or refers to a <CarriedKeyName> element of an <EncryptedKey> element. For example, the <ds:KeyInfo> and <ds:KeyName> elements appear as shown in Example 6-17.
The <ds:RetrievalMethod> provides another way to retrieve the key information identified using a URI. Example 6-18 uses the <ds:RetrievalMethod> with a URL location to retrieve the key from the <EncryptedKey> element.
<CipherData>
<CipherData> is a mandatory element that provides the encrypted data. It allows you to specify the encrypted value using <CipherValue> or <CipherReference> as child elements. The <CipherValue> element holds the encrypted octet sequence as base64-encoded text, as shown in Example 6-19.
Alternatively, the <CipherReference> element allows you to specify a URI that references an external location containing the encrypted octet sequence. In addition to the URI, <CipherReference> can also contain an optional <Transforms> element listing the decryption steps required to obtain the cipher value. The <Transforms> element allows you to include any number of transformations specified using the <ds:Transform> element. Example 6-20 illustrates the representation of the <CipherReference> and <Transforms> elements.
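In outline, a <CipherReference> with a transform list looks as follows (the location URI is hypothetical):

  <CipherData>
    <CipherReference URI="http://www.example.com/secure/cipher.bin">
      <Transforms>
        <ds:Transform Algorithm="http://www.w3.org/TR/1999/REC-xpath-19991116"/>
      </Transforms>
    </CipherReference>
  </CipherData>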
<EncryptedKey>
The <EncryptedKey> element is used to transport encryption keys between the message sender and the message's ultimate recipients. It can be used within XML data or specified inside an <EncryptedData> element as a child of a <ds:KeyInfo> element. See Example 6-21.
When <EncryptedKey> is decrypted, the resulting octets are made available to the EncryptionMethod algorithm without any additional processing.
<EncryptionProperties>
The optional <EncryptionProperties> element can contain additional information about the creation of the XML encryption. This may include details such as date, timestamp, serial number of cryptographic hardware used for encryption, and other application-specific attributes.
Block Encryption
Block encryption algorithms are designed to provide encryption and decryption of data in fixed-size, multiple-octet blocks. The XML encryption specification defines four algorithms for block encryption, as follows:

Algorithm name: TRIPLEDES
Identifying URI: http://www.w3.org/2001/04/xmlenc#tripledes-cbc
Implementation: Required

Algorithm name: AES-128
Identifying URI: http://www.w3.org/2001/04/xmlenc#aes128-cbc
Implementation: Required

Algorithm name: AES-256
Identifying URI: http://www.w3.org/2001/04/xmlenc#aes256-cbc
Implementation: Required

Algorithm name: AES-192
Identifying URI: http://www.w3.org/2001/04/xmlenc#aes192-cbc
Implementation: Optional
Key Transport
Key transport algorithms are public-key algorithms designed for encrypting and decrypting keys. These algorithms are identified as the value of the Algorithm attribute of the <EncryptionMethod> element within the <EncryptedKey> element. The XML encryption specification defines two algorithms for key transport, as follows:
Algorithm name: RSA-v1.5
Identifying URI: http://www.w3.org/2001/04/xmlenc#rsa-1_5
Implementation: Required

Algorithm name: RSA-OAEP
Identifying URI: http://www.w3.org/2001/04/xmlenc#rsa-oaep-mgf1p
Implementation: Required
Key Agreement
The key agreement algorithm is used to derive the shared secret key based on compatible public keys from both the sender and its recipient. This is represented using the <AgreementMethod> element as a child element of the <KeyInfo> element. The <AgreementMethod> element holds the information identifying the keys of the sender, key size information, and the computation procedure to obtain the shared encryption key. The XML encryption specification defines the following algorithm for key agreement:
Algorithm name: Diffie-Hellman
Identifying URI: http://www.w3.org/2001/04/xmlenc#dh
Implementation: Optional
Symmetric Key Wrap

Symmetric key wrap algorithms are shared-secret-key encryption algorithms designed for encrypting and decrypting symmetric keys. These algorithms are identified as the value of the Algorithm attribute of the <EncryptionMethod> element within the <EncryptedKey> element. The XML encryption specification defines four algorithms for symmetric key wrap, as follows:

Algorithm name: TRIPLEDES KeyWrap
Identifying URI: http://www.w3.org/2001/04/xmlenc#kw-tripledes
Implementation: Required

Algorithm name: AES-128 KeyWrap
Identifying URI: http://www.w3.org/2001/04/xmlenc#kw-aes128
Implementation: Required

Algorithm name: AES-256 KeyWrap
Identifying URI: http://www.w3.org/2001/04/xmlenc#kw-aes256
Implementation: Required

Algorithm name: AES-192 KeyWrap
Identifying URI: http://www.w3.org/2001/04/xmlenc#kw-aes192
Implementation: Optional
Message Digest
The message digest algorithms are used to derive a hash value digest of a message or data. Digest algorithms appear, for example, within an <AgreementMethod> element, as part of the computation procedure for deriving the shared encryption key, and as the hash function in the RSA-OAEP key transport algorithm. The XML encryption specification defines the following four algorithms for message digest:
Algorithm name: SHA1
Identifying URI: http://www.w3.org/2000/09/xmldsig#sha1
Implementation: Required

Algorithm name: SHA256
Identifying URI: http://www.w3.org/2001/04/xmlenc#sha256
Implementation: Recommended

Algorithm name: SHA512
Identifying URI: http://www.w3.org/2001/04/xmlenc#sha512
Implementation: Optional

Algorithm name: RIPEMD-160
Identifying URI: http://www.w3.org/2001/04/xmlenc#ripemd160
Implementation: Optional
Message Authentication
For message authentication, the XML encryption specification uses the XML digital signature-based algorithm:
Algorithm name: XML Digital Signature
Identifying URI: http://www.w3.org/2000/09/xmldsig#
Implementation: Recommended
Canonicalization
Prior to XML encryption, applying canonicalization allows you to consistently serialize the XML into an octet stream that is an identical textual representation of the given XML document. XML encryption defines two kinds of canonicalization algorithms: inclusive canonicalization and exclusive canonicalization.

Inclusive Canonicalization: The serialized XML includes both the in-scope namespace and the XML namespace attribute context from the ancestors of the XML being serialized. The specification defines two algorithms specific to inclusive canonicalization.
- Algorithm name: Canonical XML without comments
  Identifying URI: http://www.w3.org/TR/2001/REC-xml-c14n-20010315
  Implementation: Optional
- Algorithm name: Canonical XML with comments
  Identifying URI: http://www.w3.org/TR/2001/REC-xml-c14n-20010315#WithComments
  Implementation: Optional
Exclusive Canonicalization: The serialized XML provides only the minimum required details about its namespace and associated XML namespace attribute context from the ancestors of the XML being serialized. This helps a signed XML payload preserve its structural integrity when a subelement is removed from the original message and/or inserted into a different context. The specification defines two algorithms specific to exclusive canonicalization.
- Algorithm name: Exclusive XML canonicalization without comments
  Identifying URI: http://www.w3.org/2001/10/xml-exc-c14n#
  Implementation: Optional
- Algorithm name: Exclusive XML canonicalization with comments
  Identifying URI: http://www.w3.org/2001/10/xml-exc-c14n#WithComments
  Implementation: Optional
Using the above example, let's take a look at the different scenarios of XML encryption and how XML encryption is represented.
After encryption, the complete <CreditCard> element, including its child elements, is encrypted and represented within a <CipherData> element.
  <TotalCost>75.00</TotalCost>
  <CreditCard>
    <Cardholder>R Nagappan</Cardholder>
    <Number>
      <EncryptedData Id="PYMT1"
          xmlns="http://www.w3.org/2001/04/xmlenc#"
          MimeType="text/xml"
          Type="http://www.w3.org/2001/04/xmlenc#Content">
        <CipherData>
          <CipherValue>safDFFFuyh</CipherValue>
        </CipherData>
      </EncryptedData>
    </Number>
    <Currency>'USD'</Currency>
    <Issuer>American Generous Bank</Issuer>
    <Expiration>04/02</Expiration>
  </CreditCard>
  <ShipAddress>1 Bills Dr, Newton, MA 01803</ShipAddress>
</PurchaseOrder>
In Example 6-25, both the <CreditCard> and <Number> element names are readable, but the character data content of <Number> is encrypted.
After super-encryption, the <EncryptedData Id="pd1"> element would appear as shown in Example 6-27b:
The resulting cipher data is the base64-encoding of the encrypted octet sequence of the <EncryptedData> element with "Id1".
Motivation of XKMS
PKI is based on public-private key pairs and has been used for securing business application infrastructures and transactions. Private keys are used by the service provider application, and public keys can be distributed to the clients. In the case of Web services, the use of XML encryption and XML signature requires integration with a PKI-based key solution to support related key management functionalities such as encryption, decryption, signature verification, and validation. There are a variety of PKI solutions, such as X.509, PGP, SPKI, and PKIX, available from multiple vendors. In the case of Web services, using PKI solutions from multiple vendors raises the issue of interoperability. For example, company A encrypts and signs its messages using an X.509 PKI solution from vendor A and communicates with company B, which uses a different PKI solution from vendor B. In this scenario, company B's PKI solution fails to verify and is unable to decrypt the message sent by company A. This has become a problem in securing Web services due to the issues of interoperability and managing cryptographic keys. XKMS introduces an easy solution for managing PKI-related functionalities by offloading PKI from applications to a trusted service provider. The trusted service provider facilitating the XKMS service provides a PKI solution "under the hood." This means that client applications and Web services relying on XKMS do not require a PKI solution of their own. Instead, they delegate all PKI-related responsibilities to an XKMS provider (trusted service) and issue XML-based requests for obtaining PKI services from it.
information may be available as part of the message as child elements <ds:KeyName> and <ds:RetrievalMethod>, which specify the name and location of the key information, respectively. Figure 6-4 is a sequence diagram illustrating the X-KISS locate service, where a Web service sends an XML request to a trust service provider to locate the key and obtains the key value. The Web services requester (or provider) receives a signed XML document that contains a <ds:KeyInfo> element. The <ds:KeyInfo> element specifies a <ds:RetrievalMethod> child element that mentions the location of an X.509 certificate containing the public key. The application sends an XML request to the trust service that specifies the <ds:KeyInfo> element, and in return, the trust service sends a response that specifies the <KeyName> and <KeyValue> elements.
Example 6-28 represents an XML request to a trust service to locate the key information. The <Locate> element contains the complete query for locating the key. The child element <Query> contains the <ds:KeyInfo>, which may hold either a <ds:KeyName> or a <ds:RetrievalMethod>. The <Respond> element specifies the required return values, such as the <KeyName> and the <KeyValue>.
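In outline, such a locate request takes the following shape (namespace declarations are omitted and the key name is illustrative):

  <Locate>
    <Query>
      <ds:KeyInfo>
        <ds:KeyName>CN=RRN,O=CS,ST=BOSTON,C=MA</ds:KeyName>
      </ds:KeyInfo>
    </Query>
    <Respond>
      <string>KeyName</string>
      <string>KeyValue</string>
    </Respond>
  </Locate>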
Example 6-29 shows the representation of the XML response from the trust service, which provides the result, including the key name and key value.
In this response, the <LocateResult> element contains the complete response from the trust service. The child element <Result> contains the response status, which can be either success or failure. The <Answer> element specifies the actual return values, such as the <KeyName> element that mentions the name and the <KeyValue> element holding an X.509 certificate.
In this representation, the <Validate> element contains the complete query for validating the binding information, and the child element <Query> contains the <ds:KeyInfo> or a <ds:KeyName>. The <Respond> element specifies the required return values, such as the <ds:KeyName> and the <ds:KeyValue>. Example 6-31 shows the representation of the XML response from the trust service, which mentions the status of the binding between the <ds:KeyName> and <ds:KeyValue> and its validity.
In this response, the <ValidateResult> element contains the complete response from the trust service, and the child element, <Result>, contains the response, which can be either success or failure. The <Answer> element specifies the actual return values in the child element <KeyBinding>, defining the key id <KeyID>, key information <KeyInfo>, and validity <ValidityInterval> as its child elements.
      http://www.cspsecurity.com/myapplication?company=csp;CN=web
    </ds:KeyName>
  </ds:KeyInfo>
  <PassPhrase>70lkjwer-94-09i-0</PassPhrase>
</Prototype>
<AuthInfo>
  <AuthUserInfo>
    <ProofOfPossession>
      <ds:Signature URI="#mykeybinding" [RSA-Sign (KeyBinding, Private)] />
    </ProofOfPossession>
    <KeyBindingAuth>
      <ds:Signature URI="#mykeybinding" [HMAC-SHA1 (KeyBinding, Auth)] />
    </KeyBindingAuth>
  </AuthUserInfo>
</AuthInfo>
<Respond>
  <string>KeyName</string>
  <string>KeyValue</string>
  <string>RetrievalMethod</string>
</Respond>
</Register>
In Example 6-32, the complete registration request is identified by the <Register> element. The <Prototype> element represents the prototype of the key and binding information. Because the request is intended for the registration of a client-generated key pair, it contains the key value information as part of the <ds:KeyInfo> element. The <PassPhrase> element provides the information for authenticating the client with the service provider. During registration, the <AuthInfo> element provides the data that authenticates the request, mentioning the authentication type and algorithm used. Because it is a client-generated key pair, the request also includes the proof-of-possession of the private key using the <ProofOfPossession> element. The client uses a previously registered key to sign the <Prototype> element and represents the signature under the <KeyBindingAuth> element. The <Respond> element specifies the actual return values via the <KeyName>, <KeyValue>, and <RetrievalMethod> elements. In the case of a service-generated key pair, the request does not contain the public-key information, and the XKMS trust service provider responds to the request with the private key along with the binding information. Now, let's take a look at Example 6-33, the response obtained from the XKMS trust service provider after it registered the client-generated key pair.
The complete response from the trust service is identified by the <RegisterResult> element. The <Answer> element contains the requested key binding information as the child element <KeyBinding>. The <KeyID> identifies the key registered and <ds:RetrievalMethod> provides the URI location of the key and its type.
The key revocation request is quite similar to the registration request, except that the <Status> element in the request message is specified as Invalid, and the request identifies the <KeyID> and <ds:KeyInfo> elements. The revocation response from the service provider is quite similar to the registration response message, except that the prototype <Status> element is specified as Invalid. If the XKMS service provider does not possess the key information, it responds with the <Status> element specified as NotFound.
X-BULK
X-BULK defines a single batch element that can contain multiple registration requests, responses, and status requests. The responding XKMS service processes the entire batch and returns a single response after processing. Without X-BULK support, XKMS would be limited to a single key registration and validation at a time. X-BULK operations are primarily meant for issuing multiple certificates to support deploying smart phone devices, smart cards, modems, and so forth. X-BULK defines batch elements, such as <BulkRegister>, <BulkResponse>, <BulkStatusRequest>, and <BulkStatusResponse>, representing registration requests and responses, and status requests and responses. Each of these batch elements contains request or response messages that include requests or responses that are independently referenced. Example 6-34 is an X-BULK request made by a client application for running a bulk registration of certificates.
          <userID xmlns="urn:csp">12345</userID>
        </ClientInfo>
      </Request>
      <Request>...</Request>
      <Request>...</Request>
    </Requests>
  </SignedPart>
  <dsig:Signature>...</dsig:Signature>
</BulkRegister>
In Example 6-34, the <BulkRegister> element represents a bulk request message that contains the batch ID, the type of response from the XKMS service, the sequence of the requests, and the signature used to sign the bulk request. The <BatchHeader> child element consists of general batch-related information, such as the batch ID, batch creation date, and the number of requests included in the batch. The <Request> element carries the individual requests, including the <KeyID> and <ds:KeyInfo> elements for those requests. The <ClientInfo> element specifies client-specific information about each request that can be used by the trust services provider for bookkeeping. The <dsig:Signature> element specifies the digital signature used to sign the X-BULK message. Example 6-35 is an example X-BULK response message received from the XKMS service provider after processing the bulk request containing three individual requests.
In Example 6-35, the <BulkRegisterResult> element represents the bulk response containing information such as the batch ID, the number of results included, the actual results to the registration requests, and the signature used to sign the given bulk response. The <BatchID> element identifies the relationship between the batch elements of the bulk request and bulk response. The <RegisterResults> element contains the individual <xkms:RegisterResult> elements, which hold the registration details of the individual requests.
Motivation of WS-Security
In Web-services communication, SOAP defines the structure of an XML document and the rules and mechanisms that can be used to enable communication between applications. SOAP does not define or address any specific security mechanisms. Using SOAP headers provides a way to define and add features, enabling application-specific security mechanisms like digital signature and encryption. Incorporating security mechanisms in SOAP headers poses several complexities and challenges in an end-to-end Web-services security scenario that mandates message-level and transport-level security features such as security context propagation, support for multiple security technologies, and maintaining message integrity and confidentiality across participating intermediaries. More importantly, incorporating security mechanisms in SOAP headers limits interoperability in certain aspects of addressing support for a variety of supporting security infrastructures, such as PKI, binary security tokens, digital signature formats, encryption mechanisms, and so forth. The WS-Security specification incorporates a standard set of SOAP extensions required for securing Web services and implementing message authentication, message integrity, message confidentiality, and security token propagation. It also defines mechanisms for supporting a variety of security tokens, signature and encryption mechanisms, and standards-based security technologies, including PKI and Kerberos. The goal of WS-Security is to provide secure SOAP messages that support multiple security token formats for authentication or authorization, multiple signature formats, multiple encryption technologies, and multiple trust domains.
WS-Security Definitions
To understand WS-Security, it is important to know the terms and definitions specified in the WS-Security 1.0 specification. Let's take a look at some of the key terms and definitions.

Claim: A declaration made by an entity; the claim can concern a name, identity, key, group, privilege, capability, and so forth.

Claim Confirmation: The process of verifying that a claim belongs to an entity.

Security Token: A security token (examples include a username, an X.509 certificate, and a Kerberos token) represents a collection of one or more claims.

Signed Security Token: A security token that is asserted and cryptographically signed by an authority.

Trust: A characteristic that one entity is willing to accept and rely upon another entity to execute a set of actions and/or to make a set of assertions about a set of subjects and/or scopes.
In Web-services communication, applying encryption can be done using standard SSL/TLS mechanisms, and the message can be encrypted in its entirety and sent confidentially to one or more recipients. Using SSL/TLS satisfies transport-level confidentiality requirements, but it does not solve message-level requirements when parts of an XML message must be made confidential by using selective encryption and then signed by different users. WS-Security adopts and builds on the XML encryption specifications for encrypting and decrypting messages. This facilitates encryption of messages in their entirety or as selected parts intended for multiple recipients based on their signed identity and privileges for viewing and consuming them. WS-Security also supports the use of SOAP intermediaries.
Figure 6-5. The WS-Security message structure and its core elements
Let's take a closer look at the WS-Security structural representation and its elements, illustrating the usage of security tokens, XML signature, and XML encryption.
Namespaces
The WS-Security 1.0 specification mandates the use of the following two XML namespaces in implementations, and the following namespace URLs must be used to obtain the schema files. The prefix wsse identifies the namespace for WS-Security extensions, while wsu identifies the namespace for global utility attributes. Table 6-1 shows the prefixes and namespaces for WS-Security extensions.
Table 6-1. Namespaces for WS-Security Extensions
Prefix — Namespace

wsse — http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd

wsu — http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd
<wsse:Security>
The <wsse:Security> element is the parent element that represents a complete WS-Security-enabled SOAP message, including mechanisms for security tokens, applying signatures, and encryption. A WS-Security-enabled SOAP message may contain one or more <wsse:Security> header elements. Each <wsse:Security> element contains the security information applied to its intended message recipient or SOAP intermediaries. WS-Security headers intended for multiple recipients should be specified with an optional role <S:role> attribute. It is optional to include a mustUnderstand="true" attribute, which forces the message recipient to generate a SOAP fault when the underlying implementation does not support the WS-Security specifications. Example 6-36 shows how a <wsse:Security> element can be represented.
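In outline, a <wsse:Security> header inside a SOAP envelope takes the following shape (the SOAP 1.2 envelope namespace is assumed, and the header contents are elided):

  <S:Envelope xmlns:S="http://www.w3.org/2003/05/soap-envelope">
    <S:Header>
      <wsse:Security S:mustUnderstand="true"
          xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
        <!-- security tokens, <ds:Signature>, and XML encryption elements go here -->
      </wsse:Security>
    </S:Header>
    <S:Body>...</S:Body>
  </S:Envelope>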
<wsse:UsernameToken>
The WS-Security specification defines how to attach security tokens for sending and receiving security information between Web-services partners. The <wsse:UsernameToken> element provides a way to insert a username and additional username-specific information based on schemas. Example 6-37 shows how the <wsse:UsernameToken> element can be represented.
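In outline, a username token with a digested password takes the following shape (the username, password digest, nonce, and timestamp values are illustrative, and the Type URI is abbreviated):

  <wsse:UsernameToken>
    <wsse:Username>rnagappan</wsse:Username>
    <wsse:Password Type="...#PasswordDigest">Fgh7kQ9rT...=</wsse:Password>
    <wsse:Nonce>h5KmWq3pA...==</wsse:Nonce>
    <wsu:Created>2005-10-01T09:00:00Z</wsu:Created>
  </wsse:UsernameToken>

Combining a nonce and a creation timestamp with the password digest helps the recipient detect replayed tokens.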
<wsse:BinarySecurityToken>
Similar to username tokens, WS-Security allows representing binary security tokens such as X.509 certificates, Kerberos tickets, and other non-XML security information. For interpreting binary-formatted tokens, it uses two attributes, ValueType and EncodingType. The ValueType attribute indicates the kind of security token, such as wsse:X509v3 or wsse:Kerberosv5ST. The EncodingType attribute specifies the encoding format of the binary data, such as Base64Binary. Example 6-38 shows how a <wsse:BinarySecurityToken> element indicating an X.509 certificate can be represented.
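In outline, such a binary security token takes the following shape (the Id and certificate value are illustrative):

  <wsse:BinarySecurityToken wsu:Id="X509Token"
      ValueType="wsse:X509v3"
      EncodingType="wsse:Base64Binary">
    MIID5jCCA0+gA...lYZ==
  </wsse:BinarySecurityToken>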
Example 6-40 shows how a REL license can be represented using the <r:license> element under <wsse:Security>.
<wsse:SecurityTokenReference>
The <wsse:SecurityTokenReference> element defines a URI that locates where a security token can be found. This provides the flexibility to obtain a security token from named external locations accessible via URI. This element can also be used to refer to a security token contained within the same SOAP message header.
<ds:Signature>
The WS-Security specification builds on the W3C XML signature specification for representing digital signatures in the WS-Security headers. The <ds:Signature> element can represent the digital signature and other related information, including how the signature was generated. A typical instance of <ds:Signature> contains a <ds:SignedInfo> element describing the information being signed, a <ds:SignatureValue> element containing the bytes that make up the digital signature, a <ds:Reference> element identifying the information being signed along with transformations, and a <ds:KeyInfo> element indicating the key used to validate the signature. A <ds:SignedInfo> element contains the identifiers for the canonicalization and signature method algorithms. The <ds:KeyInfo> element may indicate where to find the public key, which can be referenced using a <wsse:SecurityTokenReference> element. Example 6-41 shows how a <ds:Signature> element can be represented in a WS-Security-enabled SOAP message.
    ...
    <ds:SignedInfo>
      <ds:CanonicalizationMethod Algorithm="..."/>
      <ds:Reference URI="#Body">
        ...
        <ds:Transforms> ... </ds:Transforms>
        <ds:DigestMethod> ... </ds:DigestMethod>
        ...
      </ds:Reference>
    </ds:SignedInfo>
    <ds:SignatureValue> ... </ds:SignatureValue>
    <ds:KeyInfo>
      <wsse:SecurityTokenReference> ... </wsse:SecurityTokenReference>
    </ds:KeyInfo>
    </ds:Signature>
    ...
  </wsse:Security>
</SOAP:Header>
<SOAP:Body Id="Body"> ... </SOAP:Body>
</SOAP:Envelope>
Refer to the "XML Signature" section earlier in this chapter for more information and details about generating and validating XML signatures.
<xenc:EncryptedData>
The WS-Security specification builds on the W3C XML encryption specification for representing encrypted data in SOAP messages. The <xenc:EncryptedData> element can represent the encrypted data and other related information, including the encryption method and the key information. A typical instance of <xenc:EncryptedData> contains an <xenc:EncryptionMethod> element specifying the applied encryption method, an <xenc:CipherData> element containing the cipher value of the encrypted data, and a <ds:KeyInfo> element indicating the key used for encryption and decryption. An <xenc:ReferenceList> element can be used to create a manifest of the encrypted parts of a message envelope expressed using <xenc:EncryptedData> elements. Example 6-42 shows how an <xenc:EncryptedData> element is represented in a WS-Security-enabled SOAP message.
  <xenc:EncryptedData Id="msgcontId"
      Type="http://www.w3.org/2001/04/xmlenc#Content">
    <xenc:EncryptionMethod Algorithm="..."/>
    <ds:KeyInfo>
      <KeyName>...</KeyName>
    </ds:KeyInfo>
    <xenc:CipherData>
      <xenc:CipherValue> ... </xenc:CipherValue>
    </xenc:CipherData>
  </xenc:EncryptedData>
</SOAP:Body>
</SOAP:Envelope>
Refer to the "XML Encryption" section earlier in this chapter for more information and details about encryption and decryption using XML encryption.
<xenc:EncryptedKey>
The <xenc:EncryptedKey> element is used to represent encrypted keys. In typical usage, a SOAP message carries both the encrypted data and the encrypted symmetric key needed to read that data. The symmetric key is encrypted using the recipient's public key. When the message is received, the recipient uses its private key to decrypt the encrypted symmetric key and then uses this symmetric key to decrypt the actual data. The structure of an <xenc:EncryptedKey> element is quite similar to that of an <xenc:EncryptedData> element, with three sub-elements: an <xenc:EncryptionMethod> element specifying how the key was encrypted, an <xenc:CipherData> element containing the cipher value of the encrypted key, and a <ds:KeyInfo> element specifying information about the key used. Example 6-43 shows how an <xenc:EncryptedKey> element is represented in a WS-Security-enabled SOAP message.
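In outline, an encrypted key carried in the security header, assuming RSA-v1.5 key transport and a reference to encrypted body content (the URI value is illustrative), takes the following shape:

  <wsse:Security>
    <xenc:EncryptedKey>
      <xenc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5"/>
      <ds:KeyInfo>
        <wsse:SecurityTokenReference> ... </wsse:SecurityTokenReference>
      </ds:KeyInfo>
      <xenc:CipherData>
        <xenc:CipherValue> ... </xenc:CipherValue>
      </xenc:CipherData>
      <xenc:ReferenceList>
        <xenc:DataReference URI="#msgcontId"/>
      </xenc:ReferenceList>
    </xenc:EncryptedKey>
  </wsse:Security>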
Again, refer to the "XML Encryption" section earlier in this chapter for more information and details about encryption and decryption using XML encryption.
<wsu:Timestamp>
The WS-Security specification recommends the use of timestamps to determine the timeliness and validity of security semantics. If a recipient receives two contradictory messages, the timestamps inserted in the messages can be used to determine the validity of one of them. The <wsu:Timestamp> element allows you to define the message creation time and its expiration time. Example 6-44 shows how a <wsu:Timestamp> element is represented in a WS-Security-enabled SOAP message.
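In outline, a timestamp with creation and expiration times takes the following shape (the time values are illustrative):

  <wsu:Timestamp>
    <wsu:Created>2005-10-01T09:00:00Z</wsu:Created>
    <wsu:Expires>2005-10-01T09:05:00Z</wsu:Expires>
  </wsu:Timestamp>

A recipient can reject any message whose <wsu:Expires> time has already passed, which also limits the window for replay attacks.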
Sun JWSDP
Java Web Services Developer Pack (JWSDP) is a Web services development kit that provides build, deploy, and test environments for Web-services applications and components. It brings together a set of Java APIs and reference implementations for building XML-based Java applications that support key XML Web services industry-standard initiatives such as SOAP, WSDL, UDDI, WS-I Profiles, XML Encryption, XML Digital Signature, and WS-Security. At the time of writing this book, Sun Microsystems released JWSDP 1.5, which includes the following APIs and tools for Web services:

Java API for XML-based RPC (JAX-RPC)

Java Architecture for XML Binding (JAXB)

Java API for XML Registries (JAXR)

XML and Web Services Security

XML Digital Signature

Java API for XML Processing (JAXP)

SOAP with Attachments API for Java (SAAJ)

JWSDP 1.5 also implements the WS-I Basic Profile 1.1 and the WS-I Basic Attachment Profile for enabling interoperability.
WS-Security in JWSDP
JWSDP 1.5 provides a full implementation of the OASIS Web Services Security 1.0 (WS-Security) specification as the XWS-Security APIs for providing message-level security for SOAP messages. This allows representing message-level security mechanisms based on XML encryption and XML digital signatures. It also provides support for applying authentication credentials such as username/password and certificates.
J2EE 1.4
With the release of J2EE 1.4, the J2EE platform allows enabling selected J2EE components to participate in Web-services communication. It adopts APIs and reference implementations from JWSDP. As a key requirement, it mandates the implementation of the JAX-RPC 1.1 and EJB 2.1 specifications that address the role of Web services and how to expose J2EE components as Web services. In compliance with WS-I Basic Profile guidelines, J2EE ensures interoperability with all Web-services providers that adhere to WS-I specifications. The J2EE Web services security builds on the existing J2EE security mechanisms for securing Web-service interactions by adopting a flexible security model that uses both declarative and programmatic security mechanisms. In addition, it also allows incorporating security mechanisms used for Web services built using JAX-RPC and SAAJ.
XML Firewall
XML firewall appliances reside in the DMZ behind network firewall appliances and operate on the inbound and outbound XML traffic of a Web-services provider or requester. These appliances help in identifying XML content-level threats and vulnerabilities based on message compliance, payload, and attachments that are not detected by network firewalls. In addition, XML firewalls offer functionalities that support XML encryption, digital signatures, schema validation, access control, and SSL communication. An XML firewall appliance often runs at wire speeds superior to those of traditional software infrastructure. Adopting an XML firewall delivers significant performance gains in Web-services transactions that involve SSL communication, XML filtering, XML schema and message validation, signature validation, decryption, XML parsing, and transformation. There is a growing list of XML-aware security appliances currently available, including XML firewalls and XML processing accelerators. It is noteworthy that some security hardware vendors provide support for Web-services security standards and specifications.
Summary
Web services are gaining industry-wide acceptance because they can solve IT problems using standards and standards-based technologies. They deliver a promising solution that allows IT services to be interoperable and to integrate using XML-based messages and industry-standard protocols. With the involvement of leading industry vendors in XML Web-services standards initiatives, there is a growing list of standards and specifications for developing and deploying Web services. Web services form the basis for standards-based infrastructure, communication, and application development in the industry today. The security of Web services is the biggest concern today as the industry faces a continually growing list of requirements and challenges. In this chapter, we began with a discussion about Web services' architectural concepts, building blocks, core security challenges and requirements, and standards and specifications. We looked at both the high-level and in-depth technical details of the key Web-services security specifications and standards that contribute to the end-to-end security of a Web-services infrastructure. In particular, we looked at the following:

Web services architecture and its building blocks

Web services threats and vulnerabilities

Web services security requirements

Web services security standards

XML signature (XML DSIG)

XML encryption (XML ENC)

XML key management services (XKMS)

OASIS Web services security (WS-Security)

WS-I Basic Security Profile

We discussed the critical security factors and considerations that need to be addressed with regard to the implementation of Web services. We also briefly looked at the Java-based Web-services infrastructure providers that offer solutions in compliance with Web-services standards and specifications. In the next chapter, we will explore the identity management architecture and its technologies.
References
[W3C] XML Signature Syntax and Processing Rules. W3C Recommendation, February 12, 2002. http://www.w3.org/TR/xmldsig-core/

[W3C] XML Encryption Syntax and Processing Rules. W3C Recommendation, December 10, 2002. http://www.w3.org/TR/xmlenc-core/

[OASIS] WS-Security 1.0 Standard and Specifications. http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0.pdf

[WS-I] Web Services Security Basic Profile 1.0, Working Group Draft. http://www.ws-i.org/Profiles/BasicSecurityProfile-1.0-2004-05-12.html

[W3C] Web Services Architecture. W3C Working Group Note, February 11, 2004. http://www.w3.org/TR/ws-arch/

[Sun J2EE Blueprints] Designing Web Services with the J2EE Platform, 2nd Edition: Guidelines, Patterns, and Code for Java Web Services. http://java.sun.com/blueprints/guidelines/designing_webservices/

[Ramesh Nagappan, Robert Skoczylas et al.] Developing Java Web Services: Architecting and Developing Java Web Services. Wiley, 2002.
identities. The following sections introduce the concept of identity management, the associated industry standards, and their logical architecture. They also discuss how these standards and standards-based technologies can help address the challenges of managing user identities.
This chapter focuses on SAML, Liberty, and XACML, while Chapter 13, "Secure Service Provisioning," will cover SPML in more detail.
Introduction to SAML
Security Assertion Markup Language (SAML) is derived from two earlier security initiatives: Security Services Markup Language (S2ML) and Authorization Markup Language (AuthXML). SAML is an XML-based framework for exchanging security assertion information about subjects. Subjects are entities that have identity-related information specific to a security domain. SAML plays a vital role in delivering a standards-based infrastructure for enabling single sign-on without requiring the use of a single vendor's security architecture. However, SAML does not provide the underlying user authentication mechanism.
SAML 1.0
SAML 1.0 was accepted as an OASIS standard in November 2002. It is endorsed by leading industry vendors for the support of single sign-on and interoperability among security infrastructures. SAML 1.0 addressed one key aspect of identity management: how identity information can be communicated from one domain to another.
SAML 1.1
OASIS released the SAML version 1.1 specification on September 2, 2003. SAML 1.1 is similar to SAML 1.0 but adds support for "network identity," defined by Liberty Alliance specifications
[Liberty]. SAML 1.1 support for the Liberty Alliance specifications allows exchanging user authentication and authorization information securely between Web sites, within an organization or between organizations over the Internet, by Web account linking and role-based federation. SAML 1.1 also introduced guidelines for the use of digital certificates that allow signing of SAML assertions. There are also changes in the digital signature guidelines, such as the recommended use of the exclusive canonical transformation. For details about the differences, see the SAML 1.1 differences document [SAML11Diff]. The specification did not address all of the problems in the single sign-on or identity management domain. For example, it did not provide a standard authentication protocol that supports a variety of authentication devices and methods. Although SAML provides a flexible structure for encapsulating user credentials, there is still a problem in integrating with a Kerberos-based security infrastructure such as Microsoft Windows Kerberos. SAML 2.0 currently includes a work item, "Kerberos SAML Profiles," that addresses this integration requirement. This subject is discussed in the next section.
SAML 2.0
SAML 2.0 specifications were approved by OASIS as a standard in March 2005. SAML 1.1 defined the protocols for single sign-on, delegated administration, and simple policy management. Liberty's Identity Federation Framework (ID-FF) 1.2 was provided to the SAML committee, and SAML 2.0 was the result of converging previous SAML versions with Liberty ID-FF and with Shibboleth. SAML 2.0 fills in the gaps left in SAML 1.1 by including global sign-out, session management, and extension of the identity federation framework for opt-in account linking across Web sites (used by Liberty). Among the additions in SAML 2.0, there are several interesting items:

- Enhancement of SAML assertions and protocols in support of federated identity and global sign-out.
- Creation of new SAML attribute profiles, including the X.500/LDAP attribute profile and the XACML attribute profile. These attribute profiles simplify the configuration and deployment of systems that exchange attribute data.
- Pseudonyms that provide a privacy-enabling "alias" for a global identifier to avoid collusion between service providers. SAML 2.0 uses an opaque pseudo-random identifier (with no discernible correspondence with meaningful identifiers such as e-mail addresses) between service providers to represent principals. With the use of pseudonyms, SAML 2.0 defines how two service providers can establish and manage these pseudonyms for the principals they are working with.
- SAML meta-data, which specifies how configuration and trust-related data can be more easily deployed in SAML systems. Identity providers and service providers often need to agree on data such as roles, identifiers, and supported profiles. SAML meta-data provides a structure for identifying the actors involved in the various profiles, such as single sign-on identity provider, attribute authority, and requester.
- Better support for mobile and active devices, adding more authentication contexts that accommodate new authentication requirements, such as smart card-based PKI and Kerberos.
- Support for encryption of attribute statements, name identifiers, or entire assertion statements.
- Support for privacy, using privacy policy and settings that service providers can obtain and use to express a principal's consent to particular operations.
- Discovery of multiple identity providers, using a provider discovery profile that uses a cookie written in a common domain between the identity provider and service providers.
- Use of an Authentication Request protocol (<AuthnRequest>), which enables an interoperable "destination-site-first" scenario. In other words, the <AuthnRequest> protocol allows a user to approach a service provider first and then be directed to log in at the identity provider if the service provider deems it necessary.
- Conformance requirements and interoperability details.

SAML 2.0 has some changes in the core specification as well. It has significant changes in scope, particularly with regard to extending the functionality and aligning itself with other related initiatives. Here are some examples of the functionalities:

- Support for global logout and time-out (session support)
- Discovery of a SAML Web service through a WSDL file (meta-data exchange and protocol)
- Exchange of name identifiers and pseudonyms between sites (identity federation, or federated name registration protocol)
- Multi-hop delegation and intermediaries (multi-participant transactional workflows)
- Liberty authentication context exchange and control (authentication context)
- Additional protocol binding for direct HTTP use (HTTP-based assertion referencing)
- Coordination with the IETF Simple Authentication and Security Layer effort (SASL support)
- Integration and reconciliation of SAML 2.0 and XACML 2.0, including attribute usage, authorization decision requests and responses, and policy queries and responses

In addition to the SAML assertion, SAML 2.0 introduces the following new message protocols: artifact protocol, federated name registration protocol, federation termination protocol, single logout protocol, and name identifier mapping protocol. [SAML2Core] has a full description of these new messaging protocols, which are not discussed here. One new message protocol of interest is the logout request (specified by the <LogoutRequest> tag), which supports global logout. This new support means that if a principal (a user or a system entity) logs out as a session participant, then the session authority will issue a logout request to all session participants. The reason for the global logout can be "urn:oasis:names:tc:SAML:2.0:logout:user" (the user decides to terminate the session) or "urn:oasis:names:tc:SAML:2.0:logout:admin" (an administrator wishes to terminate the session, for example, due to timeout). These and other attributes will be discussed in later sections.
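To make the shape of this protocol message concrete, the following is a minimal sketch of a <LogoutRequest>, assuming the standard SAML 2.0 protocol and assertion namespaces; the issuer URL, name identifier, and session index values are hypothetical:

<samlp:LogoutRequest
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_logout-42" Version="2.0"
    IssueInstant="2005-10-01T12:00:00Z"
    Reason="urn:oasis:names:tc:SAML:2.0:logout:user">
  <!-- The session authority that is propagating the logout -->
  <saml:Issuer>https://idp.example.com</saml:Issuer>
  <!-- Opaque, pseudo-random identifier for the principal -->
  <saml:NameID
      Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent">
    _a1b2c3d4
  </saml:NameID>
  <!-- Identifies which of the principal's sessions to terminate -->
  <samlp:SessionIndex>_session-7</samlp:SessionIndex>
</samlp:LogoutRequest>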
SAML Profiles
The SAML specification defines a standard mechanism for representing security information. This mechanism allows security information to be shared by multiple applications so that they can address single sign-on requirements. The notion of a SAML profile addresses these core interoperability requirements. A SAML profile allows the protocols and assertions to facilitate the use of SAML for a specific application purpose; it defines a set of rules and guidelines for how to embed SAML assertions into, and extract them from, a protocol or other context of use. Using a SAML profile, business applications can exchange security information in SAML messages seamlessly and easily interoperate between SAML-enabled systems. SAML 2.0 defines the following SAML profiles:

- Web Browser SSO Profile: Single sign-on using standard browsers to multiple service providers. The Web browser SSO profile uses the SAML Authentication Request protocol in conjunction with the HTTP Redirect, HTTP POST, and HTTP Artifact bindings. SAML 2.0 combines the two Web browser profiles from SAML 1.1 into one single profile.
- Enhanced Client and Proxy Profile: This profile defines the rules for a system entity to contact the appropriate identity provider, possibly in a context-dependent fashion. It uses the Reverse SOAP (PAOS) binding.
- Identity Provider Discovery Profile: This profile defines how a service provider discovers the identity provider used by the Principal.
- Single Logout Profile: This profile defines how to terminate the sessions managed by the session authority (or identity provider).
- Name Identifier Management Profile: This profile defines how to exchange a persistent identifier for a principal with the service providers and how to later propagate changes in its format or value.
- Artifact Resolution Profile: SAML 2.0 defines an Artifact Resolution protocol to dereference a SAML artifact into a corresponding protocol message. The HTTP Artifact binding can then leverage this Artifact Resolution protocol in order to pass SAML protocol messages by reference.
- Assertion Query/Request Profile: This profile defines a protocol for requesting existing assertions by reference or by querying on the basis of a subject and additional statement-specific criteria.

In addition, SAML profiles also define rules for mapping attributes expressed in SAML to another attribute representation system. This type of SAML profile is known as an Attribute Profile. SAML 2.0 defines five different Attribute
Profiles [SAML2Profiles]:

- Basic Profile: This profile defines simple string-based SAML attribute names.
- X.500/LDAP Profile: This profile defines a common standardized convention for SAML attribute naming using OIDs (Object Identifiers) expressed as URNs (Uniform Resource Names) and accompanied by the type attribute xsi:type (see the sketch at the end of this section).
- UUID Profile: This profile defines SAML attribute names as UUIDs (Universal Unique Identifiers) expressed as URNs.
- DCE PAC Profile: This profile defines how to represent DCE realm, principal, primary group, local group, and foreign group membership information in SAML attributes.
- XACML Profile: This profile defines how to map SAML attributes cleanly to XACML attribute representations.

To support Web services and the usage of SAML tokens in Web services communication, the OASIS WS-Security TC developed a SAML Token profile, which defines the rules and guidelines for using SAML assertions in Web services communication. OASIS also ratified SAML Token profile 1.0 as an approved standard. Refer to Chapter 6, "Web Services Security: Standards and Technologies," for information about the role of SAML token profiles and how to use them in WS-Security.
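As an illustration of the X.500/LDAP Attribute Profile mentioned in the list above, the following sketch shows how a directory attribute might be carried as a SAML attribute. The OID shown is the standard one for givenName; the attribute value and the xs/xsi prefix bindings are assumed:

<saml:Attribute
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
    Name="urn:oid:2.5.4.42"
    FriendlyName="givenName">
  <!-- The value carries an explicit xsi:type per the profile -->
  <saml:AttributeValue xsi:type="xs:string">Joe</saml:AttributeValue>
</saml:Attribute>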
SAML Architecture
The original SAML specification introduced SAML using a domain model, which consists of the Credential Collector, Authentication Authority, Session Authority, Attribute Authority, and Policy Decision Point. These are the key system entities in providing single sign-on service to service requesters.

- Credential Collector: A system object that collects user credentials to authenticate with the associated Authentication Authority, Attribute Authority, and Policy Decision Point.
- Authentication Authority: A system entity that produces authentication assertions.
- Session Authority: A system entity (for example, an identity provider) that plays the role of maintaining the state related to the session.
- Attribute Authority: A system entity that produces attribute assertions.
- Attribute Repository: A repository where attribute assertions are stored.
- Policy Repository (or Policy): A repository where policies are stored.
- Policy Decision Point: A system entity that makes authorization decisions for itself or for other system entities that request authorization.
- Policy Enforcement Point: A system entity that enforces the security policy of granting or revoking access to resources for the service requester.
- Policy Administration Point: A system entity where policies (for example, access control rules about a resource) are defined and maintained.
SAML Assertions
A SAML assertion is a piece of data produced by a SAML authority (for example, an Authentication Authority) regarding either an authentication action performed on a subject (for example, a service requester), attribute information about the subject, or an authorization request (for example, whether the service requester can access a resource). There are three different SAML assertions:

- Authentication Assertion: An assertion that carries data about a successful authentication performed on a subject (for example, a service requester).
- Authorization Decision Assertion: An assertion that carries data about an authorization decision. For example, the authorization decision may indicate that the subject is allowed to access a requested resource.
- Attribute Assertion: An assertion that carries data about the attributes of a subject.
Client requests for access (for example, a credential or an authentication assertion) come from a System Entry Point (refer to the System Entity in Figure 7-1) and are routed to the different Authorities; authenticated and authorized requests are then routed to the relevant Policy Enforcement Point for execution. These architectural entities correspond to different instances of Certificate Authorities, Registration Authorities, and directory servers in real life (within an enterprise, an external trust service, or trading partners). For example, the Credential Collector is like a RADIUS server that front-ends the authentication process and passes the relevant user credentials to the directory server (or Authentication Authority) for authentication. Architecturally, SAML assertions are encoded in an XML package and consist of Basic Information (such as the unique identifier of the assertion and the issue date and time), Conditions (dependencies or rules for the assertion), and Advice (specification of the assertion for policy decisions). SAML now supports a variety of protocol bindings for invoking different assertions. These include SOAP, reverse SOAP (a multistage SOAP/HTTP exchange that allows an HTTP client to send an HTTP request containing a SOAP message), HTTP redirect (sending a SAML message via an HTTP 302 redirect), HTTP POST (sending a SAML message in a Base64-encoded HTML form control), HTTP artifact (transporting an artifact using HTTP by a URI query string or by an HTML form control), and URI (retrieving a SAML message by resolving a URI).
SAML Architecture
Some architects and developers use the term "SAML architecture" to refer to the SAML entity model. This does not refer to the physical architecture. The SAML domain model depicts the information entities (for example, System Entity) and their roles (for example, Policy Enforcement Point), but it does not correspond to infrastructure-level components such as a directory server or a policy server. Thus, architects and developers need to map these domain entities to logical architecture components. To illustrate how the SAML domain model is mapped to the SAML logical architecture, Figure 7-2 shows a scenario where a client requests access to remote resources in a single sign-on environment. Both the source site and the destination site collaborate to provide single sign-on security using SAML. The destination site has a number of remote resources and an existing authentication infrastructure with a custom-built authentication module (Policy Enforcement Point). It has implemented a SAML Responder (a SAML-enabled agent) that can intercept application requests for resources and initiate SAML assertion requests. The source site has built an authentication service (Authentication Authority), a directory server (Attribute Authority that stores the policy attributes), and a policy server (Policy Decision Point that determines what the client is entitled to). The SAML server (or authority) processes requests for SAML assertions and responds to the SAML Responder. These domain entities map easily to the security architecture components. The architecture components may take more than one role; for example, the authentication module may act as both Authentication Authority and Policy Enforcement Point.
The following describes the interaction between the system entities as per Figure 7-2:

- The client has already authenticated with the authentication service offered by the source site (Step 1).
- The client sends an application request to the remote resources at the destination site (Step 2).
- The destination site has a SAML responder (a SAML-enabled agent) that uses an authentication module (for example, a JAAS authentication module using a local directory server) for generating authentication assertions. The remote destination redirects the application request to the SAML responder (Step 3).
- The SAML responder issues a SAML authentication assertion request to the source site (Step 4).
- The SAML-enabled authentication service processes the SAML authentication assertion request and provides a response to the destination site (Step 5). Now the authentication module of the destination site knows that the client is already authenticated and will not require the client to log in again. The destination site then initiates a SAML attribute assertion request and an authorization decision assertion request to the source site to determine whether the client is entitled to access the remote resources.
The SAML request consists of a SOAP envelope with a SOAP body that contains the SAML request <samlp:Request>. The SAML request element may contain an AuthenticationQuery, AttributeQuery, or AuthorizationDecisionQuery element. A digital signature <ds:Signature> may also be generated and attached to the SOAP message, though this is not discussed here. Refer to Example 7-1 for an example of the SOAP message skeleton for a SAML request.
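In rough outline, such a SOAP message skeleton might look like the following sketch (namespace URIs follow SAML 1.1; the RequestID value and the query content are hypothetical):

<soap-env:Envelope
    xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/">
  <soap-env:Body>
    <samlp:Request
        xmlns:samlp="urn:oasis:names:tc:SAML:1.0:protocol"
        RequestID="_req-001" MajorVersion="1" MinorVersion="1"
        IssueInstant="2005-10-01T12:00:00Z">
      <!-- One of AuthenticationQuery, AttributeQuery, or
           AuthorizationDecisionQuery goes here -->
      <samlp:AttributeQuery>
        ...
      </samlp:AttributeQuery>
    </samlp:Request>
  </soap-env:Body>
</soap-env:Envelope>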
The SAML response includes a <Status> element along with the SAML assertion statements, such as AuthenticationStatement, AttributeStatement, and AuthorizationDecisionStatement. Example 7-2 shows an example of the SOAP message skeleton for a SAML response.
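A corresponding response skeleton might look like the following sketch (again with SAML 1.1 namespaces and hypothetical identifiers):

<soap-env:Envelope
    xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/">
  <soap-env:Body>
    <samlp:Response
        xmlns:samlp="urn:oasis:names:tc:SAML:1.0:protocol"
        xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion"
        ResponseID="_resp-001" InResponseTo="_req-001"
        MajorVersion="1" MinorVersion="1"
        IssueInstant="2005-10-01T12:00:01Z">
      <samlp:Status>
        <samlp:StatusCode Value="samlp:Success"/>
      </samlp:Status>
      <saml:Assertion ...>
        <!-- AuthenticationStatement, AttributeStatement, or
             AuthorizationDecisionStatement elements go here -->
      </saml:Assertion>
    </samlp:Response>
  </soap-env:Body>
</soap-env:Envelope>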
The SAML authority (or Issuing Authority) asserts that the client request has been authenticated and thus returns with a SAML response, as shown in Example 7-4.
Example 7-6 shows the corresponding response. The list of attribute names and their associated values varies across customer implementations. They may be local resource names or policy mnemonics that are not easily understandable to users.
<saml:AttributeStatement>
  <saml:Attribute Name="PaymentStatus"
      NameFormat="http://www.coresecuritypatterns.com">
    <saml:AttributeValue>JustPaid</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="CreditLimit"
      NameFormat="http://coresecuritypatterns.com">
    <saml:AttributeValue xsi:type="coresecuritypatterns:type">
      <coresecuritypatterns:amount currency="USD">
        1000.00
      </coresecuritypatterns:amount>
    </saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
</saml:Assertion>
</samlp:Response>
The SAML core specification for assertions and protocols (refer to [SAML2Core], p. 25) indicates that the <AuthorizationDecisionStatement> feature has been frozen in version 2.0. Thus, architects and developers should consider using XACML for enhanced authorization decision features.
Global Logout
Corporations that have implemented single sign-on integration for legacy applications and heterogeneous security infrastructures will likely also need a global logout capability. However, not all single sign-on implementations are capable of global logout. Single sign-on is usually initiated by a user sign-on action, but global logout can be initiated by a system event such as an invalidated session or an idle-session time-out. Many developers have added a session time-out feature (for example, a session that is idle for five minutes invalidates the previous sign-on session) to their single sign-on infrastructure so that idle user sessions exceeding the time-out limit trigger a global logout. The global logout capability addresses the potential security risks of replay or unauthorized access to resources from invalidated sessions. In a Web portal that aggregates access to disparate applications, once consumers perform a single sign-on to a primary service provider, they can access any remote resources to which they are entitled with the affiliated service providers. If a consumer decides to sign out of the security session with one particular service provider, the global logout functionality should disconnect all remaining security sessions with the other service providers. Similarly, if any of the service providers invalidates the user's security session, then the primary service provider should also perform a global logout. Typically, a service provider issues a SAML 2.0 global logout request (the <LogoutRequest> message sketched earlier), and the SAML authority processes the global logout request.
service provider. The SAML protocol is by nature a request-response protocol. The service provider can check whether SAML messages come from a valid requester at the origin by using the digital signature, in order to filter or discard an influx of invalid incoming requests that might cause a DoS. Requiring signed requests and the use of XML Signature (for example, using the element <ds:SignatureProperties> with a timestamp to filter an influx of identical requests in a DoS attack) helps reduce the risk associated with a DoS attack. Requiring client authentication below the SAML protocol level, with client-side certificates, helps track the source of attacks for diagnosis.
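As a structural sketch of that countermeasure, a timestamp can be carried as a signature property inside the XML Signature attached to the request. The ds: elements below follow the XML Signature schema, while the timestamp property element and its namespace are hypothetical:

<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
  <ds:SignedInfo>...</ds:SignedInfo>
  <ds:SignatureValue>...</ds:SignatureValue>
  <ds:Object>
    <ds:SignatureProperties>
      <!-- Signed timestamp; replayed copies of the same request
           can be detected and discarded -->
      <ds:SignatureProperty Target="#_req-001">
        <ts:Timestamp xmlns:ts="urn:example:timestamp">
          2005-10-01T12:00:00Z
        </ts:Timestamp>
      </ds:SignatureProperty>
    </ds:SignatureProperties>
  </ds:Object>
</ds:Signature>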
Man-in-the-Middle Attack
In addition, SAML messages are exposed to man-in-the-middle attacks (impersonating the assertion request using an HTML form) and forged assertions (altering an assertion). For man-in-the-middle attacks, architects and developers may want to use a SAML protocol binding that supports bilateral authentication, message integrity, and confidentiality, for example, through digital signatures. For forged assertions, architects and developers may enforce digital signing of the SAML response that carries the SAML assertions. The destination site can then ensure message integrity and authentication by verifying the signature and authenticating the issuer. Man-in-the-middle attacks can also be mitigated by securing the message transport using SSL/TLS, which ensures point-to-point tamperproof communication. There are also security risks related to SAML profiles. SAML profiles refer to the rules that describe how to embed SAML assertions into an XML framework and how to extract them from the framework. For the Web Browser Single Sign-on profile, it is possible for hackers to relay service requests, capture the returned SAML assertions or artifacts, and relay back a falsified SAML assertion. To mitigate this security risk, we need to use a number of countermeasures together. First, we need to use a system with strong bilateral authentication. HTTP over TLS/SSL is recommended with an appropriate cipher suite (strong encryption for confidentiality and data integrity) and X.509v3 certificates (for strong authentication). These countermeasures make man-in-the-middle attacks more difficult. For the Enhanced Client and Proxy profile (ECP), it is possible for hackers to intercept AuthnRequest and AuthnResponse SOAP messages, which allows subsequent impersonation of the Principal. The hackers may substitute any URL for the responseConsumerServiceURL value in the message header block (PAOS message header) before forwarding the AuthnRequest on to the enhanced client. The inserted URL value may simply point back to the attacker so that the hackers can masquerade as the legitimate service provider to the Principal. To mitigate this security risk, the identity provider can specify to the enhanced client the address to which the enhanced client must send the AuthnResponse. Thus, the responseConsumerServiceURL in the message header can only be used for error responses from the enhanced client.
Liberty Phase 1
Liberty Phase 1 introduced identity federation, which provides single sign-on and global sign-out to multiple application systems and infrastructures. This involves creating an identity-provider role that initiates the federation of two or more identities. Using the Liberty protocol, users are able to sign on once to a Liberty-enabled Web site and be seamlessly signed on to another Liberty-enabled Web site without needing to reauthenticate. In essence, Liberty Phase 1 delivers the Identity Federation Framework (ID-FF) version 1.1, which includes the following features:

- Federated identity life cycle: The life cycle of identity federation begins with exchanging meta-data to federate an identity. A principal (for example, a user who can acquire a federated identity) can then perform a single sign-on to one or multiple Liberty-enabled Web sites. During the process of single sign-on, the federated identity needs to be registered (using the name registration protocol). Upon completion of user activity, the principal performs a global logout, and the federated identity can be terminated.
- Meta-data: The Liberty meta-data provides an extensible framework for describing cryptographic keys, service endpoint information, and supported protocols and profiles at run-time. The meta-data classes include entity provider, entity affiliation, and entity trust. The origin and the document that contain these meta-data are verified using digital signatures.
- Static conformance requirements: The static conformance requirements define profiles for identity federation activities: identity provider, service provider basic, service provider complete, and Liberty-enabled client or proxy.
- Interoperability conformance and validation: There is a validation process for a vendor that wants to be licensed as Liberty-interoperable; the vendor needs to participate in a Liberty Alliance Interop event to validate such a compliance assertion.
- Security mechanisms: Liberty's identity federation framework supports both channel security and message security. Channel security allows the service provider to authenticate the identity provider using server-side certificates; it also supports mutual authentication between service providers and identity providers. Message security uses digital signatures to protect Liberty messages for data integrity and non-repudiation.
Liberty Phase 2
Liberty Phase 2 is a major enhancement to Phase 1. It delivers the following specifications:

- Identity Federation Framework (ID-FF) version 1.2: The ID-FF establishes identity federation under a circle of trust and supports single sign-on. Identity federation refers to linking all user accounts for the same user entity among different service providers and identity providers. Single sign-on denotes enabling a user to authenticate with the identity provider once in order to access remote services provided by multiple service providers under a circle of trust. The identity provider provides decentralized authentication of the user identity. The features of ID-FF include opt-in account linking, simplified sign-on, basic session management, user affiliation with Web sites, anonymity of user identities, and real-time discovery and exchange of meta-data.
- Identity Service Interface Specification (ID-SIS) version 1.0: The ID-SIS includes two profiles: personal identity and business identity. These profiles define important user attributes for exchanging identity information among service providers and identity providers on top of ID-WSF.
- Identity Web Services Framework (ID-WSF) version 1.0: The ID-WSF defines a framework to create, discover, and consume identity services. This includes permission-based attribute sharing, identity service discovery, an interaction service, security profiles for securing the discovery, SOAP protocol binding for ID-FF, extended client support for non-HTTP devices, and identity service templates to implement identity services on top of ID-WSF.

It is important to understand the roles of Liberty and SAML in the identity management space. Liberty Phase 2 is primarily based on SAML version 2.0 and builds an extension on top of it. The additions in Phase 2 are ID-FF, ID-SIS, and ID-WSF. Liberty is not competing with SAML. SAML provides single sign-on and identity management specifications, but it does not provide custom profiles for specific scenarios or industry use; security architects and developers have to build their own implementations and customize the profiles on top of the SAML specifications. Another goal of the Liberty Alliance is to share best practices of identity management, data privacy, and interoperability among Liberty Alliance-compliant products. The number of commercial Liberty-enabled identity management products is growing. A full list of Liberty-enabled products is available at http://www.projectliberty.org/resources/enabled.html.
Relationships
Service providers are affiliated with an identity provider in circles of trust based on Liberty-enabled technology and on operational requirements that define trust relationships among themselves. In addition, users federate their accounts (also known as local identities) with these service providers so that the same user identity can link to multiple accounts under different service providers. Under this mutually trusted environment, if a user authenticates with the identity provider, these service providers will honor the authentication. Such a business relationship is also known as a "Circle of Trust." Figure 7-4 summarizes the Liberty concept in the use of cross-domain single sign-on and depicts the following interactions:

- The User Agent sends an HTTP request to the Service Provider for single sign-on (Step 1).
- The Service Provider responds by redirecting the request to the Identity Provider (Step 2).
- The User Agent sends a request to the Identity Provider (Step 3).
- The Identity Provider responds by redirecting to the Service Provider (Step 4).
- The User Agent sends an authentication request to the Service Provider with the URI (Step 5).
Web Redirection
Web Redirection refers to actions that enable Liberty-enabled entities to provide services via user agents. It has two variants: HTTP-redirect-based redirection and form-POST-based redirection. HTTP-redirect-based redirection uses HTTP redirection and the syntax of URIs to provide a communication channel between identity providers and service providers. For instance, the user clicks a link in the Web page displayed in the user agent. The user agent sends an HTTP request for resource access to the service provider using HTTP GET. The service provider responds with an HTTP response with status code 302 (HTTP redirect) and an alternate URI (an identity provider URI such as http://www.myidentityprovider.com/auth) in the Location header field. The user agent then sends an HTTP request to the identity provider, and the identity provider can respond with a redirect that specifies the service provider URI in the Location header field. Finally, the user agent sends an HTTP request to the service provider using HTTP GET with the complete URI from the identity provider's Location header field. The flow of events in form-POST-based redirection is similar to HTTP-redirect-based redirection, except that the service provider responds with an HTML form to the user agent with an action parameter pointing to the identity provider and a method parameter with the value POST. The user needs to click the Submit button, which sends the form and the data contents to the identity provider using HTTP POST.
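For the form-POST variant, the form returned by the service provider might look like the following sketch. The endpoint URL is hypothetical, and the LAREQ field name (carrying the Base64-encoded Liberty request) is an assumption based on Liberty's form-POST conventions:

<form method="POST"
      action="http://www.myidentityprovider.com/auth">
  <!-- Base64-encoded Liberty authentication request -->
  <input type="hidden" name="LAREQ" value="..."/>
  <input type="submit" value="Continue"/>
</form>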
Web Services
Web services here refer to business services provided by service providers using SOAP protocol profiles that enable Liberty-enabled entities to communicate with each other. Liberty currently supports RPC-style SOAP Web services.
Security Mechanisms
The Liberty ID-WSF specification defines security mechanisms that address the use-case scenarios intended for identity-based Web services. It mandates that Liberty-provider implementations include security mechanisms that secure the exchange of identity information between the applications and participants. The security mechanisms must address the following key requirements:

- Request Authentication
- Response Authentication
- Request/Response Correlation
- Replay Protection
- Integrity Protection
- Confidentiality Protection
- Privacy Protections
- Resource Access Authorization
- Proxy Authorization
- Mitigation of denial of service attack risks

In the Web redirection scenario, Liberty suggests the use of HTTPS for exchanging identity information and authentication assertions. This provides a secure transport mechanism between service providers and identity providers. In addition to the underlying secure transport, Liberty relies on strong authentication mechanisms used by the identity provider. Using cookies to maintain local session state is often abused by unauthorized Web sites and hackers. If developers use cookies to persist identity and authentication information, it is possible that once a user exits the Web browser, another user may re-launch the Web browser on the same system, which may result in impersonating the first user. Using Web redirection and URL rewriting, identity providers do not need to send business data to service providers via cookies. For details of the Liberty security mechanisms, please refer to [LibertyIDWSF].
The following are the business events shown in Figure 7-5:

- Under the circle of trust, members like JoeS can authenticate once with the identity provider (Step 1).
- User JoeS accesses the airline reservation service using single sign-on, without duplicate logins to each service provider (Step 2).
- Airline A asks for consent to federate the identity with the affiliated group when a previous sign-on has been detected (Step 3).
- User JoeS also accesses the car rental service using single sign-on (Step 4).
- Car Rental B prompts for consent to federate the identity with the affiliated group, with a previous sign-on detected (Step 5).
- User JoeS accesses the hotel reservation service using single sign-on (Step 6).
- Hotel C prompts for consent to federate the identity with the affiliated group, with a previous sign-on detected (Step 7).

The following sections discuss how architects and developers can use Liberty-enabled solutions to address the business challenges of federated identity management, single sign-on, and global logout.
Federation Management
Airlines, car rental companies, and hotels can federate themselves in an affinity group of business travel services. By federating themselves, they are able to rely on identity authentication services from an identity provider and share member identity information across different security infrastructures. In Figure 7-5, Airline A, Car Rental B, and Hotel C form an affiliated group under a circle of trust. They do not need to compromise or reengineer their security infrastructure for shared authentication or authorization. In other words, though their members (business travelers) may be using different user ids and account names in each service provider's system infrastructure, these service providers are able to
link different user accounts to the same user identity anonymously under the circle of trust.
Identity Federation
Identity federation refers to linking accounts from distinct service providers and identity providers. The primary service provider (say, the airline reservation company A) will notify its eligible users (in this case, JoeS) of the possibility of federating their local identities among the service providers in the business travel service affinity group. It will also ask for consent to introduce the user into the affinity group once it detects that the user has previously authenticated with the identity provider. Other service providers will make similar solicitations for permission as well. Federating identities creates a unique identifier that links the different user identities established with different service providers. If a user has already established a federated identity with an identity provider, the requesting service provider can issue a <NameIdentifierMappingRequest> message to obtain the federated identity when communicating with other service providers. Upon receiving the request message, the identity provider responds with a <NameIdentifierMappingResponse> message. The <NameIdentifierMappingRequest> message is digitally signed, and the federated identity is encrypted. Example 7-9 shows a <NameIdentifierMappingRequest> message.
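In skeletal form, such a request might look like the following rough, illustrative sketch; the lib prefix is assumed to be bound to the Liberty ID-FF namespace, and the provider IDs and name identifier values are hypothetical rather than normative:

<lib:NameIdentifierMappingRequest
    xmlns:lib="urn:liberty:iff:2003-08"
    xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
  <!-- The requesting service provider -->
  <lib:ProviderID>https://www.airline-a.example.com</lib:ProviderID>
  <!-- The principal's name identifier as known to the requester -->
  <lib:NameIdentifier>_a1b2c3d4</lib:NameIdentifier>
  <!-- The provider whose identifier for the principal is requested -->
  <lib:TargetNamespace>https://www.hotel-c.example.com</lib:TargetNamespace>
  <!-- The request is digitally signed -->
  <ds:Signature>...</ds:Signature>
</lib:NameIdentifierMappingRequest>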
Identity De-federation
Similarly, if the service provider is disassociated from the affinity group, it will also de-federate from the affinity group (identity de-federation) and notify the user. The federation termination notification (<FederationTerminationNotification>) is a specialized Liberty protocol to handle identity de-federation. Liberty-enabled architecture provides the capability of joining the identity federation or disassociating from the identity federation. Each of the service providers needs to implement a Liberty-enabled user agent so that it can reuse the user authentication service from the identity provider as well as federate the user identity of the users. The new ID-SIS personal or business profile defines the user profile and attributes for identity federation.
Multi-tiered Authentication
With the nature of federated identity, Liberty allows authentication and authorization across different service providers, or across a multi-tier architecture. Some business services require more than one level of authentication to handle the varying nature of different business transactions. Some business transactions require a sufficient quality of authentication mechanism, beyond a plain user id and password. For example, a high-value funds transfer would require digital signing using an X.509v3 certificate. For risk-management purposes, service providers would not want any user with a user id and password to authorize a high-value funds transfer automatically without additional authentication. This type of reauthentication, or second-tier authentication, is also known as multi-tiered authentication. Liberty supports a wide range of authentication methods, but the actual authentication methods and their supporting protocol exchanges are not specified in the Liberty specifications. To support multi-tiered authentication, the identity provider and the affiliated service providers need to make a mutual contractual agreement and arrange the associated protocol exchange prior to the identity exchange (in other words, out of band, or a special mutual agreement outside the standard identity exchange process). The element <AuthenticationContextStatement> can encapsulate the identification process, technical protection, operational protection, authentication method, and governing agreements. In addition, it is an implementation decision which party performs the reauthentication and how the appropriate authentication profile is selected for the user. Architects and developers also need to customize the authentication policy associated with access to specific resources (for example, funds transfer services).
Credentials
Credentials contain security-related attributes that describe a user identity. Sensitive credentials, such as encryption keys and private cryptographic keys, require special protection from being stolen or tampered with. Credentials are used to prove an authentication or authorization assertion; for example, passwords and X.509v3 certificates are credentials. The Liberty artifact is a special form of credential. The service provider can use a Liberty artifact profile to issue a query to the identity provider in order to get a SAML assertion. The Liberty artifact is an opaque user handle with a pseudo-random nonce that can be used only once. Thus, it serves the purpose of a credential and is a countermeasure against replay attacks.
Communication Security
Typically, the user communicates with the identity provider or any service provider within the affinity group under the circle of trust via HTTPS. This secures the communication channel between client and server, or between servers. A service provider can reject communication with an identity provider if the security policy requires a credential over a communication protocol supporting bilateral authentication, integrity protection, and message confidentiality.
as cross-domain single sign-on. Liberty 1.2 specifications currently support cross-domain single sign-on. If the service providers are associated with more than one affinity group, then they can participate in multiple circles of trust, and users in the service providers can then benefit from single sign-on to multiple affinity groups. If a user who belongs to multiple circles of trust (affinity groups) wants to access multiple resources with single sign-on, this will require exchanging identity information between different identity providers. This scenario is also known as federated single sign-on. An example of federated single sign-on is accessing resources across two identity providers with two different identity infrastructures, such as Liberty-enabled infrastructure and Microsoft Passport. At present, Liberty Phase 2 does not define any protocol or profile exchange between identity providers under the federated single sign-on scenario. In other words, users need to have an individual security session with each identity provider, and under each session they can enjoy single sign-on.
Global Logout
Liberty defines a logout request that enables a service provider to request a global logout within the affiliated group under a circle of trust. This requires specifying the federated identity of the user and the session index. Example 7-10 shows a logout request. In this example, Airline A issues a global logout request for the (encrypted) federated identity. The logout request carries a digital signature.
Sun Java System Access Manager comes with a Java SDK library that supports SAML. The library includes:

- SAML assertion statement (com.sun.identity.saml.assertion): This package creates and transforms SAML assertion statements, or accesses part of the attributes in the statements.
- Common SAML functions (com.sun.identity.saml.common): This package covers XML attributes common to all elements.
- Plug-in (com.sun.identity.saml): Currently, there are four customizable Service Provider Interfaces (SPI): AccountMapper, AttributeMapper, ActionMapper, and SiteAttributeMapper.
- Protocol (com.sun.identity.saml.protocol): This package parses the request and response XML messages used to exchange assertions and their authentication, attribute, or authorization information.
- Digital signature (com.sun.identity.saml.xmlsig): This package provides the digital signature utilities that sign and verify SAML messages.

In addition, Sun Java System Access Manager also provides a Java SDK library to customize or build Liberty-enabled applications. This Access Manager SDK provides interfaces to the following abstract objects that can be mapped to the resources and entities in the directory server for identity management:

- Constants (AMConstants)
- Objects (AMObject)
- Organization (AMOrganization)
- Organization unit (AMOrganizationUnit)
- People container (AMPeopleContainer)
- User role (AMRole)
- User (AMUser)
- Service template (AMTemplate), to associate with the attributes of AMObject

Sun Java System Access Manager uses the Sun Java System Directory Server to store policy data, authentication data, and system configuration.
specific communication protocol or interaction mechanism that business applications (not systems management applications) can use.
Parlay Group
The Parlay Group has defined a framework for policy management related to secure access using a Parlay gateway. It has API definitions that can define new rules (createRule), define conditions (createCondition), retrieve actions (getAction), set action lists (setActionList), and commit transactions (commitTransactions). For example, architects and developers can create a set of policies that allow newly created users to access broadband Internet services upon successful provisioning of their service accounts. Unfortunately, there is no mature commercial implementation of a Parlay policy product available. In addition, the Parlay policy management API specification is very telecommunication-domain-specific, and it does not currently provide any integration mechanisms for use with other security specifications such as SAML. Strictly speaking, the Parlay policy management API specification is not yet another policy framework, as the IETF and DMTF frameworks are. It intends to provide APIs that can work on top of policy standards and frameworks. It does not create specific data formats or protocols, nor does it create a policy language. Thus, it should not be confused with other policy standards that have their own architecture models.
[Table: feature-by-feature comparison of EPAL and XACML. The features compared are: decision request, combining algorithm and precedence, vocabulary, attribute values, attribute mapping, attribute retrieval, XML attribute values, hierarchical entities, purpose attribute, error handling, targets or pre-conditions, data types, functions, and obligations. The recoverable comments indicate that for most features the two languages are identical, functionally equivalent, or EPAL is a functional subset of XACML; EPAL's policy vocabulary is not supported in XACML, XML attribute values are not supported in EPAL, and EPAL and XACML support different models for hierarchical entities.]
EPAL introduces the concept of a policy vocabulary that is not available in XACML. (Refer to the next section for the discussion of XACML.) The element <vocabulary> points to a separate file that specifies the collection of attributes needed in order to evaluate the policy. Attributes in a vocabulary are grouped into containers. Each container specifies a collection of attributes that can be obtained together from a single source. It may also represent a subset of attributes that would be used by a given rule in a policy. However, there is a drawback when designing complex policies: the need to group attributes into containers in a vocabulary may actually add complexity by requiring vocabulary managers to be aware of the structure of the rules in the policy. In contrast, XACML provides a richer set of access-control and privacy features that are not available in EPAL version 1.2. These include the following:

- Combination of the results of multiple policies that are developed by potentially independent policy issuers
- The ability to reference other policies as part of a given policy
- The ability to specify conditions on multiple subjects that may be involved in making a request
- The ability to return multiple results when access to a hierarchical resource is being requested
- Support for subjects who must simultaneously be in multiple independent hierarchical roles or groups
- Clear handling of error conditions and missing attributes
- Support for attribute values that are XML schema elements
- Support for additional primitive data types (including X.500 Distinguished Names and RFC 822 names)
Web Services Policy Framework (WS-Policy) is part of the Web services roadmap and specifications (aka WS-*) proposed by Microsoft, IBM, VeriSign, and others. It is primarily a policy language that defines policies for Web services; these policies are a collection of "policy alternatives" (a collection of policy assertions such as authentication scheme, privacy policy, and so forth). WS-Policy encodes the policy definition in XML, using SOAP messages for data exchange. The policy definitions in the WS-Policy specification are not restricted to access control or privacy, a fact that differentiates WS-Policy from XACML and EPAL. Security architects and developers can use WS-Policy to specify the type of security token, digital signature algorithm, and encryption mechanism for a SOAP message (for example, a payment message), or even for partial contents of a SOAP message (for example, a credit card number). It can also specify data-privacy or data-confidentiality rules. However, WS-Policy does not specify how to discover policies or how to attach a policy to a Web service. It relies on other WS-* specifications (for example, WS-PolicyAttachment) to provide the full functionality of policy management. The other necessary component of WS-Policy is the definition of a set of policy assertions for each policy domain. For example, the assertions for use with WS-Security are defined in WS-SecurityPolicy. Each specification or schema to be controlled or managed by WS-Policy will require the definition of a new set of assertions. The authors suggest that in the future, assertions will be defined as part of the underlying specification or schema rather than in a separate document, as was required for WS-SecurityPolicy. Under the WS-Policy model, a policy for Web services denotes conditions or assertions regarding the interactions between two Web services endpoints. The service provider exposes a Web services policy for the services it provides. The service requester decides, using the policies, whether it wants to use the service, and if so, which "policy alternative" it wishes to use. In other words, WS-Policy does not have the notion of a Policy Enforcement Point (which enforces policies) and a Policy Decision Point (which determines policies). It leaves policy enforcement and decisions to the service providers and service requesters. WSPL (Web Services Policy Language) is based on XACML (refer to the next section for details) and is currently a working draft in the OASIS XACML technical committee. It uses a strict subset of XACML syntax (restricted to Disjunctive Normal Form) and has a different evaluation engine than XACML. XACML evaluates access-control policies given a set of attributes and policies, while WSPL determines the mutually acceptable sets of attributes given two policies. For a good introduction to WSPL, refer to [Anne3]. WSPL provides similar functionality for defining policies for Web services. WSPL has the semantics of a policy (a set of rules) and operators (which allow comparison between an attribute of the policy and a value, or between two attributes of the policy). The policy syntax also supports rule preference. There are three distinctive features in WSPL. First, it allows policy negotiation, which can merge policies from two sources. Second, policy parameters allow fine-grained parameters such as time of day, cost, or network subnet address to be defined in a policy for Web services.
Third, the design of WSPL is flexible enough to support any type of policy by expressing the policy parameters using standard data types and functions. One main problem WSPL has addressed is the negotiation of policies for Web services. Negotiation is necessary when choices exist, or when both parties (Web services consumers and service providers) have preferences, capabilities, or requirements. In addition, it is necessary to automate service discovery and connection related to policies. WSPL shares similar policy definition capabilities with WS-Policy. Example 7-11 shows a policy defined in WS-Policy, which specifies the security token usage and type for the Web services. It uses the element <ExactlyOne> to denote the security token choice.
Example 7-11. Policy for a security token usage and type defined in WS-Policy
<wsp:Policy>
  <wsp:ExactlyOne>
    <wsse:SecurityToken>
      <wsse:TokenType>wsse:Kerberosv5TGT</wsse:TokenType>
    </wsse:SecurityToken>
    <wsse:SecurityToken>
      <wsse:TokenType>X509v3</wsse:TokenType>
    </wsse:SecurityToken>
  </wsp:ExactlyOne>
</wsp:Policy>
Example 7-12 shows the same policy expressed in WSPL. WSPL translates the policy requirements into two rules. This makes it more descriptive and extensible in the event that security architects and developers need to add more operators or constraints.
Example 7-12. Policy for security token usage and type using WSPL
<Policy PolicyId="policy:1"
        RuleCombiningAlgorithm="&permit-overrides;">
  <Rule RuleId="rule:1" Effect="Permit">
    <Condition FunctionId="&function;string-is-in">
      <AttributeValue DataType="&string;">Kerberosv5TGT</AttributeValue>
      <ResourceAttributeDesignator
          AttributeId="&SecurityToken;" DataType="&string;"/>
    </Condition>
  </Rule>
  <Rule RuleId="rule:2" Effect="Permit">
    <Condition FunctionId="&function;string-is-in">
      <AttributeValue DataType="&string;">X509v3</AttributeValue>
      <ResourceAttributeDesignator
          AttributeId="&SecurityToken;" DataType="&string;"/>
    </Condition>
  </Rule>
</Policy>
The following are identified as technical limitations of WS-Policy when compared with WSPL (refer to [Anne2] for details):

- Negotiation: WS-Policy does not specify a standard merge algorithm or a standard way to specify policy negotiation (for example, for merging policies from two sources). Specifications for domain-specific WS-Policy assertions may describe how to merge or negotiate assertions, but these methods are domain-specific.
- Assertion comparison: Since there is no standard language for defining assertions in WS-Policy, there is no standard way to describe requirements such as "fee > 25." Again, specifications for domain-specific WS-Policy assertions may describe schema elements for such comparisons, but the implementation of these elements must be done on a domain-by-domain basis because there is no standard.
- Dependency: WS-Policy is designed to depend on extensions. Each extension must be supported by a custom evaluation engine.

Web services policy specifications are still evolving. Some of them have specific problems (for example, policy negotiation) to address. It is possible that these specifications may expand and converge into one single standard in the future. For security architects and developers, it is useful to understand the policy language design, the architectural components, and the differences behind these specifications, and to determine whether these policy specifications meet their technical requirements before adopting them for prototypes and implementations.
Introduction to XACML
eXtensible Access Control Markup Language (XACML) version 2.0 (refer to [XACML2] for details) is an approved security policy management standard under OASIS (http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml). It is both a policy language and an access-control decision request/response language encoded in XML. It defines a standard format for the expression of authorization rules and policies, along with a standard way of evaluating rules and policies to produce authorization decisions. In addition, XACML defines an optional format for making authorization decision requests and responses. There are many similarities between XACML and the other policy management initiatives discussed previously. XACML can handle XML documents as well as non-XML systems, using a custom context handler for non-XML objects. It uses a declarative data model similar to CIM policy. It is generic across industry sectors, yet flexible enough to include new functionality. XACML complements SAML 2.0 by providing functionality that handles complex policy sets and rules. There are a few business problems related to security access control today. Many customer environments have their own security policies governing which resources a service requester can access. To be flexible and adaptive to customer IT security requirements, commercial off-the-shelf vendor products are intended to be "generic" enough to support different security access control requirements in heterogeneous or customized environments. For example, some vendor products provide "maximum possible privilege" by default for accessing data and executing business functions and actions; in other words, every user can access all functions unless the access control policies are customized. Once these vendor products are implemented, customers can customize the local administrative security policy and configure policy enforcement points. Unfortunately, customized security access control implementations are fairly expensive, and modifying security policies manually is unreliable due to their complexity. In addition, such implementations do not scale well or respond in a timely manner when the number of applications or policy enforcement points is large. Thus, a flexible policy system for access control is required to address these problems. Isn't the SAML authorization decision assertion used in determining access rights for a service request? SAML provides a very basic assertion format and protocol between the policy enforcement point and the policy decision point. However, it does not specify any action or how a policy decision point should get the information on which its decision will depend. One major technology driver for creating XACML is the need to control access to partial content of XML documents. The current security method is to use encryption to control access to the entire XML document: users are either authorized to view the entire XML document or denied access to any part of it. Consider an XML document containing a credit card payment transaction, where user A (call center personnel) should be authorized to access the entire payment transaction except the full credit card number, while user B (claims department) should be able to read the entire payment transaction. This all-or-nothing access control mechanism very often does not meet local business requirements. In a typical application environment, a user makes a request to access certain resources.
The Policy Enforcement Point (PEP) is a system or application that protects the resources. The PEP needs to check whether the service requester is eligible to access the resources. It sends the resource request to the Policy Decision Point (PDP), which looks up the security access control policies. XACML provides both a policy language and an access-control decision request/response language to meet these security access control requirements. With XACML, the PEP forms a query to ask the PDP whether or not a given action should be allowed. The PDP responds by returning one of the values Permit, Deny, Indeterminate (a decision cannot be made due to some error or missing values), or Not Applicable (the request cannot be answered by this service). XACML provides a rich policy language data model that is able to define sophisticated and flexible security policies. Figure 7-7 shows the full hierarchy of components of an XACML policy extracted from the XACML schema, which may be too complex for novice readers. The following are the key components that may be of interest to most readers:

- Policies: A Policy represents a single access control policy, expressed through a set of Rules. Policies are a set of rules together with a rule-combining algorithm and an optional set of obligations. Obligations are operations specified in a policy or policy set that should be performed in conjunction with enforcing an authorization decision. Each XACML policy document contains exactly one Policy or PolicySet root XML tag.
- Policy Set: A Policy Set is a set of policies or other Policy Sets and a policy-combining algorithm, along with a set of optional obligations.
- Rules: Rules are expressions describing the conditions under which resource access requests are to be allowed or denied. They apply to the target (<Target>), which can specify some combination of particular resources, subjects, or actions. Each rule has an effect (which can be "permit" or "deny") that is the result to be returned if the rule's target and condition are true. Rules can specify a condition (<Condition>) using Boolean expressions and a large set of comparison and data-manipulation functions over subject, resource, action, and environment attributes.
- Target: A Target is basically a set of simplified conditions for the Subject, Resource, and Action that must be met for a PolicySet, Policy, or Rule to apply to a given request. These use Boolean functions (explained more in the next
section) to compare values found in a request with those included in the Target. If all the conditions of a Target are met, then its associated PolicySet, Policy, or Rule applies to the request. In addition to being a way to check applicability, Target information also provides a way to index policies, which is useful if you need to store many policies and then quickly sift through them to find the ones that apply.

Attributes. Attributes are named values of known types that may include an issuer identifier or an issue date and time. Specifically, attributes are characteristics of the Subject, Resource, Action, or Environment in which the access request is made. For example, a user's name, their group membership, the file they want to access, and the time of day are all attribute values. When a request is sent from a PEP to a PDP, that request is formed almost exclusively of attributes, which are compared to attribute values in a policy in order to make the access decisions.
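To make this hierarchy concrete, the following is a minimal sketch of an XACML 1.x policy showing how a Policy nests a Target and Rules, with a Rule carrying its own Target. The identifiers (SamplePolicy, PermitRead) and the choice of the first-applicable rule-combining algorithm are illustrative assumptions, not drawn from the specification or from the book's later examples.

<Policy xmlns="urn:oasis:names:tc:xacml:1.0:policy"
  PolicyId="SamplePolicy"
  RuleCombiningAlgId=
    "urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:first-applicable">
  <!-- The policy-level Target scopes the whole policy; here it matches anything -->
  <Target>
    <Subjects><AnySubject/></Subjects>
    <Resources><AnyResource/></Resources>
    <Actions><AnyAction/></Actions>
  </Target>
  <!-- First rule: permit the "read" action -->
  <Rule RuleId="PermitRead" Effect="Permit">
    <Target>
      <Subjects><AnySubject/></Subjects>
      <Resources><AnyResource/></Resources>
      <Actions>
        <Action>
          <ActionMatch MatchId=
            "urn:oasis:names:tc:xacml:1.0:function:string-equal">
            <AttributeValue
              DataType="http://www.w3.org/2001/XMLSchema#string">read</AttributeValue>
            <ActionAttributeDesignator
              DataType="http://www.w3.org/2001/XMLSchema#string"
              AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"/>
          </ActionMatch>
        </Action>
      </Actions>
    </Target>
  </Rule>
  <!-- Fallback rule: deny everything the first rule did not permit -->
  <Rule RuleId="DenyEverythingElse" Effect="Deny"/>
</Policy>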
The XML Schema definition for XACML describes the input and output of policy decision points in an XACML context. A context denotes a canonical representation of a decision request and an authorization decision. Figure 7-8 shows the XACML context [XACML11], where a policy decision point makes reference to the attributes of a policy or identifies an attribute by subject, resource, action, or environment. The XACML context handler for requests converts the domain-specific input format, say, using XPath or an XSLT transformation mechanism. Once the policy decision point has processed the policy rules, the XACML context handler for responses converts the authorization decision to a domain-specific output format. The shaded area that covers the XACML policy, the policy decision point, and the XACML context handlers is the scope of XACML.
Sun's XACML kit (http://sunxacml.sourceforge.net) is an open source implementation of XACML 1.1. There is also a C# implementation of XACML at http://mvpos.sourceforge.net/. Parthenon Computing's JiffyXACML (http://www.parthenoncomputing.com) is a free binary release that provides some specific functionality. A list of XACML implementations appears on the OASIS XACML TC home page (http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml), along with an XACML reference list that includes publicly announced adoptions of XACML.
XACML 2.0
XACML 2.0 [XACML2] does not introduce major functional changes. There are a few syntactic changes that make the policy language more flexible in its support of complex security requirements. Apart from the syntactic changes, the major change in XACML 2.0 is the introduction of six profiles:

SAML Profile. The SAML profile defines how to use SAML 2.0 to protect, transport, and request XACML schema instances and other information needed by an XACML implementation. It supports six types of queries and statements: AttributeQuery, AttributeStatement, XACMLPolicyQuery, XACMLPolicyStatement, XACMLAuthzDecisionQuery, and XACMLAuthzDecisionStatement.

RBAC Profile. The role-based access control profile allows policies to be specified in terms of subject roles instead of individual subject identities. Roles can be nested so that more senior roles inherit the privileges of junior roles.

Privacy Profile. The privacy profile supports specifying data privacy requirements by using two attributes: resource purpose and action purpose. The resource purpose, which has the type "urn:oasis:names:tc:xacml:2.0:resource:purpose," indicates the purpose for which the data resource is collected. The action purpose, which has the type "urn:oasis:names:tc:xacml:2.0:action:purpose," indicates the purpose for which access to the data resource is requested.

Multiple Resource Profile. This profile describes three ways in which a PEP can request authorization decisions for multiple resources in a single request context, and how the result of each such authorization decision is represented in the single response context that is returned to the PEP. It also describes two ways in which a PEP can request a single authorization decision in response to a request for all the nodes in a hierarchy.

Hierarchical Resource Profile. This profile specifies how XACML can provide access control for a resource (such as a file system, an XML document, or an organization) that is organized as a hierarchy. For example, if an administrator wants to restrict access to certain segments of an XML document, he or she may treat the resource (in this case, the XML document) as a hierarchy in order to allow or deny access to particular nodes in the document.

DSIG Profile. This profile uses XML Signature to provide authentication and integrity protection for XACML schema instances.

There are also some new features in the policy language; for details, refer to [XACML2]. The following new features allow more flexibility in expressing policies and rules. The element <CombinerParameters> carries the parameters used by the combining algorithms.
A new optional attribute, Version, was added with the default value "1.0" to denote the version of a Policy or PolicySet. Policy referencing allows developers to put constraints on the policy version. The element <VariableReference> is used to refer to a value defined by a <VariableDefinition> within the same policy. The element <EnvironmentMatch> was added to match the environment, similar to the elements <SubjectMatch>, <ResourceMatch>, and <ActionMatch>. A new substitution group called <Expression> was added, which contains the elements <Apply>, <AttributeSelector>, <AttributeValue>, <Function>, <VariableReference>, and all the <FooAttributeDesignator> elements. There are a <RuleCombinerParameters> element and, likewise, a <PolicyCombinerParameters> element, which are used to pass parameters to the combining algorithms; they are not part of the substitution group. Some changes in XACML 2.0 are syntactic and do not have a major impact on the core policy definition functionality; others, however, are semantic changes. The following highlights the major syntactic changes in the context schema and the policy schema. For details, refer to [XACML2changes]. The version number of XACML in the namespace has been updated to 2.0. For example,
xmlns="urn:oasis:names:tc:xacml:2.0:context:schema:cd:04."
The element <Status> in a <Result> statement is now optional in XACML 2.0, while it is now mandatory to specify an <Environment> in a <Request> statement. For the elements <PolicySetIdReference> and <PolicyIdReference>, XACML 2.0 uses type="xacml:IdReferenceType". The data type of the RuleId attribute has been changed to "xs:string". Two syntactic changes were made to support SAML 2.0: a <Request> can contain more than one resource, and the element <Attribute> can contain more than one <AttributeValue> (both are illustrated in the sketch below). Two items are obsolete in XACML 2.0: the IssueInstant attribute in the <Attribute> statement, and the elements <AnySubject>, <AnyResource>, and <AnyAction>.
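As an illustration of these changes, the following is a minimal sketch of an XACML 2.0 request context. It is an assumption-laden example rather than one taken from the specification: the final 2.0 "os" namespace is shown (the committee draft namespace above differs), and the resource URLs and attribute values are illustrative, echoing the sample scenario later in this chapter.

<Request xmlns="urn:oasis:names:tc:xacml:2.0:context:schema:os">
  <Subject>
    <!-- XACML 2.0 allows more than one AttributeValue per Attribute -->
    <Attribute AttributeId="group"
      DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>premiumMember</AttributeValue>
      <AttributeValue>subscriber</AttributeValue>
    </Attribute>
  </Subject>
  <!-- XACML 2.0 allows more than one Resource per Request -->
  <Resource>
    <Attribute
      AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
      DataType="http://www.w3.org/2001/XMLSchema#anyURI">
      <AttributeValue>http://www.onlinestore.com/sensitive/paymentinfo.html</AttributeValue>
    </Attribute>
  </Resource>
  <Resource>
    <Attribute
      AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
      DataType="http://www.w3.org/2001/XMLSchema#anyURI">
      <AttributeValue>http://www.onlinestore.com/sensitive/orderhistory.html</AttributeValue>
    </Attribute>
  </Resource>
  <Action>
    <Attribute
      AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
      DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>read</AttributeValue>
    </Attribute>
  </Action>
  <!-- An Environment element is mandatory in a 2.0 Request, even if empty -->
  <Environment/>
</Request>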
The <VariableDefinition> and <VariableReference> elements support reuse of portions of a policy, which provides a macro capability.
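The following is a minimal sketch of how these two elements work together in an XACML 2.0 policy; the policy and variable identifiers are illustrative assumptions.

<Policy xmlns="urn:oasis:names:tc:xacml:2.0:policy:schema:os"
  PolicyId="VariableExamplePolicy"
  RuleCombiningAlgId=
    "urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:first-applicable">
  <Target/>
  <!-- Define the expression once as a named variable... -->
  <VariableDefinition VariableId="isPremiumMember">
    <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
      <Apply FunctionId=
        "urn:oasis:names:tc:xacml:1.0:function:string-one-and-only">
        <SubjectAttributeDesignator
          DataType="http://www.w3.org/2001/XMLSchema#string"
          AttributeId="group"/>
      </Apply>
      <AttributeValue
        DataType="http://www.w3.org/2001/XMLSchema#string">premiumMember</AttributeValue>
    </Apply>
  </VariableDefinition>
  <!-- ...and reuse it by reference in any number of rules -->
  <Rule RuleId="PermitPremiumRead" Effect="Permit">
    <Condition>
      <VariableReference VariableId="isPremiumMember"/>
    </Condition>
  </Rule>
</Policy>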
Once the policy is evaluated successfully, the policy enforcement point will either grant access to the service requester for the targeted resource or deny the access.
XACML Architecture
XACML provides a standard virtual interface between the policy decision point and the service requester. Because XACML is a policy language set, it is difficult to define a specific technical architecture for it; XACML can support a variety of underlying infrastructures for policy stores. In summary, XACML has the following key logical architecture components:

Context Handler. The context handler transforms service request formats into a format that XACML understands. Architects and developers may need to custom-build processing logic in the XACML context handler to handle the conversion between different service request formats.

Policy Decision Point. The policy decision point (PDP) evaluates the resource access request against the relevant rules, policies, and policy sets. Architects and developers may customize their PDP by building a PDP application as reusable Java components (such as EJBs) or simple servlets. Sun's XACML implementation provides a sample PDP.

Policy Repository. The policy repository stores the rules, policies, and policy sets that XACML accesses. XACML does not define any standard interface, leaving it to the implementation to provide an interface for creating or retrieving policy objects from the policy repository. Architects and developers may use an existing directory server infrastructure to store all policy objects using LDAP, or they may opt to implement the policy repository using a relational database if their current entitlement application architecture is database-centric.

There are some distinctive architectural aspects of XACML. XACML stores policy objects in a hierarchical relationship of policy sets, policies, and rules. XACML is defined such that the policy, rather than the rule, is the smallest retrievable policy object. This differs from a traditional rule engine, where architects and developers can directly retrieve a specific rule or attribute element; rules can, however, be implemented as one-rule policies to achieve effective retrieval at the rule level. In addition, an XACML solution can operate on a variety of infrastructures, depending on how customers implement the policy decision point, the policy enforcement point, and the policy information point. The XACML reference implementation from Sun can run on a Web container (Web server) or an EJB container (application server). This creates the flexibility and agility for XACML solutions to interoperate with heterogeneous infrastructures.
Policy Store
XACML is an ideal technology candidate for implementing a centralized or distributed policy store because it can act as a data abstraction layer for the policy decision point. It can be implemented on top of any underlying data store platform, including a directory server or a relational database. If policy data are stored in a directory server or relational database directly, policy retrieval will be strictly dependent on the underlying data store platform. If different policy store products run on heterogeneous data store platforms, XACML is the more flexible approach because it is shielded from the underlying data store platform.

A distributed policy store refers to the scenario where customers partition the types of policies by geographical area or by functional area across different servers. This allows easier maintenance by the local administration. It is also possible to have multiple PEPs processing different types of policies in different partitions (for example, by geographical area). This distributed architecture is a common way to scale up the policy system and increase its capacity for high-volume policy inquiries. A centralized policy store refers to the scenario where customers have a single master policy store. This is useful for administering all types of security access control rules centrally. However, it also requires that the centralized policy store be highly available; otherwise, any outage will be disruptive and will impact all business services that rely on the access control policies.
ebXML Registry
When service requesters discover and look up Web services from a service registry, there needs to be a reliable access control mechanism to protect the service registry. Many UDDI service registry implementations use database security for access control. However, the database-centric security approach usually provides only primitive access control with read or write attributes. It does not support sophisticated rules, preferences, or policy negotiation because it does not have a policy language. The ebXML registry open source implementation (http://sourceforge.net/projects/ebxmlrr/) uses XACML to implement an access control mechanism for discovering and consuming Web services. This allows more flexibility and extensibility in controlling who can access the Web services and under which conditions the service requester can invoke them. The ebXML registry stores the access-control policies and attributes in the registry and customizes a registry attribute finder module based on Sun's XACML kit.
Sun Microsystems has created an implementation of XACML and released it as an open source project alongside the 1.0 release of XACML. It is available on SourceForge (http://sunxacml.sourceforge.net/). The current XACML kit, version 1.2, supports the XACML 1.x specifications (and most of the XACML 2.0 specification) with APIs for creating, validating, parsing, and evaluating policies and authorization requests. The code is broken into separate packages that support specific elements of the specification and is designed to make it easy to use or extend the XACML specification as needed. For more details, see the Sun XACML programmer's guide at http://sunxacml.sourceforge.net/guide.html.
Sample Scenario
To illustrate the XACML kit, we use a sample scenario in which a subscriber of an online portal tries to access her own account profile and check credit card payment information. Here we have the following requirements:

- Only a premium member from "coresecuritypatterns.com" can access the URL http://www.onlinestore.com/sensitive/paymentinfo.html for sensitive account information, including the member's own credit card payment information.
- Any user whose e-mail address domain does not end with "coresecuritypatterns.com," or who is not a premium member, cannot access the credit card information.
- Successful access will be logged for audit control.
- Invalid access from users who do not have the valid e-mail address domain "coresecuritypatterns.com" will also be logged for audit control.

The online portal uses XACML for access control. This example will use the following features of XACML policies:

- Applying the constraint of premium member status to the account information access request. The <Condition> element is used to specify that only a premium member matching the <Target> can access the resource.
- Adding a condition so that only service requesters with the e-mail address domain "coresecuritypatterns.com" can access the resource.
- Illustrating the use of the <Obligation> element to log both successful read access and unsuccessful access for the audit trail.
Sample Request
Example 7-13 shows a sample service request to access the URL http://www.onlinestore.com/sensitive/paymentinfo.html expressed in XACML. The request denotes a read request from a user [email protected], who has a premium membership, to access the URL for her own account information. The subscriber clicks the URL, and the online portal (acting as a PEP) generates an XACML service request for a read request to the URL resource.
"[email protected]"> <AttributeValue>premiumMember</AttributeValue> </Attribute> </Subject> <Resource> <Attribute AttributeId= "urn:oasis:names:tc:xacml:1.0:resource:resource-id" DataType= "http://www.w3.org/2001/XMLSchema#anyURI"> <AttributeValue> http://www.onlinestore.com/sensitive/paymentinfo.html </AttributeValue> </Attribute> </Resource> <Action> <Attribute AttributeId= "urn:oasis:names:tc:xacml:1.0:action:action-id" DataType= "http://www.w3.org/2001/XMLSchema#string"> <AttributeValue>read</AttributeValue> </Attribute> </Action> </Request>
Sample Policy
The XACML policy engine (acting as a PDP) receives the read request. It looks up any policies that are applicable to the request. Example 7-14 shows a sample policy to protect the sensitive payment resource. In plain English, the policy allows any subject with a group identifier "premiumMember" and with an e-mail address domain name "coresecuritypatterns.com" to have read access to the sensitive payment resource with the URI http://www.onlinestore.com/sensitive/paymentinfo.html. It also specifies that the policy will log any successful read action or any unsuccessful read with an invalid e-mail address domain name.
Example 7-14. Sample XACML policy

<!-- The opening of this policy (the Policy element and the beginning of
     its Target) is reconstructed; the PolicyId and the choice of the
     first-applicable rule-combining algorithm are assumptions. -->
<Policy xmlns="urn:oasis:names:tc:xacml:1.0:policy"
  PolicyId="PaymentInfoPolicy"
  RuleCombiningAlgId=
    "urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:first-applicable">
  <Target>
    <Subjects>
      <Subject>
        <SubjectMatch MatchId=
          "urn:oasis:names:tc:xacml:1.0:function:rfc822Name-match">
          <AttributeValue
            DataType="http://www.w3.org/2001/XMLSchema#string">
            coresecuritypatterns.com
          </AttributeValue>
          <SubjectAttributeDesignator
            DataType="urn:oasis:names:tc:xacml:1.0:data-type:rfc822Name"
            AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"/>
        </SubjectMatch>
      </Subject>
    </Subjects>
    <Resources>
      <Resource>
        <ResourceMatch MatchId=
          "urn:oasis:names:tc:xacml:1.0:function:anyURI-equal">
          <AttributeValue
            DataType="http://www.w3.org/2001/XMLSchema#anyURI">
            http://www.onlinestore.com/sensitive/paymentinfo.html
          </AttributeValue>
          <ResourceAttributeDesignator
            DataType="http://www.w3.org/2001/XMLSchema#anyURI"
            AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"/>
        </ResourceMatch>
      </Resource>
    </Resources>
    <Actions>
      <AnyAction/>
    </Actions>
  </Target>
  <Rule RuleId="ReadRule" Effect="Permit">
    <Target>
      <Subjects><AnySubject/></Subjects>
      <Resources><AnyResource/></Resources>
      <Actions>
        <Action>
          <ActionMatch MatchId=
            "urn:oasis:names:tc:xacml:1.0:function:string-equal">
            <AttributeValue
              DataType="http://www.w3.org/2001/XMLSchema#string">
              read
            </AttributeValue>
            <ActionAttributeDesignator
              DataType="http://www.w3.org/2001/XMLSchema#string"
              AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"/>
          </ActionMatch>
        </Action>
      </Actions>
    </Target>
    <Condition FunctionId=
      "urn:oasis:names:tc:xacml:1.0:function:string-equal">
      <Apply FunctionId=
        "urn:oasis:names:tc:xacml:1.0:function:string-one-and-only">
        <SubjectAttributeDesignator
          DataType="http://www.w3.org/2001/XMLSchema#string"
          AttributeId="group"/>
      </Apply>
      <AttributeValue
        DataType="http://www.w3.org/2001/XMLSchema#string">
        premiumMember
      </AttributeValue>
    </Condition>
  </Rule>
  <Rule RuleId="DenyOtherActions" Effect="Deny"/>
  <Obligations>
    <Obligation ObligationId="LogSuccessfulRead" FulfillOn="Permit">
      <AttributeAssignment AttributeId="user"
        DataType="http://www.w3.org/2001/XMLSchema#anyURI">
        urn:oasis:names:tc:xacml:1.0:subject:subject-id
      </AttributeAssignment>
      <AttributeAssignment AttributeId="resource"
        DataType="http://www.w3.org/2001/XMLSchema#anyURI">
        urn:oasis:names:tc:xacml:1.0:resource:resource-id
      </AttributeAssignment>
    </Obligation>
    <Obligation ObligationId="LogInvalidAccess" FulfillOn="Deny">
      <AttributeAssignment AttributeId="user"
        DataType="http://www.w3.org/2001/XMLSchema#anyURI">
        urn:oasis:names:tc:xacml:1.0:subject:subject-id
      </AttributeAssignment>
      <AttributeAssignment AttributeId="resource"
        DataType="http://www.w3.org/2001/XMLSchema#anyURI">
        urn:oasis:names:tc:xacml:1.0:resource:resource-id
      </AttributeAssignment>
      <AttributeAssignment AttributeId="action"
        DataType="http://www.w3.org/2001/XMLSchema#anyURI">
        urn:oasis:names:tc:xacml:1.0:action:action-id
      </AttributeAssignment>
    </Obligation>
  </Obligations>
</Policy>
Example 7-15 shows the response to the read request. The PDP returns a status that indicates whether the read request is granted. If this is granted, the <Decision> element will indicate "Permit." If this is rejected, the <Decision> element will return "Deny." An error of any kind (such as missing attribute value) results in "Indeterminate." "NotApplicable" is the result if no available policies apply to the given request.
Example 7-15. Sample XACML response

<!-- The opening elements (Response and Result) are reconstructed;
     the original excerpt begins at the Decision element. -->
<Response>
  <Result>
    <Decision>Permit</Decision>
    <Status>
      <StatusCode Value="urn:oasis:names:tc:xacml:1.0:status:ok"/>
    </Status>
    <Obligations>
      <Obligation ObligationId="LogSuccessfulRead" FulfillOn="Permit">
        <AttributeAssignment AttributeId="user"
          DataType="http://www.w3.org/2001/XMLSchema#anyURI">
          urn:oasis:names:tc:xacml:1.0:subject:subject-id
        </AttributeAssignment>
        <AttributeAssignment AttributeId="resource"
          DataType="http://www.w3.org/2001/XMLSchema#anyURI">
          urn:oasis:names:tc:xacml:1.0:resource:resource-id
        </AttributeAssignment>
      </Obligation>
    </Obligations>
  </Result>
</Response>
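For comparison, the following is a minimal sketch, not one of the book's numbered examples, of a response carrying an Indeterminate decision. Here the <Status> element conveys the cause using one of the standard XACML status codes; the status message text is illustrative.

<Response>
  <Result>
    <Decision>Indeterminate</Decision>
    <Status>
      <!-- Standard status code signaling that a required attribute was absent -->
      <StatusCode
        Value="urn:oasis:names:tc:xacml:1.0:status:missing-attribute"/>
      <StatusMessage>Subject attribute "group" was not supplied</StatusMessage>
    </Status>
  </Result>
</Response>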
Remark
These examples use Sun's XACML kit version 1.2, which currently supports XACML 1.1. To run these examples under XACML 2.0, developers need to change the version number and the namespace (for example, xmlns="urn:oasis:names:tc:xacml:2.0:policy") in the XML header and make any other necessary XACML 2.0 changes.
In the SAML 2.0 profile of XACML [XACML2SAML2], attributes and policies flow among the participants as follows:

- The Policy Enforcement Point may obtain attributes about the service requester or resource from the Attribute Authority (AttributeQuery in Step 1a). The Attribute Authority returns the attributes in a SAML attribute statement (AttributeStatement in Step 2a).

- The Attribute Authority may also store its assertions in the Attribute Repository (AttributeStatement in Step 3a), which makes the attribute statements available to the XACML Policy Enforcement Point (AttributeStatement in Step 4a) or to the XACML Policy Decision Point (AttributeStatement in Step 5a).

- When the XACML Policy Decision Point evaluates the resource access request and decides that additional attributes are needed, it may obtain them directly from an online Attribute Authority with a SAML attribute query (AttributeQuery in Step 2); the Attribute Authority returns a SAML attribute statement (AttributeStatement in Step 3). This allows the XACML Policy Decision Point to augment the XACML Policy Enforcement Point's description of the resource access request with additional attributes.

- The XACML Policy Decision Point may need to retrieve the policies relevant to the resource access request from the XACML Policy Administration Point (XACMLPolicyQuery in Step 4). The XACML Policy Administration Point finds the relevant policies in the XACML Policy Repository, creates a policy statement assertion (XACMLPolicyStatement in Step 5), and responds to the policy query with that assertion (XACMLPolicyStatement in Step 6). Alternatively, the XACML Policy Decision Point can retrieve the necessary policies directly from the XACML Policy Repository (XACMLPolicyStatement in Step 7).

- With the relevant policies and attributes available, the XACML Policy Decision Point responds to the XACML Policy Enforcement Point with a SAML authorization decision statement (XACMLAuthzDecisionStatement in Step 8).
Summary
Identity management is certainly becoming critical to preventing identity theft and addressing new security risks related to Java-based applications and Web services. Given the nature of distributed systems and Web-based applications, architects and developers need to secure the network identity in multiple tiers and across different security domains, not just in the Web tier. OASIS has published a set of identity management security standards, including SAML and XACML. The purpose of these security specifications is to address single sign-on, federated identity management, and access control issues. SAML has become the definitive protocol for exchanging assertions that enable single sign-on and global logout. This security protocol allows different security infrastructures to exchange identity information without locking in a specific vendor architecture. SAML has gained wide industry support, including the Liberty Alliance, which has reused and extended SAML for federated identity management. XACML is a policy language for use in controlling access to XML documents or other resources. It provides a flexible and extensible mechanism for policy management and is consistent with the policy framework laid down by the IETF and DMTF. XACML 2.0 is aligned with SAML 2.0 to allow the encapsulation and transmission of XACML attributes, policies, decision requests, and decisions in SAML assertions. It can also serve as a policy engine for many security infrastructures or vendor products. Designing identity management using Java technology and Web services is complicated because multiple tiers and multiple security domains are involved. Using J2EE design patterns for identity management is helpful. In Chapter 12, "Securing the Identity," and Chapter 13, "Secure Service Provisioning," we will discuss design patterns that address SAML assertions, single sign-on, credential tokens, and security provisioning.
References
[Anne1] Anne Anderson. "A Comparison of EPAL and XACML." Sun Microsystems. July 12, 2004. http://research.sun.com/projects/xacml/CompareEPALandXACML.html

[Anne2] Anne Anderson. "Comparing WSPL and WS-Policy." IEEE Policy 2004 Workshop, June 8, 2004. http://www.policy-workshop.org/2004/slides/Anderson-WSPL_vs_WS-Policy_v2.pdf

[Anne3] Anne Anderson. "An Introduction to the Web Services Policy Language (WSPL)." Sun Microsystems Laboratories. 2004. http://research.sun.com/projects/xacml/Policy2004.pdf
[KingPerkins1] Chris King and Earl Perkins. "The Role of Identity Management in Information Security: Part I, The Planning View." http://techupdate.zdnet.com/techupdate/stories/main/Identity_Management_Information_Security_Part_1.html

[Liberty] Liberty Alliance Project official Web site. http://www.projectliberty.org/about/history.html

[Liberty12FFArch] Thomas Watson, et al. "Liberty ID-FF Architecture Overview." Liberty Alliance. http://www.projectliberty.org/specs/liberty-idff-arch-overview-v1.2.pdf

[LibertyIDWSF] Liberty Alliance. Liberty ID-WSF Security Mechanisms, Version 1.2. http://www.projectliberty.org/specs/liberty-idwsf-security-mechanisms-v1.2.pdf

[Liberty12Tutorial] Alexandre Stervinou. "Liberty Specifications Tutorial." Liberty Alliance. http://www.itu.int/itudoc/itu-t/com17/tutorial/85606.html

[OASIS] OASIS official Web site. http://www.oasis-open.org/home/index.php

[OpenSAML] OpenSAML official Web site. http://www.opensaml.org/

[SecurityBreach2004] Information Security Breaches Survey 2004. http://www.dti.gov.uk/industry_files/pdf/hardfacts.pdf

[SAML-TC] OASIS SAML Technical Committee. http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=security

[SAML11Core] OASIS. Assertions and Protocols for the OASIS Security Assertion Markup Language (SAML) V1.1. September 2, 2003. http://www.oasis-open.org/committees/download.php/3406/oasis-sstc-saml-core-1.1.pdf

[SAML11Diff] OASIS. Differences Between OASIS Security Assertion Markup Language (SAML) V1.1 and V1.0. May 21, 2003. http://www.oasis-open.org/committees/download.php/3412/sstc-saml-diff-1.1-draft-01.pdf

[SAML11Security] OASIS. Security and Privacy Considerations for the OASIS Security Assertion Markup Language (SAML) V1.1. September 2, 2003. http://www.oasis-open.org/committees/download.php/3404/oasis-sstc-saml-sec-consider-1.1.pdf

[SAML2Core] OASIS. Assertions and Protocols for the OASIS Security Assertion Markup Language (SAML) V2.0. Working Draft 10. April 10, 2004. http://www.oasis-open.org/committees/download.php/6347/sstc-saml-core-2.0-draft-10-diff.pdf

[SAML2Scope] OASIS. SAML Version 2.0 Scope and Work Items. http://www.oasis-open.org/committees/download.php/6277/sstc-saml-scope-2.0-draft-17.pdf

[SAML2Profiles] OASIS. Profiles for the OASIS Security Assertion Markup Language (SAML) V2.0. March 15, 2005. http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf

[SourceID] SourceID official Web site. http://www.sourceid.org/

[SPML-TC] OASIS SPML Technical Committee. http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=provision

[Systinet] Systinet article: SAML Support on Smartcard. http://www.theserverside.com/resources/article.jsp?l=Systinet-Webservices-part-6

[WebServicesLifeCycle] Ray Lai. "Web Services Life Cycle: Managing Enterprise Web Services." Sun Microsystems. October 2003. http://wwws.sun.com/software/sunone/whitepapers/wp_mngwebsvcs.pdf

[XACML-TC] OASIS XACML Technical Committee. http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml

[XACML11] OASIS.
eXtensible Access Control Markup Language (XACML) Version 1.1. Committee Specification. August 7, 2003. http://www.oasis-open.org/committees/xacml/repository/cs-xacml-specification-1.1.pdf [XACML2] OASIS. eXtensible Access Control Markup Language (XACML) Version 2.0. Working Draft 09. April 16, 2004. http://www.oasis-open.org/committees/download.php/6433/oasis-xacml-2.0-core-wd-09.zip
[XACML2changes] Daniel. "Differences Between XACML Versions 1.0 and 2.0." January 7, 2005. http://blog.parthenoncomputing.com/xacml/archives/2005/01/the_differences.html

[XACML2SAML2] OASIS. SAML 2.0 Profile of XACML. Committee Draft 02. November 11, 2004. http://docs.oasis-open.org/xacml/access_control-xacml-2.0-saml_profile-spec-cd-02.pdf
The Rationale
An application or service may consist of a single functional component or multiple sets of disparate components that reside locally or across a network. Security is often considered a complex process, encompassing a chain of features and tasks related to computer system security, network security, application-level security, authentication services, data confidentiality, personal privacy issues, cryptography, and so forth. More importantly, these features must be designed and verified independently and then made to work together across the system. A security feature often represents a unique function, a safeguard or a countermeasure, that protects the application or service by preventing or reducing the impact of a particular threat or vulnerability and the likelihood of its recurrence.
The Security Wheel is a logical representation of the fundamental security principles required for establishing Security by Default in an application or a service. It provides guidelines that need to be taken into consideration during the entire software development life cycle, and it can be applied across all or selected components of an application or a service.
The Hub
At the core of the hub of the Security Wheel sits the service or application that you are building. In this representation, it refers more to the business logic than to the application as a whole. The service resides on a secured server host with a minimized and hardened OS. (OS minimization refers to running fewer software components on a server infrastructure, and a hardened OS is one reconfigured to apply the security measures specified by the OS vendor and to retain no nonessential programs, protocols, or services.) The secured host includes storage devices and accessories. Both the service and the target host environment must be configured and deployed through secure configuration management and reliable provisioning mechanisms. The service makes use of a common identity management solution that provides a repository and supporting mechanisms for verifying an entity and its associated credentials, for logging, and for reporting all activities.
The Spokes
The spokes represent the following 12 core security services applicable to an application or a service:

Authentication provides the process of verifying and validating the evidence and eligibility of an entity to carry out a desired action.

Authorization provides the process of verifying and validating the rights and privileges granted to the authenticated entity.

Confidentiality provides the mechanisms for protecting information, in transit or in storage, from intentional or unintentional disclosure to unauthorized entities.

Integrity provides the mechanisms for keeping information tamper-proof and unmodified by unauthorized entities.

Policy provides the rules and procedures that supply access control directives or a regulatory function to all entities.

Auditing provides a series of records of events about application or service activity. These records are maintained to support forensic investigation, and they also help in determining regulatory compliance.

Management provides the mechanisms for centrally administering all security operations.

Availability provides the mechanisms for ensuring reliability and timely access to the application or service, as well as its prolonged continuity in the event of a disaster or service interruption.

Compliance provides the assurance of a degree of constancy and accuracy through adherence to standards or regulatory requirements.

Logging provides the mechanisms for recording events that can supply diagnostic information in case of errors, problems, and unexpected behaviors. The recording of these events is usually not driven by business requirements and is generally short-term and transient in nature; failure to log such events will usually not necessitate cancellation of a transaction.

PKI provides key management support for applying cryptographic mechanisms to protect data, transactions, and communication using a public-key infrastructure.

Labeling is the process of classifying information based on roles and responsibilities to prevent unauthorized disclosure and failure of confidentiality.

These security services are the guiding security principles for providing a robust security architecture. Applications or services can be reviewed against these security measures during their design phases or at appropriate phases prior to deployment.
Each of these principles represents a security service that contributes to the overall security architecture. In some cases, many of these security principles, represented as spokes in the wheel, are applicable to only a few components of the overall application or service. Nevertheless, each component within the system must be examined to determine the associated risks and trade-offs. Adopting a structured security methodology helps to ensure that all security principles are addressed and captured during the software development life cycle or prior to production.
Secure UP
To get started, we must first identify a process to guide us through the software development life cycle so that we can meet the business and security goals we set forth. Adopting the Unified Process (UP) provides a comprehensive approach for ensuring that business requirements are defined, implemented, and tested within the software development life cycle. UP is an industry-standard process with a proven track record. It defines the development disciplines, along with an iterative approach, for gathering, analyzing, implementing, and testing functional business requirements. For these reasons, we have chosen it to achieve our business requirements. What UP fails to address is how to incorporate the non-functional requirements of the system. These requirements are assumed but never adequately defined as part of the process. Security, in particular, is a non-functional requirement that must be baked into the process right from the beginning of the inception phase. Too often, it is retrofitted into the application at the end of the construction phase, leading to vulnerabilities and performance and/or usability impacts. To avoid this situation, it is necessary to extend UP with a security discipline that ensures that all of the security requirements of the application are defined, designed appropriately, implemented, and thoroughly tested. We will refer to the incorporation of these security disciplines into the Unified Process as Secure UP. Secure UP establishes the prerequisites for incorporating the fundamental security principles. It also defines a streamlined security design process within the software development life cycle. Secure UP introduces a security discipline with a set of new security activities. At first glance, the security disciplines seem to overlap heavily with the standard UP disciplines. Why do we need to split hairs over the difference between business requirements and security requirements, or between implementing a functional use case and a security use case? The answer is that, universally, for each of these disciplines there is a wide gap between the people who know and understand the business needs of the application and those who know and understand the security needs. Figure 8-2 depicts Secure UP and the integrated security disciplines.
The Secure UP security discipline defines the following activities:

Security Requirements
Security Architecture
Security Design
Security Implementation
White Box Testing
Black Box Testing
Monitoring
Security Auditing

These activities coalesce to form the basis of a robust security infrastructure and deliver an end-to-end security solution for an application or service. The security discipline activities pertain to different phases of the software development cycle and do not include the sustaining functions in production, such as managing changes, updates, and patches. An overview of the activities in the security discipline follows:

Security Requirements: In this activity, one or more analysts define the business-mandated security requirements of the system. This includes requirements based on industry regulations, corporate policies, and other business-specific needs. The analysts must be well-versed in regulatory compliance as well as corporate security policies.

Security Architecture: This activity focuses on the creation of an overall security architecture. Architects take the mandated security requirements specified by the analysts and create a draft of the candidate security architecture. This activity qualifies the architectural decisions through well-defined risk analysis and trade-off analysis processes in order to identify the risks and trade-offs and how to mitigate them. The candidate architecture also identifies a set of security patterns that covers all of the security requirements within the component architecture and details them at a high level, addressing the known risks, exposures, and vulnerabilities. The candidate architecture is then prototyped and refined before the final security design activity begins. This activity also addresses combining the security design with the other non-functional requirements to ensure that the security implementation does not compromise other functional or quality-of-service requirements.

Security Design: The Security Design activity takes the security architecture and refines it using approaches such as factor analysis, tier analysis, security policy design, threat profiling, trust modeling, and information classification and labeling. A senior security developer creates and documents the design based on the candidate security architecture and the analysis results, taking into account the best practices and pitfalls regarding the strategies of each of the patterns.

Security Implementation: In this activity, security-aware developers implement the security design. A good security design decouples security components from the business components and therefore does not require the security developers to have strong interaction or integration with business developers. The security developers implement the security patterns by using the strategies defined in the security design and incorporating the best practices for securing the code.

White Box Testing: The White Box Testing activity is for white box, or full-knowledge, security testing of the code. In this activity, security testers review the code and look for security holes or flaws that can be exploited. They test a variety of security attacks aimed at compromising the system or demonstrating how the security requirements can be bypassed.

Black Box Testing: This activity is for black box, or zero-knowledge, security testing of the system. During this activity, security testers attempt to break into the system without any knowledge of the code or its potential weaknesses. They use a variety of tools and approaches to hack the system.
They will use "out-of-the-box" techniques to break into the system by all possible means at the application level and end-user level. This will provide an overall assessment of the security of the system. Monitoring: The monitoring activity is an ongoing activity for the system while it is in deployment. In this activity, operations personnel will monitor the application and all security facets of it. This consists of a broad range of areas, starting at the perimeter with the routers and firewalls and extending all the way back to the application itself. Monitoring is an integral part of security and an ongoing activity. Security Auditing: In this activity, security auditors will come in and audit the system for security. They assure that the systems are in compliance with all industry, corporate, and business regulations and that proper audit trails are being maintained and archived properly. These audit trails may also be reviewed for suspicious activity that may indicate a possible security breach. These activities take place at different points in the application life cycle and have dependencies on each other. Figure 83 shows the roles and activities representing the Secure UP security discipline activities.
In the above activity diagram, we see a high-level view of the security-specific software development life-cycle activities divided by swimlanes representing the different roles. At the start of the application life cycle, analysts gather the mandated security requirements. Once the requirements-gathering process is complete, an architect creates a conceptual model of the security architecture. The architect refines the model further and then defines a candidate architecture. He or she identifies appropriate patterns, risks, and trade-offs, represents the relevant security principles, and performs some conceptual prototyping to validate the architectural decisions. Based on the results of the prototyping, the applicable patterns and the overall architectural approach are then transitioned to the designer. The designer takes the high-level candidate architecture, decomposes it, and creates a security design that addresses all component-level requirements and analyses. The resulting security design is a refinement of the architecture based on other functional requirements, non-functional requirements, factor analysis, security policy design, tier analysis, trust modeling, and threat profiling. Once complete, the design is transitioned to the developers to implement. The developers implement the design with an eye on code-level security. Each security requirement is implemented and verified through unit testing, and all code is unit tested before being turned over to the test team for overall system-level tests, including security tests that verify the systemic qualities of the architecture. The designer takes the completed code and performs a variety of system tests to ensure that the functional requirements are met as well as the non-functional requirements. This includes application-specific regression tests and reality checks. The designer is responsible for ensuring the adequacy of all systemic qualities contributing to the QoS and SLA agreements of the system as a whole. Upon verification of the system tests, the designer transitions the application to the testers for further security testing, such as penetration tests, operational security testing, application-level scanning, and probing tests. Probing tests include network mapping, vulnerability scanning, password cracking, file integrity checking, malicious code testing, and so on. Two sets of security testers test in parallel. The white box testers test the system based on a review of the code and full knowledge of the architecture, design, and implementation of the system; they usually find the most security holes. Black box testers test the security of the application from the outside, with no knowledge of the inner workings; they usually find holes in the system as a whole, not particularly in the
application alone. If any holes are found, they are transitioned back to the system designer for security analysis. From there, they may require modification to the design and then go back through that particular flow again. If no holes are found, or if they are labeled as acceptable risks, the application is transitioned to the operations staff. Operations will then deploy the application into production. Once in production, operations will be responsible for monitoring the system for security activity. This includes all aspects of the system from the router to the database and from the hardware to the application. It also means constantly checking for and applying hardware and software patches to keep the system available and secure. Once deployed, the system will also be transitioned to the security auditors for auditing. Like monitoring, auditors will perform routine security audits of the application for the duration of its lifetime. Finally, all activity ceases when the application is retired and pulled out of production.
Secure UP Artifacts
For each of the security disciplines in Secure UP, there is a mandated set of artifacts. These artifacts represent the work product of the discipline and serve as milestones that allow transition to the start of another discipline within the software development life cycle. The following is a list of artifacts by security discipline:

Security Requirements: The artifacts of the Security Requirements phase define the security requirements specific to the business, management, and operational security of the applications. Some of those business requirements are represented with organizational roles/rights, policies, and regulatory compliance requirements. These are in document format, with business-functional security requirements broken down and tracked by business-requirement identification number.

Create Security Use Cases: The business security requirements are documented as a list of requirements with no real cohesion. To make sense of these requirements, they must be structured into developer-friendly use cases. These use cases are in document format, with business-functional security requirements broken down, combined into logical groups, and assigned use case numbers. The use cases then track back to any supporting business requirements or external policies that drove the use case.

Security Architecture: The Security Architecture discipline has several artifacts. The first is a conceptual security model. This model represents the high-level security architecture that addresses the business security requirements defined in the use cases from the security requirements phase. The next artifact is the candidate security architecture. This architecture is refined through the rest of the security activities in this phase, including risk analysis and trade-off analysis. Finally, a core set of security patterns is chosen as part of the refined conceptual security model.

Security Design: There are four significant artifacts of the Security Design discipline. The first is the policy design document, which defines the policies for the application based on relevant industry, corporate, and business policies pertaining to the system. The second artifact is the trust model, created from factor and tier analyses of the policy design. The third artifact is the threat profile, which defines the types of attacks and their associated risks based on the trust model, the policy design, and the refined security model. The last artifact is the Security Design itself: a document or set of documents that defines the detailed security design formulated from the union of all the other artifacts. It contains the patterns and other design material needed for implementation.

Security Implementation: The four artifacts of Security Implementation are the source code, the build/configuration, the infrastructure, and the security unit tests. The source code in this case is the security-related code, such as the security pattern implementations and any security frameworks. The build artifacts are any security-related configurations (such as J2EE deployment descriptors) or documents for configuring security in third-party products. The infrastructure artifacts specify the firewall rules, minimization, hardening profiles, and so forth. The unit test artifacts are the tests that developers use to verify that their code complies with the use cases and provides the functionality specified in the design document.
White Box Testing: This discipline has only one artifact, the test results document. This document specifies the tests performed and their results, identifying source code, configuration, and infrastructure failures and successes, along with status and severity.

Black Box Testing: Black Box Testing has only one artifact as well: the black box test results identifying the code flaws. This document also contains the tests run, the tools used, and any techniques found to exploit the application and its infrastructure weaknesses.

Environment Setup: Environment Setup has several artifacts. The first is having all of the hardware and software installed and configured. The next is one or more Standard Operating Procedure (SOP) documents detailing how to install, configure, maintain, and troubleshoot the environment, as well as how to manage crises in the operations center. Another artifact is the completion of a change management
request (CMR) system for tracking and fulfilling change requests. The infrastructure layout and design is also an important artifact, consisting of the network design (VLANs and DMZs), ACLs, and trust zones. In some instances, honey pots may be implemented in order to detect, observe, and possibly capture intruders. System hardening, minimization, business continuity, and other system setup tasks may be treated as artifacts individually or as a whole. These and other artifacts are better described in a book focused on data center operations.

Patch Management: The artifacts for Patch Management are similar to those for Environment Setup. A patch management tool, in conjunction with the patch management procedures, is the foremost artifact. This allows operations staff to patch and track all of the various systems within the data center. Patch management is an often severely underestimated task and the source of many production outages; just ask anyone who has ever tracked down a random bug that turned out to be caused by a missing patch. Patch management is an ongoing task, and therefore its artifacts are evolutionary in nature.

Monitoring: A service-level agreement (SLA) is usually associated with monitoring. Monitoring is an ongoing task that uses a logging mechanism to capture all security-specific alerts and issues. Its artifact could be a periodic activity report designating the monitoring tools and procedures used in production, as well as a method of support for forensic investigation.

Security Auditing: Security Auditing delivers many artifacts associated with SLAs, including organizational policies, compliance verification requirements, application/host/network configuration, and user activity. These artifacts are outlined in the policy design. Like monitoring, security auditing is an ongoing process.
Iterative Development
One of the major tenets of the Unified Process is iterative development. The activities stated thus far resemble a waterfall approach in terms of how they are tied together in sequence. This is merely a by-product of the activity diagram representation and is not intended to imply that the process is not iterative. It must therefore be stated clearly that the security disciplines are intended to fit into the overall iterative approach to development. Each use case is addressed in an incremental and iterative manner. While the swimlanes in the activity diagram illustrate some parallelism, the exact breakdown of what can be done in parallel, and to what extent tasks are performed iteratively, will vary from application to application. It is beyond the scope of this book to discuss the intricacies of iterative development, and therefore we will simply state that the security activities should be performed iteratively, the same as any other Unified Process activities.
Thus, trade-off analysis (TOA) is a ranking index for making architectural security decisions with clear assumptions and for addressing associated uncertainties.
Security Patterns
Good application design is often rooted in appropriate security design strategies and leverages proven best practices using design patterns. Design strategies determine which application security tactics or design patterns should be used for particular application security scenarios and constraints. Security patterns are an abstraction of business problems that address a variety of security requirements and provide a solution to the problem. They can be architectural patterns that depict how a security problem can be resolved architecturally (or conceptually), or they can be defensive design strategies upon which quality security protection code can later be built. This section will note the existing security patterns available in the industry today and then introduce a new set of security patterns that are specific to J2EE-based applications, W eb services, identity management, and service provisioning. These new security patterns will be further elaborated in the following chapters of this book.
Secure Association

Protected System

Authentication. Related patterns: Check Point; JAAS. Reference: [Berry], p. 204; [Monzillo]; [YoderBarcalow1997], p. 7; [OpenGroup], p. 47; [WassermannBetty], p. 27
Session. Secure applications need to track global information throughout the application life cycle. This pattern identifies session information (for example, HTTP session variables, RPC call information, or service requester details in JMS or SOAP messages) that needs to be maintained for security tracking. It differs from the thread-based Singleton pattern in that the session information needs to be maintained and shared in a multi-threaded, multi-user, or distributed environment. Related patterns: Authenticated Session; User's Environment; Namespace; Singleton; Localized Globals. Reference: [YoderBarcalow1997], p. 14; [Amos], p. 3

This pattern describes how a client should perform authentication against the identity service provider to obtain an authentication or authorization assertion. It is part of the single sign-on process for enterprise identity management. Related patterns: Authoritative Source of Data; Enterprise Access Management; Enterprise Identity Management. Reference: [Romanosky2002], p. 11
Security Provider
Role
Class-scoped Authorization
Subject Descriptor
Security C ontext
This pattern provides a full view to users with errors incurred, including exceptions when necessary. Reference: [YoderBarcalow1997], p. 17 This pattern allows users to see what they can access.
Limited View
Reference: [YoderBarcalow1997], p. 19; [Amos], p. 4 This pattern is related to the capture and tracking of securityrelated events for logging and audit trail. Logged information can be used for risk assessment or analysis.
A variant of this pattern is the Risk Analysis pattern, which relates the overall security risk to Risk Assessment and the sum of security threat, the cost of protecting the resources or Management; Risk Analysis losing the resources, and the vulnerability. Once the overall security risk is determined, then the priority will be allocated to protect resources appropriately. Reference: [Romanosky2001], p. 8; [Romanosky2002], p. 4; [Amos], p. 4; [Berry], p. 205
This pattern verifies the data source for authenticity and data integrity. Authoritative Source of Data Reference: [Romanosky2001], p. 5; [Romanosky2002], p. 2; [Berry], p. 206 This pattern helps identify the risks of the third-party relationship and applies relevant security protection measures for the third-party Enterprise Partner communication. C ommunication Reference: [Romanosky2001], p. 10; [Romanosky2002], p. 6
Third-Party C ommunication
This pattern shows how to make horizontal scalable authentication components using load balancer and multiple instances of Policy Load Balancer; PEP; Enforcement Points (PEPs). Subject Descriptor Reference: [OpenGroup], p. 18 This pattern makes highly
C lustered PEP
available authentication components over clustered Web containers. Reference: [OpenGroup], p. 46 This pattern configures multiple checkpoints.
Recoverable C omponent; Hot Standby; C old Standby; C omparator-checked Fault Tolerant System
Layered Security
Reference: [Romanosky2001], p. 7
C old Standby
This pattern describes how to structure a security system or service to resume service after a system failure. The C old Standby pattern typically consists of one active Recoverable C omponent Disaster Recovery; and at least one standby Recoverable Recoverable C omponent. C omponent; Hot The C old Standby pattern differs Standby; C old from the C lustered PEP pattern in Standby; that the latter primarily provides C omparator-checked Fault Tolerant an authentication service as a System Policy Enforcement Point, while the former may be any security service (including PEP). Reference: [OpenGroup], p. 49 This pattern structures a system that enables the detection of independent failure of any component. It requires a faultdetecting mechanism to be in place to report or detect any system fault for a security system, for example, polling the Tandem System state of the security device periodically, or checking the heartbeat of the Secure Service Proxy, Secure Daemon, or similar intermediaries. Reference: [OpenGroup], p. 51 This pattern specifies how to capture changes to a security component's state for future system state recovery. Reference: [OpenGroup], p. 53 This pattern describes how to structure a security system or service to provide highly available security services, or to protect system integrity from system failure. This is usually done by synchronizing state updates to the replica or back-up security components without temporary loss of security services in case of full or partial system failure. Reference: [OpenGroup], p. 55
Journaled C omponent
Hot Standby
Web Tier
Table 8-3 shows a list of known security patterns that support the Web Tier, which represents the components responsible for the presentation logic and delivery. The Web Tier accepts the client requests and handles access control to business service components. The security patterns shown in the table enable securing the client-to-server or server-to-server communication in the infrastructure as well as the application.
Business Tier
Table 8-4 shows a list of known security patterns that support the security services in the Business Tier. The Business Tier represents the business data and business logic.
Integration Tier
Table 8-5 shows a list of security patterns that facilitate integration with external data sources.
Security Patterns for J2EE, Web Services, Identity Management, and Service Provisioning
There are new security patterns specific to delivering end-to-end security in J2EE applications, Web services, identity management, and service provisioning. These security patterns differ from existing security design patterns in that they address the end-to-end security requirements of an application by mitigating security risks at the functional and deployment level, securing business objects and data across logical tiers, securing communications, and protecting the application from unauthorized internal and external threats and vulnerabilities. A simple taxonomy by logical architecture tiers is presented here: Web Tier, Business Tier, Web Services Tier, and Identity Tier. Ideally, these patterns and others like them will be maintained in a patterns catalog that will be consulted during the security architecture activity in order to feed patterns into the security design. Through many versions of the application and across applications, these patterns will continue to grow and their implementation will be refined. These patterns are usually structured and represented using a standard pattern template that allows expressing a solution for solving a common or recurring problem. The template captures all the elements of a pattern and describes its motivation, issues, strategies, technology, applicable scenarios, solutions, and examples.
Web Tier

Authentication Enforcer: This pattern handles the client's requests to authenticate with the server. It creates a base Action class to handle authentication of HTTP requests. Related patterns: Context Object [CJP]; Intercepting Filter [CJP]. Refer to Chapter 9, "Securing the Web Tier: Design Strategies and Best Practices," for details.

Authorization Enforcer: This pattern creates a base Action class to handle authorization of HTTP requests. Refer to Chapter 9 for details.

Intercepting Validator: This pattern refers to secure mechanisms for validating parameters before invoking a transaction. Unchecked parameters may lead to buffer overrun, arbitrary command execution, and SQL injection attacks. The validation of application-specific parameters includes validating business data and characteristics such as data type (string, integer), format, length, range, null-value handling, and verifying character set, locale, patterns, context, and legal values. Related patterns: Message Inspector; Interceptor [POSA]. Refer to Chapter 9 for details.

Secure Base Action: The Secure Base Action is a pattern for centralizing and coordinating security-related tasks within the Presentation Tier. It serves as the primary entry point into the Presentation Tier and should be extended, or used by, a Front Controller. It coordinates use of the Authentication Enforcer, Authorization Enforcer, Secure Session Manager, Intercepting Validator, and Secure Logger to ensure a cohesive security architecture throughout the Web Tier. Related patterns: Front Controller [CJP]; Command [GoF]; Authentication Enforcer; Authorization Enforcer; Secure Logger; Intercepting Validator. Refer to Chapter 9 for details.

Secure Logger: This pattern defines how to capture application-specific events and exceptions in a secure and reliable manner to support security auditing. It accommodates the different behavioral nature of HTTP servlets, EJBs, SOAP messages, and other middleware events. Refer to Chapter 9 for details.

Secure Pipe: This pattern shows how to secure the connection between the client and the server, or between servers when connecting between trading partners. In a complex distributed application environment, there will be a mixture of security requirements and constraints between clients, servers, and any intermediaries. Standardizing the connection between external parties using the same platform and security protection mechanism may not be viable. The pattern adds value by requiring mutual authentication and establishing confidentiality or non-repudiation between trading partners. This is particularly critical for B2B integration using Web services. Related pattern: Message Interceptor Gateway. Refer to Chapter 9 for details.

Secure Service Proxy: This pattern is intended to secure and control access to J2EE components exposed as Web services endpoints. It acts as a security proxy by providing a common interface to the underlying service provider components (for example, session EJBs, servlets, and so forth) and restricting direct access to the actual Web services provider components. The Secure Service Proxy pattern can be implemented as a Servlet or RPC handler for basic authentication of Web services components that do not use message-level security. Related patterns: Proxy [GoF]; Intercepting Web Agent; Secure Message Router; Message Interceptor Gateway; Extract Adapter [Kerievsky]. Refer to Chapter 9 for details.

Secure Session Manager: This pattern defines how to create a secure session by capturing session information; use it in conjunction with Secure Pipe. It describes the actions required to build a secure session between the client and the server, or between servers. It includes the creation of session information in HTTP or stateful EJB sessions and how to protect the sensitive business transaction information during the session. The Session pattern is different from the Secure Session Manager pattern in that the former is generic for creating HTTP session information, while the latter is much broader in scope and covers EJB sessions as well as server-to-server session information. Related pattern: Context Object [CJP]. Refer to Chapter 9 for details.

Intercepting Web Agent: This pattern helps protect Web applications through a Web agent that intercepts requests at the Web server and provides authentication, authorization, encryption, and auditing capabilities. Related pattern: Proxy [GoF]. Refer to Chapter 9 for details.

Business Tier

Audit Interceptor: The Secure Logger pattern provides instrumentation of the logging aspects in the front, and the Audit Interceptor pattern enables the administration and manages the logging and audit in the back-end. Technology: Java API for Logging. Refer to Chapter 10, "Securing the Business Tier: Design Strategies and Best Practices," for details.

Container Managed Security: This pattern describes how to declare security-related information for EJBs in a deployment descriptor. Refer to Chapter 10 for details.

Dynamic Service Management: This pattern provides dynamically adjustable instrumentation of security components for monitoring and active management of business objects. Refer to Chapter 10 for details.

Obfuscated Transfer Object: This pattern describes ways of protecting business data represented in transfer objects and passed within and between logical tiers. Related pattern: Transfer Object [CJP]. Refer to Chapter 10 for details.

Policy Delegate: This pattern creates, manages, and administers security management policies governing how EJB tier objects are accessed and routed. Related patterns: Secure Base Action; Business Delegate [CJP]. Refer to Chapter 10 for details.

Secure Service Façade: This pattern provides a session façade that can contain and centralize complex interactions between business components under a secure session. It provides dynamic and declarative security to back-end business objects in the service façade. It shields off foreign entities from performing illegal or unauthorized service invocation directly under a secure session. Session information can also be captured and tracked in conjunction with the Secure Logger pattern. Related patterns: Secure Service Proxy; Session Façade [CJP]. Refer to Chapter 10 for details.

Secure Session Object: This pattern defines ways to secure session information in EJBs, facilitating distributed access and seamless propagation of the security context. Related patterns: Transfer Object [CJP]; Session Façade [CJP]. Refer to Chapter 10 for details.

Web Services Tier

Message Inspector: This pattern checks for and verifies the quality of XML message-level security mechanisms, such as XML Signature and XML Encryption in conjunction with a security token. It also helps in verifying and validating applied security mechanisms in a SOAP message when processed by multiple intermediaries (actors), and it supports a variety of signature formats and encryption technologies used by these intermediaries. Technologies: XML Encryption; XML Signature; SAAJ; JAX-RPC; WS-Security; SAML; XKMS. Related patterns: Message Interceptor Gateway; Secure Message Router. Refer to Chapter 11, "Securing Web Services: Design Strategies and Best Practices," for details.

Message Interceptor Gateway: This pattern provides a single entry point and allows centralization of security enforcement for incoming and outgoing messages. The security tasks include creating, modifying, and administering security policies for sending and receiving SOAP messages. It helps to apply transport-level and message-level security mechanisms required for securely communicating with a Web services endpoint. Technologies: JAX-RPC; SAAJ; WS-Security; XML Signature; XML Encryption; SAML; XACML; WS-*. Related patterns: Secure Access Point; Message Inspector; Secure Message Router. Refer to Chapter 11 for details.

Secure Message Router: This pattern facilitates secure XML communication with multiple partner endpoints that adopt message-level security and identity-federation mechanisms. It acts as a security intermediary component that applies message-level security mechanisms to deliver messages to multiple recipients, where the intended recipient is able to access only the required portion of the message and the remaining message fragments are made confidential. Technologies: XML Signature; XML Encryption; WS-Security; Liberty Alliance; SAML; XKMS. Refer to Chapter 11 for details.

Identity Tier

Assertion Builder: This pattern defines how an identity assertion (for example, an authentication assertion or authorization assertion) can be built. Refer to Chapter 12, "Securing the Identity: Design Strategies and Best Practices," for details.

Credential Tokenizer: This pattern describes how a principal's security token can be encapsulated, embedded in a SOAP message, routed, and processed. Refer to Chapter 12 for details.

Single Sign-on (SSO) Delegator: This pattern describes how to construct a delegator agent for handling a legacy system for single sign-on (SSO). Refer to Chapter 12 for details.

Password Synchronizer: This pattern describes how to securely synchronize principals across multiple applications using service provisioning. Refer to Chapter 13, "Secure Service Provisioning: Design Strategies and Best Practices," for details.
Web Tier
The subscriber uses a Web browser to sign on to the rewards portal. The portal initiates a secure communication channel between the client browser and the Web server using the Secure Pipe pattern. The Secure Pipe establishes transport-layer security between the client and server using secure handshake protocols (such as SSL or TLS), which provide an encrypted data exchange and digital signatures for guaranteed message integrity.
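To make the Secure Pipe concrete, here is a minimal client-side sketch in Java, assuming JSSE is available; the portal URL is hypothetical. The HTTPS connection supplies the transport-layer encryption and integrity just described.

import java.io.InputStream;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

// Minimal sketch of the client side of a Secure Pipe: all traffic to the
// rewards portal travels over an SSL/TLS channel negotiated by JSSE.
public class SecurePipeClient {
    public static void main(String[] args) throws Exception {
        URL portal = new URL("https://rewards.example.com/portal/login"); // hypothetical URL
        HttpsURLConnection conn = (HttpsURLConnection) portal.openConnection();
        conn.connect();
        // The cipher suite confirms that encryption and integrity protection
        // were negotiated during the SSL/TLS handshake.
        System.out.println("Negotiated cipher suite: " + conn.getCipherSuite());
        try (InputStream in = conn.getInputStream()) {
            // ... read the protected response ...
        }
    }
}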
Once the secure communication channel is established, the Front Controller pattern is used to process application requests (refer to http://java.sun.com/blueprints/patterns/FrontController.html for details). The Front Controller uses a Secure Base Action pattern that attempts to validate the session. Finding that the session information does not exist, the Secure Base Action uses the Authentication Enforcer pattern to authenticate the subscriber. The Authentication Enforcer prompts the subscriber for his user credentials. Upon successful authentication of the user credentials by the Authentication Enforcer, the Secure Base Action pattern uses the Secure Session Manager pattern to create a secure session for that user. It then applies the Authorization Enforcer pattern to perform access control on the request. Based on the user credentials and the relevant user provisioning information, it creates a secure session to access the required membership functions. During this process, the application uses the Secure Logger pattern to make use of the application logging infrastructure and initiates logging of all the user requests and responses by recording the sensitive business information and transactions, including success or failure attempts. Figure 8-5 depicts the scenario with a sequence diagram showing the participants in the Web Tier.
In Figure 8-5, the actors denote the security patterns used. The Service Requester (or client) sends a request to initiate business services in the online portal. The Secure Pipe secures the service request in the transport layer. The Secure Base Action validates the session and uses the Authentication Enforcer to authenticate the session. The Authentication Enforcer will in turn request user credentials from the service requester and log the authentication result in the Secure Logger. Upon successful authentication, the Secure Base Action will create a secure session under the Secure Session Manager. It will also use the Authorization Enforcer to authorize the request and use the Intercepting Validator to validate the parameters in the request. Upon successful authorization processing, the Secure Base Action will log the events using the Secure Logger.
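As an illustration of the coordination just described, the following is a minimal Java sketch of a Secure Base Action. All collaborator interfaces are hypothetical stand-ins for the patterns of the same names, not APIs from this book or any library:

import javax.servlet.http.HttpServletRequest;

// Hypothetical collaborators; each stands in for the pattern of the same name.
interface AuthenticationEnforcer { boolean authenticate(HttpServletRequest req); }
interface AuthorizationEnforcer  { boolean authorize(HttpServletRequest req); }
interface InterceptingValidator  { boolean validate(HttpServletRequest req); }
interface SecureSessionManager   { void createSession(HttpServletRequest req); }
interface SecureLogger           { void log(String event); }

// Sketch of a Secure Base Action coordinating the Web Tier patterns in the
// order shown in the sequence diagram: authenticate, create the secure
// session, authorize, validate parameters, and log the outcome.
public abstract class SecureBaseAction {
    protected AuthenticationEnforcer authnEnforcer;
    protected AuthorizationEnforcer authzEnforcer;
    protected InterceptingValidator validator;
    protected SecureSessionManager sessionManager;
    protected SecureLogger logger;

    public final void execute(HttpServletRequest request) {
        if (!authnEnforcer.authenticate(request)) {
            logger.log("authentication failed");
            throw new SecurityException("Authentication failed");
        }
        sessionManager.createSession(request);    // secure session for the user
        if (!authzEnforcer.authorize(request) || !validator.validate(request)) {
            logger.log("authorization or validation failed");
            throw new SecurityException("Request rejected");
        }
        logger.log("request accepted");
        doExecute(request);                        // application-specific action
    }

    protected abstract void doExecute(HttpServletRequest request);
}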
Business Tier
Under the secure session, a Container Managed Security pattern may be used to delegate authentication and authorization handling of the requests to the application server container. Policy can then be applied declaratively, in an XML deployment descriptor, or programmatically, using the container's J2EE security APIs. In our example, the business portal architects and designers require a more dynamic policy framework for the Business Tier and choose not to use container managed security due to the relatively static nature of the deployment descriptors. Instead, they use a combination of Business Tier patterns to provide security in the back-end business portal services. Once the request has been processed in the Web Tier, the application invokes the relevant Business Tier services. These services are fronted using the Secure Service Façade pattern. This pattern can be augmented by the Container Managed Security pattern and is used for authenticating, authorizing, and auditing requests from the Web Tier. Anticipating a large user volume through the business portal, its Web Tier and Business Tier are placed on separate machines (horizontal scaling) in order to enable high scalability. Since the Business Tier lives in a different application server instance, authentication and authorization must be enforced on the Business Tier via security context propagation. This may seem redundant, but were it not done this way, there would be a significant security risk. The Secure Service Façade represents the functional interface to the back-end application services. These may include a service to inquire about the membership award balance and the submission of a reward redemption request. These may be business functions to which the subscriber is not entitled. The Secure Service Façade will use the Policy Delegate pattern to determine and govern the business-related security policies for the services to which the requester is entitled.
When a request is first made to the Secure Service Façade, it will use the Dynamic Service Management pattern to load and manage the Policy Delegate class and any security-related supporting classes. The Dynamic Service Management pattern allows the application to maintain up-to-date policy capabilities by providing the ability to dynamically load new classes at runtime. In addition, it provides JMX management interfaces to the Policy Delegate for management and monitoring of policy operations. Once the Policy Delegate is loaded, it can provide authentication and authorization of requests. When the customer requests their rewards balance, the Policy Delegate authenticates and then authorizes the request. It then uses the Secure Session Object pattern to create a session object, such as an SSO (single sign-on) token, that can then be used in subsequent service calls or requests to verify the identity of the requester. The Secure Service Façade provides business and security auditing capabilities by using the Audit Interceptor pattern. Upon invocation, it notifies the Audit Interceptor of the requesting service. The Audit Interceptor then determines if, when, and how to log the request. Different types of requests may be logged in different locations or through different mechanisms. For the membership award balance service, the Audit Interceptor disregards the balance inquiries and generates an audit entry message that gets logged each time a redemption request is made. Since confidential material is passed via the Secure Service Façade and the back-end services, it is necessary to provide a means for securing data, such as account numbers, balances, and credit card information, which must be prevented from disclosure in log files and audit entries. The Secure Service Façade uses the Obfuscated Transfer Object pattern to obscure business data from potential unauthorized interception and intentional or unintentional access without authorization. In this case, our customer's credit card number, account number, and balance amount are obfuscated so that they will not show up in any logs or audit entries. Figure 8-6 depicts a sequence diagram with some details about the scenario in the Business Tier.
In Figure 8-6, the actors denote the security patterns used. Typically, once the service request is processed by the Web Tier security patterns, a Business Delegate pattern (refer to http://java.sun.com/blueprints/corej2eepatterns/Patterns/BusinessDelegate.html for details) will be used to invoke Business Tier objects. The Business Delegate will create a service session using the Secure Service Façade (either local or remote synchronous invocation). The Secure Service Façade will instruct the Audit Interceptor to initiate the auditing process, either synchronously or asynchronously. It will also load the Dynamic Service Management pattern for forceful instrumentation of the management and monitoring process for business components. The Dynamic Service Management creates an instance of the Policy Delegate. The Secure Service Façade will start processing the request and invoke the Policy Delegate functions to process the request with the relevant policies defined for the objects or the service requester. It creates an instance of a Secure Session Object for the online portal transactions. The Secure Service Façade will invoke the business object to process business data in the service request. This may involve accessing business information related to the membership reward balance or requesting reward redemption services, in our sample scenario. To protect the business data in the transfer object, the business object can create instances of the Obfuscated Transfer Object for delivery. Upon completion of the service request, the Secure Service Façade instructs the Audit Interceptor to capture and verify application-related events.
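A minimal sketch of how a Secure Service Façade might coordinate the Policy Delegate, Audit Interceptor, and an Obfuscated Transfer Object follows. The interfaces and the masking rule are hypothetical illustrations, not the book's reference implementation:

// Hypothetical stand-ins for the Business Tier patterns named above.
interface PolicyDelegate   { void enforce(String user, String operation); }
interface AuditInterceptor { void intercept(String user, String operation); }

// An Obfuscated Transfer Object: sensitive fields are masked before the
// object ever reaches a log file or audit entry.
class ObfuscatedTransferObject {
    private final String maskedAccountNumber;
    ObfuscatedTransferObject(String accountNumber) {
        // Keep only the last four digits, purely as an illustration;
        // assumes the account number has at least four characters.
        this.maskedAccountNumber =
            "****" + accountNumber.substring(accountNumber.length() - 4);
    }
    @Override public String toString() { return maskedAccountNumber; }
}

// Sketch of a Secure Service Facade: every business call is authorized by
// the Policy Delegate and audited by the Audit Interceptor, and results are
// returned as obfuscated transfer objects.
public class SecureServiceFacade {
    private final PolicyDelegate policy;
    private final AuditInterceptor auditor;

    public SecureServiceFacade(PolicyDelegate policy, AuditInterceptor auditor) {
        this.policy = policy;
        this.auditor = auditor;
    }

    public ObfuscatedTransferObject getRewardsBalance(String user, String account) {
        policy.enforce(user, "getRewardsBalance");    // authenticate and authorize
        auditor.intercept(user, "getRewardsBalance"); // let the interceptor decide whether to log
        // ... invoke the business object for the real balance ...
        return new ObfuscatedTransferObject(account);
    }
}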
hosted via Web services. The Web-services-based service provider intercepts the service request from the member portal using the Message Interceptor Gateway pattern. The SOAP service request (using RPC-style messaging or the request-reply model) is verified and validated for message-level security credentials and other information by applying the Message Inspector pattern. Then the underlying services apply the Secure Message Router pattern, which securely routes the message to the appropriate service provider or recipients. Upon successful message verification and validation using the Message Inspector pattern, the response message will be routed back to the intended client application. If asynchronous messaging intermediaries (using document-style messaging) initiate the SOAP messages, the Message Interceptor Gateway pattern at each intermediary will process these SOAP messages and apply similar techniques. This process may also involve forwarding the request to an identity provider infrastructure for verification of the authenticity of the credentials. Figure 8-7 depicts a sequence diagram with some details about the scenario for Web services.
In Figure 8-7, the actors denote the security patterns used. The Service Requester sends a request to invoke business services for the membership award catalog or redemption requests with other content providers. The Secure Pipe secures the service request. The Message Interceptor Gateway intercepts the SOAP message and uses the Message Inspector to verify and validate the security elements in the SOAP message. The Message Inspector confirms with the Message Interceptor Gateway that the message is validated (or not validated). Upon successful validation, the Message Interceptor Gateway forwards the SOAP message to the Secure Message Router, which will send the SOAP message to the final service endpoints provided by the Service Provider. The Service Provider will process the service request and return the result of the membership award request to the Service Requester via the Secure Pipe.
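As a rough illustration of one check a Message Inspector might perform, the following SAAJ-based sketch verifies that an incoming SOAP message carries a WS-Security header at all; full XML Signature validation and XML Encryption processing are omitted:

import java.util.Iterator;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPHeaderElement;
import javax.xml.soap.SOAPMessage;

// Sketch of a Message Inspector check using SAAJ: before a request is
// routed, confirm that the SOAP header carries a WS-Security element.
public class MessageInspector {
    private static final String WSSE_NS =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    public boolean hasSecurityHeader(SOAPMessage message) throws Exception {
        SOAPHeader header = message.getSOAPHeader();
        if (header == null) {
            return false; // no header block at all
        }
        Iterator<?> elements = header.examineAllHeaderElements();
        while (elements.hasNext()) {
            SOAPHeaderElement element = (SOAPHeaderElement) elements.next();
            if (WSSE_NS.equals(element.getNamespaceURI())
                    && "Security".equals(element.getLocalName())) {
                return true; // a wsse:Security header is present
            }
        }
        return false;
    }
}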
Identity Tier
The member portal currently has the capability to allow subscribers to sign on with other underlying services, including the business applications hosted by the service provider or the remote business services provided by the content provider (trading partners). To establish identity and grant access to users to other business services to which they are entitled, the portal uses protocols based on SAML and Liberty Alliance specifications. Using the Assertion Builder pattern, the application creates a SAML authentication assertion for each business service that the subscriber chooses to invoke from the Authorization Activator pattern. It then encapsulates the user credentials in the security token in the SAML assertion using the Credential Tokenizer pattern. Because the customer loyalty system runs on the legacy back-end systems, the SSO Delegator pattern can be applied to integrate with the legacy back-end EIS system to provide the single sign-on access. This also facilitates global logout capability. Using the Password Synchronizer as a supporting infrastructure function, the application runs secure service provisioning to synchronize user accounts across service providers. It complements the single sign-on security functionality provided by the SSO Delegator pattern and the Assertion Builder pattern. The subscriber inquires about the account balance of his or her membership award. Figure 8-8 depicts a sequence diagram with some details about the scenario in the Identity Tier.
In Figure 8-8, the actors denote the security patterns used. Before the subscriber can use different remote membership reward services, his or her user account and password need to be registered first, using the Password Synchronizer. Once the subscriber signs on to the Web portal, he or she initiates a single sign-on request using the Single Sign-on (SSO) Delegator to the Web portal services and the associated service providers. The SSO Delegator will initiate remote security services. In order to process requests from the subscriber under the single sign-on environment, the service provider requires a SAML assertion (for example, a SAML authentication assertion). The SSO Delegator will then create a SAML assertion statement using the Assertion Builder, which will use the Credential Tokenizer to digitally sign the SAML assertion statement with the user credentials. Upon completion, the Assertion Builder will return the SAML assertion statement to the SSO Delegator, which will forward it to the service provider. After the subscriber has finished the membership reward request, he decides to log out from the online portal. He issues a global logout request to the SSO Delegator, which will issue the logout request to the service providers. Upon completion, the SSO Delegator notifies the service requester of the global logout result.
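The following is a minimal sketch of an Assertion Builder that emits a SAML 1.1-style authentication assertion as XML text. A production builder would use a SAML toolkit and delegate signing to the Credential Tokenizer; the issuer and subject values are hypothetical:

import java.time.Instant;
import java.util.UUID;

// Sketch of an Assertion Builder producing a minimal, unsigned SAML 1.1
// authentication assertion. The Credential Tokenizer would be responsible
// for digitally signing the result before it is forwarded.
public class AssertionBuilder {
    public String buildAuthenticationAssertion(String issuer, String subject) {
        String now = Instant.now().toString();
        String assertionId = "_" + UUID.randomUUID(); // xsd:ID values cannot start with a digit
        return ""
            + "<saml:Assertion xmlns:saml=\"urn:oasis:names:tc:SAML:1.0:assertion\"\n"
            + "    MajorVersion=\"1\" MinorVersion=\"1\"\n"
            + "    AssertionID=\"" + assertionId + "\"\n"
            + "    Issuer=\"" + issuer + "\" IssueInstant=\"" + now + "\">\n"
            + "  <saml:AuthenticationStatement\n"
            + "      AuthenticationMethod=\"urn:oasis:names:tc:SAML:1.0:am:password\"\n"
            + "      AuthenticationInstant=\"" + now + "\">\n"
            + "    <saml:Subject>\n"
            + "      <saml:NameIdentifier>" + subject + "</saml:NameIdentifier>\n"
            + "    </saml:Subject>\n"
            + "  </saml:AuthenticationStatement>\n"
            + "</saml:Assertion>";
    }
}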
In Figure 8-9, in the architecture phase, the architects identify potential security patterns that can be used to satisfy the application-specific security requirements and rationalize the mitigated risks and trade-offs. Based on those inputs, the security design process will be carried out. The security designers perform factor analysis, tier analysis, trust modeling, and threat profiling. They then create security policies that realize and validate the security use cases, architecture, and the identified patterns. During the design process, if there exists a security pattern that corresponds to the security use case requirements and it is architecturally significant, it can be incorporated into the security design. If there is no security pattern available, a new design approach must be taken. It can then be considered for reuse. If it is found to be reused enough, it can be classified as a pattern for inclusion in the Security Pattern Catalog. In the build and integration portions of the development life cycle, architects and designers apply the relevant security patterns to the application design that satisfy the security use cases. They choose to use their preferred security framework tools to implement the application using the security patterns. Prior to the deployment process, testers evaluate the application to ensure no security requirements or risk areas were overlooked. If a gap is identified that requires a change to the design, architects can revisit the security patterns to see if any additional security patterns or protection mechanisms are necessary. White and black box testing is an essential security measure that must be performed prior to deploying an application.

In summary, the security architecture and design process can be broken down into the following steps:

1. Identify the required security features based on the functional and non-functional requirements and organizational policies (Security Use Cases).
2. Create a conceptual security model based on architecturally significant use cases (Candidate Architecture).
3. Perform risk analysis and mitigate risks by applying security patterns (Risk Analysis).
4. Perform trade-off analysis to justify architectural decisions (Trade-off Analysis).
5. Identify the security factors for each component or service specific to the application or system infrastructure (Factor Analysis).
6. Review the factors that impact the security of applications or Web services elements under each logical architecture tier (Tier Analysis).
7. Define the object relationships for security protection and identify the associated trust models or security policies (Trust Model).
8. Identify any security risks or threats that are specific to the use case requirements (Threat Profiling).
9. Formulate a security policy (Security Policy Design).
10. Apply security patterns wherever appropriate; sometimes, rearchitecting or reengineering may be required (Security Pattern Design, Security Pattern Catalog, Apply Security Pattern).
11. Prior to implementation, use Security Reality Checks to review and assess the security levels by logical architecture tiers (Security Reality Check).

By applying this design process within a structured methodology, architects should be able to complete a secure architecture design using security patterns and derive a secure application architecture addressing the known risks and vulnerabilities. They should also be able to maintain a customized security pattern catalog based on past design and implementation experience or known design patterns.
Factor Analysis
The objective of end-to-end application security is to provide reliable and secure protection mechanisms in business applications that can support authentication, authorization, data integrity, data privacy (encryption), non-repudiation (digital signature), single sign-on (for better efficiency and cost-effective security administration), monitoring and audit control, and protection from various security threats or attacks. The related security attacks can be malicious code attacks, Denial of Service (DoS) and Distributed DoS attacks, dictionary attacks, replay attacks, session hijacking, buffer overflow attacks, unauthorized intrusion, content-level attacks, identity spoofing, identity theft, and so on. From an end-to-end security perspective, the security design will vary by a number of application-, platform-, and environment-specific requirements and factors, including the following.
Infrastructure
- Target deployment platform (and the underlying technologies and implementation constraints)
- Number or type of access points or intermediaries
- Service provider infrastructure (centralized, decentralized, distributed, or peer-to-peer), and the associated constraints of connecting to the infrastructure (for example, the data transport security requirement)
- Network security requirements
Web Tier
- Authentication-specific requirements (for example, multifactor authentication mechanisms)
- Client devices or platforms used (for example, Java Card, biometrics)
- Key management strategy (for example, whether key pairs are generated by a Certificate Authority and how the key pairs are stored)
- Authorization-specific requirements based on the sensitivity of the access requests
Business Tier
- Nature of the business transaction (for example, non-sensitive information has a lower security protection requirement than sensitive, high-value financial transactions)
- Service invocation methods (RPC-style, document-style, synchronous, or asynchronous communication)
- Service aggregation requirements (for example, whether business services need to be intercepted, filtered, or aggregated from multiple service providers)
Identity Tier
- Identity management strategy (for example, how network identity is established, validated, and managed)
- Policy management strategy (for example, management policies for who can access the SOAP messages and whether the service requester can access the full or partial content)
- Legacy security integration constraints (for example, security credential propagation)
- Single sign-on and sign-out requirements
Quality of Services
- Service-level requirements (for example, quality-of-service requirements for high availability, performance, and response time)

Relating the Factor Analysis to Security Patterns. The security factor analysis is a good practice for identifying the important application-specific and environment-specific constraints of the target applications and the target clients in relation to the overall security requirements. It also helps with locating the appropriate security patterns that can be used to address the business problems. For example, in a Web-services security design scenario, we address the application- and environment-specific security requirements and constraints with the following security patterns:

- Secure the transport layer (Secure Pipe pattern, Secure Message Router pattern).
- Validate the SOAP message for standards compliance, content-level threats, malicious payload, and attachments (Message Interceptor Gateway pattern, Message Inspector pattern).
- Validate the message at the element level and the requesting identity (Message Inspector pattern).
- Establish the identity policies before making business requests (Assertion Builder pattern).
- Protect the exposed business services and resources by service masking (Secure Service Proxy pattern).
- Protect the service requests from untrusted hosts, XML DoS, message replay, and message tampering (Message Interceptor Gateway pattern).
- Timestamp all service requests (Secure Message Router pattern).
- Log and audit all service requests and responses (Secure Logger and Audit Interceptor patterns).
- Route requests to multiple service endpoints by applying message-level security and Liberty SSO mechanisms (Secure Message Router pattern).

Applying to the Media and Devices. The security factors will differ when applied to different media or client devices. Different media and client devices, ranging from a Web browser, Java Card, J2ME phone, or rich client to legacy systems, have different memory footprints. Some of them have more memory capacity to store key pairs, while others have less memory with which to perform the required security checking. For instance, Web browsers are able to store certificates and keys and provide a flexible way to download signed Java applets, establish client-certificate-based authentication, and use SSL communication. J2ME-based mobile phones and client devices operate with a smaller memory footprint and lower processing speed. It is harder to use encryption and signature mechanisms and to perform complex cryptographic processing with these phones and devices due to their memory capacity and environment constraints.

A possible security artifact for the factor analysis is to produce a summary of the security factors based on the
application-specific, platform-specific security requirements and the technology constraints in the security requirements document. This can be a separate appendix or a dedicated section that highlights the key areas of security requirements. The factor analysis provides an important input to the security architecture document. From the factor analysis, security architects and developers can justify which security design patterns or security design decisions should be used.
Tier Analysis
Tier analysis refers to the analysis of the security protection mechanisms and design strategies based on the business applications residing in different logical architecture tiers. In particular, it identifies the intra-tier communication requirements and dependencies. For instance, architects can use the HTTPS protocol to secure the data transport for applications residing in the Web Tier, but the same security protection mechanism will not work for applications residing in the Business Tier or in the Integration Tier. Similarly, the security protection mechanisms for asynchronous Web services will not work for synchronous Web services due to the difference between RPC-style service invocation and document-style messaging architecture. The security design strategies and patterns discussed in this book are grouped by tiers to reflect which security protection mechanisms are relevant for each logical architecture tier. A possible security artifact for the tier analysis is a tier matrix of security features by architecture tiers and by application layers. This matrix identifies the key security capability and design elements and their relation to different architecture tiers and application layers. During the security review, security architects and developers can evaluate the appropriateness and the reliability, availability, and scalability of the security design based on the tier matrix.
Threat Profiling
Threat profiling denotes profiling of architecture and application configurations for potential security weaknesses. It helps to reveal new or existing security loopholes and the weaknesses of an application or service. Thus, it enumerates the potential risks involved and how to protect the solutions built and deployed using them. This will involve defining and reinforcing security deployment and infrastructure management policies dealing with updating and implementing security mechanisms for the application security infrastructure on an ongoing basis. It can be applied to newly designed application systems, existing applications, or legacy system environments. A possible security artifact for threat profiling identifies and categorizes the types of threats, potential security vulnerabilities, or exposures that can attack the application systems. A use-case-driven data flow analysis can also be used to trace the potential risks. For example, a threat profile may identify and list the threats and vulnerabilities as follows:

- Actual or attempted unauthorized access
- Introduction of viruses, Trojan horses, and malicious code
- Actual or attempted unauthorized probing of content
- Denial of service attacks
- Arbitrary code execution
- Unauthorized alteration and deletion of data
- Unauthorized access
- Unauthorized disclosure
- Unauthorized privilege escalation

In addition, it would discuss the security considerations and risk management techniques for all the identified loopholes and flaws.
Trust Model
A trust model is the backbone of the security design. It provides mechanisms that establish a central authority of trust among the components of the security architecture and that verify the identity of participating user entities and their credentials, such as name, password, certificates, and so forth. In simpler terms, a trust-modeling process is defined as follows:
- A trust model identifies specific mechanisms meant for responding to a specific threat profile, where a threat profile is a set of threats or vulnerabilities identified through a set of security use cases.
- A trust model facilitates implicit or explicit validation of an entity's identity or the characteristics necessary for a particular event or transaction.
- A trust model may contain a variety of systems infrastructure, business applications, and security products.

From a security design perspective, a trust model allows test-driving the patterns used, imposing a unique set of constraints, and determining the type and level of threat profiling required. Significant effort must go into the analysis preceding creation of the trust model to ensure that the trust model can be implemented and sufficiently tested. A trust model must be constructed to match business-specific requirements, because no generic trust model can be assumed to apply to all business or security requirements and scenarios.

Let's take a server-side SSL example in which we assume that an SSL session is initiated between a Web browser and a server. The Web browser determines the identity of the server by testing the credentials embedded in the SSL session by means of its underlying PKI. The testing of credentials proves a "one-way trust model" relationship; that is, the Web browser has some level of confidence that the server is who it claims to be. However, the server has no information for testing the Web browser. Essentially, the server is forced to trust the Web-browser-returned content after initiating its SSL session.

Two possible security artifacts from the trust model can be produced. First, the analysis of the trust model usually specifies the security requirements and system dependencies for authentication and authorization in the security requirements specification. This provides the basic design consideration for authentication and authorization and provides input to the definition of system use cases for authentication and authorization. Second, the trust model will identify the security risks associated with the trust relationship. These form an important component in the overall risk document. For an example of a trust model, refer to [Liberty1] and [XACML2].
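As a small JSSE illustration of strengthening the one-way trust model above into a two-way (mutual) model, the sketch below requires a client certificate during the handshake. Keystore and truststore configuration is omitted, and the port number is arbitrary:

import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;

// Sketch of a server that refuses the one-way trust model: requiring a
// client certificate forces the client side to present verifiable
// credentials as well. Key material setup (system properties or an
// SSLContext) is assumed to be done elsewhere.
public class MutualAuthServer {
    public static void main(String[] args) throws Exception {
        SSLServerSocketFactory factory =
            (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        SSLServerSocket server = (SSLServerSocket) factory.createServerSocket(8443);
        // With one-way SSL, only the server authenticates. Requiring client
        // authentication makes the handshake fail unless the client presents
        // a certificate that the server's trust store accepts.
        server.setNeedClientAuth(true);
        System.out.println("Waiting for mutually authenticated clients...");
        server.accept(); // the handshake occurs on the first I/O over the socket
    }
}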
Policy Design
Security policies are a set of rules and practices that regulate how an application or service provides services to protect its resources. The security policies must be incorporated into the security design in order to define how information may be accessed, what pre-conditions for access must be met, and by whom access can be permitted. In a typical security design artifact, security policies are presented in the form of rules and conditions that use the words must, may, and should. These rules and conditions are enforced on the application or service during the design phase by a security authority by defining the rights and privileges with respect to accessing an application resource or conducting operations. Security policies applied to an application or service can be categorized as the following six types:

- Identity policies: Define the rules and conditions for verifying and validating the requesting entity's credentials. These include usage of username/passwords, digital certificates, smart cards, biometric samples, SAML assertions, and so forth. This policy is enforced during authentication, authorization, and re-verification requirements of an identity requesting access to an application.
- Access control policies: Define the rules and conditions applied to a requesting entity for accessing a resource or executing operations exposed by an application or service. The requesting entity can be a user, device, or another application resource. The access control policies are expressed as rights and privileges corresponding to the identity roles and responsibilities of the requesting entity.
- Content-specific policies: Define the rules and conditions for securing the content during communication or storage. This policy enforces the content-specific privacy and confidentiality requirements of an application or service.
- Network and infrastructure policies: Define the rules and conditions for controlling the data flow and deployment of network and hosting infrastructure services for private or public access. This helps to protect the network and hosting infrastructure services from external threats and vulnerabilities.
- Regulatory policies: Define the rules and conditions an application or service must adhere to in order to meet compliance, regulation, and other legal requirements. These policies typically apply specifically to financial, health, and government institutions (for example, SOX, GLBA, HIPAA, and the Patriot Act).
- Advisory and informative policies: These rules and conditions are not mandated, but they are strongly suggested with respect to an organization or to business rules. For example, these policies can be applied to inform an organizational management team about service agreements with external partners for accessing sensitive data and resources, or to establish business communication.

In addition, in some cases we need to design and apply target application environment and business-specific policies
such as the following:

- User registration, revocation, and termination policy
- Role-based access control policy
- PKI management policy
- Service provider trust policy
- Data encryption and signature verification policy
- Service audit and traceability policy
- Password selection and maintenance policy
- Information classification and labeling policy
- DMZ environment access policy
- Application administration policy
- Remote access policy
- Host and network administration policy
- Application failure notice policy
- Service continuity and recovery policy

The security policy artifacts must capture these policy requirements and define the roles and responsibilities of the stakeholders who are responsible for implementing and enforcing them. It is also important to incorporate updates based on changes in the organization and the application environment.
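As one possible illustration (not a prescribed format), policy rules and their must/should/may obligation levels could be represented in code so they can be reviewed programmatically, as in this hypothetical Java sketch:

import java.util.Arrays;
import java.util.List;

// Sketch of representing security policy rules in code. The obligation
// levels mirror the "must / may / should" wording used in policy documents;
// the sample rules are hypothetical restatements of items from the list above.
public class PolicyCatalog {
    public enum Obligation { MUST, SHOULD, MAY }

    public static final class PolicyRule {
        final String category;
        final Obligation obligation;
        final String statement;
        PolicyRule(String category, Obligation obligation, String statement) {
            this.category = category;
            this.obligation = obligation;
            this.statement = statement;
        }
    }

    public static void main(String[] args) {
        List<PolicyRule> rules = Arrays.asList(
            new PolicyRule("Identity", Obligation.MUST,
                "Requesting entities must present a valid credential before access"),
            new PolicyRule("Access control", Obligation.MUST,
                "Rights and privileges are granted by role, not by individual"),
            new PolicyRule("Advisory", Obligation.SHOULD,
                "Partner service agreements should be reviewed with management"));
        for (PolicyRule r : rules) {
            System.out.println(r.category + " [" + r.obligation + "]: " + r.statement);
        }
    }
}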
Classification
Classification is a process of categorizing and designating data or processes according to an organization's sensitivity to its loss or disclosure. In an application or service, not all data has the same value to the requesting entity or to the business. Some data, such as trade secrets, legal information, strategic military information, and so on, may be more sensitive or valuable than other data in terms of making business decisions. Classification is primarily adopted in information-sensitive applications or services in order to prevent the unauthorized disclosure of information and the failure of confidentiality and integrity. The classification of data or processes in an application or service is typically represented as classes with five levels, ranging from the lowest level of sensitivity (1) to the highest (5):

1. Unclassified: The data or process represented by this classification is neither sensitive nor classified. The information is meant for public release, and disclosure does not violate confidentiality.
2. Sensitive But Unclassified (SBU): The data or process represented by this classification may contain sensitive information, but the consequences of disclosure do not cause any damage. Public access to this data or these processes must be prevented. Example: general health care information such as medications, disease status, and so on.
3. Confidential: The data or process in this classification must be protected within the organization and also from external access. Any disclosure of this information could affect operations and cause significant losses. Example: loss of customer credit card information from a business data center.
4. Secret: The data or process in this classification must be considered secret and highly protected. Any unauthorized access may cause significant damage. Example: loss of strategic military information.
5. Top Secret: The data or process in this classification must be considered top secret. Any unauthorized disclosure will cause grave damage. Example: a country's national security information.

In a classified information system, all data has an owner, and the owner is responsible for defining the sensitivity of the
data, depending on the organizational policies. If the owner is not sure about the sensitivity level, then the information must be classified as "3 - Confidential." The owner is also responsible for the security of the data as per the organization's security policy pertaining to the classification and for defining who can access the data. Classification also depends on organizational requirements related to information confidentiality. Organizations must define their classification terms and definitions.
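A minimal Java sketch of the five levels, including the default-to-Confidential rule noted above, might look like this (the readableAt check is an illustrative simplification of mandatory access control, not a prescribed design):

// Sketch of the five classification levels as a Java enum.
public enum Classification {
    UNCLASSIFIED(1),
    SENSITIVE_BUT_UNCLASSIFIED(2),
    CONFIDENTIAL(3),
    SECRET(4),
    TOP_SECRET(5);

    private final int level;
    Classification(int level) { this.level = level; }

    public int level() { return level; }

    // Default classification when the owner cannot determine sensitivity.
    public static Classification defaultLevel() { return CONFIDENTIAL; }

    // True when a subject with the given clearance may read data at this level.
    public boolean readableAt(Classification clearance) {
        return clearance.level >= this.level;
    }
}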
Security Labeling
Security labels represent the sensitivity level of data or processes. They denote the type of classification assigned. During runtime access, the labels are verified and validated in accordance with an organization's security policy. To adopt classification and labeling of data and processes, it is necessary to choose a highly secure operating system (for example, the Trusted Solaris operating system) that offers labeling of data and processes based on discretionary and mandatory access-control policies throughout the operating system, including all users, files, directories, processes, services, and applications. The label, once assigned, cannot be changed other than by an owner or an authorized person with higher privileges in the classification hierarchy. Classification and labeling requirements must be identified during the design phase. Classification and labeling must be adopted when an application or service is required to manage highly sensitive data or processes and the business or organization dictates classification requirements for its information with higher confidentiality.
Rules:
- Secret Key: RSA, 1024-bit
- Public Key: RSA with MD5, 1024-bit; RSA with SHA-1, 1024-bit
- Accountability (audit logs): centralized audit log: Yes; encrypted checksums on log records: Yes; encrypted log records: Yes; digital signature (non-repudiation): Yes
Reality Checks
In the design process, architects and developers may have chosen multiple security patterns to meet the application security requirements. They may need to make certain compromises in order to meet other application requirements, such as application performance throughput or cost constraints. However, any trade-off should not sacrifice the overall business security requirements without proper sign-off by the business owners. Table 8-12 shows application-specific security reality checks intended to ensure production-quality security protection measures prior to deploying J2EE-based applications or Web services to production. It is not meant to be exhaustive, but it can be very handy for self-assessment.
Table 8-12. Security Reality Checks
Areas Security Reality Check Item Y/N Remarks A written security policy is a key ingredient in a well-formed security architecture. The security policy document should clearly define the application, its users, and environment-specific security policies for the security design, implementation, and deployment. It should also then document associated procedures for securing the applications or the underlying infrastructure until its retirement.
Policy
Are there any documented security policies for J2EE-based or Web-servicesbased applications?
Does the existing security policy cover the depth of application security that is associated with the following? Minimizing and hardening of the target operating system that runs the target applications Policy Securing the application servers that run applications developed in J2EE or Web-services technologies Securing the business logic components and data objects Securing the production environment and the data center infrastructure. Does the security policy cover the organizational standards, procedures, and design processes for data encryption, cryptographic algorithms, and key management? Does the security policy cover any escalation procedure to manage security threats in case of security intrusion to the J2EEbased applications or Web services? Do you have a business continuity plan that includes the business continuity of the application infrastructure to protect the applications and the associated risks? Options are e-mail, bulletin board system, newsletter, training sessions, and so forth. Security functions should not be treated as only security personnel's job. It should be considered as everyone's responsibility. If not, security policy is doomed to be ignored. If management is not supportive of the current policy, but look to you to provide a secure architecture, chances are you will be blamed for everything bad down the road that occurs because of their reluctance to enforce security. It is
Policy
Policy
Policy
Policy
Policy
Is senior management aware of and supportive of the security policy? This includes regulatory requirements such as SOX, FISMA, and so forth
important to make management aware of their responsibility in enforcing the policy. These are the policies for establishing roles and groups. A simple system, for example, will have users and administrators. A more complex system requires remote access management, personnel management, access to server administration, and so forth.
Policy
Is there an application and data access-control security policy? Are access policy roles and groups clearly documented?
Policy
Are allowed and denied services and protocols documented? Is the network topology enabled with a firewall to protect the DMZ environment, including the hosts and applications? Are the locations of physical management points to support policy accessed through routers, gateways, and bridges identified on network topology documentation and how often is documentation updated?
Policy
These stress the importance of topology documentation, which is the guide to identifying where possible breaches in data access security could occur.
Policy
Encryption is typically required for services involving personal information, for example, online banking. Although efforts are usually made to only encrypt sensitive payloads in order to avoid the computing overhead that encryption incurs, it is often (especially for Webbased applications) the case that the encryption overhead is minimal compared to the communications overhead. SSL carries noticeable performance issues; to counter these overheads, it should make use of hardware acceleration with key storage. Online privacy is a very broad topic, but in our discussion we focus on the communication aspect, which is usually accomplished with encryption. It is important to pinpoint where the encryption is done, who is responsible for the product doing the encryption, and what technology/algorithm is being used. It is also important to know where the keys are stored and how they are managed. There are different methods for encrypting data between the client and a Web server, such as HTTP/SSL, which is transportlayer (or channel) encryption, and application-level encryption, which is encrypting data directly in the application. Data integrity can be accomplished through checksums or message digest algorithms. This is built into HTTPS, but at the application-level it must be implemented.
Policy
Policy
Are users aware of the need to ensure data privacy and follow information and data handling procedures. Are they aware of importance of using data sensitivity levels? These policies are more currently referred to as trust management. How is the trust established for a principal so that he or she is given authentication credentials and permitted authorization for certain functions? How is the trust maintained and terminated? Who is given the ability to give out these privileges? This portion of the security policy should lay these out in a step-by-step fashion. As stated previously, many headaches and finger-pointing episodes can be avoided by assuring a one-to-one relationship between each individual
Policy
Policy
user and his or her authentication information. All actions and events should be traceable back to a unique credential (except for public access). Password changes should be enforced regularly if a secure form of authentication such as S/KEY, smart cards, or biometrics are not used. This minimizes unauthorized users from borrowing or stealing passwords to access the system. There should be a process in place for change management on both a system and a data level. If accessing different databases, authentication should be performed on a per-transaction level. Access C ontrol Lists are one way to accomplish this task. In many systems, the middle tier accesses all the back-end databases as one user. A sufficiently savvy client with access to one database may try to access another database or someone else's records. This is why access control should be performed on a per-transaction basis with sufficiently fine-grained control.
Is access to data controlled so that users can only change that which they should have access to?
Do procedures for changing production systems and data exist? Are access controls sufficiently flexible for users to do their jobs without compromising data confidentiality and integrity? This can be accomplished in the ACL by specifying different levels of access (that is, read, write, append, modify, and create). There should be a process in place to ensure that change management activities conform to a policy. This can be accomplished through routine reviews of log files.
Do you have a secure protection mechanism and well-defined procedures for key management (storing the key pairs used for authentication or for generating digital signatures)? How are the key pairs managed? Where are they stored? Who can create, update, and manage key pairs? Are the security design and administration control personnel separated? Do you have a regular security patch management process that applies to J2EE application servers, back-end application resources, and Web browsers? Segregation of security design and administration is one of the security control best practices.
There are new security threats discovered from time to time, especially for some operating systems and Web browsers.
How is unauthorized access detected? Are automated detection tools in use? How are logs managed and reviewed? Are audit trails of authentication and authorization generated and reviewed?
Authorization databases may be mirrored; how is this process protected if separate systems are at separate locations or co-located using primary/secondary servers? Are the activities managed via delegated administration? Are repeated login failures detected and recorded? If repeated login failures are logged, there should be tools or mechanisms to monitor the logs and alert security personnel to possible hacking attempts. Efficient monitoring requires the use of intrusion detection systems (IDS) and filtering software that flags potential problems in accordance with the security policy in place. It is important that IDS and log filtering are not applied during the creation of the audit trail; when problems arise, the more detail available from the audit trail, the better. Audit trails should record as much as possible and be reviewed with a healthy dose of filtering. No one will catch a problem in the middle of 1,000 pages of normal access logs. It is important that any changes to authorization of existing users/groups be scrutinized. At a minimum, an audit trail record should contain activity identification, the time of the activity, user identification, the requested transaction, and results. Audits of administration changes should ideally contain both old and new data. Most audit trail mechanisms, other than those directly related to a security product, are not protected from tampering. An attacker "covers his tracks" by removing incriminating entries from the audit trail and can thus foil detection. An authenticated audit trail can detect tampering; an audit trail written to a write-only device (a printer or CD-ROM) prevents it.
Quality of Services
Does your application security design support high availability of the security services, including the authentication of user credentials? In other words, have you included any design considerations to secure the application infrastructure, processes, and communications from loss or damage due to disasters, accidents, or security attacks? Do you have a recovery plan in case your security components (such as an Intercepting Web Agent or Secure Message Router Intermediary) are compromised or fail to function? Do you have validation methods for verifying the integrity of those deployed components? Does your application security design include any plan to predetermine the tolerance of application security failure? Does your application design include recovery of the security services, and does it provide an alternative infrastructure for recovery while the security services are being restored? Do you have a checklist of Java objects, Web services, XML messages, or any application design elements for which you need to evaluate or identify security threats? Do you have a risk management plan or recovery strategies for each type of security breach?
High availability of application security may include the use of hardware or software clustering of directory servers, Web Agents, Message Router Intermediaries, or any security service components.
The recovery design of the security service component should be part of the design process and should be represented at the infrastructure and/or application level.
Client Device Tier
Do you check for any suspicious "footprint" in the client devices (for example, PDAs)? Are the key pairs stored securely in the local client device? How is that security assured?
Hackers may leave a suspicious footprint in the client devices for future "replay" or "exploit."
Presentation Tier
How does the J2EE-based application design support login authentication?
Clear-text passwords are strongly discouraged. They are considered highly insecure because the password can be sniffed on the network, although they may be sufficient with adequate protection that guarantees there is no danger of interception; at any rate, architects need to weigh those risks. Stronger options include encrypted passwords via Kerberos tickets or SSL mechanisms, one-time passwords using a token device (for example, SecurID), and certificates (used with SSL). Browsers using SSL normally support server authentication via certificates. Client authentication using passwords over an HTTP/SSL connection is often used, but using client-side certificates is highly recommended.
How is the login information carried throughout the session execution: cookies, URL rewriting, or a security token?
Cookies in clear-text form can be a source of attack. They should be encrypted, hashed, and timestamped to avoid session hijacking. URL rewriting must be protected using SSL and URL encoding/decoding mechanisms. Security tokens should be set within the confines of an established security protocol such as SSL. This is a measure of how seriously the security of the J2EE-based application is being dealt with; a positive answer means the bar has been raised for other security mechanisms, such as authentication, session state, and so forth. Typically, there is a user lookup mechanism (database, LDAP, and so forth) that also holds authorization information about what the user is allowed to do once we are confident he is who he says he is. However, certificate-based authentication may rely strictly on the certificate signature to ascertain authentication as well as authorization privileges. A trusted applet or application (from a signed JAR) may perform actions outside the normal Java sandbox, such as writing to the local machine's hard drive. Take care that the client sanctions these actions, and make sure obfuscation hides the business logic to protect the middle-tier business abstraction.
Web Tier
How is authentication done? How is authorized access controlled, and how are authentication and authorization administered?
Hardware-based encryption is often more secure and performs better; it provides a tamper-resistant solution. Software-based encryption is easier to install and change as necessary, but it may be compromised if an attacker attains "root" access on the host machine. Encryption should be addressed at the application level, the network level, and the host-environment level.
Does the encryption technology use standard encryption algorithms widely recognized as being effective (for example, FIPS-approved)?
There is a standard set of accepted encryption algorithms, many of which are in the public domain, but some security products use unproven encryption technology. Standard algorithms include Triple-DES, RSA, Diffie-Hellman, IDEA, RC2, RC4, and Blowfish. The newsgroup sci.crypt regularly publishes a FAQ that identifies features to look out for when reviewing an encryption security product.
Are there U.S. export or international laws to be considered while using encryption?
U.S. federal law currently restricts the export of products using encryption technology. For an intranet environment where U.S. businesses have a presence overseas, this is not an issue. For a U.S. company offering services to overseas clients, it is.
Key management involves making sure each member of a communication has the correct key value to either encrypt or decrypt a data stream. Current encryption products involve the use of public-key technology, usually in the form of X.509 certificates; these are the certificates used by Web browsers from Netscape and Microsoft. The big problem today is finding out when a certificate has been revoked. A certificate always has an expiration date, but to date no standard method is in wide use for resolving premature certificate revocation; that is, revoking a fired employee's certificate. Options for key storage include a token device (for example, a smart card) or a password-encrypted file. Use of a plaintext file to store a secret key is risky because the key is easily compromised if the machine is successfully attacked, but such a measure is necessary for machines that need the ability to cycle without human intervention (typing a password or inserting a smart card). If an LDAP database is used, encryption between the LDAP server and the authenticating party should be considered. Multifactor authentication combining smart cards and biometrics is often considered a reliable personal authentication option. Access can be granted to roles that are assigned to individual users, thereby allowing user accounts to be tied to just one person. Efficiency is often realized by grouping similar types of users into one account. For some services (for example, the ubiquitous "anonymous" FTP account), this makes sense, but for most commercial services, an authenticated user should be an individual. This policy can assist clients with billing problems ("Who was logged on for 38 hours?") and help pinpoint liability when problems occur.
Are the Java security policy files and configuration files protected in the application server? Do the application log files show the key-pair values and timestamps for troubleshooting? Are the log files and audit trails stored and secured in isolated systems, accessible by authorized personnel only?
If so, they should be protected with OS-specific ACLs and reside on a read-only file system.
Are the XML schemas (or DTDs) used to validate data quality as well as to detect invalid data or suspicious actions?
Someone could potentially send a valid schema with a petabyte of data in it. This could cause more trouble for the application than a small malformed file. The schemas should include restrictions on the amount of data being sent to prevent this type of attack.
Business Tier
Is authentication and authorization used to control access to particular applications? Is authentication and authorization used to control access to particular data within applications? Do users need to reauthenticate themselves for each type of access? Does the application make use of shared security context propagation? Does user authentication expire based on inactivity? How are unauthenticated users prevented from accessing network facilities? Is an authenticated session terminated after a period of inactivity?
This helps pinpoint the extent of resources a malicious, unauthorized user can use up.
Are the authentication and authorization databases properly protected?
For password-based authentication, encryption of passwords is prudent. Certificate-based authentication databases need only contain public keys, which by nature are safe to disclose.
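To make that note concrete, the following is a minimal sketch of storing a salted password digest instead of a recoverable password. The salt length and the SHA-1 digest are illustrative assumptions, not a prescription from this checklist.

import java.security.MessageDigest;
import java.security.SecureRandom;

public class PasswordDigest {
    public static byte[] salt() {
        byte[] s = new byte[8];
        new SecureRandom().nextBytes(s);   // random per-user salt
        return s;
    }

    public static byte[] digest(char[] password, byte[] salt) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        md.update(salt);                   // salt defeats precomputed tables
        md.update(new String(password).getBytes("UTF-8"));
        return md.digest();                // store salt + digest, never the password
    }
}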
Are the EJB transfer objects obfuscated?
Do you use the system ID (or one superuser ID) to update or access business data?
Applications should not use superuser IDs. Where elevated access is required, they should make use of role-based access control mechanisms to gain access to exactly what they need.
Is the JDBC communication with remote databases protected? Does it use encrypted communication to transmit JDBC statements? Do you tightly couple the business logic with the security processing rules in the same application logic code? Do you make use of role-based access to connect with back-end resources?
An n-tier application architecture design allows loose coupling of the business logic and the security processing rules for better scalability. Role-based access is more secure, flexible, and scalable than user-based access. Sensitive data in XML text encapsulated in a SOAP message can easily be snooped; using XML Signature and XML Encryption mechanisms to sign and encrypt sensitive payloads in SOAP messages is often recommended. RPC ports are easily exploited for malicious attacks.
Integration Tier
Are unused ports, OS services, and network devices disabled? Do you have security appliances to scan and inspect SOAP payload content and attachments for suspected malicious actions? Are SOAP messages containing sensitive data or financial transactions encrypted and digitally signed during transit and storage? Can the intermediaries (SOAP proxies) modify the message contents in SOAP messages? Do intermediaries make use of XML Signature?
Intermediaries must make use of XML Signature to prove their authenticity and their privilege to modify the message contents.
No public access to WSDL should be permitted unless the requesting entity is authenticated and authorized to download it.
Does your application mandate selective encryption of the contents of sensitive business data in SOAP messages?
Do you encrypt SOAP messages that contain sensitive business data between SOAP proxies that route the messages?
SOAP proxies or intermediaries that route SOAP messages should be tamper-proof, and unauthorized access should not be able to change the data contents. One way to provide data integrity and confidentiality is the use of XML Signature and XML Encryption. Some sites do not enforce individual user IDs for access to the UDDI or ebXML service registry; thus, attackers can easily break into the service registry. UDDI or ebXML service registries can be made highly available by hardware or software clustering (for example, using vendor-specific replication features). WSDL files can be dynamically looked up and the corresponding applications invoked; the implication is that hackers may easily locate all Web services endpoints for future security attacks. Using timestamps allows forged messages to be identified and prevented from further processing. It is also important to synchronize time throughout your environment. Using time-to-live tokens helps detect DoS attacks that use abnormal payloads and malicious messages requiring parsing with endless loops.
Have you properly set up individual user IDs for accessing the UDDI or ebXML service registry?
Do you allow all users dynamic look-up of WSDL files and dynamic invocation of services?
Do you associate access control and rights with all requesting resources?
It is important to ensure that each of the host machines and network appliances is scanned for any suspicious footprint or unusual security-related activity. Security hardening and minimization of the host OS must be performed.
Have you performed any security tests such as penetration tests or a regular host security scan for all intermediaries?
Security Testing
One of the most important and most frequently overlooked areas of application development is security testing. While we pay much heed to functional testing, it is surprising how little security testing we do. This may be attributed to many factors:
Lack of understanding of the importance of security testing
Lack of time
Lack of knowledge of how to do security testing
Lack of tools
Regardless of the reasons, it is not being done, and that poses a serious security risk to the application. Security testing is a time-consuming and tedious process, often even more so than functional testing. It is also spread across a variety of disciplines. There are the functional business security requirements that the regular test team will perform. However, there are also non-business functional, or operational, tests that must be performed as well. These can be broken down into two categories: Black Box Testing and White Box Testing.
Instead, the framework should make use of the architecturally significant security use cases. A good security framework should enable building vendor-independent security solutions that adopt standards-based technologies, structured security methodology, security patterns, and industry best practices.
Conclusion
Security must be omnipresent throughout your infrastructure in order for you to begin to feel that your application or service is secure. To accomplish this, it is imperative that you follow a structured methodology. In this chapter, we looked at why this methodology must be baked into the development process right from the beginning. Security spans every aspect of your system, from the network perimeter to the service, as shown in the Security Wheel. To incorporate security into the software development process, we extended the Unified Process to include several new security disciplines. These disciplines define the roles and responsibilities of the different security participants within the software life cycle. The Secure UP (as we called it) ensures that our security methodology can be supported within a software development process. Any security methodology must include this process or one like it. A secure methodology should also include how to adopt security patterns based on security use case requirements and design analysis, as well as how to apply them in appropriate business scenarios. In summary, we looked at what goes into a good security design. It starts with a methodology, leverages patterns and frameworks, and gets baked into the software development process from the ground up to deliver "Security by Default."
References

Security Principles
[NIST] NIST Security Principles. http://csrc.nist.gov/publications/nistpubs/
[Sun Blueprints] Trust Modeling for Security Architecture Development. http://www.sun.com/blueprints/1202/817-0775.pdf
Security Patterns
[Amos] Alfred Amos. "Designing Security into Software with Patterns." April 26, 2003. http://www.giac.org/practical/GSEC/Alfred_Amos_GSEC.pdf
[Berry] Craig A. Berry, John Carnell, Matjaz B. Juric, Meeraj Moidoo Kunnumpurath, Nadia Nashi, and Sasha Romanosky. J2EE Design Patterns Applied. Wrox Press, 2002.
[CJP] Deepak Alur, John Crupi, and Dan Malks. Core J2EE Patterns: Best Practices and Design Strategies. Prentice Hall, 2003.
[IBM] IBM. "Introduction to Business Security Patterns: An IBM White Paper." IBM, 2003. http://www3.ibm.com/security/patterns/intro.pdf
[Monzillo] Ron Monzillo and Mark Roth. "Securing Applications for the Java 2 Platform, Enterprise Edition (J2EE)." JavaOne 2001 Conference.
[OpenGroup] The Open Group. "Guide to Security Patterns." Draft 1. The Open Group, April 5, 2002.
[Romanosky2001] Sasha Romanosky. "Security Design Patterns, Part 1." Version 1.4. November 12, 2001.
[Romanosky2002] Sasha Romanosky. "Enterprise Security Patterns." June 4, 2002. http://www.romanosky.net/papers/securitypatterns/EnterpriseSecurityPatterns.pdf
[WassermannBetty] Ronald Wassermann and Betty H. C. Cheng. "Security Patterns." Michigan State University (MSU-CSE-03-23). August 2003. http://www.cse.msu.edu/cgi-user/Web/tech/document?ID=547
[YoderBarcalow1997] Joseph Yoder and Jeffrey Barcalow. "Architectural Patterns for Enabling Application Security." Pattern Languages of Programs Conference, 1997. http://www.joeyoder.com/papers/patterns/Security/appsec.pdf
Others
[XACML2] OASIS. Extensible Access Control Markup Language, Version 2. Committee Draft 04, December 6, 2004. http://docs.oasis-open.org/xacml/access_control-xacml-2.0-core-spec-cd-04.pdf
[LIBERTY1] Liberty Alliance. Liberty Trust Models Guidelines, Version 1.0. http://www.projectliberty.org/specs/liberty-trust-models-guidelines-v1.0.pdf
[Fowler1] Martin Fowler. Refactoring: Improving the Design of Existing Code. Addison-Wesley, 2000.
[Kerievsky1] Joshua Kerievsky. Refactoring to Patterns. Addison-Wesley, 2005.
[GoF] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1994.
Forces
Access to the application is restricted to valid users, and those users must be properly authenticated. There may be multiple entry points into the application, each requiring user authentication. It is desirable to centralize authentication code and keep it isolated from the presentation and business logic.
Solution
Create a centralized authentication enforcement point that performs authentication of users and encapsulates the details of the authentication mechanism. The Authentication Enforcer pattern handles the authentication logic across all of the actions within the Web tier. It assumes responsibility for authentication and verification of user identity and delegates direct interaction with the security provider to a helper class. This applies not only to password-based authentication, but also to client certificate-based authentication and other authentication schemes that provide a user's identity, such as Kerberos. Centralizing authentication and encapsulating the mechanics of the authentication process behind a common interface eases migration to evolving authentication requirements and facilitates reuse. The generic interface is protocol-independent and can be used across tiers. This is especially important in cases where you have clients that access the Business tier or Web Services tier components directly.
Structure
Figure 9-1 shows a class diagram of the Authentication Enforcer pattern. The core Authentication Enforcer consists of three classes: AuthenticationEnforcer, RequestContext, and Subject.
Figure 9-2 depicts a typical client authentication using Authentication Enforcer. In this case, the Client is a SecureBaseAction that delegates to the AuthenticationEnforcer, which retrieves the appropriate user credentials from the UserStore. Upon successful authentication, the AuthenticationEnforcer creates a Subject instance for the requesting user and stores it in its cache.
1. Client (such as a FrontController or ApplicationController) creates a RequestContext containing the user's credentials.
2. Client invokes the AuthenticationEnforcer's authenticate method, passing the RequestContext.
3. AuthenticationEnforcer retrieves the user's credentials from the RequestContext and attempts to locate the user's Subject instance in its cache, based upon the user identifier supplied in the credentials. This identifier may vary depending upon the authentication mechanism and may require some form of mapping, for example, if an LDAP DN retrieved from a client certificate is used as a credential.
4. Unable to locate an entry in the cache, the AuthenticationEnforcer retrieves the user's corresponding credentials from the UserStore. (Typically, this will contain a hash of the password.)
5. The AuthenticationEnforcer verifies that the user-supplied credentials match the known credentials for that user in the UserStore and, upon successful verification, creates a Subject for that user.
6. The AuthenticationEnforcer then places the Subject in the cache and returns it to the SecureBaseAction.
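The following minimal sketch walks through that sequence in code. The UserStore interface, the cache, and the digest-based credential check are stand-ins assumed for illustration; they are not the book's sample classes.

import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.Subject;

public class AuthenticationEnforcer {
    private final Map subjectCache = new HashMap();  // userId -> Subject
    private final UserStore userStore;

    public AuthenticationEnforcer(UserStore store) {
        this.userStore = store;
    }

    public Subject authenticate(String userId, byte[] suppliedHash) {
        // Step 3: look for a previously authenticated Subject in the cache.
        Subject cached = (Subject) subjectCache.get(userId);
        if (cached != null) {
            return cached;
        }
        // Steps 4-5: fetch the stored credential hash and verify the supplied one.
        byte[] knownHash = userStore.getPasswordHash(userId);
        if (knownHash == null || !MessageDigest.isEqual(knownHash, suppliedHash)) {
            throw new SecurityException("Authentication failed for " + userId);
        }
        // Step 6: create, cache, and return the Subject.
        Subject subject = new Subject();   // populate Principals as needed
        subjectCache.put(userId, subject);
        return subject;
    }
}

interface UserStore {
    byte[] getPasswordHash(String userId);  // hypothetical lookup API
}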
Strategies
The Authentication Enforcer pattern provides a consistent and structured way to handle authentication and verification of requests across actions within Web-tier components, and it supports a Model-View-Controller (MVC) architecture without duplicating code. The three strategies for implementing an Authentication Enforcer pattern are the Container Authenticated Strategy, the Authentication Provider Strategy (using a third-party product), and the JAAS Login Module Strategy.
getRemoteUser(). Determines the user name with which the client authenticated.
isUserInRole(String role). Determines whether the authenticated user is in the specified security role.
getUserPrincipal(). Returns a java.security.Principal object.
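As a brief illustration of the Container Authenticated Strategy, the servlet below simply reads the container-established identity through these methods. The servlet name and the "admin" role string are assumptions for this sketch; the role itself would be declared in the deployment descriptor.

import java.io.IOException;
import java.security.Principal;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class WhoAmIServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        Principal user = req.getUserPrincipal();   // null if unauthenticated
        boolean admin = req.isUserInRole("admin"); // role from the deployment descriptor
        resp.getWriter().println("User: "
                + (user == null ? "anonymous" : user.getName())
                + ", admin=" + admin);
    }
}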
As you can see in Figure 9-3, the authentication provider takes care of the authentication and creation of the Principal. The Authentication Enforcer simply creates the Subject and adds the Principal and the Credential to it. The Subject then holds a collection of permissions associated with all the Principals for that user. The Subject object can then be used in the application to identify, and also to authorize, the user.
In this strategy, the AuthenticationEnforcer is implemented as a JAAS client that interacts with JAAS LoginModules to perform authentication. The JAAS LoginModules are configured using a JAAS configuration file, which identifies one or more LoginModules intended for authentication. Each LoginModule is specified via its fully qualified class name and an authentication flag value that controls the overall authentication behavior. The flag values (Required, Requisite, Sufficient, Optional) define the overall authentication process, which proceeds down the specified list of entries in the configuration file according to those flags. The AuthenticationEnforcer instantiates a LoginContext class that loads the required LoginModule(s) specified in the JAAS configuration file. To initiate authentication, the AuthenticationEnforcer invokes the LoginContext.login() method, which in turn calls the login() method of the LoginModule. The LoginModule invokes a CallbackHandler to perform the user interaction and prompt the user for authentication credentials (such as a username/password, smart card, or biometric samples). The LoginModule then authenticates the user by verifying those credentials. If authentication is successful, the LoginModule populates the Subject with a Principal representing the user. The calling application can retrieve the authenticated Subject by calling the LoginContext's getSubject method. Figure 9-5 shows the sequence diagram for the JAAS Login Module strategy.
1. SecureBaseAction creates a RequestContext containing the user's credentials.
2. SecureBaseAction invokes the AuthenticationEnforcer's login method, passing in the RequestContext.
3. AuthenticationEnforcer creates a CallbackHandler object that contains the username and password extracted from the RequestContext.
4. AuthenticationEnforcer creates a LoginContext.
5. LoginContext loads the AuthenticationProvider implementation of a LoginModule.
6. LoginContext initializes the AuthenticationProvider.
7. AuthenticationEnforcer invokes the login method on the AuthenticationProvider.
8. AuthenticationProvider retrieves the username and password by calling handle on the CallbackHandler.
9. AuthenticationProvider uses the username and password to authenticate the user.
10. Upon successful authentication, the AuthenticationProvider, at commit, sets the Principal in the Subject and returns the Subject back up the request chain.
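A minimal JAAS client sketch of this flow follows. The "WebLogin" configuration entry name is an assumption; the corresponding LoginModule list would live in the JAAS login configuration file.

import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class JaasAuthenticationSketch {
    public Subject authenticate(final String username, final char[] password)
            throws LoginException {
        // The CallbackHandler supplies credentials when a LoginModule asks for them.
        CallbackHandler handler = new CallbackHandler() {
            public void handle(Callback[] callbacks)
                    throws UnsupportedCallbackException {
                for (int i = 0; i < callbacks.length; i++) {
                    if (callbacks[i] instanceof NameCallback) {
                        ((NameCallback) callbacks[i]).setName(username);
                    } else if (callbacks[i] instanceof PasswordCallback) {
                        ((PasswordCallback) callbacks[i]).setPassword(password);
                    } else {
                        throw new UnsupportedCallbackException(callbacks[i]);
                    }
                }
            }
        };
        // Loads the LoginModule(s) listed under the "WebLogin" entry
        // of the JAAS login configuration file.
        LoginContext lc = new LoginContext("WebLogin", handler);
        lc.login();               // drives LoginModule.login() and commit()
        return lc.getSubject();   // the authenticated Subject with its Principals
    }
}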
Consequences
By employing the Authentication Enforcer pattern, developers benefit from reduced code duplication and from consolidating authentication and verification in one class. The Authentication Enforcer pattern encapsulates the authentication process needed across actions into one centralized point that all other components can leverage. By centralizing authentication logic and wrapping it in a generic Authentication Enforcer, authentication mechanism details can be hidden, and the application can be protected from changes in the underlying authentication mechanism. This is necessary because organizations change products, vendors, and platforms throughout the lifetime of an enterprise application. A centralized approach to authentication reduces the number of places where authentication mechanisms are accessed and thereby reduces the chances of security holes due to misuse of those mechanisms. The Authentication Enforcer enables authenticating users by means of various authentication techniques that allow the application to appropriately identify and distinguish users' credentials. A centralized approach also forms the basis for authorization, which is discussed in the Authorization Enforcer pattern. The Authentication Enforcer also provides a generic interface that allows it to be used across tiers. This is important if you need to authenticate on more than one tier and do not want to replicate code. Authentication is a key security requirement for almost every application, and the Authentication Enforcer provides a reusable approach for authenticating users.
Sample Code
Examples 9-1, 9-2, and 9-3 illustrate different authentication configurations that can be specified in the web.xml file of a J2EE application deployment. Example 9-4 shows the programmatic approach to authentication.
By using a proprietary approach, you increase the risk of creating security holes that can be exploited to subvert the application.
Web authentication. Choose the right approach for your security requirements. Basic HTTP authentication is usually highly vulnerable to attacks and provides unacceptable exposure. On the other hand, requiring client certificates for authentication may deter potential users of the system, which is an abstract form of a denial of service attack.
Confidentiality. During the authentication process, sensitive information is sent over the wire, so confidentiality becomes a critical requirement. Use a Secure Pipe pattern during the user login process to protect the user's credentials. Not securing transmission of the user credentials presents a risk that they may be captured and used by an attacker to masquerade as a legitimate user.
Reality Check
The following reality checks should be considered before implementing an Authentication Enforcer pattern.
Should you use the Authentication Enforcer programmatically or rely on container-managed security? That depends on the requirements of the application. If you are forced to integrate with a third-party security provider that does not plug into the container's underlying security SPI, then you may have to use a programmatic strategy.
Why would you use client certificate authentication? Client certificate authentication provides a high degree of authentication. It is a two-factor authentication mechanism, relying on what you have (a client certificate) in addition to what you know (a password).
Dependency on Secure Pipe. Most likely, you will be using the Form Based Authentication strategy. In order to protect privacy and prevent man-in-the-middle attacks, you will need to use a Secure Pipe pattern that will encrypt the password en route.
Related Patterns
The following are patterns related to the Authentication Enforcer.
Context Object [CJP2]. A Context Object is used to encapsulate protocol-specific request parameters.
Intercepting Filter [CJP2].
Authorization Enforcer
Problem
Many components need to verify that each request is properly authorized at the method and link level. For applications that cannot take advantage of container-managed security, this custom code has the potential to be replicated. In large applications, where requests can take multiple paths to access multiple pieces of business functionality, each component needs to verify access at a fine-grained level. Just because a user is authenticated does not mean that user should have access to every resource available in the application. At a minimum, an application makes use of two types of users: common end users and administrators who perform administrative tasks. In many applications there are several different types of users and roles, each of which requires access based on a set of criteria defined by the business rules and policies specific to a resource. Based on that defined set of criteria, the application must enforce that a user can access only the resources, and only in the manner, that the user is allowed.
Forces
You want to minimize the coupling between the view presentation and the security controller.
Web applications require access control on a URL basis.
Authorization logic needs to be centralized and should not be spread all over the code base, in order to reduce the risk of misuse or security holes.
Authorization should be segregated from the authentication logic to allow for evolution of each without impacting the other.
Solution
Create an Access Controller that will perform authorization checks using standard Java security API classes. The AuthorizationEnforcer provides a centralized point for programmatically authorizing resources. In addition to centralizing authorization, it also serves to encapsulate the details of the authorization mechanics. With programmatic authorization, access control to resources can be implemented in a multitude of ways. Using an AuthorizationEnforcer provides a generic encapsulation of authorization mechanisms by defining a standardized way of controlling access to Web-based applications. It provides fine-grained access control beyond simple URL restriction: the ability to restrict links displayed in a page or a header, as well as to control the data within a table or list that is displayed, based on user permissions.
Structure
Figure 9-6 shows the AuthorizationEnforcer class diagram.
SecureBaseAction. An action class that gets the Subject from the RequestContext and checks whether it is authorized for various permissions.
RequestContext. A protocol-independent object used to encapsulate protocol-specific request information.
AuthorizationEnforcer. An object used to generically enforce authorization in the Web tier.
Subject. A Subject class used to store a user's identities and credential information. [Java2]
AuthorizationProvider. A security provider that implements the authorization logic.
PermissionCollection. A PermissionCollection class used to store permissions, with a method for verifying whether a particular permission is implied in the collection. [Java2]
Strategies
There are three commonly adopted strategies for providing authorization using the Authorization Enforcer pattern. The first is an authorization provider strategy, using a third-party security solution that provides authentication and authorization services. The second is a purely programmatic authorization strategy that makes use of the Java 2 security API classes, leveraging the Java 2 Permission classes. The third is a JAAS authorization strategy that makes use of JAAS principal-based policy files and takes advantage of the underlying JAAS programmatic authorization mechanism for populating and checking a user's access privileges. Not discussed further here is the J2EE container-managed authorization strategy; this strategy, or more correctly its implementation, was found to be too static and inflexible.
AuthorizationEnforcer retrieves the PermissionCollection from the Subject's public credential set. AuthorizationEnforcer then calls the implies method of the PermissionCollection, passing in the checked Permission, and returns the response.
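The check just described can be sketched as follows. The method name isAuthorized and the deny-by-default behavior are illustrative assumptions, while Subject, Permission, and PermissionCollection are the standard Java 2 security classes.

import java.security.Permission;
import java.security.PermissionCollection;
import java.util.Iterator;
import javax.security.auth.Subject;

public class SimpleAuthorizationCheck {
    public boolean isAuthorized(Subject subject, Permission requested) {
        // Look for a PermissionCollection among the Subject's public credentials.
        for (Iterator it = subject.getPublicCredentials().iterator(); it.hasNext();) {
            Object credential = it.next();
            if (credential instanceof PermissionCollection) {
                // implies() returns true if the requested permission is
                // granted by (or implied by) the stored collection.
                return ((PermissionCollection) credential).implies(requested);
            }
        }
        return false; // no permissions found; deny by default
    }
}

For example, isAuthorized(subject, new java.util.PropertyPermission("user.home", "read")) would check read access to the user.home property.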
In Example 9-5, a table with admin and user links is rendered based on the requester's permissions. A user who has admin and user permissions will see both links. Regular users will only see the user link. Public users would see neither. Example 9-6 shows the custom tag library used in Example 9-5.
creation. Developers can map permissions to resources and roles to permissions declaratively at deployment time, thus eliminating programmatic mappings that often result in bugs and cause security vulnerabilities. Figure 9-8 shows the sequence diagram of the Authorization Enforcer implemented using the JAAS Authorization Strategy. The key participants and their roles are as follows:
AuthorizationEnforcer. An object used to generically enforce authorization in the Web tier.
Subject. A Java 2 security class used to store a user's identities and security-related information. [Java2]
PrivilegedAction. A computation to be performed with privileges enabled.
Policy. The JAAS principal-based policy file, which defines the Principals with designated permissions to execute specific application code or other privileges associated with the application or resources. [JAAS]
JAAS Module. The JAAS Module is responsible for enforcing access control by enforcing the JAAS Policy and verifying that the authenticated Subject has been granted the appropriate set of permissions before invoking the PrivilegedAction.
Example 9-7 shows a JAAS authorization policy file (MyJAASAux.policy). Example 9-8 shows the source code for a JAAS-based authorization strategy (SampleAuthorizationEnforcer.java), and Example 9-9 shows the Java source code for the PrivilegedAction implementation (MyPrivilegedAction.java).
**/
grant codebase "file:./MyPrivilegedAction.jar",
    Principal csp.principal.myPrincipal "chris" {
    permission java.util.PropertyPermission "java.home", "read";
    permission java.util.PropertyPermission "user.home", "read";
    permission java.io.FilePermission "Chris.txt", "read";
};
        return null;
    }
}
Consequences
Centralizes control. The Authorization Enforcer allows developers to encapsulate the complex intricacies of implementing access control. It provides a focal point for access control checks, thus eliminating the chance of repetitive code.
Improves reusability. Authorization Enforcer allows greater reuse through encapsulation of disparate access-control mechanisms behind common interfaces.
Promotes separation of responsibility. Partitions authentication and access-control responsibilities, insulating developers from changes in implementations.
Reality Check
Too complex. Implementing a JAAS Authorization Strategy, and all but the most simplistic authorization strategies, requires an in-depth understanding of the Java 2 security model and a variety of Java security APIs. As with any security mechanism, complexity can lead to vulnerabilities. Make sure you understand how your resources are being protected through this approach before diving in and implementing it.
Related Patterns
Context Object [CJP2]. The Authorization Enforcer uses a Context Object pattern to encapsulate handling and transferring of security-related request data. Refer to http://www.corej2eepatterns.com/Patterns2ndEd/ContextObject.htm for details.
Authentication Enforcer. The Authorization Enforcer relies on the Authentication Enforcer to first authenticate the user.
Intercepting Validator
Problem
You need a simple and flexible mechanism to scan and validate data passed in from the client for malicious code or malformed content. The data could be form-based, queries, or even XML content. Several well-known attack strategies involve compromising the system by sending requests containing invalid data or malicious code. Such attacks include injection of malicious scripts, SQL statements, XML content, and invalid data through a form field that the attacker knows will be inserted into the application, causing a potential failure or denial of service. Embedded SQL commands can go further, allowing the attacker to wreak havoc on the underlying database. These types of attacks require the application to intercept and scrub the data prior to its use. While some of the approaches for scrubbing the data are well known, it is a constant battle to keep up-to-date as new attacks are discovered.
Forces
You want to validate a wide variety of data passed in by the client.
You want a common mechanism for validating various types of data.
You want to dynamically add validation logic as necessary to keep your application secure against newly discovered attacks.
Validation rules must be decoupled from presentation logic.
Solution
Use an Intercepting Validator to cleanse and validate data prior to its use within the application, using dynamically loadable validation logic. A good application will verify all input, for both business and security reasons. Similar to the Intercepting Filter pattern [CJP2], the Intercepting Validator makes use of a pluggable filter approach. The filters can then be applied declaratively based on URL, allowing different requests to be mapped to different filter chains. In the case of the Intercepting Validator, filtering would be restricted to preprocessing of requests and would primarily consist of validation (yes or no) logic that determines whether or not the request should continue, as opposed to manipulation logic that would require business logic above and beyond the security concerns of the application.
While applications could incorporate security filters into an existing Intercepting Filter implementation, the preferred approach is to employ both and keep them separate. Typically, the Intercepting Validator would be invoked earlier in the request-handling process than the Intercepting Filter and would consist of a more static and reusable set of filters, because it is not tied to any particular set of business rules. Business rules are often tied to the business logic and must be enforced in the Business tier, but security rules are often independent of the actual application and should be enforced up front in the Web tier.
Client-side validations are inherently insecure. It is easy to spoof submitting a Web page and bypass any scripting on the original page. [Vau] For the application to be secure, validations must be performed on the server side. This does not detract from the value of client-side validations. Client-side validations via JavaScript make sense for business rules and should be optionally supported. They provide the user with validation feedback before the form gets submitted. This increases the perceived performance for the end user and saves the server the cost of processing errors in the vast majority of cases. Input from malicious users who circumvent client-side validations must still be validated, though. Therefore, to be prudent, validation checks must always be performed on the server side, whether or not client-side checking is done as well.
Structure
Figure 9-9 depicts a class diagram of the Intercepting Validator pattern.
Figure 9-10 illustrates a sequence of events for the Intercepting Validator pattern, described by the following components.
Client. A client sends a request to a particular target resource.
SecureBaseAction. The SecureBaseAction is used by the client to generically enforce validation of the request in the Web tier. SecureBaseAction delegates this responsibility to the InterceptingValidator.
InterceptingValidator. The InterceptingValidator is a specialized version of the InterceptingFilter pattern [CJP2], with some notable changes in strategy. Unlike the InterceptingFilter, it is solely responsible for data validation.
ParamValidator. The ParamValidator is responsible for validating all request parameters. Boundary checking, data formatting, and examining the parameters for cross-site scripting and malformed URL vulnerabilities are some of the validations it performs. Validations are specific to the Target.
SQLValidator. The SQLValidator is responsible for validating parameters used in SQL statements. Examining the parameters for boundary checking, data size, and formatting are some of the validations it performs. Validations are specific to the database queries and transactions.
Target. The client-requested resource.
Figure 9-10 depicts a use case scenario of how a request from a client to a resource gets properly validated to guard against attacks based on malformed data. The scenario follows these steps:
1. Client makes a request to a particular resource, specified as the Target.
2. SecureBaseAction uses the InterceptingValidator to validate the data for the target service request.
3. InterceptingValidator retrieves the appropriate validators according to the configuration for the target.
4. InterceptingValidator invokes a series of validators as configured.
5. Each Validator validates and scrubs the request data, if necessary.
6. Upon successful validation, the SecureBaseAction invokes the target resource.
Strategies
Different validators are used to validate different types of data in a request, or to perform validation on that data in a different manner. For example, certain form fields need to be validated for size constraints. Form fields containing data that will become part of an SQL query or update require SQL character validation to ensure that embedded SQL commands cannot be entered. The logic to perform the validations can often be cumbersome. This logic can be simplified through use of the J2SE 1.4 regular expressions package, which contains classes that allow developers to perform regular expression matches as they would in the Perl programming language. These classes can be used in validator implementations; a brief sketch follows the configuration excerpt below. Example 9-10 illustrates a simple action configuration in an MVC architecture using Apache Struts. To implement a simple Intercepting Validator, the parameter attribute can define a key that can be used to apply a Validator against an ActionForm/HTTPRequest. Alternatively, a separate validation file, such as the validator.xml shown in Example 9-11, can define generic sets of validation rules to be applied to a form or other input request. Such XML-descriptive validation definitions can be leveraged by the Intercepting Validator and applied dynamically to input requests by Validator instances that have been coded to be configured from the XML content. Both of these scenarios are illustrated in Figure 9-10.
            <var name="maxValue" value="9999"/>
            <var name="maskingExpression"
                 value="^\(?(\d{3})\)?[-| ]?(\d{3})[-| ]?(\d{4})$"/>
            <var msg="errors.dataToSave"/>
        </field>
    </form>
</formset>
</form-validation>
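As referenced above, here is a brief sketch of a regex-based validator of the kind the Intercepting Validator could load. The class name and the particular patterns are illustrative assumptions, not taken from the book's examples.

import java.util.regex.Pattern;

public class RegexParamValidator {
    // Allow only letters, digits, spaces, and a few safe punctuation marks.
    private static final Pattern SAFE_TEXT =
            Pattern.compile("^[A-Za-z0-9 .,_-]{1,64}$");
    // Reject common SQL metacharacters in fields destined for queries.
    private static final Pattern SQL_META =
            Pattern.compile("['\";]|--");

    public boolean isValid(String value) {
        return value != null
                && SAFE_TEXT.matcher(value).matches()
                && !SQL_META.matcher(value).find();
    }
}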
An architect who is not inclined to use the Intercepting Validator pattern may end up forcing each developer to naively hardcode validation logic in each of the servlets/action classes/forms beans. Developers implementing business action classes (front controllers), who may not necessarily be security-aware, are prone to miss necessary validations. An example of this type of programmatic validation is illustrated in Example 9-13.
Example 9-13. SimpleFormAction with programmatic validation logic, using Apache Struts
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionError;
import org.apache.struts.action.ActionErrors;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

/**
 * This code is taken from the Apache Struts examples.
 * It requires a working knowledge of Struts, not explained here.
 */
public final class SimpleFormAction extends Action {

    public ActionForward perform(ActionMapping actionmapping, ActionForm actionform,
            HttpServletRequest httpservletrequest, HttpServletResponse httpservletresponse)
            throws IOException, ServletException {
        // Perform explicit validation, since it is not implicitly
        // taken care of by the web application framework.
        ActionErrors actionerrors =
            actionform.validate(actionmapping, httpservletrequest);
        if (!actionerrors.isEmpty()) {
            saveErrors(httpservletrequest, actionerrors);
            // Redirect to the input page with the errors.
            return new ActionForward(actionmapping.getInput());
        } else {
            return actionmapping.findForward("continueWorkFlow");
        }
    }
}

public class SimpleForm extends ActionForm {
    //...
    public ActionErrors validate(ActionMapping actionmapping,
            HttpServletRequest request) {
        // For each request/form parameter, code the validation.
        ActionErrors errs = new ActionErrors();
        if (request.getParameter("param1").indexOf("&") != -1) {
            errs.add(Action.ERROR_KEY,
                new ActionError("error_unacceptable_parameter1"));
        }
        //...
        return errs;
    }
}
Consequences
Using the Intercepting Validator pattern helps in identifying malicious code and data injection attacks before the business logic processes the request. It ensures verification and validation of all inbound requests and safeguards the application from forged requests, parameter tampering, and validation failure attacks.
In addition, the Intercepting Validator offers developers several key benefits:
Centralizes security validations. The Intercepting Validator pattern provides centralized security validation logic. By centralizing security validations, code is more maintainable and more reusable. Input validation is one of the most crucial aspects of securing Web applications, because so many vulnerabilities stem from lack of validation. The Intercepting Validator ensures that such vulnerabilities can be addressed in a standardized way.
Decouples validations from presentation logic. It is good programming practice to decouple the validation of request data from the presentation logic. It promotes better software manageability and cleaner code. It also reduces redundancy.
Simplifies addition of new validators. Developers have the ability to add new validators dynamically. As new data-based attacks are discovered, new validators can be implemented and installed without requiring redeployment of the application.
Reality Check
Is an elaborate security validation framework really needed? There are many known data attacks that can be prevented through proper data validation. A framework is needed to ensure that there is a mechanism in place to facilitate easy addition of new validation logic as future data attacks on the application become known.
Performance implications. The Intercepting Validator pattern could be competing with other Web server resources to read session data, which could lead to concurrency issues such as long wait times or deadlocks. A careful analysis of Web request traffic, performance requirements, dependencies, and other possible scenarios should bring forth appropriate resolutions and trade-offs.
Related Patterns
Intercepting Filter [CJP2]. The Intercepting Filter is similar but is used more as a filtering or transforming mechanism than as a validation tool.
Message Inspector. The Message Inspector intercepts and processes XML requests; in situations where custom validation mechanisms are required, it may need to use an Intercepting Validator.
Forces
You want to enforce security by centralizing all security-related functionality.
You want to reduce direct integration of presentation logic with security logic.
You want to encapsulate the details of the security-related components so that those components can be enhanced without impact to presentation logic.
You have several security components that you need to coordinate or orchestrate to ensure overall security requirements are met.
Solution
Use a Secure Base Action to coordinate security components and to provide Web tier components with a central access point for administering security-related functionality. A Secure Base Action pattern can be used as a single point for security-related functionality within the Web tier. By having Web components such as Front Controllers [CJP2] and Application Controllers [CJP2] inherit from it, they gain access to all of the security operations that are necessary throughout the front end. Authentication, authorization, validation, logging, and session management are areas that the Secure Base Action encapsulates and provides centralized access to.
Structure
Figure 9-11 depicts a class diagram for the Secure Base Action.
As shown in Figure 9-11, the client is a FrontController or ApplicationController that allows delegation of the security handling of the request to the SecureBaseAction. The SecureBaseAction in turn delegates the individual tasks to the appropriate classes.
Client. The client is a request handler that uses the SecureBaseAction to perform security processing of the request.
SecureBaseAction. The base action that coordinates security processing of the request, delegating the individual tasks to the appropriate security classes.
AuthenticationEnforcer. The AuthenticationEnforcer provides authentication and verification of requests.
AuthorizationEnforcer. The AuthorizationEnforcer authorizes users for requests.
InterceptingValidator. The InterceptingValidator handles validation of request parameters.
SecureLogger. The SecureLogger logs events for the request.
As shown in Figure 9-12, the SecureBaseAction invokes methods on all of the supporting security classes, ensuring that the request is authenticated, authorized, validated, and logged. The sequence is as follows (a minimal sketch follows the list):
1. Client invokes execute on SecureBaseAction.
2. SecureBaseAction gets the LoginContext from the session.
3. SecureBaseAction uses the AuthenticationEnforcer to verify the LoginContext.
4. SecureBaseAction authorizes the request using the AuthorizationEnforcer.
5. SecureBaseAction validates the request data using the InterceptingValidator.
6. SecureBaseAction logs the request using the SecureLogger.
Typically, the client would be a FrontController [CJP2] or an ApplicationController [CJP2] that would invoke execute prior to delegating to any other classes. You would not want to make more than one call per request, because it would cause redundant processing.
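Here is the minimal sketch referenced above. The participant types mirror the pattern's class diagram, but their method signatures are assumptions made for illustration, not the book's sample code.

interface RequestContext { }

interface AuthenticationEnforcer {
    javax.security.auth.Subject authenticate(RequestContext rc);
}

interface AuthorizationEnforcer {
    void authorize(RequestContext rc, javax.security.auth.Subject s);
}

interface InterceptingValidator {
    void validate(RequestContext rc);
}

interface SecureLogger {
    void log(String entry);
}

public abstract class SecureBaseAction {
    private AuthenticationEnforcer authenticationEnforcer;
    private AuthorizationEnforcer authorizationEnforcer;
    private InterceptingValidator interceptingValidator;
    private SecureLogger secureLogger;

    public final void execute(RequestContext rc) {
        // Steps 1-3: authenticate (or verify) the requesting user.
        javax.security.auth.Subject subject =
            authenticationEnforcer.authenticate(rc);
        // Step 4: authorize the request for this Subject.
        authorizationEnforcer.authorize(rc, subject);
        // Step 5: validate and scrub the request data.
        interceptingValidator.validate(rc);
        // Step 6: record the request in the secure audit log.
        secureLogger.log(subject + " : " + rc);
        // Only now delegate to the action's business processing.
        doExecute(rc, subject);
    }

    protected abstract void doExecute(RequestContext rc,
                                      javax.security.auth.Subject subject);
}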
Consequences
Using Secure Base Action helps in aggregating and enforcing security operations that include authentication, authorization, auditing, input validation, and other management functions before processing the request with presentation or business logic. By employing the Secure Base Action pattern, developers will realize the following benefits:
Improved manageability of security requirements. Architects and developers can improve management of security requirements by consolidating enforcement of those requirements in a central class. This provides a single integration point between presentation code and security code. Therefore, presentation developers do not have to focus on security and have fewer chances of causing holes by not integrating security properly. Changes required for evolving security requirements are isolated from the rest of the application, reducing touchpoints in the application that could be overlooked and result in security holes.
Improves reusability. By consolidating security-related functionality behind one interface, developers gain better reuse of the functionality, since there are fewer touchpoints in the code base. All supporting security classes can be packaged together and reused, with the SecureBaseAction acting as the single interface for the Web tier components. The more components are reused, the more they are improved. The same goes for security: reusing security code means that you are using code that has been security-tested and has many of the kinks already worked out.
Sample Code
    }
    // Authorize the request
    authorizationEnforcer.authorize(req, lc);
    // Validate request data
    interceptingValidator.validate(req);
    // Log data
    secureLogger.log(lc.getSubject().getPrincipals()[0] + rc);
}

/**
 * Set the subject in the login context
 */
public void setLoginContext(ResponseContext resp, Subject s) throws Exception {
    // Get an instance of ContextFactory
    ContextFactory factory = ContextFactory.getInstance();
    // Get LoginContext from factory
    LoginContext lc = factory.getContext(Constants.LOGIN_CONTEXT);
    lc.setSubject(s);
    resp.setParameter(Constants.LOGIN_CONTEXT_KEY, lc);
}
}
Reality Check
Too inflexible. The Secure Base Action encapsulates all security-related functionality. This is great for shielding application components from changes to underlying security mechanisms, but it is also prohibitive in some cases where application components require access to those security mechanisms beyond what the Secure Base Action provides. For example, a web developer might want to implement password services that require direct access to the authentication provider, which is not accessible through the Secure Base Action.
Too much encapsulation? The Secure Base Action does not provide a lot of functionality on top of the other patterns that it delegates to. It is worthwhile nonetheless, because it hides the details of how the other security patterns are implemented from the presentation developer and reduces exposed integration points.
Related Patterns
Front Controller [CJP2]. A Front Controller inherits from or uses a Secure Base Action to provide and coordinate security-related functions.

Command [GoF]. The Secure Base Action class makes use of the Command pattern for handling requests.

Context Object [CJP2]. A Context Object is used by the Secure Base Action to hide the protocol-specific details of the requests, responses, and session information.

Authentication Enforcer. Secure Base Action uses the Authentication Enforcer to authenticate users.

Authorization Enforcer. Secure Base Action uses the Authorization Enforcer to authorize requests.

Secure Logger. The Secure Logger is used by the Secure Base Action to log request events.

Intercepting Validator. The Secure Base Action validates request data through use of an Intercepting Validator.
Secure Logger
Problem
All application events and related data must be securely logged for debugging and forensic purposes. This can lead to redundant code and complex logic.

All trustworthy applications require a secure and reliable logging capability. This logging capability may be needed for forensic purposes and must be secured against theft or manipulation by an attacker. Logging must be centralized to avoid redundant code throughout the code base. All events must be logged appropriately at multiple points during the application's operational life cycle. In some cases, the data that needs to be logged may be sensitive and should not be viewable by unauthorized users. It becomes a critical requirement to protect the logging data from unauthorized users so that the data is not accessible or modifiable by a malicious user who tries to identify the information trail. Without centralized control, the code often gets replicated, and it becomes difficult to maintain changes and monitor the functionality.

One of the common elements of a successful intrusion is the ability to cover one's tracks. Usually, this means erasing any tell-tale events in various log files. Without a log trail, an administrator has no evidence of the intruder's activities and therefore no way to track the intruder. To prevent an attacker from breaking in again and again, administrators must take precautions to ensure that log files cannot be altered. Cryptographic algorithms can be adopted to ensure the confidentiality and integrity of the logged data, but the application processing logic required to apply encryption and signatures to the logged data can be complex and cumbersome, further justifying the need to centralize the logger functionality.
Forces
You need to log sensitive information that should not be accessible to unauthorized users.
You need to ensure the integrity of the data logged to determine if it was tampered with by an intruder.
You want to capture output at one level for normal operations and at other levels for greater debugging in the event of a failure or an attack.
You want to centralize control of logging in the system for management purposes.
You want to apply cryptographic mechanisms for ensuring confidentiality and integrity of the logged data.
Solution
Use a Secure Logger to log messages in a secure manner so that they cannot be easily altered or deleted and so that events cannot be lost. The Secure Logger provides centralized control of logging functionality that can be used in various places throughout the application request and response. Centralizing control provides a means of decoupling the implementation details of the logger from the code of developers who will use it throughout the application. The processing of the events can be modified without impacting existing code. For instance, developers can make a single method call in their Java code or JSP code. The Secure Logger takes care of how the events are securely logged in a reliable manner.
Structure
Figure 9-14 depicts a class diagram for Secure Logger.
Client. A client sends a request to a particular target resource.
SecureLogger. SecureLogger is a class used to manage logging of data in a secure, centralized manner.
LogManager. LogManager obtains a Logger instance from the LogFactory and uses it to log messages.
LogFactory. A LogFactory is responsible for creating and returning Logger instances.
Logger. A Logger writes log messages to a target destination.

A client uses the SecureLogger to log events. The SecureLogger centralizes logging management and encapsulates the security mechanisms necessary for preventing unauthorized log alteration.

1. Client wants to log an event using SecureLogger.
2. SecureLogger generates a sequence number and prepends it to the message.
3. SecureLogger passes the LogManager the modified event string to log.
4. LogManager obtains a handle to a Logger instance from a LogFactory.
5. LogFactory creates a Logger instance.
6. LogManager delegates actual logging of the event to the Logger.

There are two parts to this logging process: the first part involves securing the data to be logged, and the second part involves logging the secured data. The SecureLogger class takes care of securing the data, and the LogManager class takes care of logging it.
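The division of labor described above might look like the following minimal sketch. The names beyond those in Figure 9-14 (the write method, getLogger, and the console target) are assumptions for illustration; securing of the entry itself is elaborated in the strategies that follow.

public class SecureLogger {

    private long sequenceNumber = 0;
    private final LogManager logManager = new LogManager();

    // Steps 2-3: generate a sequence number, prepend it to the message,
    // and pass the modified event string to the LogManager.
    public synchronized void log(String event) {
        String entry = (++sequenceNumber) + " " + event;
        logManager.log(entry);
    }
}

class LogManager {
    // Steps 4-6: obtain a Logger from the LogFactory and delegate
    // the actual logging of the event to it.
    public void log(String entry) {
        Logger logger = LogFactory.getLogger();
        logger.write(entry);
    }
}

interface Logger {
    void write(String entry);
}

class LogFactory {
    // Returns a Logger for the configured target destination;
    // a console logger is used here purely for illustration.
    public static Logger getLogger() {
        return new Logger() {
            public void write(String entry) {
                System.out.println(entry);
            }
        };
    }
}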
Strategies
There are two basic strategies for implementing a Secure Logger. One strategy is to secure the log itself from being tampered with, so that all data written to it is guaranteed to be correct and complete. This strategy is the Secure Log Store Strategy. The other strategy, the Secure Data Logger Strategy, secures the data so that any alteration or deletion of it can be detected. This works well in situations where you cannot guarantee the security of the log itself.
Figure 9-16. Secure Logger with Secure Data Logger Strategy class diagram
We use the MessageDigest, Cipher, Signature, and UIDGenerator classes for applying cryptographic mechanisms and performing the various functions necessary to guarantee that the data logged is confidential and tamperproof. Figure 9-17 shows the sequence of events used to secure the data prior to being logged.
Figure 9-17. Secure Logger with Secure Data Logger Strategy sequence diagram
When you have sensitive data, or fear that log entries might be tampered with and cannot rely on the security of the infrastructure to adequately protect those entries, it becomes necessary to secure the data itself prior to logging. That way, even if the log destination (file, database, or message queue) is compromised, the data remains secure, and any corruption of the log becomes clearly evident. There are three elements to securing the data:

Protect sensitive data. Ensure that all sensitive data are stored and remain confidential throughout the process. For example, credit card numbers should not be viewable by unauthorized personnel.

Prevent data alteration. Make sure that data is tamperproof. For example, user IDs, transaction amounts, and so forth should not be changeable.

Detect deletion of data. Detect whether events have been deleted from the log, a tell-tale sign that an attacker has compromised the system.

To protect sensitive data, encrypt it using a symmetric key algorithm. Public-key algorithms are too CPU-intensive to use for bulk data; they are better suited to encrypting and protecting a symmetric key for use with a symmetric key algorithm. Properly protecting the symmetric key ensures that attackers cannot access sensitive data even if they have access to the logs. For this, the SecureLogger can use an EncryptionHelper class. This class is responsible for encrypting a given string but not for decrypting it. This is an extra security precaution that makes it harder for attackers to gain access to that sensitive data. Decryption should only be done outside the application, using an external utility that is not accessible from the application or its residing host.

Data alteration can be prevented by using digitally signed message digests, in the same manner that e-mail is signed. A message digest is generated for each message in the log file and then signed. The signature prevents an attacker from modifying the message and creating a subsequent message digest for the altered data. For this operation, the SecureLogger uses MessageDigestHelper and DigitalSignatureHelper classes.

Finally, to detect deletion of data, a sequence number must be used. Using message digests and digital signatures is of no use if the entire log entry, including the signed message, is deleted. To detect deletion, each entry must contain a sequence number that is part of the data that gets signed. That way, it will be evident if an entry is missing, since there will be a gap in the sequence numbers. Because the sequence numbers are signed, an attacker would be unable to alter subsequent numbers in the sequence, making it easy for an administrator reviewing the logs to detect deletions. To accomplish this, the SecureLogger uses a UUID [Middleware] pattern.
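As a rough illustration of these three elements, the sketch below prepends a sequence number, encrypts the sensitive payload with a symmetric (AES) key, and signs the resulting entry. It is a minimal sketch, not the book's EncryptionHelper, MessageDigestHelper, or DigitalSignatureHelper classes: keys are generated in-process purely for brevity, and the JDK Base64 encoder assumed here requires Java 8 or later.

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SecureDataEntryHelper {

    private long sequenceNumber = 0;
    private final SecretKey encryptionKey;
    private final KeyPair signingKeys;

    public SecureDataEntryHelper() throws Exception {
        // Keys are generated here only for brevity; in practice they come
        // from a protected keystore, and the decryption key is kept off
        // the application host entirely.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        encryptionKey = kg.generateKey();
        signingKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();
    }

    public synchronized String secureEntry(String sensitiveData)
            throws Exception {
        // 1. A sequence number makes deleted entries show up as gaps.
        long seq = ++sequenceNumber;
        // 2. Encrypt the sensitive payload with the symmetric key
        //    (default AES mode used here purely for brevity).
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, encryptionKey);
        byte[] ciphertext = cipher.doFinal(sensitiveData.getBytes("UTF-8"));
        String entry = seq + ":"
            + Base64.getEncoder().encodeToString(ciphertext);
        // 3. Sign the sequence number plus ciphertext so that alteration
        //    of either is detectable.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(signingKeys.getPrivate());
        signer.update(entry.getBytes("UTF-8"));
        return entry + ":"
            + Base64.getEncoder().encodeToString(signer.sign());
    }
}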
Figure 9-18. Secure Logger Pattern with Secure Log Store Strategy class diagram
The Secure Log Store Strategy does not require the data processing that the Secure Data Logger Strategy introduces. Instead, it makes use of the Secure Pipe pattern and a secure datastore (such as a database), represented as the SecureStore object in Figure 9-18. In Figure 9-19, the only change from the main Secure Logger pattern sequence is the introduction of the Secure Pipe pattern.

In the Secure Log Store Strategy sequence diagram, depicted in Figure 9-19, the Logger establishes a secure connection to the SecureStore using a SecurePipe. The Logger then logs messages normally. The SecureStore is responsible for preventing tampering with the log file. It could be implemented as a database with create-only permissions for the Logger user; a listener on a separate, secure box with write-only capabilities; or any other solution that prevents deletion, modification, or unauthorized creation of log entries.
Consequences
Using the Secure Logger pattern helps in logging all data-related application events, user requests, and responses. It facilitates confidentiality and integrity of log files. In addition, it has the following consequences:

Centralizes logging control. The Secure Logger improves reusability and maintainability by centralizing logging control and decoupling the implementation details from the API. This allows developers to use the logging facilities through the API independent of the security functionality built into the logger itself, and it reduces the possibility that business developers will inadvertently circumvent security by misusing it.

Prevents undetected log alteration. The key to successfully compromising a system or application is the ability to cover your tracks, which involves altering log files so that an administrator cannot detect that a breach has occurred. By employing a Secure Logger, security developers can prevent undetected log alterations, ensuring that a breach can be detected through log file forensics, which is the first step in tracking down an intruder and preventing future security breaches.

Reduces performance. The Secure Logger impacts performance due to its use of cryptographic algorithms. Operations such as message digests, digital signatures, and encryption are computationally expensive and add performance overhead. Use only the necessary functionality to avoid unwanted performance overhead; reduced performance can amount to a self-inflicted denial-of-service attack.

Promotes extensibility. Security is a constantly evolving process. To protect against both current and future threats, code must be adaptable and extensible. The Secure Logger provides the requisite extensibility by hiding implementation details behind a generic interface. By increasing the overall lifespan of the code, you increase its reliability, having tested it and worked out its bugs.

Improves manageability. Since all of the logging control is centralized, it is easier to manage and monitor. The Secure Logger performs all of the necessary security processing prior to the actual logging of the data, which allows each function to be managed independently of the others without risk of impacting overall security.
Sample Code
Example 9-15 shows a sample signer class, Example 9-16 depicts a digest class, and Example 9-17 provides a sample encryptor class. These classes are used by the Secure Logger to sign, digest, and encrypt messages, respectively.
Reality Check
Should everything be logged from the Web tier? No. The Secure Logger pattern is applicable across tiers. It should be implemented on each tier that requires logging.

Too much performance overhead. Using the Secure Data Logger Strategy incurs severe performance overhead. Expect a significant slowdown due to the extensive use of cryptographic algorithms. The Secure Log Store Strategy is preferable for performance, but it incurs the overhead associated with use of Secure Pipe.

How likely is log tampering? Modifying logs to cover an attacker's tracks is not only common, it is the hallmark of a good hacker. It is difficult to determine how prevalent it is, due to its very nature. A log file that has been successfully altered usually means that the last trace of evidence that a system has been compromised is now gone.

Shouldn't log security be the responsibility of the system administrators? In many cases, system administrators can effectively secure the log, and additional security is unnecessary. It depends on the skill of your operations staff along with the requirements of the application. Like any other security, log security is only as strong as the weakest link. By consolidating and encapsulating log functionality using the Secure Logger, you retain the ability to add additional security, such as the Secure Data Logger Strategy, if and when you find external mechanisms insufficient.
Related Patterns
Abstract Factory Pattern [GoF]. An Abstract Factory, or Factory, provides an interface for creating objects that share a common interface or base class, with the factory responsible for the concrete implementation of the interface.

Secure Pipe [Web Tier]. Secure Pipe shows how to secure the connection between the client and the server, or between servers.

Universally Unique Identifier [Middleware]. A Universally Unique Identifier (UUID) provides an identifier that is unique.
Secure Pipe
Problem
You need to provide privacy and prevent eavesdropping and tampering of client transactions caused by man-in-the-middle attacks.

Web-based transactions are often exposed to eavesdropping, replay, and spoofing attacks. Any time a request goes over an insecure network, the data can be intercepted or exposed by unauthorized users. Even within the confines of a VPN, data is exposed at the endpoint, such as inside an intranet. When exposed, it is subject to disclosure, modification, or duplication. Many of these types of attacks fall into the category of man-in-the-middle attacks. Replay attacks capture legitimate transactions, duplicate them, and resend them. Sniffer attacks just capture the information in the transactions for later use. Network sniffers are widely available today and have evolved to a point where even novices can use them to capture unencrypted passwords and credit card information. Other attacks capture the original transactions, modify them, and then send the altered transactions to the destination.

This is a common problem shared by all applications that do business over an untrusted network, such as the Internet. For simple Web applications that just serve up Web pages, it is not cost-effective to address these potential attacks, since there is little reason for attackers to carry one out (other than defacement of the pages) and therefore the risk is relatively low. But if you have an application that requires sending sensitive data (such as a password) over the wire, you need to protect it from such attacks.
Forces
You want to avoid writing application logic to provide the necessary protection; it is better to push this functionality down into the infrastructure layer to avoid complexity.
You want to make use of hardware devices that can speed up the cryptographic algorithms needed to prevent confidentiality- and integrity-related issues.
You want to adopt tested, third-party products for reliable data and communication security.
You want to limit protection to only sensitive data, due to the large processing overhead and the subsequent delay introduced by encryption.
Solution
Use a Secure Pipe to guarantee the integrity and privacy of data sent over the wire. A Secure Pipe provides a simple and standardized way to protect data sent across a network. It does not require application-layer logic and therefore reduces the complexity of implementation. In some instances, the task of securing the pipe can actually be moved out of the application and even off of the hardware platform altogether. Because a Secure Pipe relies on encrypting and decrypting all of the data sent over it, there are performance issues to consider. A Secure Pipe allows developers to delegate processing to hardware accelerators, which are designed especially for the task.
Structure
Figure 9-20 depicts a class diagram of the Secure Pipe pattern in relation to an application.
The following participants are illustrated in the class diagram shown in Figure 9-20:

Client. Initiates a login with the application.
Application. Creates a system-level SecurePipe over which to communicate with the client.
SecurePipe. A SecurePipe is an encrypted communications channel that provides data privacy and integrity between two endpoints.

In the sequence shown in Figure 9-21, a client needs to connect to an application over a secure communication line. The sequence diagram shows how the client and the application communicate using the Secure Pipe. The interaction is as follows:

1. Client sends a login request to the Application.
2. Application uses System to create a SecurePipe.
3. SecurePipe negotiates the parameters of the secure connection with the Client.
4. Client sends a request to the Application.
5. SecurePipe processes the request and creates a secure message by encrypting the data. It sends the message over the wire to the corresponding SecurePipe component on the Application side.
6. The SecurePipe on the Application side processes the request received from the Client by decrypting it and then forwards the decrypted message to the Application.
7. Client sends a logout request.
8. Application destroys the SecurePipe.

There are two components of the Secure Pipe pattern: the client-side component and the server-side component. These components work together to establish secure communication. Typically, these components would be SSL or TLS libraries that the client's Web browser and the application use for secure communications.
Strategies
There are several strategies for implementing a Secure Pipe pattern, each with its own set of benefits and drawbacks. Those strategies include:

Web-server-based SSL/TLS
Hardware-based cryptographic accelerator cards
Application-layer encryption using the Java Cryptography Extension (JCE)
Web-Server-Based SSL
All major Web-server vendors support SSL. All it takes to implement SSL is to obtain or create server credentials from a CA, including the server X.509 certificate, and configure the Web server to use SSL with these credentials. Before enabling SSL, the Web server must be security-hardened to prevent compromise of the server's SSL credentials. Since these credentials would be stored on the Web server, if that server were compromised, an attacker could gain access to the server's credentials (including the private key associated with the certificate) and would then be able to impersonate the server.
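For a Java-based server, the credential setup described above is typically done with the JDK keytool utility. The commands below are only a sketch: the alias, file names, and keystore path are illustrative, and exact flags vary by JDK version.

# Generate a key pair and self-signed certificate in a new keystore
keytool -genkey -alias server -keyalg RSA -keystore server.jks

# Produce a certificate signing request (CSR) to submit to the CA
keytool -certreq -alias server -file server.csr -keystore server.jks

# Import the CA-signed certificate reply into the keystore
keytool -import -alias server -trustcacerts -file server.cer -keystore server.jks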
In some cases, the Secure Pipe can be implemented in the application layer by making use of the Java Secure Socket Extension (JSSE) framework. JSSE enables secure network communications using the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. It includes functionality for data encryption, server authentication, message integrity, and optional client authentication. Example 9-18 shows how to create secure RMI connections by implementing an RMI server socket factory that provides SSL connections, and thus a secure tunnel, for the RMI protocol.
Consequences
Ensures data confidentiality and integrity during communication. The Secure Pipe pattern enforces data confidentiality and integrity using a mixture of encryption and digital signatures. Using SSL/TLS mechanisms, all point-to-point communications links can be secured from man-in-the-middle attacks.

Promotes interoperability. Using industry-standard infrastructure components to implement the Secure Pipe pattern allows application owners to achieve greater interoperability with clients and partners. By taking advantage of infrastructure products and standard protocols such as SSL/TLS and IPSec, application-level interoperability can be achieved between Web browser clients and Web-server-based applications.

Improves performance. Delegating CPU-intensive cryptographic operations to hardware infrastructure often yields performance benefits. Strategies such as SSL accelerators and network appliances have often demonstrated fourfold performance gains over application-layer processing.

Reduces complexity. The Secure Pipe pattern reduces complexity by separating complex cryptographic algorithms and procedures from application logic. The details associated with providing secure communications can be pushed down into the infrastructure, thus freeing the application to focus on business logic rather than security.
Sample Code
Example 9-18. Creating a Secure RMI Server Socket Factory that Uses SSL
package com.csp.web.securepipe;

import java.io.FileInputStream;
import java.io.IOException;
import java.io.Serializable;
import java.net.ServerSocket;
import java.rmi.server.RMIServerSocketFactory;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLServerSocketFactory;

/**
 * This class creates RMI SSL connections.
 */
public class RMISSLServerSocketFactory
        implements RMIServerSocketFactory, Serializable {

    SSLServerSocketFactory ssf = null;

    /**
     * Constructor.
     */
    public RMISSLServerSocketFactory(char[] passphrase) {
        // Set up the key manager to do server authentication.
        SSLContext ctx;
        KeyManagerFactory kmf;
        KeyStore ks;
        try {
            ctx = SSLContext.getInstance("SSL");
            // Retrieve an instance of an X509 key manager.
            kmf = KeyManagerFactory.getInstance("SunX509");
            // Get the keystore type and location from system properties.
            String keystoreType =
                System.getProperty("javax.net.ssl.keyStoreType");
            ks = KeyStore.getInstance(keystoreType);
            String keystoreFile =
                System.getProperty("javax.net.ssl.keyStore");
            // Load the keystore.
            ks.load(new FileInputStream(keystoreFile), passphrase);
            kmf.init(ks, passphrase);
            passphrase = null;
            // Initialize the SSL context.
            ctx.init(kmf.getKeyManagers(), null, null);
            // Set the server socket factory for getting SSL connections.
            ssf = ctx.getServerSocketFactory();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Creates an SSL server socket and returns it.
     */
    public ServerSocket createServerSocket(int port) throws IOException {
        ServerSocket ss = ssf.createServerSocket(port);
        return ss;
    }
}
Example 9-19. Creating a Secure RMI Client Socket Factory that Uses SSL

package com.csp.web.securepipe;

import java.io.IOException;
import java.io.Serializable;
import java.net.Socket;
import java.rmi.server.RMIClientSocketFactory;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class RMISSLClientSocketFactory
        implements RMIClientSocketFactory, Serializable {

    public Socket createSocket(String host, int port) throws IOException {
        SSLSocketFactory factory =
            (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket = (SSLSocket) factory.createSocket(host, port);
        return socket;
    }
}
Infrastructure
Infrastructure for ensuring data privacy and integrity. Any communication over the Internet or an intranet is subject to attack. Attackers can sniff the wire and steal data, alter it, or resend it. Developers need to protect this data by encrypting it and using digitally signed timestamps, sequence numbers, and checksums. Using industry standards such as SSL and TLS, developers can secure data in a way that is interoperable with Web browsers and other client applications.

Data encryption performance. Encryption is an expensive processing task. Hardware devices can improve throughput and response times by performing the necessary cryptographic functions in hardware, freeing up CPU cycles for the application.
Web Tier
Server certificates. One of the requirements of SSL is public key management and trust. To solve this problem, certificate authorities were established to act as trusted third parties responsible for the authentication and validation of public keys through the use of digital certificates. Several CA root certificates are packaged with Web browsers and in the Java Runtime Environment's cacerts file. This allows developers to take advantage of client certificate chains to ensure that the requesting client was properly authenticated by a trusted third party.
Reality Check
Will Secure Pipe impact performance? Using a Secure Pipe will certainly impact performance noticeably. Do not use it when it is not required. Many business cases dictate securing sensitive information, and in those cases a Secure Pipe must be used. If your Web application mandates protecting passwords and sensitive information in transit, use a Secure Pipe (such as HTTPS) just for those operations; you may conduct all other transactions over standard HTTP communication.

Are there any compatibility issues with Secure Pipe? Implementing a Secure Pipe requires agreement between the communicating peers. The client and the server must support the same cryptographic algorithms and key lengths, as well as agree upon a common protocol for exchanging keys. SSL and TLS ensure this compatibility by providing handshake mechanisms that allow clients and servers to negotiate algorithms and key lengths.
Related Patterns
Point-to-Point Channel [EIP]. A Secure Pipe is similar to a Point-to-Point Channel in its implementation. A Point-to-Point Channel ensures that only one receiver will receive a message from the sender. The Secure Pipe guarantees that only the intended receiver of the message will be able to successfully retrieve the message that was sent.
Secure Service Proxy

Forces
You want to support a legacy application's security protocols and can't modify the existing application.
You want to completely decouple security tasks from applications so that you do not inadvertently break existing functionality.
You want to leverage out-of-the-box security from reliable third-party vendors, or reuse the J2EE security infrastructure developed for a different purpose, which is less risky than a home-grown security solution.
You want to protect Web service endpoints from malicious requests.
Solution
Use a Secure Service Proxy to provide authentication and authorization externally by intercepting requests for security checks and then delegating the requests to the appropriate service.

A Secure Service Proxy intercepts all requests from the client, identifies the requested service, enforces the security policy mandated by that service, optionally transforms the request from the inbound protocol to the protocol expected by the destination service, and finally forwards the request to the destination service. On the return path, it transforms the results from the protocol and format used by the service to the format expected by the requesting client. It can also choose to maintain the security context created in the initial request in a session created for the client, with the intent of using it in future requests. The Secure Service Proxy can be configured on the corporate perimeter to provide authentication, authorization, and other security services that enforce policy for security-unaware lightweight or legacy enterprise services.

While the Secure Service Proxy pattern acts similarly to an Intercepting Web Agent pattern, it is more advanced because it is not restricted to HTTP URL-based access control and can delegate service requests to any service using any transport protocol. It externalizes the addition of security logic to applications that have already been implemented and deployed, and it integrates cleanly with newer applications that have been developed without security.
Structure
The Secure Service Proxy pattern allows developers to decouple the protocol details from the service implementation. This allows multiple clients using different network protocols to access the same enterprise service that expects one particular protocol. For instance, you may have an enterprise service that expects only HTTP requests. If you want to add additional protocol support, you can use a Secure Service Proxy rather than building each protocol handler into the service. Each protocol may have its own way of handling security, and therefore the Secure Service Proxy can delegate security handling of each protocol to its appropriate protocol handler. For example, using a Secure Service Proxy in an enterprise messaging scenario involves transforming message formats, such as converting an HTTP or an IIOP request from the client to a Java Message Service (JMS) message expected by a message-based service, and vice versa. In so doing, the proxy can choose to use a channel that connects to the destination service, further decoupling the service implementation details from the proxy. Figure 9-22 is a class diagram of the Secure Service Proxy pattern.
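The HTTP-to-JMS transformation mentioned above might look like the following sketch, executed once the Secure Service Proxy has validated the request. The JNDI names jms/ConnectionFactory and jms/ServiceQueue are assumptions for illustration only.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class JMSProtocolHandler {

    // Forwards an already-validated HTTP request body to the
    // message-based service as a JMS TextMessage.
    public void forward(String httpPayload) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf =
            (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/ServiceQueue");
        Connection connection = cf.createConnection();
        try {
            Session session =
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage(httpPayload);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}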
In the sequence shown in Figure 9-23, a Secure Service Proxy provides security to an Enterprise Service. The following are the participants:

Client. An end user making a request from a Web browser.
SecureServiceProxy. The SecureServiceProxy is responsible for intercepting and validating client requests and then forwarding them to an enterprise service.
SecurityProtocolHandler. The SecurityProtocolHandler validates requests based on the protocols supported by the enterprise service and the client.
EnterpriseService. An existing application or service that cannot or should not be modified to support additional security protocols.

In Figure 9-23, the following sequence takes place:

1. Client sends a request to the EnterpriseService.
2. SecureServiceProxy intercepts the request.
3. SecureServiceProxy uses SecurityProtocolHandler to validate the request.
4. SecureServiceProxy transforms the request to a protocol suitable for the EnterpriseService.
5. SecureServiceProxy forwards the request to the EnterpriseService.
6. SecureServiceProxy transforms the response from the EnterpriseService to the Client's protocol.
7. SecureServiceProxy forwards the response to the Client.
Strategies
The Secure Service Proxy can represent a single service or act as a service coordinator, orchestrating multiple services. A Secure Service Proxy may act as a façade, exposing a coarse-grained interface to many fine-grained services, coordinating the interaction between those services, maintaining security context and transaction state between service invocations, and transforming the output of one service into the input format expected by another service. This avoids requiring changes to client code when the service implementations and interfaces change over time.
This is also useful for retrofitting newer security protocols onto legacy applications. If you want to provide a Web service façade to an existing application that expects a security token in the form of a cookie, you need to either adapt the Web services security protocol (e.g., a SAML token) to the legacy format or rewrite the application. Since you may not be able to change the code of the legacy application, or doing so may prove too cumbersome, you are better off using the Single Service Secure Service Proxy Strategy. That way, the Secure Service Proxy can perform the necessary translation, independent of the existing service. This reduces effort and complexity and is less likely to introduce bugs or security holes.
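A Single Service Secure Service Proxy fronting such a legacy application might perform a token translation along these lines. This is a hypothetical sketch: the cookie name and the URL encoding of the value are illustrative assumptions, and in practice the value would be encrypted or signed by the legacy security product.

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import javax.servlet.http.Cookie;

public class LegacyTokenTranslator {

    // Once the proxy has validated the inbound Web services credential
    // (e.g., a SAML token), it mints the session cookie that the legacy
    // application expects.
    public Cookie toLegacyCookie(String authenticatedSubject)
            throws UnsupportedEncodingException {
        // "LEGACYSESSION" is an assumed cookie name for illustration.
        String value = URLEncoder.encode(authenticatedSubject, "UTF-8");
        return new Cookie("LEGACYSESSION", value);
    }
}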
Sample Code
Example 9-20 provides sample code for the Single Service Secure Service Proxy Strategy.
Example 9-20. Secure Service Proxy Single Service Strategy Sample Code
package com.csp.web;

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class ServiceProxyEndpoint extends HttpServlet {

    /**
     * Process the HTTP POST request.
     */
    public void doPost(HttpServletRequest request,
                       HttpServletResponse response)
            throws ServletException, IOException {
        // Get the appropriate service proxy based on the request parameter,
        // which contains the service identifier.
        SecureServiceProxy proxy =
            ProxyFactory.getProxy(request.getQueryString());
        ((HTTPProxy) proxy).init(request, response);
        // Make the request pass through security validation.
        if (!proxy.validateSecurityContext(request)) {
            request.getRequestDispatcher("unauthorizedMessage")
                   .forward(request, response);
            return;
        }
        // Have the proxy process the request.
        proxy.process();
        // Finally, send the response back to the client.
        proxy.respond();
    }
}

// This interface must be implemented by all service proxies.
public interface SecureServiceProxy {
    public boolean validateSecurityContext(HttpServletRequest request);
    public void process();
    public void respond();
}

// This interface, which caters to processing and responding to HTTP
// requests, must be implemented by an HTTP proxy.
public interface HTTPProxy extends SecureServiceProxy {
    public void init(HttpServletRequest request,
                     HttpServletResponse response);
}

/**
 * This is a sample proxy class that uses HTTP for client-to-proxy
 * communication and SOAP for proxy-to-service communication.
 * It is responsible for security validations and message translations,
 * which involve marshalling and unmarshalling messages from one format
 * to another.
 */
public class SimpleSOAPServiceSecureProxy implements HTTPProxy {

    private HttpServletRequest request;
    private HttpServletResponse response;
    private SOAPMessage input;
    private SOAPMessage output;

    public void init(HttpServletRequest request,
                     HttpServletResponse response) {
        this.request = request;
        this.response = response;
    }

    // Validates the security credentials in the request.
    public boolean validateSecurityContext(HttpServletRequest request) {
        HttpSession session = request.getSession();
        LoginContext lc =
            (LoginContext) session.getAttribute("LoginContext");
        if (lc == null)
            return false;
        if (!AuthenticationProvider.verify(lc))
            return false;
        if (!AuthorizationProvider.authorize(lc, request))
            return false;
        return true;
    }

    public void process() {
        // Translate the HTTP request into a SOAP message and invoke
        // the target service (schematic helper calls).
        MessageFactory factory = MessageFactory.newInstance();
        input = factory.createSOAPMessage(request);
        SOAPService service =
            SOAPService.getService(request.getParameter("action"));
        output = service.execute(input);
    }

    public void respond() {
        try {
            response.getWriter().write(output.getHTTPResponse());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Consequences
Application firewall capability. A Secure Service Proxy can act as an application firewall, curbing malicious requests and preventing them from reaching mission-critical, lightweight Web applications. It can also perform network address translation, shielding the Web infrastructure from the outside world. A smart Secure Service Proxy can monitor the number and frequency of requests from clients, potentially suppressing denial-of-service traffic from specific clients or determining the quality of service assigned to those clients.

Provides flexibility with a loosely coupled architecture. The Secure Service Proxy pattern provides a loosely coupled approach to providing security across protocols. It translates protocol-specific security semantics between clients and existing services, allowing those services to be accessed by clients that otherwise would not have been able to use them due to protocol impedance mismatch.

Enables integration of security mechanisms with legacy systems. Security architects and developers can quickly implement a sufficient security solution for legacy applications using a Secure Service Proxy, which acts as an adapter for non-J2EE applications while using the J2EE security infrastructure. The Secure Service Proxy takes responsibility for authentication and authorization before forwarding the request to security-unaware legacy applications.

Improves scalability. The Secure Service Proxy can delay input validation until the requester is authenticated and authorized, thus avoiding resource wastage caused by misuse of the system by attackers. The proxy can also store the session between requests from the client and manage the security context used in service calls, offloading that burden from lightweight, thin Web services. All these factors enhance the scalability of the system.
Reality Check
Is Secure Service Proxy really required? If the Secure Service Proxy is written with the only goal of restricting access to applications differentiated by URL pattern, an out-of-the-box product utilized with the Intercepting Web Agent pattern (mentioned later in this chapter) can serve the same needs without custom code development. The Secure Service Proxy is generally only required when you must integrate with an existing service's legacy security protocol.

Too service specific? A generic, multiprotocol Secure Service Proxy must be written in such a way that new applications and protocol mechanisms can be easily added to the existing proxy.
Related Patterns
Proxy [GoF]. A Proxy acts as a surrogate or placeholder. The Secure Service Proxy acts as a security proxy for the services that it protects.

Intercepting Web Agent [Web Tier]. The Intercepting Web Agent can also retrofit security onto existing Web applications. However, it is not well suited to cases where your resources do not correspond to URLs or where you need fine-grained access control.

Secure Message Router [Web Services Tier]. The Secure Message Router acts as an intermediary that applies, reapplies, or even transforms the message-level security context before delivering the message to the destination service. The output of this pattern is a securely wrapped message that can again traverse an insecure channel.

Extract Adapter [Kerievsky]. The Extract Adapter is similar to the Secure Service Proxy in that it recommends using one class to adapt multiple versions of a component, library, or API.
Intercepting Web Agent

Forces
You do not want to, or cannot, modify the existing application.
You want to completely decouple the authentication and authorization from an existing application.
You want to leverage out-of-the-box security from reliable third-party vendors rather than try to implement your own.
Solution
Use an Intercepting Web Agent to provide authentication and authorization external to the application by intercepting requests before they reach the application.

Using an Intercepting Web Agent protects the application by providing authentication and authorization of requests outside the application. For example, suppose you inherit an application with little or no security and are tasked with providing proper authentication and authorization. Rather than attempt to modify the code or rewrite the Web tier, use an Intercepting Web Agent to provide the proper protection for the application. The Intercepting Web Agent can be installed on the Web server, where it provides authentication and authorization of incoming requests by intercepting them and enforcing access control policy at the Web server.

The decoupling of security from the application provides the ideal approach to securing existing applications that can't do so themselves or that are too difficult to modify. It also provides centralized management of the security-related components. Policy and the details of its implementation are enforced outside the application and can therefore be changed, usually without affecting the application. Third-party products from a variety of vendors provide security using the Intercepting Web Agent pattern.

The Intercepting Web Agent improves maintainability by isolating application logic from security logic. Typically, the implementation requires no code changes, just configuration. It also increases application performance by moving security-related processing out of the application and onto the Web server. Requests that are not properly authenticated or authorized are rejected at the Web server and thus use no extra cycles in the application.
Structure
Figure 9-24 is a class diagram of an Intercepting W eb Agent.
Client. A client performs a login and then sends a request to the Application.
WebServer. The WebServer delegates handling of the request to the InterceptingWebAgent.
InterceptingWebAgent. The InterceptingWebAgent intercepts the request and checks that it is properly authenticated and authorized before forwarding it to the Application.
Application. The Application processes the request without needing to perform any security checks.

Figure 9-25 takes us through a typical sequence of events for an application employing an Intercepting Web Agent. The Intercepting Web Agent is located either on the Web server or between the Web server and the application, external to the application. The Web server delegates handling of requests for the application to the Intercepting Web Agent. It, in turn, checks authentication and authorization of the requests before forwarding them to the application itself. When attempting to access the application, the client will be prompted by the Intercepting Web Agent to log in. Figure 9-25 illustrates the following sequence of events:

1. Client sends a login request to the Application.
2. The WebServer delegates this request to the InterceptingWebAgent.
3. The InterceptingWebAgent performs authentication of the passed-in user credentials.
4. The InterceptingWebAgent stores a cookie with encrypted session information on the Client.
5. The Client sends a request to the Application.
6. The WebServer delegates handling of the request to the InterceptingWebAgent.
7. The InterceptingWebAgent gets the cookie it stored on the Client and verifies it.
8. The InterceptingWebAgent checks that the Client is authorized to send the request (usually through a URL mapping).
9. The InterceptingWebAgent forwards the request to the Application for processing.
The External Policy Server Strategy applies when:

You have multiple Web servers but need only one user and policy store. Web servers usually live in the DMZ, and you do not want user and policy information out where it is more susceptible to compromise.
You want to segregate the responsibility of authentication from the enforcement of it.

The External Policy Server Strategy provides centralized storage and management of user and policy information that can be accessed by all Intercepting Web Agents on different Web servers. Using an external policy server, performance can be improved through caching. Load balancing and failover are possible due to the use of cookies and the separation of authentication and authorization from particular Web Agent instances. The External Policy Server itself must be replicated. Figure 9-26 depicts the interaction between a client and an Intercepting Web Agent that is protecting an application.
Figure 9-26. Intercepting Web Agent using External Policy Server sequence diagram
The sequence of events for an Intercepting Web Agent using an external Policy Server is:

1. Client attempts to log into the application.
2. Intercepting Web Agent intercepts the request and authenticates the Client with the PolicyServer.
3. Upon successful authentication, Client sends a request to the Application.
4. Intercepting Web Agent intercepts the request and authenticates it using the information stored in a cookie.
5. Intercepting Web Agent authorizes the request using the PolicyServer.
6. Upon successful authorization, Intercepting Web Agent forwards the request to the Application.
Consequences
Using the Intercepting Web Agent, developers and architects gain the following:

Helps defend against replay and DoS attacks. Many of the vendor implementations perform tasks beyond just authentication and authorization. They provide auditing, reporting, and forensic capabilities as well. In addition, they perform filtering and weed out replayed or forged requests and DoS attacks.

Provides more flexibility with a loosely coupled architecture. The Intercepting Web Agent pattern provides a loosely coupled connection to remote security services. It also avoids vendor-specific product lock-in by preventing clients from invoking the remote security services directly. This allows developers to quickly add authentication and authorization to their new or existing Web applications. Since the vendors provide the authentication and authorization implementation, you get a more tried-and-true security solution.

Supports legacy integration. Security architects and developers can quickly implement a sufficient security solution for legacy Web applications by employing the Intercepting Web Agent. Because the Intercepting Web Agent can be used outside the application (usually on the Web server), legacy code does not have to be altered. This allows authentication and authorization to be retrofitted into an existing application.

Improves scalability. Because the Intercepting Web Agent is typically implemented on the Web server, it scales horizontally. This scaling can be accomplished in a manner completely independent of the application, thus decoupling security overhead from application performance. This assumes, of course, that the matching components in the Identity tier can scale as well.
Sample Code
Example 9-21 is a sample obj.conf configuration file for a SiteMinder WebAgent running on a Sun Java System Web Server. Example 9-22 is the corresponding magnus.conf file for the WebAgent.
Example 9-21. Sun Java System Web Server obj.conf using CA SiteMinder Web Agent
<Object name="MyForms" ppath="*/MyForms/*">
AuthTrans fn="SiteMinderAgent"
NameTrans fn="document-root" root=""
PathCheck fn="SmRequireAuth"
PathCheck fn="unix-uri-clean"
PathCheck fn="check-acl" acl="default"
PathCheck fn="find-pathinfo"
PathCheck fn="find-index" index-names="index.html,home.html"
ObjectType fn="type-by-extension"
ObjectType fn="force-type" type="text/plain"
Service method="(GET|POST|HEAD)" type="magnus-internal/fcc" fn="SmLoginFcc"
Service method="(GET|HEAD)" type="magnus-internal/scc" fn="smGetCred"
Service method="(GET|HEAD)" type="magnus-internal/ccc" fn="smMakeCookie"
Service method="(GET|POST|HEAD)" type="magnus-internal/sfcc" fn="SmSSLLoginFcc"
Service method="(GET|HEAD|POST)" type="*~magnus-internal/*" fn="send-file"
AddLog fn="flex-log" name="access"
</Object>

<Object name="default">
NameTrans fn="pfx2dir" from="/smreportscgi" dir="/usr/siteminder/reports" name="cgi"
NameTrans fn="pfx2dir" from="/sitemindercgi" dir="/usr/siteminder/admin" name="cgi"
NameTrans fn="pfx2dir" from="/siteminder" dir="/usr/siteminder/admin"
NameTrans fn="pfx2dir" from="/netegrity_docs" dir="/usr/siteminder/netegrity_documents"
NameTrans fn="NSServletNameTrans" name="servlet"
NameTrans fn="pfx2dir" from="/siteminderagent/pwcgi" dir="/usr/siteminder/webagent/pw" name="cgi"
NameTrans fn="pfx2dir" from="/siteminderagent/pw" dir="/usr/siteminder/webagent/pw"
NameTrans fn="pfx2dir" from="/siteminderagent/certoptional" dir="/usr/siteminder/webagent/samples"
NameTrans fn="pfx2dir" from="/siteminderagent/jpw" dir="/usr/siteminder/webagent/jpw"
NameTrans fn="pfx2dir" from="/siteminderagent" dir="/usr/siteminder/webagent/samples"
NameTrans fn="pfx2dir" from="/servlet" dir="/usr/iplanet/web/docs/servlet" name="ServletByExt"
NameTrans fn="pfx2dir" from="/mc-icons" dir="/usr/iplanet/web/ns-icons" name="es-internal"
NameTrans fn="pfx2dir" from="/manual" dir="/usr/iplanet/web/manual/https" name="es-internal"
NameTrans fn="document-root" root="$docroot"
PathCheck fn="unix-uri-clean"
PathCheck fn="check-acl" acl="default"
PathCheck fn="find-pathinfo"
PathCheck fn="find-index" index-names="index.html,home.html"
ObjectType fn="type-by-extension"
ObjectType fn="force-type" type="text/plain"
Service type="magnus-internal/jsp" fn="NSServletService"
Service fn="send-cgi" type="magnus-internal/cgi"
Service method="(GET|HEAD)" type="magnus-internal/imagemap" fn="imagemap"
Service method="(GET|HEAD)" type="magnus-internal/directory" fn="index-common"
Service method="(GET|HEAD|POST)" type="*~magnus-internal/*" fn="send-file"
AddLog fn="flex-log" name="access"
</Object>
Example 9-22. Sun Java System Web Server magnus.conf with CA SiteMinder Web Agent
Init fn="load-modules" funcs="wl_proxy,wl_init" shlib="/usr/iplanet/web/plugins/nsapi/wls7/libproxy.so"
Init fn="wl_init"
Init fn="flex-init" access="$accesslog" format.access="%Ses->client.ip% - %Req->vars.auth-user% [%SYSDATE%] \"%Req->reqpb.clf-request%\" %Req->srvhdrs.clf-status% %Req->srvhdrs.content-length%"
Init fn="load-types" mime-types="mime.types"
Init fn="load-modules" shlib="/usr/iplanet/web/bin/https/lib/libNSServletPlugin.so" funcs="NSServletEarlyInit,NSServletLateInit,NSServletNameTrans,NSServletService" shlib_flags="(global|now)"
Init fn="NSServletEarlyInit" EarlyInit=yes
Init fn="NSServletLateInit" LateInit=yes
SSL2 off
SSL3 off
SSLClientCert off
Init fn="load-modules" shlib="/usr/siteminder/webagent/lib/NSAPIWebAgent.so" funcs="SmInitAgent,SiteMinderAgent,SmRequireAuth,SmLoginFcc,smGetCred,smMakeCookie,SmSSLLoginFcc" LateInit="no"
Init fn="SmInitAgent" config="/usr/iplanet/web/https-pluto/config/WebAgent.conf" LateInit="no"
Init fn="init-cgi" SM_ADM_UDP_PORT="44444" SM_ADM_TCP_PORT="44444"
Reality Check
Should I build or buy? Unless you have specific functional requirements that cannot be met by the products in the market today, consider buying. Several third-party products provide Intercepting Web Agent capabilities, and many of them are mature, meaning that the bugs have been worked out. These products will generally provide more robust functionality and will typically scale better than a home-grown solution.

Too coarse-grained. The Intercepting Web Agent pattern is ideal for legacy applications, but it is often too coarse-grained for current business requirements. Assuming you purchase a third-party product, that product probably only allows access-control decisions down to the URL level. In today's Web applications, the URL level is too coarse-grained. With industry-standard frameworks such as Struts, it is necessary to go beyond the URL level and into the request parameters for fine-grained access-control decisions. For those situations, an Intercepting Web Agent may not be an appropriate pattern.
Related Patterns
Secure Service Proxy [Web Tier]. A Secure Service Proxy acts as a security protocol translator. Like the Intercepting Web Agent, it intercepts requests and performs security validations. The Intercepting Web Agent, on the other hand, performs all of the security functions itself, not relying on the services it protects for anything.

Proxy [GoF]. A Proxy acts as a surrogate or placeholder. The Intercepting Web Agent acts as a security proxy for the Web application that it protects.

Message Interceptor Gateway [Web Services Tier]. The Intercepting Web Agent is similar to the Message Interceptor Gateway. It acts as a translator in the same regard, the only difference being that its purpose is security protocol translation and not message translation.
Infrastructure
1. Put Web Servers in a DMZ. Secure the Internet-facing Web server host in a DMZ (demilitarized zone) using an exterior firewall. It is always a recommended option to use DMZ bastion hosts or switched connections to target Web servers. This will prevent attackers who have compromised the Web server from penetrating deeper into the application.

2. Use Stateful Firewalls. Use stateful firewall inspection to keep track of all Web-tier transmissions and protocol sessions. Make sure the firewall blocks all unrequested protocol transmissions.

3. Drop Non-HTTP Packets. Make sure your firewall is configured to drop connections other than HTTP and HTTP over SSL. This will help prevent denial-of-service (DoS) attacks and disallow malicious packets intended for back-end systems.

4. Minimize and Harden the Web Server Operating System. Make sure the operating system that runs the Web and application servers is hardened and does not run any unsolicited services that may provide an opening for an attacker.

5. Secure Administrative Communications. Make sure all administration tasks on the server are performed over encrypted communication. Disable remote administration of Web and application servers using system-level and root administrator access. All remote administration should be carried out using secure-shell connections from trusted machines; disallow administrator access from untrusted machines.

6. Disallow Untrusted Services. Disable telnet, remote login services, and FTP connections to the server machines. These are commonly attacked services and represent a strong security risk.

7. Check Web Server User Permissions. Make sure the running Web and application servers' configurations and user privileges grant no rights to access or modify system files.

8. Disable CGI. Unless required, disable running CGI applications in the Web server or application server and disallow access to /cgi-bin directories. This is another common source of attacks.

9. Enforce Strong Passwords. Change all default passwords and use robust password mechanisms to avoid password-related vulnerabilities such as sniffing and replay attacks. For all application-level administrators, use password aging, account locking, and password-complexity verification. One-time password mechanisms, dynamic passwords that use challenge-response schemes, smart card- and certificate-based authentication, and multifactor authentication are reliable practices.

10. Audit Administration Operations. Limit Web administration access to the system to very few users. Monitor their account validity, log all their administration operations using encryption mechanisms, and make sure that system log files are written to a different machine that is secured from the rest of the network.

11. Check Third-Party IPs. All third-party supporting applications that are required to coexist with the Web and application servers have to be tested for their usage of IP addresses and ports. Those applications must follow the maintained rules of the DMZ.

12. Monitor Web Server Communications. Monitor all transmissions and Web server requests and responses for suspicious activity and misuse. Use watchdog macros or daemons to monitor and trap these activities. When such abuse is detected, check the integrity of the application afterward and notify the system security administrator.

13. Set Up IDS. Use intrusion detection systems to detect suspicious acts, abuses, and unsolicited system uses, and alert security administrators to such activities.

14. Deny Outbound Traffic to Web Server. Disallow all application requests to the IP addresses of the Web servers running in the DMZ.

15. Distrust Servers in the DMZ. Do not store any user or application-generated information in the DMZ. Store all sensitive information behind the DMZ interior firewall, with the expectation that servers running in the DMZ will eventually be compromised. The exception would be a honeypot, which is a server created specifically to lure attackers into what appears to be the real application; their activities are then logged and analyzed to help stave off future attacks.
Communication
16. Secure the Pipe. For all security-sensitive Web applications and Web-based online transactions, make sure the session and data exchanged between the server and client remain confidential and tamper-proof during transit. SSL/TLS is the de facto technology for securing communication on the wire. It allows Web-based applications to communicate over a secure communication channel. Using SSL communication with digital certificates offers confidentiality and integrity of data transmitted between the Web application and the client, as well as client authentication.

17. Segregate by Sensitivity. Make sure that Web applications that contain sensitive information and Web applications that do not run on different machines and different Web server instances.

18. Enforce Strong Encryption. For applications that use classified or financial data, make sure that the Web server does not accept weak encryption. Disable or delete weak encryption cipher suites from the Web server and enforce adequate key lengths.

19. Don't Flip Back to HTTP from HTTPS. After switching to SSL communication, make sure the application no longer accepts non-SSL requests until the user logs out of the SSL session. In the event that the client sends a non-SSL request, the application must enforce reauthentication of that user over a new SSL session and then stop listening to non-SSL requests. Verifying SSL requests can be implemented using various connection filter mechanisms.

20. Capture Bad Requests. Log and monitor all fake SSL and non-SSL requests using filters. For all such unsolicited requests, redirect the user to an authentication page to provide valid login credentials.

21. Use Certificates for Server-to-Server Communications. For server-to-server or nonbrowser-based client communications that host secure transactions, always use mutual or client certificate-based authentication over SSL. In this case, the server authenticates the client using the client's X.509 certificate, a public-key certificate that conforms to the standard defined by the X.509 Public Key Infrastructure (PKI). This provides a more reliable form of authentication than standard password-based approaches.

22. Check Mutual Authentication. Verify that mutual authentication is configured and running properly by examining debug messages. To generate debug messages from the SSL mutual authentication handshake, pass the system property javax.net.debug=ssl,handshake to the application (as shown in the example after this list), which will report whether or not mutual authentication is working.

23. Check Certificate Expiration. While authenticating a client using mutual authentication, be sure to check for certificate expiration or revocation.

24. Use SSL Accelerators. Hardware-based SSL accelerators enhance secure communication performance by offloading the cryptographic processing load from the Web server.

25. Use SGC When Possible. Consider using Server Gated Cryptography (SGC) mechanisms when possible to ensure the highest level of security for Web applications regardless of browser clients or versions.

26. Check Export Policies. Before installing SSL server certificates, make sure that you understand and are in accordance with your country's export policies regarding products containing cryptographic technology. Some countries and organizations may ban, or require special licenses for, the use of encryption technologies.
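For item 22, the debug property is typically passed on the JVM command line; the main class name here is purely illustrative:

java -Djavax.net.debug=ssl,handshake com.csp.web.MyServerApp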
Application
27. Disallow Direct Access. Make sure that the application resides on a server accessed via reverse-proxy or Network Address Translation (NAT)-based IP addresses, and then rewrite the URLs of the Web applications. This protects the application from direct access by unsolicited users.

28. Encrypt Application-Specific Properties. Make sure that all application-specific properties stored on local disks that are exposed to physical access are encrypted or digested using encryption or secure hash mechanisms. Decrypt these entries and verify them before use in the application. This protects against unauthorized access to, or modification of, application-specific parameters.

29. Restrict Application Administrator Privileges. Do not create an application administrator user account (that is, admin or root) with explicit administration access and privileges. If someone steals the administrator's password and abuses the application with malicious motives, it is hard to find out who really abused the application. Instead, use the security Principal of the user and assign role-based access by assigning users to the administrator role. This helps in identifying the tasks carried out by the Principal associated with the administrator role.

30. Validate Request Data. Verify and validate all user requests and responses, as well as inbound and outbound data exchanged with the application. Apply constraints and verify the input data so that it does not cause any undesirable side effects in the application.

31. Validate Form Fields. Ensure that any alteration, insertion, or removal of HTML form fields by the originating browser is detected, logged, and met with an error message.

32. Use HTTP POST. Use HTTP POST rather than HTTP GET, and avoid HTTP GET requests when generating HTML forms. HTTP GET requests append information to the URL, allowing sensitive information to be revealed in the URL string.

33. Track Sessions. Identify the originating user and the host making the application request in the session ID. Verify that all subsequent requests are received from that same user's host origin until the user logs out. This protects application sessions from hijacking and spoofing.

34. Obfuscate Code. Obfuscate all application-related classes to avoid code misuse and to protect the code from being easily reverse engineered. This should be done for stand-alone Java clients, helper classes, precompiled JSPs, and application JAR files.

35. Sign JAR Files. Make sure that all JAR files are signed and sealed before deployment. This includes signing JNLP files (Java Web Start), applets, and JARs for application server deployments.

36. Audit All Relevant Security Tasks. Securely log, audit, and timestamp all relevant application-level events. Audit events should include login attempts and failures, logouts, disconnects, timeouts, administration tasks, user requests and responses, exceptions, database connections, and so forth. Redirect the log data to a file or database repository residing on another machine. Logging and auditing help to track and identify users with malicious intentions.

37. Audit All Relevant Business Tasks. Create audit trails for all identified user-level sessions and actions with timestamps, and store them in a separate log file with unique line identifiers. This helps in achieving nonrepudiation in both the business and technical aspects of the application.

38. Destroy HTTP Sessions on Logout. Once a user logs out of or exits a security-sensitive application resource, invalidate the HTTP session and remove all state within it, as sketched after this list. Leaving stale sessions on the server often leads to security breaches involving session hijacking, client-side Trojan horses, and eavesdropping on subsequent sessions.

39. Set Session Timeouts. Expire all user HTTP sessions after a period of inactivity. Redirect the user back to the login page for reauthentication to restore the stored HTTP session state.
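Items 38 and 39 translate directly to the servlet session API. Below is a minimal logout sketch (class and page names illustrative); the inactivity timeout of item 39 can be set per session with session.setMaxInactiveInterval(900) or declaratively with the <session-timeout> element in web.xml:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class LogoutServlet extends HttpServlet {

    protected void doPost(HttpServletRequest request,
                          HttpServletResponse response)
            throws ServletException, IOException {
        HttpSession session = request.getSession(false);
        if (session != null) {
            // Item 38: destroy all server-side state for this user
            session.invalidate();
        }
        response.sendRedirect("login.jsp");
    }
}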
References
[Sua] Liz Blair. "Build to Spec." http://java.sun.com/developer/technicalArticles/J2EE/build/build2.html
[WebAppSecurity] Java Web Services Tutorial. http://java.sun.com/webservices/docs/1.1/tutorial/doc/WebAppSecurity3.html
[Needham] R. M. Needham and M. D. Schroeder. "Using Encryption for Authentication in Large Networks of Computers." Communications of the ACM, Vol. 21 (12), pp. 993-999.
[Kerievsky] Joshua Kerievsky. Refactoring to Patterns. Addison-Wesley, 2004.
[Vau] David Winterfeldt and Ted Husted. Struts in Action, "Chapter 12: Validating User Input." http://java.sun.com/developer/Books/javaprogramming/struts/struts_chptr_12.pdf
[POSA] Buschmann, Meunier, Rohnert, Sommerlad, and Stal. Pattern-Oriented Software Architecture: A System of Patterns. Wiley, 1996.
[GoF] Gamma, Helm, Johnson, and Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1994.
[EIP] Hohpe and Woolf. Enterprise Integration Patterns. Addison-Wesley, 2004.
[CJP2] Alur, Crupi, and Malks. Core J2EE Patterns, Second Edition. Prentice Hall, 2003.
[Java2] The Java 2 Runtime Environment. http://java.sun.com/java2
[RFC1508] RFC 1508: Generic Security Service Application Program Interface. http://www.faqs.org/rfcs/rfc1508.html
[RFC2743] RFC 2743: Generic Security Service Application Program Interface Version 2, Update 1. http://www.faqs.org/rfcs/rfc2743.html
[JAAS] Java Authentication and Authorization Service Developer Guide. http://java.sun.com/security/jaas/doc/api.html
Chapter 10. Securing the Business Tier: Design Strategies and Best Practices
Topics in This Chapter

Security Considerations in the Business Tier
Business Tier Security Patterns
Best Practices and Pitfalls

In Chapter 9, we discussed the security patterns and best practices related to the Web tier. In this chapter, we examine the security patterns and best practices applicable to the Business tier, which comprises the components responsible for implementing the application's business logic. These patterns build upon those outlined in Core J2EE Patterns [CJP2] and assume the use of certain J2EE patterns and best practices mentioned there, as well as industry-recognized approaches. We begin by briefly stating the prominent security considerations relevant to the Business tier; these considerations are the driving forces behind the security patterns. We then dive into a detailed explanation of the patterns themselves. Finally, we list some best practices and pitfalls for securing the Business tier.
Audit Interceptor

Forces
You want centralized and declarative auditing of service requests and responses.
You want auditing of services decoupled from the applications themselves.
You want pre- and post-process audit handling of service requests, response errors, and exceptions.
Solution
Use an Audit Interceptor to centralize auditing functionality and define audit events declaratively, independent of the Business tier services. An Audit Interceptor intercepts Business tier requests and responses. It creates audit events based on the information in a request and response using declarative mechanisms defined externally to the application. By centralizing auditing functionality, the burden of implementing it is removed from the back-end business component developers. Therefore, there is reduced code replication and increased code reuse. A declarative approach to auditing is crucial to maintainability of the application. Seldom are all the auditing requirements correctly defined prior to implementation. Only through iterations of auditing reviews are all of the correct events captured and the extraneous events discarded. Additionally, auditing requirements often change as corporate and industry policies evolve. To keep up with these changes and avoid code maintainability problems, it is necessary to define audit events in a declarative manner that does not require recompilation or redeployment of the application. Since the Audit Interceptor is the centralized point for auditing, any required programmatic change is isolated to one area of the code, which increases code maintainability.
Structure
Figure 10-1 depicts the class diagram for the Audit Interceptor pattern. The Client attempts to access the Target. The AuditInterceptor class intercepts the request and uses the AuditEventCatalog to determine if an audit event should be written to the AuditLog.
Figure 10-2 shows the sequence of events for the Audit Interceptor pattern. The Client attempts to access the Target, not knowing that the Audit Interceptor is an intermediary in the request. This approach allows clients to access services in the typical manner without introducing new APIs or interfaces specific to auditing that the client would otherwise not care about.
The diagram in Figure 10-2 does not reflect the implementation of how the request is intercepted, but simply illustrates that the AuditInterceptor receives the request and then forwards it to the Target.
1. Client sends a request to the Target.
2. AuditInterceptor intercepts the request and uses the EventCatalog to determine which, if any, audit event to generate and log.
3. AuditInterceptor uses the AuditLog to log the audit event.
4. AuditInterceptor forwards the request to the Target resource.
5. AuditInterceptor uses the EventCatalog to determine whether the request response, or any exceptions raised, should generate an audit event.
6. AuditInterceptor uses the AuditLog to log the generated audit event.
Strategies
The Audit Interceptor pattern provides a flexible, unobtrusive approach to auditing Business tier events. It offers developers an easy-to-use approach to capturing audit events by decoupling auditing from the business flow. This allows business developers to disregard auditing and defer the onus to the security developers, who then deal with auditing only in a centralized location. Auditing can easily be retrofitted into an application using this pattern. By making use of an Event Catalog, the Audit Interceptor becomes decoupled from the actual audit events and can therefore incorporate changes in auditing requirements via a configuration file. The following is a strategy for implementing the Audit Interceptor.
Secure Service Façade Interceptor Strategy

Using a Secure Service Façade Interceptor strategy, developers can audit at the entry and exit points of the Business tier. The SecureServiceFaçade is the appropriate point for audit interception because its job is to forward to the Application Services and Business Objects. Typically, a request involves several Business Objects or Application Services, though only one audit event is required for that request. For example, a credit card verification service may consist of one Secure Service Façade that invokes the several Business Objects that make up the service, such as an expiration date check, a Luhn-10 check, and a card type check. It is unlikely that each individual check generates an audit event; more likely, only the verification service itself generates one. In Figure 10-3, the SecureServiceFaçade is the entry to the Business tier. It provides the remote interface that the Client uses to access the target component, such as another EJB or a Business Object. Instead of forwarding directly to the target component, the SecureServiceFaçade first invokes the AuditInterceptor. The AuditInterceptor then consults the EventCatalog to determine whether to generate an audit event and, if so, which audit event to generate. If an audit event is generated, the AuditLog is used to persist it. Afterward, the SecureServiceFaçade forwards the request as usual to the Target. When the invocation of the Target returns, the SecureServiceFaçade again calls the AuditInterceptor. This allows auditing of both start and end events. Exceptions raised from the invocation of the Target also cause the SecureServiceFaçade to invoke the AuditInterceptor; more often than not, you want to generate audit events for exceptions. Figure 10-4 depicts the Secure Service Façade Interceptor strategy sequence diagram.
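A minimal sketch of the strategy follows. The AuditInterceptor here is a trivial stand-in for the pattern classes (a real implementation would consult the EventCatalog and write through the AuditLog), and the façade is reduced to a single generic entry point:

public class SecureServiceFacadeSketch {

    static class AuditInterceptor {
        void audit(String service, String phase, Object detail) {
            // A real interceptor consults the event catalog and logs
            // only cataloged events through the AuditLog
            System.out.println("AUDIT " + service + " " + phase + ": " + detail);
        }
    }

    private final AuditInterceptor auditor = new AuditInterceptor();

    // Entry point to the Business tier: audit before and after the target call
    public Object invoke(String service, Object request) {
        auditor.audit(service, "REQUEST", request);
        try {
            Object response = invokeTarget(service, request);
            auditor.audit(service, "RESPONSE", response);
            return response;
        } catch (RuntimeException e) {
            // Exceptions usually warrant an audit event as well
            auditor.audit(service, "EXCEPTION", e);
            throw e;
        }
    }

    private Object invokeTarget(String service, Object request) {
        // Forward to the Application Service or Business Object (omitted)
        return "OK";
    }
}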
Consequences
Auditing is one of the key requirements of mission-critical applications. Auditing provides a trail of recorded events that can be tied back to a Principal. The Audit Interceptor provides a mechanism to audit Business tier events so that operations staff and security auditors can go back, examine the audit trail, and look for all forms of application-layer attacks. The Audit Interceptor itself does not prevent an attack, but it does capture the events of the attack so that they can later be analyzed; such analysis can help prevent future attacks. The Audit Interceptor pattern has the following consequences for developers:

Centralized, declarative auditing of service requests. The Audit Interceptor centralizes the auditing code within the application. This promotes reuse and maintainability.

Pre- and post-process audit handling of service requests. The Audit Interceptor enables developers to record audit events before or after a method call. This is important when considering the business requirements. Auditing is often required prior to the service or method call as a form of recording an "attempt." In other cases, an audit event is required only after the outcome of the call has been decided. And finally, there are cases where an audit event is needed when the call raises an exception.

Auditing of services decoupled from the services themselves. The Audit Interceptor pattern decouples the business logic code from the auditing code. Business developers should not have to consider auditing requirements or implement code to support auditing. By using the Audit Interceptor, auditing can be achieved without impacting business developers.

Supports evolving requirements and increases maintainability. The Audit Interceptor supports evolving auditing requirements by decoupling the events that need to be audited from the implementation. An audit catalog can be created that defines audit events declaratively, thus allowing different event types for different circumstances to be added without changing code. This improves the overall maintainability of the code by reducing the number of changes to it.

Reduces performance. The cost of using an interceptor pattern is that performance is reduced every time the interceptor is invoked. Each time the Audit Interceptor determines that a request or response does not require an audit event, that check is pure overhead.
Sample Code
Example 10-1 is sample source code for the AuditRequestMessageBean class. This class comes into play after the AuditLog class places audit events onto the JMS queue: it pulls audit messages off the queue and writes them to a database using an AuditLogJdbcDAO class (not shown here). It is not reflected in the previous diagrams.
package com.csp.audit;

import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.ObjectMessage;

/**
 * @ejb.bean transaction-type="Container"
 *     acknowledge-mode="Auto-acknowledge"
 *     destination-type="javax.jms.Queue"
 *     subscription-durability="NonDurable"
 *     name="AuditRequestMessageBean"
 *     display-name="Audit Request Message Bean"
 *     jndi-name="com.csp.audit.AuditRequestMessageBean"
 *
 * @ejb:transaction type="NotSupported"
 *
 * @message-driven
 *     destination-jndi-name="Audit_Request_Queue"
 *     connection-factory-jndi-name="Audit_JMS_Factory"
 */
public class AuditRequestMessageBean extends MessageDrivenBeanAdapter {

    private MessageDrivenContext context;

    public void onMessage(Message msg) {
        try {
            ObjectMessage objMsg = (ObjectMessage) msg;
            String message = (String) objMsg.getObject();
            JdbcDAOBase dao = (JdbcDAOBase) JdbcDAOFactory.getJdbcDAO(
                "com.csp.audit.AuditLogJdbcDAO");
            // The DAO is responsible for actually writing the
            // audit message to the database using the JDBC API
            dao.executeUpdate(message);
        } catch (Exception ex) {
            System.out.println("Audit event write failed: " + ex);
        }
    }

    // Other EJB methods for the MessageDrivenBean interface
    public void ejbCreate() {
        System.out.println("ejbCreate called");
    }

    public void ejbRemove() {
        System.out.println("ejbRemove called");
    }

    public void setMessageDrivenContext(MessageDrivenContext context) {
        System.out.println("setMessageDrivenContext called");
        this.context = context;
    }
}
Example 10-2 lists the sample source code for the AuditClient class, which is responsible for placing audit event messages on a JMS queue for persisting later. This class is used by the AuditLog class.
= "Audit_JMS_Factory"; private static String AUDIT_QUEUE_NAME = "Audit_Request_Queue"; private static QueueSender queueSender = null; private static ObjectMessage objectMessage = null; // Initialize the JMS Client // 1. Lookup JMS connection factory // 2. Create a JMS connection // 3. Create a JMS session object // 4. Lookup a JMS Queue and Create a JMS sender synchronized static void init() throws Exception { Context ctx = new InitialContext(); QueueConnectionFactory cfactory = (QueueConnectionFactory) ctx.lookup( JMS_FACTORY_NAME); QueueConnection queueConnection = (QueueConnection) cfactory.createQueueConnection(); QueueSession queueSession = (QueueSession) queueConnection.createQueueSession( false, javax.jms.Session.AUTO_ACKNOWLEDGE); Queue queue = (Queue)ctx.lookup(AUDIT_QUEUE_NAME); queueSender = queueSession.createSender(queue); objectMessage = queueSession.createObjectMessage(); } // 5. Send the audit message to the Queue public static void audit(String auditMessage) throws Exception{ try { if(queueSender == null || objectMessage == null){ init(); objectMessage.setObject(auditMessage); queueSender.send(objectMessage); return; } objectMessage.setObject(auditMessage); queueSender.send(objectMessage); } catch(Exception ex) { System.out.println("Error sending audit event: " + ex, ex); throw ex; } } }
Business Tier
Auditing. The Audit Interceptor pattern provides a mechanism to capture audit events using an interceptor approach. It is independent of where the audit information gets stored or how it is retrieved, so it is necessary to understand the general issues relating to auditing. Typically, audit logs (whether flat files or databases) should be stored separately from the applications, preferably on another machine or even off-site. This prevents intruders from covering their tracks by doctoring or erasing the audit logs. Audit logs should be writable but not modifiable or erasable by the application, so that an attacker who compromises the application cannot rewrite the trail.
Distributed Security
JMS. The Audit Interceptor pattern is responsible for auditing potentially hundreds or even thousands of events per second in high-throughput systems. In these cases, a scalable solution must be designed to accommodate the high volume of messages. Such a solution would involve dumping the messages onto a persistent JMS queue for asynchronous persistence. In this case, the JMS queue itself must be secured. This can be done by using a JMS product that supports message-level encryption or using some of the other strategies for securing JMS described in Chapter 5. Since the queue must be persistent, you will also need to find a product that supports a secure backing store.
Reality Check
What is the performance cost? The Audit Interceptor adds additional method calls and checks to the request. Using a JMS queue to write the events asynchronously reduces the impact on the end user by allowing the request to complete before the data is actually persisted. The alternative would be to insert auditing code only where it is required; but given that requirements will change and that many areas require auditing, the benefits of decoupling and reduced maintenance outweigh the slight performance degradation.

Why not use Aspect-Oriented Programming (AOP) techniques instead? AOP reduces code complexity by consolidating code such as auditing, logging, and other functions that would otherwise be spread across a variety of methods. It does this by inserting the (aspect) code into the methods either during the build process or through post-compile bytecode insertion, which makes it very useful when you require method-level auditing. The Audit Interceptor allows you to do service-level auditing. It can be as fine-grained as your Service Façade or other client allows, though usually not as fine-grained as AOP. The drawback to AOP is that it requires a third-party product and may introduce slight performance penalties, depending on the implementation.

Is auditing essential? In most cases, the answer is yes. It is essential not just for record-keeping but for forensic analysis as well. You may not be able to detect, and most likely cannot diagnose, an attack if you do not maintain an audit log of events. The audit log can be used to detect brute-force password attacks, denial-of-service attacks, and many others.
Related Patterns
Intercepting Filter [CJP2]. The Audit Interceptor pattern is similar to the Intercepting Filter but is not as complex and is better suited for asynchronous writes.

Pipes and Filters [POSA1]. The Audit Interceptor pattern is closely related to the Pipes and Filters pattern.

Message Interceptor Gateway. It is often necessary to audit on the Web Services tier as well as the Business tier. In such cases, the Message Interceptor Gateway should employ the Audit Interceptor pattern.
Container Managed Security

Forces
You need to authenticate users and provide access control to business components.
You want a straightforward, declarative security model based on static mappings.
You want to prevent developers from bypassing security requirements and inadvertently exposing business functionality.
Solution
Use Container Managed Security to define application-level roles at development time and perform user-role mappings at deployment time or thereafter. In a J2EE application, both the ejb-jar.xml and web.xml deployment descriptors can define container-managed security. The J2EE security elements in the deployment descriptor declare only the logical roles as conceived by the developer; the application deployer maps these application-domain logical roles to the deployment environment. Container Managed Security at the Web tier uses delayed authentication, prompting the user for login only when a protected resource is accessed for the first time. On this tier, it can secure the whole application or specific parts of it, identified and differentiated by URL patterns. At the Enterprise JavaBeans (EJB) tier, Container Managed Security can offer method-level, fine-grained security or object-level, coarse-grained security.
Structure
Figure 10-5 depicts a generic class diagram for a Container Managed Security implementation. Note that the class diagram applies only to the container's implementation of Container Managed Security; a J2EE application developer would not build such a class structure, because the container already implements and offers it for the developer's use.
Client. A client sends a request to access a protected resource to perform a specific task.

Container. The container intercepts the request to acquire authentication credentials from the client and then authenticates the client using the realm configured in the J2EE container for the application.

Protected Resource. The security policy of the protected resource is declared via the deployment descriptor. Upon authentication, the container uses the deployment descriptor information to verify whether the client is authorized to access the protected resource using the method, such as GET or POST, specified in the client request. If authorized, the request is forwarded to the protected resource for fulfillment.

Enterprise Java Bean. The protected resource could in turn use a Business tier Enterprise Java Bean that declares its own security policy via the ejb-jar.xml deployment descriptor. The security context of the client is propagated to the EJB container when the EJB method is invoked. The EJB container intercepts the request to validate it against the security policy, much as the Web tier did. If authorized, the EJB method is executed, fulfilling the client request, and the results are returned to the client.
Strategies
Container Managed Security can be used in the Web and Business tiers of a J2EE application, depending on whether a Web container, an EJB container, or both are used. It can also be supplemented by bean-managed, programmatic security for fine-grained implementations. The various scenarios are described in this section.
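Container-managed checks can be supplemented with bean-managed, programmatic ones inside the bean itself. A minimal sketch follows; the role name is illustrative (borrowed from the web.xml sample later in this section), and the other SessionBean lifecycle methods are omitted:

import java.security.Principal;
import javax.ejb.SessionContext;

public class AccountServiceBean {

    private SessionContext ctx; // supplied by the container via setSessionContext

    public void setSessionContext(SessionContext ctx) {
        this.ctx = ctx;
    }

    public void closeAccount(String accountId) {
        // Fine-grained, programmatic check layered on top of
        // container-managed security
        if (!ctx.isCallerInRole("CLIENTADMIN")) {
            throw new SecurityException("Caller not permitted to close accounts");
        }
        Principal caller = ctx.getCallerPrincipal();
        // ... perform the close and audit it against the caller's identity
    }
}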
Declarative security for EJBs can be applied at the bean level or at a more granular method level. Home and remote interface methods can declare a <method-permission> element that includes one or more <role-name> elements that are allowed to access the EJB methods identified by the <method> elements. One can also declare <exclude-list> elements to disable access to specific methods. To specify an explicit identity that an EJB should use when it invokes methods on other EJBs, the developer can use the <use-caller-identity> or <run-as>/<role-name> elements under the <security-identity> element of the deployment descriptor.
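For example, a <security-identity> declaration might look like the following fragment (the bean and role names are illustrative):

<enterprise-beans>
  <session>
    <ejb-name>ReportScheduler</ejb-name>
    ...
    <security-identity>
      <!-- Invoke other EJBs under a fixed role rather than the caller's identity -->
      <run-as>
        <role-name>BATCHPROCESSOR</role-name>
      </run-as>
    </security-identity>
  </session>
</enterprise-beans>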
Consequences
Container Managed Security offers flexible policy management at no additional cost to the organization. While it allows the developer to incorporate security simply by defining roles in the deployment descriptor, without writing any implementation code, it also supports programmatic security for fine-grained access control. The pattern offers the following other benefits to the developer:

Straightforward, declarative security model based on static mappings. The Container Managed Security pattern provides an easy-to-use and easy-to-understand security model based on declarative user-to-role and role-to-resource mappings.

Developers are prevented from bypassing security requirements and inadvertently exposing business functionality. Developers often advertently or inadvertently bypass security mechanisms within the code. Using Container Managed Security prevents this and ensures that EJB methods are adequately protected and properly restricted at deployment time by the application deployer.

Less prone to security holes. Since security is implemented by a time-tested container, programming errors are less likely to lead to security holes. However, the security functionality offered by the container could be too limited and inflexible to modify.

Separation of security code from business objects. Since the container implements the security infrastructure, the application code is free of security logic. However, developers often start with Container Managed Security and then use programmatic security in conjunction with it, which leads to mangled code with a mixture of declarative and programmatic security that is difficult to manage.
Sample Code
Sample code for each strategy described earlier is illustrated in this section. The samples could be used in conjunction with each other to implement multiple flavors of Container Managed Security. Example 10-3 shows declarative security via a web.xml deployment descriptor.
<role-name>CORPORATEADMIN</role-name> <role-name>CLIENTADMIN</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee> NONE </transport-guarantee> </user-data-constraint> </security-constraint> <!-- Declare login configuration here --> <login-config> <auth-method>FORM</auth-method> <form-login-config> <form-login-page> /login.jsp </form-login-page> <form-error-page> /login.jsp </form-error-page> </form-login-config> </login-config> <security-role> <description>Corporate Administrators</description> <role-name>CORPORATEADMIN</role-name> </security-role> <security-role> <description>Client Administrators</description> <role-name>CLIENTADMIN</role-name> </security-role> ... </web-app>
<method-permission> <role-name>GUEST</role-name> <method> <ejb-name>PublicUtilities</ejb-name> <method-name>viewStatistics</method-name> </method> </method-permission> ... <exclude-list> <description>Unreleased Methods</description> <method> <ejb-name>PublicUtilities</ejb-name> <method-name>underConstruction</method-name> </method> </exclude-list> ... </assembly-descriptor> ...
Reality Check
Is Container Managed Security comprehensive at the Web tier? If the granularity of security enforcement required is not matched by the granularity of the resource URL identifiers that Container Managed Security uses to distinguish and differentiate resources, this pattern may not fulfill the requirements. This is particularly true in applications that use a single controller to front multiple resources. In such cases, the request URI is the same for all resources, and individual resources are identified only by some identifier in the query string (such as /myapp/controller?page=resource1). Container Managed Security by URL patterns is not applicable in such cases unless the container supports extensive use of regular expressions; resource-level security in these scenarios requires additional work in the application.

Is Container Managed Security required at the service tier? If all the back-end business services are inevitably fronted by a security gateway such as a Secure Service Proxy or Secure Service Façade, additional security enforcement via Container Managed Security on EJBs may not add much value and may incur unnecessary performance overhead. The choice must be made carefully in such cases.
Related Patterns
Authentication Enforcer, Authorization Enforcer. Authentication Enforcer enforces authentication on a request that was previously unauthenticated, much as a Container Managed Security implementation can enforce it on a Web tier resource of a J2EE application. Similarly, Authorization Enforcer behaves like the Business tier implementation of Container Managed Security.

Secure Service Proxy. If the security architecture was not planned in the initial phases of application development, introducing Container Managed Security at later stages may seem chaotic. In such cases, a Secure Service Proxy or Secure Service Façade can be used to offer a secure gateway, exposed to the client, that enforces security in lieu of such enforcement at the business service level.

Intercepting Web Agent. Rather than custom-building security via deployment descriptors and configuring the container as in Container Managed Security, one may delegate those tasks to a COTS product, with the application using an Intercepting Web Agent to preprocess the security context of requests before they are forwarded to, and fulfilled by, the security-unaware application services.
Dynamic Service Management

Forces
You want to instrument POJO business objects that the container does not monitor for you.
You have many business objects and want to adjust instrumentation at runtime as needed to provide security monitoring and real-time forensic data gathering.
You want to monitor and actively manage business objects to tightly control security and proactively prevent attacks in progress.
You want to use industry-standard Java Management Extensions (JMX) technology to ensure a vendor-neutral solution.
Solution
Use a Dynamic Service Management pattern to enable fine-grained instrumentation of business objects at runtime on an as-needed basis using JMX.
Structure
Figure 10-7 illustrates the class diagram for the Dynamic Service Management pattern.
Client. A Client requests registration of an object as an MBean from the ServiceManager.

ServiceManager. The ServiceManager creates an instance of the MBeanServer and obtains an instance of an MBeanFactory. The ServiceManager instantiates the Registry and then uses the MBeanFactory to create an MBean for a particular object passed in by the Client. It creates an ObjectName for that object and then registers it with the MBeanServer.

MBeanServer. The MBeanServer exposes registered MBeans via adaptor-specific protocols.

MBeanFactory. The MBeanFactory creates the Registry and uses it to find managed MBean definitions, which it loads from the DescriptorStore.

Registry. The Registry loads and maintains a registry of MBean descriptors. It creates a RegistryMonitor to monitor changes to the DescriptorStore and reloads the definitions when the RegistryMonitor notifies it that the DescriptorStore has changed.

RegistryMonitor. The RegistryMonitor is responsible for monitoring changes to the DescriptorStore. It registers listeners and notifies those listeners when it detects a change to the DescriptorStore.

DescriptorStore. The DescriptorStore is an abstract representation of a persistent store of MBean descriptor definitions.

Figure 10-8 shows the sequence for registering an object as an MBean using the Dynamic Service Management pattern. First, the management infrastructure is initialized:

1. ServiceManager creates an instance of an MBeanServer.
2. ServiceManager calls getInstance on MBeanFactory.
3. MBeanFactory, upon initial creation, creates an instance of Registry.
4. Upon creation, Registry calls its loadRegistry method, which loads MBean descriptors from the DescriptorStore.
5. Registry then adds itself as a listener to that DescriptorStore through a call to the addListener method.
6. On the call to addListener, the RegistryMonitor stores the listener and begins polling the DescriptorStore passed in as an argument to addListener.

Registering an object as an MBean then proceeds as follows:

1. Client invokes registerObject on ServiceManager.
2. ServiceManager invokes createMBean on MBeanFactory.
3. MBeanFactory calls findManagedBean on Registry.
4. ServiceManager then calls createObjectName on MBeanFactory.
5. Once the MBean and an ObjectName for it have been created, the ServiceManager invokes registerMBean on the MBeanServer, passing in the MBean instance and its corresponding ObjectName.
Strategies
The Dynamic Service Management pattern provides dynamic instrumentation of business objects using JMX. JMX is a commonly used technology, present in all major application server products. There are several strategies for implementing this pattern, depending on what product you choose and what type of persistent store you require for your MBean Descriptors.
Model MBean Strategy

This strategy involves using JMX Model MBeans loaded from an external configuration source. Model MBeans allow developers to define the attributes and operations they want to expose on their classes through metadata. This metadata can then be externalized from the class definition entirely. With a bit of work, the metadata can be reloaded at runtime to allow for just-in-time creation of MBeans as needed. The Jakarta Commons subproject of the Apache Software Foundation is focused on building open source, reusable Java components. One of the components of the Commons project is the Commons-Modeler. Commons-Modeler provides a framework for creating JMX Model MBeans that allows developers to circumvent creating the metadata programmatically (as described in the specification) and instead define that data in an XML descriptor file. This greatly reduces the amount of source code needed to create the Model MBeans. The Model MBean Strategy utilizes the Commons-Modeler framework approach to simplify the task of creating MBeans and leverages the file-based XML descriptor to implement dynamic reloading of MBeans based on changes to that descriptor file at runtime. This provides a mechanism that allows developers and operations staff to instrument components on an as-needed basis instead of incurring the run-time overhead of trying to instrument all of the components statically, most of which will never be used.
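A Commons-Modeler descriptor entry might look roughly like the following (the class, attribute, and operation names are illustrative):

<mbeans-descriptors>
  <mbean name="OrderService"
         className="org.apache.commons.modeler.BaseModelMBean"
         description="Instrumented order service"
         type="com.csp.service.OrderService">
    <attribute name="requestCount"
               description="Requests processed"
               type="int"
               writeable="false"/>
    <operation name="resetStatistics"
               description="Clear collected statistics"
               impact="ACTION"
               returnType="void"/>
  </mbean>
</mbeans-descriptors>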
Figure 10-9 depicts a class diagram of a Dynamic Service Management pattern implemented using the Model MBean strategy.
Figure 10-10 is a sequence diagram of the Dynamic Service Management pattern implemented using the Model MBean strategy. In this strategy, the Commons-Modeler framework supplies the Registry implementation and provides an XML DTD for the MBeans descriptor file. The Registry does all the work of creating the MBean from the data in the descriptor file, which is the bulk of the work overall. A simple file monitor can be implemented to detect changes to the XML file, after which the Registry can be told to reload from the changed file.
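Such a file monitor can be as simple as polling the descriptor's last-modified timestamp. A minimal sketch, assuming a single listener and a fixed polling interval (a production version would manage the thread and listener list more carefully):

import java.io.File;

public class FileMonitor implements Runnable {

    public interface FileChangeListener {
        void fileChanged(String fileName);
    }

    private final File file;
    private final FileChangeListener listener;
    private long lastModified;

    public FileMonitor(String fileName, FileChangeListener listener) {
        this.file = new File(fileName);
        this.listener = listener;
        this.lastModified = file.lastModified();
    }

    public void run() {
        while (true) {
            try {
                Thread.sleep(5000); // poll every five seconds
            } catch (InterruptedException e) {
                return;
            }
            long current = file.lastModified();
            if (current != lastModified) {
                lastModified = current;
                // Tell the listener (e.g., the Registry holder) to reload
                listener.fileChanged(file.getName());
            }
        }
    }
}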
Consequences
The Dynamic Service Management pattern helps to identify and mitigate several types of threats. By enabling operations staff to monitor business components, it lets them readily identify an attack in progress, whether a denial-of-service attack or somebody trying to guess passwords using a dictionary attack. It also enables staff to manage those components so that they can take reactive action during an attack, such as setting a filter on an incoming IP address or locking a user account. By employing the Dynamic Service Management pattern, developers can benefit from the following:

Instrumentation of POJO business objects. Using a Dynamic Service Management pattern provides a means to instrument POJOs so that their attributes and operations can be managed and monitored based on definitions in a descriptor file. This allows operations staff to probe down into the business objects themselves to troubleshoot or collect data.

Adjust instrumentation at runtime as needed. Today, systems are built with static management and monitoring capabilities. These capabilities incur a run-time cost in terms of performance, memory, and complexity. They also do not provide the ability to manage or monitor additional components or attributes at runtime as needs arise; they require a large amount of upfront analysis and speculation to determine what to manage and monitor. The Dynamic Service Management pattern allows you to instrument thousands of business components on an as-needed basis.

Use industry-standard Java Management Extensions (JMX) technology. Dynamic Service Management can be used in conjunction with JMX to ensure a completely vendor-independent management and monitoring solution. Using the pattern eliminates the need for upfront analysis and needless run-time overhead from monitoring or exposing components and attributes unnecessarily. Instead, components and attributes can be dynamically instrumented at runtime on an as-needed basis. When the need no longer exists, the instrumentation can be turned off, freeing up cycles and memory for business processing.
Sample Code
Example 10-6 is a sample source listing of a Service Manager class.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;
import javax.management.modelmbean.ModelMBean;
import com.sun.jdmk.comm.HtmlAdaptorServer;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class MBeanManager implements ManagedObject {

    private static final Log log = LogFactory.getLog(MBeanManager.class);
    private static MBeanManager instance = null;

    private HtmlAdaptorServer htmlServer = null;
    private MBeanFactory factory = null;
    private HashMap objNames = new HashMap();
    // JMX domain under which MBeans are registered
    private String mbeanDomain = "CSPM";

    // Returns the singleton instance of MBeanManager
    public static MBeanManager getInstance() {
        if (instance == null) {
            instance = new MBeanManager();
            try {
                instance.registerObject(instance, "CSPM");
            } catch (Exception e) {
                log.error("Unable to register mbean.", e);
            }
        }
        return instance;
    }

    // Create and initialize the MBeanManager
    private MBeanManager() {
        init();
    }

    // Initializes the adaptors and servers
    private void init() {
        try {
            htmlServer = new HtmlAdaptorServer();
            int port = 4545;
            htmlServer.setPort(port);
            String htmlServiceName = "Adaptor:name=html,port=" + port;
            // Create the object name and register the HTML adaptor
            ObjectName htmlObjectName = new ObjectName(htmlServiceName);
            getMBeanServer().registerMBean(htmlServer, htmlObjectName);
            objNames.put(htmlServiceName, htmlObjectName);
            htmlServer.start();
            // Load the MBean factory
            factory = MBeanFactory.getInstance();
        } catch (Exception e) {
            log.error("Unable to initialize MBeanManager.", e);
        }
    }

    // Register a service object as an MBean
    public void registerObject(Object service, String serviceName)
            throws Exception {
        ModelMBean mbean = factory.createMBean(service, serviceName);
        if (mbean == null) {
            return;
        }
        // Create the ObjectName
        ObjectName objName =
            factory.createObjectName(mbeanDomain, service, serviceName);
        if (objName == null) {
            log.error("Could not create object name.");
            return;
        }
        // Register the MBean with the server
        getMBeanServer().registerMBean(mbean, objName);
        // Add the ObjectName to the list of names
        objNames.put(serviceName, objName);
    }

    // Unregister an object as an MBean
    public void unregisterObject(String serviceName) {
        try {
            if (serviceName != null && objNames != null) {
                // Remove the ObjectName from the list
                ObjectName oName = (ObjectName) objNames.remove(serviceName);
                if (oName != null) {
                    // Unregister the bean from the server
                    getMBeanServer().unregisterMBean(oName);
                }
            }
        } catch (Exception e) {
            log.error("Unable to unregister service.", e);
        }
    }

    // Reload the MBean descriptors from the registry
    public void reloadMBeans() throws Exception {
        // Unload previously registered MBeans
        unloadMBeans();
        // Tell the factory to reload new MBeans
        factory.loadRegistry();
    }

    // Unload the MBeans
    public void unloadMBeans() throws Exception {
        // Copy the key set to avoid concurrent modification while
        // unregisterObject removes entries from the map
        Iterator svcNames = new ArrayList(objNames.keySet()).iterator();
        while (svcNames.hasNext()) {
            unregisterObject((String) svcNames.next());
        }
    }

    // Server-specific MBeanServer lookup, shown here via the
    // standard MBeanServerFactory
    private MBeanServer getMBeanServer() {
        java.util.List servers = MBeanServerFactory.findMBeanServer(null);
        return servers.isEmpty()
            ? MBeanServerFactory.createMBeanServer()
            : (MBeanServer) servers.get(0);
    }
}
"mbeans-descriptors.xml"; private FileMonitor fileMonitor = null; private static final Object lock = new Object(); // Private constructor private MBeanFactory() { init(); } // Initialization method for loading the MBean descriptor // registry and adding a file listener to detect changes. private void init() { loadRegistry(); try { fileMonitor.getInstance().addFileChangeListener( this, registryFileName); } catch (FileNotFoundException fnfe) { log.error("Unable to add listener."); } } // Load the MBean descriptor registry. public void loadRegistry() { InputStream inputStream = null; try { inputStream = ClassLoader.getSystemClassLoader(). getResourceAsStream(registryFileName); // Get the registry registry = Registry.getRegistry(null, instance); // Load the descriptors from the input stream. registry.loadDescriptors(inputStream); } catch (Exception e) { log.error("Unable to load file.", e); } } // Returns an MBeanFactory instance public static MBeanFactory getInstance() throws Exception { if (instance == null) { instance = new MBeanFactory(); } return instance; } // Create a ModelMBean given a service and name public ModelMBean createMBean(Object service, String serviceName) throws Exception { ModelMBean mbean = null; // Create an MBean from the Registry ManagedBean managed = registry.findManagedBean(serviceName); if (managed != null) { mbean = managed.createMBean(service); } return mbean; } // Create an ObjectName for a service. public ObjectName createObjectName(String domain, Object service, String serviceName) throws Exception { ObjectName oName = null; if(service instanceof ManagedObject) { ManagedObject svcImpl = (ManagedObject)service; // Set the JMX name to the input service name.
svcImpl.setJMXName(serviceName); // Create the ObjectName oName = new ObjectName(domain + "Name:" + svcImpl.getJMXName() + ",Type=" + svcImpl.getJMXType()); } else { oName = new ObjectName(domain + ":service=" + serviceName + ",className=" + service.getClass().getName()); } return oName; } public String getRegistryFileName() { return this.registryFileName; } public void setRegistryFileName(String fileName) { this.registryFileName = fileName; } public void fileChanged(String fileName) { try { loadRegistry(); } catch(Exception e) { log.error("Failed to reload registry."); } } }
Reality Check
What types of things need to be managed and monitored? What should be managed and monitored is subjective and depends on the circumstances. The Dynamic Service Management pattern provides a means to transparently attach management and monitoring capabilities to business objects without prior consideration or elaboration of those objects. But to be effective, developers must at least understand what the approach provides and design their business objects so that the JMX framework can take advantage of them. If they do not expose member variables and choose to pass only complex Objects as parameters to their method calls, they will be unable to make use of the framework in many cases.
Related Patterns
Secure Pipe. The Dynamic Service Management pattern makes use of the Secure Pipe pattern to provide confidentiality when communicating with the application via the management protocol.
Obfuscated Transfer Object

Forces
You want to protect sensitive data passed in Transfer Objects from being captured in console messages, log files, or audit logs.
You want the Transfer Object to be responsible for protecting the data, in order to reduce code and prevent business components from inadvertently exposing sensitive data.
You want to specify which data elements are protected, since not all data should be protected and some may need to be exposed.
Solution
Use an Obfuscated Transfer Object to protect access to data passed within and between tiers. The Obfuscated Transfer Object allows developers to define which data elements within it are to be protected. The means of protection can vary between applications or implementations, depending on the business requirements. The Obfuscated Transfer Object provides a way to prevent purposeful or inadvertent unauthorized access to its data. The producers and consumers of the data can agree upon the sensitive data elements that need to be protected and on their means of access. The Obfuscated Transfer Object then takes responsibility for protecting that data from any intervening components that it is passed to on its way between producer and consumer. Credit card numbers and other sensitive information can thus be protected from being accidentally dumped to a log file or audit trail, or worse, captured and stored for malicious purposes.
Structure
Figure 10-11 is the class diagram for the Obfuscated Transfer Object.
Client. The Client wants to send and receive data from a Target component via an intermediary Component. The Client can be any component in any tier.

Component. The Component is any application component in the message flow that is not the intended target of the Client. It can be any component in any tier that acts as an intermediary in the message flow between the Client and the Target.

Target. The Target is any object that is the intended recipient of the Client's request. It is responsible for setting the data that needs to be obfuscated.

Obfuscated Transfer Object. The ObfuscatedTransferObject is responsible for protecting access to the data within it, as necessary. Typically, the intermediary Component is not trusted or should not have access to some or all of the data in the Obfuscated Transfer Object. It then becomes the Obfuscated Transfer Object's responsibility to protect the data. The means of protection depends upon the business requirements and the level of trust of the intermediary components.

Figure 10-12 takes us through a typical sequence of events for an application employing an ObfuscatedTransferObject:

1. Client creates an ObfuscatedTransferObject, setting the necessary request data.
2. Client serializes the ObfuscatedTransferObject and applies the required obfuscation mechanisms.
3. Client sends the serialized ObfuscatedTransferObject to an intermediary Component.
4. The Component invokes toString on the ObfuscatedTransferObject and writes the output to a file. In this case, none of the protected data is listed in that output.
5. The Component sends the ObfuscatedTransferObject to the Target.
6. The Target retrieves the protected (obfuscated) data elements.
7. The Target sets new data that requires protection.
8. The Target logs the ObfuscatedTransferObject; again, the protected data is not output.
9. The Target returns the ObfuscatedTransferObject to the Client.
10. The Client retrieves the newly set data.
Strategies
A variety of strategies can implement the Obfuscated Transfer Object. A simple strategy is just to mask various data elements to prevent them from inadvertently being logged or displayed in an audit event. A more elaborate strategy is to encrypt the protected data within the Obfuscated Transfer Object. This entails a more complex implementation, but offers a higher degree of protection.
Masked List Strategy

1. The Client creates an ObfuscatedTransferObject, sets the data, and sends it to the Target component via an intermediary Component.
2. The Component is any application component in the message flow that is not the intended target of the Client. The Component may log the ObfuscatedTransferObject.
3. The Target is any object that is the intended recipient of the Client's request. It retrieves the data that needs to be obfuscated.
4. The ObfuscatedTransferObject does not output data in the masked list. That data is retrieved only when specifically asked for.

In this strategy, the client sets data as name-value (NV) pairs in the Obfuscated Transfer Object. Internally, the Obfuscated Transfer Object maintains two maps, one holding NV pairs that should be obfuscated and another holding NV pairs that do not require obfuscation. In addition to the two maps, the Obfuscated Transfer Object contains a list of NV pair names that should be protected. Data passed in with names corresponding to names in the masked list is placed in the map for the obfuscated data, and that map is then protected. In the sequence above, when the Component logs the ObfuscatedTransferObject, the data in the obfuscated map is not logged, and thus it is protected.
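A minimal sketch of a Masked List implementation follows; the field names and masked-entry names are illustrative:

import java.io.Serializable;
import java.util.*;

public class ObfuscatedTransferObject implements Serializable {

    // Names of values that must never appear in logs or console output
    private final Set maskedNames = new HashSet(
            Arrays.asList(new String[] { "cardNumber", "password" }));

    private final Map openData = new HashMap();    // safe to print
    private final Map maskedData = new HashMap();  // protected values

    public void put(String name, Object value) {
        if (maskedNames.contains(name)) {
            maskedData.put(name, value);
        } else {
            openData.put(name, value);
        }
    }

    // Protected values come back only when asked for by name
    public Object get(String name) {
        return maskedData.containsKey(name)
                ? maskedData.get(name) : openData.get(name);
    }

    // toString exposes only the unmasked map, so intermediary components
    // that log the object never see the sensitive values
    public String toString() {
        return "ObfuscatedTransferObject" + openData;
    }
}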
Encryption Strategy
Using the Encryption Strategy for the Obfuscated Transfer Object provides the highest level of protection for the data elements within. The data elements are stored in a Data Map, and the Data Map as a whole is encrypted using a symmetric key. To retrieve the Data Map and the elements within it, the consumer must supply a symmetric key identical to the one the producer used to seal the Data Map. The Java 2 Standard Edition (J2SE) runtime provides a SealedObject class that allows developers to encrypt objects easily by passing a serializable object and a Cipher object to the constructor. The object can then be retrieved by passing in either an identical Cipher or a Key object that can be used to recreate the Cipher. This encapsulates all of the underlying work associated with encrypting and decrypting objects. The only issue remaining is the management of symmetric keys within the application. This poses a significant challenge, because it requires the producers and consumers to share symmetric keys without giving any intermediary components access to those keys. This may be simple or overwhelmingly complex, depending on the architecture of the application and the structure of the component trust model. Use this strategy with caution, because the key-management issues may be harder to overcome than re-architecting the application to eliminate the need for the pattern. Figure 10-15 is a sequence diagram illustrating the Encryption Strategy for an Obfuscated Transfer Object.
1. The Client creates an ObfuscatedTransferObject, sets the data, and sends it to the Target component via an intermediary Component.
2. The Component is any application component in the message flow that is not the intended target of the Client. The Component may log the ObfuscatedTransferObject.
3. The ObfuscatedTransferObject creates a SealedObject by encrypting the given serializable object using an encryption key. That data is retrieved only when the proper decryption key is supplied.
4. The Target is any object that is the intended recipient of the Client's request. It retrieves the obfuscated data by supplying the key used to decrypt it.

As the sequence diagram in Figure 10-15 illustrates, the client creates the Obfuscated Transfer Object and adds the data as name-value pairs. The client then seals the data by passing in an encryption key. The intermediate components in the request flow are unable to access the data. The target object, upon receiving the Obfuscated Transfer Object, first unseals it by passing in the corresponding decryption key. It can then access the data as before, through the name-value pair keys.
Consequences
The Obfuscated Transfer Object protects against sniffing attacks and threats arising from log-file capture within the Business tier by ensuring that sensitive data is not passed or logged in the clear. Employing the Obfuscated Transfer Object pattern has the following consequences:

Confidentiality: generic protection of sensitive data passed in Transfer Objects. The Obfuscated Transfer Object provides a means of generically protecting data passed between components and tiers from being improperly accessed, whether inadvertently, such as for logging or auditing, or purposefully, in the case of untrusted intermediary components.

Centralized encryption or obfuscation code. The Obfuscated Transfer Object provides a central point for encrypting or obfuscating data that is passed in a Transfer Object. Moving the responsibility for protecting the data into the Transfer Object means that such code is not required across all the components that use it.

Increased performance overhead. The code necessary to obfuscate or encrypt has associated memory and processing overhead. For large amounts of data, this overhead may significantly reduce overall performance.

Specify which data elements are protected and which are not. With the Obfuscated Transfer Object, you can specify which data elements to protect and therefore pay the performance cost only where security requires it, which is better than alternative bulk encryption or obfuscation techniques.
Sample Code
Example 10-8 shows a sample listing of an Obfuscated Transfer Object implemented using an Encryption Strategy.
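The core seal/unseal steps of this strategy can be sketched with javax.crypto.SealedObject as follows; the surrounding class is illustrative, and key distribution between producer and consumer is not shown:

import java.io.Serializable;
import java.security.Key;
import java.util.HashMap;
import javax.crypto.Cipher;
import javax.crypto.SealedObject;

public class SealedTransferObject implements Serializable {

    private SealedObject sealedDataMap;

    // Producer: encrypt the whole data map under a symmetric key
    public void seal(HashMap dataMap, Key key) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        sealedDataMap = new SealedObject(dataMap, cipher);
    }

    // Consumer: recover the map by supplying the matching key
    public HashMap unseal(Key key) throws Exception {
        return (HashMap) sealedDataMap.getObject(key);
    }
}

A producer and consumer that share a SecretKey, for example one generated with KeyGenerator.getInstance("AES"), can then exchange the object through untrusted intermediaries.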
Reality Check
Should we use a Masked List Strategy or an Encryption Strategy? It depends on your requirements and on whether you trust your intermediary components not to access the data in the masked list. With a Masked List Strategy, a component could still access the data and dump it to a log, circumventing the intention of the masked list. With an Encryption Strategy, the intermediary components cannot gain access to the sensitive data unless they obtain the Cipher used to protect it. There is significant processing overhead to encrypting and decrypting the data, so use that strategy only when necessary and only for the data elements that require it.
Related Patterns
Transfer Object [CJP2]. The Obfuscated Transfer Object is similar to, and may be considered a strategy of, the Core J2EE Patterns Transfer Object pattern. It provides the additional capability of protecting the data elements within it from unauthorized access.

Data Transfer HashMap (Middleware). The Obfuscated Transfer Object is similar to the Data Transfer HashMap pattern from the Middleware Company. Like the Data Transfer HashMap, it employs a strategy that makes use of an underlying HashMap for storing and retrieving data elements. In the case of the Obfuscated Transfer Object, that underlying map may be encrypted using a SealedObject, or it may be divided into two maps, one containing data that can be dumped to a log or audit table and another containing sensitive data that should not be accessed.
Policy Delegate
Problem
You want to shield clients from the discovery and invocation details of security services and to control client interactions by intercepting and administering policy on client requests. You need an abstraction between the enterprise security infrastructure and its clients that hides the intricacies of finding and invoking security services. It is desirable to abstract the common framework-specific code related to invoking those services, thus reducing the coupling between clients and the security framework. As a result of the loose coupling, clients and services can then be easily replaced with alternate technologies when appropriate, increasing the lifespan of the application.
Forces
You want to reduce the coupling between the security framework and the clients of the security services it offers, and to reduce the number of complex security interfaces exposed to the client in order to limit touchpoints that can give way to security holes.
You need to manage the life cycle of a client's security context at the server and want to use it across multiple invocations by the same client.
You need a way to centralize Business tier security functions so that security can be enforced across business components without impacting business developers.
Solution
Use a Policy Delegate to mediate requests between clients and security services and to reduce the dependency of client code on implementation specifics of the service framework. The Policy Delegate is a coordinator of Business tier security services, akin to the Secure Base Action in the Web tier. Clients use the delegate to locate and mediate back-end security services. A delegate could in turn use a Secure Service Façade that offers a coarse-grained aggregate interface to fine-grained security services or business components and entities. This abstraction also offers looser coupling and a cleaner contract between clients and the secure services, reducing the magnitude of change required in the clients when the implementations of the security services change over time. To use a delegate, the client need not be aware of the actual location of the service. A Policy Delegate uses a Service Locator to locate distributed security services. The client is unaware of the underlying implementation technology and communication protocol of the service, which could be RMI, Web services, DCOM, CORBA, or another technology.
While coordinating and mediating requests and responses between clients and the security framework, a delegate could also perform pertinent message translation to accommodate disparate message formats and protocols both expected and supported by the clients and individual services. In the same vein, the delegate could choose to perform error translation to encapsulate service-level security exceptions as user-friendly, application-level error messages. The Policy Delegate can be a stateless delegate or a stateful delegate. A stateful delegate, identified and looked up by an appropriate ID, can cache the security context, service references, and transient state between multiple invocations by the client. This caching at the server side optimizes and reduces the number of object creations, service lookups, and security computations. The security context could be cached as a Secure Session Object. The clients can retrieve a security delegate using a Factory pattern [GoF]. This is particularly useful when the application exposes multiple Policy Delegates rather than one aggregate delegate that mediates between multiple services.
Structure
Figure 10-16 shows a typical Policy Delegate class diagram. The Target in the diagram represents any security service, a Secure Service Faade, or a security-unaware Session Faade. The delegate uses a SecureSessionObject to maintain the transient state associated with a client session.
The single PolicyDelegate could maintain a one-to-many relationship with multiple targets, or multiple Policy Delegates could each map exactly to one of the several possible targets. In the latter case, it could make use of a Factory that returns an appropriate delegate, depending on the requested service.
Client. A Client retrieves a PolicyDelegate through the DelegateFactory to invoke a specific service.
PolicyDelegate. The PolicyDelegate uses ServiceLocator [CJP2] to locate the service.
SecureSessionObject. The PolicyDelegate maintains a SecureSessionObject to store transient client security context and service references between consecutive invocations by the same client.
SecureServiceFacade, Service2. The back-end service could be implemented using any technology, such as a SecureServiceFacade session bean or as a Web service, depicted as Service2.
Strategies
The Policy Delegate pattern could be implemented in a variety of flavors depending on the magnitude of services it mediates and the approach to state management, as discussed here.
One-to-many/one-to-one Policy Delegate. In a one-to-one Policy Delegate, a delegate takes the responsibility of controlling one specific service, resulting in as many delegates as there are back-end services. This is a more granular approach than a one-to-many Policy Delegate, where a delegate controls multiple services, offering a unified aggregate interface to a client. Remote references could be lazily loaded in such a delegate to avoid unnecessary service lookups and object creations.
Stateless/stateful Policy Delegate. A stateful Policy Delegate maintains the state on the server side in a SecureSessionObject on behalf of the client. This is useful when clients are thin or are unaware of how security context must be preserved between invocations.
Consequences
The Policy Delegate pattern reduces the coupling between the security framework and the client of security services offered by the framework and thereby reduces the number of complex security interfaces exposed to the client. This has the overall effect of reducing complexity and therefore reducing potential software bugs that could lead to a variety of attacks. It also allows you to cache and manage the life cycle of a client's security context at the server and use it across multiple invocations by the same client, which enhances performance. The Policy Delegate pattern benefits developers in the following ways:
Hides service complexity from the client and exposes a cleaner interface. The client only needs to be aware of the input and output messages of the delegate and not any implementation specifics or invocation details of the complex security services.
Optimizes performance. By appropriate caching of the security context, the delegate could reduce repetitive computations associated with each individual request, thereby increasing the responsiveness and scalability of the system.
Performs message translation and error translation. Security exceptions that are hard to decipher for a non-security-aware client can be translated by the delegate to user-friendly exceptions before passing them to the client.
Performs service discovery, failover, and recovery. Discovery and invocation details are abstracted to a central place, avoiding code duplication among clients. The delegate, being aware of the location of each security service, can also perform application-level failover and recovery from catastrophic errors.
Sample Code
Example 10-9 lists the interface of the Policy Delegate that serves as the contract between security framework and clients.
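The listing for Example 10-9 is not reproduced here. The following is a minimal sketch of what that contract could look like, inferred from the implementation in Example 10-10; the interface name and method signatures are assumptions based on that code.

public interface PolicyDelegate {

    // Authenticate the request against the security framework
    boolean authenticate(GenericTO request)
        throws AuthenticationFailureException;

    // Authorize a request, authenticating it first if necessary
    boolean authorize(GenericTO request)
        throws AuthorizationFailureException;

    // Alternative 2: a generic entry point that validates the request
    // and dispatches it to the named back-end service
    GenericTO execute(String serviceName, GenericTO input)
        throws ApplicationException;

    // Release server-side resources associated with the client session
    void destroy();
}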
Example 10-10 lists the implementation code of the Policy Delegate. The implementation code is not relevant to the client, which only relies on the Delegate Interface and a reference to the delegate.
// The excerpt begins inside the delegate's initialization logic; the
// class declaration and field declarations below are reconstructed from
// the lookups that follow, and the class name is assumed.
public class PolicyDelegateImpl implements PolicyDelegate {

    // References to the back-end security
    // services/session facades/session beans...
    private AuthenticationEnforcer authenticationEnforcer;
    private AuthorizationEnforcer authorizationEnforcer;
    private SecureSessionManager secureSessionManager;
    private SecureLogger secureLogger;
    private SecureServiceFacade secureServiceFacade;

    // Reference to the client's security context (assumed; set elsewhere)
    private Object rc;

    // Look up the back-end security services at initialization time
    public void init() throws ApplicationException {
        try {
            authenticationEnforcer = (AuthenticationEnforcer)
                ServiceLocator.lookup(AuthenticationEnforcer.SERVICE_NAME);
            authorizationEnforcer = (AuthorizationEnforcer)
                ServiceLocator.lookup(AuthorizationEnforcer.SERVICE_NAME);
            secureSessionManager = (SecureSessionManager)
                ServiceLocator.lookup(SecureSessionManager.SERVICE_NAME);
            secureLogger = (SecureLogger)
                ServiceLocator.lookup(SecureLogger.SERVICE_NAME);
            //...
            secureServiceFacade = (SecureServiceFacade)
                ServiceLocator.lookup(SecureServiceFacade.SERVICE_NAME);
        } catch (Exception e) {
            throw new ApplicationException(e);
        }
    }

    public void destroy() {
        secureSessionManager.invalidate(rc);
    }

    // Implement delegate methods.
    // Alternative 1: Declare service-specific methods.

    public boolean authenticate(GenericTO request)
            throws AuthenticationFailureException {
        try {
            // Return the results of authentication
            return authenticationEnforcer.authenticate(request);
        } catch (SecurityFrameworkException e) {
            throw new AuthenticationFailureException(e);
        }
    }

    // Authorize the request.
    public boolean authorize(GenericTO request)
            throws AuthorizationFailureException {
        try {
            // Check that the request is authenticated
            if (!request.authenticated()) {
                if (!authenticationEnforcer.authenticate(request))
                    throw new AuthorizationFailureException(
                        new AuthenticationFailureException());
            }
            // Return the result of authorization
            return authorizationEnforcer.authorize(request);
        } catch (SecurityFrameworkException e) {
            throw new AuthorizationFailureException(e);
        }
    }

    // Alternative 2: Implement a generic method with a generic
    // transfer object as input and output
    public GenericTO execute(String serviceName, GenericTO input)
            throws ApplicationException {
        // Validate the request as per the security policy
        if (!input.authenticated()) {
            if (!authenticationEnforcer.authenticate(input))
                throw new AuthenticationFailureException();
        }
        if (!input.authorized()) {
            if (!authorizationEnforcer.authorize(input))
                throw new AuthorizationFailureException();
        }
        // Process the request
        GenericService service =
            (GenericService) ServiceLocator.lookup(serviceName);
        return service.execute(input);
    }
}
Example 10-11 lists sample client code that uses a Policy Delegate.
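The listing for Example 10-11 is likewise not reproduced here. A minimal sketch of such a client, using the DelegateFactory and the generic execute() method described above (the class name and service name are illustrative assumptions), might look like the following.

public class PolicyDelegateClient {

    public GenericTO invokeSecureService(GenericTO request)
            throws ApplicationException {
        // Retrieve the delegate; the factory hides which delegate
        // implementation mediates the requested service
        PolicyDelegate delegate = DelegateFactory.getDelegate("OrderService");

        // The delegate authenticates and authorizes the request before
        // dispatching it to the back-end service (Alternative 2 style)
        return delegate.execute("OrderService", request);
    }
}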
Reality Check
Is Policy Delegate redundant? If the Web tier is already integrated with back-end security services in an implementation-specific manner without using Policy Delegate but using a Secure Service Façade, adding a Policy Delegate at that stage may not offer any benefit and will only cause rework. A thoughtful, careful design could avoid such scenarios.
Is the Policy Delegate interface too complex? If Policy Delegate usage becomes too complicated and requires too much knowledge of the underlying security framework by clients, it defeats the purpose of abstracting the complex logic in a simple helper as described in this pattern.
Related Patterns
Secure Base Action. Secure Base Action on the Web tier has a similar objective as the Policy Delegate on the Business tier. A Secure Base Action could in turn use a Policy Delegate to access security services.
Business Delegate [CJP2]. A Policy Delegate is similar to the Business Delegate pattern, but leverages other patterns discussed in this book related to security. Policy Delegate additionally makes use of a SecureSessionObject to protect the confidentiality and integrity of a client session.
Secure Service Façade

Forces

You want to off-load security implementations from individual service components and perform them in a centralized fashion so that security developers can focus on security implementation and business developers can focus on business components.
You want to impose and administer security rules on client requests that the service implementers are unaware of in order to ensure that authentication, authorization, validation, and auditing are properly performed on all services.
You want a framework to manage the life cycle of the security context between interactive service invocations by clients and to propagate the security context to the appropriate servers where the services are implemented.
You want to reduce the coupling between fine-grained services but expose a unified aggregation of such services to the client through a simple interface that hides the complexities of interaction between individual services while enforcing all of the overall security requirements of each service.
You want to minimize the message exchange between the client and the services, storing the intermittent state and context on the server on behalf of the client instead.
Solution
Use a Secure Service Façade to mediate and centralize complex interactions between business components under a secure session, to integrate fine-grained, security-unaware service implementations, and to offer a unified, security-enabled interface to clients. The Secure Service Façade acts as a gateway where client requests are securely validated and routed to the appropriate service implementations, often maintaining and mediating the security and workflow context between interactive client requests and between the fine-grained services that fulfill portions of the client requests.
Structure
Figure 10-18 illustrates a Secure Service Façade class diagram. The Façade is the endpoint exposed to the client and could be implemented as a stateful session bean or a servlet endpoint. It uses the security framework (implemented using other patterns) to perform security-related tasks applicable to the client request. The framework may request the client to present further credentials if the requested service mandates doing so and if those credentials were not found in the initial client request. The Façade then uses the Dynamic Service Management pattern to locate the appropriate service-provider implementations. The request is then forwarded to the individual services either sequentially, in parallel, or in any complex relationship order as specified in the request description.
If the client request represents an aggregation of fine-grained services, the return messages from previous sequential service invocations can be aggregated and delivered to the subsequent service to achieve a sequential workflow-like implementation. If those fine-grained services are independent of each other, then they can be invoked in parallel and the results can be aggregated before delivering to the client, thus achieving parallel processing of the client request.
Client. A client sends a request to perform a specific task, with the appropriate service descriptors, to the Secure Service Façade, optionally incorporating the decision-tree predicates that determine the sequence of services to be invoked. The Secure Service Façade deciphers the client request, verifies authentication, fulfills the request, and returns the results to the client. In doing so, it may use the following components:
- Security Framework. The façade uses the existing enterprise-wide security framework implemented using other security patterns discussed in this book. Such a framework can be leveraged for authentication, authorization and access control, security assertions, trust management, and so forth. If the request is missing any credentials, the client request could be terminated or the client could be asked to furnish further credentials.
- Dynamic Service Framework/Service Locator. The façade uses the Dynamic Service Framework or Service Locator to locate the services that are involved in fulfilling the request. The services could reside on the same host or be distributed throughout an enterprise. In either case, the façade ensures that the security context established using the security framework is correctly propagated to any service that expects such security attributes. The façade then establishes the execution logic and invokes each service in the correct order.
The fine-grained business services are not directly exposed to the client. The services themselves maintain loose coupling between each other and the façade. The façade takes the responsibility of unifying the individual services in the context of the client request. The service façade contains no business logic itself and therefore requires no protection.
Strategies
The Secure Service Façade manages the complex relationships between disparate participating business services, plugs security into request fulfillment, and provides a high-level, coarse-grained abstraction to the client. The nature of such tasks opens up multiple choices for implementation flavors, two of which are briefly discussed now.
Façade with static relationships between individual service components. The relationship between participating fine-grained services is permanently static in nature. In such cases, the façade can be represented by an interface that corresponds to the aggregate of the services and can be implemented by a session bean that implements the interface. The session bean life cycle method Create can preprocess the request for security validations.
Façade with dynamic, transient relationships between individual service components. When the sequence of service calls to be invoked by the façade is dependent upon the prior invocation history in the execution sequence, the decision predicates can be specified in the request semantics and used in the façade implementation to determine the next service to be invoked. Such an implementation can be highly dynamic in nature, and the decision predicates can incorporate security class and compartment information to enable multilevel security in the façade implementation. A different flavor can use a simple interface in the façade, such as a command pattern implementation, and can mandate that the service descriptors be specified in the request message. This allows new services to be plugged and played without requiring changes to the façade interface and is widely used in Web services.
Consequences
The Secure Service Façade pattern protects the Business-tier services and business objects from attacks that circumvent the Web tier or Web Services tier. The Web tier and the Web Services tier are responsible for upfront authentication and access control. An attacker who has penetrated the network perimeter could circumvent these tiers and access the Business tier directly. The Secure Service Façade is responsible for protecting the Business tier by enforcing the security mechanisms established by the Web and Web Services tiers. By employing the Secure Service Façade pattern, developers and clients can benefit in the following ways:
Exposes a simplified, unified interface to a client. The Secure Service Façade shields the client from the complex interactions between the participating services by providing a single unified interface for service invocation. This brings the advantages of loose coupling between clients and fine-grained business services, centralized mediation, and easier management, and it reduces the risks of change management.
Off-loads security validations from lightweight services. Participating business services in a façade may be too lightweight to define security policies and incorporate security processing. Secure Service Façade off-loads such responsibility from business services and offers centralized policy management and administration of centralized security processing tasks, thereby reducing code duplication and processing redundancies.
Centralizes policy administration. The centralized nature of the Secure Service Façade eases security policy administration by isolating it to a single location. Such centralization also makes it feasible to retrofit infrastructure security to otherwise security-unaware or existing services.
Centralizes transaction management and incorporates security attributes. As with a generic session façade, a Secure Service Façade allows applying distributed transaction management over individual transactions of the participating services. Since security attributes are accessible at the same place, transaction management can incorporate such security attributes, offering multilevel, security-driven transaction management.
Facilitates dynamic, rule-based service integration and invocation. As explained in the preceding "Strategies" section, multiple flavors of façade implementations offer a very dynamic and flexible integration of business services. Integration rules can incorporate security and message attributes in order to dynamically determine the execution sequence. An external Business Rules Engine can also be plugged into such a dynamic façade.
Minimizes message exchange between client and services. Secure Service Façade minimizes message exchange by caching the intermittent state and context on the server rather than on the client.
Sample Code
The sample code that follows illustrates a Stateful Session Bean approach to a Secure Service Façade implementation. Example 10-12 and Example 10-13 show the home and remote interfaces to the Façade Session bean.
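The interface listings themselves are not reproduced here. A minimal sketch of what they could look like, inferred from the bean implementation in Example 10-14 (the interface names and the exposed execute() business method are assumptions based on that code), follows.

// SecureServiceFacadeHome.java: the EJB home interface
import javax.ejb.*;
import javax.resource.ResourceException;
import java.rmi.RemoteException;
import com.csp.*;

public interface SecureServiceFacadeHome extends EJBHome {
    // Creates the facade, initializing its security context
    SecureServiceFacade create(SecurityContext ctx)
        throws CreateException, ResourceException, RemoteException;
}

// SecureServiceFacade.java: the EJB remote interface
public interface SecureServiceFacade extends EJBObject {
    // Validates and routes the secure message to the requested service
    SecureMessage execute(SecureMessage msg)
        throws SecureServiceFacadeException, RemoteException;
}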
Example 10-14 lists sample bean implementation code. The important item to notice is that the SecurityContext object is maintained as a state variable in the stateful session bean in order to facilitate propagation of the context to any individual service that expects it. The SecureMessage encapsulates the aggregate service description of the client request and is used to locate the appropriate services and, optionally, to establish a dynamic sequence of participating service executions.
import javax.ejb.*;
import javax.naming.*;
import javax.resource.ResourceException;
import java.util.*;

import com.csp.*;

public class SecureServiceFacadeSessionBean implements SessionBean {

    private SessionContext context;
    private SecurityContext securityContext;

    // Remote references for the individual services
    // can be encapsulated as facade attributes
    // or made part of the message
    private Map services = new HashMap();

    // Create the facade and initialize the security context
    public void ejbCreate(SecurityContext ctx)
            throws CreateException, ResourceException {
        securityContext = ctx;
    }

    // Locate the requested service and cache it for
    // prospective future use and stickiness
    public SecureMessage execute(SecureMessage msg)
            throws SecureServiceFacadeException, ServiceLocatorException {
        SecureService svc = ServiceLocator.getService(
            msg.getRequestedServiceName());
        services.put(msg.getRequestedServiceName(), svc);
        return svc.execute(msg);
    }

    // ...
    // Other lifecycle methods
    public void ejbActivate() { }
    public void ejbPassivate() { }
    public void setSessionContext(SessionContext ctx) { context = ctx; }
    public void ejbRemove() { }
}
Reality Check
Does the Service Façade need to incorporate security? The Secure Service Façade uses the existing security framework while aggregating fine-grained services. However, security context validation may not be required if other means of authentication and access control are pertinently enforced on the client request before it reaches the façade.
Does the Secure Service Façade need to perform service aggregation? If the client requests will mostly be fulfilled by a single, fine-grained service component, there is no necessity for aggregation. In such cases, a Secure Service Proxy may well suit the purpose.
Does the Secure Service Façade reduce security code duplication? If security context validation is performed by each service component, the validation at the façade level may turn out to be redundant and wasteful. A well-planned design could reduce such duplication.
Related Patterns
Secure Service Proxy. Secure Service Proxy, implemented as a Web service endpoint, acts as a mediator between the clients and the J2EE components, with a one-to-one mapping between proxy methods and remote methods of J2EE components. Secure Service Façade, on the other hand, maintains complex relationships between participating services and exposes an aggregated, uniform interface to the client.
Session Façade. The Secure Service Façade and the generic Session Façade [CJP2] offer the same benefits with respect to business object integration and aggregation. However, Secure Service Façade does not require that the participating components be EJBs. The participating services could use any framework, and the façade would incorporate the appropriate invocation logic to use those services. In addition, Secure Service Façade emphasizes security context life cycle management and its propagation to the appropriate services.
Secure Session Object

Forces

You want to define a data structure for the security context that comprises authentication and authorization credentials so that application components can validate those credentials.
You want to define a token that can uniquely identify the security context to be shared between applications to retrieve the context, thereby enabling single sign-on between applications.
You want to abstract vendor-specific session management and distribution implementations.
You want to securely transmit the security context across virtual machines and address spaces when desired in order to retain the client's credentials outside of the initial request thread.
Solution
Use a Secure Session Object to encapsulate authentication and authorization credentials in an abstraction that can be passed across boundaries. You often need to persist session data within a single session or between user sessions that span an indeterminate period of time. In a typical Web application, you could use cookies and URL rewriting to achieve session persistence, but there are security, performance, and network-utilization implications of doing so. Applications that store sensitive data in the session are often compelled to protect such data and prevent potential misuse by malicious code (a Trojan horse) or a user (a hacker). Malicious code could use reflection to retrieve private members of an object. Hackers could sniff the serialized session object while in transit and misuse the data. Developers could unknowingly use debug statements to print sensitive data in log files. A Secure Session Object can ensure that sensitive information is not inadvertently exposed. The Secure Session Object provides a means of encapsulating authentication and authorization information, such as credentials, roles, and privileges, and using them for secure transport. This allows components across tiers or asynchronous messaging systems to verify that the originator of the request is authenticated and authorized for that particular service. It is intended that this serves as an abstract mechanism to encapsulate vendor-specific implementations. A Secure Session Object is an ideal way to share and transmit global security information associated with a client.
Structure
Figure 10-20 is a class diagram of the Secure Session Object.
Client. The Client sends a request to a Target resource. The Client receives a SecureSessionObject and stores it for submitting in subsequent requests.
SecureSessionObject. The SecureSessionObject stores information regarding the client and its session, which can be validated by consumers to establish authentication and authorization of that client.
Target. The Target creates a SecureSessionObject. It then verifies the SecureSessionObject passed in on subsequent requests.
The Secure Session Object is implemented through the following steps:
1. Client accesses a Target resource.
2. Target creates a SecureSessionObject.
3. Target serializes the SecureSessionObject and returns it in the response.
4. Client needs to access the Target again and serializes the SecureSessionObject from the last request.
5. Client accesses the Target, passing the SecureSessionObject created previously in response to the request.
6. Target receives the request and verifies the SecureSessionObject before completing the request.
The sketch following this list illustrates one way the Target might realize steps 2, 3, and 6.
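This is a sketch only, not the book's code: the MAC-based integrity check below is an assumption of this example, chosen to show how a Target could detect tampering when a serialized SecureSessionObject comes back from the client.

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Arrays;
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;

public class SecureSessionIssuer {

    // Key known only to the Target; clients cannot forge the tag
    private final SecretKey macKey;

    public SecureSessionIssuer() throws Exception {
        macKey = KeyGenerator.getInstance("HmacSHA1").generateKey();
    }

    // Steps 2-3: serialize the session object for return to the client
    public byte[] serialize(Serializable sessionObject) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(sessionObject);
        return bos.toByteArray();
    }

    // Compute the integrity tag sent alongside the serialized object
    public byte[] sign(byte[] payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(macKey);
        return mac.doFinal(payload);
    }

    // Step 6: verify the tag before trusting a returned session object
    public boolean verify(byte[] payload, byte[] tag) throws Exception {
        return Arrays.equals(sign(payload), tag);
    }
}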
Strategies
You can use a number of strategies to implement Secure Session Object. The first strategy uses a Transfer Object Member, which allows you to use Transfer Objects to exchange data across tiers. The second strategy uses an Interceptor, which is applicable when transferring data across remote endpoints, such as between tiers.
Interceptor Strategy
In the Interceptor Strategy, which is mostly applicable to a distributed client-server model, the client and the server use appropriate interceptors to negotiate and instantiate a centrally managed Secure Session Object. This session object glues the client and server interceptors together to enforce session security on the client-server communication. The client and the server interceptors perform the initial handshake to agree upon the security mechanisms for the session object. The client authenticates to the server and retrieves a reference to the session object via a client interceptor. The reference could be as simple as a token or a remote object reference. After the client has authenticated itself, the server interceptor uses a session object factory to instantiate the Secure Session Object and returns the reference of the object to the client. The client and the server interceptors then exchange messages marshalled and unmarshalled according to the security context maintained in the Secure Session Object. Figure 10-23 is a class diagram of the Secure Session Object pattern implemented using an Interceptor Strategy.
This strategy offers the ability to update or replace the security implementations in the interceptors independently of one another. Moreover, any change in the Secure Session Object implementation causes changes only in the interceptors instead of the whole application.
Consequences
The Secure Session Object prevents a form of session hijacking that could occur if the session context is not propagated and therefore not checked in the Business tier. This happens when the Web tier is distributed from the Business tier, and it applies to message passing over JMS as well. The ramification of not using a Secure Session Object is that impersonation attacks can take place from inside the perimeter. By employing the Secure Session Object pattern, developers benefit in the following ways:
Controlled access and common interface to sensitive information. The Secure Session Object encapsulates all sensitive information related to session management and communication establishment. It can then restrict access to such information, encrypt it with complete autonomy, or even block access to information that is inappropriate to the rest of the application. A common interface serves all components that need access to the rest of the session data and offers an aggregate view of session information.
Optimized security processing. Since a Secure Session Object can be reused over time, it minimizes repetition of security tasks such as authentication, secure connection establishment, and encryption and decryption of shared, static data.
Reduced network utilization and memory consumption. Centralizing management of and access to a Secure Session Object via appropriate references and tokens minimizes the amount of session information exchanged between clients and servers. Memory utilization is also optimized by sharing the security context between multiple components.
Abstracts vendor-specific session management implementations. The Secure Session Object pattern provides a generic data structure for storing and retrieving vendor-specific session management information. This reduces the dependency on a particular vendor and promotes code evolution.
Sample Code
Example 10-15 shows a sample implementation of the Transfer Object Member strategy.
package com.csp.business;

public class SecureSessionTransferObject implements java.io.Serializable {

    private SecureSessionObject secureSessionObject;

    public SecureSessionObject getSecureTransferObject() {
        return secureSessionObject;
    }

    public void setSecureTransferObject(
            SecureSessionObject secureSessionObject) {
        this.secureSessionObject = secureSessionObject;
    }

    // Additional TransferObject methods...
}
A developer can implement a SecureSessionTransferObject whenever they want to pass credentials within a Transfer Object.
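As an illustrative fragment (the service class and method names here are assumptions), a Business-tier component might attach the caller's credentials to the transfer object before it crosses a tier boundary:

package com.csp.business;

public class AccountService {

    // Wrap the current security context in the transfer object so the
    // receiving tier can validate the caller's credentials
    public SecureSessionTransferObject getAccountView(
            SecureSessionObject session) {
        SecureSessionTransferObject to = new SecureSessionTransferObject();
        to.setSecureTransferObject(session);
        // ... populate business data on the transfer object
        return to;
    }
}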
Reality Check
Is the Secure Session Object too bloated? Abstracting all session information into a single composite object may increase the object size. Serializing and de-serializing such an object quite frequently degrades performance. In such cases, one could revisit the object design or serialization routines to alleviate the performance degradation.
Concurrency implications. Many components associated with the client session could be competing to update and read session data, which could lead to concurrency issues such as long wait times or deadlocks. A careful analysis of the possible scenarios is recommended.
Related Patterns
Transfer Object [CJP2]. A Secure Session Object can be carried across tiers as a member of a Transfer Object, as shown in the Transfer Object Member strategy above.
Policy Delegate. A stateful Policy Delegate maintains the client's security context and transient state in a SecureSessionObject between invocations by the same client.
Infrastructure
1. Agent-based policy enforcement. Developers can use agents to enforce policies instead of writing custom code for policy enforcement. Application Server agents are good ways to take advantage of the J2EE container-managed security model while leveraging existing third-party security products.
2. Access protection. Make sure that a secure java.policy file is in place that enforces access privileges and permissions to protect the JAR components deployed by the application server. Make sure no untrusted JAR files are deployed. This secures JAR/class files from downloading by hackers and external applications. Make use of digitally signed JAR files so that downloaded code can be verified as coming from the owner of the JAR file.
3. Access restriction to naming services. Restrict anonymous access to the naming services. Secure access to the naming services provider by defining only an administrator who can add or remove services from the JNDI registry. Allow application-level access to look up, bind, and rebind services.
4. Error reporting. Always return an error page or exception specific to the application error and the user's request. For example, you might use an application-specific InvalidUserException and NoAccessPrivilegesException (see the sketch after this list). Do not expose remote, system-level, and naming-service-specific exceptions to the user accessing the applications. Exposing these exceptions to the end user reveals weaknesses in the application and allows hackers to design potential attacks.
5. Database communication. Make use of the database provider's recommended security mechanisms for persisting data. Adopt available options to improve transport and data security, such as encrypting communication and sensitive data before writing it to the database. For example, store customer data in the database in plain text except for the encrypted credit card information.
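The following sketch illustrates practice 4; the class and method names are assumptions, not code from the book. The low-level cause is logged on the server, and only a generic, application-specific exception reaches the user.

// Application-specific exception that reveals nothing about the
// underlying infrastructure
class InvalidUserException extends Exception {
    public InvalidUserException(String message) {
        super(message);
    }
}

// Translates system-level failures before they can propagate to the user
public class LoginService {

    public void login(String user, char[] password)
            throws InvalidUserException {
        try {
            authenticate(user, password);
        } catch (javax.naming.NamingException e) {
            // Log the real cause internally for administrators only
            System.err.println("Login failure: " + e.getMessage());
            // Report only a generic, application-level error
            throw new InvalidUserException("Invalid user name or password.");
        }
    }

    private void authenticate(String user, char[] password)
            throws javax.naming.NamingException {
        // ... look up and verify the user against the directory
    }
}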
Architecture
6. Component protection. Make sure all business components are protected with security roles and appropriate privileges before deployment. Avoid defining a global security role such as administrator unless it is warranted by the application.
7. Role mappings. Adopt dynamic role mappings based on business rules, the context of the request, or a condition such as access hours, time of day, group membership, specific assignment, or a caller's membership in a group. For example, a user may be allowed to be in an administrator role only while the actual administrator is away, as a temporary arrangement, by adding the user to a special role or group. You must be able to specify the hours between which the temporary administrator has special privileges. These privileges are automatically revoked when the time expires or the actual administrator returns. You also want to avoid using the same role names in the application, because application-specific roles are mapped in deployment descriptors and they should be different from LDAP roles.
8. Rich-client authentication. While accessing EJB components via rich Java clients, do not send the user's username/password credentials through a Java object. Adopt container-managed mechanisms and then use declarative or programmatic security mechanisms to obtain the associated principal from them.
9. MDB access. Restrict unauthorized access to the MDB and prevent it from receiving and processing malicious messages. Make sure the message sender uses a unique message ID, correlation IDs, and custom message headers so that the sender's authenticity can be verified before the message is processed.
10. Secure auditing. Deploy a secure logging and auditing mechanism (Audit Interceptor) to record and to provide a means of identifying and auditing all direct access attempts, policy violations, failed authentication attempts, failed EJB access, and exceptions.
11. Principal propagation. When defining delegated associations between EJBs and trust relationships between containers, it is necessary to analyze the consequences and potential risks of using the runAs technique for principal delegation and for propagating identity. SSL mutual authentication is the best approach before initiating communication to establish trust relationships between containers.
12. Securing Business-tier components. Use the Secure Service Façade to protect interactions and to mask the exposure of underlying business components and their methods.
13. Data validation. Adopt well-defined validation procedures in the business component for handling data formats and business data. This ensures data integrity and protects application components from the risk of processing overhead due to malicious data injection.
14. Data obfuscation. Adopt object obfuscation (Obfuscated Transfer Object) while using value objects that represent a snapshot of security-sensitive data in the database.
Policy
15. Disallow unnecessary protocols. In cases where rich clients are not being used to access business components, disallow all traffic from ports specific to RMI/IIOP, IIOP-CSIv2, or J2EE vendor-preferred protocols. Also disallow all service requests, routing information, and packet content from external access to EJBs or business components.
16. Restrict deployed components. Do not store undeployed business components in the production environment.
17. Restrict user access. Configure user lockouts and access time limits to prevent attacks from potential hackers on user accounts that send malicious data. Avoid long-duration transactions that affect performance.
18. Authentication enforcement. Use a Secure Session Object to provide authentication enforcement in the Business tier. This prevents circumvention of Web-tier controls by preventing any communication with the Business tier (for example, a direct EJB invocation via RMI) from an external source without proper authentication.
19. Use Audit Interceptor to audit events. Properly audit events and hold formal audit reviews to ensure the application has not been compromised. Have a process and procedures in place to diagnose the audit logs in the case of a disaster or attack.
20. Monitor for malicious activity. Use Dynamic Service Management to monitor security-related components. Build or buy a tool that provides automated monitoring of those components in a way that lets you detect malicious activity. For example, set a threshold with an alert on the number of incorrect logins per user client to detect a hacker using a brute-force password attack or attempting to scan for weak passwords across accounts.
Pitfalls
21. Build versus buy. Developers tend to build rather than buy solutions because doing so gives them maximum flexibility and allows them to maintain control. This is usually a bad practice, because the costs of additional time and resources outweigh the benefits of the flexibility. In the case of security, this is especially true. Most developers do not understand all of the security issues well enough to implement a security model better than a vendor. A vendor product has the added benefit of being time-tested in the real world with feedback from external sources.
22. Performance risks. The principle of conservation of energy demonstrates that you cannot get something for nothing. This holds true for security. The cost of increased security usually comes at the price of performance. While it is necessary to achieve a certain level of security, introducing unnecessary security functionality usually reduces performance and increases complexity. Strive to balance security and performance in your applications.
References
[CJP2] Deepak Alur, John Crupi, and Dan Malks. Core J2EE Patterns: Best Practices and Design Strategies, Second Edition. Prentice Hall, 2003.
[SPEC] Java 2 Platform Enterprise Edition Specification, v1.4. http://java.sun.com/j2ee/j2ee-1_4-fr-spec.pdf
[POSA1] Buschmann, Meunier, Rohnert, Sommerlad, and Stal. Pattern-Oriented Software Architecture: A System of Patterns. John Wiley & Sons, 1996.
[GoF] Gamma, Helm, Johnson, and Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1994.
Figure 11-1. The seven-layer OSI stack and Web services security
Effectively, the end-to-end security of a Web services solution is addressed by three security layers that are clearly delineated, with mechanisms and responsibilities for securing the Web services communication, messages, and their network infrastructure. The three security layers and their tasks and responsibilities are described in the following sections.
Network-Layer Security
Network-Layer Security works on the IP and TCP layers by providing perimeter security for the network infrastructure hosting the service and filtering out connections from unauthorized intruders. Network routers and firewall appliances make up this solution, and the protection is limited to connection attacks based on IP addresses, TCP ports, protocols, and packets.
Transport-Layer Security
Transport-Layer Security secures the communication channel and ensures data privacy, confidentiality, and integrity between the communicating endpoints. It ensures that the data transmitted and the sessions are protected from eavesdropping by unintended recipients. Applying cryptographic algorithms and adopting two-way SSL/TLS mechanisms make up this solution, which secures the transport and the data exchanged on the wire by encrypting messages. It also guarantees that data in transit is not accessible for viewing by unintended recipients or intermediaries.
Message-Layer Security
Message-Layer Security secures the Web services endpoint with application-specific security information in the form of XML metadata. In Web services communication, XML messages may contain malicious content from unauthorized parties that can pose a threat to the service endpoint. Traditional security mechanisms such as firewalls and HTTP/SSL do not verify XML content-level threats, which can lead to a buffer overflow, SQL/XQuery insertion, or XML-based denial-of-service (X-DoS). Incorporating message-level security allows defining application- or service-specific security as XML metadata or SOAP header blocks that represent information related to a user's identity, authentication, authorization, encryption/decryption, and digital signatures.
XML Firewall
The XML firewall is an XML-aware security device or a proxy infrastructure that can perform XML-based security processing operations. It helps in identifying and thwarting content-level threats and vulnerabilities such as malicious messages, buffer overflows, oversized payloads, virus attachments, and so on. It encapsulates access to, and enforces XML-based security mechanisms and access-control policies on, the underlying Web service endpoints and WSDL descriptions. Usually, XML firewalls are provided as specialized hardware or as an XML-aware agent component that can be plugged into a Web server running on a bastion host. XML firewalls are also required to support XML Web services standards and specifications to enable message interoperability and compliance with the underlying Web services provider infrastructure.
Identity Provider
The Identity Provider facilitates identity management, single sign-on (SSO), and identity federation for participating applications, service providers, and service requesters. Its primary responsibility is to provide authentication, authorization, and auditing services for all service interactions between the service provider and the requester. It also facilitates identity registration and termination services in conjunction with a user repository. With Liberty standards compliance, it also enables interoperability and allows the establishment of trusted relationships between communicating service providers and identity providers.
Directory Services
Directory Services provide mechanisms for storing and managing the user profiles, configuration, policies, and rules for accessing application and network resources. A directory service features a specialized database, a standard protocol, and APIs to store and retrieve information. LDAP is the de facto standard for implementing directory services. It defines a lightweight protocol that specifies the data model for representing information, naming, and security, and functionalities for storing, accessing, and updating the LDAP information. Directory services provide support for application security mechanisms related to locating and managing PKI certificates and supporting other PKI life-cycle operations. For more information about LDAP, PKI, and digital certificates, refer to Chapter 2, "Basics of Security." A brief sketch of a directory lookup follows.
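As an illustration only (the server URL, base DN, and context-factory class are assumptions of this sketch), an application can retrieve a user profile from an LDAP directory through JNDI:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class DirectoryLookup {

    public static Attributes findUser(String uid) throws Exception {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389");

        DirContext ctx = new InitialDirContext(env);
        try {
            // Read the entry's attributes (for example, roles and
            // PKI certificates) from the directory
            return ctx.getAttributes(
                "uid=" + uid + ",ou=people,dc=example,dc=com");
        } finally {
            ctx.close();
        }
    }
}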
Forces
You want to block and prevent all direct access to the exposed service endpoints.
You want to provide a single entry point, a centralized security point, and a policy enforcement point for invoking all target service endpoints.
You want to intercept all XML traffic and inspect the complete XML message and attachments before processing at the service endpoint.
You want to verify message integrity and confidentiality during transit, particularly against eavesdropping and tampering.
You want to enforce transport-layer security using two-way SSL/TLS (mutual authentication) to achieve end-to-end data privacy and confidentiality of the communication.
You want to protect the exposed WSDL descriptions from public access and prevent revealing operations.
You want to apply message inspection and filter mechanisms to the XML traffic based on content, payload size, and message representation.
You want to centralize enforcement of identity-, role-, and policy-based access control for all exposed services.
You want to integrate the existing identity-provider infrastructure for authentication and authorization.
You want to monitor and identify XML-based replay and DoS attacks by tracking and verifying the IP addresses, hostnames, message timestamps, and other message sender-specific information.
You want to verify and validate all incoming and outgoing messages for interoperability, current standards, and regulatory compliance.
You want to enforce centralized logging, monitoring, and management of all XML-based transports, sessions, transactions, and exchanged data.
You want to track usage, failures, and other service-level statistics, such as metering and billing.
You want to provide support for verifying and validating incoming and outgoing messages based on XML security standards such as OASIS WS-Security, XML digital signatures, XML Encryption, SAML, XACML, and XrML.
Solution
The Message Interceptor Gateway pattern is a proxy infrastructure providing a centralized entry point that encapsulates access to all target service endpoints of a Web services provider. It acts as a controller that aggregates access and enforces security mechanisms on the XML traffic by making use of the identity and access management infrastructure. It secures the incoming and outgoing XML traffic by securing the communication channels between the service endpoints. The Message Interceptor Gateway accesses the network traffic using a packet-sniffer mechanism that inspects the network packets for HTTP and XML headers. Once it encounters a packet with HTTP and XML headers, it off-loads the message for further processing using XML validation mechanisms. The XML validation mechanism is similar to a regular-expression matching process for XML: it checks the given XML message against standard XML Schemas meant for verifying the XML for well-formedness, structural integrity, and standards compliance (a sketch of such a check follows below). After validation and before allowing the user to access the service, the Message Interceptor Gateway communicates with the Message Inspector pattern for verification of mandated message-level security mechanisms, such as the user's identity and associated security policies that confirm the user's access privileges. Figure 11-3 illustrates the representation of a Message Interceptor Gateway in the Web services architecture.
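The following is a minimal sketch of that validation step, assuming the gateway is implemented in Java with JAXP 1.3; it illustrates schema validation only and is not a mandated API of the pattern.

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class GatewayXmlValidator {

    // Returns true only if the intercepted message is well-formed and
    // conforms to the given XML Schema
    public static boolean isValid(File schemaFile, File message) {
        try {
            SchemaFactory factory = SchemaFactory.newInstance(
                XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(schemaFile);
            Validator validator = schema.newValidator();
            // Throws SAXException on well-formedness or schema violations
            validator.validate(new StreamSource(message));
            return true;
        } catch (Exception e) {
            // The gateway would log the violation and reject the message
            return false;
        }
    }
}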
The Message Interceptor Gateway pattern can be an XML-aware security appliance or a Web proxy infrastructure service that allows intercepting inbound and outbound XML Web services traffic and enforcing consistent security and policies for all its exposed service endpoints and targeting service requesters. In effect, a Message Interceptor Gateway must handle tasks such as:
Intercepting all incoming and outgoing XML traffic and verifying XML messages for integrity and confidentiality.
Applying transport-level security mechanisms such as SSL/TLS for initiating secure communication.
Identifying the communicating peer and authenticating it via an X.509 server certificate or X.509 mutual authentication of server and client.
Applying data integrity at the transport level using HTTP over SSL/TLS to ensure that the data in transit is not intercepted or tampered with by unauthorized parties.
Applying data confidentiality at the transport level using HTTP over SSL/TLS to ensure that the data in transit is not available for viewing or disclosed to unauthorized parties.
Ensuring that a message received is unique and valid by verifying the IDs, timestamps, and receiving order, and making sure it is not resubmitted in a replay attack.
Disallowing messages initiated from unauthorized parties and untrusted hosts by verifying IP addresses, protocols, service endpoints, message formats, and so on.
Protecting access to WSDL descriptions with authentication and access control.
Validating and verifying incoming XML messages for well-formedness, XML Schema conformance, and compliance with standards such as OASIS WS-Security and the SAML Token profile.
Validating and verifying the representation of XML digital signatures and XML Encryption.
Verifying incoming messages for message correlation IDs, timestamps, and expiration.
Detecting content-based attacks such as virus attachments, abnormal messages, and malformed messages that can cause a service endpoint crash or failure.
Interfacing with the Message Inspector, identity provider, and PKI infrastructures to enforce authentication, authorization, and other security policies.
Enforcing authentication and authorization decisions for the service callers.
Initiating automated responses that alert administrators of detected security breaches and malicious activities.
Logging and recording auditable trails for monitoring and diagnosing activities.
Structure
Figure 11-4 shows a class diagram of the Message Interceptor Gateway pattern.
The key participants of the pattern are as follows:
Client. The client of the Message Interceptor Gateway is a Web service endpoint that initiates XML traffic or responds to a service request. In a Web services communication, the client can be a service provider or a requester endpoint that encapsulates an application or a Web browser that is capable of posting an XML message.
RequestMessage. The Request message represents a SOAP RPC message or an XML document that is subject to all the required security processing tasks carried out by the Message Interceptor Gateway. Based on the request message, the service endpoint may allow or deny further processing of the message.
Message Interceptor Gateway. The Message Interceptor Gateway is the primary role object of this pattern. It acts as the security enforcement point by encapsulating access to all target service endpoints of a service provider or requester. It facilitates infrastructure services that can provide a secure single entry and exit point by intercepting inbound requests and outbound responses of Web services traffic. This ensures transport-layer security by addressing data integrity and confidentiality at the transport level; identifying message uniqueness via timestamps, correlation, and ordering; validating for standards compliance; and enforcing communicating-peer authentication and authorization policies as required by the service endpoint infrastructure. It also ensures logging and recording of audit trails for all incoming and outgoing messages.
Message Inspector. The Message Interceptor Gateway makes use of a Message Inspector pattern in a secondary role to provide message-level security processing. It also acts as a security decision point for the authentication and authorization policy decisions, message- and element-level verification and validation, logging, and auditing that are required by the service provider endpoint or the message handler intermediary.
Identity Provider. The Identity Provider represents a service or user repository that contains all information required for authentication and authorization of an identity accessing a service.
ServiceEndpoint. The ServiceEndpoint represents the target object and the ultimate consumer of the message, which the client asks to perform the message processing.
Response Message. The Response Message represents the SOAP RPC call or XML document that is sent by the Message Interceptor Gateway. It represents the results of processing the message or an acknowledgement that lets the client know that the request was received.
In addition, depending on the vendor implementation, the agent may provide support and services for the following: ensuring XML message well-formedness, XML Schema validation, content filtering for viruses and noncompliant messages, identifying and verifying user authentication tokens, signing and validating XML digital signatures, encrypting and decrypting with XML Encryption, enforcing XML-based access-control policies, ensuring WSDL protection, auditing, logging, enabling standards-based security interoperability, and ensuring conformance to Web services security specifications such as XML digital signature, XML Encryption, OASIS WS-Security, the SAML Token profile, XACML, and the WS-I security profile.
Consequences
Using the Message Interceptor Gateway pattern helps in intercepting XML traffic in order to perform message-level security operations, including authentication, authorization, auditing, encryption/decryption, signature validation, compression/decompression, transformation, routing, and management. All these functions can be carried out before processing the message at its ultimate endpoint. The Message Interceptor Gateway ensures transport-level security and peer authentication, which verify message uniqueness and standards compliance. At the transport level, it guarantees data integrity, data confidentiality, non-repudiation, auditability, and traceability. It also safeguards the service endpoint from attacks such as XML DoS, man-in-the-middle, untrusted hosts, brute-force message replay, malicious payloads, and non-compliant messages. In addition to the above, the Message Interceptor Gateway pattern provides the following benefits to the service endpoint:
Centralized control. The Message Interceptor Gateway acts as a controller offering a centralized control and processing subsystem for enforcing security-related tasks across all exposed service endpoints. It offers centralized management of related services, including authentication, authorization, faults, encryption, audit trails, metering, billing, and so on.
Modularity and maintainability. Restricting direct access, centralizing all security mechanisms, enforcing access-control policies, and off-loading security tasks from the service endpoints keep the underlying application interfaces unpolluted with security-handling methods and save application processing time and resources. This enhances a service with a modular subsystem designated for security and reduces complex tasks, which results in better maintainability of the overall Web services security infrastructure.
Reusability. The Message Interceptor Gateway pattern encapsulates and protects all direct access to the underlying service endpoints, facilitating a common reusable solution that helps in protecting multiple service endpoints.
Extensibility. The Message Interceptor Gateway pattern offers extensibility by allowing you to incorporate more mechanisms and functionalities related to transport-level and message-level security, thereby reducing tight coupling or integration with the underlying service endpoint infrastructure.
Ease of migration. The security layer provided by the Message Interceptor Gateway pattern makes it easier for the underlying service endpoint to have a different security provider implementation. The service-requesting clients and the service provider have no knowledge about the security layer.
Improved testability. The Message Interceptor Gateway pattern infrastructure separates the security architectural model from the underlying service endpoint. This improves the ease of testability and extensibility of the security architecture.
Network responsiveness. Implementing the Message Interceptor Gateway pattern with a combination of an XML firewall and Web services security infrastructure often demonstrates significant performance gains in latency and message throughput. Using a software-only implementation has more processing overhead and impacts network performance.
Additional expertise required. As a trade-off, implementing and managing the Message Interceptor Gateway pattern often requires strong familiarity and skills related to XML-aware and network appliances.
Reality Checks
Choosing the right strategy: XML firewall or an intercepting Web agent? Using an Intercepting Web Agent infrastructure provided by a Web services security provider could meet the target Web services endpoint-specific requirements for Web services security standards compliance as well as transport-level and message-level security mechanisms. Using software interfaces to incorporate SSL/TLS, signature validation, and encryption and decryption is usually resource-intensive and incurs processing overhead. It can also affect the performance of the overall architecture. Adopting a combination of the XML firewall appliance strategy and the Intercepting Web Agent strategy would help in achieving performance and high-availability goals, particularly while handling a large number of connections, binary security tokens, and larger message payloads.
Is the Message Interceptor Gateway pattern too slow? Intercepting all XML traffic and handling security-related tasks within an intermediary often degrades performance. In such cases, one could revisit how the Message Interceptor Gateway pattern divides work with the Message Inspector pattern, or use load-balancing strategies, adding multiple interceptor gateways to alleviate the degradation.
Related Patterns
Message Inspector [Web Services Tier]. The Message Inspector pattern is used to verify and validate the quality of message-level security mechanisms applied to XML Web services.

Secure Message Router [Web Services Tier]. The Secure Message Router pattern allows secure communication with multiple partner endpoints using message-level security and identity-federation mechanisms.
Message Inspector
Problem
You want to verify and validate the quality of message-level security mechanisms applied to XML Web services.

In a Web service communication, an incoming message should not be accepted unless it is confirmed and proven to be safe for further processing. The incoming messages may be client requests or response messages. These messages may contain malicious content or XML messages from unauthorized parties, which are a potential threat to the service provider. Traditional security mechanisms such as firewalls and packet-filtering systems do not secure and verify the content and cannot handle these threats. Message-level security mechanisms are required to secure the XML messages and to handle XML-related security attacks.

It is necessary to adopt and deploy an XML standards-based security framework and consistently enforce it before processing messages. This involves pre- and post-processing XML messages by parsing the incoming content, verifying and validating it against processing requirements, and then making authentication and authorization decisions based on the message sender or receiver. As a result, it becomes mandatory to inspect the message, particularly to identify the sender and verify that the sender confirms its identity. It is also necessary to verify that the identity is authorized to send the message, that the content has been secured and unaltered during transit, and that the content is legitimate and does not contain any malicious information.

Integrating message-level security mechanisms with an application service endpoint creates a direct dependency between the service and the XML security implementation. Such code dependencies in application components add complexity and make it tedious to process the application-specific content and to apply changes to the security mechanisms. Thus, a common solution is required for implementing a series of message-level tasks related to identifying, verifying, and validating XML-based security before and after receiving the message. These tasks must be carried out as pre-processing or post-processing tasks to ensure that there are no security risks and vulnerabilities associated with the message. Some of these tasks determine whether it is necessary to process the message or to discontinue processing it based on required schemas, constraints, compliance, and specific processing requirements.
Forces
You want to use a common solution for message-level security tasks, such as examining the structure and content and verifying and determining the uniqueness, confidentiality, integrity, and validity of messages before the application endpoint starts processing them.

You want to proactively identify and potentially limit messages upon receipt based on applied security token profiles and assertions representing the identity and policies.

You want to monitor and identify message replay and XML-based DoS attacks by tracking and verifying encrypted communication, security tokens, XML digital signatures, message correlation, message expiry, or timestamps.

You want to verify and validate messages at the element level to identify parameter tampering and message injection attacks via XPath and XQuery expressions.

You want to verify messages for interoperability and standards compliance to guarantee that the applied security mechanisms of the incoming and outgoing messages work seamlessly in all usage scenarios.

You want to enforce centralized logging based on the security actions and decisions made on the received messages.

You want to provide a uniform API mechanism for managing message-level security and processing the security headers in accordance with various XML security standards, such as OASIS WS-Security, XML digital signatures, XML Encryption, the SAML Token profile, and the REL Token profile.
Solution
Use the Message Inspector pattern as a modular or pluggable component that can be integrated with infrastructure service components that handle pre-processing and post-processing of incoming and outgoing SOAP or XML messages. The Message Inspector combines a chain of tasks intended for identifying message-level security headers, dissecting the header elements, and verifying the message for the key security requirements specified by the service provider. It acts as a security decision point for enforcing all the security policies applicable to accessing a service endpoint, that is, a Web service provider or requester. In effect, you are able to integrate a set of tasks, including:
Verifying and validating a SOAP message and its populated headers for standards compliance, such as OASIS WS-Security, the SAML Token profile, the WS-I Basic Security Profile, the REL Token profile, and so forth.

Identifying the data origin through identification and authentication of the message payload and its elements using OASIS WS-Security and XML digital signature mechanisms.

Verifying the message for data integrity and validating it for accuracy and consistency (for example, that the message parts are not modified or deleted by unauthorized parties) using OASIS WS-Security or XML digital signature mechanisms.

Verifying the message for data confidentiality to ensure that the message is not viewed by unauthorized parties during transit or processing at intermediaries, using OASIS WS-Security or XML Encryption mechanisms.

Validating and verifying the representation of XML digital signatures, including recalculating the digests by applying the digest algorithm and recalculating the signature using the key information.

Decrypting and verifying the encrypted data to support the underlying service or prior to further processing by the service endpoint.

Looking up an XKMS service provider to locate public keys intended for verifying and validating signatures.

Verifying the messages for correlation IDs, timestamps, and expiration.

Verifying and validating the business data for required length and data format to avoid buffer overflow attacks and to restrict malicious data insertion attacks.

Interacting with the identity provider to enforce authentication and authorization.

Enforcing authentication and authorization decisions based on the message sender's content (such as username/password, SAML assertions, REL licenses, and BinarySecurityTokens such as certificates and Kerberos tickets) and associated security policies.

Ensuring XML message conformity with a given XML Schema, DTD, or XPath expression to ensure that the content conforms to the security specifications.

Detecting data injection attacks by identifying malicious schema definitions, XPath/XQuery expressions, SQL, cross-site scripting, and malformed URLs.

Initiating an automated response upon detection of security breaches and malicious activities.

Logging and recording audit trails for the monitoring and diagnosis of activities and for the reconstruction of events after a security issue.

Using the Message Inspector pattern eliminates the need for the service endpoint to perform complex message-level security operations, particularly lookup processes with an identity provider, accessing an XKMS service, and creating decrypted business documents. These operations are quite resource-intensive (that is, they require excessive utilization of CPU, memory, and network bandwidth). To eliminate these overheads, this pattern provides a mechanism for off-loading these tasks to an intermediary by abstracting all message-level security-specific dependencies required by the service provider application.

The Message Inspector pattern can be implemented as a SOAP intermediary that integrates a set of message handlers working in sequence to perform a chain of message-level security tasks required by the service endpoint, such as identification, verification, validation, and extraction of security-specific headers and associated data represented in the message. An XML-aware security appliance that is capable of performing message-level and element-level validation and verification can also be incorporated.
It is strongly recommended that the Message Inspector pattern not cache any data during execution, including any data from the message sender that might be needed later.
Structure
Figure 11-7 shows the class diagram for the Message Inspector pattern.
The key participants of the pattern are as follows:

Client. The Client is any service requester that needs to invoke a Web service endpoint. The client initiates a request message represented as method names with parameter values or as XML documents. The client can be any type of application or a Web service that can create and send XML messages according to Web services standards.

Request Message. The Request Message represents a SOAP RPC message or an XML document that is verifiable by all the required security processing tasks carried out by the Message Inspector. Based on the request message, the service provider endpoint may allow or deny further processing of the message.

Message Inspector. The Message Inspector is the primary role object and main class of this pattern. It implements all the methods intended for message-level security processing. The Message Inspector parses the message requests to determine what needs to be done. It makes use of a series of operations to verify and validate the messages for all security-related processing. In a typical scenario, it acts as a security decision point that provides authentication and authorization policy decisions, message- and element-level verification and validation, and the logging and auditing functionality required by the service provider endpoint or the message-handler intermediary.

Message Interceptor Gateway. The Message Interceptor Gateway is the secondary role object of this pattern. It provides infrastructure services that can intercept inbound requests and outbound responses to ensure transport-layer security, message integrity and confidentiality, and standards compliance. It also enforces authentication and authorization policies required by the service provider endpoint or the subsequent message-handler intermediary.

Identity Provider. The Identity Provider represents a service or user repository that contains all information required for authentication and authorization of an identity accessing a service.

ServiceEndpoint. The ServiceEndpoint represents the target object and the ultimate consumer of the message, on whose behalf the client requests message processing.

Response Message. The Response Message represents the SOAP RPC call or XML document that is sent by the Message Interceptor Gateway or service endpoint. It represents the results of processing the message or an acknowledgement that lets the client know that the request was received.
Figure 11-9 illustrates a client sending a request to an endpoint. The request is intercepted by the Message Interceptor Gateway. After interception, the message is redirected to an XML appliance for verification, validation, and processing of message-level security information. In addition, the XML appliance may connect and interact with an identity provider to verify the request for authentication and authorization credentials.
In a Web services communication, each handler represents functionality such as pre-processing or post-processing of inbound requests or outbound responses. Each handler can be implemented to support a security operation that is configured and associated with a service-requester client, a service-provider server, or both. At runtime, a handler has the ability to access the message header or its body and introduce an operation that can verify, validate, or modify the target message. Multiple message handlers can be grouped together as an ordered group or with a designated sequence representing a set of message-processing operations and shared data. Message handlers should make use of a dedicated fault handler that captures all errors and exceptions from the respective operations and returns a response that sanitizes those exceptions so that they do not reveal internal functionality and failures. All message handlers are implemented as stateless, and they should not cache the results of an operation or any data that the client might need at a later point. This helps message handlers avoid potential threading and concurrency issues.
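To make this concrete, here is a minimal sketch of such a stateless pre-processing handler, assuming a JAX-RPC 1.1 runtime with SAAJ 1.2; the handler name, the rejection policy, and the choice of the wsu:Timestamp header as the check are illustrative assumptions, not a prescribed implementation.

import javax.xml.namespace.QName;
import javax.xml.rpc.handler.GenericHandler;
import javax.xml.rpc.handler.MessageContext;
import javax.xml.rpc.handler.soap.SOAPMessageContext;
import javax.xml.soap.SOAPHeader;
import org.w3c.dom.NodeList;

public class TimestampCheckHandler extends GenericHandler {
    private static final String WSU_NS =
        "http://docs.oasis-open.org/wss/2004/01/"
        + "oasis-200401-wss-wssecurity-utility-1.0.xsd";

    public QName[] getHeaders() {
        return new QName[0];
    }

    // Stateless pre-processing: reject messages without a wsu:Timestamp.
    public boolean handleRequest(MessageContext context) {
        try {
            SOAPHeader header = ((SOAPMessageContext) context)
                .getMessage().getSOAPPart().getEnvelope().getHeader();
            if (header == null) {
                return false; // stop the chain; no security headers at all
            }
            NodeList stamps =
                header.getElementsByTagNameNS(WSU_NS, "Timestamp");
            return stamps.getLength() > 0;
        } catch (Exception e) {
            // Sanitized failure: no internal details are propagated.
            return false;
        }
    }

    // Dedicated fault-processing hook; a real chain would route this to a
    // fault handler that logs details and returns only a generic fault.
    public boolean handleFault(MessageContext context) {
        return true;
    }
}

Returning false from handleRequest stops the chain, which is how a handler can prevent an unsafe message from ever reaching the service endpoint.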
Design Note
In the message handler chain strategy, there is a known issue related to repeated XML processing in certain parts of the XML parsing, DOM creation, and XML serialization functions. This issue usually impacts performance. A failure in one of the handlers in the chain may force a restart, and diagnosing the data and the corresponding handler-specific errors is a complex task. However, storing the intermediate XML in database tables during processing has proved very valuable for troubleshooting purposes. Intermediate XML storage also helps downstream processing continue without the need to restart or re-extract from the beginning.
A message handler chain, including a series of security operations, can be represented as a Message Inspector for a service provider or service requester. During service invocation, each handler completes its operation and then passes the result to the next handler in the chain. When the handler chain completes processing, the message is delegated to the application service endpoint for further processing. In a J2EE-based Web services environment, message handlers can be built using the JAX-RPC and SAAJ APIs. Message handlers can also be used for verifying SOAP attachments for potential viruses, content-related vulnerabilities, Trojan horses, and malicious data attachments. The message handler chain strategy can make use of the Secure Logger pattern (Web tier) and the Audit Interceptor pattern (Business tier) to ensure recording of audit trails. Figure 11-10 shows the sequence diagram for the message handler chain strategy and the various participants.
The Client sends a request message to its intended service endpoint. The request message is intercepted by the Message Interceptor Gateway for verification and validation of its security requirements. The Message Interceptor Gateway makes use of a Message Inspector to initiate message-level security processing. The Message Inspector is represented as a MessageHandlerChain that defines a series of message handlers tasked with performing the sequence of message-level security processing required by the service endpoint. Once the defined operations are complete, the MessageHandlerChain returns a result that verifies all message-level security requirements, such as authentication, authorization, signature verification, and so forth. Based on the results, the message is allowed or denied further processing.

Example 11-1 is a source code example (LogHandler.java) that shows the implementation of a logging handler using Apache Axis. The logging handler receives all incoming message requests, verifies and validates them for an XML Signature using the Apache XML Security kit, and then logs them using the Apache logging framework (log4j).
import org.apache.axis.AxisFault;
import org.apache.axis.Handler;
import org.apache.axis.Message;
import org.apache.axis.MessageContext;
import org.apache.axis.handlers.BasicHandler;
import org.apache.axis.utils.Messages;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.xml.security.signature.XMLSignature;
import org.apache.xml.security.utils.Constants;
import org.apache.xpath.CachedXPathAPI;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import java.io.FileWriter;
import java.io.PrintWriter;

public class LogHandler extends BasicHandler {

    // 1. Initialize the logger
    static Log log = LogFactory.getLog(LogHandler.class.getName());

    // 2. Initialize the Apache XML Security library
    static {
        org.apache.xml.security.Init.init();
    }

    // 3. Initiate message verification
    public void invoke(MessageContext msgContext) throws AxisFault {
        try {
            System.out.println("Starting message verification");
            Message inMsg = msgContext.getRequestMessage();
            Message outMsg = msgContext.getResponseMessage();

            // 4. Verify the incoming message for an XML signature
            Document doc = inMsg.getSOAPEnvelope().getAsDocument();
            String BaseURI = "http://xml-security";
            CachedXPathAPI xpathAPI = new CachedXPathAPI();
            Element nsctx = doc.createElement("nsctx");
            nsctx.setAttribute("xmlns:ds", Constants.SignatureSpecNS);
            Element signatureElem = (Element) xpathAPI.selectSingleNode(doc,
                "//ds:Signature", nsctx);

            // 5. Ensure that the document is digitally signed
            if (signatureElem == null) {
                System.out.println("The document is not signed");
                return;
            }

            // 6. Validate the signature
            XMLSignature sig = new XMLSignature(signatureElem, BaseURI);
            boolean verify =
                sig.checkSignatureValue(sig.getKeyInfo().getPublicKey());
            System.out.println("Message verification complete.");
            System.out.println("The signature is"
                + (verify ? " " : " not ") + "valid");
        } catch (Exception e) {
            throw AxisFault.makeFault(e);
        }
    }

    // 7. Log messages to a file on fault
    public void onFault(MessageContext msgContext) {
        try {
            Handler serviceHandler = msgContext.getService();
            String filename = (String) getOption("filename");
            if ((filename == null) || (filename.equals("")))
                throw new AxisFault("Server.NoLogFile",
                    "No log file configured for the LogHandler!", null, null);
            FileWriter fw = new FileWriter(filename, true);
            PrintWriter pw = new PrintWriter(fw);
            pw.println("=====================");
            pw.println("= " + Messages.getMessage("fault00"));
            pw.println("=====================");
            pw.close();
        } catch (Exception ex) {
            log.error(ex);
        }
    }
}
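A handler like this is typically wired into the service's request flow through an Axis WSDD deployment descriptor. The following fragment is a sketch of such a wiring; the handler name, service name, and log file path are illustrative, and the filename parameter is what the getOption("filename") call above reads.

<deployment xmlns="http://xml.apache.org/axis/wsdd/"
            xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
  <handler name="logHandler" type="java:LogHandler">
    <parameter name="filename" value="/var/log/soap-faults.log"/>
  </handler>
  <service name="SecuredService" provider="java:RPC">
    <requestFlow>
      <handler type="logHandler"/>
    </requestFlow>
    <!-- service parameters (className, allowedMethods) go here -->
  </service>
</deployment>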
For more information about implementing and deploying message handlers using Apache Axis, refer to the architecture guide available at http://ws.apache.org/axis/java/architecture-guide.html. To implement XML security using Java, refer to the Apache XML security kit installation and API guide available at http://xml.apache.org/security/Java/. At the time of this writing, preparation of the JSR-105: XML Digital Signature APIs and JSR-106: XML Digital Encryption APIs specifications is still in progress; there are no current standard API mechanisms available for representing XML security using Java.
The Client sends a request message to its intended service endpoint. The request message is intercepted by the Message Interceptor Gateway in order to verify and validate the message's security requirements. The Message Interceptor Gateway delegates the request to an identity provider agent residing as a proxy infrastructure component that supports an underlying identity provider. The identity provider initiates the message-level security processing as required by the service endpoint. It takes responsibility for performing key security operations, such as authentication, authorization, signature verification, and so forth. Once the operations are complete, the identity provider issues a single sign-on token (SSO token) that represents the authentication and authorization decisions to allow or deny the message further processing at its intended endpoint.

In addition to processing authentication and authorization decisions, the identity provider agent must be able to incorporate custom mechanisms for verifying selected elements of messages for the purpose of identifying message correlation, timestamps, and element-level data validation. These mechanisms help in detecting message-level attacks that can lead to forged requests, buffer overflows, malicious data injection, infinite parsing loops, and other content-level threats. It is highly recommended to install agents on Web services infrastructure running on DMZ bastion hosts.
Consequences
Adopting the Message Inspector pattern facilitates message-level security processing capabilities and ensures message-level data integrity, confidentiality, non-repudiation, auditability, and traceability. It also safeguards the service endpoint from XML DoS attacks, forged tokens, malicious data injection, identity spoofing, message validation failure attacks, replay of selected message parts, schema poisoning, and element/parameter tampering. In addition, it provides the following benefits to the service endpoint:

Modularity and maintainability. Separating message-level security mechanisms off-loads resource-intensive security processing tasks from the underlying application endpoint. This enhances the security architecture with a modular subsystem dedicated to processing security headers. It also reduces the complexity of maintaining security-related mechanisms at the service endpoint, which results in better maintainability of the overall Web services security infrastructure.

Reusability. Since the Message Inspector pattern encapsulates all the message-level security mechanisms, it facilitates a common reusable solution for protecting multiple service endpoints.

Extensibility. The Message Inspector pattern offers extensibility by allowing you to incorporate more mechanisms and functionalities and by providing adherence to newer standards for enforcing message-level security. These mechanisms also remain independent of, and reduce tight coupling with, the underlying service endpoint infrastructure.
Reality Checks
Choosing the right strategy: an XML-aware appliance, a message handler chain, or an identity provider agent? Your implementation choice depends on the application service endpoint requirements and the series of operations required for verification and validation of the security headers of the message. Using a message handler chain or a vendor's identity provider agent strategy is extensible via programmatic interfaces. Programmatic interfaces allow adapting to custom element-level data verification and achieving security standards compliance by handling newer content-level threats and vulnerabilities.

Message-level security processing. Tasks such as validating signatures and encrypting and decrypting data are resource-intensive operations that often impact performance and result in processing overhead (for example, CPU, memory, and network bandwidth utilization). Adopting an XML-aware appliance strategy helps achieve performance and high-availability requirements, particularly while handling a large number of connections, binary security tokens, and larger message payloads, by off-loading this processing to specialized hardware designed specifically for these operations.

Concurrency implications. Many components associated with the Message Inspector pattern could be competing to update and read session data, which could lead to concurrency issues such as long wait times or deadlocks. A careful analysis of the XML traffic, message payloads, processing requirements, dependencies, and other possible scenarios should bring forth appropriate resolutions and trade-offs.
Related Patterns
Secure Logger [Web Tier]. The Secure Logger is used by the Message Inspector to log request messages.

Audit Interceptor [Business Tier]. The Audit Interceptor is used to capture security-related events.

Message Interceptor Gateway [Web Services Tier]. The Message Interceptor Gateway provides a single entry point by aggregating access to all service endpoints and centralizes security enforcement.
Secure Message Router

Forces
You want to use a security intermediary to support Web services-based workflow applications or to send messages to multiple service endpoints.

You want to configure element-level security and access control that apply message-level security mechanisms, particularly authentication tokens, signatures, and encrypted portions using XML digital signatures or XML Encryption.

You want to make sure to reveal only the required portions of a protected message to a target recipient.

You want to implement SSO by interacting with an identity provider authority to generate SAML assertions and XACML-based access control lists for accessing Web services providers and applications that rely on SAML assertions.

You want to incorporate a global logout mechanism that sends a logout notification to all participating service endpoints.

You want to notify participating service providers when an identity is registered, revoked, or terminated.

You want to dynamically apply security criteria through message transformations and canonicalizations before forwarding messages to their intended recipients.

You want to filter incoming message headers for security requirements and dynamically apply context-specific rules and other required security mechanisms before forwarding the messages to an endpoint.

You want to support document-based Web services, particularly by checking document-level credentials and attributes.

You want to enforce centralized logging of incoming messages, faults, messages sent, and the intended recipients of the messages.

You want to configure multiple message formats and support XML schemas that guarantee interoperability with intended service endpoints without compromising message security.

You want to meet mandated regulatory requirements defined by Web services partners.

You want to use a centralized intermediary that provides mechanisms for configuring message-level security headers supporting XML security specifications such as OASIS WS-Security, XML Signature, XML Encryption, SAML, XACML, and Liberty Alliance.
Solution
The Secure Message Router pattern is used to establish a security intermediary infrastructure that aggregates access to multiple application endpoints in a workflow or among partners participating in a Web services transaction. It acts on incoming messages and dynamically provides the security logic for routing messages to multiple endpoint destinations without interrupting the flow of messages. It makes use of a security configuration utility to apply endpoint-specific security decisions and mechanisms, particularly configuring message-level security that protects messages in their entirety or reveals selected portions to their intended recipients.

During operation, the Secure Message Router pattern works as a security enforcement point for outgoing messages before sending them to their intended recipients, providing endpoint-specific security services including SSO, access control, and message-level security mechanisms. In addition, it can provide identity-federation mechanisms that notify service providers and identity providers upon SSO, global logout, identity registration, and termination. In effect, a Secure Message Router must handle tasks such as:

Configuring message-level security that allows signing and encrypting an XML message or its selected elements intended for multiple service endpoints.

Configuring SSO access with multiple Web services endpoints using SAML tokens and XACML assertions that can act as SSO session tickets.

Supporting the use of XKMS-based PKI services to retrieve keys for signing and encrypting appropriate message parts specific to a service endpoint or to participation in a workflow.

Notifying all participating service providers and identity providers of SSO and global logouts.

Notifying all participating service providers and identity providers of identity registration, revocation, and termination.

Dynamically applying message transformation and canonicalization algorithms to meet recipient endpoint requirements or standards compliance.

Reconfiguring incoming messages to destination-specific message formats and supporting XML schemas that guarantee interoperability with the target service endpoint.

Centralizing logging of messages and recording of audit trails for incoming messages, faults, and their ultimate endpoints.

Supporting the use of a Liberty-compliant identity provider and agents for identity federation and establishing a circle of trust among participating service providers.
Structure
Figure 11-12 shows the class diagram of the Secure Message Router pattern.
The key participants of the pattern are as follows:

Client. The client of the Secure Message Router pattern can be any application that initiates a service request to access a single endpoint or multiple service endpoints. Typically, it can be any application component or a Message Interceptor Gateway that sends requests or responses in a Web services transaction.

Secure Message Router. The Secure Message Router allows configuring message-level security mechanisms and provides support for Liberty-enabled services such as federated SSO, global logout, identity registration, and termination services by interacting with a Liberty-enabled identity provider.

Message Configurator. The Message Configurator plays a secondary role in the Secure Message Router pattern. It implements all the methods intended for configuring message-level security for a specified endpoint. It makes use of configuration tables that identify the message, the service endpoint and intermediaries, message-level access privileges, validating XML schemas, transformations, and compliance requirements. It signs and encrypts messages in their entirety or selected portions, as specified in the configuration table.

Identity Provider. The Identity Provider represents a Liberty-compliant service provider that delivers federated-identity services such as federated single sign-on, global logout, identity registration, termination, authentication, authorization, and auditing.

Request. The Request message represents an XML document that is verified by all the required security-processing tasks carried out by the Secure Message Router.

ServiceEndpoint. The ServiceEndpoint represents the target object and the ultimate consumer of the message. In the case of the Secure Message Router pattern, the ServiceEndpoint can be a single provider or multiple service providers or applications that implement the business logic and processing of the client request.

WorkflowRecipient. The WorkflowRecipient represents an endpoint that participates in a workflow or in a collaboration. It is an intermediary endpoint representing an identity or business logic designated for processing the entire document or selected portions of an incoming message and then forwarding it to the next recipient in the workflow chain.
The Client initiates XML message requests intended for processing at multiple service endpoints in a workflow. These messages are forwarded to the messaging provider, which acts as a SOAP security intermediary that allows configuring and applying security-header mechanisms before sending the messages to the workflow participants. Upon receipt of a request message from the client, the messaging provider processes the message and then identifies and determines its intended recipients and their message-level security requirements. It makes use of a Message Configurator that provides the required methods and information for applying the required message-level security mechanisms and defining endpoint-specific requirements. The Message Configurator follows a security configuration table that specifies the message identifier, endpoints, and message-level security requirements related to representing the identity, signature, encryption, timestamps, correlation ID, and other endpoint-specific attributes. After configuring the message, the messaging provider initiates the workflow by dispatching the configured message to its first intended endpoint (that is, a workflow participant). The dispatched message ensures that only the privileged portions of the message can be viewed or modified by workflow participants, based on their identities and other information; all other portions of the message remain integral and confidential throughout the workflow process.
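As an illustration of what such a security configuration table might look like in code, the following is a minimal sketch; the class names (SecurityProfile, MessageConfigurator) and fields are hypothetical and simply mirror the attributes described above (endpoint, token type, signature, element-level encryption targets, timestamp, and correlation ID).

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical row of the security configuration table described above.
public class SecurityProfile {
    public String messageId;         // message identifier
    public String endpointUri;       // intended recipient endpoint
    public String tokenType;         // e.g., "UsernameToken" or "SAMLAssertion"
    public boolean signBody;         // apply an XML Signature to the body
    public List encryptXPaths;       // XPath expressions naming elements to encrypt
    public boolean addTimestamp;     // attach a timestamp for expiry checks
    public boolean addCorrelationId; // attach a correlation identifier
}

// Hypothetical configurator that resolves the profile for a message/endpoint pair.
public class MessageConfigurator {
    private final Map table = new HashMap();

    public void register(SecurityProfile profile) {
        table.put(profile.messageId + ":" + profile.endpointUri, profile);
    }

    public SecurityProfile lookup(String messageId, String endpointUri) {
        return (SecurityProfile) table.get(messageId + ":" + endpointUri);
    }
}

The router would consult the returned profile to sign, encrypt, and decorate each outgoing message before dispatching it to the corresponding workflow participant.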
During operation, the client makes use of the Secure Message Router to process the message, determine its intended endpoint recipients using the Message Configurator, and then interact with a Liberty-enabled identity provider to establish SSO with partner endpoints. The Secure Message Router communicates with the Liberty-enabled identity provider using a Liberty agent via a request-and-response protocol that works as follows:

1. The Secure Message Router initiates a request to the service provider, which sends a SAML authentication request to an identity provider, instructing the identity provider to provide an authentication assertion.

2. The identity provider responds with a SAML authentication response containing SAML artifacts or an error.

3. The Secure Message Router uses the SAML artifacts as an SSO token to interact with all partner endpoints and to initiate the transaction. The partner endpoints trust the SSO tokens issued by the Liberty-enabled identity provider that established the identity federation.

In addition to the above, the Secure Message Router also facilitates other Liberty-enabled services and tasks, such as notification of identity registration, termination, and global logout to all partner endpoints.
Consequences
Adopting the Secure Message Router pattern facilitates applying SSO mechanisms and trusted communication when the target message is exchanged among multiple recipients or is intended to be part of a workflow. It also allows selectively applying XML Encryption and XML Signature at the element level, ensuring that content is not exposed to everyone and that only recipients with the appropriate privileges can access the selected fragments of the message. This helps in securely sending messages to multiple recipients and ensuring that only selected fragments of the message are revealed to or modified by the privileged recipients. With support for Liberty-enabled identity providers, it establishes a circle of trust among participating endpoints and facilitates SSO by securely sharing identity information among the participating service endpoints. The Secure Message Router also ensures seamless integration and interoperability with all participating endpoints by sending destination-specific messages. In addition, the Secure Message Router pattern provides the following benefits:

Centralized routing. The Secure Message Router delivers a centralized message intermediary solution for applying message-level security mechanisms and enabling SSO access to multiple endpoints. This allows configuring a centralized access control and processing subsystem that incorporates all security-related operations for sending messages to multiple service endpoints. It offers centralized management of related services, including authentication, authorization, faults, encryption, audit trails, metering, billing, and so on.

Modularity and maintainability. Centralizing all security mechanisms and configuring access-control policies using a single intermediary keeps the message-sender application interfaces separated from security operations. This enhances a service with a modular subsystem designated for security and reduces complex tasks at the service endpoint of a Web services provider. It also saves significant application processing time and resources at the message-sending application endpoint.

Reusability and extensibility. The Secure Message Router pattern encapsulates all direct access to participating service endpoints, facilitating a common reusable solution for protecting multiple service endpoints. It also offers extensibility by allowing you to incorporate more message-level security mechanisms and functionalities specific to the target endpoints.

Improved testability. The Secure Message Router infrastructure separates the security architectural model from the underlying message sender's service endpoint. This improves the testability and extensibility of the security architecture.
Reality Checks
Enabling interoperability in a workflow? The Secure Message Router must pre-verify messages for interoperability before sending them to participants in a workflow or to intended recipients. The interoperability requirements of the recipient endpoint with regard to WS-I profiles, XML schemas, transformations, canonicalizations, and other endpoint-specific attributes must be specified using the Message Configurator.

Scalability? It is important to verify the Secure Message Router solution architecture for scalability in order to eliminate bottlenecks when communicating with multiple endpoints. Every Secure Message Router must be able to perform resource-intensive tasks such as applying signatures, encryption, and transformations without sacrificing scalability or overall performance.
Related Patterns
Message Inspector [Web Services Tier]. The Message Inspector pattern is used to verify and validate the quality of message-level security mechanisms applied to XML Web services.

Message Interceptor Gateway [Web Services Tier]. The Message Interceptor Gateway provides a single entry point by aggregating access to all service endpoints and centralizes security enforcement.
Best Practices
Web Services Infrastructure Security
1. End-to-End Transport Layer Security. During communication, secure the transport layer with appropriate message integrity and confidentiality mechanisms. The communication must be tamperproof, and the messages in transit must not be intercepted or accessed. Adopting two-way SSL/TLS communication with the use of both server and client certificates is often considered the best-practice solution (see the JSSE sketch following this list).

2. Standards-Based Security and Infrastructure. Web services are all about implementing standards-based messages and communication. Standards enable the adopted security mechanisms and countermeasures to work together seamlessly with architecture independence in all application layers and enable cross-platform support among Web services providers and client requesters. Thus, follow standards and adopt standards-based infrastructure providers to ensure security interoperability throughout the life cycle of the service. Using proprietary mechanisms affects interoperability with standards-based infrastructure providers.

3. Network Perimeter Protection. Use network firewalls and intrusion detection systems for identifying and protecting the Web services infrastructure against connection attacks such as network spoofing, man-in-the-middle, and DoS attacks. Use router mechanisms for filtering incoming and outgoing traffic, and use network access control lists (ACLs) for allowing authorized hosts and blocking traffic from unauthorized hosts based on IP addresses and protocols.

4. Minimization and Hardening. Prior to deployment testing of the host platform infrastructure, remove all unnecessary services, user accounts, OS/application libraries, and tools. All services that are considered insecure or vulnerable must be secured or replaced with alternatives (for example, SSH, SFTP, and so forth). Furthermore, it is important to consider adopting preventive measures such as securing file systems with encryption and tightened access control, and deploying host-based intrusion detection and monitoring systems that allow detection of suspicious events, policy violations, and abuses.

5. IP Filtering. Use IP filtering mechanisms to provide packet filtering based on IP address, port, protocol, network interface, and traffic direction. This helps to safeguard the Web services endpoint host by allowing only messages passed through authorized hosts and proxy servers.

6. XML-Aware Security Infrastructure. Adopt an XML-aware security infrastructure, such as an XML firewall or a Web services security solution, that can proactively detect and protect against XML DoS attacks, malformed or corrupted XML, malicious SOAP/XML payloads, and unsupported message attachments. These issues can disrupt the infrastructure by consuming excessive bandwidth and can degrade performance with infinite processing loops that compromise the availability of the service endpoint.

7. Access Protection. Make sure direct access to all service endpoints is disabled. Use an XML firewall or a Web-proxy infrastructure that masks all the underlying service endpoints and communicates through network address translation (NAT) or URL rewriting mechanisms. This helps in enforcing transport-layer security (such as two-way SSL/TLS) and in inspecting all incoming traffic for XML and content-layer vulnerabilities before processing at the application service endpoint.

8. XML Firewall Appliance Adoption for Performance. XML firewall appliances can recognize and provide protection against XML-related malicious attacks. In particular, they can enhance message throughput significantly by reducing the processing time involved in resource-intensive tasks such as XML parsing, XML schema validation, XML encryption and decryption, and XML Signature validation.

9. Origin Host Verification. Verify the ID of the host initiating the Web services request before processing the message. This helps in identifying man-in-the-middle, message replay, impersonation, and illegitimate-request attacks initiated from unauthorized hosts. When it is determined that a request is from an unauthorized host, the service endpoint must drop the request without further processing.

10. Adopt Hardware Cryptographic Devices. Cryptographic keys play a vital role in applying digital signatures and encryption mechanisms. It is important to safeguard the keys so that they are not accessible to hackers, because they are vulnerable to attack by modification, duplication, or substitution. Using hardware cryptographic devices ensures safer and more tamper-proof key management and helps in off-loading computationally intensive operations.

11. VPN Access. Consider using VPN-based limited access for Web services solutions deployed within an intranet or an extranet (extended to potential consumers). Using a VPN reduces security risks from external intrusions.

12. Employ Honeypots. Honeypots are intrusion detection decoys deployed with the intent of misleading potential attackers and thereby providing early warning to systems administrators. In-depth analysis of a honeypot's Web service traffic can yield useful knowledge for purposes of both research and defense against attacks.
registration, and verification tasks intended for XML signatures and XML Encryption. As a result, it delivers performance gains by reducing the message payload and off-loading all processing of key information to the XKMS trust service.

24. Fault Handling. When a Web services client request cannot be completed or fails, the service-provider endpoint must return an error as a SOAP Fault element. This is represented descriptively in the detail element, which provides all of the error information supplied by the service endpoint. In the case of failures related to an application service, particularly exceptions from the underlying application and services, the faults expose the weaknesses of the application and allow hackers to design potential exploits based on them. It is important to proactively identify those faults and redefine them with information that does not reveal the weaknesses of the underlying service endpoint.

25. Logging and Recording of Audit Trails. Create secure transaction logs and audit trails that can be used for forensic investigation of life-cycle events and transactions performed by the service provider based on the requests made by the consumer. This verifies that the initiating clients are accountable for their requested operations, with irrefutable proof of the originating request or response. The audit trails provide information that can be used to monitor resources, system break-ins, and failed login and breach attempts; to determine security loopholes, violations, and identity spoofing; and to identify users attempting to circumvent security, either intentionally or unintentionally.

26. Avoiding Composability Issues. All exposed services must define for the service-requester clients the security requirements that relate to transport-level and message-level security mechanisms. It is important to verify the ability to compose the messages, including the required security mechanisms and the endpoint-specific message payload. The composability of the message should not cause any unintentional functional side effects.

27. Identity and Policy Management. Web services should use identity information, trust policies, and access privileges from the underlying applications and should map them between service providers and consumers within a domain or across multiple domains. The identity and policies associated with users can be used to define their roles and access rules, which are required as part of requests and responses between the communicating parties. Adopting a Liberty-enabled identity provider with the identity-federation capabilities required by the Liberty Alliance specifications helps to aggregate Web services without compromising security by delivering federated SSO, global logout, identity registration, and termination.
33. Fault Tolerance. Mission-critical Web services demand fault-tolerance capabilities that provide reliability and support unpredictable and voluminous concurrent workloads. Handling such requirements requires a recovery mechanism that identifies the service failure, activates a new service-provider instance, and then reads the logs of the outstanding failed requests to continue processing. Capturing the state of outstanding requests in order to repeat processing and restart a new service might degrade performance. To meet performance requirements, consider fault-tolerance capabilities for service requests that involve a business transaction, but not for inquiry transactions. Ensure all security tasks are processed as prescribed by the service provider, even when the service endpoint runs in fault-tolerant mode.

34. Configuration Management. Follow a secure configuration management practice to administer all configuration information applied to the service endpoint. Make sure you adopt a security strategy that restricts access to configuration information to privileged users based on their roles. Any opening for unauthorized access to configuration information may introduce a vulnerability that can compromise the security of all exposed services.
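As a minimal illustration of best practice 1, the following JSSE sketch enforces two-way SSL by requiring a client certificate on every accepted connection; it assumes the keystore and truststore are supplied externally through the standard javax.net.ssl system properties, and the port number is illustrative.

import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;

public class MutualAuthListener {
    public static void main(String[] args) throws Exception {
        // Uses the default SSL context, configured externally via
        // -Djavax.net.ssl.keyStore=... and -Djavax.net.ssl.trustStore=...
        SSLServerSocketFactory factory =
            (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        SSLServerSocket server =
            (SSLServerSocket) factory.createServerSocket(8443);
        // Enforce two-way SSL: the handshake fails unless the client
        // presents a certificate trusted by the configured truststore.
        server.setNeedClientAuth(true);
        server.accept(); // handle the accepted connection as usual
    }
}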
Pitfalls
35. Vendor-Specific Security APIs. Adopting vendor-specific API mechanisms often affects interoperability and integration of services across vendors due to failures related to message compliance, mismatched cryptographic algorithms, and schema validation. Choose API mechanisms evolved through community processes, and adopt a standards-based infrastructure that enables interoperability and seamless integration with other standards-based technology providers.

36. Content Encryption. Encrypting messages in their entirety often results in abnormally large payloads, increases network bandwidth utilization, and causes processing overhead. Consider adopting element-level encryption, which allows encrypting selected portions of messages, and then use secure communication channels that ensure data integrity and confidentiality during transit and storage.
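To illustrate the element-level alternative recommended in pitfall 36, here is a minimal sketch using the Apache XML Security (Santuario) XMLCipher API to encrypt a single element in place rather than the whole document; key handling is simplified, and in practice the symmetric data key would itself be wrapped for the recipient as an EncryptedKey.

import java.security.Key;
import javax.crypto.KeyGenerator;
import org.apache.xml.security.encryption.XMLCipher;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class ElementEncryptor {
    static {
        org.apache.xml.security.Init.init(); // initialize the library
    }

    // Encrypts only the given element (for example, a CreditCard element),
    // leaving the rest of the document readable.
    public static void encryptElement(Document doc, Element target)
            throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        Key dataKey = keyGen.generateKey();

        XMLCipher cipher = XMLCipher.getInstance(XMLCipher.AES_128);
        cipher.init(XMLCipher.ENCRYPT_MODE, dataKey);
        // Replaces the target element with an EncryptedData element in place.
        cipher.doFinal(doc, target);
    }
}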
References
[XML-DSIG] XML Signature Syntax and Processing Rules. W3C Recommendation, February 12, 2002. http://www.w3.org/TR/xmldsig-core/

[XML-ENC] XML Encryption Syntax and Processing Rules. W3C Recommendation, December 10, 2002. http://www.w3.org/TR/xmlenc-core/

[OASIS] WS-Security 1.0 Standard and Specifications. http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0.pdf

[WS-I] Web Services Security Basic Profile 1.0, Working Group Draft. http://www.ws-i.org/Profiles/BasicSecurityProfile-1.0-2004-05-12.html

[W3C] Web Services Architecture. W3C Working Group Note, February 11, 2004. http://www.w3.org/TR/ws-arch/

[CJP2] Alur, Crupi, and Malks. Core J2EE Patterns, Second Edition. Prentice Hall, 2003.

[Sun J2EE Blueprints] Designing Web Services with the J2EE Platform, 2nd Edition: Guidelines, Patterns, and Code for Java Web Services. http://java.sun.com/blueprints/guidelines/designing_webservices/

[DWS] Nagappan, Skoczylas, et al. Developing Java Web Services: Architecting and Developing Java Web Services. Wiley, 2002.

[SAML] OASIS Security Services TC, SAML Specifications. http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=security

[SAMLP] WS-I SAML Token Profile Version 1.0, Working Draft. http://www.ws-i.org/Profiles/SAMLTokenProfile-1.0-2005-01-19.html

[REL] REL Token Profile. http://www.ws-i.org/Profiles/RELTokenProfile-1.0-2005-01-19.html

[XKMS] W3C XML Key Management Specification 2.0. http://www.w3.org/TR/xkms/
Assertion Builder

Forces
You want to avoid duplicate program logic for building authentication assertions, authorization decision assertions, and attribute statements.

You need to apply common processing logic to similar security assertion statements.

You need a helper class that extracts the similar processing logic used to build SAML assertion statements instead of embedding it in the authentication and authorization processes.

You want the flexibility to support client requests from a servlet, an EJB client, or a SOAP client.
Solution
Use an Assertion Builder to abstract the similar processing control logic used to create SAML assertion statements. The Assertion Builder pattern encapsulates the processing control logic for creating SAML authentication statements, authorization decision statements, and attribute statements as a service. Each assertion statement generation shares similar program logic for creating the SAML header (for example, the schema version) and instantiating the assertion type, conditions, and subject statement information. The common program logic can also be used to avoid lock-in with a specific product implementation. By exposing the Assertion Builder as a service, developers can also access SAML assertion statement creation using SOAP protocol binding without creating separate protocol-handling routines.

Under a single sign-on environment (refer to Figure 12-3), a client authenticates with a single sign-on service provider (also known as the source site) and later requests access to a resource from a destination site. Upon successful authentication, the source site is able to redirect the client request to the destination site, assuming that the source site has a security engine that determines the client is allowed to access the destination site. The destination site then issues a SAML request to ask for an authentication assertion from the source site. The Assertion Builder is used to assemble sign-on information and user credentials to generate SAML assertion statements. This is applicable to both the source site (processing SAML responses) and the destination site (processing SAML requests). The destination site then responds to the client for resource access. Subsequently, the destination site handles authorization decisions and attribute statements to determine what access level is allowed for the client request.

Figure 12-1 depicts a high-level architecture diagram of the Assertion Builder pattern. In a typical application scenario, developers can design an Assertion Builder to provide the service of generating SAML authentication statements, SAML authorization decision statements, and SAML attribute statements. The Assertion Builder creates a system context (AssertionContext) and produces a SAML assertion statement (Assertion), which can be an authentication statement, an authorization decision statement, or an attribute statement. An EJB client can perform a JNDI service lookup of the SAML Assertion Builder service and invoke the preliminary utilities to assemble the SAML headers. After that, it invokes the relevant statement-generation function, such as the authentication statement. Similarly, a servlet can perform service invocation by acting as an EJB client. For a SOAP client, the Assertion Builder service bean needs to be exposed through a WSDL. Upon service invocation, the Assertion Builder utilities marshal and unmarshal the SOAP envelope when the protocol binding is set to SOAP.
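The EJB client lookup described above might look like the following sketch; the JNDI name and the AssertionBuilderHome/AssertionBuilderService interfaces are hypothetical names that depend on how the Assertion Builder service bean is actually deployed.

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

public class AssertionBuilderClient {
    public static void main(String[] args) throws Exception {
        Context ctx = new InitialContext();
        // JNDI name is illustrative and deployment-specific.
        Object ref = ctx.lookup("java:comp/env/ejb/AssertionBuilderService");
        // Home and remote interfaces are hypothetical names for the
        // deployed Assertion Builder service bean.
        AssertionBuilderHome home = (AssertionBuilderHome)
            PortableRemoteObject.narrow(ref, AssertionBuilderHome.class);
        AssertionBuilderService service = home.create();
        // service is now ready to assemble SAML headers and generate
        // assertion statements, as described above.
    }
}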
Structure
Figure 12-2 shows a class diagram for the Assertion Builder service. The core Assertion Builder service consists of two important classes: AssertionContext and Assertion. The AssertionContext class defines the public interfaces for managing the system context when creating SAML assertion statements, SAML assertion types, and the protocol binding. If the SAML binding is set to SOAP over HTTP, then the Assertion Builder service needs to wrap the SAML artifacts with a SOAP envelope instead of the HTTP header. It has a corresponding implementation class called AssertionContextImpl.
The Assertion class refers to the SAML assertion statement object. It contains the basic elements of subject information (such as the subject's IP address and the subject's DNS address), the source Web site, and the destination Web site for the creation of SAML assertion statements. There is a corresponding data class called Subject, which refers to the principal for the security authentication or authorization; each Assertion contains a Subject element. The Assertion class is extended into AuthenticationStatement, AuthorizationDecisionStatement, and AttributeStatement. Each of these three classes is responsible for creating the corresponding SAML assertion statement according to the SAML 2.0 specification. Attribute is a data class that encapsulates the distinctive characteristics of a subject and denotes the attributes in a SAML attribute statement.
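Based on how Example 12-1 (later in this section) uses these classes, the AssertionContext interface could be sketched as follows; the signatures are inferred from the calls in the example rather than taken from a published API.

import org.w3c.dom.Document;

// Inferred from the usage in Example 12-1 (com.csp.identity package).
public interface AssertionContext {
    // Select which statement type subsequent calls will build, e.g.,
    // AuthenticationStatement.ASSERTION_TYPE.
    void setAssertionType(String assertionType);

    // Record the authentication method for authentication statements.
    void setAuthenticationMethod(String method);

    // Assemble the SAML header and serialize the given statement
    // (authentication, authorization decision, or attribute) to a DOM node.
    Document createAssertionStatement(Assertion statement);
}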
This is a scenario for a Web browser interacting with the source and destination sites with single sign-on (i.e., the browser profile); it is not applicable to a server-to-server single sign-on scenario. The Assertion Builder pattern is implemented in Steps 3 through 14. The other steps are included only to provide better context.

1. Client accesses resources provided by the service provider (SourceSite).

3. SourceSite creates an instance of AssertionBuilder. In this scenario, the instance is for creating a SAML authentication assertion request.

4. DestinationSite also creates an instance of AssertionBuilder. This is for creating a SAML authentication assertion response.

5. AssertionBuilder retrieves the SAML protocol binding (for example, SOAP binding) for the interaction with SourceSite or DestinationSite.

6. SourceSite redirects the resource access to the destination site DestinationSite via URL redirection.

7. Client accesses the artifact receiver URL.

8. AssertionBuilder assembles information to build the SAML header and tokenizes the user credentials (i.e., creates a security token from the user credentials) to facilitate the interaction with DestinationSite.

9. AssertionBuilder creates the relevant protocol binding. For example, if this is a SOAP binding, then AssertionBuilder creates the SOAP envelope and uses the SOAP binding protocol.

10. AssertionBuilder creates a SAML assertion statement (for example, a SAML authentication assertion request) and sends it to DestinationSite.

11. DestinationSite issues a SAML request for any authentication assertion statement to SourceSite.

12. AssertionBuilder assembles information to build the SAML header and tokenizes the user credentials to facilitate the interaction with SourceSite.

13. AssertionBuilder creates the relevant protocol binding. For example, if this is a SOAP binding, then AssertionBuilder creates the SOAP envelope and uses the SOAP binding protocol.

14. AssertionBuilder creates a SAML assertion statement (for example, a SAML authentication assertion response) and sends it to SourceSite.

15. SourceSite issues a SAML response to DestinationSite.

16. DestinationSite responds to the client with the requested resources at the destination site.
assertion statements within the customer LAN. By tracking the subject IP and DNS addresses, security architects and developers are able to quickly detect any SAML assertion statements with unusual IP and DNS addresses.
Consequences
By employing the Assertion Builder pattern, developers benefit in the following ways:

Addresses broken authentication flaws. The Assertion Builder pattern can be used to build a helper class that creates and sends SAML authentication statements between trusted service providers. The SAML authentication statement denotes security information regarding authentication data about the subject. This helps to safeguard the authentication mechanisms from a potential broken authentication flaw.

Addresses broken access control risks. The Assertion Builder pattern can be used to create SAML authorization decision and attribute statements. The SAML authorization decision statement denotes a critical decision about granting or denying resource access for a subject. This helps to safeguard the access control mechanisms from potential broken access control flaws.

Enables transparency by encapsulating the assertion statements. The Assertion Builder pattern encapsulates the creation of three different assertion statements with similar processing logic, making it easier to maintain and use. In addition, architects and developers do not need to embed the logic of building SAML assertion statements in the business processing logic.

Reduces the complexity of integration. The Assertion Builder pattern allows flexible service invocation from a variety of clients, including servlets, EJB clients, and SOAP clients. It reduces the integration effort with different platforms.
Sample Code
Example 12-1 shows a sample code excerpt for creating an Assertion Builder for SAML assertion requests. The example creates a SAML authentication statement, a SAML authorization decision statement, and a SAML attribute statement. It defines the assertion type (using the setAssertionType method), initializes the assertion statement object, and sets relevant attributes (for example, setAuthenticationMethod for an authentication statement) for the corresponding assertion statement object. Then it uses the createAssertionStatement method to generate the SAML assertion statement in a document node object. It checks its validity upon completion of the SAML statement creation. It also retrieves the service configurations and protocol bindings (for example, SOAP over HTTP binding) before building SAML assertion statements.
protected static final String subjectName = "Maryjo Parker";
protected static final String subjectQualifiedName =
    "cn=Maryjo, cn=Parker, ou=authors, o=coresecurity, o=com";

// authentication assertion specific
protected com.csp.identity.AuthenticationStatement authenticationStatement;
protected org.w3c.dom.Document authAssertionDOM;

// authorization decision assertion specific
protected com.csp.identity.AuthorizationDecisionStatement authzDecisionStatement;
protected static final String decision = "someDecision";
protected static final String resource = "someResource";
protected java.util.Collection actions = new ArrayList();
protected java.util.Collection evidence = new ArrayList();
protected org.w3c.dom.Document authzDecisionAssertionDOM;

// attribute assertion specific
protected com.csp.identity.AttributeStatement attributeStatement;
protected com.csp.identity.Attribute attribute;
protected Collection attributeCollection = new ArrayList();
protected org.w3c.dom.Document attributeStatementDOM;

/** Constructor - Creates a new instance of AssertionBuilder */
public AssertionBuilder() {

    // common
    assertionFactory = new com.csp.identity.AssertionContextImpl();
    subject = new com.csp.identity.Subject();
    subject.setSubjectName(subjectName);
    subject.setSubjectNameQualifier(subjectQualifiedName);
    assertionFactory.setAssertionType
        (com.csp.identity.AuthenticationStatement.ASSERTION_TYPE);

    // ====create authentication statement=============
    // create authentication assertion object attribute
    authenticationStatement =
        new com.csp.identity.AuthenticationStatement();
    assertionFactory.setAuthenticationMethod(authMethod);
    authenticationStatement.setSourceSite(sourceSite);
    authenticationStatement.setDestinationSite(destinationSite);
    authenticationStatement.setSubjectDNS(subjectDNS);
    authenticationStatement.setSubjectIP(subjectIP);
    authenticationStatement.setSubject(subject);

    // create authentication statement
    System.out.println("**Create authentication statement **");
    authAssertionDOM =
        assertionFactory.createAssertionStatement
            ((com.csp.identity.AuthenticationStatement)
                authenticationStatement);
    //===end of create authentication statement ========

    //====create authorization decision statement=======
    // create authorization decision assertion object attribute
    authzDecisionStatement =
        new com.csp.identity.AuthorizationDecisionStatement();
    authzDecisionStatement.setSourceSite(sourceSite);
    authzDecisionStatement.setDestinationSite(destinationSite);
    authzDecisionStatement.setSubjectDNS(subjectDNS);
    authzDecisionStatement.setSubjectIP(subjectIP);
    authzDecisionStatement.setResource(resource);
    authzDecisionStatement.setDecision(decision);
    authzDecisionStatement.setSubject(subject);
    assertionFactory.setAssertionType
        (com.csp.identity.AuthorizationDecisionStatement.ASSERTION_TYPE);

    // Prepare evidence
    this.evidence.add("Evidence1");
    this.evidence.add("Evidence2");
    this.evidence.add("Evidence3");
    authzDecisionStatement.setEvidence(evidence);

    // Prepare action
    this.actions.add("Action1");
    this.actions.add("Action2");
    this.actions.add("Action3");
    authzDecisionStatement.setActions(actions);

    // create authorization decision statement
    System.out.println("**Create authorization decision statement **");
    authzDecisionAssertionDOM =
        assertionFactory.createAssertionStatement
            ((com.csp.identity.AuthorizationDecisionStatement)
                authzDecisionStatement);
    // ===end of create authorization statement ======

    // =====create attribute statement =============
    // create attribute assertion object attribute
    attributeStatement = new com.csp.identity.AttributeStatement();
    attributeStatement.setSourceSite(sourceSite);
    attributeStatement.setDestinationSite(destinationSite);
    attributeStatement.setSubjectDNS(subjectDNS);
    attributeStatement.setSubjectIP(subjectIP);
    attributeStatement.setSubject(subject);
    assertionFactory.setAssertionType
        (com.csp.identity.AttributeStatement.ASSERTION_TYPE);

    // Prepare attribute
    attribute = new com.csp.identity.Attribute();
    this.attributeCollection.add("Attribute1");
    this.attributeCollection.add("Attribute2");
    this.attributeCollection.add("Attribute3");
    this.attribute.setAttribute(attributeCollection);
    attributeStatement.addAttribute(attribute);

    // create attribute statement
    System.out.println("**Create attribute statement **");
    attributeStatementDOM =
        assertionFactory.createAssertionStatement
            ((com.csp.identity.AttributeStatement) attributeStatement);
    // ===end of create attribute statement ===
}

public static void main(String[] args) {
    new AssertionBuilder();
}
}
Example 12-2 shows how an authentication statement is implemented. An authentication statement extends the object Assertion, which is an abstraction of SAML assertion statements (including the SAML authentication statement, authorization decision statement, and attribute statement). This authentication statement is intended to implement how a SAML authentication assertion is created. The createAuthenticationStatement method shown in the previous section invokes the create method of the AuthenticationStatement class in order to create a SAML authentication statement. The create method can be implemented using custom SAML APIs provided by an open source or commercial SAML implementation. In this example, the create method uses a constructor from the OpenSAML library to create a SAML authentication statement and checks the validity of the SAML assertion statement.
// Create SAML authentication statement
// using OpenSAML 1.0
org.opensaml.SAMLAuthenticationStatement samlAuthStat =
    new org.opensaml.SAMLAuthenticationStatement
        (samlSubject, authInstant, samlSubjectIP,
         samlSubjectDNS, null);
samlAuthStat.checkValidity();
System.out.println("DEBUG - The current SAML authentication statement is valid!");
} catch (org.opensaml.SAMLException se) {
    System.out.println("ERROR - Invalid SAML authentication assertion statement");
    se.printStackTrace();
}
}
}
Example 12-3 shows an example of creating a system context for the Assertion Builder pattern, which stores the service configuration and protocol binding information for creating and exchanging SAML assertion statements. The AssertionContextImpl class is an implementation of the public interfaces defined in the AssertionContext class. This allows better flexibility in adding extensions or making program changes in the future.
 * @return boolean true/false
 **/
public boolean isValidStatement() {
    // to be implemented
    return false;
}

/** set authentication method
 *
 * @param String authentication method
 **/
public void setAuthenticationMethod(String authMethod) {
    this.authMethod = authMethod;
}

/** get authentication method
 *
 * @return String authentication method
 **/
public String getAuthenticationMethod() {
    return this.authMethod;
}

/** create SAML assertion statement
 *
 * Note - the @return has not been implemented.
 *
 * @return org.w3c.dom.Document xml document
 **/
public org.w3c.dom.Document createAssertionStatement(Object assertObject) {
    System.out.println("DEBUG - Create SAML assertion in XML doc");
    if (this.assertionType.equals
            (com.csp.identity.AuthenticationStatement.ASSERTION_TYPE)) {
        // create SAML authentication statement using OpenSAML 1.0
        authStatement =
            (com.csp.identity.AuthenticationStatement) assertObject;
        authStatement.create();
    } else if (this.assertionType.equals
            (com.csp.identity.AuthorizationDecisionStatement.ASSERTION_TYPE)) {
        // create SAML authorization decision statement
        // using OpenSAML 1.0
        authzDecisionStatement =
            (com.csp.identity.AuthorizationDecisionStatement) assertObject;
        authzDecisionStatement.create();
    } else if (this.assertionType.equals(
            com.csp.identity.AttributeStatement.ASSERTION_TYPE)) {
        // create SAML attribute statement using OpenSAML 1.0
        attributeStatement =
            (com.csp.identity.AttributeStatement) assertObject;
        attributeStatement.create();
    }
    return null;
}

/** get SAML assertion statement
 *
 * @return org.w3c.dom.Document xml document
 **/
public org.w3c.dom.Document getAssertionStatement() {
    // to be implemented
    return null;
}

/** remove assertion statement
 **/
public void removeAssertionStatement() {
    // to be implemented
}

/** create assertion reply
 *
 * @return org.w3c.dom.Document xml document
 **/
public org.w3c.dom.Document createAssertionReply(Object assertionRequest) {
    ...
    return null;
}

/** get assertion reply
 *
 * @return org.w3c.dom.Document xml document
 **/
public org.w3c.dom.Document getAssertionReply() {
    ...
    return null;
}

/** remove assertion reply
 **/
public void removeAssertionReply() {
    ...
}

/** set protocol binding
 *
 * @param String protocol binding
 **/
public void setProtocolBinding(String protocolBinding) {
    ...
}

/** get protocol binding
 *
 * @return String protocol binding
 **/
public String getProtocolBinding() {
    ...
    return null;
}
}
Reality Check
Should we build assertion builder code from scratch? A few security vendor products provide out-of-the-box SAML assertion statement builder capability. In this case, architects and developers may either directly invoke the SAML assertion builder function or abstract it under the Assertion Builder pattern.

Capturing IP address. Although the SAML assertion statement allows capturing the source IP address, it is rather difficult to capture the real IP address in practice, because real IP addresses can be translated into another virtual IP address or hidden by proxies. However, it is still a good practice to capture the IP address for verifying the origin host for authenticity, troubleshooting, and other auditing purposes.

Dependency on authentication infrastructure. It is plausible to enable single sign-on security by using SAML assertions alone. However, SAML assertions depend on an existing authentication infrastructure.

Migration from SAML 1.1 to SAML 2.0. There are some deprecated items and changes in SAML 2.0. The SAML specifications do not provide guidance on how to migrate from SAML 1.1 to SAML 2.0, or on how to maintain compatibility between trading partners running different SAML versions. Thus, it is important to cater to service versioning of SAML messages and to migrate the messaging infrastructure to SAML 2.0.
Related Patterns
Single Sign-on Delegator. Single Sign-on Delegator provides a delegate design approach to connect to remote security services and enables single sign-on within the same security domain or across multiple security domains. It is a good fit to use Assertion Builder in conjunction with Single Sign-on Delegator.
business logic creates many limitations for scalability in client-side performance, network connectivity, server-side caching, and support of a large number of simultaneous connections. In addition, you need to explicitly handle different types of network or system exceptions in the individual vendor-specific business logic while invoking the remote security services directly.

One related problem is software code maintenance and release control. If any identity management service interface changes, for example, due to a change in security standards or API specifications, developers have to make the corresponding client-side code changes, and the client-side code needs to be redeployed. This is a considerable software release control and maintenance issue, because the tightly coupled architecture model is not flexible enough to accommodate software code changes.

Another problem is the lack of a flexible programming model for adding or managing new identity management functionality when the existing vendor-specific APIs do not support it. For example, if the current security application architecture does not support global logout, it defeats the purpose of single sign-on in an integration environment and may create authentication issues and session hijacking risks. At worst, developers are required to rewrite the security application architecture every time they integrate a new application. Developers may also need to add newer functionality in order to achieve single sign-on.
Forces
- You want to minimize the coupling between the clients and the remote identity management services for better scalability or for easier software maintenance.
- You want to streamline adding or removing identity management or single sign-on security service components (such as global logout) without reengineering the client or back-end application architecture.
- You want to hide the details of handling heterogeneous service invocation bindings (for example, EJB and asynchronous messaging) and the service configuration of multiple security service components (for example, identity server and directory server) from the clients.
- You want to translate network exceptions caused by accessing different identity management service components into application or user exceptions.
Solution
Use a Single Sign-on Delegator to encapsulate access to identity management and single sign-on functionalities, allowing independent evolution of loosely coupled identity management services while providing system availability.

A Single Sign-on Delegator resides in the middle tier between the clients and the identity management service components. It delegates the service request to the remote service components. It decouples the physical security service interfaces and hides the details of service invocation, retrieval of security configuration, and credential token processing from the client. In other words, the client does not interact directly with the identity management service interfaces. The Single Sign-on Delegator in turn prepares for single sign-on, configures the security session, looks up the physical security service interfaces, invokes the appropriate security service interfaces, and performs global logout at the end. Such a loosely coupled application architecture minimizes the change impact to the client even when the remote security service interfaces require a software upgrade or business logic changes.

A Business Delegate pattern would not be appropriate here, because it simply delegates the service request to the corresponding remote business components. It does not cater to configuring the security session or delegating to the remote security service components using the appropriate security protocols and bindings. Alternatively, developers can craft their own program construct to access remote service components. Using a design pattern approach to refactor the similar security configuration (or preambles) for multiple remote security services into a single and reusable framework enables higher reusability. The Single Sign-on Delegator pattern refactors similar security session processing logic and security configuration, and increases reusability.

To implement the Single Sign-on Delegator, you apply a delegator pattern that shields off the complexity of invoking service requests for building SAML assertions, processing credential tokens, performing global logout, initiating security service provisioning requests, and any custom identity management functions from heterogeneous vendor product APIs and programming models. Developers can create a unique service ID for each remote security service, create a service handler for each service interface, and then invoke the target security service. Under this delegator framework, it is easy to use the SAML protocol to perform single sign-on across remote security services. Similarly, it is also flexible enough to implement global logout by sending logout requests to each remote service, because the delegator holds all unique service IDs and the relevant service handlers (see the sketch that follows).

You can also use the Single Sign-on Delegator in conjunction with the J2EE Connector Architecture. Single Sign-on Delegator can populate the security token or security context to legacy system environments, including ERP systems or EIS. If ERP systems have their own connectors or adapters, Single Sign-on Delegator can also exchange security tokens by encapsulating their connector APIs. One major benefit of using the Single Sign-on Delegator is the convenience of encapsulating access to vendor-specific identity management APIs. Doing so shields the business components from changes in the underlying security vendor product implementation.
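Because the delegator holds the handler for every remote service it has opened, global logout reduces to iterating over those handlers. The following is a minimal sketch only, assuming the SSOContextImpl and SSOServiceProvider classes introduced in the Structure and Sample Code sections below; the globalLogout method itself is illustrative and not part of the pattern's sample code.

// Illustrative global logout helper inside the delegator.
// Assumes servicesMap holds one SSOContextImpl per remote service,
// as in Example 12-4; this method is a sketch, not sample code
// from the pattern itself.
public void globalLogout(Object securityToken)
        throws com.csp.identity.SSODelegatorException {
    if (!validateSecurityToken(securityToken)) {
        throw new com.csp.identity.SSODelegatorException("Invalid security token");
    }
    for (java.util.Iterator it = servicesMap.values().iterator(); it.hasNext();) {
        com.csp.identity.SSOContextImpl context =
            (com.csp.identity.SSOContextImpl) it.next();
        com.csp.identity.SSOServiceProvider provider = context.getCompRef();
        if (provider != null) {
            provider.closeService();   // send logout to the remote service
            context.setStatus(context.REMOTE_SERVICE_CLOSED);
            context.removeCompRef();   // drop the service handler reference
        }
    }
}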
Structure
Figure 12-4 depicts a class diagram for the Single Sign-on Delegator. The client accesses the Single Sign-on Delegator component to invoke the remote security service components (SSOServiceProvider). The delegator (SSODelegator) retrieves security service configuration and service binding information from the system context (SSOContext) based on the client request. In other words, the client may be using a servlet, EJB, or Web services to access the identity management service. The delegator can also look up the service location via JNDI look-up or service registry look-up (if this is a Web service) according to the configuration details or service bindings. This can simplify the design construct for accommodating multiple security protocol bindings.

There are three important classes in Figure 12-4: SSOContext, SSODelegatorFactory, and SSOServiceProvider. The SSOContext is a system context that encapsulates the service configuration and protocol binding for the remote service providers. It also stores the service status and the component reference (also known as handler ID) for the remote service provider. The SSOContextImpl class is the implementation of the SSOContext class. The SSODelegatorFactory defines the public interfaces for creating and closing a secure connection to the remote service provider. It takes a security token from the service requester so that it can validate the connection service request under a single sign-on environment. When a secure connection is established, the SSODelegatorFactory also creates an SSOToken used internally to reference the connection with the remote service provider. The SSODelegatorFactoryImpl class is the implementation of SSODelegatorFactory. The SSOServiceProvider class defines the public interfaces for creating, closing, or reconnecting to a remote service. Figure 12-4 shows two examples of service providers (SSOServiceProviderImpl1 and SSOServiceProviderImpl2) that implement the public interfaces.
1. Client wants to invoke remote services via SSODelegator. SSODelegator verifies whether Client is authorized to invoke remote security services.
2. Upon successful verification, Client creates a delegator instance of SSODelegator.
3. SSODelegator retrieves the service configuration (for example, EJB class name) and protocol bindings (for example, RMI method for EJB) from SSOContext.
4. SSOContext sends the service configurations and protocol bindings to SSODelegator.
5. SSODelegator creates a single sign-on session using the method createSSOConnection and records the user ID and timestamp in the session information using the method setSessionInfo.
6. Client initiates a request to invoke a remote service.
7. SSODelegator creates a service connection to invoke the remote service provider using the method createService.
8. SSODelegator retrieves the service configuration details (for example, EJB class) and protocol bindings for the remote service.
9. SSODelegator looks up the service location of the remote security service using ServiceLocator (for example, via JNDI look-up for the remote EJB).
10. SSODelegator invokes the remote security service by class name or URI.
11. SSODelegator adds the component reference (also referred to as handler ID) to the SSOContext.
12. Client requests to log out and close the connection of existing remote security services.
13. SSODelegator begins to close the security service.
14. SSODelegator initiates a closeSSOConnection to close the remote service.
15. SSODelegator removes the component reference from SSOContext. It may also remove any existing session information by invoking the method removeSessionInfo.
16. SSODelegator now completes closing the single sign-on session.
17. SSODelegator notifies Client of the successful global logout and closing of security services.
1. Client initiates a single sign-on request to access resources under the internal identity provider (or external identity provider).
2. ServiceProvider creates an instance of single sign-on delegator.
3. SSODelegator initiates an authentication assertion request with AssertionBuilder.
4. Before AssertionBuilder creates an authentication assertion, Client needs to perform an authentication with the identity service provider first. Thus, ServiceProvider redirects the authentication request from Client.
5. ServiceProvider initiates an HTTP authentication request with IdentityProvider.
6. ServiceProvider obtains the relevant identity service provider identifier (there may be multiple identity service providers).
7. ServiceProvider uses WebAgent (running on top of the application server) to respond to the authentication request.
8. WebAgent redirects the authentication request to IdentityProvider.
9. IdentityProvider processes the authentication request. It presents the authentication login form or HTML page to Client.
10. Upon submission of the authentication login form by Client, IdentityProvider sends the authentication request response artifact to WebAgent.
11. WebAgent sends the request with the authentication response artifact to ServiceProvider.
12. ServiceProvider processes the HTTP request with the authentication response artifact with IdentityProvider.
13. IdentityProvider sends the HTTP response with the authentication assertion.
14. ServiceProvider processes the authentication assertion.
15. ServiceProvider sends the HTTP response with the authentication assertion.
16. WebAgent returns the authentication assertion statement.
Figure 12-6. Single sign-on using Single Sign-on Delegator and Assertion Builder sequence diagram
Figure 12-7. Global logout using Single Sign-on Delegator sequence diagram
Consequences
By employing the Single Sign-on Delegator pattern, developers will be able to reap the following benefits:

- Thwarting session theft. Session theft is a critical security flaw in identity management. The Single Sign-on Delegator creates a secure single sign-on session and delegates the service requests to the relevant security services. Client requests must be authenticated with an identity provider before they can establish a secure single sign-on session. This can mitigate the risk of session theft.
- Addressing multiple sign-on issues. The Single Sign-on Delegator pattern supports a standards-based single sign-on framework that does not require users to sign on multiple times. There are security attacks that target application systems that are vulnerable because multiple sign-on actions are required. Thus, the Single Sign-on Delegator can mitigate multiple sign-on issues.
- More flexibility with a loosely coupled architecture. The Single Sign-on Delegator pattern provides a loosely coupled connection to remote security services. It minimizes the coupling between the clients and the remote identity management services. It hides the details of handling heterogeneous service invocation bindings and the service configuration of multiple security service components from the clients. It also avoids vendor-specific product lock-in by preventing clients from invoking the remote security services directly.
- Better availability of the remote security services. Architects and developers can implement or customize automatic recovery of the remote security services. They can also provide an alternate security services connection if the primary remote security service is not available.
- Improves scalability. Architects and developers can have multiple connections to the remote security services. Multiple instances of the remote security services will help improve scalability. In addition, architects and developers can cache some of the session variables or user identity information on behalf of the presentation-tier components, which may help boost performance when there are a large number of simultaneous user connections.
Sample Code
Example 12-4 and Example 12-5 show a scenario where a service requester (for example, a telecommunications subscriber) intends to access a variety of remote services via a primary service provider (for example, a telecommunications online portal). These sample code excerpts illustrate how to create a Single Sign-on Delegator pattern (using SSODelegatorFactoryImpl) to manage invoking remote security services using EJB. The Client creates an instance of the SSODelegatorFactoryImpl using the method getSSODelegator, and then invokes the method createSSOConnection to start a remote service. Upon completion, the Client invokes the method closeSSOConnection to close the remote service. The SSODelegatorFactoryImpl creates a single sign-on connection, invokes individual security services, and maintains session information. The code comments add some annotation about how to add your own code to meet your local requirements or to extend the functionality.

In Example 12-4, the SSODelegatorFactoryImpl class initializes itself in the constructor by loading the list of "authorized" service providers (using the method initConfig). Then it creates an SSO token using the method createSSOToken to reference all remote service connections. When the Client requests creating a single sign-on connection to a remote service, SSODelegatorFactoryImpl requires the Client to pass a security token for validation. Upon successful validation of the security token, the SSODelegatorFactoryImpl will look up the Java object class or URI of the remote service via the serviceLocator method from the SSOContext. The SSOContext stores the service status and service configuration of the remote service. The serviceLocator method applies the Service Locator pattern, which provides a few methods to look up the service location via EJB or Web services. The sample methods used in this code excerpt are examples only. The details can be found in [CJP2], pp. 315-340, or at http://java.sun.com/blueprints/patterns/ServiceLocator.html. The SSODelegatorFactoryImpl will then invoke the createService method of the remote service. It will update the service status to "CREATED". The component reference to the remote service will be added to the SSOContext. When the Client requests to close the remote service, the SSODelegatorFactoryImpl will invoke the closeService method of the remote service. It will update the service status to "CLOSED" and remove the component reference in the SSOContext.
 * reconnecting to remote service provider.
 * You can implement your security token validation process as
 * per local requirements.
 * You may want to reuse Credential Tokenizer to encapsulate
 * the security token.
 *
 * In this example, we'll always return true for demo purpose.
 */
private boolean validateSecurityToken(Object securityToken) {
    ...
    return true;
}

/**
 * Create a SSO connection with the remote service provider.
 * Need to pass a security token and the target service name.
 * The service locator will look up where the service name is,
 * and then invoke the remote object class/URI based on the
 * protocol binding.
 *
 * @param Object security token (for example, you can reuse
 *        Credential Tokenizer)
 * @param String service name for the remote service provider
 */
public void createSSOConnection(Object securityToken, String serviceName)
        throws com.csp.identity.SSODelegatorException {

    if (validateSecurityToken(securityToken) == true) {
        System.out.println("Security token is valid");
        try {
            // load Java object class (or URI) via serviceLocator
            com.csp.identity.SSOContextImpl context =
                servicesMap.get(serviceName);
            String className = context.serviceLocator(serviceName);
            Class clazz = Class.forName(className);
            com.csp.identity.SSOServiceProvider serviceProvider =
                (com.csp.identity.SSOServiceProvider) clazz.newInstance();

            // invoke remote security service provider
            serviceProvider.createService(context);

            // update status=CREATED
            context.setStatus(context.REMOTE_SERVICE_CREATED);

            // update servicesMap and context
            context.setCompRef(serviceProvider);
            servicesMap.remove(serviceName);
            servicesMap.put(serviceName, context);
            this.setSSOTokenMap(serviceName);
        } catch (ClassNotFoundException cnfe) {
            cnfe.printStackTrace();
            throw new com.csp.identity.SSODelegatorException("Class not found");
        } catch (InstantiationException ie) {
            ie.printStackTrace();
            throw new com.csp.identity.SSODelegatorException("Instantiation exception");
        } catch (IllegalAccessException iae) {
            iae.printStackTrace();
            throw new com.csp.identity.SSODelegatorException("Illegal access exception");
        }
    } else {
        // update status=error
        System.out.println("Invalid security token presented!");
        throw new com.csp.identity.SSODelegatorException("Invalid security token");
    }
}

/**
 * Close a SSO connection with the remote service provider.
 * Need to pass a security token and the target service name.
 * The service locator will look up where the service name is,
 * and then invoke the remote object class/URI based on the
 * protocol binding.
 *
 * @param Object security token (for example, you can reuse
 *        Credential Tokenizer)
 * @param String service name for the remote service provider
 */
public void closeSSOConnection(Object securityToken, String serviceName)
        throws com.csp.identity.SSODelegatorException {

    if (validateSecurityToken(securityToken) == true) {
        System.out.println("Security token is valid");

        // load Java object class (or URI) via serviceLocator
        com.csp.identity.SSOContextImpl context =
            servicesMap.get(serviceName);
        com.csp.identity.SSOServiceProvider serviceProvider =
            context.getCompRef();
        if (serviceProvider == null) {
            throw new com.csp.identity.SSODelegatorException
                ("SSO connection not made.");
        }

        // invoke remote security service provider
        serviceProvider.closeService();

        // update status=CLOSED
        context.setStatus(context.REMOTE_SERVICE_CLOSED);

        // update servicesMap and context
        context.removeCompRef();
        servicesMap.remove(serviceName);
        servicesMap.put(serviceName, context);
        this.removeSSOTokenMap(serviceName);
    } else {
        // update status=error
        System.out.println("Invalid security token presented!");
        throw new com.csp.identity.SSODelegatorException("Invalid security token");
    }
}

/**
 * Load the configuration into the SSODelegatorFactory
 * implementation so that it will know which are the remote
 * service providers (including the logical service name and
 * the object class/URI for service invocation).
 *
 * For demo purpose, we hard-coded a few examples here. We can
 * also use Apache Commons Configuration to load a config.xml
 * property file.
 */
private void initConfig() {
    // load a list of "authorized" security service providers
    // from the config file and load into an array of SSOContext
    try {
        // create sample data
        com.csp.identity.SSOContextImpl context1 =
            new com.csp.identity.SSOContextImpl();
        com.csp.identity.SSOContextImpl context2 =
            new com.csp.identity.SSOContextImpl();
        context1.setServiceName("service1");
        context1.setProtocolBinding("SOAP");
        context2.setServiceName("service2");
        context2.setProtocolBinding("RMI");
        this.servicesMap.put("service1", context1);
        this.servicesMap.put("service2", context2);
    } catch (com.csp.identity.SSODelegatorException se) {
        se.printStackTrace();
    }
}

/**
 * You need to pass a security token before you can get the
 * SSODelegator instance.
 * Rationale:
 * 1. This ensures that only authenticated/authorized subjects
 *    can invoke the SSO Delegator (authentication and
 *    authorization requirements).
 * 2. No one can invoke the constructor directly (visibility
 *    and segregation requirements).
 * 3. In addition, there is only a singleton copy (singleton
 *    requirement).
 *
 * @param Object security token
 */
public static com.csp.identity.SSODelegatorFactoryImpl
        getSSODelegator(Object securityToken) {
    synchronized (com.csp.identity.SSODelegatorFactoryImpl.class) {
        if (singletonInstance == null) {
            singletonInstance =
                new com.csp.identity.SSODelegatorFactoryImpl();
        }
        return singletonInstance;
    }
}

/**
 * This private method creates a SSO token to represent that
 * a SSO session has been created to connect to remote security
 * service providers.
 * In practice, this security token should be implemented in
 * any object type based on local requirements. You can also
 * reuse the SecurityToken object type from the Credential
 * Tokenizer.
 *
 * For demo purpose, we'll use a string. You can also use the
 * String format to represent a base64 encoded format of a
 * SSO token.
 */
private void createSSOToken() {
    // to be implemented
    this.ssoToken = "myPrivateSSOToken";
}

/**
 * Register a SSOToken in the HashMap that a remote service
 * provider connection has been made.
 *
 * @param String serviceName
 */
private void setSSOTokenMap(String serviceName) {
    this.SSOTokenMap.put(serviceName, this.ssoToken);
}

/**
 * Get a SSOToken in the HashMap that a remote service provider
 * connection has been made.
 *
 * @param String serviceName
 * @return Object SSOToken (in this demo, we'll use a String object)
 */
private Object getSSOTokenMap(String serviceName) {
    return (String) this.SSOTokenMap.get(serviceName);
}

/**
 * Remove a SSOToken from the HashMap that a remote service
 * provider connection has been made.
 *
 * @param String serviceName
 */
private void removeSSOTokenMap(String serviceName) {
    this.SSOTokenMap.remove(serviceName);
}

/**
 * Get status from the remote service provider.
 * Need to pass a security token and the target service name.
 * The service locator will look up where the service name is,
 * and then invoke the remote object class/URI based on the
 * protocol binding.
 *
 * @param Object security token
 *        (for example, you can reuse Credential Tokenizer)
 * @param String service name for the remote service provider
 */
public String getServiceStatus(Object securityToken, String serviceName)
        throws com.csp.identity.SSODelegatorException {

    if (validateSecurityToken(securityToken) == true) {
        System.out.println("Security token is valid");

        // load Java object class (or URI) via serviceLocator
        com.csp.identity.SSOContextImpl context =
            servicesMap.get(serviceName);
        return context.getStatus();
    } else {
        // update status=error
        System.out.println("Invalid security token presented!");
        throw new com.csp.identity.SSODelegatorException("Invalid security token");
    }
}
}
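To illustrate the client side of this interaction, the following is a minimal usage sketch, assuming the SSODelegatorFactoryImpl shown in Example 12-4; the token value and service name are placeholders, and this client class is not part of the book's sample code.

// Illustrative client usage of the Single Sign-on Delegator.
// The security token value and service name are placeholders.
public class SSODelegatorClient {
    public static void main(String[] args) {
        Object securityToken = "myPrivateSSOToken"; // placeholder token
        try {
            com.csp.identity.SSODelegatorFactoryImpl delegator =
                com.csp.identity.SSODelegatorFactoryImpl
                    .getSSODelegator(securityToken);

            // open a single sign-on connection to a configured service
            delegator.createSSOConnection(securityToken, "service1");
            System.out.println("service1 status: "
                + delegator.getServiceStatus(securityToken, "service1"));

            // ... invoke business operations on the remote service ...

            // close the connection when done
            delegator.closeSSOConnection(securityToken, "service1");
        } catch (com.csp.identity.SSODelegatorException e) {
            e.printStackTrace();
        }
    }
}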
Example 12-5 shows sample code for implementing the SSOContext. The SSOContextImpl class provides methods to add or get the service configuration and protocol binding for the remote service. When a new remote service is connected, the SSOContextImpl will add the component reference (also known as handler ID) to the remote service using the method setCompRef and will update the status using the method setStatus.
    this.sessionInfo.put(sessionVariable, sessionValue);
}

/**
 * Get session information from a HashMap. This stores
 * specific session variables that are relevant to a
 * particular remote secure service provider.
 * Need to cast the object type upon return.
 *
 * @return Object return in an Object (for example String).
 */
public synchronized Object getSessionInfo(String sessionVariable) {
    return this.sessionInfo.get(sessionVariable);
}

/**
 * Remove session information from a HashMap. The HashMap
 * stores specific session variables that are relevant to a
 * particular remote secure service provider.
 *
 * @param String session variable name
 */
public synchronized void removeSessionInfo(String sessionVariable) {
    this.sessionInfo.remove(sessionVariable);
}

/**
 * Get private configuration properties specific to a particular
 * remote secure service provider. This object needs to be
 * loaded during initConfig(), by the constructor or manually.
 *
 * @return Properties a Properties object
 */
public java.util.Properties getConfigProperties() {
    return configProps;
}

/**
 * Set private configuration properties specific to a particular
 * remote secure service provider. This object needs to be
 * loaded during initConfig(), by the constructor or manually.
 *
 * @param Properties a Properties object
 */
public void setConfigProperties(java.util.Properties configProps) {
    this.configProps = configProps;
}

/**
 * Get protocol binding for the remote security service provider.
 *
 * @return String protocol binding, for example SOAP, RMI
 *         (arbitrary name)
 */
public String getProtocolBinding() {
    return this.protocolBinding;
}

/**
 * Set protocol binding for the remote security service provider.
 *
 * @param String protocol binding, for example SOAP, RMI
 *        (arbitrary name)
 */
public void setProtocolBinding(String protocolBinding) {
    this.protocolBinding = protocolBinding;
}

/**
 * Get service name of the remote security service provider.
 * This name needs to match the field 'serviceName' in the
 * SSOServiceProvider implementation classes.
 *
 * @return String service name, for example service1
 */
public String getServiceName() {
    return this.serviceName;
}

/**
 * set service name
 *
 * @param String logical remote service name, for example service1
 **/
public void setServiceName(String serviceName) {
    this.serviceName = serviceName;
}

/**
 * Get component reference
 *
 * @return SSOServiceProvider component reference to be stored
 *         in the HashMap once a connection is created
 **/
public com.csp.identity.SSOServiceProvider getCompRef() {
    return this.compRef;
}

/**
 * Set component reference
 *
 * @param SSOServiceProvider component reference to be stored
 *        in the HashMap once a connection is created
 **/
public void setCompRef(com.csp.identity.SSOServiceProvider compRef) {
    this.compRef = compRef;
}

/**
 * Remove component reference
 **/
public void removeCompRef() {
    this.compRef = null;
}

/**
 * Look up the class name or URI by the service name.
 *
 * This example hardcodes one class name for demo.
 * You may want to replace it by a Service Locator pattern.
 *
 * @param String service name to look up
 * @return String class name (or URI) corresponding service
 **/
public String serviceLocator(String serviceName) {
    // This example shows 2 remote security service providers
    // hard-coded for demo purpose. Refer to the book's
    // website for sample code download.
    // You may want to use a Service Locator pattern here
    if (serviceName.equals("service1")) {
        return "com.csp.identity.SSOServiceProviderImpl1";
    }
    if (serviceName.equals("service2")) {
        return "com.csp.identity.SSOServiceProviderImpl2";
    }
    return "com.csp.identity.SSOServiceProviderImpl2";
}

/**
 * set status of the remote service
 *
 * @param String status
 */
public void setStatus(String status) {
    this.status = status;
}

/**
 * get status of the remote service
 *
 * @return String status
 */
public String getStatus() {
    return this.status;
}
}
Reality Check
Too many abstraction layers. Single Sign-on Delegator brings the benefit of a loosely coupled architecture by creating an abstraction layer for remote security services. However, if the remote security services have more than one abstraction layer, multiple abstraction layers of remote service invocations will create substantial performance overhead. From experience, one to two abstraction layers is reasonable.

Supporting multiple circles of trust. Currently, the Liberty specification 2.0 does not support integrating multiple circles of trust or interoperating with multiple identity service providers simultaneously (for example, when a client wants to perform single sign-on in two different circles of trust or in two different types of single sign-on environments). Single Sign-on Delegator is not designed to support multiple circles of trust, because it is a delegate design approach that simplifies the connection to remote security services. Support for interoperating with multiple identity service providers is dependent on the Liberty implementation or the remote security services.
Related Patterns
Assertion Builder. A Single Sign-on Delegator can delegate the creation of SAML assertion statements via the Assertion Builder to a remote security service provider that assembles and generates a SAML authentication or authorization decision statement. This does not require adding the business logic of managing SAML assertions to the Single Sign-on Delegator.

Credential Tokenizer. A Single Sign-on Delegator can delegate the encapsulation of user credentials to the Credential Tokenizer. This does not require building additional business logic to handle user credentials in the Single Sign-on Delegator. Architects and developers can also reuse the Credential Tokenizer functions for other applications (for example, EDI messaging applications) without using the Single Sign-on Delegator.

Service Locator. The Single Sign-on Delegator pattern uses a Service Locator pattern to look up the service location of the remote security services. In other words, it delegates the service look-up function to a Service Locator, which can be implemented as a JNDI look-up (sketched below) or a UDDI service discovery. Refer to http://java.sun.com/blueprints/patterns/ServiceLocator.html and [CJP2] for details.
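For instance, the JNDI look-up variant of the Service Locator can be sketched as follows; the JNDI name and home interface passed in are supplied by the caller, and this helper class is illustrative rather than the book's own Service Locator code.

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;

// Illustrative JNDI-based look-up of a remote EJB home;
// the JNDI name and home interface class are placeholders
// supplied by the caller.
public class SimpleServiceLocator {
    public Object locateEJBHome(String jndiName, Class homeClass)
            throws NamingException {
        Context ctx = new InitialContext();
        Object objRef = ctx.lookup(jndiName);
        // narrow the remote reference to the expected home interface
        return PortableRemoteObject.narrow(objRef, homeClass);
    }
}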
Forces
- You need a reusable component that helps to extract the processing logic for creating and managing security tokens instead of embedding it in the business logic or the authentication process.
- You want to shield off the design and implementation complexity using a common mechanism that can accommodate a security credential and interface with a supporting security provider that makes use of it.
Solution
Use a Credential Tokenizer to encapsulate different types of user credentials as a security token that can be reused across different security providers.

A Credential Tokenizer is a security API abstraction that creates and retrieves the user identity information (for example, a public key/X.509v3 certificate) from a given user credential (for example, a digital certificate issued by a Certificate Authority). Each security specification has slightly different semantics or mechanisms for handling user identity and credential information. Typical usage scenarios include the following:

- Java applications that need to access user credentials or security tokens from different application security infrastructures.
- Web services security applications that need to encapsulate a security token, such as a username token or binary token, in the SOAP message.
- Java applications that support SAML or Liberty and need to include an authentication credential in the SAML assertion request or response.
- Java applications that need to retrieve user credentials for performing SSO with legacy applications.

To build a Credential Tokenizer, developers need to identify the service, authentication scheme, application provider, and underlying protocol bindings. For example, in a SOAP communication model, the service requester may be required to use a digital certificate as a binary security token for accessing a service end-point. In this case, the service configuration specifies the X.509v3 digital certificate as the security token and SOAP over HTTPS as the protocol binding. Similarly, in a J2EE application, the client may be required to use a client certificate for enabling mutual authentication. In this case, the authentication requirements specify an X.509v3 digital certificate as the security token and HTTPS as the protocol binding, and the request is represented as HTML generated by a J2EE application using a JSP or a servlet.

Credential Tokenizer provides an API abstraction mechanism for constructing security tokens based on a defined authentication requirement, protocol binding, and application provider. It also provides API mechanisms for retrieving security tokens issued by a security infrastructure provider.
Structure
Figure 12-8 depicts a class diagram of the Credential Tokenizer. The Credential Tokenizer can be used to create different security tokens (SecurityToken), including username tokens and binary tokens (for example, an X.509v3 certificate). When creating a security token, the Credential Tokenizer creates a system context (TokenContext) that encapsulates the token type, the name of the principal, the service configuration, and the protocol binding that the security token supports.

There are two major objects in the Credential Tokenizer: SecurityToken and TokenContext. The SecurityToken is a base class that encapsulates any security token. It can be extended to implement a username token (UsernameToken), binary token (BinaryToken), or certificate token (X509v3CertToken). In this pattern, a username token is used to represent a user identity using a username and password. Binary tokens are used to represent a variety of security tokens that resemble a user identity in binary text form (such as Kerberos tickets). Certificate tokens denote digital certificates issued to represent a user identity. An X.509v3 certificate is a common form of certificate token. The TokenContext class refers to the system context used to create security tokens. It includes information such as the security token type, service configuration, and protocol binding for the security token. This class defines public interfaces only to set or get the security token information. TokenContextImpl is the implementation of TokenContext.
create a security token. For example, the Client may be a service requester that is required to create the username/password token represented in the WS-Security headers of a SOAP message. The CredentialTokenizer denotes the credential tokenizer that creates and manages user credentials. The UserCredential denotes the actual credential token, such as a username/password or an X.509v3 digital certificate. The following sequence describes the interaction between the Client, CredentialTokenizer, and UserCredential:

1. Client creates an instance of CredentialTokenizer.
2. CredentialTokenizer retrieves the service configuration and the protocol bindings for the target service request.
3. CredentialTokenizer retrieves the user credentials from SecurityProvider according to the service configuration. For example, it extracts the key information from an X.509v3 certificate.
4. CredentialTokenizer creates a security token from the user credentials just retrieved.
5. Upon successful completion of creating the security token, CredentialTokenizer returns the security token to Client.
Consequences
- Supports SSO. The Credential Tokenizer pattern helps in capturing authentication credentials for multifactor authentication. It also helps in using "shared state" among authentication providers in order to establish single sign-on (the "shared state" mechanism allows a login module to put the authentication credentials into a shared map and then pass it to other login modules; see the sketch after this list). The Credential Tokenizer can be used for retrieving the SSO token and providing an SSOToken on demand for requesting applications.
- Provides a vendor-neutral credential handler. The Credential Tokenizer pattern wraps vendor-specific APIs using a generic mechanism in order to create or retrieve security tokens from security providers.
- Enables transparency by encapsulating multiple identity management infrastructures. The Credential Tokenizer pattern encapsulates any form of security token as a credential token and thus eases integration and enables interoperability with different identity management infrastructures.
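In JAAS terms, the shared-state handoff mentioned in the first consequence can be sketched as follows. This is a skeletal fragment, not a complete LoginModule: the method shapes are simplified, and the javax.security.auth.login.name and javax.security.auth.login.password keys are the conventional shared-state keys used by many login modules.

import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;

// Skeletal fragment illustrating the JAAS shared-state mechanism:
// the first login module stores the credentials so that subsequent
// login modules can reuse them for single sign-on.
public class SharedStateSketch {
    private Map sharedState;

    public void initialize(Subject subject, CallbackHandler handler,
                           Map sharedState, Map options) {
        this.sharedState = sharedState;
    }

    public boolean login(String username, char[] password) {
        // ... authenticate against the first provider here ...

        // publish the credentials under the conventional keys so
        // that downstream login modules can pick them up
        sharedState.put("javax.security.auth.login.name", username);
        sharedState.put("javax.security.auth.login.password", password);
        return true;
    }
}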
Sample Code
Example 12-6 shows a sample code excerpt for creating a Credential Tokenizer. The CredentialTokenizer creates an instance of TokenContextImpl, which provides a system context for encapsulating the security token created. To create a security token, you first define the security token type using the method setTokenType. Then you create the security token using the method createToken, which invokes the constructor of the target security token class (for example, UsernameToken).
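A minimal sketch of that flow follows, reusing the com.csp.identity classes from the surrounding examples; the principal and password values are placeholders, and the exact createToken signature is assumed from the description above rather than taken from Example 12-6.

// Illustrative sketch of creating a username token; the
// principal/password values are placeholders, and the
// createToken signature is assumed from the description.
com.csp.identity.TokenContextImpl tokenContext =
    new com.csp.identity.TokenContextImpl();

// define the type of security token to create
tokenContext.setTokenType(com.csp.identity.UsernameToken.TOKEN_TYPE);

// create the security token from the user credentials
tokenContext.createToken("Maryjo Parker", "somePassword");

// fetch the token and the principal it represents
Object token = tokenContext.getToken();
String principal = tokenContext.getPrincipal();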
Example 12-7 shows a sample implementation of the TokenContext. The TokenContextImpl is an implementation of the public interfaces defined in the TokenContext class. It provides methods to fetch the name of the principal (the getPrincipal method) and the protocol binding for the security token (the getProtocolBinding method).
    //System.out.println("get a usernametoken...");
    return (Object) usernameToken.getToken();
} else if (this.tokenType.equals(com.csp.identity.BinaryToken.TOKEN_TYPE)) {
    //System.out.println("get a binary token...");
    return (Object) binaryToken.getToken();
} else
    return null;
}

/**
 * get principal
 *
 * @return principal return principal in String
 **/
public String getPrincipal() {
    if (this.tokenType.equals(com.csp.identity.UsernameToken.TOKEN_TYPE)) {
        //System.out.println("get principal...");
        return usernameToken.getPrincipal();
    } else if (this.tokenType.equals(com.csp.identity.BinaryToken.TOKEN_TYPE)) {
        //System.out.println("get principal...");
        return binaryToken.getPrincipal();
    } else
        return null;
}

/**
 * get protocol binding for the security token
 *
 * @return protocolBinding protocol binding in String
 **/
public String getProtocolBinding() {
    return null;
}
}
Example 12-8 shows a sample implementation of the username token used in the previous code examples (refer to Figure 12-15 and Figure 12-16). The UsernameToken class is an extension of the base class SecurityToken. It provides methods to define and retrieve information regarding the principal name, the subject's IP address, the subject's DNS address, and the password.
/** Constructor - create usernameToken
 *
 * In a future implementation, the constructor should be
 * private, and this class should provide a getInstance()
 * method to fetch the instance.
 */
public UsernameToken(String principal, String password) {
    this.principal = principal;
    this.password = password;
}

/**
 * Get the token value from the username token
 *
 * @return String security token value (the password)
 */
public String getToken() {
    return this.password;
}
}
Reality Check
Should we use username/password as a security token? Some security architects insist that the username/password pair is not secure enough and should not be used as a security token. To mitigate the potential risk of a weak password, security architects should reinforce strong password policies and adopt a flexible security token mechanism such as Credential Tokenizer to accommodate different types of security tokens for future extension and interoperability.

What other objects can be encapsulated as security tokens? You can embed different types of security tokens in the Credential Tokenizer, not just username/password or digital certificate tokens. For example, you can embed binary security tokens, because they can be encapsulated as a SAML token for an authentication assertion statement. In addition, you can also add the REL token (which denotes the rights, usage permissions, constraints, legal obligations, and license terms pertaining to an electronic document) based on the eXtensible Rights Markup Language (XrML).
Related Patterns
Secure Pipe. The Secure Pipe pattern shows how to secure the connection between the client and the server, or between servers when connecting between trading partners. In a complex distributed application environment, there will be a mixture of security requirements and constraints between clients, servers, and any intermediaries. Standardizing the connection between external parties using the same platform and security protection mechanism may not be viable.
Pitfalls
7. Using Reusable Passwords in Creating Security Tokens. Reusable passwords refer to username/passwords that are constant and are used multiple times to gain access to a system resource. They are usually susceptible to dictionary attacks.

8. Using Default Settings. Some applications provide default settings that turn on all application functions as a convenience, which may open up security loopholes for unauthorized access. For example, a security product may have a default policy rule that grants anonymous access to the system. This type of default setting should be disabled.

9. Using Minimal Security Elements. When implementing SAML assertion statements according to the latest SAML specifications, developers may want to populate only the mandatory elements and leave out all optional elements. Developers may assume that they should use only the minimal set of data elements, because they may not have a full understanding of how to use the optional elements. Some optional security elements, such as IPAddress (the IP address of the SAML responder) and DNSAddress (the DNS address), provide helpful information for identifying the message origin, troubleshooting, and tracking suspicious events. Without these optional elements, tracking suspicious service requests and troubleshooting problems would be difficult.
References
OpenSAML. http://www.opensaml.org

Sun Java System Access Manager. http://wwws.sun.com/software/products/identity_srvr/home_identity.html

Sun's XACML implementation. http://sunxacml.sourceforge.net/

VeriSign Trust Gateway. http://www.verisign.com/products/trustgateway/index.html and http://www.verisign.com/products/trustgateway/download.html

[CJP2] Deepak Alur, John Crupi, and Dan Malks. Core J2EE Patterns: Best Practices and Design Strategies, Second Edition. Prentice Hall, 2003.

[SAML11Security] OASIS. Security and Privacy Considerations for the OASIS Security Assertion Markup Language (SAML) V1.1. September 2, 2003. http://www.oasis-open.org/committees/download.php/3404/oasis-sstc-saml-sec-consider-1.1.pdf
Business Challenges
User provisioning is a preparatory action for setting up a new user prior to initiating user-specific business services. This has several implications. Provisioning a new user may require creating a new user account in multiple applications across many systems. The user account may have multiple user credentials for the same person. Account mapping and provisioning for heterogeneous applications and systems is often complex.

Password management, for example, involves resetting user passwords or synchronizing the passwords across multiple systems. This requires sophisticated control processes that can manage secure system interfaces and connectivity between the centralized password management system and the remote application systems. Because some user passwords can access sensitive business data, the password management control process needs to be highly secure, reliable, and timely.

Application service provisioning poses a different challenge when installing and configuring a new instance of a software application. Many software applications require manual configuration (such as creation of specific user accounts) due to complex system design and local system settings. The manual configuration may result in creating variants of application system instances based on different types of hardware and software configurations.
name for her user account. The Messaging Server uses "maryj," and the directory server uses a mnemonic name, retep4yram. Mary may also be referred to as M_Jane or Mary.Jane when dealing with the Help Desk and external trading partners.
It is a nightmare for an IT administrator to provision a user account for a new employee across application systems with different user account creation rules and multiple identities. The administrator also needs to determine whether the user account has the same access rights in all of these application systems. If the user account has different access rights in each application system, a change in the user account attributes (for example, a change in the user's department) would probably require a change in the access rights in other systems. If there is no automated user account creation interface across application systems, this kind of change will require considerable manual administrative processing. If there is an automated user account creation interface, it may expose a security risk (such as a weak security token, broken authentication, or broken access control) when exchanging user credentials and synchronizing user passwords with external application systems. Additionally, it is challenging to synchronize all user passwords for these user accounts. Administering the security provisioning services, from creating user accounts to synchronizing user passwords, carries high administrative costs (refer to [Cryptocard] for details).

In the typical scenario shown in Figure 13-1, there are three implications for the security design and implementation. First, providing a flexible identity management capability for multiple identities across different application systems requires reliability (for example, a failover mechanism in case the identity management infrastructure encounters problems), flexibility (for example, policy-based rules instead of hard-coded rules), scalability (fit for a distributed environment), and maintainability (for example, reuse of security best practices and patterns). Unreliable user account provisioning will be exposed to potential security vulnerabilities such as insecure data transit, weak security tokens, broken authentication, or broken access control. Maintaining and administering identities and user accounts is more than a set of security administration procedures.

Second, automating security service provisioning across application systems requires a standard interface (ideally, a standards-based interface) that works with the existing security infrastructure. If the interface is proprietary, considerable integration and interoperability effort will be required.

Third, there is a need for proactive and reactive security protection measures to guard the user account creation interfaces among application systems and the password synchronization process among them. Message replay is a common security risk in such scenarios. Some service provisioning interfaces (such as an agent-based architecture) may require software code changes in the underlying application infrastructure; thus, they are "intrusive" to the security architecture. Some service provisioning interfaces may require sharing user credentials with external systems or trading partners, which may create a potential data privacy issue. Security architects need to understand the underlying mechanisms of these interfaces and ensure that the user account provisioning process and its interfaces do not expose new or unknown security risks.
User Account Mapping. Security architects can build a custom mapping table that uses a common unique identifier to map to different user identity variants. When a user wants to access an application system, the authentication service component looks up this mapping table and verifies whether the user can access that application system. However, this is a tactical and proprietary solution. There is considerable customization effort required for the authentication service component and for integration with all application systems.

Password Synchronization. Security architects use tactical and proprietary vendor solutions to synchronize user passwords in different application systems. These solutions usually have custom adapters or connectors that can populate user passwords into the target application systems. Synchronizing user passwords usually assumes that user accounts are created and provisioned in advance. Password synchronization products are specialized solutions and do not necessarily provision user accounts with different identities. Using tactical and proprietary vendor solutions can end up creating lock-in to a specific vendor's implementation and can make it fairly difficult to interoperate with new security infrastructure or emerging provisioning standards.

Single Sign-on. A single sign-on security infrastructure allows a user to sign on to different security infrastructures. However, a single sign-on solution does not provide the functionality to provision user accounts in multiple application systems with different identities.

Point-to-Point Interfaces between Identity Management Products. Custom point-to-point security interfaces can be built between identity management products (for example, a single sign-on or password synchronization product) to provision user accounts with different user identities. These interfaces are usually proprietary and require customization. If one of the identity management products has a version upgrade, the security interface needs to be upgraded as well in order to accommodate any necessary system changes.

Standards-Based Security Service Provisioning Interfaces. The Service Provisioning Markup Language (SPML) is an XML specification for processing service requests for provisioning user accounts. The OASIS Provisioning Service Technical Committee approved it as a standard specification in October 2003. A number of security vendor products now support SPML. With SPML, security architects can define mappings between multiple identities and synchronize user passwords between application systems. SPML-based service provisioning systems can also work with single sign-on and portal server products.

Not all solution approaches address the entire problem of service provisioning. Most of these approaches are proprietary or vendor product-specific. They also may not be scalable enough to process user account requests for a large number of application systems simultaneously. SPML-enabled service provisioning is becoming more visible, especially after OASIS approved the SPML standard specification. Thus, customers can use a standards-based approach to address user account service provisioning. The following sections introduce the technical architecture of a service provisioning server and discuss how it can integrate with other infrastructure components, such as portal and identity service providers.
Figure 13-3 depicts a decentralized model, where there is no centralized provisioning server or profile management data store. User account profile information can be stored in local application systems. When the help desk application receives a client request to create user accounts in local application systems, it issues an SPML request to all application systems.
Under the decentralized model, a local application system can be configured as the Principal Profile Management Data Store, which acts as the master or provider and synchronizes user account profiles with the other Profile Management Data Stores. In the sample scenario in Figure 13-3, the help desk application sends the SPML request to PeopleSoft HRMS, which acts as the Principal Profile Management Data Store. PeopleSoft HRMS then replicates the user profile across all application systems, including Sun Java System Directory Server, RSA SecurID, and Microsoft Exchange Server. Table 13-1 summarizes the pros and cons of centralized and decentralized service provisioning models. There is no
definitive rule about whether a centralized or decentralized service provisioning model is ideal. Architects and developers need to decide on the service provisioning model based on local requirements and environment constraints. For example, if customers have a large investment in an ERP system such as PeopleSoft HRMS across different regional offices worldwide, they may have a requirement to reuse the existing application infrastructure and user credential repositories. Thus, they may find it sensible to adopt a decentralized service provisioning model by leveraging the ERP system as the principal profile management data store.
Table 13-1. Pros and Cons of Centralized and Decentralized Service Provisioning
Centralized Service Provisioning

Pros:
- It provides a single point of control.
- It has a relatively consistent user interface.
- It allows an automated provisioning process.

Cons:
- Centralized service provisioning may become a single point of failure.
- Synchronization between provisioning data stores is highly complex.

Decentralized Service Provisioning

Cons:
- The decentralized model may result in an inconsistent user interface and provisioning processes.
- The failover design scheme and capability is fairly complex, because the decentralized model needs to cater to a multitude of data sources, which may have very different data management characteristics, resilience, and failover requirements.
Many service provisioning products support centralized service provisioning, for example, Thor's Xellerate and Blockade's ManageID. Some products, such as Sun Java System Identity Manager, support both centralized and decentralized service provisioning.
Logical Architecture
Figure 13-4 depicts the logical architecture of a service provisioning server. The provisioning server usually runs on top of a J2EE application server or a Web container. It has workflow processing capabilities that create, modify, or delete user account services in each of the SPML-enabled application systems. The provisioning server stores user profile and user account provisioning details in the local Profile Management Data Store, which can be implemented on an RDBMS (Relational Database Management System) platform accessed via JDBC (Java Database Connectivity).
Figure 13-4. Service provisioning logical architecture for managing user accounts
There are two ways to issue service provisioning requests: from the client administrator and from an application system. The client administrator can use a Web browser to connect to the Web front-end of the provisioning server using the HTTP or HTTPS protocol. An application system, such as a help desk application, can also initiate service provisioning requests. It can connect to the provisioning server using different protocols, such as JMAC, ABAP, or JDBC. It can also connect to the provisioning server using a custom SPML-enabled agent or connector. Some provisioning server products provide an SDK or API library for building a custom agent or connector. The extensibility and interoperability of service provisioning always depend on the capability to interoperate with different underlying platforms and application infrastructures. These include a variety of operating systems (for example, UNIX, mainframe, and Windows), RDBMSs, directories, and custom applications. Thus, the provisioning server should be able to create, modify, or delete user account services by remotely executing the user account administration functions provided by the target application systems using standard protocols such as SSH, 3270, JNDI, JDBC, and ADSI. The target application systems need to establish a secure and reliable connection with the provisioning server and be able to process service provisioning requests in the standard protocol (that is, SPML). The ideal service provisioning logical architecture should be able to support multiple application platforms, including UNIX applications, legacy mainframe platforms (such as IBM), and directories (such as Microsoft Active Directory and LDAP-based directory servers). A provisioning server has a number of logical components that enable service provisioning services. Figure 13-5 depicts generic logical provisioning components and the underlying provisioning services. The underlying provisioning services provide common and reusable functions that support the core provisioning capability, including monitoring, password management, and connectivity. The provisioning components are specific programs or applications that can interact with provisioning administrators. Individual provisioning vendor products have different names for these components, but most of them provide similar functionality.
Figure 13-5. Service provisioning logical architecture components for managing user accounts
Provisioning Components
There are several logical, reusable provisioning components in a service provisioning system:

Provisioning Server. This is the core component that processes service provisioning requests.

Monitoring. The monitoring component tracks the service requests received and processed, and provides statistics and logging information for audit control or administration support.

Password Manager. This component provides an administration function that manages user passwords for the target application systems. It utilizes the underlying password synchronization service.

Risk Analyzer. This administrative component analyzes the change impact of service provisioning requests or user account changes (for example, a change of user roles) and provides system change information for security change risk analysis of the target application systems. It utilizes the processing rules and relationships defined in the rule engine (which is discussed later in this section).

Connector Factory. This administrative component manages the connections or interfaces between the provisioning server and the target application systems. In other words, this is the middleware function that provides APIs to connect the provisioning server to the target application systems using custom adapters or connectors.
Provisioning Services
In a service provisioning system, there are common services that use multiple provisioning components to complete the task of provisioning or de-provisioning a user account:

Rule Engine. The rule engine defines the service provisioning processes and security change impact.

Workflow Engine. This is a simple workflow engine that supports sequential or programmatic control of user account creation or of any account service changes for service provisioning.

Synchronization Service. This is the underlying engine for synchronizing user passwords for the target application systems and for synchronizing the underlying Profile Management Data Stores.

Reconciliation. This is a simple reporting infrastructure for reconciling the user account service profile (the target service provisioning plan) with the provisioned user account services (the actual provisioning result).

Provisioning Discovery. This underlying service discovers whether there are SPML-enabled application systems in the network or infrastructure. Because there is no standard service provisioning discovery protocol yet, it may not be feasible for a provisioning server to discover different vendor-specific provisioning agents or proxies.

Provisioning server products have different levels of sophistication and functionality, and they may have logical architectures that differ considerably from this generic logical architecture. They may combine several underlying provisioning services into one server component, or split services across more server components. Since an architect can craft the logical architecture in a variety of ways, provisioning server products may call these logical components by different names. It is not a trivial task to define a generic logical architecture for service provisioning servers.
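To make this decomposition concrete, the following minimal Java sketch models a few of the underlying provisioning services as interfaces that a provisioning server component could compose. All interface and method names here are illustrative assumptions and do not correspond to any specific vendor API.

    // Illustrative sketch only: generic service interfaces a provisioning
    // server might compose; not taken from any vendor product.
    import java.util.List;
    import java.util.Map;

    interface RuleEngine {
        // Evaluates provisioning rules and returns the ordered list of
        // target systems that a request must be applied to.
        List<String> resolveTargets(Map<String, String> requestAttributes);
    }

    interface WorkflowEngine {
        // Drives sequential or programmatic steps of an account change.
        void execute(String workflowName, Map<String, String> context);
    }

    interface SynchronizationService {
        // Pushes a password change to one target system.
        boolean synchronizePassword(String targetId, String userId,
                                    char[] newPassword);
    }

    interface ReconciliationService {
        // Compares the target provisioning plan with the actual result.
        void reconcile(String userId);
    }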
Portal Integration
Administrators may sometimes have a special requirement to administer service provisioning requests via a portal server. The portal server provides a unified user interface for accessing different application systems using portal channels or portlets. Additionally, administrators can perform application administration from the same "console" without switching to different applications. Figure 13-6 depicts how a provisioning server can integrate with a portal server. In this sample scenario, administrators can create a portal channel that connects to the system administration console of the provisioning server. Nevertheless, if the provisioning server has a fairly different user interface style, administrators might want to consider using the portlet approach. Using a portlet, administrators can customize a consistent user interface style for all applications and an administration front-end. The provisioning server needs to provide APIs that allow a portlet to initiate the administration console or process service provisioning requests. However, not every provisioning server supports portlet integration.
Under the single sign-on security infrastructure, each of the target application systems supports the Liberty and SAML protocols. These application systems are able to integrate with the identity provider infrastructure using Liberty- and SAML-enabled Web agents. User account mapping between the service provisioning server and the target application systems is defined and managed by the administrative function of the service provisioning server. The user account profile is stored in each individual Profile Management Data Store of the target application systems. These data stores are synchronized periodically. When the administrator issues a service provisioning request to create a user account in these application systems, the provisioning server can ensure that the newly provisioned user account can sign on to all target application systems for which it has appropriate access rights.
Some provisioning servers have a broader integration capability with legacy system infrastructure. For example, they can reuse the underlying security infrastructure for storing user credentials and user profiles in the directory server. LDAP-based directory servers are fairly commonly used for authenticating users and storing sensitive user account data. Such use is ideal for enterprises whose security architecture is concentrated in directory servers; they can store all service provisioning data in the directory server.
Technology Differentiators
There are technology-related factors that can differentiate service provisioning products. The following identifies some of them:

Agent-based versus Agentless Architecture. Some service provisioning products require installing a custom agent, which may need some software code modification, in the application infrastructure. This is known as agent-based architecture. Agent-based architecture is generally not desirable and may require customization during software upgrades. Some products provide a nonintrusive connector that enables the target application system to intercept service provisioning requests. The connector may be a lightweight servlet running on top of the existing application server or Web server that doesn't require modification of the application system. This connector approach is sometimes called agentless architecture.

Data Model. Some service provisioning products use an index-based data model to encapsulate the user account profile or provisioning data. They implement the data model centrally in the Profile Management Data Store. Other provisioning products choose to implement the data model in distributed Profile Management Data Stores.

Extensibility. It is important to have an SDK that can build custom adapters for application systems that do not support SPML or service provisioning requests. Such an SDK allows extending the system functionality to accommodate service provisioning requests.

Data Integration. Some provisioning servers provide automated data integration with different Profile Management Data Stores or with the user account databases of the target application systems. Other provisioning servers require manual data integration, such as creating data feeds to provision user accounts.
Introduction to SPML
Service Provisioning Markup Language (SPML) is an XML representation for creating service requests to provision user accounts or for processing service requests related to the management of user account services. As discussed earlier in this chapter, service provisioning is a loosely defined term. According to the OASIS SPML specification (refer to [SPML10], pp. 9-10), provisioning refers to "the automation of all the steps required to manage (set up, amend, and revoke) user or system access entitlements or data relative to electronically published services." The scope of service provisioning is primarily the management of user account services, not the underlying operating systems or application environment.

The SPML specification introduces a domain model in which a Requesting Authority (a client requester that creates service provisioning requests to a known service point) sends service provisioning requests to the Provisioning Service Point (a service provider that intercepts service provisioning requests and processes them). The Provisioning Service Point handles the service provisioning request and creates or modifies user account information in the Provisioning Service Targets (target application systems where the service provisioning requests are executed and implemented).

SPML is different from SAML (refer to Chapter 12 for details). SPML defines the processes and steps that are required to prepare user account services to be available, while SAML defines security assertions related to authentication or authorization after the user accounts are available. Directory Services Markup Language (DSML) is an XML specification for expressing directory queries, updates, and the results of directory operations. SPML is different from DSML in that SPML may use directory servers (via DSML) as one of the underlying data store mechanisms to implement some of the user account service requests. Like any SOAP-based messaging, SPML faces security threats such as message replay, message insertion, message deletion, and message modification. The security protection mechanisms discussed in Chapter 6, "Web Services Security: Standards and Technologies," are also applicable here.
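As a rough illustration of this domain model, the following Java sketch models the three roles. The interface and method names are assumptions for illustration only; only the role names (Requesting Authority, Provisioning Service Point, Provisioning Service Target) come from the SPML specification.

    // Sketch of the SPML domain model roles described above. Interface and
    // method names are illustrative, not part of the SPML specification.
    import java.util.Map;

    interface ProvisioningServiceTarget {
        // Target application system where the account change is executed.
        void applyChange(String userId, Map<String, String> attributes);
    }

    interface ProvisioningServicePoint {
        // Service provider that accepts and processes SPML requests.
        String process(String spmlRequestXml);
    }

    class RequestingAuthority {
        private final ProvisioningServicePoint servicePoint;

        RequestingAuthority(ProvisioningServicePoint servicePoint) {
            this.servicePoint = servicePoint;
        }

        // Client requester: sends an SPML addRequest document to a known
        // service point and returns the SPML response document.
        String provision(String spmlAddRequestXml) {
            return servicePoint.process(spmlAddRequestXml);
        }
    }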
The SPML specification gives service provisioning products the flexibility to decide how to handle and process service provisioning requests. It defines the language semantics of the add, delete, search, and extended operations. Nevertheless, it does not specify the underlying operations of how to create a user account in an application system.
Features in SPML
There are a few unique design features in the SPML version 1.0 specification that are worth discussing. They allow security architects and developers to build SPML-enabled interfaces or integrate SPML-enabled products with their existing architecture with more flexibility and extensibility. These unique design features include:

Flexible Request-Reply Model. SPML supports both synchronous (or singleton request) and asynchronous (or multi-request batch) models to meet different technical requirements. In the synchronous request-reply model, the client (also known as the Requesting Authority) creates a session and issues a request to provision a user account. While it is waiting for the server to reply to the service provisioning request, the client holds the session by using a "blocking" wait loop. In other words, it will not issue any new service provisioning request or handle other processing logic while waiting for the server response. The synchronous model is useful for legacy systems that support only synchronous communication. In the asynchronous request-reply model, the client and the server can freely exchange service provisioning requests and replies in any order or sequence. This allows the service provisioning system to manage a large volume of service provisioning transactions simultaneously.

Extensibility Using the Open Content Model. The SPML schema follows an open content model in which architects and developers can add child elements or attributes to extend the service provisioning requests on top of the standard schema. This allows individual service provisioning products to add custom information, such as additional user account profile details or configuration management details, to the SPML schema for the target application systems.

Custom Error Handling. Error codes are important in handling error control for service provisioning requests. Different service provisioning systems usually have their own error code systems that may not be shared and reusable by other systems. When returning an error for a service request, it is fairly helpful to use custom error handling that includes an error message carrying information beyond the specific error code. In SPML, errors are reported in the attribute "error" of the response message if the result of the SPML request shows a failure status in the attribute "result." For example, Example 13-1 illustrates a message addResponse that shows a custom error status for a request to add the e-mail account [email protected]. The custom error status is a non-standard error code in SPML that provides an additional detailed description of the service request error. In the sample message, the attribute "result" indicates whether this is a success, failure, or pending operation. The attribute "error" indicates the reason for the failure, and the attribute "errorMessage" further provides the description of the custom error code. This allows security architects and developers to define their custom error messages and descriptions.
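As a minimal sketch of inspecting such a response, the following Java fragment reads the result, error, and errorMessage attributes from an addResponse document using the standard JAXP DOM API. The sample message is hypothetical and deliberately simplified (namespace handling and the full SPML response structure are omitted); only the attribute names and the [email protected] account come from the discussion above.

    // Sketch: checking an SPML addResponse for a custom error, assuming a
    // simplified, namespace-free sample message.
    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Element;

    public class SpmlErrorCheck {
        public static void main(String[] args) throws Exception {
            String response =
                "<addResponse result='urn:oasis:names:tc:SPML:1:0#failure'"
              + " error='urn:oasis:names:tc:SPML:1:0#customError'"
              + " errorMessage='cannot add account [email protected]'/>";

            Element root = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(response.getBytes("UTF-8")))
                .getDocumentElement();

            // A failure status in "result" means "error" and "errorMessage"
            // carry the error code and the custom description.
            if (root.getAttribute("result").endsWith("#failure")) {
                System.out.println("error:        " + root.getAttribute("error"));
                System.out.println("errorMessage: " + root.getAttribute("errorMessage"));
            }
        }
    }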
A number of service provisioning systems allow creating and managing user account information as well as synchronizing user passwords across systems. Some products come with an SPML API library that allows custom applications or legacy systems to intercept and process SPML service requests. If security architects and developers want to use an open source implementation, they can also download the OpenSPML Toolkit from http://www.openspml.org. Example 13-2 provides a sample SPML client using the OpenSPML Toolkit (supporting SPML version 0.5). The OpenSPML Toolkit can be installed on any Web container (such as the Apache Tomcat Web container or the Sun Java System Application Server).
this.userAttr.put("password", this.password); this.userAttr.put("email", this.email); this.userAttr.put("firstname", this.firstName); this.userAttr.put("lastname", this.lastName); this.userAttr.put("fullname", this.fullName); this.request.setAttributes(userAttr); // generate SPML request to add user response = (AddResponse)this.client.request(request); this.client.throwErrors(response); } public static void main(String args[]) { new AddUser(); } }
Executing the sample SPML client will create an SPML request, as depicted in Example 13-3. This is an add operation to create an e-mail user account for user Mary Jane Parker.
Forces
You want to use a programmatic interface that can work with the different password administration and management mechanisms of each application system to synchronize user account passwords.

You want to hide the details of handling heterogeneous application protocols (for example, proprietary message formats) and the service configurations of the multiple security service components (for example, a directory server) that underlie the password administration and management mechanisms.

You want to standardize the processing control of return codes and error codes. Different Provisioning Service Targets (application systems) may have their own style and naming convention for return codes and error codes. When synchronizing user passwords with multiple Provisioning Service Targets, security architects and developers may want to use a common and standard interface to encapsulate these return codes and error codes or to translate the different types of codes into a common code system. For example, application A may use "20" to denote service failure due to an invalid user identifier, but application B may use "30" to denote the same error. The Password Synchronizer may translate these error codes (such as "20" and "30") into its own common code system (such as "40"), as the sketch following this list illustrates.
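The following minimal Java sketch illustrates this kind of error-code translation. The class name, method names, and registration approach are illustrative assumptions; only the sample codes "20", "30", and "40" come from the example above.

    // Sketch: translating target-specific error codes into a common code
    // system, as described in the force above. All names are illustrative.
    import java.util.HashMap;
    import java.util.Map;

    public class ErrorCodeTranslator {
        // (targetId, nativeCode) -> common code
        private final Map<String, String> table = new HashMap<String, String>();

        public void register(String targetId, String nativeCode, String commonCode) {
            table.put(targetId + ":" + nativeCode, commonCode);
        }

        public String toCommonCode(String targetId, String nativeCode) {
            String common = table.get(targetId + ":" + nativeCode);
            return (common != null) ? common : "UNKNOWN";
        }

        public static void main(String[] args) {
            ErrorCodeTranslator t = new ErrorCodeTranslator();
            t.register("applicationA", "20", "40");  // invalid user identifier
            t.register("applicationB", "30", "40");  // same failure, different code
            System.out.println(t.toCommonCode("applicationA", "20"));  // 40
            System.out.println(t.toCommonCode("applicationB", "30"));  // 40
        }
    }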
Solution
Use a Password Synchronizer to centralize the management and synchronization of user credentials (including user passwords) across different application systems via programmatic interfaces. A Password Synchronizer is a convenient control center for synchronizing user account services across multiple application
systems. It acts like a hub that issues user account password service commands (including setting, resetting, and synchronizing user passwords) to all application systems that are connected to it. Each of the application systems receives the user account password service request, verifies its authenticity, and processes the request. If the request is successful, the application system responds with a positive return code. Otherwise, it returns a response with the details of the unsuccessful condition or failure reason. A Password Synchronizer can manage user credential (such as user account password) activities in a programmatic manner. All user account password service requests are logged and tracked. If the target application system is unavailable, the Password Synchronizer can reissue the service request. A Password Synchronizer is extremely important when there are a large number of target application systems and administrators need to synchronize the user account passwords within a short time window. Operational efficiency and accuracy are keys to success. To provide a flexible user account password service, architects and developers may need a number of logical architecture components, as discussed earlier in this chapter. Figure 13-8 shows a simple adaptation of these logical components as they appear in the Password Synchronizer.
Password Synchronizer Manager. The Password Synchronizer Manager acts as a façade that directs user account password service requests to the provisioning service targets (that is, target application systems). It performs the roles of the provisioning server and password manager shown in Figure 13-5.

Ledger. The Ledger logs each user account password service request. Once the service request is complete, the Ledger marks the request as successful. If the provisioning service target (target application system) is not available, the Password Synchronizer Manager will reissue the service request after the provisioning service target resumes service. The Ledger also performs the role of the monitoring component in Figure 13-5.

PSTID-ID Mapping Table. The provisioning service target ID (PSTID) to user ID mapping table references the unique user account ID within the target application systems (provisioning service targets). This table provides the information the Password Synchronizer Manager needs to issue provisioning service requests to the target application systems.

For a large-scale deployment environment with a high volume of user account service requests, architects and developers would probably require the Password Synchronizer Manager to handle a large number of requests simultaneously using multithreaded processing. Typically, asynchronous messaging would be a good design approach: user account password service requests can be placed in a queue, where the Password Synchronizer Manager can create multiple threads to process these requests simultaneously. Because each target application system may be running a different application protocol, the Password Synchronizer Manager must be flexible enough to handle multiple protocols by shielding the client tier from the underlying protocol. Doing this may require the use of connectors or adapters that can transform the different underlying protocols.
Structure
Figure 13-9 shows a class diagram for the Password Synchronizer pattern. The core Password Synchronizer service consists of four important classes: PasswordSyncManager, ServiceConfig, PasswordSyncListener, and PasswordSyncLedger.
The PasswordSyncManager class is the main process engine that handles user account password requests. The user account password request is created by the class PasswordSyncRequest, which loads the user profile (for example, user name and password) via the class ProvisioningUserProfile. The PasswordSyncManager creates a secure session, connects to each provisioning service target, and issues the relevant user account password request. The ServiceConfig class loads the PSTID mapping file, which stores a list of the provisioning service targets and the underlying application protocol (in a context object called ServiceConfigContext for each provisioning target system) used to process the user account password service requests, such as RMI-IIOP or SOAP/HTTP. This avoids tightly coupling the data transport layer processing logic with the application processing logic in the program code. The PSTID mapping file defines the mapping between the unique provisioning service target ID and the user IDs in each application system. Flexibility is increased by using a unique provisioning service target ID, which may be an arbitrary, system-generated reference number that references the other user IDs. Using a unique provisioning service target ID allows user IDs to be added or removed from the mapping table without those actions impacting other systems or the application infrastructure. If architects and developers instead use one of the existing user IDs to map to the other user IDs, any change to that user ID will break the referential integrity (the mapping relationship), and the PasswordSyncManager will not be able to complete the user account password service requests. The PasswordSyncListener class represents the target provisioning system that receives and processes the user account service request. The PasswordSyncLedger class denotes the system entity that checks whether all user account service requests have been completed.
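To illustrate the decoupling the PSTID provides, here is a minimal Java sketch of such a mapping table. The class and method names are hypothetical; the identity variants reuse the Mary Jane example from earlier in this chapter.

    // Sketch: a PSTID-to-user-ID mapping table. The PSTID is an arbitrary,
    // system-generated key, so per-system user IDs can be added or removed
    // without breaking references held elsewhere. Names are illustrative.
    import java.util.HashMap;
    import java.util.Map;

    public class PstidMappingTable {
        // PSTID -> (target application system -> local user ID)
        private final Map<String, Map<String, String>> mapping =
            new HashMap<String, Map<String, String>>();

        public void put(String pstid, String targetSystem, String localUserId) {
            Map<String, String> ids = mapping.get(pstid);
            if (ids == null) {
                ids = new HashMap<String, String>();
                mapping.put(pstid, ids);
            }
            ids.put(targetSystem, localUserId);
        }

        public String localUserId(String pstid, String targetSystem) {
            Map<String, String> ids = mapping.get(pstid);
            return (ids == null) ? null : ids.get(targetSystem);
        }

        public static void main(String[] args) {
            PstidMappingTable table = new PstidMappingTable();
            // One person, three identity variants (see the Figure 13-1 discussion).
            table.put("PST-000123", "MessagingServer", "maryj");
            table.put("PST-000123", "DirectoryServer", "retep4yram");
            table.put("PST-000123", "HelpDesk", "M_Jane");
            System.out.println(table.localUserId("PST-000123", "DirectoryServer"));
        }
    }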
1.
2. ProvisioningServicePoint creates an instance of the Password Synchronizer.
3. PasswordSyncManager retrieves service protocols and bindings.
4. ServiceConfig sends service protocols and bindings to PasswordSyncManager.
5. PasswordSyncManager creates session variables.
6. ProvisioningServicePoint initiates a request to synchronize the user account password service.
7. PasswordSyncManager creates a handle for the password synchronization request.
8. PasswordSyncManager adds a handle to the session information.
9. PasswordSyncManager issues an SPML add operation request to each individual ProvisioningServiceTarget.
10. ProvisioningServiceTarget returns the result.
11. PasswordSyncManager writes events to Ledger.
12. PasswordSyncManager updates status by the handle ID.
13. PasswordSyncManager returns the result to ProvisioningServicePoint.
Figure 13-11 shows how the Password Synchronizer pattern can reissue or reprocess the user account password service requests until they are successfully completed. This is useful when architects and developers require reliable and resilient handling of provisioning requests. The capability to reprocess service requests is essential for ensuring that all user passwords are synchronized, even if some of the target systems are offline. It is also important that the Password Synchronizer have the capability to roll back to the original user account password after any unsuccessful password synchronization operation. The following sequence shows the reprocessing capability of synchronizing user account passwords.
Figure 13-11. Reprocessing user account password requests after target system resumes operation
1. ProvisioningServicePoint creates an instance of the Password Synchronizer.
2. PasswordSyncManager retrieves service protocols and bindings. 3. ServiceConfig sends service protocols and bindings to PasswordSyncManager. 4. PasswordSyncManager creates session variables. 5. ProvisioningServicePoint initiates a request to synchronize user account password service. 6. ProvisioningServicePoint retrieves a list of outstanding user account password service requests from Ledger. 7. Ledger returns a list of outstanding requests. 8. PasswordSyncManager creates a handle for the password synchronization request. 9. PasswordSyncManager adds a handle to the session information. 10. PasswordSyncManager issues an SPML add operation request to each individual ProvisioningServiceTarget. 11. ProvisioningServiceTarget returns the result. 12. PasswordSyncManager writes events to Ledger. 13. PasswordSyncManager updates status by the handle ID.
Strategies
A Password Synchronizer pattern provides a consistent and structured way to handle service provisioning functions and a flexible way to handle multiple protocol bindings. The following scenarios discuss important design strategies for use with the Password Synchronizer pattern.

Multithreading strategy. The Password Synchronizer should be flexible enough to support sequential and simultaneous processing. Sequential processing denotes that the Password Synchronizer processes each provisioning service target one at a time, in sequential order. However, this will not be scalable if there are a large number of provisioning service targets or service requests to handle. Simultaneous processing denotes the capability to create multiple threads for service request processing. Multiple threads require complex application design when implementing the Password Synchronizer, which must create the threads and handle synchronization among them.

Post-synchronization event strategy. Architects and developers can invoke a script or a series of actions after processing the user account password service request. For example, the Password Synchronizer can invoke a user-defined service (for example, using EJB or a UNIX script) to notify the client that the password synchronization is unsuccessful and provide details of the problematic provisioning service target's status. This allows timely event notification or the alerting of the administrator upon completion of the service request. However, the Password Synchronizer should not be confused with a workflow engine, which provides more flexible control processing.

Automated back-out strategy. If the provisioning service target is unable to process any user account password service request after several attempts, architects and developers can define threshold parameters such as TIMEOUT and MAX_RETRIES to determine whether they want to back out of the user account password service request for the rest of the provisioning service targets (see the sketch following this list). Backing out of the service request is similar to processing a user account password request and synchronizing the user passwords across systems. However, backing out of the user account password service request requires retrieving or storing the previous user account password temporarily. One challenge is that the Password Synchronizer pattern needs to retrieve the current user account password from the security infrastructure. Some security infrastructures (for example, the Solaris operating environment) do not allow retrieving user account passwords in clear text; they store the user account passwords in an encrypted format. Additionally, retrieving and storing the current user account password for a back-out operation may create several security risks. For example, security administrators need to determine a secure and safe mechanism for storing the user account password temporarily (for example, in encrypted text), because hackers may be able to access this temporary store.

Protocol binding strategy. It is possible that the administrative client may be using a mixture of protocols (SOAP, RMI) under different use case scenarios. Developers can build administrative clients for each different protocol, for example, a dedicated SOAP client for SOAP-based messaging and an EJB client for the RMI-IIOP protocol. However, it is more desirable to separate the administrative processing logic from the underlying protocols. Doing so allows a single client to support more than one underlying protocol.
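The following Java sketch outlines the automated back-out strategy under stated assumptions: a hypothetical SynchronizationTarget interface, a MAX_RETRIES threshold as described above, and secure storage of the previous password handled elsewhere. It is an illustration of the control flow only, not the pattern's definitive implementation.

    // Sketch: retry each target up to MAX_RETRIES, then back out the change
    // on targets that had already succeeded. All names are illustrative.
    import java.util.ArrayList;
    import java.util.List;

    interface SynchronizationTarget {
        boolean setPassword(String userId, char[] newPassword);
        // Restores the previously stored password (kept securely elsewhere).
        void restorePreviousPassword(String userId);
    }

    public class BackoutPasswordSync {
        private static final int MAX_RETRIES = 3;

        public static boolean synchronize(List<SynchronizationTarget> targets,
                                          String userId, char[] newPassword) {
            List<SynchronizationTarget> succeeded =
                new ArrayList<SynchronizationTarget>();
            for (SynchronizationTarget target : targets) {
                boolean ok = false;
                for (int attempt = 1; attempt <= MAX_RETRIES && !ok; attempt++) {
                    ok = target.setPassword(userId, newPassword);
                }
                if (!ok) {
                    // One target failed: undo the change everywhere it took.
                    for (SynchronizationTarget done : succeeded) {
                        done.restorePreviousPassword(userId);
                    }
                    return false;
                }
                succeeded.add(target);
            }
            return true;
        }
    }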
Consequences
By employing the Password Synchronizer pattern, developers can benefit in the following ways:

Addressing insecure data storage. The Password Synchronizer pattern uses a secure data store, such as Secure LDAP, to store the ID mapping table. Using a secure data store is important for addressing any security vulnerability caused by insecure data storage.

Addressing broken authentication. If each application system has its own user password, security hackers may easily break into an application system that uses weak user passwords. A possible risk mitigation is to synchronize the user passwords in a timely fashion across all application systems using a strong password policy. This measure can help in addressing the potential security vulnerability of broken authentication caused by weak user passwords.

Reusable programmatic interfaces that encapsulate different application protocols to set or reset user account passwords. The Password Synchronizer pattern uses programmatic interfaces (such as SPML) to encapsulate the interface that instructs the provisioning service targets to set or reset user account passwords. It reduces complexity by using standard interfaces, not custom-built proprietary connectors or interfaces. The programmatic interfaces can be highly reusable for similar provisioning service targets.
Automated retry if the provisioning service target is offline. The Password Synchronizer pattern will retry sending the user account service requests to the provisioning service targets, using a ledger, after the targets resume operations. This ensures that all provisioning service targets are synchronized. It is an essential feature of a reliable and resilient user account provisioning service.

Automated back-out during password synchronization. After a number of retries (such as three times) of resending the requests to a specific provisioning service target, administrators can decide to back out of the user account password service request. This is an important design decision, because the back-out operation denotes undoing previously successful user account password service requests from a potentially large number of provisioning service targets. It may be implemented by archiving the user credential data store from each provisioning service target or by storing the current user account password securely and temporarily before executing the current service request. (Nevertheless, if the user account passwords are not securely maintained, they may be vulnerable to security exploits.)
Sample Code
This section introduces sample program code for creating a Password Synchronizer to initiate user account password requests. The Password Synchronizer consists of two key components: PasswordSyncManager (an administrative client that initiates a number of user password synchronization requests to the provisioning service targets) and PasswordSyncLedger (a manager component that monitors the status of the service provisioning requests from a predefined JMS topic). Each of the service provisioning requests is intercepted and processed by PasswordSyncListener, which resides in each provisioning service target. JMS is used because it provides a reliable message delivery mechanism and allows better scalability, with multiple listeners processing the requests simultaneously. PasswordSyncManager is a slight adaptation of the PasswordSyncManager in the Password Synchronizer pattern section earlier in this chapter. Similarly, PasswordSyncLedger takes the role of Ledger. pstidMapping.xml is an adaptation of PSTIDMapping and is used by the methods in the class ServiceConfig. Figure 13-12 shows the logical architecture for the sample program code.

In Step 1, PasswordSyncManager reads from the provisioning service target mapping table pstidMapping.xml and publishes user password synchronization requests to different JMS topics. It renders the service provisioning request in SPML message format if the provisioning service target supports SOAP, according to the service configuration information defined in the mapping table using the methods defined in the class ServiceConfig. Otherwise, it generates delimited text. PasswordSyncManager uses the utility PasswordSyncRequest to transform the SOAP message (or the delimited text) to an object and writes it to the JMS topic. Currently, the JMS topic name uses the application resource name of the provisioning service target.
In Step 2, each provisioning service target uses a JMS listener, PasswordSyncListener. PasswordSyncListener intercepts any JMS objects published to the associated JMS topic name. Upon receipt, the listener processes the service provisioning requests in the takeAction method and notifies the PasswordSyncLedger of successfully synchronized requests. In Step 3, PasswordSyncLedger is a Password Synchronizer Manager ledger process that listens to a predefined JMS topic (such as PASSWORDSYNC_PROVIDER_OK_LIST). It keeps track of the original list of provisioning service targets (from the mapping table pstidMapping.xml). If all passwords are synchronized, PasswordSyncLedger displays a message stating the completion of the user password synchronization requests.
The core component of the Password Synchronizer is the PasswordSyncManager. Example 13-4 shows a program excerpt for PasswordSyncManager. It uses a hash table (LinkedHashMap) to store the user password profile. Upon initialization and loading of the system configuration, the PasswordSyncManager retrieves a list of applications from pstidMapping.xml using the class ServiceConfig. It then publishes the user password synchronization requests in either SOAP or delimited text, based on the service configuration information.
            this.protocolBinding = configContext.getProtocolBinding();
            System.out.println(this.timeStamp + "- "
                + configContext.getApplicationId()
                + " is being processed under " + this.topicName
                + " using " + this.protocolBinding);
            new PasswordSyncRequest(this.userProfile, this.topicName,
                this.protocolBinding);
        }
    }

    public static void main(String args[]) {
        new PasswordSyncManager();
    }
}
The service configuration for the Password Synchronizer allows different data transport protocols to be used. Example 13-5 shows a program excerpt for ServiceConfig. The program first retrieves a list of applications from pstidMapping.xml. The service configuration is stored in a system properties file and indicates the underlying data transport protocol for the password synchronization service, for example, SOAP or JMS.
log.fatal("ServiceConfig constructor cannot" + " find file/file not readable"); ie.printStackTrace(); } catch (JDOMException je) { log.fatal("cannot parse Password Synchronizer" + " config file"); je.printStackTrace(); } } /** * Get config file from JVM options * if the file does not exist, use the default one under config/config.xml * * @return String config file */ private String getConfigFile() { String localConfigFile = new String(); localConfigFile = System.getProperty("config.file"); if (localConfigFile == null) { localConfigFile = this.configFile; return localConfigFile; } else { return localConfigFile; } } /** * Initialize configuration by loading the ulyssesConfig.xml into the * LinkedHashMap * This will include: * 1. Load configFile * 2. Extract global config * 3. Extract private config for each component into UlyssesConfig * 4. Store private config info in LinkedHashMap * * @param String configFile Ulysses config file */ private void initConfig(Document doc) { setComponentsConfig(doc); } /** * Create pstidMapping.xml from LinkedHashMap * * Assumption - must load pstd mapping file and create LinkedHashMap first * * @param Document doc */ private synchronized void setComponentsConfig(Document doc) { String parentElement = "service"; String applicationIdElement = "applicationId"; String applicationClassNameElement = "applicationClassName"; String applicationURIElement = "applicationURI"; String protocolBindingElement = "protocolBinding"; String topicNameElement = "topicName"; int state = com.csp.provisioning.ServiceConfigContext.UNKNOWN_STATE; String applicationId = new String(); String applicationClassName = new String(); String applicationURI = new String(); String protocolBinding = new String(); String topicName = new String(); //String requesterId = new String(); String requesterId = this.requesterId;
ServiceConfigContext context = null; Element root = doc.getRootElement(); List components = root.getChildren(parentElement); Iterator i = components.iterator(); while (i.hasNext()) { Element component = (Element)i.next(); applicationId = component.getChild (applicationIdElement).getText(); applicationClassName = component.getChild (applicationClassNameElement).getText(); applicationURI = component.getChild (applicationURIElement).getText(); protocolBinding = component.getChild (protocolBindingElement).getText(); topicName = component.getChild (topicNameElement).getText(); //System.out.println("topic name = " + topicName); context = new ServiceConfigContext(applicationId, applicationClassName, applicationURI, protocolBinding, state, requesterId, topicName); this.serviceConfigHashMap.put(applicationId, context); } } /** * Retrieve private config in a list * * @param String componentName * @return List a list containing the private config of a Ulysses component */ public ServiceConfigContext getContext (String applicationId) { ServiceConfigContext context; if (this.serviceConfigHashMap == null) { try { if (configFile == null) { log.fatal("Invalid configuration " + "file name"); } else { SAXBuilder builder = new SAXBuilder(false); Document doc = builder.build (new File(configFile)); initConfig(doc); // ensure we can get components config even components not initialized //dumpComponentMap(); context = this.serviceConfigHashMap.get(applicationId); return context; } } catch (IOException ie) { log.fatal("ServiceConfig constructor " + " cannot find file/file not readable"); ie.printStackTrace(); } catch (JDOMException je) { log.fatal("cannot parse config file"); je.printStackTrace(); } return null; } else { context = this.serviceConfigHashMap.get(applicationId); return context; }
} /** * Fetch service config context of all components * * @return LinkedHashMap serviceConfigHashMap * **/ public LinkedHashMap getAllConfigContext() { return this.serviceConfigHashMap; } }
The Password Synchronizer pattern uses the SPML addRequest message to create a new user account and synchronize user passwords across application systems. Example 13-6 shows a program excerpt of PasswordSyncRequest, which creates an SPML service request. The method createSPMLRequest constructs a SOAP message encapsulating the SPML request. It can be modified to add or change user account details.
    protected String lastName;
    protected String emailAddress;
    protected String userId;
    protected String password;

    /**
     * Constructor - Creates a new instance of PasswordSyncRequest.
     * This default constructor uses a default user profile
     * (Mary Jo Parker) for demo purposes.
     */
    public PasswordSyncRequest() {
        // default values if not specified
        this.protocolBinding = "SOAP";
        this.fullName = "Mary Jo Parker";
        this.firstName = "Mary Jo";
        this.lastName = "Parker";
        this.userId = "mjparker";
        this.emailAddress = "[email protected]";
        this.topicName = "prod_application1";
        this.password = "secret";
        init(topicName);
        try {
            createSPMLRequest();
            start();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    /** Constructor - Creates a new instance of PasswordSyncRequest */
    public PasswordSyncRequest(ProvisioningUserProfile userProfile,
            String topicName, String protocolBinding) {
        this.protocolBinding = protocolBinding;
        this.topicName = topicName;
        this.fullName = userProfile.getFullName();
        this.firstName = userProfile.getFirstName();
        this.lastName = userProfile.getLastName();
        this.userId = userProfile.getUserId();
        this.emailAddress = userProfile.getEmailAddress();
        this.password = userProfile.getToken();
        init(this.topicName);
        try {
            createSPMLRequest();
            start();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    /**
     * Initializes JMS settings
     *
     * @param String topicName JMS topic name
     * @exception JMSException ex JMSException
     */
    private void init(String topicName) {
        try {
            // Create JMS topic and settings
            // Can be replaced by ServiceLocator pattern when available
            topicConnectionFactory =
                new TopicConnectionFactory();
            topicConnection =
                topicConnectionFactory.createTopicConnection();
            topicSession = topicConnection.createTopicSession(false,
                Session.AUTO_ACKNOWLEDGE);
            topic = topicSession.createTopic(topicName);
        } catch (JMSException je) {
            je.printStackTrace();
            System.out.println("Cannot create topics or topic names");
            System.out.println("Connection problem: " + je.toString());
            if (topicConnection != null) {
                try {
                    topicConnection.close();
                } catch (JMSException moreEx) {
                    moreEx.printStackTrace();
                }
            }
            System.exit(1);
        }
    } // init()

    /**
     * Create SPML add request message in SOAP, and bind to JMS.
     *
     * This example uses SPML 1.0 syntax for illustration.
     *
     * @exception Exception ex JAXM/SAAJ exception
     */
    private void createSPMLRequest() throws Exception {
        try {
            // Create a SOAP envelope
            MessageFactory mf = MessageFactory.newInstance();
            SOAPMessage soapMessage = mf.createMessage();
            SOAPPart soapPart = soapMessage.getSOAPPart();
            SOAPEnvelope soapEnvelope = soapPart.getEnvelope();
            SOAPHeader soapHeader = soapMessage.getSOAPHeader();
            SOAPBody soapBody = soapEnvelope.getBody();

            // create addRequest SPML message
            Name name = soapEnvelope.createName("addRequest", "spml",
                "http://www.coresecuritypattern.com");
            SOAPBodyElement addRequest = soapBody.addBodyElement(name);
            Name childName = soapEnvelope.createName("xmlns");
            addRequest.addAttribute(childName,
                "urn:oasis:names:tc:SPML:1:0");
            childName = soapEnvelope.createName("spml");
            addRequest.addAttribute(childName,
                "urn:oasis:names:tc:DSML:2:0:core");

            // create identifier
            childName = soapEnvelope.createName("identifier");
            SOAPElement spmlIdentifier =
                addRequest.addChildElement(childName);
            childName = soapEnvelope.createName("type");
            spmlIdentifier.addAttribute(childName,
                "urn:oasis:names:tc:SPML:1:0#GUID");

            // create user account id
            childName = soapEnvelope.createName("id");
            SOAPElement spmlID =
                spmlIdentifier.addChildElement(childName);
            spmlIdentifier.addTextNode(this.userId);

            // create user account attributes, including the password
            childName = soapEnvelope.createName("attributes");
            SOAPElement attributes =
                addRequest.addChildElement(childName);
            childName = soapEnvelope.createName("attr", "dsml",
                "http://www.sun.com/imq");

            childName = soapEnvelope.createName("name");
            SOAPElement attr1 = attributes.addChildElement(childName);
            attributes.addAttribute(childName, "objectclass");
            childName = soapEnvelope.createName("value");
            SOAPElement attrObjclass = attr1.addChildElement(childName);
            attrObjclass.addTextNode("user");

            childName = soapEnvelope.createName("name");
            SOAPElement attr2 = attributes.addChildElement(childName);
            attributes.addAttribute(childName, "fullname");
            childName = soapEnvelope.createName("value");
            SOAPElement attrFullname = attr2.addChildElement(childName);
            attrFullname.addTextNode(this.fullName);

            childName = soapEnvelope.createName("name");
            SOAPElement attr3 = attributes.addChildElement(childName);
            attributes.addAttribute(childName, "email");
            childName = soapEnvelope.createName("value");
            SOAPElement attrEmail = attr3.addChildElement(childName);
            attrEmail.addTextNode(this.emailAddress);

            childName = soapEnvelope.createName("name");
            SOAPElement attr4 = attributes.addChildElement(childName);
            attributes.addAttribute(childName, "password");
            childName = soapEnvelope.createName("value");
            SOAPElement attrPassword = attr4.addChildElement(childName);
            attrPassword.addTextNode(this.password);

            childName = soapEnvelope.createName("name");
            SOAPElement attr5 = attributes.addChildElement(childName);
            attributes.addAttribute(childName, "lastname");
            childName = soapEnvelope.createName("value");
            SOAPElement attrLastname = attr5.addChildElement(childName);
            attrLastname.addTextNode(this.lastName);

            childName = soapEnvelope.createName("name");
            SOAPElement attr6 = attributes.addChildElement(childName);
            attributes.addAttribute(childName, "firstname");
            childName = soapEnvelope.createName("value");
            SOAPElement attrFirstname = attr6.addChildElement(childName);
            attrFirstname.addTextNode(this.firstName);

            // Attach a local file
            URL url = new URL("http://localhost:8080");
            DataHandler dHandler = new DataHandler(url);
            AttachmentPart soapAttach =
                soapMessage.createAttachmentPart(dHandler);
            soapAttach.setContentType("text/html");
            soapAttach.setContentId("cid-001");
            //soapMessage.addAttachmentPart(soapAttach);
            soapMessage.saveChanges();

            // Convert SOAP message to JMS
            this.msg = MessageTransformer.SOAPMessageIntoJMSMessage(
                soapMessage, this.topicSession);
        } catch (Exception ex) {
            ex.printStackTrace();
            System.out.println("Exception occurred: " + ex.toString());
        }
    }

    /**
     * Create a string in plain text to encapsulate the password sync request
     *
     * @return String a string that concatenates userId, fullName,
     *         emailAddress, password, lastName, firstName
     */
    private String createPasswordRequest() {
        String passwordRequest = this.userId + ":" + this.fullName + ":"
            + this.emailAddress + ":" + this.password + ":"
            + this.lastName + ":" + this.firstName;
        return passwordRequest;
    }

    /**
     * Start processing the SPML message, given the JMS topic name
     * and the SPML user password info.
     * Note that start() will open and close the JMS connection.
     * For better efficiency, create a batch loop to process multiple requests.
     */
    private void start() {
        String passwordRequestText = null;
        try {
            topicPublisher = this.topicSession.createPublisher(topic);
            if (this.protocolBinding.equals("SOAP")) {
                try {
                    createSPMLRequest();
                    this.topicPublisher.publish(this.msg);
                } catch (Exception ex) {
                    System.out.println("Cannot create SOAP message");
                    System.out.println("Message creation error: "
                        + ex.toString());
                    System.exit(1);
                }
            } else if (this.protocolBinding.equals("JMS")) {
                this.textMsg = this.topicSession.createTextMessage();
                passwordRequestText = createPasswordRequest();
                this.textMsg.setText(passwordRequestText);
                this.topicPublisher.publish(textMsg);
            } else {
                System.out.println("Request protocol " + this.protocolBinding
                    + " is not supported");
                System.exit(1);
            }
        } catch (JMSException je) {
            je.printStackTrace();
            System.out.println("Cannot publish SOAP message");
            System.out.println("Exception occurred: " + je.toString());
        } finally {
            if (topicConnection != null) {
                try {
                    topicConnection.close();
                } catch (JMSException jex) {
                    jex.printStackTrace();
                    System.out.println("Cannot close topicConnection");
                    System.out.println("Connection problem: "
                        + jex.toString());
                }
            }
        }
    } // start()

    /**
     * Set the protocol binding for the service request
     *
     * @param String protocolBinding
     */
    public void setProtocolBinding(String protocolBinding) {
        this.protocolBinding = protocolBinding;
    }
}
Using the JMS infrastructure, a small-footprint listener program is required for each Provisioning Service Target to intercept the user password synchronization request. Example 13-7 shows a program excerpt of PasswordSyncListener. PasswordSyncListener listens to the predefined JMS topic. Once the password synchronization request is received, the listener processes the service request and notifies the PasswordSyncLedger when the service request is complete (or fails).
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Date;
import java.util.Iterator;
import java.util.LinkedHashMap;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicSession;
import javax.jms.TopicSubscriber;
import javax.xml.soap.AttachmentPart;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.Name;
import javax.xml.soap.Node;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPHeaderElement;
import javax.xml.soap.SOAPMessage;
import javax.xml.soap.Text;
import com.sun.messaging.TopicConnectionFactory;
import com.sun.messaging.xml.MessageTransformer;

public class PasswordSyncListener implements javax.jms.MessageListener {

    protected TopicConnectionFactory topicConnectionFactory = null;
    protected TopicConnection topicConnection = null;
    protected TopicSession topicSession = null;
    protected Topic topic = null;
    protected TopicSubscriber topicSubscriber = null;
    protected TextMessage message = null;
    protected InputStreamReader inputStreamReader = null;
    protected String topicName = "prod_application1";
    protected String notifyTopicName = "PASSWORDSYNC_PROVIDER_OK_LIST";
    protected MessageFactory messageFactory = null;
    protected String timeStamp;
    protected ServiceConfig serviceConfig;
    protected ServiceConfigContext context;
    protected LinkedHashMap<String,ServiceConfigContext> serviceConfigHashMap =
        new LinkedHashMap<String,ServiceConfigContext>();

    /**
     * Constructor - creates a new instance of the Password Synchronizer
     * listener. This default constructor is used when no parameter is
     * given at run-time.
     */
    public PasswordSyncListener() {
        serviceConfig = new ServiceConfig();
        serviceConfigHashMap = serviceConfig.getAllConfigContext();
        System.out.println("PasswordSyncListener - processing password synchronization requests from JMS topic '" + this.topicName + "'");
        System.out.println("Note - completed request will be notified under the JMS topic '" + this.notifyTopicName + "'");
        init();  // initialize the JMS environment
        snoop(); // listen for SPML requests
    }

    /**
     * Constructor - creates a new instance of the Password Synchronizer
     * listener with the given topic names.
     */
    public PasswordSyncListener(String newTopicName, String newNotifyTopicName) {
        serviceConfig = new ServiceConfig();
        serviceConfigHashMap = serviceConfig.getAllConfigContext();
        this.topicName = newTopicName;
        this.notifyTopicName = newNotifyTopicName;
        System.out.println("PasswordSyncListener - processing password synchronization requests from JMS topic '" + this.topicName + "'");
        System.out.println("Note - completed request will be notified under the JMS topic '" + this.notifyTopicName + "'");
        init();
        snoop();
    }

    /*
     * Initializes the JMS settings.
     */
    public void init() {
        // for a future enhancement, use the Service Locator pattern here
        try {
            this.messageFactory = MessageFactory.newInstance();
            this.topicConnectionFactory = new com.sun.messaging.TopicConnectionFactory();
            this.topicConnection = this.topicConnectionFactory.createTopicConnection();
            this.topicSession = this.topicConnection.createTopicSession(false,
                Session.AUTO_ACKNOWLEDGE);
            this.topic = this.topicSession.createTopic(this.topicName);
        } // try
        catch (Exception ex) {
            ex.printStackTrace();
            System.out.println("Cannot create topics or topic names");
            System.out.println("Connection problem: " + ex.toString());
            if (topicConnection != null) {
                try {
                    topicConnection.close();
                }
                catch (JMSException moreEx) {
                    moreEx.printStackTrace();
                }
            } // if topicConnection
            System.exit(1);
        } // catch
    } // init()

    /*
     * Displays the SOAP header contents.
     * @param header SOAP header
     */
    private void dumpHeaderContents(SOAPHeader header) {
        try {
            Iterator allHeaders = header.examineAllHeaderElements();
            while (allHeaders.hasNext()) {
                SOAPHeaderElement headerElement =
                    (SOAPHeaderElement) allHeaders.next();
                Name headerName = headerElement.getElementName();
                System.out.print("<" + headerName.getQualifiedName() + "> ");
                System.out.print("actor='" + headerElement.getActor() + "' ");
                System.out.print("mustUnderstand='"
                    + headerElement.getMustUnderstand() + "' ");
                System.out.println("</" + headerName.getQualifiedName() + ">");
            } // while allHeaders.hasNext
        }
        catch (Exception ex) {
            ex.printStackTrace();
        } // catch
    } // dumpHeaderContents

    /*
     * Retrieves the SOAP message contents and displays them
     * in indented XML format.
     *
     * @param iterator Iterator over the SOAP message nodes
     * @param indent   indent space for displaying the
     *                 XML messages on screen
     */
    private void getContents(Iterator iterator, String indent) {
        while (iterator.hasNext()) {
            Node node = (Node) iterator.next();
            SOAPElement element = null;
            Text text = null;
            if (node instanceof SOAPElement) {
                element = (SOAPElement) node;
                Name name = element.getElementName();
                System.out.print(indent + "<" + name.getQualifiedName());
                Iterator attrs = element.getAllAttributes();
                while (attrs.hasNext()) {
                    Name attrName = (Name) attrs.next();
                    System.out.print(" " + attrName.getQualifiedName() + "='"
                        + element.getAttributeValue(attrName) + "'");
                } // while attrs.hasNext
                System.out.println(">");
                Iterator iter2 = element.getChildElements();
                getContents(iter2, indent + "  ");
                System.out.println(indent + "</" + name.getQualifiedName() + ">");
            } // if node instanceof SOAPElement
            else {
                text = (Text) node;
                String content = text.getValue();
                System.out.println(indent + "  " + content);
            } // else
        } // while
    } // getContents

    /*
     * Processes each JMS message received from the JMS topic.
     * @param message JMS message in SOAP format
     */
    public void onMessage(Message message) {
        try {
            this.timeStamp = new Date().toString();
            MessageFactory messageFactory = MessageFactory.newInstance();
            // Production code should invoke other Web services security
            // handlers here to unmarshall encrypted SOAP messages.
            SOAPMessage soapMessage = MessageTransformer
                .SOAPMessageFromJMSMessage(message, messageFactory);
            System.out.println(timeStamp + "- Message received! Converting the JMS message to SOAP message...");
            SOAPHeader thisSoapHeader = soapMessage.getSOAPHeader();
            dumpHeaderContents(thisSoapHeader);
            SOAPBody thisSoapBody = soapMessage.getSOAPBody();
            Iterator soapContent = thisSoapBody.getChildElements();
            System.out.println();
            System.out.println(timeStamp + "- Rendering SOAP Message Content");
            getContents(soapContent, "");
            System.out.println("Attachment counts: "
                + soapMessage.countAttachments());
            Iterator iterator = soapMessage.getAttachments();
            while (iterator.hasNext()) {
                AttachmentPart soapAttach = (AttachmentPart) iterator.next();
                String contentType = soapAttach.getContentType();
                String contentId = soapAttach.getContentId();
                if (contentType.indexOf("text") >= 0) {
                    // the attachment content could be processed here
                    String content = (String) soapAttach.getContent();
                } // if contentType
            } // while

            // Take action to notify the Password Synchronization
            // Manager about the OK status
            TakeAction action = new TakeAction();
            action.init(notifyTopicName);
            String tempTopicName = findApplicationId(topicName);
            if (tempTopicName != null) {
                action.publishPasswordSyncResult(tempTopicName, "SOAP");
            }
            else {
                System.out.println("ERROR - Mismatch between applicationId and topicName. Please check pstidMapping.xml");
            }
        } // try
        catch (Exception ex) {
            // The message is not a SOAP message; fall back to the
            // delimited-text (JMS) protocol binding.
            try {
                TextMessage textMessage = (TextMessage) message;
                String text = textMessage.getText();
                System.out.println(timeStamp
                    + "- Password sync request in delimited text: " + text);
                // Take action to notify the Password Synchronization
                // Manager about the OK status
                TakeAction action = new TakeAction();
                action.init(notifyTopicName);
                String tempTopicName = findApplicationId(topicName);
                if (tempTopicName != null) {
                    action.publishPasswordSyncResult(tempTopicName, "JMS");
                }
                else {
                    System.out.println("ERROR - Mismatch between applicationId and topicName. Please check pstidMapping.xml");
                }
            }
            catch (Exception anotherEx) {
                anotherEx.printStackTrace();
            }
        } // catch
    } // onMessage

    /*
     * Starts listening to the JMS topic.
     */
    private void snoop() {
        char answer = '\0';
        final boolean NOLOCAL = true;
        try {
            topicSubscriber = topicSession.createSubscriber(topic, null, NOLOCAL);
            topicSubscriber.setMessageListener(this);
            topicConnection.start();
            System.out.println("Command Option : Q=quit, then <return>");
            System.out.println();
            inputStreamReader = new InputStreamReader(System.in);
            while (!((answer == 'q') || (answer == 'Q'))) {
                try {
                    answer = (char) inputStreamReader.read();
                }
                catch (IOException e) {
                    System.out.println("I/O exception: " + e.toString());
                } // catch
            } // while !answer
        } // try
        catch (JMSException ex) {
            System.out.println("Cannot subscribe message");
            System.out.println("Exception occurred: " + ex.toString());
            System.exit(1);
        }
        finally {
            if (topicConnection != null) {
                try {
                    topicConnection.close();
                }
                catch (JMSException ex) {
                    System.out.println("Cannot close topicConnection");
                    System.out.println("Connection problem: " + ex.toString());
                    System.exit(1);
                }
            } // if topicConnection
        } // finally
    } // snoop()

    /**
     * Helper method to look up the applicationId
     * for a given topicName.
     *
     * @param String targetTopicName
     * @return String applicationId
     */
    private String findApplicationId(String targetTopicName) {
        for (ServiceConfigContext configContext :
                this.serviceConfigHashMap.values()) {
            if (configContext.getTopicName().equals(targetTopicName)) {
                return configContext.getApplicationId();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String newTopicName = null;
        String newNotifyTopicName = null;
        // Command syntax helper
        if (args.length != 2) {
            // Take the default topic name and notify topic
            // name if no param is given at run-time
            new PasswordSyncListener();
        }
        else {
            newTopicName = args[0];
            newNotifyTopicName = args[1];
            new PasswordSyncListener(newTopicName, newNotifyTopicName);
        }
    } // main
}
The listener (PasswordSyncListener) of each Provisioning Service Target uses a class called TakeAction to implement how the Provisioning Service Target should handle the user password synchronization request. This may include resetting the user password and notifying the service requester when completed. Example 13-8 shows an implementation of a simple notification action. The TakeAction class can be expanded and modified to include additional processing in the future.
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicPublisher;
import javax.jms.TopicSession;

public class TakeAction {

    protected TopicConnectionFactory topicConnectionFactory = null;
    protected TopicConnection topicConnection = null;
    protected TopicSession topicSession = null;
    protected Topic topic = null;
    protected String topicName = null;

    /** Constructor - creates a new instance of TakeAction */
    public TakeAction() {
    }

    /*
     * Sets up the JMS topic connection.
     *
     * @param String topicName JMS topic name
     */
    public void init(String topicName) {
        try {
            this.topicConnectionFactory = new com.sun.messaging.TopicConnectionFactory();
            this.topicConnection = this.topicConnectionFactory.createTopicConnection();
            this.topicSession = this.topicConnection.createTopicSession(false,
                Session.AUTO_ACKNOWLEDGE);
            this.topic = this.topicSession.createTopic(topicName);
        } // try
        catch (Exception ex) {
            ex.printStackTrace();
            System.out.println("Cannot create topics or topic names");
            System.out.println("Connection problem: " + ex.toString());
            if (this.topicConnection != null) {
                try {
                    this.topicConnection.close();
                }
                catch (JMSException moreEx) {
                    moreEx.printStackTrace();
                }
            } // if topicConnection
            System.exit(1);
        } // catch
    } // init

    /*
     * Publishes the successfully completed application
     * name to a pre-defined topic.
     * The structure of the message is simply <application>.
     *
     * @param String application application name where the
     *               password is synchronized
     * @param String protocol either SOAP or JMS
     */
    public void publishPasswordSyncResult(String application, String protocol) {
        TextMessage sentMessage = null;
        TopicPublisher topicPublisher = null;
        try {
            topicPublisher = this.topicSession.createPublisher(topic);
            sentMessage = this.topicSession.createTextMessage();
            sentMessage.setText(application);
            topicPublisher.publish(sentMessage);
        } // try
        catch (JMSException jmsex) {
            jmsex.printStackTrace();
            System.out.println("Cannot publish SOAP message");
            System.out.println("Exception occurred: " + jmsex.toString());
        } // catch
        finally {
            if (topicConnection != null) {
                try {
                    this.topicConnection.close();
                }
                catch (JMSException jmsex) {
                    jmsex.printStackTrace();
                    System.out.println("Cannot close topicConnection");
                    System.out.println("Connection problem: " + jmsex.toString());
                } // catch
            } // if topicConnection
        } // finally
    } // publishPasswordSyncResult
}
A ledger (PasswordSyncLedger) is required to track the status of user password synchronization requests. Example 13-9 shows a program excerpt that uses the Java Message Service to implement the ledger.
protected final boolean NOLOCAL = true;
protected ServiceConfig serviceConfig;
protected ServiceConfigContext context;
protected LinkedHashMap<String,ServiceConfigContext> serviceConfigHashMap =
    new LinkedHashMap<String,ServiceConfigContext>();
protected MessageFactory messageFactory = null;
protected String timeStamp;

/** Creates a new instance of PasswordSyncLedger */
public PasswordSyncLedger() {
    // set the default topic name
    this.topicName = "PASSWORDSYNC_PROVIDER_OK_LIST";
    // load the Password Synchronizer config file
    serviceConfig = new ServiceConfig();
    serviceConfigHashMap = serviceConfig.getAllConfigContext();
    System.out.println("Password Synchronizer Ledger starts.");
    // set up the JMS connection factory
    init(this.topicName);
    start();
}

/*
 * Sets up the JMS topic connection and
 * initializes the JMS environment.
 */
public void init(String topicName) {
    try {
        this.topicConnectionFactory = new com.sun.messaging.TopicConnectionFactory();
        this.topicConnection = this.topicConnectionFactory.createTopicConnection();
        this.topicSession = this.topicConnection.createTopicSession(false,
            Session.AUTO_ACKNOWLEDGE);
        this.topic = this.topicSession.createTopic(topicName);
        messageFactory = MessageFactory.newInstance();
    } // try
    catch (Exception ex) {
        ex.printStackTrace();
        System.out.println("Cannot create topics or topic names");
        System.out.println("Connection problem: " + ex.toString());
        if (topicConnection != null) {
            try {
                topicConnection.close();
            }
            catch (JMSException moreEx) {
                moreEx.printStackTrace();
            } // catch
        } // if topicConnection
        System.exit(1);
    } // catch
}

/*
 * Processes each message received from the listener.
 */
public void onMessage(Message message) {
    try {
        TextMessage textMessage = (TextMessage) message;
        String syncResult = textMessage.getText();
        // assume the application is synchronized
        this.timeStamp = new Date().toString();
        System.out.println(this.timeStamp
            + "- just completed password synchronization for '"
            + syncResult + "'");
        if (this.serviceConfig.getContext(syncResult).getApplicationId()
                .equals(syncResult)) {
            // set the state to SYNC_STATE
            this.serviceConfig.getContext(syncResult)
                .setState(ServiceConfigContext.SYNC_STATE);
            viewResult();
        }
    } // try
    catch (JMSException jmsex) {
        jmsex.printStackTrace();
    } // catch
}

/*
 * Starts listening to the topic in a loop.
 * It stops only when the user presses Q.
 */
public void start() {
    char answer = '\0';
    try {
        topicSubscriber = this.topicSession.createSubscriber(topic, null, NOLOCAL);
        this.topicSubscriber.setMessageListener(this);
        this.topicConnection.start();
        System.out.println("Command Option : Q=quit, then <return>");
        inputStreamReader = new InputStreamReader(System.in);
        while (!((answer == 'q') || (answer == 'Q'))) {
            try {
                answer = (char) inputStreamReader.read();
            }
            catch (IOException e) {
                e.printStackTrace();
                System.out.println("I/O exception: " + e.toString());
            } // catch
        } // while
    } // try
    catch (JMSException ex) {
        ex.printStackTrace();
        System.out.println("Cannot subscribe message");
        System.out.println("Exception occurred: " + ex.toString());
    }
    finally {
        if (topicConnection != null) {
            try {
                topicConnection.close();
            }
            catch (JMSException ex) {
                ex.printStackTrace();
                System.out.println("Cannot close topicConnection");
                System.out.println("Connection problem: " + ex.toString());
            } // catch
        } // if topicConnection
    } // finally
}

/**
 * Verifies whether all provisioning target systems are synchronized.
 */
private void viewResult() {
    int totalUnsync = 0;
    for (ServiceConfigContext configContext :
            this.serviceConfigHashMap.values()) {
        // count the targets that are not yet synchronized
        if (configContext.getState() != ServiceConfigContext.SYNC_STATE) {
            totalUnsync++;
        }
    }
    this.timeStamp = new Date().toString();
    if (totalUnsync == 0) {
        System.out.println(this.timeStamp
            + "- Notification - all passwords are synchronized in all systems.");
        System.out.println("Password Synchronizer Ledger is stopped.");
        System.exit(0);
    }
    else {
        System.out.println(this.timeStamp + "- " + totalUnsync
            + " applications need to be synchronized.");
    }
}

public static void main(String args[]) {
    new PasswordSyncLedger();
}
}
Example 13-10 shows the screen display messages when invoking PasswordSyncManager. The system properties file specifies four password synchronization requests, and four Java Message Service topics (prod_application1, prod_application2, prod_application3, and prod_application4) are defined.
Example 13-11 shows the screen display messages from a local instance of PasswordSyncListener. In this example, PasswordSyncListener subscribes to the Java Message Service topic "prod_application1," which corresponds to a specific Provisioning Service Target. Upon receipt of the SPML service request, PasswordSyncListener displays the content of the SOAP message on the screen.
circinus:~/work> java -cp ./PasswordSync_Lib.jar:./PasswordSync.jar com.csp.provisioning.PasswordSyncListener prod_application1 PASSWORDSYNC_PROVIDER_OK_LIST
PasswordSyncListener - processing password synchronization requests from JMS topic 'prod_application1'
Note - completed request will be notified under the JMS topic 'PASSWORDSYNC_PROVIDER_OK_LIST'
Command Option : Q=quit, then <return>

Thu Jun 02 07:51:18 PDT 2005- Message received! Converting the JMS message to SOAP message...

Thu Jun 02 07:51:18 PDT 2005- Rendering SOAP Message Content
<spml:addRequest xmlns:spml='urn:oasis:names:tc:DSML:2:0:core' xmlns='urn:oasis:names:tc:SPML:1:0'>
  <identifier type='urn:oasis:names:tc:SPML:1:0#GUID'>
    <id>
      mjparker
    </id>
  </identifier>
  <attributes name='firstname'>
    <name>
    ...
</spml:addRequest>
Attachment counts: 1
Example 13-12 shows the notification messages from PasswordSyncLedger. This sample program excerpt acts as a console that shows the total number of Provisioning Service Targets whose passwords have been synchronized.
password management and synchronization processing is more restrictive in the management LAN or behind the DMZ.

Infrastructure of provisioning service targets. Interfacing with external provisioning service targets may impose high security risks, depending on whether there is a strong trust relationship between the provisioning server and the provisioning service targets. A direct interface between external hosts and the provisioning server, if it resides behind the DMZ or in the management LAN, is highly risky because it opens the server to host scanning and unauthorized footprinting. A direct programmatic interface can also expose host information or infrastructure details to potential hackers or intruders via host scanning. To mitigate these risks, security architects can allow only the delegated administration function (the Password Synchronizer Manager), upon successful authentication and authorization, to initiate user account password service requests. The Password Synchronizer Manager should disallow any application system (service target) from initiating user account password service requests.

Client device interface. If the client device (for example, a user password token or a mobile personal digital organizer) is compromised, intruders or hackers may be able to exploit the client device interface to access the provisioning server or the Password Synchronizer Manager. Security architects should review and assess the strength of a client device when securing the user account password; that is, how the client device stores the user credentials and initiates the connection to the provisioning server (or the Password Synchronizer Manager).

Logging. The log files for user account provisioning or user account password synchronization may contain sensitive user account information, which may be a target for host scanning or hacking. Security architects may want to segregate the provisioning event log from the normal system log and ensure that the log files are accessible only to the administrative user account (for example, file mode 700 on UNIX), or store the log events in a database or directory server (which provides additional security protection for the logging data). The Secure Logger pattern is useful in this context. These security measures can reduce the risk of unauthorized access and potential security intrusion.

Processing logic of user account password changes. The processing logic for creating or changing a user account password is usually customized in local provisioning service targets, typically by reusing existing APIs or the delegated administration interface. The security interface for initiating a user account password change may incur security risks. Security architects need to understand the legacy security integration requirements, such as how the underlying interface connects to the application system and how the service requester is authenticated. It is possible that the provisioning service target exposes an API that handles the processing logic of creating or changing a user account password but does not authenticate the service requester. In such a case, security architects need to provide a custom authentication module that can mitigate the security risk of a legacy security environment. This is usually done on a case-by-case basis, and it is difficult to prescribe a specific security protection mechanism.

Integration with legacy environment. The Password Synchronizer (or the provisioning server) may have requirements for integrating with a legacy operating environment. These may include propagating security credentials in order to access systems in the legacy operating environment. The existing legacy operating environment may not have sophisticated or sufficient security protection; for example, it may not encrypt the data communication channel with external trading partners. Such an environment can become a hacking target. Security architects may need to harden the legacy operating environment and allow only specific actions to be performed by the Password Synchronizer (or provisioning server).

Protection for SOAP messaging. If the programmatic interface for synchronizing user account passwords uses SOAP messaging, security architects may want to ensure that the SOAP message containing the user account password request is protected with a digital signature or encryption. XML Encryption and XML Signature are essential to securing these SOAP messages.

Identity management strategy. A sufficiently secure connection mechanism between the Password Synchronizer server (or the provisioning server) and the provisioning service targets is imperative. The trust relationship established determines whether the provisioning service target (application system) should accept user account password service requests from the service requester. An insufficient authentication or authorization mechanism between the server and the provisioning service targets poses a high risk of unauthorized access. Security architects may adopt an identity management strategy that authenticates the service requester with an identity provider (as discussed in Chapters 7 and 12) that uses a stronger authentication mechanism for user credentials. This can mitigate the security risks associated with user account provisioning services.

Single Sign-on and Sign-out. If security architects use point-to-point synchronous communication to connect the Password Synchronizer server to the provisioning service targets while synchronizing user account passwords, the Password Synchronizer will need to perform a sign-on and maintain a secure session with each system. In such a case, a single sign-on and single sign-out process control is essential. Security architects need to consider any potential security risks that might allow the session information to be tampered with, or any hanging session that can be exploited. The Single Sign-on Delegator pattern discussed in Chapter 12 is useful here. On the other hand, if the Password Synchronizer is implemented using asynchronous messaging, there is no need to sign on to each provisioning service target individually.
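To make the messaging-protection point concrete, here is a minimal sketch, not part of the book's sample code, of how the delimited-text password request from the earlier examples could be encrypted with the JCE before it is published over JMS. The shared AES key distribution between the Password Synchronizer Manager and its listeners is assumed to be handled out-of-band (for example, through a protected keystore); a production design would instead apply XML Encryption and XML Signature to the SOAP message itself, as recommended above.

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class PasswordRequestProtector {

    private final SecretKey key;

    // keyBytes is assumed to come from a securely provisioned keystore
    // shared by the manager and the listeners (hypothetical setup)
    public PasswordRequestProtector(byte[] keyBytes) {
        this.key = new SecretKeySpec(keyBytes, "AES");
    }

    /** Encrypts the delimited password request before publishing. */
    public byte[] protect(String passwordRequest) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        return cipher.doFinal(passwordRequest.getBytes("UTF-8"));
    }

    /** Decrypts the request on the listener side. */
    public String unprotect(byte[] cipherText) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.DECRYPT_MODE, key);
        return new String(cipher.doFinal(cipherText), "UTF-8");
    }
}

The publisher would then send the ciphertext as a javax.jms.BytesMessage (via createBytesMessage() and writeBytes()) rather than a TextMessage, so that the cleartext password never crosses the JMS provider.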
Reality Check
Should you build Password Synchronizer code from scratch? A few service provisioning vendor products (including OpenSPML initiatives) have an out-of-the-box service provisioning capability. These products provide basic service provisioning application infrastructure such as error handling and logging. It may not be practical to build the password synchronization functionality from scratch.
Related Patterns
There are other design patterns that are related to the Password Synchronizer pattern. These include:

Single Sign-on Delegator. Single Sign-on Delegator provides a delegate design approach to connecting to remote security service providers. Using Password Synchronizer with Single Sign-on Delegator, architects and developers do not need to sign on to each provisioning service target individually in order to initiate user account password service requests.

Business Delegate. Business Delegate [CJP2] is a Business tier J2EE pattern that encapsulates access to a remote business service. Password Synchronizer shares some similarities with Business Delegate in encapsulating the complexity of invoking remote business services, but the former specializes in user account password service requests using standards-based programmatic interfaces.
Application Design
1. Adopt a lightweight provisioning solution architecture that avoids heavy data replication from the data store (maintained by the provisioning service targets) to the provisioning system (or Password Synchronizer). A database-centric replication architecture usually brings relatively large data store overhead and potential data synchronization issues.

2. Cater to rule-based workflow requirements. Rule-based workflow is very useful for handling security service provisioning, including user account password synchronization. It is helpful to provide scripting support (for example, calling a UNIX shell script) to define rule-based workflow. Some software products provide a visual rule-based workflow user interface with drag-and-drop features for defining the workflow sequence.

3. Use standards-based integration protocols (such as JDBC, servlet, JNDI, or SMTP) for accessing resources. Avoid proprietary application protocols. A brief sketch of this approach follows this list.
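As a concrete illustration of item 3 (and of the Service Locator enhancement noted in the listener's init() method), here is a minimal sketch, under assumed JNDI names, of looking up the JMS administered objects through standards-based JNDI instead of instantiating the vendor class com.sun.messaging.TopicConnectionFactory directly. The JNDI name "jms/TopicConnectionFactory" is hypothetical and depends on the deployment.

import javax.jms.Topic;
import javax.jms.TopicConnectionFactory;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JmsServiceLocator {

    private final InitialContext ctx;

    public JmsServiceLocator() throws NamingException {
        ctx = new InitialContext();
    }

    /** Looks up the connection factory by its administered JNDI name. */
    public TopicConnectionFactory getTopicConnectionFactory()
            throws NamingException {
        return (TopicConnectionFactory) ctx.lookup("jms/TopicConnectionFactory");
    }

    /** Looks up a topic by its administered JNDI name. */
    public Topic getTopic(String jndiName) throws NamingException {
        return (Topic) ctx.lookup(jndiName);
    }
}

Because only javax.jms and javax.naming types appear in the code, switching JMS providers becomes a deployment configuration change rather than a code change.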
Quality of Service
Quality of service refers to the service attributes of how reliable, available, and scalable a system is. This section discusses options that support reliability, availability, and scalability when implementing security service provisioning.

4. Use persistent mode for processing service provisioning requests. Persisting service provisioning requests in an intermediary increases reliability, because the service requests can be reprocessed after a server restart or service recovery. For example, service provisioning requests can be written to JMS in persistent mode for more reliable message delivery and routing (see the sketch after this list).

5. Add availability options. Service availability is essential for service provisioning functions such as user password reset and password synchronization. Clustering the service provisioning server and enabling session failover for the service provisioning application in the application server are high-availability practices.

6. Add scalability options. When the volume of service provisioning requests or user password synchronization requests increases drastically, scalability becomes a concern. A simple rule of thumb for scaling up service provisioning servers is to deploy three instances of the service provisioning servers with a load balancer (sometimes known as "the rule of three"). When one service provisioning server is down, the other two servers are load balanced and still in service.

7. Use open standards for interoperability. Interoperability between the service provisioning server and the back-end systems is important. Proprietary middleware or vendor-specific connector technology often requires a system rewrite or upgrade when the underlying operating system or product is upgraded or becomes obsolete. Thus, the use of open standards (such as SPML) or J2EE technology (such as JMS) is key to interoperability between J2EE-based implementations.

8. Define quality of service. The quality of service (such as system response time) for the security service provisioning system depends on the system response time of the Provisioning Service Targets. It is fairly difficult to define the quality of service (such as a five-second response time) if some of the depended-upon Provisioning Service Targets have unpredictable response times (for example, one takes three seconds but another takes seven). Thus, security architects and administrators may want to customize which key attributes (such as availability) are appropriate for the quality of service measurement.

9. Customize your support strategy. Traditional system support for an IT solution often requires only that IT support staff know the product details, because the software systems are self-contained and do not usually have many external interfaces. Service provisioning solutions have multiple external interfaces to a variety of back-end systems. The root cause of a service incident may span different back-end systems, which can make troubleshooting and diagnosis very complex. IT support personnel need to be familiar with the dependencies of the external interfaces for further escalation if necessary. Thus, security architects and administrators need to define and adopt a flexible support strategy that can accommodate the complexity of cross-product troubleshooting and the escalation procedures.
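As a brief sketch of item 4, the TopicPublisher used throughout the Password Synchronizer examples can be switched to persistent delivery with a single JMS call; the helper below is illustrative, not part of the sample application.

import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.TextMessage;
import javax.jms.TopicPublisher;
import javax.jms.TopicSession;

public class PersistentPublisher {

    /**
     * Publishes a request in persistent mode so that the JMS provider
     * stores it until delivery, allowing it to survive a provider restart.
     */
    public static void publishPersistent(TopicSession session,
                                         TopicPublisher publisher,
                                         String requestText)
            throws JMSException {
        publisher.setDeliveryMode(DeliveryMode.PERSISTENT);
        TextMessage message = session.createTextMessage();
        message.setText(requestText);
        publisher.publish(message);
    }
}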
Summary
Security service provisioning addresses business problems related to account mapping, password synchronization, account provisioning, and so forth. These are operational tasks that incur high running costs and processing time. When designing secure service provisioning, architects need to consider the following design factors: centralized or decentralized architecture, integration strategy with the existing infrastructure, and the associated security risk mitigation strategies. Security service provisioning can lower the total cost of account provisioning. It can reduce the complexity of account mapping by providing a standard interface and XML schema using SPML. The standard interfaces allow easy interoperability between identity management systems. These business benefits are quantifiable; there are measurable cost savings in adopting service provisioning technologies. The Service Provisioning Markup Language (SPML) defines a standards-based interface between the client (requesting authority), resources (provisioning service target), and the provisioning service point. A number of security vendor products in the market now support SPML, and there is growing interest in relating Web services provisioning to SPML. The Password Synchronizer pattern is an example of a security design pattern that uses SPML to synchronize user passwords across heterogeneous platforms. It illustrates how the Java Message Service can provide reliable messaging for password synchronization.
References
This section includes URLs and resources referenced in the chapter. In addition, leading vendor products for security service provisioning and password synchronization are listed.
General
Here are some URLs and resources referenced in this chapter.
[CJP2] Deepak Alur, Dan Malks, and John Crupi. Core J2EE Patterns, Second Edition. Prentice Hall, 2003. http://corej2eepatterns.com/Patterns2ndEd/BusinessDelegate.htm
[Cryptocard] Cryptocard Technology. "The Incredible Cost of 'Free' Passwords."
[FisherLai] Marina Fisher and Ray Lai. "Designing Secure Service Provisioning." RSA Conference 2004.
[OpenSPML] OpenSPML. http://www.openspml.org
[PasswordSync] John Erik Setsaas. "Password Synchronization." EEMA's Directory Interest Group. http://www.maxware.com/News_Reviews/182-Passw-Synch.pdf
[PasswordSyncAgent] Password Synchronization Agent. https://pwsynch.dev.java.net/
[PasswordUsage] Protocom Development Systems. "Global Password Usage Survey." Version 1.0.0. October 23, 2003. http://www.protocom.com/whitepapers/password_survey.pdf
[SOX1] US Congress. Sarbanes-Oxley Act. H.R. 3763. July 30, 2002. http://www.law.uc.edu/CCL/SOact/soact.pdf
[SPML10] "Service Provisioning Markup Language (SPML) Version 1.0." OASIS. October 2003.
[SSOvsPasswordSync] Protocom Development Systems. "Single Sign-on Password Replay vs Password Synchronization." Version 1.0.0. 2003. http://www.protocom.com/whitepapers/sso_vs_passwordsync.pdf
[Unix2Win] Microsoft. "How To: Install Password Synchronization on a UNIX Host for a UNIX-to-Windows Migration." February 2, 2004. http://support.microsoft.com/default.aspx?scid=kb;EN-US;324542
[WS-Prov] IBM. "Web Services Provisioning (WS-Provisioning): Draft Version 0.7." October 17, 2003. http://www-106.ibm.com/developerworks/library/ws-provis/
Abridean (abrideanProvisor). http://www.abridean.com/SubPage.php?parent=products&child=UserManagementModules&grandchild=UserManager
Blockade Systems (ManageID). http://www.blockade.com/products/index.html
BMC Software (CONTROL-SA). http://www.bmc.com/products/proddocview/0,2832,19052_19429_22855_1587,00.html
CA (eTrust). http://2004.rsaconference.com/downloads/CAbroch.PDF
Entrust. http://www.entrust.com/identity_management/specs.htm
IBM (Tivoli Identity Manager). http://www-306.ibm.com/software/tivoli/products/identity-mgr/
Novell (Nsure Identity Manager). http://www.novell.com/products/nsureidentitymanager/quicklook.html
Open Network (Universal IdP). http://www.opennetwork.com/solutions/
Sun Microsystems (Sun Java System Identity Manager, a.k.a. Waveset Lighthouse). http://wwws.sun.com/software/products/identity_mgr/index.html
Thor (Xellerate). http://www.thortech.com/product/products_xell_architecture.asp
HP (OpenView Select Identity, a.k.a. TruLogica). http://www.managementsoftware.hp.com/products/select/index.html
Blockade Systems Corp's ManageID Syncserv
Overview of ManageID Suite. http://www.blockade.com/products/index.html
ManageID Syncserv Architecture. http://www.blockade.com/products/syncservarchitecture.html
Courion's Password Courier
Overview. http://www.courion.com/products/pwc/sync.asp
Architecture. http://www.courion.com/products/pwc/architecture.asp
IBM's Password Synchronization Service with Tivoli Directory Integrator and Tivoli Identity Manager
Technical Notes. http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0390.html?Open
M-Tech's P-Synch
Overview and white paper. http://www.psynch.com/docs/psynch-overview.html and http://www.psynch.com/docs/psynch-white-paper.html
Proginet's SecurPass
Overview. http://www.proginetuk.co.uk/products/securpass-home.htm
SecurPass-Sync. http://www.proginetuk.co.uk/pdf/securpasssync.pdf
Protocom's SecureLogin
Overview. http://www.protocom.com/html/securelogin_password_manage_suite.html
Self-service Password Reset. http://www.protocom.com/html/securelogin_self_service_password_reset.html
Sun's Sun Java System Identity Manager
Overview. http://www.sun.com/software/products/identity_mgr/index.xml
Overview
The case study analyzes a Web-based business application portal that hosts a set of merchant services from multiple business provider sources and provides member rewards redemption services. Figure 14-1 shows the conceptual representation of the Web-based business portal (eRewards). In a typical scenario, a subscriber logs in to the Web-based business portal to check his or her membership award balance and then submits a request to an affiliate content provider (a trading partner of the service provider) to redeem points and obtain a gift.
The portal permits access by users from Web browsers via the Internet or from intranet-based enterprise applications of trusted business corporations. To verify user identity, the portal relies on an external Identity Provider that provides user authentication functionality; thus, the portal does not need to invest resources in building its own identity management infrastructure. The eRewards Membership Service stores personal subscriber information and preferences in user profiles. It also keeps an account balance of the membership award points and tracks any redemption for merchandise. The eRewards Catalog Service provides a list of merchandise offered to subscribers in the Warehouse Database. Subscribers can browse through the catalog of merchandise, select specific items, and add them to the in-basket for membership award redemption. Upon user confirmation, the eRewards Order Management Service processes the order that was placed in the in-basket. It verifies the membership award account balance to see whether the subscriber has sufficient points to redeem the merchandise. Eligible merchandise information is forwarded to the affiliate trading partners for order fulfillment via the Merchant Service. The eRewards portal adopts J2EE technology to build business applications. Both the Catalog Service and the Order Management Service are implemented using Java-based Web services technologies. Integration with external partners is done by way of XML Web services. The trading partners use Web services to enable interoperability and to overcome integration issues related to platforms and technologies, such as C++ and Microsoft-based applications.
defined methodology must be followed. This process includes gathering high-level requirements, implementing and testing the code, production deployment, and final retirement. There are also other environment- and user-specific security factors that need to be considered and incorporated into the end-to-end security design. Each logical tier has its own associated risks in terms of development, deployment, and production. For example, there are risks involved in configuring the Web server plug-in to support the application server, in building a custom login module to adopt a security provider, and in implementing the logging for capturing events and supporting security audit. Compliance and regulatory requirements such as the Sarbanes-Oxley Act, the GLB Act, the EU directives, and the Patriot Act also define and add various security requirements in terms of auditing business transactions. In addition, they mandate traceability of business transactions for the detection of any suspicious activities or potential security threats. Refer to Chapters 9 and 10 [CSP] for details.

More importantly, security considerations differ widely from application to application. For example, Web services using SOAP messaging have security risks that are very different from those faced by a traditional Web application. The fact that SOAP messaging is language- and platform-neutral also makes it difficult to generalize security protection mechanisms or to implement generic security protection mechanisms that work on all platforms. Safeguarding synchronous or asynchronous Web services usually requires using security tokens, XML Signature, and XML Encryption, and enforcing access control for sensitive data in the SOAP messages. Security becomes more complicated if the SOAP messages are routed through multiple intermediaries in a workflow or among external trading partners, where each of the nodes or servers has different levels of security risk and exposure. Refer to Chapter 6, "Web Services Security: Standards and Technologies," and Chapter 11, "Securing Web Services: Design Strategies and Best Practices," for details about Web services security and related patterns.

Exchanging identity credentials across security domains adds security risks and complexity to business communication among service providers. The services must also facilitate unified sign-on, global logout, and common mechanisms for identity registration, revocation, and termination. In addition, the identity infrastructure must protect the identity information against identity theft, spoofing, and man-in-the-middle attacks. Refer to Chapter 7, "Identity Management: Standards and Technologies," and Chapter 12, "Securing the Identity: Design Strategies and Best Practices," for details about identity management and related patterns.
Assumptions
First, we need to make some assumptions about our choice of platforms, client and server infrastructure options, and communication protocols in order to set the boundaries and constraints of the business application system environment. We also need to provide some context for the security requirements before we proceed with the case study. These assumptions include:

The eRewards online portal is accessible from a Web browser.

We will address all business functionality of the eRewards portal using J2EE platform-based application components.

We will use synchronous and asynchronous protocols for invoking Web services via RPC-style and document-style Web services, respectively. Using Web services provides a standard interface for exposing business services previously implemented in Java (or .NET).

The eRewards online portal has made prior arrangements with its trading partners by exchanging WSDL and XML schemas for representing the XML-based business documents and by agreeing on the business data semantics.

For external SOAP messaging, the eRewards online portal and all of its trading partners agree to sign and encrypt the XML messages exchanged during back-end integration.

The identity management solution is provided by a third-party Identity Provider using a standards-based infrastructure built on standards such as Liberty Alliance and SAML. This enables us to provide single sign-on access across different security domains over the Internet.

User authentication is performed by a trusted Liberty-enabled security provider infrastructure that supports form-based authentication for J2EE over HTTPS as well as SSL client certificates (assuming that they were previously distributed via the eRewards online portal during the initial user or partner registration).
As mentioned earlier, we are focusing only on the security requirements of the portal, not the other nonfunctional requirements. End-to-end security is essential for the eRewards portal because security risks or threats do not come from a single source. Securing the Web server for the eRewards portal does not necessarily mean that the entire portal is secure, because the business functions for membership services and merchant services do not come from a single server; they are distributed across different servers and different security domains. Each security domain has different fabrics or substrates of security elements that require specific security design considerations or security risk mitigation measures. A monolithic security model that relies on HTTPS or traditional host security-hardening techniques will not be sufficient to handle a mixture of J2EE applications and Web services. Security should never be an afterthought whose importance goes unnoticed until something unpleasant happens or a security loophole is reported. Security requirements are the key drivers for the reliability and availability of the business services provided by the online portal. These include authentication, authorization, traceability, data privacy or confidentiality, availability, data integrity, and non-repudiation. The business-level security requirements gathered for the eRewards portal also include the following:

Identity Protection. The Identity Provider infrastructure should be able to provide access management for authentication of valid subscribers to the portal. Identity protection entails a variety of key management security protection mechanisms or risk mitigations that both secure the storage of key pairs (for example, the use of a Hardware Security Module (HSM) or a smart card device to store the private and public keys) and authenticate user identity in a secure manner (for example, the use of strong user authentication mechanisms such as smart cards, biometric devices, or dynamic passwords). Thus, the portal should be able to accommodate various strong user authentication mechanisms on an as-needed basis and support key management operations for securing identity information.

Securing Web Servers and Application Servers. The portal should provide security for the infrastructure hosting the Web servers and application servers. The hosting server infrastructure must make use of a trustworthy operating system and other required services.

Secure Client-to-Server Connection. The portal should be able to secure the session and the business data exchanged between the client and the server, using HTTP over SSL transport, for example. It should also make sure that the client is authenticated before establishing the user session.

Secure Server-to-Server Connection. The portal should be able to secure the session and the business data exchanged with the service providers. Invoking Web services from external trading partners requires routing XML messages across different intermediaries or multiple processing nodes. Each external intermediary or trading partner node processing the business transaction or participating in the workflow should be secure. In addition, hosting servers should be able to authenticate each server before establishing the business data exchange.

Secure Transactions. The portal should be able to support data privacy and confidentiality by securing the business transactions with encryption. Business transactions should be logged for traceability and audit control.

Message-Level Protection. Web services XML messages sent over a public network in clear text can be easily intercepted and tampered with. The XML messaging should make use of encryption and signature mechanisms that protect the data exchanged between the processing nodes. The message-level protection must ensure data integrity and non-repudiation of all business transactions.

Single Sign-on. The portal should be able to provide single sign-on to merchant services hosted by external trading partners. Thus, subscribers can provide user credentials to log in once, and they can then access both membership services and merchant services without having to log in multiple times. In addition to single sign-on, the portal must facilitate a common mechanism for identity registration, revocation, and termination.

Security Considerations for High Availability. No matter how secure and sophisticated the application infrastructure is, a DoS attack can cripple the online portal by making it unavailable for service. Thus, the portal should adopt appropriate preventive measures (such as load balancing, packet filtering, virus checking, failover, or backup) and service continuity measures that can defend against DoS attacks or other potential security breaches.

Security Risk Mitigation. There should be a plan for identifying different security threats and the associated risk mitigation. Based on the security threat analysis, security architects can determine whether additional security mitigation measures are necessary to cover any gaps in the security requirements or design elements.

Service Continuity and Recovery. There should be an infrastructure plan that ensures the capability of delivering the services in the event of a security breach or human error. If such an event occurs, the Web portal infrastructure must have a recovery strategy for the worst-case scenario and must provide mechanisms for recovery from the event. Such mechanisms may even stop the event from occurring in the first place.
System Constraints
Based on our requirements, there are system constraints that may impact the security design. One main constraint is the identity management infrastructure. The eRewards portal has a previous investment in an identity management vendor solution and currently uses a trusted external Identity Provider. Thus, there will be no need to build or customize any single sign-on security solution. Many Web services in the portal are created by exposing home-grown J2EE, .NET, and legacy applications as Web services. There may be no security built into these home-grown or legacy applications, so separate security considerations must be made for them during the design process.
In Figure 14-2, Client, a subscriber to the eRewards portal (OnlinePortal), needs to initiate and obtain secure login access to the portal. Upon successful authentication and authorization from the Identity Provider, Client can select different business services available from the portal. In this case study, Client intends to redeem merchandise (gifts offered by the portal and/or their affiliated merchants) using his or her membership award points. First, Client browses through the online catalog and selects the merchandise. Upon confirming the merchandise selection, Client chooses to redeem the merchandise by deducting membership award points from the available membership award balance. The portal retrieves the membership award balance and determines whether Client is eligible to redeem the merchandise. If Client is eligible, the portal places an order with the supplier merchant (TradingPartner) and issues a fulfillment request to deliver the order to the pre-registered home address of the subscriber (Client). In this scenario, we primarily focus on the security-related use cases: User login, Secure catalog service, Secure order placement, and Secure order fulfillment.

The User login use case refers to the user authentication process using an external Identity Provider.

The Secure catalog service use case refers to the security mechanisms used to secure the catalog service. The portal catalog service component accesses a Web service provided by a partner service provider. Secure catalog service requires that the invocation of remote Web services is secure and that there is full traceability that ensures there has been no unauthorized access to the catalog service and supports audit trails and compliance reporting. Additionally, Client should not need to log in again to the catalog service, even though it is a remote Web service.

The Secure order placement use case refers to the security mechanisms used to secure the order management process. The order management back-office function is currently implemented in J2EE. The J2EE component requires that Client be authorized to invoke the order management functions. It should also reuse the logging infrastructure for traceability.

The Secure order fulfillment use case refers to the security mechanisms used to secure the order fulfillment process. Order fulfillment is done by integrating external trading partners (TradingPartner) using document-style Web services. It is extremely important to authenticate the external trading partners before routing XML messages in the Web Services tier. To address the risk of message interception or tampering, the XML messages should be secured with data encryption and digital signatures for data integrity and confidentiality purposes.
Actors
The following Actors are the key entities that interact with the security use cases such as secure catalog service and secure order fulfillment. Here is a short description of each of the Actors:
Client. The subscriber to the online portal.

Online Portal. Also referred to as the eRewards portal; a Web portal application that provides personalized access and a point of entry to multiple business services.

Trading Partner. A service provider organization that provides a business service to the service requester (Client).
In Figure 14-4, we see our three main services in the portal. They connect to the merchant service at our trading partner and use an external Identity Provider for identity management. All of the transactional data is stored in our warehouse database.
System Environment
We assume the eRewards portal and the service providers work together as a medium-scale business application hosted on heterogeneous platforms including Solaris, Linux, and Microsoft Windows. The portal runs on a J2EE 1.4-compliant application server that also provides support for RPC-style and document-style Web services. No vendor-specific application server extension features will be used. The portal and the underlying service providers make use of an external trusted Identity Provider for authentication, authorization, single sign-on, and identity management services.
Application Architecture
Figure 14-4 depicts the high-level application architecture for the eRewards portal. Because the focus of this case study is on building end-to-end security for the portal, we will not rationalize the details of how all the elements of the application architecture were derived. In a nutshell, the portal runs on a J2EE 1.4-compliant application server using servlets and EJB components. The servlets make use of JAX-RPC or SAAJ handlers to invoke Web services provided by the service providers or external partners. The portal makes use of EJBs for processing orders. For identity management, the portal and the external trading partners have established a trusted relationship with service providers, making use of an external Liberty-enabled Identity Provider for user authentication, authorization, single sign-on, and identity management services. All server components of the portal and the service providers make use of a Liberty-enabled agent to communicate with the Identity Provider.
Technology Elements
The application architecture is represented using the following technology elements (a minimal handler sketch follows this list):

Servlets are server-side MVC-style components that handle user presentation and control.

EJBs are server-side components that encapsulate business logic to manipulate service requests, handle transaction processing, and retrieve or store business data in the database.

JAX-RPC and SAAJ handlers integrate with service providers via XML Web services.

Java Data Objects are an implementation of data access objects that retrieve or store business data in the database using JDBC connectivity.
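To illustrate the JAX-RPC handler element, here is a minimal sketch, hypothetical rather than taken from the case study, of a handler that intercepts outbound SOAP requests for traceability before they reach a service provider. A production handler would also apply message-level security, such as XML Signature, at this interception point.

import javax.xml.namespace.QName;
import javax.xml.rpc.handler.GenericHandler;
import javax.xml.rpc.handler.MessageContext;
import javax.xml.rpc.handler.soap.SOAPMessageContext;

public class AuditHandler extends GenericHandler {

    public QName[] getHeaders() {
        // no SOAP header blocks are processed by this simple handler
        return new QName[0];
    }

    public boolean handleRequest(MessageContext context) {
        SOAPMessageContext soapContext = (SOAPMessageContext) context;
        try {
            // log the outbound SOAP request for traceability;
            // a real handler would write to the secure audit log
            // instead of System.out
            soapContext.getMessage().writeTo(System.out);
        }
        catch (Exception ex) {
            ex.printStackTrace();
        }
        return true; // continue processing the handler chain
    }
}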
Security Prerequisites
Based on the application architecture, we need to define the following security prerequisites that will help us derive a conceptual security model.

1. Network perimeter security has historically been a critical requirement in any security architecture, because this layer has been the most commonly attacked. Practices such as using a multi-tier firewall for the DMZ, NAT-ed proxies, and packet filters all fit into the security architecture.

2. Application infrastructure security represents safeguards and countermeasures for ensuring the security of enterprise applications, LDAP, and the database. Practices such as using HTTP/SSL to communicate with the application, limiting access to the application via authentication, and enabling role-based access control, digital signatures, and data encryption during transit must be addressed.

3. Using a Web services-based infrastructure for integration with service providers and exchanging XML messages introduces a newer set of security challenges at both the message and communication layers. Adopting XML-based security mechanisms for message-level security and SSL/TLS for transport-level security must be considered.

4. Identity protection is a key issue when securing applications and exchanging business data across multiple security domains and between trading partners. It becomes more complex when exchanging authentication and authorization information. Single sign-on, global logout, and common identity registration, revocation, and termination mechanisms must be considered, using an external Identity Provider that establishes a chain of trust among all participating service providers.

5. Host security is another crucial requirement. Adopting a trusted or hardened OS, applying the appropriate patches, and installing an intrusion detection system are the key factors to consider to ensure that the host environment is secured.
Conceptual Security Model

Protecting the Identity Information. The Identity Provider should be able to authenticate valid subscribers to the eRewards portal. Using PKI for protecting the identity information during transit and storage and using digital certificates for representing the identity is often considered a recommended practice. Identity protection also entails secure storage of key pairs (for example, the use of a Hardware Security Module (HSM) or a smart card device to store the private keys) and physical access control mechanisms such as biometric technologies. Thus, the portal should be able to provide support for different identity protection mechanisms and for adopting stronger authentication mechanisms.

Securing Web Servers and Application Servers. Web servers and application servers may be the primary targets for security attacks or hacking. The portal should adopt and run on a hardened OS in the host application environment. A hardened OS ensures that all irrelevant and unused services that may be targets of threats are removed from the host environment. In addition, the Web servers and application servers must be securely deployed, with all default password configurations and sample applications completely removed.

Secure Client-to-Server Connection. The portal should be able to secure the session and business data exchanged between the user client and the server by way of HTTP over SSL transport. Adopting mutual authentication between the client and server before establishing the user session is recommended.

Secure Server-to-Server Connection. The portal should be able to secure the Web services communication exchanged between the service providers. Establishing secure communication with external trading partners requires secure routing of XML messages. It is important to verify that the Web portal ensures transport-level data integrity and confidentiality during communication with service providers and other participating intermediaries.

Secure Transactions. The portal should be able to support data privacy and confidentiality by securing the business transactions with encryption. Business transactions should be logged for traceability and properly audited, with sensitive data obfuscated in the logs.

Securing Messages. To communicate with service providers, the portal should adopt XML Web services using SOAP messages over HTTP/SSL. Although SSL ensures transport-level security, it does not facilitate message-level security that ensures the message is received by only the intended recipient. Using message-level protection mechanisms such as XML Signature and XML Encryption ensures message-level confidentiality and integrity during the Web services communication and also at the processing endpoints. This protection provides non-repudiation and trustworthy communication between the Web portal and the service providers.

Single Sign-on. The portal should be able to provide single sign-on to merchant services hosted by external trading partners. Thus, subscribers can provide user credentials to log in once and can access both membership services and merchant services without having to log in multiple times.

Secure Logging and Auditing. The portal must provide a full-fledged logging mechanism that captures and records all events with the corresponding identity as auditable trails. In addition to logging, the Web portal should provide an auditing mechanism to play back the recorded trails for forensic investigation.
Security Considerations for High Availability, Service Continuity, and Recovery. No matter how secure and sophisticated the application infrastructure is, a denial-of-service attack can cripple the Web portal by taking it offline and making it unavailable. Thus, the Web portal should adopt appropriate preventive measures (such as load balancing, fault tolerance, failover recovery, and session persistence) that can help defend against a security breach and ensure service continuity without disrupting legitimate user requests.

Risk Mitigation. There should be a plan for identifying different security threats and the associated risk mitigation. Based on the security threat analysis, security architects can determine whether additional mitigation measures are necessary to cover any gaps in the security requirements or design elements. A cost/benefit analysis can then determine the legitimacy of the mitigation strategy.
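The Secure Client-to-Server and Server-to-Server Connection requirements above call for mutually authenticated SSL. A minimal JSSE sketch of building such a channel follows; the keystore paths and passwords are placeholders, and a real deployment would load them from protected configuration rather than pass them around as arguments.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;

public class SecurePipeFactory {

    // Paths and password are illustrative; real values come from protected configuration.
    public static SSLSocketFactory createMutualAuthFactory(String keyStorePath,
            String trustStorePath, char[] password) throws Exception {
        // Client identity: presented to the peer during the SSL handshake.
        KeyStore identity = KeyStore.getInstance("JKS");
        identity.load(new FileInputStream(keyStorePath), password);
        KeyManagerFactory kmf =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(identity, password);

        // Trust anchors: the CA certificates of partners we accept.
        KeyStore trust = KeyStore.getInstance("JKS");
        trust.load(new FileInputStream(trustStorePath), password);
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trust);

        SSLContext context = SSLContext.getInstance("TLS");
        context.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return context.getSocketFactory();
    }
}
```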
Security Architecture
After formulating a conceptual security model identifying the key security features, we need to define a candidate security architecture that represents an architect's view of the eRewards portal security. This means we will make further architecture decisions to support development, allowing us to begin tying the security components together in a cohesive application architecture. We will not drill down to the object level, but we will start to define the services and subsystems that contribute to end-to-end security. Upon identifying the candidate security architecture, we will perform a detailed risk analysis and identify mitigation strategies. We will also perform a trade-off analysis to meet the security criteria and support the business and architectural decisions. Part of the job is already done. Due to business decisions, we have already decided to use a third-party identity management provider. This will drive a lot of the candidate architecture, because it imposes several constraints to begin with. Now we have to come up with a few high-level application components by adopting security patterns. This will help us illustrate the candidate security architecture. Figure 14-5 shows the logical representation of the candidate security architecture for the eRewards portal. We adopted the core security patterns to represent the key security requirements in the logical tiers of the application architecture.
In the Web tier, we will use a Secure Base Action pattern for centralizing and coordinating all security-related tasks for Web components such as JSPs and servlets. We will make use of an Authentication Enforcer pattern to authenticate Web requests and to verify authentication information established by our third-party Identity Provider. In the Business tier, we will use a Secure Session Façade pattern that can contain and centralize all EJB interactions within a secure session. We will use an Obfuscated Transfer Object to protect the business data in a transfer object during transit. To secure Web service communication with service providers, we will make use of the Secure Message Router pattern, which facilitates applying message-level and transport-level security mechanisms before sending messages to the Order Management service and the Membership service. To ensure secure sending and receipt of messages with service providers, we will make use of the Message Interceptor Gateway and Message Inspector patterns, which allow us to enforce security mechanisms at the transport level and the message level and to verify messages for standards compliance and other content-level vulnerabilities. The candidate security architecture also consists of components at the network and host layers. Perimeter security design, DMZ architecture, patch management, and intrusion-detection systems are other key components that need to be part of the candidate security architecture artifacts. Because the scope of this book targets application-level security solutions, we will assume that those tasks have been completed with due diligence, using known industry best practices.
As part of the risk analysis, we will assess each known risk qualitatively and quantitatively in light of architecture tiers, possibility of occurrence (expressed in terms of the number of prospects), probability (expressed in terms of the likelihood of possible occurrences), impact (expressed in terms of the effect on the overall architecture), and exposure (expressed as the level of acceptability). We will also identify the issues that expose those potential risks and how to mitigate them by identifying potential security patterns, best practices, and solutions. Table 14-1 shows the risk analysis and mitigation strategies for a partial list of known risks and vulnerabilities. A complete risk-analysis document is available at the companion Web site (http://www.coresecuritypatterns.com).
Table 14-1. Risk Analysis and Mitigation
The matrix records, for each known risk: the tier/component affected, the possibility of occurrence (single or multiple components), and scores for probability (1 Low, 3 Medium, 7 High, 10 Extreme), impact (1 Low, 5 Medium, 7 High, 10 Extreme), and exposure (1 Low, 5 Medium, 7 High, 10 Unacceptable).

1. DoS/DDoS (Web tier; multiple; probability 10, impact 10, exposure 10)
Issue: The portal is vulnerable to DoS/DDoS attacks because it is directly accessible over the Internet. There are many possibilities for carrying out DoS/DDoS by sending fake requests, initiating a flood of high-volume connections, injecting malicious data to cause buffer overflows, and exploiting application/configuration-specific weaknesses. These types of attacks usually consume Web/application server-specific system resources and deny further user requests. During these attacks, if a legitimate user makes a portal access request, the request may fail completely or the page may take longer to download, and further transactions may not be possible.
Mitigation: The preventive measures for DoS and DDoS include implementing router filtering to drop connections from untrusted hosts and networks and configuring fault-tolerant and redundant server resources. In addition, the Web/application server must be configured to perform host-name verification, identifying fake requests and denying them further processing. At the application level, the Web server may adopt security patterns such as Secure Pipe, Intercepting Web Agent, and Intercepting Validator.

2. XML-DoS (Web Services tier; multiple; probability 10, impact 10, exposure 10)
Issue: The Web service endpoints are vulnerable to XML-DoS, where an attacker can perpetrate XML-based and content-level attacks by sending malformed messages, replaying legitimate messages, sending an abnormal payload size, and sending non-compliant messages. These attacks lead to resource-intensive XML parsing, causing endless loops, buffer overflow, endpoint crash, or denial of further service processing.
Mitigation: The safeguards and preventive measures for XML-DoS can be carried out by adopting the Message Interceptor Gateway and Message Inspector patterns. These patterns secure access to Web service endpoints from message-level threats and vulnerabilities.

3. Man-in-the-Middle (Web tier and Web Services tier; multiple)
Issue: The Web portal and Web service endpoints are vulnerable to man-in-the-middle attacks, where an unauthorized user is able to read or modify the business transactions or messages sent between two endpoints. The service provider or requester is not aware that the communication channel, the business transaction, or the message exchange has been compromised.
Mitigation: Web tier components and Web services communication are safeguarded by implementing transport-layer security using SSL/TLS or IPSEC protocols. At the application level, the components can make use of the Secure Pipe pattern.

4. Message-Level Security (Web Services tier; single)
Issue: The messages exchanged between Web service endpoints are vulnerable to malicious data injection, identity spoofing, message validation failure attacks, replay of selected parts, schema poisoning, and element/parameter tampering.
Mitigation: Web service endpoints are protected from processing malicious messages by enforcing message-level security and processing messages against endpoint-specific security criteria. The Web service endpoint can make use of the Message Inspector pattern.

5. Protecting Sensitive Information (Business tier; multiple)
Issue: The data passed around in the Business tier is subject to logging and auditing. This data will often contain sensitive information such as a person's credit card number or personal bank information. If this data gets logged, it may be subject to inspection by unauthorized parties and leave open the possibility of identity theft.
Mitigation: To ensure that sensitive data does not get inadvertently output to a log file or an audit table, the Web service endpoint can use an Obfuscated Transfer Object pattern.

6. Restricting Access to Business Components (Business tier; multiple)
Issue: An attacker who has gained access to the internal network may attempt to communicate directly with the Business tier EJBs using RMI and thus bypass the Web tier security protocols. This would allow the attacker to gain unauthorized access to Business tier services.
Mitigation: Use a Secure Session Façade pattern to ensure that security is checked on the Business tier as well as on the Web tier. This can be done in conjunction with the Container Managed Security or Policy Delegate patterns.
In real-world scenarios, after the qualitative risk analysis, a quantitative risk analysis has to be performed. This helps identify all key risk elements and estimate the projected loss of value for each risk, such as infrastructure cost, potential threat, frequency, business impact, potential loss value, safeguard option, safeguard effectiveness, and safeguard value. Based on that information, we can estimate the potential losses and compute the annual loss expectancy (ALE), which helps to make a business case for identifying security countermeasures and safeguards. Refer to Chapter 8, "The Alchemy of Security Design: Methodology, Patterns, and Reality Checks," for more information about quantitative risk analysis.
Table 14-3 illustrates the trade-off analysis (TOA) effect matrix for weighing different choices in identifying a load-balancing option for the eRewards portal.
Table 14-3. Trade-Off Analysis: Effect Matrix for Load Balancing
The options compared are bastion hosts with reverse-proxy Web servers, Web server load balancing, and Web load-balancer appliances. For the Web application load-balancing criterion, the effect scores are +7, +8, and +5, respectively.
On the Business tier, we need to examine different data obfuscation strategies. The considerations for data obfuscation strategies include performance, security, and implementation costs. Table 14-4 shows the comparisons of the data obfuscation strategies.
Table 14-4. Data Obfuscation Trade-Off Analysis (Effect Matrix)
Masked List strategy: +1, 2. Encryption strategy: 5, 8. XML Encryption strategy: 8, 8.
A TOA artifact is also available at the companion Web site (http://www.coresecuritypatterns.com). Refer to Chapter 8, "The Alchemy of Security Design: Methodology, Patterns, and Reality Checks," for more information about trade-off analysis.
To mitigate known risks, the application architecture, including the Web portal and service providers, will make use of the following security patterns:
Secure Pipe: Provides transport-level data confidentiality and data integrity between the client, the portal, and the partners. Using client certificate-based mutual authentication also ensures non-repudiation.
Secure Base Action: Acts as the centralized security manager for the Web tier. It coordinates authentication, validation, and security logging for all Web requests.
Authentication Enforcer: Authenticates clients by verifying identity information passed by the third-party Identity Provider.
Intercepting Validator: Validates input passed in from the client.
Secure Logger: Logs all security-related activity to a secure store.
Secure Service Façade: Provides security for all Service Provider components. Provides auditing facilities through the Audit Interceptor.
Audit Interceptor: Captures audit events using the Secure Session Façade strategy.
Obfuscated Transfer Object: Acts as a generic Transfer Object and employs a Masked List strategy for obfuscating sensitive data written to logs and audit trails.
Secure Message Router: Handles messages to multiple endpoints securely, applying message-level security and the SSO token.
Message Inspector: Verifies messages for message-level security mechanisms, including authentication, authorization, signature, and encryption, and identifies content-level vulnerabilities.
Message Interceptor Gateway: Acts as the entry point for enforcing security on the incoming and outgoing XML traffic. It ensures transport-level security and verifies messages for standards compliance, conformance to mandatory XML schemas, message uniqueness, and peer authentication of the originating host. It makes use of the Message Inspector pattern for verifying message-level security threats and vulnerabilities.
These patterns address the key security requirements and mitigate the identified risks that span the Web, Business, and Web Services tiers. We specifically chose not to address implementations of the Identity tier, because such implementations are best performed by the third-party Identity Provider chosen by the eRewards portal owner. In this portal scenario, the Identity Provider is identified as a Liberty-enabled security infrastructure whose implementation is not discussed.
In the portal, each exposed business service is represented using a servlet that handles the business processing. To interact with service providers, the servlets use JAX-RPC handlers to invoke remote Web services. Where service providers use a J2EE environment, the servlets communicate with the providers' presentation components, such as JSPs and servlets, to invoke their business EJBs. After enforcing authentication and authorization, the catalog servlet enables a Client to browse through the merchandise catalog and select specific product items to place into the shopping cart. The catalog servlet uses a JAX-RPC handler to invoke a remote catalog Web service over HTTPS. The Secure Message Interceptor pattern intercepts the SOAP message to examine the user credentials and signature in the SOAP service request (data privacy and data integrity). The order management servlet takes a product or a group of product items selected in the catalog service and places an order with the back-end order management system using a membership service servlet. It uses a membership servlet to retrieve the membership award balance and verify whether the requesting Client is eligible and entitled to redeem the merchandise (authorization). The membership servlet communicates with the membership service servlet over HTTPS, which uses the Secure Session Façade pattern to delegate to the back-end membership service, which is implemented in EJBs. Upon successful verification of the membership award balance, the order management servlet issues a document-style SOAP service request over HTTPS using a JAX-RPC handler to invoke the remote order management service. The handler makes use of a Secure Message Router pattern to route the SOAP messages to the relevant service providers or external partners, applying all required message-level security mechanisms. It ensures that the SOAP message is also encrypted and digitally signed to provide message-level confidentiality and integrity, so that the message can be viewed only by the intended recipient.
Web Tier
On the Web tier, the Secure Pipe pattern was chosen to provide a secure communication channel. This protects communications between the client and the application from eavesdropping, man-in-the-middle attacks, and data injection or corruption. The Secure Base Action pattern is part of the entry point into the application and serves to enforce security on the front end in conjunction with the other Web-tier security patterns. The Authentication Enforcer provides us with a means for encapsulating and decoupling the mechanism used for authentication from the actual establishment of the client's identity. Because the actual authentication is performed outside the application by a trusted external Identity Provider, the actual job of the Authentication Enforcer is simply to verify the trusted provider and its credentials. The encapsulation of the authentication mechanism assures us that if, in the future, authentication is moved back into the application, those changes will be isolated in one spot and will not impact the rest of the Web tier. The Intercepting Validator is responsible for validating and cleansing the data passed in on the HTTP request. This provides protection from data corruption attacks such as SQL injection, cross-site scripting, and malformed requests aimed at crashing the site. Finally, the Secure Logger logs all of the relevant information about the request to a secure store, so that if intruders did gain access to the application, they would not be able to alter the logs and conceal their attack.
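To make the Web-tier coordination concrete, the following sketch casts the Secure Base Action's responsibilities as a servlet filter. AuthenticationEnforcer, InterceptingValidator, and SecureLogger are assumed application classes standing in for the pattern participants, not standard APIs; only the servlet filter contract itself is standard.

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SecureBaseFilter implements Filter {

    private AuthenticationEnforcer enforcer;
    private InterceptingValidator validator;
    private SecureLogger logger;

    public void init(FilterConfig config) {
        enforcer = new AuthenticationEnforcer();
        validator = new InterceptingValidator();
        logger = new SecureLogger();
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        logger.log(request);                        // record every request first
        if (!enforcer.verifySSOToken(request)) {    // identity established by the Identity Provider
            enforcer.redirectLogin(response);       // hand off to the external login page
            return;
        }
        if (!validator.isValid(request)) {          // scrub input before any action sees it
            response.sendError(HttpServletResponse.SC_BAD_REQUEST);
            return;
        }
        chain.doFilter(req, res);                   // security checks passed; run the action
    }

    public void destroy() { }
}
```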
Business Tier
On the Business tier, the Secure Session Façade pattern is used to provide a secure entry point into the business services. It has the responsibility of ensuring that auditing and any other security functions are performed prior to service invocation. In this case, auditing is delegated to the Audit Interceptor, which audits service invocations, responses, and exceptions as defined declaratively. The Obfuscated Transfer Object is used to pass data between the Web tier and the Business tier and between service providers in the Business tier. It takes responsibility for obfuscating sensitive data passed back and forth, thereby removing that responsibility from the service providers themselves. That way, credit card numbers, bank information, and other account details are not improperly written out to logs or the audit store.
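A stripped-down sketch of the Secure Session Façade with its Audit Interceptor follows. The EJB plumbing of a real J2EE 1.4 session bean (SessionContext, JNDI lookups, deployment descriptors) is omitted for brevity, and AuditInterceptor, OrderService, and the order types are assumed application classes.

```java
// Every business invocation passes through a single entry point that performs
// the security work and delegates auditing to the interceptor.
public class SecureSessionFacade {

    private final AuditInterceptor auditor = new AuditInterceptor();
    private final OrderService orderService = new OrderService();

    public OrderResult placeOrder(String memberId, OrderRequest request) {
        auditor.audit("placeOrder.request", memberId);        // audit before invocation
        try {
            OrderResult result = orderService.place(memberId, request);
            auditor.audit("placeOrder.response", memberId);   // audit the outcome
            return result;
        } catch (RuntimeException e) {
            auditor.audit("placeOrder.exception", memberId);  // audit failures for traceability
            throw e;
        }
    }
}
```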
Design
This section illustrates the design process for the security use cases and applies the security architecture and the identified security design patterns. UML-style sequence diagrams are used to denote the flow of events and how each logical component interacts with specific security patterns. The security service design discussed here should be vendor-agnostic and can be implemented in practically any language on any platform. We refer to J2EE-specific terminology because our security use cases and system constraints mandated the use of a J2EE environment, but that should not be misinterpreted as a necessary part of the security design.
Policy Design
Prior to the design elaboration phase, we need to document our policy design. In terms of security, the policy design must provide rules defining appropriate and inappropriate behavior, who and what to trust, level of control, communication requirements, procedures, and associated tools to support the security infrastructure. To do this as part of the eRewards portal security architecture, we need to document the following security policy artifacts:
User registration, revocation, and termination policy
Role-based access control policy
PKI management policy
Service provider trust policy
Data encryption and signature verification policy
Service provider assessment policy
Service audit and traceability policy
Proactive and reactive risk-assessment policy
Information disclosure and sensitivity policy
Password selection and maintenance policy
Information classification and labeling policy
DMZ environment access policy
Application administration policy
Host and network administration policy
Application failure notice policy
Service failure, continuity, and recovery policy
These policies serve as principal guidelines during implementation and deployment. They also contribute to the regulatory compliance requirements specific to the type of business handled by the eRewards portal. Sarbanes-Oxley (SOX) is one example of those business-related regulatory requirements. Policy design is a tedious undertaking, and a full explanation could consume a chapter in and of itself. In this chapter, we are not able to drill down into the details of the above artifacts. The sample artifacts for policy design are available at http://www.coresecuritypatterns.com. Refer to Chapter 8, "The Alchemy of Security Design: Methodology, Patterns, and Reality Checks," for more information about policy design.
Factor Analysis
To begin construction of a security design, we must perform a factor analysis to identify the current system infrastructure-specific constraints and requirements. These constraints and requirements will drive the design decisions as we move forward. Forthcoming activities such as detailed security design and implementation will be based heavily on the analysis done here.
Infrastructure
Target Platform: We need to establish the criteria for host environment selection to provide a secure and reliable environment for running our Web portal applications. The key aspects to consider include operating systems, compatibility with applications/technologies, maintainability, security features, and deployment requirements. We need to identify the known security risks, preconditions, and vulnerabilities associated with the selected platform, OS, and software. To host our Web portal, we choose Trusted Solaris 8 as the OS environment for the bastion hosts and for all the target hosts running the application server instances.

Number of Access Points: We anticipate that our peak usage for the upcoming year will be around 1 million users from different geographic locations and time zones. So we need to facilitate a dedicated infrastructure to support different time zones. The access points should be geographically separated but colocated in the same network. The host infrastructure should provide fault-tolerance and failover capabilities in case of a service failure at one location.

Network Perimeter Security: To design the network security and control inter-tier traffic flows, we need to identify the routing/firewall appliances, multilayer Ethernet switches, and load-balancing devices. We choose to separate the Web tier and application server tier with a separate firewall. We choose to adopt a horizontally scalable solution including multiple server platforms running at different locations, hosting many instances of the Web server in the Web tier and multiple server instances in the application server tier. The network design is composed of segregated networks, implemented physically using VLANs configured by network switches. We choose to use Foundry switches and Netscreen firewall devices. The internal network uses the 10.0.0.0 private IP space for better security and portability. Although several networks reside on a single active core switch, network traffic is segregated and secured using static routes, access control lists (ACLs), and VLANs. The trust zones are created using a Netscreen firewall and map directly to the VLANs. The Netscreen firewall performs the Layer 3 routing. This configuration directs all inter-tier traffic through the firewall, providing firewall protection between each service tier.
Web Tier
Authentication Requirements: Based on anticipated clients, we will support form-based authentication for our Web users and client certificate-based mutual authentication for our trading partners. All hosts are identified and trusted using peer authentication. All unauthorized connections from untrusted hosts and IP addresses will be dropped.

Client Devices or Platforms Used: Our interactive clients will be Internet users using a variety of browsers on a variety of platforms. We therefore want to avoid any platform- or browser-specific code. We want to avoid or limit as much as possible the use of JavaScript. We also want to avoid requiring client-side mobile devices without browser support. Our trading partners will be connecting by way of server-to-server communication through trusted host authentication. Their client devices and platforms are not a concern because the communication is all by way of standardized protocols that are device- and platform-independent.

Web Agent: The Web servers running on the bastion hosts will make use of a Web agent to verify all incoming and outgoing Web requests for session tokens. Upon identification of fake or forged requests using a legitimate identity, the Web agent will log the request, initiate a request to the Identity Provider to revoke the user, and also send a notification to the administrator to audit those fake requests. This helps to identify and foil DoS and man-in-the-middle attacks on the Web tier.
Business Tier
Data Obfuscation: To protect sensitive information such as credit card numbers, we need to obfuscate appropriate data in the Business tier. This allows us to log, audit, and transmit data without revealing sensitive information.

Auditing: Due to the auditing requirements specified in the policy design, we need to audit our service transactions. Because the auditing requirements are expansive, this capability should be provided as part of the Business tier framework and not implemented on a service-by-service basis.
Web Services Tier
The exposed Web service endpoints communicate with service providers over the HTTP and HTTP/SSL protocols and the standard ports of the firewall. This alone lacks support for providing protection against XML-based message attacks and content-layer vulnerabilities such as buffer overflow, malicious data injection, and virus attachments. We will introduce Message Interceptor Gateway and Message Inspector pattern-based mechanisms for enforcing XML-based security mechanisms and access control policies at the exposed Web service endpoints and WSDL descriptions.
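As an illustration of where such a Message Inspector could hook in, the following sketch uses a JAX-RPC handler to reject inbound SOAP requests that carry no WS-Security header. Real inspection would also verify signatures, decrypt elements, and validate message content; only the handler contract and the OASIS wsse namespace below are standard.

```java
import java.util.Iterator;
import javax.xml.namespace.QName;
import javax.xml.rpc.handler.GenericHandler;
import javax.xml.rpc.handler.MessageContext;
import javax.xml.rpc.handler.soap.SOAPMessageContext;
import javax.xml.soap.Name;
import javax.xml.soap.SOAPException;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPHeaderElement;

public class MessageInspectorHandler extends GenericHandler {

    private static final String WSSE_NS =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    public QName[] getHeaders() {
        return new QName[] { new QName(WSSE_NS, "Security") };
    }

    public boolean handleRequest(MessageContext context) {
        try {
            SOAPMessageContext soapContext = (SOAPMessageContext) context;
            SOAPHeader header = soapContext.getMessage().getSOAPHeader();
            if (header != null) {
                Iterator elements = header.examineAllHeaderElements();
                while (elements.hasNext()) {
                    SOAPHeaderElement element = (SOAPHeaderElement) elements.next();
                    Name name = element.getElementName();
                    if (WSSE_NS.equals(name.getURI())
                            && "Security".equals(name.getLocalName())) {
                        return true;   // header present; continue down the handler chain
                    }
                }
            }
            return false;              // fail closed: no WS-Security header
        } catch (SOAPException e) {
            return false;              // fail closed on malformed messages
        }
    }
}
```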
Security Infrastructure
Based on the identified system requirements, we will evolve a tentative security infrastructure encompassing the application services, hosts, and network topology. Figure 14-8 shows a logical representation of the security infrastructure setup for our Web portal.
Tier Analysis
A tier analysis becomes necessary because there are tier-specific factors and issues that will influence our design implementation. This analysis will then serve as the final analysis step in our design process.
Web Tier
The decision was made early on to use an external Identity Provider for identity management and authentication. Because authentication will be done externally, we will need to secure the communication channel with the external Identity Provider using a Secure Pipe pattern. The portal will also need to collect the user credentials (username and password) from the client. To do so, we will need to use the Secure Pipe there as well in order to prevent the password from being sniffed. We will implement the Secure Pipe using SSL hardware accelerator devices installed on the bastion hosts that run the Web servers. This will provide us with better SSL performance. We chose the Authentication Enforcer pattern to represent the form-based authentication process in the Web tier. In our threat profile modeling, we determined that a hacker may try to guess a user's password. To thwart such attempts, we will configure our external Identity Provider to enforce strong password policies with minimum lengths and a mixture of alphanumeric characters that make guessing impractical. We will also mandate account lockout after a certain number of incorrect attempts. One of the business requirements of the application is to provide the user with a form for reward selection. To prevent attackers from crashing the system by sending junk data in the request, we will use an Intercepting Validator pattern to scrub the data when it is received. We will also use a Secure Logger pattern to securely log all incoming requests for security monitoring and auditing. This will allow us to detect malicious activity and take preventive measures.
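A minimal sketch of the whitelist-style Intercepting Validator follows; the parameter names and patterns are illustrative only. Each expected parameter is matched against a pattern describing legitimate input, so script tags, SQL metacharacters, and junk payloads are rejected outright rather than repaired.

```java
import java.util.regex.Pattern;
import javax.servlet.http.HttpServletRequest;

public class InterceptingValidator {

    // Whitelist patterns for the request parameters we expect (illustrative).
    private static final Pattern MEMBER_ID = Pattern.compile("[A-Za-z0-9]{1,16}");
    private static final Pattern QUANTITY = Pattern.compile("[0-9]{1,3}");

    public boolean isValid(HttpServletRequest request) {
        return matches(request.getParameter("memberId"), MEMBER_ID)
            && matches(request.getParameter("quantity"), QUANTITY);
    }

    private static boolean matches(String value, Pattern allowed) {
        // A missing parameter is acceptable; a present but malformed one is not.
        return value == null || allowed.matcher(value).matches();
    }
}
```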
Business Tier
On the Business tier, we will address auditing and data obfuscation. We have identified the Audit Interceptor and Obfuscated Transfer Object patterns to address these factors. On the Business tier, we are only concerned with business-level auditing. We will not have sufficient insight into the incoming requests to do much security auditing. We can audit some business-level security events such as profile modification, but in general, security auditing will not be necessary on the Business tier. We will use an Intercepting Session Façade strategy that provides an easy way to incorporate auditing into our Business tier without impacting our business object developers. This will reduce risk and provide a means to add or modify auditing requirements in parallel with other development. We also must address data obfuscation on the Business tier. On this tier, that means obscuring application-layer data sent between business services. Our Business tier resides in a trusted environment, so we are not going to address securing data within the application, only obscuring sensitive information written to logs or sent from our Business tier outside our environment to our trading partners. We will use two strategies for data obfuscation. Internally, we will use the Obfuscated Transfer Object pattern. Because this is a protected tier and we are not concerned with host-level intrusions for this issue, we will use the Masked List strategy, which provides data obfuscation for sensitive data written to a log or otherwise output.
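The following sketch shows one way the Obfuscated Transfer Object with a Masked List strategy might look: fields named in the masked list are masked whenever the object is rendered for logging, while getters still return real values to trusted business code. The field names and masking rule are assumptions for illustration.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

public class ObfuscatedTransferObject implements Serializable {

    private final Map values = new HashMap();
    private final Set maskedFields = new HashSet();

    public void put(String field, String value) { values.put(field, value); }
    public String get(String field) { return (String) values.get(field); }
    public void mask(String field) { maskedFields.add(field); }

    // Any rendering of the object (logs, audit trails) goes through toString,
    // so masked fields can never leak through that path.
    public String toString() {
        StringBuffer out = new StringBuffer();
        for (Iterator it = values.entrySet().iterator(); it.hasNext();) {
            Map.Entry entry = (Map.Entry) it.next();
            String value = (String) entry.getValue();
            if (maskedFields.contains(entry.getKey())) {
                value = maskValue(value);
            }
            out.append(entry.getKey()).append('=').append(value).append(' ');
        }
        return out.toString();
    }

    // Show only the last four characters, e.g. a card number logs as ************1234.
    private static String maskValue(String value) {
        if (value == null || value.length() <= 4) return "****";
        StringBuffer masked = new StringBuffer();
        for (int i = 0; i < value.length() - 4; i++) masked.append('*');
        return masked.append(value.substring(value.length() - 4)).toString();
    }
}
```

So a transfer object created with put("cardNumber", ...) and mask("cardNumber") logs its card number in masked form, while business code reading get("cardNumber") is unaffected.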
Identity Tier
We elected to use an external Identity Provider for managing user identities and performing authentication and authorization for the Web portal and the participating trusted service providers. To meet our identity-management requirements and avoid the common pitfall of vendor lock-in, we will choose a vendor that provides Liberty II protocol-based single sign-on and global logout mechanisms. Through the use of SAML assertions and a Liberty II protocol, we will deliver a vendor-neutral identity infrastructure and provide an industry-standard interface for identity federation, enabling SSO with service providers.
Trust Model
With the use of a trusted external Identity Provider, our trust modeling becomes quite straightforward. The trust model in this case is simply based on the vendor's product implementation, which allows establishing trusted relationships between the Web portal and the service providers. The Web portal and the service providers trust the SAML assertions issued by the external Identity Provider for authentication and authorization decisions. The portal and service providers make use of Liberty-enabled agents to communicate with the Identity Provider.
Threat Profiling
For this case study, we will not perform exhaustive threat profiling. We will take a simple attack tree and look at two branches. Based on our use case scenario, we will assume that the goal (that is, the root node of the tree) is to gain an unearned reward from our partner service. One branch of the attack tree impacts the Web tier and the other impacts the Web Services tier. The first branch involves an attacker trying to gain access to a legitimate user's account. From there, the attacker can modify the user's address information and order rewards for that user that will get shipped to the attacker. To do this, the attacker can use two approaches:
Guess the user's password.
Use a network-based packet-sniffing tool to obtain the user's password.
The second branch deals with an attacker trying to forge a Web service request to the service provider. The two nodes under this goal in the branch are:
Spoof a message from scratch.
Alter a legitimate message en route.
Figure 14-10 is a diagram of the attack tree based on our modeling of this simple profile.
Security Design
The first step in our security design is to flesh out our patterns based on the analyses. As part of the architecture, we identified a set of initial security patterns to be used. We can now go back and validate that those patterns fit, or choose others as necessary. We will then begin data modeling and create our business and data access objects and services.
The following section describes the list of business data objects that we will use in the application design and the related data classes. Because this chapter is a case study of the security design, not the functional application design, this section will discuss the security implications of these data objects and classes only.
Data Class
Figure 14-11 depicts a class diagram of the classes represented in the eRewards portal application. The controller servlet uses the order management servlet, the membership servlet, and the catalog servlet. Each of these servlets has its corresponding helper class. Some of the methods or attributes are implementation-specific, and some are added to support specific security requirements. For example, the controller servlet has two methods that are specific to supporting single sign-on: redirectLogin (used in the user login use case) and verifySSOToken (used for verifying the security token issued by the Identity Provider for single sign-on).
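For illustration, the two SSO-specific controller methods might be sketched as follows. LibertyAgent stands in for the vendor's Liberty-enabled agent API, and its methods (getLoginUrl, validateToken) and the ssoToken parameter name are placeholders, not part of any standard.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ControllerServlet extends HttpServlet {

    private final LibertyAgent agent = new LibertyAgent();

    // Send an unauthenticated user to the external Identity Provider's login page.
    protected void redirectLogin(HttpServletResponse response) throws IOException {
        response.sendRedirect(agent.getLoginUrl());
    }

    // Ask the Liberty agent to validate the single sign-on token that the
    // Identity Provider attached to the request.
    protected boolean verifySSOToken(HttpServletRequest request) {
        String token = request.getParameter("ssoToken");   // illustrative parameter name
        return token != null && agent.validateToken(token);
    }
}
```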
Service Design
For our service design, we have elected to focus on four major services:
User Login Service
Catalog Service
Order Management Service
Order Fulfillment Service
These services represent the main services within the application. We will not address every possible service for the case study because there may be many auxiliary services in a good service-oriented architecture. These are the main services that will allow us to demonstrate the security process sufficiently.
User Login Service
SecureBaseAction processes the login request and invokes the user authentication service using AuthenticationEnforcer. AuthenticationEnforcer prompts the Client for user credentials. AuthenticationEnforcer sends the user credentials to LibertyAgent for authentication. LibertyAgent sends the user credentials to IdentityProvider for authentication.
Upon successful (or unsuccessful) authentication, IdentityProvider returns a status code to the service requester. LibertyAgent passes the status code down to ControllerServlet, which responds to the Client.
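A minimal AuthenticationEnforcer matching this sequence might look as follows; LibertyAgent and Credentials are assumed application classes, and the integer status code mirrors the code returned by the Identity Provider in the sequence above.

```java
import java.util.Arrays;

public class AuthenticationEnforcer {

    private final LibertyAgent agent = new LibertyAgent();

    // Collects the credentials and delegates the authentication decision,
    // mirroring the sequence described above.
    public int authenticate(String username, char[] password) {
        Credentials credentials = new Credentials(username, password);
        int statusCode = agent.authenticate(credentials);  // forwarded on to the Identity Provider
        Arrays.fill(password, '\0');                       // do not keep the password in memory
        return statusCode;                                 // passed back to the ControllerServlet
    }
}
```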
Catalog Service
The user browses the online catalog to select product items or merchandise for which rewards will be redeemed. The online catalog aggregates merchandise information from various sources, including external partners and service providers. It is essential that the Web servers and the application servers are secure and that the security service introduced here is able to protect the client-to-server or server-to-server session and the business transactions associated with the catalog service. In other words, the security patterns here should be able to support authorization, traceability, data privacy or confidentiality, availability, data integrity, and non-repudiation. Figure 14-13a and Figure 14-13b depict the detailed process of how security is used to protect the user while invoking the catalog service. Client refers to the subscriber who wants to sign on to the eRewards portal.
Client sends a request to view the product catalog from the Web portal upon successful authentication and authorization.
FrontController processes the request and dispatches Client to the Catalog page.
The Catalog invokes CatalogAction to retrieve the data. CatalogAction first delegates to the SecureBaseAction for security processing. The SecureBaseAction invokes the verifySSOToken method on the AuthenticationEnforcer. The AuthenticationEnforcer verifies the SSOToken to assure single sign-on. The SecureBaseAction then invokes SecureLogger to log the request. The SecureLogger writes the message out to a flat-file log. The CatalogAction then invokes the SecureSessionFaçade on the Business tier to request the product details from the CatalogService. The SecureSessionFaçade invokes the getCatalog method on the CatalogBO. The CatalogBO, in turn, sends the request message to the MessageInterceptorGateway. The MessageInterceptorGateway makes use of the MessageInspector to verify the message for authentication and authorization assertions and message-level security mechanisms. The MessageInspector authenticates and authorizes the request using the IdentityProvider.
Once validated, the MessageInterceptorGateway then invokes getProductDetails on the CatalogService. The CatalogService retrieves and returns the product details. The product details are returned to the CatalogAction. The Catalog then gets the data from the CatalogAction and displays it to the user.
Order Management Service
Client initiates placing an order. ControllerServlet places an order with OrderManagementServlet. OrderManagementServlet forwards the request to SecureSessionFaçade. SecureSessionFaçade uses OrderManagementHandler to invoke the order management service. OrderManagementHandler initiates the remote order management service from OrderManagementService. OrderManagementService gets the membership record status from MembershipServlet. OrderManagementService verifies the membership award status for eligibility before processing the order placement.
Upon successful (or unsuccessful) verification, MembershipServlet returns status, membership award balance, and personal information (for example, delivery address).
OrderManagementServlet processes the order if the status returned is positive. ControllerServlet logs the order management service request in the audit log.
Figure 14-15 depicts the details of processing the membership award record.
OrderManagementService gets the membership record status from MembershipServlet. MembershipServlet forwards the request to SecureSessionFaçade. SecureSessionFaçade delegates the request to MembershipHandler to initiate the remote membership Web service. MembershipHandler gets the redemption points from MembershipService. MembershipService returns the status and membership record to the service requester. MembershipHandler passes the record down to OrderManagementService.
Order Fulfillment Service
Client initiates the order fulfillment service after confirming the order details with the recipient and the delivery address. ControllerServlet initiates the order fulfillment service from OrderManagementService. OrderManagementService sends the order fulfillment message via document-style SOAP messaging to SecureMessageRouter. SecureMessageRouter determines the message destination endpoint and its message-level security mechanisms. SecureMessageRouter routes order fulfillment messages to PartnerMerchantService. PartnerMerchantService verifies the message routing information, credentials, and transaction type. MessageInspector examines and verifies the message for data integrity and non-repudiation.
Upon completion of security checking, PartnerMerchantService updates the back-end system with order fulfillment details.
PartnerMerchantService sends an acknowledgement to SecureMessageRouter. SecureMessageRouter logs the order fulfillment events to the audit log for traceability.
Upon completion of order fulfillment processing, SecureMessageRouter returns status to the service requester, including ControllerServlet and Client. These are our core services. Now that we have created the sequence diagrams and fleshed out the design details, we can turn them over to the developers for implementation.
Development
Based on the security architecture and design, we now have to implement these elements as components and integrate them into the Web portal application. Because the scope of this chapter is limited to delivering an end-to-end security architecture, we do not delve into the implementation details of the case study.
Testing
After successful unit and integration testing as part of the development cycle, the testers should already have completed their test cases and been exposed to development builds of the code. Now the testers will execute all functional and nonfunctional test cases. In particular, we will pay close attention to the security testing team. This team has the responsibility for performing security testing of the system, both white box and black box testing. Like our developers, these security testers will be hand-picked, dedicated to the security testing, and appropriately trained and mentored.
Our black box testing revealed an input parameter attack on our rewards payment page. This vulnerability opens up the application to a cross-site scripting attack. It was sent back to the designer, who analyzed the risk and decided to have one of the security developers make a fix. The security developer was able to quickly implement a fix by updating the Intercepting Validator's validation rules. Once the fix was unit tested, it was sent back to testing. Further testing revealed no significant holes, so the code was turned over for deployment.
Deployment
We have had our operations staff prepping the environment, setting up policies, procedures, and products. They have been testing development builds and are starting to track change management requests. We have set up our management and monitoring products and have hired an external security consulting firm to perform a suite of penetration tests on our hosting environment. We have also applied all of the best practices mentioned throughout the book related to the environment. Everything is now locked down and ready for production support. We can now deploy our application to production.
Configuration
A critical step in securing the environment for production is configuration. Configuration management is always a tedious and time-consuming task. It applies to all aspects of the environment, just like security. It is also the basis of a security infrastructure, because poor configuration is blamed for a large number of security holes. A poorly configured router or firewall is more of a security problem than not having one at all, because it provides a false sense of security. Intrusion Detection Systems (IDSs) are one way of managing host configurations. They do not necessarily provide configuration management, but they are good at reporting when a file in a file system has been added, changed, or deleted. Most host-based attacks involve changes to one or more files, either to open up additional holes or to compromise the system in a way that the initial penetration is unable to achieve. An IDS can detect such a change and notify an administrator, who can then take corrective action.
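File-change detection of the kind a host-based IDS performs can be illustrated with a baseline-digest comparison: record a cryptographic digest of each monitored file at a known-good point, then periodically recompute and compare. The sketch below shows only the core check; a real IDS also protects the baseline itself from tampering.

```java
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;

public class FileIntegrityChecker {

    // Compute a digest over the file's contents.
    public static byte[] digest(File file) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-1");
        InputStream in = new BufferedInputStream(new FileInputStream(file));
        try {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                sha.update(buffer, 0, read);
            }
        } finally {
            in.close();
        }
        return sha.digest();
    }

    // True if the file still matches its recorded known-good baseline.
    public static boolean unchanged(File file, byte[] baseline) throws Exception {
        return MessageDigest.isEqual(baseline, digest(file));
    }
}
```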
Monitoring
We are also going to use our IDS to monitor our network for malicious activity. The IDS can detect an attack in progress and notify our administrators. Depending on the IDS, it may also be able to react to the attack and take proactive action such as blocking the IP address of the attacker. In addition to network monitoring, we need to monitor our hosts and the application itself. For the host, we need to monitor log files, but we also need to monitor resource consumption. An application can often be taken down by the most mundane of factors, such as running out of hard disk space. There are many enterprise management tools. Many of these tools provide a range of sophisticated monitoring capabilities with the ability to set alarms and thresholds and to provide a number of notification mechanisms. Our eRewards portal application itself will be monitored using the JMX interfaces of the J2EE platform. We built in the ability to declaratively define the attributes and operations we want to monitor and to set alarms and notification options on the business components through the MBeans framework provided by the J2EE platform. This allows us to monitor various aspects, such as security, within our application.
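As an illustration of the MBean-based approach, the following simplified standard MBean exposes a failed-login counter and raises a JMX notification when a threshold is crossed. The notification type, attribute names, and threshold are invented for this example; a real deployment would register the MBean with the server's MBeanServer under an application-specific ObjectName.

```java
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

interface LoginMonitorMBean {
    int getFailedLogins();
    int getAlarmThreshold();
    void setAlarmThreshold(int threshold);
}

public class LoginMonitor extends NotificationBroadcasterSupport
        implements LoginMonitorMBean {

    private int failedLogins;
    private int alarmThreshold = 5;
    private long sequence;

    // Called by the authentication code each time a login attempt fails.
    public synchronized void recordFailedLogin() {
        failedLogins++;
        if (failedLogins >= alarmThreshold) {
            sendNotification(new Notification("erewards.security.loginAlarm",
                this, ++sequence, "Failed login threshold exceeded"));
        }
    }

    public synchronized int getFailedLogins() { return failedLogins; }
    public synchronized int getAlarmThreshold() { return alarmThreshold; }
    public synchronized void setAlarmThreshold(int threshold) { alarmThreshold = threshold; }
}
```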
Auditing
The last step in a successful deployment of our portal is ongoing audits of the system. Both financial and security audits are part of our business requirements and provide a sound means of ensuring that security requirements are being met throughout the lifetime of the application. For our auditing, we have brought in a security auditor to provide auditing of the network, host system, and application-level security mechanisms and infrastructure. This will provide us with the end-to-end security architecture verification that ensures we are adequately protected. We can now rest assured that our application is sufficiently secure and will remain so throughout its life cycle. While there is no guarantee that we are protected from all attacks, we are certain we have taken all the necessary steps to provide the level of security defined by our business requirements and all known threats and vulnerabilities.
Summary
In this chapter, we put our security patterns, best practices, and strategies to the test. We began by looking at a real-world scenario. We derived the requirements and then we systematically put our new-found skills to use, creating a secure Web portal application. It wasn't easy, but we made it through. You now have some experience in architecting, designing, and implementing a secure Web portal application. We also understand how and when to apply the core security patterns in traditional J2EE Web applications as well as Web services.
We have also been introduced to using the Security Disciplines in the software development life cycle described in Chapter 8, "The Alchemy of Security Design: Methodology, Patterns, and Reality Checks." These disciplines define activities that need to take place within the software development process to ensure that security is baked in, monitored, and kept up-to-date within the system throughout its lifetime.
Lessons Learned
There were several lessons learned as we worked our way through the case study. Overall, we learned that there are no silver bullets when it comes to security. Security is a holistic process that must begin at the start of the software development process and continue through the life cycle until the application is finally retired. It must be addressed at every stage and by a number of different roles. We also learned that there are a number of factors that go into determining the patterns and strategies for securing an application. A good security design takes all the factors into account and derives the necessary security requirements from the business needs and other nonfunctional system requirements. Decisions and trade-offs must be made at every step of the process. We do this as part of the risk analysis and mitigation. The security design must also hold true across the tiers of the system, verified through factor analysis and tier analysis. Often, as is the case in our scenario, different patterns or strategies are used in different tiers, depending on external factors. In the Web tier, we chose to implement our own form-based authentication mechanism using the Authentication Enforcer pattern. In the Web Services tier, we chose to perform authentication and authorization using SAML assertions for message-level security and SSL-based mutual client-certificate authentication for transport-level security.
Pitfalls
We avoided many of the common pitfalls by adopting the security patterns and following the best practices mentioned earlier in this book. We chose to use a methodology that allowed us to address security throughout the development life cycle. This prevented us from being constrained by the architecture or design at the end, as often happens in real-world applications that fail to identify security requirements and incorporate them from the beginning. We also came close to falling into the pit of vendor lock-in. Our decision was to use a standards-based external identity provider and adopt standards-based mechanisms to demonstrate interoperability. The major difference is that in this case there were no vendor-specific APIs that would lock us in to a particular identity provider vendor. We have therefore employed the best practice of buy versus build without the drawback of being locked in to a vendor-specific identity provider. Throughout the case study, we followed a patterns-driven design process that gave us a highly reusable solution approach and kept us from generating a vast number of unneeded security requirements from our business requirements.
Conclusion
This case study has provided us with a good example of the many factors that go into securing a real-world application. It has taught us some lessons, revealed some pitfalls, and provided a detailed example of how to use some of the core security patterns. If you have made it this far, then congratulations are in order. You are now ready to start putting your experience to the test in the real world. When it comes to security, the devil is in the details. As you move forward and begin to bake security into your development process, remember the process itself and pay attention to the details. You simply can't take shortcuts. A good security implementation requires that you make it a part of your development culture and follow through on it thoroughly.
References
[CSP] Chris Steel, Ramesh Nagappan, and Ray Lai. Core Security Patterns: Best Practices and Strategies for J2EE, Web Services and Identity Management. Prentice Hall/Sun Microsystems Press, 2005.
[CJP2] Deepak Alur, John Crupi, and Dan Malks. Core J2EE Patterns: Best Practices and Design Strategies, Second Edition. Prentice Hall, 2003.
Chapter 15. Secure Personal Identification Strategies Using Smart Cards and Biometrics
Topics in This Chapter
Physical and Logical Access Control
Enabling Technologies
Smart Card-Based Identification and Authentication
Biometric Identification and Authentication
Multi-factor Authentication Using Smart Cards and Biometrics
Best Practices and Pitfalls

Secure personal identification enhances the confidence, accuracy, and reliability of verifying a human identity and its eligibility for physical or logical access to security-sensitive resources and restricted areas. Secure personal identification and verification technologies enable a high degree of access protection for restricted locations, network infrastructures, IDs, banking, financial transactions, law enforcement, healthcare, and social organizations' services. These resources include computer systems, applications, data, documents, business processes, ATM machines, personal devices, and doors to restricted locations. Traditional identification and verification mechanisms often verify only a person's knowledge of information such as passwords and PINs (magnetic stripe cards). These mechanisms are highly susceptible to fraud, because they can be forgotten, stolen, predicted, forged, manipulated, impersonated, and hacked while being used with trusted resources. Historically, it has been proven that passwords and PINs are inefficient and inaccurate when a trusted resource requires physical verification of an identity. Trustworthy personal identification requires verification of an individual beyond username/password and PINs; it requires a strong authentication similar to face-to-face interaction between the person and the verifying agent. In Chapter 1, we discussed the importance of smart card and biometric technologies along with their increasing rate of use in the IT industry for the prevention of security issues related to personal identification and authentication. Adopting secure personal identification technologies using smart cards and biometrics facilitates a high degree of logical and physical verification and enables a stronger authentication process. Using secure personal identification technologies helps to identify and thwart identity fraud and impersonation crimes, in which an individual wrongfully obtains another individual's identity credentials and claims to be that other person. In secure personal identification, smart cards and biometrics provide a means of verifying an identity by verifying the person's proof of possession and proof of the person's physiological and behavioral properties, respectively. This chapter explores the concepts, technologies, architectural strategies, and best practices for implementing secure personal identification and authentication. We discuss using smart cards and biometrics as well as enabling multi-factor authentication with a combination of both methods. In particular, we will study how to incorporate smart card and biometrics-based authentication in J2EE-based enterprise applications and in UNIX and Windows environments.
Enabling Technologies
Before we delve into the mechanisms and architectural strategies, it is quite important to understand the enabling technologies that contribute to implementing and incorporating smart card and biometric authentication mechanisms in an IT environment.
Global Platform
The Global Platform delivers standards for portable and interoperable smart card infrastructure solutions. It supports implementation on a wide range of systems, including card reader devices, PDAs, mobile phones, contactless chip technology, and infrared devices. The Global Platform-based Java Card delivers a hardware-neutral OS with compatibility and interoperability among smart cards, smart card readers, applications, devices, card personalization systems, and key management systems. Global Platform enables smart cards to run multiple applications as a multi-application enabled card. This helps a card holder use his or her card as an authentication token in order to gain access to privileged areas. The smart card can also be used for personal data storage of medical and financial information. For more information about Global Platform, refer to the Web site located at http://www.globalplatform.org/. The Global Platform Card Specification v2.1.1 is recognized by the ISO as supporting the ISO/IEC 7816 standard series for smart cards, specifically the ISO/IEC 7816 part 13 standard for application management in a multi-application environment. The Global Platform contribution was also supported by the U.S. International Committee for Information Technology Standards (INCITS) and the American National Standards Institute (ANSI).
PKCS#11
PKCS#11 is an RSA cryptographic token interface standard that defines an application programming interface (API) for performing cryptographic operations on hardware-based security devices, including smart cards. It provides device independence and resource sharing so that multiple applications can access a cryptographic token concurrently. Most operating systems and Web browsers provide support for integrating hardware-based cryptographic operations via PKCS#11 interfaces.
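As an illustration, on platforms where the J2SE 5.0 SunPKCS11 provider is supported, a PKCS#11-compliant smart card can be exposed to Java applications as a standard KeyStore. The following is a minimal sketch, assuming a hypothetical provider configuration file (pkcs11.cfg) that names the vendor-supplied PKCS#11 library; the library path and PIN are illustrative, not part of any real product.

// pkcs11.cfg (assumed contents):
//   name = MySmartCard
//   library = /usr/lib/pkcs11/vendor-p11.so

import java.security.KeyStore;
import java.security.Provider;
import java.security.Security;
import java.security.cert.Certificate;

public class Pkcs11KeyStoreDemo {
    public static void main(String[] args) throws Exception {
        // Register a PKCS#11 provider backed by the library named in pkcs11.cfg
        Provider p11 = new sun.security.pkcs11.SunPKCS11("pkcs11.cfg");
        Security.addProvider(p11);

        // Open the token as a keystore; the card PIN protects access to its keys
        KeyStore ks = KeyStore.getInstance("PKCS11", p11);
        ks.load(null, "1234".toCharArray()); // illustrative PIN

        // Enumerate the certificates stored on the card
        java.util.Enumeration<String> aliases = ks.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            Certificate cert = ks.getCertificate(alias);
            System.out.println(alias + ": " + (cert != null ? cert.getType() : "no certificate"));
        }
    }
}

Once the provider is registered, standard JCA/JCE operations such as signing can be delegated to the card without the private key ever leaving the token.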
PKCS#15
PKCS#15 is an RSA cryptographic token format standard that defines how keys are stored on smart cards, devices, and other conventional hardware tokens/IC cards. Most smart card-based national IDs (such as Belgium's eID, Finland's FINEID, Malaysia's MyKad, and Sweden's SEIS) conform to the PKCS#15 standard.
PC/SC Framework
PC/SC defines the architecture for integrating smart card readers and smart cards in a PC-based environment. PC/SC ensures interoperability and vendor independence among smart card readers, smart card products, PC applications, and card issuers. PC/SC also defines device-independent APIs and resource management so that multiple applications can share smart card devices, which has led to a common interface for using smart cards and readers. Most vendors provide PC/SC-compliant drivers to support their smart card infrastructure. For more information about the PC/SC Framework, refer to the PC/SC Workgroup specifications available at http://www.pcscworkgroup.com/specifications/overview.php.
OpenSC
OpenSC is an open source framework initiative that enables smart cards to support security operations. It provides a set of API libraries and tools for integrating smart card readers and accessing smart cards. The OpenSC framework focuses on cryptographic operations and facilitates smart card use in security applications such as mail encryption, authentication, and digital signatures. OpenSC implements the PKCS#11 API so that applications supporting this API can use it, on operating systems such as Linux, Windows, and Solaris, and in Web browsers and e-mail clients such as Mozilla, Firefox, and Thunderbird. OpenSC also implements the PKCS#15 standard. For more information on using the OpenSC framework, refer to the architecture and developer guide available at http://www.opensc.org/docs.php.
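Because OpenSC exposes a standard PKCS#11 module, it plugs into the same SunPKCS11 mechanism sketched in the PKCS#11 section; only the provider configuration file changes. A minimal configuration sketch, assuming the OpenSC library is installed at the path shown (the actual path varies by distribution):

name = OpenSC
library = /usr/lib/opensc-pkcs11.so

Pointing the SunPKCS11 provider at this file makes the certificates and keys on any OpenSC-supported card, including the PKCS#15-based national ID cards mentioned above, available through the standard PKCS11 KeyStore type.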
BioAPI
The BioAPI is a standardized API for developing personal identification applications that interface with biometric verification devices such as fingerprint scanners, facial recognition devices, iris and retina scanners, voice recognition systems, and so forth. It was developed by a consortium of industry vendors that support biometric technologies. BioAPI Version 1.1 [BioAPI] is an approved standard that is compliant with the Common Biometric Exchange File Format (CBEFF) and has also been accepted by the American National Standards Institute (ANSI). BioAPI facilitates the development and deployment of biometrics-based personal verification and authentication in a vendor-neutral way, with standardized interfaces, modular access to biometric matching algorithms, and support for heterogeneous platforms and operating systems. For more information about BioAPI, refer to the Web site of the BioAPI Consortium located at http://www.bioapi.org/.

To support biometric Match-on-the-Card and Match-off-the-Card requirements, the Java Card Forum Biometric Task Force and the NIST Biometric Consortium Working Group have developed a Java Card Biometric API specification that defines API mechanisms to facilitate integration of the Java Card API with biometric authentication. This specification provides the required biometric authentication functions, such as the enrollment, verification, and identification processes of a biometric service provider (BSP). For more information about the Java Card Biometric API, refer to the API specification located at http://www.javacardforum.org/Documents/JCFBioAPIV1A.pdf.
A sample PAM configuration entry that enables fingerprint-based biometric authentication for the login service looks like this:

login   auth   required   pam_biologin.so   bio_finger
The configuration fields are represented in the order of service name, facility name, control flag, module name, and module arguments. Any additional fields are interpreted as additional module arguments. Smart card and biometrics vendors provide PAM modules for integration with applications and UNIX environments. PAM also facilitates multifactor authentication, which allows combining smart cards and biometrics by storing biometric samples of the card holder on the smart card. During authentication, PAM acquires the biometric sample from the scanner and matches it against the value stored in the smart card, that is, the sample presented at enrollment, in order to allow or deny user access. For more information about PAM modules, refer to the Sun Web site at http://www.sun.com/software/solaris/pam.
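As a sketch of such a multifactor stack, the following pam.conf entries would require both modules to succeed for the login service; pam_smartcard.so is a hypothetical vendor-supplied module name, while pam_biologin.so follows the sample shown earlier:

login   auth   required   pam_smartcard.so
login   auth   required   pam_biologin.so   bio_finger

Because both entries use the required control flag, PAM evaluates the full stack and grants access only when the smart card verification and the biometric verification both succeed.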
JAAS is a Java-based API framework for implementing authentication and authorization mechanisms in Java applications. It implements a Java technology version of the standard PAM framework. JAAS allows J2EE applications to remain independent of underlying authentication technologies: security providers can be plugged in as JAAS LoginModules for use with J2EE application components without requiring modifications to the application itself. JAAS is commonly used for integrating authentication technologies such as RSA SecurID, Kerberos, smart cards, biometrics, and so forth. In a J2EE environment, JAAS LoginModules are usually configured as realms that map applications and their user roles to a specific authentication process. Realm configuration is vendor-specific; refer to the vendor documentation on how to configure realms using JAAS LoginModules in a J2EE application server. For more information about using the JAAS APIs and implementing JAAS login modules, refer to the section "Java Authentication and Authorization Service" in Chapter 4, "Java Extensible Security Architecture and APIs."
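For illustration, the following sketch shows how a smart card LoginModule could be declared in a JAAS configuration file and invoked from application code. The module class name (com.example.auth.SmartCardLoginModule) is a hypothetical placeholder, but the LoginContext calls are standard JAAS APIs.

// JAAS configuration file (assumed entry):
// SmartCardLogin {
//     com.example.auth.SmartCardLoginModule required debug=true;
// };

import java.io.IOException;
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class SmartCardLoginDemo {

    // Trivial handler that supplies the card PIN to the LoginModule;
    // a real handler would prompt the user or read a PIN pad.
    static class PinCallbackHandler implements CallbackHandler {
        public void handle(Callback[] callbacks)
                throws IOException, UnsupportedCallbackException {
            for (Callback cb : callbacks) {
                if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword("1234".toCharArray()); // illustrative PIN
                } else {
                    throw new UnsupportedCallbackException(cb);
                }
            }
        }
    }

    public static void main(String[] args) {
        try {
            // "SmartCardLogin" must match the entry name in the JAAS configuration file
            LoginContext lc = new LoginContext("SmartCardLogin", new PinCallbackHandler());
            lc.login(); // delegates to the configured LoginModule(s)

            Subject subject = lc.getSubject();
            System.out.println("Authenticated principals: " + subject.getPrincipals());

            lc.logout();
        } catch (LoginException le) {
            System.err.println("Authentication failed: " + le.getMessage());
        }
    }
}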
Logical Architecture
Figure 15-1 represents a logical architecture showing a smart card authentication-enabled application infrastructure involving J2EE applications and Solaris, Linux, and Windows environments.
Let's explore the logical architecture in terms of its infrastructure components and their roles in enabling smart card-based authentication.
Smart Cards
Smart cards provided by a card vendor include the smart card OS and card management utilities. Most cards have the capability to run multiple applications and to store personal data and biometric templates. It is also important that the card be PKCS#15-compliant so that it can store certificates and execute PKI operations on the card. Most smart card vendors provide support for the Global Platform, PC/SC, Java Card, and PKCS#11 interface specifications. The card must also provide an adequate amount of memory to meet the storage requirements of the application. The card issuer usually pre-installs the card with a Card Unique Identifier (CUID), a private key, and a certificate (an X.509 certificate with the public key). The user sets the PIN during enrollment.
J2EE Application Server: The J2EE application server hosts the application's Web and business components and integrates with back-end databases and Enterprise Information Systems (EIS). To enable smart card-based authentication, the J2EE platform requires an appropriate JAAS LoginModule that incorporates an authentication mechanism provided by a smart card authentication server.
Operational Model
Smart card-based security architecture has a lot in common with many authentication solutions. For a better understanding of the architecture and the relationships between the infrastructure components, we need to understand the different life-cycle operations managed by the architecture, such as card enrollment, authentication, and termination.
Figure 15-2. Sequence diagram for Smart Card authentication process (using Challenge-Response protocol)
The key participants and their activities involved in the authentication process are as follows:

1. The client inserts the smart card and attempts to access the J2EE application using a Web browser.
2. The J2EE application verifies the request using the JAAS LoginModule and then initiates authentication by forwarding the request to the smart card authentication server.
3. The authentication server creates a random string (the Challenge) and then requests the client to encrypt the Challenge using the client's private key.
4. The client opens the card using the PIN and then asks the card to encrypt the Challenge using the private key.
5. The encrypted Challenge and the CUID of the card are sent back to the authentication server as the Response.
6. The authentication server verifies the response by identifying the user's CUID in the directory server and uses the public key for the corresponding CUID to decrypt the encrypted response.
7. If the decrypted string matches the original Challenge, the authentication is considered successful.
8. Based on the authentication result, the JAAS LoginModule allows or denies access to the requested application.

In UNIX and Windows environments, the PAM and GINA modules, respectively, play the role of the JAAS LoginModule in the authentication process. The desktop login can be done using the smart card and the PIN. In desktop login, removing the card from the reader locks the window or logs the user out, depending upon the configuration.
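The server-side verification in steps 3 through 7 can be expressed with the standard java.security.Signature API; in practice, "encrypting the Challenge with the private key" is typically realized as a digital signature that the server verifies with the card's public key. A minimal sketch, with class and method names of our own choosing:

import java.security.PublicKey;
import java.security.SecureRandom;
import java.security.Signature;

public class ChallengeResponseVerifier {

    // Step 3: generate a random challenge to send to the client
    public static byte[] newChallenge() {
        byte[] challenge = new byte[20];
        new SecureRandom().nextBytes(challenge);
        return challenge;
    }

    // Steps 6-7: the card signed the challenge with its private key;
    // verify with the public key looked up by CUID in the directory server.
    public static boolean verify(byte[] challenge, byte[] response, PublicKey cardPublicKey)
            throws Exception {
        Signature sig = Signature.getInstance("SHA1withRSA");
        sig.initVerify(cardPublicKey);
        sig.update(challenge);
        return sig.verify(response);
    }
}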
Fingerprint Matching
A fingerprint consists of a series of furrows (shallow trenches) and ridges (crests) on the surface of a finger. The uniqueness of a fingerprint is determined by the patterns of ridge endings, bifurcations, divergences, and enclosures. These patterns are referred to as minutiae points, or typica (see Figure 15-4). A typical fingerprint shows from 30 to 40 minutiae points, and a typical fingerprint template ranges in size from 250 bytes to 1.2 KB.
Fingerprint matching is usually done using one of two common approaches: minutiae-based or correlation-based. In the minutiae-based approach, a fingerprint is identified by its minutiae points, and the relative placement of those points on the finger is mapped (see Figure 15-4). In the correlation-based approach, matching is performed on the entire representation of the fingerprint, aligned at selected location points. The minutiae-based approach is the one most commonly adopted by fingerprint scanner vendors.
The ability-to-verify (ATV) probability expresses the likelihood that a legitimate user can be verified and granted access to a restricted resource. The lower the ATV, the greater the accuracy and reliability of the authentication; a higher ATV results in a higher FMR, which decreases the reliability of the verification. ATV can be computed as follows:
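A commonly cited formulation, reconstructed here from the standard biometric accuracy measures (the failure-to-enroll rate, FTE, and the false non-match rate, FNMR), is:

ATV = (1 - FTE) x (1 - FNMR)

For example, a system with a 2% failure-to-enroll rate (FTE = 0.02) and a 3% false non-match rate (FNMR = 0.03) yields ATV = 0.98 x 0.97, or roughly 0.95, meaning that about 95% of legitimate verification attempts can be completed.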
Logical Architecture
Figure 15-5 represents a logical architecture showing a fingerprint-based biometric authentication infrastructure involving J2EE applications and Solaris, Linux, and Windows environments.
Let's explore the logical architecture in terms of its infrastructure components and their roles in enabling fingerprint technology-based authentication.
Fingerprint Scanner
A fingerprint scanner device scans the surface of a finger and identifies the patterns of the fingerprint in terms of valleys, ridges, ridge endings, bifurcations, divergences, and enclosures. Using a device driver, the fingerprint scanner integrates with a computer by way of USB, Ethernet, or serial interfaces. The scanned fingerprint image is converted to a biometric template as part of enrolling a person's biometric profile, verifying against an existing template, or searching for a match against other templates. Because fingers can be soft, dry, hard, dirty, oily, or worn, it is important that the scanner be able to scan any fingerprint with a high degree of accuracy. A variety of devices can acquire a fingerprint image; the most popular are optical scanners and capacitance scanners.

Optical Scanner: Optical scanners are based on mechanisms similar to digital camera technology, making use of a charge-coupled device (CCD). The CCD is an array of light-sensitive photosites that generate an electrical signal in response to light. The photosites record the pixels once light is flashed on the surface of the finger, and the pixels represent the digital image of the scanned surface. The scanner also checks the captured image for adequate image definition; if the image is not dark enough, the scanner rejects it and attempts to scan again.

Capacitance Scanner: Capacitance-based scanners are sensors built from capacitors that use electrical current. The capacitors make use of two conductor plates insulated from each other and connected to an electrical circuit built around an inverting operational amplifier. As with any amplifier, the inverting amplifier alters the supplied current based on fluctuations in another current. When a finger is placed on the scanner, the surface of the finger acts as a third capacitor plate, insulated by a pocket of air. Capacitance-based scanners capture a fingerprint image as ridges and valleys that affect the electrical current: only the ridges make contact with the scanner surface, so the capacitors under them have a higher capacitance, while the capacitance is lower in the valleys because of the air pockets. Based on this difference, an image is electrically acquired.

Some fingerprint scanners provide an Ethernet interface that allows an IP address to be assigned to them. Using Ethernet-interface-based scanners helps to identify the IP address and to verify the initiating host machine and its domain, which in turn helps determine whether the user at that host machine is privileged to access the system. In addition, the scanner communication can be secured using the SSL/TLS protocol with the certificate and keys stored in the scanner itself.
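To illustrate the last point, the authentication-server side of such a channel can be built with standard JSSE classes. A minimal sketch, assuming the server's keystore and a truststore holding the scanner certificates are supplied through the standard javax.net.ssl system properties; the port number is illustrative:

import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;
import javax.net.ssl.SSLSocket;

public class ScannerChannelServer {
    public static void main(String[] args) throws Exception {
        // Keystore/truststore supplied externally, e.g.:
        //   -Djavax.net.ssl.keyStore=server.jks -Djavax.net.ssl.keyStorePassword=...
        //   -Djavax.net.ssl.trustStore=scanners.jks -Djavax.net.ssl.trustStorePassword=...
        SSLServerSocketFactory factory =
                (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        SSLServerSocket server = (SSLServerSocket) factory.createServerSocket(9443);

        // Require the scanner to present its stored certificate (mutual authentication)
        server.setNeedClientAuth(true);

        while (true) {
            SSLSocket scanner = (SSLSocket) server.accept();
            // Read the protected biometric sample from scanner.getInputStream() ...
            scanner.close();
        }
    }
}

Requiring client authentication on the socket is what ties the channel to the certificate and keys provisioned in the scanner, thwarting hosts that attempt to mimic a scanner.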
To support the Windows environment, most biometric vendors provide GINA modules that allow Windows login using biometric authentication. Replacing the default Microsoft GINA with a biometric authentication-based GINA library enables biometric authentication in a Windows environment.
Operational Model
The operational model of biometrics-enabled security architecture has a lot in common with smart card authentication solutions. Let's take a look at the different life-cycle operations such as biometric enrollment, authentication, and termination.
Figure 15-6. Biometric enrollment process using BiObex (Courtesy: AC Technology, Inc.)
During enrollment, the system associates the biometric samples of a person (such as fingerprint images or face geometry) with the other personal information stored in the directory. Multiple samples may be acquired based on the biometric technology in use (for example, for fingerprint-based authentication, usually all fingers from both hands will be acquired). The acquired biometric samples are processed using relevant algorithms and then converted to a template format (referred to as a reference template). The enrollment system securely stores the templates in a directory. Once complete, the enrollment officer assigns the user to the privileged machines, scanners, and applications, specifying biometric authentication for that user. The enrollment officer also activates the user's access control privileges, roles, and the authorized actions specific to the user's business responsibilities. This completes the user enrollment process with a biometric-enabled authentication system.

To terminate the user, the enrollment officer deactivates the user access by disabling the user account, scanner entry, and associated privileges so that no further authentication can be done using the assigned scanner (for example, the fingerprint scanner's submission of images will no longer be accepted). The user's privileges can also be temporarily revoked if the user's biometric samples do not match after multiple attempts to obtain a match are made. A revoked user account cannot be accessed without the intervention of an enrollment officer.
Figure 15-7 represents the sequence diagram for the biometric authentication process in a J2EE environment and identifies the key participants and their activities. The key steps involved in the process are as follows:

1. The client requests access to a protected J2EE application using a Web browser.
2. The J2EE application verifies the request using the JAAS LoginModule and then initiates authentication by forwarding the request to the biometric authentication server.
3. The authentication server initiates a biometric callback.
4. The client provides the biometric sample via the assigned biometric scanner and submits it for authentication.
5. The authentication server verifies the biometric sample by matching it against the reference template acquired during the enrollment process.
6. If the matching score exceeds the required threshold, the authentication is considered successful.
7. Based on the authentication result, the JAAS LoginModule allows or denies access to the requested application.

In UNIX and Windows environments, the PAM and GINA modules, respectively, play the role of the JAAS LoginModule in the authentication process.
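Step 3 maps naturally onto the JAAS callback mechanism: a vendor LoginModule can define a custom Callback type to request a biometric sample from the client. A minimal sketch, in which BiometricCallback, ScannerCallbackHandler, and acquireFromScanner are all hypothetical names rather than any vendor's actual API:

import java.io.IOException;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;

// Hypothetical custom callback carrying a captured biometric sample;
// defining custom Callback types is a standard JAAS extension point.
class BiometricCallback implements Callback {
    private byte[] sample;

    public void setSample(byte[] sample) { this.sample = sample; }
    public byte[] getSample() { return sample; }
}

class ScannerCallbackHandler implements CallbackHandler {
    public void handle(Callback[] callbacks)
            throws IOException, UnsupportedCallbackException {
        for (Callback cb : callbacks) {
            if (cb instanceof BiometricCallback) {
                // acquireFromScanner() is a placeholder for vendor SDK integration
                ((BiometricCallback) cb).setSample(acquireFromScanner());
            } else {
                throw new UnsupportedCallbackException(cb);
            }
        }
    }

    private byte[] acquireFromScanner() {
        // Vendor-specific capture logic would go here
        return new byte[0];
    }
}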
Let's assume that a biometric authentication server is configured as the default authentication service in an identity provider infrastructure that controls access to a business portal. When a user attempts to access the business portal managed by the identity provider, the portal redirects the user to a biometric login, which requests that the user submit biometric samples to the identity server; the identity server in turn acts as a client to the biometric authentication server. The biometric authentication server authenticates the user by acquiring one or more biometric samples and matching them against the user's biometric reference template. If the biometric authentication is successful, the identity provider grants access to the business portal by issuing an SSO token that represents the user's sign-on and session information. If the authentication fails, the identity provider returns an error page to the user. The identity provider makes use of policy agents to secure the business portal by intercepting requests from unauthorized intruders, verifying and validating the user's SSO token if it exists, and controlling access to resources based on the policies assigned to the user.
To learn more about building a biometric SSO for J2EE, Web, and enterprise applications using a vendor solution, refer to http://developers.sun.com/prodtech/identserver/reference/techart/bioauthentication.html.
Using Biometrics
Use multiple biometric samples for a single authentication. To reduce the possibility of fake or forged biometric sampling, use multiple biometric samples for a single authentication, and use random sequences when acquiring samples (for example, with fingerprint authentication, request the left-hand index finger first and then the right-hand thumb). This helps thwart attacks using residual fingerprints obtained from previous authentication sessions, fingerprints lifted from glasses, or latent fingerprints on scanners. It also helps prevent forged fingerprint attacks based on "gummy" fingers [Gummy].

Assign biometric scanners to users. Assign scanners to individuals and verify the originating host for all authentication requests to ensure that the biometric sample is transmitted only from the user's assigned scanner. Authentication must be considered successful only if the matching sample is obtained from the individual's assigned scanner. This also makes monitoring and logging of events easier.

Prevent mimic scanner attacks. To thwart playback attacks that mimic scanners, validate the scanner-stored certificates before processing the samples. This can be accomplished by establishing SSL/TLS communication between the scanner and the authentication server.

Control access to the administration and enrollment system. It is important to establish roles for privileged users who are authorized to perform enrollment. To increase a user's administration and enrollment access capabilities, follow an authorization approval workflow involving multiple officials.

Limit multiple login attempts. If a user attempts to log in multiple times and fails to generate a matching score, the user account pertaining to that user ID must be temporarily revoked and reported for further investigation by an administrator. It is important to verify the time of access and the device in use to identify the user.

Logging, auditing, and monitoring. All enrollment, administration, and authentication events and related actions must be captured in secure logs that include timestamps, host, and user information. It is also important to store audit trails to identify fake attempts and their originating sources. The system must be monitored for activity and must raise alerts whenever a potential breach or violation occurs. It is also important to periodically inspect the scanners and their stored keys and certificates for validity.

Secure the biometric information repository. It is important to secure the biometric information stored in a directory (LDAP) or relational database. It is strongly recommended to encrypt the templates at rest so that the information remains confidential (see the sketch after this list).

Secure communication. All network communication involved in smart card or biometrics-based authentication must be secured using SSL/TLS protocols. This ensures the information is not intercepted or captured in transit, preventing man-in-the-middle attacks that read the CUID (smart cards) or fingerprint images (biometrics) and then impersonate the user by replaying previously recorded information.

Match-on-the-card biometrics. Size the processor and memory capabilities of a smart card before test-driving match-on-the-card biometric authentication. For example, a typical fingerprint template ranges from 250 bytes to 1.2 KB and differs from person to person. The smart card must be tested for storage, performance, and reliability when using multiple biometric samples.
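As a sketch of encrypting a reference template before storage, using the JCE APIs covered in Chapter 4; in a real deployment the AES key would be loaded from an HSM or protected keystore rather than generated per run, and the class and variable names here are illustrative:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class TemplateVault {
    public static void main(String[] args) throws Exception {
        // Illustrative only: generate an AES key in place of a managed key
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        byte[] template = new byte[]{ /* biometric reference template bytes */ };

        // Encrypt the template before writing it to the LDAP directory or database
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] protectedTemplate = cipher.doFinal(template);

        // Decrypt when a match request arrives
        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] recovered = cipher.doFinal(protectedTemplate);
        System.out.println("Round trip OK: " + java.util.Arrays.equals(template, recovered));
    }
}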
Pitfalls
Architectural complexity. The complexity of implementing a smart card and biometrics-based security infrastructure depends on the degree of geographical dispersion, the number of systems requiring physical and logical access, the centralized or decentralized nature of administration, and the directory infrastructure. These factors can introduce scalability and performance issues in the overall architecture.

Lost, stolen, and revoked smart cards. There is always a possibility of abuse of lost or stolen cards, in terms of impersonation or gaining physical access to a location. If the card requires a biometric template for authentication, however, a lost or stolen card cannot be used.

False Acceptance Rate (FAR) and False Rejection Rate (FRR). Biometric authentication systems are prone to errors in terms of false acceptance and false rejection. Depending upon the security requirements, and using the CER, it is important to strike a balance between the FAR and FRR percentages. For example, setting a high score threshold lowers the FAR, but it may reject some legitimate persons and thus raise the FRR. Other factors, such as physical conditions, positioning, location, weather, injury, the biometric device itself, and so forth, must be considered before deployment, because they can directly influence the accuracy of the overall biometric authentication process.
References
[BioAPI] BioAPI Version 1.1 specifications. http://xml.coverpages.org/BIOAPIv11.pdf

[BioSSO] Ramesh Nagappan and Tuomo Lampinen. "Building Biometric Authentication for J2EE, Web and Enterprise Applications." http://developers.sun.com/prodtech/identserver/reference/techart/bioauthentication.htm

[GINA] Keith Brown. "Customizing GINA." http://msdn.microsoft.com/msdnmag/issues/05/05/SecurityBriefs/default.aspx

[GlobalPlatform] Global Platform specifications and documents. http://www.globalplatform.org/

[Gummy] "Impact of 'Gummy' Fingers on Fingerprint Systems." http://cryptome.org/gummy.htm

[JavaCard] Java Card 2.2.1 specifications. http://java.sun.com/products/javacard/specs.html

[JavaCardForum] Biometric Application Programming Interface (API) for Java Card. http://www.javacardforum.org/Documents/Biometry/BCWG_JCBiometricsAPI_v01_1.pdf

[NIST] NIST Biometric Consortium Working Group. http://www.nist.gov/bcwg

[OpenCard] OpenCard Framework specifications and documents. http://www.opencard.org

[PAM] Vipin Samar and Charlie Lai. "Making Login Services Independent of Authentication Technologies." http://www.sun.com/software/solaris/pam/pam.external.pdf

[PC/SC] PC/SC Workgroup specifications. http://www.pcscworkgroup.com/

[SmartCardAlliance] "Smart Cards and Biometrics FAQ" white paper. http://www.estrategy.gov/smartgov/information/smart_card_biometric_faq_final.pdf
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z]
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z]
-addcert option -addjarsig option -certreq option -export option -genkey option -import option -keyalg option -keypass option -keystore option -list option -printcert option -showcert option -storepass option -storepasswd option <<IssuesInstant> element <application-desc> element <AttributeValue> element <AuthenticationContextStatement> message <AuthnRequest> message 2nd <AuthnResponse> message 2nd <AuthorizationDecisionStatement> feature 2nd <CanonicalizationMethod> element <CipherData> element <CombinerParameters> element
<condition> element
EPAL XACML <Decision> element <DigestMethod> element <DigestValue> element <ds:KeyInfo> element <ds:Signature> element <EncryptedData> element <EncryptedKey> element <EncryptionMethod> element <EncryptionProperties> element <Environment> element <EnvironmentMatch> element <ExactlyOne> element <FederationTerminationNotification> message <information> element <KeyInfo> element <Manifest> element <NameIdentifierMappingRequest> message <Object> element <PolicySetIdReference> element <r:license> element <Reference> element <Request> element <resources> element <Result> element <rule> element <RuleCombinerparameters> element <saml:Assertion> element <security> element
<Signature> element <SignatureMethod> element <SignatureProperties> element <Signaturevalue> element <SignedInfo> element <Status> element <Target> element <Transforms> element <VariableDefinition> element <VariableReference> element <Version> element <wsse:BinarySecurityToken> element <wsse:Security> element <wsse:SecurityTokenReference> element <wsse:UsernameToken> element <wsu:TimeStamp> element <xenc:EncryptedData> element <xenc:EncryptedKey> element
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z]
Ability to verify (ATV) probability abort method Abstract Factory pattern Abstract objects Abstraction layers Access control 2nd Assertion Builder pattern broken Business tier patterns 2nd DMTF EPAL for smart cards IETF Policy Management Working Group J2EE management services 2nd Parlay Group physical and logical Web services 2nd Access control lists (ACLs) J2EE JMS Access points in case study AccessController class Accountability, checklist for
Administration
in biometric systems in Web tier patterns reality checks for Administrator privileges Advanced Encryption Standard (AES) 2nd 3rd Advice in SAML assertions Advisory policies Agent-based and agentless architecture for user account provisioning Agent-based authentication 2nd Agent-based policy enforcement Aggregation, service Alchemy of security design
Alerts
SSL Web services patterns AlgorithmParameter class AlgorithmParameterGenerator class
Alteration attacks
SAML Secure Logger pattern 2nd Annual Loss Expectancy (ALE) 2nd Anonymous EJB resources AOP (Aspect Oriented Programming) techniques
Apache Struts
in form validation XML in Web data validation with SecureBaseAction with SimpleFormAction APDUs (Application Protocol Data Units)
APIs
BioAPI CertPath JAAS Java Java Card JCA JCE JSSE SAAJ 2nd 3rd SASL Vendor-specific Applets for smart cards Java Card signed Appletviewers
Appliances
firewall
strategies for XML-aware Application Controller Application data messages in SSL Application Protocol Data Units (APDUs) Application Requests Application security assessment model Application Security Providers Application-based authentication Applications and application security access control as weakest links audit and logging authentication buffer overflow CLDC coding problems configuration data cross-site scripting data injection flaws data transit and storage deployment problems DOS and DDOS attacks encryption error handling in case study input validation failures Intercepting Web Agent pattern J2EE JSSE man-in-the-middle attacks multiple sign-ons output sanitation password exploits policies Secure Pipe pattern security provisioning patterns security tokens
servers
for biometrics for smart cards in use cases session identifiers session theft Web tier patterns Applying security patterns
Architecture
in case study 2nd
in security patterns
Authentication Enforcer
Business tier Intercepting Validator Intercepting Web Agent Secure Base Action Secure Service Proxy inefficiencies J2EE J2ME Java Liberty Alliance patterns-driven security design personal identification systems biometrics smart cards risk analysis SAML 2nd Secure UP 2nd
Assertions
Java System Access Manager SAML attribute
authentication 2nd authorization WS-Policy WS-Security assertRequest method Assessment checklists Asset valuation Asymmetric ciphers Attachments in SOAP messages Attack trees AttributeQuery class
Attributes
J2EE
SAML
assertion 2nd authority 2nd mapping profile repository Secure Service Facade pattern XACML 2nd AttributeStatement class 2nd ATV (ability to verify) probability Audit Interceptor pattern 2nd 3rd and Message Inspector pattern consequences forces in case study 2nd 3rd 4th participants and responsibilities problem reality check related patterns sample code security factors and risks solution strategies structure audit method AuditClient.java file
Auditing
Assertion Builder pattern
Web services 2nd Web tier patterns 2nd AuditLog class 2nd AuditLogJdbcDAO class AuditRequestMessageBean.java file Authentication assessment checklists biometrics 2nd 3rd 4th broken 2nd 3rd in case study
in security patterns
Assertion Builder 2nd
forces in case study 2nd 3rd 4th 5th participants and responsibilities problem reality checks in related patterns Container Managed Security Secure Base Action sample code security factors and risk in solution strategies in structure
in security patterns
Dynamic Service Management Intercepting Web Agent Policy Delegate Secure Base Action Secure Session Object J2EE 2nd 3rd declarative programmatic Web tier
JAAS
implementing strategy SAML 2nd 3rd Security services Security Wheel trust model Web services XACML 2.0 Authorization and Access Control service Authorization Enforcer pattern consequences forces participants and responsibilities problem reality check
related patterns security factors and risks solution strategies structure Authorization providers AuthorizationEnforcer class AuthPermission class Automated back-out strategy Automated password retry
Availability
identity management patterns in case study in use cases J2EE network topology Message Interceptor Gateway pattern Secure Message Router pattern security provisioning patterns Security Wheel Web services
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z] B2B (Business-to-Business) applications
identity management in Liberty Alliance transaction support in Back-out password strategy
Basic authentication
in web.xml J2EE 2nd Basic Information Basic Profile Basics of security
personal identification
biometrics smart cards
accuracy architecture and implementation best practices in multi-factor authentication operational models SSO strategy verification process Biometric service providers (BSPs) Black box testing in case study Secure UP 2nd 3rd Blanket MIDlets Block ciphers Block encryption algorithms Bodies in SOAP messages Broken access control risk Broken authentication Assertion Builder pattern Password Synchronizer pattern
Browser plug-ins
for biometrics for smart cards Brute force attacks BSPs (biometric service providers) Buffer overflow Build portion in patterns-driven security design
Business tier
in case study 2nd 3rd
overview 2nd pitfalls Policy Delegate references Secure Service Facade Secure Session Object
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z]
CA (connector architecture) CA SiteMinder WebAgent Caching in Single Sign-on Delegator pattern CADs (card acceptance devices) Caesar ciphers California, notice of security breach requirements CallbackHandler class 2nd 3rd 4th 5th Callbacks in J2EE Canadian Public Accounting Board Canonical transformations Canonicalization algorithms 2nd CAP (Converted Applet) files Capacitance-based scanners Capstone project Card acceptance devices (CADs) 2nd Card Unique Identifiers (CUIDs)
for certificates
issuing revoking for signed applets J2EE Case study architecture 2nd 3rd assumptions challenges conceptual security model conclusion deployment design Business tier 2nd classes in data modeling and objects factor analysis Identity tier infrastructure policy security patterns services in threat profiling tier analysis trust model Web Services tier 2nd Web tier 2nd development
lessons learned overview pitfalls references risk analysis and mitigation security patterns 2nd summary trade-off analysis
Centralization
auditing authentication Authorization Enforcer pattern encryption logging 2nd Message Interceptor Gateway pattern policies 2nd routing transaction management validations Web services patterns Centralized model in user account provisioning CER (Crossover Error Rate) probability Certificate revocation lists (CRLs) Certificate Signing Requests (CSRs) 2nd CertificateFactory class 2nd Certificates and certificate keys 2nd CA role certificate chains for applets for JAD files for keytool for SSL importing 2nd in JSSE mutual authentication PKI printing revocation 2nd Secure Pipe pattern security pattern factor analysis tokens 2nd Web tier patterns Certificates of Authority (CAs)
CertPath
for certificates
issuing revoking for signed applets J2EE
CertPath
classes and interfaces in for certificate chains CertPathBuilder class CertPathValidator class CertStore class CGI in Web tier patterns Challenge-response protocol authentication Change management request (CMR) system ChangeCipherSpec messages 2nd Check Point patterns checkPermission method checkRead method Child nodes in attack trees Children's Online Privacy Protection Act (COPPA) CIM (Common Information Model)
Cipher class
in JCE 2nd in Secure Logger pattern CipherInputStream class CipherOutputStream class Ciphers asymmetric
JCE
block stream symmetric CipherState messages
Circles of trust
in Liberty specifications in Single Sign-on Delegator pattern Claims in WS-Security
Obfuscated Transfer Object Password Synchronizer Policy Delegate Secure Base Action Secure Logger Secure Message Router Secure Pipe Secure Service Facade Secure Service Proxy Secure Session Object Single Sign-on Delegator
Classes
CertPath in case study JAAS Java JCA JCE JSSE Classification of security patterns ClassLoader CLDC (Connected Limited Device Configuration) Client Device tier, reality checks for
Client-certificate authentication
Authentication Enforcer pattern in web.xml J2EE ClientHello messages ClientKeyExchange messages
Clients
Identity Provider Agent strategy in case study
in security patterns
Assertion Builder Audit Interceptor Authentication Enforcer Container Managed Security Credential Tokenizer Intercepting Validator Intercepting Web Agent Message Inspector Message Interceptor Gateway Obfuscated Transfer Object Password Synchronizer Policy Delegate 2nd Secure Base Action Secure Logger Secure Message Router Secure Pipe Secure Service Proxy
Secure Session Object Single Sign-on Delegator 2nd J2EE 2nd 3rd JAAS authentication for JSSE Liberty specifications SAML SASL 2nd server connections 2nd use cases closeService method 2nd closeSSOConnection method Clustered PEP pattern CMR (change management request) system code, Java obfuscation reverse engineering Codebase in Java 2 CodeSource in Java 2 Coding problems Cold Standby pattern Collisions in one-way hash functions Command APDUs
commit method
LoginModule SAML commitTransactions method Common Biometric Exchange File Format (CBEFF) Common classes in JAAS Common Information Model (CIM) Common Open Policy Service (COPS) Common SAML functions
Communication
biometrics JGSS Liberty Alliance Web services 2nd Web tier patterns Compact Virtual Machine (CVM) Comparator-checked Fault Tolerant System pattern
Compatibility
in proprietary systems in Secure Pipe pattern Compiling applets
Complexity
Assertion Builder pattern Authorization Enforcer pattern personal identification systems Policy Delegate pattern 2nd Secure Pipe pattern Compliance COPPA Data Protection Directive
Gramm-Leach-Bliley Act HIPPA in other countries in Security Wheel in Web services patterns justifications Notice of Security Breach Sarbanes-Oxley Act
Component security
Business tier patterns J2EE authentication authorization context propagation HTTP session tracking users, groups, roles, and realms Web tier Component-managed sign-on Composability issues Computer Security Institute survey Conceptual security model
Concurrency
Message Inspector pattern Secure Session Object pattern
Conditions
Parlay policy design SAML assertions Confidentiality 2nd breaches
in security patterns
Assertion Builder Authentication Enforcer Dynamic Service Management Message Inspector Obfuscated Transfer Object Policy Delegate Secure Logger Secure Pipe Security Wheel Web services 2nd Configuration Assertion Builder pattern in case study insecure J2ME Web services patterns Configuration class Conformance requirements Connected Device Configuration (CDC) Connected Limited Device Configuration (CLDC)
Connections
client-server 2nd in case study in use cases SSL 2nd Connector architecture (CA) Connector Factory Consequences in security patterns Assertion Builder Audit Interceptor Authentication Enforcer Authorization Enforcer Container Managed Security Credential Tokenizer Dynamic Service Management Intercepting Validator Intercepting Web Agent Message Inspector Message Interceptor Gateway Obfuscated Transfer Object Password Synchronizer Policy Delegate Secure Base Action Secure Logger 2nd Secure Message Router Secure Pipe Secure Service Facade Secure Service Proxy Secure Session Object Single Sign-on Delegator Constants in Java System Access Manager
Constraints
authorization in use cases Contact cards Container authenticated strategy Container Managed Security pattern consequences forces participants and responsibilities problem reality check related patterns sample code security factors and risks solution strategies structure Container-based security authentication
authorization declarative JACC programmatic protection domains in sign-ons in TLS Content encryption in Web services patterns Content-specific policies
Context
in J2EE in XACML 2nd propagation of
Cookies
HTTP session tracking Liberty Alliance COPPA (Children's Online Privacy Protection Act) COPS (Common Open Policy Service) CORBA-based clients Core Web services standards SOAP UDDI WSDL XML Corporations, identity management in
Correlation
in fingerprint matching in Web services patterns Countermeasures CRC (cyclic-redundancy check) algorithms 2nd
create method
AddUser AuthenticationStatement Create, read, update, and delete (CRUD) form data createAssertionReply method createAssertionStatement method createAuthenticationStatement method 2nd createCondition method createMBean method createObjectName method createPasswordRequest method createRule method createServerSocket method createService
createSocket method createSPMLRequest method 2nd createSSLEngine method createSSOConnection method createSSODConnection method
createSSOToken method
AssertionContextImpl SSODelegatorFactoryImpl createToken method Credential Collector Credential Tokenizer pattern 2nd and Single Sign-on Delegator pattern consequences forces participants and responsibilities problem reality check related patterns sample code security factors and risks solution strategies structure
Credentials
delegation of J2EE Liberty Alliance CRLs (certificate revocation lists) 2nd Cross-domain federations
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z]
DAP (Directory Access Protocol) Data Encryption Standard (DES) 2nd Data flow in XACML Data injection flaws Data Protection Directive Data Transfer HashMap pattern Data transformations Database communication DCE PAC Profile DDOS (distributed DOS) attacks 2nd Debuggers in white box testing Decentralized model in user account provisioning Declarative auditing Declarative authorization 2nd
Declarative security
Container Managed Security pattern 2nd EJBs J2EE 2nd Decompiling Java code
Decoupling
in Audit Interceptor pattern in Intercepting Web Agent pattern validations from presentation logic
problems in Web services patterns DES (Data Encryption Standard) 2nd DescriptorStore class
Design alchemy of. [See Alchemy of security design] in case study. [See Case study]
policy Design patterns Destinations in JMS DestinationSite class 2nd destroy method Detached signatures 2nd Detecting data deletion Developers in J2EE Development in case study
Devices
in case study in security pattern factor analysis Differentiators Diffie-Hellman (DH) key agreement 2nd Digest authentication digest method 2nd Digester class
Digests
JCA XML signatures
Digital certificates. [See Certificates and certificate keys] Digital Signature Algorithm (DSA)
Cryptographic Service Providers XML signatures
Discovery
in user account provisioning service Distributed DOS (DDOS) attacks 2nd Distributed Management Task Force (DMTF) Distributed policy stores Distributed security DLLs (dynamically linked libraries) DMTF (Distributed Management Task Force) DMZs (Demilitarized Zones) 2nd doAs method doAsPrivileged method Document style web services doFinal method Domain models
domains, protection
J2EE
Java 2
doPost method
for new sessions SingleProxyEndpoint
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z]
EBJContext interface EbXML registry ECP (Enhanced Client and Proxy) profile EEPROM in smart cards EER (Equal Error Rate) probability Effect Matrix EIS (Enterprise-information system) tier 2nd connector architecture in JDBC in JMS in EJB tier in J2EE anonymous and unprotected resources context propagation from web-tier to declarative authorization principal delegations programmatic authorization run-as identity Ejb-jar.xml deployment descriptor
ejbCreate method
AuditRequestMessageBean SecureSessionFacadeSessionBean ejbRemove method
Encapsulation
Assertion Builder pattern Credential Tokenizer pattern 2nd Java Secure Base Action pattern encrypt method EncryptDecryptionWithAES class EncryptDecryptWithBlowfish.java program Encryption and cryptography asymmetric ciphers
challenges hardware-based HTTP-POST in authentication in case study Java 2nd JCA JCE 2nd AES PBE JGSS Obfuscated Transfer Object pattern one-way hash function algorithms Secure Logger pattern Secure Pipe pattern
Engine classes
JCA JCE Enhanced Client and Proxy (ECP) profile
Enrollment systems
biometrics 2nd smart card 2nd Enterprise Java Beans (EJBs) Container Managed Security pattern declarative security for for programmatic security helper classes in in case study Enterprise Privacy Authorization Language (EPAL) Enterprise-information system (EIS) tier 2nd connector architecture in JDBC in JMS in EnterpriseService class Entitlement in Web services Entity management Enveloped Signature transform algorithms Enveloped signatures Envelopes in SOAP messages
Enveloping signatures
examples XML Environment setup in Secure UP EPAL (Enterprise Privacy Authorization Language) EPCGlobal standards EPCs (Electronic Product Codes) Equal Error Rate (EER) probability
ERewards Membership Service. [See Case study] Errors and error handling
improper reporting SPML translation European Union (EU) Data Protection Directive EventCatalog class Exclusive canonicalization encryption
execute method
Policy Delegate pattern PolicyDelegateInterface Secure Base Action pattern SecureSessionFacadeSessionBean executeAsPrivileged method
Expertise
Message Interceptor Gateway pattern problems in
Exporting
keystore certificates policies for Exposure risk factor Extended SPML operations
Extensibility
Message Inspector pattern Message Interceptor Gateway pattern Secure Logger pattern Secure Message Router pattern SPML user account provisioning 2nd
Extensible Access Control Markup Language. [See XACML (Extensible Access Control Markup Language)] Extensible Markup Language. [See XML (Extensible Markup Language)]
Extensible Rights Markup Language (XrML) External policy server strategy Extract Adapter pattern
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z]
Facial recognition
Factor analysis
for security patterns in case study Factory pattern
Failover
J2EE network topology service Failure to Enroll (FTE) probability False Acceptance Rate (FAR) probability 2nd False Match Rate (FMR) probability False Non-Match Rate (FNMR) probability False Reject Rate (FRR) probability 2nd FAR (False Acceptance Rate) probability 2nd Fault handling
Fault tolerance
J2EE network topology Message Interceptor Gateway pattern Secure Message Router pattern Web services patterns FBI survey Federal regulations 2nd Federal Trade Commission survey Federated affiliates Federated data exchange Federated identity 2nd Federated SSO 2nd Federation management in Liberty Alliance Federation services Federation termination protocol Federations, cross-domain fileChanged method Final classes in Java
Financial losses
from confidentiality breaches reported Financial Privacy Rule Financial Services Modernization Act findApplicationId method Fine-grained security Fingerprint matching approaches to logical architecture Fingerprints, key Finished messages
Firewalls
DMZs for for Java Card applets Secure Service Proxy pattern Web services patterns
Form validation
in XML using Apache Struts Web tier patterns
Form-based authentication
in web.xml J2EE 2nd Form-POST-based redirection Foundstone Enterprise testing tool Fowler, Martin
Frameworks, security
adopting in Secure Service Facade pattern Front Controller pattern 2nd 3rd FRR (False Reject Rate) probability 2nd FTE (Failure to Enroll) probability Full View with Errors pattern
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z]
Gambling casino Gang of Four (GoF) design patterns Gartner Group report
Gateways Message Interceptor Gateway pattern. [See Message Interceptor Gateway pattern]
Parlay generateKey method Generic products, XACML for genKeyPair method genPrivate method genPublic method getAction method getAlgorithm method getAllConfigContext method getApplicationBufferSize method getAssertionReply method getAuthenticationMethod method getCallerPrincipal method 2nd getCallersIdentity method getConfigFile method getConfigProperties method getConnection method getContents method getContext method getData method getEncoded method getFormat method
getInstance method
Cipher KeyAgreement KeyGenerator KeyPairGenerator MBeanFactory MBeanManager MessageDigest Signature getMaxInactiveInterval method getPacketBufferSize method getPermissions method getPrincipal method getProtectionDomain method
getProtocolBinding method
AssertionContextImpl SSOContextImpl TokenContextImpl getRegistryFileName method getRemoteUser method 2nd getSecurityInfo method getSecurityManager method getServiceName method getServiceStatus method
getSession method getSessionInfo method getSSODelegator method getSSOTokenMap method getStatus method getSubject method
getToken method
TokenContextImpl UsernameToken getUserPrincipal method 2nd 3rd GINA (Graphical Identification and Authentication)
GINA Module
for biometrics for smart cards GLB (Gramm-Leach-Bliley) Act
Global logout
in identity management Liberty Alliance SAML Single Sign-on Delegator pattern Global Platform technology Goals, security GoF (Gang of Four) design patterns Gramm-Leach-Bliley (GLB) Act Grant statement
Granularity
Container Managed Security pattern Intercepting Web Agent pattern Graphical Identification and Authentication (GINA)
Groups
identity management J2EE GSS-API
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z] HA. [See High availability (HA)]
Hand geometry handle method Handlers Handshake messages Hardening in Web services patterns in Web tier security Hardware acceleration
Helper classes
AuditInterceptor EJB Hierarchical resource profiles
Horizontal scaling
Business tier patterns J2EE network topology Host operating systems
Host security
in case study in Web services patterns using JSSE HostnameVerifier class Hot Standby pattern
HTTP
basic authentication 2nd 3rd digest authentication
POST messages
identity management SAML 2nd Web tier patterns redirection
sessions
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z]
I/O, non-blocking ID-FF (Identity Federated Framework) version 1.2 ID-SIS (Identity Service Interface Specification) version ID-WSF (Identity Web Services Framework) version IDEA symmetric cipher Identification processes Identity Federated Framework (ID-FF) version 1.2 Identity federation 2nd 3rd cross-domain Liberty Alliance SAML Identity management 2nd 3rd 4th
Implementation
Assertion Builder pattern AssertionContextImpl class biometrics JAAS authorization LoginModule class Policy Delegate pattern Secure UP 2nd smart cards SPML UserNameTokem class implies method Importing certificates 2nd Inclusive canonicalization encryption Information aggregators Informative policies
Infrastructure
Application Security Provider in case study 2nd 3rd in security patterns Business tier factor analysis Intercepting Web Agent Password Synchronizer Secure Pipe Web services 2nd Web tier J2EE policies
Security Services
init method
AuditClient Cipher HTTPProxy MBeanFactory MBeanManager PasswordSyncLedger PasswordSyncListener Policy Delegate pattern SimpleSOAPServiceSecurePolicy TakeAction WriteFileApplet
initConfig method
ServiceConfig SSODelegatorFactoryImpl
initialize method
KeyPairGenerator LoginModule initSign method initVerify method Injection flaws Input validation failures Insider attacks
Integrity
as security goal in Security Wheel Secure Pipe pattern 2nd Web services Intellectual property
consequences forces in case study 2nd 3rd 4th 5th participants and responsibilities 2nd problem reality check related patterns security factors and risks solution strategies structure Intercepting Web Agent pattern consequences forces in case study participants and responsibilities problem reality check related patterns Container Managed Security Secure Service Proxy sample code security factors and risks solution strategies structure Intercepting Web Agent strategy Interceptor strategy
Interfaces
CertPath JAAS JCA JCE JSSE Password Synchronizer pattern PKCS#11 and PKCS#15 standards Policy Delegate pattern Secure Service Facade pattern Intermediary infrastructure Internet Message Access protocol (IMAP) Internet Scanner testing tool
Interoperability
Liberty Phase 1 Secure Message Router pattern Secure Pipe pattern security provisioning patterns user account provisioning Web services Intrusion Detection Systems (IDSs)
Invalidating HTTP sessions Invocation, rule-based invoke method IP address capture IP filtering Iris verification isAuthorized method
isCallerInRole method
EJBContext J2EE authorization Issuing authority in SAML isUserInRole method 2nd 3rd isValidStatement method Iterative development in Secure UP ITS4 testing tool
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z]
J2EE (Java 2 Enterprise Edition) platform architecture and logical tiers authorization 2nd 3rd declarative programmatic web-tier clients 2nd 3rd
component security. [See Component security] container-based security. [See Container-based security]
definitions in EIS tier 2nd connector architecture in JDBC in JMS in
client-side callbacks for biometrics vs. JGSS JAAS Authorization policy file JAAS Module JACC (Java Authorization Contract for Containers) 2nd JAD (Java application descriptor) files JADTool utility JAR (Java archive format) files for signed applets in Web tier patterns signing verifying Jarsigner tool for signed applets for smart cards
Java 2 Enterprise Edition) platform. [See J2EE (Java 2 Enterprise Edition) platform] Java 2 Micro Edition (J2ME)
architecture configurations MIDlets in profiles Java 2 platform security 2nd applet security for smart cards Java Card signed biometrics CertPath code obfuscation reverse engineering extensible importance
J2EE. [See J2EE (Java 2 Enterprise Edition) platform] J2ME. [See Java 2 Micro Edition (J2ME)] JAAS. [See JAAS (Java Authentication and Authorization Service)]
Java Card technology 2nd API framework applets in development kit model for smart cards
JCA. [See JCA (Java Cryptography Architecture)] JCE. [See JCE (Java Cryptographic Extensions)]
JGSS
MIDlets
components of signed trusted references reusable components SASL clients 2nd installing servers security model AccessController bytecode verifiers ClassLoader codebase CodeSource permissions policies protection domains SecurityManager summary tools jarsigner keystores 2nd keytool policytool Web services
Java Authentication and Authorization Service. [See JAAS (Java Authentication and Authorization Service)]
Java Authorization Contract for Containers (JACC) 2nd Java Card runtime environment (JCRE) Java Card technology 2nd API framework applets in development kit model for smart cards Java Card Workstation Development Environment (JCWDE) Java Certification Path Java Cryptographic Extension Keystores (JCEKS)
Java Cryptographic Extensions. [See JCE (Java Cryptographic Extensions)] Java Cryptography Architecture. [See JCA (Java Cryptography Architecture)]
Java Data Objects (JDO) 2nd Java Database Connectivity (JDBC) 2nd Java Development Kit (JDK) Java Generic Secure Services (JGSS) Java GSS-API Java Management Extension (JMX) technology
Java Secure Socket Extension (JSSE). [See JSSE (Java Secure Socket Extension)]
Java System Access Manager 2nd Java system web server Java Virtual Machine (JVM) Java Web Services Developer Pack (JWSDP) Java Web Start (JWS) security Java.security file Javac command Javax.net.* package Javax.net.ssl.* package Javax.security.auth package Javax.security.cert.* package
JAX-RPC API
for Web services in case study in Message Inspector pattern JAXR (Java API for XML Registry) JCA (Java Cryptography Architecture) API classes and interfaces cryptographic services digital signature generation key pair generation message digests JCE (Java Cryptographic Extensions) Advanced Encryption Standard API classes and interfaces Cryptographic Service Providers encryption and decryption 2nd hardware acceleration key agreement protocols MAC objects Password-Based Encryption sealed objects smart card support strong vs. unlimited strength cryptography JCEKS (Java Cryptographic Extension Keystores) JCRE (Java Card runtime environment) JCWDE (Java Card Workstation Development Environment)
JDBC (Java Database Connectivity) 2nd JDK (Java Development Kit) JDO (Java Data Objects) 2nd JGSS (Java Generic Secure Services) JiffyXACML JKS (Java keystores)
Index
[SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z]
K Virtual Machine (KVM)
Kerberos
in JAAS Login Module strategy in JGSS Kerievsky, Joshua
for keystore passwords for printing certificate information for private/public key pairs for signed applets for smart cards Ktoolbar tool KVM (K Virtual Machine)
L
Labeling
in security patterns in Security Wheel Layered Security pattern LDAP (Lightweight Directory Access Protocol) certificate revocation issues cryptography challenges J2EE key management random number generation SASL 2nd trust models Leaf nodes in attack trees Ledger 2nd
Legacy systems
Intercepting Web Agent pattern 2nd Password Synchronizer pattern Secure Service Proxy pattern Lessons learned in case study Liberty Alliance consortium Liberty Alliance Project architecture for SAML Liberty Phase 1 Liberty Phase 2 meta-data and schemas relationships security mechanisms SSO strategy usage scenarios communication security credentials federation management global logouts identity registration and termination Java System Access Manager multi-tiered authentication provider session state maintenance single sign-on 2nd Web redirection in Web services in Liberty-enabled clients Liberty-enabled proxies
Libraries
DLL
Lightweight Directory Access Protocol. [See LDAP (Lightweight Directory Access Protocol)]
Limited View pattern Load Balancing PEP pattern Load-balancing in case study loadRegistry method Locate service in X-KISS LogFactory class Logging alteration detection for 2nd failures in biometrics in case study
in security patterns
Identity management Password Synchronizer Policy Delegate Secure Base Action
Logical architecture
biometric systems smart cards user account provisioning Logical tiers in J2EE Logical views in use cases Login attempts in biometrics
login method
Authentication Enforcer LoginContext 2nd 3rd LoginModule Login service in case study
LoginContext class
JAAS authentication 2nd JAAS Login Module strategy 2nd LoginModule class Authentication Enforcer pattern biometrics implementing providers for smart cards LogManager class
logout method
LoginContext LoginModule Logout requests in SAML
M
MAC (message authentication code) 2nd Mac class magnus.conf file
Manageability
J2EE network topology Secure Base Action pattern Secure Logger pattern Manifest files Manipulation attacks
Mapping
in Container Managed Security pattern SAML attributes user account Masked list strategy Match-off-the-card strategy 2nd Match-on-the-card strategy 2nd 3rd Matrix, Effect MBean strategy MBeanFactory class MBeanFactory.java file MBeanManager.java file MBeanServer class MD5 cryptography Cryptographic Service Providers JCA message digests Media in security pattern factor analysis
Memory
for Secure Session Object pattern in smart cards Memory cards Message authentication code (MAC) 2nd Message authentication encryption Message Configurators 2nd
Message digests
encryption algorithms for JCA Message injection attacks Message Inspector pattern 2nd
consequences forces in case study 2nd 3rd 4th participants and responsibilities problem reality checks related patterns Intercepting Validator Message Interceptor Gateway Secure Message Router security factors and risks solution strategies structure Message Interceptor Gateway pattern 2nd consequences forces in case study 2nd 3rd participants and responsibilities problem reality check related patterns Audit Interceptor Intercepting Web Agent Message Inspector 2nd Secure Message Router security factors and risks solution strategies structure
Message replay
SAML security provisioning patterns Message Routers Message-handler chain strategy
MessageDigest class
JCA Secure Logger pattern
Meta-data and schemas 2nd Methodology choices in use cases Methods, Java Microprocessor cards MIDlets components of signed trusted MIDP (Mobile Information Device Profile)
Migration
in Message Interceptor Gateway pattern SAML 1.1 to SAML 2.0 Mimic scanner attacks Minimization and hardening in Web services patterns Minutiae-based fingerprint matching MITM (man-in-the-middle) attacks in case study in SAML in Web services Mobile Information Device Profile (MIDP) Model MBean strategy
Models
biometrics conceptual data domain JWS security smart cards threat trust 2nd 3rd user account provisioning 2nd Web services
Modification attacks
SAML Secure Logger pattern 2nd Modify operations in SPML ModifyResponse message
Modularity
Message Inspector pattern Message Interceptor Gateway pattern Secure Message Router pattern
Monitoring
biometrics Business tier patterns in case study Secure UP 2nd Security Services user account provisioning
Mutual authentication
J2EE JSSE Web tier patterns
N
Name Identifier Management Profile Name-value (NV) pairs
O
OASIS standards
in identity management
Obfuscation
Business tier patterns in case study 2nd Java code Web tier patterns obj.conf file Object Name Service (ONS) Objects in case study OCF (OpenCard Framework) OCSP (Online Certificate Status Protocol) 2nd 3rd ODRL (Open Digital Rights Language) One-to-many/one-to-one Policy Delegate One-way hash function algorithms Oneshot MIDlets onFault method Online Certificate Status Protocol (OCSP) 2nd 3rd Online portals 2nd
onMessage method
AuditRequestMessageBean PasswordSyncLedger PasswordSyncListener ONS (Object Name Service) Open Content model Open Digital Rights Language (ODRL) OpenCard Framework (OCF) OpenSC framework Operating systems 2nd
Operational models
biometrics smart cards Web services
Operational practices
Operations
Secure UP SPML Optical scanners Optimization Optional flag
P
Padding in JCE block ciphers paint method PAM (Pluggable Authentication Module) 2nd for biometrics for smart cards PAPs (Policy Administration Points) 2nd ParamValidator class Parlay Group 2nd Partial content of XML documents, accessing
Passwords
Credential Tokenizer patterns exploits Identity management 2nd in authentication JAAS authorization keystore SAML smart cards synchronization 2nd [See also Password Synchronizer pattern] vendor products for Web tier patterns PasswordSyncLedger class notification messages from sample code 2nd PasswordSyncListener class sample code 2nd screen display messages from PasswordSyncManager class 2nd 3rd PasswordSyncRequest class 2nd 3rd
Patches
in Secure UP problems from
Performance
helper classes for
in security patterns
Audit Interceptor 2nd Business tier Intercepting Validator Message Interceptor Gateway Obfuscated Transfer Object Policy Delegate Secure Logger 2nd Secure Pipe 2nd
Permissions
J2EE Java 2 JNLP MIDlets tag library for Web tier patterns PermissionsCollection class Persistent mode Personal Data Ordinance Personal Health Information (PHI) Personal identification 2nd authentication best practices
biometric. [See Biometric identification and authentication] enabling technologies. [See Enabling technologies for personal identification]
physical and logical access control pitfalls references RFID-based
Pitfalls
in case study in personal identification
in security patterns
Business tier Identity management security provisioning Web services PKCS#11 interface standard 2nd PKCS#15 interface standard PKCS1 algorithm
Plug-ins
for biometrics for smart cards in Java System Access Manager Pluggable Authentication Module (PAM) 2nd for biometrics for smart cards Point-to-Point Channel pattern Point-to-point interfaces Pointers in Java POJO business objects 2nd Policies failures in case study in security patterns Business tier 2nd Identity management Intercepting Web Agent Secure Service Facade Web tier in Security Wheel J2EE domains for JAAS authorization Java 2 management DMTF EPAL IETF Policy Management Working Group in Web services 2nd 3rd 4th Parlay Group services for reality checks for XACML 2nd 3rd Policy Administration Points (PAPs) 2nd Policy class Policy Decision Point Authority
Policy repository
SAML XACML Policy sets Policy stores Policytool tool 2nd 3rd
Portals
in use cases 2nd in user account provisioning SSO through
Presentation tier
J2EE 2nd reality checks for Pretexting Provisions
Principals
Authorization Enforcer pattern delegation of J2EE JAAS authorization JAAS Login Module Strategy Liberty specifications propagation of resource Printing certificate information Priorities
Privacy
Secure Pipe pattern
security provisioning patterns Security Services XACML Privacy-rule administrators Private keys Private/public key pairs PrivateCredentialsPermission class PrivateKey interface PrivilegedAction Proactive assessment Proactive security 2nd Probability risk factors Problem in security pattern templates Assertion Builder Audit Interceptor Authentication Enforcer Authorization Enforcer Container Managed Security Credential Tokenizer Dynamic Service Management Intercepting Validator Intercepting Web Agent Message Inspector Message Interceptor Gateway Obfuscated Transfer Object Password Synchronizer Policy Delegate Secure Base Action Secure Logger Secure Message Router Secure Pipe Secure Service Facade Secure Service Proxy Secure Session Object Single Sign-on Delegator process method processPasswordSyncRequests method Profiles in case study J2ME SAML 2nd 3rd XACML
Programmatic security
authentication
authorization
Authorization Enforcer pattern J2EE 2nd 3rd 4th Container Managed Security pattern EJB method using Password Synchronizer pattern validation logic Proprietary solutions
Protected resources
Protection domains
J2EE Java 2 ProtectionDomain class
Protocols
Business tier patterns Java System Access Manager SAML Security Services Protocols stack
Provider classes
JCA JCE
Providers
authorization 2nd J2EE JMS JSSE Liberty specifications 2nd LoginModule PKCS Secure Message Router pattern session state maintenance Web services 2nd Provisioning Service Points 2nd Provisioning Service Targets
Public keys
in assessment checklists LDAP 2nd PublicKey interface publishPasswordSyncResult method
Q
Qualitative risk analysis
Quality of services
reality checks security patterns factor analysis security provisioning Quantitative risk analysis
R
RA (risk analysis) Radio Frequency Identification (RFID)
RAM
for Secure Session Object pattern in smart cards Random number generation Rationale in security design RBAC profiles RC6 algorithm Reactive security
Readers
RFID smart card 2nd Reality checks 2nd Business tier Client Device tier for administration for policies for quality of services in security pattern templates Assertion Builder Audit Interceptor Authentication Enforcer Authorization Enforcer Container Managed Security Credential Tokenizer Dynamic Service Management Intercepting Validator Intercepting Web Agent Message Inspector Message Interceptor Gateway Obfuscated Transfer Object Password Synchronizer Policy Delegate Secure Base Action Secure Logger Secure Message Router Secure Pipe Secure Service Facade Secure Service Proxy Secure Session Object Single Sign-on Delegator Integration tier Presentation tier Web tier
Realms
for smart cards J2EE 2nd JAAS Reconciliation in user account provisioning Recovery 2nd in case study in use cases in XKMS key service Redirection, web Redundancy in Policy Delegate pattern Refactoring security design Reference templates for biometrics registerObject method
Registration
identity UDDI
Registries
Dynamic Service Management pattern UDDI Web services XACML RegistryMonitor class Regulatory policies Reissue service, key REL (Rights Expression Language) Related patterns in security pattern templates 2nd Assertion Builder Audit Interceptor Authentication Enforcer Authorization Enforcer Container Managed Security Credential Tokenizer Dynamic Service Management Intercepting Validator Intercepting Web Agent Message Inspector Message Interceptor Gateway Obfuscated Transfer Object Password Synchronizer Policy Delegate Secure Base Action Secure Logger Secure Message Router Secure Pipe Secure Service Facade Secure Service Proxy Secure Session Object Single Sign-on Delegator
Reliability
Assertion Builder pattern Secure Message Router pattern reloadMBeans method
Replay attacks
Intercepting Web Agent pattern SAML Web services XKMS
Reporting practices
Gramm-Leach-Bliley Act Sarbanes-Oxley Act Reporting services in identity management
Repository
for biometric information SAML XACML
Request messages
Message Inspector pattern 2nd Secure Message Router pattern
Request-reply model
SAML attribute assertion authentication assertions SPML RequestContext class Authentication Enforcer pattern Authorization Enforcer pattern JAAS Login Module Strategy Requesters for Web services Requesting Authority RequestMessage class Requests, XACML Required flag
Requirements
in use cases Secure UP 2nd security basics Requisite flag Resource principals Resource profiles Resources tier respond method Response APDUs
Response messages
Message Inspector pattern Message Interceptor Gateway pattern Retinal analysis
Risks
in case study 2nd in patterns-driven security design
in security patterns
Assertion Builder Audit Interceptor Authentication Enforcer Authorization Enforcer Container Managed Security Credential Tokenizer Dynamic Service Management Intercepting Validator Intercepting Web Agent Message Inspector Message Interceptor Gateway Obfuscated Transfer Object Password Synchronizer Policy Delegate Secure Base Action Secure Logger Secure Message Router Secure Pipe Secure Service Facade Secure Service Proxy Secure Session Object
Single Sign-on Delegator in security provisioning patterns in trust model in use cases in Web services
Roles
Business tier 2nd Container Managed Security pattern identity management J2EE 2nd ROM in smart cards Root certificates Root nodes in attack trees Rotate ciphers
Rules
EPAL in policy design XACML run method Run-as identity
S
SAAJ API
for Web services in case study in Message Inspector pattern Safeguards Rule SAML (Security Assertion Markup Language) 2nd 3rd architecture 2nd assertions 2nd 3rd 4th attribute authentication 2nd authorization domain model for access control Identity management patterns in XACML 2nd 3rd J2EE-based applications and web services Java System Access Manager with migration in motivation Policy Administration Point Policy Enforcement Point profiles 2nd 3rd request-reply model SAML 1.0 2nd SAML 1.1 SAML 2.0 2nd 3rd SSO in 2nd usage scenarios DOS attacks global logout man-in-the-middle attacks message replay and message modification third-party authentication and authorization XML signatures in SAML Token profile
Sample code in security pattern templates
Password Synchronizer Policy Delegate Secure Base Action Secure Logger Secure Pipe Secure Service Facade Secure Service Proxy Secure Session Object Single Sign-on Delegator SampleAuthorizationEnforcer.java file Sarbanes-Oxley Act (SOX) identity protection in 2nd in security provisioning patterns SASL (Simple Authentication and Security Layer) API clients 2nd installing servers SATAN (Security Administrator Tool for Analyzing Networks) tool SBU (Sensitive But Unclassified) information
Scanners
fingerprint in biometrics
Secure Base Action pattern 2nd
in case study 2nd 3rd 4th participants and responsibilities 2nd problem reality checks related patterns sample code security factors and risk solution strategies structure Secure Communication patterns Secure data logger strategy Secure log store strategy Secure Logger pattern 2nd consequences 2nd forces in case study 2nd 3rd participants and responsibilities problem reality check related patterns Message Inspector Secure Base Action sample code security factors and risks solution strategies structure Secure Message Interceptor pattern Secure Message Router pattern consequences forces in case study 2nd 3rd 4th participants and responsibilities problem reality check related patterns Message Interceptor Gateway Secure Service Proxy security factors and risks solution strategies structure Secure Pipe pattern 2nd 3rd consequences forces in case study 2nd 3rd 4th in secure log store strategy 2nd participants and responsibilities problem
reality check related patterns Authentication Enforcer Credential Tokenizer Dynamic Service Management Secure Logger sample code security factors and risks solution strategies structure Secure Service Facade pattern 2nd 3rd consequences forces in case study participants and responsibilities 2nd problem reality check related patterns sample code security factors and risks solution strategies structure Secure Service Proxy pattern consequences forces participants and responsibilities problem reality check related patterns Container Managed Security Intercepting Web Agent Secure Service Facade sample code security factors and risks solution strategies structure Secure service proxy single service strategy Secure Session Facade pattern 2nd Secure Session Manager 2nd Secure Session Object pattern consequences forces participants and responsibilities problem reality check related patterns sample code
security factors and risks solution strategies structure Secure Session pattern
SecureBaseAction class
Authentication Enforcer pattern Authorization Enforcer pattern 2nd Intercepting Validator pattern JAAS Login Module strategy with Apache Struts SecureClassLoader class SecureID SecureRandom class SecureServiceFacade class SecureSessionFacadeSessionBean.java file Security Administrator Tool for Analyzing Networks (SATAN) tool
Security Assertion Markup Language. [See SAML (Security Assertion Markup Language)]
Security by default 2nd application security business challenges
Security factors and risks in security pattern templates
Message Inspector Message Interceptor Gateway Obfuscated Transfer Object Password Synchronizer Policy Delegate Secure Base Action Secure Logger Secure Message Router Secure Pipe Secure Service Facade Secure Service Proxy Secure Session Object Single Sign-on Delegator Security levels in J2EE network topology Security patterns application security assessment model applying Business tier 2nd Audit Interceptor best practices Container Managed Security Dynamic Service Management factor analysis Obfuscated Transfer Object overview 2nd pitfalls Policy Delegate references Secure Service Facade Secure Session Object classification existing factor analysis Identity management 2nd 3rd Assertion Builder best practices Credential Tokenizer pattern pitfalls references Single Sign-on Delegator pattern in case study 2nd in patterns-driven security design infrastructure and quality of services Integration tier labeling in policy design in references relationships
security provisioning
best practices and pitfalls Password Synchronizer threat profiling tier analysis trust model Web services 2nd best practices Message Inspector Message Interceptor Gateway pitfalls references Secure Message Router Web tier 2nd 3rd Authentication Enforcer Authorization Enforcer best practices Intercepting Validator Intercepting Web Agent references Secure Base Action Secure Logger Secure Pipe Secure Service Proxy Security principles, references for Security Provider patterns
Security provisioning
references
security patterns
best practices and pitfalls Password Synchronizer summary
Security realms
for smart cards J2EE 2nd JAAS Security requirements and goals authentication authorization confidentiality integrity non-repudiation Security Services
SecurityToken class 2nd Self-healing in Web services patterns Sensitive But Unclassified (SBU) information
Sensitive information
in case study Secure Logger pattern Secure Session Object pattern Web tier patterns Separation of responsibility
Sequence diagrams
identity provider agent strategy in security patterns 2nd Assertion Builder Audit Interceptor Authentication Enforcer Authorization Enforcer Container Managed Security Credential Tokenizer Dynamic Service Management Intercepting Validator 2nd Intercepting Web Agent Message Inspector Message Interceptor Gateway Obfuscated Transfer Object Password Synchronizer Policy Delegate Secure Base Action Secure Logger Secure Message Router Secure Pipe Secure Service Facade Secure Service Proxy Secure Session Object Single Sign-on Delegator JAAS Login Module strategy Sequence numbers for deletion detection 2nd Server Gated Cryptography (SGC) Server mutual authentication Server-side communication Server-side SSL example
Server-to-server connections
in case study in use cases Web tier patterns ServerHello messages
Servers
DMZ for biometrics for smart cards in provisioning 2nd in use cases
Service providers
for Web services in Liberty specifications 2nd Single Sign-on Delegator pattern 2nd Service provisioning business challenges identity management relationship in Security Services scope security patterns for 2nd
ServiceEndpoint class
Message Inspector pattern Message Interceptor Gateway pattern Secure Message Router pattern serviceLocator method ServiceLocator service ServiceManager class 2nd ServiceProvider class 2nd
Services
aggregation of as weakest links continuity and recovery in use cases in Web services strategies directory 2nd in case study catalog order fulfillment order management
Sessions
  MIDlet
  states
    Liberty Alliance
    SSL
  theft
    Single Sign-on Delegator pattern
    Web services
  timeouts in
  tracking
    cookies and URL rewriting
    in Web tier patterns
    weak identifiers
setActionList method
setAssertionType method 2nd
setAuthenticationMethod method 2nd
setComponentsConfig method
setConfigProperties method
setConfRef method
setData method
setLoginContext method
setMaxInactiveInterval method
setMessageDrivenContext method
setProtocolBinding method
AssertionContextImpl PasswordSyncRequest SSOContextImpl setRegistryFileName method setSecureTransferObject method setSecurityManager method setServiceName method setSessionInfo method 2nd setSSOTokenMap method setStatus method setTokenType method Setup IDS setupDefaultUserProfile method SGC (Server Gated Cryptography) SHA-1 cryptography for JCA message digests in Cryptographic Service Providers SHA1 encryption SHA256 encryption SHA512 encryption
sign method
Signature Signer
Sign-ons
EIS tier multiple 2nd
Simple Object Access Protocol. [See SOAP (Simple Object Access Protocol) and SOAP messages]
SimpleFormAction class Single Access Point patterns Single Logout Profile Single Loss Expectancy (SLE) Single service secure service proxy strategy Single sign-on (SSO) mechanisms 2nd 3rd Assertion Builder pattern biometrics Credential Tokenizer patterns cross-domain 2nd federated identity management in case study in use cases J2EE authentication JAAS authorization JGSS Liberty Alliance 2nd 3rd 4th Password Synchronizer pattern SAML in through portals
user account provisioning 2nd Web services Single Sign-on Delegator pattern 2nd 3rd consequences forces participants and responsibilities problem reality check related patterns Assertion Builder Password Synchronizer sample code security factors and risks solution strategies structure SLAs (service-level agreements) 2nd SLE (Single Loss Expectancy) Smart cards 2nd 3rd architecture and implementation model as Java key stores best practices components for physical access control in Java security in JCE in multi-factor authentication Java Card technology logical architecture operational model snoop method SOA (Service-Oriented Architecture) 2nd 3rd SOAP (Simple Object Access Protocol) and SOAP messages in security patterns 2nd Message Inspector 2nd 3rd Password Synchronizer Secure Message Router Secure Service Proxy SAML SPML WS-Policy WS-Security 2nd Socket factories SocketFactory class Solution in security patterns Assertion Builder Audit Interceptor Authentication Enforcer Authorization Enforcer Container Managed Security Credential Tokenizer
Dynamic Service Management Intercepting Validator Intercepting Web Agent Message Inspector Message Interceptor Gateway Obfuscated Transfer Object Password Synchronizer Policy Delegate Secure Base Action Secure Logger Secure Message Router Secure Pipe Secure Service Facade Secure Service Proxy Secure Session Object Single Sign-on Delegator SOP (Standard Operating Procedure) documents Source code scanners SourceBaseAction class SourceSite class 2nd SOX (Sarbanes-Oxley Act) identity protection in 2nd in security provisioning patterns
Spoofing
and client-side validations in Web services
SQL
embedded commands injection vulnerability SQLValidator SSL (Secure Socket Layer) accelerators 2nd for RMI socket factories in case study issues J2EE 2nd 3rd JSSE for secure socket connections HTTP over SSL role of vs. TLS Web services 2nd
Standards
Authentication Enforcer pattern smart cards Web services 2nd 3rd
start method
PasswordSyncLedger PasswordSyncRequest State maintenance in Liberty Alliance sessions Stateful firewalls Stateful transactions Stateless transactions Stateless/stateful Policy Delegate Static conformance requirements Static mappings Stolen smart cards Storage, insecure Strategies in security patterns 2nd Assertion Builder Audit Interceptor Authentication Enforcer Authorization Enforcer Container Managed Security Credential Tokenizer Dynamic Service Management Intercepting Validator Intercepting Web Agent Message Inspector Message Interceptor Gateway Obfuscated Transfer Object Password Synchronizer Policy Delegate Secure Base Action Secure Logger Secure Message Router Secure Pipe Secure Service Facade Secure Service Proxy Secure Session Object Single Sign-on Delegator
Stream ciphers Stress testing String encryption Strong cryptography Structural transformations Structure in security patterns Assertion Builder Audit Interceptor Authentication Enforcer Authorization Enforcer Container Managed Security Credential Tokenizer Dynamic Service Management Intercepting Validator Intercepting Web Agent Message Inspector Message Interceptor Gateway Obfuscated Transfer Object Password Synchronizer Policy Delegate Secure Base Action Secure Logger Secure Message Router Secure Pipe Secure Service Facade Secure Service Proxy Secure Session Object Single Sign-on Delegator
Subject class
Authentication Enforcer pattern Authorization Enforcer pattern JAAS authorization 2nd Subject Descriptor pattern
Subjects in JAAS
authorization 2nd Login Module strategy Sufficient flag Summaries of security factors SunJCE provider SunJSSE provider SunPKS11 provider Super encryption Support strategy in security provisioning patterns Symmetric ciphers
Symmetric keys
Obfuscated Transfer Object pattern Secure Logger pattern XML
Synchronization
identity management
user account provisioning 2nd System constraints System Entry Point System environment in use cases
T
Tags
libraries for RFID TakeAction class Tamper-proofing transformations
Targets
in case study
in security patterns
Audit Interceptor Intercepting Validator Obfuscated Transfer Object 2nd Password Synchronizer Secure Session Object SPML XACML Technology differentiators Technology elements in case study 2nd
Templates
biometrics Java System Access Manager
Testability
Message Interceptor Gateway pattern Secure Message Router pattern Testing black box in case study Web services patterns white box
Theft
identity in Web services session Third-party authentication and authorization Third-Party Communication pattern Threat modeling
Threat profiling
for security patterns in case study Threats to Web services Three-factor authentication Tier matrices
Tiers
in case study in J2EE in risk analysis in security patterns Time checking strategy
Timeouts
HTTP sessions URLConnections Web tier patterns
Timestamps
Web services patterns WS-Security
Tokens
biometrics
Transactions
in case study in use cases J2EE network topology Liberty Alliance
Transparency
Assertion Builder pattern Credential Tokenizer patterns Transport Layer Security (TLS) issues in J2EE 2nd 3rd JMS JSSE Web services WS-Security XML encryption TRIPLEDES encryption algorithm
Trust models
for security patterns in case study LDAP TrustAnchor class
Trusted certificates
for applets importing Trusted MIDlets TrustManager class TrustManagerFactory class Trusts in WS-Security TrustStore property TSIK services Tunneling, proxy Twofish algorithm Types, Java
U
UDDI (Universal Description, Discovery, and Integration)
and Secure Logger pattern attacks on for Web services 2nd UIDGenerator class Unclassified data Unified credential tokens 2nd Unified Process (UP) references for
User account provisioning 2nd
architecture
centralized model vs. decentralized components of logical differentiators for identity management identity provider infrastructure integration
User login
biometrics in case study use case
Usernames
JAAS authorization WS-Security UsernameToken class 2nd Users in J2EE UserStore class
V
validate method Validate service validateSecurityContext method validateSecurityToken method
Validation
certificate chains failures in case study
in security patterns
Business tier
Vendor-specific security
session management Web services APIs
Vendors
password management service provisioning Verification biometric certificate chains host name jar files signatures 2nd Web tier patterns verify method VeriSign CA 2nd Version numbers in XACML Vertical scalability in J2EE network topology viewResult method
Virtual machines
CVM JVM KVM VLANs Voice verification VPN access
W
Watermarking Java code Weakest links
Web
application load-balancing authentication redirection
servers
in SSL in use cases Web tier patterns
Web browser SSO Profile Web of trust models Web Services Definition Language (WSDL) 2nd 3rd 4th Web Services Interoperability Organization (WS-I) Web Services Policy Framework (WS-Policy) Web services policy language (WSPL) Web services tier 2nd 3rd architecture and building blocks communication styles core issues in case study 2nd 3rd 4th in J2EE in Liberty Alliance 2nd infrastructure Java-based providers message-layer security network-layer security operational model policies 2nd 3rd protocols stack references requirements SAML in security patterns 2nd 3rd best practices factor analysis Message Inspector Message Interceptor Gateway pitfalls references Secure Message Router
standards 2nd
Web tier
container managed security strategy in case study 2nd 3rd in J2EE 2nd authentication authorization context propagation from HTTP session tracking reality checks for security patterns 2nd 3rd Authentication Enforcer Authorization Enforcer best practices factor analysis Intercepting Validator Intercepting Web Agent references Secure Base Action Secure Logger Secure Pipe Secure Service Proxy
Web tier patterns. [See Web tier]
Web-based transactions. [See Secure Pipe pattern]
web.xml file
basic HTTP authentication entry client certificate based authentication entry deployment descriptor form based authentication entry
WebAgent class
in case study Single Sign-on Delegator pattern 2nd Wheel edge in Security Wheel Where in security Which in security White box testing in case study Secure UP 2nd 3rd Who in security Why in security Wireless Toolkit (WTK) 2nd Wireless Transport Layer Security (WTLS) Workflow Engine WorkflowRecipient class wrap method 2nd WriteAppletPolicy.policy file WriteFileApplet.html file WriteFileApplet.java file 2nd WS-I (Web Services Interoperability Organization)
WS-I Security profiles WS-Policy (Web Services Policy Framework) WS-Security definitions encryption 2nd in JWSDP motivation namespaces SAML and REL in signatures SOAP messages tokens WSDL (Web Services Definition Language) 2nd 3rd 4th WSPL (Web services policy language) WTK (Wireless Toolkit) 2nd WTLS (Wireless Transport Layer Security)
X
X-BULK X-KISS (XML key information services) locate service validate service X-KRSS (XML key registration service) recovery registration reissue revocation X.500/LDAP Profile
XML
  encryption
    algorithms
    anatomy
    arbitrary content
    element level
    example scenarios
    motivation
    super encryption
  firewalls
    for performance
    for Web Services
  in J2EE
  in Message Inspector pattern
  in Message Interceptor Gateway pattern
  signatures
    algorithms
    anatomy
    Assertion Builder pattern
    creating
    examples
    in SAML
    motivation
    verifying and validating
  XACML for
XML Common Biometric Format (XCBF)
XML Denial of Service (XML-DOS) attacks
XML Digital Signature algorithm
XML key information services (X-KISS)
  locate service
  validate service
XML key management system. [See XKMS (XML key management system)]
XML key registration service (X-KRSS) recovery registration reissue revocation XML messaging provider strategy XML Schemas 2nd for XACML in Message Inspector pattern in Message Interceptor Gateway pattern XML-aware security infrastructure Message Inspector pattern Message Interceptor Gateway pattern XPath transform algorithms XrML (Extensible Rights Markup Language) XSS (cross-site scripting) Xverify command
Y
Yarrow random number generator
Z
Zero knowledge testing Zimmermann, Phil