The SysAdmin Handbook - The Best of Simple Talk
ISBN: 978-1-906434-42-7
Red Gate Books, Newnham House, Cambridge Business Park, Cambridge CB2 0WZ, United Kingdom
Copyright Notice

Copyright 2010 by Red Gate Books. All rights reserved. Except as permitted under the Copyright Act of 1976, no part of this publication may be reproduced in any form, by any means, or by a database retrieval system without the prior written consent of The Central Publishing Group. This publication is intended for the audience of the purchaser of the book and cannot be reproduced for the use of any person other than the purchaser. Authors of the material contained in this book retain copyright to their respective works.
Disclaimer

Red Gate Books, Simple-Talk.com, and the authors of the articles contained in this book are not liable for any problems resulting from the use of techniques, source code, or compiled executables referenced in this book. Users should review all procedures carefully, test them first on a non-production server, and always have good backups before using them on a production server.
Table of Contents
Exchange

High Availability in Exchange 2007
    Mailbox Server Role
    Hub Transport Server Role
    Client Access Server Role
    Unified Messaging Server Role
    Edge Transport Server Role
    Summary
Message Hygiene in Exchange Server 2007
    Message Hygiene in Exchange Server 2007: A Defense-in-Depth Approach
    Connection Filtering Agent
    Sender Filtering Agent
    Recipient Filtering Agent
    SenderID Filter Agent
    Content Filtering Agent
    Summary
Using Exchange 2007 for Resource Booking
    Introduction
    Environment
    Creating or Migrating resource mailboxes
    Feature Investigation
    Summary
Controlling Email Messages using Exchange's Transport Rules
    Microsoft Exchange Server 2007: Controlling Email Messages using Exchange's Transport Rules
    Focus of Transport Server Roles
    Scope of Transport Server Roles
    What types of messages do Transport Rules work against?
    Anatomy of a Transport Rule
    Regulatory Compliance
    Ethical Walls
    Message Classifications
    Creating a new Transport Rule
    Backing up Transport Rules
    Summary
Exchange 2007 Mailbox Server Clustering
    Exchange 2007 Single Copy Cluster
    Single Point of Failure
    Replication Techniques
    Local Continuous Replication
    Clustered Continuous Replication
    Conclusion
Top Tips for Exchange Admins
Exchange Database Technologies
    What's on the disk?
    ESE – Extensible Storage Engine
    The mailbox database
    Database pages
    How does it fit together?
    Conclusion
Message Classifications in Exchange 2007
    Introduction
    Creating the XML File
    Locating the XML File
    Required Registry Modification
    Creating a Classified Message
    Creating Custom Classifications
    Manipulation With Transport Rules
    Summary
Deploying Exchange 2007 on Windows Server 2008
    Designing a solution
    Building a new mail server
    Finishing the Org
    Conclusion
Exchange Server Log File Replay
    Creation of the database
    Offline backups
    Offline Restore
    Online Maintenance
    Offline defragmentation
    Worst case scenario: Repairing your database
    Conclusion
Configuring Exchange Server 2007 to Support Information Rights Management
    The Rights Management Server
    New Rights Management Features
    Rights Management Functions
    Rights Management Services and Exchange Server 2007
    Some Caveats to Be Aware of
    Preparing to Use the Prelicensing Agent
    Using Microsoft Outlook with Rights Management Services
    Conclusion
Reporting on Mobile Device Activity Using Exchange 2007 ActiveSync Logs
Top Tips for SysAdmins No 1
Online Exchange Backups
    NTBackup
    Creating a full backup
    Creating an incremental backup
    -1018 Errors
    Online restore
    VSS or snapshot backups
    VSS Restore
    Replication and backups
    Third-party application vendors
    Conclusion
Optimizing Exchange Server 2007
    My Approach to Exchange 2007 Optimization
    The Microsoft Exchange Best Practices Analyzer
    Disk I/O
    Storage Groups
    Mailbox Distribution
    One Last Disk Related Consideration
    Backups
    The Windows Operating System
    Conclusion
Exchange: Recovery Storage Groups
    Backup and Restore
    Recovery Storage Group in Exchange Server 2007
    Recover Mailbox Data
    Conclusion
Exchange E-mail Addresses and the Outlook Address Cache
Upgrading to Exchange Server 2007
    Exchange Server 2003 environment
    Upgrading the Active Directory
    Installing the first Exchange Server 2007 server
    Certificates
    The Mailbox Server Role
Goodbye Exchange ExMerge, Hello Export-Mailbox
Determining MS Exchange Disk Performance
    Performance – The Big Three
    LogicalDisk vs. PhysicalDisk
    What do I measure?
    What Defines Good Performance?
Upgrading to Exchange Server 2007: Part 2
    Changing the Internet e-mail acceptance
    Changing the Internet facing Client Access
    Replicate Public Folders System Folders to Exchange Server 2007
    Move the Recipient Policies
    Moving Mailboxes from 2003 to 2007
    Removing the Exchange Server 2003 servers
Message Tracking in Exchange 2007
    Configuring Message Tracking
    Searching Message Tracking Logs
    Working With the Search Results
    Advanced Searches
Third Party High Availability and Disaster Recovery Tools for Microsoft Exchange Server
    Limitations of Native Exchange Server Recovery Tools
    Advantages and Disadvantages of Third Party Exchange Server DR tools
    About Standby Continuous Replication (SCR) and Clustered Continuous Replication (CCR)
    Guidelines for Selecting DR Tool
    Best Practices
    Summary
    Resources
Exchange Server 2010 – The First Public Beta Version
    So, what's new?
    Installing Exchange Server 2010
    Clients
    So, What Conclusion?
Emulating the Exchange 2003 RUS for Out-of-Band Mailbox Provisioning in Exchange 2007
Using Exchange 2007 Transport Rules to Protect the First Entry in the Address Book
Cluster Continuous Replication Network Design
    Introduction
    Network Teaming
    Configuration Procedure
    Choose Replication Host Names and IP Addresses
    Mixed Cluster Networks
    Strict Name Checking (Windows 2003)
    Name Resolution
    Enable NetBIOS (Windows 2008)
    Enable Continuous Replication Host Names
    Log Shipping Using Continuous Replication Host Names
    Database Seeding via Redundant Networks
    Summary
Building an Exchange Server 2007 environment
    Exchange Server 2003
    Exchange Server 2007
    Hardware used for Exchange Server 2007
    Performance
    Conclusion
An Introduction to Messaging Records Management
Installing Hyper-V and Beyond
    Installing Hyper-V
    Virtual Hard Disks
    Virtual Networks
    Virtual Machines
    Finishing the installation
    Using a dedicated or pass-through disk
    Conclusion
Restricting Outlook Client Versions in Exchange 2007
    Determining Which Client Versions are in use
239 Determining Which Client Versions to Block ............................................................................................................................................ 240 Implementing the Blocking Settings............................................................................................................................................................. 241 To implement a per-server restriction .......................................................................................................................................................... 241 To implement a per-mailbox restriction.......................................................................................................................................................242 Using Twitter and PowerShell to Find Technical Information and Join a Community........................................................................243 Getting Started ................................................................................................................................................................................................. 243 Twitter Clients .................................................................................................................................................................................................. 
245 PowerShell and Twitter ...................................................................................................................................................................................248 Conclusion .........................................................................................................................................................................................................253 Update: Exchange Server 2010 Release Candidate .....................................................................................................................................254 Changes and new features in Exchange Server 2010 .................................................................................................................................. 254
And how does it work? .................................................................................................................................................................................... 257 Conclusion ........................................................................................................................................................................................................ 258 Exchange backups on Windows Server 2008...............................................................................................................................................258 Windows Server Backup ................................................................................................................................................................................. 259 Backup schedules .............................................................................................................................................................................................262 Restore the Exchange Backup ........................................................................................................................................................................ 263 Conclusion ........................................................................................................................................................................................................267 Moving to Office Communications Server 2007 R2 Part 2.................................................................................................................... 
268 Back- to Front-End ...........................................................................................................................................................................................268 Communications Web Access ........................................................................................................................................................................269 The Director and Edge ....................................................................................................................................................................................270 Monitoring and Compliance .......................................................................................................................................................................... 271 Enterprise Voice ................................................................................................................................................................................................ 271 Conclusion ........................................................................................................................................................................................................ 271 Monitoring and Scheduling Exchange 2007 Database Online Maintenance .........................................................................................273 Event Log Entries for Online Defragmentation..........................................................................................................................................274 Performance Monitor Counters for Online Defragmentation .................................................................................................................278 Setting the Online Maintenance Window ................................................................................................................................................... 
283 Exchange 2010 High Availability .................................................................................................................................................................. 284 Database Availability Groups ......................................................................................................................................................................... 285 Mailbox Servers and Databases .....................................................................................................................................................................287 The Active Manager ........................................................................................................................................................................................288 Client Access Server Changes ........................................................................................................................................................................289 Summary .......................................................................................................................................................................................................... 290 Implementing Cluster Replication Part 1 .................................................................................................................................................291 Planning the Cluster ........................................................................................................................................................................................ 
291 Creating the Cluster.........................................................................................................................................................................................292 Some Additional Configuration Tasks ..........................................................................................................................................................294
Creating a Majority Node Set File Share Witness .......................................................................................................................................294 Conclusion ........................................................................................................................................................................................................ 295 The Active Directory Recycle Bin in Windows Server 2008 R2 ............................................................................................................... 296 Pre-Windows Server 2008 R2 .........................................................................................................................................................................296 Authoritative Restore .....................................................................................................................................................................................296 Tombstone Reanimation.................................................................................................................................................................................297 Active Directory Recycle Bin ..........................................................................................................................................................................297 Active Directory Recycle Bin PowerPack for PowerGUI ............................................................................................................................307 Summary............................................................................................................................................................................................................. 311 Using Group Policy to Restrict the use of PST Files ................................................................................................................................... 
312 Applying PST Group Policy for Outlook 2007 ............................................................................................................................................. 313 Introduction to Exchange Server 2010 ......................................................................................................................................................... 317 Under The Hood: What's changed? ...............................................................................................................................................................317 Getting Started ..................................................................................................................................................................................................318 What's been removed from Exchange Server 2010? ....................................................................................................................................318 What's new in Exchange Server 2010? ...........................................................................................................................................................319 Exchange Server 2010 and Active Directory ................................................................................................................................................ 324 Exchange Server coexistence .......................................................................................................................................................................... 326 Exchange Server 2010 Server roles ................................................................................................................................................................ 327 Summary............................................................................................................................................................................................................ 
334 The Ego and the System Administrator .......................................................................................................................................................334 Implementing Windows Server 2008 File System Quotas ........................................................................................................................336 Quota Example Home directories with a 5GB limit................................................................................................................................340 Quota Exceptions / Folder-Specific Quotas ................................................................................................................................................. 347 How Quotas Affect Clients ..............................................................................................................................................................................351 Viewing Quotas .................................................................................................................................................................................................353 Implementing Cluster Continuous Replication, Part 2 .............................................................................................................................354 Deploying the Failover Cluster Feature ........................................................................................................................................................ 354
The Failover Cluster Management Console .................................................................................................................................................355 Creating the Cluster..........................................................................................................................................................................................355 Configuring Networking for the Cluster .......................................................................................................................................................357 Configure the Node and File Share Majority Quorum .............................................................................................................................. 359 Choose the Node and File Share Majority option.......................................................................................................................................360 Installing Exchange Server..............................................................................................................................................................................360 Configuring the Active Node...........................................................................................................................................................................361 Installing the Passive Node .............................................................................................................................................................................364 Conclusion ........................................................................................................................................................................................................364 Active Directory Management with PowerShell in Windows Server 2008 R2 .......................................................................................365 Active Directory Scripting 
.............................................................................................................................................................................. 365 Active Directory Web Services .......................................................................................................................................................................366 Getting Started .................................................................................................................................................................................................366 Active Directory Provider ............................................................................................................................................................................... 367 Active Directory Cmdlets ................................................................................................................................................................................369 FSMO Roles ....................................................................................................................................................................................................... 374 Further Information .........................................................................................................................................................................................375 Summary.............................................................................................................................................................................................................375 Upgrade Exchange 2003 to Exchange 2010 .................................................................................................................................................. 
375 Exchange Server 2003 ...................................................................................................................................................................................... 376 Coexistence with Exchange Server 2010 ...................................................................................................................................................... 376 Mailbox Storage Design .................................................................................................................................................................................. 382 Installing the Mailbox Server role ................................................................................................................................................................. 383 Configuring the Exchange Server 2010 servers ...........................................................................................................................................384 Relocate the Mailbox Databases ....................................................................................................................................................................384 Unified Communications Certificate ............................................................................................................................................................384 OWA Configuration......................................................................................................................................................................................... 385 Public Folder Replication ................................................................................................................................................................................ 385
Summary Part I ................................................................................................................................................................................................. 387 Customizing the Outlook Address Book .....................................................................................................................................................388
General Articles
394
A SysAdmin's Guide to Change Management .................... 394
Overcoming the Status Quo .................... 394
Research: Mapping your Path .................... 395
Design and Scheduling .................... 395
Development .................... 396
Training and Support .................... 396
Project Evaluation .................... 397
Summary .................... 397
A SysAdmin's Guide to Users .................... 398
User Trust .................... 398
User Support .................... 399
Administrative Policies .................... 400
The Bottom Line .................... 400
Change Management: What It Is and Why You Need It .................... 401
What is Change Management? .................... 401
Why Change Management? .................... 401
Creating a change control board .................... 402
The Framework .................... 405
Finalizing the documentation .................... 406
Sample Change Management Documentation .................... 406
Manage Stress Before it Kills You .................... 407
The warning shot .................... 407
All in the mind? .................... 407
Stress management .................... 408
Hiring System Administrators .................... 409
Planning .................... 409
Execution .................... 410
An Experience in Hiring .................... 411
Evaluation .................... 412
Increase Your Value as a Professional in the Technical Industry .................... 412
Join Several Professional Associations .................... 413
Become an ACTIVE Member of those Associations .................... 413
Join an On-Line Professional Community .................... 413
Give Talks .................... 414
Write .................... 414
The Art of Dealing with People .................... 415
Lesson #1: Get a Mirror .................... 416
Lesson #2: Take them to lunch .................... 417
Lesson #3: It's not your Thesis Defense; stop proving how smart you are .................... 417
Virtualization
419
Virtual Exchange Servers................................................................................................................................................................................419 Windows Server 2008 Hyper-V...................................................................................................................................................................... 419 Virtual Exchange Servers................................................................................................................................................................................. 421 Exchange database and Log Files ...................................................................................................................................................................424 Backup and Restore ......................................................................................................................................................................................... 425 Snapshots ...........................................................................................................................................................................................................426 Exchange Server 2003 ......................................................................................................................................................................................426 Conclusion ........................................................................................................................................................................................................426 Virtualizing Exchange: points for discussion ............................................................................................................................................. 427
Introduction ......................................................................................................................................427 Is it supported?..................................................................................................................................427 Virtualizing Exchange in practice ..................................................................................................429 Benefits of Exchange Virtualization ..............................................................................................429 Problems with Exchange Virtualization .......................................................................................430 Summary............................................................................................................................................430 Build Your Own Virtualized Test Lab .......................................................................................... 431 The Big Picture ..................................................................................................................................431 The Journey Begins ..........................................................................................................................433
Messaging Service ............................................................................................................................434 The Client Side ................................................................................................................................. 435 Extend Your Reach ...........................................................................................................................436 The Final Pieces ............................................................................................................................... 440 Wrapping Up .....................................................................................................................................442 Parting Words ...................................................................................................................................443 A Beginner's Guide to Virtualizing Exchange Server Part 1 ................................................... 443 Microsoft's Support Policy ............................................................................................................. 444 Server Roles ......................................................................................................................................444
Resource Consumption .................................................................................................................. 446 The Anatomy of a Virtual Machine .............................................................................................. 448 Conclusion ........................................................................................................................................450 Windows Server Virtualisation: Hyper-V, an Introduction ..................................................... 450 Hyper-V Introduction ......................................................................................................................450 Windows Architecture......................................................................................................................451 Hyper-V Architecture ...................................................................................................................... 452 Micro-kernelized hypervisor ..........................................................................................................453
Security ..............................................................................................................................................454 Integration Components ................................................................................................................454
Server Virtualization Validation Program ....................................................................................456 Conclusion ........................................................................................................................................456 A Beginner's Guide to Virtualizing Exchange Server Part 2 ..................................................457 So Why the Discrepancy? ................................................................................................................ 457 Monitoring CPU Utilization........................................................................................................... 457 Planning for Exchange.....................................................................................................................458 The Microsoft Assessment and Planning Toolkit .......................................................................459 Gathering Performance Data .........................................................................................................459 Analyzing the Performance Data .................................................................................................. 460 Viewing the Results .........................................................................................................................460
Conclusion ........................................................................................................................................ 461 Increasing the Availability of Virtualized Applications and Services .......................................461 Terms .................................................................................................................................................. 461 Windows Clustering ........................................................................................................................462 Virtual Machines and High Availability .......................................................................................462 How does Hyper-V Virtual Machine Resource DLL help in the failover process? ................467 Conclusion ....................................................................................................................................... 470 Microsoft Hyper-V Networking and Configuration - Part 1 .................................................... 470 Terms Used Throughout This Article: ..........................................................................................471
Virtual Networking Overview ........................................................................................................472 Hyper-V Virtual Network Switch Overview ................................................................................472 Microsoft Hyper-V Virtual Network Switch Types..................................................................... 473 Hyper-V Virtual Networking Maximum Configuration ............................................................474 What happens when you create a Virtual Network Switch? ..................................................... 475 Conclusion ........................................................................................................................................478
Unified Messaging
479
An Introduction to Unified Messaging........................................................................................ 479 What is Unified Messaging? ............................................................................................................479 Misconceptions about Unified Messaging ...................................................................................483 The Benefits of Unified Messaging ................................................................................................483 The Future of Unified Messaging ................................................................................................. 484 Conclusion ........................................................................................................................................485 Moving to Office Communications Server 2007 R2 ................................................................. 485 Migration Strategies........................................................................................................................ 486 Pre-Migration Tasks .........................................................................................................................487 Infrastructure Readiness ................................................................................................................489
Let the Show Begin ......................................................................................................................... 489 Administration.................................................................................................................................. 491 Conclusion ........................................................................................................................................492 References..........................................................................................................................................492
PowerShell
493
Managing Exchange 2007 Mailbox Quotas with Windows PowerShell ................................................................................................. 493 Exchange Mailbox Quota Types ....................................................................................................................................................................493 Customizing Quota Messages ........................................................................................................................................................................493 Retrieving Mailbox Sizes and Quotas ...........................................................................................................................................................495 Setting Mailbox Quotas ................................................................................................................................................................................. 496 Configuring the Mailbox Information Cache Refresh Interval ............................................................................................................... 498 So You Thought PowerShell Was Only For Exchange 2007...................................................................................................................... 500 Introduction ..................................................................................................................................................................................................... 500 What Do I Need To Get Started .................................................................................................................................................................... 500
Querying Event Logs Using WMI.................................................................................................................................................................. 501 Exchange 2003 WMI classes ...........................................................................................................................................................................504 Querying Active Directory to Determine Exchange Information in a Network ...................................................................................508 Exchange 2003 Powerpack for PowerGUI .................................................................................................................................................... 510 Conclusion .........................................................................................................................................................................................................513
Ben Lye
Ben Lye is a senior systems administrator at a multinational software company. He has over ten years' experience of supporting and administering Windows and Exchange, and has been MCSE and MCP certified since 1999. Ben is passionate about automating and streamlining routine tasks, and enjoys creating and using tools which make day-to-day administration easier.
Michael B. Smith
Michael B. Smith is a well-known author and consultant in the Exchange Server and Active Directory arenas. His most recent book is Monitoring Exchange Server 2007 with System Center Operations Manager 2007, published in February 2009. You can contact Michael via e-mail at [email protected] and read his blog at http://TheEssentialExchange.com.
Jonathan Medd
Jonathan Medd is a Senior Technical Consultant for a large local government organisation in the UK. He has been working with Windows Infrastructure products since 1997, in the last few years particularly around Active Directory and Exchange and, most recently, virtualisation with VMware products. He has held the MCSE certification since 1998 and the VMware VCP certification since 2008. In 2007, he discovered Windows PowerShell, and he spends a lot of time encouraging the IT pros he meets to use PowerShell by talking with them, by giving presentations to user groups, or via posts on his blog at http://jonathanmedd.net. He also co-hosts a regular PowerShell podcast covering how to learn PowerShell and what's going on in the PowerShell world; you can find it at http://get-scripting.blogspot.com. You can follow him on Twitter at http://twitter.com/jonathanmedd.
Brien Posey
Brien Posey is a freelance technical writer, and a five-time Microsoft MVP. Over the last thirteen years, Brien has published literally thousands of technical articles and whitepapers, and written or contributed to dozens of books. Prior to becoming a freelance writer, Brien served as CIO for a national chain of hospitals and healthcare facilities. He has also served as a network administrator for the Department of Defense at Fort Knox, and for some of the nation's largest insurance companies.
xx
Matt Simmons
Matt Simmons is an IT Administrator with several years' experience on small and medium networks. He is currently employed in the financial services industry, and has previously worked in logistics and Internet services. Matt maintains a systems administration blog, and spends his spare time reading and learning new things. He can be reached at [email protected] or via his blog at http://www.standalone-sysadmin.com.
Neil Hobson
Neil is a Principal Consultant with Silversands, a UK-based Microsoft Gold Partner and is responsible for the design, implementation, and support of Microsoft infrastructure systems, most notably Microsoft Exchange systems. He has been in the IT industry since 1987 and has worked with Exchange since V4.0 days. He has been an Exchange MVP since 2003 and spends some of his spare time helping others in various Exchange mailing lists, and the public Exchange newsgroups; he also contributes to the MSExchange Blog.
Desmond Lee
Desmond Lee specializes in end-to-end enterprise infrastructure solutions built around proven business processes and people integration across various industries. He is a popular speaker at major international events, and contributes frequently to several highly rated publications and public forums/newsgroups. Desmond is a Microsoft Most Valuable Professional (Communications Server), Microsoft Certified Trainer and founder of the Swiss IT Pro User Group, an independent, non-profit organization for IT Pros by IT Pros championing Microsoft technologies. You can follow his IT adventures at www.leedesmond.com.
Hilal Lone
Hilal works as a Senior Integration Engineer (SME on Email Security) with Continuous Computing (www.ccpu.com). Continuous Computing specializes in deploying uniquely architected solutions comprised of telecom platforms and Trillium software. Currently, Continuous Computing caters to the LTE, DPI, Femtocell, and Professional Services market. Hilal has been working in IT messaging, Security and Networking for more than six years; he specializes in end-to-end messaging and is deeply involved in Information Security and Networking. His qualifications include B.Sc., M.Sc. (Information Systems), MCSE, MCITP (Enterprise Messaging), CCNA, CCSP, and CEH. Apart from Exchange Server, he specializes in multi-platform firewalls and Cisco Networking equipment.
Nicolas Blank
Nicolas Blank is an Exchange MVP and consultant at Symbiotech (http://www.symbiotech.co.za), a consultancy that specializes in the Microsoft Infrastructure and related tools space with a strong focus on messaging, collaboration, migration, and security solutions. Nicolas currently builds solutions based on Active Directory, Exchange, Office Communication Server and a variety of third-party vendors. Nicolas consults, speaks, writes, and delivers seminars on various topics and blogs at http://blankmanblog.spaces.live.com.
William Lefkovics
William Lefkovics, B.Sc., MCSE, is the Technical Director at Mojave Media Group, LLC in Las Vegas, NV. He is the co-author of Microsoft Exchange Server 2007: The Complete Reference and contributes a monthly column on Outlook at Windows IT Pro Magazine.
xxi
Nathan Winters
Nathan Winters is a Unified Communications Consultant for Dimension Data, a Microsoft Gold Partner on five continents whose clients include over 70% of the Global Fortune 100. He has been working in IT for four years, and specializes in Exchange, Active Directory, and Virtualization. Recent work has included an Exchange 2007 design for several clients and an OCS 2007 voice design including Enterprise Voice. Midway through 2006, Nathan founded the Microsoft Messaging and Mobility User Group UK, which holds regular meetings in the UK to discuss topics related to Exchange. In April 2007, he was awarded an MVP (Exchange Server) for his work with MMMUG and regular contributions to the Mark Minasi Forum. He is a regular contributor to the MSExchange.org website, and his other articles have been published by Penton Media (Exchange and Outlook Administrator newsletter), Microsoft (TechNet Industry Insiders) and on the MMMUG website. For more of his articles, see the links below: http://www.mmmug.co.uk/files/default.aspx http://www.msexchange.org/nathan_winters http://www.windowsitpro.com/Authors/AuthorID/1651/1651.html You can contact Nathan at [email protected] or through his blog at http://www.mmmug.co.uk/blogs/nweb.
Nirmal Sharma
Nirmal is an MCSEx3, MCITP, and a four-time recipient of the Microsoft MVP award in Directory Services. He specializes in Directory Services, Microsoft Clustering, Hyper-V, SQL, and Exchange. He has been involved with Microsoft technologies since 1994, and has followed the progression of Microsoft operating systems and software throughout. In his spare time, he likes to help others and share some of his knowledge by writing tips and articles. He can be reached at [email protected].
Dr Masha Petrova
Dr Masha Petrova is the founder and CEO of MVP Modeling Solutions, a company dedicated to the improvement of engineering R&D processes, and a creator of www.SuccessfulUnemploymentToolkit.com. She received her Ph.D. from the University of California at San Diego, where she conducted research in chemical kinetics. She has created and taught courses on engineering computer modeling and has trained professionals all over the globe. She is a co-founder of the Women in Combustion group, a featured speaker for the ACS Speaker Service and an official instructor for the ACS courses. She writes a monthly column for the Product Design and Development online magazine. She is also a member of the Executive Board of Directors for the Franklin Foundation, an organization dedicated to promoting education and innovation in America's technology and science fields.
Bill Holmberg
Bill Holmberg has been an IT professional since shortly after building Altair, Heathkit, and Apple computers from kits in the late 70s. Since then he has helped companies build IT infrastructures, help desks, WANs, and even an onsite support group rated "World Class" by Gartner. As a consultant, for over 12 years, to AlphaBetas (which he helped found, and for which he is the CIO) he has helped bring over 100 games and software titles to market, whether as the Project Manager, QA/QC analyst, or even as voice-over talent. Bill also helps present classes funded by the Department of Justice for Minneapolis' InfraGard (an FBI volunteer group) on Security and Forensics Education (SAFE) and has dealt with many network intrusions for private clients. He is currently re-engineering the 8-location WAN to best practices for a food processor and recycler in Minnesota, and implementing their DR plan. In his "spare" time, he has resurrected his Grandfather's namesake family business, Hart Skis.
xxii
Introduction
Over the past two years, Simple-Talk has published articles on a variety of SysAdmin topics, from Exchange to Virtualization, and covering everything from PowerShell to Unified Messaging. We have brought the best of these articles together to form The SysAdmin Handbook. With over fifty articles packed into this book, it will be an essential reference for any systems administrator, whether you have years of experience or are just starting out.

Simple-Talk is an online technical journal and community hub for working SQL Server and .NET developers and administrators, as well as enthusiasts and those taking their first steps with these Microsoft technologies. Simple-Talk provides in-depth technical articles, opinions, and commentary to more than 388,000 subscribers. The Simple-Talk SysAdmin newsletter is a fortnightly publication; subscribing to the newsletter means you will get the latest straightforward and practical technical articles straight to your inbox. If you do not already receive the newsletter, subscribe for free here.

We would like to thank all the authors who have contributed SysAdmin articles to Simple-Talk over the past two years, as well as those who have taken the time to read the articles and participate in the community. Because of this, we have been able to develop a successful SysAdmin community on Simple-Talk, filled with high-quality and useful technical articles.
23
Exchange
High Availability in Exchange 2007
10 June 2008
by Neil Hobson
Neil Hobson writes about the ways that MS Exchange 2007 can ensure that your organisation's messaging remains available. He looks at the ways that the Mailbox server role can be made more available using features such as Single Copy Clusters, Local Continuous Replication, and Clustered Continuous Replication. He also discusses ways of improving the resilience of the other server roles.
If you are thinking of deploying Exchange 2007, the chances are that you have a requirement to implement the product with high availability in mind. More and more organizations are placing greater importance on the messaging infrastructure and, as a result, high availability and site resilience have a much higher emphasis than they did several years ago.

In this article I'm going to cover the high availability options within Exchange 2007, giving you an outline of each one and the options you have for implementing the product with high availability in mind. I'm going to assume that you've already made the decision to implement Exchange 2007 in a highly available manner, so I won't be covering the discussion points around whether high availability is appropriate for you or your organization; that will make a useful discussion topic for a future article. Another subject that would also make a good discussion topic for a future article is the site resilience option available for Exchange 2007, so I won't be covering that at this time. Finally, bear in mind that I'm going to be covering the native options available in Exchange 2007, and therefore no third-party high availability solutions will be detailed within this article.

As you likely know, Exchange 2007 comprises five different server roles, and each one has high availability options, some of which differ quite significantly from others. We'll take each of the five roles in turn and detail the options you have for making these roles highly available.
24
Figure 1: Single Copy Clusters.

In Figure 1, you can see three nodes within the SCC environment; two of these are currently active nodes (i.e. they are actively servicing users), while the third is passive. Exchange 2007 clusters only support the active/passive model: it's possible to have up to eight nodes configured, as long as at least one is passive. The passive node is available to take ownership of the data currently belonging to one of the active nodes, if and when an issue arises with one of those active nodes.

With up to eight nodes available, some interesting combinations can be designed. For example, you might consider a six-active and two-passive node system, or perhaps a four-active and four-passive node system. The latter option actually falls within the Microsoft best practice recommendations, since Microsoft recommends that you have at least one passive node for every active node that you implement.

You might consider a configuration such as a two-active and two-passive node SCC environment to be wasteful in terms of hardware resources, since two nodes aren't actually offering a service to the users. However, you have to consider the reason that you may have implemented such a design: high availability, and therefore the preservation of your messaging service in the event of a node failure. For example, consider the case in Figure 1, where you have two active nodes and a single passive node. If Node A fails, the service will fail over to Node C and users are largely unaffected. However, you are now in the situation where you no longer have a passive node available until you resolve the problem with Node A. Therefore, if Node B were to fail before Node A is brought back into service, there is nowhere for the services on Node B to fail over to, and thus users will be affected. This is why having at least one passive node per active node is good practice.
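In day-to-day operation, the state of a clustered mailbox server can be checked, and a scheduled handover to the passive node performed, from the Exchange Management Shell. The sketch below is illustrative only: it assumes a clustered mailbox server named EXCMS1 and a passive node named NODE-C, both hypothetical names.

```powershell
# Check which node currently owns the clustered mailbox server,
# and the state of its cluster resources
Get-ClusteredMailboxServerStatus -Identity EXCMS1

# Perform a scheduled handover to the passive node
# (for example, before patching the currently active node)
Move-ClusteredMailboxServer -Identity EXCMS1 `
    -TargetMachine NODE-C `
    -MoveComment "Planned handover for maintenance"
```

A scheduled handover like this dismounts the databases cleanly before moving the clustered mailbox server, which is preferable to an unplanned failover whenever maintenance can be anticipated.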
You can clearly see that an SCC environment is excellent at providing high availability in cases where there is a server failure. Although SCC environments can and do provide good levels of uptime, the main drawback with SCC is the fact that there is only a single copy of the databases and transaction logs.
25
Figure 2: Local Continuous Replication.

It makes the most sense to deploy the second set of disks via a separate disk array controller, to cover for the failure of the primary disk array controller. As I've already mentioned, LCR is not a clustered solution and therefore does not protect you should the entire server fail. However, it is a good solution to protect against mailbox data corruption, and it additionally offers the ability to offload Volume Shadow Copy Service (VSS) backups from the active copy of the databases. In this scenario, you configure the VSS backup to be performed against the passive storage group. This is better for disk performance on the active storage group, as well as allowing online maintenance to be unaffected by the backup process.
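LCR is enabled per storage group from the Exchange Management Shell. A minimal sketch, assuming a storage group called "First Storage Group" on a server named EX01, with the passive copy placed on a second set of disks; the server name and paths are illustrative, not prescriptive:

```powershell
# Tell Exchange where the passive copy of the database file will live
Enable-DatabaseCopy -Identity "EX01\First Storage Group\Mailbox Database" `
    -CopyEdbFilePath "D:\LCR\Mailbox Database.edb"

# Enable continuous replication for the storage group, placing the
# copied log and system files on the second disk set
Enable-StorageGroupCopy -Identity "EX01\First Storage Group" `
    -CopyLogFolderPath "D:\LCR\Logs" `
    -CopySystemFolderPath "D:\LCR\System"
```

Placing the copy paths on spindles behind a separate array controller, as described above, is what gives LCR its value against disk and controller failure.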
26
Figure 3: Clustered Continuous Replication.

As I stated earlier, a CCR environment consists of two nodes; it cannot be scaled out to eight nodes as with the SCC environment. However, the fact that CCR implements two separate copies of the databases and transaction logs is usually attractive to many organizations. Also, CCR can be implemented using Direct Attached Storage (DAS) rather than SAN storage. This, too, can appear attractive, although it is important to note that this may not be a straightforward decision to make, as there are other factors to consider. Additionally, VSS backups can be taken from the passive node, which is obviously good for the performance of the active node, and which has the added benefit of allowing online database maintenance to run outside of the backup window.

Some organizations may choose to implement the active CCR node in one data center with the passive CCR node in a different data center. This is technically fine, although it should be noted that if you are deploying CCR on the Windows 2003 operating system, both CCR nodes must be in the same subnet. You will therefore need to check with the network team in your organization as to how the network is configured across the two data centers. This restriction does not apply to Windows 2008, since clusters can be implemented with nodes from different subnets, although the nodes must be in the same Active Directory site. With nodes in different data centers, the CCR environment can be considered a stretched cluster. You will also need to factor in considerations such as the available network bandwidth between the two data centers, as well as the latency of the network connection. Such factors can affect the performance of the system, especially when you consider that the various server roles communicate via Remote Procedure Calls (RPCs) and thus require sufficient bandwidth.
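The health of the replication between the two CCR nodes, which is particularly worth watching when the nodes are stretched across data centers on a constrained link, can be checked with the Get-StorageGroupCopyStatus cmdlet. A sketch, assuming an Exchange 2007 SP1 clustered mailbox server with the hypothetical name EXCMS1:

```powershell
# Show the copy and replay queue lengths for each storage group copy;
# steadily growing queues can indicate insufficient bandwidth between the nodes
Get-StorageGroupCopyStatus -Server EXCMS1 |
    Format-Table Identity, SummaryCopyStatus, CopyQueueLength, ReplayQueueLength
```

The copy queue length reflects logs not yet shipped to the passive node, while the replay queue length reflects logs shipped but not yet replayed; both should normally stay close to zero.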
Consequently, many organizations choose to implement a CCR environment within their production data center and Standby Continuous Replication (SCR) to a disaster recovery data center. CCR then protects from server and data failure within the same data center, which is more likely than the complete loss or failure of a single data center. Having said that, you may still need to plan for that eventuality which is where SCR comes in. SCR is a site resilience solution available in Exchange 2007 Service Pack 1 that currently falls outside of the topics that I want to cover in this article.
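For reference, SCR is enabled per storage group from the Exchange Management Shell. A minimal sketch, assuming Exchange 2007 SP1, a source server named MBX1 and a standby server named DR-MBX (both names are placeholders for your own environment):

```powershell
# Enable SCR for a storage group, replicating to a standby server
# in the disaster recovery data center (requires SP1 or later).
Enable-StorageGroupCopy -Identity "MBX1\First Storage Group" -StandbyMachine DR-MBX

# Check the health of the replicated copy on the standby target.
Get-StorageGroupCopyStatus -Identity "MBX1\First Storage Group" -StandbyMachine DR-MBX
```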
Figure 4: Windows NLB. As I mentioned in the previous section on Hub Transport servers, some organizations choose to combine the Client Access Server and Hub Transport server roles on the same server. This configuration wasn't officially supported by Microsoft in the Release To Manufacturing (RTM) version of Exchange 2007, but is supported now that Exchange 2007 Service Pack 1 has been released. However, don't forget the point made earlier in this article about load balancing client connections, and the need to exclude the Hub Transport server's SMTP communications from the load-balancing configuration. Equally, it should be noted that you don't have to combine the Hub Transport and Client Access Server roles on the same server; in fact, in larger deployments the two roles are separated onto different servers. This means that a deployment consisting of a single CCR environment can actually see six servers deployed: two for the CCR environment, two for the Hub Transport server role, and two for the Client Access Server role. This can be further added to by the deployment of the Unified Messaging and Edge Transport server roles, which will be covered in the next two sections.
Summary
In this article we've looked at the high availability options for each of the Exchange 2007 server roles. As you've seen, the most interesting options are available for the Mailbox server role, but it's equally vital to implement high availability for the other server roles if you've chosen a clustering solution for your Mailbox servers.
by William Lefkovics
Around four out of every five email messages are spam. Now that this nuisance threatens to engulf what has become the normal route of business communication worldwide, mail server vendors are taking more active steps to confront the problem. William Lefkovics explains how Microsoft Exchange 2007 is meeting the challenge of the rising tide of spam.
Spam Botnets
Most spam today is distributed by networks of unsuspecting workstations infected with malware. These bots are small applications capable of, among other things, forming and sending SMTP messages en masse by remote command. Spammers typically pay for the use of botnets (short for "robot networks") rather than maintain their own. In the United States, the Federal Bureau of Investigation (FBI) has been working through "Operation Bot Roast," which identified 1 million infected hosts in the US alone and has resulted in the arrest and conviction (or guilty pleas) of several botnet operators. (http://www.us-cert.gov/press_room/botroast_200711.html)
Microsoft, in response to this ongoing battle, improved upon their set of native tools in Exchange Server 2003 Service Pack 2. They provided a more complete system for administrators to eliminate known spam early in the SMTP conversation and give them more control in assessing spam levels on more questionable messages. Exchange Server 2007 expands and improves on this defense-in-depth approach giving administrators effective sequential tools without additional software cost. There is no silver bullet that will prevent spam from reaching inboxes, but there are several steps Exchange administrators can take to reduce inbound spam significantly. With Exchange Server 2007, Microsoft introduced server roles. There are five roles in total - Mailbox, Client Access, Hub Transport, Unified Messaging, and Edge Transport. The Edge Transport role is intended to reside in a perimeter network and perform the majority of the inbound message hygiene functionality for the Exchange organization. The Edge Transport server is typically the gateway SMTP server for the enterprise. Assuming so, message hygiene begins here.
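The agents active on a transport server can be listed from the Exchange Management Shell, and for organizations without an Edge server the same anti-spam agents can be installed on a Hub Transport server using a script that ships with Exchange. A sketch, assuming the default installation path:

```powershell
# List the transport agents currently registered on this server.
Get-TransportAgent

# On a Hub Transport server that receives Internet mail directly
# (no Edge server in front of it), the anti-spam agents can be
# added with the supplied script from the Scripts folder:
& "C:\Program Files\Microsoft\Exchange Server\Scripts\install-AntispamAgents.ps1"
```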
Figure 1. Figure 1 shows the inbound flow of messages passing through the various anti-spam agent layers. Each layer implements a different test in the effort to validate messages. If 80% of messages arriving at your gateway are spam, then it certainly makes sense to drop those e-mails as early as possible in the SMTP conversation. The anti-spam tools available on an Exchange 2007 Edge server are shown in Figure 2.
Figure 2.
Figure 3. Figure 3 shows the IP Block List Providers Properties window with two DNSBLs entered.
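The same configuration can be scripted from the EMS. A sketch, assuming the Spamhaus Zen zone as the DNSBL (substitute whichever providers your organization has agreed terms with):

```powershell
# Register a DNS block list provider on the Edge Transport server.
Add-IPBlockListProvider -Name "Spamhaus Zen" -LookupDomain "zen.spamhaus.org" -Enabled $true

# Verify the configured providers.
Get-IPBlockListProvider
```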
Figure 4. Figure 4 shows the options in the EMC to assign actions based on specific minimum SCL ratings; it also shows the current settings on my Edge server. You will note in Figure 4 that messages with an SCL value of 7 will be quarantined to a separate mailbox. This means that messages with SCL values of 6 or less will have the rating appended to the header and be passed through to the Hub Transport and on to the recipient. These settings are not required, and different values may be more appropriate in different companies; administrators should test different threshold values to see what is most effective for their enterprise. There is also a junk mail threshold for SCL. This is configured through the EMS only, and simply determines the SCL value at which messages are moved to the Junk Mail folder of the recipient's mailbox. The final thing I will mention is Sender Reputation, a new addition to the message hygiene efforts in Exchange Server. Exchange analyzes characteristics of the SMTP sessions of unknown senders; after a sender has sent 20 messages, Exchange maintains a Sender Reputation Level (SRL), analogous to the SCL used for content filtering. Sender Reputation, as implemented in Exchange 2007, is a Microsoft creation in its infancy that presents potentially compelling options for the future of anti-spam technologies.
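The quarantine threshold shown in Figure 4, and the EMS-only junk mail threshold just mentioned, can be set as follows. A sketch, assuming a quarantine mailbox of [email protected] (a placeholder address) and illustrative threshold values:

```powershell
# Quarantine messages rated SCL 7 or above to a dedicated mailbox.
Set-ContentFilterConfig -SCLQuarantineEnabled $true `
    -SCLQuarantineThreshold 7 -QuarantineMailbox "[email protected]"

# Set the organization-wide SCL junk threshold; messages rated above
# this value are moved to the recipient's Junk Mail folder.
Set-OrganizationConfig -SCLJunkThreshold 4
```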
Summary
Spam is a moving target. Currently there is no single tool to deploy in the enterprise that will solve this ongoing problem easily. With Exchange 2007, Microsoft provides, out of the box, a multi-faceted defense against unsolicited commercial e-mail (UCE). The key to effective spam filtering using the native Exchange Server 2007 tools is to identify unwanted messages as early as possible in the message flow. Used together, the tools provide a competitive anti-spam solution at no additional software cost.
by Nathan Winters
The process of booking various resources to go with a meeting room just got a whole lot easier with Exchange 2007. Nathan Winters explains Exchange's resource-scheduling functionality and offers practical advice on how you might go about configuring Exchange to allow users who are scheduling meetings to specify that such things as video conferencing, projection equipment, and refreshments are required for the meeting.
Introduction
Although previous versions of Exchange were regularly used to host mailboxes that represented resources, Exchange 2007 was the first version in which these mailboxes were actually different from normal mailboxes. This means that there is now no need to rely on add-in software such as the "Auto Accept Agent." It also means that the design process has been rather more thought through, ensuring a useful solution at both the server and client end. In Exchange 2007, it is possible to create resource mailboxes that represent either Rooms or Equipment. These exist as disabled accounts in Active Directory and can be managed in various ways. Exchange 2007 also contains the type of "Auto Accept Agent" functionality that was missing from Exchange 2003. Having a mailbox type of "Resource" is useful because it keeps administration separate: it allows these mailboxes to have special functionality while remaining secure as disabled accounts. It is also helpful to be able to distinguish between rooms and equipment, as it is easier for users and gives more flexibility when adding custom resource properties, as we shall see later. With the Availability Service it is now possible to get real-time information about resource availability without waiting for public folder replication to occur (so long as Outlook 2007 is being used). To make best use of this solution you must ensure that Outlook 2007 is used by those who will be interacting with the resources and managing their use. This is important because much of the user interface, such as the Scheduling Assistant, is held in Outlook 2007. I would, however, be careful about rolling out Outlook 2007 to the entire user base before thorough testing is carried out. Unless you are careful, you will have dissatisfied users, because Outlook 2007 does not always perform as expected, especially with third-party add-ons.
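Room and equipment mailboxes are created with the -Room and -Equipment switches of New-Mailbox; no password is needed because the associated AD account is created disabled. A minimal sketch (the name, UPN and database are placeholders):

```powershell
# Create a room mailbox; the associated AD account is created disabled.
New-Mailbox -Name "Boardroom" -Room -UserPrincipalName "[email protected]" `
    -Database "First Storage Group\Mailbox Database"

# Turn on the built-in auto-accept behavior for the room.
Set-MailboxCalendarSettings -Identity "Boardroom" -AutomateProcessing AutoAccept
```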
Environment
My test environment consists of servers in two sites. Each site has a Mailbox server and a combined Hub Transport and Client Access server. Outlook 2007 and OWA 2007 are used as clients. Various users have been set up in each site: normal users, admin users (secretaries), resources (projectors), and rooms. I configured such a test environment because it is useful to be able to show different sites with different naming conventions. Equally, I was interested to see how time zones are handled, and discovered that, in my view, this could be done better. For example, as I'll mention later, Outlook is not as flexible as I would like in helping people arrange meetings across time zones that require a room in each zone. You could easily test the vast majority of this functionality on a single Exchange server with a single Outlook client.
Feature Investigation
I have now discussed how to create resource mailboxes and how to configure their basic behavior of automatically accepting requests. As you saw, the initial setup is really fairly simple; it is just important to remember that, when migrating from Exchange 2003, you must convert the mailbox to the new resource type. Now, let's move on and look at some of the key resource-centric features that Exchange 2007 offers.
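That conversion is a one-liner in the EMS. A sketch, assuming a legacy mailbox named Boardroom that was migrated from Exchange 2003:

```powershell
# Convert a legacy (Exchange 2003-style) mailbox into a room
# resource mailbox; use -Type Equipment for equipment resources.
Set-Mailbox -Identity "Boardroom" -Type Room
```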
Figure 2 The Resource Settings page in OWA options. Once you have reached the Resource Settings section of the mailbox options in OWA, you can see the policies that are available. There are three policies: BookInPolicy, RequestInPolicy, and RequestOutOfPolicy. BookInPolicy is a list of users who can schedule the resource automatically; by default it is set to Everyone. RequestInPolicy is a list of users who can request to schedule the resource, with the request authorized by the resource administrator. RequestOutOfPolicy is a list of users who can submit a request for a resource even if it is already booked. (This might, for example, be used for the CEO!)
This page enables you to allow conflicts, set maximum meeting length, allow recurring meetings, customize the text of notification mails and a host of other settings.
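The same policies can be set from the EMS with Set-MailboxCalendarSettings. A sketch, assuming a resource mailbox named Catering and a delegate named Alice (both placeholders):

```powershell
# Require manual approval for everyone: nobody books automatically,
# and all requests go to the resource delegate for a decision.
Set-MailboxCalendarSettings -Identity "Catering" `
    -AllBookInPolicy $false -AllRequestInPolicy $true `
    -ResourceDelegates "Alice"

# Review the resulting policy settings.
Get-MailboxCalendarSettings -Identity "Catering" | Format-List *Policy*,ResourceDelegates
```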
A policy example
So what if you want to set a policy that makes it mandatory for a delegate to authorise the request when anyone books catering? This can be done by first setting up the delegate for the catering resource mailbox as above. Then log into OWA (also as above), set the "These users can schedule automatically if the resource is available:" policy to "Select Users and Groups," and set the "These users can submit a request for manual approval if the resource is available:" policy to "Everyone." Save the settings. At this point, when anyone tries to submit a request, the delegate user will get a mail like the one in Figure 3.
Figure 3 The Request info forwarded to delegates. The delegate user can then accept the request (and go ahead and book the catering with the caterer!). Meanwhile the requesting user will get a mail as in Figure 4, showing the pending request. The meeting is held as tentative to prevent the resource being booked by someone else.
Figure 4 The Tentative mail received by the user while awaiting approval. Once the delegate user approves the request, the meeting will be booked and acceptances sent out. More information about the policies and the shell commands available can be found here: http://technet.microsoft.com/en-us/library/aa996340.aspx http://technet.microsoft.com/en-us/library/bb124987(EXCHG.80).aspx
Figure 5 The Scheduling Assistant. All rooms can be added from the "Add Rooms" button. In the main window you will see which rooms are available; this is supplemented by the "Suggested Times" area on the right-hand side. Although it isn't obvious, it is possible to set the meeting time zone when creating a meeting, using the Time Zones button highlighted in Figure 5. However, it is not possible to view an additional time zone in the Scheduling Assistant as you can in the main Outlook calendar.
Scheduling resources
The Scheduling tab is where most bookings will take place. The basic scheduling of resources is simple. First open a meeting request and enter a subject and if necessary, the attendees. Next move to the Scheduling tab. Use the new "Add Rooms" button to open the window shown in Figure 6. This shows the "All Rooms" address list which gives a view of the capabilities of each room. As you can see, "SiteBRoom2" is listed with a capacity of 2 which is a default property that can be customized on each room, using the Exchange Management Console as shown in Figure 7.
Figure 7 Setting the capacity of the Room. If you are searching for an available room, then select those with the correct properties (such as capacity), and add them. When back on the Scheduling tab, you can then use the "attendees" tab to add other attendees or resources such as projectors. Figure 8 below shows the booking facilities.
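The capacity shown in Figure 7 can equally be set from the EMS. A one-line sketch, using the SiteBRoom2 mailbox from my test environment:

```powershell
# Set the seating capacity advertised in the "All Rooms" address list.
Set-Mailbox -Identity "SiteBRoom2" -ResourceCapacity 2
```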
Figure 9 Showing the method of adding custom resources. Having done that, one way to view these extended attributes is by using the PowerShell command below: Get-Mailbox -RecipientTypeDetails RoomMailbox | fl Name,ResourceCustom However, with a little more investigation, I have found another, more user-friendly way! As it would appear that only the "All Rooms" address list has the right GUI part to show the "Description" field, I have edited the filter to include the EquipmentMailbox type as well.
Figure 10 The combined Rooms and Equipment resources showing custom properties. I used this command to edit the filter. Set-AddressList -Identity "All Rooms" -RecipientFilter {(Alias -ne $null -and (RecipientDisplayType -eq 'ConferenceRoomMailbox' -or RecipientDisplayType -eq 'SyncedConferenceRoomMailbox' -or RecipientDisplayType -eq 'EquipmentMailbox'))} The results of the filter change are shown in Figure 10.
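For completeness, the custom properties themselves are defined in two steps: the property is first added to the resource schema, then stamped onto individual mailboxes. A sketch, assuming a hypothetical Room property called TV:

```powershell
# 1. Add the custom property to the resource schema, prefixed with the
#    resource type it applies to (Room/ or Equipment/). Note that this
#    replaces the existing schema list, so include any properties
#    already defined in your organization.
Set-ResourceConfig -ResourcePropertySchema ("Room/TV")

# 2. Stamp the property onto a room mailbox.
Set-Mailbox -Identity "SiteBRoom2" -ResourceCustom "TV"
```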
Summary
This article describes the Exchange Resource Scheduling functionality. I realize I haven't covered every single feature, but I have aimed at doing enough to give you the skills needed to continue developing your resource booking solution using Exchange 2007. In general, with a little investigation and some color-coding in Outlook, you can create a decent resource booking system. Of course it is not as polished as some of the 3rd-party systems, but then a lot of them are not that great! Some improvements are necessary in the user-interface. For example, when you are adding custom resource properties you can't actually see them in Outlook unless you are modifying the Rooms address list. It should be possible to use the Exchange Web Services to create a custom front end for working with the "resource mailboxes" that you create. Let's hope someone does it soon.
by William Lefkovics
Some tasks that should have been easy in previous versions of Exchange just weren't. Now, with Transport Rules in Exchange 2007, administrators have a much improved set of tools to manage message flow.
Microsoft Exchange Server 2007: Controlling Email Messages using Exchange's Transport Rules
As a long time Exchange administrator, one of the great frustrations I have found over the years has been how difficult certain seemingly trivial administrative tasks are to implement in Exchange Server. It is almost embarrassing to tell management that it is not that simple to add a disclaimer to outbound SMTP messages or insert text into the subject of certain messages or control message flow for compliance requirements. With Exchange Server 2007 Microsoft addresses many of these deficiencies by completely changing the underlying architecture. SMTP has been brought back into Exchange instead of extending the SMTP service in Internet Information Server (IIS). Arduous transport event sinks have been replaced by Transport Agents on the Hub Transport and Edge Transport roles. Transport rules are implemented in the Exchange 2007 architecture through Transport Agents. There is a Transport Rules Agent on all Hub Transport Servers and an Edge Transport Agent on all Edge Servers. Administrators now have a basic UI to control messages using Transport Rules - one of the killer features of the latest Exchange Server.
Transport Rules are not all that different from Outlook Rules in their logic. Each rule is composed of zero or more Conditions, one or more Actions and, optionally, one or more Exceptions. If no conditions are selected, then the rule will apply to ALL messages. Where there are multiple conditions, all must be met; however, no matter how many exceptions there are, it only takes one to prevent the rule from firing.
Figure 1. Figure 1 shows this logical flow through a transport rule. Transport rules are quite flexible with a solid set of options.
Figure 2.
Regulatory Compliance
Many companies these days are required to assert greater control over their messaging solutions. Regulatory agencies in various countries demand certain e-mail communications be archived and discoverable. In addition, corporate policy may be designed to minimize liability exposure by providing employees with a working environment safe from harassment or loss of productivity through electronic communications. Transport Rules in Exchange 2007 provide rudimentary solutions to assist administrators in deploying effectively compliant messaging systems. For example, archiving sensitive information may be required in some jurisdictions. Who wants to be investigated by the UK Information Commissioner's Office or the US Securities and Exchange Commission and not be able to provide the content they require?
Ethical Walls
Since every message must pass through a Hub Transport server, the hub becomes the point of message control for policy enforcement and compliance. Transport rules can fire on messages where senders or recipients belong to specific groups, making it easy to allow or prevent mail flow based on universal distribution group membership. A transport rule can prevent confidential email being sent, either intentionally or accidentally, from the Finance department to a factory worker simply by restricting delivery of e-mail between those groups. This virtual blockade of e-mail communication between groups is referred to as an Ethical Firewall or Ethical Wall. A policy may be put in place to block CFO emails, which are often of a sensitive nature, from being sent to factory workers; in the rare case where the CFO needs to send something, perhaps the HR department can send that e-mail instead. Every company is different, and Transport Rules provide some flexibility for securing the flow of e-mail in diverse scenarios. Ethical walls built with Transport Rules reduce the potential for confidential information to end up in the wrong inbox.
Message Classifications
Note Message Classifications are not available to Outlook clients by default. Outlook 2007 clients require Message Classifications to be exported from AD and copied to the local registry on the workstation for Outlook to access. This manual process allows administrative control on who can apply classifications to messages.
With Exchange 2007 there is a special type of server-side message categorization called Message Classification, usable with OWA 2007 and Outlook 2007 clients. These are custom labels, stored in Active Directory, that can be applied to e-mail messages. Transport rules can either act upon messages with specific classifications or assign a classification to messages based on specific properties. Exchange 2007 actually includes a few sample message classifications by default. These are not accessible through the EMC; however, they are fairly easily managed using the EMS. To list the existing Message Classifications, type the following cmdlet in the EMS: [PS]C:\>Get-MessageClassification | ft
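Beyond the built-in samples, new classifications can be created from the EMS. A sketch, creating a "Company Confidential" classification like the one used in the example later in this article (the name, display name and description here are illustrative):

```powershell
# Create a custom message classification, stored in Active Directory,
# that users can apply in OWA 2007 and that transport rules can test for.
New-MessageClassification -Name "CompanyConfidential" `
    -DisplayName "Company Confidential" `
    -SenderDescription "This message must not be forwarded outside the company."
```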
Figure 3. Working with Message Classifications goes beyond the scope of this article. More information can be found at the source, of course: http://technet.microsoft.com/en-us/library/aa998271(EXCHG.80).aspx.
As you probably already know, the Exchange Management Console (EMC) was built upon the Exchange Management Shell (EMS): each action performed from the EMC has an equivalent EMS cmdlet. Transport rules can be managed from either the EMC or the EMS, and we will look at both options. Whether you are creating a new transport rule on a Hub server or an Edge server, the process is very similar. We will walk through an example using the Hub Transport server where messages are copied to another mailbox based on keywords in the subject. On an internal Exchange 2007 server, the EMC has a few containers outlining menu options based on scope. Hub Transport rules are stored in Active Directory in the Exchange Configuration container, so they are replicated throughout the entire AD forest. Logically, transport rules are thus managed using the Hub Transport option under the Organization container in the EMC, as shown in the left pane in Figure 4.
Figure 4. Also in Figure 4, you can see the Transport Rules tab in the center pane is selected. To launch the wizard, click on the New Transport Rule option in the Action pane of the EMC.
Figure 5.
Figure 6. This condition will result in the message being copied to an article archive mailbox (see Figure 7) unless the message has been tagged with the Message Classification "Company Confidential" (see Figure 8).
Figure 7.
Figure 8.
Figure 9. Clicking New will complete the rule and present the EMS code that was used to create it (see Figure 10).
Figure 10. A CTRL-C will copy this EMS command to the clipboard. In most places in the EMC the cmdlet is complete; however, for transport rules, the variables are not displayed in the UI. They must be assigned manually if you are entering the transport rule using the New-TransportRule cmdlet. In our example, the commands are as follows:
[PS]C:\> $Condition = Get-TransportRulePredicate SubjectContains
$Condition.Words = @("Simple Talk","Red Gate")
$Action = Get-TransportRuleAction CopyTo
$Action.Addresses = @((Get-Mailbox "Article Archive"))
$Exception = Get-TransportRulePredicate HasClassification
$Exception.Classification = (Get-MessageClassification ExCompanyConfidential).Identity
New-TransportRule -Name 'Simple Talk Archive' -Comments 'Copy messages related to Simple Talk to article archive storage' -Conditions $Condition -Actions $Action -Exceptions $Exception -Enabled $true -Priority '0'
The last parameter sets the transport rule priority. Transport rules are applied in order of priority, starting with 0, and rules are added in the order they are created. It may be necessary to move a rule up the list; this is controlled using the -Priority parameter in the EMS with either the New-TransportRule or Set-TransportRule cmdlet. It is also easily done in the EMC by using the Change Priority option in the Actions pane when the desired rule is selected.
Figure 11. Figure 11 shows the interface for entering a numerical priority from 0 (highest) to the number of rules less 1. Microsoft recommends a maximum of 1,000 rules, mostly because that is where they stopped testing; this should be more than enough for most companies. You modify existing transport rules in much the same way, either with the EMC or the EMS. The EMS uses the Set-TransportRule cmdlet for updating rules. For a list of cmdlets for managing transport rules in the EMS, see Get-Help as shown in Figure 12.
Figure 12.
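For reference, the common management operations map onto a small family of cmdlets. A sketch, using the rule name from the example above:

```powershell
# List all transport rules in priority order.
Get-TransportRule

# Raise a rule to the top of the list.
Set-TransportRule -Identity "Simple Talk Archive" -Priority 0

# Disable and re-enable a rule without deleting it.
Disable-TransportRule -Identity "Simple Talk Archive"
Enable-TransportRule -Identity "Simple Talk Archive"

# Remove a rule permanently.
Remove-TransportRule -Identity "Simple Talk Archive"
```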
Summary
Exchange Server 2007 takes great steps forward in controlling messages and message transport from an administrator perspective. Some tasks that were challenging or required third party applications in previous versions of Exchange have been made more accessible through transport rules in Exchange 2007. With the Edge Server role concerned with security and the Hub Transport role focused on compliance and policy enforcement, Transport Rules provide a much improved set of tools for administrators to manage message flow in their Exchange organizations.
by Jaap Wesselius
Note The original, rather complex, Exchange clustering option, 'Single Copy Cluster,' protects you against a server failure but not a database failure. 'Local Continuous Replication' protects you from database failure, but not server failure. The simpler 'Clustered Continuous Replication' protects against both; if you use it with the Transport Dumpster, you won't lose emails.
Figure 1. Single Copy Cluster. The most important thing to understand in an SCC environment is 'shared storage.' Databases and log files are placed on a SAN or other central storage solution, which can be a Fibre Channel or an iSCSI solution. Data placed on the SAN is shared by all members of the cluster, but only one node owns the data, and is thus able to process it, at any given time. When this node fails, another node can take over the databases and log files and continue processing the data. For the end user this is a fully transparent process; he or she will hardly notice that something has happened in the background. Microsoft fully supports a clustered environment, but only if the complete configuration, including the network interfaces, Host Bus Adapters and even the switches, is listed in the Microsoft Logo'd Products List (http://microsoft.com/hcl). If the hardware configuration is listed, it has been fully tested by both Microsoft and the vendor for use in a cluster. Before installing Exchange, the cluster has to be built: Windows needs to be installed and the hardware properly configured. All nodes of the cluster need full access to the shared storage locations on the SAN. A special location on the SAN must be reserved for the quorum. This is basically the brains of the cluster, the location where the cluster configuration is actually stored. It is kept on the SAN but is very limited in size; the smallest unit of allocation within your SAN is sufficient, normally around 1 GB. When the hardware is set up correctly you can set up your Windows Server 2003 cluster. This consists of the following steps:
1. Create a cluster service account in Active Directory. This account will be used by the cluster service on the Windows Server 2003 cluster nodes. It can be a normal domain user account, but must be a member of the local administrators group on each Windows 2003 cluster node.
2. Start the "Cluster Administrator" via the Administrative Tools menu and select "Create New Cluster." Using the New Server Cluster Wizard, enter the new cluster name, the service account that you created in step 1, the public IP address of the cluster, and the computer name of the first node of the new cluster.
Please make sure that you have reserved sufficient IP addresses on the public network for your cluster configuration. You need an IP address for every node, an IP address for the Windows cluster, and an IP address for every Clustered Mailbox Server (CMS) that you create; in Figure 1 this means that six public IP addresses are needed. For internal communication within the cluster a private network is used. This network should not be used for any other communications, only for the cluster. The cluster service uses this network to send so-called "keep alive" messages; by sending and receiving these messages, each node in the cluster is aware of the presence and health of the other nodes. Once the first node in the cluster is configured correctly, you can start adding other nodes. Just start the Cluster Administrator in the Administrative Tools menu and select "Add Node." The new node contacts the cluster, reads its information, and adds itself to the cluster. Naturally you need to repeat this step for every node you want to add. Installing Exchange Server 2007 on the cluster is similar to installing Exchange Server 2007 on a regular server; however, you may only install one of two specific clustered mailbox roles. Instead of installing the normal Mailbox role, you have to install the "Active Clustered Mailbox Role." This will install an instance of Exchange Server 2007 in the cluster.
Whichever you choose, it will most probably be a lengthy operation. Suppose a normal tape backup device restores at 50 GB/hour and you have a 150 GB database: it will take at least three hours to restore your database from tape. Add to that the time needed to find the right tape, to start the restore itself, and perhaps to replay log files that are newer than the last backup. There have been situations where an Exchange server in a clustered environment was out of order for more than 24 hours. Using the Microsoft Exchange tools to repair a corrupt mailbox database can also be a lengthy operation. Tools like Microsoft's ESEUTIL can process between 5 and 10 GB/hour, depending on the underlying hardware; at 10 GB/hour against the 150 GB database, ESEUTIL will take at least 15 hours to finish. And Microsoft PSS would like you to perform an offline defrag with ESEUTIL after you have run a repair, which adds another 15 hours of processing, for a total of 30 hours during which your database, and therefore your mailboxes, are not available. Implementing an SCC environment will help protect you against server failure, but since you have only a single copy of your data, it will not protect against database failure. You can lower the risk and possible impact by implementing multiple smaller databases instead of one large database: if a database failure occurs, only part of your organization is impacted. But that part will still see an outage of the messaging service.
Replication Techniques
To prevent an outage due to a database failure, Microsoft built into Exchange Server 2007 some techniques based on replication. With these techniques a second copy of the database, or databases, is maintained, thereby eliminating this single point of failure. In Exchange Server 2007 RTM ("Release To Manufacturing," i.e. the original version of Exchange Server 2007) Microsoft introduced two types of replication:
1. Local Continuous Replication
2. Clustered Continuous Replication
In Exchange Server 2007 SP1 Microsoft introduced a third type of replication: 3. Standby Continuous Replication
Since Standby Continuous Replication is not dependent on any type of Windows Server 2003 clustering it is out-of-scope for this document.
Figure 3. Enable LCR in the Exchange Management Console on the First Storage Group. When you have finished configuring LCR, Exchange Server 2007 needs some time to replicate the database and the log files to the passive location and bring the duplicate database up to date, as can be seen in Figure 4. Please note that the files are located on the C:\ drive instead of a different drive; this is a demo system that has no additional disk. In a real-world scenario LCR should use a physically separate disk. Depending on the size of the database, this can take a serious amount of time.
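The EMC steps in Figure 3 have an EMS equivalent. A sketch, assuming a dedicated D: drive for the passive copy (the server name, drive letter and paths are placeholders):

```powershell
# Point the passive copy of the database at a separate physical disk...
Enable-DatabaseCopy -Identity "SERVER\First Storage Group\Mailbox Database" `
    -CopyEdbFilePath "D:\LCR\Mailbox Database.edb"

# ...then enable local continuous replication for the storage group.
Enable-StorageGroupCopy -Identity "SERVER\First Storage Group" `
    -CopyLogFolderPath "D:\LCR" -CopySystemFolderPath "D:\LCR"

# Monitor seeding and log replay progress.
Get-StorageGroupCopyStatus -Identity "SERVER\First Storage Group"
```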
Figure 4. The additional LCR copy as seen in Explorer.

When the Exchange 2007 Mailbox Server role is processing normal message data, all information flows through the log files of the active copy, the copy of the database that is used for normal operations. When Exchange has filled a log file and the log file is closed, it is sent to the passive copy, the copy that is maintained by LCR. The received log file is inspected and replayed into the passive copy of the database. If the active copy of the database becomes corrupt, or the database hardware fails, end-users will experience an outage of the Exchange service. The messaging administrator now has to replace the active copy of the database, which is the unavailable copy, with the passive copy of the database. To make this replacement happen, run the following cmdlet in the Exchange Management Shell:

Restore-StorageGroupCopy -Identity "SERVER\First Storage Group"

Replace the name of the server and the Storage Group with the actual names in your environment. Using LCR protects you from a database failure by maintaining a copy of the database. Although manual intervention is needed in case of a failure, this works very well. Unfortunately it does not protect you from a server failure like SCC does. To protect against both a server failure and a database failure, Microsoft combined the cluster technology with the replication technology. The result is Clustered Continuous Replication (CCR).
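In practice the switch-over involves a little more than that single cmdlet. A hedged sketch of the full sequence; the storage group and database names are examples, and -ReplaceLocations is only needed when the original database location is gone:

```powershell
# 1. Make sure the failed database is dismounted
Dismount-Database -Identity "SERVER\First Storage Group\Mailbox Database"

# 2. Activate the passive copy; -ReplaceLocations repoints the storage
#    group at the file paths of the LCR copy
Restore-StorageGroupCopy -Identity "SERVER\First Storage Group" -ReplaceLocations

# 3. Mount the database again so users can reconnect
Mount-Database -Identity "SERVER\First Storage Group\Mailbox Database"
```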
Figure 5. Clustered Continuous Replication.

When building a CCR environment, the Windows environment needs to be built first. A CCR environment does not use shared storage, so there's no quorum resource as in an SCC environment. There is a quorum, but it is shared between the nodes over the normal network. The cluster technology used for a CCR environment is a special version of clustering, available in Windows Server 2003 and Windows Server 2008, called "Majority Node Set" or MNS. An MNS cluster normally has an odd number of nodes; in case of a failure, half of the number of nodes plus one must remain available to keep the cluster up-and-running. For CCR, MNS was adapted to support just two nodes in a cluster.
A CCR environment can have only two nodes; more nodes are not supported and not built into the product. When implementing a CCR environment, a special Windows share is created outside the cluster, normally on a Hub Transport server. This share is called the "File Share Witness." In case of cluster problems, the passive node uses this share to determine whether the active node is still alive. If it is not, the passive node will start up. Before implementing Exchange Server 2007 in a CCR environment, a cluster needs to be built. This can be done using Cluster Administrator or using the command line. The command line is useful if you have to build several clusters, since you only have to change a couple of variables within a script. In this example we will build a Windows Server 2003 cluster using the command line. The first step is to create a cluster service account in your Active Directory. This account will be used by the cluster service on the individual nodes. It can be a normal domain user account, but it must be a member of the local Administrators group on the individual cluster nodes. Then create the File Share Witness on the Hub Transport server. The File Share Witness is used by the Windows Server 2003 cluster to determine which node will form the cluster when the individual nodes can no longer communicate with each other. The File Share Witness is created using this script:
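A typical File Share Witness setup comes down to creating a folder on the Hub Transport server, sharing it, and granting the cluster service account full control. A sketch with example names; the folder, the share name and the DOMAIN\clussvc account are assumptions to adapt to your environment:

```powershell
# Create the witness folder on the Hub Transport server
mkdir C:\FSW_CCRCluster

# Share it and grant the cluster service account full control on the share
net share FSW_CCRCluster=C:\FSW_CCRCluster /GRANT:DOMAIN\clussvc,FULL

# Grant the cluster service account full control on the NTFS folder as well
cacls C:\FSW_CCRCluster /E /G DOMAIN\clussvc:F
```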
Figure 6. The newly created CCR environment as seen in the Exchange Management Console. Note the healthy state of the Copy Status.
Figure 7. The newly created CCR environment as seen in the Cluster Administrator.

In contrast to LCR, where recovery is a fully manual process, recovery in a CCR environment is fully automatic. When the active node fails, whether it is a server failure or a database failure, the passive node automatically takes over. The replication process is identical to LCR: when the active node has finished processing an Exchange log file, the log file is replicated to the passive node, where it is inspected and replayed into the passive copy of the database. There is a delay, however, between the active and the passive copy. Depending on the utilization of the active Exchange 2007 Mailbox server this delay can be more than 20 log files, which means that when a fail-over occurs and the passive node takes over, there is a potential loss of e-mail. To prevent this, the Exchange 2007 Hub Transport server keeps a cache of all messages sent to the active node of the cluster. This cache is called the Transport Dumpster. When a fail-over occurs, the replication service contacts the Transport Dumpster to request redelivery of the most recent messages. This way a near zero mail loss fail-over can be achieved.
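The Transport Dumpster is configured per organization on the Hub Transport servers. You can inspect and adjust its size and retention time with Get-TransportConfig and Set-TransportConfig; the values below are examples, not recommendations:

```powershell
# Show the current Transport Dumpster settings
Get-TransportConfig | fl MaxDumpsterSizePerStorageGroup, MaxDumpsterTime

# Example: keep up to 18 MB of recent messages per storage group, for 7 days
Set-TransportConfig -MaxDumpsterSizePerStorageGroup 18MB -MaxDumpsterTime 7.00:00:00
```

A common rule of thumb is to size the dumpster at roughly 1.5 times the maximum message size allowed in the organization.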
Conclusion
Exchange clustering techniques can be used to increase the availability of your Exchange Mailbox Server. The traditional clustering technique, now called Single Copy Cluster, protects you against a server failure but not against a database failure. The requirement for shared storage can also make the configuration complex, especially when your cluster hosts a large number of nodes. Replication techniques in Exchange Server 2007 can safeguard you from a database failure. With Local Continuous Replication you are protected against a database failure, but since LCR runs on a single server it does not protect against a server failure, and recovery from a database failure is a fully manual process. CCR is a very nice solution, protecting against both a server failure and a database failure, and in combination with the Transport Dumpster it leads to an almost zero mail loss scenario. Since CCR is not built on shared storage, it is far less complex than an SCC environment and it even works well with Direct Attached Storage. If you want to achieve a highly available Exchange Server 2007 environment, a CCR solution is the best option to implement.
by Michael Francis
Our top tips competition this month has been dominated by one man, an admin of rare enthusiasm. Ben Lye of The MathWorks has sent us three tips this week, and these are as follows. In the spirit of the Olympics, the Gold goes to...

Disabling PST files

Straightforward, I know, but this is something every admin should do. The effort this takes, versus the effort it could save, means we have a clear victor. Ben writes: "Microsoft Outlook supports a couple of options for restricting PST files. The first is a registry change called PSTDisableGrow, which prevents new data being added to existing PST files. The second is a registry change called DisablePST, which prevents users creating or opening PST files altogether. The settings can be applied individually or together to phase out the use of PST files, and they can both be applied through group policy. PSTDisableGrow doesn't have a great deal of documentation, but DisablePST is pretty well documented here: http://support.microsoft.com/kb/896515."

Silver goes to a hardworking runner-up, striving for automation and convenience...

Smallest database script

Put in the effort, quite useful, but doesn't quite have the sheer vital, everyone-must-do-this-now urgency of first place. "It's based on one we use as part of our mailbox provisioning process, and it returns the smallest database on a specified server. By default it sums the sizes of all the mailboxes in each database, but it can also look at the size of the EDB file instead. (Summing the mailbox sizes will avoid counting whitespace in the EDB file.) It would be pretty easy to modify to look at specific storage groups, or use other search criteria (we look for databases which are named according to physical office locations, and pick the smallest database for that office). The script takes two parameters: the server name and an optional flag to look at EDB file sizes instead of mailbox sizes. If the server name isn't specified you will be prompted for it.
To get the smallest database on a server name SERVER01 using mailbox sizes: Get-SmallestDatabase.ps1 -server SERVER01 To get the smallest database using the EDB file size: Get-SmallestDatabase.ps1 -server SERVER01 -edb The script returns the database object as the output, but again that could easily be changed to suit a particular need."
# Loop through each mailbox database on the specified server
foreach ($database in Get-MailboxDatabase -Server $server) {
    if ($edb) {
        # Use the size of the EDB file on disk, via a UNC path built from
        # $database.Server and $database.EdbFilePath
        $dbsize = (Get-ChildItem ("\\" + $database.Server + "\" + ($database.EdbFilePath.PathName -replace ":", "$")) | Select-Object name,length).length
    }
    else {
        # Calculate the database size by summing the size of all mailboxes in the database
        $dbsize = (Get-MailboxStatistics -Database $database.Identity | Measure-Object -Property TotalItemSize -Sum).Sum
    }
    # Compare the sizes to find the smallest DB
    if (($dbsize -lt $smallestdbsize) -or ($smallestdbsize -eq $null)) {
        $smallestdbsize = $dbsize
        $smallestdb = $database
    }
}
And finally, the Bronze...

Change the mailbox information cache refresh interval

Neat, but I guess most of you that are in a hurry to apply mailbox changes already caught onto this. Third place. "By default the Exchange Information Store service caches information about mailboxes for two hours, meaning that any changes to mailbox quotas can take up to two hours to take effect. You can change the cache refresh interval to a recommended value of 20 minutes, which means that you only have to wait a maximum of 20 minutes for mailbox quota changes to take effect. This TechNet article describes the problem and the registry changes required to reduce the cache refresh interval: http://technet.microsoft.com/en-us/library/bb684892.aspx"

Ben has won the grand prize of a $50 Amazon voucher for his efforts (that's about 1/6th the value of a real Olympic Gold, fact fans). If you want a chance of winning a $50 Amazon voucher, or you simply feel the need to share knowledge with your fellow humans, then send your Exchange top tips for next month's competition to [email protected]
by Jaap Wesselius
One of the most misunderstood technologies in Exchange Server, regardless of its version, is the database technology. Most people, even Exchange administrators, know it has something to do with ESE and tools like ESEUTIL, but once it's running they leave it that way for the rest of its life: it's too difficult, and you'd better not touch it in case it breaks.
In this article I'll explain some of the fundamentals of the Exchange Server database technology. The primary focus is on Exchange Server 2007, but when appropriate I'll refer to Exchange Server 2003. This article is the first in a series about Exchange database technologies, backup, restore and disaster recovery.
Figure 1. Database files in a standard Exchange setup.

We can see several files in this directory:

- Mailbox Database.edb - the actual mailbox database where all the mailboxes are hosted. One or more mailbox databases can be hosted in one directory.
- E00.log and subsequent log files - log files used by Exchange Server for transactional processing of all information.
- E00.chk - a checkpoint file that keeps track of the relation between the information in the log files and in the database.
- E00tmp.log - a temporary log file.
- E00res00001.jrs and E00res00002.jrs - reserved log files used by Exchange Server in case a disk-full situation occurs.
These files belong together and form a so-called "Storage Group." During the installation of Exchange Server one database is created in a Storage Group; in Exchange 2007 the Public Folder database is created in a second Storage Group. When replication technology (like LCR, CCR or SCR) is used in Exchange 2007, only one database per Storage Group can exist. If no replication technology is used, up to five databases can be created in each Storage Group. So, a Storage Group is a set of one or more database files that share a common set of log files.
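You can see the same grouping from the Exchange Management Shell: Get-StorageGroup and Get-MailboxDatabase show which log file prefix and file paths belong to which Storage Group. The server name below is an example:

```powershell
# Storage groups with their log file locations and log file prefixes (E00, E01, ...)
Get-StorageGroup -Server SERVER01 | fl Name, LogFolderPath, LogFilePrefix

# Databases and the storage group each one belongs to
Get-MailboxDatabase -Server SERVER01 | fl Name, StorageGroup, EdbFilePath
```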
Transactions can be seen in normal life as well. Suppose you go to the bank to transfer money from your savings account to your normal account. The money is withdrawn from your savings account and then added to your normal account and both actions are recorded and maybe printed on paper. This can be seen as one transaction. You don't want it to end in an unknown state, where the money is withdrawn from your savings account but not added to your normal account. The same principle goes for Exchange Server. Suppose you move a message from your inbox to a folder named "Simple Talk." From a transaction point of view it starts by adding the message to the "Simple Talk" folder, then it updates the message count from this folder, it deletes the message from the Inbox and updates the message count for the Inbox. All these actions can be seen as one transaction.
Figure 2. A Binary Tree used in Exchange Server.

The actual data is stored in the leaves. One or more trees in a database make a table. There are several kinds of tables in an Exchange server:

- Folders table
- Message table
- Multiple Message Folder tables
- Attachment table
The tables hold the information we can see in Outlook (Web Access). The tables consist of columns and records: the columns are identified as MAPI properties, and the records contain the actual information. Multiple trees exist in one database, and sometimes a tree needs to be split when it fills up: Exchange Server's internal processes reorganize all information from one tree into two trees, which is called a "split." It is not possible to predict how many trees will be in a particular database, but it will be hundreds or thousands. This can be seen in the header of the database, which will be explained later in this article.
Database pages
A page is the smallest unit of data in an Exchange Server environment. For Exchange Server 2007 a page is 8 KB in size; all earlier versions use a 4 KB page size. Increasing the page size from 4 KB to 8 KB results in a dramatic increase in performance of the database technology, and several hardware storage solutions also perform much better with an 8 KB page size. A page consists of a header, pointers to other pages, checksum information and the actual data from Exchange Server regarding, for example, messages, attachments or folders. A database file can consist of millions of pages: the total number of database pages can easily be calculated by dividing the total size of the database by the 8 KB page size. If a database is, for example, 100 GB in size, it consists of 100 GB / 8 KB = approximately 13.1 million pages. Each page is sequentially numbered, and whenever a new page is created it gets a new, incremented number. When pages are read from the database and altered, they get a new number before being written to the log file and flushed to the database file. Needless to say, this sequential number must be a very large number: it is a 64-bit number, which means roughly 18 quintillion changes can be made to a database.
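That arithmetic is easy to check at any PowerShell prompt, since PowerShell understands the KB/GB size suffixes; no Exchange cmdlets are involved:

```powershell
# Number of 8 KB pages in a 100 GB database
$pages = 100GB / 8KB
$pages          # 13107200
$pages / 1e6    # 13.1072, i.e. roughly 13.1 million pages
```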
Figure 3. Schematic overview of the working of the Exchange Server database. The above mentioned process is the same in all versions of Exchange Server.
Memory constraints will always result in higher disk IO, and when the storage is not designed properly this will result in poor performance. Most Exchange 2003 performance issues are related to bad storage design. Exchange 2007 is a 64-bit platform, and memory usage on a 64-bit server is only limited by your budget. Exchange 2007 will use as much memory as possible for caching, which results in fewer IO operations on the disk. So, the more memory in your Exchange 2007 server, the less disk IO you will see on your storage.

All transactions in an Exchange server flow through the log files. This is the only way to achieve a consistent environment. This includes everything: new messages as they arrive, the creation of new folders, moving messages between folders, deleting messages, appointments, notes and so on. All information in the mailbox goes through the log files. The creation of the mailbox (though not the mailbox properties in Active Directory!) is recorded in the log files, and even the creation of a mailbox database is recorded in the log files. This automatically means that if you lose your database and you still have all your log files available (perhaps in combination with your last backup set) you can reconstruct everything up to the point where you lost your database!
Note It is very important to separate your log files and your database files. Separate them on multiple disks or multiple LUNs. If you separate them on multiple LUNs on your SAN, make sure that under the hood multiple, separated disks are used. This will have a positive impact on your performance and it will make sure you have a recovery path when either one of the disks is lost!
One important point to remember is that the log files are always ahead of the database, so there is always data not yet written into the database. This data can cover a number of log files, and this number of log files (or amount of data) is called the "checkpoint depth." This automatically means that a running database is always in an inconsistent state; to get it into a consistent state, all log files in this range are needed, because data has already been written to the log files but not yet to the database file. In Exchange Server this is called a "dirty shutdown" state. When a database is dismounted it is brought into a consistent state: all data not yet written to disk is flushed to the database and all files are closed. All information is now written into the database. This is called a "clean shutdown" state of the database. The log files needed to get the database into a consistent state, or "clean shutdown," are recorded in the header of the database. The header of a database is written into the first page of the database file and contains information regarding the database. The header information can be retrieved using the ESEUTIL tool. Just enter the following command in the directory where the database file resides:

ESEUTIL /MH "Mailbox Database.edb"

Which will result in output like this:

F:\SG2>eseutil /mh mbx2.edb

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...
Database: mbx2.edb

File Type: Database
Format ulMagic: 0x89abcdef
The same kind of information is logged in the header of the log files (ESEUTIL /ML E00.LOG) and in the header of the checkpoint file (ESEUTIL /MK E00.CHK). As these files are grouped together, you can match them using the header information.
Conclusion
A Storage Group in an Exchange Server is one or more database files, a set of log files and a checkpoint file. These files belong together and have a tight relationship, but if you understand the basic principles of transactional logging it isn't too difficult. All information in a database is first recorded in the log files for recovery purposes, so deleting "unneeded" log files to create additional space is a bad idea, since it will break the recovery process. Don't just delete these log files; have them purged by a backup application. This way you have a recovery path using this backup and the log files generated after the last backup. Bad things can happen to databases and log files in recovery scenarios. These bad things are not normally caused by Exchange Server itself, but by administrators who do not know what they are doing or the implications of their actions. Install a test server, create some storage groups and mailboxes, and start playing around with the databases. See for yourself what happens during a crash, and explore the databases and the header information. It will be the first step towards a solid disaster recovery plan! In my next article I will explain what happens during recovery, replay of log files, and offline backups.
by Neil Hobson
In Exchange 2007, you can now classify your messages in any way you wish, so that, for example, you can flag messages as being sensitive information that should not be sent outside the company. You can also create transport rules for all messages of a particular category. It is an easy way of implementing email policies within a company.
Introduction
Exchange 2007 has a new feature known as message classification that allows users to apply a classification to a message in order that the actual usage of that message is understood by both the sending and receiving parties. In previous versions of Exchange the concept of message classification has been restricted to marking messages with high, normal or low importance. Now it's possible to not only choose between the default message classifications that ship with Exchange 2007 but also to create custom classifications, as I'll show you later. It's important to understand that message classifications are also an Outlook 2007 feature and therefore this is the version of Outlook you need to deploy to take advantage of this feature. However, there are several configuration changes required in order to make message classifications available to Outlook 2007 clients. If you have Outlook Web Access 2007 clients, these can use message classifications without any further modifications.
In the next section within this article I'll be showing you how to export the message classifications to an XML file that the Outlook 2007 clients within your environment can locate. This is to allow Outlook 2007 to display the classifications within email messages. Therefore, if you plan on creating new custom message classifications, you should do so before you export the classifications to an XML file. I'll be covering the creation of custom message classifications later in this article.
Figure 1: Exchange Scripts Folder.

To use this script, bring up the Exchange Management Shell and run the following command:

./Export-OutlookClassification.ps1 > C:\classifications.xml
Figure 2: Contents of classifications.xml.

The classifications that you have exported to the XML file are those that can be chosen by users who are sending a message; they have nothing to do with the type of message classification that a user can receive. I will expand on this later in the article. Now that you have exported the classifications.xml file, there are two additional parts of the overall message classification configuration to complete. First, you need to store the classifications.xml file in a location that each Outlook 2007 client can access, and second, you need to make a registry change on each Outlook 2007 client to enable message classifications. I'll cover these two configuration elements in the next two sections of this article.
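The Outlook 2007 registry change is made under the Office Policy key. A sketch using the value names from Microsoft's message classification documentation (AdminClassificationPath, EnableClassifications, TrustClassifications); the UNC path to the XML file is an example:

```powershell
# Enable message classifications for an Outlook 2007 client (per-user setting)
$key = "HKCU:\Software\Microsoft\Office\12.0\Common\Policy"
New-Item -Path $key -Force | Out-Null

# Point Outlook at the exported classifications.xml (example UNC path)
Set-ItemProperty -Path $key -Name AdminClassificationPath -Value "\\server\share\classifications.xml"
Set-ItemProperty -Path $key -Name EnableClassifications -Value 1 -Type DWord
Set-ItemProperty -Path $key -Name TrustClassifications  -Value 1 -Type DWord
```

In practice these values are usually pushed out through group policy rather than set by hand on each client.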
Figure 4: Outlook 2007 Permission Button. Click the small down-arrow to the right of the Permission button and you will be presented with the six message classification options as defined in the XML file created earlier. You can see this in Figure 5.
Figure 5: Default Classifications in Outlook 2007. Let's say that I choose to classify this new message as Company Confidential. Once I've classified my message, it appears as shown in Figure 6 below.
Figure 6: New Message Classified as Company Confidential. What if the recipient, a user called Ann in this case, isn't enabled for message classifications and therefore doesn't have the required registry modifications in place? In this case, Ann just sees an ordinary message as shown in Figure 7.
Figure 7: Received Message Without Classification. The message classification metadata is still associated with the message even if Ann's client is not able to show it. We can determine that this is true by adding the required registry changes and restarting Ann's Outlook 2007 client. Once this has been done, we can see the message classification is now shown as you can see in Figure 8.
Figure 9: Default Message Classifications.

If you run the New-MessageClassification cmdlet without any additional parameters, you'll be prompted for three parameters to complete the creation process: Name, DisplayName and SenderDescription.

- Name. The administrative name of the classification. For example, if you want to retrieve details about the message classification with a Name attribute of Custom, you can use the Get-MessageClassification -Identity Custom cmdlet.
- DisplayName. The name of the classification as seen in Outlook 2007, as you saw earlier in this article in Figure 5.
- SenderDescription. The description that the sender of the message sees in Outlook 2007. This is the orange bar that you can see in Figure 6.
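If you prefer to supply the parameters up front, the same classification can be created in a single call; the name and descriptions below are only examples:

```powershell
# Create a custom classification in one line (all three values are examples)
New-MessageClassification -Name "Custom" `
    -DisplayName "Custom Classification" `
    -SenderDescription "This message has been marked with the Custom classification."
```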
Figure 10 below shows the process of creating a new message classification using just the three basic parameters.
Figure 10: Creating a New Message Classification.

Here's something to note. Immediately after you have created this new classification, run the following cmdlet:

Get-MessageClassification Custom | fl

This obtains full details about the newly created Custom message classification, as you can see from Figure 11. What you may notice is that
Figure 11: Custom Message Classification Parameters and Values.

As you can probably guess, the RecipientDescription attribute contains the text that the recipient of the message will see when opening the classified message. Once you've created the message classification, you can easily alter its parameters as with any other Exchange Management Shell cmdlet. For example, the following cmdlet alters the RecipientDescription attribute on the Custom message classification that we've just created:

Set-MessageClassification -Identity Custom -RecipientDescription "The recipient will see this description after opening a message sent with the Custom Classification"

Once you have configured your message classifications, you need to re-export the entire list of message classifications into a new XML file and re-distribute it to the Outlook 2007 clients. Therefore, you should ideally plan your custom message classifications before you export the list of classifications into an XML file for the first time. As you can see from Figure 11, once Ann opens a new message that has been classified with the Custom message classification, the new recipient description text is now displayed.
Figure 13: Transport Rules Tab.

Right-click the Hub Transport container and choose New Transport Rule from the context menu. This invokes the new transport rule wizard. On the opening screen of the wizard, give your rule a suitable name and make sure that the Enable Rule checkbox remains selected. Click Next to advance to the Conditions screen. Here, choose the "marked with classification" condition in the Step 1 area of the screen. You should now see that, in the Step 2 area of the screen, the "marked with classification" condition has been added. An example is shown in Figure 14 below.
Figure 14: Transport Rule Conditions. In the Step 2 area of this screen, click the underlined word "classification." This brings up the Select message classification window as you can see from Figure 14. Select the relevant message classification, which in this example is the Custom Classification, and then click OK.
Figure 16: Transport Rule Actions. Clicking Next on the Actions screen takes you to the Exceptions screen where you can choose to apply exceptions to the rule. For the example within this article I'm not going to add any exceptions so I will simply click Next and move on to the next screen. Finally we are now at the Create Rule screen that allows you to review your selections. If you are happy with your selections, click New to create the new transport rule. If everything has been successful, you are presented with the Completion screen informing you of a successful creation as you can see in Figure 17.
Figure 17: Transport Rule Completion Screen. The expected outcome of this transport rule is that whenever a message is sent and is marked with the Custom message classification, the subject line of that message should be prepended with the text "CUSTOM CLASSIFICATION:" As you can see from Figure 18, the transport rule works perfectly.
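The same rule can also be created from the Exchange Management Shell using the Exchange 2007 predicate/action model. This is a sketch; the rule name is an example, and you should verify the predicate and action names on your server with Get-TransportRulePredicate and Get-TransportRuleAction:

```powershell
# Condition: the message is marked with the Custom classification
$condition = Get-TransportRulePredicate HasClassification
$condition.Classification = (Get-MessageClassification "Custom").Identity

# Action: prepend a marker to the subject line
$action = Get-TransportRuleAction PrependSubject
$action.Prefix = "CUSTOM CLASSIFICATION: "

# Create and enable the rule
New-TransportRule -Name "Custom Classification Rule" `
    -Condition @($condition) -Action @($action) -Enabled $true
```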
Summary
It's now possible for the end users of an Exchange 2007 system to classify their messages such that recipients of those messages understand that there may be restrictions relating to the message content. For example, the message may contain sensitive information that should not be sent externally from the company. The flexibility of message classifications is further extended when you consider that transport rules can be created to perform specific actions on messages that have been classified by the users as you have seen in this article. In my experience not every company that deploys Exchange 2007 implements message classifications but nonetheless it is a useful and welcome addition to the Exchange 2007 feature set, particularly for those organizations that need to comply with regulations or other email policies.
by Nicolas Blank
Nicolas Blank recounts his experiences deploying Exchange 2007 SP1 on Windows Server 2008, when not everything worked out of the box as it should have. In this article Nicolas writes about the fixes to the issues he faced when installing on Server 2008.
My customer's scenario wasn't quite typical: he had an unstable mail server running Exchange 2003, as well as Active Directory issues, one of which included the requirement to rename the directory tree. The customer wanted a brand new environment and, in order to realize the scalability and security benefits of Microsoft's 64-bit OS, decided on Windows Server 2008. This meant I was called in to perform a "Green Fields" migration, where a new target environment is built and all users, machines and mail are migrated to it. To complicate matters, the customer was on a tight hardware budget, meaning he could only afford a single large machine for a 200-user site.
Designing a solution
The design was relatively straightforward: since I only had a single machine available, I had to place the HUB, CAS and Mailbox roles onto that machine. Having all of the roles on one machine is well catered for in the available design guidance from the Exchange team at Microsoft. The machine had 16 GB of memory and a quad-core processor. I also had enough disks to create a decent set of mirrors for the OS, page file and logs, and a RAID 5 array for the Exchange database. Spam handling was done at the ISP, which meant one less burden for the HUB role to handle, since the budget did not allow for additional hardware for an Edge server. Anti-virus would also be handled by the ISP, though this did not preclude internal attack, so I chose Forefront to handle AV on the Exchange server, scanning both existing mail in the stores and transmitted mail via the HUB role.
Using the command line was dramatically faster than using the GUI would have been and allowed me to script all of the required prerequisites, thereby eliminating any potential mistakes installing the prerequisites. After this I invoked the Exchange installer and since all of the prerequisites were met, Exchange had no issues installing.
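The prerequisite script itself can be sketched along these lines. The feature IDs are the standard ServerManagerCmd identifiers for the Exchange 2007 SP1 web and management prerequisites on Server 2008; treat the exact set as an example, since it depends on which roles you install:

```powershell
# Install the Exchange 2007 SP1 prerequisites on Windows Server 2008
ServerManagerCmd -i PowerShell
ServerManagerCmd -i Web-Server
ServerManagerCmd -i Web-ISAPI-Ext
ServerManagerCmd -i Web-Metabase
ServerManagerCmd -i Web-Lgcy-Mgmt-Console
ServerManagerCmd -i Web-Basic-Auth
ServerManagerCmd -i Web-Windows-Auth
ServerManagerCmd -i Web-Dyn-Compression
```

Because each feature is a separate idempotent command, the script can be re-run safely if one step fails.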
The problem: IPv6. The last thing I added before installing Exchange was the prerequisite for the CAS role to host Outlook Anywhere and mobile clients, namely RPC over HTTP, using this command:

ServerManagerCmd -i RPC-over-HTTP-proxy

I noticed that RPC over HTTP didn't always work. The solution lay in the limited support for IPv6 in the CAS role. Running

netstat -a -n

from the command line gave the following results:
If you're familiar with IPv4, you'll know that in the first listing the IP stack is listening on open ports 6001, 6002 and 6004, but these ports were missing on the IPv6 stack at the same address "[::]:". This meant that one of the core requirements for RPC over HTTP, communication with the local server, had been compromised. At first glance the fix seems simple: surely you just disable IPv6? Correct, but that wasn't as easy as you might think. First I had to unbind IPv6 from the network adapter, but just like Vista, Server 2008 requires a registry hack in order to disable the protocol altogether.
Using Regedit I navigated to HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters, added a 32-bit DWORD value called DisabledComponents and set it to 0xff, effectively disabling all IPv6 components. See this article from the MS Exchange team for background.
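The same registry edit can be scripted rather than made by hand in Regedit; a minimal PowerShell sketch of the change described above (run from an elevated prompt, and remember that a reboot is required before it takes effect):

```powershell
# Disable all IPv6 components by creating the DisabledComponents value
# under the Tcpip6 parameters key; -Force overwrites the value if it exists
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" `
    -Name DisabledComponents -PropertyType DWord -Value 0xff -Force
```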
Note that the last line comments out the IPv6 address.
Note: This issue is current as of Exchange 2007 SP1 Rollup 3, though it should be resolved with Rollup 4 (if it comes out for real!). None of my Exchange customers run IPv6, and even with this issue resolved I would still disable IPv6, or any other protocol not actively used in the environment. After a reboot, running netstat -a -n again revealed that IPv6 was indeed gone for good.
Conclusion
If you get the chance to upgrade, Windows Server 2008 offers a number of enhancements in the OS which greatly benefit Exchange 2007 deployment and management. Security and resilience are enhanced, and Windows ships with a better IP stack allowing more RPC connections, amongst other features. This "green fields" migration path is particularly straightforward, but even the more complex methods are well worth following if you have the budget. A few things remain incompatible: for example, Server 2008 contains no native backup utility for Exchange 2007, and Exchange 2007 does not support the new Read Only Domain Controller feature in Server 2008. The first of these, at least, is likely to change in the near future. IPv6 is irritating, but it is quickly disabled since it offers no value over IPv4 at this point. Server 2003 is still available at the time of writing, but I wouldn't hesitate to deploy Server 2008 and gain advantages such as Hyper-V support or "free" geo-clustering with CCR and SCR clusters replicating over the WAN. It is worth remembering that Exchange is a large application, making every deployment worth planning carefully, irrespective of which operating system it is deployed on.
by Jaap Wesselius
Exchange Server stores information in a database and uses log files for transactional processing. To restore, defragment or repair a database, the ESEUTIL tool is essential. You can always recover data when the database is lost, provided you have backed up the database.
In my previous article, Exchange Database Technologies, I discussed the underlying database technology in Exchange Server, the Extensible Storage Engine or ESE. One of the most important points in that article was that all changes to the Exchange Server database go through the log files. This is done for recovery purposes. Let's look at the log files, and the replay of log files in a recovery scenario.
At this point log files, a database file and a checkpoint file have been created and the Exchange Server's database is ready to use.
Tip: In the old BackOffice 4.x resource kit there was a small utility called "mailstorm." This MAPI utility was very useful for sending a massive number of messages in a short timeframe; using mailstorm it is possible to watch the creation of log files. You can still find the original mailstorm utility here:. Unfortunately mailstorm no longer works with Exchange Server 2007, but there's a PowerShell script available with the same functionality. Using this script, a variety of predefined messages can be sent to your Exchange Server 2007 to check the log file creation. The PowerShell version of mailstorm can be downloaded here:
Create a user account in Active Directory and create a mailbox using the Exchange Management Console. Log on to the mailbox and start sending messages until you have a couple of messages and a couple of log files. What happens when the database file is lost? If we have all the log files still available it should be possible to retrieve all information. Remember what I wrote in my previous article: "everything is logged in the log files, even the creation of database files!"
Figure 1. Error message when a mailbox database appears to be missing.

In my humble opinion, the yellow exclamation mark should be replaced with a very large red "X", since this is a very important message. When you click "Yes" a new mailbox database will be created in the same location as the "old" database. This is a completely new database. Although it has the same name, "mailbox database.edb", as in the previous step, it has new signatures which, as explained in my previous article, Exchange Server uses to link databases and log files together. Recovery of old log files will not result in information being replayed into the database, because in this scenario it is a different database. And remember, since all information is logged into the log files, the creation of this new database is also logged.

Choose "No" instead, then delete the checkpoint file E00.chk and try to mount the database again. No error message is raised and the database is mounted. Even better, when you log on to your test mailbox you will see that no information is lost either!

Here is what happens in that second case, with the checkpoint file deleted: during the mount process Exchange Server cannot find the database file and cannot find the checkpoint file. Therefore it starts to recover all information by replaying the available log files. It starts with the oldest log file, E0000000001.log, which also contains the creation of the initial database file. All information in the other log files is replayed into this database until it reaches the end of the last log file, E00.log. The database is then mounted and ready to use. When Exchange Server cannot find the database file but does find the checkpoint file, it will not replay the log files; it starts at the end of the last log file E00.log and creates a new database.

How can you tell which files belong together? Dismount the Exchange Server's database, open a command prompt in the database's directory and check the header information using the ESEUTIL tool.
You will find the following information:

K:\sg1>eseutil /mh "mailbox database.edb"

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...
         Database: mailbox database.edb

     DB Signature: Create time:08/25/2008 22:54:52 Rand:4416518 Computer:
    Log Signature: Create time:08/25/2008 22:54:50 Rand:4412688 Computer:

Operation completed successfully in 0.140 seconds.

K:\sg1>

When you check the header information of the log files with the ESEUTIL tool you will find corresponding information:
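That check on the log files can be sketched as follows, assuming the first log file in the set is E0000000001.log (eseutil's /ml mode dumps the header of a log file, just as /mh does for a database):

```powershell
# Dump the header of a log file and show only the signature lines,
# so they can be compared with the DB and Log signatures in the database header
eseutil /ml E0000000001.log | Select-String "Signature"
```

If the log signature printed here matches the Log Signature in the database header, the log files and the database belong to the same set.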
As you can see, the log signature hasn't changed (it is still the same set of log files) but the database has a new signature, meaning that although the database has the same name and is in the same location, it is a new database! Key take-away: the checkpoint file determines whether log files are replayed and where log file replay will start, and therefore what happens during the mounting process.
Offline backups
You can create backups by copying the database files to a safe location. The steps to do so are:

1. Dismount the database (meaning it is not available to users!).
2. Copy the database file to a safe location.
3. Mount the database.
4. Perform a consistency check on the database copy.
5. If everything is OK, delete the log files.
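The steps above can be sketched from the Exchange Management Shell; the database identity and the paths here are examples, so adjust them to your own environment:

```powershell
# 1. Dismount the database (users lose access while it is dismounted)
Dismount-Database "First Storage Group\Mailbox Database" -Confirm:$false

# 2. Copy the database file to a safe location
Copy-Item "K:\sg1\mailbox database.edb" "\\backupserver\exchange\"

# 3. Mount the database again to restore service
Mount-Database "First Storage Group\Mailbox Database"

# 4. Verify that the copy is in a consistent ("Clean Shutdown") state
eseutil /mh "\\backupserver\exchange\mailbox database.edb"
```

Only delete log files (step 5) once the header dump in step 4 confirms the copy is consistent.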
Note: Starting with Exchange Server 2003 Service Pack 1, Exchange has error correcting code for checksum errors. If a certain page contains a checksum error, which is usually caused by only one bit being "flipped," Exchange Server can automatically correct this bit. When the page is flushed to disk the error is automatically corrected. Please check Microsoft Knowledge Base article 867626 for more background information on this change.
Offline Restore
When disaster strikes, we have to restore the offline backup on the Exchange server. An offline restore requires the following steps:

1. Copy the offline backup of the mailbox database file to the correct location.
2. Replay the log files.
3. Mount the database.
Now remember the first part of this article. If we leave the checkpoint file in its original location the Exchange Server will check this file and will determine that no replay is needed and the copy of the database will be mounted. If we delete the checkpoint file the Exchange Server will start replaying log files from the oldest log file available. This will result in all information after the offline backup being replayed into the copy of the database. The result will be a complete up-to-date database. If you check the event log of the Exchange server you can see which log files were replayed. For every log file an entry is logged with ID 301 from the ESE Source.
Figure 2. Log file replay events in the event viewer.

Besides just mounting the database, it is also possible to replay the log files into the database manually using the ESEUTIL tool. This can be achieved using the /R option, for Recovery:

L:\sg1>eseutil /R E00 /i /dK:\SG1

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating RECOVERY mode...
    Logfile base name: E00
            Log files: <current directory>
         System files: <current directory>
   Database Directory: K:\SG1

Performing soft recovery...
                      Restore Status (% complete)

          0    10   20   30   40   50   60   70   80   90  100
          |----|----|----|----|----|----|----|----|----|----|
          ...................................................

Operation completed successfully in 5.819 seconds.

L:\sg1>

Note: the drive letter K: immediately follows the /d option; there is no space in between.
Online Maintenance
There's an internal process in the Exchange server that's responsible for the maintenance of the mailbox database. This is called online maintenance, or OLM. OLM runs every night, by default between 1 am and 5 am. Online maintenance consists of 10 tasks that are performed during this period: five of them are specifically for the Public Folder database and five of them apply to both the Public Folder database and the mailbox database. I want to highlight two of them in this article.

OLM checks the mailbox database for deleted messages that are beyond their retention time. If there are any, they are fully deleted from the database and their pages are freed up (that is, marked as "white space"). OLM is also responsible for permanently deleting mailboxes that are beyond their retention time. Once messages and mailboxes are beyond their retention time they are really gone; the only way to get them back is by restoring an old backup of the mailbox database. Archiving solutions can help here, but they are outside the scope of this article.

When mailboxes and messages are permanently deleted, the pages they used are freed up and become available for new messages. But these free pages are scattered across the database. OLM therefore also performs an online defragmentation of the database file, consolidating the pages freed up in the previous step and, if possible, placing them in one contiguous block of free space. This makes the addition of new data to the database more efficient, since the data doesn't have to be split across multiple segments of the database.

Online maintenance is a very disk-intensive process and it should be monitored closely. If the Exchange server cannot finish the OLM in the designated timeframe, errors are logged in the event log. Carefully examine these errors and check what causes them. Make sure that no other processes are working with the mailbox database at the same time.
Typically maintenance processes are scheduled at night, such as backups or archiving. I will cover online backups in the next article in this series. The movement of large numbers of mailboxes, which is also often scheduled during the night, will also have a negative impact on the OLM. For a complete overview of the online maintenance please visit the Microsoft Exchange team blog. Key take-away: Online Maintenance is responsible for online defragmentation; it defragments inside the database only. This does not result in a decrease of the physical size of the mailbox database.
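Since the maintenance window matters when scheduling backups and mailbox moves, it is worth checking what it is actually set to; a quick sketch from the Exchange Management Shell:

```powershell
# Show the online maintenance window configured for every mailbox database
Get-MailboxDatabase | Format-List Name,MaintenanceSchedule
```

If the window overlaps with your backup jobs, adjust one or the other so they do not compete for the same disks.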
Offline defragmentation
Please note that offline defragmentation should never be considered a part of normal maintenance of an Exchange database. Only if your free space exceeds 30% of your total database size, or if you are told to do a defragmentation by Microsoft Customer Support Services, should you consider doing an offline defragmentation.

While the online maintenance does defragmentation within the database, it does not compact the database's physical size. If you have a 100 GB database and it has only 10 GB of data in it, the online maintenance will eventually free up 90 GB within the database (it will be marked as "white space" or "available space" within the logical tables of the database). The file "mailbox database.edb" as it resides on disk still remains 100 GB. To compact the database we have to perform an offline defragmentation. This can only be done using the ESEUTIL tool with the /d option and, as the name implies, it has to be done when the database is offline:

K:\sg1>eseutil /d "mailbox database.edb"

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating DEFRAGMENTATION mode...
         Database: mailbox database.edb

Defragmentation Status (% complete)
Figure 3. Error message when the database won't mount after a crash.

In this example one of the log files needed for the mounting process is no longer available (I deleted one of these log files for demonstration purposes). If this occurs, it can be checked by comparing the values in the "logs needed" section of the database header against the actual log files that are on disk. So now we have a database that is no longer consistent and that has to be fixed before it can be used again. To accomplish this, the ESEUTIL tool should be used again, but now with the /P option (for repair).

Note: Only perform this step when you have made a copy of all databases, log files and the checkpoint file and placed them in a safe location. Performing a repair action can result in data loss!

When you enter this command an error is raised stating that if you continue, information might be lost. What actually happens is that ESEUTIL checks all information in the database, including all tables and pointers. If a pointer does not point to the correct location, or the information in that location cannot be read, the pointer is deleted and the index entry is cleaned up. All information that the pointer pointed to will be lost!

K:\sg1>eseutil /p "mailbox database.edb"

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating REPAIR mode...
         Database: mailbox database.edb
Note Perform these steps only if you have backed up the database(s), log files and the checkpoint file since performing a repair action can lead to data loss!
To be 100% sure that all issues are fixed at all levels, you can create a new mailbox database and move all mailboxes from the repaired database to the new database. When all mailboxes are moved you can delete the repaired database. But please be aware that a repair action with ESEUTIL can delete data, so despite all these activities there may be circumstances in which end users' mail is lost. Unfortunately it is not predictable what, or how much, will be lost. You can find more information on the Microsoft website: http://technet.microsoft.com/en-us/library/aa997152(EXCHG.80).aspx and http://msexchangeteam.com/archive/2004/06/18/159413.aspx.
When using ESEUTIL, there is one very important issue: ESEUTIL is very powerful, but not very fast. Depending on the hardware in use for the Exchange server, as a rule of thumb ESEUTIL will process an Exchange database at a rate of between 5 GB/hour and 10 GB/hour. When you have a 250 GB Exchange database, a repair action on your database could take 25 hours or longer to complete. A subsequent offline defragmentation can also take up to 25 hours, so you will see an outage of at least 50 hours. I haven't even taken the ISINTEG tool, or moving the mailboxes to a completely new database, into account, but I'm pretty confident that this will not match your Service Level Agreement (SLA).

On an Exchange server using a single copy database, i.e. a single server or a single copy cluster, it is best to use a maximum database size of 50 GB. This will keep your database in a manageable state from a service level point of view. So why does Microsoft suggest using a maximum 200 GB database size in an Exchange Server 2007 Cluster Continuous Replication (CCR) environment? As explained in my first article, you have an extra copy of your database available in your Exchange 2007 cluster. So when the database of the active node crashes and is beyond recovery (i.e. cannot be mounted anymore), the passive copy will take over and start servicing your (Outlook) clients. When this happens you have time to start investigating the broken node of your cluster and repair it. You are not tied to the 5 GB/hour processing rate of ESEUTIL, which ought to make you more comfortable about your Service Level Agreement.
Conclusion
Exchange Server stores information in a database and uses log files for transactional processing. This way it is always possible to recover data when the database is lost, provided you have backed up the database. The tool for database maintenance is ESEUTIL. Using ESEUTIL you can check your database, recover data from the log files and replay it into your database, or repair your database when it's corrupt. When you need to compact your database, for example after you have deleted or moved multiple mailboxes, you have to perform an offline defragmentation, but remember to create a new backup afterwards.

One important thing to remember is the checkpoint file. Although it is a tiny file, only 8 KB in size, it has a major impact on the results when either recovering data using ESEUTIL or mounting the database in the Exchange Management Console.

When you run into issues in your test environment that isn't a problem. You can, and should, experiment with the database and ESEUTIL to see exactly what happens in the different scenarios. Make sure you fully understand (and document!) all the necessary steps for a successful recovery when you lose your database. However, do this in your TEST environment, not in your live/production environment. When problems happen in your production environment and you face a major outage, ask Microsoft Customer Support for help. Microsoft CSS will guide you through a successful recovery, but when you have some experience in your test environment you will at least know what's happening and why Microsoft Customer Support asks you to perform the various steps.

In my last article I will talk about online backups and VSS backups and explain the differences between these and the steps documented in this article.
by Brien Posey
In Exchange Server 2007, Information Rights Management is easy to set up once you have met the prerequisites. It is also much cheaper and easier to use than before. This is just as well, given the increasing statutory regulation intended to prevent the mishandling of confidential information in emails.
Exchange Server has supported the use of rights management since the first days of Exchange Server 2003. Even so, the Exchange Server 2003 implementation tended to be a bit annoying for users. When users attempted to open messages subject to rights management, they were prompted to enter their authentication credentials every time, prior to being able to view the message. As you can imagine, this seriously reduces efficiency if a user needs to open a significant number of rights managed messages. Fortunately, things have improved in Exchange Server 2007. Users who are running at least Microsoft Office Outlook 2007 or Windows Mobile 6.0 now have the ability to open rights managed messages without being prompted to enter their credentials. In this article, I will explain why this is possible, and how to configure rights management in an Exchange Server 2007 environment.
AD RMS Auditor
The other new Rights Management Services feature is its integration with the Active Directory Federation Service (ADFS). ADFS is beyond the scope of this article, but I wanted to at least mention it.
Figure A: Executing Get-TransportAgent.

Get-TransportAgent

Keep in mind that this command may also report the status of some unrelated transport agents, such as the Transport Rule Agent or the Journaling Agent. Now that you have verified the Prelicensing Agent's status, you can enable it. Once again, you will perform this task through the Exchange Management Shell. The process involves enabling the Prelicensing Agent, and then stopping and restarting the Exchange Transport service. After the Exchange Transport service starts back up, I recommend checking the Prelicensing Agent's status one more time, just to make sure that it is enabled. This entire process can be completed by entering the following four commands:

Enable-TransportAgent "AD RMS Prelicensing Agent"
Net Stop MSExchangeTransport
Net Start MSExchangeTransport
Get-TransportAgent
Figure B: Sign-up screen for Rights Management Service.

Of course this raises an interesting question: why should you bother with the hassle of setting up your own Rights Management Server if Microsoft offers one that you can use for free? Well, there are a couple of reasons why it is better to have your own Rights Management Server.

For starters, if you read the text on the dialog box that I talked about earlier, it indicates that this is a "trial" service that Microsoft may choose to discontinue at any time. Granted, this trial service has been available for at least a couple of years now (I can't recall exactly when Microsoft first started offering it), but you never know when Microsoft may choose to end the trial program. Microsoft has indicated that if they decide to discontinue the trial program, users will continue to have access to rights managed documents for three months after the program ends.

From a business perspective, it is simply a bad idea to encrypt e-mail messages using a service that may be discontinued at the drop of a hat. This is especially true since business needs or federal regulations often mandate the long-term storage of e-mail messages. If a message is important enough to protect with RMS, then there is a good chance that it is also important enough to hang on to for an extended period of time.

If you are still thinking of using Microsoft's free RMS service, remember that they have only committed to making documents accessible for three months after the service ends. They have not committed to notifying you that the service is no longer being offered. In all likelihood Microsoft will notify their customers if or when the free RMS service is ended, but you never know whether your spam filter might snag the notification. Microsoft probably isn't going to wait around for you to confirm that you have received notification of the service's termination.
More than likely, you would simply lose access to protected documents and messages after three months.
Conclusion
In this article, I have explained that configuring Exchange Server 2007 to support rights management is a fairly simple process, although there are a number of prerequisites that must be met. If you need additional assistance with the initial deployment and configuration of AD RMS in Windows Server 2008, then I recommend checking out my book The Real MCTS/MCITP Upgrading Your MCSA on Windows Server 2003 to Windows Server 2008 (Syngress, ISBN 978-1-59749-236-2). The book is designed to serve as a study guide for Microsoft's 70-648 certification exam, but it covers AD RMS deployment step by step.
by Ben Lye
I was recently asked to generate reports on how many users are using mobile devices to access their Exchange mailboxes, what kinds of devices are being used, and how that use has changed over time. Fortunately, Exchange 2007 includes a PowerShell cmdlet which will parse the IIS log files on a client access server and produce CSV output files detailing the Exchange ActiveSync usage. So, with a small amount of effort it's possible to extract the relevant data from the Exchange logs and produce some interesting reports. The command for exporting ActiveSync logs is intuitively called Export-ActiveSyncLog. It takes an IIS log file as input and generates six CSV files as output.
Users.csv: ActiveSync activity by user, including items sent and received, as well as device type and ID
Servers.csv: ActiveSync activity by client access server
Hourly.csv: Hour-by-hour report of ActiveSync activity
StatusCodes.csv: Summary of the HTTP response codes issued in response to ActiveSync requests
PolicyCompliance.csv: Report on device compliance with ActiveSync policy
UserAgents.csv: Summary of the different user agents used to access Exchange
For my purposes Users.csv is the most interesting part of the output, as it can be used to identify who the users are, which device types are the most popular, and how much use the service is getting. It's worth noting that the data in the reports is taken from the server's perspective, so "Total Emails Sent" refers to the number of emails that the server sent to the client device.

In an Exchange environment with multiple client access servers (such as an environment with servers in multiple Active Directory sites, or one using an internet-facing network-load-balancing array) you will need to export the logs from all client access servers which mobile devices connect to. If you have a single client access server exposed to the internet which all mobile devices connect to, you'll only need to export the logs from that one.

To use Export-ActiveSyncLog you need:

- The Exchange Server Administrator role
- Read-only access to the directory that contains the IIS log files
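Running the export itself is a one-liner; a sketch, in which the IIS log filename and the report directory are examples for your own environment:

```powershell
# Parse one day's IIS log from the client access server and write the six
# CSV reports to C:\EASReports; -UseGMT keeps timestamps in GMT rather than
# converting them to the server's local time zone
Export-ActiveSyncLog -Filename "C:\inetpub\logs\LogFiles\W3SVC1\ex090101.log" `
    -OutputPath "C:\EASReports" -UseGMT:$true
```

Run this once per log file, per client access server, and the CSV files can then be aggregated for long-term reporting.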
# Get the column names from the first line of the CSV file
$CSVFileProperties = Get-Content "$_" -TotalCount 1 | % {$_.Split(",")}
# Loop through each entry in the CSV file
# ($CSVFile, $Date and $objConnection are set up earlier in the script)
ForEach ($Entry in $CSVFile) {
    # Ignore lines with an empty Device ID
    If ($Entry."Device ID" -ne "") {
        # Construct the SQL insert statement
        $SQLString = "INSERT INTO Users ("
        Foreach ($Prop in $CSVFileProperties) {
            $SQLString = $SQLString + "[$Prop],"
        }
        $SQLString = $SQLString + "[Date]) VALUES ("
        Foreach ($Prop in $CSVFileProperties) {
            $SQLString = $SQLString + "'" + $Entry."$Prop" + "',"
        }
        $SQLString = $SQLString + "'$Date')"
        # Add the record to the database
        $null = $objConnection.Execute($SQLString)
    }
}
As I said at the beginning, it takes a small amount of effort to extract the data and get it into a format suitable for long-term reporting, but once the pieces are in place it's a relatively simple task. More information on the Export-ActiveSyncLog cmdlet can be found on the Microsoft Exchange TechNet website.
by Jaap Wesselius
In the third of Jaap's popular series on Exchange backups (see Exchange Database Technologies and Exchange Server Log File Replay), he explains online backups. These have major advantages over offline backups, since the backup application does all the logic, and work, for you.
In my previous article, Exchange Server Log File Replay, I explained how to create an offline backup, how to check the database for consistency, how to purge the log files in a safe way and how to restore an offline backup. The major disadvantages are that the database has to be offline, it's a lot of work, and you need a very good understanding of what you're doing. A much better solution is to use online backups. Online backups do all the work: they check the database and purge the log files, and they do all this online, so there's no interruption for the end users.
NTBackup
Windows NT, Windows 2000 and Windows 2003 have a neat little backup utility called NTBackup. NTBackup is a lightweight version of a (very old) Veritas Backup Exec release, but it's cheap and it does precisely what we can expect from a backup solution. When you install Exchange Server on a Windows 2003 server, the NTBackup application is extended with two ESE DLLs which make it possible to create online backups of the Exchange Server. The process is the same for all earlier versions of Windows and Exchange Server. Unfortunately Windows Server 2008 ships with a new backup application (a feature that has to be installed separately) that can create snapshot backups of your Windows server. It does not contain a plug-in for Exchange Server, so when you run Exchange Server 2007 on a Windows Server 2008 machine you have to buy a third-party application to back up your Exchange databases.

In NTBackup, and all other streaming backup applications, there are four types of backups:

- Full backup: makes a backup of the entire mailbox database and purges the log files.
- Incremental backup: only the changes made since the last full or incremental backup are backed up. Since all changes are written to the log files, only the log files created since that backup are backed up; when finished, they are purged.
- Differential backup: only the changes made since the last full backup are backed up, and the log files are not purged.
- Copy backup: the same as a full backup, but it does not interfere with your normal backup cycle, i.e. the header information is not updated and the log files are not purged.
Figure 1. For backing up the Exchange databases select the "Microsoft Information Store."

When you start the backup, the following things happen:

1. The current position of the checkpoint file is recorded. This location is needed for purging purposes: all log files older than the current log file can be deleted when the backup is finished. The location is recorded in the header of the database in the "Current Full Backup" section.
2. NTBackup begins the backup operation and starts reading the pages in the database. It starts with the first page and continues until it reaches the last page at the end of the database. During this operation all pages are checked against their checksum before they are streamed to the backup media. If a page fails its checksum test, the backup operation is aborted and an error is written to the Windows event log. New pages that are created during the online backup will still be flushed to the database, even when they are flushed to a portion of the database that has already been backed up. This is no problem, since all transactions are still written to the log files and thus create a recovery path; during a restore the Exchange server will correct this during a so-called "hard recovery."
3. When all database pages are written to the backup media the database is safe, but all information written to the log files needs to be written to the backup media as well. To achieve this, a "log file rollover" is forced. This means that the current log file E00.log is closed (or E01, E02 etc., depending on the storage group), a new log file is created and the lGeneration number (check my first article on the lGeneration number) is increased.
4. The log files from the point recorded in step 1 until the last log file created in step 3 are now written to the backup media. Because of the log file rollover you will never see the E00.log file in the backup set.
5. All log files prior to the point recorded in step 1 are now purged from the log file disk.
Finally, the "Previous Full Backup" section of the database header is updated with information about when the backup ran. NTBackup is now finished with the backup of the database.
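The same full backup can also be scripted with NTBackup's command-line interface; a sketch, in which the server name EXCH01, the storage group name and the .bkf path are examples for your own environment:

```powershell
# Run a full (normal) streaming backup of the first storage group;
# the "Microsoft Information Store" path selects the Exchange backup plug-in
ntbackup backup "\\EXCH01\Microsoft Information Store\First Storage Group" `
    /j "Exchange Full Backup" /f "D:\Backups\sg1-full.bkf" /m normal
```

Scheduling this with the Windows task scheduler gives you a nightly online backup without any operator involvement.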
If you check the header information of the database again, you will see the header information from both the full backup and the incremental backup:

K:\sg1>eseutil /mh "mailbox database.edb"

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...
-1018 Errors
One of the tasks performed by a streaming backup is a checksum check on all pages being streamed to tape. As explained in my previous article, an incorrect page can exist in the database, but as long as the page isn't touched by the Exchange server you would never know. If the backup touches the page, it sees that the checksum isn't correct. What happens next depends on the version of Exchange:

The release version of Exchange Server 2003 and earlier. The original release of Exchange Server 2003 (and earlier) does not contain any error-correcting code for CRC errors. If an issue is detected, the backup fails and an error with Event ID 474 is logged in the event log. In the description you can read: "The read operation will fail with error -1018 (0xfffffc06)." As per Microsoft Knowledge Base article 867626 you have to perform a database repair or restore the database from a previous backup.

Exchange Server 2003 SP1 and later. Service Pack 1 for Exchange Server 2003 contains error-correcting code for checksum errors and can therefore handle database pages that have an incorrect checksum. A streaming backup checks the pages and notices any inconsistency. Owing to the error-correcting code in SP1 the backup application continues, but fixes the page and writes a notice to the event log with Event ID 399: "Information Store (2672) First Storage Group: The database page read from the file "G:\SG1\priv1.edb" at offset 6324224 (0x0000000000608000) for 4096 (0x00001000) bytes failed verification. Bit 24032 was corrupted and has been corrected."

A lot of detailed information can be found in the Microsoft Exchange Server 2003 SDK documentation, which can be found here. Jerry Cochran's book "Mission-Critical Microsoft Exchange 2003: Designing and Building Reliable Exchange Servers" is also a very valuable source of information, as it describes backup technologies from a programming perspective. It is sold, for example, via Amazon.
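The single-bit correction that SP1 performs can be illustrated with a toy error-correcting code (this is not the actual ESE page format): alongside a plain checksum, store the XOR of the positions of all set bits in the page. If exactly one bit flips, XOR-ing the stored and recomputed values yields the position of the corrupted bit, which is exactly the kind of diagnosis Event ID 399 reports ("Bit 24032 was corrupted and has been corrected").

```python
# Toy single-bit ECC: "ecc" is the XOR of the indices of all set bits in
# the page. A single flip at position p changes the ecc by exactly p, so
# stored_ecc XOR recomputed_ecc recovers p. Illustrative only; the real
# ESE on-disk format is different.

def compute_ecc(page: int) -> int:
    ecc, pos = 0, 0
    while page:
        if page & 1:
            ecc ^= pos
        page >>= 1
        pos += 1
    return ecc

def correct_single_bit(page: int, stored_ecc: int) -> int:
    """Return the corrected page, assuming at most one flipped bit.
    (In this toy scheme a flip of bit 0 is not locatable.)"""
    flipped = compute_ecc(page) ^ stored_ecc
    if flipped:
        page ^= 1 << flipped  # flip the corrupted bit back
    return page
```

A clean page passes through unchanged, while a page with one flipped bit is silently repaired, which is the behaviour the SP1 backup path exhibits.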
Online restore
When performing an offline backup you have to take care of the database, the log files, and the checkpoint file yourself, and take all the necessary steps in the correct order. An online backup does all that dirty work for you. When performing an online restore, make sure that you mark the database as one that can be overwritten by a restore. This is a property of the database and can be set using the Exchange Management Console.

Figure 2. This database can be overwritten by a restore.

In NTBackup select the "Restore and Manage Media" tab and select the appropriate full backup set. After clicking the Restore button you are presented with another, important window.
Figure 3. Is this really the last restore set? If so, check the checkbox.

If you want to restore an incremental backup, DO NOT SELECT "Last Restore Set." If you do, the database is hard recovered immediately after the restore finishes and you no longer have the option of restoring an incremental backup. Even when this really is the last restore set, I prefer not to set the checkbox: this gives you the opportunity to check the database and the log files before a hard recovery is started. The log files that were written to the backup are restored to the temporary location path, c:\temp. When the restore is finished you will see all the log files in the directory c:\temp\First Storage Group (or whichever storage group you restored) and a file called restore.env. This file contains the information needed for hard recovery and can be read using the ESEUTIL tool:

C:\temp\First Storage Group>eseutil /cm

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Restore log file: C:\temp\First Storage Group
Restore Path: C:\temp\First Storage Group
Annotation: Microsoft Information Store
Server: MBXSERVER
Backup Instance: First Storage Group
Target Instance:
Restore Instance System Path:
Restore Instance Log Path:
Databases: 1 database(s)
Database Name: Mailbox Database
GUID: 10364919-F177-46D7-B5BE46D7D1B7C03F
Source Files: k:\sg1\Mailbox Database.edb
Destination Files: k:\sg1\Mailbox Database.edb
Log files range: E0000000022.log - E0000000034.log
Last Restore Time: <none>
Recover Status: recoverNotStarted
Recover Error: 0x0000000000
Recover Time: Tue Sep 16 19:34:41 2008
Operation completed successfully in 0.0 seconds.

C:\temp\First Storage Group>

The process of fixing the database with the log files that were restored from backup is called hard recovery. You can start hard recovery manually using the ESEUTIL /CC command. This replays all log files in the temporary directory into the database; the database itself is already in the correct location, as you can see in the output above. When more log files exist in the normal production environment beyond the point of the backup (and thus beyond the point of the restore), they are replayed into the production database as well. This brings your database into a consistent state up to the most recent point possible. If you set the checkmark at "Last Restore Set" in Figure 3, all of this happens automatically: the database is hard recovered, all additional log files are replayed, and when finished the database can even be mounted automatically.
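Hard recovery itself is conceptually simple: replay every restored log generation into the database in order, then continue with any newer generations still present in the production log directory. A minimal sketch of that ordering, with invented data structures (not ESE internals), where each "log" is a (generation, operation) pair:

```python
# Toy hard recovery: replay restored log generations in order, then any
# newer production logs, to roll the database forward. A generation is
# replayed only once even if it exists in both sets. Illustrative only.

def hard_recovery(db_state, restored_logs, production_logs):
    # Operations are modeled as simple key/value assignments.
    all_logs = sorted(restored_logs + production_logs)
    last_gen = None
    for gen, (key, value) in all_logs:
        if last_gen is not None and gen <= last_gen:
            continue  # this generation was already replayed
        db_state[key] = value
        last_gen = gen
    return db_state
```

The point of the sketch is the ordering guarantee: production logs newer than the backup point are applied after the restored set, which is what brings the database up to the most recent state possible.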
Note This process is the same for all backup applications from major vendors that support streaming backups with Exchange Server.
Note NTBackup cannot create VSS or snapshot backups from an Exchange Server database. It does however contain functionality to create a file level VSS backup. If you see a snapshot backup of your Exchange Server database in NTBackup it is very likely that the Exchange Server database is selected on the filesystem level and not on the Information Store level!
A snapshot is just a point in time, and at this point in time an image is created. This image can be used to roll back to in case of a disaster. The Volume Shadow Copy Service in Windows Server 2003 and later provides an infrastructure to create these point-in-time images, which are called Shadow Copies. There are two kinds of Shadow Copies:

Clone (Full Copy or Split Mirror) - a complete mirror is maintained until an application or administrator breaks the mirror. From this point on the original and the clone are fully independent of each other, and the clone is effectively frozen in time.

Copy on Write (Differential Copy) - a shadow copy is created that is a differential rather than a full copy of the original data. With Copy on Write, a shadow copy of the original data is made before the original data is overwritten. Effectively the backup copy consists of the data in the shadow copy combined with the data in the original location; both need to be available to reconstruct the original data.
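The copy-on-write mechanism is easy to demonstrate: before a block on the live volume is overwritten, its original content is saved into a differential area, and the snapshot view is reconstructed from the differential area plus the unchanged live blocks. A minimal sketch with invented names (no real VSS API involved):

```python
# Toy copy-on-write snapshot: the diff area stores a block's original
# content the first time it is overwritten after the snapshot; reading
# the snapshot prefers the diff area, falling back to the live volume.

class CowSnapshot:
    def __init__(self, volume):
        self.volume = volume          # live blocks, modified in place
        self.diff = {}                # block index -> original content

    def write(self, index, data):
        if index not in self.diff:    # first overwrite: preserve original
            self.diff[index] = self.volume[index]
        self.volume[index] = data

    def read_snapshot(self, index):
        return self.diff.get(index, self.volume[index])
```

Notice that only overwritten blocks consume space in the diff area, which is why a differential shadow copy is much cheaper than a full clone, and also why both the diff area and the live volume are needed to reconstruct the original data.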
The Volume Shadow Copy Infrastructure consists of the following components: Requestor the software that invokes the VSS and creates, breaks or deletes the shadow copy. Typically the Requestor is the backup application.
The following steps occur when a VSS backup is performed:

1. The requestor, i.e. the backup application, sends a command to the Volume Shadow Copy Service to take a shadow copy of the Storage Groups.
2. The VSS service sends a command to the Exchange writer to prepare for a snapshot backup.
3. The VSS service sends a command to the appropriate storage provider to create a shadow copy of the Exchange Storage Group. This storage provider can be a hardware storage provider or the default Windows storage provider.
4. The Exchange writer temporarily stops, or quiesces, the Storage Group: it is put in read-only mode and all data is flushed to the database. A log file roll-over is also performed to make sure that all data will be in the backup set. This hold lasts a couple of seconds while the snapshot is created (in the next step), and all write I/Os are queued.
5. The shadow copy is now created.
6. The VSS service releases the Exchange server to resume ordinary operations, and all queued write I/Os are completed.
7. The VSS service queries the Exchange writer to confirm that the write I/Os were successfully held during shadow copy creation. If the writes were not successfully held, this could mean a potentially inconsistent shadow copy; the shadow copy is deleted and the requestor is notified. The requestor can retry the shadow copy process or fail the operation.
8. If successful, the requestor verifies the integrity of the backup set (the clone copy).
9. If the clone copy integrity is good, the requestor informs the Exchange Server that the backup was successful and that the log files can be purged.
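The critical part of the sequence is steps 4 through 6: writes arriving while the writer is quiesced are queued, the snapshot is taken, and the queued writes are applied only after the thaw, so they never leak into the shadow copy. A toy model of that window (all names invented for illustration):

```python
# Toy model of the VSS freeze/snapshot/thaw window: writes issued while
# the writer is frozen are queued and applied only after the thaw, so
# the snapshot reflects the exact moment of the freeze.

class FrozenWriterModel:
    def __init__(self, database):
        self.database = list(database)
        self.frozen = False
        self.queue = []
        self.snapshot = None

    def write(self, value):
        if self.frozen:
            self.queue.append(value)   # write I/O held during the freeze
        else:
            self.database.append(value)

    def freeze(self):
        self.frozen = True             # quiesce: data flushed, reads only

    def take_snapshot(self):
        self.snapshot = list(self.database)

    def thaw(self):
        self.frozen = False
        self.database.extend(self.queue)  # complete the queued writes
        self.queue = []
```

Step 7 in the list above is exactly the check that this queueing actually held: if any write slipped through during the freeze, the snapshot would be inconsistent and must be discarded.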
Steps 1 through 6 usually take about 10 seconds; this is the time needed to create the actual snapshot. It is not the time needed to create a backup, though. A backup application still has to copy the data to another disk or to tape, which can still take hours to complete depending on the size of the databases. When the backup application has finished, the header information of the Exchange database is updated as well. Using ESEUTIL /MH, as explained in my previous article, you can check the status of the database:

K:\sg1>eseutil /mh "mailbox database.edb"

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Previous Full Backup:
        Log Gen: 13-14 (0xd-0xe) - OSSnapshot
           Mark: (0xF,8,16)
           Mark: 10/02/2008 14:27:30

Previous Incremental Backup:
        Log Gen: 13-16 (0xd-0x10) - OSSnapshot
           Mark: (0x11,8,16)
           Mark: 10/02/2008 14:34:25

Operation completed successfully in 0.171 seconds.

K:\sg1>

Note: This screen output has been edited for readability.
Microsoft does not currently offer a GUI-based VSS requestor for Exchange, and using command-line tools puts you in a difficult support situation with regard to restores. Microsoft is working on a solution, as can be read on the Microsoft product group team blog. For testing purposes Microsoft offers the Exchange Server 2007 Software Development Kit; the accompanying documentation can be found on the Microsoft website. Microsoft also provides a VSS Software Development Kit. This SDK contains a very small test tool that is able to create VSS backups of an Exchange Storage Group. This command-line tool is called BETest and can be used for testing, development, troubleshooting, or demonstrating VSS and Exchange 2007. More information regarding BETest can be found here. The Exchange product team released a blog post on troubleshooting VSS issues, which can be found here. Using the BETest tool you can make very basic VSS backups. The backup is written to disk (to a location you can enter on the command line) and the log files are purged. Since the responsibility for performing a consistency check lies with the backup application, this check is not performed when using BETest.
Figure 4. The backup on disk after using the BETest tool. The original database location was on disk K:\, the log files were on disk L:\.

When the database contains a faulty page you will never notice it during the backup; even the backup copy of the database on disk will contain the faulty page. You have to perform a consistency check on the backup of the database manually, using the ESEUTIL tool with the /K option:

C:\BETest\VOLUMEk\sg1>eseutil /k "mailbox database.edb"

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating CHECKSUM mode...
        Database: mailbox database.edb
  Temp. Database: TEMPCHKSUM2496.EDB

File: mailbox database.edb

Checksum Status (% complete)
          0    10   20   30   40   50   60   70   80   90  100
          |----|----|----|----|----|----|----|----|----|----|
WARNING: page 455 checksum verification failed but the corruption ( bit 35816 ) can be corrected
...................................................

770 pages seen
VSS Restore
A Volume Shadow Copy backup is created at the storage group level. Exchange Server 2003 is very rigid about restoring: it can restore only to the same location, on the same server, and only at the Storage Group level. Exchange Server 2007 is much more flexible: it can restore to the same location, but also to other servers or to a Recovery Storage Group, for example. Restoring a backup is a straightforward process and basically the same as an online restore: the database is restored, the log files in the backup set are also restored, and hard recovery takes place to bring the database into a consistent state. If needed, additional log files can be replayed as well to bring the database up to date to the last possible moment.
Conclusion
Creating online backups has major advantages over an offline backup solution, since the backup application handles all the logic for you. It also checks the database for inconsistencies, and if checksum errors are found they are automatically corrected before the data is sent to the backup set. Recovery from an online backup is much easier than recovery from an offline backup: the database and the log files are automatically recovered from the backup set using the so-called hard recovery, and additional log files created after the last backup set was made are automatically replayed, bringing the database up to date to the last moment. A streaming backup is a default Windows Server 2003 solution, but Microsoft is shifting its focus from streaming backups to VSS (snapshot) backups because of the dramatic performance increase VSS backups offer. Microsoft offers System Center Data Protection Manager, which can make backups, but it works very differently from the standard VSS backup solutions that third-party vendors offer. In these three articles (Exchange Database Technologies, Exchange Server Log File Replay, and this one) I have explained some basics about Microsoft Exchange database technologies, replaying log files, recovery techniques, and backup and restore solutions. It's your turn now to start working with this information in a lab environment and to start thinking about a disaster recovery solution for your Exchange Server environment. Document all the steps needed when a disaster strikes and perform regular fire drills on disaster recovery. Only then will you be able to recover quickly when something bad happens in your Exchange Server environment.
by Brien Posey
Brien Posey ponders an "off the cuff" remark that Exchange 2007 runs so well with a default configuration that you don't even have to worry about optimizing Exchange any more. He decides that there is actually plenty that can be done to help Exchange to perform even better.
A couple of weeks ago, I was in Las Vegas presenting several Exchange Server sessions at one of the many IT conferences that they have out there. After one of my sessions, I was having a conversation with another conference attendee. During the conversation, he made an off-the-cuff remark that Exchange 2007 runs so well with a default configuration that you don't even have to worry about optimizing Exchange any more. Maybe it was because I was still jet lagged, or because my mind was still on the session that I had just presented, but the remark didn't really register with me at the time. Later on though, I started thinking about his comment, and while I don't completely agree with it, I can kind of see where he was coming from. I started working with Exchange at around the time when people were first moving from Exchange 4.0 to version 5.0. At the time, I was working for a large, enterprise-class organization with roughly 25,000 mailboxes. Although those mailboxes were spread across several Exchange Servers, the servers' performance left a lot to be desired. I recall spending a lot of time trying to figure out things that I could do to make the servers perform better. Although there is no denying how poorly our Exchange Servers performed, I think that the server hardware had more to do with the problem than Exchange itself. For example, one night I was having some database problems with one of my Exchange Servers. Before I attempted any sort of database repair, I decided to make a backup copy of the database. In the interest of saving time, I copied the database from its usual location to another volume on the same server. All of the server's volumes were using "high speed" RAID arrays, but it still took about three and a half hours to back up a 2 GB information store. My point is that Exchange has improved a lot since the days of Exchange 5.0, but server hardware has improved even more dramatically.
Today multi-core CPUs and terabyte hard drives are the norm, but they were more or less unheard of back in the day. When the guy at the conference commented that you don't even have to worry about optimizing Exchange 2007, I suspect that perhaps he had dealt with legacy versions of Exchange on slow servers in the past. In contrast, Exchange 2007 uses a 64-bit architecture, which means that it can take full advantage of the CPU's capabilities and that it is no longer bound by the 4 GB memory limitation imposed by 32-bit operating systems. Although Exchange 2007 does perform better than its predecessors, I would not go so far as to say that optimization is no longer necessary. Think about it this way: if you've got an enterprise-grade Exchange Server, but you've only got 20 mailboxes in your entire organization, then that server is going to deliver blazing performance. If you start adding mailboxes though, you are eventually going to get to the point at which the server's performance starts to suffer. This illustrates two points. First, the Exchange 2007 experience is only as good as what the underlying hardware is capable of producing. Second, even if your server is running well, future growth may require you to optimize the server in an effort to maintain the same level of performance that you are enjoying now. Fortunately, there are some things that you can do to keep Exchange running smoothly. I will spend the remainder of this article discussing some of these techniques.
Figure A.
Disk I/O
If you are an Exchange veteran, then you might have been a little surprised when I listed running the ExBPA as the single most important step in optimizing an Exchange Server. In every previous version of Exchange, the most important thing that you could do from a performance standpoint was to optimize disk I/O. There are a couple of reasons why I didn't list optimizing disk I/O as my top priority. First, if an Exchange Server has a serious problem in which the configuration of the disk subsystem is affecting performance, then the ExBPA should detect and report the issue when you run a performance baseline scan. More importantly though, you can always count on your users to tell you when things are running slowly. Another reason why disk I/O isn't my top priority is that Exchange Server 2007 uses a 64-bit architecture, which frees it from the 4 GB address space limitation imposed by 32-bit operating systems. Microsoft has used this larger memory model to design Exchange 2007 in a way that drives down read I/O requirements. This improves performance for database reads, although the performance of database writes has not really improved. Of course that doesn't mean that the old rules for optimizing a disk subsystem no longer apply. It is still important to place the transaction logs and the databases onto separate volumes on high-speed RAID arrays, for both performance and fault tolerance reasons. Keep in mind that you can get away with using two separate volumes on one physical disk array, but doing so is not an ideal arrangement unless you divide your bank of disks into separate arrays so that each volume resides on separate physical disks. If the same physical disks are used by multiple volumes, then fault tolerance becomes an issue unless the array is fault tolerant: if there is a failure on the array and both volumes share the same physical disks, then both volumes will be affected by the failure.

Even if the volume is fault tolerant, spanning multiple volumes across the same physical disks is sometimes bad for performance. In this type of situation, the volumes on the array compete for disk I/O. If you separate the volumes onto separate arrays, or onto separate parts of an array, you eliminate that competition: each volume has free rein over the disk resources that have been allocated to it.
Storage Groups
As was the case in previous versions of Exchange, Microsoft has designed Exchange 2007 so that multiple databases can exist within a single storage group, or you can dedicate a separate storage group to each individual database. Microsoft recommends that you limit each storage group to hosting a single database. This does a couple of things for you. First, it allows you to use continuous replication, which I will talk about later on. Second, it helps to keep the volume containing the transaction logs for the storage group from becoming overwhelmed by an excessive number of transactions coming from multiple databases, assuming that you use a separate volume for the transaction logs of each storage group.
Mailbox Distribution
Since Exchange Server can accommodate multiple databases, you have the option of either placing all of your mailboxes into a single database, or distributing the mailboxes among multiple databases. Deciding which method to use is actually one of the trickier points of optimizing Exchange Server's performance. There are really two schools of thought on the subject. Some people believe that, as long as there isn't an excessive number of mailboxes, it is best to place all of the mailboxes within a single store. The primary advantages of doing so are ease of management and simplified disaster recovery. There is, however, also an argument to be made for distributing Exchange mailboxes among multiple stores (in multiple storage groups). Probably the best argument for doing so is that if a store-level failure occurs, not all of your mailboxes will be affected; only a subset of the user base will be affected by the failure. Furthermore, because the store is smaller than it would be if it contained every mailbox in the entire organization, the recovery process tends to be faster. Another distinct advantage of using multiple mailbox stores is that, assuming each store is placed on separate disks from the other stores and the transaction logs are also distributed onto separate disks, the I/O requirements are greatly decreased, because only a fraction of the total I/O is being directed at any one volume. Of course the flip side is that this type of configuration costs more to implement because of the additional hardware requirements. As you can see, there are compelling arguments for both approaches, so which one should you use? It really just depends on how much data your server is hosting. Microsoft recommends that you cap the mailbox store size based on the type of backups that you are using. For those running streaming backups, Microsoft recommends limiting the size of a store to no more than 50 GB.
For organizations using VSS backups without a continuous replication solution in place, Microsoft suggests limiting database size to 100 GB. This recommendation increases to 200 GB if you have a continuous replication solution in place. There are a couple of things to keep in mind about these numbers though. First, the limits above address manageability, not performance. Second, the database's size may impact performance, but it ultimately comes down to what your hardware can handle. Assuming that you have sufficient hardware, Exchange 2007 will allow you to create databases with a maximum size of 16 TB. If you do decide to split an information store into multiple databases, there are some hardware requirements that you will need to consider beyond just the disk configuration. For starters, each database is going to consume some amount of CPU time, although the actual amount of additional CPU overhead varies considerably. You are also going to have to increase the amount of memory in your server as you increase the number of storage groups. Microsoft's guidelines state that a mailbox server should have at least 2 GB of memory, plus a certain amount of memory for each mailbox user. The table below illustrates the per-user memory requirements:

User Type   Definition                                        Additional Memory Required
Light       5 messages sent / 20 messages received per day    2 MB
Medium      10 messages sent / 40 messages received per day   3.5 MB
Heavy       Anything above average                            5 MB
The 2 GB base memory and the per-user mailbox memory requirements make a couple of assumptions. First, it is assumed that the server is functioning solely as a mailbox server. If other server roles are installed, the base memory requirement increases to 3 GB, while the per-user mailbox requirements remain the same.
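Those sizing rules are simple enough to compute directly. The sketch below encodes the figures quoted above (2 GB base, 3 GB when additional roles are installed, and 2 / 3.5 / 5 MB per light / medium / heavy mailbox); treat the numbers as this article's guidance, not a current Microsoft formula.

```python
# Mailbox server memory estimate from the per-user figures in the text:
# 2 GB base (3 GB if other roles are installed) plus 2 / 3.5 / 5 MB per
# light / medium / heavy mailbox. Result is in megabytes.

PER_USER_MB = {"light": 2.0, "medium": 3.5, "heavy": 5.0}

def mailbox_memory_mb(users, mailbox_only=True):
    """users: dict mapping profile name -> number of mailboxes."""
    base_mb = 2048 if mailbox_only else 3072
    return base_mb + sum(PER_USER_MB[p] * n for p, n in users.items())
```

For example, a dedicated mailbox server with 100 light, 200 medium, and 50 heavy users would need roughly 2048 + 200 + 700 + 250 = 3198 MB, i.e. just over 3 GB.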
[Table: additional memory required per number of storage groups, in brackets of 1-4, 5-8, 9-12, 13-16, 17-20, 21-24, 25-28, 29-32, 33-36, 37-40, 41-44, 45-48, and 49-50 storage groups.]
Backups
I don't know about you, but when I think about optimizing a server's performance, backups are not usually the first thing that comes to mind. Even so, my experience has been that you can get a big performance boost just by changing the way that nightly backups are made. Let me explain. Most organizations still seem to be performing traditional backups, in which the Exchange Server's data is backed up to tape late at night. There are a couple of problems with this though. For starters, the backup process itself places a load on the Exchange Server, which often translates into a decrease in the server's performance until the backup is complete. This probably isn't a big deal if you work in an organization that has a nine to five work schedule, but if your organization is a 24-hour-a-day operation then a performance hit is less than desirable.
Conclusion
In this article, I have explained that while Exchange Server has improved over the years, there are still a lot of things that you can do to help Exchange to perform even better. This is especially important in larger organizations in which the server's finite resources are being shared by many different users.
by Jaap Wesselius
It can happen at any time: you get a request, as admin, from your company to provide the contents of somebody's mailbox from a backup set as part of an investigation. The Recovery Storage Group is usually the easiest way to do this. It may mean using either the Exchange Management Console or the Exchange Management Shell. Jaap explains all.
Very recently I was at a customer site where the Exchange administrators had received a request from the legal department to retrieve somebody's mailbox from several backup sets and create .PST files of the inbox. The .PST files had to be handed over to the legal department. Of course the particular user was not aware of these actions. The Exchange administrator retrieved the Exchange database files from backup (130 GB!) but had absolutely no idea how to retrieve the mailbox data from the files. The Recovery Storage Group in Exchange Server 2003 and Exchange Server 2007 can be very useful in a situation like this.
When the Recovery Storage Group is created, the database that needs to be restored must be selected. In the Exchange System Manager, right-click the Recovery Storage Group and select "Add Database to Recover." All mailbox databases that are known in Active Directory are displayed; Joe's mailbox store, in this example "Mailbox Store (2003SRVR)," should be selected. After clicking OK the mailbox store properties are shown; leave these at their default values and click OK.
DO NOT MOUNT THE DATABASE The database is now created in the Recovery Storage Group in Active Directory. The next step is to restore the database from the backup set. No additional steps are needed here. When a Recovery Storage Group is created the behaviour of an online restore is changed. The database is not restored to its original location, but it is automatically redirected to the Recovery Storage Group location that's created earlier. Just restore the "Mailbox Store (2003SRVR)," it will be automatically placed in the right directory.
Figure 1. The Recovery Storage Group directory and its contents right after a restore operation. In a normal Storage Group in Exchange Server 2003 the log files have a prefix like E00, E01, E02 or E03. In a Recovery Storage Group the prefix is R00. This way it is always clear when a particular directory contains data from a Recovery Storage Group.
Figure 2. After mounting, the database in the Recovery Storage Group contains disconnected mailboxes.

The last step is to create a .PST file from Joe Sixpack's mailbox. The Exchange System Manager contains some functionality to copy or merge this data into the running mailbox of the user; to create a .PST file, the free Microsoft tool EXMERGE has to be used. Exmerge can be downloaded from the Microsoft website: http://www.microsoft.com/downloads/details.aspx?familyid=429163EC-DCDF47DC-96DA-1C12D67327D5&displaylang=en. More information about the usage of Exmerge can be found in Microsoft Knowledge Base article 327304: http://support.microsoft.com/?kbid=327304
Figure 3. Select the database from the Recovery Storage Group.

Start the Exmerge.exe utility from the local disk. In the Microsoft Exchange Mailbox Merge Wizard select "Extract or Import (Two Step Procedure)" and click Next. Select "Step 1: Extract data from an Exchange Server Mailbox" and click Next. In the Source Server window enter the server name and the domain controller name and click Next. In the Database Selection window select the mailbox store in the Recovery Storage Group. This is the "old" store from which Joe's data needs to be extracted. In the next window select Joe Sixpack's mailbox.

Figure 4. Select Joe Sixpack's mailbox to retrieve the data from.

In the next window select the locale; the default setting is "English (US)." The last mailbox login locale can also be selected by clicking the checkbox. Select the folder (directory) where the .PST file will be stored. Any directory can be used.
Joe Sixpack's mailbox that was in the backup set is now saved in the .PST file on local disk. This .PST file can be handed over to the legal department for further investigation while Joe Sixpack never noticed anything about the whole operation.
Now create the database in the Recovery Storage Group using the New-MailboxDatabase cmdlet. This database has the same name as the original mailbox database that contains Joe Sixpack's mailbox:

[PS] C:\>New-MailboxDatabase -MailboxDatabaseToRecover "Mailbox Database" -StorageGroup MBXSERVER\RSG -EdbFilePath "k:\rsg\mailbox database.edb"

Name                 Server      StorageGroup    Recovery
----                 ------      ------------    --------
Mailbox Database     MBXSERVER   RSG             True
The newly created database in the Recovery Storage Group should be allowed to be overwritten by a restore, so set this property:

[PS] C:\>Set-MailboxDatabase -Identity "MBXSERVER\RSG\Mailbox Database" -AllowFileRestore:$TRUE

Before the database can be mounted, it has to be restored from the backup set. When a Recovery Storage Group exists, the restore is automatically redirected to it by the Information Store, just as in Exchange Server 2003. Restore the database using your backup application. When the database has been restored into the Recovery Storage Group directory on disk, the database can be mounted:

[PS] C:\>Mount-Database -Identity "MBXSERVER\RSG\Mailbox Database"

Now the Recovery Storage Group is created and the mailbox database is created, restored from the backup set, and mounted. It is still invisible to the end users, and no one except the administrator can access the mailbox database in the Recovery Storage Group. Remember that it is only accessible through the Management Shell, so colleagues using the regular Exchange Management Console will not notice anything either.
When the database is mounted, the database statistics can be requested of the database in the Recovery Storage Group with the Get-MailboxStatistics cmdlet. These are disconnected mailboxes and are unknown to Active Directory. As far as Active Directory is concerned, the actual mailboxes are located in the "mailbox database.edb" in the First Storage Group. Joe Sixpack's mailbox is clearly visible.

[PS] C:\>Get-MailboxStatistics -Database "RSG\Mailbox Database"

DisplayName                                            ItemCount
-----------                                            ---------
Test User 2                                            145
Test User 4                                            122
Test User 3                                            129
Eric James                                             64
Test User 9                                            85
Administrator                                          1
Joel Olson                                             40
Test User 7                                            99
Test User 6                                            105
Test User 5                                            111
Test User 8                                            93
Test User 10                                           72
Joe Sixpack                                            56
Microsoft System Attendant                             0
SystemMailbox{10364919-F177-46D7-BEB5-3FC0B7D155A9}    402
Test User 1                                            163

[PS] C:\>

(The output also includes StorageLimitStatus and LastLogonTime columns, not shown here.)

Exchange Server 2007 SP1 has a new cmdlet to export data to a .PST file, and as such it is the successor of ExMerge, which was used in Exchange Server 2003 and earlier. There are a few caveats with the Export-Mailbox cmdlet. To use this cmdlet, Outlook 2003 SP2 (or higher) has to be installed on the server where the cmdlet is run. This can be a management workstation (or a management server) with only the Exchange Management Tools installed. It must also be a 32-bit machine, since the 64-bit Exchange Management Tools cannot cooperate with the 32-bit Outlook client software. Also, the Export-Mailbox cmdlet runs only against a normal Exchange Server 2007 SP1 database, not against databases mounted in the Recovery Storage Group.
To overcome the second limitation a special "recovery mailbox" can be created in an ordinary Storage Group. When this mailbox is created the contents of Joe Sixpack's mailbox can be restored into this recovery mailbox:
Figure 5. The progress bar shown during the Recover Mailbox operation. When the Restore Mailbox operation is finished, the contents of Joe Sixpack's mailbox are available in the Recovery Mailbox. When using Outlook to access this mailbox, the contents are available in the \Recover folder in the mailbox. The \Recover folder in the Recovery Mailbox can be exported using Outlook: in the File menu select the "Import and Export" option and then select "Export to a file." Follow the wizard to complete the export to a .PST file. When Outlook 2003 SP2 (or higher) is installed on the Management Server, the Export-Mailbox cmdlet can also be used:

Export-Mailbox -Identity "Recovery Mailbox" -PSTFolderPath c:\temp\joesix.pst -IncludeFolders "\Recover"

This will export the contents of the \Recover folder, and therefore the recovered mailbox from Joe Sixpack, to a .PST file in the c:\temp directory on the local disk.
Figure 6. The contents of the \Recover folder, and therefore of Joe Sixpack's mailbox. Joe Sixpack did, indeed, send out sensitive information.
Figure 7. Select Exchange Tasks in the Recovery Storage Group to restore mailbox data. After selecting Exchange Tasks, select "Recover Mailbox Data" (the only option available) and select the destination mailbox store. This should be the database that holds Joel's mailbox, in this case the "Mailbox Store (2003SRVR)" in the First Storage Group. Now you have two options to continue:

Merge Data: when you select "Merge Data" the data from the mailbox copy in the Recovery Storage Group will be merged into the original data. If an item still exists it will be detected and will not be overwritten. Items that no longer exist (i.e., that have been deleted) will be recreated during the merge process.

Copy Data: when you select "Copy Data" the data from the mailbox copy in the Recovery Storage Group will be copied to a separate location, i.e. a special folder named "Recovered Data" followed by the date and time of the recovery process, in the user's mailbox.
Figure 8. Merge the data into the mailbox or copy it to a "recovered items" folder in the user's mailbox. When finished, you have recovered the mailbox data from the backup in the Recovery Storage Group into the user's mailbox. When using Exchange Server 2007 you have to use the Exchange Management Shell, or the Database Recovery Management tool found under Tools in the Exchange Management Console. Again, restore the database that holds Joel's mailbox into the Recovery Storage Group. After restoring, you can use the Restore-Mailbox cmdlet in the Exchange Management Shell to restore the content to Joel's mailbox:

[PS] C:\>Restore-Mailbox -Identity "Joel Olson" -RSGDatabase "MBXSERVER\RSG\Mailbox Database"

Confirm
Are you sure you want to perform this action?
Recovering mailbox content from the mailbox 'Joel Olson' in the recovery database 'MBXSERVER\RSG\Mailbox Database' into the mailbox for 'Joel Olson ([email protected])'. The operation can take a long time to complete.
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"):a
[PS] C:\>

You can also use the Exchange Management Console to recover mailbox items to Joel's mailbox. In the left-hand pane of the Exchange Management Console select the Tools option, and in the middle pane select "Database Recovery Management" under "Disaster Recovery Tools". If needed, enter the Exchange Server's name and the Domain Controller's name, then click Next to continue. After a couple of seconds some tasks are shown; select the "Merge or Copy contents" task. The next window shows which database is mounted in the Recovery Storage Group; this is the Mailbox Database that was restored earlier. Click "Gather Merge Information" to continue.
Conclusion
When you need to retrieve a mailbox or mailbox content from a backup set, the Recovery Storage Group is a great tool to use. Depending on the task, you can perform everything in the Exchange Management Console, or you may need to perform some actions in the Exchange Management Shell. In both Exchange Server 2003 and Exchange Server 2007 it is possible to create a .PST file from a restored mailbox, but in Exchange Server 2007 it's a bit more involved because of the usage of the Exchange Management Shell. Before you can use the Recovery Storage Group efficiently, I would advise trying it a couple of times, to see in which scenarios it can help you in your day-to-day Exchange management tasks. Make sure that you document all steps very well; that way you know exactly what steps to perform when a request is made to restore a single mailbox, either to its original location or to a .PST file.
by Ben Lye
Because the Outlook auto-complete cache uses X.500 addresses for e-mail sent to addresses within the Exchange organization, messages sent to a re-created mailbox will bounce even after you give the user account all the old SMTP addresses. This is because the old X.500 address stored in the auto-complete cache no longer exists, which causes Exchange to reject the messages. Ben Lye explains how to solve this common problem.
A little while ago I had a case where, after all other troubleshooting had failed, I had to solve a mailbox corruption problem by exporting the mailbox content to a PST file, removing the existing mailbox, recreating a new mailbox, then finally importing the PST file back in. This solved the immediate problem of the corrupt mailbox, but created a new one: when Outlook users tried to e-mail the user, either by replying to an existing message or by using Outlook's auto-completion of the user's e-mail address, the message would bounce back to the sender. This happened even though I had re-added all the SMTP addresses that the user previously had. E-mail from external senders was being received properly, and replies to new messages were OK. This problem occurs because while the Outlook auto-complete cache stores SMTP addresses for e-mail sent to external addresses, it uses X.500 addresses for e-mail sent to addresses within the Exchange organisation. Even though we had given the user account all the old SMTP addresses, the old X.500 address which Outlook was sending to was missing, and this was causing Exchange to reject the messages. The use of X.500 addresses goes back to before Exchange 2000, when previous versions of Exchange maintained their own LDAP directory. Since Exchange 2000 the mailbox's X.500 address has been stored in the legacyExchangeDN attribute in Active Directory. The legacyExchangeDN value is set when a mailbox is created, and includes the name of the Exchange administrative group where the mailbox belongs. LegacyExchangeDN values typically look like this:

/o=Organisation/ou=Administrative Group/cn=Recipients/cn=Username

Because the legacyExchangeDN value includes the administrative group name, changes to admin group names will influence legacyExchangeDN values.
For example, when you upgrade from Exchange 2003 to Exchange 2007 your user-defined admin groups are replaced by a single admin group named "Exchange Administrative Group (FYDIBOHF23SPDLT)". Existing mailboxes are unaffected, but mailboxes created after the upgrade will use the new admin group name in their legacyExchangeDN values. (Incidentally, if you've ever wondered why the Exchange 2007 admin group has this name, or what it means, it's the text EXCHANGE12ROCKS with all the characters shifted to the right by one!) The current X.500 address of a mailbox can be retrieved from Active Directory using a tool such as ADSIEdit or LDP.exe, or by using the Exchange Management Shell:

[PS] C:\>Get-Mailbox juser | fl LegacyExchangeDN

LegacyExchangeDN : /o=Example/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=juser

[PS] C:\>

The X.500 address of a message sender can be retrieved using a tool such as Microsoft Exchange Server MAPI Editor to open a message and inspect the PR_SENDER_EMAIL_ADDRESS property:
Alternatively, you can use a hex editor to open the Outlook auto-completion cache file and retrieve X.500 addresses from there. The cache is stored in a file in the user's profile, typically

%userprofile%\AppData\Roaming\Microsoft\Outlook\[Outlook profile name].NK2

on Windows Vista, or

%userprofile%\Application Data\Microsoft\Outlook\[Outlook profile name].NK2

on Windows 2000, XP or 2003. There are also other tools available on the Internet which allow viewing and editing of the content of the auto-completion cache file, but they may not expose the X.500 addresses.

In my case, due to our upgrade to Exchange 2007, the user's legacyExchangeDN value had changed from this on the old mailbox (which had been created prior to the Exchange 2007 upgrade):

/o=Example/ou=USA/cn=Recipients/cn=juser

to this on the new mailbox:

/o=Example/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=juser

Any new e-mail sent from Outlook using the previously cached X.500 address was rejected because the old X.500 address no longer existed in the organisation, producing a bounce like this:

Diagnostic information for administrators:
Generating server: demo01.example.com
[email protected]
#550 5.1.1 RESOLVER.ADR.ExRecipNotFound; not found ##
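A legacyExchangeDN is just a slash-delimited list of key/value pairs, so it is easy to see programmatically which component changed between the old and new mailbox. The following is an illustrative Python sketch (the cmdlets themselves require an Exchange environment, so this only models the string format, using the example values from the text):

```python
def parse_legacy_exchange_dn(dn):
    """Split a legacyExchangeDN such as
    /o=Example/ou=USA/cn=Recipients/cn=juser
    into (key, value) pairs."""
    parts = [p for p in dn.split("/") if p]
    parsed = []
    for part in parts:
        key, _, value = part.partition("=")
        parsed.append((key.lower(), value))
    return parsed

old_dn = "/o=Example/ou=USA/cn=Recipients/cn=juser"
new_dn = ("/o=Example/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)"
          "/cn=Recipients/cn=juser")

# dict() keeps the last value for duplicate keys, so "cn" resolves to
# the username. The admin-group component ("ou") is the only part that
# differs, which is exactly why the cached X.500 address stops resolving.
old_parts = dict(parse_legacy_exchange_dn(old_dn))
new_parts = dict(parse_legacy_exchange_dn(new_dn))
print(old_parts["ou"])  # USA
print(new_parts["ou"])  # Exchange Administrative Group (FYDIBOHF23SPDLT)
```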
To add the proxy address in the shell we use the Get-Mailbox and Set-Mailbox cmdlets:

[PS] C:\>$ProxyAddresses = (Get-Mailbox juser).EmailAddresses
[PS] C:\>$ProxyAddresses += [Microsoft.Exchange.Data.CustomProxyAddress]("X500:/o=Example/ou=USA/cn=Recipients/cn=juser")
[PS] C:\>Set-Mailbox juser -EmailAddresses $ProxyAddresses

Breaking these commands down:

[PS] C:\>$ProxyAddresses = (Get-Mailbox juser).EmailAddresses

...retrieves the existing proxy addresses for the mailbox and stores them in the $ProxyAddresses variable.

[PS] C:\>$ProxyAddresses += [Microsoft.Exchange.Data.CustomProxyAddress]("X500:/o=Example/ou=USA/cn=Recipients/cn=juser")

...adds the new X.500 proxy address to the variable which contains the existing proxy addresses.

[PS] C:\>Set-Mailbox juser -EmailAddresses $ProxyAddresses

...updates the mailbox with the new set of proxy addresses.

This technique can be used to solve this problem in a number of other scenarios where the legacyExchangeDN attribute has changed, and is not limited to mailboxes. For example, if someone leaves the Exchange organisation and you want their e-mail to go to an external e-mail address, you would create a contact record with the necessary SMTP proxy addresses. If you also added the legacyExchangeDN of the old mailbox to the contact record as an X.500 proxy address, Outlook users wouldn't get bounced messages when they used the old entry in their auto-complete caches.
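The pattern behind those three commands is: read the existing address set, append the new entry, write the whole set back. A useful refinement is to make the append idempotent so running the procedure twice cannot create a duplicate. A hedged Python sketch of that logic, with proxy addresses modeled as plain strings (the real cmdlets use typed ProxyAddress objects):

```python
def add_proxy_address(addresses, new_address):
    """Return the proxy-address list with new_address appended,
    unless an equivalent entry (case-insensitive) already exists."""
    existing = {a.lower() for a in addresses}
    if new_address.lower() in existing:
        return list(addresses)
    return list(addresses) + [new_address]

mailbox = ["SMTP:[email protected]", "smtp:[email protected]"]
x500 = "X500:/o=Example/ou=USA/cn=Recipients/cn=juser"

mailbox = add_proxy_address(mailbox, x500)
# Running it again is harmless: the duplicate is detected and skipped.
mailbox = add_proxy_address(mailbox, x500)
print(len(mailbox))  # 3
```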
by Jaap Wesselius
Jaap starts a series on the necessary steps to migrate an existing installation of Exchange to Exchange Server 2007. Of course it's simple! Everybody says so, but it is in the detail where you can get caught out.
There are several reasons to migrate an existing Exchange Server 2003 setup to Exchange Server 2007. Besides the benefit of the new features you get when Outlook 2007 is used in combination with Exchange Server, some of the most compelling business cases for doing so are:

the availability options of the mailbox servers
the scalability of the 64-bit architecture
the option to connect your PABX infrastructure to your messaging infrastructure.
Currently, I'm involved in consolidating and centralizing a 25,000-mailbox Exchange system and upgrading it to Exchange Server 2007. The customer intends that this new Exchange Server 2007 environment should be the foundation for their Unified Messaging infrastructure. There are several papers available on the Internet about making the transition to Exchange Server 2007 that imply that it is a simple operation. But is it really that simple?
Figure 1. The existing Windows Server 2003 and Exchange Server 2003 infrastructure. Internal Outlook 2003 clients connect directly to the Exchange Server 2003 mailbox server using the default MAPI protocol. The server hosting the user's mailbox is configured in the profile that Outlook 2003 is using. Laptop users can either use a VPN connection to connect to the internal network or use the "RPC over HTTPS" functionality in Outlook 2003. A couple of clients are also using Outlook 2007, but, because only Exchange Server 2003 is used in the company, additional new functionality such as Autodiscover or the new Exchange Web Services for free/busy and Out-of-Office information is not available. Home PCs can use the Exchange Server 2003 Outlook Web Access. PDAs use the company's preferred mobile carrier to access the Exchange Server ActiveSync service.
Figure 2. Windows Mobile 6 working with an Exchange Server 2003. To provide a secure connection for all outside clients, an SSL connection is used. This is a simple SSL certificate with a single name: https://webmail.inframan.nl. If you are using Windows Mobile for ActiveSync in your Exchange environment, please make sure that the certificate you're using is a supported one. Some vendors offer certificates that work fine with websites, but are not trusted by Windows Mobile devices.
Note If you want to test your connectivity for Windows Mobile you can download the Windows Mobile emulator from Microsoft. This emulator can be run on any Windows computer and as long as you have network connectivity you can use the emulator. Figure 2 is a screenshot from my laptop running at home, while the Exchange 2003 environment is running in a datacenter 100 miles from my home, just over an ADSL connection. You can download the Windows Mobile emulator at Microsoft Device Emulator 3.0 Standalone Release.
In the Exchange 2003 organization there's one Administrative Group. An Administrative Group is a management boundary. With Administrative Groups you can use "delegation of control." Suppose there are multiple messaging departments in your organization, and each department has control over its own Exchange Servers. In this case you can use multiple Administrative Groups, one for each department. Every messaging administrator in a department has full control over his own Exchange Servers, and not over other departments' Exchange Servers.
The following steps need to be performed to prepare your Exchange Server 2003 environment for the implementation of Exchange Server 2007:

The Domain Controllers and Global Catalog servers need to be at Windows Server 2003 SP1 level or higher.
The Active Directory domain functional level needs to be "Windows 2000 native mode".
No Exchange Server 5.5 may exist in your Exchange organization; to enforce this requirement the Exchange organization needs to be running in "native mode".
The Active Directory schema needs to be updated to support Exchange Server 2007.
The Active Directory organization needs to be upgraded to include Exchange Server 2007 permissions and system objects.
The Active Directory domain needs to be upgraded to include Exchange Server 2007 permissions and system objects.
Schema version (rangeUpper)    Corresponding Exchange version
4397                           Exchange Server 2000 RTM
4406                           Exchange Server 2000 Service Pack 3
6870                           Exchange Server 2003 RTM
6936                           Exchange Server 2003 Service Pack 2
10637                          Exchange Server 2007 RTM
11116                          Exchange Server 2007 Service Pack 1
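The schema level is read from the rangeUpper attribute of the ms-Exch-Schema-Version-Pt object in Active Directory. Mapping a value you have read back to an Exchange version is a simple lookup; here is an illustrative Python sketch, using the commonly documented rangeUpper values:

```python
# Commonly documented rangeUpper values of ms-Exch-Schema-Version-Pt
# for each Exchange schema level.
SCHEMA_VERSIONS = {
    4397: "Exchange Server 2000 RTM",
    4406: "Exchange Server 2000 SP3",
    6870: "Exchange Server 2003 RTM",
    6936: "Exchange Server 2003 SP2",
    10637: "Exchange Server 2007 RTM",
    11116: "Exchange Server 2007 SP1",
}

def exchange_level(range_upper):
    """Map a rangeUpper value read from Active Directory to the
    Exchange version that wrote it."""
    return SCHEMA_VERSIONS.get(range_upper, "unknown schema version")

print(exchange_level(11116))  # Exchange Server 2007 SP1
```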
Note If you have multiple domain controllers in your Exchange Server environment you have to wait for the Domain Controller replication to finish.
Figure 4. Check the schema version. This schema is at the Exchange Server 2007 SP1 level. After upgrading the schema, the current Exchange Server 2003 organization can be upgraded to Exchange Server 2007. This is achieved by running the following command from the Exchange Server 2007 installation media:

Setup.com /PrepareAD

Running this command will configure the global Exchange objects in Active Directory (residing in the Configuration container of Active Directory), create the Exchange Universal Security Groups in the root of the domain, and prepare the current inframan.local domain for Exchange Server 2007.
Note For those wondering where FYDIBOHF23SPDLT and DWBGZMFD01QNBJR come from: take the string EXCHANGE12ROCKS and increase each character by one (E becomes F, X becomes Y, etc.) or decrease each character by one (E becomes D, X becomes W, etc.).
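The encoding is easy to verify for yourself. A short Python sketch of the character shift (with wraparound for letters and digits, although EXCHANGE12ROCKS happens not to need it):

```python
import string

def shift(text, delta):
    """Shift each uppercase letter or digit in text by delta
    positions, wrapping around within A-Z and 0-9."""
    letters, digits = string.ascii_uppercase, string.digits
    out = []
    for ch in text:
        if ch in letters:
            out.append(letters[(letters.index(ch) + delta) % 26])
        elif ch in digits:
            out.append(digits[(digits.index(ch) + delta) % 10])
        else:
            out.append(ch)  # leave other characters untouched
    return "".join(out)

print(shift("EXCHANGE12ROCKS", +1))  # FYDIBOHF23SPDLT
print(shift("EXCHANGE12ROCKS", -1))  # DWBGZMFD01QNBJR
```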
To verify that this step completed successfully, make sure that there is a new organizational unit (OU) in the root domain called Microsoft Exchange Security Groups. This OU should contain the following new Exchange USGs:

Exchange Organization Administrators
Exchange Recipient Administrators
Exchange View-Only Administrators
Exchange Servers
ExchangeLegacyInterop
After performing this step the new Administrative Group will show up in the Exchange System Manager on an "old" Exchange Server 2003 machine (Figure 5).
Figure 5. An additional Administrative Group appears after preparing the Exchange 2003 organization. The last step in preparing your environment for the implementation of Exchange Server 2007 is to prepare the Active Directory domain or domains for Exchange Server 2007. The domain is prepared by running the following command from the Exchange Server 2007 installation media:
During the installation the setup program will ask where to connect to the existing infrastructure. A little background: Exchange Server 2007 will be installed in a separate Administrative Group in the Exchange organization. Exchange Server 2007 does not use the routing infrastructure that Exchange Server 2000 and Exchange Server 2003 used for routing messages; instead it relies on Active Directory Sites and Services. This is the same routing infrastructure Windows uses for Active Directory replication traffic. The Exchange Server 2007 routing model and the Exchange Server 2003 routing model are not compatible, so Exchange Server 2007 has a legacy Routing Group, just for connecting with an existing Exchange Server 2003 Routing Group. When this question is presented, the answer is used for creating the legacy Routing Group Connector between Exchange Server 2003 and Exchange Server 2007. Using this connector, messages will be routed between Exchange Server 2003 and Exchange Server 2007. After installation of the Exchange Server 2007 Hub Transport and Client Access Server, inbound messages will arrive on the new Hub Transport Server. Since all mailboxes still reside on the Exchange Server 2003 mailbox server, all messages will be relayed to this server via the legacy Routing Group Connector. Outlook Web Access (OWA), RPC over HTTPS (now called Outlook Anywhere) and Exchange ActiveSync can also be transferred to the new Client Access Server. But there are a couple of caveats that you need to be aware of:

1. As soon as you implement Exchange 2007 and start migrating mailboxes to the new mailbox server, Outlook 2007 will notice immediately. Outlook 2007 has new functionality called Autodiscover and Exchange Web Services; when starting, Outlook 2007 will query Active Directory for a Client Access Server object called the Service Connection Point (SCP).
2. You have to be very careful about the order in which to install the various server roles.
The services mentioned in point 1 are all HTTPS-based services. Even if you don't use RPC over HTTPS (a.k.a. Outlook Anywhere) at all, Outlook 2007 will use these services over HTTPS. When clients are domain-joined this should not be a problem, since Outlook will use the default FQDNs as used during the installation (i.e. 2007CASHUB.inframan.local). If clients are not domain-joined, Outlook 2007 will try connecting using the domain part of an e-mail address, i.e. @inframan.nl. Since no certificates are installed yet and the external DNS records are not registered yet, this will fail, and users will start noticing errors like "Outlook not able to download the Offline Address Book."
So, the first server that we are going to install is a combined Hub Transport Server and Client Access Server. There are two ways to install Exchange Server 2007: via the command line or using the GUI setup. Both can be started from the root of the installation media. The advantage of the command line is that you can script it, which makes it possible to install a larger number of servers in an identical manner. The Exchange Server can be installed by starting the setup application (setup.exe) from the Exchange Server 2007 installation media. This will check for the prerequisites to be installed and will show a graphical interface where the roles to be installed can be selected:
Figure 7. The setup program with only Hub Transport and Client Access selected. When the installation is finished the first Exchange Server 2007 is installed and the server object is configured in the Active Directory. Outlook 2007 will immediately notice this and start using the Client Access Server.
Certificates
The next step is to configure certificates for the Client Access Server. The name on the certificate can be webmail.inframan.nl, just like the Exchange Server 2003 front-end server. But Outlook 2007 and Windows Mobile 6 can also use the Autodiscover functionality: Outlook 2007 will set up an additional connection to the Client Access Server via autodiscover.inframan.nl. This is an HTTPS connection, so it needs a certificate. To use the same Client Access Server, a so-called "Unified Communications" or SAN (Subject Alternative Name) certificate needs to be used. This type of certificate can carry multiple names. Besides the external names you should also register the server's internal name, so that when clients connect to the Client Access Server from the internal network the name can be resolved to its internal name, like 2007CASHUB.inframan.local. The names that should be used in this case are:

webmail.inframan.nl
autodiscover.inframan.nl
2007cashub.inframan.local
mail.inframan.nl (for SMTP)
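Whether a given client hostname is covered by such a certificate comes down to comparing the name the client connects to against the certificate's SAN list. A simplified Python sketch of that comparison (real TLS validation also involves trust chains and stricter wildcard rules; this only models the name matching):

```python
def name_in_san(hostname, san_names):
    """Check whether hostname matches one of the certificate's
    subject alternative names. Supports a leading '*.' wildcard
    covering exactly one DNS label, as common TLS stacks do."""
    host = hostname.lower().rstrip(".")
    for san in san_names:
        san = san.lower().rstrip(".")
        if san.startswith("*."):
            # *.example.nl matches a.example.nl but not a.b.example.nl.
            if host.count(".") == san.count(".") and host.endswith(san[1:]):
                return True
        elif host == san:
            return True
    return False

san_list = ["webmail.inframan.nl", "autodiscover.inframan.nl",
            "2007cashub.inframan.local", "mail.inframan.nl"]

print(name_in_san("Autodiscover.inframan.nl", san_list))  # True
print(name_in_san("owa.inframan.nl", san_list))           # False
```

This is why all four names need to be on the certificate: a client connecting to any name not in the SAN list will raise a certificate warning or fail the connection.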
by Ben Lye
ExMerge was a great way of exporting a mailbox to a PST file, or of removing all occurrences of an email virus, but it has now been replaced by two new Exchange Management Shell cmdlets, Export-Mailbox and Import-Mailbox, which are just as useful to have at hand.
Most Exchange administrators who've worked with Exchange prior to Exchange 2007 will be familiar with ExMerge. It was a useful tool to have available if you needed to export a mailbox to a PST file, for example if a user left and wanted to take a copy of their mailbox with them, or you needed a PST file as an archive copy of one or more mailboxes. It was also useful if you needed to search your Exchange databases for a particular message and remove it, such as removing an email virus. ExMerge was born in the Exchange 5.5 days as a product support tool, and was later re-released for Exchange 2003 as a web download. In Exchange 2007 ExMerge has been replaced by two new Exchange Management Shell cmdlets, Export-Mailbox and Import-Mailbox. The RTM release of Export-Mailbox was limited to moving messages only from one mailbox to another, but the SP1 release was enhanced to include the export-to-PST functionality which is familiar to ExMerge users, and that's the version I'm referring to in this article. There is no graphical interface for either Export-Mailbox or Import-Mailbox; they are only available in the shell. It's important to note that Export-Mailbox is not intended for use as an Exchange migration tool. It's designed for moving mailbox content rather than an entire mailbox. If you need to migrate an entire mailbox you should use the Move-Mailbox cmdlet. At a high level, Export-Mailbox can be used to:

export mailbox content from one mailbox to another mailbox
export mailbox content from a mailbox to a PST file.
During mailbox exports, content can be filtered by:

included or excluded folders
message sender keywords
message recipient keywords
message subject keywords
message and attachment content keywords
attachment file names
date range.
If you're going to use the filter options, make sure you have at least Update Rollup 4 for Exchange 2007 SP1, as this update included improvements to the filtering options.
The source and destination mailbox must be located on one of these versions of Exchange:

Exchange Server 2007
Exchange Server 2003 SP2 or later
Exchange Server 2000 SP3 or later.
Export-Mailbox will export any items in the mailbox's dumpster by converting them back to regular messages. For multiple mailbox exports, the output of the Get-Recipient or Get-Mailbox cmdlets can be piped to Export-Mailbox. When piping input to Export-Mailbox, the MaxThreads parameter can be used to increase the number of mailboxes processed simultaneously.
Optionally the AllowMerge parameter can be used to merge the content into the target folder. When using the AllowMerge parameter the top-level folder isn't time-stamped, which means that the source mailbox content can be exported to the destination multiple times:

Export-Mailbox -Identity [email protected] -TargetMailbox [email protected] -TargetFolder Joe -AllowMerge
Specific folders can be included or excluded by using the IncludeFolders and ExcludeFolders parameters. For example, to include only the Inbox and Contacts folders:

Export-Mailbox -Identity [email protected] -TargetMailbox [email protected] -TargetFolder Joe -IncludeFolders \Inbox, \Contacts

To export all content except for the Sent Items and Deleted Items folders:

Export-Mailbox -Identity [email protected] -TargetMailbox [email protected] -TargetFolder Joe -ExcludeFolders "\Sent Items", "\Deleted Items"
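The include/exclude semantics can be sketched outside Exchange. The following is a hypothetical Python model of the folder filter, under the assumption that an include or exclude path also covers its subfolders; it is an illustration of the filtering idea, not a reimplementation of the cmdlet:

```python
def folders_to_export(all_folders, include=None, exclude=None):
    """Model of folder filtering: if an include list is given, keep
    only those folders (and their subfolders); then drop any folder
    on the exclude list (and its subfolders)."""
    def covered(folder, filters):
        return any(folder == f or folder.startswith(f + "\\")
                   for f in filters)

    selected = [f for f in all_folders
                if include is None or covered(f, include)]
    if exclude:
        selected = [f for f in selected if not covered(f, exclude)]
    return selected

mailbox_folders = ["\\Inbox", "\\Inbox\\Receipts", "\\Contacts",
                   "\\Sent Items", "\\Deleted Items"]

# Include only Inbox (plus its subfolder) and Contacts:
print(folders_to_export(mailbox_folders, include=["\\Inbox", "\\Contacts"]))
# Exclude Sent Items and Deleted Items:
print(folders_to_export(mailbox_folders,
                        exclude=["\\Sent Items", "\\Deleted Items"]))
```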
To export a user named Joe's mailbox to the PST file C:\Temp\joe.pst:

Export-Mailbox -Identity [email protected] -PSTFolderPath C:\Temp\joe.pst

To export all the mailboxes in the sales mailbox database to individual PST files named <alias>.pst:

Get-Mailbox -Database "Sales Mailbox DB" | Export-Mailbox -PSTFolderPath C:\PSTFiles
To remove any message with the words "Company confidential" in the subject line from all mailboxes on the server EXCHANGE01, processing 10 mailboxes at a time:

Get-Mailbox -Server EXCHANGE01 | Export-Mailbox -TargetMailbox ExportMailbox -TargetFolder MessageCleanup -SubjectKeywords "Company confidential" -DeleteContent -MaxThreads 10

To remove any messages from the sender [email protected] from all mailboxes on the server EXCHANGE01, processing 10 mailboxes at a time:

Get-Mailbox -Server EXCHANGE01 | Export-Mailbox -TargetMailbox ExportMailbox -TargetFolder MessageCleanup -SenderKeywords [email protected] -DeleteContent -MaxThreads 10

The messages will be copied to the "MessageCleanup" folder in the target mailbox and deleted from the source mailboxes. The target folder will include a replica of the folder structure in each of the source mailboxes.
Troubleshooting Export-Mailbox
The most common problems with using Export-Mailbox are related to permissions. To use Export-Mailbox you need to be delegated the Exchange Server Administrator role, be a member of the local Administrators group for the target server, and have full access to the source and destination mailboxes. The source and destination mailboxes must be in the same Active Directory forest. These are some common permissions-related errors:

Error: "The specified mailbox database [Mailbox Database Name] does not exist"
Cause: The user running the Export-Mailbox command needs to be delegated the Exchange Administrator role for the Exchange server.

Error: "Error occurred in the step: Creating target folder in the target mailbox. An unknown error has occurred., error code: -2147221233"
Cause: The user running Export-Mailbox does not have full access to the destination mailbox.

Error: "Error occurred in the step: Moving messages. Failed to copy messages to the destination mailbox store with error: MAPI or an unspecified service provider. ID no: 00000000-0000-00000000, error code: -1056749164"
Cause: The user running Export-Mailbox does not have full access to the source mailbox.

ExMerge may be gone, but Export-Mailbox is just as useful a tool to have at hand. The flexible filtering options make it possible to do more granular exports than were possible with ExMerge, and the command shell interface makes it easy to script.

More information on Export-Mailbox can be found on the Microsoft TechNet website: http://technet.microsoft.com/en-us/library/aa998579.aspx

More information about the dumpster can also be found on the TechNet website: http://technet.microsoft.com/en-us/library/aa997155.aspx
by Michael B. Smith
With an Exchange Server, how do you measure performance? How, in particular, do you go about measuring your disk subsystem's performance? Whereas CPU usage and memory usage are both easy to measure, the measurement of disk performance is often described as a "black art." In fact, it isn't so difficult, as Michael Smith explains.
Figure 1: The First Sample Windows Task Manager Dialog. In Figure 1, it's pretty easy to see that the processor on this computer is overburdened. For the entirety of the CPU Usage History graphical display, the utilization of the processor was almost always 100%. Similarly, when we look at the various memory indicators, it is also pretty plain that the memory on this computer is in short supply. It takes one more indicator to be certain of that, but we can reach that conclusion based on the following indicators:
Now, it is often true that processor utilization and memory utilization can be tightly related. For example, if a computer has so little memory that the operating system is constantly required to move data into and out of the swap file, this tends to keep the processor utilization higher than it would be otherwise. However, since disk is so much slower than memory, it's rare that such a situation can cause a processor to be 100% utilized. Also, you should note that physical memory being 99% utilized is not, by itself, an indication of memory depletion. In the general case, Windows will try to cache system files and system data in memory, if memory is available to do so. This allows Windows to access those items much more quickly, since memory is so much faster than disk. In Figure 2, you can see another picture of that same computer, just a short while later.
Figure 2: The Second Sample Windows Task Manager Dialog. In Figure 2, you see a processor that is fairly idle with no memory pressure. The total physical memory has not changed, but in this second performance snapshot, the amount of free memory is up to 1,621 MB and the page file is only 46% in use. Based on what we see in Task Manager, this computer should be performing very well, shouldn't it? Of course, that is a trick question. In the first paragraph of this article I mentioned three key contributors to performance and so far, I've only discussed two. The third? Disk subsystem performance. Unfortunately, there is no sure-fire, single-graph view that will absolutely identify whether your disk subsystem is performing well or not. You have to do some investigation. That investigation is why some folks consider disk subsystem performance management something that is "more art than science." However, I would disagree. Diagnosing performance issues is actually very direct. Now, predicting end-user disk subsystem utilization - that is a black art!
If you put all of these together, this means that in the case where a physical disk contains only a single Windows volume, LogicalDisk and PhysicalDisk measure the same thing.
Note The highest performance disk devices currently available (Fiber Channel dual-controller disks) provide about 180 input-output operations per second (IOPS). They are quite expensive. More common enterprise-class disk devices provide about 150 IOPS (for a 15K RPM SAS disk). Workstation-class disk devices have much lower performance, often only around 35-50 IOPS.
Somewhat confusingly, disk aggregators (this includes RAID controllers, Storage Area Networks, Network Attached Storage, iSCSI, etc.) may present many physical disks as a single logical device to Windows. However, each of these devices (known as a logical unit number or LUN) may again actually represent multiple logical or physical disks. Thankfully, from a Windows performance perspective, those distinctions can be ignored, at least until a specific LUN is identified as having a performance issue. In that case, in order to acquire more specific data, you will have to use performance tools from the aggregator's provider, as the disk aggregators can sometimes provide unpredictable results. Is there a conclusion here? Yes, there is. The conclusion is that, in the most common cases, it is the performance of a LogicalDisk object that you are most interested in.
Note Lots of SAN and NAS software provides a feature called "LUN stacking" or "disk stacking" which allows multiple LUNs to exist on a single physical disk. This just complicates your life. Avoid it. Just always remember that you have to be able to identify what you are measuring and the boundaries of that measurement. If you have multiple applications accessing a single physical disk, then your performance will always be non-deterministic and difficult to predict.
What do I measure?
Now that you know which performance object is interesting, what do you do with it? As you probably know, any performance object is composed of a number of individual performance counters. A counter contains a discrete value which may be any of:

A value that has been increasing since the last time the counter was cleared (which normally happens at system reboot).
A value that represents the delta (change) since the last measurement.
A value that represents the percentage change since the last measurement.
A value that represents an instantaneous measurement of some item.
A value that is a percentage against some absolute (such as a time interval).
For the LogicalDisk performance object, there are 23 counters:

1. % Disk Read Time - the percent of wall-clock time spent processing read requests since the last sample.
2. % Disk Time - % Disk Read Time plus % Disk Write Time.
3. % Disk Write Time - the percent of wall-clock time spent processing write requests since the last sample.
4. % Free Space - the amount of unused space on the disk, expressed as a percentage of the total amount of space available on the disk.
5. % Idle Time - the percent of time spent idle since the last sample (that is, processing neither read nor write requests).
6. Avg. Disk Bytes/Read - the average number of bytes transferred from the disk in each read operation since the last sample.
7. Avg. Disk Bytes/Transfer - the average number of bytes transferred from or to the disk in each I/O operation since the last sample.
8. Avg. Disk Bytes/Write - the average number of bytes transferred to the disk in each write operation since the last sample.
9. Avg. Disk Queue Length - the average number of I/O requests (both read and write) queued for the selected disk since the last sample.
10. Avg. Disk Read Queue Length - the average number of read requests queued for the selected disk since the last sample.
11. Avg. Disk sec/Read - the average amount of time that it took for a read request to complete for the selected disk since the last sample.
12. Avg. Disk sec/Transfer - the average amount of time that it took for any I/O request to complete for the selected disk since the last sample.
13. Avg. Disk sec/Write - the average amount of time that it took for a write request to complete for the selected disk since the last sample.
14. Avg. Disk Write Queue Length - the average number of write requests queued for the selected disk since the last sample.
15. Current Disk Queue Length - the current number of I/O requests queued for the selected disk.
16. Disk Bytes/sec - the average rate, or speed, at which bytes are transferred from or to the selected disk during any I/O operation.
For [4], it should be obvious that if any write operations are occurring to a disk, then if it fills up, performance will rapidly approach zero, the event log will start filling with errors, and users will be calling your phone saying "Exchange is down" (or name your favorite application instead of Exchange). On the other hand, if a disk is read-only, such as a DVD-ROM, there is no need to measure either [3] or [4]. Writes cannot occur. But that is a special case, and since we are primarily concerned with Exchange, it doesn't support any type of read-only volume. You may find it interesting that I use the average values instead of the current values. That is because any system can be instantaneously overwhelmed. At any given moment in time, you may have 50 I/Os queued to a particular disk waiting to be satisfied; but that is unlikely to be the normal case. If that high queue value becomes the normal case, then the average values will trend quite high as well. That will serve to provide you with an absolute indication of an issue which needs to be addressed. If the average values are not high, then you had what is called a "usage spike" in which a device was temporarily overwhelmed. And that is why we have queues anyway. What is a "high value"? A high value occurs when the device cannot empty the queue. Because each I/O is variable in size, arrives unpredictably, and takes a variable amount of time to complete, the point at which a device is considered saturated or overburdened with I/O requests actually occurs a little earlier than you may presume. Now, I am not a statistician, nor do I play one on TV. However, like many of you, I took basic queuing theory in college. A device becomes saturated when its I/O queue exceeds 70% on an average basis. To put that another way: if the average disk queue length of a device exceeds 0.70 then the performance of that disk is negatively impacting user performance.
The overall Average Disk Queue Length is an average of the read and write average queue lengths. The read and write queues are maintained independently. A disk may (and in fact, usually will) have significantly different read and write profiles. While the Average Disk Queue Length is an important indicator of disk performance, if it indicates heavy usage, your first order of business is to determine whether read performance or write performance (or both) is causing the performance issue. Hence my recommendation to continuously monitor all three counters. So now you know: the best way to check your Exchange disk performance is to: Make sure there is room on the disk, and Monitor the average disk queue lengths.
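As a sketch of how this check might be automated: on servers with PowerShell 2.0 or later, the Get-Counter cmdlet can sample the LogicalDisk counters directly. The volume letter, sample counts, and the 0.70 threshold are illustrative assumptions.

```powershell
# Sample the free space and the three queue-length counters for the E:
# volume ten times, two seconds apart, then report the average of each.
$counters = "\LogicalDisk(E:)\% Free Space",
            "\LogicalDisk(E:)\Avg. Disk Queue Length",
            "\LogicalDisk(E:)\Avg. Disk Read Queue Length",
            "\LogicalDisk(E:)\Avg. Disk Write Queue Length"

Get-Counter -Counter $counters -SampleInterval 2 -MaxSamples 10 |
    Select-Object -ExpandProperty CounterSamples |
    Group-Object -Property Path |
    ForEach-Object {
        $avg = ($_.Group | Measure-Object -Property CookedValue -Average).Average
        "{0} : {1:N2}" -f $_.Name, $avg
    }
```

By the rule of thumb above, any queue-length average that stays above 0.70 per spindle is a sign that the disk is saturated.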
The next thing you may want to know is how you can predict how your disk subsystem should perform. But that is a topic for another article.
by Jaap Wesselius
Jaap completes his series on the necessary steps to migrate an existing installation of Exchange to Exchange Server 2007. He deals with the problems of installing the Hub Transport server, moving policies and mailboxes and decommissioning the old servers.
Figure 1. A combined Exchange Server 2003 / Exchange Server 2007 environment. Since we have introduced Exchange Server 2007 into the environment and it is fully up and running, it's time to start moving resources from Exchange Server 2003 to Exchange Server 2007. This includes:

Changing the Internet e-mail acceptance.
Changing the Internet-facing Client Access.
Replicating Public Folders and System Folders to Exchange Server 2007.
Moving the Recipient Policies.
Moving mailboxes.
Removing the Exchange Server 2003 servers.
Figure 2. Add the anonymous users to the Permission Groups. The reason for this is the potential implementation of an Exchange Server 2007 Edge Transport server. This server, which is typically installed in the network's Demilitarized Zone (DMZ), accepts anonymous connections by default. The connection between the Edge Transport and the Hub Transport server is not anonymous, but of the type "Exchange Servers." Depending on your company's anti-spam policies and solution you might want to enable the anti-spam services on the Hub Transport Server. Microsoft supplies a script that enables all anti-spam functionality. The script is called "install-AntiSpamAgents.ps1" and is located in C:\Program Files\Microsoft\Exchange Server\Scripts. Open an Exchange Management Shell command window, go to this directory and execute the script to enable the anti-spam agents on this Hub Transport Server.
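The steps above come down to something like the following (the path shown is the default installation path; after the script has run, the Microsoft Exchange Transport service has to be restarted before the agents become active):

```powershell
# Enable the anti-spam agents on this Hub Transport server.
cd "C:\Program Files\Microsoft\Exchange Server\Scripts"
.\install-AntiSpamAgents.ps1

# The transport service must be restarted for the new agents to load.
Restart-Service MSExchangeTransport
```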
When you open the Exchange Management Console and select the Hub Transport under the Organization Configuration, a new Anti-Spam tab will appear in the results pane. Here you can change anti-spam settings like Content Filtering, the IP Allow list, the IP Block list, Recipient Filtering, etc.
Figure 3. The new anti-spam tab after enabling the anti-spam options. After making these changes the inbound SMTP mail flow can be changed. Depending on the infrastructure, you have to change the IP number of the MX record on the public DNS or change the port forwarding on your firewall. In either case you have to change it to the IP number of the Exchange Server 2007 Hub Transport Server. After the change you can check new messages in Outlook by looking at the Internet message header. It will clearly show the name and IP number of the new Hub Transport Server. By default Exchange Server 2007 does not allow internal clients such as web servers, printers or applications to relay SMTP traffic. If you want to implement an anonymous connector for relaying SMTP traffic please visit the following Microsoft article: http://technet.microsoft.com/en-us/library/bb232021.aspx.
Figure 4. A Unified Communications Certificate from Digicert containing multiple names. The certificate used for this environment is from Digicert (www.digicert.com) and is supported by most of the clients like Windows (Internet Explorer), Windows Mobile (for ActiveSync) and Outlook (for Outlook Anywhere). Now that the certificate is right for usage with Exchange Server 2007 the following services have to be configured correctly on the Exchange Server 2007 Client Access Server: Outlook Web Access. ActiveSync. Outlook Anywhere.
To configure the client setting navigate to Client Access under the Servers Configuration in the console tree in the Exchange Management Console. When you open the properties of the OWA virtual directory on the Client Access Server, on the General tab you can set
Note If you have an interim situation where PDAs access the Exchange Server 2007 Client Access Server but the user's mailbox is still on Exchange Server 2003, then you have to change the authentication on the Microsoft-Server-ActiveSync Virtual Directory on the Exchange Server 2003 mailbox server. By default only "Basic Authentication" is selected on the Virtual Directory, but in the interim scenario you also have to select "Windows Integrated" authentication.
Outlook Anywhere, previously known as RPC over HTTP, has to be enabled on the Exchange Server 2007 Client Access Server. In the actions pane (on the right side) of the Exchange Management Console you can select "Enable Outlook Anywhere"; the button to enable this is somewhat hidden. Enter the external hostname (i.e. webmail.inframan.nl) and select the method of authentication. You have to remember this setting in case you want to manually configure the Outlook 2003 or 2007 clients. In the Outlook profile you also have to select the method of authentication. If these settings don't match the client's settings, the client will not be authenticated and the user will not be able to connect to his or her mailbox. If you have changed all the settings mentioned above according to your own situation and company policy, you can change the firewall settings so that external clients can access the new Exchange Server 2007 Client Access Server. All requests will now be accepted by the Exchange Server 2007 Client Access Server and be forwarded to the old Exchange Server 2003 mailbox server successfully.
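The same step can also be performed from the Exchange Management Shell. A sketch, where the server name CAS01 is an illustrative assumption and the parameter names are as in Exchange Server 2007 SP1:

```powershell
# Enable Outlook Anywhere on the Client Access Server, using Basic
# authentication and no SSL offloading on the firewall.
Enable-OutlookAnywhere -Server CAS01 -ExternalHostname "webmail.inframan.nl" `
    -DefaultAuthenticationMethod Basic -SSLOffloading $false
```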
Figure 5. On the Exchange 2003 server add the Exchange Server 2007 Public Folder database to the replication list. The Public Folder information has to be replicated from Exchange Server 2007 to the Exchange Server 2003 Public Folder database as well. To do this, log on to the Exchange Server 2007 using the administrator credentials and open the Exchange Management Console. In the console tree select the Toolbox and in the results pane select the Public Folder Management Console. If no default server is selected, then select the Exchange 2007 Mailbox server as the default server. In the Public Folders tree navigate to the System Public Folders and down the tree to the Schedule+ Free Busy Information. Open the properties of this folder, select the Replication tab and add the Exchange 2003 Public Folder to the replication list.
Figure 6. Add the Exchange 2003 Public Folder Database to the replication list. During the replication you will see that the free/busy information from the "other" server will appear in the System Public Folder list. This can take some time to complete though. To improve the replication speed change the Public Folder replication interval to "always" and the "replication message priority" to high. If you are using "normal" Public Folders you have to replicate the Public Folder hierarchy from Exchange Server 2003 to Exchange Server 2007 as well as the actual content of the Public Folders that your users are using. To achieve this you can use the Public Folder Migration Tool which is available from the Microsoft download site. You can use the Migration Tool to replicate Public Folders from Exchange Server 2003 to Exchange Server 2007. For more information regarding the Public Folder Migration tool please visit the following Microsoft knowledgebase article: http://support.microsoft.com/kb/822895/en-us. The latest version of the Deployment Tools is available on the Microsoft download site: Microsoft Exchange Server Deployment Tools
Figure 7. Error message when a Recipient Policy is opened on an Exchange Server 2007 Management Console. Before you continue with changing the Recipient Policies it's a good time to run the Exchange Best Practices Analyzer to see if your environment is in good shape. The following steps are one-way only; there's no way back, so you have to make sure everything is running fine! You have to change all Recipient Policies using the Exchange Management Shell. This is one example of an action that cannot be performed with the Exchange Management Console. You can retrieve a list of all Recipient Policies in your Exchange environment using this command in an Exchange Management Shell window:

Get-EmailAddressPolicy | where { $_.RecipientFilterType -eq "Legacy" }

This output can be used as input for the Set-EmailAddressPolicy cmdlet using the pipe functionality in the Exchange Management Shell. This results in the following command:

Get-EmailAddressPolicy | where { $_.RecipientFilterType -eq "Legacy" } | Set-EmailAddressPolicy -IncludedRecipients AllRecipients

The last things that need to be converted are the Address Lists. Exchange Server 2003 Address Lists cannot be managed by the Exchange Server 2007 Management Console and vice versa. Like the Recipient Policies, the Address Lists can only be converted using the Exchange Management Shell.
In an Exchange Management Shell window enter the following commands:

Set-AddressList "All Users" -IncludedRecipients MailboxUsers
Set-AddressList "All Groups" -IncludedRecipients MailGroups
Set-AddressList "All Contacts" -IncludedRecipients MailContacts
Set-AddressList "Public Folders" -RecipientFilter { RecipientType -eq 'PublicFolder' }
Set-GlobalAddressList "Default Global Address List" -RecipientFilter {(Alias -ne $null -and (ObjectClass -eq 'user' -or ObjectClass -eq 'contact' -or ObjectClass -eq 'msExchSystemMailbox' -or ObjectClass -eq 'msExchDynamicDistributionList' -or ObjectClass -eq 'group' -or ObjectClass -eq 'publicFolder'))}

For each command a confirmation needs to be given, and the object will be upgraded. After the upgrade it will no longer be possible to manage the Address Lists from an Exchange Server 2003 System Manager. For more detailed information regarding the upgrade of Recipient Policies and Address Lists visit the blog of the Microsoft Exchange product team: http://msexchangeteam.com/archive/2007/01/11/432158.aspx - Address List and EAP filter upgrades with Exchange Server 2007.
Figure 8. Moving my own mailbox using the Exchange Management Shell to Exchange Server 2007. The last thing that has to be moved to the new Exchange Server 2007 environment is the Offline Address Book generation. To do this, open the Exchange Management Console on an Exchange Server 2007 server and navigate to the Mailbox section in the Organization Configuration. Select the Offline Address Book tab and right-click on the Default Offline Address Book. The second option gives the possibility to select a new Offline Address Book generation server, which should be the Exchange Server 2007 mailbox server.
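Both of these steps can also be performed from the Exchange Management Shell. A sketch, where the mailbox address, server name and database path are illustrative assumptions:

```powershell
# Move a single mailbox to a database on the Exchange 2007 mailbox server.
Move-Mailbox "[email protected]" `
    -TargetDatabase "E2K7MBX\First Storage Group\Mailbox Database"

# Move Offline Address Book generation to the Exchange 2007 mailbox server.
Move-OfflineAddressBook "Default Offline Address Book" -Server E2K7MBX
```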
Note For several steps in this article you have to wait for replication to finish. This can be Active Directory replication as well as Public Folder replication. It is also possible, as with the Offline Address Book generation, that a process occurs only once a day, for example in the middle of the night. If you move through the steps too fast you can miss some actions, which can result in erratic behavior.
The Exchange Server 2003 mailbox server is now the last Exchange 2003 server in our organization and ready to be removed. Open Control Panel, select "Add/Remove Programs" and remove Exchange Server 2003 from this server. Again, make sure that you have the installation media available during the removal process. The last step is to remove the Write DACL right for the Exchange Enterprise Servers group from the root of the domain. This can be achieved by running the following command in an Exchange Server 2007 management shell:

Remove-ADPermission "dc=Inframan,dc=local" -User "Inframan\Exchange Enterprise Servers" -AccessRights WriteDACL -InheritedObjectType Group

All legacy Exchange Domain Servers and Exchange Enterprise Servers security groups can now be deleted from Active Directory. Please make sure that they are empty and not in use for other purposes!
Note When you check Active Directory with ADSIEdit you'll notice that the old Exchange Server 2003 Administrative Group is still available, although empty. Do not remove this Administrative Group unless you're absolutely sure there's no object in Active Directory referencing this Administrative Group in the ExchangeLegacyDN attribute; otherwise, for example, users who use Outlook 2003 cannot publish their free/busy data in Exchange Server 2007. For more information please check this Microsoft knowledgebase article: http://support.microsoft.com/kb/945602. My personal opinion would be to just leave the Administrative Group there and not touch it. Nobody will see it and it will bother nothing else. More information regarding the removal of the last legacy Exchange Server can be found on the Microsoft website: http://technet.microsoft.com/en-us/library/bb288905.aspx - How to Remove the Last Legacy Exchange Server from an Organization
by Ben Lye
'Where did my mail go?' In order to answer this question, to troubleshoot mail problems, and to analyse mail flow, the Exchange administrator can use message-tracking logs. Ben Lye elaborates on these essential logs and explains how you can use PowerShell commands to search them for those emails that have gone adrift.
Exchange message tracking records the SMTP activity of messages being sent to and from Exchange servers running the Edge Transport or Hub Transport roles. Exchange administrators can use message tracking logs for mail flow analysis as well as troubleshooting and answering the ever-familiar "where did my mail go" question.
To modify the message tracking settings you can use the Set-TransportServer and Set-MailboxServer cmdlets. Using these cmdlets you can: Enable or disable message tracking (enabled by default) Enable or disable logging of message subject lines (enabled by default) Set the maximum age of message tracking log files (30 days by default) Set the maximum size of the log file directory (250MB by default) Set the maximum size of each log file (10MB by default) Change the path of the log file ("C:\Program Files\Microsoft\Exchange Server\TransportRoles\Logs\MessageTracking" by default).
If you change the path of the message tracking log directory, then new log files will be written to the new path straight away, but existing log files are not moved or copied from the old path to the new path. Old log files are removed when either the maximum directory size has been reached, or the log file is past the maximum age. In the case of the maximum size being reached, the oldest log file is removed even though it may not have met the age limit. Because of this, if you are in a site with many users and where a lot of e-mail is sent, you may need to increase the maximum directory size, as you might find that the log files are being deleted well before the maximum age is reached. You can use this command to increase the maximum directory size to 2GB and the maximum log file age to 90 days (adjust the values as appropriate for your environment):

[PS] C:\>Set-TransportServer EXCHANGE01 -MessageTrackingLogMaxDirectorySize 2GB -MessageTrackingLogMaxAge 90.00:00:00

To configure Message Tracking you must be delegated the Exchange Organization Administrator role and be a member of the local Administrators group on the Exchange server.
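To review the current values before changing anything, the corresponding Get- cmdlet can be used (the server name is an illustrative assumption):

```powershell
# List all message-tracking-related settings on a Hub Transport server.
Get-TransportServer EXCHANGE01 | Format-List MessageTrackingLog*
```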
To perform a search using the Message Tracking Tool, launch the Exchange Management Console, navigate to the Toolbox pane, and double-click "Message Tracking." After a brief check for updates you'll be able to go to the Welcome Screen, where you can enter search parameters to begin looking for messages in the tracking logs. While you are constructing your search, a box at the bottom of the tool shows you the Get-MessageTrackingLog command which will be used to perform the search. To perform a search using the Get-MessageTrackingLog cmdlet, searching the server EXCHANGE01 for messages sent from [email protected] to [email protected], sent between 12/3/2009 and 13/3/2009:

[PS] C:\>Get-MessageTrackingLog -Server EXCHANGE01 -EventID SEND -Sender [email protected] -Recipients [email protected] -Start 12/3/2009 -End 13/3/2009

To perform the same search and return only the first 100 matching records:

[PS] C:\>Get-MessageTrackingLog -Server EXCHANGE01 -EventID SEND -Sender [email protected] -Recipients [email protected] -Start 12/3/2009 -End 13/3/2009 -ResultSize 100

If you are using Exchange 2007 SP1 you must be delegated the Exchange View-Only Administrator role to use the Get-MessageTrackingLog cmdlet. If you are using Exchange 2007 RTM you need to be delegated the Exchange Server Administrator role and be a member of the local Administrators group on the target server.
Advanced Searches
PowerShell scripts can be used to do some interesting manipulation of the message tracking log data. Here are a few examples of what can be done without much effort.
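One such example is a quick top-senders report built from the tracking logs. This is a sketch: the server name is an illustrative assumption, and the time window and result count are arbitrary.

```powershell
# Report the ten busiest senders over the last 24 hours, by counting
# RECEIVE events (messages submitted to this Hub Transport server).
Get-MessageTrackingLog -Server EXCHANGE01 -EventID RECEIVE `
        -Start (Get-Date).AddDays(-1) -ResultSize Unlimited |
    Group-Object -Property Sender |
    Sort-Object -Property Count -Descending |
    Select-Object -First 10 -Property Count, Name
```

The same pattern (filter, group, sort) can be applied to recipients, message sizes, or event types with minor changes.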
More information on configuring and managing message tracking and searching message tracking log files can be found on Microsoft TechNet: http://technet.microsoft.com/en-us/library/aa997984.aspx http://technet.microsoft.com/en-us/library/bb124375.aspx http://technet.microsoft.com/en-us/library/bb124926.aspx
Third Party High Availability and Disaster Recovery Tools for Microsoft Exchange Server
30 March 2009
by Hilal Lone
Clustering Exchange 2007 to ensure high availability has its disadvantages. It is expensive and complicated. Is it possible, as an alternative, to recover rapidly from a downed Exchange Server using one of the third-party Exchange Server disaster-recovery tools?
The worst nightmare of any Exchange Server administrator is when the Exchange Server databases will not mount, or when a hardware failure stops mail flow. Even seasoned Exchange Server admins break into a sweat when they hear that a database is corrupted, or that the server will not send or receive any email. I have been through this many times and I believe that most Exchange administrators have as well. Even when we take all reasonable precautions, and do everything by the book, there is always a chance that we will get a call in the middle of the night to say that somehow the mail server is no longer accessible. In the Exchange 5.5, Exchange 2000 and Exchange 2003 days, the only hope of getting back up and online when a whole server failed was to use the backups. This was tedious, complex and had a low success rate. This method of recovery would take hours for an average-sized database, during which time everyone from the CEO to the guy at the reception desk will be expressing veiled (and not so veiled!) disappointment with your performance. Just yesterday you were their best buddy and now you are the target of their frustrations. So not only do you have to keep your backups up to date and follow the procedures religiously, but when we are attempting a recovery we are always under pressure. It therefore becomes very important not only that we are confident about the recovery process, but also that we can get it done with as little downtime as possible. Fortunately, there are now other options which allow us to do a soft or hard recovery without much fuss and with minimum downtime. During the last few years, third-party Exchange Server high availability and disaster recovery tools have flooded the messaging world. They provide alternatives to traditional tools and to clustering. The only hope of high availability before Exchange 2007 was to put a cluster in place, with a minimum of two nodes.
And even then, the complex licensing system that Microsoft employs makes it really hard to sell the idea to management to get the required finances. Besides the investment in hardware, the required investment in software and storage was tremendous. One of my colleagues said that it was better to have a day without email than sending the clustering proposal to senior management. Not that the third party tools are cheap, but they cost considerably less and have a flat licensing system.
Near-automatic mail route redirection (based on DNS, Routing Groups or SMTP connectors).
No client re-configuration required.
Intuitive GUI administration consoles.
About Standby Continuous Replication (SCR) and Clustered Continuous Replication (CCR)
If you have already migrated to Exchange Server 2007 and have the appropriate hardware and software available, then please go for it. In particular, SCR provides most of the features of the third-party DR tools at no extra cost and it comes bundled with Exchange Server 2007 SP1, but in my experience about 50-60% of Exchange deployments are still based on Exchange Server 2003 SP2. CCR and SCR both have their limitations, which are listed below:

Only Mailbox Servers can be clustered, so the remaining server roles have to be recovered manually.
CCR has to be on the same subnet, so it is not site resilient and hence cannot be considered a proper DR solution.
CCR, LCR and SCC are high availability solutions, not DR solutions.
Limited to only one database per storage group.
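For completeness, enabling SCR for a storage group is a single command in Exchange Server 2007 SP1. A sketch, where the server and storage group names and the replay lag are illustrative assumptions:

```powershell
# Enable Standby Continuous Replication of a storage group to a standby
# server, with a 15-minute replay lag on the standby copy.
Enable-StorageGroupCopy "E2K7MBX\First Storage Group" `
    -StandbyMachine DRSERVER -ReplayLagTime 0.00:15:00
```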
Must offer more than one network traffic redirection possibility; for example, a solution should offer us not only DNS redirection, but also some other option like changing the IP address on the standby server to match the master server in case of a disaster.
Must be aware of IIS, DNS and AD service-related issues, so that in case of disaster these services can be automatically switched over to the standby server.
Must have a near-zero Recovery Point Objective (RPO), so that as little data as possible is lost in a switchover and users do not notice any significant amount of downtime.
Must be site resilient; that is, it should offer replication to a remote site which is not part of the same subnet as that of the master Exchange server.
Should offer active notifications and administrative alerts for monitoring replication health.
Should offer fire-drill options; that is, automatically test the standby server periodically, to ensure that it can be recovered if the master server goes down.
Must offer master server recovery after the switchover to the standby server, i.e., offer reverse replication so that the master server can be functional again.
Best Practices
Now that you have decided to deploy the DR tool of your choice, here are some recommendations to make deployment smooth, operation easy and recovery faster:

Despite everything, never forget to take a full backup on a daily basis.
Always have more than one internal DNS server with proper forwarders configured. Make sure that DNS response time is within acceptable limits.
Periodically test LDAP query response time.
Monitor bandwidth, both on LAN and WAN, closely.
On the master Exchange server, have only the Default Web Site configured. Do not configure any additional Web servers on that machine.
If connected on the same LAN, put master and replica servers on separate LAN switches.
Always manage the DR scenario from the replica server.
Configure notifications for events related to the DR scenario.
Configure notifications for the services related to the DR tool, in the Exchange Server monitoring option.
Do not modify the scenario too often, as it could lead to database corruption on the standby server.
In all probability the documentation about the product will not be available online, so give the vendor a call and have them send the documentation to you (it worked for me).
Again, never forget that full backup.
Summary
Exchange Server administrators, like everyone else, need a normal 6-8 hours of sleep daily, and now and then they may even want to catch a movie. The job does not always have to be about finding that lost email or tracking where it went. Most of an Exchange Server admin's tasks revolve around creating, deleting, managing, and delegating. Backing up and monitoring the Exchange Server is also a core task, but it is not the only one, so we often do not pay much attention to how we are going to recover the server in case of a disaster. I can confidently say that 8 out of 10 administrators rarely test their backups, so when a disaster strikes we are often caught unaware. An actual fire drill on an Exchange recovery scenario is hardly ever done, maybe once a year, but you never know when that e00 log will go missing and you will have to bring your Exchange Server back online before the CEO makes a visit to your cubicle (and believe me, that is not pretty). On average, a messaging infrastructure design changes every three years, and when it does, make absolutely sure that you have proper disaster recovery plans in place. Investing in third-party tools gives you the flexibility to manage your DR scenario the way you like, and it can reduce the anxiety of recovering an Exchange Server from a disaster. Third-party DR tools might not be perfect, but, used carefully, they can provide you with a solid DR plan.
Resources
- DoubleTake Exchange Disaster Recovery Solution
- CA XOSoft WanSync HA and AR
- Acronis Exchange Recovery
- Exchange Server 2007 SP1 Standby Continuous Replication
by Jaap Wesselius
Jaap takes a first look at the new Beta 1 of Exchange Server 2010, which is now available for download, and likes what he sees, especially with Outlook Live, mailbox replication, and the "self-healing" of the database. Much will change before the release, but there are already a lot of new features that are worth a look.
Microsoft has been working for quite some time on a new version of Exchange Server, and has very recently released the first public beta of this new version. It's time to have a first look. Exchange Server 2010 will be the next version of Exchange Server and the successor to Exchange Server 2007. It is targeted for release by the end of this year, but since we're only at the first beta this date is, of course, not really fixed; we might see a public Beta 2 and a Release Candidate as well. Beta 1 is far from feature complete, and it can contain bugs and other undocumented features. It will certainly change later this year, so features we're missing now may be added later on, or, vice versa, features we see now may be removed in later builds. A quick look at Exchange Server 2010 might lead you to believe that it's just another version of Exchange Server 2007, but that's not entirely true. Of course it builds on top of Exchange Server 2007, but there are major improvements and new technologies in this product.
Furthermore, there are a lot of changes in the administration of Exchange Server 2010, the routing model, the compliance features, and so on; too many to mention in an article like this.
Figure 1. The Exchange Management Console of Exchange Server 2010.

As you can see in Figure 1, the Exchange Management Console looks familiar. But, because of the new high availability features and the flattened database model, a database is no longer tied to a particular server but to the Exchange organization. When you want to mount or dismount a database you have to go to the Organization Configuration in the Exchange Management Console, no longer to the Server Configuration. Be aware of this, otherwise it can take you some time to figure out what's wrong.

Storage Groups no longer exist in Exchange Server 2010, so all cmdlets regarding Storage Groups have been removed. Exchange Server 2010 still uses the ESE database with its accompanying log files and checkpoint files, so all Storage Group cmdlet options that are still valid for the log file and checkpoint file configuration have been moved to the database cmdlets.

Another neat feature in the new Management Console is the "Send Mail" option. When you are working on recipients and need to send a (test) mail to a recipient, you can just right-click the recipient and select "Send Mail." There is no need to send test messages from Outlook or Outlook Live anymore.

As mentioned earlier, Microsoft has introduced a concept called "database copies" in Exchange Server 2010. You can install a second Exchange server into the organization and the Exchange setup program takes care of everything. In Exchange Server 2007, only the mailbox role could be installed on a Windows Failover Cluster; in Exchange Server 2010 this is no longer the case, and all server roles (except for the Edge Transport role) can be installed on a high availability cluster. When you've installed a second server holding the Mailbox Server role, you can create a copy of the database: right-click the database, select "Add Mailbox Database Copy," select the second server, and you're done.
Names of mailbox databases must be unique in the organization, so you should set up a fairly clear naming convention for your mailbox databases. If you do not, you will certainly get confused by the databases and their copies.
Clients
One of the major improvements on the client side is Outlook Live, previously known as Outlook Web Access. A design goal was to create a cross-browser experience, so that non-IE users get the same user experience. First test: take an Apple computer, start the Safari browser, and open Outlook Live. Wow, that works like a charm:
Figure 2. A Safari browser on an Apple Macbook gives a great user experience! Fairly new in Exchange Server 2010 is the end-user administration option. End users have a lot of extra possibilities regarding the control of their personal information. They can change (basic) user properties in their personal property set like Name, Location, Phone Number etc., but they can also perform some basic administration regarding Distribution Groups.
Figure 3. The options page for end users, a complete HTML management interface.

See the "Edit" button? Click it and you can change settings like Contact Location, Contact Numbers, and General Information. On the right-hand side, in the actions pane, there are quick links to the most important features for users, like the Out-of-Office assistant or the rules assistant. And in the Groups option in the navigation pane, users can create their own distribution groups and manage their own group memberships. Don't worry, group owners can restrict ownership, and there's a difference between public and private distribution groups.

The default view in Outlook Live is now conversation mode in the results pane (the middle pane). In the right pane a quick view of the last message is visible, and below that some quick notes on the earlier messages in the conversation. Other improvements in Outlook Live are:
- Search folders.
- Message filtering.
- Side-by-side view for calendars.
- Enhanced right-click capabilities.
- Integration with Office Communicator.
- Sending and receiving text messages (SMS) from Outlook Live.
- Outlook Live mailbox policies.
Emulating the Exchange 2003 RUS for Out-of-Band Mailbox Provisioning in Exchange 2007
08 May 2009
by Ben Lye
Exchange's Recipient Update Service was important in Exchange 2000 and 2003 to complete the provisioning or updating of mailboxes created by an out-of-band process or directly from an Active Directory user. In Exchange 2007, full provisioning is automatic, but the old functionality can be replaced by PowerShell scripting if it is required.
Exchange 2000 and Exchange 2003 included a component known as the Recipient Update Service, or RUS. The RUS runs as a sub-process of the System Attendant, and is responsible for discovering partially provisioned mailboxes and fully provisioning them. The RUS is required because when user accounts are configured to have a mailbox using Active Directory Users and Computers (ADUC), they only have a few key attributes assigned. When the RUS discovers these partially configured accounts, it stamps them with the remaining required attributes, and a mailbox is subsequently created in the appropriate Exchange database when the mailbox is either logged onto or receives a message.
Many Exchange administrators who have worked with Exchange 2000 or 2003 will have experienced times when the RUS hasn't worked at all, or has worked slowly, and as a result have had problems with mailbox provisioning or updates. In Exchange 2007 the RUS is no longer present, because when a mailbox is provisioned with either of the Exchange management tools it is automatically fully provisioned straight away. This is great if mailboxes are only ever provisioned using the Exchange 2007 tools, but if you use an out-of-band process to provision mailboxes you will run into trouble, as those mailboxes will never be picked up and fully provisioned by Exchange 2007. For example, in Exchange 2003 it's possible to cause a mailbox to be provisioned simply by setting the legacyExchangeDN, mailNickname, and homeMDB attributes on a user object; the RUS will find the user account and finish provisioning the mailbox. If you try this with Exchange 2007 you won't get a usable mailbox.

Fortunately, there is a way to emulate the Exchange 2000/2003 RUS on Exchange 2007 using two Exchange 2007 PowerShell cmdlets: Update-EmailAddressPolicy, which applies an email address policy to mailboxes, and Update-AddressList, which updates address list memberships. The easiest way to run the "update" cmdlets is to pipe input to them from the corresponding "get" cmdlets. These commands will update all email address policies and address lists:

Get-EmailAddressPolicy | Update-EmailAddressPolicy
Get-AddressList | Update-AddressList

After running these commands the mailbox will be provisioned and will appear in the Exchange address lists, but it will appear in the Exchange Management Console as a "Legacy Mailbox." A legacy mailbox will still function for sending and receiving email, but it's not possible to enable or disable Exchange 2007 features such as Messaging Records Management or Unified Messaging.
Legacy mailboxes can be converted to Exchange 2007 mailboxes using the Set-Mailbox cmdlet with the -ApplyMandatoryProperties parameter. If your Exchange environment has only Exchange 2007 servers, this command will update all legacy mailboxes to Exchange 2007 mailboxes:

Get-Mailbox -RecipientTypeDetails LegacyMailbox | Set-Mailbox -ApplyMandatoryProperties

If you have mailboxes hosted both on legacy versions of Exchange and on Exchange 2007, you will need to filter the command to touch only those hosted on Exchange 2007. This command does that by first getting only the Exchange 2007 servers, then finding all the legacy mailboxes on those servers:

Get-ExchangeServer | Where {$_.AdminDisplayVersion.ToString().SubString(0,10) -eq "Version 8."} | ForEach { Get-Mailbox -Server $_.Name -RecipientTypeDetails LegacyMailbox | Set-Mailbox -ApplyMandatoryProperties }

The same problem exists for changes made to users outside of the Exchange management tools. Any changes made within the Exchange 2007 tools are instantly reflected in the user's email addresses and address list membership, but changes which would affect email address policies and address list memberships that are made using out-of-band tools such as ADUC, ADSIEdit, or other tools for directly editing Active Directory will not show up in the Exchange 2007 mailbox properties until the same "update" commands are run. For example, if there is an email address policy which applies email addresses in the format firstname.lastname, and the last name of a user is changed using an Exchange 2007 tool, the user will instantly get a new email address with the new last name. If the same change is made using an out-of-band tool or process, the name change will not be reflected in the user's Exchange email addresses. This problem can also be solved by running the Update-EmailAddressPolicy and Update-AddressList cmdlets.
So, if your environment requires out-of-band mailbox provisioning or user updates, the simplest solution is to put these three commands into a PowerShell script and schedule it to run as often as needed (don't forget that if you still have mailboxes hosted on legacy Exchange servers you need to change the last command to the filtered version shown above):

Get-EmailAddressPolicy | Update-EmailAddressPolicy
Get-AddressList | Update-AddressList
Get-Mailbox -RecipientTypeDetails LegacyMailbox | Set-Mailbox -ApplyMandatoryProperties
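One possible way to schedule such a script on an Exchange 2007 server is a Windows scheduled task that loads the Exchange Management Shell console file before running the script. This is only a sketch; the 15-minute interval, task name, script path, and Exchange install path below are illustrative assumptions that you would adjust for your environment:

```powershell
# Hypothetical scheduled task: runs the provisioning script every 15 minutes.
# -PSConsoleFile loads the Exchange snap-ins into the non-interactive session.
schtasks /create /tn "Emulate RUS" /sc minute /mo 15 /ru SYSTEM /tr "powershell.exe -PSConsoleFile \"C:\Program Files\Microsoft\Exchange Server\Bin\exshell.psc1\" -Command C:\Scripts\Update-Provisioning.ps1"
```

Without the console file (or an equivalent Add-PSSnapin call inside the script), the Exchange cmdlets would not be available to the scheduled PowerShell session.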
To run the Update-EmailAddressPolicy cmdlet you need to be delegated the Exchange Server Administrator role and have local Administrator rights on the Exchange server. To run Update-AddressList you need to be delegated the Exchange Organization Administrator role. To run the Set-Mailbox cmdlet you need to be delegated the Exchange Recipient Administrator role. More information on the cmdlets can be found on the Microsoft TechNet website:
Update-EmailAddressPolicy: http://technet.microsoft.com/en-us/library/aa996869.aspx
Update-AddressList: http://technet.microsoft.com/en-us/library/aa997982.aspx
Set-Mailbox: http://technet.microsoft.com/en-us/library/bb123981.aspx
Using Exchange 2007 Transport Rules to Protect the First Entry in the Address Book
08 June 2009
by Ben Lye
Global Address Lists in MS Exchange can cause problems, because the first person in the list often receives email that was meant for someone else. Ben Lye shows how to eliminate such problems by creating a mail contact, a transport rule, and a custom Delivery Status Notification.
I was recently asked to add an entry to the Outlook address book which would prevent mail being inadvertently sent to the person who normally appeared at the top of the Global Address List (GAL); this person was getting frustrated by receiving e-mail which was clearly not intended for him. There are several possible ways this could be achieved: a mailbox with an auto-reply or out-of-office rule, a public folder with an auto-response, a non-Exchange auto-responder, or simply a mail contact with an invalid external address, to name a few. Ideally, I wanted the e-mail to be stopped on Exchange without requiring an extra mailbox or public folder, and I wanted the sender to get a helpful error message. These requirements meant that a mailbox or public folder with an auto-reply rule could not be part of the ideal solution, and neither could a non-Exchange auto-responder, as that would mean the e-mail would have to leave the Exchange environment before being stopped. Using a contact record with an invalid address was not perfect either, because a user who e-mailed the contact would simply receive a rather unhelpful "address unknown" non-delivery report (NDR). Fortunately, Exchange 2007 provides mechanisms which can be used to provide a neat solution to this problem. The solution has three parts: a mail contact which will appear at the top of the GAL, a transport rule to prevent mail being sent to the contact, and a custom delivery status notification (DSN) to give the user information about why their message was not delivered. Mail contacts are Active Directory objects which are typically used to add e-mail addresses which are external to Exchange 2007 to the Global Address List. To create a new mail contact you must be delegated the Exchange Recipient Administrator role and the Account Operator role for the Active Directory container where you wish to create the contact.
Transport rules run on Exchange 2007 servers which have either the Hub Transport or Edge Transport role installed. They can be used to control the flow of e-mail messages within the Exchange 2007 organization, for a variety of purposes including restricting e-mail between certain individuals or groups, or applying a footer to all e-mail destined for Internet recipients. Custom DSN messages give Exchange 2007 administrators the facility to create new DSN messages for custom delivery notifications, and the ability to customize existing DSN messages. They are a useful tool if you wish to provide users with further information, such as links to self-help knowledge base articles or contact information for help-desk staff. To create transport rules and custom DSN messages you must be delegated the Exchange Organization Administrator role.

The first step in implementing this solution is to create a new mail contact with a display name that ensures it is shown as the first entry in the GAL. An easy way to do this is to prefix the display name with a period, underscore, or any other valid character which does not normally appear in Exchange display names (spaces are prohibited as leading characters in display names). We also need to specify an e-mail address which is not in use by another e-mail-enabled object. The mail contact can be created in the Exchange Management Shell using the New-MailContact cmdlet:

New-MailContact -Name ".First Address Book Entry" -ExternalEmailAddress "[email protected]" -Alias "firstaddressbookentry"
The new mail contact will appear in the Outlook address book:
The second step is to create a new custom delivery status notification (DSN) message which will be sent to anybody who e-mails the new mail contact. Custom DSN messages can contain plain text or HTML, and in this case will provide useful information to the user pointing out that their message probably did not reach the intended recipient. We'll create a DSN message for DSN code 5.7.10, which is the first available enhanced status code (the valid range is 5.7.10 through 5.7.999 inclusive).
The Exchange Management Shell cmdlet for creating DSN messages is New-SystemMessage: New-SystemMessage -DsnCode 5.7.10 -Text 'You have accidentally sent an e-mail to the first entry in the Outlook Global Address List.<br>You may wish to resend your message to the correct recipient.' -Internal $True -Language en
The final step is to create a new transport rule which will send the new DSN message to anybody who e-mails the new mail contact. Transport rules consist of three components: conditions, actions, and exceptions. To create a new transport rule we must specify at minimum the action to be taken, but in this case we'll specify both a condition and an action. The transport rule can be created in the Management Shell using these commands, incorporating the New-TransportRule cmdlet:

$Condition = Get-TransportRulePredicate SentTo
$Condition.Addresses = @(Get-MailContact ".First Address Book Entry")
$Action = Get-TransportRuleAction RejectMessage
$Action.RejectReason = "E-mail to the first entry in the GAL is not allowed."
$Action.EnhancedStatusCode = "5.7.10"
New-TransportRule -Name "Block e-mail to the first entry in the GAL" -Conditions @($Condition) -Actions @($Action) -Priority 0
With the new mail contact in the Global Address List, the new DSN created, and the transport rule set up, if you send a test e-mail message to the new contact you will receive this NDR message back:
Using this solution, my objectives have been met: I didn't have to create a mailbox or public folder, the e-mail message doesn't leave the Exchange environment, and the sender receives a useful error message.
For more information about creating mail contacts: http://technet.microsoft.com/en-us/library/aa997220.aspx
For more information about custom DSN messages: http://technet.microsoft.com/en-us/library/aa998878.aspx
For more information about transport rules: http://technet.microsoft.com/en-us/library/aa995961.aspx
by Neil Hobson
Cluster continuous replication (CCR) is a means of providing a more resilient email system with faster recovery. It was introduced in Microsoft Exchange Server 2007 and uses log shipping and failover. The continuous replication host-name feature of Exchange 2007 Service Pack 1 and the use of multiple redundant private network interfaces can improve the effectiveness of CCR.
Introduction
In a Cluster Continuous Replication (CCR) environment, each node of the cluster will usually have two network interfaces configured. One network interface is used for client communications, whilst the other is used for intra-cluster communications. However, there are additional configuration options that you should consider when designing the network configuration of your CCR environment. In Exchange 2007 Service Pack 1, a new feature was introduced that allows additional redundant networks to be added to a CCR environment for the purposes of log shipping and database seeding. With specific networks dedicated to log shipping and database seeding, the remaining network is dedicated to its task of servicing client communications. Such a configuration avoids situations where the network that is servicing client communications also has to process a large number of transaction logs. Additionally, since redundant networks are used, the overall high availability of the CCR environment is enhanced.

In this article, I am going to describe a highly available configuration that supports multiple redundant networks used for the purposes of log shipping and database seeding. The actual configuration process is slightly different depending on whether the CCR environment is being implemented on Windows 2003 or Windows 2008, so I will be highlighting the differences where appropriate. However, please note that the focus of this article is the Windows 2008 operating system. In the configuration that I am describing in this article, each node of the CCR environment is equipped with four network interfaces, configured in the following fashion:
- Two network interfaces are configured as a network team, which serves as the public network used for client connectivity.
- A single network interface is configured as a private network for intra-cluster communications as well as transaction log shipping.
- Another single network interface is also configured as a private network, likewise used for intra-cluster communications as well as transaction log shipping.
With this configuration, Exchange will select one of the private networks for log shipping. If this private network were to fail, Exchange will select the remaining private network and will only revert to using the public network should both private networks fail or be otherwise unavailable. Figure 1 shows this configuration, including the IP addresses in use. The cluster node server names are CCRA and CCRB respectively.
Figure 1: CCR Environment with Three Networks.

The advantage of having two separate network interfaces configured as private networks is that this configuration ensures full redundancy for the private networks. However, this does not mean that it's impossible to implement redundant networks for log shipping and database seeding if there are only three network interfaces in each cluster node. It's perfectly possible, but the configuration will be slightly different. With just three network interfaces available, the following configuration is possible:
- Two network interfaces configured as a network team, serving as the public network used for client connectivity.
- A single network interface configured as a private network for intra-cluster communications as well as transaction log shipping.
With this configuration, Exchange will use the private network for log shipping and will only revert to using the public network should the private network fail or be otherwise unavailable. Therefore, utilizing four network interfaces in each cluster node gives more flexibility to the design and is the configuration that I am using in this article.
Network Teaming
Network teaming is simply the process whereby multiple network interfaces are grouped together to form a single logical network interface, with the objective being to offer fault tolerance and load balancing. In the network interface configurations mentioned previously within this article, it may have been noticed that only the public network interface is teamed. This configuration exists because Microsoft does not support teamed network card configurations for the private network interfaces in a clustered environment. More information on this can be found in Microsoft Knowledgebase Article 258750.
Configuration Procedure
Now that the background on the network configuration has been described, along with the setup that will be configured in this article, it's time to look at the actual configuration process involving continuous replication host names. These are additional host names that are assigned to each cluster node for the purposes of transaction log shipping and database seeding. The configuration process is divided into the following broad stages:
- Choose the continuous replication host names and IP addresses.
- Configure the private network interfaces as mixed networks in the cluster management application.
- Disable strict name checking. This is only required if the CCR environment is being deployed on the Windows 2003 operating system.
- Configure name resolution.
- Enable NetBIOS on the private network interfaces. This is only required if the CCR environment is being deployed on the Windows 2008 operating system.
- Enable the continuous replication host names.
- Check that the correct continuous replication host names are being used.
Name Resolution
Regardless of whether the CCR environment is deployed on Windows 2003 or Windows 2008, some aspects of name resolution must be configured on each private network interface being used for continuous replication. There are two configuration requirements to complete. The first is to clear the check box named Register this connection's addresses in DNS. To do this in Windows 2008, follow these steps:
- Bring up the properties of the network interface.
- Select the Internet Protocol Version 4 (TCP/IPv4) object and then click the Properties button.
- In the resulting properties window, click the Advanced button.
- In the Advanced TCP/IP Settings window, click the DNS tab.
- Clear the check box named Register this connection's addresses in DNS, as shown in Figure 5.
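When the change has to be made on several nodes, the same setting can be scripted rather than clicked through; as a sketch, the WMI class Win32_NetworkAdapterConfiguration exposes a SetDynamicDNSRegistration method that clears this check box programmatically. The interface-matching filter below is a placeholder assumption; match on whatever distinguishes your private NICs:

```powershell
# Find the IP-enabled adapter configuration for the private network.
# The "*Private*" description match is a placeholder - adjust for your NICs.
$nic = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE" |
    Where-Object { $_.Description -like "*Private*" }

# First argument disables full DNS registration for the connection,
# second disables domain DNS registration - equivalent to clearing the check box.
$nic.SetDynamicDNSRegistration($false, $false)
```

Whether done in the GUI or scripted, verify the result afterwards in the adapter's advanced DNS settings on each node.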
Figure 5: DNS Configuration Change.

Next, the continuous replication host names must be entered into the Domain Name System (DNS), or host file entries must be created. Generally, DNS is recommended where possible, as tracking manually created host file entries can be problematic. Therefore, in this article it is assumed that static DNS records have been created for the continuous replication host names. Don't underestimate the importance of this step, as name resolution plays a vital role in allowing log shipping or database seeding to take place via the redundant networks being configured. In the example environment described in this article, four static DNS A records have been created, one for each continuous replication host name. These records are highlighted in Figure 6.
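If the zone lives on a Windows DNS server, static A records like these can also be created from the command line with the dnscmd utility. This is only an illustration: the DNS server name, zone name, the REP1 host names, and all of the IP addresses below are assumptions, not values taken from this article's figures:

```powershell
# Hypothetical example: add one static A record per continuous replication
# host name to the contoso.com zone on DNS server DNS01.
dnscmd DNS01 /RecordAdd contoso.com CCRAREP1 A 10.0.1.10
dnscmd DNS01 /RecordAdd contoso.com CCRAREP2 A 10.0.2.10
dnscmd DNS01 /RecordAdd contoso.com CCRBREP1 A 10.0.1.11
dnscmd DNS01 /RecordAdd contoso.com CCRBREP2 A 10.0.2.11
```

Whichever method is used, confirm that each name resolves correctly from both cluster nodes (for example with nslookup) before enabling the continuous replication host names.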
It can be useful to have the cluster management program open before running these cmdlets, as it's possible to see that a new cluster group is created each time that one of the cmdlets is run. For example, Figure 8 shows the Windows 2008 Failover Cluster Management application after the running of the first cmdlet listed above.
Figure 8: New Cluster Resource Group.

It was discussed earlier in this article that NetBIOS must be enabled on the private network interfaces so that the cluster resources can be brought online successfully. If this is not done, the following error will be seen when running the Enable-ContinuousReplicationHostName cmdlet:

Cluster Common Failure Exception: Failed to bring cluster resource Network Name (name) in cluster group (cluster group name) online. The event log may contain more details.
Cluster Common Failure Exception: The cluster resource could not be brought online by the resource monitor. (Exception from HRESULT: 0x8007139A)
Figure 9: Resource Error in Exchange Management Shell. Within the Failover Cluster Management application, it can be seen that the corresponding Network Name resource has failed to come online, as can be seen in Figure 10.
Figure 10: Resource Error in Failover Cluster Management. Once NetBIOS has been enabled on the private network interface, the configuration process can continue successfully as shown in Figure 11 and Figure 12. Note that a new cluster group has been created for each continuous replication host name.
Figure 13: Get-ClusteredMailboxServerStatus Output. In Figure 13 it can be seen that the OperationalReplicationHostNames parameter contains the full list of continuous replication host names available for the CCR environment to use for log shipping purposes. In the example configuration presented within this article, there are two redundant networks available for use and the Microsoft Exchange Replication Service is responsible for choosing between them. It can be seen from Figure 13 that the InUseReplicationHostNames parameter shows which of the continuous replication host names has been selected by the Microsoft Exchange Replication Service for use. The Microsoft Exchange Replication Service checks the status of the networks every five minutes and adjusts the configuration as required. For example, if the private network associated with the continuous replication host names CCRAREP2 and CCRBREP2 is disabled, the Microsoft Exchange Replication Service then chooses the remaining redundant network as can be seen in Figure 14. Notice how the CCRAREP2 and CCRBREP2 host names are now classified as FailedReplicationHostNames.
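To keep an eye on which network is actually carrying the log shipping traffic, the same cmdlet shown in Figure 13 can be polled from the Exchange Management Shell. As a sketch, with the clustered mailbox server name below being a placeholder for your own:

```powershell
# Show which replication host names are available, in use, or failed.
# "CCRCMS" is a hypothetical clustered mailbox server name.
Get-ClusteredMailboxServerStatus -Identity CCRCMS |
    Format-List OperationalReplicationHostNames, InUseReplicationHostNames, FailedReplicationHostNames
```

Any entry that appears under FailedReplicationHostNames is a prompt to investigate the corresponding private network before the remaining redundancy is lost.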
Summary
CCR is an excellent high availability solution that can be deployed with just two network interfaces in each cluster node. However, there's no doubt that the ability to deploy redundant networks using the continuous replication host name feature of Exchange 2007 Service Pack 1 greatly enhances the overall performance and high availability of a CCR environment. If you are about to implement a CCR environment, be sure to give due thought to implementing redundant networks for the purposes of log shipping and database seeding. In particular, take the time to decide whether you should be implementing multiple redundant private network interfaces as described within this article.
by Jaap Wesselius
Of course, changing a 32,000-mailbox system, based on 40 Exchange Servers, to a centralised 25,000-mailbox solution is not a difficult task as long as you follow the Microsoft guidelines; it just takes patience, and time to understand Exchange's storage design principles. Jaap describes the design process and shares his experiences with the chosen server hardware.
I have been working on a project recently where we had to migrate 25,000 mailboxes from a decentralized Exchange Server 2003 environment with 40 servers to a centralized Exchange Server 2007 environment. Designing and building an environment like this is fun, and in this article I will explain the approach we took. I will not cover the actual migration and the decommissioning of the Exchange Server 2003 platform, just the design phase, especially for the mailbox servers.
Table 1. Usage scenario on Exchange Server 2003.

All users were using Outlook 2003. Desktops were running in online mode due to roaming profiles; laptops were running in cached mode. Outlook Web Access was used by only a small number of users, and so was POP3. There were approximately 3,000 Blackberry devices and approximately 1,000 Windows Mobile devices.
Table 2. Mailbox limits and the number of mailboxes. The customer wanted to maintain the mailbox limits in the Exchange Server 2007 environment. As it turned out, the mailbox types followed the mailbox profiles we found earlier; in general, a large mailbox had a higher I/O profile than a smaller one.
Note The actual design was made in the summer of 2007 with an older version of the storage calculator, based on Exchange Server 2007 RTM. With SP1 and with a newer version of the storage calculator different values will be found.
So now we know the total number of Exchange Mailbox servers: nine. The number of Hub Transport Servers and Client Access Servers can be derived from this number or, better, from the number of processors in the servers, though no hard numbers were used for this customer. The customer wanted to deploy Exchange Server 2007 into two separate rooms (in one building), so each room got two Hub Transport Servers and three Client Access Servers. A Unix-based anti-virus and anti-spam solution was already available, so an Edge Transport Server was not used.
Because EqualLogic virtualizes storage, a volume is spread over multiple arrays in a pool (to a maximum of three), potentially utilizing 42 disks and giving maximum performance and IOPS.
Figure 1. Schematic overview of the mailbox servers and the storage solution.
All units are configured in a RAID-50 scenario, which gives the best performance on these particular devices. In total, a net capacity of approximately 50 TB of storage is created this way, which should be sufficient to host the 25,000 mailboxes and provide enough room for future growth.
Performance
The performance of the EqualLogic PS-series really surprised me; the arrays performed better than I initially expected. During testing we managed to get 35,000 IOPS from the storage, which was a lot more than we needed. After migrating 25,000 mailboxes to the new platform, all nine mailbox servers are still running fine with a pretty low number of disk I/Os and fairly low processor utilization.
Are there challenges? Oh yes; think about Online Maintenance (OLM). With this number of databases, and databases of this size, OLM really has a hard time finishing in time, so only two databases are configured to run OLM each night. Each database runs OLM once a week; otherwise it cannot finish its run. Reseeding a database is nothing more than a file copy over the network from the active node in the CCR cluster to the passive node. Copying a 150 GB or 200 GB mailbox database can take some time; imagine having to reseed five databases. You also need a well-performing backup solution to have the backup finish within the available timeframes. Monitoring also becomes more important when dealing with this number of databases.
Conclusion
Designing a 25,000 mailbox solution is not a difficult task as long as you follow the Microsoft guidelines; use the Exchange Storage Calculator for designing a proper storage solution and understand Exchange storage design principles. Failing to do so will always result in a poorly performing Exchange Server environment. Using a storage solution from Dell/EqualLogic is also possible, but special care has to be taken to design and implement the members the correct way, using the right number of portals, pools and members. If you do so, it will perform well. Please note that in this scenario the EqualLogic solution was only used for storing the Exchange Server mailbox databases and logs. No other data, such as SQL databases, is stored on the EqualLogic arrays.
Disclaimer This article is a true scenario, but the entire project from the Exchange Server 2003 inventory up to decommissioning the Exchange Server 2003 servers took more than a year with three people, working full time on all aspects. The documentation that was created during the project covers more than 300 pages, so it is not possible to cover everything in great detail. My main objective was to explain a bit more about using the Equallogic hardware in combination with a 25,000 mailbox environment.
by Ben Lye
There are a number of features in Exchange that can be used in creative ways to solve problems as they crop up. Ben Lye recently found a great use for Messaging Records Management when he hit the problem of a meeting-room mailbox that stopped replying to booking requests. Here he shows how to apply policies to all folders of a particular type, and how to schedule a regular task to do it.
Messaging Records Management (MRM) is a feature of Exchange 2007 which lets administrators define policies to control the storing of e-mail data. Essentially, MRM allows you to create folder-level rules which determine how long messages stored in that folder should be retained. MRM policies can be applied to default mailbox folders or custom mailbox folders; however, applying policies to custom folders requires Enterprise CALs. MRM can be used to ensure compliance with company policy, legal requirements, or other regulations. MRM can also be used as an administrative tool. Recently I had to fix an issue with a meeting room mailbox which had stopped replying to booking requests. It turned out that the mailbox was over the database's default Prohibit Send quota, so it was unable to send any messages. The problem was that all the old meeting requests, which had been deleted after they had been processed by the Calendar Attendant, were still in the Deleted Items folder, and that was causing the mailbox to exceed its quota. The quick fix was to empty the Deleted Items folder, but this didn't really fix the root cause of the problem - the mailbox would continue to grow over time. Another option was to apply a higher quota, but this would just make the problem come back later when the new quota was reached. Ideally I wanted to find a way to make sure that deleted messages in resource mailboxes were permanently removed. This is where MRM comes in - as well as applying policies for storing e-mail messages, MRM can be used to apply policies to delete e-mail messages. To solve the problem with my resource mailbox I created an MRM policy which would automatically remove messages from the Deleted Items folder of all resource mailboxes. There are five steps to setting up an MRM policy:

1. Choose the folder you want the policy to apply to.
2. Create the managed content settings for the folder.
3. Create a managed folder mailbox policy.
4. Apply the managed folder mailbox policy to the mailboxes.
5. Schedule the managed folder assistant to run.
To set up the MRM settings and policies you must be delegated the Exchange Organization Administrator role. I already know that I want to apply the policy to the Deleted Items folder; however, I want to create a new managed folder so that, if there are future needs for different managed content settings on the Deleted Items folder in other mailboxes, they can be accommodated. Because the Deleted Items folder is a default folder I can use managed default folders.
With the new managed folder created, the next thing to do is create the managed content settings, which define the actual retention criteria. This can be done through either the Exchange Management Console or the Exchange Management Shell. I want settings which will remove items from the Deleted Items folder 30 days after they were received by the resource mailbox. The deleted messages should be available for recovery according to the database's standard deleted item recovery policy. To create the managed content settings we will use the New-ManagedContentSettings cmdlet:

New-ManagedContentSettings -Name "Remove Deleted Items After 30 Days" `
    -FolderName "Deleted Items (30-day limit)" `
    -MessageClass * `
    -RetentionEnabled $true `
    -RetentionAction DeleteAndAllowRecovery `
    -AgeLimitForRetention 30 `
    -TriggerForRetention WhenDelivered
Because a mailbox can only have one managed folder mailbox policy applied, managed folder mailbox policies are used to group managed content settings so that multiple settings can be applied to a mailbox. The next step is to create the managed folder mailbox policy. To create a new policy containing the new deleted item managed content settings we use the New-ManagedFolderMailboxPolicy cmdlet:

New-ManagedFolderMailboxPolicy -Name "Resource Mailbox Policy" `
The next step is to apply the new managed folder mailbox policy to the resource mailboxes using the Set-Mailbox cmdlet (once for room resources and once for equipment resources). You will be warned that Outlook clients prior to Outlook 2007 do not support all MRM features, and that clients older than Outlook 2003 SP2 are not supported:

Get-Mailbox -RecipientTypeDetails RoomMailbox |
    Set-Mailbox -ManagedFolderMailboxPolicy "Resource Mailbox Policy"
Get-Mailbox -RecipientTypeDetails EquipmentMailbox |
    Set-Mailbox -ManagedFolderMailboxPolicy "Resource Mailbox Policy"
The final step is to schedule the managed folder assistant (the process which actually enforces the policy) to run. This is done with the Set-MailboxServer cmdlet. Each Exchange server has only one schedule for the managed folder assistant, and you should schedule it to run as often as needed to meet your requirements. To run the managed folder assistant on all Exchange 2007 mailbox servers daily between 1am and 3am, the command would be:
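A sketch of that scheduling command (the schedule-string syntax — a list of day.time ranges — follows Exchange 2007's documented format; verify it against your own environment before use):

```powershell
# Sketch: run the managed folder assistant daily from 1:00-3:00 AM on every mailbox server
Get-MailboxServer | Set-MailboxServer -ManagedFolderAssistantSchedule `
    "Sun.1:00 AM-Sun.3:00 AM", "Mon.1:00 AM-Mon.3:00 AM", "Tue.1:00 AM-Tue.3:00 AM",
    "Wed.1:00 AM-Wed.3:00 AM", "Thu.1:00 AM-Thu.3:00 AM", "Fri.1:00 AM-Fri.3:00 AM",
    "Sat.1:00 AM-Sat.3:00 AM"
```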
The managed folder assistant can also be started on demand using the Start-ManagedFolderAssistant cmdlet. Running the managed folder assistant can be a resource-intensive process, so it should be scheduled to run during off-peak hours. The application of this MRM policy will ensure that my resource mailboxes remain small, as the contents of their Deleted Items folders will be automatically removed. More information on Messaging Records Management and messaging policies can be found in Microsoft TechNet: http://technet.microsoft.com/en-us/library/aa998599.aspx
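To test the policy without waiting for the schedule, the assistant can be run against a single mailbox; a sketch (the mailbox name ConferenceRoom1 is a placeholder):

```powershell
# Run the managed folder assistant immediately for one hypothetical resource mailbox
Start-ManagedFolderAssistant -Mailbox "ConferenceRoom1"
```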
by Jaap Wesselius
In his previous article, "Hyper-V, an introduction," Jaap Wesselius explained the Hypervisor, the parent partition, the child partition, and Integration Components. In this article Jaap discusses installing Hyper-V, the various kinds of Virtual Hard Disks, Virtual Networks, and some best practices.
Installing Hyper-V
Hyper-V is a Server Role within Windows Server 2008, which means that you have to install Windows Server 2008 before installing Hyper-V. The hardware requirements for installing Hyper-V are:

- The processor needs to support hardware virtualization (AMD-V or Intel VT).
- Hardware-enforced Data Execution Prevention (DEP): the Intel XD bit or the AMD NX bit needs to be enabled.
- The processor needs to be an x64 processor.
Furthermore, you need plenty of physical memory in the server. For testing purposes the absolute minimum is 4 GB of memory; the amount of memory needed in a production environment depends on the services you want to run. Hyper-V is a Server Role in the Standard, Enterprise and Datacenter Editions of Windows Server 2008. From a technical perspective there's no difference; it's a licensing issue:

- Install Standard Edition and you are allowed to run one Virtual Machine using the same license.
- Install Enterprise Edition and you are allowed to run four Virtual Machines using the same license.
- Install Datacenter Edition and you are allowed to run as many Virtual Machines as the server can handle.
Note There's also a Hyper-V Server available. This is a very small, command-line based Operating System that's only capable of running the Hypervisor, nothing else. There are a few graphical UIs, like the date and time applet, but it's mainly command-line. Hyper-V Server is a free download, but when configuring Virtual Machines you need a license for each Virtual Machine!
Install Windows Server 2008 using your corporate standards and bring it up to date with the latest hotfixes or service packs. Windows Server 2008 is now running on "bare metal" and there's no difference with any other Windows Server. Logon to the server and open the Server Manager. In the left pane select "Roles" and in the right pane you'll see "Roles: 0 of 18 installed." Click on "Add Roles" and select "Hyper-V" in the Select Server Roles Window.
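The same role can also be added from the command line on Windows Server 2008; a sketch (ServerManagerCmd was the command-line role installer of that release, and Hyper-V is its documented role identifier — verify the identifier with `ServerManagerCmd -query` first):

```powershell
# Install the Hyper-V role from an elevated prompt, then restart the server
ServerManagerCmd.exe -install Hyper-V -restart
```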
Figure 1. Selecting the Hyper-V role in the Select Server Roles window.

Click Next to install the Hyper-V Server role. The wizard also includes the possibility to create Virtual Networks. We'll cover that later in this article, so leave all options blank and click Next to continue.
Figure 2. In the Add Roles wizard there's the possibility to create Virtual Networks. You can create these later as well.

Right now the actual Hypervisor "slides" under the Operating System, making the original Windows Server 2008 instance a Virtual Machine. The VM Worker processes (responsible for emulating devices), some WMI interfaces, the VM Bus and the Virtual Storage Provider (VSP) are also installed. When finished, the server needs to be rebooted. During the reboot the Hypervisor is started first and then hands over the boot process to Windows Server 2008, which is now the Parent Partition (also referred to as the Root Partition). When you log on after rebooting you'll see nothing special; it just looks like Windows Server 2008, and it is. The only new part is the Hyper-V Manager, which can be found in the "Administrative Tools" menu. The Hyper-V Manager is an MMC snap-in, with a server pane (left), a results pane (middle) and an actions pane (right). In the server pane you'll see only the server that you're logged on to; additional Hyper-V servers can be added later on.
Figure 3. The Hyper-V Manager just after installing Hyper-V. One of the first things that has to be done is changing the settings. Click on "Hyper-V Settings" in the actions pane and enter new paths for the Virtual Hard Disks and the Virtual Machine. By default these will be placed on the C:\ drive, but from a performance perspective these should be configured on a separate drive. This can be a physical disk in the server, a storage cabinet attached to the server or a SAN (iSCSI or Fiber).
There's also a fourth type of disk: the pass-through or dedicated disk. This is a disk on the Parent Partition that's connected directly to a disk controller of a Virtual Machine. The complete disk is used in this configuration, without a VHD file, so it has no overhead. Besides the fixed-size disk, Microsoft also recommends using this kind of disk in a production environment; check the TechNet site for more information. The pass-through disk can be a physical hard disk in the Hyper-V host, but it can also be a LUN on an iSCSI or Fibre Channel SAN. To create a new Virtual Hard Disk open the Hyper-V Manager, click New in the Actions pane and select "Hard Disk." The New Virtual Hard Disk Wizard shows up with a welcome screen; click Next to continue. In the Choose Disk Type window you can select what type of VHD needs to be created. Please be aware that creating a large fixed VHD file will take a considerable amount of time! I will cover the dedicated or pass-through disk later in this article, after the creation of a Virtual Machine.
Virtual Networks
By default Virtual Machines are not connected to any network, so they are pretty useless actually. To connect Virtual Machines to each other, to the Parent Partition, or to the network outside the Hyper-V host, Virtual Networks need to be used. Three types of Virtual Networks are available:

- Private Virtual Network: a virtual network that's only available to Virtual Machines running on the Hyper-V server. There's no possibility for the VMs to connect to the outside world, or to the Hyper-V server.
- Internal Virtual Network: a virtual network that's available to the Virtual Machines, but also to the Parent Partition. VMs can connect to the Parent Partition using the Internal Virtual Network. When Internet Connection Sharing (ICS) is enabled on the Parent Partition, the VMs can connect to the outside world using this connection.
It is a Microsoft recommendation to use multiple network interfaces: one for management purposes, available only to the host; one for an External Virtual Network so that VMs can access the physical network; and, if you're using an iSCSI solution, an additional interface for that as well. Check the following TechNet article for more information: 7 Best Practices for Physical Servers Hosting Hyper-V Roles. So, to have a Virtual Machine communicate with the outside world we have to create an External Virtual Network. Log on to the server and open the Hyper-V Manager. In the tasks pane select the Virtual Network Manager. Click Add, enter a name like "Public Virtual Network" and select the network interface that needs to be used by this Virtual Network. In this example this is the first Broadcom interface that's available.
Figure 4. During creation of an External Virtual Network you have to select the physical network interface.

When you click OK the new Virtual Network will be created. A warning message will pop up indicating that you may temporarily lose network connectivity with the Hyper-V host; if you're connected via a Remote Desktop session you may have to re-establish the connection. When the new Virtual Network is created and you check the Network Connections you'll notice a new entry: this is the new Virtual Network.
Figure 5. The new Network Connection after the creation of an External Virtual Network.

When you create a Virtual Machine and connect it to this new Virtual Network, the Virtual Machine will be able to access resources on the public network.
Virtual Machines
On Hyper-V there are two types of Guest Operating Systems: Supported and Unsupported. This has nothing to do with the ability to run in a Hyper-V Virtual Machine, but with whether the Operating System is directly supported by Microsoft:

- Supported Guest Operating Systems: Guest Operating Systems that are fully supported running on a Hyper-V server and that can fully utilize the Hyper-V infrastructure, like the VMBus. Examples are Windows Server 2008, Windows Server 2003 SP2, Windows 2000 Server with SP4, Windows Vista SP1, Windows XP Professional SP2 and SUSE Linux Enterprise Server 10 with Service Pack 2. For a complete list of supported Operating Systems check the Microsoft site: http://support.microsoft.com/kb/954958 (Guest operating systems that are supported on a Hyper-V virtual machine).
- Unsupported Guest Operating Systems: Guest Operating Systems that can run on a Hyper-V server but do not utilize the Hyper-V infrastructure. These run in an emulated environment, provided by the VM Worker process, which runs in User Mode in the Parent Partition. When such a VM makes a call to, for example, a disk or a network, the call goes through the VM, to the emulator, and then to the actual hardware. This results in a serious I/O penalty, and thus slower performance.
SUSE Linux Enterprise Server is a Supported Guest Operating System: it will run under Hyper-V, and if you encounter an issue you can call Microsoft Product Support. SCO Unix is an Unsupported Guest Operating System: it will run perfectly under Hyper-V but it is not supported by Microsoft (it is fully supported by SCO; for more information check SCO's site). Now we are going to create a Virtual Machine running Windows Server 2003 SP2. You can use the Windows installation CD or an ISO image; the latter is usually faster. Log on to the Hyper-V server and start the Hyper-V Manager. In the Actions pane click New and select Virtual Machine. The New Virtual Machine Wizard will show up; click Next to continue. Enter a name for the new Virtual Machine, for example "Windows Server 2003 X64."
Figure 6. The New Virtual Machine Wizard when creating a new Virtual Machine.

The location will be the same as we configured in the Hyper-V settings earlier in this article. You can check the "Store the virtual machine in a different location" checkbox when you want to put the new Virtual Machine in another location (i.e., on another disk). Click Next to continue. Enter the amount of memory that you want to assign to the VM. Please note that there's no real difference from a physical machine: if you fail to assign enough memory the new Virtual Machine will be slow and sluggish. Another thing to remember is that, unlike VMware, there's no way to overcommit memory. If you have 16 GB of RAM available, you can only assign 1 GB of RAM to 14 Virtual Machines (assuming approximately 2 GB is used by the Parent Partition). The next step is to connect the new Virtual Machine to a Virtual Network; connect it to the "Public Virtual Network" that we created earlier. The new Virtual Machine also needs a Virtual Hard Disk (VHD). Since we did not create a VHD in advance, use the default setting "Create a virtual hard disk."
Figure 7. A new Virtual Hard Disk will be created.

If we had created a Virtual Hard Disk in advance, for example a fixed-size one, you would select "Use an existing virtual hard disk" and select the Virtual Hard Disk in the Location field. If you're running disk-intensive applications like SQL Server, MySQL or Exchange Server, fixed disks or pass-through disks should be used for best performance. Click Next to continue. Since we will be installing a new Operating System we have to select the installation media. This can be either a CD/DVD or an ISO image, whichever you prefer. Click Finish to end the New Virtual Machine Wizard. The new Virtual Machine is now ready to be booted and we can continue to install the Operating System in it. Double-click the new Virtual Machine in the Hyper-V Manager and press the "Virtual Power button" to boot it. Install Windows Server 2003 as you are used to...
Note If you are accessing the Hyper-V server using Remote Desktop, it's not possible to use your mouse in the new Virtual Machine until you've installed the Integration Components. And Integration Components can only be installed after finalizing the setup of Windows, so the setup itself can be challenging. When you have access to the console itself there's no problem, and the mouse works as expected.
Figure 8. The dedicated disk needs to be offline in the Parent Partition! In the Hyper-V Manager right-click the Virtual Machine and select Settings. Select the IDE Controller, select Hard Drive and click Add. Under Media select Physical Hard Disk and select the disk you want to assign to this controller.
Figure 9. Add the physical disk to the disk controller. Note that this is the same disk as in Figure 8!

When you boot your Virtual Machine and open Disk Management the new disk will show up, ready to be partitioned and formatted. Is it possible to boot from a pass-through disk? Yes, but only if it is assigned to the IDE controller; SCSI controllers can only be used for data disks. When creating a Virtual Machine that needs to boot from a pass-through disk you have to create the VM configuration first, then add the pass-through disk to the IDE controller, and then start up the VM.
Conclusion
In this second article I tried to give more information regarding the installation of Hyper-V, Hyper-V's Virtual Networks, Virtual Hard disks and Virtual Machines running under Hyper-V. In my next article I'll discuss the deployment of Virtual Machines and the use of System Center Virtual Machine Manager (VMM) 2008 in a Hyper-V environment.
by Ben Lye
There are good reasons for preventing old versions of Outlook from connecting to Exchange Server; security is probably chief among them. Before you do so, you'll also need to know which versions are out there being used, so that you can make sure legitimate users aren't blocked. Ben Lye explains how it is done.
Outlook has been around for a long time, and there are many versions with different features and varying levels of security fixes. It's not always desirable to let any version of Outlook connect to Exchange Server as you may require your clients to utilize a specific feature set, such as Messaging Records Management, or to have certain security fixes. You may also have corporate policy which dictates that a particular version or versions of Outlook are used. Fortunately Exchange has the ability to restrict which versions of Outlook can connect by blocking MAPI client versions, a feature which was introduced in Exchange 2000 Service Pack 1. In Exchange 2007 MAPI client blocking can be implemented on a per-server basis using a registry change, or on a per-mailbox basis using the Exchange Command Shell. Additionally, Microsoft recommends implementing Outlook client blocking as a best practice, and if you run the Exchange Best Practices Analyzer against a server which does not have client blocking enabled it will suggest that you configure it.
Table 1: Emsmdb32.dll version by Office release.

There are some important points to note:

- The MAPI client version numbers listed in Table 1, and those in the results of the Get-LogonStatistics cmdlet, are in the format x.0.y.z. When specifying MAPI versions to be blocked you must use the format x.y.z. For example, the version number for Outlook 2003 RTM becomes 11.5604.0.
- When setting per-server restrictions it is very important to avoid restricting clients with version numbers 6.y.z, as Exchange Server makes use of MAPI for server-side component connections and uses MAPI versions within the 6.y.z range (with the version number potentially varying by Exchange component and patch level). This does not apply to per-mailbox restrictions.
- Microsoft recommends that at a minimum you block all MAPI clients with version numbers equal to or earlier than 5.3164.0.0.
Single versions are blocked by specifying the version in the format <version>, an open-ended range is blocked by using the format -<version> or <version>-, and a specific inclusive range is blocked by using the format <version>-<version>. Multiple sets of client versions can be disabled using a comma- or semi-colon-separated list.

Range Type            Example          Effect
<version>             11.5604.0        Block the specified MAPI version.
-<version>            -11.0.0          Block the specified version number, and all previous versions.
<version>-            11.0.0-          Block the specified version number, and all newer versions.
<version>-<version>   11.0.0-11.9.9    Block the specified version numbers, and all clients between the specified versions.
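Per-mailbox blocking, mentioned earlier as the Exchange shell option, uses this same range syntax via the Set-CASMailbox cmdlet; a sketch (the mailbox name jsmith is a placeholder, and the range shown is only an example):

```powershell
# Block Outlook versions 5.9.9 and earlier for a single, hypothetical mailbox
Set-CASMailbox -Identity "jsmith" -MAPIBlockOutlookVersions "-5.9.9"
```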
You need to be a local administrator on the Exchange server in order to edit the registry. Start the registry editor on your Exchange 2007 Mailbox server and locate the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem key. Right-click ParametersSystem, select New, and then select String Value.
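The same per-server setting can be scripted rather than set by hand; a sketch (the value name Disable MAPI Clients is Microsoft's documented name for this per-server blocking setting, and the range shown is only an example):

```powershell
# Create the per-server MAPI blocking value; this example blocks clients 5.9.9 and earlier
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem" `
    -Name "Disable MAPI Clients" -Value "-5.9.9" -PropertyType String
```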
The change will be effective within 15 minutes, or to make it effective immediately you can restart the Microsoft Exchange Information Store service. Once the change takes effect any existing client connections which do not meet the version requirements will be terminated.
Using Twitter and PowerShell to Find Technical Information and Join a Community
30 July 2009
by Jonathan Medd
Using PowerShell and a little bit of .NET Framework and XML knowledge, it is possible to delve a little deeper into the information which is potentially available to you from Twitter. Jonathan explains about Twitter and shows how to use Powershell to access twitter automatically.
For somebody so interested in new technology, particularly within the Enterprise workplace, I remain healthily sceptical of the latest fads and how useful they might actually be. A colleague of mine had for some time been pestering me to use social networking sites, saying I was missing out on some great tools for information gathering and on being part of various online communities. As far as my initial look took me, I couldn't really see much benefit beyond a bit of fun time-wasting at idle parts of the day, and potentially having to deal with some privacy issues. After a while though, seeing how enthused he was and how much he seemed to be getting out of it, I gave in and signed up. There are plenty of social networking sites out there these days, each with its own possible uses in different situations; in this article I'm going to look at the micro-blogging service Twitter and how you can use it to get technical information online and also participate in a community. As with many ideas, the best ones are typically the simplest. Messages of up to 140 characters, called tweets, are published to your Twitter page, and anybody subscribed to your updates can read them in a variety of different applications. Based on SMS messaging technology, tweets typically describe what you are doing right at that moment in time, but instead of being read by just the one person you have sent a text to, anybody following you on Twitter is able to read them. These can range from the informative "I just fixed this technical problem by doing x....." to the banal "Just missed the bus, I'm going to be late for work." It's these trite comments which are often highlighted as making Twitter a great time-waste, but if you ignore them there is some great technical content just waiting for you to grab hold of.
Getting Started
First of all you will need to sign up for an account at Twitter. Once complete you can use the web interface as a basic starting point.
This is a typical view of a Twitter home page. From here you can post updates, add Twitter users you wish to follow, see recent updates from those you are following, view a list of those following you, carry out some basic searching and view trending topics (some of the current most popular words being talked about on Twitter, which usually gives a good barometer for some of the most commonly talked about news stories of that point in time).
Be careful what you post. Once it's there, although you can delete it, it doesn't take long for a message written in haste to be sent round the globe and cached on a server somewhere.
Also, it's probably not a good idea to tell the world that you are going on holiday and leaving an empty house for two weeks. (See: Note to Self: Don't Tweet Vacation Plans.) If you want to be extra cautious you can approve each person who wishes to follow your Twitter updates before you let them see them. If you are somebody who already likes to engage with a technical community and share content, you can use Twitter to help promote a new post on your blog, a new episode of a podcast or an update for an open-source project you work on. For instance, as well as manually posting a link to a new blog post you have just written, why not use a service like http://twitterfeed.com to automatically publish a tweet for the new post. Or, over at the Get-Scripting Podcast, we have created a Twitter account for the podcast and use it to keep listeners up-to-date with show releases and what's coming up on future shows. It never ceases to amaze me what new uses for Twitter people find. At a technical event I attended recently the presenter encouraged those in the audience to send him questions via Twitter during his presentation. I have also attended a Q&A session in the past where, during the session, I put out a tweet saying where I was and ended up asking questions on behalf of people elsewhere who sent them to me via Twitter. Another example comes from a situation where I had made a forum post which had gone unanswered for a few days; I was particularly keen to get an answer, so I tweeted about it and within a couple of hours I had some answers which had previously looked like they were never going to appear. A number of corporate and marketing types have begun to cotton on to this real-time and potentially personal interaction with customers, and will provide Twitter feeds about their products and company.
Whilst IT pros may not always be particularly interested in marketing spiel, it can be useful to, say, find out when new updates for a product are released, or to follow the product manager, who may ask what kind of features and updates people would like to see in a product. It's great that these opportunities now exist in these more open times; even massive companies like Microsoft, who may in the past have hidden the technical people away behind the marketeers, are now much more open and easier to interact with.
Twitter Clients
Fairly soon you will find that the basic Twitter web page doesn't have enough features for your (hopefully) growing Twitter use. Since Twitter publishes an API for accessing its service, there is a plethora of clients out there you can use instead. A couple of the most commonly used clients on desktop and laptop machines are Twhirl and TweetDeck, both of which are free to use. Using a client gives you a feature-rich Twitter experience.
Twhirl
Twhirl gives you features like notifications for new messages, spell checking, coloured replies (green) and direct messages (yellow), posting images to TwitPic, and automatically finding tweets mentioning your username (in the web client you won't see replies from people unless you are following them).
Using Twitter and PowerShell to Find Technical Information and Join a Community
TweetDeck
TweetDeck offers fairly similar features to Twhirl, with one particularly great extra feature: the ability to maintain extra columns based on searches. Say, for instance, you are interested in what people are saying about Exchange 2010; you can create a search based on the phrase "Exchange 2010" and view the results in a column next to your main Twitter feed. The difference is that the results are dynamic: the client will poll for new results, allowing you to keep track of conversations.
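The dynamic-column behaviour described above boils down to polling a search and surfacing only results newer than anything already shown. Here is a conceptual sketch of that idea in Python (this is not TweetDeck's actual implementation; the tweets and IDs are invented for illustration):

```python
# Track a "high-water mark" ID so repeated polls only surface new tweets.
def new_results(results, last_seen_id):
    """Return tweets newer than last_seen_id and the new high-water mark."""
    fresh = [t for t in results if t["id"] > last_seen_id]
    high = max((t["id"] for t in fresh), default=last_seen_id)
    return fresh, high

# First poll returns three tweets; the column shows all of them.
poll1 = [{"id": 101, "text": "Exchange 2010 beta"},
         {"id": 102, "text": "Exchange 2010 RC is out"},
         {"id": 103, "text": "Upgrading to Exchange 2010"}]
fresh, mark = new_results(poll1, last_seen_id=0)

# Second poll overlaps with the first; only id 104 is actually new.
poll2 = poll1[1:] + [{"id": 104, "text": "Exchange 2010 backup notes"}]
fresh2, mark2 = new_results(poll2, mark)
```

Each poll therefore appends only genuinely new tweets to the column, which is what lets you follow an ongoing conversation.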
Once you start to see information coming in like this, not only will you be really up-to-date with all the latest news on the topics you are interested in, but you'll also find yourself wanting to engage with these communities, joining in by posting your own information and starting conversations with people with similar interests.
Mobile devices
Most IT people these days will likely be carrying around some kind of Internet-connected mobile device for everything from keeping up with emails to remote administration, and naturally there are Twitter clients for these devices too. I won't cover these in much depth, since you are far more likely to use Twitter on a mobile device for things like up-to-date travel information, but you may wish to view some topics whilst out and about.
Windows Mobile
Twikini seems to be one of the most recommended clients for Windows Mobile based devices. It has most of the features you would expect from a Twitter client.

iPhone

TwitterFon gets a lot of good reviews and appears generally popular. As with most of these kinds of things it's probably a matter of personal preference, and it's worth trying out a few options to see what works best for you.
Note The following code is slightly amended from James's original function since, at the time of writing, Twitter appear to be making their search available via Atom rather than RSS (the RSS search gives intermittent errors).
Function Get-TwitterSearch {
    Param($searchTerm, [switch]$Deep)
    if ($WebClient -eq $null) {$Global:WebClient = New-Object System.Net.WebClient}
    $results = [xml]($WebClient.DownloadString("http://search.twitter.com/search.atom?rpp=100&page=1&q=$SearchTerm"))
    $Searchitems = $results.feed.entry
    if ($Deep) {
        $refresh = $results.feed.link | Where-Object {$_.rel -eq 'refresh'}
        $MaxID = $refresh.href.split("=")[-1]
        2..16 | foreach {
            $Searchitems += ([xml]($WebClient.DownloadString("http://search.twitter.com/search.atom?rpp=100&max_id=$MaxID&page=$_&q=$SearchTerm"))).feed.entry
        }
    }
    $SearchItems
}

At that point we could stop the search, but that would only give us 100 results. It may well be better to get more information, in which case the search function has a "Deep" switch which essentially runs the search another 15 times, reaching further back each time in the Twitter timeline. A good example of why you might wish to do this is searching for people who tweet about topics you are interested in and might be interesting to follow. If we carry out a "deep" search and store the results in a variable, we can then use standard PowerShell techniques to manipulate the data into something potentially interesting. For instance, earlier we used TweetDeck to search on "Exchange 2010" and show recent and new posts about that topic. Let's run the same search using the Get-TwitterSearch function:

$twittersearch = Get-TwitterSearch "Exchange 2010" -Deep

Now let's group the results by the author of the tweets, sort them by the authors with the most tweets, pick out the top 20 and display them in a table with their count and the author's name:

$twittersearch | Group-Object {$_.author.name} | Sort-Object count -Descending | Select-Object -First 20 | Format-Table count,name -auto
You can then check out the authors in question by looking at http://twitter.com/authorname for their recent updates and see if they might be interesting to follow. Exchange 2010 is still a relatively new topic, so the numbers above aren't massive. The below screen shows a search on PowerShell and gives a pretty good indication of the people who tweet about it a lot.
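The Group-Object / Sort-Object / Select-Object pipeline above can be sketched in any language; here is the same grouping idea in Python using collections.Counter on a handful of made-up search results (the author names are invented for illustration):

```python
from collections import Counter

# A few fake search results; real ones would come from the Atom feed.
tweets = [{"author": "alice", "text": "PowerShell tip #1"},
          {"author": "bob",   "text": "Loving PowerShell remoting"},
          {"author": "alice", "text": "PowerShell tip #2"},
          {"author": "carol", "text": "New PowerShell blog post"},
          {"author": "alice", "text": "PowerShell tip #3"}]

# Count tweets per author and take the most prolific first,
# mirroring Group-Object | Sort-Object count -Descending | Select -First 20.
top_authors = Counter(t["author"] for t in tweets).most_common(20)
```

`top_authors` is a list of (author, count) pairs sorted by tweet count, which is exactly the shape the PowerShell pipeline renders with Format-Table.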
You can obviously now run these searches based on the topics of your choice and find people worth following.
Note Be aware that you may get HTTP errors like "502 Bad Gateway" or "503 Service Unavailable" when searching, particularly if you run multiple "deep" searches or use the service at peak Twitter usage times. This is because the Twitter service uses rate limits to throttle accounts running too many searches; it also currently has occasional capacity issues at peak times, which can make running these kinds of processes less reliable than you would hope.
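One generic way to cope with those intermittent 502/503 responses is to retry with an increasing delay. This is a general pattern, not part of James's function library; the sketch below simulates a flaky service rather than calling Twitter:

```python
import time

def with_retries(fetch, attempts=4, base_delay=1.0, sleep=time.sleep):
    """Call fetch(); on a transient failure, wait and retry with backoff."""
    for attempt in range(attempts):
        try:
            return fetch()
        except IOError:
            if attempt == attempts - 1:
                raise                         # give up after the last try
            sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ...

# Simulate a service that fails twice with "503" before succeeding.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("503 Service Unavailable")
    return "<feed>...</feed>"

result = with_retries(flaky_fetch, sleep=lambda s: None)  # skip real waits
```

Wrapping the search call this way keeps a deep, multi-page search from aborting on the first transient error.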
Something else you can check out is your list of Twitter friends. On the web page it's quite difficult to see a complete list of the people you are following; below, James has put the ability to pull this data down into a PowerShell function. (Note that I have amended the function slightly from the original so that if you are following more than 100 Twitter users you will see them all; the original function only pulled down the first page of 100.) Essentially we again create a new WebClient object, give it the credentials to use for Twitter, pull down the data in XML format and return the data about the users.

Function Get-TwitterFriend {
    Param($username, $password, $ID)
    if ($WebClient -eq $null) {$Global:WebClient = New-Object System.Net.WebClient}
    $WebClient.Credentials = (New-Object System.Net.NetworkCredential -ArgumentList $username, $password)
    $nbrofpeople = 0
    $Friends = @()
    if ($ID) {$URL = "http://twitter.com/statuses/friends/$ID.xml?page="}
    else {$URL = "http://twitter.com/statuses/friends.xml?page="}
    do {
        $Friends += (([xml]($WebClient.DownloadString($URL + ($nbrofpeople/100 + 1)))).users.user)
        $nbrofpeople += 100
    } while ($Friends.count -eq $nbrofpeople)
    $Friends
}

So to run the function and see your friends listed in alphabetical order with the Name, Location and Description properties, you would type:

Get-TwitterFriend username password | Sort-Object Name | Format-Table Name,Location,Description
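The paging logic in Get-TwitterFriend is worth calling out: it keeps requesting 100-user pages and stops as soon as a page comes back short, which means there is nothing left to fetch. Here is that pattern reduced to its essentials in Python, with a stand-in for the WebClient.DownloadString call:

```python
PAGE_SIZE = 100

def get_all_friends(fetch_page):
    """Accumulate pages of PAGE_SIZE users until a short page appears."""
    friends, page = [], 1
    while True:
        batch = fetch_page(page)
        friends.extend(batch)
        if len(batch) < PAGE_SIZE:   # short (or empty) page: we're done
            return friends
        page += 1

# Simulate an account following 250 people: two full pages plus one of 50.
def fake_fetch(page):
    total = 250
    start = (page - 1) * PAGE_SIZE
    return [f"user{i}" for i in range(start, min(start + PAGE_SIZE, total))]

everyone = get_all_friends(fake_fetch)
```

This is the same condition the PowerShell do/while expresses: the loop continues only while the running count is an exact multiple of the page size.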
This would give you results like those shown below.
There are nine other functions in James's Twitter function library for you to play with, including things like Get-TwitterReply to see replies you have sent, Publish-Tweet to send a message to Twitter, or Add-TwitterFriend to add people to your friends list. For instance, say you have run one of the previously mentioned searches and found the top 20 people talking about the topic you are interested in; you could then pipe the results onwards and, using the Add-TwitterFriend function, add all of them to your friends list in one easy go.
First of all, use the Credentials node to store your Twitter username and password in a global variable so that you don't need to supply them each time you run one of the scripts. You can then run any of the script nodes and see the results in the grid pane. In the example below I'm using the Get-TwitterFollowers script to see the list of people who follow me, and I've clicked on the Name column to sort them alphabetically.
Another way to find people on Twitter to follow is to use one of the maintained lists of people who talk about particular subjects. A couple of these lists feature in the Twitter PowerPack; below is the VMware list maintained over at the http://Virtu.al.net website. Clicking the script node will produce a list of the VMware Twitterers; you could select multiple (or all) people and then use the Add-TwitterFriendFromList action, based on the previously mentioned Add-TwitterFriend function, to add them all to your follow list in one go.
A potential drawback of these lists is that they are statically maintained. For a more dynamic approach you could use a website like http://www.tweetertags.com, where Twitter users tag themselves with subjects they are interested in and you can search those tags for people to follow. Not surprisingly, you can search these tags using PowerShell, and I've included that in the Twitter PowerPack. For instance, using the Get-TweeterTag node to search for "Exchange" tags on the website returns the following list of users; you could then use the Add-TweeterFriend action to add a multiple selection of people to your follow list.
Conclusion
Twitter is one of those things where you get out of it as much as you want to; it can be anything from a fun time-wasting tool to something really useful for finding information and people around the globe with similar interests to yourself. The various Twitter clients for different devices can help you keep in touch with these communities wherever you are. Using PowerShell and a little bit of .NET Framework and XML knowledge, you can delve a little deeper into the information that is potentially available to you from this social network.
by Jaap Wesselius
Exchange Server 2010 Release Candidate is now available for download from TechNet, so we asked Jaap to review it for us.
In an earlier article (Exchange Server 2010: The first public beta version) that I wrote in April 2009, I discussed the first public beta of Exchange Server 2010. Microsoft has now released an update of this beta, the Exchange Server 2010 Release Candidate, which is available for download from TechNet. The Release Candidate is feature complete, so no more new features will be added; the only thing Microsoft has left to do before the release of Exchange Server 2010 later this year is bug fixing. So what is new, and what has changed, in this Release Candidate version of Exchange Server 2010?
Figure 1. The online archive is visible in Outlook Web Access 2010.

Backup

Exchange Server 2010 Release Candidate contains a plug-in for Windows Server Backup (WSB). Exchange Server 2010 no longer supports streaming backups; only VSS (snapshot) backups are supported. Unlike NTBackup in Windows Server 2003 R2 and earlier, there's nothing visible in WSB to show that it supports Exchange Server. When creating a backup of a volume containing Exchange Server 2010 databases, the status indicator reveals that something is being done with Exchange Server databases, but that's all. In Figure 2, drives F:\, G:\ and H:\ contain Exchange Server 2010 databases (and log files) and these disks are selected for backup. Windows Server Backup is responsible for checking the databases for consistency (VSS itself does nothing about consistency; it just makes the shadow copy) and this is the only thing that's visible about the Exchange support in Windows Server Backup. Windows Server Backup itself is pretty limited when it comes to backing up the Exchange Server environment. It can only create full backups (incremental or differential backups are not supported), and only the Active Copy of an Exchange Server 2010 database can be backed up; creating a backup of a Passive Copy of a database is not supported. The next version of Microsoft System Center Data Protection Manager (DPM) will fully support Exchange Server 2010 and offer a lot more features than Windows Server Backup.
Figure 2. Windows Server Backup checking Exchange databases for consistency.

Sizing

Microsoft is now releasing some information regarding sizing. In my earlier article I explained that one of the design goals of Exchange Server 2010 was to host a database and the accompanying log files on one (large) SATA disk. The recommended maximum database size in Exchange Server 2010 will be 2 TB (10 times the recommended maximum size of a database on an Exchange Server 2007 server in a Cluster Continuous Replication (CCR) configuration!). It is up to the Exchange administrator to determine whether a 2 TB database is useful or not; think about the implications when reseeding the database (creating a new copy) is necessary, in which case the complete database will be copied over the network, or when a utility like ESEUTIL or ISINTEG needs to work on a 2 TB database. But it is still too early to say anything about the performance of Exchange Server 2010 in such a configuration.

PowerShell

The beta of Exchange Server 2010 came with two flavors of the Exchange Management Shell: the local version and the remote version. This turned out to be very challenging, and numerous issues occurred. The Release Candidate of Exchange Server 2010 now comes with only one Exchange Management Shell, which still supports managing both the local Exchange server and a remote Exchange server.

Windows Server 2008 R2 support

The beta of Exchange Server 2010 was only supported on Windows Server 2008; Windows Server 2008 R2, even build 7000, was not supported. Now that Windows Server 2008 R2 has been released, Exchange Server 2010 is also supported on this platform. All of the prerequisite software needed for installing Exchange Server 2010 (except for the filter pack needed for the Mailbox Server role) is available by default on Windows Server 2008 R2, which can speed up the installation of Exchange Server 2010.
Also, the improvements in Windows Server 2008 R2 (like performance) make it a better platform for running Exchange Server 2010 (but that's my personal opinion ;-).

Improvements in the setup

When setting up an Exchange Server 2010 Client Access Server there's the possibility to enter the external domain name when the Client Access Server is Internet facing. If this option is used, all external settings on the Client Access Server (for the Offline Address Book, ActiveSync and Exchange Web Services) are automatically configured.
Figure 3. Enter the external domain name when the Client Access Server is Internet facing. If you prefer not to use this option, then the mentioned settings can be changed later using the Set-OABVirtualDirectory, Set-WebServicesVirtualDirectory and Set-ActiveSyncVirtualDirectory cmdlets in the Exchange Management Shell.
server is hosting the Active Copy then you lose the ability to create a backup. Also, when you have multiple servers hosting multiple Active Copies, you have to configure Windows Server Backup on each Mailbox Server. I'm pretty sure this will be far more convenient when the next version of Data Protection Manager becomes available. This is true not only for Data Protection Manager but for all 3rd-party backup application vendors like Symantec, UltraBac, CA, etc.
Conclusion
The latest update of Exchange Server 2010 is the Release Candidate version. This version is feature complete, so no new features will be added. Compared to the first public beta, the most important additions are the archive and the backup features. The stability and performance have also been improved; so far, it looks really good. There are still some challenges, such as documentation that is not yet up-to-date, but personally I think Microsoft is on track with Exchange Server 2010. So you had better start looking at and evaluating Exchange Server 2010, and if you're still on Exchange Server 2003 you may think about skipping Exchange Server 2007. Personally I think it's worth it!
by Jaap Wesselius
NTBackup lives no more, replaced by VSS backups. Unfortunately, "Windows Server Backup" didn't work with Exchange Server 2007 until now. Exchange Server 2007 Service Pack 2 contains a backup plug-in that makes it possible. Although pretty limited compared to full backup applications like Microsoft Data Protection Manager or Symantec Backup Exec, it does what you want it to do: create backups. Jaap explains.
In one of my earlier articles (Online Exchange Backups) I discussed the online backup options in Exchange Server 2007: streaming backups using the Backup API (Application Programming Interface) and snapshot backups using VSS. That article was written for Exchange Server 2007 running on Windows Server 2003. Windows Server 2008 is a different story, since it has a new feature called "Windows Server Backup" and NTBackup has been discontinued. Windows Server Backup creates Volume Shadow Copy Service (VSS) backups, but it is not Exchange aware, so for backing up your Exchange Server 2007 mailbox databases running on Windows Server 2008 you were dependent on 3rd-party products; no out-of-the-box solution like NTBackup in Windows Server 2003 was available in Windows Server 2008. Now that Exchange Server 2007 Service Pack 2 is available, this has changed. Amongst other new features and functionality, Microsoft has added a backup plug-in that makes it possible to create snapshot backups using VSS from Windows Server Backup. I'll show you how.
Figure 1. Windows Server Backup. There's no indication of the Exchange awareness. Windows Server Backup can only back up at the volume level. With a default installation you can back up the entire system disk, including the C:\Program Files\Microsoft\Exchange directory. Since Windows Server Backup is aware of Exchange Server 2007 Service Pack 2, the Exchange databases will be backed up, the database will be checked for consistency and the log files will be purged. There might be scenarios where the database and the log files are placed on a separate disk. In this example I've located the database in G:\MDB1 and the accompanying log files in G:\LOGS1; the Public Folder database is located in G:\MDB2 and the accompanying log files in G:\LOGS2. This way you can back up only the database and the log files. In Windows Server Backup, in the Actions pane, click "Backup Once." The Backup Once Wizard opens; click Next to continue. You can select a Full Server backup or a Custom backup. Select the Custom backup if you only want to back up the Exchange database and the log files, and click Next to continue. In the Select Items window you can select the G:\ disk where the database and the log files are located. Remove the check at "Enable System Recovery" to deselect the system disk and click Next to continue.
Figure 2. Remove the checkmark at "Enable System Recovery" to deselect the C:\ drive. The backup itself can be placed on any disk except the disk that's being backed up and the system disk. If you want to back up to disk you have to create an additional disk on the server; it's also possible to back up to a remote share. In this example I'll write the backup to another disk; click Next to continue. Select the backup destination (X: in this example) and click Next to continue. It is possible to create a VSS Copy Backup or a VSS Full Backup. A VSS Copy Backup is a full backup of the database, but the header information will not be updated with backup information and the log files will not be deleted. You can select this option when you want to create a backup using Windows Server Backup but don't want to interfere with other backup solutions running on this particular server. If you select VSS Full Backup, a full backup will be created, the database header information will be updated with the backup information and the log files will be purged (if the consistency check succeeds). In both options the database will be checked for consistency. Select "VSS Full Backup" to create a full backup of the Exchange database and click Next to continue. Check the confirmation page and click "Backup" to start the actual backup process.
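The difference between a VSS Copy Backup and a VSS Full Backup described above can be illustrated with a small conceptual toy (this is not the VSS API, just a model of the behaviour: both kinds copy the data, but only a Full backup stamps the database header and purges committed log files):

```python
def run_backup(db, kind):
    """Model a VSS backup: 'copy' leaves the source alone, 'full' doesn't."""
    backup = {"data": db["data"], "kind": kind}   # both copy everything
    if kind == "full":
        db["header_last_backup"] = kind           # header is stamped...
        db["logs"] = []                           # ...and logs are purged
    return backup

db = {"data": "mailbox pages",
      "logs": ["E0000001.log", "E0000002.log"],
      "header_last_backup": None}

copy_bk = run_backup(db, "copy")      # logs and header untouched
logs_after_copy = list(db["logs"])    # snapshot state for comparison
full_bk = run_backup(db, "full")      # logs purged, header updated
```

This is why a Copy backup is safe to run alongside another backup product: it leaves the log chain and header, which that other product relies on, untouched.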
Figure 3. Notice the status bar; this is the only indication you're running an Exchange Server backup. The only way to see whether the actual Exchange backup is being performed is by looking at the console, where the status bar reads: "Running consistency check for application Exchange." You can close the Backup Once Wizard and the backup will continue to run. If you check the Application log in the Event Viewer you'll see entries about the backup from the Shadow Copy service and the Exchange VSS writer. Please check my other article http://www.simple-talk.com/exchange/exchange-articles/onlineexchange-backups/ for detailed information regarding the VSS backup process. When you check the header information of the Exchange database you'll find information about the backup as well:

G:\MDB1>eseutil /mh "mailbox database.edb"

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.02
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...
         Database: mailbox database.edb

        File Type: Database
     DB Signature: Create time:07/02/2009 12:38:52 Rand:172660 Computer:
         cbDbPage: 8192
           dbtime: 1172010 (0x11e22a)
            State: Clean Shutdown
     Log Required: 0-0 (0x0-0x0)
    Log Committed: 0-0 (0x0-0x0)
    Log Signature: Create time:07/06/2009 08:19:40 Rand:239950899 Computer:
       OS Version: (6.0.6001 SP 1)
Previous Full Backup:
        Log Gen: 208-227 (0xd0-0xe3) - OSSnapshot
           Mark: (0xE4,8,16)
           Mark: 07/20/2009 21:00:08

Previous Incremental Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Operation completed successfully in 0.110 seconds.

G:\MDB1>

Note. This output has been edited for readability.

Windows Server Backup can only back up active Exchange servers. This means it is not aware of a Cluster Continuous Replication (CCR) environment, and therefore you cannot back up the passive node of a CCR cluster. Microsoft Data Protection Manager (DPM) 2007 or other 3rd-party products like Backup Exec can back up the passive node and therefore reduce the load on the active node of the cluster.
Backup schedules
It becomes more interesting to use Windows Server Backup when you can schedule backups and, for example, create a backup once a day without any hassle. Open the Windows Server Backup application and in the Actions pane select "Backup Schedule." As with the Backup Once wizard, you can select a Full Server backup or a Custom backup. If you select Custom backup it is possible to select the disk containing the Exchange Server database and log files. When creating a backup schedule the system volume will be included by default; it is not possible to deselect it. The next step is to specify the backup time; you can create a backup once a day or multiple times a day.
Figure 4. A backup will be created at 9pm. It is also possible to create multiple backups per day. The destination disk you select will be formatted by Windows Server Backup and all information on the disk will be lost. The drive doesn't get a drive letter during formatting, but you can still see it through Disk Management in the Server Manager.
Note Although you can select "Always create an incremental backup" in the Windows Server Backup application, it is not possible to do this for Exchange. You can only create full backups, both in the Backup Once and in the Backup Schedule options.
Figure 5. Select the backup you want to restore. Click Next to continue. In the Recovery Type window you can select the kind of information that needs to be restored. Since this is an Exchange database restore, select "Applications" and click Next to continue.
Figure 6. Select the application you want to restore information for. If you do not want to roll forward log files, tick the checkbox. Select "Recover to original location" to restore the database to its original location. Do not forget to tick the checkbox "This database can be overwritten by a restore" in the database properties in the Exchange Management Console; if you fail to do so, the restore of the database will fail. In the Confirmation window check your selections and click "Recover" to start the restore process.
Figure 7. You can check the progress of the restore process. Information stored in log files newer than the database (and thus not restored with the database) will automatically be rolled forward. Where possible, the recovery process will automatically replay, or roll forward, all available log files from the restore point up to the latest possible point in time. There's an easy way to check this. Create a full backup of your mailbox database. When the backup has successfully finished, send a couple of messages to yourself and make them easy to identify. Dismount the database and restore it from the backup. The messages you just sent are not in this backup, but in the log files written after the creation of the backup; these log files will be automatically replayed. When you log on to the mailbox after the restore of the database you'll find the messages again, even though they weren't in the actual backup.
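The roll-forward behaviour described above can be illustrated with a toy model (this is only a conceptual sketch, not how ESE actually replays transaction logs): the restore brings back the database as of the backup point, and any log records newer than that point are then replayed on top of it.

```python
def restore_and_replay(backup, logs):
    """Restore the backed-up messages, then replay newer log records."""
    mailbox = list(backup["messages"])
    for record in logs:
        if record["seq"] > backup["last_seq"]:   # only post-backup entries
            mailbox.append(record["message"])
    return mailbox

backup = {"messages": ["old mail 1", "old mail 2"], "last_seq": 10}

# Messages sent *after* the backup exist only in the transaction logs.
logs = [{"seq": 9,  "message": "old mail 2"},     # already in the backup
        {"seq": 11, "message": "test message A"},
        {"seq": 12, "message": "test message B"}]

mailbox = restore_and_replay(backup, logs)
```

The post-backup test messages reappear in the restored mailbox even though they were never part of the backup, which is exactly the experiment the paragraph above suggests running.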
Figure 8. The last 3 messages were not part of the backup but were rolled-forward from the log files.
Conclusion
Exchange Server 2007 Service Pack 2 contains a backup plug-in that makes it possible to use Windows Server Backup in Windows Server 2008 to create Exchange Server backups. Although pretty limited compared to full backup applications like Microsoft Data Protection Manager or Symantec Backup Exec, it does what you want it to do: create backups. You can create backups manually or create a schedule for the backup application to run. It is a backup-to-disk solution; backup to tape is not available. It can restore backups and it can roll forward log files after a restore, so the basic functionality is available. Nothing more, nothing less.
Note Fellow MVP Michael B. Smith created a couple of scripts that are capable of creating backups of Exchange Server 2007 running on Windows Server 2008, without the need of Windows Server Backup. You can find his solution here.
by Desmond Lee
In the second part of his article on Moving to Office Communications Server 2007 R2, Desmond looks into recommendations to standardize the client desktop to support the different R2 applications, other infrastructure concerns and migration of the remaining internal server roles before moving out to the Edge servers in the screened network.
In a nutshell, you will need to set up R2 server roles alongside their RTM counterparts to provide equivalent functionality. This is because not all RTM and R2 server roles can interoperate with one another directly without first sorting out certain changes which are vital for smooth operations. Therefore, the different RTM server roles should remain in operation until UC-enabled users are fully migrated and homed on new R2 server pools. Only then, after the complete RTM infrastructure has been migrated to R2, should you consider bringing in R2-specific server roles such as Group Chat and Outside Voice Control. In part 2 of this article (see Part 1), I shall take you through some of the highlights of how to implement an Enterprise consolidated topology from the inside out, in a side-by-side migration from OCS (Office Communications Server) 2007 RTM.
Back- to Front-End
Security

As part of security patch management best practice, do not forget to test and apply the most current patches, update packages and hotfixes prior to full-scale production rollout. For example, you can download and install the July 2009 hotfix rollup packages for OCS 2007 R2 server roles (KB968802), Communicator 2007 R2 (KB969695) and R2 Attendant (KB970275). If you do not have an automated patch management system in-house, think about using Microsoft Update to deliver update packages to your R2 back-end servers as well as R2 clients; this functionality was extended to the OCS family of products around the May 2009 timeframe. To maintain absolute control over patch delivery and actual patch application (what, when, why and how), which is particularly crucial on the R2 back-end servers, manually configure the option to read "Notify me but don't automatically download or install them." Better still, deploy the free WSUS 3.0 SP1 solution with Active Directory Group Policy to regain centralized control of your patch management needs.
Before you start, check that you always log in with an account that has membership in the Domain Admins, RTCUniversalServerAdmins and local Administrators groups; some of the steps in the R2 setup will fail if this condition is not fulfilled. With the named SQL database instance on the Back-End database ready, you can go ahead and run the Create Enterprise Pool task. This step should be carried out right on the back-end database server running the x64 edition of SQL Server. For those of you who are still using the 32-bit version, or SQL Server 2005 SP2 in particular, as the Back-End database, you will have to install the SQL Server client tools on the new Front-End server before proceeding further. The database itself will be created for you when the R2 Enterprise pool is created. A number of shared folders must be prepared ahead of time, preferably on a dedicated file server, to support client and device updates as well as data files for meeting presentations, the Address Book Service (ABS), etc. You may find that deploying them directly on a Front-End server works for you in your environment. Following that, you install the first R2 Front-End server into the pool and activate it. For administrators who are responsible for both the RTM and R2 infrastructure, it would be a good idea to use the same RTC* accounts in Active Directory. I suggest that you let the R2 setup program create these service accounts (if they do not already exist)
DNS Records

According to published documentation, the Director (RTM) server role is required to support a mixed environment of RTM and R2 Enterprise Edition server pools in the same SIP domain. This setup facilitates automatic configuration and sign-in of internal MOC 2007 RTM and R2 clients simply by querying the _sipinternaltls._tcp DNS SRV record that points to a single pool of servers. Users homed on the R2 pool will automatically be redirected after being authenticated on the RTM pool. If you do not have, or do not intend to introduce, a Director (RTM) server role, you can work around this constraint by using manual client configuration or by pushing out the settings through Group Policy for MOC R2 users. At a minimum, you will need to create new A host records in DNS describing the FQDN of each R2 pool. If you only have a single Enterprise Edition server without a hardware load balancer (HLB), point the A record to the IP address of the host machine; otherwise, configure the A record with the virtual IP address of the HLB. Subsequently, you can update the DNS SRV record to point to the new R2 Director once the RTM version is decommissioned. See DNS Requirements for Enterprise Pools and Standard Edition Servers for more information.
You should be aware of restrictions, such as those on public IM and federation, for side-by-side configurations in the same organization and SIP domain. Remote users with MOC R2 clients will not be able to communicate until the R2 versions of the Director and Edge server roles are operational. This is another good reason to hold back deployment of MOC R2 for users that need to consume any R2 services outside the corporate firewall in the absence of a VPN. With or without a hardware load balancer, RTM and R2 server roles cannot co-exist in the same Front-End pool, CWA server pool, Edge array or Director array. However, the same HLB can front RTM and R2 Enterprise pools at the same time. As with the R2 Front-End configuration, the Access Edge, Web Conferencing Edge and Audio/Video Edge services now reside on the same physical machine in a consolidated Edge topology. In any case, only the Access Edge server role at the data center is active and responsible for SIP signaling traffic for the entire organization, regardless of how many branch office Edge servers you may have out there. On top of the classic DNS and certificate dependencies, I also want to draw your attention to how destination and source network address translation (DNAT and SNAT) could affect your firewall deployments. See Planning for External User Access for more information on this topic.
Enterprise Voice
The Quality of Experience Monitoring Server Administration Tool (RTM) does not have any built-in capability to associate an RTM Mediation Server with an R2 Enterprise pool. As such, you must take extra steps to achieve this by applying KB956829, followed by the Microsoft-provided script (Associate.vbs). The purpose is to enable QoE reports to be delivered to the R2 Monitoring Server that is bound to its R2 Standard or Enterprise Edition pool in a coexistence environment. For this to work, UC-enabled users must be homed on the R2 pool. Like the other RTM server roles, the QoE Monitoring Server can be decommissioned once you have completely migrated to the R2 counterparts. For those of you who intend to integrate R2 with the Unified Messaging feature in Exchange Server 2007 SP1 or with an existing PBX, you should closely follow the TechNet Library documentation entitled Deploying Enterprise Voice. The outlined steps apply equally whether or not you have a current RTM setup. As with the various R2 setup steps during installation, it is safe to re-run scripts and applications that you have executed previously, e.g. exchucutil.ps1, ExchUMutil.exe and OCSUMutil.exe. Obviously, you should carefully review the installation logs for hints in resolving any warnings or errors that surface. Configuration information, from location profiles and dial plans to Exchange Auto Attendant functionality, should continue to work without modification. Nevertheless, you will want to perform tasks such as defining a new Exchange UM IP gateway object and pointing it to the new R2 Mediation Server to optimize traffic routing, for instance.
Conclusion
The principle behind a side-by-side migration is rather straightforward, though the process can nonetheless be tedious. It mirrors that of a brand new R2 installation running parallel to your production OCS 2007 RTM deployment. In this final installment of the article, you have learnt that to achieve this goal you can essentially adopt the same set of official R2 deployment guidelines, with a careful lookout for any gotchas.
The Client Side Story
Besides Office Outlook 2007, you will find that the majority of users will primarily be using the MOC R2 and Office Live Meeting 2007 clients to conduct their day-to-day business, from instant messaging, presence, and peer-to-peer video and voice calls to web conferencing. With additional R2 productivity features, the new Office Communications Server 2007 R2 Conferencing Attendant and Office Communicator 2007 R2 Group Chat clients should be listed in your deployment plans as well. For those of you on the RTM version of MOC, you can directly upgrade to the R2 equivalent without first uninstalling the former. It is important to understand that users running the R2 version of MOC can only be homed on an R2 Enterprise pool or Standard Edition server, although users running MOC RTM can also be hosted on the latter two. In other words, you can first move users running MOC RTM to home on R2 pools and defer upgrading to the MOC R2 client to a later date. Taken together, all of the R2 client applications rely on different minimum versions of Windows service packs and supporting components to operate correctly. To take full advantage of the latest functionality, and to simplify rollout and on-going maintenance, I recommend that you consider standardizing at least on the following:
Windows XP SP3 or Windows Vista SP1
Microsoft Core XML Services (MSXML) 6.0 SP1 (ships as part of XP SP3 and Vista SP1)
.NET Framework 3.5 SP1
.NET Framework 3.5 Service Pack 1 Update (KB959209)
Conferencing Add-in for Microsoft Office Outlook
Microsoft Office Communicator 2007 R2 client
Live Meeting 2007 client update package (KB960165)
Conferencing Add-in for Outlook update (KB960164)
Office Communicator Web Access Plug-in (desktop-sharing feature)
Conferencing Attendant application (optional for specific groups of users; requires the Office Communicator Web Access client to provide the Dial-in Conferencing Web page for participants).
In contrast with the back-end R2 server roles, Windows client computers are not required to be 64-bit capable to run any of the R2 client applications. Although Windows 7 as well as Service Pack 2 for Windows Vista have recently been released (KB948465), IT shops may still need to devote sufficient time to test for compatibility with their line-of-business applications. Besides, Vista SP1 is a prerequisite for SP2. Unless your Vista client desktops are already deployed with SP1, it makes sense to start off at this base level before pushing out SP2 within the next 24 months (see Lifecycle Supported Service Packs for more information).
Ben Lye
To keep the Exchange database healthy, it is important to make sure that online maintenance and online defragmentation are running properly, or are at least conforming to Microsoft's recommendations. Ben shows how easy it is to automate the maintenance and monitor the defragmentation task.
Exchange database maintenance is an important part of keeping Exchange healthy. Exchange Server automates online maintenance tasks and runs them based on the schedule specified by the Exchange administrator. The online maintenance tasks are:
Purge mailbox database and public folder database indexes.
Maintain tombstones.
Clean up the deleted items dumpster.
Remove public folders that have exceeded the expiry time.
Remove deleted public folders that have exceeded the tombstone lifetime.
Clean up conflicting public folder messages.
Update server versions.
Check Schedule+ Free Busy and Offline Address Book folders.
Clean up deleted mailboxes.
Clean up reliable event tables.
For detailed information on these tasks you can refer to Microsoft TechNet: Maintaining Mailbox Databases and Public Folder Databases. If one of these maintenance tasks is performed on a database, then online defragmentation will also be performed on that database. If the online maintenance tasks are unable to complete in a single schedule window, the tasks will be suspended and then resumed in the next window. In this way maintenance is guaranteed to eventually complete; however, it is important to make sure that the scheduled maintenance windows are properly configured so that the maintenance tasks are able to complete regularly. When scheduling online maintenance there are several guidelines to consider: online maintenance should be scheduled for times when there is little activity on the database.
The default online maintenance schedule is nightly from 1 am to 5 am. To aid in customizing the online maintenance schedule, the Windows event log can be used to see how often online defragmentation is completing and Performance Monitor counters can be used to check the efficiency of online defragmentation. This data can be used to adjust the online maintenance schedule to give more or less time.
Events 701 and 703 indicate a complete pass. In the case of event 703, completion of a resumed pass, the event text will include information about how long defragmentation took to complete, and how many times it was invoked.
To make it easier to check how often online maintenance completes, a PowerShell script run from the Exchange Management Shell can be used to parse the Application log. This script will search the event log for online defragmentation messages for each database on the server on which it's run (or the clustered mailbox server if running on a cluster node) and return the amount of time taken to complete online defragmentation of each database.

# Script to check the status of Exchange database online defragmentation tasks
# Written by Ben Lye
#
# The script will parse the event log of the local machine looking for online
# defrag related messages. Messages are parsed to determine when online defrag
# last finished for each database and how long it took to complete.
#
# The script needs to be run on an Exchange 2007 mailbox server or mailbox
# cluster node. If the script is run on a cluster node it should be the active
# node, or event log replication needs to be enabled (this is the default).
# The $records variable defines the number of events to retrieve from the log # It can be increased or decreased according to the needs of a particular server. # The script will run faster if fewer records are retrieved, but data may not be # found for all databases. $records = 10000 # Get the hostname $hostname = Get-Content env:computername # Check if the local machine is an Exchange mailbox server
# Retrieve the events from the local Application log, filter them for ESE messages
$logs = Get-EventLog -LogName Application -Newest $records | `
    Where {$_.Source -eq "ESE" -and $_.Category -eq "Online Defragmentation"}

# Create an array for the output
$output = @()

# Loop through each of the databases and search the event logs for relevant messages
ForEach ($db in $databases) {
    # Create the search string to look for in the Message property of each log entry
    $s = "*" + $db.EdbFilePath + "*"

    # Search for an event 701 or 703, meaning that online defragmentation finished
    $end = $logs | where { $_.Message -like "$s" -and `
        ($_.InstanceID -eq 701 -or $_.InstanceID -eq 703) } | select-object -First 1

    # Search for the first event 700 which precedes the finished event
    $start = $logs | where { $_.Message -like "$s" -and $_.InstanceID -eq 700 `
        -and $_.Index -le $end.Index } | select-object -First 1

    # Make sure we found both a start and an end message
    if ($start -and $end) {
        # Get the start and end times
        $db.DefragStart = Get-Date($start.TimeGenerated)
        $db.DefragEnd = Get-Date($end.TimeGenerated)

        # Parse the end event message for the number of seconds defragmentation ran for
        $end.Message -match "total of .* seconds" >$null
        $db.DefragDuration = $Matches[0].Split(" ")[2]

        # Parse the end event message for the number of invocations and days
        $end.Message -match "requiring .* invocations over .* days" >$null
        $db.DefragInvocations = $Matches[0].Split(" ")[1]
        $db.DefragDays = $Matches[0].Split(" ")[4]
    } else {
        # Output a message if start and end events weren't found
# Print the output
$output

Microsoft recommends that in large organisations with large databases (150-200 GB) and many storage groups (up to 20) on a single server, online defragmentation should complete at least once every two weeks for each database. In smaller organizations with smaller databases it should complete more often. In either case, if online defragmentation is completing within two days then it is probably safe to shorten the online maintenance window for the database. If defragmentation is not completing within 14 days, the online maintenance window should be lengthened.
Exchange 2007 Service Pack 1 adds these two additional counters. MSExchange Database ==> Instances \Online Defrag Pages Freed/Sec. MSExchange Database ==> Instances \Online Defrag Data Moves/Sec.
Note Incorrectly editing the registry can cause serious problems that may require you to reinstall your operating system. Problems resulting from editing the registry incorrectly may not be able to be resolved. Before editing the registry, back up any valuable data.
Start the registry editor on your Exchange 2007 Mailbox server. Locate the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ESE\Performance registry key. Right-click Performance, select "New," and then select "DWORD value." Name the new DWORD value "Show Advanced Counters." Double-click "Show Advanced Counters." In the "Value data" field, enter 1. Close the registry editor.
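If you prefer to script the change, the steps above can be captured in a short PowerShell sketch (run as an administrator on the mailbox server; note that the value name contains spaces):

```powershell
# Create the "Show Advanced Counters" DWORD value described above and set it to 1
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\ESE\Performance" `
    -Name "Show Advanced Counters" -PropertyType DWord -Value 1
```

As with the manual steps, back up the registry before making the change.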
Once the extended ESE performance counters are enabled Performance Monitor can be used to log the counter data to a file. Start Performance Monitor on the Exchange 2007 mailbox server (or the active cluster node in a CCR cluster) by clicking Start -> Programs -> Administrative Tools -> Performance. Expand "Performance Logs and Alerts" and select "Counter Logs." Right-click "Counter Logs" and select "New log settings." Enter a name for the new counter, such as "EDB Defrag," and click "OK."
Click the "Add Counters" button, and then select "MSExchange Database ==> Instances" from the "Performance object" drop-down. Select the "Online Defrag Pages Freed/sec" and the "Online Defrag Pages Read/sec" counters from the list, select the Information Store storage groups, and click the "Add" button.
On the schedule tab, set the start time to a few minutes before the start of the next run of online maintenance and the end time to a few minutes after the end.
The next time online maintenance runs the performance counter data will be gathered. This data can then be analyzed by looking at the average ratio of pages read to pages freed. The resulting CSV file will look similar to this:
To determine the Pages Read:Pages Freed ratio, the data is averaged over the duration of the maintenance window, and the average Pages Read/sec is divided by the average Pages Freed/sec. This gives the number of pages read for every one page freed. For example, if the average Pages Read/sec is 545, and the average Pages Freed/sec is 6, the ratio is roughly 90:1 and the online defragmentation window is appropriately set.
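The arithmetic can be sketched in PowerShell against the logged CSV (the file path and the column names here are illustrative; Performance Monitor prefixes the actual column headers with the machine and instance names, so match them to your CSV header):

```powershell
# Average each counter column over the maintenance window, then divide
# read-rate by freed-rate to get the Pages Read : Pages Freed ratio
$data  = Import-Csv "C:\PerfLogs\EDB Defrag.csv"
$read  = ($data | Measure-Object "Online Defrag Pages Read/sec"  -Average).Average
$freed = ($data | Measure-Object "Online Defrag Pages Freed/sec" -Average).Average
"Pages Read : Pages Freed ratio is {0:N0}:1" -f ($read / $freed)
```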
Alternatively, use the Set-MailboxDatabase PowerShell cmdlet to set the maintenance window. This command will configure the maintenance window of the database "Mailbox Database" on server SERVER01 to run every day from 3 am to 4 am local time using the shell:

Set-MailboxDatabase -Identity "SERVER01\Mailbox Database" `
    -MaintenanceSchedule Sun.03:00-Sun.04:00, Mon.03:00-Mon.04:00, `
    Tue.03:00-Tue.04:00, Wed.03:00-Wed.04:00, Thu.03:00-Thu.04:00, `
    Fri.03:00-Fri.04:00, Sat.03:00-Sat.04:00

With a relatively small amount of monitoring and analysis it is reasonably easy to ensure that online maintenance is running effectively and efficiently, helping to ensure that your Exchange databases stay in good shape. The techniques I've shown here should help you to check that online maintenance and online defragmentation are running optimally, or are at least conforming to Microsoft's recommendations.
by Neil Hobson
In April 2009 Microsoft released a public beta of Exchange 2010, the latest and greatest version of a key part of its unified communications family of products. More recently, in August 2009, a feature-complete Release Candidate version was released for public download. In this article Neil Hobson takes a look at some of the high availability features of Exchange 2010.
For many years Exchange only offered a high availability solution based on the shared storage model, whereby the use of Microsoft clustering technologies protected against server-based failure but did nothing to protect against storage failures. Although there were improvements to this form of high availability in Exchange 2007, where it was known as a Single Copy Cluster (SCC), the real changes to high availability in Exchange 2007 came with the introduction of a technology known as continuous replication. With this technology, transaction logs are shipped from one copy of a database to another, which allows an organization to deploy an Exchange high availability solution that also deals with storage failure. This storage failure protection was available on a single server with the use of Local Continuous Replication (LCR) and was also available across servers with the use of Cluster Continuous Replication (CCR). Therefore, with LCR, CCR and SCC, Exchange 2007 administrators had three different high availability methods open to them. It was also possible to cater for site failure with an extension to the continuous replication technology known as Standby Continuous Replication (SCR). I won't go into detail on these Exchange 2007 solutions here, as I've covered them in a previous article called Exchange 2007 High Availability here on Simple-Talk. However, the bottom line is that many organizations have deployed technologies such as CCR to provide high availability and technologies such as SCR to provide site resilience. From my experience, more organizations have deployed CCR in preference to SCC, and it comes as no surprise to learn that SCC has been dropped entirely from Exchange 2010. As you will shortly see, the continuous replication technology lives on in Exchange 2010, but there are many changes in the overall high availability model.
With Exchange 2007 Service Pack 1, a full high availability solution was generally deployed using a total of four servers. Two servers were installed as a single CCR environment, giving high availability for the users' mailboxes. The other two servers were deployed as combined Hub Transport and Client Access Servers, and were configured as a load-balanced pair. The reason for this was simply that if the mailbox server role was clustered, it was not possible to implement the additional Exchange 2007 server roles, such as the Hub Transport and Client Access Server roles, on the server running the mailbox role. For the larger enterprises this wasn't an unreasonable approach, but for smaller organizations a total of four servers sometimes seemed to be overkill for an internal messaging system. To address this specific issue, Microsoft has designed Exchange 2010 such that all server roles can be fully redundant with as few as two servers, providing you have deployed an external load balancer for incoming Client Access Server connections. In other words, it's now possible to combine the mailbox server role with other roles such as the Hub Transport and Client Access Server roles. Of course, larger organizations will still be likely to implement dedicated servers running the various server roles, but this is something that will definitely help smaller organizations to reduce costs. Remember, though, the external load balancer requirement for incoming Client Access Server connections. With this in mind, let's get going and look at some of the high availability features of Exchange 2010. Don't forget that this is a high-level look at the new features; in later articles here on Simple-Talk, we'll be diving much more deeply into these features and how they work. Right now, the idea with this article is to get you to understand the concepts behind these new features and to allow you to do some initial planning on how you might use them in your organization.
Figure 1: ADSIEdit DAG View.

You can see that a DAG has an object class of msExchMDBAvailabilityGroup and that the actual Database Availability Group container location is found under the Exchange 2010 administrative group container. Bringing up the properties of the DAG object in ADSIEdit reveals the important configuration items, such as the file share witness share and directory names, as you can see in Figure 2.
Figure 2: DAG Properties.

Once a DAG has been created, mailbox servers can be added to it as required. This is another simple process that can be achieved by right-clicking the DAG object in the Exchange Management Console and choosing the Manage Database Availability Group Membership option from the context menu. The corresponding Exchange Management Shell cmdlet is Add-DatabaseAvailabilityGroupServer. For example, to add the mailbox server called E14B1S1 to a DAG called DAG1, you'd run the following:

Add-DatabaseAvailabilityGroupServer -Identity DAG1 `
    -MailboxServer E14B1S1

Since DAGs make use of several Windows Failover Clustering components, it comes as no surprise to see that the Enterprise Edition of Windows Server 2008 is required on mailbox servers that are added to a DAG, so do ensure that you take this into account when planning your Exchange 2010 implementation. When creating a DAG, there are options around network encryption and compression that can be set. This is possible because Exchange 2010 uses TCP sockets for log shipping, whereas Exchange 2007 used the Server Message Block (SMB) protocol. For example, it's possible to specify that the connections that occur over these TCP sockets are encrypted. Equally, it's also possible to decide that these same connections use network compression.
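For completeness, creating the DAG itself and adjusting the network encryption and compression settings might look like this in the Exchange Management Shell (the DAG, witness server, and directory names here are hypothetical):

```powershell
# Create the DAG, specifying the file share witness server and directory
New-DatabaseAvailabilityGroup -Name DAG1 `
    -WitnessServer FS01 -WitnessDirectory C:\DAG1FSW

# Enable encryption and compression on the DAG's replication traffic
Set-DatabaseAvailabilityGroup -Identity DAG1 `
    -NetworkEncryption Enabled -NetworkCompression Enabled
```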
Figure 3: Database Copies.

In Figure 3, you can see that a DAG named DAG1 consists of two mailbox servers called MBX1 and MBX2. There are a total of three active mailbox databases, shown in green, across both servers, and each active mailbox database has a passive copy, shown in orange, stored on the alternate server. For example, the active copy of DB1 is hosted on the server called MBX1 whilst the passive copy of DB1 is hosted on the server called MBX2. The passive copies of mailbox databases are kept up-to-date via the same log shipping methods that were used in Exchange 2007, such as between the two cluster nodes within a single Exchange 2007 CCR environment. As you might expect, the active copy of the mailbox database is the one which is used by Exchange. Within a DAG, multiple passive copies of a mailbox database can exist but there can only be a single active copy. Furthermore, any single mailbox server in a DAG can only host one copy of any particular mailbox database. Therefore, the maximum possible number of passive copies of a mailbox database is one less than the number of mailbox servers in the DAG, since there will always be one active copy of the mailbox database. For example, if a DAG consisted of the maximum of 16 mailbox servers, then there could be a maximum of 15 passive copies of any single mailbox database. However, not every server in a DAG has to host a copy of every mailbox database that exists in the DAG. You can mix and match between servers however you wish. As mentioned earlier in this section, the unit of failover in Exchange 2010 is now the database. However, if an entire mailbox server fails, all active databases on that server will need to fail over to alternative servers within the DAG. One other vital piece of mailbox database information that you should consider in your planning for Exchange 2010 is the fact that database names are now unique across the forest in which Exchange 2010 is installed.
This could be a problem in organizations that have deployed Exchange 2007 with the default database name of "Mailbox Database". Therefore, if you are going to be transitioning from Exchange 2007 to Exchange 2010 in the future, take time now to investigate your current database naming standards.
Figure 4: CAS Array.

When an Outlook client connects to an Exchange 2010 Client Access Server, the Client Access Server determines where the active copy of the user's mailbox database is located and makes the connection with the relevant server. If that particular mailbox database becomes unavailable, such as when the administrator wishes to take the database offline or it fails, one of the passive copies will become the new active copy as previously described in this article. It's the Client Access Server that loses the connection to the old active copy of the mailbox database; the actual connection from the client to the Client Access Server is persistent, which is obviously good from a user experience point of view. The Client Access Server will then fail over to the new active mailbox database in the DAG as directed by the PAM.
Summary
In this article we've taken a high-level look at some of the new Exchange 2010 high availability features and how they come together to provide an overall high availability solution. If you're planning on looking at Exchange 2010, it makes sense to start understanding these new features and how they can benefit your organization. Also, there are other interesting features available in Exchange 2010 that further serve to increase the overall high availability and reliability of the messaging environment, such as shadow redundancy in the Hub Transport server role. In future articles here on Simple-Talk, we'll be covering these areas in much more detail.
by Brien Posey
Imagine that you're researching Continuous Cluster Replication, looking for a simple, direct guide to the basics of getting it set up and running. What do you do if everything you find is either too specialised, or goes into far more detail than you need? If you're Brien Posey, and you can't find that practical, to-the-point article, you roll up your sleeves and write it yourself. To kick off, he tackles the rather tedious task of setting up a Majority Node Set Cluster.
A few weeks ago, someone asked me to help them with an Exchange Server deployment. In order to make the deployment a bit more robust, they wanted to cluster their mailbox servers using Cluster Continuous Replication, or CCR. Although I had set up CCR a few times in the past, it had been a while since I had done one, so I decided to look online to re-familiarize myself with the process. As I looked around on the Internet, I began to realize that there weren't any articles that met my needs. Some of the articles that I found only covered part of the process. Others covered the entire process but contained a lot more material than what I wanted. Nobody had published a simple article, written in plain English, which described the procedure of setting up CCR from start to finish. That being the case, I decided to write one myself.
Another thing that you are going to need before you begin creating the cluster is a cluster service account. You should create a dedicated account in your Active Directory domain, and set its password so that it never expires. One last thing that I want to mention before I show you how to create the cluster is that the Clustered Mailbox Server role cannot be combined with any other Exchange Server roles. Since every Exchange 2007 organization requires at least one hub transport server and client access server, you will need at least one additional Exchange 2007 server in your organization.
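The command sequence being described (a reconstruction, since the listing itself did not survive extraction; the file server, share, and node names are placeholders) would look something like this:

```batch
rem Point the Majority Node Set resource at the file share witness
cluster res "Majority Node Set" /priv MNSFileShare=\\FileServer\MNS_FSW

rem Move the cluster group to Node 2 and back so the change takes effect
cluster group "Cluster Group" /moveto:Node2
cluster group "Cluster Group" /moveto:Node1

rem Display the private properties to confirm the share is in use
cluster res "Majority Node Set" /priv
```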
The first command in this sequence tells Windows to use the share that we have created as the Majority Node Set File Share Witness. When you enter this command, you will receive an error message telling you that the properties were stored, but that your changes won't take effect until the next time that the cluster is brought online. The easiest way to get around this problem is to move the cluster group from Node 1 to Node 2 and then back to Node 1. That's what the second and third commands in the sequence above accomplish for us. The last command in the sequence above simply causes Windows to display the private properties for the Majority Node Set. The first line of text within the list of properties should make reference to the share that we have created for our Majority Node Set File Share Witness. This confirms that the cluster is using our Majority Node Set File Share Witness.
Conclusion
As you can see, creating a Majority Node Set Cluster can be a bit tedious. In Part 2 of this series, we will wrap things up by installing Exchange Server onto our cluster and then working through the failover procedure.
by Jonathan Medd
It has always been a curse as well as a blessing that Active Directory has allowed the rapid removal of whole branches. Until now, administrators have looked in vain for an "undo" function after having accidentally deleted an entire division of their company. At last, with Windows Server 2008 R2, comes a way to roll back changes, as long as you are handy with PowerShell. Jonathan Medd explains.
Since Active Directory was included as part of Windows Server 2000, administrators have often asked for a simple way to roll back mistakes, whether that is the deletion of the wrong user account or the accidental removal of thousands of objects by deleting an OU. Before the release of Windows Server 2008 R2 there were a number of built-in or third-party methods of restoring Active Directory objects, but typically they were not as quick or complete as, say, retrieving a deleted email or file. With Windows Server 2008 R2, Microsoft has included the facility, under the correct conditions, to enable a Recycle Bin for Active Directory and allow simple restoration of objects which have been erroneously removed. In this article we will briefly cover some of the options prior to 2008 R2, and then examine how to enable the new Recycle Bin and restore objects from it.
Authoritative Restore
The Microsoft KB article 840001 (http://support.microsoft.com/kb/840001) details how to perform the restoration of a user account using a system state backup of a domain controller. Typically, you would use a backup of a global catalog server so that you can also restore all group membership information.
Tombstone Reanimation
The above article also details how to recover an account when you don't have a system state backup by using tombstone reanimation, which was introduced with Windows Server 2003: you can retrieve objects from the Deleted Objects container, where they are kept after deletion until their tombstone period expires. Obviously, regular system state backups of Active Directory are critical for your full disaster recovery procedures, but taking advantage of tombstone reanimation means you can get objects back more quickly than by going through the full authoritative restore process. You could use the procedure in the article, which utilises the ldp.exe tool, but there are other methods around which you may find simpler. The article itself links to a Sysinternals tool, ADRestore (http://technet.microsoft.com/en-us/sysinternals/bb963906.aspx), which is a command line tool for reanimating objects. ADRestore.Net (http://blogs.microsoft.co.il/blogs/guyt/archive/2007/12/15/adrestore-net-rewrite.aspx) is a free GUI tool made by Microsoft PFE Guy Teverovsky. Quest produces a freeware product, Object Restore for Active Directory, an easy-to-use GUI tool (http://www.quest.com/objectrestore-for-active-directory/). (Note: there is a commercial version with more features, Recovery Manager for Active Directory.) Quest also produces a cmdlet library for managing Active Directory with Windows PowerShell (http://www.quest.com/powershell/activeroles-server.aspx). As of version 1.2, a number of the cmdlets have a -Tombstone parameter so that a search of objects will also include items which have been tombstoned. These results can then be piped through to the new cmdlet Restore-QADDeletedObject to undelete the objects represented by the tombstones.
For instance, the command Get-QADUser -Tombstone -LastChangedOn ((Get-Date).AddDays(-1)) | Restore-QADDeletedObject would restore all user accounts deleted yesterday.
The drawback with tombstone reanimation is that, because most of the object's attributes are removed at the time of the object's deletion, a restored object requires many properties of the account, such as address fields and group membership, to be manually repopulated. Whilst this is obviously preferable to re-creating an account from scratch, it does not make for a quick overall process. However, you will at least get back the objectGUID and objectSid attributes, which means there would be no need to re-configure a user's workstation profile. The original release of Windows Server 2008 introduced snapshot backups for Active Directory. You can take point-in-time snapshots of your Active Directory with the NTDSUTIL command-line utility, which utilizes Volume Shadow Copy to provide a snapshot. It is then possible to mount this snapshot on the same domain controller as the live Active Directory database, using different ports, and use standard tools to compare the two. This could really make tombstone reanimation a lot simpler, because after restoring the object you could view two versions of Active Directory Users and Computers side by side and view the properties of the restored object from a previous time, making it simpler to repopulate properties. The Directory Service Comparison Tool (http://lindstrom.nullsession.com/?page_id=11) takes advantage of these snapshots and makes the repopulation process more streamlined. For those with Microsoft Exchange messaging environments, once you have the Active Directory account back, you can use the Reconnect Mailbox feature within Exchange to tie the restored account back up with the mailbox. This is of course providing you have a similar retention period for deleted mailboxes as you do for tombstoned AD accounts.
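As a rough sketch of that snapshot workflow (the snapshot index, mounted path and LDAP port below are illustrative, not fixed values), a snapshot can be created with NTDSUTIL and then exposed on an alternate port with dsamain.exe so it can be browsed alongside the live directory:

```powershell
# Run elevated on a domain controller.
# Create a point-in-time snapshot of the live AD database via VSS:
ntdsutil snapshot "activate instance ntds" create quit quit

# List the snapshots and mount one; mounting exposes a read-only copy of
# the volume under a $SNAP_<timestamp>_VOLUMEC$ folder on the system drive:
ntdsutil snapshot "list all" "mount 1" quit quit

# Serve the mounted database copy over LDAP on an arbitrary spare port
# (replace the <timestamp> placeholder with the folder ntdsutil reported):
dsamain /dbpath "C:\$SNAP_<timestamp>_VOLUMEC$\Windows\NTDS\ntds.dit" /ldapport 51389
```

You can then point Active Directory Users and Computers (or LDP) at port 51389 on the same server to view the historical copy side by side with the live directory.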
Naturally, organizations are typically cautious when upgrading Active Directory, and these types of infrastructure projects don't tend to happen quickly, but the Recycle Bin could be one of the features which gives more weight to a decision to upgrade. You should also be aware that enabling the Recycle Bin is a one-time-only move; there is no easy way to disable it again, so the decision requires careful consideration. It's worth noting that if you are making a fresh forest install of Windows Server 2008 R2, the Active Directory schema will already include all of the necessary attributes for the Recycle Bin to function. If, however, you are upgrading your domain controllers from previous versions of Windows Server, then you will need to run the well-known procedure of adprep /forestprep and adprep /domainprep (for each domain), and possibly adprep /domainprep /gpprep (for Group Policy preparation), before you can introduce Windows Server 2008 R2 domain controllers into the environment. So let's go ahead and run through all the steps we need to get the Recycle Bin enabled. Firstly, ensure that all of your domain controllers are running Windows Server 2008 R2; then we need to use PowerShell. The great news with Windows Server 2008 R2 is that version 2 of PowerShell is installed by default and is placed directly on your taskbar.
After you have installed Active Directory Domain Services, the Active Directory specific cmdlets are available via a module; modules are essentially the evolution of snap-ins from version 1 of PowerShell. To access these cmdlets you can either open the Active Directory specific version of the PowerShell console from the Administrative Programs menu, or, the method I prefer, use the Import-Module cmdlet.
Tip: you could add the following expression to your PowerShell profile so that the cmdlets are available every time you open PowerShell:

PS> Import-Module activedirectory

Once complete, all of the Active Directory cmdlets will be at your fingertips. As previously discussed, we now need to get the functional level of the forest up to Windows Server 2008 R2. Previously, the most common way to do this was through Active Directory Domains and Trusts.
Now, though, we can do this through PowerShell. The Get-ADForest cmdlet returns information about your forest, and the Set-ADForestMode cmdlet enables you to raise the current functional level. Since this is such a significant change to your environment, you will be prompted to confirm that you wish to go ahead.

PS> Get-ADForest | Set-ADForestMode -ForestMode Windows2008R2Forest
Now that our forest is at the correct functional level we can enable the Recycle Bin; to do so we use the Enable-ADOptionalFeature cmdlet. This must either be run on the DC holding the Domain Naming Master FSMO role or be directed at that server with the -Server parameter. Again you will be prompted to confirm your command, since the action is irreversible.

PS> Enable-ADOptionalFeature 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'test.local'
Now that we have the Recycle Bin enabled it's time to go check out how we recover some deleted objects. In this environment we have a very simple AD structure with a couple of test accounts to illustrate the example.
Let's take the situation where an administrator accidentally deletes the Users OU. One of the most common ways this can happen is that it is actually possible to delete OUs from the Group Policy Management tool, not just from Active Directory Users and Computers, so an administrator might think they are removing a GPO and, in a bad moment, delete the wrong item and remove a whole OU. The administrator is prompted about what they are about to do, but I have seen it happen more than once!
The initial release of Windows Server 2008 actually included a new checkbox, "Protect object from accidental deletion". In the example of the OU below, any attempt to delete the OU will be met with an "Access is denied" response, and the administrator will have to remove the tick from that checkbox before the OU can be deleted.
However, what you would naturally expect as a consequence of "Protect object from accidental deletion" is that any user or computer account created in that protected OU would also be covered by the same mechanism. Unfortunately, by default they are not, so as good practice you would either need to build that into your account creation process or programmatically check and set that checkbox on all accounts in the OU on a regular basis.
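As a sketch of that periodic check (the OU path below is illustrative, matching the example environment in this article), the Active Directory module's extended ProtectedFromAccidentalDeletion property can be queried and set in one pipeline:

```powershell
# Requires the Active Directory module (Windows Server 2008 R2 / RSAT)
Import-Module ActiveDirectory

# Hypothetical OU - substitute the protected OU in your own domain
$ou = 'OU=Users,OU=Resources,DC=test,DC=local'

# Find direct child objects that are not yet protected,
# then turn the protection flag on for each of them
Get-ADObject -SearchBase $ou -SearchScope OneLevel -Filter * `
    -Properties ProtectedFromAccidentalDeletion |
  Where-Object { -not $_.ProtectedFromAccidentalDeletion } |
  Set-ADObject -ProtectedFromAccidentalDeletion $true
```

Run as a scheduled task, something along these lines would keep newly created accounts in the OU protected without changing the account creation process itself.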
Consequently, in the above example, if we accept the warning to delete the OU, we are greeted with an "Access is denied" message, since the OU has protection set.
So we were saved from deleting the OU, but all of the unprotected child objects were deleted.
(For the purposes of this article I now remove the Users OU by first clearing the checkbox for protecting the object from accidental deletion.)
We can see from the resultant output that we have both the Users OU in there and the two user accounts. So let's try restoring one of the user accounts; to do so we need the Restore-ADObject cmdlet, supplying the ObjectGUID of the user account.

PS> Restore-ADObject -Identity 2df74fba-7e86-4f75-b16d-5725ef45a45f
Oh dear, it failed to restore, but PowerShell tells us that it failed because the object's parent no longer exists either, i.e. we need to first restore the Users OU. (Note: an alternative would be to use the -TargetPath parameter and redirect the restore to a different OU.) To restore the Users OU we can use the same cmdlet (Restore-ADObject) as for restoring users, just supplying the ObjectGUID of the OU.

PS> Restore-ADObject -Identity 20142376-8a48-4b56-9972-0e64eb9e9a0f

The Users OU returns.
Now we just need to get those user accounts back. Rather than typing out the ObjectGUID for each account we wish to restore, we can instead create a search which matches all of the accounts and use the PowerShell pipeline to send the results to the Restore-ADObject cmdlet.

PS> Get-ADObject -LDAPFilter "(lastKnownParent=OU=Users,OU=Resources,DC=test,DC=local)" -IncludeDeletedObjects | Restore-ADObject
If we check the properties of the account we can confirm that, unlike with tombstone reanimation, we get all of the properties back.
It is also possible to use the Configure Recycle Bin action to set the values for DeletedObjectLifetime, the number of days objects reside in the Recycle Bin, and TombstoneLifetime, the number of days objects can be restored using tombstone reanimation after they have left the Recycle Bin. In Windows Server 2008 R2 both of these values default to 180 days; in some earlier versions of Windows Server the value was 60 days, and if you upgrade those domain controllers it will remain the same, so you may wish to change it. You can use the Modify action to do this.
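Under the covers these lifetimes are attributes on the Directory Service object in the configuration partition. As a sketch (the test.local naming context is illustrative, following the article's example forest), they could also be changed with Set-ADObject:

```powershell
Import-Module ActiveDirectory

# Illustrative forest - substitute your own configuration naming context
$configNC = 'CN=Configuration,DC=test,DC=local'
$dsObject = "CN=Directory Service,CN=Windows NT,CN=Services,$configNC"

# Number of days objects reside in the Recycle Bin
Set-ADObject -Identity $dsObject -Partition $configNC `
    -Replace @{ 'msDS-deletedObjectLifetime' = 180 }

# Number of days tombstoned objects remain recoverable afterwards
Set-ADObject -Identity $dsObject -Partition $configNC `
    -Replace @{ 'tombstoneLifetime' = 180 }
```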
For this example I have deleted from Active Directory the Resources and Users containers and the two user accounts, which you can see nicely in the screenshot below, using PowerGUI.
This time we will restore the account Joe Bloggs, but to an alternative location, using the "Restore to..." action. (Remember: in PowerShell this is done using the -TargetPath parameter of the Restore-ADObject cmdlet.) Simply input the path to the Organisational Unit you wish to restore the object to. In this example we use the default Users container as the target location.
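As a sketch of the PowerShell equivalent (re-using the ObjectGUID from the earlier example; the target path is illustrative), the account can be restored to the default Users container like this:

```powershell
Import-Module ActiveDirectory

# Restore the deleted account to an alternate location instead of
# its original (deleted) parent OU
Restore-ADObject -Identity '2df74fba-7e86-4f75-b16d-5725ef45a45f' `
    -TargetPath 'CN=Users,DC=test,DC=local'
```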
The user has been restored to the alternate location; this is particularly useful if we did not wish to bring back the entire OU(s) as we did previously.
If, however, you do wish to bring back the contents of an entire OU and everything below it, there is a Restore (recursive) action.
Using the Restore (recursive) action in this scenario brings back both the Resources and Users OUs, as well as the single account remaining in them, Jane Smith.
Summary
One of the most requested Active Directory features for a long time has been a Recycle Bin, and Microsoft has finally delivered it with the release of Windows Server 2008 R2. It may not be a feature that enterprises get to use for a little while, given the requirements of all domain controllers running Windows Server 2008 R2 and the Active Directory forest being at the 2008 R2 functional level, but it could be one of those compelling reasons that enables you to pursue an upgrade. Administration is via the new Active Directory PowerShell cmdlets, which Microsoft is using to provide a consistent command-line interface across all of its products. Although currently there is no native GUI for these administration tasks, the Active Directory Recycle Bin PowerPack for PowerGUI enables administrators to leverage the underlying PowerShell functionality and provides a graphical interface for carrying out these tasks.
Ben Lye
Outlook PST files are a problem for Exchange users, and offer no benefits over Exchange mailboxes. You have to use them on a local drive, they are difficult to back up, and they are tricky for the administrator to manage. It is possible to use Group Policy settings to limit the use of PST files, and thereby alleviate some of the difficulties they cause.
Outlook Personal Folder Files (PST files) are a data store that can be used by all versions of Outlook to store e-mail data. PST files have long been seen as a way to archive mail out of an Exchange mailbox, often to get the mailbox under a quota limit. However, the use of PST files causes Exchange administrators some serious pain when it comes to managing e-mail. Briefly, these are some of the problems with PST files:

Microsoft does not support using PST files over a LAN or WAN network (KB267019). Using PST files located on network shares can slow down Outlook and can cause corruption of the PST file.

Anti-virus countermeasures cannot be implemented on PST files as easily as on Exchange Server mailbox databases.

It is difficult to accurately report on PST file use, making reporting on organisational mail storage and planning for future growth difficult.

Managing the content of PST files is difficult. Exchange Server provides tools to manage the content of mailboxes (such as Messaging Records Management) and to export or remove data from mailboxes (such as the Export-Mailbox cmdlet), but there are no such tools to manage the content of PST files.

Local PST files are difficult to back up, making them vulnerable to data loss.

Fortunately for the Exchange administrator, it is possible to restrict the ability to use PST files. There are two settings available, and both can be applied using Group Policy registry changes: PSTDisableGrow and DisablePST. PSTDisableGrow prevents new data being added to existing PST files, and DisablePST prevents users creating or opening PST files altogether.

Description: Disable PST files
Registry Path: HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\12.0\Outlook
Registry Value: DisablePST

Description: Prevent PST file growth
Registry Path: HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\12.0\Outlook\PST
Registry Value: PstDisableGrow
Table: PST Restriction Registry Values. Note that the registry paths are specific to the Outlook version: "12.0" refers to Outlook 2007; the registry path for Outlook 2003 would be "...\Office\11.0\Outlook", and so on. In an environment where PST files already exist, these settings can be applied separately or together to phase out their use. The first step could be to implement restrictions on the growth of PST files using PSTDisableGrow, which would allow users to access existing data but not add to it. Subsequently, all PST file use could be disabled by implementing DisablePST.
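As a sketch for testing the effect on a single machine before rolling the settings out via Group Policy (the value of 1 for DisablePST is an assumption here, corresponding to disabling PST use entirely rather than the sharing-exclusive option), the same registry values can be set directly with PowerShell:

```powershell
# Outlook 2007 policy key (use 11.0 for Outlook 2003, and so on)
$outlookKey = 'HKCU:\Software\Policies\Microsoft\Office\12.0\Outlook'

# DisablePST: prevent users creating or opening PST files
New-Item -Path $outlookKey -Force | Out-Null
New-ItemProperty -Path $outlookKey -Name 'DisablePST' `
    -PropertyType DWord -Value 1 -Force | Out-Null

# PstDisableGrow: prevent new data being added to existing PST files
New-Item -Path "$outlookKey\PST" -Force | Out-Null
New-ItemProperty -Path "$outlookKey\PST" -Name 'PstDisableGrow' `
    -PropertyType DWord -Value 1 -Force | Out-Null
```

Because these live under HKEY_CURRENT_USER, the change applies to the current user only; Group Policy remains the right tool for applying them across the organisation.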
Right-click "Group Policy Objects" and select "New". Give the new GPO a name, for example "PST Policy", and click OK. (Skip this step if you want to add these settings to an existing GPO.) Right-click "PST Policy" (or the existing policy you wish to edit) and choose "Edit".
To implement the DisablePST restriction, enable the "Prevent users from adding PSTs to Outlook profiles..." setting and set the option to "Only Sharing-Exclusive PSTs can be added." This will allow PST files for applications such as SharePoint lists, but will prevent user-created PST files from being added.
To implement the PSTDisableGrow restriction, enable the "Prevent users from adding new content to existing PST files" setting.
If the PSTDisableGrow setting is implemented, users will still be able to create and open PST files, but they will not be able to add any data to them. If they try, they will receive this error message:
If the DisablePST setting is implemented, the user will see changes in the Outlook user interface. While any PST files which were already loaded will remain part of the profile, the options to create new PST files or to open any other existing PST files will no longer be in the menu. Archive options will also be removed.
DisablePST implemented. PST files can be a headache for Exchange administrators, but they don't have to be. With easily applied Group Policy settings the use of PST files can be limited, and the problems they cause can be eradicated.
by Jaap Wesselius
What's new in Exchange Server 2010 and what features from Exchange Server 2007 have been deprecated? What has been done to further integrate Exchange with Active Directory? In an extract from his new book, Jaap gives a rooftop view of Exchange's new features and their significance, and spells out the main reasons why it is worth upgrading.
First things first, let's cover some basic background: Exchange Server 2010 is an e-mail and calendaring application that runs on Windows Server 2008 and, like its predecessor Exchange Server 2007, can also integrate with your phone system. It is the seventh major version of the product and, while not revolutionary, it does include some important changes and lots of small improvements over Exchange Server 2007. The scalability of Exchange Server 2010 has improved, especially when compared to the complex storage requirements of Exchange Server 2007. The user experience has also improved in Outlook Web App, and a lot of complex issues have been solved, or the complexity has been removed, to make the administrator's life much easier. In this article I will give a brief overview of what's changed in Exchange Server 2010, what the new features are, what features have been removed, and how it makes your life as an Exchange administrator easier.
Getting Started
Exchange Server 2010 will be available in two versions:

Standard Edition, which is limited to hosting 5 databases.

Enterprise Edition, which can host up to 100 databases.
However, the available binaries are identical for both versions; it is the license key that establishes the difference in functionality. Exchange Server 2010 is also only available in a 64-bit version; there is no 32-bit version available, not even for testing purposes. Bear in mind too that, although it is 64-bit-only software, there is no Itanium version of Exchange Server 2010. Exchange Server 2010 also comes with two Client Access License (CAL) versions:

Standard CAL: this license provides access to e-mail, calendaring, Outlook Web App and ActiveSync for mobile devices.

Enterprise CAL: this is an additive license, and provides Unified Messaging and compliance functionality, as well as Forefront Security for Exchange Server and Exchange Hosted Filtering for anti-spam and anti-virus functionality.
This is not a complete list. For more information about licensing, check the Microsoft website at http://www.microsoft.com/exchange.
Figure 1. Outlook Web App running on an Apple MacBook using the Safari browser. Outlook Web App offers a very rich client experience and narrows the gap between a fully-fledged Outlook client and Outlook Web Access. To reinforce that experience, a lot of new features have been introduced, to name a few: Favorites, Search Folders, attaching messages to messages, integration with Office Communicator, a new Conversation View (which works very well!), integration with SMS (text) messages, and the ability to create Outlook Web Access policies, which give the Exchange organization administrator the ability to fine-tune the user experience. The Web App is a feature which you will find mentioned throughout the book.
High Availability
The Exchange Server 2007 Cluster Continuous Replication (CCR) and Standby Continuous Replication (SCR) features are now combined into one new feature, the Database Availability Group. Database copies exist just as in an Exchange Server 2007 CCR environment and are created within a Database Availability Group, but it is now possible to create multiple copies. Replication is no longer at server level, as in Exchange Server 2007, but at database level, which gives the Exchange administrator much finer control and granularity when it comes to creating a highly available Exchange organization. The servers in a Database Availability Group can be at the same location, or at other locations to create an offsite solution. There is also no longer any need to install the Microsoft Cluster Service (MSCS) before setting up the Database Availability Group, as all cluster operations are now managed by Exchange.
Active Directory is discussed throughout this book, as Exchange Server 2010 has a much closer relationship with AD than previous versions of Exchange Server.
Permissions
Previous versions of Exchange Server relied on delegation of control via multiple Administrative Groups (specifically, Exchange Server 2000 and Exchange Server 2003) or via group membership. Exchange Server 2010 now uses Role Based Access Control (RBAC) to implement a powerful and flexible management model.
Mailbox Archive
Exchange Server 2010 now contains a personal archive; this is a secondary mailbox connected to a user's primary mailbox, and located in the same Mailbox Database as the user's primary mailbox. Since Exchange Server 2010 now supports a JBOD (Just a Bunch of Disks) configuration this isn't too big a deal, and the Mailbox Archive really is a great replacement for (locally stored) .PST files.
Unified Messaging
The Exchange Server 2010 Unified Messaging Server role integrates a telephone system, such as a PABX, with the Exchange Server messaging environment. This makes it possible to offer Outlook Voice Access, enabling you to interact with the system using your voice, listen to voicemail messages, or have messages read to you. Exchange Server 2010 offers some new functionality such as Voicemail Preview, Message Waiting Indicator, integration with text (SMS) messages, and additional language support. Unified Messaging is, unfortunately, a little outside the scope of this book, so you won't find me going into too much detail later on.
Note: it is possible to install Exchange Server 2010 on an Active Directory domain controller. However, for performance and security reasons it is recommended not to do this, and instead to install Exchange Server 2010 on a member server in a domain.
Delegation of Control
Figure 3. The Configuration partition in Active Directory holds all information regarding Exchange Server 2010 in an Administrative Group. In Exchange Server 2003 the concept of "Administrative Groups" was used to delegate control between different groups of administrators. A default "First Administrative Group" was created during installation, and subsequent Administrative Groups could be created to install more Exchange 2003 servers and delegate control of these servers to other groups. The Administrative Groups were stored in the Configuration partition, so all domains, and thus all domain controllers and Exchange servers, could see them. (Incidentally, shift each character of FYDIBOHF23SPDLT back by one and you get EXCHANGE12ROCKS.)
Exchange Server 2007 used Active Directory security groups for delegation of control, and only one Administrative Group is created during installation of Exchange Server 2007, called "Exchange Administrative Group (FYDIBOHF23SPDLT)". All servers in the organization are installed in this Administrative Group. Permissions are assigned to security groups, and Exchange administrators are members of these security groups. Exchange Server 2010 uses the same Administrative Group, but delegation of control is no longer done using Active Directory security groups, as Microsoft has introduced the concept of "Role Based Access Control," or RBAC.
Figure 4. Two subnets in Active Directory, one for the main office and one for the Amsterdam datacenter. Exchange Server 2010 uses Active Directory sites for routing messages between sites. Using our current example, if there is an Exchange Server 2010 Hub Transport Server in Amsterdam and another in London, then the IP site links in Active Directory are used to route messages from Amsterdam to London. This concept was first introduced in Exchange Server 2007, and nothing has changed in Exchange Server 2010. Exchange Server 2003 used the concept of Routing Groups, whereas Active Directory itself already used sites; Active Directory sites and Exchange Server Routing Groups are not compatible with each other. To have Exchange Server 2003 and Exchange Server 2010 work together in one Exchange organization, a special connector has to be created: the so-called Interop Routing Group Connector.
These server roles can be installed on dedicated hardware, where each machine has its own role, but they can also be combined. A typical server installation, for example in the setup program, combines the Mailbox, Client Access and Hub Transport Server roles. The Management Tools are always installed during installation, irrespective of which server role is installed. By contrast, the Edge Transport Server role cannot be combined with any other role. In fact, the Edge Transport Server cannot even be part of the (internal) domain, since it is designed to be installed in the network's Demilitarized Zone (DMZ). There are multiple reasons for separating Exchange Server into multiple server roles:

Enhanced scalability: since one server can be dedicated to one server role, the scalability benefits are huge. This specific server can be configured and optimized for one particular role, resulting in a high-performance server.

Improved security: a dedicated server can be hardened using the Security Configuration Wizard (SCW). Since only one server role is used on a particular server, all other functions and ports are disabled, resulting in a more secure system.

Simplified deployment and administration: a dedicated server is easier to configure, easier to secure and easier to administer.
Figure 5. The Mailbox Server role hosts Mailboxes and Public Folders. Note that Internet Information Server is needed on a Mailbox Server in order to implement the Role Based Access Control (RBAC) model, even if no client accesses the Mailbox Server directly. As I mentioned, Storage Groups no longer exist in Exchange Server 2010, but mailboxes are still stored in databases, just as in Exchange Server 2007. Although rumors have been circulating for more than 10 years that the database engine used in Exchange Server will be replaced by a SQL Server engine, it has not happened yet. Just as in earlier versions of Exchange Server, the Extensible Storage Engine (ESE) is still being used, although major changes have been made to the database and the database schema. By default, the first database on a server will be installed in the directory C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\Mailbox Database <<identifier>>.
Figure 6. The default location for the Mailbox Databases and the log files. The <<identifier>> is a unique number which ensures that the Mailbox Database name is unique within the Exchange organization. It is a best practice, from both a performance and a recovery perspective, to place the database and the accompanying log files on a dedicated disk. This disk can be on a Fibre Channel SAN, an iSCSI SAN, or a Direct Attached Storage (DAS) solution. Whilst it was a design goal to limit disk I/O to a level at which both the database and the log files could be installed on a 1TB SATA disk, this is only an option if Database Copies are configured and you have at least two copies of the Mailbox Database, in order to avoid a single point of failure.
Note SMTP Services are not offered by the Client Access Server. All SMTP Services are handled by the Hub Transport Server.
At least one Client Access Server is needed in each Active Directory site that contains a Mailbox Server, with a fast connection between the Client Access Server and the Mailbox Server. The Client Access Server also needs a fast connection to a Global Catalog server. The Client Access Server should be deployed on the internal network and NOT in the network's Demilitarized Zone (DMZ). In order to access a Client Access Server from the Internet, a Microsoft Internet Security and Acceleration (ISA) Server should be installed in the DMZ, and all necessary Exchange services should be "published" to the Internet on this ISA Server.
Figure 7. The Client Access Server is responsible for providing access to (Internet) clients. The ISA Server is not in this picture.
Figure 8. The Hub Transport Server is responsible for routing all messages. All messages are always routed via the Hub Transport Server role, even if the source and destination mailboxes are on the same server, and even if they are in the same Mailbox Database. For example, in Figure 8:

Step 1: A message is sent to the Hub Transport Server.
Step 2: If the recipient is on the same server as the sender, the message is sent straight back.
Step 3: When the recipient is on another Mailbox Server, the message is routed to the appropriate Hub Transport Server.
Step 4: The second Hub Transport Server delivers the message to the recipient's Mailbox Server.
The reason for routing all messages through the Hub Transport Server is simply compliance. Using the Hub Transport Server, it is possible to track all messages flowing through the Exchange organization and take appropriate action if needed (legal requirements, HIPAA, Sarbanes-Oxley, etc.). On the Hub Transport Server the following agents can be configured for compliance purposes:

Transport Rule agents: using Transport Rules, all kinds of actions can be applied to messages according to the rule's filter or conditions. Rules can be applied to internal messages, external messages or both.

Journaling agents: using the journaling agent, it is possible to save a copy of every message sent or received by a particular recipient.
Since a Mailbox Server does not deliver any messages itself, every Mailbox Server in an Active Directory site requires a Hub Transport Server in that site. The Hub Transport Server also needs a fast connection to a Global Catalog server for querying Active Directory; this Global Catalog server should be in the same Active Directory site as the Hub Transport Server. When a message has an external destination, i.e. a recipient on the Internet, the message is sent from the Hub Transport Server to the outside world. This may be via an Exchange Server 2010 Edge Transport Server in the DMZ, but the Hub Transport Server can also deliver messages directly to the Internet.
Figure 9. The Edge Transport Server is installed between the Internet and the Hub Transport Server. The Edge Transport Server can also provide the following services:

Edge Transport Rules: like the Transport Rules on the Hub Transport Server, these rules can control the flow of messages sent to, or received from, the Internet when they meet a certain condition.

Address rewriting: with address rewriting, the SMTP addresses of messages sent to or received from the Internet can be changed. This can be useful for hiding internal domains, for example after a merger of two companies, but before a single Active Directory and Exchange organization is created.
The Edge Transport Server is installed in the DMZ and cannot be a member of the company's internal Active Directory and Exchange Server 2010 organization. The Edge Transport Server uses Active Directory Lightweight Directory Services (AD LDS) to store all its information. In previous versions of Windows this service was called Active Directory Application Mode (ADAM). Basic information regarding the Exchange infrastructure is stored in AD LDS, such as the recipients and the Hub Transport Server to which the Edge Transport Server sends its messages. To keep the AD LDS database up to date, a synchronization feature called EdgeSync is used, which pushes information from the Hub Transport Server to the Edge Transport Server at regular intervals.
Figure 10. Overview of the Unified Messaging infrastructure. The Unified Messaging Server role provides users with the following features:

Call Answering: this feature acts as an answering machine. When somebody cannot answer the phone, a personal message can be played, after which the caller can leave a message. The message will be recorded and sent to the recipient's mailbox as an .mp3 file.

Subscriber Access: sometimes referred to as "Outlook Voice Access." Using Subscriber Access, users can access their mailbox over a normal phone line and listen to their voicemail messages. It is also possible to access regular mailbox items like messages and calendar items, and even reschedule appointments in the calendar.
Auto Attendant: using the Auto Attendant, it is possible to create a custom menu in the Unified Messaging system using voice prompts. A caller can use either the telephone keypad or his or her voice to navigate through the menu.
The Unified Messaging service installed on the Unified Messaging Server role works closely with the Microsoft Exchange Speech Engine service, which provides the following:

Dual Tone Multi Frequency (DTMF): also referred to as touchtone (the beeps you hear when dialing a phone number or accessing a menu).

Automatic Speech Recognition.

Text-to-Speech: the service responsible for reading mailbox items and reading the voice menus.
The Unified Messaging Server role should be installed in an Active Directory site together with a Hub Transport Server, since the latter is responsible for routing messages to the Mailbox Servers. It should also have a fast connection to a Global Catalog server. If possible, the Mailbox Server role should be installed as close as possible to the Unified Messaging Server role, preferably in the same site and with a decent network connection.
Summary
Exchange Server 2010 is the new Messaging and Collaboration platform from Microsoft, and it has a lot of new, compelling features. The new High Availability, management and compliancy features make Exchange Server 2010 a very interesting product for the Exchange administrator. In fact, the new features in Exchange Server 2010 will generally result in less complexity, which is always a good thing! The whole eBook can be downloaded here.
by Matt Simmons
Because the Sys Admins and DBAs who are efficient end up wielding a great deal of power, there is always a danger that it can 'go to their head'. If ever you start imposing your decisions on the organisation for no better reason than the fact that they're yours, then it is time to read Matt's simple guide on how to prevent your actions being controlled by your ego.
Tim, the Sys Admin, took a moment from flitting back and forth between email, IM, and the ticket queue, in order to assess the state of his network using a screen full of graphs and a highly tuned monitoring system. No aberrant event could occur in the infrastructure without his knowing. Almost as a premonition, he flipped to the virtual terminal that held his email, just as his iPhone chirped an alert. It was a security warning. One of the users had attempted to log into a machine to which they didn't have access.
File system quotas are used to restrict the amount of space users can consume, or to report on the space consumed by them. They are useful for reporting on those users or folders that are consuming large amounts of disk space on a file server. Ben Lye shows that file system quotas are quick and easy to set up, with three different methods available for configuring them.
Disk quotas have been available in Windows since Windows 2000 was released, and could be used by administrators to limit the amount of space users could use on an NTFS volume. Disk quotas are based on file ownership rather than folder structure, and because of this they are not particularly useful in all situations. For example, if your server had a single storage volume and you needed to apply quotas to different folders on the volume, then disk quotas would not help.

File system quotas, which were first introduced in Windows Server 2003 R2 and are part of the File Server role in Windows Server 2008 (and Windows Server 2008 R2), offer many benefits over disk quotas. With file system quotas we can set quotas for specific folders on the volume, we can use templates to ensure consistent application of quotas, and we can set quotas which are automatically applied to all subfolders of a folder. Additionally, file system quotas are useful not just for limiting the amount of space users can consume, but also for reporting on space used: quotas can be set with so-called "soft" limits, which are used for monitoring rather than enforcing limits. This functionality can be extremely useful for quickly determining which users or folders are consuming large amounts of disk space on a file server.

Quota thresholds can be configured so that users or administrators receive notifications when quotas have been reached or are about to be reached. Multiple thresholds can be configured for individual quotas, and actions can include sending e-mail messages, logging to the Windows event log, running commands or scripts, or generating storage reports.

In Windows Server 2008 file system quotas are managed with the File Server Resource Manager (FSRM) console (which is installed as a role service in the File Services role), the command line utility dirquota, or with Windows PowerShell using a COM API.
Figure 1: Windows Server 2008 File Server Resource Manager.

There are two kinds of quota available: hard quotas, which set a limit and enforce it, and soft quotas, which set a limit but only report on it. Soft quotas are useful for monitoring disk space use. Quotas are commonly applied using quota templates, which are a mechanism for easily applying the same quota settings to one or more folders. Quota templates are the recommended way to configure quotas, and FSRM includes some example templates which cover a range of scenarios, including both hard and soft quota types.

Before we start to configure quotas which will generate e-mail messages, File Server Resource Manager needs to be configured with an SMTP server and, optionally, the default administrator recipients and the default "from" address. Like all aspects of quota management, the FSRM settings can be applied using three different tools, and you can choose the method appropriate to your needs.

To configure FSRM using the FSRM console:
Launch the File Server Resource Manager.
Select the root node, labelled "File Server Resource Manager."
In the Action Pane click "Configure Options..."
Enter an SMTP server and, if desired, configure the other settings.
To configure FSRM using the command line:
Open an elevated command prompt window.
Enter the command:

dirquota admin options /From:[email protected] /AdminEmails:[email protected] /SMTP:smtp.example.com
To configure FSRM using Windows PowerShell:
Open Windows PowerShell.
Enter these commands (or save them as a script and run it):

# Create a new COM object to access the FSRM settings
$fsrm = New-Object -com Fsrm.FsrmSetting
# Set the default administrator's e-mail address
$fsrm.AdminEmail = "[email protected]"
# Set the from address
$fsrm.MailFrom = "[email protected]"
# Set the SMTP server
$fsrm.SmtpServer = "smtp.example.com"
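As a quick sanity check after applying these settings, the same COM interface exposes an EmailTest method which sends a test message via the configured SMTP server; the recipient address here is a placeholder:

```powershell
# Re-create the FSRM settings COM object and send a test e-mail
# through the SMTP server configured above
$fsrm = New-Object -com Fsrm.FsrmSetting
$fsrm.EmailTest("[email protected]")
```

If the message does not arrive, the SMTP settings can be corrected before any quota notifications go missing.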
Note To set a soft quota (for monitoring only) check the "Soft Quota" radio button.
Click the "Add" button to add a notification threshold. Set the notification percentage to 90, check the "Send e-mail to the following administrators," and enter an appropriate destination e-mail address. You can also customise the message text.
To create the new quota template using the dirquota command line utility:
Open an elevated command prompt (or Windows PowerShell) window.
Create a text file called notification.txt containing the text of the notification message (an example of this text message can be downloaded from this article).
Enter this command:

dirquota Template Add /Template:"Default Home Directory (5GB Limit)" /Limit:5GB /Type:Hard /AddThreshold:90 /Add-Notification:90,M,notification.txt
Note: To set a soft quota (for monitoring only) change "/Type:Hard" to "/Type:Soft".

To create the new quota template using Windows PowerShell:
Open Windows PowerShell.
Enter these commands (or save them as a script and run it):

# Create a new COM object to access quota templates
$fqtm = New-Object -com Fsrm.FsrmQuotaTemplateManager
# Create a new template object
$template = $fqtm.CreateTemplate()
# Set the template's name
$template.Name = "Default Home Directory (5GB Limit)"
# Set the quota limit
$template.QuotaLimit = 5GB
# Set the quota type to hard limit (the flag for a hard limit is 0x100)
$template.QuotaFlags = $template.QuotaFlags -bor 0x100
# Add a quota threshold
$template.AddThreshold(90)
# Add a threshold e-mail action
$action = $template.CreateThresholdAction(90, 2)
# Set the e-mail message recipient
$action.MailTo = "[Admin Email]"
# Set the e-mail message subject
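The listing above is cut off at a page boundary after the subject comment. Based on the properties of the FsrmActionEmail COM object, the remaining lines would plausibly continue along these lines (the subject and body text, built from FSRM's bracketed notification variables, are illustrative, and the final Commit saves the template):

```powershell
# Set the e-mail message subject
$action.MailSubject = "[Quota Threshold]% quota threshold exceeded"
# Set the e-mail message body
$action.MessageText = "The quota on [Quota Path] has reached [Quota Threshold]% of its limit."
# Save the new quota template
$template.Commit()
```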
To create the quota using the dirquota command line tool:
Open an elevated command prompt window.
Create a text file called notification.txt containing the text of the notification message (an example of this text message can be downloaded from this article).
Enter the command:

dirquota autoquota add /Path:C:\Home /SourceTemplate:"Default Home Directory (5GB Limit)"
To create the quota using Windows PowerShell:
Open Windows PowerShell.
Enter these commands (or save them as a script and run it):

# Create a new COM object to access quotas
$fqtm = New-Object -com Fsrm.FsrmQuotaManager
# Create the new quota
$quota = $fqtm.CreateAutoApplyQuota("Default Home Directory (5GB Limit)", "C:\Home")
# Save the new quota
$quota.Commit()
Click "OK."
To create the quota exception using the dirquota command line tool: Open an elevated command prompt window.
Enter the command:

dirquota quota add /Path:C:\Home\John_Smith /SourceTemplate:"Default Home Directory (5GB Limit)" /Limit:10GB /Overwrite
To create the quota exception using Windows PowerShell:
Open Windows PowerShell.
Enter these commands (or save them as a script and run it):

# Create a new COM object to access quotas
$fqtm = New-Object -com Fsrm.FsrmQuotaManager
# Get the existing quota
$quota = $fqtm.GetQuota("C:\Home\John_Smith")
# Set the new quota limit
$quota.QuotaLimit = 10GB
# Save the quota
$quota.Commit()
Note To disable the quota, insert the line "$quota.QuotaFlags = $quota.QuotaFlags -bor 0x200" before saving the quota.
Figure: Mapped drive showing hard quota limit as volume size.

When a hard quota is met or exceeded, clients will receive a message telling them that the volume is out of disk space.
Viewing Quotas
Administrators can view hard and soft quotas using FSRM, and viewing quotas this way can be a quick method for finding large folders or large consumers of space.
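The same information can also be pulled from the command line; for example, dirquota can list every configured quota along with its limit and current usage:

```powershell
# List all quotas on the server, including limits, usage and available space
dirquota quota list
```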
Figure: Quotas in File Server Resource Manager.

File system quotas are quick and easy to set up, with three different and flexible methods available for configuring them. Properly applied they can be a good tool to help ensure efficient use of storage resources, a convenient countermeasure against storage waste, and a useful tool for reporting on storage utilisation. There is an I/O performance penalty for using quotas, but the benefits will probably outweigh the small performance cost. More information about File Server Resource Manager is available in Microsoft TechNet: http://technet.microsoft.com/en-us/library/cc771092%28WS.10%29.aspx. More information about the dirquota command line tool is also available in TechNet: http://technet.microsoft.com/en-us/library/cc754836%28WS.10%29.aspx. More information about the COM API for working with FSRM is available in MSDN: http://msdn.microsoft.com/en-us/library/bb972746%28VS.85%29.aspx.
by Brien Posey
Cluster continuous replication (CCR) helps to provide a more resilient email system with faster recovery. It was introduced in Microsoft Exchange Server 2007 and uses log shipping and failover. Configuring Cluster Continuous Replication on Windows Server 2008 requires different techniques to Windows Server 2003. Brien Posey explains all.
In the first part of this article series, I showed you how to configure Windows to form a Majority Node Set Cluster. Although the procedure that I gave you works well, it is intended for use with Windows Server 2003. You can also host a clustered mailbox server on Windows Server 2008, but the procedure for creating the cluster is quite a bit different from what you saw in the first article. In this article, I want to show you how to create the required cluster on Windows Server 2008. In the previous article, I spent a long time talking about the hardware requirements for building a cluster, and about the cluster planning process. The same information roughly applies to planning for a Windows Server 2008 cluster. You are still going to need two similar servers with two NICs installed. The requirements for the number of names and IP addresses that I covered in Part 1 also remain the same. Once again, I will be referring to the cluster node that is initially active as Node 1, and the node that initially acts as the passive node as Node 2.
Figure B.
Figure C. Enter the name and IP address that you want to assign to the cluster. After entering the cluster's name and IP address, click Next, and you should see a summary screen outlining the details of the cluster that you are creating. Take a moment to read the summary and make sure that everything is correct. Assuming that all is well, click Next, and Windows will create your cluster. When the process completes, click Finish.
Figure D. Both networks should be displayed in the Failover Cluster Management Console.

Right click on Cluster Network 1, and choose the Properties command from the shortcut menu. When you do, you will see a properties sheet for the connection. Make sure that this connection isn't the private segment that you are reserving for communication between cluster nodes. Now, verify that the Allow Cluster to Use this Connection option is selected. You should also make sure that the Allow Clients to Connect Through This Network check box is selected. Click OK to close the properties sheet.

Now, right click on Cluster Network 2, and select the Properties command from the resulting shortcut menu. This network connection should be the private segment that services the cluster nodes. Once again, you want to make sure that the Allow Cluster to Use This Network Connection option is selected. However, the Allow Clients to Connect Through This Network option should not be selected. Click OK to close the properties sheet and return to the Failover Cluster Management Console.
Figure E.
Figure F. Enter the path to the file share that you created.
Figure G. Select the Active Clustered Mailbox Role check box. Click Next, and Setup will ask you what type of clustered mailbox server you want to create. Choose the Cluster Continuous Replication option. Before continuing, you must enter the name that you want to assign to your clustered mailbox server, as shown in Figure H. This is the name that will be unique to Exchange Server.
Figure H. Choose the Cluster Continuous Replication option and then enter the name that you want to use for your clustered mailbox server. Click Next, and you will be prompted to enter the server's IP address. You should enter the IP address that will be unique to Exchange Server. Click Next, and Setup will perform a prerequisite check to make sure that all of the necessary components are in place. Assuming that the check succeeds, click the Install button to begin the installation process. When the installation process completes, click Finish. You may see a message telling you that a restart is required before placing the server into production. If you receive such a message, just click OK to acknowledge it. Rather than restarting the server immediately, we need to stop the clustered mailbox server service, and move it to our passive node. As you will recall though, we haven't actually installed Exchange onto the passive node yet. This makes things a little bit messy, but it isn't going to cause us any problems. My personal preference is to perform this operation from the command line, but you can use the Failover Clustering Console.
Conclusion
In this article I have explained how to deploy Cluster Continuous Replication in a Windows Server 2008 environment. In Part 3, I will conclude the series by showing you various techniques for managing your cluster.
by Jonathan Medd
One of the first things you notice with Windows Server 2008 R2 is that PowerShell 2.0 has become central to the admin function. There is a powerful Active Directory module for PowerShell that contains a provider and cmdlets designed to let you manage Active Directory from the command line. Versions are now also available for previous versions of Windows Server.
Windows Server 2008 R2 and Windows 7 both ship with PowerShell 2.0 installed by default, a fact which begins to demonstrate how important a consistent command line interface is becoming to Microsoft products. This is why you, as a sysadmin, should be aware that in order to excel as a Windows administrator in the 21st century you will need to get to grips with PowerShell. Starting with products like Exchange Server 2007 and System Center Virtual Machine Manager (SCVMM), and moving into the core OS with Windows Server 2008 R2 and Windows 7, other product groups are now providing PowerShell support with their latest releases. Active Directory Domain Services in Windows Server 2008 R2 ships with PowerShell support via cmdlets and a provider; in addition, the new Active Directory Administrative Center is built on top of Windows PowerShell technology. In this article we will look at how you can use these new tools to manage your environment more effectively.
Each approach had its good and bad points but, going forward, the PowerShell cmdlets which ship as part of Active Directory in Windows Server 2008 R2 will be the route to take, given that they will be Microsoft's focus for administrators compared to the above examples.
The Active Directory PowerShell cmdlets and provider which ship with Windows Server 2008 R2 are included as part of a module (modules are essentially the evolution of snap-ins from version 1 of PowerShell) and use this Web Service to manage Active Directory. The good news for organisations who will not be immediately upgrading their Active Directory environments to Windows Server 2008 R2, and are currently running either Windows Server 2003 SP2 (or R2 SP2) or Windows Server 2008 (with or without SP2), is that this Web Service has been made available as a Release To Web update. This means that you can have the same automation experience today with your current environment without going through the upgrade process for Active Directory, which for most organisations typically involves significant planning and quite possibly cost. There are some system requirements for this downlevel release which you should be aware of before you go diving straight in:

.NET Framework 3.5 with SP1 must be installed on the Windows Server 2003 or 2008 Domain Controller.
PowerShell 2.0 for Windows Server 2003 or 2008 must be installed; it's available from Knowledge Base (KB) article 968929.
For Windows Server 2003 and 2008 based Domain Controllers, you must download and install the hotfix that is described in KB article 969166.
For Windows Server 2003 based Domain Controllers, you must download and install the hotfix that is described in KB article 969429.
For Windows Server 2008 based Domain Controllers without SP2, you must download and install the hotfix that is described in KB article 967574.
Note that it is not possible to install the Active Directory module on these downlevel servers; you will instead require a Windows Server 2008 R2 or Windows 7 instance to remotely manage these systems with the Active Directory module. Whilst it is not a requirement to install the Web Service on every single downlevel Domain Controller, you may run into issues if you do not give enough coverage across your Domain Controllers. Most of the Active Directory cmdlets, though, include a Server parameter which allows you to specify a particular Domain Controller to connect to.
Getting Started
Once you have the Active Directory Web Service installed, either as part of a native Windows Server 2008 R2 installation or using one of the downlevel versions described above, you can start to manage Active Directory with the PowerShell module. To use this module you can either use Remote Desktop to connect to a Domain Controller in your environment or, more typically (and also better practice), use tools on your management workstation.
To begin you can use the familiar cd (which is an alias in PowerShell for Set-Location) to change your location to that of the Active Directory hierarchical navigation system.
You can use other familiar commands to navigate your way around and make changes. We can use the well-known dir to show us what is available, then cd 'DC=test,DC=local' to start exploring the domain; dir again will give a view similar to that seen when first opening the Active Directory Users and Computers GUI tool.
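Put together, the navigation described above looks something like this (the domain name matches the article's test domain):

```powershell
Import-Module ActiveDirectory   # loads the AD: drive and the AD cmdlets
cd AD:                          # move into the Active Directory provider
dir                             # shows the naming contexts, including the domain
cd 'DC=test,DC=local'           # enter the domain itself
dir                             # a view similar to Active Directory Users and Computers
```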
If we then drill down further into the Users OU below the Sales OU we can observe some user accounts, again by using dir. This time we would like to update the Description field for the Joe Bloggs user account to read "Marketing Manager". To do that we can use the Set-ItemProperty cmdlet with the path to the object we wish to change and the values we wish to set:

Set-ItemProperty -Path "CN=Joe Bloggs" -Name Description -Value "Marketing Manager"

We can then read this property back with the Get-ItemProperty cmdlet:

Get-ItemProperty -Path "CN=Joe Bloggs" -Name Description
Or in a GUI view:
For simplicity's sake we'll keep it to a few basic properties like Name and Description; of course you could, and normally would, have significantly more. Let's also say, to make your life easier, you've added a few extra columns of your own, like the SamAccountName and the Path (OU) where you would like the account to be created. We can take advantage of the standard PowerShell cmdlet Import-CSV, which will read in a standard CSV file and create a set of objects based on the data inside the file. We can then send the results of this cmdlet down the pipeline to the New-ADUser cmdlet from the ActiveDirectory module and create the 10 accounts in a matter of seconds.

Import-CSV C:\scripts\users.csv | New-ADUser

It really is as simple as that, and now you too can leave early on that Friday night like everyone else! Some of the results of those commands are shown below, nicely created in the OU specified in the CSV file.
You might be wondering how we made the leap from the information specified in the CSV to the New-ADUser cmdlet without having to specify any parameters with this cmdlet. In fact you could create a new user account with something like this:

New-ADUser -Name "John Smith" -SamAccountName "john.smith" -Description "Sales Manager" -Department "Sales" -EmployeeID "45896" -Path "ou=users,ou=sales,dc=test,dc=local" -Enabled $true

We have actually taken advantage of something known as "Pipeline Parameter Binding". Since we gave the columns in the CSV file the same names as the parameters of the New-ADUser cmdlet, and these parameters "Accept Pipeline Input ByPropertyName", PowerShell is clever enough to match them up and use them. It's almost like someone has thought to try and make your administrator life just that little bit easier.
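For illustration, a users.csv laid out for this technique might look like the following; the column headers must match New-ADUser parameter names exactly, and the values here are invented:

```
Name,SamAccountName,Description,Department,EmployeeID,Path
"John Smith",john.smith,"Sales Manager",Sales,45896,"ou=users,ou=sales,dc=test,dc=local"
"Jane Doe",jane.doe,"Marketing Assistant",Marketing,45897,"ou=users,ou=marketing,dc=test,dc=local"
```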
Administering Groups
Another frequent maintenance task for the Active Directory administrator is the upkeep of groups, and in particular membership of them. Let's take the scenario of the newly created Marketing user accounts and a request to add all of them to the "Marketing Application" group, which is used to provide access to one of their main tools. Again you could do that through Active Directory Users and Computers: edit the membership of the "Marketing Application" group and manually enter all of their names to add them to the group. Or you could be smart: use the Get-ADUser cmdlet with a filter to select those users, pipe the results to the standard PowerShell cmdlet ForEach-Object, and then use the Add-ADGroupMember cmdlet to populate the group. OK, it might not make that big a difference for only five users, but imagine if that was five thousand.

Get-ADUser -filter * -SearchBase "ou=users,ou=marketing,dc=test,dc=local" | ForEach-Object {Add-ADGroupMember -Identity 'Marketing Application' -Members $_}
Note The $_ at the end of that command refers to the "current object in the pipeline', i.e. imagine it cycling through each of those users in the filter and substituting that user account for $_ each time.
The results of those cmdlets displayed in the GUI view of the group.
There's an equivalent Set-ADDefaultDomainPasswordPolicy which will allow you to make changes to any of the above settings, though you could have done all of this through the Group Policy Editor. The real gains come from being able to create and manage a Fine-Grained Password Policy. Use:

New-ADFineGrainedPasswordPolicy -Name "Standard Users PSO" -Precedence 500 -ComplexityEnabled $true -Description "Standard Users Password Policy" -DisplayName "Standard Users PSO" -LockoutDuration "0.00:12:00" -LockoutObservationWindow "0.00:15:00" -LockoutThreshold 10

to create a new Fine-Grained Password Policy called "Standard Users PSO" with policy settings defined as per the parameter values. In the above example the Lockout parameters should be specified in the format Day.Hour:Minutes:Seconds, i.e. LockoutDuration has been set to 12 minutes.

Fine-Grained Password Policies are applied to Users or Groups, so to apply the Standard Users PSO policy to the Marketing Users group the Add-ADFineGrainedPasswordPolicySubject cmdlet is available to you. Use:

Add-ADFineGrainedPasswordPolicySubject 'Standard Users PSO' -Subjects 'Marketing Users'

to make this change. Of course you may now have more than one Fine-Grained Password Policy. To view details of them all use:

Get-ADFineGrainedPasswordPolicy -Filter {name -like "*"}

and you will see results similar to the below so that you can compare them.
Consequently some users may have more than one policy applied to them, so how do you tell which policy will be effective? If a user is directly linked to a particular policy then that policy wins (generally it is bad practice to link a policy directly to a user; groups should be used for more effective management). If a user is a member of different groups which each have different policies applied to them, then the policy with the lowest Precedence value will win. If there are no Fine-Grained policies created then the Default Domain Policy will apply.
The best way to determine what policy will be applied to a user is to use the Get-ADUserResultantPasswordPolicy cmdlet. For example, George Maker is in the Marketing Users and High Security Users groups, both of which have different Fine-Grained Password Policies applied to them. By running the command: Get-ADUserResultantPasswordPolicy -Identity 'George.Maker' we will be given the resulting answer of which password policy will be applied to him. In this case it's the "High Security Users PSO" which is applied because it had the lowest precedence value.
FSMO Roles
In an Active Directory environment all Domain Controllers are equal, although some are more equal than others. There are five roles, known as Flexible Single Master Operation (FSMO) roles, which typically will live on different domain controllers:

Domain Naming Master.
Schema Master.
Infrastructure Master.
PDC Emulator.
RID Master.
You can use the Get-ADForest and Get-ADDomain cmdlets to determine which Domain Controllers are holding these roles. (In my test environment there is only one DC, but you get the idea.)
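If you only want the role holders rather than the cmdlets' full output, the relevant properties can be selected directly (the server names returned will of course be your own):

```powershell
# Forest-wide FSMO roles
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster

# Domain-specific FSMO roles
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster
```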
Whilst you could use the Netdom.exe command line tool to obtain the same information, you would need to use an alternative command line tool, Ntdsutil.exe, with a different style and syntax, to transfer a FSMO role between Domain Controllers. By using the Move-ADDirectoryServerOperationMasterRole cmdlet to transfer FSMO roles between Domain Controllers, the administrator using the PowerShell module instead benefits from a consistent command line experience.

Move-ADDirectoryServerOperationMasterRole -Identity "2008R2DC2" -OperationMasterRole PDCEmulator,RIDMaster
Further Information
If you want to know more, the best place to look is the Active Directory PowerShell Blog maintained by the team who put the module together. Also, I have made an Active Directory PowerShell Quick Reference Guide which you might find useful for getting started or a handy reference to keep by your desk.
Summary
Windows Server 2008 R2 ships with both PowerShell 2.0 and an Active Directory module containing a provider and cmdlets to enable you to manage Active Directory from the command line. This module has also now been made available for downlevel versions of Windows Server, making it more readily accessible to those who might not be in a position to upgrade their Active Directory environments just yet. It is well worth spending some time with this module, even if it is just getting to grips with the basics; it will help make you a better and more effective administrator of Active Directory.
by Jaap Wesselius
In this article, the first of two in which Jaap describes how to move from Exchange Server 2003 straight to Exchange Server 2010, he shows what is required before moving mailboxes from Exchange Server 2003 to Exchange Server 2010. He shows how to upgrade Active Directory, install both Exchange Server 2010 and certificates, and set up Public Folder replication.
Microsoft released Exchange Server 2010 in October 2009, and this new version of Exchange Server contains a lot of compelling new features, such as the new High Availability, the facility to store your Exchange databases on JBOD (Just a Bunch of Disks), the archiving option and the new Outlook Web App. Oh, and do not forget the new Windows Mobile 6.5 and its new mail client. If you have an Exchange Server 2003 environment you may want to skip Exchange Server 2007 and move directly to Exchange Server 2010. The easiest way to achieve this is to integrate Exchange Server 2010 into the existing Exchange Server 2003 environment, a so-called intra-organizational migration. This is also known as transitioning from Exchange Server 2003 to Exchange Server 2010. But what does it take, and what issues might arise? This is part 1 of a series of two about moving from Exchange Server 2003 to Exchange Server 2010, and in this document I'll show you what's needed before you start moving mailboxes from Exchange Server 2003 to Exchange Server 2010.
Inframan is using only one namespace for accessing all services from the Internet: webmail.inframan.nl. This is used for Outlook Web Access, Outlook Anywhere and Windows Mobile devices. Recently Inframan has been thinking about upgrading to Exchange Server 2007, but decided to move directly to Exchange Server 2010.
Be careful when upgrading your Active Directory Domain Controllers since not all versions are supported to run with Exchange Server 2003. For a complete overview check the Microsoft Technet Site: http://technet.microsoft.com/en-us/library/ee338574.aspx.
In Exchange Server 2007, Internet clients could connect to the Exchange Server 2007 Client Access Server while the mailbox was still on Exchange Server 2003. The Client Access Server retrieves the data out of the mailbox and sends it back to the Internet client. In Exchange Server 2010 this has changed. When a client connects to Exchange Server 2010, it actually connects to the Exchange Server 2010 Client Access Server, and if the mailbox is still on the Exchange Server 2003 Mailbox Server then the client is redirected to the Exchange Server 2003 front-end server. This front-end server then handles the connection request. This automatically means the namespaces of the Exchange environment will change. For Inframan this means that the following namespaces are used:

https://webmail.inframan.nl - used by all Internet clients that connect to the Exchange environment. This name is no different than in the Exchange Server 2003 namespace, but it will now point to the Exchange Server 2010 Client Access Server.
https://autodiscover.inframan.nl - used by Outlook 2007 (and Outlook 2010) clients for autodiscover purposes.
https://legacy.inframan.nl - the new namespace for the Exchange Server 2003 front-end server. This automatically means that the namespace for the Exchange Server 2003 front-end server is going to change!

The servers that will hold the Exchange Server 2010 server roles have the following prerequisites:

They need to be running Windows Server 2008 or Windows Server 2008 R2.
.NET Framework 3.5 with SP1 needs to be installed.
PowerShell 2.0 needs to be installed.
The Office 2007 Filter Packs need to be installed for the Hub Transport Server role and the Mailbox Server role.
Note that the value is the same in Exchange Server 2007 Service Pack 2 and in Exchange Server 2010 RTM; this is because Exchange Server 2007 Service Pack 2 already installs the Exchange Server 2010 schema extensions.

Setup.com /PrepareAD

This command upgrades the Exchange organization, which is stored in the configuration partition of Active Directory, to support Exchange Server 2010. In Exchange Server 2003, information is stored in the "First Administrative Group" (or in additional Administrative Groups, if you created any). The Exchange Server 2010 setup application will create a new Administrative Group called "Exchange Administrative Group (FYDIBOHF23SPDLT)" where all Exchange Server 2010 configuration information is stored. This will be visible in the Exchange Server 2003 System Manager:
Setup.com /PrepareDomain

This is the last step in preparing Active Directory, and it creates all the necessary groups in the domain being prepared.
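Taken together with the schema step implied above, the whole Active Directory preparation can be run from the installation media in one short sequence. A minimal sketch, assuming an account with Schema Admins and Enterprise Admins membership:

```powershell
# Run from the Exchange Server 2010 installation media, in this order.
.\Setup.com /PrepareSchema   # extend the Active Directory schema
.\Setup.com /PrepareAD       # upgrade the Exchange organization
.\Setup.com /PrepareDomain   # create the required groups in this domain
```

/PrepareDomain can also be run per additional domain that contains mail-enabled objects, passing the domain's FQDN as an argument.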
During the installation of the combined Hub Transport and Client Access Server, a so-called "custom setup" will be used. This means we can select which server roles will be installed. In the Inframan example, the following needs to be selected during setup:
When you continue the setup application, a window is shown asking whether this Client Access Server is Internet-facing and, if so, what the external domain will be. This is an important step, because it automatically configures the Client Access Server with the appropriate settings. Check the "The Client Access server will be Internet-facing" option and enter the external domain name. This is "webmail.inframan.nl" in our example.
Exchange Server 2003 uses Routing Groups to determine the proper way to route messages while Exchange Server 2010 uses Active Directory sites for routing. These are not compatible with each other so a legacy Routing Group Connector will be created within Exchange Server 2010. This legacy connector connects Exchange Server 2010 with Exchange Server 2003 so messages can be sent between the two Exchange versions. During setup of the first Hub Transport Server an Exchange Server 2003 Hub Server needs to be selected. This is the server the legacy Routing Group Connector will connect to:
Note that this choice can be changed and/or added to after setup is complete. Now finish the setup wizard and install the Client Access and Hub Transport Server roles on this server. It is also possible to use the command-line setup application to set up the above-mentioned configuration. Open a command prompt, navigate to the installation media, and enter the following command: Setup.com /mode:install /roles:ht,ca,mt /ExternalCASServerDomain:webmail.inframan.nl /LegacyRoutingServer:2003FE.inframan.local
An interesting part of Exchange Server 2010 is the database technology. Microsoft has made significant changes to the database structure to lower the disk performance requirements; it should be sufficient to run the Mailbox databases and their accompanying log files from SATA disks. In the Requirements Calculator you can enter the disk configuration. For the new Inframan Mailbox Server, 7,200 RPM SATA disks with a capacity of 500 GB will be used for storing the databases, and 7,200 RPM SATA disks with a capacity of 250 GB will be used for storing the log files. This disk configuration is not exactly a high-end configuration, but it is by far the most cost-effective solution. The Requirements Calculator contains a tab called "Storage Design." When using the above-mentioned values, the Calculator recommends a RAID 1/0 configuration with 6 SATA disks for storing the Mailbox databases and a RAID 1/0 configuration with 2 SATA disks for storing the log files.
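Since RAID 1/0 mirrors every spindle, only half of the raw capacity is usable, which makes the recommendation easy to sanity-check. A hypothetical back-of-the-envelope calculation (the disk counts and sizes are simply the Inframan figures from above; real sizing should come from the Requirements Calculator):

```powershell
# Rough usable-capacity check for the recommended RAID 1/0 sets.
$dbUsable  = (6 * 500) / 2   # 6 x 500 GB database disks -> 1500 GB usable
$logUsable = (2 * 250) / 2   # 2 x 250 GB log disks      -> 250 GB usable
"Database LUN: $dbUsable GB, Log LUN: $logUsable GB"
```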
During the coexistence phase, Internet clients will connect to the Exchange Server 2010 Client Access Server while their mailbox is still on Exchange Server 2003. The client request will then be redirected to the old Exchange Server 2003 front-end server. This server will therefore get a new FQDN (Fully Qualified Domain Name) and thus needs a new certificate. This new FQDN will be legacy.inframan.nl.
OWA Configuration
During installation of the Exchange Server 2010 Client Access Server, all settings have been configured for use on the Internet. The only thing that needs to be configured is the coexistence information for Outlook Web App. The Client Access Server needs to be configured for the case where a mailbox is still on Exchange Server 2003 and the client needs to be redirected to the Exchange Server 2003 front-end server. On an Exchange Server 2010 server, enter the following Management Shell command:

Set-OwaVirtualDirectory <CASHUB01>\OWA -ExternalURL https://webmail.inframan.nl/OWA -Exchange2003URL https://legacy.inframan.nl/exchange

This ensures that when a user connects to the Exchange Server 2010 Client Access Server for Outlook Web Access and the mailbox is still on Exchange 2003, the client is redirected to the old Exchange Server 2003 front-end server.
The "Manage Public Folder Settings wizard" will appear. Click Next on the Welcome page and select the "Modify lists of replica servers." Follow the wizard and add the Exchange Server 2010 Mailbox Server role as a new replica. When finished, the folder and all its subfolders will be replicated to the Exchange Server 2010 Public Folder database. Repeat this step for the second Offline Address Book folder and the Schedule+ Free Busy folder.
Notes: When the "Manage Settings" option is not available, you can select "Properties" and use the Replication tab to add the Exchange Server 2010 Public Folder database. Replication of public folders can take quite some time.
The (default) Public Folders that are located on the Exchange Server 2010 Mailbox Server should be replicated to the Exchange Server 2003 Mailbox Server. To accomplish this, log on to the Exchange Server 2010 Mailbox Server, open the Exchange Management Console, and navigate to the Tools node. Under the Tools node, open the Public Folder Management Console. Right-click the Offline Address Book in the results pane, select Properties, and click the Replication tab.
Add the Exchange Server 2003 Mailbox Server to the replica list; the contents will now be replicated to the Exchange Server 2003 Mailbox Server. Be aware that Public Folder replication is a low-priority mechanism, so it takes some time before both Public Folder databases are in sync. Repeat these steps for the Schedule+ Free/Busy folder.
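Working through the replica lists folder by folder can be tedious in a large hierarchy. Exchange Server 2010 ships helper scripts in its Scripts directory for exactly this job; the sketch below names AddReplicaToPFRecursive.ps1 and its parameters from memory, and uses a hypothetical server name, so verify both against your installation before running it.

```powershell
# Hedged sketch: recursively add a server to the replica list of a
# public folder tree using a script shipped with Exchange 2010.
# "2003BE" is a hypothetical Exchange 2003 back-end server name.
cd $env:ExchangeInstallPath\Scripts
.\AddReplicaToPFRecursive.ps1 -TopPublicFolder "\" -ServerToAdd "2003BE"
```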
Summary Part I
In this first article of a series of two, I explained the steps that are needed before you can start moving mailboxes from Exchange Server 2003 to Exchange Server 2010. In this article, the Active Directory was upgraded, two servers with Exchange Server 2010 were installed, certificates were installed, and Public Folder replication was set up. In the next article I will cover the actual movement of the mailboxes and the steps that are needed to decommission the Exchange 2003 servers, such as moving the Offline Address Book generation server and converting Recipient Policies and Address Books. Stay tuned!
by Ben Lye
It is possible to change the fields in the Outlook Address Book to make them a better fit for your organisation. Exchange provides templates for Users, Contacts, Groups, Public Folders, and Mailbox Agents that are downloaded to Outlook, and any changes will be visible in the address book. As usual with Exchange, careful planning before you make changes pays dividends, as Ben Lye explains.
I was recently asked if it was possible to change the information fields which are displayed in the Outlook Address Book for users: the person making the request wanted to add an additional telephone number field on the General and Phone/Notes property pages to display an internal extension. This kind of customisation is probably something that many Exchange organisations can benefit from, and the changes to the Outlook Address Book can be implemented easily within Exchange using the Exchange Details Template Editor. The Details Template Editor is an MMC snap-in which provides a GUI for editing the object properties which are displayed when an object is opened from the address book. Details templates can be modified for Users, Contacts, Groups, Public Folders, and Mailbox Agents. The Advanced Find search dialogue box can also be edited. Each of the six template types can be modified in 50 different languages. The Details Template Editor is installed along with the Exchange Management Tools. To start the Details Template Editor in Exchange 2007 RTM:

1. On the taskbar, click Start, and then click Run.
2. Type "mmc" in the Open field.
3. On the Console menu bar, click File, and then click Add/Remove Snap-in.
4. In Add/Remove Snap-in, on the Standalone tab, click Add.
5. In Add Standalone Snap-in, select Details Templates Editor from the list of available stand-alone snap-ins, and then click Add.
6. Click Close to close the Available Snap-ins dialog box, and then click OK on the Add/Remove Snap-in dialog box.
To start the Details Template Editor in Exchange 2007 SP1 or SP2, or in Exchange 2010, either use the method above, or:

1. Launch the Exchange Management Console.
2. Select Toolbox in the console tree.
3. Double-click Details Template Editor in the results pane.
Note: In order to use the Details Template Editor, you need to be delegated the Exchange Organization Administrator role.
In my case I needed to edit the English User template. To open a template for editing, double-click it, and the template editor window is shown.
Figure: Editing the English User template.

The template editor is divided into three panes: the toolbox pane on the left, the designer pane in the centre, and the properties pane on the right. New fields can be added to the template by selecting the appropriate element type in the toolbox pane, placing the element in the designer pane, and linking it to an object attribute in the properties pane. The properties pane can also be used for fine-grained control over the size and position of elements, as well as tab order and text size.
Note: The template editor does not include any undo functionality; you cannot undo changes made in the editor, but a template can be reverted back to its default settings. Templates are restored by right-clicking the template to be restored and selecting "Restore."
The object attributes which can be displayed on a template are limited to those provided by Microsoft. While it is technically possible to extend the set of attributes by modifying the Active Directory schema, doing so is not supported by Microsoft. If additional attributes which are not in the standard set are required, the supported method of displaying the data is to use one of the fifteen Exchange extension attributes. In my case I wanted to use the Active Directory attribute IpPhone, which had already been populated with the IP telephone numbers for our staff. As this attribute is not included in the standard set, I had to copy the data to another attribute which could be used. To do this I copied the data from the IpPhone attribute on each user record in AD to the Exchange extension attribute extensionAttribute1. The easiest way to do this in the Exchange Management Shell is with a short script.
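The script itself is not reproduced in the text, but a minimal sketch of what such a script might look like follows. It assumes that ipPhone can be read through ADSI and that extensionAttribute1 is exposed in the Exchange Management Shell as CustomAttribute1; test it on a handful of accounts before running it against everyone.

```powershell
# Hedged sketch: copy each mailbox user's AD ipPhone value into
# extensionAttribute1 (CustomAttribute1 in the Management Shell).
Get-Mailbox -ResultSize Unlimited | ForEach-Object {
    $adUser  = [ADSI]("LDAP://" + $_.DistinguishedName)
    $ipPhone = $adUser.ipPhone.Value
    if ($ipPhone) {
        Set-Mailbox -Identity $_.Identity -CustomAttribute1 $ipPhone
    }
}
```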
Once the data is in an attribute which can be exposed via the details templates, modifying the templates is relatively easy. An existing element can be re-labelled and linked to the new data, or a new element can be added. When modifying or adding elements, you must match the element type to the AD field type: single-valued elements (checkbox or edit box) must be used for single-valued AD attributes (such as primary telephone number fields), and multi-valued elements (listbox, multivalued dropdown, and multivalued listbox) should be used for multi-valued AD attributes (such as "other" telephone number fields). Mismatching single- and multi-valued elements and AD attributes will result in data not displaying. I decided to replace "Assistant" on the "General" tab of the user details with the new IP phone number. To complete my changes I opened the English user details template, changed the label text from "Assistant" to "IP phone," and changed the AttributeName property of the element from "ms-Exch-Assistant-Name" to "ms-Exch-Extension-Attribute-1."
Figure: Editing the elements.

After saving the change to the template, the changes are immediately available to Outlook clients which are not in cached mode. Cached-mode clients will need to wait for the offline address book to be updated and for a full download of the offline address book to occur (which, by default, Outlook 2007 only attempts once in a 13-hour period). The server-side update of the offline address book can be forced by running Get-OfflineAddressBook | Update-OfflineAddressBook and Get-ClientAccessServer | Update-FileDistributionService -Type OAB in the Exchange Management Shell. Once the server-side update has run and the client has downloaded the new templates, the changes will be visible in the address book.
Figure: The new details template in Outlook.

Editing the details templates is relatively easy but, like all things Exchange, careful planning will make implementing the changes much easier. When planning for template changes it's important to know what type of data you intend to add to the templates, and to understand that, while not all data in AD can be exposed directly to the Outlook Address Book, there are workarounds available. More information on customising the Outlook Address Book by editing the details templates is available on Microsoft TechNet: http://technet.microsoft.com/en-us/library/bb124525.aspx
General Articles
A SysAdmin's Guide to Change Management
06 January 2009 by Matt Simmons
In the first in a series of monthly articles, "Confessions of a Sys Admin," Matt describes the issues involved in Change Management, and gives a simple guide.
As I sit down in New York's Grand Central station to write this, my first "Confessions of a Sysadmin" editorial for Simple-Talk, with people coming and going all around me, it feels only natural to tackle the topic of change, and how to manage it. Each of us has to manage change to some degree, in our working and personal lives, but I'd be willing to bet that whoever said that "change is the only thing that ever stays the same" probably worked in IT.

As a systems administrator, whether you welcome change or shun it, change will happen. It is inevitable, constant, and unyielding. Failing to learn how to use change to your advantage would be foolish and dangerous; your competitors are surely going to. However, in order to benefit from change you need to know how to manage it effectively, and I'd like to review some of the key points and skills that will help you accomplish this goal.

Usually, the spur for change is the recognition that a problem exists in a process, followed by a "grand vision" for a better way to do it. Sounds easy, right? It isn't. I'll show you precisely how difficult it can be as we go through the steps of a successful change.

Early last year, the president of our company came to me with the news that our corporate headquarters was relocating from downtown Manhattan to central New Jersey. We'd also be opening an office in mid-town to keep a presence in New York City, and I was responsible for ensuring that there would be no interruption in IT services before, during, and after the move. Let's see how I handled it and where I could have improved.
Development
Once the specifics of your design start to emerge, you'll need to begin development in order to keep your schedule. It's easy to become myopic during the development of the solution, particularly when working on a small segment of a much larger design. Through regular meetings, keep the teams aware of each other's status, and keep your individual groups moving towards the original vision and design.

On occasion you will, either through short-sightedness or lack of accurate information, discover that the design which was carefully developed is incompatible with the real world. You may need to fall back into design on a specific task or, if you are particularly unlucky, it may involve several smaller units. Bring together the involved teams and the original design team, and work together on a solution. When solutions are updated, remember to update the requirements and user documentation which was produced beforehand.

As your teams approach completion of their specific tasks, care must be taken to ensure that every item is met satisfactorily, according to the agreed specifications. Only in this way can you be sure that the solution will effectively solve the stated problems and create a positive change. As the team leader, sign off when each team completes the requirements placed on it, and issue reminders if one or more tasks are being neglected.

It should go without saying, but every major change needs to be built in an environment that is segregated from the "real world," otherwise the usability of whatever you are improving will be at the mercy of your design and development processes. Build a complete solution and use it to replace the status quo; don't destroy the status quo while you build the solution, or you will have a vacuum in the interim and a lot of miserable people.
When you have a solution that satisfies your requirements, you need to create a schedule for replacing the existing system that will not create undue outages or break functionality. My advice is to work with your team to practice putting the change in place, then use that practice to calculate the required time, and multiply that by one third, as discussed previously. For my migration, IT resources were required to be in place prior to the physical move. This required thorough testing of network ports, telephone access, and the like. As the spaces were independent of our inhabited office, there was sufficient time to test these resources. In a more time-sensitive operation, more manpower would have been needed.
Project Evaluation
After a suitable length of time, it is advisable to evaluate the performance of the team and its leaders. Future performance can be improved by examining the mistakes that were made, correcting them and developing better strategy for next time. Review each team member's performance, and offer praise for appropriate conduct, or constructive criticism if it could have been improved. Be fair and honest, and the team members will respect you for it, even if they aren't happy with everything that you have to say. We've been in the new offices for several months now, and things are going very well. Management follow-up on the move has been positive, and my users seem content in their new locations. Initial problems could have been resolved more quickly or eliminated altogether had I relied on statistics for bandwidth requirements rather than guessing, but overall the transition was smooth. I received kudos from our president for making the migration happen as well as it did, and I've now got several more upcoming changes to manage.
Summary
How can anything so simple be so complex? Change is more than just moving from one state to another. It is an ideology, a force of nature, and a tool for improvement. Change is the ability to alter the world around you. You are not powerless in your life, or in your work, despite what other people may tell you. Put change to work for you and reap the benefits. Form your goals, develop your plan, and implement the changes.
by Matt Simmons
What level of trust do you afford users by default? What level of support do you offer users who have low technical skills? Are you creating a system which is more difficult to use with little or no payback? Are you adopting administrative policies as a punitive measure? Do users believe that you are deriding them for their ignorance? Matt provides some timely advice.
If management is the art of utilizing resources to accomplish goals, then administration might be thought of as the maintenance of those resources to ensure that they can be properly utilized. Administration of users is, for many, the most challenging aspect of their positions. In addition to the technical and logistical challenges presented by all resources, interpersonal relationships add a complexity that cannot be abstracted away in a script. To help administrators cope with these issues, this column will explore various administrative tactics and how their effectiveness varies according to the traits displayed by co-workers, and we'll learn from examples of positive and negative administrative decisions. There are many pressures placed on the IT administrators of the world. Corporate regulations, compliance requirements, and security best practices compete for our time and resources. After meeting legal demands, there are sometimes few tactics available with which to approach administration. Many times, the tone and phrases used make a greater difference to the perceived attitude than the actual policies. For our purposes, we can consider the laws and business requirements immutable. What we have left is really the core of administrative policy. What level of trust do you afford users by default? What level of support do you offer users who have low technical skills? Are the decisions you are making creating a system which is more difficult to use with little or no payback? These questions should be in your mind as you design system policies.
User Trust
In IT security, it is widely deemed best practice to deny access by default. It has also become apparent that the most potent threat to an IT infrastructure comes in the form of insider access. This leads to the conclusion that users should have no unnecessary trust, be universally logged, and be treated with general suspicion. From the strictest security perspective, this makes sense. In practice, however, mistrust breeds mistrust, and you may soon find that the users whose lives are made difficult will find ways to return the favor to the administrators.

I propose that there may be a happy middle ground. There are various occupations with extremely high turnover where extreme caution is called for. The same goes for certain financial, medical, and governmental institutions, but the majority of users and administrators find themselves under somewhat less pressure. In these cases, I believe that it is in the best interest of administrators to give the users leeway. Many long-term, technically oriented users have displayed responsibility in their decision making. Affording them administrative access to the machine that they use day in and day out shows an amount of reciprocity in appreciation. There is the chance that the user could install non-approved software, or through inattention allow a virus onto the machine. Proper endpoint protection software can all but alleviate the latter and, to be honest, the former probably isn't the end of the world. Obviously the situation is different if the software in question is illegal or opens an attack vector but, barring these complications, perhaps you should think twice before punishing the user for it.
User Support
It may help to examine how the typical administrator is viewed by the typical user. Obviously, all users are different, as are all administrators but, as a generalization, the relationship might be thought of as similar to that of a car owner and an automobile mechanic. The owner shows faith that the mechanic will properly maintain or repair the vehicle, and the mechanic makes their best effort to do so. Of course, there are mechanics who will abuse this relationship, just as there are IT administrators who will. No one wants to work with someone who will deride them for ignorance, and belittling someone is not an administrative tactic, no matter how boneheaded some people's actions might seem. On the other hand, coddling users and excessive handholding will ensure that a user who doesn't know how to do something will never have to learn. Where is the line drawn?

As each user is different, the approach taken must vary as well. Some users show eagerness to acquire new skills. Repeatedly performing a task for these users will do nothing but cause them frustration. Instead, demonstrate the task, and ask if there are any questions. The next time it is to be performed, be on hand to offer support if needed, but otherwise remain in the background. Offer encouragement and praise, or constructive criticism as necessary. This user's goal is to learn how to accomplish the task, and they will appreciate any efforts you make in helping them succeed. These users are valuable assets who generally have no problems expanding beyond their current tasks and taking on more responsibility.

Many people are not interested in employing skills which they deem unnecessary or time consuming. These people are very goal oriented, and want the path streamlined as much as possible. Attempting to teach the intricacies of a task to these users is seen by them as a waste of time. Success might be found in developing a solution which requires less interaction or is less time intensive.
These types of solutions are generally an improvement for the business at large, as processes such as these consume everyone's time. Because of this, the goal-oriented users become barometers for inefficiencies and, even though their reporting is mostly negative, they instigate progress for everyone else.

There are also users who learn and operate by rote. These users are typically very quick and efficient workers, as long as their environment is consistent. Large operating changes can unsettle users such as these, so a forewarning that a change will be implemented, along with a phone call or visit when the change takes place, will ease them into their comfort zone. While the amount of personal attention required might seem overwhelming at times, remember that this is time invested. These users will not only pay you back with efficient operation, but they will act as red flags for system changes that even the keenest monitoring systems may fail to recognize.

This column would be incomplete without mentioning the infamous "bad user." Though lumped into one category, the reasons a user earns this label vary a great deal. Either through actions or attitude, this user has shown that they do not want to contribute positively to the task at hand. Assuming that you do not have a supervisory role over this user, the best advice might be to make the best of a bad situation and work around them. I would advise against granting these users any more access than they need, regardless of technical prowess. Providing disgruntled users security clearance is asking for abuse. If the situation gets bad enough, it may warrant speaking to your supervisor about it. Under no condition should you speak with their supervisor, unless you both report to the same manager. The corporate hierarchy exists for a reason, and crossing boundaries will not endear you to the difficult coworker, their manager, or your manager.
Administrative Policies
As a general rule, the policies that you write and put in place should exist to best enable the business of the company to proceed, while meeting the various legal and professional regulations. Never adopt administrative policies as a punitive measure, and do not allow your emotions to cloud your judgment. Policies provide guidance and protection, not punishment. A potentially positive opportunity to educate users can easily become negative if handled incorrectly. A user or group singled out in a policy can quickly become bad users, which lowers the efficiency of the company, which in turn makes it more difficult to conduct business. Focus on preventing the undesired actions in the future, and ignore the past, when writing policy.

I think we're all guilty of retribution at some point in our lives, even if we're not proud of it. I'd be lying if I said that I didn't implement firewall rules at an ISP in direct response to user activity. The user didn't break the terms of service (TOS) in literal word, but in spirit, and I reacted to that by changing the firewall rules specifically to prevent them from doing what they were doing. What I should have done was discuss the situation with management, then edit the TOS and the firewall to reflect it. Fortunately, in my job roles since then, I have reacted more maturely to issues such as these.
by Bill Holmberg
Bill Holmberg takes a practical look at Change Management, and list the steps you need to take. To make things simpler, he also provides a sample Word template you can use to follow through a complete change control cycle on a given project. Leaving nothing to doubt, he finishes with an illustration; a fictional exercise so you can see his guidelines in action.
Change will inevitably happen. However, in order to benefit from change we need to know how to manage it effectively, so perhaps we need to review some of the key points that will help us accomplish our goals.
In some companies it is not practical to have the entire set of personnel above meet on a regular schedule, so the group may be whittled down to a core group, with an ancillary group or individual alternates available for consultation or attendance when warranted. Again, at the least these people should be designated so that they are available for comment after the meeting where the change is first presented. In some companies, certain individuals will fulfil one or more of these roles.
The Framework
The following are generally considered to be the proper steps or framework within which to study and implement changes.
Project Evaluation
- Who will implement it, and who is affected?
- What do we need?
- What is it worth to us?
- What does it cost?
- Why do we need it?
- When must it be in place?
- Where will it occur?
- How will it be done?
Project re-evaluation
It is also a great idea to do a post mortem, or follow up on the change control items after they are implemented, whether successfully or not. Here are some points to go over afterwards:

- What variances from the expected outcomes were encountered?
- What did we do right?
- What could be done better for future projects?
- Were our initial assumptions correct?
by Matt Simmons
The key to a long career in IT is in learning how to cope adaptively with stress. Matt Simmons, like many, didn't realise it applies to everyone until he was rushed to hospital with what looked to be a heart attack. It was a salutary experience. He then discovered that there are many effective ways of unwinding, and it pays to be aware of them before you get the warning shot.
Stress management
It's because my stress management up to that point was nonexistent. My coping method was to shove stress out of the way; move past it, compress it into a little bottle, and ignore it. That night, it refused to be ignored, and it fought back. I should consider myself fortunate. The stress attack that night, and the tests that followed it, served as a warning. My mechanisms for dealing with stress were badly out of kilter and, unchecked, things could get worse. Much worse.

To investigate how bad it can get, let's examine how stress works. Humans, like all animals, have a fight-or-flight reflex, keenly evolved over hundreds of thousands of years. Originally designed to protect us from physical threats, this instinct is now triggered by psychological threats as well. Anyone who has accidentally erased an entire directory's worth of important files can attest to this. In addition to pure fight-or-flight stress, day-to-day situations apply mental strain and tax our body's resources. Work is a large source of stress, since many times we perceive our professional (and, by extension, financial) status to be at risk. In addition, our personal lives and relationships are frequently a major cause of stress.

Stress is not, in and of itself, a bad thing. We're designed to be able to handle a certain amount of it; in fact, we require stress to operate at peak performance. Many highly dynamic people will tell you that stress provides them the fuel they need to accomplish their goals. Not everyone functions like that, though.

Stress affects the body in three phases. The general phase, sometimes called the "alarm phase," is the initial response to the stress: fight or flight. Some muscles tense, others loosen, and adrenaline enters the blood stream. In short-term stress, this phase passes quickly and our bodies return to normal. In longer-term stress, our body invokes coping mechanisms, and we start to adapt to the stress.
This resistance phase taxes our body's physical and mental reserves, and the longer it goes on, the more stretched and weary we feel. The final phase comes when our body is exhausted and gives up fighting the stress. We're no longer able to resist whatever is causing it, and we stop being able to deal with it. This final stage is where we experience burnout: far past our peak performance and well into misery. Personally, when I get to this phase, I get irritable and hard to deal with. I snap at otherwise normal requests, I'm sullen, and generally not nice to be around. You probably know how you feel when you get to this phase, and I'm sure you don't like it any more than I do when it happens to me.
Extended periods of stress can do a lot more than strain personal relationships. Stress increases blood pressure, and over time high blood pressure damages the walls of blood vessels. When the walls heal, scar tissue is left behind, and these sites can cause blockages similar to cholesterol build-up. This is a very bad thing, because it can eventually cause a real heart attack.

Fortunately, even the hard chargers and workaholics among us can lower their stress levels and live healthier (and longer) lives. James Manktelow, author of "Manage Stress," advocates starting and maintaining a stress diary. A spreadsheet is ideal for this: in the first column, rank your stress from one, the least stressful, to ten, the most stressful. Record what the stressor was in the next column and, beside it, enter how the stress made you feel. Make a column for the number of times you experience that particular stress, and increase the number each time it recurs. In the far right column, record how you deal with that stress.

Sort your stress diary by the number of times you have experienced each situation, and make a note of the top entry. This line represents your most frequent source of stress, and gives you a target to focus on when removing stress from your life. After recording your most frequent stressor, sort your diary by how stressful you rated each situation. The top entry is now the stressor that causes you the greatest trauma, and eliminating its source will provide you the single greatest relief.

You should make it a priority to eliminate these two top sources of stress from your life. They cause the majority of the damage to your health, and by eliminating them you will improve your overall stress level and well-being. Take action and use one of these three methods of dealing with them:
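If you keep your stress diary in machine-readable form rather than a spreadsheet, the two sorting steps above are trivial to automate. The sketch below is illustrative only; the entries and field names are invented for the example.

```python
# A minimal sketch of the stress diary described above, kept as a list of
# records instead of a spreadsheet. All entries here are made up.
from operator import itemgetter

diary = [
    {"severity": 8, "source": "server outage", "feeling": "panic", "count": 3, "coping": "coffee"},
    {"severity": 4, "source": "status meetings", "feeling": "boredom", "count": 12, "coping": "doodling"},
    {"severity": 9, "source": "on-call pages at 3am", "feeling": "dread", "count": 5, "coping": "none"},
]

# Most frequent stressor: the entry with the highest occurrence count.
most_frequent = max(diary, key=itemgetter("count"))

# Most severe stressor: the entry with the highest one-to-ten rating.
most_severe = max(diary, key=itemgetter("severity"))

print(most_frequent["source"])  # status meetings
print(most_severe["source"])    # on-call pages at 3am
```

These two entries are the ones the article suggests you target first.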
Action oriented
Confront the root source head on. If your stress is work related, deal with it directly: discuss the problem with your supervisor, explain how you feel, and be honest about it. Ultimately, if your employer is unable or unwilling to improve your conditions, leave. Your health is too precious to squander on a job that, in all likelihood, you don't like anyway.

Emotion oriented
Attempt to change how you feel about the source of the stress. Personal relationships cause a lot of stress, but many problems can be dealt with effectively by trying to change how you feel about the problem, rather than changing the problem itself. Examine it from a different perspective, try to understand the underlying causes, and tolerate their effects. Attempt to be flexible in your outlook.

Acceptance oriented
When all else fails, and you can't change the problem or how you feel, accept the source of the stress. Focus on surviving by building buffers between yourself and the source of the stress. Cope any way you can until you can improve the situation.
Stress is powerful, for good or ill, and effectively managing it is key to maintaining a healthy life. If you are having problems with stress in your own life, please contact a local stress counsellor or your company's human resources department. Help is available for those who need it. You are able to defend yourself against the stresses of life; you only have to take the first step.
by Matt Simmons
Hiring someone for a technical post is a task that should never be tackled half-heartedly. Matt Simmons provides some general advice on the subject, illustrated by a recent experience of hiring an IT administrator to help share the load. It opened his eyes to the real state of the economy.
Nowadays, when unemployment is high and we're in a worldwide recession, with millions of people out of work and things likely to get bleaker before they improve, it may seem the wrong time to talk about hiring staff. Although many sectors are being hit hard, there are markets which have remained relatively unscathed, and some are even growing. I believe that hiring should take place over three discrete, independent steps: Planning, Execution, and Evaluation.
Planning
Tom Limoncelli, author of "The Practice of System and Network Administration," suggests that there are two types of job openings: one is for the position, the other for the person. If you need someone to do a specific task, then you would hire someone who has the required skills. Other skills and experience possessed by that person are less relevant, because you need someone to perform the specified tasks, and it is essential to success that they are done well. On the other hand, you may need someone to run a department, or to perform an array of less well defined tasks. In that case, you would do better to hire a person who is capable of the flexibility required by such a role. Finding a qualified candidate is going to be less straightforward in that case.
Execution
Work with your hiring team to develop a job description that will be distributed to the various job boards and websites. In my case, the description simply consisted of the required skills and the responsibilities that the successful candidate would assume when hired. Many organizations use far more complex documents; your Human Resources representative will be the best person to contact for more information.

Once you've got a completed description for the position, you can submit it to the job sites, but you need to make sure that you use the appropriate ones. The technical skill level of the ideal candidate should be taken into account. While an entry level candidate could be found easily enough on Monster.com or even Craigslist, a more precise search would improve the signal-to-noise ratio for highly specialized technical appointments. In the past, I've recommended the job boards of various professional organizations (IEEE, USENIX/SAGE, LOPSA) as well as high profile commercial sites. The serverfault.com job boards in particular attract very intelligent people, for the most part.

Once you commit your posting, applicants will begin to pour in immediately. Even in the best of times, it's possible to get deluged with applications, but with a large segment of the workforce displaced, the onslaught can be especially overwhelming. I would recommend that you stick to your guns and use the requirements you put together in the previous step (as well as any personnel you have at your disposal) to filter through the applications. How many are you likely to receive? That depends on your local area and the position itself. After the filtering has been completed, you presumably have a pool of candidates to select from, and chances are very good that you've got more candidates than you could reasonably interview in a short span.
I have several interviewing guidelines, but no real instructions on how to interview a candidate, because I feel it's a very individual sort of endeavor. Your company's culture and your team will dictate the particulars of the interviewing process, just as mine did, but if I can pass along a few general tips, the first and most important would be to listen more than you talk. Ask open-ended questions and see where the candidate goes with them; don't interrupt unless they're floundering. A great bit of advice I got from "Hiring Smart," by Dr. Pierre Mornell, was that a few minutes before the end of the interview, you should give the candidate a warning that their time is almost up. This gives them a chance to air anything that has been nagging at them, and can sometimes be very enlightening. Dr. Mornell related a tale of one applicant who waited until the five-minute warning to let the interviewer
An Experience in Hiring
In part because of the state of the financial services sector, and in part because of the organic growth of the company, I recently found myself in the position of needing to hire another IT administrator to help share the load. The hiring process opened my eyes to the real state of the economy. I explicitly stated that this was an entry-level position, and that I was looking for candidates who didn't necessarily have administration experience. In spite of that, I had someone apply who had spent 20 years at their previous company, and many people who had spent over five years as senior administrators at their last company. That's the sort of thing that happens in this economy.

In my case, I knew that I was looking for a certain type of person. My budget dictated that whomever I hired had to be inexperienced. This made the selection process more difficult, as "systems administrator" would be a new role to most of my candidates. I received several hundred applications in the span of only a week or two; they started arriving only five minutes after the posting went live and didn't stop until a week after the posting expired. I selected individuals with experience supporting technology in some measure, and I sought out people who had experience using platforms similar to those my critical systems are built on. Most importantly, I selected those people who demonstrated a thirst for knowledge and for trying new things. It was from a pool such as this that I selected the junior systems administrator.

I emailed each of the potential candidates and scheduled a telephone interview with them. It wasn't a long, detailed interview; ten minutes was sufficient to tell me whether I wanted to bring a person in for an in-person interview. I was thankful that I took the time to conduct the telephone interviews.
There were a few cases where I talked to people who had impressive resumes, but who upon questioning admitted things such as, "well, no, actually I didn't implement that VPN connection, but I was there when someone from the vendor came and did it." These are things it's good to know before you waste several hours interviewing someone in person.

My interviews took, on average, two and a half hours. This seems like a long time, but it took about half an hour before the candidates really relaxed and their personalities showed. At the same time, I wanted to extract a large sampling of their knowledge and, more importantly, introduce subjects new to them, to observe their learning process. For my position, existing knowledge wasn't as important as the ability to acquire more of it.

I was upfront in the job posting about the salary range that we were prepared to pay. When I performed the phone interviews, I made sure to discuss pay with the candidates, and to get their thoughts on where in the posted range they fell. I was really surprised to find that most of the candidates I talked to evaluated themselves very honestly, and for the most part they suggested pay rates which were close to my mental target. During the interview, I openly discussed the question of pay with the candidates, because I wanted the major negotiation to happen in person. Your organization may have different customs and policies, so again, make sure to discuss your plans with Human Resources beforehand.
Evaluation
The hiring process does not end when someone is hired; it doesn't really end until the person's first annual review. All of the processes that we take part in can stand to be reviewed, and hiring is no exception. Although this length of time hasn't passed for me yet with the IT Administrator, I intend to review the hiring process with my interviewing team after the new admin has been with us for a quarter. I want the emphasis to be on the process: whether we succeeded in our goals, and what we could have done better. These sorts of occasional self-evaluations can lead to increased performance the next time I, or one of my teammates, is in a position to hire another employee. Time will tell, but from what I've seen, everything is going to be great.
How certain are you that you will keep your job this year? Three or four years ago, if you lost your job and had a technical degree, it was only a matter of weeks until you secured another position, probably with even better pay. I remember that in my undergraduate engineering class, fellow mechanical engineering students with GPAs of less than 2.0 (out of 4.0) and no internship experience were getting jobs in the industry right out of college. In contrast, this year when I gave a talk at the University of Delaware, students with graduate degrees and multiple publications were telling me that they were unable to get jobs and were going back to school to get their second Ph.D.

The world has changed in the past several years. Whether you are looking for a job in the USA or the UK, your options have become much more limited. These days, your technical degree, whether in computer science or engineering, no longer guarantees your employment. What can you do if you were swept up, along with many others, by the world-wide recession? What should be your plan of action if your one-month layoff stretches to six months or a year? How can you make yourself more qualified and more appealing to future employers while you're not technically working? And if you are one of the lucky ones to have kept your job, how can you increase your chances of keeping it? For those of you who need answers to these questions, here are some steps that you can take now to increase your professional value, whether or not you are currently employed.
Why do you want to get involved? It's simple: the more people know you as a professional in your field, the more doors open up to you in terms of new jobs and professional recommendations. In addition, when you give away your expert knowledge in the form of articles, blog posts, and committee participation, you are building your credibility as an expert. If you keep at it, companies might actually hear of you through your professional community work before you even apply for a job!
Give Talks
As professionals in a technical industry, we are often taught that explaining things in a clear and understandable manner should not be high on our priority list. In fact, in many cases, we are taught that the more confusing we sound, the more we will be perceived as intelligent and important individuals. When was the last time you attended a talk at a technical conference where at least a few of the attendees were not fidgeting with their laptops or just plain asleep?
The ability to present complex technical ideas in a clear and understandable fashion is an extremely valued skill in the corporate world. It is also a skill that is immediately apparent in an interview. Learning to be a good communicator and speaker will put you miles ahead of other IT professionals searching for a job. A good way to start developing those communication skills is to join Toastmasters International: www.toastmastersinternational.com. This is a wonderful organization that can help develop your public speaking, communication and leadership skills. For a nominal yearly fee, you can attend weekly meetings, get feedback on your speaking ability from other members and, most importantly, get plenty of practice speaking in front of a group of people.
Write
Finally, I recommend that you develop your written communication skills in the same way that you develop your public speaking abilities. To help you get started on improving your writing skills, here is another very simple idea: create a blog and contribute to it every week. It is very easy to set up, and it's free. Visit the Google blog service, www.blogspot.com, to get started.
Here is another idea. You are reading www.simple-talk.com, an on-line journal for SQL Server and .NET developers and administrators. You might notice that there are a number of other authors in this journal whose articles you enjoy reading. Why not become one of them?
A more advanced step, for those of you who have built an established blog and feel comfortable with writing, is to find on-line journals or other blogs in your area of work and ask if the editors might be interested in some of your article ideas. They very well might be, in which case I look forward to reading your piece in Simple-Talk in the future!

You are an expert in the technical field. You have excellent problem-solving abilities. So do most of the other software scientists, system administrators and IT professionals looking for a job. Now is the time to start developing the other aspects of your professional skills if you want to stand out from the crowd of job seekers. Develop your professional relationships, evolve your communication skills and become more involved in the technical community. Do this, and this recession might just turn out to be the best thing that ever happened to you.

For more information on Dr. Petrova's Steps to Increase Your Professional Value, please visit: www.SuccessfulUnemploymentToolkit.com. Images 1 and 2 courtesy of Pat Wright, taken at the 2009 PASS Summit.
A few years ago, I was sitting in my cubicle staring intently at my computer screen. I was working for an engineering software company, and had been trying to figure out why a subroutine in the software that my company was developing kept crashing. I mostly worked on numerical engineering research, but every once in a while my projects would involve writing and debugging code. I enjoyed those times because I could sit uninterrupted in my cubicle for hours at a time, living in my own little microcosm of numerical algorithms. But that day, my peaceful bubble was about to burst.

The Sales Guy infiltrated the personal space of my cubicle. "Dr. Petrova!! Heeeeeey!! How are things?!!" he roared with such enthusiasm, you'd think he had just downed a whole bottle of Prozac. Every few days, the company Sales Guy would make his rounds through the engineering department, interrupting work, shooting the breeze, and in general spreading the blessings of his persona throughout the R&D department. We would all politely smile, answer his questions, and try to make small talk, while secretly wondering how the heck the Sales Guy managed to get paid for doing absolutely nothing. As it turned out, there was quite a bit to learn from the Sales Guy.

Fast-forward three years. I had joined the sales and marketing team as a support engineer, because I had discovered an extrovert side in me that enjoyed working with people. The problem was that all of those years in front of the computer screen gave me little time to practice my people skills. Those skills happen to be crucial if you need to work with, well, people.
By that time I had figured out that sales people have a different way of talking to you, so I was not instantly offended. "Not that often," I said reluctantly. "Why?"

"Well," said the Sales Guy, still beaming, "do you know what your face looks like when you talk to people?"

He stumped me with that one. That was a weird question, even for a Sales Guy. "It looks like I am in deep thought...?" I said uncertainly.

"It looks scary," said the Sales Guy, smiling as if he were watching baby bunnies frolic in a meadow. "If you want to have any hope of relating to people, get a mirror and put it on your desk. When you're talking to anyone who comes into your office, glance into the mirror and see what the other person is seeing." With that he was gone, and I was left wondering why in the world I had left my cozy engineering R&D position for this.

In the spirit of learning, I decided to give the mirror method a shot, and was amazed at what my face was telling people. The first time I glanced into the mirror while talking to a coworker, I saw the face of an axe murderer staring back at me. Up to that point, I was sure that no one cared what my facial expressions looked like, but once I started using the mirror and adjusting my expressions, the results were astonishing. People were more willing to talk to me, give me needed information, and in general seemed to like me better. Being aware of what your body language and facial expressions are saying is an important first step in relating to people.
If you are an introvert looking to grow as a professional, one of the first things you should be working on is training your internal extrovert to come out at your command. Asking various people at your place of work to have lunch with you is a great way to start. It might be uncomfortable, and probably even scary, at first. I recommend starting with some fellow programmers, IT people, or engineers at your company; chances are they will be even more introverted than you, which makes for a comfortable and quiet lunch hour. Then move on to the sales and marketing guys. Treat this as an exercise: you don't have to try to be their best friend, but you are looking to learn how best to deal with different types of people.
Lesson #3: It's not your Thesis Defense, so stop proving how smart you are
During the second week of my sales career, the Sales Guy told me that I would be sitting in on a call with a potential client. I was terrified and thrilled at the same time. Finally, a chance to showcase my brilliance!
"Can you tell me what the role of this engineer in his department is?"

"Well, I didn't ask."

"Can you at least tell me if they can afford our software?"

"How am I supposed to know that?" I said, irritated. "He didn't tell me!"

"Interesting," said the Sales Guy. "You just spent an hour of company time on this call and did not gather a single piece of information that would actually help us make a sale. In addition, the engineer doesn't think that you care about him or the problems he is trying to solve."

"Why?" I was taken aback. "I gave him a lot of information about our software!"

"Yes. You took up his time to demonstrate how smart you were, instead of asking him questions that would show him that you are interested in helping him solve his problem."

The engineer never called back. Trying to prove how smart you are during conversations makes people not want to talk to you. People like to feel that they are being heard. Ask a question, and then be quiet and just listen. Figure out a way to be interested in what others are telling you. You might have to fake it in the beginning, especially if you are not that interested in a particular conversation topic. But beware: faking it all the time does not work; others will be able to tell, and will be offended.

If you would like to better master the art of dealing with people, take some cues from my Sales Guy. Be aware of what your face and body language are saying to people, practice bringing out your inner extrovert, and stop focusing on your own brilliance. With those skills you will be well on your way to a more prosperous career.
Virtualization
Virtual Exchange Servers
20 November 2008
by Jaap Wesselius
Microsoft now supports running Exchange Server 2007 in server virtualization environments, not just on Hyper-V, but on any virtualization solution that is validated in the Server Virtualization Validation Program. Before virtualizing Exchange Server, you will need to be clear about the business advantages, and consider the size of the installation; otherwise the results can be a disappointment.
Virtualization is a hot topic these days. Microsoft released Windows Server 2008 Hyper-V just before the summer, and Hyper-V is Microsoft's answer to the supremacy of VMware ESX Server. Customers have been running applications like Microsoft Exchange Server on VMware for years now, despite the formal statement from Microsoft that this wasn't supported. Okay, there was "commercially reasonable effort" support, but that was basically it.

This has changed since Hyper-V was released. All server virtualization products that are validated in the SVVP (Server Virtualization Validation Program) are now fully supported by Microsoft. Needless to say, all major virtualization vendors have signed up to this program, so all recommendations regarding virtualization are valid not only for Hyper-V but for all vendors in the SVVP program. Microsoft publishes a list of all Microsoft applications that are supported in a Virtual Machine in Knowledge Base article 957006. There are some remarks in this document, and one of them concerns running Exchange Server in a virtualized environment. And that's exactly what I want to talk about.
Note The official Support Policies and Recommendations for running a virtualized Exchange server can be found here:
To integrate a Virtual Machine natively with Hyper-V, special drivers need to be installed in the Virtual Machine. These drivers let the Virtual Machine use the new VMBus structure, as can be seen in Figure 1. They are called Synthetic Drivers, and are part of the "Integration Components" of Hyper-V. Other operating systems run very well in Hyper-V too, but they use hardware emulation instead of the new VMBus structure.
Figure 1. Windows Hypervisor structure with multiple VMs. Note the Emulation part in the third VM.

So, the original Windows Server 2008 image becomes a Virtual Machine as well, but this one should not be used for production purposes. For running Exchange Server in a virtual environment we need to install a new "native Virtual Machine" that's capable of using the new VMBus structure. Never install Exchange Server in the parent partition.
Figure 2. Exchange Server 2007 SP1 runs on its own disks.

Before installing Windows Server 2008 as the child partition, a Virtual Hard Disk must be created. This can be a fixed-size Virtual Hard Disk (.VHD file); remember that dynamic .VHD files are not supported for Exchange Server 2007 SP1. A better solution is to use a dedicated, or pass-through, disk. This is a dedicated disk that can only be used by the Virtual Machine. Tests have shown that a dedicated disk under Hyper-V has similar performance characteristics to a native disk in a non-virtualized Exchange Server.

Our server has two physical disks. Disk one is used for the Parent Partition, which is of course running Windows Server 2008. The second disk will be dedicated to our Windows Server 2008 Child Partition, where we install Exchange Server 2007 SP1. First bring the second physical disk offline; you can do this in the Disk Management option under Storage in Server Manager. Once it is offline, it is no longer available to other operating systems.
Figure 3. Bring the disk into offline mode in Server Manager.

Now create a new Virtual Machine. Open the Hyper-V Manager in the Administrative Tools menu. In the tasks pane click New and select "Virtual Machine." The New Virtual Machine wizard starts and, after the welcome screen, you have to enter all kinds of information regarding the new Virtual Machine:

Specify a name and location to store the Virtual Machine.
Assign it a certain amount of memory, for example 4 GB.
Bind it to a publicly available network.
At the "Connect Virtual Hard Disk" window, select the "Attach a virtual hard disk later" option. Specify the ISO that will be used to install Windows Server 2008 x64. After reviewing the summary screen, click Finish.
Do not start the Virtual Machine at this point. When the wizard has finished, open the properties of the Virtual Machine and go to IDE Controller 0. This is the controller the boot disk should be attached to. Check the "Physical Hard disk" option and select the physical disk we took offline earlier. Click OK to save the configuration. Unfortunately it is not possible to add a SCSI controller and attach the boot disk to it: the Hyper-V SCSI controller is part of the Integration Components, and thus runs over the VMBus structure. That makes it a very flexible solution, but one that is only available once the Virtual Machine is up and running.
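For readers on later versions of Windows Server, the same steps can be scripted. This is a sketch only: the Hyper-V PowerShell module used here shipped with Windows Server 2012 and later, not with the 2008 RTM tooling described in this article, and the VM name, switch name, and disk number are illustrative assumptions.

```powershell
# Take the second physical disk offline so it can be dedicated to the child
# partition (the Server Manager step above; disk number is illustrative)
Set-Disk -Number 1 -IsOffline $true

# Create the VM with 4 GB of memory and no VHD, bound to an existing switch
New-VM -Name "EX2007" -MemoryStartupBytes 4GB -SwitchName "External" -NoVHD

# Attach the offline physical disk to IDE controller 0 as the boot disk
Add-VMHardDiskDrive -VMName "EX2007" -ControllerType IDE -ControllerNumber 0 -DiskNumber 1
```

As in the wizard walkthrough, the physical disk must already be offline on the host before it can be attached as a pass-through disk.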
Figure 4. Add the physical drive to IDE Controller 0.

The new Virtual Machine is now ready to be installed; just power it on and install Windows Server 2008. After installing and configuring the server, it needs to be updated. The RTM version of Windows Server 2008 comes with a beta version of Hyper-V, and this needs to be updated as soon as possible. When updating the Virtual Machine, make sure that you install Microsoft hotfix kb950500; this hotfix brings the Virtual Machine to the RTM level of Hyper-V and thus includes the latest version of the Integration Components.

Installing the Integration Components in a new Virtual Machine can be challenging. Before the Integration Components are installed there's no network available, and without a network it's impossible to copy drivers and updates to the Virtual Machine. It is a good idea to create an ISO image with the most important updates, including the Hyper-V RTM hotfix, and store it on the host server. This way you can always mount the ISO image and install the latest Integration Components. After the Integration Components are installed and the server is brought up to date, we have a fully functional Windows Server 2008 server. Assuming that you already have an Active Directory available on your network, you can continue installing Exchange Server 2007 SP1.
Again, there's no difference between designing a bare metal Exchange server and a virtual Exchange server: always design your Exchange server with the best performance in mind! It is not yet possible to use a Fibre Channel solution with Virtual Machines natively. HBAs (Host Bus Adapters) are not yet available for use with the VMBus structure. HBA vendors are working on this, but it is unknown as yet (as of October 2008) when it will be available. It is possible, however, to use a Fibre Channel solution on the host system and expose LUNs on a SAN as disks that can be used via the pass-through mechanism.
Figure 5. Creating a Hyper-V VSS backup does interact with the Exchange server running in the Virtual Machine.
Snapshots
A snapshot is a point-in-time copy of your system, in this case the Virtual Machine. After creating a snapshot of our Virtual Machine it is always possible to return to the state of the Virtual Machine at the moment the snapshot was created. The following takes place during a snapshot:

A copy of the configuration file is created.
The Virtual Machine's memory is flushed to disk.
A differencing disk is created and the original disk is set to read-only.
The Virtual Machine resumes operation.
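The differencing-disk step can be illustrated conceptually. The sketch below is not Hyper-V code; it is a toy copy-on-write model, with block numbers and contents invented for the example, showing why reverting a snapshot is cheap: the base image is never modified, so discarding the overlay restores the snapshot state.

```python
# A conceptual model of a differencing disk: the original disk becomes
# read-only, all writes land in an overlay, and reads check the overlay first.
class DifferencingDisk:
    def __init__(self, base):
        self.base = dict(base)   # the original disk, now read-only
        self.overlay = {}        # the differencing disk: changed blocks only

    def write(self, block, data):
        self.overlay[block] = data          # never touch the base image

    def read(self, block):
        return self.overlay.get(block, self.base.get(block))

    def revert(self):
        self.overlay.clear()                # return to the snapshot state

disk = DifferencingDisk({0: "boot", 1: "data"})
disk.write(1, "changed")
print(disk.read(1))   # changed
disk.revert()
print(disk.read(1))   # data
```

This is also why snapshots are unsuitable for Exchange in production, as the next paragraph explains: reverting silently discards every write made since the snapshot, which plays havoc with transactional mailbox databases.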
While this is a great technology for returning to a known state at a certain point in time, it is not supported for Exchange Server 2007 SP1 in a production environment.
Conclusion
Microsoft now supports running Exchange Server 2007 in server virtualization environments. This is not only Hyper-V: all vendors that have their solution validated in the Server Virtualization Validation Program (SVVP) are fully supported. It is important to remember that all design guidelines that are valid for a bare metal Exchange environment need to be applied to a virtualized environment as well. Use disks that are capable of handling the expected IO load; it is very likely that dynamic disks will not meet these requirements, and the use of dynamic disks is therefore not officially supported by Microsoft when running Exchange Server 2007 SP1 under Hyper-V. One should always take the business requirements into account: why do you want to virtualize, what are the project goals, and at what cost? I have seen customers returning from the virtualization twilight zone very frustrated, and ending up back in a bare metal
426
environment. For larger environments I have seen a lot of large implementations where the Exchange Server 2007 Client Access Servers and Hub Transport Servers were running under VMware, but where the mailbox servers were running on bare metal. I have to admit though that these environment were larger than 4000 mailboxes. For these environments virtualizing the mailbox servers is questionable, but this might change in the future and this is something that is hard to foresee.
by Nathan Winters
With the increasing acceptance of virtualization as a means of providing server infrastructure, the technology is being applied to production Exchange servers. This is a solution that is not just limited to the small shop. Nathan Winters discusses the pros, cons and challenges that lie ahead in providing a flexible and highly available email system.
Introduction
The subject of virtualization has been a pretty hot topic for the last couple of years. It seems to offer a massive amount; for example, consolidating underused servers and thereby providing cost savings on hardware, rack space, electricity and cooling. On top of that, virtualising your server infrastructure brings other benefits, such as the ability to easily move a virtual machine from one piece of physical hardware to another, and to rapidly provision new servers where required. Although this sounds great, there has been somewhat of a problem with virtualising Exchange 2007, because Microsoft has not, until very recently, supported Exchange 2007 on any virtualization platform. To be fair, that has not stopped a lot of people from putting their Exchange servers on a virtual platform. In fact, it is something I have done myself for a handful of clients, utilising the VMware ESX platform. However, as mentioned, the support situation has now changed, and these changes form the topic of the next section.
Is it supported?
In this rapidly changing marketplace, there is no straightforward answer to the question of support. Essentially, as far as Microsoft is concerned, it all boils down to what they have tested. Exchange 2003 has long been supported on the Microsoft Virtual Server platform, although only if you had a Microsoft Premier Support Contract. I have seen almost nobody using Virtual Server as anything other than a test platform, and other products, like VMware Workstation, were rather better for that. So why is Exchange 2007 different? Because Exchange 2007 is the first Microsoft server application to require a 64-bit (x64) operating system (OS). At the release of Exchange 2007, Microsoft still only had Virtual Server as their virtualization offering, which does not support the 64-bit OS required to run Exchange 2007. Therefore, Exchange 2007 was not supported in a virtual environment, as Microsoft would have been reluctant to test on another company's virtualization platform!
The final issue listed here deserves some clarification, as it has already caused some confusion. Essentially, you can do one or the other of the following, where the preference would be the second option: (1) cluster the hypervisor roots, or (2) cluster the guests (using multiple roots, for example CCR where one node is a VM on one root and the other node is a VM on another root). You cannot, however, combine the two, which means that technology like VMware's VMotion and HA/DRS is not supported in conjunction with CCR or SCC. Perhaps this will change when Microsoft releases its Live Migration technology in R2 of Hyper-V, although, having said that, Microsoft currently has Quick Migration, which also isn't supported when using Exchange clusters. We will simply have to wait and see.

Although the use of hypervisor high-availability techniques in conjunction with Exchange clusters is not actually supported, using VMware VMotion to move the active node in a CCR cluster does actually work (on a test system at least), and doesn't seem to cause a failover either! You do get a bunch of Event ID 1122 and 1123 messages telling you about the lost network connectivity, but things appear to keep working. Of course, this may well not be true for a heavily loaded system: depending on the amount of time required to VMotion, a failover may be triggered. All in all, it isn't supported and, frankly, going to the trouble of setting up a high-availability system only to run it in an unsupported way seems rather perverse to me!

Alongside the support announcement discussed above, Microsoft also changed some licensing conditions, as this quote from the Exchange team blog describes: Microsoft is waiving its 90-day license reassignment policy to enable customers who virtualize Exchange to move their licenses between servers within a data farm as often as necessary.
So, all in all, this means that it is very much supported to run Exchange, under the conditions stated, on the Microsoft Hyper-V platform, and to have the flexibility to move virtual machines between physical hosts. What I guess a number of you are wondering is: what does this mean for VMware support? To answer that, we have to take a look at the Server Virtualization Validation Program, or SVVP. The website for SVVP can be found at the link below: Server Virtualization Validation Program
Summary
Microsoft Hyper-V, VMware and other virtualization platforms provide a great platform for Exchange, especially now that you have the comfort of knowing the solution is supported. Realistically, it is very likely that as virtualization becomes more and more accepted as the normal way of providing server infrastructure, production Exchange servers will be virtualized. In my opinion, there are clearly a few challenges to be faced when virtualizing Exchange, but as long as the challenges and limitations of the infrastructure are understood, the guidelines laid out by manufacturers are followed, and you thoroughly test the performance of the implementation before rolling it out in production, virtualizing Exchange can be very successful. What is very clear is that virtualizing Exchange is no longer just an option for the small shop, but is now a solution even for the largest Exchange deployments looking to provide a flexible and highly available platform for their email system.
by Desmond Lee
Desmond Lee explains the fundamentals of building a fully functional test lab for Windows servers and enterprise applications (e.g. Exchange and Office Communications Server) from scratch, with Hyper-V on a supported x64 machine. This will simulate a majority of test scenarios. He highlights the key points, and suggests tips and tricks to help make the journey smoother.
If you are involved in supporting test, training or development servers, then you will find that server- or host-based virtualization will solve many of your problems. Instead of having to stock up on several physical machines, each dedicated to certain roles and functionality, you will be able to consolidate them onto a number of powerful boxes with speedy multi-core processors, large amounts of RAM, and fast, high-capacity disk drives. This will simplify administration, lessen the use of floor space, and reduce both costs and energy consumption. In this article, I'll use Microsoft's host virtualization solution, Hyper-V, to build a representative test installation. Hyper-V is available as part of some editions of Windows Server 2008, or, more recently, as a free standalone product similar to VMware ESXi.
Table 1. Test lab Virtual Machines.

name: LABDC01
ram (mb): 384
ip/24: 10.0.0.11
software: Windows Server 2003 SP1 x86; WWW only; Certificate Services CA and Certificate Services Web Enrollment Support
configuration: FQDN = testlab.local; raise domain functional level to "Windows Server 2003"; Enterprise root CA = TestLabRootCA; DNS SRV record = _sipinternaltls._tcp.domain.com:5061

name: LABEX01
ram (mb): 768
ip/24: 10.0.0.12
software: Windows Server 2003 SP2 x86; WWW only; .NET Framework v2.0 Redistributable Package x86; Windows PowerShell 1.0 RTW x86; KB926776; a hotfix rollup package for 913297, 913393, 918995, and 924895; December 2007 cumulative time zone update for Microsoft Windows operating systems (KB942763); .NET Framework 2.0 SP1 runtime x86; Exchange Server 2007 SP1 x86
configuration: Exchange organization (native) = TestLabExchange; compatible with Outlook 2003 (public folder)

name: LABOC01
ram (mb): 512
ip/24: 10.0.0.13
software: Windows Server 2003 SP1 x86; WWW only; VC++ 2005 Redistributable; .NET Framework v2.0 Redistributable Package; execute steps 1 to 8 in OCS setup; Windows Server 2003 adminpak.msi
configuration: testlab.local forest / Global Properties / Meetings tab / Default Policy / Edit / check "Enable web conferencing" and "Enable program and desktop sharing"

name: LABXP01
ram (mb): 384
ip/24: 10.0.0.51
software: Windows XP SP2 x86; Outlook 2003; Office Communicator 2007 client; Live Meeting 2007 client; Office Outlook Conferencing Add-in

name: LABVT01
ram (mb): 448
ip/24: 10.0.0.52
software: Office Communicator 2007 client; Live Meeting 2007 client; Office Outlook Conferencing Add-in
Figure 1: Configure and associate IP subnet with AD site.

We use the fully qualified domain name (FQDN) testlab.local to scope the authoritative DNS and AD namespace to the private intranet. Being the first DC in the domain implies that the Global Catalog role will also be enabled automatically. As a best practice, create an IP subnet object (10.0.0.0/8) and associate it with the Default-First-Site-Name AD object, so that all applications that use AD directory services gain optimal traffic routing (see Figure 1).
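As a quick sanity check on the addressing plan, you can verify that every lab host listed in Table 1 falls inside the subnet associated with the AD site. This is a small sketch using Python's standard ipaddress module (the host names and addresses come from Table 1):

```python
import ipaddress

# The subnet object associated with Default-First-Site-Name.
site_subnet = ipaddress.ip_network("10.0.0.0/8")

# Lab machines and their addresses, as listed in Table 1.
lab_hosts = {
    "LABDC01": "10.0.0.11",
    "LABEX01": "10.0.0.12",
    "LABOC01": "10.0.0.13",
    "LABXP01": "10.0.0.51",
    "LABVT01": "10.0.0.52",
}

for name, ip in lab_hosts.items():
    # Membership test: is this host covered by the site's subnet object?
    assert ipaddress.ip_address(ip) in site_subnet, f"{name} is outside the site"

print("all lab hosts fall inside", site_subnet)
```

A host outside the subnet would not be matched to the site, so AD-aware applications on it could end up with suboptimal traffic routing.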
Messaging Service
The process of setting up Exchange Server 2007 with SP1 is a bit more tedious and time-consuming. As a first step, you will need to install the WWW service from the Windows Server 2003 SP1 media. Unlike its predecessor, Exchange 2007 does not require the SMTP service for setup. You will save time and avoid frustration if you next download items 3 to 9 listed in Table 1 from your administrative workstation. When you are ready, install the files, in the order listed, onto the VM named LABEX01, which is joined to the domain. You will require SP2 for Windows Server 2003 in order to run Exchange Server 2007 SP1 on this version of Windows. Because the Microsoft Management Console 3.0 ships as an integral part of Windows Server 2003 SP1 and above, you won't need to install it separately. Likewise, all the prerequisites will have been met, and you can launch step 4 straight away as soon as you run the Exchange 2007 with integrated SP1 executable. The typical Exchange Server installation option offers the minimal server roles needed to set up and run Exchange on the same machine, namely the Client Access, Hub Transport and Mailbox server roles. Throughout the installation process, accept all the proposed installation options except one, and name the Exchange organization TestLabExchange.
Figure 2: Configure download client compatibility.

To maintain compatibility with Outlook 2003 and earlier, or Entourage clients, confirm that the Yes radio button is selected on the "Client Settings" page (see Figure 2); then you won't have to worry about any extra configuration steps later on. Updates to the Active Directory schema and configuration are part of the Exchange installation process, so that is one less thing to be concerned about. When you have completed setup and rebooted the machine as prompted, your messaging platform is almost ready for service. Start the Exchange Management Console on LABEX01 and create two new mailbox-enabled user accounts, say Alice and Bob. We have not yet covered the application clients; we shall do so next.
Figure 7: Specify Web Farm FQDN.

In a simple OCS Standard Edition deployment scenario for internal user access, the core set of features, such as Instant Messaging, Audio/Video and Web Conferencing, all reside on the same machine. Again, the WWW service features prominently and must be installed ahead of time. Unlike Exchange Server 2007, OCS ships with all the prerequisite applications; once you start setup, you will have the opportunity to install the Visual C++ 2005 Redistributable as well as the .NET Framework runtime. As with the other installations, you should log in to LABOC01 with the built-in Administrator account in the domain. By default, this account has membership in the Domain Admins and Schema Admins groups, which permits you to execute steps 1 through 8 in sequence without having to switch back and forth to the DC that holds the critical Schema Master and other FSMO roles (LABDC01); see Figure 5. I recommend that you use testlab.
Wrapping Up
Generally, you should install the Hyper-V Integration Components into each guest VM to boost performance and enhance host-guest integration. Apart from the machines running Vista SP1 and Windows Server 2003 SP2, however, the other Windows operating systems in our test lab do not fulfill the prerequisites for the Integration Components. The components are therefore not installed in our test lab, at the expense of VM performance improvements; this is a deliberate decision and does not affect the lab infrastructure. Not surprisingly, a number of patches and hotfixes have already appeared since the official RTM of the various products. After you have determined that functional tests of the enterprise services (directory, messaging, unified communications, etc.) perform as expected, consider applying the updates to bring the applications current. Before you do that, it is a good idea to save a working set of all the VMs using Hyper-V's export feature. Subsequently, you can take a snapshot of each VM to enable quick rollback to a known state.
Using Hyper-V as the host virtualization solution is not without its shortcomings, though. Because there is no native support for sound cards and USB devices, you cannot extend the test lab to include scenarios that involve enterprise voice communications (VoIP), Exchange 2007 Unified Messaging, fax integration, or any other hardware-dependent services. Moreover, Novell SUSE Linux Enterprise 10 is the only non-Windows operating system supported on Hyper-V; you will have to look elsewhere if you are considering building integration solutions based on other Linux distributions or operating systems. One good example is the recently released Cisco Unified Communications Server 7.0 family of products. To overcome such constraints, you can set up another physical machine to run VMware Server 2.0 or VMware Workstation 6.5 hosted on Windows 2003/2008 x64 editions. Unfortunately, these products will not even install when Hyper-V is detected on the host machine, so this extra hardware investment is essential.
Parting Words
By now, you should have a fully functional test lab to simulate a majority of real world scenarios. All traffic is confined to the internal network (local-only, private virtual network) and it does not take much to extend the test lab to cover external network segments. Subsequently, more VMs can be added to provide important services such as security, system configuration, monitoring and patch management. We'll look at these in a subsequent article.
by Brien Posey
The advantage of virtualizing your servers is that it helps you make better use of your hardware resources and reduces some of your licensing costs. However, there are disadvantages: with Exchange Server, it isn't always obvious which server roles are suitable for virtualization, and it is rather hard to work out what system resources a virtual server is actually using. Brien Posey explains.
Virtualization is one of the hottest trends in IT today. It allows administrators to reduce costs by making better use of underused server hardware, and it often has the added benefit of reducing server licensing costs as well. For all of its benefits, though, there are some downsides to virtualizing your servers: capacity planning and scalability become much more important in virtualized environments, and yet measuring resource consumption becomes far more difficult. In this article series, I want to talk about server virtualization as it relates to Exchange Server.
Server Roles
One of the biggest considerations that you must take into account when you are planning to virtualize Exchange Server 2007 is which server roles you want to virtualize. Microsoft's official support policy states that they support the virtualization of all of Exchange Server's roles, except for the Unified Messaging role. As such, you would think that figuring out which roles are appropriate for virtualization would be relatively simple. Even so, this is a fiercely debated topic, and everyone seems to have their own opinion about the right way to do things. Since I can't possibly tell you which roles you should virtualize without being ridiculed by the technical editors and receiving a flood of e-mail from readers, I am simply going to explain the advantages and the disadvantages of virtualizing each role, and let you make up your own mind as to whether or not virtualizing the various roles is appropriate for your organization.
Resource Consumption
Capacity planning is an important part of any Exchange Server deployment, but it becomes even more important when you bring virtualization into the picture. Virtualizing implies that your Exchange Server will only be able to use a fraction of the server's overall resources, and that the virtualized Exchange Server will have to compete with other virtual machines for a finite set of physical resources. The problem is that it can be very tricky to figure out just how much of a server's resources your virtual Exchange Server is actually using. To show you what I am talking about, I want to show you three screen captures. First, take a look at the screen capture shown in Figure A. It shows a lab machine that is running three virtual servers: one is a domain controller, another is running OCS 2007, and the third is running an Exchange 2007 mailbox server (this is just a sample configuration, and is by no means a recommendation).
Figure A. This lab machine is currently running three virtual servers.

If you look at the figure, you will notice that all three machines are running, and have been for a few days straight. You will also notice that the Hyper-V Manager reports each server as using 0% of the server's CPU resources. Granted, all three of these machines are basically idle, but there is no way that an Exchange 2007 mailbox server is not consuming at least some CPU resources. The second screen capture that I want to show you is Figure B. It was taken from the same server, but this time I put a load on my Exchange 2007 mailbox server. While doing so, I opened two instances of the Performance Monitor: one running within the host operating system, and the other within the virtual Exchange server. Even though these two screen captures were taken at the same time, they tell two very different stories.
Figure B. The host operating system and the guest operating systems have completely different ideas about how resources are being used.

The last screen capture I want to show you is Figure C. What it demonstrates is that even the Hyper-V Manager and the Windows Task Manager, both running within the host operating system, can't agree on how hard the CPU is working: the Hyper-V Manager indicates that five percent of the CPU resources are being used, while the Windows Task Manager indicates that three percent are in use (97% of the CPU resources are free).
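Part of the disagreement is simply a matter of reference frames: the guest reports load as a percentage of the virtual processors it can see, while the host reports against all of its logical processors. A back-of-the-envelope conversion makes this concrete (the numbers below are illustrative, not measurements from the figures):

```python
def guest_to_host_pct(guest_pct, vcpus, host_cores):
    """Convert a guest-reported CPU % (relative to its own vCPUs)
    into the share of the whole host that the VM is consuming."""
    return guest_pct * vcpus / host_cores


# A guest pegged at 80% of its single vCPU, on a 4-core host...
host_share = guest_to_host_pct(80.0, vcpus=1, host_cores=4)
print(host_share)  # 20.0 -> only 20% of the physical machine
```

So a guest that looks busy from the inside can look nearly idle from the host's point of view, and neither number is wrong; they just measure against different denominators.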
First, Hyper-V allows you to decide how many virtual processors to allocate to each virtual server. This can be a limiting factor in and of itself. For instance, if a server has four CPU cores and you only allocate one virtual processor to a child partition, then that partition can never consume more than 25% of the server's total CPU resources, regardless of how the partition's CPU usage is actually reported. Things get a little strange when you start allocating more virtual processors than the number of physical CPU cores the server has. For instance, suppose you have a server with four CPU cores and you allocate eight virtual processors. In this situation, the virtual machines will try to use double the amount of CPU time that is actually available. Since this is impossible, CPU time is allocated to each of the virtual processors in round-robin fashion. When this occurs, CPU utilization is reported as being very low, because the workload is being spread across so many (virtual) processors. In reality, the virtual machine's performance will be worse than it would have been had fewer virtual processors been allocated. Allocating fewer virtual processors tends to cause CPU utilization to be reported as higher than it would be with more virtual processors, but performance ultimately improves, because there is a significant amount of overhead involved in distributing the workload across, and allocating CPU time to, all those virtual processors.
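The arithmetic above can be sketched in a few lines. This is an illustrative model only (real schedulers are more complicated): a partition's ceiling is its vCPU count divided by the host's core count, and the utilization the guest reports is the work spread across however many vCPUs it was given.

```python
def cpu_ceiling_pct(vcpus, host_cores):
    """Maximum share of total host CPU a partition can ever consume."""
    return min(vcpus / host_cores, 1.0) * 100


def reported_utilization_pct(work_cores, vcpus):
    """Utilization as the guest reports it: the same amount of work
    (expressed in cores' worth of CPU time) spread over its vCPUs."""
    return min(work_cores / vcpus, 1.0) * 100


# One vCPU on a 4-core host can never exceed 25% of the machine.
assert cpu_ceiling_pct(1, 4) == 25.0

# Two cores' worth of work looks "busier" when fewer vCPUs carry it:
print(reported_utilization_pct(2, 8))  # 25.0 -> looks almost idle
print(reported_utilization_pct(2, 4))  # 50.0 -> same work, higher %
```

Note the second pair of numbers: the workload is identical, but the reported percentage halves when the vCPU count doubles, which is exactly why an over-allocated partition can look idle while performing worse.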
Conclusion
As you can see, the values reported by the Performance Monitor and by other performance-measuring mechanisms vary considerably depending on where the measurement is taken. In Part 2, I will show you why this is the case, and how you can more accurately figure out what system resources a virtual Exchange server is actually consuming.
by Jaap Wesselius
For SQL Server and Exchange Server, Windows server virtualization is going to be increasingly important as a way for the administrator to allocate hardware resources in the most efficient way, to offer more robust services, and to deploy services. Jaap Wesselius starts his new series on Hyper-V by explaining what Hyper-V is, how it relates to Windows Server 2008, and how it compares to ESX, Virtual Server and Virtual PC.
Hyper-V Introduction
Microsoft released Hyper-V, its hypervisor-based virtualization product, in the summer of 2008; but how does it differ from Virtual Server, and why is it a better product? And how does it compare to VMware ESX, for example? In this series of articles I'll try to explain what Hyper-V is, how it relates to other products, and give some best practices regarding its use.
Windows Architecture
Before we take a look at the Hyper-V architecture, let's first look at the Windows Server 2008 (and, basically, all Windows NT-based servers) architecture. When Windows Server 2008 is installed on appropriate hardware, two modes can be identified:
- Kernel mode: a protected space where the kernel, the "heart" of Windows Server 2008, is running, and where processes run that interact directly with the hardware, for example device drivers using buffers allocated in kernel mode. When such a process crashes, it is very likely that the server will crash as well, resulting in a blue screen of death.
- User mode: a more protected space where applications run, for example Microsoft Office, SQL Server or Exchange Server. When an application in user mode crashes, only the application stops and the server continues running.
Figure 1. User and Kernel mode running under Windows Server 2008.

When an application needs to access a piece of hardware, for example the hard disk or the network interface, it needs to communicate with the appropriate driver running in kernel mode. Switching from user mode to kernel mode is a costly process and consumes a considerable number of processor cycles. This is known as "mode switching." Virtual Server and Virtual PC are applications and, as such, run in user mode; the complete environment in which the Virtual Machine runs is emulated. After installing the Virtual Machine Additions, or when using hardware-assisted virtualization, some kernel processes are handled directly by the processor. Every hardware access by the Virtual Machine has to cross from user mode to kernel mode and back, so the overhead in this scenario is large and has a significant performance impact. The same is true for VMware Server and VMware Workstation.
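The cost of mode switching is easy to glimpse from user space. The rough sketch below compares a call that stays entirely in user mode with one that must enter the kernel (a file stat, which performs a system call on every invocation). Absolute numbers vary wildly by OS and hardware; only the gap between the two is the point.

```python
import os
import timeit

def user_mode_only():
    # Pure computation: never leaves user mode.
    return 1 + 1

n = 100_000

# Time a call that stays in user mode...
user_time = timeit.timeit(user_mode_only, number=n)

# ...versus one that crosses into the kernel on every call.
kernel_time = timeit.timeit(lambda: os.stat("."), number=n)

print(f"user-mode call : {user_time / n * 1e9:8.1f} ns")
print(f"kernel crossing: {kernel_time / n * 1e9:8.1f} ns")
```

On a typical machine, the kernel crossing is many times slower per call, which is why an emulated VM, where every hardware access takes this round trip, pays such a heavy tax.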
Hyper-V Architecture
Hyper-V is a so-called hypervisor: a layer installed between the hardware and the operating system. Hyper-V is a role in Windows Server 2008 and can only be installed after Windows Server 2008 itself. When you install the Hyper-V role, the hypervisor is "slid" between the hardware and the operating system. Besides the hypervisor, a little more is installed as well: the VMBus, which runs in kernel mode, along with a Virtual Storage Provider (VSP), and a WMI provider, which runs in user mode. A VMWorker process is spawned for every Virtual Machine that is started when Hyper-V is running.
Note: Hyper-V is only available on Windows Server 2008 x64 editions. Besides x64-capable hardware, the server must support hardware virtualization, and Data Execution Prevention (DEP) must be enabled on the server. The server's BIOS should support these settings as well.
After installing the Hyper-V role in Windows Server 2008, the server needs to be rebooted, after which it is operational. The original Windows Server 2008 installation is turned into a Virtual Machine as well; this one is called the "root" or the "parent partition." It is a very special Virtual Machine, since it controls the other Virtual Machines running on the server. I'll get back to this later in this article. Virtual Machines and the parent partition on Hyper-V run side by side, as shown in Figure 2. Virtual Machines are called "child partitions." There are three types of Virtual Machines:
- Hypervisor-aware Virtual Machines, like Windows Server 2003 and Windows Server 2008.
- Non-hypervisor-aware Virtual Machines, like Windows 2000 Server and Windows NT4. These Virtual Machines run in an emulated environment.
- Xen-enabled Linux kernels (which also support the VMBus architecture). The only one available as a standard distribution at this point is SUSE Linux.
Micro-kernelized hypervisor
One difference between ESX and Hyper-V is the type of hypervisor: Microsoft uses a micro-kernelized hypervisor, whereas VMware uses a monolithic hypervisor. So what are the differences between the two? A micro-kernelized hypervisor is a very thin hypervisor (less than 800 KB) with an absolute minimum of software in the hypervisor itself. Drivers, memory management and so on, needed for the Virtual Machines, are installed in the parent partition. This means that Windows Server 2008, with appropriate certified hardware drivers, can be used for a Hyper-V server. A monolithic hypervisor contains more software and management interfaces; network drivers and disk drivers, for example, are part of the hypervisor and not of the parent partition. This automatically means that only servers that are certified by VMware, and that have certified drivers, can be used for an ESX server.
Figure 3. Monolithic versus Micro-kernelized Hypervisor.

Both solutions have pros and cons; time will tell which one offers the best performance and scalability.
Security
After installing the Hyper-V role in Windows Server 2008, the original Windows installation automatically turns into a Virtual Machine, the so-called parent partition or root. After logging in to the parent partition, it looks just like an ordinary Windows Server 2008 installation. But it controls all the other Virtual Machines running on the server, so special care needs to be taken. If the parent partition is compromised by a virus or a Trojan horse, not only is the parent partition under somebody else's control, but potentially all the Virtual Machines running on the server are too. The Hyper-V Manager is available on this server, as are all the WMI interfaces that control the Virtual Machines running on it. It is a best practice to install no other software on the parent partition, and not to use it, for example, for browsing the Internet. All applications and software should be installed in Virtual Machines and NOT on the parent partition.

A better solution is to use Windows Server 2008 Server Core. This is a very minimalistic instance of Windows Server 2008 with little software and few services installed. Explorer is not present on Server Core; after logging in, only a Command Prompt is shown. Some small GUIs are available though, for example the date-time applet for setting the date and time on the server. Managing a Windows Server 2008 Server Core is definitely more difficult than managing a "normal" server with a Graphical User Interface (GUI), but once you're used to it, it is much safer due to the reduced attack surface.

Microsoft made a couple of design decisions with respect to security. Not using shared memory, for example, is such a decision. With shared memory you can overcommit memory on your host server; overcommitting is assigning more memory to Virtual Machines than is available on the host, which is made possible by sharing memory pages between Virtual Machines.
Although this is definitely true, it was a security decision by Microsoft not to use this feature. Virtual Machines can be compromised as well, and that is also a situation you do not want to occur; but when a Virtual Machine is compromised, it is not possible to access the hypervisor and take over the host server, nor to access other Virtual Machines. This also means that when you have to copy data from one Virtual Machine to another, it is just like with physical machines: you have to copy the data across the network using file shares. The only exception is copying plain text between the parent partition and a Virtual Machine using the "Copy Text" option in the Hyper-V Manager.
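The page-sharing technique Microsoft decided against can be sketched abstractly. In this toy model (not any vendor's actual implementation), identical pages, such as common OS code shared by VMs running the same operating system, are stored once, which is what makes overcommit possible in the first place:

```python
def physical_pages_needed(vm_page_lists):
    """Compare memory demand with and without page sharing.

    With sharing, identical pages across VMs are stored once;
    without it, every VM keeps its own private copy."""
    shared = len({page for pages in vm_page_lists for page in pages})
    unshared = sum(len(pages) for pages in vm_page_lists)
    return shared, unshared


# Three VMs running the same OS image (90 common pages each)
# plus 10 pages of unique data per VM.
os_pages = [f"os{i}" for i in range(90)]
vms = [os_pages + [f"vm{v}-data{i}" for i in range(10)] for v in range(3)]

shared, unshared = physical_pages_needed(vms)
print(shared, unshared)  # 120 vs 300: sharing cuts demand to less than half
```

The saving is exactly what an over-committing hypervisor exploits, and also exactly the cross-VM coupling that Hyper-V's design avoids.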
Integration Components
When a Virtual Machine is first installed, it runs in an emulated environment. As explained earlier, this is not the most efficient way of running a Virtual Machine, so after the initial installation you should install the Integration Components. Open the Hyper-V Manager, select the Virtual Machine, choose Action and select "Insert Integration Services Setup Disk." This will install the Integration Components in your Virtual Machine; when it finishes, reboot the Virtual Machine and you're done. Installing the Integration Components installs synthetic drivers in the Virtual Machine, making it possible for the Virtual Machine to communicate via the VMBus architecture. This speeds up performance dramatically. You can see the Integration Components using the Virtual Machine's Device Manager:
Figure 4. After installing the Integration Components the emulated hardware is replaced by Hyper-V specific hardware.
Conclusion
Microsoft Windows Server 2008 Hyper-V was released in the summer of 2008 and is Microsoft's first real hypervisor-based virtualization solution. It is not an emulated environment like Virtual Server or Virtual PC; as a hypervisor solution, it "sits" between the hardware and the operating system. With the Integration Components installed, you can fully use the functionality offered by Hyper-V. You have to secure the parent partition as much as possible to prevent compromise of the complete system. In the next articles I will talk more about Hyper-V best practices, deploying Virtual Machines, using System Center Virtual Machine Manager (VMM) 2008, and the "high availability" options, and why these aren't really highly available in the current release of Hyper-V.
by Brien Posey
It isn't easy to measure the consumption of physical resources by servers in virtual machines, since each partition has its own virtualized view of system resources. Not only that, but there is a subtle difference between virtual processors and physical processors. Brien Posey explains the special performance counters that can be used to get an accurate assessment, and goes on to describe how to test your physical servers to see if they are good candidates for virtualization.
If you read my first article in this series, then you got something of a crash course in Hyper-V's architecture. Of everything that I covered in that article, though, there are two main points that you need to keep in mind. First, when it comes to virtualizing Exchange Server, it is critical that you monitor resource consumption so that you can ensure that there are sufficient hardware resources available to effectively service Exchange and any other virtual machines that may be running on the server. Second, each of the various resource monitoring mechanisms that I showed you tells a completely different story regarding how much of the server's physical resources are actually being consumed.
In other words, it is important to find out how much of the server's resources are being consumed, but you are not going to be able to do so in the usual way.
One thing that is important to understand is that the wizard does not perform any kind of benchmark on your host server. Therefore, the recommendations that the wizard makes are only as good as the information that you provide it with.
Conclusion
There are obviously many more virtual server resources that you can monitor other than just the CPU. Keep in mind though, that once you understand the basic concepts involved in monitoring CPU resources, many of those same concepts can be applied to monitoring other types of resources.
by Nirmal Sharma
By using a virtualized clustering computing environment with failover, you can improve server availability without using as many physical computers. A group of independent computers can work together to increase the availability of virtualized applications and services. If one of the cluster nodes fails, another node takes over without disrupting the service. Nirmal Sharma explains the failover process under Hyper-V and how to improve the performance of a failover.
This article explains the internal process behind the Hyper-V Virtual Machine Resource DLL and the functions used to interact with cluster components to improve the failover process for virtual machines. It doesn't show how to cluster Virtual Machines or how to configure Quick Migration for them; instead, it focuses on the Hyper-V Resource DLL and the failover process for Virtual Machines running on a Hyper-V Server.
Terms
Before we move ahead, let's define some important terms that we will be using. Cluster Service: the Cluster Service is the main component of the Clustering Software; it handles the communication between the Resource Monitor and its managers. All the clustering managers run under the Cluster Service.
Windows Clustering
Microsoft introduced its first version of clustering software in Windows NT 4.0 Enterprise Edition, and has significantly improved it in Windows 2000, Windows Server 2003 and Windows Server 2008. There are two types of clustering technologies: Server Cluster (formerly known as MSCS) and Network Load Balancing (NLB). MSCS, or Server Cluster, is used for High Availability, while NLB is used to load balance TCP/IP traffic. The MSCS or Server Cluster capability is also known as Failover Clustering; support for Virtual Machines running on Hyper-V in a cluster is available only with Failover Clustering.
The Clustering Software's Resource Monitor tracks the availability of Hyper-V Virtual Machines through VMCLUSRES.DLL by performing two checks: IsAlive and LooksAlive. Implementing these tests is application-specific, which is why cluster-aware applications are expected to provide their own resource DLL. The Cluster Server doesn't need to know about application-specific functions; it just executes the functions provided by the Resource DLLs. Hyper-V implements many other functions in its Resource DLL, shown in Figure 1.1. These functions are specific to Hyper-V Virtual Machines and not related to clustering in any way.
Tip The two basic checks (IsAlive and LooksAlive) are supported by every Resource DLL or cluster-aware application.
FIGURE 1.1: Cluster Components and Hyper-V Cluster Resource DLL. In Figure 1.1 you can see the DLL VMCLUSRES.DLL, which is installed when the Hyper-V Role is first enabled. Before you can cluster Virtual Machines running on Hyper-V, you need to install the Failover Clustering Software on the 64-bit edition of Windows Server 2008 or 2008 R2. After installation is completed, you click on "Services and Applications" in Failover Cluster Management and then select "Virtual Machines" as the Cluster resource.
Regsvr32.exe /u VMCLUSRES.DLL
Regsvr32.exe VMCLUSRES.DLL
The above commands re-register VMCLUSRES.DLL with the Failover Clustering Software. The next DLL is VMCLUSEX.DLL. This DLL works as a proxy between the Cluster Administrator and the Hyper-V Manager. Its main function is to provide the interfaces used to configure and control Virtual Machine configuration parameters and screens. If this DLL is missing or corrupted you can't access Virtual Machines. VMCLUSEX.DLL doesn't implement any cluster-specific control functions. As an example, when you right-click on a Virtual Machine resource in the Failover Cluster Manager, you get a "Bring this Virtual Machine Online" option to start the Virtual Machine; the same action will be reflected in Hyper-V Manager, where you will see the Virtual Machine starting. VMMS.EXE, the main process of Hyper-V, needs to know the status of Virtual Machines running on the Hyper-V Server. The Resource DLL is written to report the status of the Virtual Machines in a cluster to VMMS.EXE, which in turn shows the status of each Virtual Machine in Hyper-V Manager. VMCLUSRES.DLL, which sits between the Resource Monitor and the Virtual Machines, plays an important role in the failover process. Without this DLL Hyper-V cannot function as a cluster-aware application.
Tip Malicious code running on your system may corrupt the DLL files.
If that happens, you can either re-run the Hyper-V Setup (disabling and re-enabling the role) or copy VMCLUSRES.DLL from a working computer.
Figure 1.1, above, also shows the functions defined in VMCLUSRES.DLL. The Hyper-V Virtual Machine-specific functions are mapped to the cluster-specific functions. For example, the Cluster's IsAlive and LooksAlive functions are mapped to VM IsAlive and VM LooksAlive respectively. There are no static mappings defined within VMCLUSRES.DLL; the DLL itself knows which function to execute. In the same way, the other Virtual Machine functions are also mapped to related cluster functions, as shown in Figure 1.1. The VM IsAlive and VM LooksAlive functions are executed by VMCLUSRES.DLL at a predefined interval. Most of the monitoring is done by the VM IsAlive query, which is implemented so that it performs all of the basic checks for Hyper-V Virtual Machines. It checks to make sure that: the Virtual Machines in the cluster are online; the Virtual Machines are configured with correct dependencies; and the registry entries for the Virtual Machine resources are configured correctly.
VM LooksAlive is used to perform a thorough check on the Virtual Machines in the cluster. This check might take some time, as it includes checking the configuration of the Virtual Machine, the location of the Virtual Machine configuration file (XML), the VHD location, and so on. Since it can take a while for LooksAlive to perform these checks and report the status back to the Resource Monitor, the Resource Monitor depends on the results reported by IsAlive, which is configured to execute every 5 seconds by default. IsAlive only checks the status of the Virtual Machine in the Cluster (e.g. Online or Failed), and the Resource Monitor takes action based on that. Think of a situation where only LooksAlive is used to get the status of Virtual Machines in the Cluster. This may result in slightly more downtime for the Virtual Machines, as LooksAlive calls are executed only every 60 seconds! Now, you could ask why not decrease that interval.
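The interplay between the two checks can be illustrated with a small sketch. This is purely a model of the polling schedule described above (quick IsAlive-style checks every 5 seconds, thorough LooksAlive-style checks every 60 seconds, as this article presents them); the function names and structure are assumptions for illustration, not the real VMCLUSRES.DLL interface.

```python
# Hypothetical model of the two-tier polling described in the text: a quick
# status check on a short interval and a thorough check on a long one.

QUICK_INTERVAL = 5      # seconds between quick (IsAlive-style) checks
THOROUGH_INTERVAL = 60  # seconds between thorough (LooksAlive-style) checks

def checks_due(elapsed_seconds):
    """Return which checks fire at a given elapsed time (in seconds)."""
    due = []
    if elapsed_seconds % QUICK_INTERVAL == 0:
        due.append("IsAlive")
    if elapsed_seconds % THOROUGH_INTERVAL == 0:
        due.append("LooksAlive")
    return due

# In the first minute the quick check runs 12 times, the thorough one once,
# which is why a failure is usually noticed by the quick check first.
quick = sum("IsAlive" in checks_due(t) for t in range(1, 61))
thorough = sum("LooksAlive" in checks_due(t) for t in range(1, 61))
```

The point of the model is simply that a resource failure is detected within the short interval rather than waiting for the thorough sweep.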
FIGURE 1.2: IsAlive and LooksAlive Interval of Virtual Machine Resource. The default interval can be changed per Virtual Machine to improve failover response time, as shown above in Figure 1.2. In previous versions of Windows Clustering it was not possible to define the IsAlive and LooksAlive intervals per Resource; starting with Windows Server 2008 clustering, it is. When you set up a cluster for the first time, the Cluster Service running on the node takes a snapshot of the cluster configuration and saves it in the HKLM\Cluster key. This key contains the cluster configuration, such as the resource names, their GUIDs, the node holding each resource, and its status. This is generally called the cluster configuration database. As an example, for Virtual Machines it includes the following:
The PersistentState value keeps the status of the Resources or Virtual Machines in the Cluster: 1 means Online and 0 means Offline. The Status column shown above is just for your reference; it is not stored as a registry entry. This is also shown in the Cluster Registry hive:
FIGURE 1.3: PersistentState Entry in the Cluster Registry for Virtual Machine. As you can see in Figure 1.3, the PersistentState registry entry of the Virtual Machine "Test Cluster VM" has the value 1, which indicates that the Virtual Machine is Online in the cluster. Before the Resource Monitor executes any cluster function against the Virtual Machines or Cluster Groups, it looks at the cluster configuration database to check the status of all resources and their GUIDs. For example, let's say we have a cluster group named "Hyper-V VMs" in which all the Hyper-V Virtual Machines reside. When the IsAlive interval expires (5 seconds by default), the Resource Monitor executes the IsAlive call against the "Hyper-V VMs" Cluster Group. It hands over the Resource GUID and Status to the Hyper-V Virtual Machines Resource DLL (VMCLUSRES.DLL), which in turn executes the VM IsAlive call to check Virtual Machine availability. Please note that VMCLUSRES.DLL doesn't itself know the stored status of the Virtual Machines; it is the Resource Monitor that supplies this information to VMCLUSRES.DLL. Next we look at VM Open, VM Close, VM Online and VM Offline. These functions are called whenever Virtual Machines are moved across Hyper-V Servers, taken offline or online, or whenever there is a need to call them. For example, you might want to take a Virtual Machine offline for maintenance purposes on a Hyper-V node. In that case, the Resource Monitor executes the Offline function, and VMCLUSRES.DLL in turn executes the VM Offline function to take the Virtual Machine offline. The change is also reported to the VMMS.EXE process in the background so that it is aware of the Virtual Machine status. We will discuss these functions later in this article. As a whole, these functions are executed by the Cluster Service and supported by the Hyper-V Resource DLL; that's why Hyper-V is known as genuinely cluster-aware virtualization software!
The Resource Monitor determines the state of Virtual Machines by checking the PersistentState value in the registry. This value can be either 1 (Online) or 0 (Offline). For example, if you stop a Virtual Machine on a cluster node, the value 0 is set for that resource in the registry. Even if you stop the Virtual Machine using the command line or Hyper-V Manager, the value is still updated in the Cluster Configuration Database, because the Hyper-V Resource DLL and VMMS.EXE always talk to each other to get the status of Virtual Machines and update the Cluster Configuration Database accordingly. When you stop a Virtual Machine using the command line or a WMI script, you are actually interacting with the VMMS.EXE service, which executes the Stop command on your behalf, and the status of the Virtual Machine is updated in the Cluster Configuration Database. This may not work for other applications in the cluster. Take Exchange Server, for example: operations occurring outside the cluster for Exchange Server resources are not reflected in the cluster configuration database. In this case, the IsAlive query may not function correctly; the value supplied by the Resource Monitor will indicate that the resources are running, so IsAlive will not take any action against the stopped Cluster Resources. The value is updated in the Cluster Configuration Database only when LooksAlive is executed, which performs a thorough check on the resources, including checking the Exchange services.
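The PersistentState bookkeeping described above can be reduced to a simple lookup. The sketch below is only a model of the mapping the article describes (1 = Online, 0 = Offline); the dictionary structure and helper name are invented for illustration and do not touch the real registry.

```python
# Model of the PersistentState convention described in the text.
PERSISTENT_STATE = {1: "Online", 0: "Offline"}

# A reduced stand-in for the HKLM\Cluster configuration database snapshot.
cluster_db = {
    "Test Cluster VM": {"PersistentState": 1},   # online, as in Figure 1.3
    "Standby VM":      {"PersistentState": 0},   # hypothetical offline VM
}

def status_of(resource):
    """Translate a resource's stored PersistentState into a status string."""
    return PERSISTENT_STATE[cluster_db[resource]["PersistentState"]]
```

This is the value the Resource Monitor hands to VMCLUSRES.DLL before the actual check is run.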
How does Hyper-V Virtual Machine Resource DLL help in the failover process?
The status messages shown in Figure 1.1 above are generated through IsAlive calls. When the IsAlive interval expires, the Resource Monitor executes the Cluster IsAlive calls, and the Hyper-V Cluster Resource DLL in turn executes VM IsAlive against all Virtual Machine Resources. The messages returned by these calls include one of the following: Online/Offline; Online/Offline Pending; Failed.
These status messages are passed back to the Resource Monitor, which in turn reports to the Cluster Service whether any action needs to be taken. As shown in Figure 1.1, the Resource Monitor sits between the Hyper-V Resource DLL and the Cluster Service, and any calls made to Hyper-V Virtual Machine Resources have to go through VMCLUSRES.DLL first. For example, if the Cluster Service needs to check the availability of Hyper-V Virtual Machine resources, it will make a call to the Resource Monitor, which will ask VMCLUSRES.DLL to check the status of the Hyper-V Virtual Machine Resources and report back. If the Resource Monitor doesn't receive any response from VMCLUSRES.DLL, or it cannot detect the Virtual Machine's availability, it will pass the status back to the Cluster Service. The Cluster Service then passes this status message to the related Managers, as shown in the figure above, and the Managers take action according to the status passed up by the lower-layer components. The status message could indicate a failure of Virtual Machine resources, or it could be a simple status report; these messages and cluster actions are discussed later in this article with an example. In addition, if a function executed by the Resource Monitor does not exist in the Resource DLL, the request is simply discarded and no operation is carried out. Hyper-V Server doesn't really use a mechanism of its own to fail Virtual Machines over to the surviving node; instead, the Resource DLLs are written to "support" the failover process. The following figure shows a simple failover process:
FIGURE 1.4: VMCLUSRES.DLL and Status Messages in the Hyper-V Virtual Machines Failover Process.
1. After the IsAlive interval expires (by default every 5 seconds), the Cluster Service asks the Resource Monitor to report the status of the Virtual Machines.
2. The Resource Monitor checks the status of the Virtual Machine Resources in the cluster configuration database (HKLM\Cluster). It provides VMCLUSRES.DLL with the Virtual Machine Resource GUIDs and their current status (PersistentState).
3. VMCLUSRES.DLL executes its own function (VM IsAlive) after it receives the signal from the Resource Monitor to perform a check on the Virtual Machines. It checks and reports the status back to the Resource Monitor. VMCLUSRES.DLL will report one of the following status messages:
Online/Offline
Online/Offline Pending
Failed/Stopped
After the Resource Monitor receives the status, it compares the status messages received from VMCLUSRES.DLL with the ones stored in the cluster configuration database. It then takes action according to the status reported by VMCLUSRES.DLL, as listed below:
If the comparison is successful, no action is taken. For example, the status message received in step 2 is "Online" and the VM IsAlive query also reports the same status.
Tip The Resource Monitor doesn't take any action for Online/Offline status messages, because an Administrator might have stopped the resource for maintenance purposes; but that change should also be reflected in the cluster configuration database before IsAlive is called. The Resource Monitor only takes action when the comparison fails, as stated above. Furthermore, there shouldn't be any inconsistencies in the cluster configuration database; if there were, they wouldn't last longer than 5 seconds, since IsAlive calls always update the status in the cluster configuration database.
The mechanism isn't entirely straightforward. There is one more message that can be returned by VMCLUSRES.DLL: "Failed." In this case the Resource Monitor sends a message (Restart) back to VMCLUSRES.DLL to restart the Virtual Machine resource in the cluster. VMCLUSRES.DLL in turn executes the "VM Online" function to bring the failed Virtual Machine online.
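The Resource Monitor's decision step, as described above, can be sketched as a small function: compare what is stored in the cluster configuration database with what the resource DLL just reported, and pick an action. The function and action names here are illustrative assumptions, not the real cluster API.

```python
# Hypothetical model of the compare-and-act step described in the text.
def decide_action(stored_status, reported_status):
    """stored_status comes from the cluster configuration database;
    reported_status comes from the resource DLL's IsAlive-style check."""
    if reported_status == "Failed":
        return "Restart"        # ask the DLL to run its VM Online function
    if reported_status == stored_status:
        return "None"           # states agree: nothing to do
    # e.g. an Administrator took the VM offline out-of-band; record it.
    return "UpdateDatabase"
```

Note that only a "Failed" report triggers corrective action; a clean Online/Offline mismatch just resynchronizes the stored state.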
Tip VMCLUSRES.DLL doesn't actually implement a separate Restart function; instead it always uses its own VM Online function. If a resource doesn't come online within the specified interval, or after a few attempts, the resource is considered to have failed and the failover process starts. VMMS.EXE is also notified, as it needs to keep track of the status of all the Virtual Machines running in the cluster.
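The retry rule in the tip above, that a resource which does not come online within a few attempts is considered failed and triggers failover, can be expressed as a short loop. The attempt count and the return values here are assumptions for illustration only.

```python
# Hypothetical retry loop for the "few attempts before failover" rule.
def bring_online(try_start, max_attempts=3):
    """try_start: callable that returns True when the VM starts successfully.
    Returns a (status, attempts_used) pair."""
    for attempt in range(1, max_attempts + 1):
        if try_start():
            return ("Online", attempt)
    # Exhausted the attempts: the resource is failed and the
    # Failover Manager takes over from here.
    return ("Failed", max_attempts)

# A VM that only starts on the third try still avoids failover:
results = iter([False, False, True])
outcome = bring_online(lambda: next(results))
```

Only when every attempt fails does control pass up to the Failover Manager.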
After the Virtual Machine resource has failed, the message is passed back to the Resource Monitor. The Cluster Service receives this message from the Resource Monitor and starts the failover process with the help of the Failover Manager. The Failover Manager on the node where the Virtual Machine resource has failed communicates with the Failover Manager on another selected cluster node, but before it can do so, it needs the list of nodes available in the cluster. This is where the Node Manager comes into the picture: it supplies the list of nodes available in the cluster, and the first available node at the top of the list is selected as the failover target. Once the source Failover Manager has obtained the list of nodes, it talks to the Failover Manager on the target node. The Failover Manager on the target node supplies the list of Virtual Machine Resources, along with their GUIDs and PersistentState, to its Resource Monitor. Since this is a failover, the Resource Monitor knows what to do next: it lists all the Virtual Machines with their flags (Online or Offline) and instructs the Hyper-V Resource DLL to execute the VM Online function from its library. The Resource DLL, in turn, executes VM Online to bring the resources online on the target node, and the change is reported to the VMMS.EXE process of Hyper-V. Note that if the Virtual Machine restarts successfully within a few attempts on the original node, the failover process doesn't occur at all.
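The node-selection step described above, where the Node Manager supplies the list of cluster nodes and the first available one is chosen, can be sketched as follows. The node names and the list shape are invented for illustration.

```python
# Hypothetical model of picking a failover target from the Node Manager's list.
def pick_failover_target(nodes, failed_node):
    """nodes: ordered list of (name, is_up) pairs from the Node Manager.
    Returns the first surviving node, or None if no node is available."""
    for name, is_up in nodes:
        if is_up and name != failed_node:
            return name
    return None  # no surviving node: the resource stays offline

# Example: HV-NODE1 has gone down, so the next node up the list is chosen.
nodes = [("HV-NODE1", False), ("HV-NODE2", True), ("HV-NODE3", True)]
target = pick_failover_target(nodes, "HV-NODE1")
```

Because the list is ordered, the choice is deterministic: the highest-listed healthy node wins.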
Thus, if there were no Resource DLL for Hyper-V Virtual Machines, the failover process could take much longer to move the resources from one node to another surviving node. Because the Hyper-V Resource DLL is capable of handling the cluster functions executed by the Clustering Software, it doesn't need to wait to decide which action to take. As stated above, the cluster-aware functions are mapped to Hyper-V Resource DLL-specific functions, so it is easy for the Hyper-V Resource DLL to execute these functions as soon as they are called from the Resource Monitor. In Figure 1.4 you see VMMS.EXE and Hyper-V Manager. Every function executed by VMCLUSRES.DLL is also notified to VMMS.EXE, which in turn refreshes the status of its VMs on the Hyper-V Server. This is required in order to know the exact status of a VM running on the Hyper-V Server. As an example, an Administrator could open the Hyper-V Manager to get the status of all the Virtual
Machines on the Hyper-V Server. If a Virtual Machine has failed and this is not communicated to VMMS.EXE, then there could be confusion, since the Failover Cluster Manager would report one status and the Hyper-V Manager would report a different status.
Tip IsAlive is executed every 5 seconds for a Virtual Machine in the cluster. You could decrease this value to 1 or 2 to speed up the failover process.
Conclusion
To summarize, Virtual Machines running on Virtual Server are not cluster-aware because they do not provide a Resource DLL, whereas Virtual Machines running on Hyper-V are cluster-aware because Hyper-V ships with a cluster Resource DLL. We saw how the Cluster Service doesn't talk to VMCLUSRES.DLL directly; instead, it uses its Resource Monitor. The status messages passed by the Hyper-V Resource DLL are received by the Resource Monitor, which performs any appropriate action. Finally, we also saw how the Hyper-V Resource DLL plays an important role for its Virtual Machines in the cluster. Resource DLLs allow Hyper-V Virtual Machines to be fully cluster-aware, and the functions executed by the Resource Monitor on behalf of the Cluster Service are supported by the Hyper-V Resource DLL. This makes the failover process faster.
by Nirmal Sharma
In the first of a series of articles on Hyper-V Networking, Nirmal explains Hyper-V networking and VLAN tagging, and shows how to set up a Virtual Network Switch. Once this is done, the Hyper-V Virtual Network Switches can be used to connect an unlimited number of Virtual Machines.
This article is about Hyper-V Networking. It doesn't elaborate on the basics of networking; instead, it focuses on Hyper-V Networking and VLAN Tagging, with examples. The first article in this series explains the following topics:
FIGURE 1.1: Hyper-V Virtual Network Types. As you can see in Figure 1.1, there are four networking types, or Virtual Switches, in Hyper-V. There is one more type, called Dedicated, which is not visible or available in the Hyper-V Virtual Network Manager, so you see only three types; we will discuss the Dedicated type later in this article series. Before I get into more detail, let me explain the default configuration of Hyper-V Networking. In a default Hyper-V implementation: no Virtual Network Switches are created; Virtual Machines created on Hyper-V are not associated with any of the Virtual Network Switch types shown in Figure 1.1; and no Network Adapters are configured in the Virtual Machines (depending on the guest Operating System).
The default Hyper-V implementation looks as shown below in Figure 1.2:
FIGURE 1.2: Hyper-V Default Implementation and Network Configuration. As you can see in Figure 1.2, there are three VMs created on the Hyper-V Server: VM1, VM2 and VM3. By default, these VMs are not associated with a Virtual Network Switch and cannot communicate with each other. If any of the VMs running on Hyper-V (VM1, VM2
FIGURE 1.3: Virtual Network Types and Maximum Configuration. As you can see in Figure 1.3, a Hyper-V Virtual Machine supports two types of network card: Legacy and VMBus NICs. Support for the Legacy Network Card is included for Guest Operating Systems which are not supported by Hyper-V. There can be up to 4 Legacy Network Adapters. You install a Legacy Network Adapter by opening the Virtual Machine's properties and selecting "Add New Hardware." Legacy Network Adapters use the Device Emulation architecture to communicate with the Parent Partition and to access network resources. Please check the MSDN article that lists all the Guest Operating Systems which are supported on Hyper-V: Guest operating systems that are supported on a Hyper-V virtual machine. VMBus Network Adapters are available only when you install the Integration Services component, which leverages the VMBus architecture for the best networking performance. There can be a maximum of 8 VMBus Network Cards.
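The adapter limits above (up to 4 legacy NICs, up to 8 VMBus NICs, with VMBus NICs requiring Integration Services) can be expressed as a small validation check. This helper is a hypothetical sketch of those constraints, not a real Hyper-V API.

```python
# Hypothetical validation of the per-VM adapter limits described in the text.
MAX_LEGACY_NICS = 4  # emulated adapters; work without Integration Services
MAX_VMBUS_NICS = 8   # synthetic adapters; require Integration Services

def nic_config_valid(legacy, vmbus, integration_services_installed=True):
    """Check a proposed VM NIC configuration against the stated limits."""
    if vmbus > 0 and not integration_services_installed:
        return False  # VMBus NICs only exist once Integration Services is in
    return legacy <= MAX_LEGACY_NICS and vmbus <= MAX_VMBUS_NICS
```

So a fully loaded VM can carry 4 legacy plus 8 VMBus adapters, but a fifth legacy adapter, a ninth VMBus adapter, or any VMBus adapter without Integration Services would be rejected.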
Note There is no support for using a wireless NIC directly with Virtual Machines running on Hyper-V. Instead you need to create a bridge between the wireless NIC and a Virtual Machine NIC.
FIGURE 1.4: Properties of the Physical NIC. When you create the External Network Virtual Switch using Virtual Network Manager, the Hyper-V components (via the VMMS.EXE process) make the following changes:
1. Unbind the following services, protocols and clients from the Physical NIC: Client for Microsoft Networks; File and Print Sharing for Microsoft Networks; TCP/IP Protocol IPv4; TCP/IP Protocol IPv6; and any other service, client or protocol.
2. Bind the "Microsoft Virtual Network Switch Protocol" to the Physical NIC.
3. Create a new network connection in the Network Connections folder with the name you specified when creating the External Virtual Network Switch. Let's say the name you gave the Virtual Switch is "EXT NET Switch."
4. Bind the following services, protocols and clients to the External Virtual Network Switch (EXT NET Switch): Client for Microsoft Networks; File and Print Sharing for Microsoft Networks; TCP/IP Protocol IPv4; TCP/IP Protocol IPv6.
5. Unbind the "Microsoft Virtual Network Switch Protocol" from the External Virtual Network Switch (EXT NET Switch).
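The net effect of this re-binding sequence can be summarized as a before/after model: the physical NIC ends up carrying only the virtual switch protocol, while the new connection inherits the usual services. This is a deliberately simplified sketch of what the text describes; the set names and helper are assumptions, and no real bindings are touched.

```python
# Simplified model of the binding changes made for an External switch.
SERVICES = {
    "Client for Microsoft Networks",
    "File and Print Sharing for Microsoft Networks",
    "TCP/IP Protocol IPv4",
    "TCP/IP Protocol IPv6",
}
SWITCH_PROTOCOL = "Microsoft Virtual Network Switch Protocol"

def create_external_switch(physical_nic_bindings, switch_name):
    """Return (new physical NIC bindings, new virtual switch connections)."""
    # The physical NIC keeps only the virtual switch protocol...
    nic_after = {SWITCH_PROTOCOL}
    # ...and the new connection gets the ordinary services, minus it.
    switch_after = set(physical_nic_bindings) - {SWITCH_PROTOCOL}
    return nic_after, {switch_name: switch_after}

nic, switches = create_external_switch(SERVICES, "EXT NET Switch")
```

This is why, after creating an External switch, the host's IP configuration lives on the new "EXT NET Switch" connection rather than on the physical adapter.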
FIGURE 1.5: Network Connections with the External Virtual Network Switch. The bindings will also be changed. When you look at the properties of the Physical NIC and the Virtual Network Switch, the result will look as shown in Figure 1.6.
Properties of the Local Area Connection. FIGURE 1.6: Properties of the Virtual Network Switch and the Physical Connection. In the third step, the Virtual Network Manager unloads and reloads the network driver for the Physical NIC. That's the reason the network connection is lost temporarily when you create an External Virtual Network Switch, and any device connected to the Hyper-V Server has to reconnect. As an example, if you had connected to a Virtual Machine from a remote computer using VMConnect.exe, the connection is lost and you need to reconnect using VMConnect.exe. A warning about the temporary loss of network connectivity is shown when you create an External Virtual Network Switch. The creation of Private and Internal Virtual Network Switches works the same way as shown above, except that you don't need a physical NIC to create these switches.
Conclusion
In this article we saw how Hyper-V Networking is different, and the networking types introduced in the RTM version of Hyper-V. There are three networking types available: External, Internal and Private. We also saw the default configuration of Hyper-V Networking, in which no network communication is possible unless a Virtual Network Switch is created. The Hyper-V Virtual Network Switches can be used to connect an unlimited number of Virtual Machines. In the next articles in this series, we will focus on the following topics: Hyper-V Networking and packet flow; Hyper-V Networking examples (including VLAN configuration); Hyper-V Networking using SCVMM; configuring Hyper-V Networking using SCVMM; and three ways to configure VLAN trunking for Hyper-V Virtual Machines.
Unified Messaging
An Introduction to Unified Messaging
26 March 2009
by Brien Posey
For many years, everyone involved in the administration of offices has wished for a way of combining voicemail, fax and the PBX with the Exchange email system, particularly for highly mobile users. With Exchange Server 2007, this has now become a reality. Even though it forces the Exchange Administrator to become familiar with telephony, the benefits are such that it is well worth the effort.
One of the least understood Exchange 2007 features has got to be Unified Messaging. It isn't really all that difficult to install, but it does require some specialized hardware and a degree of telephony knowledge, and that seems to scare some administrators away. I'll start out by talking about what Unified Messaging is and about some of its features, as well as some common misconceptions associated with it. Later on I will talk about the benefits that Unified Messaging can bring to the enterprise, and about where I think Unified Messaging is headed in the future.
Figure A: Voice messages are stored in the user's Inbox. If you look at Figure B, you can see what it looks like when I open this voice message. As you can see in the figure, Outlook displays an audio plug-in just beneath the message's Subject line. This allows you to play the message directly through Outlook.
Figure B: This is what a voice message looks like. Remember earlier when I said that users often take notes about voice messages on paper, and a lot of times those notes end up getting lost? Well, if you look just beneath the audio plug-in, there is an Audio Notes field. Users can take notes about the message in this field. These
Figure D: Users are automatically sent a message containing their extension number and PIN. If users want to use OVA, they simply call the Subscriber Access number. When they do, Exchange Server prompts them to enter their extension followed by their PIN, which they enter using the touch tones on their telephone. This authentication method is used in place of the traditional username and password. Once users have logged in to OVA, they are free to listen to their e-mail messages or to verbally compose new ones. They also have the ability to interact with their calendar: they can schedule or reschedule meetings, mark a timeslot as busy, or do just about anything else that they could do through Outlook. The nice thing about OVA is that users can interact with it verbally or use the touchtone keypad on their phone. It has been my experience that the voice recognition software that OVA uses is quite good; in fact, I have even been able to use OVA to verbally check my e-mail in my car while driving.
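The touch-tone login described above, an extension followed by a PIN instead of a username and password, boils down to a simple lookup. The sketch below is purely illustrative: the directory, extension and PIN are invented, and real OVA deployments enforce additional policies (PIN complexity, lockout, and so on) not modeled here.

```python
# Hypothetical model of extension + PIN authentication as described above.
DIRECTORY = {"4821": "1359"}  # extension -> PIN (both values invented)

def ova_login(extension_digits, pin_digits):
    """Accept the call only if the PIN matches the stored one for the
    extension; unknown extensions simply fail."""
    return DIRECTORY.get(extension_digits) == pin_digits
```

The DTMF digits the caller keys in stand in for the credentials a desktop login would normally supply.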
a company implement a solution that they know is going to put them out of a job. Instead, they would probably look for a competing product that offered the promise of job security. I think that Microsoft probably realizes this and although they are committed to delivering on their software + services initiative, they will still make future versions of Exchange and OCS flexible enough to meet varying demands of their customers and complicated enough to appease Exchange admins.
Conclusion
Unfortunately, I don't have a crystal ball, so I can't tell you what the future holds for Unified Messaging or for unified communications in general. Even so, I can tell you that both Unified Messaging and OCS seem to be gaining popularity, so it is highly unlikely that they are going to go away. Organizations have been using PBX systems for many years as a way of giving them the freedom to configure their internal phone systems without having to involve the telephone company each time they want to make a change. I think that as future versions of these products are released, we will eventually see unified communications products such as Exchange and OCS start to replace traditional PBX systems in the enterprise.
by Desmond Lee
Office Communications Server 2007 R2 has some exciting new features, which make its deployment well worth considering. Desmond focuses on using the side-by-side method to migrate existing OCS RTM servers and clients over to R2, which you can likewise adopt to create an entirely new R2 installation.
Like many organizations, you may already be operating a Unified Communications infrastructure based on Live Communications Server 2005 SP1 or Office Communications Server 2007. The OCS 2007 R2 edition comes with some major improvements and exciting new productivity features that compel companies to look into deploying this new version to stay one step ahead. Whether you are beginning with bare soil or have an existing OCS setup, this article lays out the key considerations to help you move as early and as effortlessly as possible to the world of Unified Communications with OCS 2007 R2.
Migration Strategies
Those of you who are starting out from scratch may like to read my last article to kick-start your OCS deployment with R2. Although it is written from a virtualization perspective, the principles and instructions in that article apply equally to a production environment. Nevertheless, look out for pointers in this article to help ease your implementation woes and challenges. At the time of writing, running OCS in a virtualized environment in production is not supported, regardless of the underlying virtualization platform, i.e. Microsoft Hyper-V, VMware ESX Server or the recently announced vSphere.

Update (13 May 2009): Microsoft released a blog post on Office Communications Server 2007 R2 Virtualization. You can read and find out more about this topic here.

If you already have a production setup based on either Live Communications Server 2005 SP1 (LCS) or OCS 2007 RTM (some refer to it as R1), resist the urge to simply pop in the R2 DVD and run setup. There are significant architectural changes in the product, with updated AD environment and back-end database requirements. These must be fulfilled for a successful rollout, whether you are building the environment brand new or migrating to R2 in phases, coexisting with previously supported versions. Because there is no direct upgrade path to R2 from Office Communications Server 2003, OCS 2003 users will have to migrate to at least LCS 2005 SP1 first. You can take one of two alternative migration strategies on the road towards implementing R2: "uninstall/reinstall" and "side-by-side."
"Side-By-Side"
To keep operations going with minimal disruption and inconvenience to end-users, the side-by-side migration is typically the preferred approach. This comes at the expense of having to invest in new hardware for the R2 infrastructure, with varying degrees of complexity involved in maintaining proper coexistence. In addition, new names for the servers and pools, and new IP addresses, have to be assigned. Since OCS 2007 R2 is exclusively 64-bit at its core, it can only run on x64 versions of Windows Server 2003 SP2 or higher, or Windows Server 2008.
"Uninstall/Reinstall"
The uninstall/reinstall method allows you to preserve system settings and repurpose compatible hardware wherever possible. However, you will have to be prepared for longer periods of downtime to deal with major tasks such as backup/restore of existing user and configuration data, and server rebuilds.

To help you quickly get up to speed with migrating the core infrastructure to R2, I have attempted to distill the mass of information down to what I believe are the most important and essential steps. This can be taken as a guide to migrating using either the uninstall/reinstall or side-by-side method. The latter is the focus of this article, where migration starts from the internal network before moving out to the perimeter network. This is commonly known as an "inside-out" migration strategy. During the migration process, the R2 deployment operates independently, yet can communicate with existing LCS or OCS RTM deployments in the same AD framework.
Pre-Migration Tasks
Quick side-by-side Migration Checklist
1. Raise the Active Directory domain and forest functional level.
2. Move the default location of the OCS global settings.
3. Apply hotfixes to OCS RTM server roles and clients (co-existence).
4. Prepare the back-end database server and an x64 server for the R2 installation.
5. Run the preparation steps for the AD schema, forest and domain.
6. Prepare DNS records and digital server certificates.
7. Deploy R2 server roles and services, and install the R2 administrative tools.
8. Move users to be homed on R2 Front End servers.
9. Perform the inside-out migration of the remaining internal server roles, out to the DMZ.
10. Standardize and roll out the R2 client applications and supporting components.
11. Decommission the OCS RTM servers.
The very first step is to raise both the domain and forest functional level to at least the Windows Server 2003 mode required by R2 to function (see the TechNet Library here). To do this, log in to an active domain controller holding the Schema Master role with an appropriate administrative account in Active Directory. This task should be performed at the earliest possible moment so that the changes have sufficient time to propagate to all domain controllers throughout the AD forest. The domain controllers themselves can be running Windows Server 2003 SP1 or higher, or Windows Server 2008 (Standard or higher editions), with a minimum of one Global Catalog server available. The processor architecture does not matter and can be either x86 or x64.

By default, both LCS and the RTM version of OCS store the global settings in the root domain's System Container within AD. LCS/OCS servers must be able to access these settings from anywhere in the forest. In an environment already installed with LCS 2005 SP1 or OCS 2007 RTM, the ability to change the location for storing the global settings from the System Container to the Configuration Partition, or vice versa, is disabled (see Figure 1).
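If you want to confirm the current functional levels before making any changes, you can query RootDSE from any domain-joined machine. The following is a minimal PowerShell sketch; the integer values are the documented AD constants, where 2 corresponds to Windows Server 2003 and 3 to Windows Server 2008:

```powershell
# Read the functional levels advertised by a domain controller via RootDSE.
# domainFunctionality / forestFunctionality: 0 = Windows 2000, 2 = 2003, 3 = 2008
$rootDSE = New-Object System.DirectoryServices.DirectoryEntry("LDAP://RootDSE")
"Domain functional level: " + $rootDSE.Properties["domainFunctionality"].Value
"Forest functional level: " + $rootDSE.Properties["forestFunctionality"].Value
```

Both values need to report 2 or higher before you proceed with the R2 preparation steps.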
Figure 1.

You can download the Microsoft Office Communications Server 2007 Global Settings Migration Tool and follow the instructions in the TechNet Library to relocate the OCS global settings. You must use the OCS RTM tools to move the global settings before launching the step to prepare the AD schema. Make sure that you stop all LCS and OCS services ahead of time and re-start them after the move. Otherwise, you will not be able to make this change once the schema has been upgraded to R2. The OCS global settings themselves are actually created during the Forest Prep step. Like raising an AD domain or forest functional level, this operation is a one-way street. Presently, no official tools are known to exist that can move the settings back, or enable relocation of the global settings after an R2 schema extension is complete. A clean installation of R2 will, by default, store its global settings in the Configuration Partition.

Why should I care if I have only a single-domain, single-forest AD structure? Basically, a number of AD sites are usually created to optimize network traffic in a decentralized AD topology involving multi-site deployments. If sub-domains are present, or added at a later stage, OCS servers and UC-enabled users deployed in the child domains will continue to reach out to a domain controller and Global Catalog server in the root domain to retrieve the global settings. Failure to move the OCS global settings could cause long startup times for the core OCS services, synchronization and update errors, long replication delays, inter-domain instant messaging/presence issues, or user login problems. This becomes even more critical if you are considering deploying Enterprise Voice (VoIP) for mobile users across disparate geographical locations, with anywhere access via the Microsoft Office Communicator client (MOC) or a supported browser in R2.
You therefore have to move the OCS global settings to the Configuration Partition from the pre-R2 default location. Also known as a Naming Context (NC), the Configuration Partition is one of three stores in AD (Schema, Configuration and Domain, the last being the System Container in OCS-speak), of which only the first two NCs are replicated to every domain controller in the forest. In so doing, the global settings become directly accessible on local DCs and GCs in the respective domains, without the need to traverse slow, costly and often unreliable WAN links. You can access the various global settings using the R2 administrative snap-in by navigating to the Forest - <FQDN> node, selecting Properties, and then clicking the Global Properties, Voice Properties or Conferencing Attendant Properties menu options.
Infrastructure Readiness
To ensure that R2 servers can operate problem-free in an existing OCS RTM environment, you should download and apply the list of updates to the RTM server roles (Table 1). Check that you have updated documentation of your current environment (including a user account inventory), and make a full system backup before you go ahead with any changes. Since it usually takes an organization much more time to test and deliver patches to its fleet of Windows client machines, I suggest that you also prepare the user base running the MOC client well ahead of the first batch of R2 users going live. Until all clients are migrated and homed on R2 server pools, different MOC versions will negotiate and continue to communicate using the most common feature set available to them. Eventually, you should develop a plan to roll out the MOC R2 client to all your users after the entire end-to-end infrastructure has been upgraded to the R2 equivalents.

Each update in Table 1 applies to one or more of the following: all OCS server editions and roles, the Mediation Server role, or the MOC 2007 client. The notes for the updates are:

- Supersedes KB952783 (Aug 2008); fixes issues described in KB958560 (Nov 2008) and KB958561 (Nov 2008).
- Supersedes KB946763 (Jan 2008); fixes issues described in KB957489 (Oct 2008), KB957595 (Oct 2008) and KB957648 (Oct 2008).
- Supersedes KB952780 (Aug 2008); fixes issues described in KB957490 (Oct 2008) and KB957648 (Oct 2008).
- Supersedes KB957465 (Dec 2008), with fixes for a number of issues.
- Fixes registry extension permission errors during program installation on Vista and Windows Server 2008; must be applied before installing the R2 administrative tools. Applies to machines with .NET Framework 2.0 SP1 on Windows (x86 and x64).
Table 1: OCS and MOC RTM updates.

If you are installing an R2 Enterprise pool, a dedicated Windows server (or active/passive cluster) running Microsoft SQL Server 2005 SP2 or SQL Server 2008 must already be set up and configured on a separate physical server. Both the operating system and the database application can be either 32- or 64-bit editions. A new database instance will be created automatically when you install the R2 Enterprise pool; this is necessary because of the back-end database schema change in R2. The same box can host multiple SQL databases and instances supporting the unique needs of the Enterprise pool, Archiving or Monitoring Server R2 server roles. Nonetheless, it is not recommended practice to collocate the pool's back-end database and the archiving database, for performance reasons. Note also that collocating any R2 server roles on the back-end SQL Server itself remains unsupported. For an R2 Standard Edition setup, a SQL Server 2005 Express Edition SP2 database is created automatically on the local machine as part of the setup process, to store user and configuration data.
Application Server

The new Application Server is an integrated component of an R2 Front End server, comprising the Conferencing Attendant, Conferencing Announcement Service, Response Group Service and Outside Voice Control. Each of these UC applications can be activated individually. The Application Server component always runs as a separate process within the Front End server and cannot be segregated for deployment on another machine in the pool.
Other than making sure that the hardware and software prerequisites are met, you should be ready to supply the passwords for the RTCService and RTCComponent service accounts, created during the previous OCS installation, when prompted during setup. For an R2 Standard Edition or Enterprise consolidated topology setup on Windows Server 2008 to succeed, you need to separately install the Web Server (IIS) server role, then select the IIS 6 Management Compatibility and security role services (Figure 2). On top of that, you should arrange for the required number of digital server certificates, configured with the correct Fully Qualified Domain Name (FQDN) of each distinct R2 server role. You will need these at the "Configure Certificate" setup step. An OCS infrastructure relies heavily on digital certificates to enforce server and client authentication, as well as to secure all communications channels (signaling and media) with encryption.
Figure 2.

Nowadays, you can procure what are known as "Unified Communications (UC) Certificates" from public root Certificate Authorities (CAs) such as Entrust. This type of certificate is designed to be compatible with Exchange Server 2007 and OCS 2007. It supports multiple unique domains and hostnames, which is inherently more secure than a wildcard certificate (the latter is not supported in any OCS version). Also, support for the X.509v3 Subject Alternative Name (SAN) extension makes it possible to deploy the same SSL certificate for multiple virtual hosts, applications or different SIP domains on the same physical box. It is worth noting that certain R2 server roles require a SAN certificate.

To make certificate management easier and further contain cost, consider setting up internal Windows Server 2008 Enterprise root and subordinate CAs integrated with AD, and turn on the automatic web enrollment feature. For external, public-facing R2 server roles, you should continue to procure and deploy certificates from trusted public root CAs, to avoid unnecessary errors caused by unknown or unrecognized certificates generated by private, internal CAs.

You should now be able to execute the remaining setup steps to arrive at a functional side-by-side R2 environment, comparable to an entirely new installation. I recommend that you run the R2 validation tests at the end, and verify the core functionality with a small test bed of RTM and R2 users. Once you feel comfortable and confident enough, you can plan for the definite move of production users over to their new home pool.
Administration
Unlike with their predecessors, you need to manually install the updated set of R2 administrative tools using the Office Communications Server 2007 R2 Deployment Wizard. They essentially come in four different flavors: a new Office Communications Server 2007 R2 administrative snap-in, management components for Active Directory Users and Computers (dsa.msc), a snap-in extension for the Computer Management console, plus the LCSCmd.exe command-line tool. Furthermore, other standalone administrative tools must be installed to administer new R2 applications such as the Group Chat Administration Tool and the Response Group Service. While R2 itself is available only in a 64-bit edition, the OCS 2007 R2 administrative tools ship in both x86 and x64 versions. Supported platforms include Windows Server 2003 SP2, Windows Server 2008 (x86, x64) and Vista Business or Enterprise with SP1 (x86, x64). Be aware that you need to apply the relevant patches for the latter two platforms beforehand.
Conclusion
You have learnt that there are two migration strategies you can consider when deploying OCS 2007 R2 in your organization. In this article, I focused on using the side-by-side method to migrate existing OCS RTM servers and clients over to R2. This approach can similarly be adopted for an entirely new R2 installation, since the R2 deployment operates independently from any existing OCS RTM or LCS 2005 SP1 servers in the environment. After raising the Active Directory domain and forest functional levels to the required mode, and laying the groundwork by relocating the OCS global settings, you apply the appropriate hotfixes to ensure peaceful co-existence of OCS RTM and R2. Once the relevant digital server certificates were prepared, we launched into the actual R2 setup process to deploy server roles and services. A successful deployment enables users to be moved to an R2 home server, and migration to continue in phases with the remaining internal server roles. I have also shown you that it is necessary to manually install the new and updated R2 administrative tools, while keeping the previous versions to administer pre-R2 user settings. In Part 2 of this article, I'll look into recommendations for standardizing the client desktop to support the different R2 applications, other infrastructure concerns, and migration of the remaining internal server roles before moving out to the Edge servers in the screened network.
References
Supported Migration Paths and Coexistence Scenarios
http://technet.microsoft.com/en-us/library/dd425356(office.13).aspx

Migration From Office Communications Server 2007
http://technet.microsoft.com/en-us/library/dd572505(office.13).aspx

Microsoft Office Communications Server 2007 R2 Resource Kit (ISBN: 9780735626355)
http://www.microsoft.com/learning/en/us/Books/13113.aspx

Microsoft Office Communications Server 2007 R2 Documentation
http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=e9f86f96-aa09-4dca-9088-f64b4f01c703
PowerShell
Managing Exchange 2007 Mailbox Quotas with Windows PowerShell
24 November 2008
by Ben Lye
The use of PowerShell with Exchange Server 2007 can do a great deal to ease the task of managing mailbox quotas. You can, for example, use scripts to customize quota messages, retrieve mailbox sizes, and set the quotas. Ben shows you how, in the 2nd installment of his Top Tips for SysAdmins.
Most Exchange organizations use mailbox quotas to help users manage the size of their mailbox and to keep storage utilization in check. Managing quotas can be a burden on Exchange administrators, the help desk, and users, but with a few tweaks things can be made easier.
Note: Setting the cache refresh intervals too low can adversely affect the performance of Exchange. Incorrectly editing the registry can cause serious problems that may require you to reinstall your operating system, and problems resulting from editing the registry incorrectly may not be resolvable. Before editing the registry, back up any valuable data. You need to be a local administrator on the Exchange server in order to edit the registry.
Start the registry editor on your Exchange 2007 Mailbox server, and locate the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem key.
Create the "Mailbox Cache Age Limit" value: right-click ParametersSystem, select New, and then select DWORD value. Name the new DWORD value "Mailbox Cache Age Limit". Right-click Mailbox Cache Age Limit, and then click Modify. Enter a decimal value of 20 (20 minutes).
Next, locate the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchange ADAccess key. Create the "CacheTTLUser" value: right-click MSExchange ADAccess, select New, and then select Key. Name the new key Instance0. Right-click Instance0, select New, and then select DWORD value. Name the new DWORD value "CacheTTLUser". Right-click CacheTTLUser, and then click Modify. Enter a decimal value of 300 (5 minutes).
Alternatively, copy this text into a file called MailboxCache.reg, then import it into the registry of each of your Exchange 2007 Mailbox servers:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem]
"Reread Logon Quotas Interval"=dword:000004b0
"Mailbox Cache Age Limit"=dword:00000014

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchange ADAccess\Instance0]
"CacheTTLUser"=dword:0000012c

The Exchange Information Store service needs to be restarted for the change to become effective. More information about these changes can be found on the Microsoft TechNet website.
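If you prefer to script the change rather than edit the registry by hand or import a .reg file, the same values can be set with PowerShell run locally on each Mailbox server. This is a sketch; the key and value names are exactly those used above, and the decimal values match the hexadecimal ones in the .reg file:

```powershell
# Registry locations used by the Information Store and DSAccess caches
$isKey = "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem"
$adKey = "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchange ADAccess\Instance0"

# Re-read logon quotas every 1200 seconds; cap cached mailbox info at 20 minutes
New-ItemProperty -Path $isKey -Name "Reread Logon Quotas Interval" -Value 1200 -PropertyType DWord -Force
New-ItemProperty -Path $isKey -Name "Mailbox Cache Age Limit" -Value 20 -PropertyType DWord -Force

# Create the Instance0 key if it does not exist, then set the 300-second user cache TTL
if (-not (Test-Path $adKey)) { New-Item -Path $adKey | Out-Null }
New-ItemProperty -Path $adKey -Name "CacheTTLUser" -Value 300 -PropertyType DWord -Force

# Restart the Information Store so the new values take effect
Restart-Service MSExchangeIS
```

Remember that restarting the Information Store briefly dismounts the databases, so schedule this change outside working hours.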
by Jonathan Medd
PowerShell makes a lot of sense as a means of gathering information, and automating large sections of your administration tasks. It is not just for the latest version of Exchange, though: PowerShell can be used on previous versions too. Jonathan Medd gets you started!
Introduction
When Microsoft released Exchange 2007 they built the Exchange Management Console on top of Windows PowerShell 1.0; when you execute commands from the console, it uses PowerShell cmdlets underneath to carry out the requested actions. Exchange 2007 SP1 ships with around 400 native PowerShell cmdlets to efficiently configure, manage and automate your messaging environment; everything from mailbox and database management, through junk mail settings, to public folders. Whilst this has been a fantastic move forward for those organisations who have migrated from previous Exchange versions or other messaging platforms, or who have a completely fresh Exchange 2007 installation, not every Exchange administrator is lucky enough to be in that boat, and may well still be managing an Exchange 2003 (or earlier) environment. The aim of this article is to demonstrate how you can use standard PowerShell techniques to query information stored in Event Logs, WMI and Active Directory, to make management of your pre-Exchange 2007 environment more efficient than with the standard GUI-based tools provided.
You will need the PowerShell downloadable executable; there are different versions for XP, Vista, and x86 or x64, available here: http://www.microsoft.com/windowsserver2003/technologies/management/powershell/download.mspx. For the WMI queries we are going to run, you will need to provide an account with local admin rights on the Exchange server, since that is a requirement for remote WMI access.
By default, PowerShell will return a standard set of properties for the WMI class you have queried, as above; however, there are typically many more properties hidden away underneath which can be exposed. You can pipe the results of your WMI query to the Format-List cmdlet to discover what properties are available to you.
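As a sketch of this technique (the namespace and class are those used later in this article, and ExchangeServerName is a placeholder for your own server name):

```powershell
# Show every property the Exchange_Mailbox WMI class exposes, using the first
# mailbox returned as a sample
Get-WmiObject -Namespace root\MicrosoftExchangeV2 -Class Exchange_Mailbox `
    -ComputerName ExchangeServerName |
  Select-Object -First 1 |
  Format-List *
```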
Now that you know what's available, you can stipulate the particular properties you wish to display, and PowerShell will then output only the information you are specifically interested in.
Moving on, we can now use some slightly more advanced techniques to query the event logs on remote machines. If you are an experienced Exchange administrator, it is likely that at some point you will have taken your mailbox databases offline so that they can be defragged to reclaim the white space inside them. In a large environment you may have hundreds of databases, and looking for candidates for defragging could be a time-consuming process. Typically you would use an event log entry, similar to the one below from the Application log, which shows how much free space is in a database after the online defragmentation process has completed. You would not want to manually browse through these records on multiple Exchange servers over multiple mailbox stores.
Instead, it is far better to use PowerShell to query those entries in the event logs and return only the information you need; for instance, only those databases which have over 2GB of free space in them. The following command will carry out most of this work for you:

Get-WmiObject -Computer ExchangeServerName -Query ("Select * from Win32_NTLogEvent Where Logfile='Application' and EventCode = '1221' and TimeWritten >= '" + $WmidtQueryDT + "'")

(Do not worry too much about $WmidtQueryDT; it's supplied for you in the full script file, Get-FreeSpace.ps1, where we use some .NET code to get a date into a format that WMI likes.)

Step by step, we use the Get-WmiObject cmdlet to:

- query the remote server ExchangeServerName, by using the -Computer parameter;
- specify a SQL-style query with the -Query parameter;
- select everything from the Win32_NTLogEvent WMI class;
- filter it on the Application log;
- filter it on Event ID 1221;
- filter it on the last 24 hours (this assumes that you run online maintenance on your databases daily).
This will return the information we need. In the script file we then run some extra code to read those event log entries, filter them down to only those whose message text includes a value over 2GB, and export the results to a CSV file ready for use.
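That filtering step can be sketched like this. The $events variable is assumed to hold the Win32_NTLogEvent objects returned by the query above, and the regular expression is an assumption based on the typical wording of the 1221 message text; adjust it to match what your servers actually log:

```powershell
# Keep only databases reporting more than 2GB of free space after online defrag.
# Event 1221 messages read along the lines of "... has 3107 megabytes of free space ..."
$results = foreach ($event in $events) {
    if ($event.Message -match '(\d+) megabytes') {
        $freeMB = [int]$matches[1]
        if ($freeMB -gt 2048) {
            # Build a simple object with the values we want in the CSV
            New-Object PSObject |
              Add-Member NoteProperty Server $event.ComputerName -PassThru |
              Add-Member NoteProperty FreeMB $freeMB -PassThru |
              Add-Member NoteProperty Message $event.Message -PassThru
        }
    }
}
$results | Export-Csv C:\Scripts\FreeSpace.csv -NoTypeInformation
```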
A common scenario for Exchange management is the demand from users for large mailboxes, and consequently lots of storage. Your manager may well want to know who is using up all the extra storage he only just paid for last month, which you told him you needed. Using the GUI Exchange tools, it is not easy to aggregate this information across your server farm if you have multiple databases over many Exchange servers. However, we can take the previous example, make some modifications to it and, hey presto, you will have the report for your manager. This time, instead of sorting by MailboxDisplayName, we sort by the Size of the mailbox; then we use the Select-Object cmdlet with the -First parameter to return only a subset of the results, in this case 10. This will give us the top 10 largest mailboxes (you can obviously adjust the number of results returned to suit your needs). Since we return the Exchange server name, Storage Group and Mailbox Store, it also makes it simple to track these mailboxes down.

Get-WmiObject -Namespace root\MicrosoftExchangeV2 -Class Exchange_Mailbox -Computer ExchangeServerName |
  Sort-Object Size -Descending |
  Select-Object -First 10 |
  Format-Table MailboxDisplayName, ServerName, StorageGroupName, StoreName, Size -Auto
It's not just mailbox data we can query with Get-WmiObject, though; another useful Exchange WMI class is the Exchange_Logon class. Not surprisingly, this will tell us who is logged into Exchange, along with other useful information such as what client they are using. For instance, you might wish to report on which of your users are logged in using Outlook Web Access. This time we use the -Filter parameter of Get-WmiObject to return a restricted amount of data; in this case, entries where the ClientVersion reported by the Exchange_Logon class matches "HTTP". We also exclude entries where the logged-on user is NT AUTHORITY\SYSTEM, which is something we are obviously not particularly interested in. We may well get multiple results per user returned via the Exchange_Logon class for the same client connection, so when sorting the data with Sort-Object we use the -Unique parameter so that we only get one relevant result per user in the output.

Get-WmiObject -Namespace root\MicrosoftExchangeV2 -Class Exchange_Logon -ComputerName ExchangeServerName -Filter "ClientVersion = 'HTTP' and LoggedOnUserAccount != 'NT AUTHORITY\\SYSTEM'" |
  Sort-Object MailboxDisplayName -Unique |
  Format-Table LoggedOnUserAccount, MailboxDisplayName, ServerName, StorageGroupName, StoreName, ClientVersion -Auto
(In the demo environment used for this article I was not able to simulate multiple logons, but you can imagine the kind of output you would receive in a larger environment.)

Another common best practice for an Exchange administrator is to keep mailboxes spread as evenly as possible over Mailbox Stores and Storage Groups. To do that, though, you need easy access to the number of mailboxes across these areas; again, the out-of-the-box Exchange GUI tools do not help you with this task, particularly across large environments. PowerShell can again come to your rescue and very simply provide this information. We use the Exchange_Mailbox class again, but this time, instead of returning names, locations and sizes of mailboxes, we count the results and return that data. We store the names of the Storage Groups we are interested in, in a text file (as below), and use the Get-Content cmdlet to read those names into a variable.

$StorageGroups = Get-Content C:\Scripts\StorageGroups.txt
We then use a foreach statement to loop through each of these Storage Groups and count the number of mailboxes in each one. You will see below that the WMI query is filtered on the StorageGroupName property, and the entire command is encapsulated in brackets with .Count added at the end to give us the total for that query. Finally, Write-Host is used to display some text and the results on screen.

foreach ($SG in $StorageGroups) {
    $Total = (Get-WmiObject -Namespace root\MicrosoftExchangeV2 -Class Exchange_Mailbox -Computer ExchangeServerName -Filter "StorageGroupName = '$SG'").Count
    Write-Host $SG ":" $Total "users"
}
It doesn't take a genius to extend this to be more granular and look at Mailbox Stores instead of Storage Groups. Provide your list of Mailbox Stores to PowerShell with:

$MailboxStores = Get-Content C:\Scripts\MailboxStores.txt
Then use a foreach statement to loop through each of the stores and return the number of users in each one.
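That loop might look like the following sketch, which simply mirrors the Storage Group version but filters on the StoreName property instead:

```powershell
# Count the mailboxes homed in each Mailbox Store listed in the text file
foreach ($Store in $MailboxStores) {
    $Total = (Get-WmiObject -Namespace root\MicrosoftExchangeV2 -Class Exchange_Mailbox `
        -Computer ExchangeServerName -Filter "StoreName = '$Store'").Count
    Write-Host $Store ":" $Total "users"
}
```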
To save typing all of those lines out each time you want to run the code, you could turn it into a filter like this:

filter GetExchangeServers {
    $root = New-Object System.DirectoryServices.DirectoryEntry("LDAP://RootDSE")
    $configpartition = [adsi]("LDAP://CN=Microsoft Exchange,CN=Services," + $root.configurationNamingContext)
    $searcher = New-Object System.DirectoryServices.DirectorySearcher($configpartition)
    $searcher.filter = '(objectclass=msExchExchangeServer)'
    $ExchServer = $searcher.FindAll()
    $ExchServer | foreach {$_.properties.name}
}

You can then either include it in a script or, if it's something you might run regularly, include it in your PowerShell profile (http://www.microsoft.com/technet/scriptcenter/topics/winpsh/manual/profile.mspx).
Note A filter is similar to a function in PowerShell, however it enables you to take advantage of the pipeline.
What we have now though is something potentially very powerful. By putting the GetExchangeServers filter together with one of our earlier WMI queries we can now run that query against all of our Exchange Servers without even having to specify their names! To find the top 10 largest mailboxes on each Exchange server run the GetExchangeServers filter and pipe it to the Get-WmiObject command.
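Putting the pieces together might look like the following sketch, which reuses the Exchange_Mailbox query from earlier against every server name the filter emits:

```powershell
# For every Exchange server found in AD, report its 10 largest mailboxes
GetExchangeServers | foreach {
    Get-WmiObject -Namespace root\MicrosoftExchangeV2 -Class Exchange_Mailbox -Computer $_ |
      Sort-Object Size -Descending |
      Select-Object -First 10 |
      Format-Table MailboxDisplayName, ServerName, StoreName, Size -Auto
}
```

Because the server list is discovered from Active Directory at run time, the report automatically picks up servers added to (or removed from) the organization.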
Simply click one of the script nodes, and the script will go off and do its work in the background, returning the results into the grid pane. For instance, if you choose "Top 10 Largest Mailboxes Per Server", the exact same PowerShell script as previously mentioned will be used, and you will see results like those below.
Included is a section for managing the users and groups part of Exchange management. These scripts require the AD PowerShell cmdlets, also provided free by Quest (http://www.quest.com/activeroles-server/arms.aspx). A good example of their use is the "GetEmptyDistributionGroups" node. It's pretty common for a distribution group to be set up for short-term use, say for a project; at the end of the project people are removed from the group, but nobody ever thinks to tell the administrator that the group is no longer needed. Simply run this PowerShell script and it will give you a list of all distribution groups which are empty; it's even got a Delete button, so that you can remove them from the same console.
Conclusion
So, you have seen that just because you may not be using Exchange 2007, you don't have to be a second-class citizen in terms of managing Exchange from scripts or the command line. Using PowerShell and underlying technologies like WMI and Active Directory, you can quickly and simply gather information from your Exchange environment, where using the standard GUI tools for the same tasks requires far more manual effort. Take one further step by scheduling these scripts to run, and you will soon be automating large sections of your admin tasks, and be free to get on with those interesting projects you never have time to work on.