ITTC MS SysAdmin 1.0.B
Forest trust
Realm trust
Managing Trusts
Access Resources using External/Forest Trusts
Selective Authentication
Read-Only Domain Controllers
Password Replication on RODCs
Significant Points for RODCs
Managing and Maintaining an Active Directory Infrastructure
Managing Schema Modifications
Replication
Intra-Site
Inter-Site
Forest and Domain Replication
Intra-Site Replication
Active Directory Sites
Site Creation
Creating Subnets
Inter-Site Replication
Site Links
Bridgehead Servers
Site Link Bridges
Inter-Site Transports
Managing AD Sites
Creating Boundaries with Subnets
Bridgehead Selection Process
Manually Selecting Bridgeheads
Monitoring Replication
Event Viewer
DNS Interfaces
Advanced DNS Server Properties
Test the DNS Server service
Manage and Monitor DNS
DNS Debug Logging
Group Policies and DNS
Securing DNS
DNS Naming Considerations
Enhancements to DNS in 2008
GlobalNames Zone
Background Zone Loading
Enhanced Support for IPv6
WINS Integration with DNS
Troubleshooting DNS Issues
Incorrect query results
Too much zone transfer traffic
Event Viewer
Event Subscriptions
Configure Forwarding computer
Configure Collecting computer
Check the forwarded Event Viewer entries
DFS -- Distributed File System
DFS Namespaces
Create a namespace
DFS Replication
Create a replication group
Create a replicated folder
DFS Requirements
DFS Commands
Server Manager
Domain vs Workgroup
A workgroup is Microsoft's terminology for a peer-to-peer PC computer network.
Microsoft operating systems in the same workgroup may allow each other access to their files,
printers, or Internet connection. Members of different workgroups on the same local area
network segment and TCP/IP network can only access resources in workgroups to which they
are joined.
A Windows Server domain is a logical group of computers running versions of the Microsoft
Windows operating system that share a central directory database. This central database
(known as the Active Directory starting with Windows 2000, also referred to as NT Directory
Services on Windows NT Server operating systems, or NTDS) contains the user accounts and
security information for the resources in that domain. Each person who uses computers within
a domain receives his or her own unique account, or user name. This account can then be
assigned access to resources within the domain.
In a domain, the directory resides on computers that are configured as "domain controllers." A
domain controller is a server that manages all security-related aspects between user and domain
interactions, centralizing security and administration. A Windows Server domain is normally
more suitable for moderately larger businesses and/or organizations.
The Windows workgroup, by contrast, is the other model Windows provides for grouping
computers in a networking environment. Workgroup computers are
considered to be 'standalone' - i.e. there is no formal membership or authentication process
formed by the workgroup. A workgroup does not have servers and clients, and as such, it
represents the Peer-to-Peer (or Client-to-Client) networking paradigm, rather than the
centralised architecture constituted by Server-Client. Workgroups are considered difficult to
manage beyond a dozen clients, and lack single sign on, scalability, resilience/disaster recovery
functionality, and many security features. Windows Workgroups are more suitable for small or
home-office networks.
A domain does not refer to a single location or specific type of network configuration. The
computers in a domain can share physical proximity on a small LAN or they can be located in
different parts of the world. As long as they can communicate, their physical position is
irrelevant.
What Is a Schema?
The Active Directory schema defines the kinds of objects, the types of information about those
objects, and the default security configuration for those objects that can be stored in Active
Directory.
The Active Directory schema contains the definitions of all objects, such as users, computers,
and printers that are stored in Active Directory. On domain controllers running Windows
Server 2003, there is only one schema for an entire forest. This way, all objects that are created in
Active Directory conform to the same rules.
The schema has two types of definitions: object classes and attributes. Object classes such as
user, computer, and printer describe the possible directory objects that you can create. Each
object class is a collection of attributes.
Attributes are defined separately from object classes. Each attribute is defined only once and
can be used in multiple object classes. For example, the Description attribute is used in many
object classes, but is defined only once in the schema to ensure consistency.
You can create new types of objects in Active Directory by extending the schema. For example,
for an e-mail server application, you could extend the user class in Active Directory with new
attributes that store additional information, such as users’ e-mail addresses.
On Windows Server 2003 domain controllers, you can reverse schema changes by deactivating
them, thus enabling organizations to better exploit Active Directory’s extensibility features.
You may also redefine a schema class or attribute. For example, you could change the Unicode
String syntax of an attribute called Department to Unit.
Existing domain - the domain controller becomes an additional domain controller in an existing
domain. The domain name must be specified along with the Domain Admins credentials. AD is
installed and it receives all its information by replicating with an existing domain controller in
the domain.
NOTE: DCPROMO is also used to remove Active Directory. When the command is used, the
system detects it is already a domain controller and will start the wizard to remove Active
Directory.
After installing a domain controller, the settings for that domain controller can be saved as an
answer file through the wizard.
Use a current backup either from a network share or removable media to install Active
Directory. Must be from a domain controller in the same domain. Backup cannot be any older
than the "tombstone lifetime", typically 180 days. The more recent the backup, the less
replication traffic will be created during synchronization of the new domain controller after AD
installation. At the Run command, type DCPromo /adv.
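For reference, these two switches look like the following at a command prompt (Windows Server 2008 syntax; the answer file path is a hypothetical example, not a value from this course):
dcpromo /unattend:C:\deploy\dcanswer.txt
dcpromo /adv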
FQDN Rules:
The FQDN can be viewed by using the IPCONFIG command at a prompt or by opening the
properties of the My Computer icon. One important difference between NetBIOS names and an
FQDN name is that a NetBIOS name cannot begin with a number. An FQDN is limited to 254
characters. DCs are limited to 155 characters for a FQDN. To read a FQDN, start from right to
left, beginning with the invisible '.' and proceeding to the far left, which is the host name.
NetBIOS Names
Characters a-z, A-Z, 0-9 and hyphens (not case sensitive).
Cannot begin with a number.
No more than 15 characters.
A 16th character is automatically added to identify the service.
Spaces are OK.
Not case-sensitive, but always displayed in all capital letters.
During the creation of an Active Directory domain, that domain must be provided with DNS
and NetBIOS domain names. After creating a DNS name for the domain, a NetBIOS name will
automatically be generated using the first 15 characters, not including the dotted notation, of
the domain's DNS name. All NetBIOS names must be unique so the WINS server will not allow
the domain to have a duplicate NetBIOS name. If the first 15 characters are exactly the same, it
will give an error and the administrator must alter the name in some manner. NetBIOS is a flat
naming structure and it doesn't distinguish between domain, computer or user names. If there
is already a NetBIOS name of CHARLES for a domain, a computer or user cannot have a
NetBIOS name of CHARLES.
Use .net or .local to separate your public domain namespace and the private
namespace.
DNS Zones
Database file representing a portion of the namespace
Divided up based on needs of network
Single zone for the root domain with subdomains for each child domain
Single zone for the root domain and one subdomain for one child with a
separate zone for the other child domain
Cannot be two subdomains without the parent
A zone is a portion of the namespace that has a database file generated in DNS to support name
registration and name resolution for that zone. The namespace can be divided based on the
needs of the network.
A DNS domain is a portion of the namespace which allows multiple computers to have a name
in common. A zone must contain at least one domain and may contain multiple domains. E.g.
The left half of the slide shows one zone "fabrikam.com" which includes the parent domain
fabrikam.com and two child domains research and training. All records ending in
fabrikam.com, research.fabrikam.com, or training.fabrikam.com will be found in a single zone
file. The right half of the slide shows two zones. The first zone is called fabrikam.com and has
two domains fabrikam.com and research.fabrikam.com. The second zone has the single domain
training.fabrikam.com. Any records ending in .training.fabrikam.com are only on the DNS
server shown in the training.fabrikam.com domain. Zones are often divided like this to reduce
bandwidth consumption, allow for localized administration of DNS records, or to accommodate
differences that preclude sharing a single file between multiple entities.
Domains for an Active Directory tree can share the same zone. Since the parent and child
domain all share the same namespace (name of the parent), the parent will have an
authoritative server responsible for the entire namespace. If it is a larger network and the
infrastructure plan is to allow one of the child domains to take care of its own namespace, the
zone could be designated for just that child domain.
A zone cannot have two child domains together without the parent. They no longer have a
common namespace (parent is no longer involved), so they are not allowed to share a zone.
When creating zones, a parent must be with a child or a child can stand alone.
The zone name and the domain name match. Even though we have two separate naming
structures, an Active Directory namespace which represents objects in Active Directory and
DNS naming which represents resource records, the names look the same.
See also: DNS Namespace Planning (KB25468)
Resolver Cache
Each time a client receives an answer to a DNS query from a DNS server, an entry is made in
the resolver's cache. The next time the client needs to resolve a name, it checks its cache to see if
it has already resolved this name in the recent past. The resolver cache is cleared and the
HOSTS file's contents are reloaded into cache each time the computer is booted. The resolver
cache is also updated each time the HOSTS file is saved.
The HOSTS file is a text file that is available on the local system. It can be configured to provide
hostname to IP address resolution, but each entry must be entered manually. To use the HOSTS
file in a domain, the file must be modified on each machine manually.
To display the resolver cache, type IPCONFIG /displaydns. To clear this cache, type IPCONFIG
/flushdns.
Name Resolution
HOSTS File - Static, manually managed text file of name-to-IP mappings
DNS Server is a distributed database made up of Resource Records (RRs):
o A (Host): name to IP resolution for computers and printers
o AAAA: equivalent of A records, but for IPv6
o PTR (pointer): IP to name resolution
o NS (name servers): used to identify authoritative DNS servers
o SOA (start of authority): used to provide configuration to secondary zones
o SRV (service locator): used to locate Kerberos, GC, LDAP
o MX (mail exchanger): identifies mail servers
o CNAME (canonical name): aliases helpful for server consolidation
o WINSLookup: required to integrate WINS with DNS for down-level clients.
There are two ways to provide name resolution to clients in a Windows Server 2008 network:
Hosts file or DNS server. The Hosts file is a text-based file that can have mappings entered for
name to IP resolution. All entries are made manually and the file must be created on each client.
This could be useful if there was a specific name to IP address resolution needed for an
individual server and not desired for the remaining servers. Since the Hosts file is checked
before going to the DNS server, it provides a way of doing this type of specialized entry.
The HOSTS file is located in the %systemroot%\system32\drivers\etc\ folder (similar to
UNIX). Open in a text editor (Notepad or WordPad) and modify accordingly. Notice the number
sign (#) at the beginning of some of the lines. This indicates not to use these lines because they
are documentation only. The loopback address is in the Hosts file by default as localhost. When at a
command prompt, if you type ping localhost, it will resolve it to 127.0.0.1. After making any
entries, save the text file as the same name, "hosts" and do not allow Windows to add a file
extension or the file will not work.
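For illustration, a minimal Hosts file might look like the following; the host name and IP address are hypothetical examples only:
# Lines beginning with # are documentation only and are ignored
127.0.0.1       localhost
192.168.10.25   appsrv01.fabrikam.com       appsrv01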
Since the local system checks the Hosts file and cache first in the process of name resolution,
any entries in the Hosts file will be used instead of the DNS server entries. A stale entry in the
Hosts file will result in the client being unable to reach the server even if that server's record is
current in the DNS zones.
The DNS server provides a centralized database for all clients to register and use to resolve
hostnames. This can be a dynamic environment, if configured properly, where the resource
records are automatically recorded in the proper zone for that client. It provides a much better
way to maintain a name resolution environment than with distributed Hosts files.
Resource Records
The resource records are the entries made in the DNS server to represent the hosts, services and
other DNS specific items. The most popular resource records are listed below.
A - Host record, maps a FQDN to an IP address.
PTR - Pointer Record, maps an IP address to a FQDN (allows for reverse lookups).
CNAME - Canonical name, maps an alias to the actual A record.
SOA - Identifies key information about a zone including the authoritative server
NS - Identifies a name server that can answer queries for that zone
SRV - Identifies services within a specific domain including identifying domains, domain
controllers and sites. These records are required for Active Directory.
MX - Identifies a Mail server for a particular domain name. Needed for UNIX Sendmail, MS
Exchange, Novell Groupwise and other Mail Transfer Agents.
Some of the records can be created automatically. If it is necessary to create a resource record,
right-click the zone where the record is needed and select the appropriate record type. The
A(host) record is the only record which has an actual IP address entered. The remaining records
point to the A(host) record. Because of this, if the A (host) record is not correct, the other records
pointing to it will not function properly.
SRV Records
The SRV records are a very important part of the DNS server. They are created when Active
Directory is installed and the domain is created. The information contained in the SRV records
includes domain records, listing of domain controllers in each domain, the site structure
(associates domain controllers with the correct site), and pointers and information regarding
other services.
This information is required when: joining host systems to the domain, creating a new domain
controller in the domain, a computer or user performs a network logon, and connecting to
various services within Active Directory structure such as the Global Catalog. If the SRV
records are not available and accurate, these items will not work properly, if at all. Put another
way, "If DNS is broke, Active Directory is broke."
In addition to being identified by an FQDN in DNS and by a Windows full computer name,
domain controllers are also identified by the specific services that they provide. Windows uses
DNS to locate domain controllers by resolving a domain or computer name to an IP address.
This is accomplished by SRV resource records that map a particular service to the domain
controller that provides that service.
When a domain controller starts, the Net Logon service running on the domain controller uses
the DNS dynamic update feature to register with the DNS database the SRV resource records
for all Active Directory–related services that the domain controller provides. Therefore, a
computer running Windows can query a DNS server when it must contact a domain controller.
For Active Directory to function properly, DNS servers must provide support for SRV resource
records. SRV resource records allow client computers to locate servers that provide specific
services, such as authenticating logon requests and searching for information in Active
Directory. Windows uses SRV resource records to identify a computer as a domain controller.
SRV resource records link the name of a service to the DNS computer name for the domain
controller that offers that service.
SRV resource records also contain information that enables a DNS server to locate:
A domain controller located in a specific Windows domain or forest.
A domain controller located in the same site as a client computer.
A domain controller that is configured as a global catalog server.
A domain controller that is configured as the PDC emulator.
A computer that runs the Kerberos Key Distribution Center (KDC) service.
SRV Resource Records and A Resource Records
When a domain controller starts, it registers SRV resource records which contain information
about the services that it provides. It also registers an A resource record that contains its DNS
computer name and its IP address. A DNS server then uses this combined information to
resolve DNS queries and return the IP address of a domain controller so that the client
computer can locate the domain controller.
In Windows, domain controllers are also referred to as Lightweight Directory Access Protocol
(LDAP) servers because they run the LDAP service that responds to requests to search for or
modify objects in Active Directory.
All SRV resource records use a standard format, which consists of fields that contain the
information used to map a specific service to the computer that provides the service. SRV
resource records use the following format:
_service._protocol.name ttl class SRV priority weight port target
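For example, an LDAP SRV record registered by a domain controller could look like the following (dc1.fabrikam.com is an illustrative name, not taken from this course):
_ldap._tcp.fabrikam.com. 600 IN SRV 0 100 389 dc1.fabrikam.com.
Here _ldap is the service, _tcp the protocol, 600 the TTL, 0 the priority, 100 the weight, and 389 the port on which the target dc1.fabrikam.com offers the service.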
As domain controllers are added, they will register their information in the appropriate SRV
containers, also known as "the underscore subdomains". If records for domain controllers are
missing, make sure dynamic updates are allowed in the zone's properties. On the domain
controller that is missing SRV records, stop and restart the Netlogon service. An IPConfig
/registerdns will not update SRV records, only A and PTR records.
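A quick way to verify that the domain controller SRV records are present is to query them directly; the sketch below assumes the domain is named fabrikam.com, so substitute your own domain name:
nslookup -type=SRV _ldap._tcp.dc._msdcs.fabrikam.com
If no records are returned, confirm that dynamic updates are allowed on the zone and restart the Netlogon service as described above.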
Forwarder
When a DNS server cannot resolve a name request, it will then send the query to a Forwarder,
and if that fails, use its Root Hints which normally entails iterating the FQDN beginning with
the Internet's Root Name Servers. The Forwarder's role is to resolve the request or send it on to
the Roots for resolution. This can be a way to protect your internal network if you have multiple
DNS servers because there is only one connection between the internal network and the
Internet. If all DNS servers in an organization queried the Internet Roots, then there would be
multiple access points through the organization's firewall(s), which is not secure. To utilize the
forwarder for a single point of access, configure all of the other DNS servers with the IP address
of the Forwarder, the one system that is going to be able to send queries to the Internet.
Chained Forwarder
It is possible for a Forwarder to then send a query to another Forwarder. This creates a chain,
and the query is not sent to the Internet roots for resolution until every Forwarder in the chain
has been unable to complete it. If you had a main office and multiple branches where the main office had
the sole Internet connection (a hub and spoke arrangement), a Chained Forwarder arrangement
may be appropriate. Have one DNS server at the home office configured to use an ISP DNS
Server as a Forwarder. Each branch office would have its DNS server point to the central DNS
(Forwarder).
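One way to configure the forwarder list from the command line is with dnscmd; the address below is a hypothetical ISP DNS server, so use your provider's actual address:
dnscmd /resetforwarders 192.168.100.10
Each branch office DNS server would run the same command but list the central (hub) DNS server's address instead of the ISP's.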
Installing DNS
Manual Install
To install the DNS Server service, use Server Manager's Add Roles wizard. It is not necessary to
reboot when the install is complete. When installed manually, no zones are created and it is
considered a Caching Only server.
When installing a DNS server in a workgroup, this is the only method available.
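On Windows Server 2008 the role can also be added from the command line; a minimal sketch, assuming the role ID is DNS:
servermanagercmd -install DNS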
Default Installations
Basic install (not during DCPromo)
o Zones not created
Standard Primary - is the only copy of the zone that can be modified. A Standard
Primary is combined with servers hosting Secondary zone files in a traditional DNS zone of
authority. There can be only one Primary copy of the zone in a traditional DNS configuration of
a zone of authority.
Secondary - is a duplicate of the primary zone and is stored only as a text file. It is used for
name resolution only and uses Zone Transfers to replicate from its Master Name Server.
Stub - a partial copy of the Primary zone that includes only specific records. It has the Start of
Authority (SOA) record, all the Name Server (NS) records and the Host (A) record of the
authoritative server. When configured, the IP address of the server that hosts the zone is
indicated in order to create the Stub zone. Most often implemented on a parent domain to keep
updated name server records for a child domain.
Example: Given a zone of authority made up of 1,500 records managed by 4 name servers, the
Standard Primary zone would have all 1,500 records and any new records must be created in
that copy of the zone. A name server with a Secondary copy of the zone would also have 1,500
records. A third server with a Stub copy of the zone would have 9 total records - 1 SOA record,
4 NS records, and the 4 A records of the name servers.
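As a hedged sketch of how these zone types could be created from the command line, where fabrikam.com, contoso.com, the file name, and the master IP addresses are illustrative values only:
Standard Primary:  dnscmd /zoneadd fabrikam.com /primary /file fabrikam.com.dns
Secondary:         dnscmd /zoneadd fabrikam.com /secondary 192.168.1.5
Stub:              dnscmd /zoneadd contoso.com /stub 192.168.2.5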
Store the Zone in Active Directory (Available only if DNS Server is a Domain Controller) Select
this checkbox in order to store either the Primary or Stub zone in Active Directory. Storing a
Primary zone in the AD database makes it an ADIP zone (Active Directory Integrated Primary).
This provides ease of administration, conserves network bandwidth and increases security. The
DNS records then synchronize automatically as part of Active Directory Replication. By default,
the database replicates to all other domain controllers running the DNS server in the AD
domain where the primary is located. Additional settings are available to specify the replication
behavior of the database. It can be directed to replicate to all domain controllers in the forest or
to all domain controllers in the domain, whether or not they are running DNS server. Also a
custom replication scope can be created.
During AD replication, the data is encrypted before sending it to another domain controller.
This provides encryption of DNS records passing between DCs with ADIP zones. A
secondary zone on another DNS server can be utilized when the database is being stored in
Active Directory, but zone transfers are not secured by encryption.
KEY POINT
It is technically incorrect to refer to a DNS server as a primary or secondary. For example, a
server that was providing DNS and Active Directory Services could participate in many
different zones of authority at the same time. That DC/DNS Server could have the ADIP zone
"fabrikam.com", and the Standard Primary zone file "mlabs.com" and a Secondary zone file for
"elabs.net" and a stub zone file for "contoso.com".
How To: Configure a Secondary Name Server in Windows Server 2003 (KB816518)
How To: Replace the Current Primary DNS Server with a New Primary DNS Server in
Windows Server 2003 (KB323383)
Secondary Zone
A secondary zone is a read-only copy of a DNS zone that is authoritative for all resource records
in a particular namespace (one or multiple domains). Secondary zones are used to achieve fault
tolerance of name resolution, but due to their read-only nature, fault tolerance of name
registration is not available. Client systems must still register at the IP address of a server
hosting a Standard or ADI Primary zone.
Secondary zones obtain records and updates from the server listed as the Start of Authority. The
secondary zone configured on the screen shows that dc1.fabrikam.com is the Start of Authority
for the fabrikam.com zone of authority. The dc1.fabrikam.com server is the server that will be
queried for all changes to the DNS zone data. Note that the zone transfer that occurred resulted
in a copy of all available records from the SOA.
Stub Zone
Stub zones are new to Windows Server 2003 and are used to facilitate name resolution across
parallel domains or between parent and child domains broken into separate zones of authority.
Unlike a secondary zone, a stub zone does not copy all records from the Start of Authority. A
stub zone limits the records it keeps to only the SOA record, the Name Server records, and the
A (host) records for all the name servers authoritative for the zone. By regularly querying the
SOA, the stub zone dynamically maintains the list of name servers available for that particular
namespace. This is an excellent solution when the list of name servers needs to be kept current.
The alternatives to Stubs: conditional forwarding and "delegate down - forward up" are static
arrangements that require additional administrative effort to be kept current.
Servers with stub zones are NOT authoritative for that zone. An authoritative name server is
one that has ALL the records for a particular zone of authority.
Forwarding
Good solution for name resolution across slow links
Conditional Forwarding
o Forwards all traffic for a specific domain
o Multiple IP addresses for each domain can be entered
o Good for multi-tree forest/partnerships
Simple Forwarding
o If name query does not match any domain specified, server uses "All other DNS
domains" which may have different IP addresses specified, e.g. ISP DNS server
Two drawbacks
o Administrative effort
o Static nature of configuration
When a name query cannot be resolved by the local DNS name server, the query can be
forwarded to another DNS name server. Queries will be sent to configured forwarders before
using the Root Hints, which point to the Internet root servers for resolution.
Forwarding is available whenever there is not a Root zone on the DNS name server.
Forwarding is configured on the Forwarders tab in the Properties of the DNS server. There is a
listing for All other DNS domains with no IP address provided. If there is a DNS name server in
the ISP being used or another specific name server that all requests should be forwarded,
supply the IP address with this entry. Multiple addresses can be configured and will be
contacted in order. If the first address is unable to resolve the query, it will then send it to the
second and so on.
Conditional Forwarding
Both Windows Server 2003 and 2008 support Conditional Forwarding. It provides the option to
direct specific name queries to the DNS name server for that domain. If names need to be
resolved to another area in our network or to a company we are working with on a project,
obtain the domain name and the IP address for the DNS name server for that domain. The
domain names are added in the Forwarders tab with the corresponding IP addresses for the
name servers. Any name queries received by the DNS name server that cannot be resolved
will be checked against the specific domain names in the Forwarders tab first; if a match is
found, the name query is sent directly to the name server in that domain. If no matches are
found, it sends the name query to the IP address listed under All other DNS domains.
For Example: Users in contoso.com are working with one of our divisions, prep.com, on a
special project. They need to be able to contact them easily. Add a New domain entry for
prep.com with the address 192.168.4.5, which is the name server for prep.com. All name queries
that come through the DNS server will now go directly to the name server for prep.com.
The longest domain name will be checked first when trying to match domain names. If domain,
sales.prep.com and prep.com are both listed, the DNS server will try to match sales.prep.com
first. So if the name query is for server1.sales.prep.com, it will find the appropriate name server.
If sales.prep.com was not listed, it would try prep.com.
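On Server 2008, the same conditional forwarder can be created with dnscmd, reusing the prep.com example above (adjust the domain name and name server address to your environment):
dnscmd /zoneadd prep.com /forwarder 192.168.4.5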
Simple Forwarding
When firewalls or routers prevent DNS traffic to all DNS servers except a specific external DNS
server, forwarding should be configured on the internal DNS servers rather than provide the
clients with the IP address of the external DNS. DNS forwarding eliminates a delay in name
resolution availability.
Allowing an internal DNS server to query an ISP name server is more secure as the Firewall
requires less modification and the internal server is likely hosting a DNS zone needed for an AD
domain that the DNS client PC would belong to.
Creating a Delegation
The first step to create a delegation is to create the Primary zone "ad.fabrikam.com" on server2.
Make sure to point the DNS address to it and change the DNS server address on clients in the
domain to point to server2. Configure a Forwarder of server1.fabrikam.com so any name queries
sent to the child DNS server that cannot be resolved will be sent to the parent.
On the parent server, run the New Delegation Wizard from the shortcut menu of the
"fabrikam.com" zone. A folder for the delegation will appear in the parent zone, containing a
NS record(s) specifying the DNS server(s) which is delegated control of the child zone of
authority.
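A command-line sketch of the same arrangement, assuming server1 (the parent, at the hypothetical address 192.168.1.1) and server2 (the child) as above:
On server2:  dnscmd /zoneadd ad.fabrikam.com /primary /file ad.fabrikam.com.dns
On server2:  dnscmd /resetforwarders 192.168.1.1
On server1:  dnscmd /recordadd fabrikam.com ad NS server2.fabrikam.com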
Remember "Delegate Down / Forward Up"
The major drawback of "Delegate Down - Forward Up" is that it is a static arrangement. If new
name servers were to host additional copies of "ad.fabrikam.com", server1 would be unable to
direct queries to them until an administrator manually updated server1.
How To: Integrate DNS with an Existing DNS Infrastructure If Active Directory Is Enabled in
Windows Server 2003 (KB323418)
DNS Design
Stub zones are abbreviated copies of a zone that speed up name
resolution by dynamically maintaining SOA, NS, and Glue (A) records. Stubs
require zone transfer but are not authoritative for the namespace.
Forwarding is used to speed up name resolution without the overhead of zone
transfer.
Secondary zones are full copies of a zone that are authoritative for that
namespace but incur the overhead of zone transfers.
When a Stub zone is created, it only maintains the Start of Authority (SOA) record, Name
Server (NS) records and the Glue record that identifies the name server that is authoritative for
the zone. The SOA and NS records are updated on a regular interval by the Master Server
indicated during the configuration of the zone. Like a Secondary zone, information for the zone
of authority cannot be modified in the Stub Zone.
The use of Stub Zones makes the process of name resolution much more efficient and reliable.
Name queries can be resolved faster because the name server information is readily available
instead of having to query other DNS servers to get the information. Using the traditional
delegation without a stub zone, name server records would have to be manually updated to the
parent DNS server. With a Stub Zone, these records are kept up to date through scheduled zone
transfers.
Stub zones can also help with DNS administration for areas that require name resolution but
having data redundancy is not important. Instead of Secondary servers, use a Stub Zone. Name
resolution will occur and network traffic will be reduced because of not having the large zone
transfers for the secondary zones.
Updates for a Stub zone are determined by the refresh interval in the Start of Authority record.
There are three options to update the Secondary and Stub zone data manually.
Reload - Reloads the Secondary or Stub zone from the local storage of the DNS
server hosting it (hard drive to memory)
Transfer from Master - SOA record will be checked to see if the serial number has
changed and then executes a standard zone transfer from the master server
Reload from Master - Executes a complete zone transfer even if the SOA serial
number has not changed. (This option places the largest load on the network.)
Dynamic Updates
Windows 2000+ clients only
o Manual configuration or additional applications needed for MAC/UNIX
Register A host record and PTR record
Configure DHCP to dynamically update on behalf of down-level clients
Secure updates available for Active Directory Integrated zones only
Configure dynamic update on the properties of the zone after creation; the
default is set to None on a Standard Primary zone.
Windows 2000, Windows XP, Windows Server 2003 and Windows Server 2008 clients are
capable of registering their A and PTR records dynamically. There are a couple things that will
influence whether or not they do. First, if the client is not a DHCP enabled client, it will be
responsible for both the (A) and (PTR) record all of the time. If it is a DHCP enabled client, the
results will solely depend on the configurations of the DHCP server.
Zone Transfer
Enabled on a Standard Primary
Default setting: Only transfer to servers listed on the Name Servers tab
Add secondary servers to name servers tab
AD Integrated zone - disabled
o Must be enabled on Zone Transfers tab and add secondary servers to Name
Servers tab
Option to specify IP addresses of servers permitted to zone transfer
"Transfer from master", on secondary, forces a zone transfer
DNS notify reduces the zone latency as Primary server notifies all listed servers
of updates.
DNS Notify SHOULD NOT be configured to systems that are caching-only DNS
servers or non-DNS servers.
SOA Record
Zone Transfer Process Controlled by SOA record
Refresh Interval
o Frequency Secondary checks Master for updates
o Increasing this value delays next SOA request
Retry Interval
o Time to wait before retrying after a failed zone transfer
Expires after
o Length of time Secondary will attempt to contact Master
o Time expires, zone expires
o Zone expires, it will stop responding to queries
TTL
o Length of time records are cached
o Increasing TTL allows records to be kept longer in cache – fewer authoritative
queries
The Start of Authority record controls the zone transfer process and provides information
regarding the authoritative server for the zone. The SOA tab can be accessed in the Properties of
the zone. The Serial number, which increments when changes are made, can also be incremented
manually to initiate a transfer. It is the Serial number changing that triggers the notifications to
go to the secondary servers.
The Primary Server is the name of the authoritative server for the zone. The user account that
created the zone is the responsible person.
The bottom section pertains to the handling of the secondary zones, how they are to refresh,
time intervals to use and the life of the records they receive.
Refresh Interval- The frequency that the secondary servers check the Masters for updates. This
is outside of the Notification process. The default setting is that the secondary server will
request updates from the Master Server every 15 minutes.
Retry Interval - If the request was not successful for the Refresh Interval, the Retry interval will
be used and the secondary server will Retry the contact with the Master Server every 10
minutes by default.
Expires after - This is the length of time the secondary server will continue to attempt contacting
the Master Server. After one day (by default), the time expires and the zone expires. Once the
zone expires, the secondary server will no longer respond to queries for the expired zone.
TTL - The time to live is the amount of time cached records are maintained. The setting for all
cached records is 1 hour by default.
The TTL at the bottom of the window is for the SOA record itself. It has a default time-to-live of
1 hour.
Zone Transfers
Zone Transfers occur from Master Server:
o Primary to Secondary
o Secondary to Secondary
o AD Integrated to Secondary
Full Transfer (AXFR)
o Entire database replicated between two servers - Used when secondary is created
Incremental Transfer (IXFR)
o Only modified records replicated
o Not supported by Windows NT
o BIND 8.2 and later support this option
Zone transfers help maintain the DNS data for a particular zone. Which servers can obtain a
zone transfer are controlled by the settings on the Zone Transfer tab in the Properties of the
zone. This allows the secondary to store a copy of the zone so that it can resolve client requests
without contacting the Primary Server.
When a secondary zone is created, it is required as part of the configuration wizard to provide
the IP address of the Master Server. This is the DNS that will be providing the updates to the
secondary zone. A Master Server can be a Primary server, another Secondary server, or a server
with an Active Directory Integrated zone.
How To: Configure a Secondary Name Server in Windows Server 2003 (KB816518)
Appending Suffixes
This feature allows users to use one-word host names instead of an FQDN. The default
setting, "Append primary and connection specific DNS suffixes" means that a user in
sales.fabrikam.com requesting ServerA with NetBIOS over TCP/IP disabled would cause a
query to be sent to the DNS server asking for the IP address of serverA.sales.fabrikam.com. If
the checkbox "Append parent suffixes of the primary DNS suffix" was checked, then the client
would have attempted serverA.sales.fabrikam.com followed by serverA.fabrikam.com.
By specifying "Append these DNS suffixes (in order)" as shown in the screenshot, the computer
would send serverA.sales.fabrikam.com then serverA.fabrikam.com even though the DNS
client itself might belong to contoso.com. This "Append these DNS suffixes (in order)" option is
manually configured on each machine or there is a Group Policy setting that will configure this
for all clients where the policy is applied.
If you had a multi-homed server that needed to be accessible in two different zones, you could
use the "DNS suffix for this connection" setting. For example, enter sales.fabrikam.com on one
adapter's properties then contoso.com on the second. To maintain current listings, select the
checkbox "Use this connection's DNS suffix in DNS registration" and that server would have A
records in both zones. Please note that the default selection above reads "Append primary and
connection specific DNS suffixes."
Register this connection's addresses with DNS: This is enabled by default and enables the
system to dynamically register its Host (A) record and PTR record, depending on the DHCP
server setting. If the system has been statically assigned, it will register both records in DNS. If
this box is cleared, the system will not register its own records, no matter how the DHCP server
is configured.
Use Connection's DNS Suffix / DNS suffix for this Connection: The system is assigned a DNS
suffix that becomes part of the full name of the computer. It is seen in the System Properties /
Computer Name tab. In a multi-homed system, it may be required to have a different DNS
suffix used to identify a network interface. Check the box Use the Connection's DNS suffix and
insert the DNS suffix that should be used for that interface in the area provided. Remember the
DNS suffix is what is used to register with the DNS server, so there must be a zone with that
DNS suffix name or the interface will not be able to register.
Appending DNS suffixes allows users to find resources without needing to know the exact fully
qualified domain name. CAUTION! Resources with same host names in different domains may
return erroneous info. There are only two ways to specify multiple DNS suffixes for a client:
Manually on each client, or using a GPO. A DHCP lease can only specify one entry for option
015 - DNS domain name.
Manual Registration
The proper records can be manually created in the DNS console. When creating the A record,
there is an option to also create the PTR record. In order to create the PTR, a Reverse Lookup
zone must be already created for the network address. There is also a checkbox to select that
will allow any authenticated user to update the DNS record with the same owner name. This
option is only available when the zone is being stored in Active Directory. It will create an ACL
that can be viewed in the Properties of the resource record.
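The same records can be created from the command line with dnscmd; the sketch below uses hypothetical values (host server1 at 192.168.1.50 in the fabrikam.com zone, with the reverse zone 1.168.192.in-addr.arpa already created):
dnscmd /recordadd fabrikam.com server1 A 192.168.1.50
dnscmd /recordadd 1.168.192.in-addr.arpa 50 PTR server1.fabrikam.com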
Dynamic Registration
Clients running Windows 2000, Windows XP, Windows Server 2003 and 2008 support dynamic
registration. The clients will dynamically register their A and PTR records in DNS, if all settings
are in order. If a static IP address has been configured, the default setting is to dynamically
register (checkbox on DNS tab is checked for Register this connection's addresses with DNS). If
DHCP is providing the IP address, it then is dependent on the DNS settings in DHCP.
The client will register whenever the system is started. If the DNS address is incorrect or has not
registered for some reason, go to a command prompt and type IPConfig /registerdns.
Round Robin
Rotates Resource Records as it responds to Clients
Provides load balancing of name resolution
Does not verify host state, so failure can still occur
Enabled by default on the properties of the DNS server
Configure by creating multiple A resource records with the same name but
different IP address.
Round Robin
A Round Robin configuration allows a DNS server to return different IP addresses for the same
name. This strategy is used to balance the load on different servers that maintain the same data,
such as Web servers. The drawback is that it is not capable of determining the state of the server
so some requests may fail. If a request fails, it will have to make the request again. There is no
redirection or management of a failed request. It can also be used for a multi-homed computer
to balance out the network load on each adapter.
Round Robin is enabled by default. To configure records for Round Robin, create multiple Host
(A) resource records using the same name with different IP addresses. Round Robin is selected
in the Advanced tab in the DNS server Properties.
Hands on: To confirm round robin functionality, set up multiple A records with same
name/different IP addresses and then request that name multiple times from within nslookup.
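A sketch of that hands-on exercise, using a hypothetical name www in the fabrikam.com zone:
dnscmd /recordadd fabrikam.com www A 192.168.1.21
dnscmd /recordadd fabrikam.com www A 192.168.1.22
nslookup www.fabrikam.com
Run the nslookup query several times; the order of the returned addresses should rotate if round robin is working.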
DNS Interfaces
Multi-homed DNS servers can be configured to respond on all interfaces or just
specific IP addresses.
This allows DNS to ignore requests sent to network adapters not listed;
responses will not be sent (see the command example after this list).
Commonly used when a DNS server is part of two networks but should be
authoritative for name resolution on one of the networks.
For security, be sure to verify that only appropriate interfaces on the DNS servers
are responding to DNS clients. E.g. A clustered DNS server would typically not be
configured to respond to name queries on the intra-cluster interface.
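A sketch of restricting the listening interfaces from the command line, where 192.168.1.10 stands in for the internal interface of the DNS server:
dnscmd /resetlistenaddresses 192.168.1.10
Running the command again with no address should return the server to listening on all of its IP addresses.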
Enable automatic scavenging of stale records - this setting is useful in networks of portable
computers which tend to get disconnected from the network without releasing their A and PTR
records. Enabling this setting in the server properties allows the DNS administrators to
configure individual zones for scavenging. By default, DNS zones are NOT enabled for
scavenging so this must be set by the administrators for each desired zone once the server has
been enabled for scavenging. When the DNS server scavenges a zone, it deletes records from
the zone file based on a somewhat complex sequence. To quote the Microsoft article: "It should
only be enabled when all parameters are fully understood. Otherwise, the server could be
accidentally configured to delete records that should not be deleted." Proceed with caution.
"Understanding aging and scavenging" Microsoft Windows Server 2003 TechCenter, January 21,
2005
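A hedged sketch of enabling scavenging from the command line, using the 7-day (168 hour) default interval and fabrikam.com as an example zone:
dnscmd /config /scavenginginterval 168
dnscmd /config fabrikam.com /aging 1
dnscmd /startscavenging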
To view either the default log or the log file designated in the configuration window, the DNS
service must be stopped. This can be accomplished in the DNS console by selecting the DNS
server, accessing the shortcut menu, and clicking All Tasks and then Stop. Open WordPad and
browse for the log file. After closing the file, make sure to restart DNS.
Securing DNS
Securing the resources in the DNS server is a key factor in network security. Several items that
can help in that endeavor are listed.
Place a DNS server on the external and internal networks: The internal DNS server provides
name resolution for the internal network and then forwards any queries it cannot resolve to the
external DNS server. The external DNS server is the server the public uses to access resources
that have been made available to the public, such as public web servers. There is no forwarding
from the external to internal DNS server.
Limit DNS Interface Access: For a multi-homed system, identify the interface desired to receive
DNS requests and specify on the Interface tab in the Properties of the DNS server. The default
setting is All addresses. Remove any of the addresses that should not be accessing the DNS
server.
Secure Zone Transfers: The most secure method to replicate zone information is using Active
Directory Integrated zones. If that is not an option for the environment, select to transfer only to
specific IP addresses as the most secure alternative. Avoid allowing zone transfers "To all
servers" as this is the least secure option.
Secure cache against pollution: This is enabled by default on the Advanced tab in the Properties
of the DNS server. It prevents unrelated referrals from entering the cache: only records that
match the domain name from the original request are cached, and any records that refer to a
name outside of the requested name will be dropped.
Use Secure Dynamic Updates: By storing the DNS database in Active Directory it can be
secured by applying an ACL to the records. This prevents anyone but the owner of the record
from modifying the information. Also, no one who lacks permission to do so can add a record.
GlobalNames Zone
Allows DNS clients to connect to specific resources by a single-label name, such as Server1.
Does not exist by default, but by deploying a zone with this name you can provide access to
resources by using a single-label name without needing WINS. This functionality is only
supported on DNS servers running Server 2008, and cannot replicate to servers running earlier
versions. There are three basic steps to enabling this feature (a combined command example
follows the list):
1. Enable GlobalNames zone support. On each server to which the GlobalNames zone will
be replicated, run the following command:
Dnscmd <ServerName> /config /enableglobalnamessupport 1
2. Create the GlobalNames zone. This is not a special zone type. Instead it is an Active Directory
Integrated forward lookup zone that is named "GlobalNames". Make sure this is replicated to
all DNS servers in the forest.
3. Populate the GlobalNames zone. For each server that you want to be able to provide single-
label name resolution for, create an alias (CNAME) record in the GlobalNames zone. The name
you give each record represents the single-label name that users will use to connect to the
resource. Note that each CNAME record points to a host record in another zone.
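Pulling the three steps together, a sketch with hypothetical names (dc1 as the DNS server, intranet as the single-label name, webserver1.fabrikam.com as the target host record):
dnscmd dc1 /config /enableglobalnamessupport 1
dnscmd dc1 /zoneadd GlobalNames /dsprimary /dp /forest
dnscmd dc1 /recordadd GlobalNames intranet CNAME webserver1.fabrikam.com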
This ability to retrieve specific data from Active Directory during zone loading provides an
additional advantage over storing zone information in files-namely that the DNS Server service
has the ability to respond to requests immediately. When the zone is stored in files, the service
must sequentially read through the file until the data is found.
Infrastructure Master - The Infrastructure Master is in charge of tracking objects and their
movement between domains. It keeps track as objects are moved and is responsible to update
any associations. When an object moves, it maintains its GUID but the SID will change to reflect
the new domain. The Infrastructure Master must not be on the same DC as a Global Catalog if
there is more than one domain in the forest. Because the Global Catalog already holds a partial
replica of every object in the forest, an Infrastructure Master placed on a GC never notices stale
cross-domain references and so does nothing. In that case, the associations will not be updated
properly and objects can't be located properly by the DCs in the misconfigured domain.
PDC Emulator - The PDC Emulator is a domain role and there is only one per domain. It is
responsible for several different things. If in Domain Functional Level 2000 and there are BDCs
in the domain, it will be the 'go between' for Active Directory and the NT BDC. It will
coordinate replication of system policies, scripts and other information to the NT BDC. Place the
PDC Emulator in the site where the largest number of NT clients are located.
In both NT and 2008, it is responsible for password changes. If a password is changed, the PDC
Emulator is the second DC to know about it: the DC first contacted during a password change
records the change locally, then replicates it to the PDC Emulator. When a user logs on right
after a password has been changed,
there may be a delay if replication has not occurred informing all the other DCs of the password
change. All DCs will check with the PDC Emulator before generating a negative password
message for a logon request.
It is also the time keeper. Many of the features of Active Directory are dependent on time. Time
synchronization problems are most significant for Kerberos, which by default requires timestamps
to be within five minutes of each other. All times in Windows 2X are maintained relative to
GMT. This means a replica domain controller in the Pacific Time Zone will be fine replicating
AD contents with a DC in the Eastern Time Zone as long as their clocks display a time
difference of three hours. All Windows 2X systems in a domain synchronize their clock with the
PDC Emulator. If a DC has the wrong time zone or time, it will have difficulties with replication
and will not be able to uninstall Active Directory. A time synchronization error will be
generated and the process will fail.
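As a quick check from the command line (a sketch; the w32tm tool is built into Windows Server 2008), a DC's time source can be verified and a resynchronization forced:
rem verify which time source the DC is using, then force a resynchronization
w32tm /query /source
w32tm /resync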
Schema Master - This is a forest-wide role and there is only one in the forest. It is held by the
first domain controller in the forest and can be relocated to any DC in the forest root only. It is
responsible for all changes to the Schema. The Schema Master holds the only writable copy of
the schema partition.
Transfer the role when maintenance is planned or it is known in advance that the server will be
unavailable. Transferring the role ensures that the role remains available without any
interruption. To transfer the three domain roles, go to AD Users & Computers, either on the
DC to which the role is being transferred or by connecting to that server remotely: right-click the
domain name and select Connect to Domain Controller. The Domain Controllers in the
domain will be displayed; select the one that will be receiving the role and connect to it. Once
connected, right-click the domain again and select Operations Masters. The window will have 3
tabs, one for each domain role. It will display the current DC assigned the role, and the DC that
you are connected to will be displayed in the bottom area. Select the Change button to transfer
the role.
The same process is used to transfer the Domain naming master in AD Domains & Trusts and
the Schema Master in the Schema snap-in. Make sure that you are connected to the DC that will
be obtaining the role. If the Schema snap-in is not available in the MMC console, from a cmd
prompt, run regsvr32 schmmgmt.dll to register the schema snap-in.
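The roles can also be transferred from a command prompt with NTDSUtil. A minimal sketch (DC02 is a placeholder for the DC that will receive the role; substitute rid master, infrastructure master, schema master or naming master for pdc as needed):
ntdsutil
roles
connections
connect to server DC02
quit
transfer pdc
quit
quit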
Seize the role when the server has catastrophically failed or has gone down without having any
roles transferred. Some of the roles may not be missed right away, but if the server is down for a
longer time, it can cause a major impact in the environment. Losing the Schema Master or Domain
Naming Master would not have a major impact unless domains were planned to be added or
removed, or schema changes were planned. In order to seize a role, NTDSUTIL
must be used from a command prompt. Seizing a role should not be done unless it is absolutely
necessary. Normally, the only role whose loss causes an immediate major impact is the PDC
Emulator. It is normally best to allow time to recover the server, but if the roles must be brought back
up as quickly as possible, then you will have to seize them.
The steps to seizing the role are:
From a command prompt type: NTDSUtil
At the NTDSUtil command prompt, type roles
At the fsmo maintenance command prompt, type connections
At the server connections command prompt, type connect to server servername
At the server connections command prompt, type quit
At the fsmo maintenance command prompt, type seize <name of FSMO role>
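The role names accepted by the seize command, using Windows Server 2008 syntax (Windows Server 2003 uses domain naming master in place of naming master), are:
seize pdc
seize rid master
seize infrastructure master
seize schema master
seize naming master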
Recover Roles
The best approach is not to seize the role and simply to bring the server back up when repaired. If the role
has been seized, care must be taken in handling the servers that held the original roles. For
the Infrastructure Master and PDC Emulator roles, the servers can be brought back online and
the role transferred back to the DC.
When dealing with the Domain Naming Master, Schema Master and RID Master roles, the original
servers cannot be brought back online as they are. They should be taken off the network and
reformatted, then given a fresh install. If the AD database and log files are on other volumes,
they will not impact the network, but the operating system must be completely wiped and
reinstalled. If these servers were put back online in their previous state, they would forcibly take
back the role and there would then be two holders of each role in the forest/domain. This could have
catastrophic results in the network.
Directory Partitions
Active Directory contains a lot of different information regarding its own directory and the
forest. This information is contained and replicated by very specific partitions. Active Directory
contains 4 partitions:
Schema Partition – Contains a copy of the Active Directory Schema for the forest;
replicated to all DCs in the forest.
Configuration Partition – Contains forest, tree, and site configuration information
(Active Directory Sites and Services); replicated to all DCs in the forest.
Domain Partition – Contains all objects associated with a particular domain;
replicated to all DCs of the same domain.
Application Directory Partition – Stores data related to Active Directory-integrated
applications and services; replicated only to the domain controllers specified for the
partition. Only Windows Server 2003/2008 Domain Controllers can have Application
Directory partitions.
There are several factors that should be considered when determining which sites should have a
replica of the Global Catalog. If a site has more than 100 users, it would be best to have a Global
Catalog in the site. If there are fewer than 100, Universal Group Membership Caching will
handle the authentication needs. If directory-aware applications, such as Exchange
2000/2003/2007, are being used in a site, a Global Catalog in the site is required. The application
queries the Global Catalog on port 3268.
Another consideration is the roaming users in the network. Because users can log on from
anywhere in the forest, roaming users create a greater need for a Global Catalog in each site.
When a roaming user logs on from another domain, the request queries the Global Catalog to
locate the user account's domain and directs the authentication process to that domain. Having a
Global Catalog in the site makes the authentication process much faster.
WAN link availability is the last item to consider. If the WAN link is available 100% of the time
for Active Directory traffic, one Global Catalog between two sites is possible. If there is a
concern over WAN connectivity and it is not reliable, having a Global Catalog in each site
ensures that authentication will happen, even if the WAN link is down.
The Global Catalog is configured in Active Directory Sites and Services in the NTDS Settings of
the server.
Additional GCs should be enabled based upon:
Number of users in the site (>100)
Existence of an AD-aware application that reads the Global Catalog
WAN link availability
Tree-Root Trust
This trust is automatically created between the tree root domain and the forest root domain.
These trusts are two-way and transitive.
Parent-Child Trust
This trust is automatically created between the child and parent domains. These trusts are two-
way and transitive.
Shortcut Trust
A Shortcut Trust is a transitive trust between two domains of the same forest.
A Shortcut Trust speeds authentication and resource access between different domains in the
same forest. This trust is manually created between two domains in the same forest. These
trusts are transitive and can be created as one-way or two-way.
External trust
Manually created between AD domains in different forests or between Windows Server 2008
and a Windows NT 4.0 domain - one-way or two-way, non-transitive
The external trust is one-way or two-way and is non-transitive. The external trust is created
between specific domains and those domains are the only ones that have the trust relationship.
External trusts are created between domains in different AD forests. If one forest is a Windows
2000 forest or an NT domain, it must connect via an external trust. Also, an External Trust must
be used between two Windows Server 2008 forests unless they are both functioning at least at
the Windows Server 2003 forest functional level.
You must be a member of the Enterprise Admins group or have the appropriate delegated authority in
order to create an External trust. A trust is essentially a managed breach of the forest's security
boundary.
External Trusts are created between domains and are non-transitive. They permit access to
resources only; permissions must still be applied to the resource in order to grant access. For a manual
trust, the Resource Trusts the User (the arrow points to the user). External trusts are created with
the New Trust Wizard. The wizard detects whether the domains are at the proper functional level and
will automatically default to an external trust. If both forests are at Forest Functional Level
Windows Server 2003, the option will be given to create either an External trust or a Forest trust.
Required for migrations with Active Directory Migration Tool (ADMT)
Forest trust
Manually created between forest root domains in two separate forests – one-way or two-way,
transitive.
Forest trusts are created between the root domains of two separate forests. The forests must be
set to at least the Forest Functional Level of Windows Server 2003. It can be a one-way or two-
way trust. If it is a two-way trust, it allows both authentication and access to resources (as long
as permissions allow) in either forest. They are transitive between two forests only. If forestA
trusts forestB, and forestB trusts forestC, that doesn't mean that forestA trusts forestC. The trust
is only transitive between the two forests.
Some of the benefits of a Forest Trust include less external trusts are needed to share resources
across forests, UPN authentication can be used across the two forests, and administrators have
more flexibility because administrative efforts can be shared between the two trusting forests.
To create a Forest trust, the user must be a member of the Enterprise Admins group or have delegated
authority in both forests.
Unless two forests are at the Windows Server 2003 forest functional level, they can only be
connected by External trusts. This means that if a two-domain forest were connected via
trusts with a three-domain forest, and you wanted to add global groups from any domain to
local groups in any other domain, a total of 12 external trusts (2 x 3 domain pairs, each trusted
in both directions) would have to be established.
Given the same scenario with the two forests at the Windows Server 2003 Forest Functional
Level, a single two-way forest trust could be configured between the forest root domains. This
is a huge advantage of Windows Server 2003 forests, especially if two companies using them
were merging.
Forest trusts are transitive and are created between forests that are in Forest Functional Level
Windows Server 2003 or 2008. The trust is created between the Forest Roots and is valid for only
those two forests. The forest trust can be used for cross-forest authentication using the UPN
suffix and the Global Catalog in each forest. During the creation of the Forest trust, UPN
suffixes are checked in both forests for duplication and a warning is given if duplicates exist. If
both forests use the same UPN suffix, the user trying to authenticate across the forest trust will
not be successful.
Access to resources is provided either as Forest-wide access or Selective access. With the
Selective trusts, users are granted access to specific servers only through the ACL of the server's
AD object.
Create the Forest trusts using the New Trust Wizard. Both the Incoming and Outgoing trusts
can be created at the same time. If the administrator has administrative rights in both forests,
both ends of the trust can be created at the same time. If not, the trust must be configured in
both forests.
When planning user access to resources across a forest trust, the Global group that the user
belongs to should be placed in a Universal group, which is then placed in the Domain Local
group where the resource is located. Since the Universal group is listed in the Global Catalog
of the forest, access across the forest is expedited. The Global Catalog in both forests is queried
for access to resources and for authentication.
Realm trust
Manually created between non-Windows Kerberos and Windows Server 2003/2008 Active
Directory Domain - can be transitive or non-transitive, one-way or two-way
A reason to create a Realm trust would be to allow Active Directory users access to resources in
a UNIX environment without requiring them to authenticate separately. It could also be used to
provide access for those in the Unix Kerberos environment access to resources in a Windows
Server 2003/2008 AD domain.
A Realm trust can be created by members of the Enterprise Admins group in the Windows Server
2008 domain, or by someone who has the appropriate delegated privileges. The individual creating
the trust must also have the appropriate administrative privileges in the target Kerberos realm.
The trust can be transitive or non-transitive, one-way or two-way.
External Trusts
Non-transitive
Access to resources only
Domain-to-Domain
Resource Trusts User
Create using Trust Wizard in AD Domains and Trusts
Forest Trusts
Transitive within the two forests
Managing Trusts
Trusts are managed through Active Directory Domains and Trusts. To display the trusts that
are in place, access the Properties of the Root domain. On the Trusts tab, the trusts that are
currently in effect are displayed. The domain name, trust type and transitive state are listed.
The top pane shows domains that the domain trusts (outgoing) and the bottom pane shows
domains that trust this domain (incoming).
An Incoming trust is created by the administrator in the domain where the users are located; this is
the trusted domain. The Outgoing trust is created in the domain where the resource is located,
which is called the trusting domain. When a single administrator creates the complete trust, they
create both the Incoming and Outgoing trusts.
In order to create a manual trust, select the New Trust button at the bottom of the window. The
New Trust Wizard will be launched.
Windows Server 2003 added flexibility to trusts, offering "wide" and "selective" options. If two
companies needed to work on a project together, their domains could be joined using a selective
trust. This could limit access between the companies to only the shared resources needed for the
project.
With a Forest trust, the choices are a little different. Since the trust is transitive for the entire
forest, the options are Forest-wide authentication and Selective authentication. With Forest-
wide authentication, the user can access any resource in the forest for which they have appropriate
permissions. With Selective authentication, the user/group must be given the Allowed to
Authenticate permission on the server before they can access it, and the level of access is then
determined by the share and NTFS permissions.
Forest trusts between partner companies can be configured with different authentication
methods for better control of resource access.
Selective-authentication
Servers must be manually configured with the Allowed to Authenticate permission for users in
the trusted domain. The permission is configured on the ACL of the server object in ADUC, and
access is then granted on the specific resource.
In the domain database, each security principal has a set of approximately 10 passwords or
secrets, called credentials. An RODC does not store user or computer credentials, except for its
own computer account and a special "krbtgt" account (the account that is used for Kerberos
authentication) for each RODC. The RODC is advertised as the Key Distribution Center (KDC)
for its site (usually the branch office). When the RODC signs or encrypts a ticket-granting ticket
(TGT) request, it uses a different krbtgt account and password than the KDC on a writable
domain controller.
The first time an account attempts to authenticate to an RODC, the RODC sends the request to a
writable domain controller at the hub site. If the authentication is successful, the RODC also
requests a copy of the appropriate credentials. The writable domain controller recognizes that
the request is coming from an RODC and consults the Password Replication Policy that's in
effect for that RODC.
The Password Replication Policy determines if the credentials are allowed to be replicated and
stored on the RODC. If so, a writable domain controller sends the credentials to the RODC, and
the RODC caches them. After the credentials are cached on the RODC, the next time that user
attempts to log on, the request can be serviced directly by the RODC until the credentials change.
When a ticket is signed with the RODC's own krbtgt account, the RODC recognizes that
it has a cached copy of the credentials. If another DC has signed the TGT, the RODC will
forward the request to a writable domain controller.
By limiting credential caching only to users who have authenticated to the RODC, the potential
exposure of credentials by a compromise of the RODC is also limited.
By default, no user passwords will be cached on an RODC, but that's not necessarily the most
efficient scenario. Normally, only a few domain users need to have credentials cached on any
given RODC, compared with the total number of users in a domain. You can use the Password
Replication Policy to specify which groups of users can even be considered for caching. For
example, by limiting RODC caching to only users who are frequently at that branch office, or by
preventing the caching of high-value credentials, such as administrators, you can reduce the
potential exposure. Thus, in the event that the RODC is stolen or otherwise compromised, only
those credentials that have been cached need to be reset.
The Password Replication Policy lists the accounts that are permitted to be cached, and accounts
that are explicitly denied from being cached. The list of user and computer accounts that are
permitted to be cached does not imply that the RODC has necessarily cached the passwords for
those accounts. An administrator can, for example, specify in advance any accounts that an
RODC will cache. This way, the RODC can authenticate those accounts, even if the WAN link to
the hub site is offline.
Prerequisites for setting up an RODC
The PDC Emulator of the domain must be Windows Server 2008
Domain Functional Level must be Windows Server 2003
Adprep /RODCprep must be run
RODC must be able to replicate with a 2008 DC for initial sync
If implementing Delegation in the network environment, the security groups that are created
can be designed to accommodate the delegation plan. For example: create a group for IT staff
that will be assigned delegation tasks and delegate to the group instead of individual users.
Another example is a Help Desk staff that will be allowed to reset passwords. Create a security
group for HelpDesk and place the appropriate users in the group. Delegate authority to the
specific OU where they can reset passwords.
Disable inheritance of permissions for AD objects to prevent delegations from propagating.
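For illustration, the Help Desk password-reset delegation described above could also be granted from the command line with dsacls (the OU and group names here are placeholders):
rem grant the HelpDesk group the Reset Password right on user objects in the Branch OU
dsacls "OU=Branch,DC=contoso,DC=com" /I:S /G "CONTOSO\HelpDesk:CA;Reset Password;user"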
Replication
Based on USN (update sequence number)
Intra-Site
o Between DCs in same site
o Ring-based topology is maintained by KCC
o Uses Replication Partners
o Notify-Pull replication
Inter-Site
o Managed by Site Links
o Use when replication needs to be scheduled
o Request-Pull replication
o Bridgehead servers receive replication at each site
Replication is used between domain controllers to update Active Directory information
throughout the domain and forest. The Schema and Configuration partitions replicate to all
DCs in the forest. The Domain partition replicates to all DCs in the domain. The Application
Directory partition replicates to the DCs specified for the partition. The USN (update sequence
number) on objects is used to designate which objects have changed. When an object changes, its
USN increases. When replication occurs the USNs are compared and the object with the
higher USN is replicated. Two types of replication are used: Intra-Site (within the site) and
Inter-Site (between sites).
Intra-Site
The Intra-Site replication occurs between domain controllers in the same site. The replication is
a ring-based topology which provides a two-way replication. If one domain controller is not
available, the AD information is still replicated to all domain controllers. The replication
topology is established and maintained by the KCC (Knowledge Consistency Checker), which
is responsible for creating the replication topology as well as reconfiguring it when a domain controller
is added or removed.
The KCC creates Replication Partners for each domain controller in the site. These replication
partners are notified when changes have been made to an object in AD. It is called a Notify-Pull
replication because the replication partner is notified there are changes and it will then pull any
changes from the domain controller that sent the notification. When the change occurs, the
domain controller with the change waits 15 seconds before notifying the first replication
partner. The DC will then notify the remaining replication partners every 3 seconds, in the order
they are listed in AD Sites and Services. The replication topology is designed to be fully
replicated within 3 hops.
To view the replication partners for the domain controllers, in AD Sites and Services, expand
the Site name, expand Servers, expand the Server name and select NTDS Settings. The
replication partners will be displayed in the detail pane on the right. If there is a need to force
replication, right-click the replication partner and select Replicate Now. A command-line utility
called repadmin can also be used to force directory replication.
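As a sketch (DC01, DC02 and the domain partition name are placeholders), replication can be forced with repadmin as follows:
rem pull the domain partition from DC01 to DC02, then sync DC01 with all of its partners
repadmin /replicate DC02 DC01 "DC=contoso,DC=com"
repadmin /syncall DC01 /AdeP
In the second command, /A covers all partitions, /d identifies servers by distinguished name in the output, /e crosses site boundaries, and /P pushes changes outward from DC01.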
Inter-Site
The Inter-Site replication is replication that occurs between sites. The connections are normally
not as fast and reliable as the connections within the site. This depends on your site strategy and
the reasons you created the site. The replication is controlled by Site Links. There is a
DefaultSiteLink to which all sites can be linked when they are created. The replication between sites
is based on a schedule. When it is time for replication based on the schedule configured in the
site link, the domain controller designated will Request-Pull changes from a specified domain
controller in the other site. The schedule created includes the time window to use for
replication, the replication interval (how often to replicate) and the cost (priority) of the link.
The specified domain controller in the site is called the Bridgehead Server. It is automatically
designated by the ISTG (InterSite Topology Generator). The Bridgehead server can be manually
designated but it can have some major impact on the network which will be discussed in the
next few pages.
Intra-Site Replication
Replication between domain controllers in the same site is automatically configured and
managed by the KCC (Knowledge Consistency Checker). If only one site is created (Default
First Site), replication will not need to be configured because it will happen automatically.
The topology of replication uses Replication Partners. Each domain controller in the site will
have replication partners automatically created by KCC when it becomes part of the site. In
most cases, there will be up to 3 replication partners (3 hop rule). In larger networks there may
be more than three. Replication Partners can be manually created by right-clicking the NTDS
setting for the domain controller where the new Replication Partner is desired. Select New
Active Directory Connection. A window is displayed with a listing of all domain controllers in
the site. Select the domain controller desired as a Replication Partner.
The Windows Server 2008 intra-site replication uses Notify-Pull to complete replication. When a
change is made to an AD object (created, moved or modified), the domain controller will wait
15 seconds and then Notify the first Replication Partner. The Replication Partner will then Pull
changes based on the USN (update sequence number) of the objects. The higher number will be
replicated. After the first Replication Partner is notified, the second is notified in 3 seconds.
Additional Replication Partners are notified in 3 second increments. Total replication will take
no more than a total of 3 hops.
The data being replicated within a site is sent uncompressed. The protocol used to
transmit the data is RPC/IP, a reliable protocol standard used for intra-site replication
traffic.
Site Creation
To create a new site, open AD Sites & Services located in Administrative Tools. Select Sites and
from the shortcut menu select New Site. In the window that is displayed, name the site and
associate it with a site link. This will be the DefaultSiteLink, if no other links have been created.
Name your sites so they can be easily recognized. If your site implementation is by location, use
the name of the location. Once the site is created, it is time to create the subnets.
Creating Subnets
In the left-hand pane select Subnets and from the shortcut menu select New Subnet. The IP
address to enter in the window displayed will be the network address with the appropriate
subnet mask. Select the site that is going to be associated with the subnet. Once the subnet is
created, the site association can be changed from the Properties of the subnet.
If the sites and subnets are created before any other domain controllers become part of the
forest, the domain controllers will be automatically listed in the appropriate site during the
installation of Active Directory. If not, expand the Servers folder under Default First Site to view
the domain controllers that are listed. From the shortcut menu on each server, select Move, and
then select the site that is associated with its subnet.
Once the Sites are created and domain controllers are in place, select the Site and view in the
right pane a node for License Site Settings. The default Site License server is the first domain
controller created in the site. The Site License server is where the database regarding licensing
for Microsoft products is stored. The information is managed in the Licensing console, but
stored in the Site License server.
The five basic steps for creating a site are as follows:
Create the site, and associate it with a site link (typically the DefaultSiteLink on
smaller networks)
Create a subnet and associate it with a site - Sites must contain unique subnets to
make them useful.
Connect the site to other sites by using site links - A site that does not have a site
link to other sites cannot replicate directory information outside of its own site.
Move the domain controllers to the appropriate sites - Future DCs will be placed
in the appropriate site based on subnet.
Select a site license server - For compliance with Microsoft licensing rules, this is
a necessary step. All sites are registered by the License Logging Service and stored in a
central database.
Inter-Site Replication
Faster WAN connections should be assigned lower site link costs, while slower WAN
connections should be assigned higher site link costs.
Inter-Site replication occurs between sites that have been created in AD Sites and Services. To
manage the replication, Site Links must be created.
Site Links
A DefaultSiteLink is created automatically when Active Directory is installed. When new sites
are created, they can be associated with the DefaultSiteLink. If all site connections are equal and
there is no preference in how the data replicates, the DefaultSiteLink is used and no other
configuration is required. The default settings for the DefaultSiteLink include a Cost of 100,
replication every 180 minutes (3 hours) and scheduled time is 24/7 availability. If the type of
connections between the sites is different and there is a need to differentiate when the different
sites replicate, a site link is created to configure the specific settings required.
To create a Site Link, select the Inter-Site Transport desired. There are two options: IP (RPC/IP)
and SMTP. IP provides a reliable connection and all partitions can be replicated across this type
of transport. SMTP is an e-mail-based replication transport designed for unreliable
connections. Only the Schema and Configuration partitions can be replicated using SMTP. In
order to use SMTP, there must be an SMTP server in each site. The transport of choice is going to
be IP.
Select New Site Link from the shortcut menu of the IP node. Name the Site Link (a user-friendly name
for easy identification) and indicate the Sites that are going to be linked with this Site link. Once
the Site Link is created, access the Properties area to configure settings. The default cost
assigned to a Site Link is 100. When deciding the Cost to assign to a link, determine the
preferred link to replicate across first and then give that link a lower cost. For example, if the SiteA-
SiteB link has a cost of 100 and the SiteB-SiteC link has a cost of 75, SiteB will replicate with SiteC
first, because that link has the lower cost.
The Replication interval default setting is 180 minutes (3 hours). This represents how often the
Bridgehead server requests and pulls changes from the linked site. It is not recommended
to make the time any shorter. It is recommended to make sure at least 2 replications occur during
the time schedule allotted.
The Schedule is the days and times available for replication to occur. The default is 24 hours a day,
seven days a week. The schedule can be set to allow replication only during a certain period of
time. For example, if replication is desired during off-hours, it can be scheduled to occur only from 8
p.m. to 6 a.m. The schedule uses a 24-hour clock, midnight to midnight. Be careful when setting time
frames to ensure they correspond with the desired results.
If replication is scheduled for a 4-hour time frame and the replication interval is every 3 hours,
the 2-replication rule will be satisfied: replication is initiated at the beginning of the
window and then again 3 hours later. The time frame available and the intervals of replication
are very important when getting the desired result. If replication is set to occur between
multiple sites and one site does not get the changes until the next day, the schedule and
intervals need to be examined to make sure all changes are replicated to all sites within a given
time.
Bridgehead Servers
The Bridgehead server is the domain controller in each site that has been designated by the
ISTG for that site to request and pull the changes to the AD database. It will then turn
around and start notifying its own Replication Partners that it has changes, to initiate the Intra-
Site replication. The ISTG maintains the Bridgehead server topology by replacing the Bridgehead
automatically should it fail for any reason. If a different Bridgehead server is desired, right-click
the server name under the site and select Properties. It will show if the server has been assigned
the role of Bridgehead server. If it is not already the Bridgehead server, select the Transport
protocol it should be responsible for and move to the right side.
Note: After a Bridgehead server is manually designated, the ISTG will no longer maintain the
environment. It will not designate any Bridgehead server at all, even if the manually
designated one fails. If manually designating a Bridgehead server, make sure to specify at least two so
there is another one available should one fail. If the Bridgehead server is manually designated in one
site, it must also be manually designated in the site it is linked to. Site replication will fail if
only one Bridgehead server has been manually designated and the other has been created by the ISTG.
Inter-Site Transports
IP (RPC over IP)
o Fast, reliable WAN links
o Only domain controllers of the same domain
o Built-in security
SMTP
o Slow unreliable WAN links
o DCs of different domains, same forest
o Certificates and SMTP on replication partners
Rules to follow:
Domain controllers of the same domain must use RPC over IP regardless of
WAN connectivity speeds.
When WAN connections are fast and reliable use RPC over IP even when
replicating between DCs of different domains.
When replicating between DCs of different domains across a slow, unreliable
link use the SMTP inter-site transport.
When using SMTP, certificates need to be used to enhance security and the SMTP
protocol will need to be installed on the DCs participating in the inter-site replication.
Managing AD Sites
Inter-site Replication Strategy
Schedule: Site schedules should be configured with overlapping times to provide
at least two replication cycles.
Link Costs: Site link costs should be configured proportionately to the speed of
the physical link connecting the sites. (Faster speeds, lower costs)
Create Boundaries with Subnets - Subnets assigned to only one site
o Subnet association dictates which DCs clients will use for authentication.
The replication strategy for both Intra-site and Inter-site replication can be managed through
AD Sites and Services. For Intra-site replication, place all of the domain controllers belonging to
the same subnet in the appropriate site. Make sure the connectivity between the domain
controllers in a given subnet is fast and reliable to support the Intra-site replication. There is
more to managing Inter-Site replication. The Site Links that are created between the sites must
be configured properly in order to obtain the desired result. If only the default settings are
required, the DefaultSiteLink can be used to connect all sites. Otherwise a site link must be
created and customized. Items to be carefully configured include the cost, schedule and
replication interval. These will be discussed in detail in the pages to follow.
Monitoring Replication
Command-line Utilities
o Repadmin
o Dcdiag
Event Viewer
Directory Services log
Active Directory Replication Monitor (replmon)
Event Viewer
There are two Event Logs that pertain to Replication and Active Directory. These logs are
automatically added when Active Directory is installed. The Directory Services log records
events having to do with the Directory Services service. These events include connections
between the domain controller and the Global catalog. The File Replication Service log records
events regarding the File Replication service. Failures during replication of Active Directory can
be found in this log.
Command-Line Utilities
The repadmin (Replication Diagnostic Tool) utility is used to diagnose Active Directory
replication problems between domain controllers. It can be used to force replication or to
manually create a replication topology.
Dcdiag (Domain Controller Diagnostic tool) is used to analyze the domain controllers either in
the domain or forest. Domain controllers can be specified to run diagnostic tests.
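For illustration (DC01 is a placeholder name), a quick replication health check might combine the two utilities:
rem summary of replication status, detailed partner status, and the replication diagnostic test
repadmin /replsummary
repadmin /showrepl DC01
dcdiag /s:DC01 /test:replications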
System State
System State is the collection of all system components and distributed services that Active
Directory requires to function. It is a logical group whose components cannot be separated and backed up
individually. Included in the System State are the registry, system boot files, files protected by
Windows File Protection, and the Certificate Services database. It also includes the Active Directory
components and the Sysvol folder if the server is a domain controller.
WBAdmin
Not installed by default
Must be installed as a Feature with Server Manager
"wbadmin start systemstatebackup" backs up the system state, including AD on a
domain controller.
Windows Server 2008 includes a new backup application named Windows Server Backup.
Windows Server Backup is not installed by default. You must install it by using the Add
Features option in Server Manager before you can use the Wbadmin.exe command-line tool or
Windows Server Backup on the Administrative Tools menu.
To back up a domain controller, you should use the wbadmin start systemstatebackup
command to back up system state data. If you use the wbadmin start systemstatebackup
command, the backup contains only system state data, which minimizes the size of the backup.
This method provides system state data backups that are similar to the system state backups
that are provided by the Ntbackup tool in previous versions of Windows Server. As another
option, you can use the wbadmin start backup command with the -allcritical parameter or use
Windows Server Backup to perform a backup of all critical volumes, rather than only backing
up system state data. However, this method backs up all the critical volumes entirely. A volume
is considered critical if any system state file is reported on that particular volume.
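A sketch of the commands involved (the target volume E: and the version identifier are placeholders; the recovery command must be run in Directory Services Restore Mode on a domain controller):
rem back up system state to E:, list available backups, then restore a chosen version
wbadmin start systemstatebackup -backupTarget:E:
wbadmin get versions
wbadmin start systemstaterecovery -version:01/01/2009-09:00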
In Windows Server 2008, the system components that make up system state data depend on the
server roles that are installed on the computer. The system state data includes at least the
following data, plus additional data, depending on the server roles that are installed:
Registry
COM+ Class Registration database
Boot files
Active Directory Certificate Services (AD CS) database
The Active Directory database (Ntds.dit)
SYSVOL directory
Cluster service information
Microsoft Internet Information Services (IIS) metadirectory
System files that are under Windows Resource Protection
When you use Windows Server Backup to back up the critical volumes on a domain controller,
the backup includes all data that resides on the volumes that include the following:
The volume that hosts the boot files, which consist of the Bootmgr file and the
Boot Configuration Data (BCD) store
The volume that hosts the Windows operating system and the registry
The volume that hosts the SYSVOL tree
The volume that hosts the Active Directory database (Ntds.dit)
The volume that hosts the Active Directory database log files
Windows Server 2008 supports the following types of backup:
Manual backup: A member of the Administrators group or the Backup
Operators group can initiate a manual backup by using Windows Server Backup or the
Wbadmin.exe command-line tool each time that a backup is needed. As long as the target volume
is not included in the backup set, you can make manual backups on a remote network
share or on a volume on a local hard drive.
Scheduled backup: A member of the Administrators group can use the
Windows Server Backup or the Wbadmin.exe command line tool to schedule backups.
The scheduled backups must be made on a local, physical drive that does not host any
critical volumes. Because scheduled backups reformat the target drive that hosts the
backup files, we recommend that you use a dedicated backup volume.
Windows Server Backup supports DVDs or CDs as backup media. You cannot use magnetic
tape cartridges. You cannot use a dynamic volume as a backup target.
Windows Server Backup does not support backing up individual files or directories. You must
back up the entire volume that hosts the files that you want to back up.
For Install from Media (IFM) installations, use the enhanced version of Ntdsutil.exe that is
included in Windows Server 2008 to create the installation media, rather than Windows Server
Backup. Ntdsutil.exe in Windows Server 2008 includes a new ifm command that creates
installation media for additional domain controllers. For read-only domain controller (RODC)
installations, the NTDSUtil ifm command can create secure installation media, in which the
command strips secrets from Active Directory data. You can also include SYSVOL data in the
installation media.
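A minimal sketch of creating IFM media with the new ifm command (C:\IFM is a placeholder path; create rodc or create sysvol rodc would be used instead for RODC media):
ntdsutil
activate instance ntds
ifm
create sysvol full C:\IFM
quit
quit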
When you need to restore a domain controller, you can use Bcdedit.exe to toggle the default
startup mode between normal and Directory Services Restore Mode (DSRM).
To start the server in DSRM by using Bcdedit.exe, at a command prompt, type the following
command: bcdedit /set safeboot dsrepair. To restart the server normally, at a command
prompt, type the following command: bcdedit /deletevalue safeboot.
Windows Server backup in Windows Server 2008 has three recovery modes:
Full server recovery
System state recovery
File/folder recovery
As with previous versions of Active Directory, you can perform a system state recovery only by
starting the domain controller in DSRM, which you access by pressing F8 during the initial boot
phase of Windows Server 2008. If you cannot start the server, you must perform a full server
recovery. For more information, see Performing a Full Server Recovery of a Domain Controller.
Restore Options
Three options are available when restoring Active Directory: Normal Restore, Authoritative
Restore and Primary Restore. The situation and the objects to be restored determine which type
of restore is appropriate. All restores are executed in Directory Services
Restore Mode, which can be accessed from the Advanced Options Menu. Directory Services
Restore Mode stops Active Directory, allowing the System State to be restored. Ntdsutil.exe
is a command-line utility used in Directory Services Restore Mode to complete other
Directory Services functions such as moving the database, using metadata cleanup to remove
old objects, and performing an offline defrag, among others.
Normal Restore
Normal Restore is also called nonauthoritative. Run the Backup Utility to restore the System
State backup. It restores the entire System State to its original location. Once complete, the
domain controller is rebooted and it will synchronize with the other domain controllers in the
domain to receive the most up-to-date changes to Active Directory.
Reasons for using a Normal Restore include:
Restoring a single domain controller when there are other domain controllers
Attempting to restore Sysvol or File Replication service data on a domain controller
other than the first replica
Authoritative Restore
An Authoritative Restore allows the administrator to specify objects in the Active Directory
database that should be restored to the entire network upon reboot. It 'marks' the objects so they
are not overwritten when the domain controller is synchronized at reboot. Instead, it replicates those
marked objects to the other domain controllers in the network.
To accomplish an Authoritative restore, a non-authoritative restore is completed, but the system
is not rebooted. From a command prompt, complete the restore by using the Ntdsutil.exe
utility. It is in this utility that the objects are indicated that should be replicated to the other
domain controllers. The items are marked by increasing the version number by 100,000 for each day
since the backup was taken. This gives them the highest version, which causes them to be replicated.
Reasons to use the Authoritative restore include:
Rolling back or undoing changes to Active Directory objects
Resetting the data stored in the Sysvol folder
Primary Restore
The Primary Restore is similar to the Normal Restore. When executing the Backup Utility to
restore the backup file, select the Advanced Options and, in the Advanced Restore
Options dialog box, select "When restoring replicated data sets, mark the restored data as the primary
data for all replicas", which marks it as the data to be replicated to the other domain
controllers, whether the data is older or not. It effectively marks the entire restored System State
as the authoritative data for the domain. Reasons to use the Primary Restore include:
Restoring the only domain controller in an Active Directory environment
Restoring the first of several domain controllers
Restoring the first domain controller in a replica set
The Restore function must be configured on the local machine.
Note: When doing an authoritative restore, the objects that are being restored are sometimes
referred to as subtrees. This would be an OU that is being restored. The LDAP path might look
like: OU=Sales, DC=contoso, DC=com.
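A sketch of marking such a subtree authoritative (run in Directory Services Restore Mode after the non-authoritative restore and before rebooting; the DN matches the example above):
ntdsutil
activate instance ntds
authoritative restore
restore subtree "OU=Sales,DC=contoso,DC=com"
quit
quit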
AD Replication Conflicts
Replication is at Attribute level: Same object, different attribute, no conflict
Types of Conflicts:
Attribute conflict: Uses latest date stamp
Objects created in OU on one DC & OU deleted on another DC: OU deleted and
objects placed in Lost and Found
Same object created on 2 DCs: Both objects created; second object's name includes the GUID
Replication for objects in Active Directory is executed at the attribute level. If changes to the
same object are being made on two separate domain controllers, as long as the changes are
being made to different attributes of the object, the replication will occur without any conflicts.
For example, the home phone number is changed on one DC and the fax number is changed on
another DC. Since these are different attributes, they will both replicate with no conflicts.
Attribute conflicts will occur when the same attribute is modified on two DCs and then
replicated at the same time. When this occurs, the date stamp is checked and the most recent
change will be replicated.
Objects created in OU on one DC and OU is deleted on another DC provides a challenge. The
OU will be deleted and the objects that were created in the OU will be placed in the Lost and
Found in AD Users and Computers. This will maintain the SIDs on the security principals. The
OU can be recreated and the security principals moved from Lost and Found to the new OU.
If the same object is created on 2 DCs and replicated at the same time, both objects will be
created, but one of the objects will have the GUID appended to the end of its name. It will be
necessary to check the Properties of each account to determine the account that should be
maintained.
When uninstalling Active Directory, if another DC in the same domain is not available or the time
is not synchronized with the other DCs, the process will fail. In that case there are two options. The
first is to run DCPROMO again with the /forceremoval switch, which forces the uninstall without
attempting to contact another DC. Alternatively, the server can be taken offline and reinstalled, and
any remaining occurrence of it removed from AD through NTDSUTIL Metadata Cleanup.
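A sketch of the Metadata Cleanup sequence in NTDSUtil (DC01 and the numbers selected from each list are placeholders; choose the domain, site and failed server from the lists that are displayed):
ntdsutil
metadata cleanup
connections
connect to server DC01
quit
select operation target
list domains
select domain 0
list sites
select site 0
list servers in site
select server 0
quit
remove selected server
quit
quit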
File Permissions
On a Windows computer, you can share files among both local and remote users. Local users
log on to your computer directly through their own accounts or through a Guest account.
Remote users connect to your computer over the network and access the files that are shared on
your computer.
You can access the Simple File Sharing UI (the default in some XP & Vista versions) by viewing
a folder's properties. Through the Simple File Sharing UI, you can configure both share and
NTFS file system permissions at the folder level. These permissions apply to the folder, all the
files in that folder, subfolders, and all the files in the subfolders. Files and folders that are
created in or copied to a folder inherit the permissions that are defined for their parent folder.
The following sections describe how to configure access to your files, depending on permission
levels. Some of the information about these permission levels is not documented in the
operating system files or in the Help file.
NTFS Permissions
Folder permissions include Full Control, Modify, Read & Execute, List Folder Contents, Read,
and Write. Each of these permissions consists of a logical group of special permissions that are
listed and defined in the following sections.
Troubleshooting
If the Security tab is not available and you cannot configure special permissions for users and
groups, you may be experiencing the following issues:
The file or folder where you want to apply special permissions is not on an NTFS drive. You can
set permissions only on drives that are formatted to use NTFS.
Simple file sharing is turned on. By default, simplified sharing is turned on.
IMPORTANT: Groups or users who are granted Full Control on a folder can delete any files in
that folder regardless of the permissions that protect the file.
Note: Although the List Folder Contents and the Read & Execute folder permissions appear to
have the same special permissions, these permissions are inherited differently. List Folder
Contents is inherited by folders but not files and it only appears when you view folder
permissions. Read & Execute is inherited by both files and folders and is always present when
you view file or folder permissions.
In Windows XP Professional, the Everyone group does not include the Anonymous Logon
group.
Traverse Folder/Execute File – For folders: The Traverse Folder permission applies only to
folders. This permission allows or denies the user from moving through folders to reach other
files or folders, even if the user has no permissions for the traversed folders. Traverse Folder
takes effect only when the group or user is not granted the Bypass Traverse Checking user right,
which is assigned through the Group Policy snap-in. By default, the Everyone group is given the
Bypass Traverse Checking user right. For files: The Execute File permission allows or denies the
running of program files. If you set the
Traverse Folder permission on a folder, the Execute File permission is not automatically set on
all files in that folder.
List Folder/Read Data – The List Folder permission allows or denies the user from viewing file
names and subfolder names in the folder. The List Folder permission applies only to folders and
affects only the contents of that folder; it does not affect whether the folder that you are
setting the permission on is itself listed in the folder list. The Read Data permission applies only to
files and allows or denies the user from viewing data in files.
Read Attributes – The Read Attributes permission allows or denies the user from viewing the
attributes of a file or folder, such as read-only and hidden attributes. Attributes are defined by
NTFS.
Read Extended Attributes – The Read Extended Attributes permission allows or denies the user
from viewing the extended attributes of a file or folder. Extended attributes are defined by
programs and they may vary by program.
Create Files/Write Data – The Create Files permission applies only to folders and allows or
denies the user from creating files in the folder. The Write Data permission applies only to files
and allows or denies the user from making changes to the file and overwriting existing
content.
Create Folders/Append Data – The Create Folders permission applies only to folders and
allows or denies the user from creating folders in the folder. The Append Data permission
applies only to files and allows or denies the user from making changes to the end of the file but
not from changing, deleting, or overwriting existing data.
Write Attributes – The Write Attributes permission allows or denies the user from changing the
attributes of a file or folder, such as read-only or hidden. Attributes are defined by NTFS. The
Write Attributes permission does not imply that you can create or delete files or folders; it
includes only the permission to make changes to the attributes of a file or folder. To allow or to
deny create or delete operations, see Create Files/Write Data, Create Folders/Append Data,
Delete Subfolders and Files, and Delete.
Write Extended Attributes – The Write Extended Attributes permission allows or denies the
user from changing the extended attributes of a file or folder. Extended attributes are defined by
programs and may vary by program. The Write Extended Attributes permission does not imply
that the user can create or delete files or folders; it includes only the permission to make
changes to the attributes of a file or folder. To allow or to deny create or delete operations, view
the Create Files/Write Data, Create Folders/Append Data, Delete Subfolders and Files, and
Delete sections in this article.
Delete Subfolders and Files – The Delete Subfolders and Files permission applies only to folders
and allows or denies the user from deleting subfolders and files, even if the Delete permission is
not granted on the subfolder or file.
Delete – The Delete permission allows or denies the user from deleting the file or folder. If you
do not have a Delete permission on a file or folder, you can delete the file or folder if you are
granted Delete Subfolders and Files permissions on the parent folder.
Read Permissions – The Read Permissions permission allows or denies the user from reading
permissions about the file or folder, such as Full Control, Read, and Write.
Change Permissions – The Change Permissions permission allows or denies the user from
changing permissions on the file or folder, such as Full Control, Read, and Write.
Take Ownership – The Take Ownership permission allows or denies the user from taking
ownership of the file or folder. The owner of a file or folder can change permissions on it,
regardless of any existing permissions that protect the file or folder.
Synchronize – The Synchronize permission allows or denies different threads to wait on the
handle for the file or folder and synchronize with another thread that may signal it. This
permission applies only to multiple-threaded, multiple-process programs.
Set, view, change, or remove special permissions for files and folders
NTFS Permissions
1. Click Start, click My Computer, and then locate the file or folder where you want to set
special permissions.
2. Right-click the file or folder, click Properties, and then click the Security tab.
3. Click Advanced, and then use one of the following steps:
• To set special permissions for an additional group or user, click Add, and then in the Name
box, type the name of the user or group, and then click OK.
• To view or change special permissions for an existing group or user, click the name of
the group or user, and then click Edit.
• To remove an existing group or user and the special permissions, click the name of the
group or user, and then click Remove. If the Remove button is unavailable, click to clear the
"Inherit from parent the permission entries that apply to child objects. Include these with entries
explicitly defined here" check box, click Remove, and then skip steps 4 and 5.
4. In the Permissions box, click to select or click to clear the appropriate Allow or Deny
check box.
5. In the Apply onto box, click the folders or subfolders where you want these permissions
applied.
6. To configure security so that the subfolders and files do not inherit these permissions,
click to select the "Apply these permissions to objects and/or containers within this container
only" check box.
7. Click OK two times, and then click OK in the Advanced Security Settings for
FolderName box, where FolderName is the folder name.
CAUTION: You can click to select the "Replace permission entries on all child objects with
entries shown here that apply to child objects" check box. If you do this, all subfolders and files
have their permission entries reset to the same permissions as the parent object. After you click
Apply or OK, you cannot undo this operation by clearing the check box.
Group Scopes
Global
There are three different group scopes. The Global group is created in the domain where the
users are located. The user accounts are placed in the Global groups. It is stored in the local
domain but is referenced in the Global Catalog by its name with the domain where it is located.
Only the name is recorded, not the group membership. Since it is in the Global Catalog, it can be
viewed in other domains. It can 'travel' across the trusts to other domains.
Domain Local
The Domain Local groups are created in the domain where the resource is located and are used
to assign permissions. Global groups should be added to the Domain Local groups. They are
stored in the local domain only and cannot be viewed from any other domain.
Universal
When in Domain Functional Level Windows 2000 Native or higher, Universal groups are
available. The Universal groups are stored in the Global Catalog and can have membership
from any domain in the forest. Since it is stored in the GC, the membership should be fairly
static so there are not many changes to the Global Catalog. By placing Global groups into
Universal groups, the Universal group membership remains static, since it contains only the Global
group's name, not the individual user accounts. When the Global group membership changes, it
will not impact the Universal group.
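As a sketch of this pattern from the command line (the group names, OU and domain are placeholders), a Global group can be created and placed into a Universal group with the Directory Service tools:
rem create a Global group and a Universal group, then nest the Global group in the Universal group
dsadd group "CN=GG-Sales,OU=Groups,DC=contoso,DC=com" -samid GG-Sales -secgrp yes -scope g
dsadd group "CN=U-Sales,OU=Groups,DC=contoso,DC=com" -samid U-Sales -secgrp yes -scope u
dsmod group "CN=U-Sales,OU=Groups,DC=contoso,DC=com" -addmbr "CN=GG-Sales,OU=Groups,DC=contoso,DC=com"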
Group Nesting
With Domain Functional Level Windows 2000 Native and higher, Group Nesting is available.
This allows a Global group to be a member of another Global group in the same domain, or a
Domain Local group to be a member of another Domain Local group in the same domain. Nesting
should be limited to only 2 levels to minimize the impact of combined permissions.
Removing Groups
Remove groups when they are no longer needed. Whenever a group is deleted, the users that
belong to that group are not impacted because they are only associated with the group. The user
account is maintained separately from the groups.
Command line account management
You can also use the command-line tools Dsadd, Dsmod, and Dsrm to manage user, computer,
and group accounts in Active Directory. With Dsadd and Dsmod you must specify the type of
object that you want to create or modify; for example, use the dsadd user command to create a
user account. Dsrm deletes an object by its distinguished name and does not require an object
type. Although you can use the Directory Service tools to create only one Active Directory
object at a time, you can use the tools in batch files and scripts.
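For example, the following commands reuse the Suzan Fine account from the Csvde sample later
in this section, plus a hypothetical group DN; they create the account disabled, enable it later,
and finally delete an unneeded group:
dsadd user "cn=Suzan Fine,ou=HumanResources,dc=contoso,dc=msft" -samid suzanf -upn [email protected] -disabled yes
dsmod user "cn=Suzan Fine,ou=HumanResources,dc=contoso,dc=msft" -disabled no
dsrm "cn=Temp Staff,ou=HumanResources,dc=contoso,dc=msft" -noprompt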
The Csvde command-line tool uses a comma-delimited text file, also known as a comma-
separated value format (Csvde format) as input to create multiple accounts in Active Directory.
You use the Csvde format to add user objects and other types of objects to Active Directory.
You cannot use the Csvde format to delete or modify objects in Active Directory. Before
importing a Csvde file, ensure that the file is properly formatted.
The input file:
Must include the path to the user account in Active Directory, the object type,
which is the user account, and the user logon name (for Microsoft Windows NT® 4.0
and earlier).
Should include the user principal name (UPN) and whether the user account is
disabled or enabled. If you do not specify a value, the account is disabled.
Can include personal information, for example, telephone numbers or home
addresses. Include as much user account information as possible so that users can search
in Active Directory successfully.
Cannot include passwords. Bulk import leaves the password blank for user
accounts. Because a blank password allows an unauthorized person to access the
network by knowing only the user logon name, disable the user accounts until users
start logging on.
To edit and format the input text file, use an application that has good editing capabilities, such
as Microsoft Excel or Microsoft Word. Next, save the file as a comma-delimited text file. You
can export data from Active Directory to an Excel spreadsheet or import data from a
spreadsheet into Active Directory.
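As a sketch (the file name, OU, and filter shown are illustrative only), user objects could be
exported to a comma-delimited file with a command such as:
csvde -f export.csv -d "ou=HumanResources,dc=contoso,dc=msft" -r "(objectClass=user)"
The resulting file can be edited in Excel and then re-imported, as described below.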
The Ldifde command-line tool uses a line-separated value format to create, modify, and delete
objects in Active Directory. An Ldifde input file consists of a series of records that are separated
by a blank line. A record describes a single directory object or a set of modifications to the
attributes of an existing object and consists of one or more lines in the file. Most database
applications can create text files that you can import in one of these formats. The requirements
for the input file are similar to those of the Csvde command-line tool.
The attribute line. The first line of the import file lists, in order, the attributes for each
record that follows. The following sample code is an example of an attribute line:
DN,objectClass,sAMAccountName,userPrincipalName,displayName,userAccountControl
The user account line. For each user account that you create, the import file
contains a line that specifies the value for each attribute in the attribute line. The
following rules apply to the values in a user account line:
The attribute values must follow the sequence of the attribute line.
If a value is missing for an attribute, leave it blank, but include all of the commas.
If a value contains commas, include the value in quotation marks.
The following sample code is an example of a user account line:
"cn=Suzan Fine,ou=HumanResources,dc=asia,
dc=contoso,dc=msft",user,suzanf,[email protected],Suzan Fine,514
You cannot use Csvde to create enabled user accounts if the domain password policy requires a
minimum password length or requires complex passwords. In this case, use a
userAccountControl value of 514, which disables the user account, and then enable the account
using Windows Script Host or Active Directory Users and Computers.
Run the csvde command by typing the following at the command prompt, where filename.csv is
the name of your import file:
csvde -i -f filename.csv
The following is an example of an Ldifde record that creates a user account:
dn: cn=Suzan Fine,ou=Human Resources,dc=NG,dc=DS,dc=ARMY,dc=MIL
Changetype: Add
objectClass: user
sAMAccountName: suzanf
userPrincipalName: [email protected]
userAccountControl: 514
Run the ldifde command to import the file and create multiple user accounts in Active
Directory. Type the following at the command prompt, where filename.ldf is the name of your
import file:
ldifde -i -f filename.ldf
Password Challenges
When resetting user passwords the following information is no longer accessible:
Files that the user encrypted
E-mail that is encrypted with the user's public key
Internet passwords that are saved on the computer
Domain Accounts
Recover by archiving the Certificate private key of the users
Recovery Key Agent can then add the certificate key
Local user accounts
Use a Password Reset Disk to prevent losing access
Allows user to connect and change password instead of resetting
Local users can create a Password Reset Disk ahead of time to avoid losing access to the
information listed above. It allows the user to login to the computer and then change the
password, rather than reset it.
Creating a Password Reset Disk:
Put a blank disk into the floppy drive.
Press CTRL+ALT+DEL, and click Change Password. Enter the username for
which you are creating the Reset Disk. In Log On To, select the local computer.
Don't change the password.
Click Backup to launch the Forgotten Password Wizard. Enter the current
password for this user.
When Reset Disk is complete, click Next, then click Finish. Label disk and store
in secure place.
Forgotten Password Wizard: This wizard creates a security key pair; the private key is written to
the password reset disk and the public key encrypts the local user's password on the computer.
The private key is later used to decrypt the encrypted password. The user will be prompted to
create a new password. Since the user is only changing the password, no user access to data is lost.
Resetting a Password with the Reset Disk:
Open the Log On to Windows dialog box.
Enter the username and select the local computer.
Click OK without entering a password (or enter a bad password)
The Logon Failed dialog box appears, which includes an invitation to use a
password reset disk, if one exists.
Click Reset and insert your Password Reset Disk into the floppy drive.
Follow the prompts in the Password Reset Wizard to create a new password.
(Option to create a hint to remember the password is provided)
Log on to the computer with the new password.
WMI Filters
WMI filters can be used to allow or deny GPOs to specific systems based upon hardware and
software configuration specifications (see the sample queries below) and are useful for:
Deploying software upgrades to systems that already have a previous manually-installed version
Deploying software to systems that meet hardware/software specifications
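For example, a WMI filter that targets only computers running a version 6.0 operating system
(Windows Vista or Windows Server 2008) might use a WQL query like the first one below; the
version string and the free-space threshold in the second query are illustrative assumptions:
SELECT * FROM Win32_OperatingSystem WHERE Version LIKE "6.0%"
SELECT * FROM Win32_LogicalDisk WHERE DeviceID = "C:" AND FreeSpace > 1073741824
A GPO linked with the second filter would apply only to systems whose C: drive has more than
1 GB of free space.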
Categories
There are 3 categories of settings that can be applied in both the Computer and User
Configuration settings. There are not many duplicate settings between the two areas. When
there are duplicates, the computer policies will take precedence over the user policies. The
categories of settings are:
Software Settings - provides software deployment using Microsoft Installer
packages (.msi)
Windows Settings - some of the items located in this section include applying
scripts (logon, logoff, startup, and shutdown), security settings, IE maintenance, remote
installation, and folder redirection.
Administrative Templates - these settings are Registry-based settings. These
include desktop configurations, control panel access, network access, printers, start
menu configuration, system information and Windows components.
There is no way to know all of the policy settings that are available. There are specific items
that you will want to know for the exam, and knowing the general area where an item might be
found is also helpful.
There are several places from which the Resultant Set of Policy Wizard can be accessed. The
Group Policy Management Console has nodes for both Group Policy Modeling and Group Policy
Results. The Resultant Set of Policy snap-in can be added to an MMC console. Both Active
Directory Users and Computers and Active Directory Sites and Services provide access to the
wizard: right-click the object you want to view, select All Tasks, and then select either Resultant
Set of Policy (Planning) or Resultant Set of Policy (Logging).
Planning Mode - enables you to plan by seeing what would happen if a policy
was applied to a particular computer or user. Policy settings, software installations and
security can all be viewed in various scenarios. Different scenarios can be simulated to
view the impact on the computer and/or user accounts. This includes being able to
determine the impact of an object/objects move from one place to another.
Logging Mode - enables you to review the existing GPO settings, software
installation applications, and security that have already been applied to a user account or
computer account. (A command-line example for gathering logging data follows the
wizard steps below.)
Whether called Resultant Set of Policy (Planning) in Active Directory Users &
Computers/Sites & Services or Group Policy Modeling in the Group Policy
Management Console, the wizard steps and the end results are the same.
Select the domain desired to conduct the analysis then select any domain
controller or a specific domain controller to conduct the analysis.
In order to run Group Policy Modeling, at least one Domain Controller is
required. You must also have the correct permissions set on the Active Directory container
where you want to run the analysis. Set the permission on the Security tab of the
container and select Resultant Set of Policy - Planning.
Select either the container or specific user/computer desired for the analysis.
Select advanced simulation options. Options include to simulate a Slow Network
Connection, analyze using the Loopback processing and an option is given to select the
site desired.
Select the user security groups desired to view how the policies affect them,
should they be part of that container.
Select the computer security groups desired to view how the policies affect them,
should they be moved to that container.
Select specific WMI filters for users/computers to be associated with the
analysis. Filters have to already be created in order to select them.
Confirm selections and select Next to run the analysis.
Once completed, if using Resultant Set of Policy (Planning), an MMC console
will open and the results can be viewed. Save the MMC console to the Administrative
Tools folder in order to use the analysis another time. If executing as part of the GPMC,
the report will appear in the detail window. To permanently save the report, right-click
the report and click Save Reports.
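For quick checks, logging-mode information for the currently logged-on user and computer can
also be gathered from the command line with the gpresult tool; a minimal sketch using the
standard summary and verbose switches:
gpresult /r
gpresult /scope computer /v
The first command displays a summary of the applied GPOs for both the user and the computer;
the second limits verbose output to the computer settings.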
Software Deployment
Software can be deployed using a native or custom .msi package to either
Computers or Users
Computer - Assign only
Users - Assign or Publish
Published Software is installed by selecting
o Shortcut icon
o Add/Remove Programs
o file extension activation
Application without a Windows Installer – published only (to users)
o Package can use Application (.zap) files
Users can manually install from a file share with limited permissions if the
"Always install with elevated privileges" option is enabled in a policy.
Publish optional applications to users. Assign mandatory applications to users or
computers.
Enable the Uninstall application when it falls out of the scope of management
option to prevent continued use
Windows 2000 introduced the feature of Software Deployment through a Group Policy Object.
Now with Windows Server 2008, several features have been added. Software Deployment can
be used as long as there is an Active Directory domain and the clients are Windows 2000
Professional or later.
Software Deployment is available in both the User and Computer Configuration of the Group
Policy. Where to configure the deployment depends on whether the software is to be available
to anyone on the computer or only to certain users. Another deciding factor is how the software
is deployed; two choices are available: Assign or Publish.
A deployment that has been Assigned indicates that the shortcut is available on Start Menu and
is installed if the user selects the shortcut. It can also be installed if a file with the software
extension is selected. For example, if Adobe Acrobat has been Assigned, double-clicking an
attached .pdf file would cause Acrobat to install. An application that is assigned can be
deployed either to Computers or Users.
When Published, the availability of the software is listed in Add/Remove Programs and will be
installed when requested by the user. It will also install if a file with the software extension is
selected. Only Users can have a Published deployment.
Creating Package
To create the software package, Right-click the Software Installation node, select New/Package.
Make sure to select a UNC path for the Software Distribution Point, not a local path.
Once the path has been entered, select whether to Assign or Publish. If you are configuring a
deployment under the Computer Configuration section of the GPO, publish will be grayed out
because it is not a valid option.
Once the package is created you can then modify the settings. To go directly to the properties of
the package, select Advanced instead of Assign or Publish. This opens up the dialog box so
additional settings can be selected. If adding a Transform or Patch file, make sure to select
Advanced to add the appropriate files before the package is deployed.
On the Deployment tab is the option to Publish or Assign, Uninstall application when it falls
out of the scope of management, option to not show in Add/Remove Programs, and the newest
option, Install the application at logon.
Uninstall application when it falls out of the scope of management option will uninstall the
application when the user account is moved from the OU where the policy is linked. For
example, Sue's user account is in the accounting OU and special accounting software has been
deployed. When she changes to the marketing department, her user account is moved to the
marketing OU. Any accounting software that was deployed to Sue will be removed when she
logs on after the user account is moved. The Install application at logon completely installs the
application at logon instead of having to select a shortcut or file extension. This option is only
available when the package is Assigned to Users.
Upgrading Software
In order to upgrade a software deployment, create a new package for the new version and select
Advanced to open the Properties of the package. On the Deployment tab, select either Assign
(users or computers) or Publish (users).
On the Upgrade tab select Add to select the software package this one is going to upgrade. At
the bottom of the window are options to install over the existing software or uninstall the
existing software and then install the upgrade. After making the desired selection, select OK to
return to the Upgrade tab. The package that is being replaced will be seen in the top window.
The last configuration for the upgrade is to determine if the upgrade is required or not. If it is
required, check the box in the center of the Upgrade window that states "Required upgrade for
existing packages."
Once the upgrade has been configured, the original package will display the upgrade package
in the bottom of the Upgrade tab.
Redeployment - can be used when small changes are made to the original deployment
package. Most of the time redeployment is used when new features are desired from the
original deployment.
Removing Applications
To remove an application that has been installed by a software installation package, right-click
the software package in the Group Policy Object and select All Tasks/Remove. There are two
selections: Forced Removal, which causes an "immediate uninstall", or Optional Removal, which
allows users to continue using the application but provides no new installations.
Note: An "immediate removal" does not trigger uninstallation until the user logs off and back
on for an application deployed to Users. For an application Assigned to Computers, the
machine must be rebooted to uninstall the application.
Another option is to setup the uninstall as part of the original installation package by selecting
to Uninstall the application if the user/computer falls out of the scope of management. If the
user/computer is moved from the original location where the Group Policy Object with the
installation package is deployed, the application will uninstall at the next logon/reboot. This
provides a means of removing software specific for an OU when the user/computer no longer
belongs to that OU. For instance: a user's account belongs to an Accounting OU and has special
Accounting software deployed. When that user's account is moved to the Marketing OU, the
Accounting software will be removed. This can prevent potential licensing problems.
Software Rules
Rules override default security level
Determine as part of rule if it is allowed to run
Select Unrestricted or Disallowed within the rule
Rules include: Hash rule, Certificate Rule, Path Rule and Internet Zone Rule
The Software Rules determine the programs and files that can be executed on a computer
system and override the Security Levels. Each Software Rule will specify if that program/file
will be allowed to run by selecting either Unrestricted (allow to run) or Disallowed (not allowed
to run).
There are four rules besides the Registry rules created when Disallowed Security level is
selected. These four are: Hash Rule, Certificate Rule, Path Rule and Internet Zone Rule. The
rules are applied in the order listed.
Hash Rule - A Hash Rule allows the file that is being either restricted or allowed to be identified
by a hash, which is a series of bytes that uniquely identifies a program or file. The file is selected
in the New Hash Rule Dialog box and it automatically creates the hash. Information about the
file: filename, size and creation date, populates the rule automatically. Select whether to allow
the hash (unrestricted) or restrict (disallowed).
Certificate Rule - A Certificate Rule identifies the software by the signed certificate. This
indicates the software is from a trusted source and will not prompt the user. Certificate Rules
can be applied to scripts and Windows Installer Packages. They do not apply to .exe or .dll file
extensions. To create the rule, select the certificate and then whether to allow (unrestricted) or
restrict (disallowed).
Path Rule - A Path Rule identifies the file by the file path. If the file is moved, the Rule will no
longer apply. Select to allow (unrestricted) or restrict (disallowed).
Internet Zone Rule - An Internet Zone Rule applies only to Windows Installer packages. It
identifies software through the Internet Zone specified in the rule. Select to allow (unrestricted)
or restrict (disallowed).
Enforcement
To prevent the Software Restriction Rules from being applied to the local administrator, double-
click Enforcement, located in the details pane when the Software Restrictions node is selected.
Under Apply software restrictions to the following users: select All users except local
administrator.
Redirected Folders
My Documents
Desktop Settings
Start Menu
Application Settings
Four possible options available for target location depending on what is being
redirected
o Redirect back to local
o Redirect to following location -- %userprofile%\My Documents
o Redirect to the local user profile
o Settings tab, redirect back when policy deleted
There are four areas of the user's profile that can be redirected to another location. The four
areas are My Documents, Start Menu, Desktop Settings and Application Settings. Using
Redirected Folders can be a definite asset for Roaming Profiles: because the information is
stored in a location other than the profile, it will not be downloaded to the client system every
time the user logs in. For My Documents, this is a big advantage. Besides the profile loading
faster and not using excessive bandwidth, there are no documents cached on the local system
that might cause a security breach. The Start Menu and Desktop Settings can have permissions
applied to them to be Read only so the user cannot change any of the settings stored in either of
these areas.
There are two settings available for all redirected folders: Basic - to redirect the folders for all
users or Advanced - select the security group and the location for the target for each security
group separately. There are some significant changes with where the folders can be redirected
and the choices provided for the administrator when configuring.
The first key difference is My Documents can be redirected to a user's home folder, as long as
the home folder structure is already in place. This is not the preferred method but is provided
for organizations that have already deployed the home folder environment. It is restricted to
Windows XP Professional clients.
In order to redirect to the home folder, certain things must be taken into consideration by the
administrator. By redirecting to the home folder, the security of the network environment is relaxed
and the security of the contents of My Documents cannot be guaranteed because of the
following items:
Security - No security settings are checked or altered in the process of the
redirection.
Ownership - Redirection occurs without any type of ownership check to make
sure the user redirecting is actually the owner of the folder.
Home directory - The Home Folder location indicated in the Properties of the
user's account in Active Directory is used to redirect. If this path fails or is incorrect in
any way, the redirection fails.
GP processing
By default in Windows XP Professional, the Fast Logon Optimization feature is set for both
domain and workgroup members. This results in the asynchronous application of policies
when the computer starts and when the user logs on. This application of policies is similar to a
background refresh process and can reduce the length of time it takes for the Logon dialog box
to display and the length of time it takes for the shell to be available to the user. An
administrator can change the default by using the Group Policy Object Editor.
Fast Logon Optimization is always off during logon under the following conditions:
When a user first logs on to a computer.
When a user has a roaming user profile or a home directory for logon purposes.
When a user has synchronous logon scripts.
Security Templates
Any customized security templates can be backed up from one domain and then imported into
another domain.
IP Addressing
An IP address is a 32 bit binary number that identifies a node (computer, interface card). A
binary number is a sequence of 0s and 1s. The 32 bits are interpreted as 4 groups of 8 bits. The IP
address as we know it is called an IPv4 (4 octets). The 4 octets, when converted to decimal, can
be any value between 0-255.
When converting from binary to decimal, draw lines representing the 8 bits. Starting from the
right, write a 1 under the first line. Then proceeding to the left, multiply by 2. The numbers
under each line represent the value of that bit. Each bit can have either a 1 or 0 as its binary
value. Compare any binary number to the value chart. Any bit that has a 1 is considered to be
'on' and all the values of the '1' bits are added together to get the decimal conversion.
For instance: a binary number of 10110011 could be converted to decimal by determining the
value of all the '1' bits and adding them together, which would be: 128+32+16+2+1=179.
To convert a decimal number to binary, use the value chart, starting from left to right.
Determine if the decimal number can have 128 subtracted, if yes, place a 1 in the 128 spot.
Determine the value left. Can 64 be subtracted? If yes, place a 1 in the 64 spot. Determine the
value left and continue through until the total decimal number has been converted.
For Example: Consider the decimal number 203. We can subtract 128 from 203, so there is a 1 in
the 128 place. The value left is 75. We can subtract 64 from 75, so a 1 goes in the 64 place. We
have 11 remaining. We can't take 32 or 16 from 11, so 0s go in those places. We can take 8 from
11 with 3 left over. Place a 1 in the 8 spot. We can't subtract 4 from 3. Place a 0 in the 4 spot. The
last two bits equal 3, which is what we have left, so place a 1 in the 2 and 1 spot. Our new binary
number is: 11001011.
With some practice, converting from binary to decimal and back is not a difficult process. Learn
the bit value table and it will get you a long way.
Address Classes
There are 5 classes of addresses, 3 of those classes can be assigned to individual systems. The
first three classes, A, B, and C, are the classes that can be used to address clients. Class D is used
for Multicasting, which provides a central pool of addresses for video conferencing and other
types of multicasting traffic. There are a series of addresses that are Reserved for future growth.
The first octet of the IP address will identify the class of the address. The ranges listed in the
chart above should be memorized. Each IP address has two parts: network and host. The
network portion is used to determine if the packet being sent from a source is local to the
computer or remote. If the network portions do not match, they cannot communicate without a
router. The host portion must be unique to the segment.
By identifying the class of the address, you determine the portion of that address that is being
used for the network and the host. Class A addresses use the first octet, or the first 8 bits, as the
network portion. The network bits are often represented with a slash or CIDR notation at the end of the
IP address. This indicates the number of bits that are being used to represent the network. The
remaining bits are for the host.
Subnetting
The process of creating more networks with fewer hosts per network by
"borrowing" bits in the subnet mask.
Supernetting aggregates multiple routes to a single network via the opposite
process
Why subnet?
o Make communication more efficient.
o Reduce network broadcasts by creating broadcast domains.
o Dividing large IP networks into smaller, more efficient ones.
Subnet Masks
Use Decimal notation (255.0.0.0) or CIDR Notation (/8) to identify the subnet mask value.
Each IP address has an associated subnet mask which is used by the computer to identify the
network portion of the address. When converted to binary, the 1s in the subnet mask represent
the network portion and the 0s represent the host portion. The 3 classes have default subnet
masks, which represent the full range of addresses in the available bits. With a class A network
address, 16,777,216 (2^24) hosts can be configured. For a class B, 65,536 (2^16) hosts are
available. In a class C, 256 (2^8) hosts are available. In order to calculate the number of valid
addresses available, count how many bits are available for the hosts and use that number as the
exponent (2 to the power of u). The 2 comes from the possibilities in 1 bit - 0 or 1. The u
represents the number of unmasked (host) bits in the IP address.
Network/Broadcast Address
Another item to consider in determining the valid number of hosts that can be configured, is the
rule that the host bits cannot be all 0s or all 1s in binary. When the host bits are all 0s, it is called
the Network Address. It is the first address in the range and is not available to address a host.
Network addresses appear in routing tables on PCs and routers. When the host bits of an IP
address are all 1s, that is the Broadcast Address used by all PCs in that network. It is the last
address in the range and cannot be configured to a host. When a PC broadcasts an
announcement, it will send that packet to the Broadcast Address.
With this in mind, the Class A can have 16,777,214 (16,777,216 - 2) hosts, class B can have 65,534
(65,536 - 2) hosts and a Class C can have 254 (256 - 2) hosts.
All 0s in 8 bits has a decimal value of 0 and all 1s has a decimal value of 255. Don't get confused
that any address ending in decimal 0 or 255 is invalid. It is binary host bits that need to be
considered. If a class B address is being used, it could have a 0 or 255 at the end of some
addresses, but there would be a 1 or 0 somewhere in the total number of host bits.
In order to manage the range of addresses available, it is possible to break those addresses into
portions or subnets. The network borrows bits from the host in order to create the subnets. If a
class B address borrows bits from the host portion, it will still be a class B address (16 bits) plus
have additional bits that can identify the subnet. The subnet bits will allow the hosts to be
identified as local or remote by comparing the bits that have been borrowed for the network.
The 1s in the subnet mask must be consecutive, so there are only 9 possible values for any
subnet mask octet: 0, 128, 192, 224, 240, 248, 252, 254, and 255. Notice in the chart that the 1s
are consecutive. There are no other possibilities.
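As a worked example (the 192.168.1.0 network is used purely for illustration): borrowing 3 bits
from the host portion of a class C address produces a subnet mask of 255.255.255.224 (/27). The
3 borrowed bits give 2^3 = 8 subnets, and the 5 remaining host bits give 2^5 - 2 = 30 usable hosts
per subnet. The first subnet is 192.168.1.0 (the network address), with usable host addresses
192.168.1.1 through 192.168.1.30 and a broadcast address of 192.168.1.31; the next subnet begins
at 192.168.1.32, and so on.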
See also: Port Requirements for the Microsoft Windows Server System (KB832017)
IPv6
128 bit address
o Unicast IPv6 addresses are divided into 2 parts: a 64 bit network and 64 bit host.
o Host component is typically based on MAC, or can be randomly generated
Eight blocks of four hexadecimal digits
Colon-separated (coloned) hexadecimal notation
Can be shortened by eliminating leading 0s and by collapsing consecutive all-zero blocks with "::":
o 2001:0db8:0000:0000:0000:0000:1428:57ab
o 2001:0db8:0000:0000:0000::1428:57ab
o 2001:0db8:0:0:0:0:1428:57ab
o 2001:0db8:0:0::1428:57ab
o 2001:0db8::1428:57ab
o 2001:db8::1428:57ab
Special addresses
o :: -- unspecified address
o ::1 – loopback
o fe80::/10 – link local (auto-configuration address, not routable)
o 2001::/32 – Teredo
o 2002::/16 – 6to4 addressing
o fd00::/8 – unique local unicast (routable within an organization, not on the Internet)
Since all Link-Local Addresses (LLAs) share the same network id (fe80::), you can't determine
which interface an LLA is bound to just by looking at the address. If a PC has multiple network
interface cards bound to different networks, each network is identified by a zone id. The zone id
will follow a "%" sign.
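For example (the address and zone id below are made up for illustration), pinging a neighbor's
link-local address through the interface with zone id 11 would look like:
ping fe80::20c:29ff:fe12:3456%11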
Teredo
Teredo client: computer enabled with both IPv6 and IPv4 and that is located behind a router
performing IPv4 NAT. The Teredo client creates a Teredo tunneling interface and configures a
routable IPv6 address with the help of a Teredo server. Through this interface, Teredo clients
communicate with other Teredo clients or with other IPv6 hosts on the IPv6 Internet
Teredo Server: Public server connected both to the IPv4 Internet and the IPv6 Internet. Its job is
to perform configuration of addresses of the Teredo clients while also configuring the initial
communication.
Teredo Relay: A Teredo Relay is a Teredo tunnel endpoint. It is an IPv6/IPv4 router that can
forward packets between Teredo clients on the IPv4 Internet and IPv6-only hosts
DHCP
When a DHCP client boots, it broadcasts a DHCP Discover packet. All DHCP servers with a
valid available address will respond to this packet with a DHCP Offer. The client selects one
of these offers to accept and then sends a DHCP Request. The issuing server responds with a
DHCP Acknowledgement. This process is commonly known as DORA.
If no offer is received, the client will rebroadcast its Discover at 2, 4, 8, and 16 seconds (+/- a
randomized delay of up to 1 second). If after all this no Offer is received, the client will revert to
APIPA and retry the process every five minutes. APIPA ensures that computers in a broadcast
domain can communicate with each other even without a DHCP server.
At 50% of the lease duration, the client will begin renewal attempts. This consists of a renewal
Request from the client, and an Ack by the originally issuing server. If the issuing server is not
available at 87.5% of the lease duration, a general Discover packet is issued as in the initial boot
process. If a client requests renewal of an invalid address, the server will issue a DHCP deny
(NAK), which forces the client to release its current address and begin the DORA process again.
Information contained in a DHCP offer must include an IP address and subnet mask. It may
include a number of optional parameters: gateway address, DNS server address, WINS server
address, disable NetBIOS over TCP, and release DHCP Lease on Shutdown. Note that APIPA
configuration contains no information beyond IP address and subnet mask.
The DHCP process occurs on UDP ports 67 and 68. Because these messages are broadcasts, they
are not forwarded between subnets by routers (or layer 3 switches) in their default configuration.
It may be necessary to configure broadcast forwarding or a DHCP relay agent before deploying
DHCP across multiple subnets.
Unless otherwise specified in the DHCP options, a client will not release its DHCP address on
shutdown. It will automatically attempt renewal on restart. If the issuing server is not
available, but the gateway is reachable, the client will continue using the lease until its
expiration. If the default gateway is also unreachable, the client will release the IP and use
APIPA until a DHCP server is available.
DHCP clients can be manually manipulated using the ipconfig /release & /renew command.
These can be useful when moving computers, or renumbering a network.
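A minimal sequence for forcing a client to give up its lease, request a new one, and verify the
result is:
ipconfig /release
ipconfig /renew
ipconfig /all
The /all output shows the new lease, its expiration times, and the address of the DHCP server
that issued it.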
Configuring TCP/IP
Dynamically
o Obtain an IP address automatically
o Default setting
Statically
o Use the following IP address
o Manually enter
IP address
o Subnet Mask
o Default Gateway
o DNS Server
Use alternate for Fault Tolerance, alternate will respond when preferred does not
Optimize name resolution by directing clients to local DNS servers
IP addresses can be configured either dynamically or statically. To configure the settings, go to
the Properties of the Network Interface Card located in the Network Connections dialog box.
On the General tab, select Internet Protocol and Properties. The Properties for the IP settings
will open.
The General tab provides radio buttons to choose either to Obtain an IP address automatically or
to assign a static address. When selecting to obtain an address dynamically, a DHCP server
must be running in the network in order to obtain an address. If not, an APIPA address will be
assigned. This is the default setting.
To statically enter an IP address, select to Use the Following IP address. The IP address and
subnet mask must be manually keyed. The Default Gateway, which is the near side of the
router, is optional. It must be provided if Internet or remote communication is desired.
When obtaining an IP address automatically, all other options can be configured dynamically as
well, including the DNS server addresses. DNS provides name resolution for the network and is
necessary for the Windows Server 2008 network along with communicating on the Internet.
This address points to a DNS server that can provide name resolution. If statically entered, the
Preferred DNS server is the first server that will be contacted. The Alternate DNS server is only
used if the preferred server is unavailable.
The advanced settings include the capability of statically assigning multiple addresses to one
network interface and configuring an alternate static address for an automatically assigned IP
address.
Multiple IP Addresses
There are many reasons why multiple IP addresses may be required on a single interface. One
reason would be a web server hosting multiple web sites. To assign multiple IP addresses:
Select the Advanced button at the bottom of the Properties window where the original IP
address has been assigned. At the top of the IP settings tab, there is an Add button. Select this
and enter the additional IP address and subnet mask. Once the addresses are added here, they
will then be available to select throughout the system.
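The same settings can also be scripted with netsh; the sketch below assumes an interface named
"Local Area Connection" and example addresses:
netsh interface ipv4 set address name="Local Area Connection" source=static address=192.168.1.10 mask=255.255.255.0 gateway=192.168.1.1
netsh interface ipv4 add address name="Local Area Connection" address=192.168.1.11 mask=255.255.255.0
The first command sets the primary static address and default gateway; the second adds an
additional address to the same interface.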
IP Troubleshooting Tools
IPConfig
o View IP settings
Ping
o Check name resolution
o Test connectivity
o IP address or FQDN
Tracert
o Trace the route (hops) for a specific IP address or FQDN
o –d for ‘no name resolution’
PathPing
o Combination of Tracert and Ping; traces the route and provides statistics for lost
packets
o Slower than the others
IPConfig
This is probably one of the most used tools, next to Ping. This tool displays a view of the IP
configuration along with executing other tasks. When entered by itself, IPConfig will show the
network interfaces configured with the IP address, subnet mask and default gateway. To see all
the configuration settings, type IPConfig /all. This will show all the network interface
information including any additional options that have been configured (DNS, WINS), if it is
enabled for DHCP, the DHCP server address where it obtained its address, the lease for the
address, the host name, the DNS suffix being used and the MAC address of the interface.
Some of the other tasks that can be executed with IPConfig involve DNS and DHCP. Use
/release and /renew to refresh a DHCP configured address and /registerdns, /flushdns,
/displaydns for DNS. They force registration of the client to DNS, flush the DNS cache, and will
show the entries in the DNS cache.
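For example:
ipconfig /flushdns
ipconfig /registerdns
ipconfig /displaydns
The first command clears the local DNS resolver cache, the second forces the client to refresh
its DNS registration, and the third lists the entries currently held in the cache.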
Ping
Ping is a command-line utility used to test connectivity. The IP address or fully qualified
domain name (FQDN) can be used to execute a ping. A Ping sends out 4 packets to the specific
address/FQDN and waits for a reply. The four responses are then displayed. This is the first
thing to do when troubleshooting a connection.
The rule for troubleshooting is to start with yourself and then work out. Ping the loopback
address (127.0.0.1), then the local host address. Then try a host on the same subnet. Then try the
default gateway address. If all those work, then ping a remote address. If all hosts return a ping,
then start looking at other possibilities. Most of the time, if there are connectivity problems, one
of the ping attempts will fail. It can be a bad cable, bad device or incorrect addressing that can
be causing the failure. This gives a better place to start looking for the problem.
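A typical inside-out sequence might look like the following (all addresses are examples only):
ping 127.0.0.1
ping 192.168.1.10
ping 192.168.1.25
ping 192.168.1.1
ping 131.107.2.200
These test, in order, the local TCP/IP stack, the local host's own address, a neighbor on the same
subnet, the default gateway, and finally a remote host.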
Tracert
The Tracert command is used with either an IP address or a fully qualified domain name (FQDN)
to trace the route of the packet. It will display each time the packet touches a router as a hop. If
the packet is not reaching its destination, this can assist with tracking down where it is
dropping. Many times it may be a firewall or proxy server that has been instituted that does not
allow packets from the source area to pass.
PathPing
Traces the route a packet takes to a destination and displays information on packet losses for
each router in the path. This gives detailed statistics about the packet and its path.
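For example (the destination name is illustrative):
tracert -d www.contoso.com
pathping www.contoso.com
The -d switch skips reverse name resolution so the trace completes faster; pathping then reports
per-hop loss statistics for the same path.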
Event Viewer
View events from multiple event logs
Save useful event filters as custom views that can be reused
Schedule a task to run in response to an event
Create and manage subscriptions
Event Viewer is a Microsoft Management Console that allows you to browse and manage event
logs. It is a useful tool for monitoring the health of systems and troubleshooting issues when
they arise.
When looking for improper logon events, an administrator must examine the Security log of
every DC that may have received the logon requests.
Viewing events from Multiple Logs: When you use Event Viewer to troubleshoot a problem,
you need to locate events related to the problem, regardless of which event log they appear in.
To specify a filter that spans multiple logs, you need to create a custom View.
Reusable Custom Views: When you work with Event Logs, your primary challenge is to narrow
the set of events to just those that you are interested in. Sometimes this involves a good deal of
effort. Now, Event Viewer allows you to save these custom views once they are created.
Integration with Task Scheduler: By right-clicking on an event, you are now able to schedule a task
to run when that specific event is logged in the future.
Event Subscriptions: You can collect events from remote computers and store them locally by
creating event subscriptions. This is definitely testable!!!
Event Subscriptions
Configure Forwarding computer
o Winrm quickconfig (Windows Remote Management)
o Add the server to the Event Log Readers Group
Configure Collecting Computer
o Wecutil qc (Windows Event Collector) - Configure the subscription
o Setup collection of Event Viewer data
You will have one machine that forwards Event Viewer data (the Forwarding computer) and one
computer that collects the data (the Collector); you then create a subscription on the Collecting
computer for what should be collected (all entries are probably not so interesting).
1. On the Forwarding computer, open an elevated command prompt and run winrm quickconfig
to enable Windows Remote Management.
2. If you have Windows Firewall enabled, it will ask if you want to create an exception for
this port; type Y to do so.
3. To decide who can collect Event Viewer data from this Forwarding computer, you must add
the people or machines to the Event Log Readers group.
This can be done with the graphical MMC, but since we already have an elevated cmd from
running the winrm command, we will use it to add our Collecting computer to the Event Log
Readers group with the net localgroup command:
net localgroup "Event Log Readers" collectorpc$@domain.com /add
Notice: Don't forget the $ sign after the computer name.
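On the Collecting computer, enable the Windows Event Collector service with wecutil, as noted
in the bullet list above; a minimal sketch:
wecutil qc
Running wecutil qc configures and starts the Windows Event Collector service; the subscription
is then created in Event Viewer under Subscriptions, where you specify the forwarding computer
and the events of interest.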
DFS Namespaces
A namespace is a virtual view of shared folders in an organization. The path to a namespace is
similar to a Universal Naming Convention (UNC) path to a shared folder, such as
\\Server1\Public\Software\Tools. In this example, the shared folder Public and its subfolders
Software and Tools are all hosted on Server1.
A namespace server hosts a namespace. The namespace server can be a member server or a
domain controller.
The namespace root is the starting point of the namespace. In the previous figure, the name of
the root is Public, and the namespace path is \\Contoso\Public. This type of namespace is a
domain-based namespace because it begins with a domain name (for example, Contoso) and its
metadata is stored in Active Directory Domain Services (AD DS). Although a single namespace
server is shown in the previous figure, a domain-based namespace can be hosted on multiple
namespace servers to increase the availability of the namespace.
Folders without folder targets add structure and hierarchy to the namespace, and folders with
folder targets provide users with actual content. When users browse a folder that has folder
targets in the namespace, the client computer receives a referral that transparently redirects the
client computer to one of the folder targets.
A folder target is the UNC path of a shared folder or another namespace that is associated with
a folder in a namespace. The folder target is where data and content is stored. In the previous
figure, the folder named Tools has two folder targets, one in London and one in New York, and
the folder named Training Guides has a single folder target in New York. A user who browses
to \\Contoso\Public\Software\Tools is transparently redirected to the shared folder
\\LDN-SVR-01\Tools or \\NYC-SVR-01\Tools, depending on which site the user is currently
located in.
Create a namespace :
1. Click Start, point to Administrative Tools, and then click DFS Management.
2. In the console tree, right-click the Namespaces node, and then click New Namespace.
3. Follow the instructions in the New Namespace Wizard.
To create a folder in a namespace
1. Click Start, point to Administrative Tools, and then click DFS Management.
2. In the console tree, under the Namespaces node, right-click a namespace or a folder within a
namespace, and then click New Folder.
3. In the Name text box, type the name of the new folder.
4. To add one or more folder targets to the folder, click Add and specify the Universal Naming
Convention (UNC) path of the folder target, and then click OK.
DFS Replication
DFS Replication is an efficient, multiple-master replication engine that you can use to keep
folders synchronized between servers across limited bandwidth network connections. It
replaces the File Replication service (FRS) as the replication engine for DFS Namespaces, as well
as for replicating Active Directory Domain Services (AD DS) SYSVOL folder in domains that
use the Windows Server 2008 domain functional level. For more information about replicating
SYSVOL using DFS Replication, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=93057).
DFS Replication uses a compression algorithm known as remote differential compression
(RDC). RDC detects changes to the data in a file and enables DFS Replication to replicate only
the changed file blocks instead of the entire file.
To use DFS Replication, you must create replication groups and add replicated folders to the
groups. Replication groups, replicated folders, and members are illustrated in the above figure.
This figure shows that a replication group is a set of servers, known as members, which
participates in the replication of one or more replicated folders. A replicated folder is a folder
that stays synchronized on each member. In the figure, there are two replicated folders: Projects
and Proposals. As the data changes in each replicated folder, the changes are replicated across
connections between the members of the replication group. The connections between all
members form the replication topology.
Creating multiple replicated folders in a single replication group simplifies the process of
deploying replicated folders because the topology, schedule, and bandwidth throttling for the
replication group are applied to each replicated folder. To deploy additional replicated folders,
you can use Dfsradmin.exe or follow the instructions in a wizard to define the local path and
permissions for the new replicated folder.
Each replicated folder has unique settings, such as file and subfolder filters, so that you can
filter out different files and subfolders for each replicated folder.
The replicated folders stored on each member can be located on different volumes in the
member, and the replicated folders do not need to be shared folders or part of a namespace.
However, the DFS Management snap-in makes it easy to share replicated folders and optionally
publish them in an existing namespace.
You can administer DFS Replication by using DFS Management, the DfsrAdmin and Dfsrdiag
commands, or scripts that call WMI.
When you first set up replication, you must choose a primary member. Choose the member that
has the most up-to-date files that you want to replicate to all other members of the replication
group, because the primary member's content is considered "authoritative." This means that
during initial replication, the primary member's files will always win the conflict resolution that
occurs when the receiving members have files that are older or newer than the associated files
on the primary member.
The following concepts will help you better understand the initial replication process: Initial
replication does not begin immediately. The topology and DFS Replication settings must be
replicated to all domain controllers, and each member in the replication group must poll its
closest domain controller to obtain these settings. The amount of time this takes depends on AD
DS replication latency and the long polling interval (60 minutes) on each member.
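Rather than waiting for the polling interval, an administrator can force a member to poll its
closest domain controller immediately with the Dfsrdiag command; the member name below is
illustrative:
dfsrdiag pollad /member:NYC-SVR-01
This makes the member pick up new replication group and replicated folder settings right away.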
Initial replication always occurs between the primary member and the receiving replication
partners of the primary member. After a member has received all files from the primary
member, that member will replicate files to its receiving partners as well. In this way,
replication for a new replicated folder starts from the primary member and then progresses to
the other members of the replication group.
When receiving files from the primary member during initial replication, if a receiving member
contains files that are not present on the primary member, those files are moved to their
respective DfsrPrivate\PreExisting folder. If a file is identical to a file on the primary member,
the file is not replicated. If the version of a file on the receiving member is different from the
primary member's version, the receiving member's version is moved to the Conflict and Deleted
folder and remote differential compression (RDC) can be used to download only the changed
blocks.
To determine whether files are identical on the primary member and receiving member, DFS
Replication compares the files by using a hash algorithm. If the files are identical, only minimal
metadata is transferred.
After the initialization of the replicated folder, when all existing files in the replicated folder are
added to the DFS Replication database, the primary member designation is removed. That
member is then treated like any other member and its files are no longer considered
authoritative over other members that have completed initial replication. Any member that has
completed initial replication is considered authoritative over members that have not completed
initial replication.
Note: Replication of the new replicated folder does not begin immediately. The new DFS
Replication settings must be replicated to all domain controllers, and each member in the
replication group must poll its closest domain controller to obtain these settings. The amount of
time this takes depends on AD DS replication latency and the long polling interval (60 minutes)
on each member.
DFS Requirements
Members of the replication group must be running Windows Server 2003 R2 or Windows Server 2008
Install File Services Role with DFS Replication Role Service
Replicated folders must be stored on NTFS volumes
Not available on Server Core
Single Forest Only
Third-party software must be compatible with DFS Replication, including:
o Defragmentation/disk maintenance
o Antivirus
o Backup
DFS Replication is not fully compatible with (or aware of) clustering; if it is deployed on a
cluster node, locate replicated folders on the local storage of the node, not on shared storage.
DFS Commands
DFSUtil
DFSdiag
DFSradmin
Windows Server 2008 includes an updated version of the DFSUtil command, the new DFSdiag
command, and the new DFSradmin command, which can be used to manage namespaces and
diagnose namespace issues. The test is mainly concerned with DFSUtil.
DFSUtil Examples
Example 1: Control a DFS Client's Ability to Link to Sites
Enabling the insite setting of a DFS server is useful when you do not want DFS clients to connect
to a site other than the site they are in, and hence want to avoid using expensive WAN links:
Dfsutil /insite:\\example.com\dfsroot /enable
After using this command statement, clients will not get any referral for a replica outside the
dfsroot site. This means that if the Replica sets in the client Site are down, the client will not do a
failover to a Replica set in another site. Disabling the insite setting of a DFS server is useful
when you want to enable outside site referrals.
If you want your DFS clients to be able to link to outside sites when no local server is available,
and the DFS clients never seem to link outside the site, it may be because connectivity has been
limited to an internal site using the /insite enable setting. Disabling this setting will restore the
ability of clients to link outside the site. To reset your site preferences, type the following at the
command prompt:
DFSUtil /insite:\\example.com\Sales /disable
Example 2: Configure a DFS Server to be Site Cost Aware
You want DFS clients to be able to connect outside the internal site, but you want clients to
connect to the closest site first, saving the expensive network bandwidth. You want to maintain
high availability as a priority, but obviously you want DFS clients to connect to closer sites
rather than farther sites when the former are reachable and up. To configure the server to be site
cost aware, type either of the following statements at the command line:
DFSUtil /sitecosting:\\example.com\sales /enable
DFSUtil /root:\\example.com\sales /sitecosting /enable
Now the server sends the referral list composed of the randomly ranked targets in the same site
as the client, followed by the targets in the next closest site from the site in which the client
resides, followed by targets in the second closest site, and then the third and so on.
Example 3: Back up the DFS Namespace
You want to back up the DFS namespace for a specified root so that you can restore it later in
case of system crash and loss of namespace from the system. Backing up namespace
information is especially important when you have large namespaces. Using a single command
statement per root, you can back up the namespaces into simple files. The files are in an XML
format. To back up a namespace, type the following at the command line:
DFSUtil /root:\\example.com\sales /export:c:\NameSpaceBackups\Dir\file.txt
Note: The output of the export file is in XML format in Windows Server 2003 DFS. This means
that, if your current DFS is a prior version and the output file is coming from a prior version, it
should be converted to the XML format used by the /import parameter.
Example 4: Restore the DFS Namespace from a Backup
Your system has crashed and you have lost your namespace data. In order to restore the
namespace, type the following at the command line:
DFSUtil /root:\\example.com\sales /import:c:\NameSpaceBackups\Dir\file.txt /set
Shadow Copy
Shadow copying of files in shared folders is a feature administrators can use to create backup
copies of files on designated volumes automatically. You can think of these backup copies as
point-in-time snapshots that can be used to recover previous versions of files. Normally, when a
user deletes a file from a shared folder, it is immediately deleted and doesn’t go to the local
Recycle Bin. This means the only way to recover it is from backup. The reason for this is that
when you delete files over the network, the files are permanently deleted on the remote server
and never make it to the Recycle Bin. This problem changes with shadow copying. If a user
deletes a file from a network share, she can go back to a previous version and recover it—and
she can do this without needing assistance from an administrator.
Volume Shadow Copy service is a new feature of Microsoft Windows Server 2003. It offers two
important features:
Shadow copying of files in shared folders: Allows you to configure volumes so that shadow
copies of files in shared folders are created automatically at specific intervals during the day.
This allows you to go back and look at earlier versions of files stored in shared folders. You can
use these earlier versions to recover deleted, incorrectly modified, or overwritten files. You can
also compare versions of files to see what changes were made over time. Up to 64 versions of
files are maintained.
Shadow copying of open or locked files for backups: Allows you to use backup programs, such
as Windows Backup, to back up files that are open or locked. This means you can back up when
applications are using the files and no longer have to worry about backups failing because files
were in use. Backup programs must implement the Volume Shadow Copy application
programming interface (API).
Both features are independent of each other. You do not need to enable shadow copying of a
volume to be able to back up open or locked files on a volume.
Client computers access shadow copies through the Previous Versions (Shadow Copy) client;
newer versions of Windows include this client. If you use this client with earlier versions of the
Windows operating system, you must install the Shadow Copy Client on both the servers using
shadow copies and the user computers that must access shadow copies.
To enable shadow copies, open the volume's Properties dialog box and select the Shadow Copies
tab, then:
Select the volume on which you want to enable shadow copies.
Click Enable.
When prompted, click Yes to confirm the action.
Windows will then create a snapshot of the volume.
To adjust when snapshots are taken, click Settings, then Schedule, configure the run times, and
click New. When you are finished configuring run times, click OK twice to return to the
volume's Properties dialog box.
Configure any additional volumes for shadow copying by repeating these steps.
Click OK when you are finished.
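Shadow copies can also be inspected and created manually from the command line with
vssadmin; a brief sketch for the C: volume:
vssadmin list shadows
vssadmin create shadow /for=C:
The first command lists the existing shadow copies; the second creates an additional snapshot of
C: outside the normal schedule (the create shadow option is available on the server editions of
Windows).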
Tool - Description
Dsadd - Adds objects, such as computers, users, groups, organizational units, and contacts, to Active Directory.
Dsmod - Modifies objects, such as computers, servers, users, groups, organizational units, and contacts, in Active Directory.
Dsquery - Runs queries in Active Directory according to specified criteria. You can run queries against servers, computers, groups, users, sites, organizational units, and partitions.
Dsmove - Moves a single object, within a domain, to a new location in Active Directory, or renames a single object without moving it.
Dsrm - Deletes an object from Active Directory.
Dsget - Displays selected attributes of a computer, contact, group, organizational unit, server, or user in Active Directory.
Csvde - Imports and exports Active Directory data by using comma-separated format.
Ldifde - Creates, modifies, and deletes Active Directory objects. Can also extend the Active Directory schema, export user and group information to other applications or services, and populate Active Directory with data from other directory services.
Server Manager
Provides a console with which to manage the basic functions of the server. This utility has
replaced Computer Management and has enhanced the functionality while still retaining some
of the original features.
Roles
No roles are installed by default. For instance, DNS is now configured as a role of the server; to
install DNS, you would need to add that role. Many roles have role services that can be
installed. For instance, File Services is an available role on the server, and its associated role
services include DFS (Distributed File System), Windows Search Service, and others. Role
services add functionality to the role. Another example is the Active Directory Domain Services
role, whose functionality is provided by the Active Directory Domain Controller role service.
Roles and role services can be added or removed, and their status can be monitored, from within
the Roles page.
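Roles and features can also be queried and installed from the command line with
ServerManagerCmd.exe. The identifiers below (DNS, AD-Domain-Services, Telnet-Client) are the
commonly documented ones, but this is only a sketch; run the -query switch first to confirm the
exact identifiers on your server.
    rem List all roles, role services, and features and show which are installed
    ServerManagerCmd -query
    rem Install the DNS Server role
    ServerManagerCmd -install DNS
    rem Install the Active Directory Domain Services role (dcpromo still promotes the DC)
    ServerManagerCmd -install AD-Domain-Services
    rem Remove a feature that is no longer needed
    ServerManagerCmd -remove Telnet-Client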
Features
Features are software programs that enhance the functionality of the server. Features do not
necessarily correspond with roles, though they sometimes do. For instance, the Failover
Clustering feature can be used to augment the File Services or DHCP Server roles by enabling
them to join server clusters. However, BitLocker Drive Encryption is a feature that is available
regardless of the roles installed.
Diagnostics
Event Viewer: An advanced tool that displays detailed information about
significant events on your computer. Event Viewer's main logs are the Application log,
the System log, and the Security log. Other logs are available, depending upon which
roles and services are installed on the system.
Reliability and Performance Monitor: An MMC snap-in that provides tools for
analyzing a system's performance. It allows an administrator to monitor hardware and
software performance in real time, customize what data is collected, configure alerts,
and generate reports.
Device Manager: Allows a user to view the installed devices on a system, verify
hardware functionality, and upgrade or roll back drivers.
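Event Viewer data can also be pulled from the command line with wevtutil, which is useful for
scripted checks. A minimal sketch (the log name and event count are arbitrary examples):
    rem List the event logs available on this server
    wevtutil el
    rem Show the five most recent System log events in readable text
    wevtutil qe System /c:5 /rd:true /f:text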
Configuration
Task Scheduler: Allows scheduling of automated tasks to perform actions at a
specific time.
Windows Firewall with Advanced Security: Combines a host firewall and IPsec.
This is an extension of the basic Windows Firewall that includes stateful packet
inspection and filtering.
Services: Provides access to configure how services run. Services are programs or
processes that run in the background and provide support to other programs.
WMI control: Windows Management Instrumentation (WMI) is the primary
management technology for Windows operating systems. It enables consistent and
uniform management, control, and monitoring of systems throughout your enterprise.
Based on industry standards, WMI allows system administrators to query, change, and
monitor configuration settings on desktop and server systems, applications, networks,
and other enterprise components. System administrators can write scripts that use the
WMI Scripting Library to work with WMI and create a wide range of systems
management and monitoring scripts.
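Each of these Configuration items has a command-line counterpart. The task name, script path,
port, and service name below are placeholders; treat this as a sketch of the kinds of commands
involved rather than recommended settings.
    rem Task Scheduler: run a script every day at 02:00 (path and name are illustrative)
    schtasks /create /tn "NightlyBackup" /tr "C:\Scripts\backup.cmd" /sc daily /st 02:00
    rem Windows Firewall with Advanced Security: allow inbound TCP port 80
    netsh advfirewall firewall add rule name="Allow HTTP" dir=in action=allow protocol=TCP localport=80
    rem Services: set a service to start automatically (note the space after start=)
    sc config Spooler start= auto
    rem WMI: query operating system information through the WMI command line
    wmic os get Caption,Version,LastBootUpTime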
Storage
Windows Server Backup: An MMC snap-in and command-line utility that provides
a complete solution for day-to-day backup and recovery needs. The GUI is wizard-
driven to enable ease of use. This utility allows for the backup and recovery of the entire
server, selected volumes only, or the system state data. The command-line utility is
wbadmin.exe (a brief command-line sketch appears at the end of this Storage section).
Disk Management: A system utility for managing the hard disks and the volumes or
partitions they contain. As always, this utility is used to initialize disks, create
volumes or partitions, format them, and perform most other disk-related
tasks. New functionality includes the ability to extend or shrink volumes, regardless of
whether the disk is basic or dynamic.
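As a brief sketch of the command-line side of these tools, the commands below use placeholder
drive letters and volume numbers and assume the backup target is an attached volume; adjust them
for your own environment.
    rem Windows Server Backup: one-time backup of volume C: to volume E:
    wbadmin start backup -backupTarget:E: -include:C: -quiet
    rem Back up the system state only
    wbadmin start systemstatebackup -backupTarget:E:
    rem Disk Management equivalent: shrink a volume using a diskpart script
    rem (the volume number and size in MB are illustrative)
    echo select volume 2 > shrink.txt
    echo shrink desired=1024 >> shrink.txt
    diskpart /s shrink.txt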
Data Mining Tool (also known as the Database Mounting Tool) - although the Active
Directory database mounting tool does not recover deleted objects by itself, it helps
streamline the process for recovering objects that have been accidentally deleted.
Before the Windows Server 2008 operating system, when objects or organizational
units (OUs) were accidentally deleted, the only way to determine exactly which objects
were deleted was to restore data from backups. This approach had two drawbacks:
Active Directory had to be restarted in Directory Services Restore Mode to perform an
authoritative restore; and an administrator could not compare data in backups that were
taken at different points in time (unless the backups were restored to various domain
controllers, a process which is not feasible). The purpose of the Active Directory
database mounting tool is to expose AD DS data that is stored in snapshots or backups
online. Administrators can then compare data in snapshots or backups that are taken at
different points in time, which in turn helps them to make better decisions about which
data to restore, without incurring service downtime.
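In practice this workflow uses ntdsutil to create and mount a snapshot and Dsamain.exe to expose
the mounted database over an alternate LDAP port. The port number and the snapshot path below
are placeholders (the actual path is reported by the mount step); this is only a sketch of the
sequence, not a complete recovery procedure.
    rem Create a snapshot of the AD DS database
    ntdsutil "activate instance ntds" snapshot create quit quit
    rem List available snapshots, then mount one by the GUID shown in the list
    ntdsutil snapshot "list all" "mount {GUID}" quit quit
    rem Expose the mounted copy of ntds.dit on LDAP port 51389
    dsamain /dbpath C:\$SNAP_datetime_VOLUMEC$\Windows\NTDS\ntds.dit /ldapport 51389
    rem The snapshot can then be browsed with LDP.exe or Active Directory
    rem Users and Computers pointed at port 51389.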
Support for Active Directory Sites and Services - with this feature, you can use the
Active Directory Sites and Services snap-in to manage replication among AD LDS
instances. To use this tool, you must import the classes in MS-ADLDS-
DisplaySpecifiers.LDF to extend the schema of the configuration set that you want to
manage. To connect to an AD LDS instance that hosts your configuration set, specify
the computer name and the port number of a server that hosts this AD LDS instance.
Dynamic list of LDAP Data Interchange Format (LDIF) files - with this feature, you can
make custom LDIF files available during AD LDS instance setup - in addition to the
default LDIF files that are provided with AD LDS - by adding the files to the
%systemroot%\ADAM directory during instance setup.
To learn more about AD LDS, click the AD LDS Help link in Server Manager.
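As a rough sketch, an LDF file such as MS-ADLDS-DisplaySpecifiers.LDF can be imported into an
AD LDS instance with ldifde. The server name and port below are placeholders, and some LDF files
require additional constant substitutions, so check the file's own instructions first.
    rem Import an LDF file into the AD LDS instance listening on port 50000
    ldifde -i -f MS-ADLDS-DisplaySpecifiers.LDF -s localhost:50000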
AD RMS Benefits
Safeguard sensitive information. Applications such as word processors, e-mail clients, and line-
of-business applications can be AD RMS-enabled to help safeguard sensitive information. Users
can define who can open, modify, print, forward, or take other actions with the information.
Organizations can create custom usage policy templates such as "confidential - read only" that
can be applied directly to the information.
Persistent protection. AD RMS augments existing perimeter-based security solutions, such as
firewalls and access control lists (ACLs), for better information protection by locking the usage
rights within the document itself, controlling how information is used even after it has been
opened by intended recipients.
Flexible and customizable technology. Independent software vendors (ISVs) and developers can
AD RMS-enable any application or enable other servers, such as content management systems
or portal servers running on Windows or other operating systems, to work with AD RMS to
help safeguard sensitive information. ISVs are enabled to integrate information protection into
server-based solutions such as document and records management, e-mail gateways and
archival systems, automated workflows, and content inspection.
Identity Federation Support. The identity federation support role service is an optional role
service that allows federated identities to consume rights-protected content by using Active
Directory Federation Services.
AD RMS combines the features of Rights Management Services (RMS) in Windows Server 2003,
developer tools, and industry security technologies, including encryption, certificates, and
authentication, to help organizations create reliable information protection solutions.
For more detailed information about hardware and software considerations with AD RMS, see
the Pre-installation Information for Active Directory Rights Management Services topic in the
Windows Server 2008 Technical Library (http://go.microsoft.com/fwlink/?LinkId=84733).
For detailed instructions about installing and configuring AD RMS in a test environment, see
the AD RMS Installation Step-by-Step Guide (http://go.microsoft.com/fwlink/?LinkId=72134).
To learn more about AD RMS, you can view the Help on your server. To do this, open the Active
Directory Rights Management Services console, and then press F1, or visit the Active Directory
Rights Management Services TechCenter (http://go.microsoft.com/fwlink/?LinkId=80907).
Extensible architecture
AD FS provides an extensible architecture that supports the Security Assertion Markup
Language (SAML) 1.1 token type and Kerberos authentication (in the Federated Web SSO with
Forest Trust design). AD FS can also perform claim mapping, for example, modifying claims
using custom business logic as a variable in an access request. Organizations can use this
extensibility to modify AD FS to coexist with their current security infrastructure and business
policies. For more information about modifying claims, see Understanding Claims.
WDS
Windows Deployment Services is included in the Windows Automated Installation Kit
(Windows AIK) and in Windows Server 2003 SP2. For more information about the Windows
Deployment Services role, see http://go.microsoft.com/fwlink/?LinkId=81873.
Prerequisites:
Active Directory. A Windows Deployment Services server must be either a
member of an Active Directory domain or a domain controller for an Active Directory
domain. The Active Directory domain and forest versions are irrelevant; all domain and
forest configurations support Windows Deployment Services.
DHCP. You must have a working Dynamic Host Configuration Protocol (DHCP)
server with an active scope on the network because Windows Deployment Services uses
PXE, which relies on DHCP for IP addressing.
DNS. You must have a working Domain Name System (DNS) server on the
network to run Windows Deployment Services.
Add Images:
After you configure Windows Deployment Services, you must add at least one boot image and
one install image before you will be able to PXE boot a computer to install an operating system
(unless you use RIS). Once you have added the default images using the instructions in this
section, you will be ready to deploy operating systems. Alternatively, you can use the
instructions in the rest of this guide to perform more advanced tasks like creating your own
install images, creating discover images, or configuring an unattended installation.
Boot images. Boot images are images that you boot a client computer into to
perform an operating system installation. In most scenarios, you can use the Boot.wim
from the installation DVD (in the \Sources directory). The Boot.wim contains Windows
PE and the Windows Deployment Services client (which is basically Windows Vista
Setup.exe and supporting files).
Install images. Install images are the operating system images that you deploy to
the client computer. You can also use the install.wim from the installation DVD, or you
can create your own install image using the steps in creating custom install images.
To add the default boot image included in the product installation DVD:
1. In the left-hand pane of the Windows Deployment Services MMC snap-in, right-click the
Boot Images node, and then click Add Boot Image.
2. Browse to choose the default boot image (Boot.wim) located on the Windows Vista
DVD, in the \Sources directory.
3. Click Open, and then click Next.
4. Follow the instructions in the wizard to add the image.
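The same task can be performed from the command line with WDSUTIL; the drive letter below is a
placeholder for wherever the installation DVD is mounted.
    rem Add the default boot image from the installation DVD (assumed here to be drive D:)
    wdsutil /Verbose /Progress /Add-Image /ImageFile:"D:\Sources\boot.wim" /ImageType:Boot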
To add the default install image included in the product installation DVD:
1. In the Windows Deployment Services MMC snap-in, right-click the Install Images node,
and then click Add Install Image.
2. Specify a name for the image group, and then click Next.
3. Browse to select the default install image (install.wim) located on the Windows Vista
DVD, in the \Sources directory, and then click Open.
4. To add a subset of the images included in the install.wim, clear the check boxes for the
images that you do not want to add to the server. You should only add the images for which
you have licenses.
5. Follow the instructions in the wizard to add the images.
6. Now that you have a boot image and an install image on the server, you can PXE boot a
client computer to install an operating system using the instructions in the following section.
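A command-line equivalent, again with a placeholder drive letter and an illustrative image group
name:
    rem Add the default install image to an image group
    wdsutil /Verbose /Progress /Add-Image /ImageFile:"D:\Sources\install.wim" /ImageType:Install /ImageGroup:"ImageGroup1"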
Unattended Installation
Optionally, you can automate the entire installation. To do this, you use two different unattend
files: one for the Windows Deployment Services UI screens, and one for the latter phases of
Setup.
Windows Deployment Services client unattend file. This file uses the
Unattend.xml format, and it is stored on the Windows Deployment Services server in
the \WDSClientUnattend folder. It is used to automate the Windows Deployment
Services client user-interface screens (such as entering credentials, choosing an install
image, and configuring the disk).
Image unattend file. This file uses either the Unattend.xml or Sysprep.inf format,
depending upon the version of the operating system of the image. It is used to configure
unattended installation options during Windows Setup and to automate the remaining
phases of Setup (for example, offline servicing, Sysprep specialize, and mini-setup). It is
stored in a subfolder (either $OEM$ structure or \Unattend) of the per-image folder.
Two unattend files are necessary because Windows Deployment Services can deploy two image
types: Windows Vista and Windows Server 2008 images that support the Unattend.xml format,
and Windows XP and Windows Server 2003 images, which do not support the Unattend.xml
format.
To automate the installation, create the appropriate unattend file depending on whether you are
configuring the Windows Deployment Services screens or Windows Setup. We recommend that
you use Windows System Image Manager (included as part of the Windows AIK) to author the
unattend files. Then copy the unattend file to the appropriate location, and assign it for use. You
can assign it at the server level or the client level. The server level assignment can further be
broken down by architecture, allowing you to have different settings for x86-based and x64-
based clients. Assignment at the client level overrides the server-level settings. For more
information, see Performing Unattended Installations (http://go.microsoft.com/fwlink/?
LinkId=89226) and Sample Unattend Files (http://go.microsoft.com/fwlink/?LinkId=122642).
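As a sketch of a server-level assignment, WDSUTIL can associate a client unattend file with a
processor architecture. The file name and architecture below are placeholders, and the option
names are given as we recall them from the WDS documentation, so confirm them with WDSUTIL /?
before use.
    rem Assign a Windows Deployment Services client unattend file for x64 clients
    rem (the path is relative to the RemoteInstall folder; names are placeholders)
    wdsutil /Set-Server /WdsUnattend /Policy:Enabled /File:WDSClientUnattend\Unattend.xml /Architecture:x64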
Additional references
For more detailed information, see Deploying and Managing the Windows
Deployment Services Update on Windows Server 2003
http://go.microsoft.com/fwlink/?LinkId=81031
For more information about the Windows Deployment Services role that is
included in Windows Server 2008 see http://go.microsoft.com/fwlink/?LinkId=81873
For a newsgroup about Windows Deployment Services, see Setup and
Deployment (http://go.microsoft.com/fwlink/?LinkId=87628)
Windows AIK (http://go.microsoft.com/fwlink/?LinkId=81030)
Windows AIK User's Guide for Windows Vista
(http://go.microsoft.com/fwlink/?LinkID=53552)
Hyper-V
Server Consolidation
Businesses are under pressure to ease management and reduce costs while retaining and
enhancing competitive advantages, such as flexibility, reliability, scalability, and security. The
fundamental use of virtualization to help consolidate many servers on a single system while
maintaining isolation helps address these demands. One of the main benefits of server
consolidation is a lower total cost of ownership (TCO), not just from lowering hardware
requirements but also from lower power, cooling, and management costs.
Live Migration
Hyper-V in Windows Server 2008 R2 includes the much-anticipated live migration feature,
which allows you to move a virtual machine between two virtualization host servers without
any interruption of service. Hyper-V live migration is integrated with Windows Server 2008 R2
Hyper-V and Microsoft Hyper-V Server 2008 R2. With it you can move running VMs from one
Hyper-V physical host to another without any disruption of service or perceived downtime.
Dynamic VM storage
Windows Server 2008 R2 Hyper-V supports hot plug-in and hot removal of storage. By
supporting the addition or removal of Virtual Hard Drive (VHD) files and pass-through disks
while a VM is running, Windows Server 2008 R2 Hyper-V makes it possible to reconfigure VMs
quickly to meet changing workload requirements. This feature allows the addition and removal
of both VHD files and pass-through disks to existing SCSI controllers for VMs.
Broad OS Support
Hyper-V provides broad support for simultaneously running different types of operating systems,
including 32-bit and 64-bit systems, across different server platforms such as Windows, Linux,
and others.
High Availability
Providing High Availability solutions to mission-critical applications, services, and data is a
primary objective of successful IT departments. When services are down or fail, business
continuity is interrupted, which can result in significant losses. Windows Server 2008 R2
supports High Availability features to help organizations meet their uptime requirements for
their critical systems.
Failover Clustering
Failover clustering can help you build redundancy into your network and eliminate single
points of failure. The improvements to failover clusters (formerly known as server clusters) in
Windows Server 2008 R2 are aimed at simplifying clusters, making them more secure, and
enhancing cluster stability.
Cluster Migration
When migrating a clustered service from one cluster to another, cluster settings can be captured
and copied to another cluster. This reduces the time it takes to build the new cluster and
configure the services. The migration process supports every workload currently supported on
Windows Server 2003 and Windows Server 2008, including DFS-N, DHCP, DTC, File Server,
Generic Application, Generic Script, Generic Service, iSNS, MSMQ, NFS, Other Server, TS
Session Broker, and WINS, and supports most common network configurations.
Cluster Infrastructure
The cluster quorum contains the configuration settings for the entire cluster. With Windows
Server 2008 R2, you can configure a cluster so that the quorum resource is not a single point of
failure by using the majority node set or a hybrid of the majority node set and the quorum
resource model. The cluster service can also isolate DLLs that perform actions incorrectly to
minimize impact to the cluster, as well as verify consistency among copies of the quorum
resource.
Cluster Storage
Failover clusters now support GUID partition table (GPT) disks that can have capacities of
larger than 2 terabytes, for increased disk size and robustness. Administrators can now modify
resource dependencies while resources are online, which means they can make an additional
disk available without interrupting access to the application that will use it.
Cluster Network
Networking has been enhanced to support Internet Protocol version 6 (IPv6) as well as Domain
Name System (DNS) for name resolution, removing the requirement to have WINS and
NetBIOS name broadcasts. Other network improvements include managing dependencies
between network names and IP addresses: If either of the IP addresses associated with a
network name is available, the network name will remain available. Because of the architecture
of Cluster Shared Volumes (CSV), there is improved cluster node connectivity fault tolerance
that directly affects Virtual Machines running on the cluster. The CSV architecture implements a
mechanism, known as dynamic I/O redirection, in which I/O can be rerouted within the
failover cluster based on connection availability.
Cluster Security
Internet Protocol security (IPsec) can be used between clients and the cluster nodes, as well as
between nodes so that you can authenticate and encrypt the data. Access to the cluster can also
be audited to determine who connected to the cluster and when.
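For day-to-day administration, the cluster.exe command line (available alongside the Failover
Cluster Manager snap-in and the newer PowerShell cmdlets) can query and move clustered
resources. The group and node names below are placeholders, and the commands run against the
local cluster by default; treat this as a sketch.
    rem Show the nodes in the cluster and their state
    cluster node /status
    rem Show the clustered groups (services and applications) and where they are online
    cluster group /status
    rem Move a clustered group to another node, for example before planned maintenance
    cluster group "FileServerGroup" /move:Node2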
Network Load Balancing
Network Load Balancing distributes IP traffic to multiple copies (or instances) of a TCP/IP
service, such as a Web server, each running on a host within the cluster. Network Load
Balancing transparently partitions the client requests among the hosts and lets the clients access
the cluster using one or more "virtual" IP addresses. From the client's point of view, the cluster
appears to be a single server that answers these client requests. As enterprise traffic increases,
network administrators can simply plug another server into the cluster.
Host Priorities
Each cluster host is assigned a unique host priority in the range of 1 to 32, where lower numbers
denote higher priorities. The host with the highest host priority (lowest numeric value) is called
the default host. It handles all client traffic for the virtual IP addresses that is not specifically
intended to be load-balanced. This ensures that server applications not configured for load
balancing only receive client traffic on a single host. If the default host fails, the host with the
next highest priority takes over as default host.
Port Rules
Network Load Balancing uses port rules to customize load balancing for a consecutive numeric
range of server ports. Port rules can select either multiple-host or single-host load-balancing
policies. With multiple-host load balancing, incoming client requests are distributed among all
cluster hosts, and a load percentage can be specified for each host. Load percentages allow
hosts with higher capacity to receive a larger fraction of the total client load. Single-host load
balancing directs all client requests to the host with highest handling priority. The handling
priority essentially overrides the host priority for the port range and allows different hosts to
individually handle all client traffic for specific server applications. Port rules also can be used
to block undesired network access to certain IP ports.
When a port rule uses multiple-host load balancing, one of three client affinity modes is
selected. When no client affinity mode is selected, Network Load Balancing load-balances client
traffic from one IP address and different source ports on multiple-cluster hosts. This maximizes
the granularity of load balancing and minimizes response time to clients. To assist in managing
client sessions, the default single-client affinity mode load-balances all network traffic from a
given client's IP address on a single-cluster host. The class C affinity mode further constrains
this to load-balance all client traffic from a single class C address space.
By default, Network Load Balancing is configured with a single port rule that covers all ports
(0-65,535) with multiple-host load balancing and single-client affinity. This rule can be used for
most applications. It is important that this rule not be modified for VPN applications and
whenever IP fragmentation is expected. This ensures that fragments are efficiently handled by
the cluster hosts.
Remote Control
Network Load Balancing provides a remote control program (Wlbs.exe) that allows system
administrators to remotely query the status of clusters and control operations from a cluster
host or from any networked computer running Windows 2000. This program can be
incorporated into scripts and monitoring programs to automate cluster control. Monitoring
services are widely available for most client/server applications. Remote control operations
include starting and stopping either single hosts or the entire cluster. In addition, load
balancing for individual port rules can be enabled or disabled on one or more hosts. New traffic
can be blocked on a host while allowing ongoing TCP connections to complete prior to
removing the host from the cluster. Although remote control commands are password-
protected, individual cluster hosts can disable remote control operations to enhance security.
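A few representative wlbs.exe operations are sketched below (on newer systems the same commands
are also exposed through nlb.exe); remote-control parameters such as cluster addresses and
passwords are omitted here, so these forms act on the local host.
    rem Query the state of the local cluster host and its convergence status
    wlbs query
    rem Stop accepting new connections but let existing TCP connections finish
    wlbs drainstop
    rem Re-enable traffic handling for all port rules on this host
    wlbs enable all
    rem Rejoin the host to the cluster
    wlbs start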
Each Network Load Balancing host can specify the load percentage that it will handle, or the
load can be equally distributed among all of the hosts. Using these load percentages, each
Network Load Balancing server selects and handles a portion of the workload. Clients are
statistically distributed among cluster hosts so that each server receives its percentage of
incoming requests. This load balance dynamically changes when hosts enter or leave the cluster.
In this version, the load balance does not change in response to varying server loads (such as
CPU or memory usage). For applications, such as Web servers, which have numerous clients
and relatively short-lived client requests, the ability of Network Load Balancing to distribute
workload through statistical mapping efficiently balances loads and provides fast response to
cluster changes.
Network Load Balancing cluster servers emit a heartbeat message to other hosts in the cluster,
and listen for the heartbeat of other hosts. If a server in a cluster fails, the remaining hosts adjust
and redistribute the workload while maintaining continuous service to their clients. Although
existing connections to an offline host are lost, the Internet services nevertheless remain
continuously available. In most cases (for example, with Web servers), client software
automatically retries the failed connections, and the clients experience only a few seconds' delay
in receiving a response.
To further assist in managing session state, Network Load Balancing provides an optional client
affinity setting that directs all client requests from a TCP/IP class C address range to a single
cluster host. With this feature, clients that use multiple proxy servers can have their TCP
connections directed to the same cluster host. The use of multiple proxy servers at the client's
site causes requests from a single client to appear to originate from different systems. Assuming
that all of the client's proxy servers are located within the same 254-host class C address range,
Network Load Balancing ensures that the same host handles client sessions with minimum
impact on load distribution among the cluster hosts. Some very large client sites may use
multiple proxy servers that span class C address spaces.
In addition to session state, server applications often maintain persistent, server-based state
information that is updated by client transactions, such as merchandise inventory at an e-
commerce site. Network Load Balancing should not be used to directly scale applications,
such as Microsoft SQL Server (other than for read-only database access), that independently
update inter-client state because updates made on one cluster host will not be visible to other
cluster hosts. To benefit from Network Load Balancing, applications must be designed to permit
multiple instances to simultaneously access a shared database server that synchronizes updates.
For example, Web servers with Active Server Pages should have their client updates pushed to
a shared back-end database server.
WSUS
How it works
At least one upstream WSUS server connects to Microsoft Update to get available updates and
update information, while other downstream servers get their updates from the upstream
server.
Administrators can choose which updates are downloaded to a WSUS server during
synchronization, based on the following criteria:
Product or product family (for example, Microsoft Windows Server 2003 or
Microsoft Office)
Update classification (for example, critical updates, and drivers)
Language (for example, English and Japanese only)
In addition, administrators can specify a schedule for synchronization to initiate automatically.
An administrator must approve every automated action to be carried out for the update.
Approval actions include the following:
Approve
Remove (this action is possible only if the update supports uninstall)
Decline
In addition, the administrator can enforce a deadline: a specific date and time to install or
remove (uninstall) updates. The administrator can force an immediate download by setting a
deadline for a time in the past.
WSUS 3.0 can be configured to send e-mail notification of new updates and status reports.
Specified recipients can receive update notifications as they arrive on the WSUS server. Status
reports can be sent at specified times and intervals.
WSUS 3.0 now automatically scans updates to determine the computers on which they should
be installed. Before actually planning and deploying the update for installation, the
administrator can analyze the update’s impact by means of a status report that can be generated
directly from the update view for a single update, a subset of updates, or all updates.
Targeting enables administrators to deploy updates to specific computers and groups of
computers. Targeting can be configured either on the WSUS server directly, on the WSUS server
by using Group Policy in an Active Directory network environment, or on the client computer
by editing registry settings.
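Where Group Policy is not available, client-side targeting can be sketched with registry settings
such as the following. The WSUS server URL and the target group name are placeholders invented
for this example.
    rem Point the client at the intranet WSUS server (URL is a placeholder)
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer /t REG_SZ /d "http://wsusserver" /f
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUStatusServer /t REG_SZ /d "http://wsusserver" /f
    rem Enable client-side targeting and name the target computer group
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v TargetGroupEnabled /t REG_DWORD /d 1 /f
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v TargetGroup /t REG_SZ /d "Branch-Servers" /f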
The WSUS database stores update information, event information about update actions on
client computers, and WSUS server settings. Administrators have the following options for the
WSUS 3.0 database:
The Windows Internal Database that WSUS can install during setup on Windows
Server 2003.
An existing Microsoft SQL Server™ 2005 Service Pack 1 database.
WSUS enables administrators to create an update management infrastructure consisting of a
hierarchy of WSUS servers. WSUS servers can be scaled out to handle any number of clients.
With replica synchronization, the administrator of the central WSUS server can create updates,
target groups, and approvals that are automatically propagated to WSUS servers designated as
replica servers. This means that branch office clients can get centrally approved updates from a
local server without the need for a local WSUS administrator. Also, offices with a low-
bandwidth link to the central server pose less of a problem, because the branch WSUS server
connects only to the central WSUS server. Update status reports can be generated for all the
clients of a replica server.
WSUS 3.0 now allows administrators to manage a WSUS server hierarchy from a single WSUS
console. The WSUS administration snap-in to the Microsoft Management Console can be
installed on any computer in the network.
Using WSUS reports, administrators can monitor the following activity (all reports are in a
printable format and can be exported to Excel spreadsheets or Adobe .pdf files):
Update status: Administrators can monitor the level of update compliance for
their client computers on an ongoing basis using Update Status reports, which can
provide status for update approval and deployment per update, per computer, and per
computer group, based on all events that are sent from the client computer.
Computer status: Administrators can assess the status of updates on client
computers. For example, they can request a summary of updates that have been
installed or are needed for a particular computer.
Computer compliance status: Administrators can view or print a summary of
compliance information for a specific computer, including basic software and hardware
information, WSUS activity, and update status.
Update compliance status: Administrators can view or print a summary of
compliance information for a specific update, including the update properties and
cumulative status for each computer group.
Synchronization (or download) status: Administrators can monitor
synchronization activity and status for a given time period, and view the latest updates
that have been downloaded.
WSUS configuration settings: Administrators can see a summary of options they
have specified for their WSUS implementation.
Administrators have the flexibility of configuring computers to get updates directly from
Microsoft Update, from an intranet WSUS server that distributes updates internally, or from a
combination of both, depending on the network configuration.
Administrators can configure a WSUS server to use a custom port for connecting to the intranet
or Internet, if appropriate. (The default port used by a WSUS server is port 80.) It is also possible
to connect via SSL, in which case the default port is 443.
Client-side features
In an Active Directory service environment, administrators can configure the behavior of
Automatic Updates by using Group Policy. In other cases, administrators can remotely
configure Automatic Updates using registry keys through the use of a logon script or similar
mechanism (a registry sketch follows the list below).
Administrator capabilities for configuring client computers include the
following:
Configuring notification and scheduling options for users through Group Policy.
Configuring how often the client computer checks the update source (either
Microsoft Update or another WSUS server) for new updates.
Configuring Automatic Updates to install updates that do not require reboots or
service interruptions as soon as it finds them and not to wait until the scheduled
automatic installation time.
Managing client computers through the Component Object Model (COM)–based
API. An SDK is available.
Self-updating for client computers
WSUS client computers can detect from the WSUS server if a newer version of
Automatic Updates is available, and then upgrade their Automatic Updates service
automatically.
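The registry sketch below shows the kind of Automatic Updates settings these capabilities map to
when Group Policy is not used. The values chosen (install every day at 3:00 A.M.) are only an
example.
    rem Tell Automatic Updates to use the WSUS server configured under WindowsUpdate
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v UseWUServer /t REG_DWORD /d 1 /f
    rem 4 = auto download and schedule the install; day 0 = every day; hour 3 = 3:00 A.M.
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v AUOptions /t REG_DWORD /d 4 /f
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v ScheduledInstallDay /t REG_DWORD /d 0 /f
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v ScheduledInstallTime /t REG_DWORD /d 3 /f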
Administrators can deploy multiple servers that are configured so that each server is managed
independently and each server synchronizes its content from Microsoft Update, as shown in the
following figure.
The deployment method in this scenario would be appropriate for situations in which different
local area network (LAN) or wide area network (WAN) segments are managed as separate
entities (for example, a branch office). It would also be appropriate when one server running
WSUS is configured to deploy updates only to client computers running a certain operating
system (such as Windows 2000), while another server is configured to deploy updates only to
client computers running another operating system (such as Windows XP).
Administrators can deploy multiple servers running WSUS that synchronize all content within
their organization’s intranet. In the following figure, only one server is exposed to the Internet.
In this configuration, this is the only server that downloads updates from Microsoft Update.
This server is set up as the upstream server—the source to which the downstream server
synchronizes. When applicable, servers can be located throughout a geographically dispersed
network to provide the best connectivity to all client computers.
If corporate policy or other conditions limit computer access to the Internet, administrators can
set up an internal server running WSUS, as illustrated in the following figure. In this example, a
server is created that is connected to the Internet but is isolated from the intranet. After
downloading, testing, and approving the updates on this server, an administrator would then
export the update metadata and content to the appropriate media; then, from the media, the
administrator would import the update metadata and content to servers running WSUS within
the intranet. Although the following figure illustrates this model in its simplest form, it could be
scaled to a deployment of any size.
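The export and import steps are performed with wsusutil.exe from the WSUS Tools folder. The file
names below are placeholders, and the package format expected can vary by WSUS version, so check
the wsusutil help output first; update content files (the WsusContent folder) are copied
separately to the media.
    rem On the Internet-connected server: export update metadata to a package file
    wsusutil export export.cab export.log
    rem Copy the package and the update content folder to the disconnected server, then:
    wsusutil import export.cab import.log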
More Information
Windows Server Update Services site: (http://go.microsoft.com/fwlink/?LinkId=71198)
Step-by-Step Guide to Getting Started: (http://go.microsoft.com/fwlink/?LinkID=71190)
Readme for Server Update Services: (http://go.microsoft.com/fwlink/?LinkId=71220)