ADMX files, where to put them, and you – take 2

4 05 2012

A few years ago, I wrote a blog post on the storage location of ADMX files. These files are crucial to Group Policy: they define the settings you see in the Group Policy Editor and, by extension, describe the registry settings which need to be managed on each client workstation to which a policy is applied.

(Contrary to popular belief, the Group Policy Engine on a client does *not* need to refer to these files to actually apply Group Policy. The Group Policy Editor parses the files and stores the specific registry modifications in the appropriate location in the SYSVOL folder structure. The editor does, however, require access to all the proper ADMX files to allow an administrator to make policy changes.)

The ADMX format was introduced with Windows Vista and Windows Server 2008 and is XML-based, unlike the previous ADM syntax used up to Windows Server 2003 – a custom format which proved challenging at times.

In my earlier post, I specified that the best location to store these files is %systemroot%\PolicyDefinitions on each of your DCs. This was in response to a specific problem I had at a customer with a new, single, standalone Domain Controller.

However, on much larger networks, this advice is not something I would endorse. If you store the ADMX files in the PolicyDefinitions folder on each DC, they will only be available to the Group Policy Editor on that particular Domain Controller. Open the Group Policy Management Console from a workstation, another DC or a member server, and many settings will have no policy definition, so you will be unable to manage them. With products like Server Core (a particular focus of Windows Server 8 Beta), managing Group Policy from the DC’s desktop is no longer a recommended or particularly routine operation – delegating control over Group Policy and making the changes from a workstation is the better choice. So, we need a better way of sharing the ADMX files across the entire network, ensuring they roam to any machine where policy may be set.

Fortunately, Microsoft already have a solution: the Central Store. Essentially, this is a PolicyDefinitions folder within the SYSVOL folder hierarchy you already know about. Place the ADMX files in this directory and they are replicated to every DC in the domain; by extension, the ADMX-aware Group Policy Management Console in Windows Vista, Windows 7, Windows Server 2008 and R2 checks this folder as an additional source of ADMX files and will present those settings when you edit your policies.

By default, the folder is not created. Whether you have a single DC or several thousand, I would strongly recommend you create a Central Store and start using it for all your ADMX file storage. It really does work well.
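
Creating the store is nothing more than a copy of files you already have. As a minimal sketch – example.com here is a placeholder for your own domain’s DNS name – run the following from a machine holding your most recent ADMX files:

robocopy %systemroot%\PolicyDefinitions \\example.com\SYSVOL\example.com\Policies\PolicyDefinitions /E

The /E switch brings the language subfolders (en-US and so on, containing the .adml display files) along with the .admx files themselves; the Group Policy tools need both.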

More information and detailed procedures are available from Microsoft Support.





Demystifying the Active Directory FSMO roles

3 04 2010

If you’ve spent any time administering Active Directory, you’ve probably come across the concept of Flexible Single Master Operations (FSMO) roles. Their introduction is arguably one of the most important but misunderstood changes to Active Directory in the last ten years.

Take a trip down memory lane

In the days of Windows NT, one may recall the Primary Domain Controller (PDC) and Backup Domain Controller (BDC) concept. The directory was structured such that every DC, whether a PDC or a BDC, had a copy of the directory database, but only the PDC could make changes to that database. The model was inefficient, negatively impacted growth and desperately needed improving if the product had any chance of surviving.

Enter Windows 2000. The Directory Service went through one of its largest scale rebuilds to date. Replication and management were significantly improved and the concept of a multi-master directory was introduced. Although this design has been tweaked over the years, fundamentally it has remained the same through the versions – because it works. Any DC anywhere in the domain can execute virtually any update to the directory. This scales beautifully, even on large, geographically dispersed networks with many thousands of users.

However, notice I said virtually any update. Since a change can take effect at any DC, there is the possibility that conflicting changes will be made in two locations concurrently – or before replication can occur. Active Directory must ensure these situations are accounted for. In most cases, it applies its complex multi-master conflict resolution policy, which essentially says the last change wins. However, there are several operations which simply must not be allowed to conflict; these are assigned to one of the five FSMO roles, which are in turn delegated to one or more Domain Controllers.

What are the FSMO roles?

Five roles are present in the directory, residing on DCs nominated specifically by the administrator to perform these tasks. All the roles are very important, and each role holder constitutes a single point of failure in any Active Directory enterprise. If you have a more complex topology with more than one domain, some roles are domain-specific, so you can expect an instance of those roles in every domain in the enterprise.

  • The Domain Naming Master exists once per forest – in the forest root domain – and is rarely used. It is responsible for processing the addition of new child domains, application partitions and external cross-references to the enterprise. Since the name of a child domain or application partition cannot be duplicated (it would conflict in DNS, let alone send Active Directory around the twist), the DC holding this role is the only DC with the ability to process all additions of this kind in the forest.
  • Infrastructure Master: If a user from one domain in a forest is added as a member of a compatible group in another domain, the DCs in the group’s domain must hold some information about that user in their local databases in order to update the member attribute of the group. To do this, each DC adds a special record to its database called a phantom, which contains only the foreign user’s security identifier (SID), globally unique identifier (GUID) and distinguished name (DN). Like all objects in the database, this record is given a distinguished name tag, or DNT – an internal reference used solely in the low-level Active Directory database layer. The directory service can then add that user as a member of the group by referring to the phantom’s DNT, just as it would refer to the user’s own DNT when adding a user from the group’s own domain. You might think of this as using a primary key in a relational database to refer to objects across tables – though only loosely, as Active Directory’s database is by no means any sort of RDBMS.

    That’s very clever, but what if something about the source user changes in their original domain? If the user is renamed, moved or deleted, the phantoms in the group domain’s DC databases would lose their referential integrity with the source domain. This is the situation the Infrastructure Master exists to avoid. On a periodic basis (by default, every two days), the Infrastructure Master – an FSMO role present in every domain – compares its local database with a Global Catalog (GC) server to determine whether any changes have been made to the objects its phantoms represent. A GC contains a partial replica of every object in the forest, so replication means any GC already knows about the updated data. Each phantom is then updated with the new values, or deleted from the domain’s database if the object has been removed from its source domain.

    In a multi-domain forest, you must either locate this role on a Domain Controller which is not a Global Catalog or, if you must place it on a GC, ensure all DCs in that particular domain are GCs. A GC never creates phantoms because it already knows about users from other domains. If the Infrastructure Master is a GC, there will never be any phantoms in its local database to compare with the Global Catalog data, so no updates will be made, while other non-GC DCs in the domain gradually become outdated. If all DCs in the domain are GCs, or you only have a single-domain forest, every DC knows enough about each security principal that it never needs to create a phantom, and this role is essentially redundant. (A quick way to check your placement appears below.)

  • Schema Master: As the name suggests, this role is the Master of the Schema, the information which contains the formal definitions of how Active Directory stores objects, what attributes are available on those objects and so on. This role exists once per forest, on a DC in the forest root domain. Any updates to the Schema must be tightly controlled, so one DC delegated as the Schema Master performs all such changes to the database. Schema updates are then replicated to other DCs on the network by standard Active Directory replication.

So far, three of the five roles have been covered – those I would consider the least critical FSMO roles in the forest. If you lose the DC holding one or more of these roles, it’s no big deal: it may prevent a network administrator taking an action, but it will not impact the usability of the network. Losing the Domain Naming Master or Schema Master would create problems with creating child domains or running schema updates, but these operations occur very rarely, and checking that the relevant Operations Master DC is up should be part of the planned engineering works anyway. Similarly, losing the Infrastructure Master may cause integrity issues in the database, but given that it only runs its scan every two days in the first place, a day or two of outage will not generally cause an issue.
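
Incidentally, you can verify the Infrastructure Master placement rule described above from any command prompt with the directory tools installed. A quick sketch, with example.com standing in for your own domain:

dsquery server -domain example.com -hasfsmo infr
dsquery server -domain example.com -isgc

The first command returns the DC holding the Infrastructure Master; the second lists the Global Catalogs in the domain. If the role holder appears in both, check whether every DC in that domain is a GC.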

  • RID Master: This role is one of the two which are important to the daily operation of Active Directory. Under the glossy GUI of Windows, security principals are identified and differentiated by two values – a Security Identifier (SID) and a Globally Unique Identifier (GUID). A SID is an alphanumeric string which is unique throughout a forest. The SID is the actual value used internally by Windows to identify users and grant access to resources using Discretionary Access Control Lists (DACLs) – for example, via the ‘Security’ tab on a file or directory. Have you ever deleted a user, recreated her, then wondered why she cannot access the same files and folders, despite having the same username? The new account has a new SID and is therefore considered an entirely different security principal by the system. Contrary to popular belief, the username, distinguished name and full name of a user are not internal tracking mechanisms within Windows, as all these values can change.

    The standard make-up of a SID might be as follows (this SID is purely random): S-1-5-21-789336058-1123561945-725345543-10823. The nature and formation of a SID is beyond the scope of this article, but it is the very last component (in this instance, 10823) we are interested in. This figure is the Relative Identifier (RID), an incremental value which actually makes SIDs unique within a domain, ensuring no two security principals conflict in the database. When a security principal (user, computer, group etc.) is created, the domain SID (in this instance S-1-5-21-789336058-1123561945-725345543) has the next available RID appended to the end.

    Each Domain Controller is initially allocated a pool of 500 RIDs, which are used up as security principals are created. Allocating these pools is the task delegated, via the RID Master FSMO role, to one DC in each domain. Centralising the operation in an FSMO role ensures no DC obtains a duplicate RID pool, which would eventually lead to duplicate SID values and a major problem for SID uniqueness within the domain.
  • PDC Emulator: This is the most complicated and least understood role, for it runs a diverse range of critical tasks. It is a domain-specific role, so it exists in the forest root domain and every child domain. It was originally conceived for backwards compatibility with legacy systems, such as Windows NT BDCs; however, the role is also responsible for keeping the domain’s time in sync, given that the DC holding this role in the forest root domain is the most authoritative time source in the forest. Password changes and account lockouts are processed immediately at the PDC Emulator for a domain, to ensure such changes do not prevent a user logging on as a result of multi-master replication delays, for example across Active Directory sites.

    It should be noted that the PDC Emulator does not act in the same fashion as a PDC on a Windows NT network. Cast your eye back to the top of this article and the section regarding a multi-master directory – for multi-master-aware applications, most updates can be made at any DC on the network. However, if an application (or operating system) is not multi-master aware, the PDC Emulator acts as if it were the PDC on a Windows NT network; one of these older applications would most probably single out the PDC Emulator and write all its changes there.

The latter two roles are much more crucial to the daily operation of the network and could very quickly become a limiting factor in its growth, usability or even the logon process if the DC(s) holding them are offline for any period of time. If the RID Master is lost, the impact will only be felt once a DC depletes its pool of RIDs; on busy networks, this could happen in a matter of days through the creation of new security principals. Loss of the PDC Emulator, however, could directly affect your users – you’d better have a substantial help desk ready for a spike in call volume if this DC is down for an extended period of time. For example, with the most authoritative source of time unavailable, time skew could eventually develop between DCs and computers in the enterprise and/or domain, lending itself to Kerberos authentication errors and, ultimately, failed logons. While taking this server offline would not be an immediate issue (provided you do not have any legacy applications), this is the role I would be most concerned about in the event of a DC failure.
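
If you are ever unsure which DCs currently hold the five roles, one command – run from a DC or any machine with the administration tools installed – lists them all:

netdom query fsmo

As a rough check on RID allocation (a sketch, not a substitute for proper monitoring), dcdiag can also report on the RID Master and the local DC’s RID pool:

dcdiag /test:ridmanager /v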

Conclusion

If you are still reading, well done! This article has covered several aspects of Active Directory in detail, including low-level database processes which never surface in the GUI. FSMO roles are a crucial component of your deployment – an understanding of the underpinning concepts will help with their placement, deployment and high-availability planning within your enterprise.





Why you shouldn’t put an Exchange Server in the DMZ

3 08 2009

The official Microsoft documentation for Exchange Server is contradictory on the subject of deploying an Exchange Server into your perimeter network (DMZ), and it is often interpreted as suggesting that placing an Exchange Server in this zone is a good idea.

This is a myth.

As a standard rule of managing your network, you should never place a domain-joined machine in the DMZ. Exchange 2000, 2003 and 2007 (with the exception of the Edge Transport role – see below) must all be installed on machines joined to the domain; place them in the DMZ and you break that first rule of firewalls and Active Directory.

So why is this a bad idea?

An Exchange Server needs Active Directory to function because most of its configuration information is stored in the directory service. This is the reason why it must be deployed on a domain-joined server.

If you attempt to move an Exchange Server to the DMZ, you will quickly find that Exchange will break. This is because it loses the ability to find and communicate with the Domain Controllers on the private network. In situations like this, you would have to do one of two things:

  • Deploy an additional Domain Controller into the DMZ
  • Allow the Exchange Server access to the DCs on the private network

Completing either of the above tasks requires you to open ports between the DMZ and private network. The list of ports is extensive and includes sensitive services such as DNS, LDAP and NetBIOS. I heard a fellow Exchange Server MVP state the other day while referring to this list of ports: “open these ports and your firewall rules will look like Swiss Cheese”.

The bottom line is that this defeats the principle of a DMZ. A DMZ is intended as a ‘safe’ location for machines which are not joined to the domain; you might put public web servers or public nameservers there, for example. In the DMZ, they are protected from the Internet, but anyone maliciously gaining access to those servers cannot cross the firewall into your private network. Open the Active Directory ports described above and place a domain-joined machine in this insecure zone, and any hacker in control of a compromised machine in the DMZ has a much easier route into your Active Directory environment – perhaps bringing it to its knees.

Every Exchange MVP I know considers this to be a very, very bad idea. They would not configure an Exchange Server in this way and neither would I.

Any Exchange Server you deploy should always be on the private network. Located there, you can ensure it has access to the Domain Controllers without compromising network security. From the outside, you only ever need ports 25 (SMTP) and 443 (HTTPS) open to allow Internet email to flow and to give users access to Outlook Web Access and Exchange ActiveSync.
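
It is easy to sanity-check this from an outside connection. A small sketch, with mail.example.com standing in for your externally published name:

telnet mail.example.com 25

A banner from the SMTP service confirms port 25 is reachable, and browsing to Outlook Web Access over HTTPS covers 443. Connections to anything else should be dropped by the firewall.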

But what about Exchange 2000/2003 Front End Servers?

What about them? Again, it is a misconception – probably brought about by ambiguous documentation – that leads people to believe these servers are there for security reasons. They are not. Legacy Front-End Servers are designed for organisations with multiple mailbox servers. A front-end acts as a central connection point for access to OWA, OMA or ActiveSync under a single, common URL – it does not provide security.

If you are deploying a front-end server because you believe it will secure your Exchange environment, think again. Install Vamsoft ORF on a Virtual Machine or use an external spam filtering service as an alternative.

Exchange 2007 Edge Transport

With Exchange 2007, Microsoft have recognised this problem by adding the Edge Transport server role. This is the first time an Exchange Server role has been specifically designed to be located on the perimeter network – and the first time such a role exists for security reasons. The Edge Transport machine is designed to be in a workgroup – not a member of the domain – so it does not require sensitive ports to be opened between the DMZ and the private network. It maintains its own copy of the directory data it needs using Active Directory Application Mode (ADAM) on Server 2003 or Active Directory Lightweight Directory Services (AD LDS) on Server 2008.

I personally do not see a requirement for an Edge Transport server in an Exchange deployment, so I never deploy them. They are an unnecessary expenditure. Unlike a 2000/2003 front-end, they only process SMTP email traffic. Requests for OWA or Exchange ActiveSync still need to be made directly to the Client Access Servers (CAS), which are domain members and therefore still need to be located on the private network.

The minimal security advantage Edge Transport servers provide can easily be achieved directly on the Hub Transport servers – or by deploying a much cheaper Vamsoft ORF virtual server between the Internet and the Hub Transport server.

Conclusion

You should now have a better understanding of why an Exchange Server should not be deployed into the DMZ. I hope this prompts you to review your Exchange configuration and make appropriate changes to further improve your network security.





Configuring Windows Time for Active Directory

1 08 2009

I’ve had a few requests recently from people who were confused regarding how to configure time in their Active Directory domains – and some were playing with settings on servers and workstations to try to make things work. In this article, I’ll briefly explain how the time service works in Active Directory networks and general information on how you should go about configuring it.

For anyone not aware, all machines in an Active Directory environment automatically find a time server to sync with. Workstations and member servers use their authenticating Domain Controller, and the DCs sync with the server holding the PDC Emulator FSMO role for their domain. In a multi-domain forest, the PDC Emulator in each child domain synchronises with a DC or the PDC Emulator in the forest root domain. To keep time reliable across the forest, only the PDC Emulator in the forest root domain should ever sync with an external time source – this ensures a single source of time is used across the forest. The Windows Time Service blog has a great post entitled Keeping the domain on time which explains this in more detail, including a great graphic.
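
You can watch this hierarchy in action with a single command, which queries the DCs in the domain and reports each one’s time offset:

w32tm /monitor

In a multi-domain forest, w32tm /monitor /domain:child.example.com (the domain name being a placeholder) targets a specific domain.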

The Windows Time Settings

You can find the settings for the Time Service in the registry, under HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters. The most important value to note is the ‘Type’ string – on any domain machine other than the PDC Emulator in the forest root, this should be set to NT5DS. That name isn’t particularly descriptive; when it is set, the machine finds its time server via the Active Directory domain hierarchy.
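
You can check the value quickly from a command prompt:

reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v Type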

If it isn’t set to that, you should think about resetting the time service on that machine. To do that, run a Command Prompt as an Administrator and execute the following commands:

net stop w32time
w32tm /unregister
w32tm /register
net start w32time

Check the registry again, and the Type should now be in domain sync mode (NT5DS).

Sometimes, you may find an NtpServer value in the registry despite Type being set to NT5DS. NT5DS doesn’t use an NTP server, so what gives? This entry is simply left over from before the machine was joined to the domain, when it was in a workgroup. Provided the Type value is set correctly, the NtpServer entry can be completely ignored or even deleted. Running the above commands on a domain-joined machine will remove it automatically.
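
If you would rather tidy it up by hand, one command removes the leftover value (only the value – the key and its other settings remain):

reg delete HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v NtpServer /f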

The Group Policy Settings

There are also a number of Group Policy settings for the time service. These can be found in Computer Configuration\Administrative Templates\System\Windows Time Service.

I do not encourage you to change these settings; if you have done so, you probably want to revert the policies to ‘Not Configured’. There are occasional reasons to make the odd change, but in general none are required, and you can actually break time sync by making them.

If you are interested in reading further about what they do, the Windows Time Service blog has another great page going through them: Group Policy Settings Explained.

The Forest Root PDC Emulator Settings

After a bit of a configuration reset, all your DCs, member servers and workstations should now be set to sync from the domain hierarchy. But what about the PDC Emulator in the forest root?

The fact of the matter is the PDCe doesn’t actually need to synchronise with anything. It automatically designates itself the most reliable time server in the domain and it can run quite happily like that, without ever talking to an external time server. My earlier blog post entitled Time: Reliable or accurate? describes why.

However, to have an easy life and keep your users from complaining, it is almost always a good idea to give the forest root PDC Emulator some form of external time sync. There are a number of ways to do this – an external hardware clock which syncs with GPS, for example. However, the most common (and cheapest – free) solution is synchronising with an NTP server on the Internet. I often use the servers closest to me in possibly the largest public time service, the NTP Pool Project (list of time servers). Be aware that if you are bound by SLAs (my company certainly is), the NTP Pool, by its very nature, most probably isn’t the resource for you.

To configure the time sync on the PDCe, you need to execute the following commands. I’d strongly suggest you get a level playing field by resetting the time service using the instructions above before you start.

w32tm /config /manualpeerlist:"uk.pool.ntp.org,0x8 europe.pool.ntp.org,0x8" /syncfromflags:MANUAL /reliable:yes /update

What’s that command doing?

It is a rather hefty command, so you may like to know exactly what it is doing to your server. All the changes take place in the registry, under the key I mentioned above; using the w32tm tool to make them is simply much easier than editing the values by hand.

/config causes the tool to enter configuration mode. There are a number of other modes it supports which you can find by running w32tm /?.

/manualpeerlist allows you to specify the NTP server or servers you wish to synchronise with. Each server’s DNS name or IP address should be followed by a comma and the flag 0x8, which instructs Windows to send its requests to that external server in client mode. If you enter multiple servers – which I suggest – put the list in quotation marks and separate each entry with a space. The value you specify here is written back to the NtpServer value in the time service’s registry key.

/syncfromflags tells the time service where it should sync from. It takes one of two values here – DOMHIER or MANUAL. The former causes the time service to synchronise with the domain hierarchy (setting Type to NT5DS in the registry), whereas the latter tells it to sync with the server(s) you specified in the manual peer list. MANUAL sets Type to NTP.

/reliable sets the server to be a reliable source of time for the domain. Strictly it isn’t required, because the PDC Emulator in the forest root is always the most reliable time server, but I like to include it anyway.

Finally, /update notifies the time service the values have changed, so the new settings are used with immediate effect. If this isn’t included, the registry is updated but the new values will only be used by the time service when its service or the server itself is restarted.

After you’ve run that command, you might want to take a look in the registry to see what changes have been made, and whether they are as you expected.
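
One query shows the whole key:

reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters

You should find Type now set to NTP and NtpServer holding the peer list you supplied.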

Check Time Synchronisation

You may be intrigued to know whether the time sync is working correctly. You can check in one of two ways.

The safest is to wait for a scheduled time sync to take place, or restart the machine. Either will trigger Event ID 35 to be logged in the System log. This event’s description shows the time server the machine is synchronising with. This will be logged on both the PDC Emulator and all DCs, member servers and workstations. You can check for this on member machines to ensure a DC in the domain hierarchy is being found and used correctly – and to ensure your custom NTP servers configured on the PDC Emulator are being used as intended.
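
On Windows Vista and Windows Server 2008 onwards, you can also pull the most recent of these events straight from the command line:

wevtutil qe System /q:"*[System[(EventID=35)]]" /c:3 /f:text /rd:true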

Alternatively, putting your cowboy hat on, you can force a time synchronisation. Set the time a minute or two out from what it should be, then return to the command prompt and run w32tm /resync /rediscover. After a few moments, the above event should be logged, and a healthy time service should cause the time on the system to be set back to normal.

As a note, no time synchronisation will take place if the difference between the current system time and the new time provided by the time server is too great. A minute or two is fine, but I would not set the difference to be any more than that. The system checks this difference at each sync, and will reject the new time provided by the time server if it is too large.
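
Again on Windows Vista and Windows Server 2008 onwards, the service itself will tell you which source it is using and when it last synchronised:

w32tm /query /source
w32tm /query /status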

Conclusion

You should now have an understanding of how the time service works and where it stores its settings in the registry. While time isn’t one of the most fun services an Active Directory administrator will work with, it is important you ensure the forest stays in sync if you want to avoid major problems with time skew, Kerberos and Active Directory in general.