Demystifying the Active Directory FSMO roles

3 04 2010

If you’ve spent any time administering Active Directory, you’ve probably come across the concept of Flexible Single Master Operations (FSMO) roles. Their introduction is arguably one of the most important but misunderstood changes to Active Directory in the last ten years.

Take a trip down memory lane

In the days of Windows NT, one may recall the Primary Domain Controller (PDC) and Backup Domain Controller (BDC) concept. The directory was structured such that every DC, whether a PDC or a BDC, had a copy of the directory database, but only the PDC could make changes to that database. The model was inefficient, negatively impacted growth and desperately needed improving if the product had any chance of surviving.

Enter Windows 2000. The Directory Service went through one of its largest scale rebuilds to date. Replication and management were significantly improved and the concept of a multi-master directory was introduced. Although this design has been tweaked over the years, fundamentally, it has remained the same through the versions – because it works. Any DC anywhere in the domain can execute virtually any update to the directory. This scales beautifully, even on large, geographically dispersed networks with many thousands of users.

However, notice I said virtually any change. Since a change can take effect at any DC, there is the possibility that conflicting changes will be made in two locations concurrently – or before replication can occur. Active Directory must ensure these situations are accounted for. In most cases, it applies its complex multi-master conflict resolution policy, which essentially says the last change wins. However, there are several operations which simply cannot be allowed to conflict; these operations are assigned to the five FSMO roles, each of which is held by exactly one Domain Controller at a time.

What are the FSMO roles?

There are five roles in the directory, each residing on a DC nominated by the Administrator to perform that task. All the roles are important, and each constitutes a single point of failure in an Active Directory enterprise. Two of the roles are forest-wide, while the other three are domain-specific, so in a forest with more than one domain you can expect an instance of the domain-specific roles in every domain in the enterprise.
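
Before we look at each role, it is worth knowing how to find where the roles currently sit. The netdom utility will report all five at once; alternatively, on Server 2008 R2 with the Active Directory module for PowerShell installed, you can query the forest and domain objects. Both are shown below as a sketch:

netdom query fsmo

Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster
Get-ADDomain | Select-Object RIDMaster, PDCEmulator, InfrastructureMaster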

  • The Domain Naming Master exists once per forest – in the forest root domain – and is rarely used. It is responsible for processing the addition of new child domains, application partitions and external cross-references to the enterprise. Since the name of a child domain or application partition cannot be duplicated (it would conflict in DNS, let alone send Active Directory around the twist), the DC holding this role is the only DC with the ability to process all additions of this kind in the forest.
  • Infrastructure Master: If a user from a foreign domain within the same forest is added as a member of a compatible group in another domain, the DCs in the group’s domain must hold some information about that user in their local databases in order to update the member attribute of the group. To do this, each adds a special record to its database called a phantom, which contains only the foreign user’s security identifier (SID), globally unique identifier (GUID) and distinguished name (DN). Like all objects in the database, this record is given a distinguished name tag, or DNT – an internal reference used solely in the low-level Active Directory database layer. The directory service can then add that user as a member of the group by referring to the phantom’s DNT, just as it would refer to a user’s own DNT when adding a user from the group’s own domain. You might think of this like using a primary key in a relational database to refer to objects across tables – though the analogy is loose, as Active Directory’s database is by no means any sort of RDBMS.

    That’s very clever, but what if something about the source user in their original domain changes? If the user is renamed, moved or deleted, the phantom in the group domain DC databases would lose its referential integrity with the source domain. This is a situation the infrastructure master aims to avoid. On a periodic basis (by default, every 2 days), the infrastructure master – an FSMO role present in every domain – compares its local database to a Global Catalog (GC) server to determine whether any changes have been made to the objects the phantoms were created to represent. A GC contains a partial replica of all objects in the forest, so replication means any GC would already know about this updated data. The phantom is then updated with new values or deleted from the domain’s database if the object has been removed from its source domain.

    In a multi-domain forest, you must either locate this role on a Domain Controller which is not a Global Catalog or, if you must locate the role on a GC, ensure all DCs in that particular domain are GCs. A GC will never create phantoms because it already knows about users from other domains. If the infrastructure master is a GC, there will never be any phantoms in its local database to compare with the global catalog data, so no updates will be made, but other non-GC DCs in the domain would gradually become outdated. If all DCs in the domain are GCs, or you only have a single-domain forest, every DC knows enough about the security principal that it does not need to create a phantom, so this role is essentially redundant.

  • Schema Master: As the name suggests, this role is the Master of the Schema – the formal definitions of the object classes Active Directory can store, the attributes available on those objects and so on. The role exists once per forest, on a DC in the forest root domain. Updates to the Schema must be tightly controlled, so the one DC delegated as the Schema Master performs all such changes to the database. Schema updates are then replicated to the other DCs on the network by standard Active Directory replication.

So far, three of the five roles have been covered; they are the ones I would consider the least critical FSMO roles in the forest. If you lose the DC holding one or more of these roles, it’s no big deal — it may prevent a network administrator taking an action, but it will not impact the usability of the network. Losing the Domain Naming Master or Schema Master would create problems with creating child domains or running schema updates, but these operations occur very rarely, and checking the relevant operations master is up should be part of the planned engineering works anyway. Similarly, losing the Infrastructure Master may cause integrity issues in the database, but given that it only runs its scan every two days in the first place, a day or two of outage will not generally cause an issue.

  • RID Master: This role is one of the two which are important to the daily operation of Active Directory. Under the glossy GUI of Windows, security principals are identified and differentiated by two values – a Security Identifier (SID) and a Globally Unique Identifier (GUID). A SID is a structured value, usually written as a string, which is unique throughout a forest. The SID is the actual value used internally by Windows to identify users and grant access to resources via Discretionary Access Control Lists (DACLs) – for example, on the ‘Security’ tab of a file or directory. Have you ever deleted a user, recreated her, then wondered why she cannot access the same files and folders, despite having the same username? The new account has a new SID, and is therefore an entirely different security principal as far as the system is concerned. Contrary to popular belief, the username, distinguished name and full name of a user are not internal tracking mechanisms within Windows, as all of these values can change.

    The standard make-up of a SID might be as follows (this SID is purely random): S-1-5-21-789336058-1123561945-725345543-10823. The nature and formation of a SID is beyond the scope of this article, but it is the very last component (in this instance, 10823) we are interested in. This figure is the Relative Identifier (RID), an incremental value which actually makes SIDs unique within a domain, ensuring no two security principals conflict in the database. When a security principal (user, computer, group etc.) is created, the domain SID (in this instance, S-1-5-21-789336058-1123561945-725345543) has the next available RID appended to the end.

    Each Domain Controller is initially allocated a pool of 500 RIDs, which are consumed as security principals are created. Allocating these pools to DCs is the task delegated to the RID Master, one DC per domain. Placing the operation in an FSMO role ensures no two DCs obtain duplicate RID pools, which would eventually lead to duplicate SID values and a major problem for SID uniqueness within the domain.
  • PDC Emulator: This is the most complicated and least understood role, for it runs a diverse range of critical tasks. It is a domain-specific role, so it exists in the forest root domain and every child domain. It was originally conceived for backwards compatibility with legacy systems, such as Windows NT BDCs. However, the role is also responsible for keeping the domain time in sync: the DC holding this role in the forest root domain is the most authoritative time source in the forest. Password changes and account lockouts are immediately processed at the PDC Emulator for a domain, ensuring such changes do not prevent a user logging on as a result of multi-master replication delays, such as across Active Directory sites.

    It should be noted that the PDC Emulator does not act in the same fashion as a PDC on a Windows NT network. Cast your eye back to the top of this article and note the section regarding a multi-master directory – for multi-master-aware applications, most updates can be made at any DC on the network. However, if an application (or operating system) is not multi-master aware, the PDC Emulator acts as if it were the PDC on a Windows NT network: one of these older applications will typically single out the PDC Emulator and write all its changes there.

The latter two roles are much more crucial to the daily operation of the network and could very quickly become a limiting factor in its growth, usability or even the logon process if the DC(s) holding the roles are offline for any period of time. If the RID Master is lost, the impact will only be felt by the network administrator once a DC depletes its pool of RIDs. On busy networks, this could happen within a matter of days through the creation of new security principals. However, loss of the PDC Emulator could directly affect your users — you’d better have a substantial help desk ready for a spike in call volume if this DC is down for an extended period of time. For example, with the most authoritative source of time unavailable, time skew could eventually develop between DCs and computers in the enterprise and/or domain, leading to Kerberos authentication errors and, ultimately, failed logons. While it would not be an immediate issue to take this server offline (provided you do not have any legacy applications), this is the role I would be most concerned about in the event of a DC failure.
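
When you do need to take a role holder down for maintenance, transferring the role beforehand is straightforward. As a sketch – the DC name below is a placeholder – on Server 2008 R2 with the Active Directory module you could run:

Move-ADDirectoryServerOperationMasterRole -Identity "DC02" -OperationMasterRole PDCEmulator, RIDMaster

On older systems, the ntdsutil tool performs the same transfer. Adding -Force to the command above seizes the role rather than transferring it – something you should only do if the original holder has failed for good.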

Conclusion

If you are still reading, well done! This article covers several aspects of Active Directory in detail, including low-level database processes unseen at the surface – particularly via the GUI. However, FSMO roles are a crucial component of your deployment — having an understanding of the underpinning concepts will help with their placement, deployment and high availability concerns within your enterprise.





Why you shouldn’t use PST files

5 12 2009

They have been around for years, and thousands of Microsoft Outlook users and email administrators out there would be lost without them: Personal Storage Table (PST) files. If you’ve worked with Outlook for any length of time, the name will immediately ring a bell; if you’ve ever administered Outlook, you may already know about the problems associated with this notorious file format.

In any corporate environment – or, for that matter, any environment with an Exchange Server – the use of PST files as a permanent solution to an email administrator’s problems should be banned. Let’s find out why.

Problem 1: File Sizes and Data Security

The number one issue with the PST format prior to Outlook 2003 was that it was ANSI (American National Standards Institute)-based. The ANSI PST format has a maximum size limit of 2GB, and further limitations apply to the number of items which can be stored per folder. Worse, a particularly problematic bug in Outlook allowed data to be written to an ANSI PST past the 2GB limit without warning. This would result in data loss – at least of the data past the 2GB limit, but potentially of everything stored in the file.

To address these concerns, Outlook 2003 and higher introduced a new PST format based on Unicode instead. This format stores up to 20GB of data, but note that upgrading Outlook does not automatically upgrade any existing PST file(s). The upgrade must be completed manually, by creating a new Unicode file and transferring the data across.

Despite the improvements, PST files are still susceptible to corruption – which will result in lost data. Corruption becomes particularly prevalent as files grow larger or as the volume of data moving through the file increases. For most users, the prospect of losing precious or business-critical emails, reminders, tasks and contacts is cause for significant concern. It shouldn’t come as a surprise that you should take a regular backup of your PST file(s), but even this is not completely safe: a PST can sit in a partially corrupted state for weeks or months before you realise you have a problem, by which time the corruption may be in your backups too.

Problem 2: Network Access and Backups

PST files must be stored on a local hard disk; accessing them over a network is not supported by Microsoft. Network instability, loss of connectivity and slow reads and writes from the file server can all cause problems — particularly for fragile PST files, which are so easily corrupted.

This has two implications for system administration:

Firstly, backups – already difficult to maintain because corruption goes undetected – become even more difficult to implement. As PSTs cannot be run from the network, you must configure backups on each machine individually, and must ensure the backup does not run while Outlook has the file open. Backing up the Exchange Server achieves little here, as the data is offloaded into the PST when the user logs in.

Second, your cost of administration increases significantly. Consider a typical organisation, with remote workers and several sites across the country or around the world: moving administration away from the server and towards the client undermines the principle of central administration, requiring more admin time for repetitive tasks on individual PST files. The system can quickly grow beyond your control, becoming increasingly difficult to track and maintain.

Problem 3: File Sharing and Remote Access

PST files do not natively support simultaneous access by multiple users. If you attempt to configure this, the mail file may be corrupted — and since you would need to run the file over the network to share it, problem #2 has already been invoked.

Storing data in PST files has no benefits for remote access either. Exchange’s Outlook Web Access (OWA) (or Outlook Web App, in Exchange 2010) allows users to remotely access their mailboxes, providing a near-Outlook user interface for doing so. Data in a PST file has usually been removed from the mailbox, so it immediately becomes inaccessible to the user remotely.

Problem 4: Inefficient use of resources

You’ve invested in a powerful Exchange Server: it has large, redundant disk arrays and plenty of processing power and RAM; it cost you thousands in hardware and software licensing; and it adds significantly to your energy and data centre cooling bill. If PST files are in use, your server is essentially going to waste – the functionality you are actually using is much the same as a free Linux mail server distribution running on an old workstation serving POP3 clients.

But…

Despite the considerations above, you might still be wondering how to work around those common problems which PST files are oh so convenient for solving.

Use 1: Archiving

This is a misconception, brought about largely by Outlook’s desire to continue annoying its users with AutoArchive prompts. There is no reason whatsoever that mail should be archived to each user’s local PC. Consider the actions you would take to archive files off your file server: where would you put the archived data? On your own PC? On your manager’s? On the CEO’s? You’d do none of those three, as the data would be unlikely to be backed up and you could not assure its security. Instead, you’d find some space on a share on your archive server – or create a LUN using spare space on one of your SANs.

The same applies to email. Off-loading email from your Exchange Server to user PCs carries significant risks. Instead, you should use an enterprise mail archiving solution. The product I usually recommend is Symantec Enterprise Vault, although there are many others. The main benefits? Data is still stored centrally, within the scope of your retention policies and backup process, and end users can still view archived emails using a handy web interface (yes, a web interface – so the archive remains accessible remotely).

Okay, but what about when disk space on my Exchange Server runs low or I hit the store size limit?

UPGRADE THE SERVER! Exchange 2007 and 2010 do not impose a hard limit on the mail stores, and you shouldn’t be trying to run a mail server with little disk space or database space remaining. Archiving to PST is a quick solution, but one which won’t work in the long run.

With the soon-to-be-released Exchange 2010, significant changes have been made, one of which is the addition of archiving support. Each user can be given a separate ‘archive’ mailbox; it is attached to their main mailbox, but allows data to be archived for long-term storage. The settings governing when and how mail is moved to the archive are controlled by retention policies, giving the administrator greater control over retention. Again, the archive store is available remotely via Exchange 2010’s Outlook Web App.
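
If you do head to Exchange 2010, enabling the archive for a user is a one-liner in the Management Shell. A minimal sketch, with a placeholder mailbox name:

Enable-Mailbox -Identity "jdoe" -Archive

Retention policy tags then govern what moves into the archive and when.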

Use 2: On the road

For users on the road, there is no need to store their mail in a PST file. Cached Exchange Mode is available in Exchange 2003/Outlook 2003 and higher, allowing users to work offline with a cached copy of their mailbox. When they reconnect to the network, the changes are seamlessly synchronised back to the server.

Use 3: Exmerge/Export-Mailbox

This is just about the only use of PST files which I can agree to — and I’ll admit, I’ve used this approach myself. If you migrate to a new mail system or rebuild your Exchange system, sometimes you cannot avoid using exmerge (or Exchange 2007’s export-mailbox management shell cmdlet) to take handy copies of the mailboxes – which can later be re-imported to the new system. For moving mailboxes between servers, you would use the Move Mailbox wizard – but for large scale rebuilds, exmerge is sometimes your friend.

Be cautious though; Exmerge uses the ANSI PST format, so you will need to meticulously plan your export and import procedure for larger mailboxes.
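
If you do go down this route on Exchange 2007 SP1, the export side might look like the following sketch (the database name and export path are placeholders, and Export-Mailbox must be run from a 32-bit machine with the Exchange management tools and Outlook installed):

Get-Mailbox -Database "Mailbox Database" | Export-Mailbox -PSTFolderPath C:\PSTExports

This writes one PST per mailbox into the folder, ready to be re-imported into the new system.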

Use 4: Home Users

These are the people to whom the PST is most applicable. If you connect via Outlook to a Post Office Protocol (POP) host to download your email, that email will be stored in a PST file. The fact you don’t have an Exchange Server doesn’t change any of the points above, though: that PST is still susceptible to corruption, and since downloaded mail is often deleted off the server, corruption could lead to data loss.

For this issue, you really have two solutions. The POP3 account in Outlook can be configured to leave email on the server. This acts as a backup; if your PST file becomes corrupted, the ISP still has a copy of your messages, so they can be downloaded again. To configure, open the Tools > Account Settings dialog in Outlook. Select your POP3 account, choose Properties, press More Settings, then switch to the Advanced tab. Under the Delivery section at the bottom of the window you should check the “Leave a copy of messages on the server” checkbox. If you want a backup of all your mail, don’t enable the option to remove it from the server after a certain time period.

The disadvantage to the POP3 solution becomes apparent if you move to another computer or access your mailbox via your ISP’s webmail interface. The message state information (tracking of read/unread or whether the message has been replied to or forwarded) is not transferred back to the ISP, so all the mail you thought you had read and handled will still be marked unread on the ISP’s server.

My preferred solution, and the one I use regularly, is an Internet Message Access Protocol (IMAP) account. IMAP is another protocol for accessing email, standing alongside POP. Using IMAP, however, you replicate a client-server topology very similar to connecting to an Exchange mailbox with Outlook in Cached Exchange Mode: email generally remains stored in your mailbox at the ISP until you specifically delete it. Nevertheless, you can’t get away from PST files completely; they are still there when you use an IMAP account, as Outlook uses one to cache the data so the account can be used in offline mode. However, as the PST isn’t the only location where your data is stored, corruption will not lead to data loss.

It should be noted that both the POP solution for leaving data on the server and the IMAP solution have a drawback: items in your Calendar, Contacts or Tasks folders will not be stored on the server. IMAP does not support these special folders, and they are not replicated back with a POP account, so you will still be using a PST file to some extent. Unless you move entirely into the cloud (using web services for email, calendar and contacts) or purchase your own Exchange Server, you won’t easily get away from this.

Conclusion

I’ve covered a fair bit of information regarding PST files here. Hopefully, my points detailing why the use of PSTs is so impractical will now encourage you to reconsider your PST usage, archiving practices and retention policies.

With all your user mail stored safely on the Exchange Server rather than on local PCs, assistants can become delegates for their managers, looking after their mailboxes; the administrator can rest assured that all data is centrally stored and backed up; and you can turn off Outlook AutoArchive, relieving end users of that annoying prompt every couple of weeks.

This article was originally published at Experts Exchange.





Why you shouldn’t put an Exchange Server in the DMZ

3 08 2009

The official Microsoft documentation for Exchange Server is contradictory on the subject of deploying an Exchange Server into your perimeter network (DMZ). It is often interpreted as suggesting that placing an Exchange Server in this zone is a good idea.

This is a myth.

As a standard rule of managing your network, you should never place any domain-joined machine into the DMZ. Exchange 2000, 2003 and 2007 (with the exception of the Edge Transport role – see below) must all be installed on machines joined to the domain; place them in the DMZ and you break that first rule of firewalls and Active Directory.

So why is this a bad idea?

An Exchange Server needs Active Directory to function because most of its configuration information is stored in the directory service. This is the reason why it must be deployed on a domain-joined server.

If you attempt to move an Exchange Server to the DMZ, you will quickly find that Exchange will break. This is because it loses the ability to find and communicate with the Domain Controllers on the private network. In situations like this, you would have to do one of two things:

  • Deploy an additional Domain Controller into the DMZ
  • Allow the Exchange Server access to the DCs on the private network

Completing either of the above tasks requires you to open ports between the DMZ and private network. The list of ports is extensive and includes sensitive services such as DNS, LDAP and NetBIOS. I heard a fellow Exchange Server MVP state the other day while referring to this list of ports: “open these ports and your firewall rules will look like Swiss Cheese”.

The bottom line is this defeats the principle of a DMZ. A DMZ is intended as a ‘safe’ location for machines which are not joined to the domain; you might put public web servers or public nameservers there, for example. In the DMZ, they are protected from the Internet, but anyone maliciously gaining access to those servers cannot cross the firewall into your private network. By opening the Active Directory ports I describe above and by placing a domain-joined machine in this insecure zone, any hacker in control of a compromised machine in the DMZ has a much easier route to access your Active Directory environment, perhaps bringing it to its knees.

Every Exchange MVP I know considers this to be a very, very bad idea. They would not configure an Exchange Server in this way and neither would I.

Any Exchange Server you deploy should always be on the private network. Located there, you can ensure it has access to the Domain Controllers without compromising network security. From the outside, you only ever need ports 25 and 443 open: port 25 allows inbound email to flow, and 443 lets users access Outlook Web Access and Exchange ActiveSync.

But what about Exchange 2000/2003 Front End Servers?

What about them? Again, it is a misconception – probably brought about by ambiguous documentation – that these servers exist for security reasons. They do not. Legacy front-end servers are designed for organisations with multiple mailbox servers: a front-end acts as a central connection point for access to OWA, OMA or ActiveSync under a single, common URL. It does not provide security.

If you are deploying a front-end server because you believe it will secure your Exchange environment, think again. Install Vamsoft ORF on a Virtual Machine or use an external spam filtering service as an alternative.

Exchange 2007 Edge Transport

With Exchange 2007, Microsoft have recognised this problem by adding the Edge Transport server role. This is the first time an Exchange Server role has been specifically designed to be located on the perimeter network – and the first time such a role exists for security reasons. The Edge Transport machine is designed to be in a workgroup – not a member of the domain – so it does not require sensitive ports to be opened between the DMZ and the private network. It maintains its own local directory – a subset of Active Directory data, synchronised one-way via EdgeSync – using Active Directory Application Mode (ADAM) on Server 2003 or Active Directory Lightweight Directory Services (AD LDS) on Server 2008.

I personally do not see a requirement for an Edge Transport server in an Exchange deployment, so I never deploy them. They are an unnecessary expenditure. Unlike a 2000/2003 front-end, they only process SMTP email traffic. Requests for OWA or Exchange ActiveSync still need to be made directly to the Client Access Servers (CAS), which are domain members and therefore still need to be located on the private network.

The minimal security advantage Edge Transport servers provide can easily be achieved directly on the Hub Transport servers – or by deploying a much cheaper Vamsoft ORF virtual server between the Internet and the Hub Transport server.

Conclusion

You should now have a better understanding of why an Exchange Server should not be deployed into the DMZ. I hope this prompts you to review your Exchange configuration and make appropriate changes to further improve your network security.





Configuring Windows Time for Active Directory

1 08 2009

I’ve had a few requests recently from people who were confused about how to configure time in their Active Directory domains – some were playing with settings on servers and workstations to try to make things work. In this article, I’ll briefly explain how the time service works in Active Directory networks and give some general guidance on how you should go about configuring it.

For anyone not aware, all machines in an Active Directory environment automatically find a time server to sync with. Workstations and member servers use their authenticating Domain Controller, and the DCs sync with the server holding the PDC Emulator FSMO role. In a multi-domain forest, the PDC Emulator in each child domain synchronises with a DC or the PDCe in the forest root domain. To keep time reliable across the forest, only the PDC Emulator in the forest root domain should ever sync with an external time source – this ensures a single source of time is used across the forest. The Windows Time Service blog has a great post entitled Keeping the domain on time which explains this in more detail, including a great graphic.

The Windows Time Settings

You can find the settings for the Time Service in the registry, under HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters. The most important value to note is the ‘Type’ string – on any domain machine other than the PDC Emulator in the forest root, this should be set to NT5DS. The name isn’t particularly descriptive, but when this value is set, the machine finds a time server via the Active Directory domain hierarchy.

If it isn’t set to that, you should think about resetting the time service on that machine. To do that, run a Command Prompt as an Administrator and execute the following commands:

net stop w32time
w32tm /unregister
w32tm /register
net start w32time

Check the registry again, and the Type should now be in domain sync mode (NT5DS).

Sometimes, you may find an NTPServer value in the registry despite Type being set to NT5DS. NT5DS doesn’t use an NTP server, so what gives? This setting is simply left over from before the machine was joined to the domain, when it was in a workgroup. Provided the Type value is set correctly, the NTPServer entry can be ignored or even deleted; running the above commands on a domain-joined machine will remove it automatically.
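
On Windows Vista, Server 2008 and later, you can also verify the configuration without opening the registry at all – w32tm itself will report the current source and sync state:

w32tm /query /source
w32tm /query /status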

The Group Policy Settings

There are also a number of Group Policy settings for the time service. These can be found in Computer Configuration\Administrative Templates\System\Windows Time Service.

I do not encourage you to change these settings; if you have done so, you probably want to revert the policies to ‘Not Configured’. There are reasons why you may make the odd change, but in general, no changes are required and you can actually break the time sync if you do make them.

If you are interested in reading further about what they do, the Windows Time Service blog has another great page going through them: Group Policy Settings Explained.

The Forest Root PDC Emulator Settings

After a bit of a configuration reset, all your DCs, member servers and workstations should now be set to sync from the domain hierarchy. But what about the PDC Emulator in the forest root?

The fact of the matter is the PDCe doesn’t actually need to synchronise with anything. It automatically designates itself the most reliable time server in the domain and it can run quite happily like that, without ever talking to an external time server. My earlier blog post entitled Time: Reliable or accurate? describes why.

However, to have an easy life and keep your users from complaining, it is almost always a good idea to have some form of external time sync on the forest root PDC Emulator. There are a number of ways to do this – for example, an external hardware clock which syncs with GPS. However, the most common (and cheapest – it’s free) solution is synchronising with an NTP server on the Internet. I often use the servers closest to me which participate in possibly the largest time service, the NTP Pool Project (list of time servers). Be aware that if you are bound by SLAs (my company certainly is), the NTP Pool, by its very nature, most probably isn’t the resource for you.

To configure the time sync on the PDCe, you need to execute the following commands. I’d strongly suggest you get a level playing field by resetting the time service using the instructions above before you start.

w32tm /config /manualpeerlist:"uk.pool.ntp.org,0x8 europe.pool.ntp.org,0x8" /syncfromflags:MANUAL /reliable:yes /update

What’s that command doing?

That is a rather hefty command, so you may like to know exactly what it is doing to your server. All the changes take place in the registry, at the key I mentioned above; using the w32tm tool to make the configuration changes is simply much easier than editing the values manually yourself.

/config causes the tool to enter configuration mode. There are a number of other modes it supports which you can find by running w32tm /?.

/manualpeerlist allows you to specify the NTP server or servers you wish to synchronise time with. In this instance, each server’s DNS name or IP address should have a comma followed by the string 0x8. This instructs Windows to send requests to this external server in client mode. If you enter multiple servers, which I suggest, put the servers in quotation marks and separate each entry with a space. The value you specify here is written back to the NTPServer value in the time service’s registry key.

/syncfromflags tells the time service where it should sync time from. It accepts one of two values – DOMHIER or MANUAL. The former causes the time service to synchronise with the domain hierarchy (setting NT5DS in the Type value in the registry), whereas the latter tells the time service to sync with the server(s) you specified in the manual peer list. MANUAL sets Type to NTP.

/reliable sets the server to be a reliable source of time for the domain. Strictly it isn’t required, because the PDC Emulator in the forest root is always the most reliable time server, but I like to include it anyway.

Finally, /update notifies the time service the values have changed, so the new settings are used with immediate effect. If this isn’t included, the registry is updated but the new values will only be used by the time service when its service or the server itself is restarted.
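
As an aside, the same tool will point a machine back at the domain hierarchy – useful if you later transfer the PDC Emulator role elsewhere. A minimal sketch:

w32tm /config /syncfromflags:DOMHIER /update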

After you’ve run the configuration command, you might want to take a look in the registry to see what changes have been made, and whether they are as you expected.
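
A quick way to do that from the same prompt:

reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters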

Check Time Synchronisation

You may be intrigued to know whether the time sync is working correctly. You can do this in one of two ways.

The safest is to wait for a scheduled time sync to take place, or restart the machine. Either will trigger Event ID 35 to be logged in the System log. This event’s description shows the time server the machine is synchronising with. This will be logged on both the PDC Emulator and all DCs, member servers and workstations. You can check for this on member machines to ensure a DC in the domain hierarchy is being found and used correctly – and to ensure your custom NTP servers configured on the PDC Emulator are being used as intended.

Alternatively, putting your cowboy hat on, you can force a time synchronisation. Set the time a minute or two out from what it should be, then return to the command prompt and run w32tm /resync /rediscover. After a few moments, the above event should be logged, and a healthy time service should cause the time on the system to be set back to normal.

As a note, no time synchronisation will take place if the difference between the current system time and the new time provided by the time server is too great. A minute or two is fine, but I would not set the difference to be any more than that. The system checks this difference at each sync, and will reject the new time provided by the time server if it is too large.
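
If you are curious, this threshold is governed by the MaxPosPhaseCorrection and MaxNegPhaseCorrection values (in seconds) under the W32Time\Config key – the defaults vary between Windows versions, but you can inspect them like so:

reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxPosPhaseCorrection
reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxNegPhaseCorrection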

Conclusion

You should now have an understanding of how the time service works and where it stores its settings in the registry. While time isn’t one of the most fun services an Active Directory administrator will work with, it is important you ensure the forest stays in sync if you want to avoid major problems with time skew, Kerberos and Active Directory in general.





Where do I put my ADMX files?

6 06 2009

Note (4th May 2012): As this post proves to be ever popular, I have updated it to account for new developments and to provide a more general method of storing your ADMX files, especially on large networks. Please check out the new post: ADMX files, where to put them, and you – take 2.

ADMX files are the successor to ADM files – the templates which define which Group Policy settings are available and which registry changes they make when applied. With Microsoft’s move to XML-based file formats, alongside the new Office 2007 file extensions (DOCX, XLSX, PPTX etc.), the ADM format was also upgraded to ADMX.

People familiar with ADM files will remember that, for Group Policy Editor to read an ADM file and add its settings to the policy, the template had to be added manually. ADMX files, however, cannot be added via the Add/Remove Templates wizard in Group Policy Editor – they do not even appear as an option there.

Windows reads ADMX files from a single pre-defined location, and that is the only place on the system where you should put them: %systemroot%\PolicyDefinitions, where %systemroot% is normally C:\WINDOWS.

Any ADML files you receive with the ADMX files should be placed into a subfolder of PolicyDefinitions named after their MUI culture. For example, an en-US ADML file belongs in %systemroot%\PolicyDefinitions\en-US.
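
For example, suppose a vendor supplies MyApp.admx with an accompanying en-US ADML file (the file names here are hypothetical). From the folder containing them, copying the files into place is as simple as:

copy MyApp.admx %systemroot%\PolicyDefinitions\
copy en-US\MyApp.adml %systemroot%\PolicyDefinitions\en-US\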

Once you have stored your ADMX files in their respective locations, it is simply a matter of restarting Group Policy Management Console for the files to appear in the Group Policy Editor.

It should be noted that any form of ADM/ADMX file only needs to be present on the machine where the policies are edited from. It does not need to be present on every machine on the network. The ADMX files simply link the GUI of the GPO Editor with the appropriate registry settings to make; the registry settings are simply stored and processed at each client where the GPO applies.





Modifying Outlook Web Access Login Page

30 05 2009

After a recent Exchange 2007 deployment, I was asked to make some modifications to OWA, both to make it more intuitive for less technically-proficient users and to personalise the site to the company.

In Exchange 2007, the business logic which renders OWA is contained within the Client Access Server (CAS) role. This is a new addition; in 2003, the logic was handled by the back-end mailbox servers, with HTTP requests simply proxied via the front-end servers, which acted much like a gateway. On Exchange 2007, therefore, you need to modify the login screen on your Client Access Server(s).

The location of the OWA static content is C:\Program Files\Microsoft\Exchange Server\ClientAccess\OWA. Before you begin making modifications, I would suggest you take a backup of this entire folder and store it safely. There is a lot of ASP.NET programming in the various files; unless you are a proficient .NET programmer, you could easily break your forms-based OWA logon and several other aspects of OWA with just a few wrong clicks.

The changes I made were as follows:

  • I changed the header image on the front page (which says Microsoft Office Outlook Web Access) to include the company name below the text and the company logo in the upper right. This was particularly easy to do in Photoshop, although any graphics editing suite would suffice. The file you need to back up, then modify, can be found in the Current\themes\base folder below the ‘OWA’ directory referenced above; the file is lgntopl.gif. It is in GIF format and opens in Photoshop as an Indexed image; if you are importing any graphics, you may need to change the image mode in Photoshop (via Image > Mode) to ensure colour content is retained. It looks particularly effective when the company name appears to the bottom right of the ‘Web Access’ line in the header image. That, along with the company logo in the upper right, personalises the OWA experience and also acts as a potential security benefit – if users become used to seeing the header this way, they may be deterred from logging in to any other OWA page which does not exhibit your modifications.
  • The logon page can be modified too. It can be found in the Auth directory and is quite aptly named logon.aspx. If you did not take a backup earlier, it is very important you back this file up before making modifications; you will see why when you right-click the file and open it in Notepad or WordPad. The page is built around a standard HTML table, and it is reasonably easy to pick through the content to work out what does what. If, like me, it is unclear to you at first, simply comment out sections of code and refresh your OWA login page to notice the effect. The HTML comment tags are <!-- to start a comment and --> to end it; all the code you wish the browser to ignore should sit between the two tags, and there is no limit to the number of comment tags you can have per page. The features I removed from the login page were the ‘Public/Private’ login option and the ‘OWA Light’ version, as the company decided it did not wish these to be visible to users. As a result, all users log in with sessions of type ‘Public’, and OWA determines whether it operates in Premium or Basic mode based on the browser (IE6 or above works in Premium; all other browsers get the cut-down, no-frills Basic mode). I also added the following as a new row inside the main table which makes up the page:

    <tr>
    <td style="width: 100%; font-size: 14pt; text-align: center;">
    <p align="center">Welcome to <company>'s Web Mail</p>
    </td>
    </tr>
    <tr><td><hr></td></tr>

    This added an additional line to the login page, once again to personalise OWA to the company.

Once you are happy with your changes, I suggest you make a note of exactly what you changed. When any new Service Pack or Update Rollup is applied to the server, it is likely the OWA files will be overwritten when the CAS role is upgraded, meaning you must implement your changes again. I do not advise copying your original files back over the top afterwards, for the simple reason that the SP/UR may have upgraded those files, and overwriting them with your originals from the previous patch level would revert those upgrades.

I hope you have learnt something from this blog posting, and I look forward to hearing back from you as to how you have taken these modifications further with your OWA pages. You are not just limited to modifying the login page; within the ‘OWA’ directory there are plenty of other pages which can have changes made to them, and you can also access all the images which produce the various default themes and modify these as you wish.





Exchange 2007 access to all mailboxes for Administrator

24 05 2009

Deploying Exchange 2007 can have its problems at the best of times. The separation of Exchange management from the Active Directory tools also has a knock-on effect when it comes to granting Exchange-related permissions en masse. This seemingly easy task is now proving to be a minefield.

So, how do you grant an Administrator access to all the mailboxes for an Exchange 2007 Mailbox Database?

Remove the Default Permissions

Before you start, you need to remove the default permissions at the Exchange Organization Level. These apply to all mailboxes in the organization and specifically deny any administrative-type user the Send-As and Receive-As permissions. This may cause confusion later, so it is best to remove them.

The user accounts and groups which are denied the ‘Receive-As’ permission at the organization level are:

  • DOMAIN\Administrator
  • DOMAIN\Domain Admins
  • DOMAIN\Enterprise Admins
  • DOMAIN\Exchange Organization Administrators

In order to remove the Deny entry for all the above users, the following command should be used:

Get-OrganizationConfig | Remove-ADPermission -User "DOMAIN\Administrator" -AccessRights ExtendedRight -ExtendedRights Receive-As -Deny

Replace DOMAIN\Administrator with the other entries in the above list to remove the permission for those accounts too.
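
To save running the command four times by hand, a short Management Shell loop – a sketch using the same cmdlets; substitute your own domain name – will remove the Deny entry for all four accounts:

$accounts = "DOMAIN\Administrator", "DOMAIN\Domain Admins", "DOMAIN\Enterprise Admins", "DOMAIN\Exchange Organization Administrators"
ForEach ($account in $accounts)
{
    Get-OrganizationConfig | Remove-ADPermission -User $account -AccessRights ExtendedRight -ExtendedRights Receive-As -Deny
}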

The Commands

Having removed the default permissions, you can now set about implementing the permissions needed. Prior to discussing the actual PowerShell commands to use, it is important that you understand the different types of permission which can be granted:

  • FullAccess mailbox permission: This can only be granted at the individual mailbox level, to a user or group of users; it allows the designated users to access the mailbox via Outlook or via the new Outlook Web Access feature for opening another user’s mailbox. The permission is granted directly on the mailbox using the Add-MailboxPermission cmdlet. An example might be: Add-MailboxPermission -Identity JDoe -User MSmith -AccessRights FullAccess. Here, MSmith is being granted the ability to access JDoe’s mailbox. The permission can also be granted via the Exchange Management Console: selecting a mailbox in Recipient view adds a Manage Full Access Permission option to the actions pane, where the permission can be managed in a similar fashion. The problem with granting permissions in this fashion is that it has to be done mailbox by mailbox. For granting permissions en masse, the administrative overhead of maintaining the ACLs defeats the principles of good permission management; instead, permissions should be granted on a common parent object and allowed to inherit to the child objects – in this case, the mailboxes.
  • Receive-As Active Directory permission: This permission can be set either at the mailbox level or at a higher level in the Active Directory tree. It has the same effect as full mailbox access, with the difference that it can be set at the store or storage group level, and will therefore be inherited by all descendant mailboxes. Generic Active Directory permissions are granted and modified at the Management Shell using Add-ADPermission. This cmdlet expects the -Identity parameter to be a full Active Directory path – I believe a distinguished name is expected. It is, therefore, much easier to pipe this path from the result of a previous command, particularly when handling some of the more complicated Exchange objects with complex DNs. For example, to grant these permissions at the store level (the store being an Active Directory object), I could use: Get-MailboxDatabase -Identity "My Database" | Add-ADPermission -User "DOMAIN\Group of Users" -AccessRights ExtendedRight -ExtendedRights Receive-As

The problem with granting Receive-As permissions is that, while Outlook will obey them and happily display a mailbox where the Receive-As permission is inherited, the new Outlook Web Access feature which allows other mailboxes to be opened does not. For the OWA feature to work, the user must be granted explicit full mailbox access on every individual mailbox they need to access.

My Approach

To achieve the ultimate objective of allowing Domain Admins to access a mailbox, either from Outlook or OWA, I chose to use several commands.

I first granted Domain Admins ‘Receive-As’ access at the store level using the command I described above. Via Outlook, these permissions would allow any Domain Admin to open these mailboxes as additional mailboxes.

To counteract the OWA restriction, I had to grant the Full Access permission on every mailbox. While this is very messy to maintain, it is currently the only option. Furthermore, as new mailboxes will not have the permission set by default, I use a Scheduled Task with a small PowerShell script to set the permissions for every mailbox once per day.

My PowerShell script (.ps1 file) consists of the following:

# Matt's PowerShell script (see tigermatt.wordpress.com) to add Full Access permissions to all mailboxes in the Exchange organization
# Load the Exchange snap-in, in case the script runs in a plain PowerShell session
Add-PSSnapin Microsoft.Exchange.Management.PowerShell.Admin -ErrorAction SilentlyContinue
# Enumerate every mailbox and grant Domain Admins Full Access on each
$userAccounts = Get-Mailbox -ResultSize Unlimited
ForEach ($user in $userAccounts)
{
    Add-MailboxPermission -Identity $user -User "Domain Admins" -AccessRights FullAccess
}

Via Task Scheduler (Windows Server 2008), you can launch the script by specifying powershell.exe as the application and "& 'C:\path\to\script.ps1'" as the parameter. Note the double quote followed by the single quote, and the requirement to close both quotes at the end of the command.
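
Equivalently, you can create the task from the command line. A sketch – the task name and account are placeholders, and the account must have sufficient rights to run the Exchange cmdlets:

schtasks /create /tn "Mailbox Permissions" /tr "powershell.exe -command \"& 'C:\path\to\script.ps1'\"" /sc daily /st 02:00 /ru DOMAIN\ExchangeAdmin /rp *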

On the schedule defined by your task configuration (I would suggest the task run daily during the night, when load on the server is low), the script will enumerate all mailboxes in the Exchange Organization, adding the required Full Access permission for the Domain Admins group.

This concludes my entry on granting various permissions in Exchange using PowerShell. I hope I have cleared up some of the confusion regarding the differences between adding mailbox permissions and adding Active Directory permissions, and that this helps you.