SBS is going! A sign of the future?

27 08 2012

The recent news that Microsoft are discontinuing their all-in-one Small Business Server (SBS) offering sent a shockwave through the entire IT community. In a post on the SBS product group’s blog at the start of July, the news was made very clear: the end of an era is upon us. However, it is not just small businesses, or those who specialise in working with them, who should listen up and take note. This development is one of many which will define lasting change in the way we do computing, in this decade and beyond.

For those not familiar with Microsoft’s SMB market: SBS rapidly became a valuable product for companies with 75 users or fewer after its introduction. The platform was an instant winner because it combined a good number of products from the Windows Server portfolio into a single server which was very easy to manage. For the first time, small businesses were able to compete in the global marketplace alongside their enterprise rivals, realising the benefits of identical technologies without breaking the bank. Many companies sought a collaboration server like Exchange to manage their communication better than they could with conventional email systems; Exchange was undoubtedly one of the most popular products to be bundled with SBS, and one which enabled many businesses to thrive and flourish.

I owe a lot to SBS — in fact, without it, I would not be writing this article. Regular contributors at Experts Exchange may be aware that I began learning about networks and servers when I built my first server at the age of 11 to run the family computers. That server ran SBS 2003. Back then, my knowledge of networking was non-existent, but over time, and with supportive guidance from my peers at Experts Exchange, I was able to learn. I now work with organisations ranging from 5 users up to several thousand, managing their infrastructure and developing solutions to help them integrate technology into their business model and grow. For one such company, having office staff and a snazzy computer system is unheard of in its industry, yet I have watched them develop from a small setup with a single laptop and a home office into a very successful company in their own right — and they credit their SBS server with helping make that happen.

So what’s the alternative?

Well, it’s not all bad news, and no, that’s not because I’m advocating we make what we already have the de facto standard for the next umpteen years and stop innovating. The replacement comes from the Windows Server 2012 range, which has been massively revamped and simplified: Windows Server 2012 Essentials. It supports a maximum of 25 users on a single physical server and provides many of the services expected of a local server. Unfortunately, there’s a drawback: no Exchange Server. Companies are encouraged to push email hosting away from the local premises to Office 365 in Microsoft’s public cloud, and the Essentials server provides integration with Office 365 to manage and control the service. It does still support an on-premises Exchange Server, and there is the option of Windows Server 2012 Standard, which now allows a physical host and 2 virtual servers under a single license; however, that licensing must be acquired separately. Quite understandably, this strategic change in the rules of engagement has caused much controversy.

I understand and fully appreciate the intention of cloud-based computing: moving infrastructure away from individual companies to service providers and large datacentres, increasing reliability and managing cost for the end-user. The model makes sense on the surface and has been enjoyed by many for years; consider Hotmail, one of the first cloud-based email services, joined rapidly by Yahoo! and Gmail in the late 20th and early 21st centuries. The technology is not new. Indeed, Google, Microsoft, Apple et al. have been successfully pushing it for years. What is new and scary is the paradigm shift which is trying to drag the business sector kicking and screaming into the cloud before the technology is proven. There are many issues blocking adoption of the cloud: compliance, conflicting privacy laws between countries (where the service provider is in one and the user in another) and the lack of affordable, high-speed Internet connections in all corners of the globe. For these reasons, I am not convinced that the cloud is the way forward for business (and apparently, neither is Steve Wozniak). I want to know exactly where my data is and have complete control over who is permitted to access it; I cannot afford to entrust such data to another party. If everything is in my hands, it’s my fault and my fault alone if something goes wrong. In the cloud, by contrast, it’s impossible to know exactly what is happening to your data, nor is there any guarantee today’s data will still be there tomorrow.

There is always difficulty effecting change in just about any situation, and this becomes especially prominent when the impact is global. At first, I was saddened by the loss of SBS, but at least I know there are valid options for the future. I am concerned over the direction of the IT industry and the forced shift of companies to the cloud. It is a very big worry, and many IT professionals feel their entire livelihoods have been torn from beneath their feet. However, we cannot stand still. As Paul Cunningham of exchangeserverpro.com says in his news article on this story: “This is IT. Things change. You either change with them, or you die too.” We do, however, need to be very careful how we play the hand we have been dealt, and moving leaps and bounds into the cloud is not a card I am about to play any time soon.





ADMX files, where to put them, and you – take 2

4 05 2012

A few years ago, I wrote a blog on the storage location of ADMX files. For Group Policy, these files are crucial, as they define the settings you see in the Group Policy Editor, and by extension, they describe the registry settings which need to be managed on each client workstation to which a policy is applied.

(Contrary to popular belief, the Group Policy Engine on a client does *not* need to refer to these files to actually apply Group Policy. The Group Policy Editor parses the file and stores the specific registry modifications in the appropriate location in the SYSVOL folder structure. The editor does, however, require access to all the proper ADMX files to allow an administrator to make policy changes.)

The ADMX format was introduced with Windows Server 2008 and Windows Vista and is XML-based, unlike the previous ADM syntax of Windows Server 2003, a custom format which proved challenging at times.

In my earlier post, I specified that the best location to store these files is %systemroot%\PolicyDefinitions on each of your DCs. This was in response to a specific problem I had at a customer with a new, single, standalone Domain Controller.

However, on much larger networks, this advice is not something I would endorse. By storing the files in the PolicyDefinitions folder on each DC, the ADMX files will only be available to the Group Policy Editor on that particular Domain Controller. If you want to use the Group Policy Management Console from a workstation, another DC or a member server, you are going to have many settings with no policy definition, so you will be unable to manage them. With products like Server Core (a particular focus of Windows Server 8 Beta), managing Group Policy from the DC’s desktop is no longer a recommended or particularly routine operation. Indeed, managing a DC directly from its desktop for such routine changes is not a best practice; delegating control over Group Policy and making the changes from a workstation is a better choice. So, we need a better way of sharing the ADMX files across the entire LAN to ensure they roam to any machine where policy may be set.

Fortunately, Microsoft already have a solution. It’s known as the Central Store. Essentially, this is a PolicyDefinitions folder within the SYSVOL folder hierarchy which you already know about. By placing the ADMX files in this directory, they are replicated to every DC in the domain; by extension, the ADMX-aware Group Policy Management Console in Windows Vista, Windows 7, Windows Server 2008 and R2 can check this folder as an additional source of ADMX files, and will report them accordingly when setting your policies.

By default, the folder is not created. Whether you have a single DC or several thousand, I would strongly recommend you create a Central Store and start using it for all your ADMX file storage. It really does work well.

More information and detailed procedures are available from Microsoft Support.





iPhone 4S, Exchange ActiveSync and internal wifi

30 04 2012

It has been known for a while that iPhones and other iDevices do not play well with Exchange ActiveSync when roaming between a public network (such as 3G) and a private internal network to which the Exchange Server is connected. In particular, push email often does not work, which seems to be a bug in the iOS software. It’s a known issue, according to Apple. However, it caught me out recently, because the problem seemed to go away for a long time with the release of the iPhone 4.

At work, we have a set-up which is quite common for organisations of our size: two distinct networks. The internal network is reserved only for trusted devices owned and managed by us (the PCs, laptops, printers, switching gear, servers and now, thin clients). With thousands of devices on this network, it is VLANed quite heavily to increase manageability, although I will admit this was a project I only completed fairly recently. Before last year, it was a single broadcast domain… but that is another story.

However, we also have a guest network. The guest network is isolated in its own VLAN, and is for wired clients which cannot authenticate as domain members (via 802.1x authentication) or for wireless clients connecting via a “Guest” SSID issued by our wireless controller. The guest network is still restricted to internal use – users authenticate to our RADIUS servers from their phones or laptops, and provided they supply valid credentials, they are granted restricted access to the Internet.

All of these networks are linked together by our Forefront TMG deployment, which sits on our inbound ISP connection and exposes several interfaces to the network: the internal network has two teamed interfaces (for redundancy and for throughput of data served from cache), and the guest network has a further interface. The TMG deployment is the gateway for the guest network, and the internal network’s 0.0.0.0 default route sends unresolved traffic crossing the VLANs to it.

When the Forefront TMG was provisioned last year, I initially configured the guest network for internet access, but also with an internal set of “relay” rules, if you like, for access to certain resources on the internal network: OWA, RD Web Access, our management system, internal websites and, crucially, DNS lookups via our internal name servers. In effect, guest traffic was not NATed onto its own public IP; when it matched a firewall rule for one of our internal services, it was simply routed into the internal network. This made the deployment much simpler, and meant the internal IP addresses returned by our internal DNS nameservers would still work for guest clients. Upshot: I didn’t need more nameservers!

At the time, this did not pose a problem, even with the iPhone and iPad devices used by our staff. These phones could have been on 3G and wifi simultaneously, and we never had an issue with the mismatched IP addresses on the two networks stopping ActiveSync working.

That is, however, until someone upgraded to the iPhone 4S.

As noted in the blog post linked above:

“push” may stop working if your company’s Exchange ActiveSync server has a different IP address for intranet and Internet clients. Make sure the DNS for your network returns a single, externally-routable address to the Exchange ActiveSync server for both intranet and Internet clients

The problem experienced with this one iPhone 4S user went beyond push email. The user’s phone worked perfectly when away from the network. However, the moment it roamed onto our wifi, it seemed to have an adverse effect on the Exchange account configuration. Almost immediately, the phone would report a password error on a manual email check. The Exchange account would then refuse to work at all – on any network – until the user deleted the device from the Exchange Control Panel (ECP), switched back to 3G and re-created his connection.

I was not convinced the issue was with Exchange – all manner of other devices, even the iPhone 4, were still working, and every test came back clean. Nor was it a user issue: I had him use a test user account for a few days, with exactly the same result.

Eventually, after a lot of painstaking troubleshooting (and waiting for feedback), it became very clear the issue was present only on the iPhone 4S and only in certain circumstances. However, it was much more serious than before – when the issue occurred, it did not just stop the iPhone from working until it roamed off-site again; it essentially wrote off the email capability on the device.

The resolution was a simple one, and one I should probably have implemented in the first place. The Forefront TMG deployment was re-configured. No routing was permitted between the internal network and the guest network. Instead, I added a network rule for guest network traffic to be NATed to its own public IP. I built a new cluster of standalone DNS servers, which serve two purposes – recursive lookups from the internal network (they are, effectively, caching servers) and hosting of the public DNS zones which return public IP addresses for all our network services.

When the guest network was given access to these nameservers, the iPhone 4S problem immediately went away. As detailed by Apple, it seems their devices are once again having issues with multiple IP addresses being issued by DNS for the same service. I thought this inconvenience had been resolved, but it appears this design consideration will be going back into my network design methodology for the future. In any event, it did allow me to streamline and simplify our guest network configuration, which is always a good thing!

Watch out for Apple devices and the problem with issuing different internal and external IPs if they are used on your internal wifi. Either make the public IP routable internally and use that for internal access, or – a very common solution – don’t use them on wifi at all.





Exchange Server password expiry handling on iPad/iOS 5

31 12 2011

Overnight, the password for my Exchange account expired, as would be expected in line with my security policy.

Unfortunately, it would appear there is a bug in iOS 5’s handling of this situation. My iPad (running iOS 5.0.1) threw up many, many “incorrect password” prompts when I picked it up to use it this morning. There were so many that I was about to concede the iPad was unusable until I could find a computer on which to change my password, as it had yet to be set to a new value.

I would usually change my password directly from the iPad, by logging in to OWA, where I have enabled the ability to change a password when it has expired.

After some time spent pressing “Cancel”, I was finally released from the grasp of this prompt and was able to proceed to use the iPad normally.

It would appear the number of prompts equals the number of Fetch attempts since the password expired, the number of occasions the iPad has tried to open a session for push delivery from the server, or both. Of course, the iPad would have failed on every occasion, and it appears to be extremely verbose in displaying each and every failure.

Either way, the code should detect an incorrect password and show the “Incorrect Password” pop-up once only, as was the behaviour I experienced on iOS 4. If I choose to dismiss that message, I should not be repeatedly prompted with the same alert. As a tech-savvy user, I repeatedly hit “Cancel”, but many of the users I deal with on a daily basis would try this a couple of times and then assume their iPad was unusable, going no further for fear of “breaking” something.

It seems I am not the first to come across this issue, but I will add my voice to those who hope this issue is resolved in a future iOS release.

For Exchange and AD admins, be aware this issue could potentially lead to account lockouts, depending on your security policies.





Missing some cmdlets in the Exchange Management Shell? Me too!

11 11 2010

On one of our many Exchange Servers at work, I recently discovered the Exchange cmdlets in the Management Shell which I rely on for my daily Exchange management had disappeared. get-excommand reported just one Exchange cmdlet was loaded: Get-ExchangeDiagnosticInfo. Strange. They were there one day, gone the next. No, it wasn’t caused by an update to the best of my knowledge; it didn’t happen over our patching window.

The case of missing cmdlets was traced back to an issue with my user profile on this server. A test with another user account yielded no issues at the Management Shell.

A quick fix might be to obliterate the user profile using the System applet in Control Panel, then log back in and have Windows generate a new profile. However, given how simple the actual solution is, this is totally unnecessary, and you would lose any special configuration.

Exchange Management Shell uses a directory in the user’s roaming Application Data to store the PowerShell module configuration settings. My module data had acquired some… modifications. I don’t know the source of these changes, but they rendered the cmdlets missing. I suspected this was the case because the shell loaded much more quickly than normal when it was broken – rather than showing the status of the pending implicit remoting session, which I am used to seeing, it loaded and connected almost instantaneously.

The solution is to remove the C:\Users\username\AppData\Roaming\Microsoft\Exchange\RemotePowershell\your.domain.com directory.

After deleting this directory, restart the Shell. The startup process will create the directory and re-generate the module files, fixing your issue and allowing you to get on with whatever you needed to do!

Matt

P.S. I know I’ve been quiet lately, and for that, I apologise. For the past couple of months I’ve been involved in an almighty migration job, away from an awful managed service network (tip: NEVER opt for an outside company to supply your network. It falls apart!) to a vanilla Windows Server system. It came not a moment too soon, but completing a migration of this magnitude for 2500 seats in the 6-week maintenance window is no easy feat!

I do have some articles on the backburner, and hope to get some out to you ASAP. Thanks for your patience, and thanks for reading!





Windows XP Favourites Redirection – ADMX files

3 08 2010

One of the major disadvantages of still running XP in production is its lack of Internet Explorer Favourites directory redirection. If your users frequently roam between computers, the usual workaround is to enable Roaming Profiles to have the favourites roam with them. This usually works, until Windows Vista or 7 is introduced into the environment.

The newer Microsoft operating systems, from Vista onwards, do not support the old, legacy format of the XP profile. Instead, users logging on to a modern OS for the first time are given a new roaming profile with “.V2” appended to their username in the roaming profile share. This is the version 2 profile, used by Vista and later, and totally isolated from the XP profile, including the data it contains. In a phased roll-out of the newer Microsoft operating systems, you must follow best practice by using folder redirection to redirect user data on all systems to a common network location. This removes the data from the profiles, maintains consistency and ensures the user experience is the same on all network stations, regardless of which OS is installed and therefore which profile and data the user will have access to. Plus, roaming profiles are just too slow for storing lots of user data anyway.

Unfortunately, Windows XP does not support redirection of the Favourites directory; this support was added in Windows Vista. One workaround I have seen is the built-in Vista redirection configured to redirect user favourites folders on newer systems to the legacy XP roaming profile share. This works, but it’s not particularly clean; redirecting data to a profile share rather than a user (home folder) share just isn’t right. It also causes data loss issues if a user’s profile must be reset; I work by the principle that only disposable data – stuff the users could live without – should be put into a user’s profile for precisely this reason.

Implementing Favourites redirection in Windows XP is a logical alternative; it isn’t particularly difficult either. I developed the following ADMX files to supplement the older ADM solutions which are available through a search on a popular web search engine. With 2008 or 2008 R2 Domain Controllers, the ADMX format is available for your use and I would highly suggest you make use of it. ADMX is XML-based and much, much easier to use than the legacy ADM language.

XPFavouritesRedirect.admx

<policyDefinitions revision="1.0" schemaVersion="1.0">
  <policyNamespaces>
    <target prefix="customFavorites" namespace="Microsoft.Policies.Favorites" />
    <using prefix="inetres" namespace="Microsoft.Policies.InternetExplorer" />
  </policyNamespaces>
  <resources minRequiredRevision="1.0" />
  <supportedOn>
    <definitions>
      <definition name="SUPPORTED_IE5" displayName="$(string.SUPPORTED_IE5)" />
    </definitions>
  </supportedOn>
  <policies>
    <policy name="IE_Favorites" class="User" displayName="$(string.IE_Favorites)" explainText="$(string.IE_Favorites_Location_Explain)" presentation="$(presentation.IE_Favorites)" key="Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders">
      <parentCategory ref="inetres:InternetExplorer" />
      <supportedOn ref="SUPPORTED_IE5" />
      <elements>
        <text id="IE_Favorites_Location" key="Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" valueName="Favorites" required="true" expandable="true" />
      </elements>
    </policy>
  </policies>
</policyDefinitions>

XPFavouritesRedirect.adml (name this the same as the ADMX file and place it in the language folder, e.g. en-US, within your PolicyDefinitions directory)

<policyDefinitionResources revision="1.0" schemaVersion="1.0">
  <displayName>
  </displayName>
  <description>
  </description>
  <resources>
    <stringTable>
      <string id="IE_Favorites">Location of Internet Explorer Favorites</string>
      <string id="IE_Favorites_Location">The path to the favorites folder</string>
      <string id="IE_Favorites_Location_Explain">Specify the path to the location of your Favorites folder. This is stored in an expandable registry string value, so you can use environment variables, such as %HomeDrive%%HomePath%.</string>
      <string id="IE_Favorites_Location_Tip1">Specify the UNC path to the favorites location</string>
      <string id="InternetExplorer">Internet Explorer</string>
      <string id="SUPPORTED_IE5">at least Internet Explorer v5.01</string>
    </stringTable>
    <presentationTable>
      <presentation id="IE_Favorites">
        <textBox refId="IE_Favorites_Location">
          <label>Path:</label>
          <defaultValue>
          </defaultValue>
        </textBox>
      </presentation>
    </presentationTable>
  </resources>
</policyDefinitionResources>

The above is standard ADMX/ADML format which can be dumped in the correct locations of your Central Store (if you don’t have one, why not? Set one up, otherwise you will need to store them in the local store on each DC). In the GP Editor, it will appear as a policy in the standard Internet Explorer area under the User Configuration / Windows Components node.

The Favourites registry value in HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders is of type REG_EXPAND_SZ. The ADMX implements this with the expandable="true" syntax, meaning from your perspective, you can specify environment variables in the GPO and these will be properly expanded by the system to their full paths. I personally use %HomeDrive%%HomePath%\Favourites to direct them to a subfolder of the user’s defined home folder location in their Active Directory user account properties.

This does not move any existing Favourites out of the profile and into the redirected location. However, that is fairly easy to script as a logon script or one-time operation (a sketch follows below). For new users, the Favourites directory will be created automatically, assuming the home drive exists, the user has permissions, quota is not fully used and so on.

It is a good idea to set the XP Favourites redirection policy in its own GPO, then apply a WMI filter to scope the policy to XP/2003 and older systems only (an example query is sketched below). Windows Vista and above support native redirection of Favourites, so you should use a separate, WMI-filtered policy for Vista+ computers to redirect their Favourites to the same location as defined for the XP clients.





APC Powerchute vs. Windows Power Management

1 08 2010

I was recently trialling APC Powerchute on a small SBS 2008 server, attempting to maintain some automated shutdown while also gleaning some stats on how frequently the UPS was intervening. I’ve used the software before, but this time it refused to play ball; I saw the stats, but it never shut the server down on power failure. Not good; I’d rather know the data was safe than be told how many times it wasn’t safe.

So, I reverted to the fallback option. An APC UPS (the USB-connected ones; I am not sure about serial) can run under Windows’ power management, configured and monitored in exactly the same way a laptop battery would be. Thus, they truly are plug-and-play; some less reputable brands require their own monitoring software and are nowhere near as effective, in my experience.

Alas, uninstalling the software did not restore Windows Power Management. I waited… rebooted… checked Control Panel… nothing. No mention of a battery in the power options, and the power meter icon was disabled in the task tray. I’d lost all shutdown functionality from the UPS. Yet again, a routine job involving a computer turned into a match of man vs. machine.

The fix was surprisingly simple and didn’t involve edits to the registry (which I fear when it comes to drivers, hardware and critical things like power), but it was unintuitive:

  1. Open Device Manager, expand Batteries, locate the UPS and uninstall it. Be sure to uninstall the driver too when asked
  2. Wait a few minutes for that to complete, then on the Action menu, hit Scan for hardware changes
  3. Sure enough, the UPS was detected again, the drivers were installed fresh and my power icon in the task tray was immediately restored

It would appear APC Powerchute doesn’t fully tidy up after itself.





Demystifying the Active Directory FSMO roles

3 04 2010

If you’ve spent any time administering Active Directory, you’ve probably come across the concept of Flexible Single Master Operations (FSMO) roles. Their introduction is arguably one of the most important but misunderstood changes to Active Directory in the last ten years.

Take a trip down memory lane

In the days of Windows NT, one may recall the Primary Domain Controller (PDC) and Backup Domain Controller (BDC) concept. The directory was structured such that every DC, whether a PDC or a BDC, had a copy of the directory database, but only the PDC could make changes to that database. The model was inefficient, negatively impacted growth and desperately needed improving if the product had any chance of surviving.

Enter Windows 2000. The Directory Service went through one of its largest scale rebuilds to date. Replication and management was significantly improved and the concept of having a multi-master directory was introduced. Although this design has been tweaked over the years, fundamentally, it has remained the same through the versions – because it works. Any DC anywhere in the domain can execute virtually any update to the directory. This scales beautifully, even on large, geographically dispersed networks with many thousands of users.

However, notice I said virtually any change. Since a change can take effect at any DC, there is the possibility that a conflicting change will be made in two locations concurrently – or before replication can occur. Active Directory must ensure these situations are accounted for. In most cases, it applies its complex Multimaster Conflict Resolution Policy, which essentially says the last change wins. However, there are several procedures which simply cannot be allowed to conflict; these procedures are assigned to the five FSMO roles, each of which is delegated to a single, nominated Domain Controller.

What are the FSMO roles?

There are five roles in the directory, residing on DCs nominated specifically by the administrator to perform these tasks. Every role is important, and each role holder constitutes a single point of failure in an Active Directory enterprise. If you have a topology with more than one domain, some roles are domain-specific, so you can expect a holder of those roles in every domain in the enterprise.

  • The Domain Naming Master exists once per forest – in the forest root domain – and is rarely used. It is responsible for processing the addition of new child domains, application partitions and external cross-references to the enterprise. Since the name of a child domain or application partition cannot be duplicated (it would conflict in DNS, let alone send Active Directory around the twist), the DC holding this role is the only DC with the ability to process all additions of this kind in the forest.
  • Infrastructure Master: If a user from a foreign domain within the same forest is added as a member of a compatible group in another domain, the DCs in the group’s domain must hold some information about that user in their local databases in order to update the member attribute of the group. To do this, a DC adds a special record to its database called a phantom, which contains only the foreign user’s security identifier (SID), globally unique identifier (GUID) and distinguished name (DN). Like all objects in the database, this record is given a distinguished name tag, or DNT, an internal reference used solely in the low-level Active Directory database layer. In doing this, the directory service is able to add that user as a member of the group by referring to the phantom’s DNT, just as it would refer to a user’s own DNT if you added a user from the group’s own domain. You might think of this like using a primary key in a relational database to refer to objects across tables, though not exactly, as Active Directory’s database is by no means any sort of RDBMS.

    That’s very clever, but what if something about the source user in their original domain changes? If the user is renamed, moved or deleted, the phantom in the group domain DC databases would lose its referential integrity with the source domain. This is a situation the infrastructure master aims to avoid. On a periodic basis (by default, every 2 days), the infrastructure master – an FSMO role present in every domain – compares its local database to a Global Catalog (GC) server to determine whether any changes have been made to the objects the phantoms were created to represent. A GC contains a partial replica of all objects in the forest, so replication means any GC would already know about this updated data. The phantom is then updated with new values or deleted from the domain’s database if the object has been removed from its source domain.

    In a multi-domain forest, you must either locate this role on a Domain Controller which is not a Global Catalog or, if you must locate the role on a GC, ensure all DCs in that particular domain are GCs. A GC will never create phantoms because it already knows about users from other domains. If the infrastructure master is a GC, there will never be any phantoms in its local database to compare with the global catalog data, so no updates will be made, but other non-GC DCs in the domain would gradually become outdated. If all DCs in the domain are GCs, or you only have a single-domain forest, every DC knows enough about the security principal that it does not need to create a phantom, so this role is essentially redundant.

  • Schema Master: As the name suggests, this role is the Master of the Schema, the information which contains the formal definitions of how Active Directory stores objects, what attributes are available on those objects and so on. This role exists once per forest, on a DC in the forest root domain. Any updates to the Schema must be tightly controlled, so one DC delegated as the Schema Master performs all such changes to the database. Schema updates are then replicated to other DCs on the network by standard Active Directory replication.

So far, three of the five roles have been covered. These are the ones I would consider the least critical FSMO roles in the forest. If you lose the DC holding one or more of these roles, it’s no big deal — it may prevent a network administrator taking an action, but it will not impact the usability of the network. Losing the Domain Naming Master or Schema Master would create problems in regard to creating child domains or running schema updates, but these occur very rarely, and checking that the Operations Master DC is up should be part of the planned engineering works. Similarly, losing the Infrastructure Master may cause integrity issues in the database, but given that it only runs its scan every two days in the first place, a day or two of outage will not generally cause an issue.

  • RID Master: This role is one of the two which are important to the daily operation of Active Directory. Under the glossy GUI of Windows, security principals are identified and differentiated by the use of two values: a Security Identifier (SID) and a Globally Unique Identifier (GUID).

    A SID is a structured string which is unique throughout a forest. The SID is the actual value used internally by Windows to identify users and grant access to resources using Discretionary Access Control Lists (DACLs), for example, via the ‘Security’ tab on a file or directory. Have you ever deleted a user, recreated her, then wondered why she cannot access the same files and folders, despite having the same username? The new account has a new SID and is therefore considered an entirely different security principal by the system. Contrary to popular belief, the username, distinguished name and full name of a user are not internal tracking mechanisms within Windows, as all of these values can change.

    The standard make-up of a SID might be as follows (this SID is purely random): S-1-5-21-789336058-1123561945-725345543-10823. The nature and formation of a SID is beyond the scope of this article, but it is the very last component (in this instance, 10823) we are interested in. This figure is the Relative Identifier (RID), an incremental value which actually makes SIDs unique within a domain, ensuring no two security principals conflict in the database. When a security principal (user, computer, group etc.) is created, the domain SID (in this instance S-1-5-21-789336058-1123561945-725345543) has the next available RID appended to the end.

    Each Domain Controller is initially allocated a pool of 500 RIDs, and as security principals are created, RIDs are used up. The allocation of RID pools to DCs is the task delegated, via the RID Master FSMO role, to one DC in each domain. Placing the operation in an FSMO role ensures no DC obtains a duplicate RID pool, which would eventually lead to conflicts in SID values and a major problem in terms of SID uniqueness within the domain.
  • PDC Emulator: This is the most complicated and least understood role, for it performs a diverse range of critical tasks. It is a domain-specific role, so it exists in the forest root domain and every child domain. It was originally conceived for backwards compatibility with legacy systems, such as Windows NT BDCs. However, the role is also responsible for keeping the domain time in sync, given that the DC holding this role in the forest root domain is the most authoritative time source in the forest. Password changes and account lockouts are immediately processed at the PDC Emulator for a domain, to ensure such changes do not prevent a user logging on as a result of multi-master replication delays, such as across Active Directory sites.

    It should be noted that the PDC Emulator does not act in the same fashion as a PDC on a Windows NT network. Cast your eye back to the top of this article and note the section regarding a multi-master directory: for multi-master-aware applications, most updates can be made at any DC on the network. However, if an application (or operating system) is not multi-master aware, the PDC Emulator acts as if it were the PDC on a Windows NT network, and one of these older applications would most probably single it out and write all its changes there.

The latter two roles are much more crucial to the daily operation of the network and could very quickly become a limiting factor in its growth, usability or even the logon process if the DC(s) holding them are offline for any period of time. If the RID Master is lost, the impact will only be felt by the network administrator once a DC depletes its pool of RIDs; on busy networks, this could potentially occur in a matter of days through the creation of new security principals. However, loss of the PDC Emulator could directly affect your users — you’d better have a substantial help desk ready for a spike in call volume if this DC is down for an extended period. For example, with the most authoritative source of time unavailable, time skew could eventually develop between DCs and computers in the enterprise and/or domain, leading to Kerberos authentication errors and, ultimately, failed logons. While it would not be an immediate issue to take this server offline (provided you do not have any legacy applications), this is the role I would be most concerned about in the event of a DC failure.

Conclusion

If you are still reading, well done! This article covers several aspects of Active Directory in detail, including low-level database processes unseen at the surface – particularly via the GUI. However, FSMO roles are a crucial component of your deployment — having an understanding of the underpinning concepts will help with their placement, deployment and high availability concerns within your enterprise.





Why you shouldn’t use PST files

5 12 2009

They have been around for years, and thousands of Microsoft Outlook users and email administrators out there would be lost without them: Personal Storage Table (PST) files. If you’ve worked with Outlook for very long, the name will immediately ring a bell; if you’ve ever administered Outlook, you may already know about the problems associated with this notorious file format.

In any corporate environment – or, for that matter, any environment with an Exchange Server – the use of PST files as a permanent solution to an email administrator’s problems should be banned. Let’s find out why.

Problem 1: File Sizes and Data Security

The number one issue with the PST format prior to Outlook 2003 was that it was ANSI (American National Standards Institute)-based. The ANSI PST format has a maximum size limit of 2GB, and further limitations exist on the number of items which can be stored per folder. Worse, a particularly problematic bug in Outlook allowed data to be written to ANSI PSTs past the 2GB limit without warning. This would result in data loss: at least the data past the 2GB limit, but potentially all the data stored in the file.

To address these concerns, Outlook 2003 and higher introduce a new PST format based on Unicode. This format stores up to 20GB of data, but note that upgrading Outlook does not automatically upgrade any existing PST file(s); the upgrade must be completed manually, by creating a new Unicode file and transferring the data across.

Despite the improvements made, PST files are still susceptible to corruption issues – which will result in lost data. These become particularly prevalent as files become larger or you increase the volume of data which moves through the file. For most users, the prospect of losing precious or business-critical emails, reminders, tasks and contacts could be cause for significant concern. It shouldn’t come as a surprise that you should make a regular backup of your PST file(s), but this is not completely safe, as a PST can go for weeks or months in a partially corrupted state before you realise you have a problem.

Problem 2: Network Access and Backups

PST files must be stored on a local hard disk; accessing them over a network is not supported by Microsoft. Instability in the network, loss of connectivity and speed issues in reading from and writing to the file server can all cause problems, particularly for sensitive PST files, which are so very easily corrupted.

This has two implications for system administration:

Firstly, backups, already difficult to maintain due to corruption going undetected, become even more difficult to implement. As the PST cannot be run from the network, you must configure backups on each machine individually, and must ensure the backup does not run while Outlook is open. Backing up the Exchange Server is rather pointless, as the data is offloaded into the PST when the user logs in.

Second, your cost of administration increases significantly. Consider a typical organisation, which may have remote workers and several sites across the country or around the world: moving administration away from the server and towards the client undermines the design principle of central administration, requiring more admin time to perform repetitive tasks on PST files. The system may quickly grow beyond your control, becoming increasingly difficult to track and maintain.

Problem 3: File Sharing and Remote Access

PST files do not natively support file sharing between multiple users simultaneously. If you attempt to configure this, the mail file may be corrupted — not to mention the fact you would need to run the file over the network, so problem #2 has already been invoked.

Storing data in PST files has no benefits for remote access either. Exchange’s Outlook Web Access (OWA) (Outlook Web App, in Exchange 2010) allows users to access their mailboxes remotely, providing a near-Outlook user interface for doing so. Data in PST files has usually been removed from the mailbox, so it immediately becomes inaccessible to the remote user.

Problem 4: Inefficient use of resources

You’ve invested in a powerful Exchange Server. It has large, redundant disk arrays and plenty of processing power and RAM; it cost you thousands in hardware and software licenses; it adds significantly to your energy and datacentre cooling bills. If PST files are in use, your server is essentially going to waste: the functionality you are actually using is much the same as a free Linux mail server distribution running on an old workstation supporting POP3 clients.

But…

Despite the considerations above, you might still be wondering how to work around those common problems which PST files are oh so convenient for solving.

Use 1: Archiving

This is a misconception, brought about largely by Outlook’s desire to continue annoying its users with AutoArchive prompts. There is no reason whatsoever that mail should be archived to each user’s local PC. Consider the actions you would take to archive files off your file server; where would you put the archived data? On your own PC? On your manager’s? On the CEO’s? You’d do none of the three, as the data is unlikely to be backed up, and you cannot assure its security. Instead, you’d find some space on a share on your archive server – or create a LUN using spare space on one of your SANs.

The same applies to email. Off-loading email from your Exchange Server to user PCs carries significant risks. Instead, you should use an enterprise mail archiving solution. The product I usually recommend is Symantec Enterprise Vault, although there are many others. The main benefits? Data is still stored centrally, under the governance of your retention policies and backup process, and end users can still view archived emails using a handy web interface (yes, a web interface – providing remote access to the archive).

Okay, but what about when disk space on my Exchange Server runs low or I hit the store size limit?

UPGRADE THE SERVER! Exchange 2007 and 2010 do not impose a hard limit on the mail stores, and you shouldn’t be trying to run a mail server with little disk space or database space remaining. Archiving to PST is a quick solution, but one which won’t work in the long run.

With the soon-to-be-released Exchange 2010, significant changes have been made, one of which is the addition of archiving support. Each user can be given a separate ‘archive’ mailbox; it is attached to their main mailbox, but allows data to be archived for long-term storage. The settings governing when and how mail is moved to the archive are controlled by retention policies, giving the administrator greater control over retention. Again, the archive store is available remotely via Exchange 2010’s Outlook Web App.

Use 2: On the road

For users on the road, there is no need to store their mail in a PST file. Cached Exchange Mode is available in Exchange 2003/Outlook 2003 and higher, allowing users to work offline with a cached copy of their mailbox. When they reconnect to the network, the changes are seamlessly synchronised back to the server.

Use 3: Exmerge/Export-Mailbox

This is just about the only use of PST files which I can agree to — and I’ll admit, I’ve used this approach myself. If you migrate to a new mail system or rebuild your Exchange system, sometimes you cannot avoid using exmerge (or Exchange 2007’s export-mailbox management shell cmdlet) to take handy copies of the mailboxes – which can later be re-imported to the new system. For moving mailboxes between servers, you would use the Move Mailbox wizard – but for large scale rebuilds, exmerge is sometimes your friend.

Be cautious though; Exmerge uses the ANSI PST format, so you will need to meticulously plan your export and import procedure for larger mailboxes.

Use 4: Home Users

These are the people to whom the PST is most applicable. If you are connecting via Outlook to a Post Office Protocol (POP) host to download your email, that email will be stored in a PST file. The fact you don’t have an Exchange Server doesn’t change any of the points above, though; that PST is still susceptible to corruption, and if mail is deleted off the server, corruption could lead to data loss.

For this issue, you really have two solutions. The POP3 account in Outlook can be configured to leave email on the server. This acts as a backup; if your PST file becomes corrupted, the ISP still has a copy of your messages, so they can be downloaded again. To configure, open the Tools > Account Settings dialog in Outlook. Select your POP3 account, choose Properties, press More Settings, then switch to the Advanced tab. Under the Delivery section at the bottom of the window you should check the “Leave a copy of messages on the server” checkbox. If you want a backup of all your mail, don’t enable the option to remove it from the server after a certain time period.

The disadvantage to the POP3 solution becomes apparent if you move to another computer or access your mailbox via your ISP’s webmail interface. The message state information (tracking of read/unread or whether the message has been replied to or forwarded) is not transferred back to the ISP, so all the mail you thought you had read and handled will still be marked unread on the ISP’s server.

My preferred solution, and the one I use regularly, is an Internet Message Access Protocol (IMAP) account. The IMAP protocol is another mail protocol used to access email; it stands alongside POP. However, using IMAP, you replicate a client-server topology very similar to connecting to an Exchange mailbox with Outlook in Cached Exchange Mode. With IMAP, email generally remains stored in your mailbox at the ISP until you specifically delete it. Nevertheless, you can’t get away from PST files completely; they are still there when you use an IMAP account, as Outlook uses them to make a cache of the data for working with the IMAP account in offline mode. However, as the PST isn’t the only location where your data is stored, any corruption is not going to lead to data loss.

It should be noted that both the POP solution for leaving data on the server and the IMAP solution have drawbacks, as items in your Calendar, Contacts or Tasks folders will not be stored on the server. IMAP does not support special folders – such as the Calendar or Tasks – and these will not be replicated back with a POP account, so you will still be using a PST file to some extent. Unless you move entirely into the cloud (using web services for email, calendar and contacts) or purchase your own Exchange Server, you won’t easily get away from this.

Conclusion

I’ve covered a fair bit of information regarding PST files here. Hopefully, my points detailing why the use of PSTs is so impractical will now encourage you to reconsider your PST usage, archiving practices and retention policies.

With all your user mail stored safely on the Exchange Server, rather than local PCs, assistants can become delegates looking after their managers’ mailboxes; the administrator can rest assured that all data is centrally stored and backed up; and you can turn off Outlook AutoArchive, relieving end users of that annoying prompt every couple of weeks.

This article was originally published at Experts Exchange.





Why you shouldn’t put an Exchange Server in the DMZ

3 08 2009

The official Microsoft documentation for Exchange Server is contradictory on the subject of deploying an Exchange Server into your perimeter network (DMZ), and it is often interpreted as suggesting that placing an Exchange Server in this zone is a good idea.

This is a myth.

As a standard rule of managing your network, you should never place any domain-joined machine into the DMZ. Exchange 2000, 2003 and 2007 (with the exception of the Edge Transport role – see below) must all be installed on machines joined to the domain; place them into the DMZ and you break that first rule of firewalls and Active Directory.

So why is this a bad idea?

An Exchange Server needs Active Directory to function because most of its configuration information is stored in the directory service. This is the reason why it must be deployed on a domain-joined server.

If you attempt to move an Exchange Server to the DMZ, you will quickly find that Exchange will break. This is because it loses the ability to find and communicate with the Domain Controllers on the private network. In situations like this, you would have to do one of two things:

  • Deploy an additional Domain Controller into the DMZ
  • Allow the Exchange Server access to the DCs on the private network

Completing either of the above tasks requires you to open ports between the DMZ and private network. The list of ports is extensive and includes sensitive services such as DNS, LDAP and NetBIOS. I heard a fellow Exchange Server MVP state the other day while referring to this list of ports: “open these ports and your firewall rules will look like Swiss Cheese”.

The bottom line is this defeats the principle of a DMZ. A DMZ is intended as a ‘safe’ location for machines which are not joined to the domain; you might put public web servers or public nameservers there, for example. In the DMZ, they are protected from the Internet, but anyone maliciously gaining access to those servers cannot cross the firewall into your private network. By opening the Active Directory ports I describe above and by placing a domain-joined machine in this insecure zone, any hacker in control of a compromised machine in the DMZ has a much easier route to access your Active Directory environment, perhaps bringing it to its knees.

Every Exchange MVP I know considers this to be a very, very bad idea. They would not configure an Exchange Server in this way and neither would I.

Any Exchange Server you deploy should always be on the private network. Located there, you can ensure it has access to the Domain Controllers without the need to compromise network security. From the outside, you only ever need ports 25 and 443 open to allow inbound email to flow and for users to access Outlook Web Access and Exchange ActiveSync.

But what about Exchange 2000/2003 Front End Servers?

What about them? Again, it is a misconception – probably brought about by ambiguous documentation – that leads people to believe these servers are there for security reasons. They are not. Legacy Front-End Servers are designed for organisations with multiple mailbox servers. A front-end acts as a central connection point for access to OWA, OMA or ActiveSync under a single, common URL – it does not provide security.

If you are deploying a front-end server because you believe it will secure your Exchange environment, think again. Install Vamsoft ORF on a Virtual Machine or use an external spam filtering service as an alternative.

Exchange 2007 Edge Transport

With Exchange 2007, Microsoft have recognised this problem by adding the Edge Transport server role. This is the first time an Exchange Server role has been specifically designed to be located on the perimeter network; it is also the first time such a role exists for security reasons. The Edge Transport machine is designed to be in a workgroup – not a member of the domain – so it does not require sensitive ports to be opened between the DMZ and private network. It maintains its own copy of the directory data using Active Directory Application Mode (ADAM) on Server 2003 or Active Directory Lightweight Directory Services (AD LDS) on Server 2008.

I personally do not see a requirement for an Edge Transport server in an Exchange deployment, so I never deploy them. They are an unnecessary expenditure. Unlike a 2000/2003 front-end, they only process SMTP email traffic. Requests for OWA or Exchange Activesync still need to be made directly to the Client Access Servers (CAS), which are domain members and therefore still need to be located on the private network.

The minimal security advantage Edge Transport servers provide can easily be achieved directly on the Hub Transport servers – or by deploying a much cheaper Vamsoft ORF virtual server between the Internet and the Hub Transport server.

Conclusion

You should now have a better understanding of why an Exchange Server should not be deployed into the DMZ. I hope this prompts you to review your Exchange configuration and make appropriate changes to further improve your network security.
