SBS is going! A sign of the future?

27 08 2012

The recent news that Microsoft are discontinuing their all-in-one Small Business Server (SBS) offering sent a shockwave through the IT community. In a post on the SBS product group’s blog at the start of July, the news was made very clear: the end of an era is upon us. However, it is not just small businesses, or those who specialise in working with them, who should take note. This development is one of many which will define lasting change in the way we do computing, in this decade and beyond.

For those outside the Microsoft or SMB markets: SBS rapidly became a valuable product for companies with 75 users or fewer when it was introduced. The platform was an instant winner because it combined a good number of products from the Windows Server portfolio into a single server which was very easy to manage. For the first time, small businesses were able to compete in the global marketplace alongside their enterprise rivals, realising the benefits of identical technologies without breaking the bank. Many companies sought a collaboration server like Exchange to manage their communication better than they could with conventional email systems; Exchange was undoubtedly one of the most popular products to be bundled with SBS, and one which enabled many businesses to thrive and flourish.

I owe a lot to SBS — in fact, without it, I would not be writing this article. Regular contributors at Experts Exchange may be aware that I began learning about networks and servers when I built my first server at the age of 11 to run the family computers. That server was running SBS 2003. Back then, my knowledge of networking was non-existent, but over time, and with supportive guidance from my peers at Experts Exchange, I was able to learn. I now work with organisations ranging from 5 users up to several thousand, managing their infrastructure and developing solutions that help them integrate technology into their business model so they can position themselves in the market and grow. For one such company, having office staff and a snazzy computer system is unheard of in its industry, yet I have watched them develop from a single laptop in a home office into a very successful company in their own right, and they credit their SBS server with helping make that happen.

So what’s the alternative?

Well, it’s not all bad news, and no, that’s not because I’m advocating we make what we already have the de facto standard for the next umpteen years and stop innovating. The replacement product comes from the Windows Server 2012 range, which has been massively revamped and simplified. It’s called Windows Server 2012 Essentials. This supports a maximum of 25 users on a single physical server and provides many of the services expected of a local server. Unfortunately, there’s a drawback: no Exchange Server. Companies are encouraged to push email hosting off the local premises to Office 365 in Microsoft’s public cloud, and the Essentials server provides integration with Office 365 to manage and control the service. It does still support an on-premises Exchange Server, and there is the option to use Windows Server 2012 Standard, which now allows a physical host and two virtual servers under a single license. However, the licensing for that must be acquired separately. Quite understandably, this strategic change in the rules of engagement has caused much controversy.

I understand and fully appreciate the intention of cloud-based computing – moving the infrastructure away from individual companies to service providers and large datacentres, increasing reliability and managing cost for the end-user. The model makes sense on the surface and has been enjoyed by many for years – consider Hotmail, one of the first cloud-based email services, joined rapidly by Yahoo! and Gmail in the late 20th and early 21st centuries. The technology is not new. Indeed, Google, Microsoft, Apple et al. have been successfully pushing it for years. What is new and scary is the paradigm shift which is trying to drag the business sector kicking and screaming into the cloud before the technology is proven. There are many issues blocking adoption of the cloud: compliance, conflicting privacy laws between countries (where the service provider is in one and the user in another) and the lack of affordable, high-speed Internet connections in all corners of the globe. For these reasons, I am not convinced that cloud is the way forward for business (and apparently, neither is Steve Wozniak). I want to know exactly where my data is and have complete control over who is permitted to access it. I cannot afford to entrust such data to another individual. If everything is in my hands, it’s my fault and my fault alone if something goes wrong. In the cloud, by contrast, it’s impossible to know exactly what is happening to your data, nor is there any guarantee today’s data will still be there tomorrow.

There is always difficulty effecting change in just about any situation, but this becomes very prominent when the impact is global. At first, I was saddened by the loss of SBS, but at least I know there are valid options for the future. I am concerned over the direction of the IT industry and the forced shift of companies to the cloud. It is a very big worry, and many IT professionals feel their entire livelihoods have been torn from beneath their feet. However, we cannot stand still. As Paul Cunningham of exchangeserverpro.com says in his news article on this story: “This is IT. Things change. You either change with them, or you die too.” We do, however, need to be very careful how we play the hand we have been dealt, and moving leaps and bounds into the cloud is not a card I am about to play any time soon.





Why you shouldn’t use PST files

5 12 2009

They have been around for years, and thousands of Microsoft Outlook users and email administrators out there would be lost without them: Personal Storage Table (PST) files. If you’ve worked with Outlook for any length of time, the name will immediately ring a bell; if you’ve ever administered Outlook, you may already know about the problems associated with this notorious file format.

In any corporate environment – or, for that matter, any environment with an Exchange Server – the use of PST files as a permanent solution to an email administrator’s problems should be banned. Let’s find out why.

Problem 1: File Sizes and Data Security

The number one issue with the PST format prior to Outlook 2003 was that it was ANSI (American National Standards Institute)-based. The ANSI PST format has a maximum size limit of 2GB, and other limitations exist with regard to the number of items which can be stored per folder. However, a particularly problematic bug in Outlook allowed data to be written to ANSI PSTs past the 2GB limit without warning. This would result in data loss: at least of the data beyond the 2GB limit, and potentially of everything stored in the file.

To address these concerns, Outlook 2003 and higher introduce a new PST format based on Unicode instead. This format stores up to 20GB of data, but note that upgrading Outlook does not automatically upgrade any existing PST file(s). The upgrade must be completed manually, by creating a new Unicode file and transferring the data across.
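The two formats can be told apart from the file header. As a rough sketch (based on my reading of Microsoft’s open [MS-PST] file-format documentation — treat the offsets as assumptions to verify against the spec), the file starts with the magic bytes “!BDN” and carries a two-byte version field at offset 10, where 14 or 15 indicates ANSI and 23 or higher indicates Unicode:

```python
import struct

def pst_format(header: bytes) -> str:
    """Classify a PST header as ANSI or Unicode.

    Layout assumed from the [MS-PST] documentation:
    dwMagic "!BDN" at offset 0, little-endian wVer at offset 10
    (14/15 = ANSI, 23+ = Unicode).
    """
    if header[:4] != b"!BDN":
        raise ValueError("not a PST file")
    (w_ver,) = struct.unpack_from("<H", header, 10)
    if w_ver in (14, 15):
        return "ANSI"
    if w_ver >= 23:
        return "Unicode"
    return "unknown"

def classify_pst(path: str) -> str:
    # Only the first 16 bytes are needed for classification.
    with open(path, "rb") as fh:
        return pst_format(fh.read(16))
```

A quick check like this is handy when auditing a file server for stray ANSI-format files that are creeping towards the 2GB ceiling.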

Despite the improvements, PST files are still susceptible to corruption issues – which will result in lost data. These become particularly prevalent as files grow larger or as the volume of data moving through the file increases. For most users, the prospect of losing precious or business-critical emails, reminders, tasks and contacts could be cause for significant concern. It shouldn’t come as a surprise that you should make regular backups of your PST file(s), but even this is not completely safe, as a PST can sit in a partially corrupted state for weeks or months before you realise you have a problem.

Problem 2: Network Access and Backups

PST files must be stored on a local hard disk. Accessing them over a network is not supported by Microsoft. Instabilities in the network, loss of network connectivity, speed issues in reading and writing from the file server can all cause issues — particularly for sensitive PST files, which are so very easily corrupted.

This has two implications for system administration:

Firstly, backups, which are already difficult to maintain because corruption can go undetected, become even more difficult to implement. As a PST cannot be run from the network, you must configure backups on each machine individually, and you must ensure the backup does not run while Outlook is open. Backing up the Exchange Server is rather pointless, as the data is offloaded into the PST when the user logs in.

Second, your cost of administration increases significantly. Consider a typical organisation with remote workers and several sites across the country, or perhaps around the world: moving administration away from the server and towards the client undermines the principle of central administration, requiring more admin time to perform repetitive tasks on PST files. The system may quickly grow beyond your control, becoming increasingly difficult to track and maintain.

Problem 3: File Sharing and Remote Access

PST files do not natively support simultaneous access by multiple users. If you attempt to configure this, the mail file may become corrupted; you would also need to run the file over the network, so problem #2 has already been invoked.

Storing data in PST files has no benefits for remote access either. Exchange’s Outlook Web Access (OWA) (or Outlook Web App, in Exchange 2010) allows users to access their mailboxes remotely, with an interface close to that of the Outlook desktop client. Data in PST files has usually been removed from the mailbox, so it immediately becomes inaccessible to remote users.

Problem 4: Inefficient use of resources

You’ve invested in a powerful Exchange Server. It has large, redundant disk arrays and plenty of processing power and RAM; it cost you thousands in hardware and software licenses; and it adds significantly to your energy and datacentre cooling bills. If PST files are in use, that server is largely going to waste: the functionality you are actually using is essentially the same as a free Linux mail server distribution running on an old workstation and serving POP3 clients.

But…

Despite the considerations above, you might still be wondering how to work around those common problems which PST files are oh so convenient for solving.

Use 1: Archiving

This is a misconception, brought about largely by Outlook’s desire to keep annoying its users with AutoArchive prompts. There is no reason whatsoever that mail should be archived to each user’s local PC. Consider how you would archive files off your file server: where would you put the archived data? On your own PC? On your manager’s? On the CEO’s? None of the three: the data is unlikely to be backed up, and you cannot assure its security. Instead, you’d find space on a share on your archive server, or create a LUN using spare space on one of your SANs.

The same applies to email. Offloading email from your Exchange Server to user PCs carries significant risks. Instead, use an enterprise mail archiving solution; the product I usually recommend is Symantec Enterprise Vault, although there are many others. The main benefits? Data is still stored centrally, covered by your retention policies and backup process, and end users can still view archived emails through a handy web interface (yes, a web interface, providing remote access to the archive).

Okay, but what about when disk space on my Exchange Server runs low or I hit the store size limit?

UPGRADE THE SERVER! Exchange 2007 and 2010 do not impose a hard limit on the mail stores, and you shouldn’t be trying to run a mail server with little disk space or database space remaining. Archiving to PST is a quick solution, but one which won’t work in the long run.

With the soon-to-be-released Exchange 2010, significant changes have been made, one of which is the addition of archiving support. Each user can be given a separate ‘archive’ mailbox; it is attached to their main mailbox, but allows data to be archived for long-term storage. The settings governing when and how mail is moved to the archive are controlled by retention policies, giving the administrator greater control over retention. Again, the archive store is available remotely via Exchange 2010’s Outlook Web App.

Use 2: On the road

For users on the road, there is no need to store their mail in a PST file. Cached Exchange Mode is available in Exchange 2003/Outlook 2003 and higher, allowing users to work offline with a cached copy of their mailbox. When they reconnect to the network, the changes are seamlessly synchronised back to the server.

Use 3: Exmerge/Export-Mailbox

This is just about the only use of PST files which I can agree to — and I’ll admit, I’ve used this approach myself. If you migrate to a new mail system or rebuild your Exchange system, sometimes you cannot avoid using exmerge (or Exchange 2007’s export-mailbox management shell cmdlet) to take handy copies of the mailboxes – which can later be re-imported to the new system. For moving mailboxes between servers, you would use the Move Mailbox wizard – but for large scale rebuilds, exmerge is sometimes your friend.

Be cautious though; Exmerge uses the ANSI PST format, so you will need to meticulously plan your export and import procedure for larger mailboxes.

Use 4: Home Users

These are the people to whom the PST is most applicable. If you connect via Outlook to a Post Office Protocol (POP) host to download your email, that email will be stored in a PST file. The fact you don’t have an Exchange Server doesn’t change any of the points above: that PST is still susceptible to corruption, and if mail is deleted off the server, corruption could lead to data loss.

For this issue, you really have two solutions. The POP3 account in Outlook can be configured to leave email on the server. This acts as a backup; if your PST file becomes corrupted, the ISP still has a copy of your messages, so they can be downloaded again. To configure, open the Tools > Account Settings dialog in Outlook. Select your POP3 account, choose Properties, press More Settings, then switch to the Advanced tab. Under the Delivery section at the bottom of the window you should check the “Leave a copy of messages on the server” checkbox. If you want a backup of all your mail, don’t enable the option to remove it from the server after a certain time period.
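At the protocol level, “leave a copy of messages on the server” simply means the mail client downloads with RETR but never issues the DELE command. The sketch below illustrates this with Python’s standard poplib module; the function takes any connected poplib.POP3-compatible object, and the commented-out connection details are placeholders, not a real server:

```python
import poplib

def fetch_leaving_copies(pop) -> list[bytes]:
    """Download every message without ever issuing DELE,
    mirroring Outlook's 'Leave a copy of messages on the
    server' option. `pop` is a connected poplib.POP3 (or
    compatible) object."""
    count, _mailbox_size = pop.stat()
    messages = []
    for i in range(1, count + 1):
        _resp, lines, _octets = pop.retr(i)  # RETR downloads a copy
        messages.append(b"\r\n".join(lines))
        # Deliberately no pop.dele(i): the server keeps its copy,
        # so a corrupted local PST does not mean lost mail.
    pop.quit()
    return messages

# Hypothetical usage against a real mailbox:
#   pop = poplib.POP3_SSL("pop.example.com")
#   pop.user("username"); pop.pass_("password")
#   messages = fetch_leaving_copies(pop)
```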

The disadvantage to the POP3 solution becomes apparent if you move to another computer or access your mailbox via your ISP’s webmail interface. The message state information (tracking of read/unread or whether the message has been replied to or forwarded) is not transferred back to the ISP, so all the mail you thought you had read and handled will still be marked unread on the ISP’s server.

My preferred solution, and the one I use regularly, is an Internet Message Access Protocol (IMAP) account. IMAP is another mail access protocol, standing alongside POP, but with IMAP you replicate a client-server topology very similar to connecting to an Exchange mailbox with Outlook in Cached Exchange Mode. Email generally remains stored in your mailbox at the ISP until you specifically delete it. Nevertheless, you can’t get away from PST files completely; they are still there when you use an IMAP account, as Outlook uses one to cache the data for working offline. However, as the PST isn’t the only place your data is stored, corruption will not lead to data loss.
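The client-server character of IMAP is easy to see with Python’s standard imaplib module: the client selects a server-side mailbox and fetches copies of messages, and nothing is removed from the server unless you explicitly delete it. This is a hedged sketch, written to work with any imaplib.IMAP4-compatible connection object; selecting the mailbox read-only even prevents read/unread flags from changing:

```python
import imaplib

def read_inbox(imap) -> list[bytes]:
    """Fetch message headers over IMAP. The messages stay in
    the server-side mailbox, so the local cache PST is
    disposable. `imap` is a connected imaplib.IMAP4 (or
    compatible) object."""
    imap.select("INBOX", readonly=True)   # read-only: no flag changes
    _typ, data = imap.search(None, "ALL")
    headers = []
    for num in data[0].split():
        # BODY.PEEK avoids marking the message as read.
        _typ, msg = imap.fetch(num, "(BODY.PEEK[HEADER])")
        headers.append(msg[0][1])
    imap.logout()
    return headers

# Hypothetical usage (server name and credentials are placeholders):
#   imap = imaplib.IMAP4_SSL("imap.example.com")
#   imap.login("username", "password")
#   headers = read_inbox(imap)
```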

It should be noted that both the POP solution for leaving data on the server and the IMAP solution have drawbacks: items in your Calendar, Contacts or Tasks folders will not be stored on the server. IMAP does not support special folders such as the Calendar or Tasks, and these will not be replicated back with a POP account either, so you will still be using a PST file to some extent. Unless you move entirely into the cloud (using web services for email, calendar and contacts) or purchase your own Exchange Server, you won’t easily get away from this.

Conclusion

I’ve covered a fair bit of information regarding PST files here. Hopefully, my points detailing why the use of PSTs is so impractical will now encourage you to reconsider your PST usage, archiving practices and retention policies.

With all user mail stored safely on the Exchange Server rather than on local PCs, assistants can become delegates for their managers and look after their mailboxes; the administrator can rest assured that all data is centrally stored and backed up; and you can turn off Outlook AutoArchive, relieving end users of that annoying prompt every couple of weeks.

This article was originally published at Experts Exchange.





Why you shouldn’t put an Exchange Server in the DMZ

3 08 2009

The official Microsoft documentation for Exchange Server is contradictory on the subject of deploying an Exchange Server into your perimeter network (DMZ). It is often interpreted as suggesting that placing an Exchange Server in this zone is a good idea.

This is a myth.

As a standard rule of managing your network, you should never place a domain-joined machine in the DMZ. Exchange 2000, 2003 and 2007 (with the exception of the Edge Transport role – see below) must all be installed on machines joined to the domain; place them in the DMZ and you break that first rule of firewalls and Active Directory.

So why is this a bad idea?

An Exchange Server needs Active Directory to function because most of its configuration information is stored in the directory service. This is the reason why it must be deployed on a domain-joined server.

If you attempt to move an Exchange Server to the DMZ, you will quickly find that Exchange will break. This is because it loses the ability to find and communicate with the Domain Controllers on the private network. In situations like this, you would have to do one of two things:

  • Deploy an additional Domain Controller into the DMZ
  • Allow the Exchange Server access to the DCs on the private network

Completing either of the above tasks requires you to open ports between the DMZ and private network. The list of ports is extensive and includes sensitive services such as DNS, LDAP and NetBIOS. I heard a fellow Exchange Server MVP state the other day while referring to this list of ports: “open these ports and your firewall rules will look like Swiss Cheese”.

The bottom line is this defeats the principle of a DMZ. A DMZ is intended as a ‘safe’ location for machines which are not joined to the domain; you might put public web servers or public nameservers there, for example. In the DMZ, they are protected from the Internet, but anyone maliciously gaining access to those servers cannot cross the firewall into your private network. By opening the Active Directory ports I describe above and by placing a domain-joined machine in this insecure zone, any hacker in control of a compromised machine in the DMZ has a much easier route to access your Active Directory environment, perhaps bringing it to its knees.

Every Exchange MVP I know considers this to be a very, very bad idea. They would not configure an Exchange Server in this way and neither would I.

Any Exchange Server you deploy should always be on the private network. Located there, you can ensure it has access to the Domain Controllers without the need to compromise network security. From the outside, you only ever need ports 25 and 443 open to allow internal email to flow and for users to access Outlook Web Access and Exchange ActiveSync.

But what about Exchange 2000/2003 Front End Servers?

What about them? Again, it is a misconception – probably brought about by ambiguous documentation – that these servers exist for security reasons. They do not. Legacy front-end servers are designed for organisations with multiple mailbox servers. A front-end acts as a central connection point for access to OWA, OMA or ActiveSync under a single, common URL – it does not provide security.

If you are deploying a front-end server because you believe it will secure your Exchange environment, think again. Install Vamsoft ORF on a Virtual Machine or use an external spam filtering service as an alternative.

Exchange 2007 Edge Transport

With Exchange 2007, Microsoft have recognised this problem by adding the Edge Transport server role. This is the first time an Exchange Server role has been specifically designed to sit on the perimeter network, and the first time such a role exists for security reasons. The Edge Transport machine is designed to be in a workgroup – not a member of the domain – so it does not require sensitive ports to be opened between the DMZ and the private network. Instead, it maintains its own copy of the directory data it needs, using Active Directory Application Mode (ADAM) on Server 2003 or Active Directory Lightweight Directory Services (AD LDS) on Server 2008.

I personally do not see a requirement for an Edge Transport server in an Exchange deployment, so I never deploy them. They are an unnecessary expenditure. Unlike a 2000/2003 front-end, they only process SMTP email traffic. Requests for OWA or Exchange ActiveSync still need to be made directly to the Client Access Servers (CAS), which are domain members and therefore still need to be located on the private network.

The minimal security advantage Edge Transport servers provide can easily be achieved directly on the Hub Transport servers – or by deploying a much cheaper Vamsoft ORF virtual server between the Internet and the Hub Transport server.

Conclusion

You should now have a better understanding of why an Exchange Server should not be deployed into the DMZ. I hope this prompts you to review your Exchange configuration and make appropriate changes to further improve your network security.





Microsoft Exchange: Stores fail to start with 0x8004010f error

21 05 2009

I was recently involved in a migration from a legacy Exchange 2000 organization to a new Exchange 2007 deployment. All was going well until the Information Store service on the Exchange 2000 server was restarted and then failed to start with the following error:

Unable to initialize the Microsoft Exchange Information Store service. Error 0x8004010f
Event ID: 5000 Source: MSExchangeIS

On attempting to start the Information Store manually, I was greeted with a not-particularly-verbose error stating the process had failed with service-specific error code ‘0’.

The issue in this case was caused by the X400 address, required by Exchange 2000/2003 for internal communications, having been removed from the Default Recipient Policy. Although legacy Exchange organizations require this address, Exchange 2007 deployments without any legacy Exchange Servers will function without it, and it is possible some aspect of the new server’s configuration removed the entry.

Resolution

To restore the address, open Exchange System Manager on a legacy Exchange Server and edit the Default Recipient Policy. If no X400 address is present, add a new X400 address into the Recipient Policy using the appropriate information. Since removing an entry from a Recipient Policy does not remove the entry from existing mailboxes, I retrieved the required settings from an existing mailbox using the management tools on the new Exchange 2007 Server.

Once the address is added, or if it was present already but unchecked, you need to enable the entry. Attempting to check or uncheck the entry in ESM will more than likely result in an error:

X400 address cannot be disabled using Exchange System Manager

This error is to be expected, since Exchange 2000/2003 requires the X400 address and will therefore prevent any attempt to remove it as a safety precaution. To enable the address, you need to perform a low-level edit using ADSIEdit on a Domain Controller.

Usual warnings apply – ADSIEdit can make permanent and potentially destructive changes to Active Directory. Use the tool at your own risk and with proper backups in place.

In ADSIEdit at a Domain Controller, expand the Configuration Naming Context and drill down through CN=Services > CN=Microsoft Exchange > CN=<Your Exchange Organization Name> > CN=Recipient Policies. Right click the Default Policy and choose Properties. You will notice the X400 address is listed within the ‘disabledGatewayProxy’ attribute. To enable:

  1. Edit the disabledGatewayProxy attribute and remove the X400 address.
  2. After pressing the ‘Remove’ button, the X400 configuration generated by Exchange is placed into the text area. Copy this data.
  3. Close the attribute so it is now stored blank. Edit the ‘gatewayProxy’ attribute, which is the location for enabled entries in the policy, and add the X400 contents from your clipboard.
  4. Store your changes, then wait for or manually force Active Directory replication prior to restarting the Exchange Services.
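Stripped of the ADSIEdit mechanics, the four steps above amount to moving one multi-valued attribute entry into another. The toy model below (plain Python, no directory access; attribute names match the real schema but the data is illustrative) makes the logic explicit, which can be a useful sanity check before touching Active Directory:

```python
def enable_x400(attrs: dict[str, list[str]]) -> dict[str, list[str]]:
    """Model of the ADSIEdit edit: move any X400 entry from
    disabledGatewayProxy (disabled addresses) into gatewayProxy
    (enabled addresses) on the recipient policy object."""
    still_disabled = []
    enabled = list(attrs.get("gatewayProxy", []))
    for entry in attrs.get("disabledGatewayProxy", []):
        if entry.upper().startswith("X400:"):
            enabled.append(entry)        # re-enable the X400 address
        else:
            still_disabled.append(entry) # leave other entries disabled
    return {**attrs,
            "disabledGatewayProxy": still_disabled,
            "gatewayProxy": enabled}
```

The real edit, of course, happens in ADSIEdit against the live directory; this just documents the intended end state of the two attributes.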

Voilà! Your Information Store service should now start and you can mount the stores. If the X400 issue was not the culprit, the cause is more than likely permissions within Active Directory. Verify the Exchange 2000/2003 computer is a member of the legacy Exchange Domain Servers group, and that that group is in turn a member of the Exchange Enterprise Servers security group. You should then use ADSIEdit to check and reset certain permissions; the changes required are detailed over at TechNet.

Special thanks are due to kieran_b, who offered assistance during his evening and at a moment’s notice to help resolve this issue.