Work and Play: NAS-style

The last time I wrote about the network-attached storage (NAS) appliance that the good folks at Synology had sent my way, I spent a lot of time talking about how amazed I was at all the things that NAS appliances could do these days. They truly have come a very long way in the last decade or so.

Once I got done gushing about the DiskStation DS220+ that I had sitting next to my primary work area, I realized that I should probably do a post about it that amounted to more than a “fanboy rant.”

This is an attempt at “that post” and contains some relevant specifics on the DS220+’s capabilities as well as some summary words about my roughly five or six months of use.

First Up: Business

As the title of this post alluded to, I’ve found uses for the NAS that would be considered “work/business,” others that would be considered “play/entertainment,” and some that sit in-between. I’m going to start by outlining the way I’ve been using it in my work … or more accurately, “for non-play purposes.”

But first: one of the things I found amazing about the NAS (though it really isn’t a new concept) is that Synology maintains an application site – they call it the “Package Center” – that is available directly from within the NAS web interface itself:

Much like the application marketplaces that have become commonplace for mobile phones, or the Microsoft Store which is available by default to Windows 10 installations, the Package Center makes it drop-dead-simple to add applications and capabilities to a Synology NAS appliance. The first time I perused the contents of the Package Center, I kind of felt like a kid in a candy store.

With all the available applications, I had a hard time staying focused on the primary package I wanted to evaluate: Active Backup for Microsoft 365.

Backup and restore, as well as Disaster Recovery (DR) in general, are concepts I have some history and experience with. What I don’t have a ton of experience with is the way that companies are handling their DR and BCP (business continuity planning) for cloud-centric services themselves.

What little experience I do have generally leads me to categorize people into two different camps:

  • Those who rely upon their cloud service provider for DR. As a generalization, there are plenty of folks that rely upon their cloud service provider for DR and data protection. Sometimes folks in this group wholeheartedly believe, right or wrong, that their cloud service’s DR protection and support are robust. Oftentimes, though, the choice is made by default, without solid information, or simply because building and implementing one’s own DR plan is not an inexpensive endeavor. Whatever the reason(s), folks in this group are attached at the hip to whatever their cloud service provider has for DR and BCP – for better or for worse.
  • Those who don’t trust the cloud for DR. There are numerous reasons why someone may choose to augment a cloud service provider’s DR approach with something supplemental. Maybe they simply don’t trust their provider. Perhaps the provider has a solid DR approach, but the RTO and RPO values quoted by the provider don’t line up with the customer’s specific requirements. It may also be that the customer simply doesn’t want to put all of their DR eggs in one basket and wants options they control.
In reality, I recognize that this type of down-the-middle split isn’t entirely accurate. People tend to fall somewhere along the spectrum created by both extremes.

Microsoft 365 Data Protection

On the specific topic of Microsoft 365 data protection, I tend to sit solidly in the middle of the two extremes I just described. I know that Microsoft takes steps to protect 365 data, but good luck finding a complete description or metrics around the measures they take. If I had to recover some data, I’m relatively (but not entirely) confident I could open a service ticket, make the request, and eventually get the data back in some form.

The problem with this approach is that it’s filled with assumptions and not a lot of objective data. I suspect part of the reason for this is that actual protection windows and numbers are always evolving, but I just don’t know.

You can’t throw a stick on the internet without hitting a seemingly endless supply of vendors offering to fill the gaps in Microsoft 365 data protection. These tools are designed to afford customers a degree of control over their data protection. And speaking as someone who has talked about DR and BCP for many years now, I can say that redundancy in data protection is never a bad thing.

Introducing the NAS Solution

And that brings me back to Synology’s Active Backup for Microsoft 365 package.

In all honesty, I wasn’t actually looking for supplemental Microsoft 365 data protection at the time. Knowing the price tag on some of the services and packages that are sold to address protection needs, I couldn’t justify (as a “home user”) the cost.

I was pleasantly surprised to learn that the Synology solution/package was “free” – or rather, if you owned one of Synology’s NAS devices, you had free access to download and use the package on your NAS.

The price was right, so I decided to install the package on my DS220+ and take it for a spin.

 

Kicking The Tires

First impressions and initial experiences mean a lot to me. For the brief period of time when I was a product manager, I knew that a bad first experience could shape someone’s entire view of a product.

I am therefore very happy to say that the Synology backup application was a breeze to get set up – something I initially felt might not be the case. The reason for my initial hesitancy was the fact that applications and products that work with Microsoft 365 need to be registered as trusted applications within the M365 tenant they’re targeting. Most of the products I’ve worked with that need to be set up in this capacity involve a fair amount of manual legwork: certificate preparation, finding and granting permissions within a created app registration, etc.

Not Synology’s backup package. From the moment you press the “Create” button and indicate that you want to establish a new backup of Microsoft 365 data, you’re provided with solid guidance and hand-holding throughout the entire setup and app registration process. Of all of the apps I’ve registered in Azure, Synology’s process and approach has been the best – hands-down. It took no more than five minutes to establish a recurring backup against a tenant of mine.

I’ve included a series of screenshots (below) that walk through the backup setup process.

What Goes In, SHOULD Come Out ...

When I would regularly speak on data protection and DR topics, I had a saying that I would frequently share: “Backup is science, but Restore is an art.” A decade or more ago, those tasked with backing up server-resident data often took a “set it and forget it” approach to data backups. And when it came time to restore some piece of data from those backups, many of the folks who took such an approach would discover (to their horror) that their backups had been silently failing for weeks or months.

Moral of the story (and a 100-level lesson in DR): if you establish backups, you need to practice your restore operations until you’re convinced they will work when you need them.

Synology approaches restoration in a very straightforward fashion that works very well (at least in my use case). There is a separate web portal from which restores and exports (from backup sets) are conducted.

And in case you’re wondering: yes, this means that you can grant some or all of your organization (or your family, if you’re like me) self-service restore capabilities. Backup and restore are handled separately from one another.

As the series of screenshots below illustrates, there are five slightly different restore presentations for each of the five areas backed up by the Synology package: (OneDrive) Files, Email, SharePoint Sites, Contacts, and Calendars. Restores can be performed from any backup set and offer the ability to select the specific files/items to recover. The ability to do an in-place restore or an export (which is downloaded by the browser) is also available for all items being recovered. Pretty handy.

Will It Work For You?

I’ve got to fall back on the SharePoint consultant’s standard answer: it depends.

I see something like this working exceptionally well for small-to-mid-sized organizations that have smaller budgets and already overburdened IT staff. Setting up automated backups is a snap, and enabling users to get their data back without a service ticket and/or IT becoming the bottleneck is a tremendous load off of support personnel.

My crystal ball stops working when we’re talking about larger companies and enterprise scale. All sorts of other factors come into play with organizations in this category. A NAS, regardless of capabilities, is still “just” a NAS at the end of the day.

My DS220+ has two 2TB drives in it. I/O to the device is snappy, but I’m only one user. Enterprise-scale performance isn’t something I’m really equipped to evaluate.

Then there are the questions of identity and Active Directory implementation. I’ve got a very basic AD implementation here at my house, but larger organizations typically have alternate identity stores, enforced group policy objects (GPOs), and all sorts of other complexities that tend to produce a lot of “what if” questions.

Larger organizations are also typically interested in advanced features, like integration with existing enterprise backup systems, different backup modes (differential/incremental/etc.), deduplication, and other similar optimizations. The Synology package, while complete in terms of its general feature set, doesn’t necessarily possess all the levers, dials, and knobs an enterprise might want or need.

So, I happily stand by my “solid for small-to-mid-sized companies” outlook … and I’ll leave it there. For no additional cost, Synology’s Active Backup for Microsoft 365 is a great value in my book, and I’ve implemented it for three tenants under my control. 

Rounding Things Out: Entertainment

I did mention some “play” along with the work in this post’s title – not something that everyone thinks about when envisioning a network storage appliance. Or rather, I should say that it’s not something I had considered very much.

My conversations with the Synology folks and trips through the Package Center convinced me that there were quite a few different ways to have fun with a NAS. There are two packages I installed on my NAS to enable a little fun.

Package Number One: Plex Server

Admittedly, this is one capability I knew existed prior to getting my DS220+. I’ve been an avid Plex user and advocate for quite a few years now. When I first got on the Plex train in 2013, it represented more potential than actual product.

Nowadays (after years of maturity and expanding use), Plex is a solid media server for hosting movies, music, TV, and other media. It has become our family’s digital video recorder (DVR), our Friday night movie host, and a great way to share media with friends.

I’ve hosted a Plex Server (self-hosted virtual machine) for years, and I have several friends who have done the same. At least a few of my friends are hosting from NAS devices, so I’ve always had some interest in seeing how Plex would perform on a NAS device versus my VM.

As with everything else I’ve tried with my DS220+, it’s a piece of cake to actually get a Plex Server up-and-running. Install the Plex package, and the NAS largely takes care of the rest. The server is accessible through a browser, a Plex client, or directly from the NAS web console.

I’ve tested a bit, but I haven’t decommissioned the virtual machine (VM) that is my primary Plex Server – and I probably won’t. A lot of people connect to my Plex Server, and that server has had multiple transcodes going while serving up movies to multiple concurrent users – tasks that are CPU, I/O, and memory intensive. So while the NAS does a decent job in my limited testing here at the house, I don’t have data that convinces me that I’d continue to see acceptable performance with everyone accessing it at once.

One thing that’s worth mentioning: if you’re familiar with Plex, you know that they have a pretty aggressive release schedule. I’ve seen new releases drop on a weekly basis at times, so it feels like I’m always updating my Plex VM.

What about the NAS package and updates? Well, the NAS is just as easy to update. Updated packages don’t appear in the Package Center with the same frequency as the new Plex Server releases, and you won’t get the same one-click server update support (a feature that never worked for me since I run Plex Server non-interactively in a VM), but you do get a link to download a new package from the NAS’s update notification:

The “Download Now” button initiates the download of an .SPK file – a Synology/NAS package file. The package file then needs to be uploaded from within the Package Center using the “Manual Install” button:

And that’s it! As with most other NAS tasks, I would be hard-pressed to make the update process any easier.

Package Number Two: Docker

If you read the first post I wrote back in February as a result of getting the DS220+, you might recall me mentioning Docker as another of the packages I was really looking forward to taking for a spin.

The concept of containerized applications has been around for a while now, and it represents an attractive way to stand up application functionality without an administrator or installer needing to understand all of the ins and outs of a particular application stack, its prerequisites and dependencies, etc. All that’s needed is a container image and a host.

So, to put it another way: there are literally millions of Docker container images that you can download and get running in Docker with very little time invested. No knowledge of how to install, configure, or set up the application or service is required on your part.

Let's Go Digging

One container I had my eye on from the get-go was itzg’s Minecraft Server container. itzg is the online handle used by a gentleman named Geoff Bourne from Texas, and he has done all of the work of preparing a Minecraft server container that is as close to plug-and-play as containers come.

Minecraft (for those of you without children) is an immensely popular game available on many platforms and beloved by kids and parents everywhere. Minecraft has a very deep crafting system and focuses on building and construction rather than on “blowing things up” (although you can do that if you truly want to) as so many other games do.

My kids and I have played Minecraft together for years, and I’ve run various Minecraft servers in that time that friends have joined us in play. It isn’t terribly difficult to establish and expose a Minecraft server, but it does take a little time – if you do it “manually.”

I decided to take Docker for a run with itzg’s Minecraft server container, and we were up-and-running in no time. The NAS Docker package has a wonderful web-based interface, so there’s no need to drop down to a command line – something I appreciate (hey, I love my GUIs). You can easily make configuration changes (like swapping the TCP port that responds to game requests), move an existing game’s files onto/off of the NAS, and more.
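For those who do prefer a command line, here’s roughly what the NAS UI is doing on your behalf, expressed as a Docker CLI sketch (the container name, data path, and port mapping below are placeholder values rather than my actual configuration):

[sourcecode language="powershell"]# Run itzg's Minecraft server container: accept the Minecraft EULA,
# persist world data outside the container, and expose the default game port.
docker run -d --name minecraft `
    -e EULA=TRUE `
    -p 25565:25565 `
    -v /volume1/docker/minecraft:/data `
    itzg/minecraft-server[/sourcecode]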

I actually decided to move our active Minecraft “world” (in the form of the server data files) onto the NAS, and we ran the game from the NAS for about two months. Although we had some unexpected server stops, the NAS performed admirably with multiple concurrent players. I suspect the stops were caused by updates of some form taking place rather than an actual problem.

The NAS-based Docker server performed admirably for everything except Elytra flight. In all fairness, though, I haven’t been on a server of any kind yet where Elytra flight works in a way I’d describe as “well,” largely because of the I/O demands associated with loading/unloading sections of the world while flying around.

Conclusion

After a number of months of running with a Synology NAS on my network, I can’t help but say again that I am seriously impressed by what it can do and how it simplifies a number of tasks.

I began the process of server consolidation years ago, and I’ve been trying to move some tasks and operations out to the cloud as it becomes feasible to do so. Where I once wouldn’t have given a second thought to adding another Windows server to my infrastructure, I’m now looking at things differently. For anything a NAS can do more easily (which covers the majority of what I’ve tried), I see myself trying the NAS first.

I once had an abundance of free time on my hands. But that was 20 or 30 years ago. Nowadays, I’m in the business of simplifying and streamlining as much as I can. And I can’t think of a simpler approach for many infrastructure tasks and needs than using a NAS.


Kicking-Off 2012: SharePoint Style

My SharePoint community activities are off to a roaring start in 2012. In this post, I’ll be recapping a couple of events from the end of 2011, as well as covering new activities taking place during the first couple of months of 2012.

I don’t know how 2011 ended for most of you, but the year closed without much of a bang for me. I’m not complaining about that; the general slow-down gave me an opportunity to get caught up on a few things, and it was nice to spend some quality time with my friends and family.

While 2011 went out relatively quietly, 2012 seems to have arrived with a vengeance. In fact, I was doing some joking on Twitter with Brian Jackett and Rob Collie shortly after the start of the year about #NYN, or “New Year’s Nitrous.” It’s been nothing but pedal-to-the-metal and then some since the start of the year, and there’s absolutely no sign of it letting up anytime soon. I like staying busy, but in some ways I’m wondering whether or not there will be enough time to fit everything in. One day at a time …

Here’s a recap of some stuff from the tail end of 2011, as well as what I’ve got going on for the first couple of months in 2012. After February, things actually get even crazier … but I’ll save events beyond February for a later post.

SPTV

During the latter part of 2011, I had a conversation with Michael Hiles and Jon Breyfogle of DSC Consulting, a technical consulting and media services company based here in Cincinnati, Ohio. Michael and Jon had an idea: they wanted to develop a high-quality, high-production-value television program that centered on SharePoint and the larger SharePoint ecosystem/community. The initial idea was that the show would feature an interview segment, coverage of community events, SharePoint news, and some other stuff thrown in.

It was all very preliminary stuff when they initially shared the idea with me, but I told them that I thought they might be on to something. The idea of a professional show that centered on SharePoint wasn’t something that was being done, and I was really curious to see how they would do it if they elected to move forward.

Just before Christmas, Jon contacted me to let me know that they were indeed moving forward with the idea … and he asked if I’d be the show’s first SharePoint guest. I told him I’d love to help out, and so the bulk of the pilot episode was shot at the Village Tavern in Montgomery one afternoon with host Mark Tiderman and co-host Craig Pereira. Mark and I shot some pool, discussed disaster recovery, and just talked SharePoint for a fair bit. It was really a lot of fun.

The pilot isn’t yet available (publicly), but a teaser for the show is available on the SPTV web site. All in all, I think the DSC folks have done a tremendous job creating a quality, professional program. Check out the SPTV site for a taste of what’s to come!

SharePoint Saturday Columbus Kick-Off

Around the time of the SPTV shooting, the planning committee for SharePoint Saturday Columbus (Brian Jackett, Jennifer Mason, Nicola Young, and I) had a checkpoint conversation to figure out what, if anything, we were going to do about SharePoint Saturday Columbus in 2012. Were we going to try to do it again? If so, were we going to change anything? What was our plan?

Everything with SPSColumbus in 2012 is still very preliminary, of course, but I can tell you that we are looking forward to having the event once again! We expect that we’ll attempt to hold the event during roughly the same part of the year as we’ve had it in the past (i.e., late summer). As we start to nail things down and come up with concrete plans, I’ll share those. Until then, keep your eyes on the SharePoint Saturday site and the SPSColumbus account on Twitter!

SharePointCincy

Those of us who reside in and around Cincinnati, Ohio, are very fortunate when it comes to SharePoint events and opportunities. In the past we’ve had SharePoint Saturday Indianapolis just to the west of us, SharePoint Saturday Columbus to the northeast, and last year we had our first ever SharePoint Saturday Cincinnati (which was a huge success!). On top of that, last year was the first ever SharePointCincy event.

SharePointCincy was similar in some ways to a SharePoint Saturday, but it was different in others. It was a day full of SharePoint sessions, but we also had Fred Studer (the General Manager for the Information Worker product group at Microsoft) come out and speak. Kroger, a local company whose SharePoint implementation I’m very familiar with, also shared their experience with SharePoint. Rather than go into too much detail, though, I encourage you to check out the SharePointCincy site yourself to see what it was all about.

Of course, the whole reason I’m mentioning SharePointCincy is that it’s coming again in March of this year! Last year’s success (the event was attended by hundreds) pretty much guaranteed that the event would happen again.

I’m part of a planning team that includes Geoff Smith, Steve Caravajal of Microsoft, Mike Smith from MAX Technical Training, and the infamous Shane Young of SharePoint911 (which, in case you didn’t know it, is based here in Cincinnati). Four of the five of us met last Friday for a kick-off meeting and to discuss how the event might go this year. It was a good breakfast and a productive meeting. I don’t have much more to share at this point (other than the fact that, “yes, it’s happening”), but I will share information as it becomes available. Stay tuned!

Secrets of SharePoint Webcast

It’s been a few months since my last webcast on SharePoint caching, so my co-workers at Idera approached me about doing another webcast. I guess I was due.

This Wednesday, January 18th, I’ll be delivering a Secrets of SharePoint webcast titled “The Essentials of SharePoint Disaster Recovery.” Here’s the abstract:

“Are my nightly SQL Server backups good enough?” “Do I need an off-site disaster recovery facility?” “How do I even start the process of disaster recovery planning?” These are just a few of the more common questions that arise when the topic of SharePoint disaster recovery comes up. As with most things SharePoint, the real answer to each question is oftentimes “it depends…”

In this business and process-centric session, we will be taking a look at the topic of SharePoint disaster recovery from multiple perspectives: business continuity planner, technical architect, platform owner, and others. Critical concepts and terms will be explained and defined, and an effective process for analyzing and formulating a disaster recovery plan will be discussed. We’ll also highlight some common mistakes that take place when working to build a disaster recovery strategy and how you can avoid them. By the end of this session, you will be armed with the knowledge needed to plan or review a disaster recovery strategy for your SharePoint environment.

For those of you who have heard me speak and/or attended my webcasts in the past, you’ll probably find this session to be a bit different from ones you’ve seen or heard. The main reason is that the content is primarily business-centric rather than nuts-and-bolts admin content.

That doesn’t mean that SharePoint administrators shouldn’t attend, though; on the contrary, the webcast includes a number of very important messages for admins (e.g., why DR must be driven from the business angle rather than the technical/admin angle) that could really help them in their jobs. The session expands the scope of the DR discussion, though, to include the business aspects that are so tremendously important during the DR planning process.

If what I’ve shared sounds interesting, please sign up! The webcast is free, and I’ll be doing Q&A after the session.

SharePoint Saturday Austin

This upcoming weekend, I’ll be heading down to Austin, Texas, for the first SharePoint Saturday Austin event! The event is taking place on January 21st, and it is being coordinated by Jim Bob Howard (of Juniper Strategy) and Matthew Lathrop (of Rackspace). Boy oh boy – do they have an amazing line-up of speakers and contributors. It’s quite impressive; check out the site to see what I mean.

The guys are giving me the opportunity to present “The Essentials of SharePoint Disaster Recovery” session, and I’m looking forward to it. I’m also looking forward to catching up with many of my friends … and some of my Idera co-workers (who will be coming in from Houston, Texas).

If you’re in the Austin area and looking for something to do this upcoming Saturday, come to the event. It’s free, and it’s a great chance to take in some phenomenal sessions, win some prizes, and be a part of the larger SharePoint community!

SharePoint Pro Demo Booth Session

On Monday, February 20th at 12pm EST, I’m going to be doing a “demo booth” session through SharePoint Pro Magazine. The demo booth is titled “Backup Basics: SharePoint’s Backup and Restore Capabilities and Beyond.” Here’s the description for the demo booth:

SharePoint ships with a number of tools and capabilities that are geared toward protecting content and configuration. These tools provide basic coverage for your SharePoint environment and the content it contains, but they can quickly become cumbersome in real world scenarios. In this session, we will look at SharePoint’s backup and restore capabilities, discuss how they work, and identify where they fall short in common usage scenarios. We will also highlight how Idera’s SharePoint backup solution picks up where the SharePoint platform tools leave off in order to provide complete protection that is cost-effective and easy to use.

The “demo booth” concept is something new for me; it’s part “platform education” (which is where I normally spend the majority of my time and energy) and part “product education” – in this case, education about Idera’s SharePoint backup product. Being both the product manager for Idera SharePoint backup and a co-author for the SharePoint 2010 Disaster Recovery Guide leaves me in something of a unique position to talk about SharePoint’s built-in backup/restore capabilities, where gaps exist, and how Idera SharePoint backup can pick up where the SharePoint platform tools leave off.

If you’re interested in learning more about Idera’s SharePoint backup product and/or how far you can reasonably push SharePoint’s built-in capabilities, check out the demo booth.

SPTechCon 2012 San Francisco

February comes to a close with a big bang when SPTechCon rolls into San Francisco for the first of two stops in 2012. For those of you who check my blog now and again, you may have noticed the SPTechCon “I’ll be speaking at” badge and link on the right-hand side of the page. Yes, that means I’ll be delivering a session at the event! The BZ Media folks always put on a great show, and I’m certainly proud to be a part of SPTechCon and presenting again this time around.

At this point, I know that I’ll be presenting “The Essentials of SharePoint Disaster Recovery.” I think I’m also going to be doing another lightning talk; I need to check up on that, though, to confirm it.

I also found out that John Ferringer (my co-author and partner-in-crime) and I are going to have the opportunity to do an SPTechCon-sponsored book signing (for our SharePoint 2010 Disaster Recovery Guide) on the morning of Wednesday the 29th.

If you’re at SPTechCon, please swing by to say hello – either at my session, at the Idera booth, the book signing, or wherever you see me!

Additional Reading and Resources

  1. Blog: Brian Jackett’s Frog Pond of Technology
  2. Blog: Rob Collie’s PowerPivotPro
  3. Company: DSC Consulting
  4. Site: SPTV
  5. LinkedIn: Mark Tiderman
  6. LinkedIn: Craig Pereira
  7. Event: SharePoint Saturday Columbus
  8. Blog: Jennifer Mason
  9. Twitter: Nicola Young
  10. Site: SharePoint Saturday
  11. Twitter: SharePoint Saturday Columbus
  12. Event: SharePoint Saturday Cincinnati
  13. Event: SharePointCincy
  14. LinkedIn: Geoff Smith
  15. Blog: Steve Caravajal’s Ramblings
  16. Blog: Mike Smith’s Tech Training Notes
  17. Company: MAX Technical Training
  18. Blog: Shane Young’s SharePoint Farmer’s Almanac
  19. Company: SharePoint911
  20. Webcast: “Caching-In” for SharePoint Performance
  21. Site: Secrets of SharePoint
  22. Webcast: The Essentials of SharePoint Disaster Recovery
  23. Event: SharePoint Saturday Austin
  24. Blog: Jim Bob Howard
  25. Company: Juniper Strategy
  26. LinkedIn: Matthew Lathrop
  27. Company: Rackspace
  28. Company: Idera
  29. Event: SharePoint Pro Demo Booth Session
  30. Site: SharePoint Pro Magazine
  31. Product: Idera SharePoint backup
  32. Book: SharePoint 2010 Disaster Recovery Guide
  33. Event: SPTechCon 2012 San Francisco
  34. Company: BZ Media
  35. Blog: John Ferringer’s My Central Admin

A Tale of Two Cmdlets

In this post I investigate the differences between the Backup-SPFarm and Backup-SPConfigurationDatabase cmdlets as they pertain to configuration-only backup in SharePoint 2010. I also extrapolate a bit on how the Backup-SPConfigurationDatabase cmdlet may have come to be.

I recently authored a blog post titled “Configuration-Only Backup and Restore in SharePoint 2010,” and in that post I tried to address some of the false hopes and misunderstandings I saw arising around SharePoint 2010’s configuration-only backup and restore capabilities.

While I was putting the post together, I was reminded of another head-scratcher that I’ve seen confuse some folks on a handful of occasions; specifically, what the differences are between the Backup-SPFarm and Backup-SPConfigurationDatabase PowerShell cmdlets in SharePoint 2010 when it comes to configuration-only backup.

Before I go too far, I should probably rewind a bit and explain a few things.

Many Paths to the Destination

If you aren’t yet familiar with configuration-only backup and restore in SharePoint 2010, the basics of it are covered in this TechNet article and in my previous post.  I’d recommend checking both out before continuing.

Configuration-only backups in SharePoint 2010 can be generated in several different ways:

  1. Using the “Back up only configuration settings” option when running a backup using Central Administration’s Farm Backup and Restore capabilities.
  2. Through STSADM.exe, using STSADM.exe -o backup with the -ConfigurationOnly switch.
  3. By running the Backup-SPFarm PowerShell cmdlet along with the -ConfigurationOnly switch.
  4. By executing the Backup-SPConfigurationDatabase PowerShell cmdlet.

Option #1 is obviously designed for the “UI-oriented” administrator who wants to accomplish a configuration-only backup with a point-and-click interface.  Option #2 works, but Microsoft has been pretty clear that STSADM.exe is on its way out and should generally be avoided in favor of the PowerShell cmdlets shown in Options #3 and #4.

Options #3 and #4 are where I’ve actually seen some administrative head-scratching start.  Both options leverage PowerShell, and both produce a configuration-only backup.  The Backup-SPConfigurationDatabase cmdlet would seem most appropriate for the job … but is it?  If it is most appropriate, then why the redundant capability with the Backup-SPFarm cmdlet?

Two Cmdlets, One Function?

We have two PowerShell cmdlets that produce the same type of backup set.  Understanding how the cmdlets differ starts with an analysis of the syntax and parameters for each one.  Let’s start by looking at the syntax for each of the cmdlets in full.

First, the Backup-SPFarm cmdlet.

[sourcecode language="powershell"]Backup-SPFarm -BackupMethod <String> -Directory <String> [-AssignmentCollection <SPAssignmentCollection>] [-BackupThreads <Int32>] [-ConfigurationOnly <SwitchParameter>] [-Confirm [<SwitchParameter>]] [-Force <SwitchParameter>] [-Item <String>] [-Percentage <Int32>] [-WhatIf [<SwitchParameter>]] [<CommonParameters>][/sourcecode]

Next, the Backup-SPConfigurationDatabase cmdlet.

[sourcecode language="powershell"]Backup-SPConfigurationDatabase -Directory <String> [-AssignmentCollection <SPAssignmentCollection>] [-DatabaseCredentials <PSCredential>] [-DatabaseName <String>] [-DatabaseServer <String>] [-Item <String>] [<CommonParameters>][/sourcecode]

Each of the cmdlets requires that you specify where the backup set should be created using the -Directory switch, and each cmdlet permits the selection of either the entire configuration hierarchy (the default) or a specific subset of it through the -Item switch.

There’s quite a bit of noise in the full syntax for each cmdlet, particularly for Backup-SPFarm, so let’s distill things down a bit and look at each cmdlet in turn.

Backup-SPFarm

For the purposes of this discussion, the following represents the core syntactical elements of interest for configuration-only backup using the Backup-SPFarm cmdlet:

[sourcecode language="powershell"]Backup-SPFarm [-ConfigurationOnly <SwitchParameter>][/sourcecode]

All you need to do is specify the -ConfigurationOnly switch and you’re ready to go.  There’s really not much more to it than that.

Under the hood, this method of configuration-only backup creation is the same as running a configuration-only backup operation from within SharePoint Central Administration.  Backup-SPFarm assumes that you’ve got a live SharePoint farm, that it’s operating properly, and that you want to capture its configuration with the backup process.
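In practice, the simplest invocation is a one-liner.  Here’s a minimal sketch, assuming a full configuration-only backup to a UNC path that your farm’s service accounts can write to (the path itself is a placeholder):

[sourcecode language="powershell"]# Capture only configuration settings from the live, local farm.
Backup-SPFarm -Directory \\BackupServer\SPBackups -BackupMethod Full -ConfigurationOnly[/sourcecode]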

Backup-SPConfigurationDatabase

So then, what’s the deal with Backup-SPConfigurationDatabase?

[sourcecode language="powershell"]Backup-SPConfigurationDatabase [-DatabaseCredentials <PSCredential>] [-DatabaseName <String>] [-DatabaseServer <String>][/sourcecode]

Clearly there’s something more going on with this cmdlet.  Looking at the parameters, it should be clear that the -DatabaseName and -DatabaseServer switches allow you to specify an alternate configuration database and server location as the target of the configuration-only backup operation you intend to perform.  If you happen to require SQL Server authentication to access the configuration database, then you can specify connection credentials with the -DatabaseCredentials switch.
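Here’s a quick sketch of what that looks like in practice; the database and server names below are placeholders rather than values from a real environment:

[sourcecode language="powershell"]# Run a configuration-only backup against a config database other than the
# local farm's own, prompting for SQL Server authentication credentials.
Backup-SPConfigurationDatabase -DatabaseName "SharePoint_Config_Old" `
    -DatabaseServer "SQL01" `
    -Directory "E:\ConfigBackups" `
    -DatabaseCredentials (Get-Credential)[/sourcecode]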

In most cases where I’ve seen the Backup-SPConfigurationDatabase cmdlet described, these extra database-centric parameters have been passed off as giving you the ability to back up configuration databases that reside in other (non-local) farms.  I’ve also seen it suggested that it would be possible to centralize configuration-only backups for multiple farms using this cmdlet.  While I think that those are certainly possibilities, I think that they fail to consider the bigger picture and backup/restore as an end-to-end process.

It’s All About Recovery

First, let me close the case on the Backup-SPFarm cmdlet.  Under normal circumstances, Backup-SPFarm is how you should be running a configuration-only backup if you need it.  The TechNet documentation spells this out in the description for both the Backup-SPFarm and Backup-SPConfigurationDatabase cmdlets.  Although Backup-SPConfigurationDatabase can be used to back up the configuration database of an operational farm, that’s not really its intended use (as I see it, anyway).

To understand the real value of the Backup-SPConfigurationDatabase cmdlet, you need to think past the execution of backups and consider the process of recovery.  In many of the organizations I’ve consulted for, the SharePoint environments were not protected using the native backup and restore capabilities that come with the platform.  Quite a few of these mid-size and enterprise organizations handled SharePoint farm protection using SQL Server backups, third-party tools, or a combination of the two.  Usability limitations of the native backup capabilities were sometimes the reason for avoiding SharePoint’s built-in tools; in other cases, backups were controlled and managed by a different internal group that had already standardized on its own (non-SharePoint) backup tool or product.

In each of the aforementioned backup scenarios, the primary goal was protection of SharePoint’s SQL Server databases.  Content databases were utterly critical targets in these backup scenarios since nothing would bring them back if they were lost.  Whether or not other databases were backed up depended on the recovery strategy.  In the event of catastrophic farm failure, some administrators try to recover all parts of the farm; others prefer to rebuild the farm from scratch and bolt the content databases back in.  Many different approaches to the recovery challenge exist, and each has benefits and disadvantages.

Regardless of the restore approach taken for database backups, the SharePoint farm configuration database has generally been regarded as relatively useless.  After all, farm configuration databases are not normally portable.  When you rebuild a SharePoint farm, you end up with a new farm configuration database.  In SharePoint 2007, it was generally accepted that the farm configuration database was “throwaway”; backing it up wasn’t even necessary unless you had some very specific (and oftentimes proprietary) use case for it.

The Restore Process: SharePoint 2010-Style

With SharePoint 2010, the question of “should I back up the farm configuration database?” should generally be answered with a “yes.”  The same restrictions regarding the use of a farm configuration database backup still apply from SharePoint 2007 (i.e., you can’t simply “drop it into” a live SharePoint farm and go), but we now have configuration-only backup and restore with SharePoint 2010.

If you think about the database or file-based approach to backup, and you consider the process of restoring or rebuilding a SharePoint farm, then you’ll probably understand where the Backup-SPConfigurationDatabase cmdlet actually fits in.  The cmdlet is less about backup than it is about restore.

If you’re trying to put a farm back together after a catastrophic failure using database backups, then you’re probably going to follow a series of steps that starts out like this:

  1. Rebuild your servers with their operating systems
  2. Install SharePoint
  3. Create a new SharePoint farm (with a new farm configuration database)
  4. Restore your old farm’s databases

Once you’ve completed step #3, you actually have a working SharePoint farm – it just doesn’t look anything like the farm you’re trying to rebuild/restore yet.  You’ll probably still need to re-provision services and service applications, re-establish all of your farm’s configuration settings, etc.

Assuming you were capturing your farm configuration database as part of the backup cycle for your old farm, then step #4 is where the Backup-SPConfigurationDatabase cmdlet can be brought in to work its magic.  The cmdlet does actually require a functional farm to operate properly (even against another configuration database), but it can be used to execute a configuration-only backup against the old (pre-catastrophe) farm configuration database that was restored into SQL Server from backup in step #4.  The configuration-only backup set that is generated through the Backup-SPConfigurationDatabase action can then be brought into the rebuilt farm almost immediately using the Restore-SPFarm cmdlet with the -ConfigurationOnly switch engaged.
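Once that configuration-only backup set exists (generated as sketched earlier), bringing it into the rebuilt farm is a one-liner.  The sketch below assumes the backup set from the previous example and uses the Overwrite restore method; confirm the method that fits your scenario:

[sourcecode language="powershell"]# Apply the configuration-only backup set to the newly rebuilt farm.
Restore-SPFarm -Directory "E:\ConfigBackups" -RestoreMethod Overwrite -ConfigurationOnly[/sourcecode]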

The Data Protection Manager 2010 Connection

I’ll be clear and state that I don’t have hard evidence to say with certainty that my take on the Backup-SPFarm and Backup-SPConfigurationDatabase division of responsibilities is what Microsoft envisioned when they created them, but I am relatively confident based on what I’ve seen and know – especially when I factor in Microsoft’s data protection product offerings and enterprise backup/restore strategy.

In particular, I’m talking about Microsoft’s own System Center Data Protection Manager (DPM) product line.  The DPM product line conducts its backup and restore operations through the Volume Shadow Copy Service (VSS) that is built into the Windows operating system.  VSS is a very powerful mechanism for the creation of consistent, point-in-time file snapshots … but at its core, VSS is file-based.

Even though DPM advertises some integration capabilities with SharePoint, it isn’t “aware” of SharePoint beyond its interface with the SharePoint Foundation Volume Shadow Copy Service Writer (SPF VSS Writer) and the SPF VSS Writer’s subordinate search index writer.  For all practical purposes, this means that DPM can’t really treat SharePoint backups as much more than file-based backups.  DPM’s approach to SharePoint farm protection is to back up SQL Server databases (including the configuration database) and the farm’s search index partitions.  It doesn’t understand farm metadata, IIS configuration settings, the SharePoint Root (aka “14 hive”), etc.  These additional targets can be backed up using DPM’s file system protection capabilities, but they aren’t associated in any way with other SharePoint backups.

So even though DPM can protect a SharePoint farm configuration database, it can’t do anything more with it than you or I can do with a standard SQL Server database restore.  If you realize that DPM only works with files and doesn’t have much in the way of application-level intelligence, this chart that compares its capabilities against other SharePoint protection mechanisms makes quite a bit of sense.  It should also make it clear why even Microsoft’s products have a need for the Backup-SPConfigurationDatabase cmdlet.

Closing Thoughts

Don’t interpret what I’m saying as DPM 2010-bashing.  On the contrary, I’ve been using DPM since its 2007 release in my home network environment, and I think it’s a pretty good product.  I only brought up DPM to make my point about the Backup-SPConfigurationDatabase cmdlet – not to beat up the product.

Since both SQL Server backups and DPM are capable of restoring a SharePoint configuration database – but not much more than that – the Backup-SPConfigurationDatabase cmdlet fills a very important role in the SharePoint restore process.  It’s the “bridge” from many backup/restore solutions to SharePoint itself for purposes of getting back configuration data.

The general rule of thumb that I give people is this: use Backup-SPFarm if you’re trying to extract configuration data from a live farm, and use Backup-SPConfigurationDatabase if you’re trying to extract configuration data from a restored (and otherwise unassociated) configuration database.

Additional Resources and References

  1. Blog Post: Configuration-Only Backup and Restore in SharePoint 2010
  2. TechNet: Backup-SPFarm PowerShell cmdlet
  3. TechNet: Backup-SPConfigurationDatabase PowerShell cmdlet
  4. TechNet: Backup and recovery overview (SharePoint Server 2010)
  5. TechNet: Restore-SPFarm PowerShell cmdlet
  6. Product: Microsoft System Center Data Protection Manager 2010
  7. MSDN: SharePoint Foundation VSS Writer
  8. TechNet: Plan for backup and recovery (SharePoint Server 2010)

Release of the SharePoint 2010 Disaster Recovery Guide

The SharePoint 2010 Disaster Recovery Guide is now available! In this post, I provide a small peek into the contents of the book and the people who helped make it a reality.

Since my first copy of our new book actually arrived in the mail yesterday (from Amazon.com), I think I can officially announce that the SharePoint 2010 Disaster Recovery Guide is available!  Here’s a picture of it – straight out of the box:

SharePoint 2010 Disaster Recovery Guide

John Ferringer and I apparently didn’t learn our lesson the first time around.  When Cengage approached us about writing another version of the book, we said “yes.”  We were either in denial or had repressed the memories associated with writing the first book.  There were definitely some difficulties and challenges (like trying to learn the relevant pieces of the SharePoint 2010 platform while also writing about them), but we managed to pull it off again.

Of course, we couldn’t have done this without the technical prowess and patience of JD Wade.  JD was our technical editor, and he had a knack for questioning any assumption or statement that wasn’t clearly backed by fact.  He did a fantastic job – I couldn’t have been happier.  The book’s accuracy and quality are a direct result of his contributions.

What’s Inside?

Interested in what we included?  Here’s the table of contents by chapter:

  1. SharePoint Disaster Recovery Planning and Key Concepts
  2. SharePoint Disaster Recovery Design and Implementation
  3. SharePoint Disaster Recovery Testing and Maintenance
  4. SharePoint Disaster Recovery Best Practices
  5. Windows Server 2008 Backup and Restore
  6. Windows Server 2008 High Availability
  7. SQL Server 2008 Backup and Restore
  8. SQL Server 2008 High Availability
  9. SharePoint 2010 Central Administration Backup and Restore
  10. SharePoint 2010 Command Line Backup and Restore: PowerShell
  11. SharePoint 2010 Disaster Recovery Development
  12. SharePoint 2010 Disaster Recovery for End Users
  13. Conclusion

As you can see, we’ve included a little something for just about everyone who might work with SharePoint or interface with it for disaster recovery purposes.  SharePoint administrators will probably benefit the most from the book, but there are definitely sections that are of use to SharePoint developers, DR planners, and others who are interested in SharePoint from a business continuity perspective.

If you happen to pick up a copy of the book, please share your feedback with us – good, bad, ugly, or anything else you feel like sending our way!  We poured a lot of time and effort into this book in an attempt to “do our part” for the community, and your thoughts and feedback mean everything to us.

Thanks, and enjoy!

Additional Resources and References

  1. Book: SharePoint 2010 Disaster Recovery Guide
  2. Blog: John Ferringer’s MyCentralAdmin
  3. Blog: JD Wade’s Wading Through

Configuration-Only Backup and Restore in SharePoint 2010

In this post, I discuss SharePoint 2010’s new configuration-only backup and restore capabilities, how they work, and why they probably aren’t going to remove the need for farm configuration documentation anytime soon.

Since our SharePoint 2010 Disaster Recovery Guide is written, starched, pressed, and ready to wear, I thought it was time to get back to some of the blogging I promised to start doing again once the book was finished.  I guess that if I didn’t have something to write, I simply wouldn’t know what to do with myself.  <insert smirk here>

Motivation For This Post

SharePoint 2010’s configuration-only backup and restore capabilities are on a long list of topics I’ve been meaning to blog about, but in all honesty it wasn’t at the top of that list.  I’ve been seeing the topic start to get some real attention in a number of forums, though, from folks like Todd Klindt (in one of his recent netcasts) and Benjamin Athawes (in his blog and in the helpful replies he’s been providing out in Microsoft’s TechNet forums).

It seems that many folks in the SharePoint community have heard about configuration-only backup and restore, and I think there’s an awful lot of hope that it will help with some of the problems we faced with SharePoint 2007.  By the time you finish reading this post, I hope to impart a solid understanding of what configuration-only backup and restore will – and won’t – do for you.

The Elephant In The Room

Before I go any further, let me address the question that I suspect the overwhelming majority of you probably want an answer to:

Will configuration-only backup and restore let me clone my SharePoint 2010 farm?

The quick answer: no.  At the risk of being a bit flippant, I’ll include a slightly longer answer: heck no – not even close.

When configuration-only backup and restore was introduced to the world, it promised so much.  I remember hearing the discussion of “farm templates” and of “cloning configuration.”  I remember sitting through Bill Baer’s business continuity management (BCM) session at the SharePoint Conference in 2009 and thinking about all the things I was going to do with the new capability.

In light of what I now know about configuration-only backup and restore, I went back to the recorded SPC sessions (including SPC311 – Bill’s BCM session) to make sure I wasn’t hearing things.  I wasn’t.  My guess is that the initial vision for configuration-only backup and restore had to get scaled back prior to the product becoming generally available.  Maybe the team ran out of time, maybe they hit technical hurdles, or perhaps it was a combination of the two.  Regardless, the capability in its current form isn’t quite what I had hoped it would be.

Enough with the hand-waving.  Let’s dive in.

High-Level: What Is Configuration-Only Backup and Restore?

For a brief primer on configuration-only backup and restore, check out the “Backup and recovery overview (SharePoint 2010)” article on TechNet.  If you don’t want to take the time to read the article, though, I’ll sum it up for you: a configuration-only backup and restore allows you to extract portable configuration settings from a SharePoint 2010 farm configuration database and apply those settings to a different farm.  The promise, as indicated earlier, is that you could effectively “clone” the configuration of a farm.  The configuration template that would be generated from this process could then be applied to other farms to create copies of the original farm’s settings and configuration.  This would be extremely beneficial when duplicating environments (e.g., creating staging and testing environments that match a production environment), building development and demo virtual machines (VMs), and more.

Those of you who have worked with SharePoint 2007 recognize the leap forward that this represents.  Anyone who has spent any amount of time exploring SharePoint backup and recovery knows that farm configuration databases are tied to their SharePoint environments.  Microsoft doesn’t support transplanting one farm’s configuration database into another farm; most of the time, it simply wouldn’t work.  Even if you could get it to work through some extremely impressive techno-jujitsu, you’d be in a horribly unsupported state as far as Microsoft support was concerned.

What Does It Look Like?

Configuration-only backup and restore in SharePoint 2010 is an extension to the existing catastrophic backup and restore capabilities located in the Microsoft.SharePoint.Administration and Microsoft.SharePoint.Administration.Backup namespaces.  The processes and mechanisms that allow you to create farm-level backups from Central Administration (through “Farm Backup and Restore”), PowerShell (via Backup-SPFarm), and STSADM.exe (via STSADM -o backup in catastrophic mode) are the same ones that are employed in configuration-only backups.

In fact, the backup sets that are generated from a configuration-only backup are basically the same, structurally speaking, as those that are generated from a “normal” (content + configuration) catastrophic backup.  One easy way to determine the nature of a backup, though, is to crack open the backup location’s table of contents file (spbrtoc.xml) and examine the value within the <SPConfigurationOnly /> element for a given backup or restore run (represented by a <SPHistoryObject /> element).

For example, this particular backup run was clearly a configuration-only backup because its <SPConfigurationOnly /> element contains a value of True:
[sourcecode language="xml" highlight="9"]<SPHistoryObject>
<SPId>571d2ad2-f485-46de-918e-653e8868c8bc</SPId>
<SPRequestedBy>SPDC\s0ladmin</SPRequestedBy>
<SPBackupMethod>Full</SPBackupMethod>
<SPRestoreMethod>None</SPRestoreMethod>
<SPStartTime>08/18/2010 16:15:12</SPStartTime>
<SPFinishTime>08/18/2010 16:15:37</SPFinishTime>
<SPIsBackup>True</SPIsBackup>
<SPConfigurationOnly>True</SPConfigurationOnly>
<SPBackupDirectory>e:\temp\spbr0001\</SPBackupDirectory>
<SPDirectoryName>spbr0001</SPDirectoryName>
<SPDirectoryNumber>1</SPDirectoryNumber>
<SPTopComponent>Farm</SPTopComponent>
<SPTopComponentId>7850df11-60ef-460c-ab4a-9b7b9f2f735f</SPTopComponentId>
<SPWarningCount>0</SPWarningCount>
<SPErrorCount>0</SPErrorCount>
</SPHistoryObject>[/sourcecode]
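If you’d rather not eyeball the XML by hand, a short PowerShell sketch along these lines can pull the flag from each history entry.  I’m assuming here that spbrtoc.xml’s root element is SPBackupRestoreHistory, and you’d adjust the path to your own backup location:

[sourcecode language="powershell"]# Read the backup location's table of contents and report which
# runs were configuration-only.
[xml]$toc = Get-Content "E:\temp\spbrtoc.xml"
$toc.SPBackupRestoreHistory.SPHistoryObject |
    Select-Object SPDirectoryName, SPIsBackup, SPConfigurationOnly[/sourcecode]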
If you browse the folder containing the backup set that is generated from a configuration-only backup, you’ll see the expected array of sequentially numbered hexadecimal .bak files, as well as a log file (spbackup.log) and backup component hierarchy file (spbackup.xml).


The .bak files themselves contain XML-serialized representations of various farm objects that were captured during the configuration backup process:

[sourcecode language="xml" wraplines="false"]<object type="Microsoft.Office.Server.Administration.DiagnosticsService, Microsoft.Office.Server, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c">
<fld type="System.Collections.Hashtable, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="m_Throttles" />
<fld name="m_Versions" type="null" />
<fld name="m_UpgradeContext" type="null" />
<fld type="System.Collections.Hashtable, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="m_UpgradedPersistedFields" />
<fld name="m_Properties" type="null" />
<sFld type="String" name="m_LastUpdatedUser">SPDC\s0ladmin</sFld>
<sFld type="String" name="m_LastUpdatedProcess">psconfigui (4228)</sFld>
<sFld type="String" name="m_LastUpdatedMachine">SPDEV</sFld>
<sFld type="DateTime" name="m_LastUpdatedTime">2010-04-25T17:38:27</sFld>
</object>[/sourcecode]

Again, this is all very similar to a standard catastrophic farm backup.  The one notable absence in the backup set that is produced during a configuration-only backup is that of SQL Server database backup files that begin (internally) with a telltale TAPE header.  The absence of these files is expected, though, since configuration-only backups operate on farm configuration settings and metadata – not the content and other data that is housed primarily in SQL Server databases.

“Wait,” you might be saying, “service applications typically have quite a bit of configuration data, and much of that data is housed in SQL Server databases.  Wouldn’t those databases be captured by the configuration-only backup process?”  Hold that question – I’ll be addressing it in a short bit.

A Quick Peek At What’s Going On Under The Hood

To better understand how configuration-only backup and restore works, it helps to dive below the backup set and into the SharePoint object model to see what’s actually happening.  If you’re not a developer, no worries – I’ll try to keep this simple.

The type that is the backbone of configuration-only backup and restore operations is the IBackupRestoreConfiguration interface.  Classes in the SharePoint object model can implement this interface (and supply a CanBackupRestoreAsConfiguration property value of true) if they wish to meet the bare minimum requirements for inclusion in configuration-only backup and restore operations.

If you’ve worked with catastrophic backup and restore in the SharePoint object model before, this interface name may seem a little familiar to you – even if it isn’t.  That’s because extending the native catastrophic backup and restore functionality of SharePoint to include new content classes is done through the similarly named IBackupRestore interface.  IBackupRestore came before IBackupRestoreConfiguration, and the latter is actually derived from the former.  The patterns of interaction between the runtime backup objects and objects that implement these two interfaces are very similar – as you might expect given their inheritance relationship.

So you might be wondering, “So what?  I don’t plan to build configuration-only backup and restore-capable components.  Why are you going through all of this?”  That question is relatively easy to answer: we can get a pretty clear understanding of what is actually included in configuration-only backup and restore operations by looking at the SharePoint classes that implement the IBackupRestoreConfiguration interface.

Hold onto the concept of examining types that implement IBackupRestoreConfiguration; we’ll be coming back to it in just a second.

What Does A Configuration-Only Backup Actually Capture? – Part 1

Let’s leave the SharePoint object model and come back up to ground level for a moment.

In plain English, configuration-only backup and restore is supposed to address the “I need to create a template of my farm” pain point we felt with SharePoint 2007.  Does it?  According to Microsoft, what gap does the capability fill?

If you read the TechNet article I linked to earlier, you’ll find just five types of settings (or configuration data items) that are actually listed as included in a configuration-only backup:

  • Antivirus
  • Information rights management (IRM)
  • Outbound e-mail settings (only restored when performing an "overwrite").
  • Customizations deployed as trusted solutions
  • Diagnostic logging

I don’t know about you, but this doesn’t exactly line up too well with the list of things I’d want to replicate from Farm A to Farm B if I were actually trying to clone Farm A configuration settings.  Don’t get me wrong: several of the items listed are things I would want to bring across (especially the customizations in the farm solution store), but there are a whole host of additional things I’d want to see.

What Does A Configuration-Only Backup Actually Capture? – Part 2

The five bullet points I just supplied aren’t entirely well-defined, and they’re more than a little “light” in terms of farm configuration data.  Let’s define the list a bit more clearly by seeing which classes in the SharePoint object model actually implement the required IBackupRestoreConfiguration interface.

When I fire up Reflector and analyze the types that use the IBackupRestoreConfiguration interface, I come up with the following classes (ignoring the SPBackupRestore type, since its ImplementsIBackupRestoreConfiguration method only checks whether other objects actually implement the interface):

  • Microsoft.SharePoint.Administration.SPDiagnosticsServiceBase
  • Microsoft.SharePoint.Administration.SPFarm
  • Microsoft.SharePoint.Administration.SPResourceMeasure
  • Microsoft.SharePoint.Administration.SPSolution
  • Microsoft.SharePoint.Administration.SPSolutionCollection
  • Microsoft.SharePoint.Administration.SPSolutionLanguagePack
  • Microsoft.SharePoint.Administration.SPUserCodeExecutionTier
  • Microsoft.SharePoint.Administration.SPUserCodeLoadBalancerProvider
  • Microsoft.SharePoint.Administration.SPUserCodeProvider
  • Microsoft.SharePoint.Administration.SPUserCodeService
  • Microsoft.SharePoint.Administration.SPWebService
  • Microsoft.SharePoint.UserCode.SPSolutionValidator

Each of these classes is capable of participating in a configuration-only backup because it implements IBackupRestoreConfiguration.  The list is longer than the five bullets I mentioned earlier, but many of the classes cited can be grouped into common areas of functionality.  The SPSolution, SPSolutionCollection, and SPSolutionLanguagePack types are associated with trusted solutions (the farm solution store), for example, while the SPDiagnosticsServiceBase type is tied to trace log and event throttling management (i.e., diagnostic logging).  A simple one-to-one mapping between classes and settings areas (that you might find in Central Administration) doesn’t actually exist.
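If you don't have Reflector handy, you can approximate the same analysis with a little PowerShell reflection from a SharePoint 2010 server.  The sketch below only scans Microsoft.SharePoint.dll; other farm assemblies can be scanned the same way.

[sourcecode language="powershell"]
# Enumerate classes in Microsoft.SharePoint.dll that implement
# IBackupRestoreConfiguration.  A rough stand-in for the Reflector analysis.
$spAssembly = [System.Reflection.Assembly]::Load("Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c")
$target = [Microsoft.SharePoint.Administration.Backup.IBackupRestoreConfiguration]

try {
    $types = $spAssembly.GetTypes()
} catch {
    # On a partial load, keep whichever types did load and skip the rest.
    $rtle = $_.Exception
    if ($rtle -isnot [System.Reflection.ReflectionTypeLoadException]) { $rtle = $rtle.InnerException }
    $types = $rtle.Types | Where-Object { $_ }
}

$types |
    Where-Object { $_.IsClass -and $target.IsAssignableFrom($_) } |
    Sort-Object FullName |
    Select-Object FullName
[/sourcecode]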

Identifying what is included in a configuration-only backup isn’t quite a quick and easy affair.

What Doesn’t A Configuration-Only Backup Capture?

Sometimes it’s simply easier to talk about what a thing isn’t rather than what it is.  As you’re probably coming to see, configuration-only backup and restore is one of those things.

For those who hoped that configuration-only backup and restore would deliver us to the promised land of SharePoint farm templates and full configuration replication, the first signs of trouble in paradise come from reading the implementation notes for the IBackupRestoreConfiguration interface.  In essence, they state that you shouldn’t implement the interface to capture configuration settings unless the following three conditions are true for the settings in question:

  1. The settings you’re trying to preserve are only configuration settings – not content like lists, documents, etc.
  2. The settings you want to capture are scoped to the entire farm or the Content Publishing Web Service (i.e., they apply equally to all non-Central Admin Web applications and the site collections contained within them – not to just a subset)
  3. The settings aren’t tied to server names or your specific SharePoint farm topology

That list starts “simple” and ends “rough.”  With those three bullets, we can instantly rule out configuration data that is tied to individual Web applications, content databases, site collections, and everything else below them.  Configuration-only backup and restore won’t protect your per-Web application settings, either, including alternate access mappings (AAMs).

I’m making a special point of highlighting AAMs because configuration-only backup and restore was initially advertised as being capable of capturing these mappings.  Sure, you can view AAMs within Central Administration and may think that they’re maintained at the farm level, but they aren’t – they’re tied to specific Web applications.  AAMs for a Web application are represented (within the object model) as an instance of the SPAlternateUrlCollection class.  The SPAlternateUrlCollection isn’t on the list of IBackupRestoreConfiguration implementers provided earlier, nor are its parent types (most notably the SPWebApplication type through its AlternateUrls property).  Net effect: it isn’t included in configuration-only backup and restore operations.
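You can see this relationship for yourself from the SharePoint 2010 Management Shell; AAMs are reached through each Web application rather than through the farm:

[sourcecode language="powershell"]
# AAMs hang off individual Web applications, not the farm itself.
Get-SPWebApplication -IncludeCentralAdministration | ForEach-Object {
    Write-Host $_.DisplayName
    $_.AlternateUrls | Select-Object IncomingUrl, UrlZone
}
[/sourcecode]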

Since cloning a SharePoint farm usually involves taking it from one environment to another, bullet #3 is a rather big sticking point, as well.  Configuration-only backup and restore won’t handle anything that includes a server name, IP address, or any other environmentally-dependent setting.  The reason is pretty simple – how would SharePoint know how to actually re-wire that stuff (in a new environment) on restore?

Ouch.

Okay, What About Service Applications?

The Service Application Framework is new to SharePoint 2010, and it represents a major step forward in correcting many of the performance, configuration, and scalability limits of MOSS 2007’s shared service provider (SSP) model.  If you’ve touched SharePoint 2010 in any form, chances are you’ve at least stumbled into service applications.  Examples include the Managed Metadata Service, Business Connectivity Services (BCS), and Search.

Although the Service Application Framework has been engineered to participate in normal (content+configuration) catastrophic backup and restore operations, it doesn’t do so through the standard IBackupRestore interface.  Developers of service applications and related classes can adorn their classes with a couple of different attributes (IisWebServiceApplicationBackupBehaviorAttribute and IisWebServiceApplicationProxyBackupBehaviorAttribute – not exactly “short and sweet” in the name department), and they get backup and restore integration as a freebie.  This is a big relief for developers, because properly implementing the IBackupRestore interface is anything but trivial.

There is a downside to the attribute-based backup and restore approach as it’s implemented, though: the Service Application Framework simply doesn’t participate in configuration-only backup and restore.  When you execute a configuration-only backup, you won’t capture any configuration data tied to Search, BCS, Managed Metadata, Web Analytics, Excel Services, or any of the other service applications.

Double ouch.

The Verdict On Configuration-Only Backup And Restore

I’ll start by apologizing if this post dashed your hopes.  Believe me when I say that I had very high hopes for configuration-only backup and restore, as well.  Cloning farms by hand is painful work; I’ve done it enough times to know that much.

Since configuration-only backup and restore doesn’t actually cover configuration data tied to service applications or individual Web applications, cloning a farm in SharePoint 2010 is still going to be a largely manual affair.  Scripting can (and probably should) play a large role, and so will documentation.

There, I said it – the ugly “d” word.  Documentation.

Documentation continues to play a big role in capturing configuration data in SharePoint 2010, but that doesn’t mean you have to resort to taking notes or capturing screenshots en masse.

Microsoft has (indirectly) acknowledged that configuration-only backup and restore isn’t going to round up all of our desired configuration settings, and they’ve attempted to lend us a hand through some PowerShell scripting.  If you haven’t yet reviewed Microsoft’s farm documentation script on TechNet, I highly recommend that you check it out.  Saying that the script’s treatment of farm configuration data is “extensive” is kind of like saying that a tsunami is a “big wave” – it doesn’t do it justice.

I also want to be clear and say that despite the limitations I’ve described, I still think that configuration-only backup and restore is worth some serious investigation for anyone trying to do template creation, cloning, and disaster recovery work.  Given my focus on disaster recovery, for example, the ability to get a farm’s solution store backed-up in a form that can be restored easily at a later time is a huge benefit – one that would really ease the process of farm recovery in a true disaster scenario.
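For completeness, here's what restoring that configuration (solution store included) might look like with Restore-SPFarm; as before, the share path is a placeholder:

[sourcecode language="powershell"]
# Restore farm configuration settings (including the solution store) from
# the most recent configuration-only backup in the share.  Placeholder path.
Restore-SPFarm -Directory \\backupserver\spbackups -RestoreMethod Overwrite -ConfigurationOnly
[/sourcecode]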

Additional Resources and References

  1. Book: SharePoint 2010 Disaster Recovery Guide
  2. Netcast: Todd Klindt’s Netcast 54
  3. Blog: Benjamin Athawes’ Tales from a SharePoint Farm
  4. Blog: Bill Baer’s TechNet Blog
  5. TechNet: Backup and recovery overview (SharePoint 2010)
  6. API: IBackupRestoreConfiguration
  7. API: IBackupRestore
  8. Product: Red Gate’s Reflector
  9. MSDN: What’s New: Service Application Framework
  10. TechNet: Document Farm Configuration Settings (PowerShell Script)

RPO and RTO: Prerequisites for Informed SharePoint Disaster Recovery Planning

RPO (recovery point objective) targets and RTO (recovery time objective) targets are critical to have in hand prior to the start of disaster recovery (DR) planning for SharePoint. This post discusses RPO and RTO to build an understanding of what they are and how they impact DR decision making.

Years ago, before I began working with SharePoint, I spent some time working as an application architect for a Fortune 500 financial services company based here in Cincinnati, Ohio.  While at the company, I was awarded the opportunity to serve as a disaster recovery (DR) architect on the team that would build (from ground up) the company’s first DR site implementation.  It was a high-profile role with little definition – the kind that can either boost a career or burn it down.  Luckily for me, the outcome leaned towards the former.

Admittedly, though, I knew very little about DR before starting in that position.  I only knew what management had tasked me with doing: ensuring that mission-critical applications would be available and functional at the future DR site in the event of a disaster.  If you aren’t overly familiar with DR, then that target probably sounds relatively straightforward.  As I began working and researching avenues of attack for my problem set, though, I quickly realized how challenging and unusual disaster recovery planning was as a discipline – particularly for a “technically minded” person like myself.

Understanding the “Technical Tendency”

When it comes to DR, folks with whom I’ve worked have heard me say the following more than a few times:

It is the nature of technical people to find and implement technical solutions to technical problems.  At its core, disaster recovery is not a technical problem; it is a business problem.  Approaching disaster recovery with a purely technical mindset will result in a failure to deliver an appropriate solution more often than not.

What do I mean by that?  Well, technical personnel tend to lump DR plans and activities into categories like “buying servers,” “taking backups,” and “acquiring off-site space.”  These activities can certainly be (and generally are) part of a DR plan, but if they are the starting point for a DR strategy, then problems are likely to arise.

Let me explain by way of a simplistic and fictitious example.

Planning for SharePoint DR in a Vacuum

Consider the plight of Larry.  Larry is an IT professional who possesses administrative responsibility for his company’s SharePoint-based intranet.  One day, Larry is approached by his manager and instructed to come up with a DR strategy for the SharePoint farm that houses the intranet.  Like most SharePoint administrators, Larry’s never really “done” DR before.  He’s certain that he will need to review his backup strategy and make sure that he’s getting good backups.  He’ll probably need to talk with the database administrators, too, because it’s generally a good idea to make sure that SQL backups are being taken in addition to SharePoint farm (catastrophic) backups.

Larry’s been told that off-site space is already being arranged by the facilities group, so that’s something he’ll be able to take off of his plate.  He figures he’ll need to order new servers, though.  Since the company’s intranet farm consists of four servers (including database servers), he plans to play it safe and order four new servers for the DR site.  In his estimation, he’ll probably need to talk with the server team about the hardware they’ll be placing out at the DR site, he’ll need to speak with the networking team about DNS and switching capabilities they plan to include, etc.

Larry prepares his to-do list, dives in, and emerges three months later with an intranet farm DR approach possessing the following characteristics:

  • The off-site DR location will include four servers that are set up and configured as a new, “warm standby” SharePoint farm.
  • Every Sunday night, a full catastrophic backup of the SharePoint farm will be taken; every other night of the week, a differential backup will be taken.  After each nightly backup is complete, it will be remotely copied to the off-site DR location.
  • In the event of a disaster, Larry will restore the latest full backup and appropriate differential backups to the standby farm that is running at the DR site.
  • Once the backups have been restored, all content will be available for users to access – hypothetically speaking, of course.

There are a multitude of technical questions that aren’t answered in the plan described above.  For example, how is patching of the standby farm handled?  Is the DR site network a clone of the existing network?  Will server name and DNS hostname differences be an issue?  What about custom solution packages (WSPs)?  Ignoring all the technical questions for a moment, take a step back and ask yourself the question of greatest importance: will Larry’s overall strategy and plan meet his DR requirements?

If you’re new to DR, you might say “yes” or “no” based on how you view your own SharePoint farm and your experiences with it.  If you’ve previously been involved in DR planning and are being honest, though, you know that you can’t actually answer the question.  Neither can Larry or his manager.  In fact, no one (on the technical side, anyway) has any idea if the DR strategy is a good one or not – and that’s exactly the point I’m trying to drive home.

The Cart Before the Horse

Assuming Larry’s company is like many others, the SharePoint intranet has a set of business owners and stakeholders (collectively referred to as “The Business” hereafter) who represent those who use the intranet for some or all of their business activities.  Ultimately, The Business would issue one of three verdicts upon learning of Larry’s DR strategy:

Verdict 1: Exactly What’s Needed

Let’s be honest: Larry’s DR plan for intranet recovery could be right on the money.  Given all of the variables in DR planning and the assumptions that Larry made, though, the chance of such an outcome is slim.

Verdict 2: DR Strategy Doesn’t Offer Sufficient Protection

There’s a solid chance that The Business could judge Larry’s DR plan as falling short.  Perhaps the intranet houses areas that are highly volatile, with critical data that changes frequently throughout the day.  If an outage were to occur at 4pm, an entire day’s worth of data would basically be lost, because the most recent backup would likely be 12 or so hours old (remember: the DR plan calls for nightly backups).  Loss of that data could be exceptionally costly to the organization.

At the same time, Larry’s recovery strategy assumes that he has enough time to restore farm-level backups at the off-site location in the event of a disaster.  Restoring a full SharePoint farm-level backup (with the potential addition of differential backups) could take hours.  If having the intranet down costs the company $100,000 per hour in lost productivity or revenue, you can bet that The Business will not be happy with Larry’s DR plan in its current form.

Verdict 3: DR Strategy is Overkill

On the flip side, there’s always the chance that Larry’s plan is overkill.  If the company’s intranet holds primarily static content that changes infrequently and is of relatively low importance, nightly backups and a warm off-site standby SharePoint farm are more than the situation demands.  Sure, that approach will allow The Business to get their intranet back in a timely fashion … but at what cost?

If a monthly tape backup rotation and a plan to buy hardware in the event of a disaster is all that is required, then Larry’s plan is unnecessarily costly.  Money is almost always constrained in DR planning and execution, and most organizations prioritize their DR target systems carefully.  Extra money that is spent on server hardware, nightly backups, and maintenance for a warm off-site SharePoint farm could instead be allocated to the DR strategies of other, more important systems.

Taking Care of Business First

No one wants to be left guessing whether or not their SharePoint DR strategy will adequately address DR needs without going overboard.  In approaching the challenge his manager handed him without obtaining any additional input, Larry fell into the same trap that many IT professionals do when confronted with DR: he failed to obtain the quantitative targets that would allow him to determine if his DR plan would meet the needs and expectations established by The Business.  In their most basic form, these requirements come in the form of recovery point objectives (RPOs) and recovery time objectives (RTOs).

The Disaster Recovery Timeline

I have found that the concepts of RPO and RTO are easiest to explain with the help of illustrations, so let’s begin with a picture of a disaster recovery timeline itself:

[Diagram: Disaster Recovery Timeline]

The diagram above shows an arbitrary timeline with an event (a “declared disaster”) in the middle.  Any DR planning and preparation occurs to the left of the event (in the past, while SharePoint was still operational), and the actual recovery of SharePoint happens to the right of the event, in the “non-operational” period.

This DR timeline will become the canvas for further discussion of the first quantitative DR target you need to obtain before you can begin planning a SharePoint DR strategy: RPO.

RPO: Looking Back

As stated a little earlier, RPO is an acronym for Recovery Point Objective.  Though some find the description distasteful, the easiest way to describe RPO is this: it’s the maximum amount of data loss that’s tolerated in the event of a disaster.  RPO targets vary wildly depending on volatility and criticality of the data stored within the SharePoint farm.  Let’s add a couple of RPO targets to the DR timeline and discuss them a bit further.

[Diagram: Disaster Recovery Timeline with RPO targets]

Two RPO targets have been added to the timeline: RPO1 and RPO2.  As discussed, each of these targets marks a point in the past from which data must be recoverable in the event of a disaster.  In the case of our first example, RPO1, the point in question is 48 hours before a declared disaster (that is, “we have a 48-hour RPO”).  RPO2, on the other hand, is a point in time a mere 30 minutes prior to the disaster event (a “30-minute target RPO”).

At a minimum, any DR plan that is implemented must ensure that all of the data prior to the point in time denoted by the selected RPO can be recovered in the event of a disaster.  For RPO1, there may be some loss of data that was manipulated in the 48 hours prior to the disaster, but all data older than 48 hours will be recovered in a consistent state.  RPO2 is more stringent and leaves less wiggle room; all data older than 30 minutes is guaranteed to be available and consistent following recovery.

If you think about it for a couple of minutes, you can easily begin to see how RPO targets will quickly validate or rule out various backup and data protection strategies.  In the case of RPO1, we’re “allowed” to lose up to two days (48 hours) worth of data.  In this situation, a nightly backup strategy would be more than adequate to meet the RPO target, since a nightly backup rotation guarantees that available backup data is never more than 24 hours old.  Whether disk or tape based, this type of backup approach is very common in the world of server management.  It’s also relatively inexpensive.

The same nightly backup strategy would fail to meet the RPO requirement expressed by RPO2, though.  RPO2 states that we cannot lose more than 30 minutes of data.  With this type of RPO, most standard disk and tape-based backup strategies will fall short of the target.  To meet RPO2’s 30-minute target, we’d probably need to look at something like SQL Server log shipping or mirroring.  Such a strategy is generally going to require a greater investment in database hardware, storage, and licensing.  Technical complexity also goes up relative to the aforementioned nightly backup routine.
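To make that worst-case arithmetic concrete, here's a tiny hypothetical PowerShell helper; the function name and shape are mine, not part of any SharePoint or DR tooling:

[sourcecode language="powershell"]
# Hypothetical helper: does a given backup interval meet an RPO target?
function Test-RpoTarget {
    param(
        [TimeSpan]$BackupInterval,  # time between successive backups
        [TimeSpan]$RpoTarget        # maximum tolerable data loss
    )
    # Worst case: disaster strikes just before the next backup runs, so the
    # newest recoverable data is one full interval old.
    $BackupInterval -le $RpoTarget
}

# Nightly backups easily meet a 48-hour RPO ...
Test-RpoTarget -BackupInterval (New-TimeSpan -Hours 24) -RpoTarget (New-TimeSpan -Hours 48)    # True

# ... but miss a 30-minute RPO by a wide margin.
Test-RpoTarget -BackupInterval (New-TimeSpan -Hours 24) -RpoTarget (New-TimeSpan -Minutes 30)  # False
[/sourcecode]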

It’s not too hard to see that as the RPO window becomes increasingly narrow and approaches zero (that is, an RPO target of real-time failover with no data loss permitted), the cost and complexity of an acceptable DR data protection strategy climb dramatically.

RTO: Thinking Ahead

If RPO drives how SharePoint data protection should be approached prior to a disaster, RTO (or Recovery Time Objective) denotes the timeline within which post-disaster farm and data recovery must be completed.  To illustrate, let’s turn once again to the DR timeline.

[Diagram: Disaster Recovery Timeline with RTO targets]

As with the previous RPO example, we now have two RTO targets on the timeline: RTO1 and RTO2.  Analogous to the RPO targets, the RTO targets are given in units of time relative to the disaster event.  In the case of RTO1, the point in time in question is two hours after a disaster has been declared.  RTO2 is designated as t+36 hours, or a day and a half after the disaster has been declared.

In plain English, an RTO target is the maximum amount of time that the recovery of data and functionality can take following a disaster event.  If the overall DR plan for your SharePoint farm were to have an RTO matching RTO2, for instance, you would need to have functionality restored (at an agreed-upon level) within a day and a half.  If you were operating with a target matching RTO1, you would have significantly less time to get everything “up and running” – only two hours.

RTO targets vary for the same reasons that RPO targets vary.  If the data that is stored within SharePoint is highly critical to business operations, then RTOs are generally going to trend towards hours, minutes, or maybe even real-time (that is, an RTO that mandates transferring to a hot standby farm or “mirrored” data center for zero recovery time and no interruption in service).  For SharePoint data and farms that are less business critical (maybe a publishing site that contains “nice to have” information), RTOs could be days or even weeks.

Just like an aggressive RPO target, an aggressive RTO target limits the number of viable recovery options that can address it – and those options generally lean towards being more expensive and technically more complex.  For example, attempting to meet a two-hour RTO (RTO1) by restoring a farm from backup tapes is going to be a gamble.  With very little data, it may be possible … but you wouldn’t know until you actually tried with a representative backup.  At the other extreme, an RTO measured in weeks could make a ground-up farm rebuild (complete with new hardware acquisition following the disaster) a viable – and, in terms of up-front capital, rather inexpensive – recovery strategy.

Whether a specific recovery strategy will meet RTO targets is oftentimes difficult to determine in advance of a disaster without actually testing it.  That’s where the value of simulated disasters and recovery exercises comes into play – but that’s another topic for another time.

Closing Words

This post was intended to highlight a common pitfall affecting not only SharePoint DR planning, but DR planning in general.  It should be clear by now that I deliberately avoided technical questions and issues to focus on making my point about planning.  Don’t interpret my “non-discussion” of technical topics to mean that I consider them secondary to SharePoint DR.  That’s not the case at all; the fact that John Ferringer and I wrote a book on the topic (the “SharePoint 2007 Disaster Recovery Guide”) should be proof of this.  It should probably come as no surprise that I recommend our book for a much more holistic treatment of SharePoint DR – complete with technical detail.

There are also a large number of technical resources for SharePoint disaster recovery online, and most of them have their strong and weak points.  My only general criticism is that they tend to equate “disaster recovery” with “backup/restore.”  While the two are interrelated, the latter is but one aspect of the former.  As I hope this post points out, true disaster recovery planning begins with dialog and objective targets – not server orders and backup schedules.

If you conclude your reading holding onto only one point from this post, let it be this: don’t attempt DR until you have RPOs and RTOs in hand!

Additional Reading and References

  1. Book: SharePoint 2007 Disaster Recovery Guide
  2. Online: Disaster Recovery Journal site