Work and Play: NAS-style

The last time I wrote about the network-attached storage (NAS) appliance that the good folks at Synology had sent my way, I spent a lot of time talking about how amazed I was at all the things that NAS appliances could do these days. They truly have come a very long way in the last decade or so.

Once I got done gushing about the DiskStation DS220+ that I had sitting next to my primary work area, I realized that I should probably do a post about it that amounted to more than a “fanboy rant.”

This is an attempt at “that post” and contains some relevant specifics on the DS220+’s capabilities as well as some summary words about my roughly five or six months of use.

First Up: Business

As the title of this post alluded to, I’ve found uses for the NAS that would be considered “work/business,” others that would be considered “play/entertainment,” and some that sit in-between. I’m going to start by outlining the way I’ve been using it in my work … or more accurately, “for non-play purposes.”

But first: one of the things I found amazing about the NAS (though it really isn’t a new concept) is that Synology maintains an application site (they call it the “Package Center”) that is available directly from within the NAS web interface itself:

Much like the application marketplaces that have become commonplace for mobile phones, or the Microsoft Store which is available by default to Windows 10 installations, the Package Center makes it drop-dead-simple to add applications and capabilities to a Synology NAS appliance. The first time I perused the contents of the Package Center, I kind of felt like a kid in a candy store.

With all the available applications, I had a hard time staying focused on the primary package I wanted to evaluate: Active Backup for Microsoft 365.

Backup and restore, as well as Disaster Recovery (DR) in general, are concepts I have some history and experience with. What I don’t have a ton of experience with is the way that companies are handling their DR and BCP (business continuity planning) for cloud-centric services themselves.

What little experience I do have generally leads me to categorize people into two different camps:

  • Those who rely upon their cloud service provider for DR. As a generalization, there are plenty of folks that rely upon their cloud service provider for DR and data protection. Sometimes folks in this group wholeheartedly believe, right or wrong, that their cloud service’s DR protection and support are robust. Oftentimes, though, the choice is simply made by default, without solid information, or simply because building one’s own DR plan and implementing it is not an inexpensive endeavor. Whatever the reason(s), folks in this group are attached at the hip to whatever their cloud service provider has for DR and BCP – for better or for worse.
  • Those who don’t trust the cloud for DR. There are numerous reasons why someone may choose to augment a cloud service provider’s DR approach with something supplemental. Maybe they simply don’t trust their provider. Perhaps the provider has a solid DR approach, but the RTO and RPO values quoted by the provider don’t line up with the customer’s specific requirements. It may also be that the customer simply doesn’t want to put all of their DR eggs in one basket and wants options they control.
In reality, I recognize that this type of down-the-middle split isn’t entirely accurate. People tend to fall somewhere along the spectrum created by both extremes.

Microsoft 365 Data Protection

On the specific topic of Microsoft 365 data protection, I tend to sit solidly in the middle of the two extremes I just described. I know that Microsoft takes steps to protect 365 data, but good luck finding a complete description or metrics around the measures they take. If I had to recover some data, I’m relatively (but not entirely) confident I could open a service ticket, make the request, and eventually get the data back in some form.

The problem with this approach is that it’s filled with assumptions and not a lot of objective data. I suspect part of the reason for this is that actual protection windows and numbers are always evolving, but I just don’t know.

You can’t throw a stick on the internet without hitting a seemingly endless supply of vendors offering to fill the data protection gap that exists with Microsoft 365. These tools are designed to afford customers a degree of control over their data protection. And as someone who has talked about DR and BCP for many years now, I’ll say this: redundancy of data protection is never a bad thing.

Introducing the NAS Solution

And that brings me back to Synology’s Active Backup for Microsoft 365 package.

In all honesty, I wasn’t actually looking for supplemental Microsoft 365 data protection at the time. Knowing the price tag on some of the services and packages that are sold to address protection needs, I couldn’t justify (as a “home user”) the cost.

I was pleasantly surprised to learn that the Synology solution/package was “free” – or rather, if you owned one of Synology’s NAS devices, you had free access to download and use the package on your NAS.

The price was right, so I decided to install the package on my DS220+ and take it for a spin.


Kicking The Tires

First impressions and initial experiences mean a lot to me. For the brief period of time when I was a product manager, I knew that a bad first experience could shape someone’s entire view of a product.

I am therefore very happy to say that the Synology backup application was a breeze to get set up – something I initially felt might not be the case. The reason for my hesitancy was that applications and products working with Microsoft 365 need to be registered as trusted applications within the M365 tenant they’re targeting. Most of the products I’ve worked with that need to be set up in this capacity involve a fair amount of manual legwork: certificate preparation, finding and granting permissions within a created app registration, etc.

Not Synology’s backup package. From the moment you press the “Create” button and indicate that you want to establish a new backup of Microsoft 365 data, you’re provided with solid guidance and hand-holding throughout the entire setup and app registration process. Of all of the apps I’ve registered in Azure, Synology’s process and approach has been the best – hands-down. It took no more than five minutes to establish a recurring backup against a tenant of mine.

I’ve included a series of screenshots (below) that walk through the backup setup process.

What Goes In, SHOULD Come Out ...

When I would regularly speak on data protection and DR topics, I had a saying that I would frequently share: “Backup is science, but Restore is an art.” A decade or more ago, those tasked with backing up server-resident data often took a “set it and forget it” approach to data backups. And when it came time to restore some piece of data from those backups, many of the folks who took such an approach would discover (to their horror) that their backups had been silently failing for weeks or months.

Moral of the story (and a 100-level lesson in DR): if you establish backups, you need to practice your restore operations until you’re convinced they will work when you need them.

Synology approaches restoration in a very straightforward fashion that works very well (at least in my use case). There is a separate web portal from which restores and exports (from backup sets) are conducted.

And in case you’re wondering: yes, this means that you can grant some or all of your organization (or your family, if you’re like me) self-service backup capabilities. Backup and restore are handled separately from one another.

As the series of screenshots below illustrates, there are five slightly different restore presentations for each of the five areas backed up by the Synology package: (OneDrive) Files, Email, SharePoint Sites, Contacts, and Calendars. Restores can be performed from any backup set and offer the ability to select the specific files/items to recover. The ability to do an in-place restore or an export (which is downloaded by the browser) is also available for all items being recovered. Pretty handy.

Will It Work For You?

I’ve got to fall-back to the SharePoint consultant’s standard answer: it depends.

I see something like this working exceptionally well for small-to-mid-sized organizations that have smaller budgets and already overburdened IT staff. Setting up automated backups is a snap, and enabling users to get their data back without a service ticket and/or IT becoming the bottleneck takes a tremendous load off of support personnel.

My crystal ball stops working when we’re talking about larger companies and enterprise scale. All sorts of other factors come into play with organizations in this category. A NAS, regardless of capabilities, is still “just” a NAS at the end of the day.

My DS220+ has two 2TB drives in it. I/O to the device is snappy, but I’m only one user. Enterprise-scale performance isn’t something I’m really equipped to evaluate.

Then there are the questions of identity and Active Directory implementation. I’ve got a very basic AD implementation here at my house, but larger organizations typically have alternate identity stores, enforced group policy objects (GPOs), and all sorts of other complexities that tend to produce a lot of “what if” questions.

Larger organizations are also typically interested in advanced features, like integration with existing enterprise backup systems, different backup modes (differential/incremental/etc.), deduplication, and other similar optimizations. The Synology package, while complete in terms of its general feature set, doesn’t necessarily possess all the levers, dials, and knobs an enterprise might want or need.

So, I happily stand by my “solid for small-to-mid-sized companies” outlook … and I’ll leave it there. For no additional cost, Synology’s Active Backup for Microsoft 365 is a great value in my book, and I’ve implemented it for three tenants under my control.

Rounding Things Out: Entertainment

I did mention some “play” along with the work in this post’s title – not something that everyone thinks about when envisioning a network storage appliance. Or rather, I should say that it’s not something I had considered very much.

My conversations with the Synology folks and trips through the Package Center convinced me that there were quite a few different ways to have fun with a NAS. There are two packages I installed on my NAS to enable a little fun.

Package Number One: Plex Server

Admittedly, this is one capability I knew existed prior to getting my DS220+. I’ve been an avid Plex user and advocate for quite a few years now. When I first got on the Plex train in 2013, it represented more potential than actual product.

Nowadays (after years of maturity and expanding use), Plex is a solid media server for hosting movies, music, TV, and other media. It has become our family’s digital video recorder (DVR), our Friday night movie host, and a great way to share media with friends.

I’ve hosted a Plex Server (in a self-hosted virtual machine) for years, and I have several friends who have done the same. At least a few of my friends are hosting from NAS devices, so I’ve always had some interest in seeing how Plex would perform on a NAS device versus my VM.

As with everything else I’ve tried with my DS220+, it’s a piece of cake to actually get a Plex Server up-and-running. Install the Plex package, and the NAS largely takes care of the rest. The server is accessible through a browser, a Plex client, or directly from the NAS web console.

I’ve tested a bit, but I haven’t decommissioned the virtual machine (VM) that is my primary Plex Server – and I probably won’t. A lot of people connect to my Plex Server, and that server has had multiple transcodes going while serving up movies to multiple concurrent users – tasks that are CPU, I/O, and memory intensive. So while the NAS does a decent job in my limited testing here at the house, I don’t have data that convinces me that I’d continue to see acceptable performance with everyone accessing it at once.

One thing that’s worth mentioning: if you’re familiar with Plex, you know that they have a pretty aggressive release schedule. I’ve seen new releases drop on a weekly basis at times, so it feels like I’m always updating my Plex VM.

What about the NAS package and updates? Well, the NAS is just as easy to update. Updated packages don’t appear in the Package Center with the same frequency as the new Plex Server releases, and you won’t get the same one-click server update support (a feature that never worked for me since I run Plex Server non-interactively in a VM), but you do get a link to download a new package from the NAS’s update notification:

The “Download Now” button initiates the download of an .SPK file – a Synology/NAS package file. The package file then needs to be uploaded from within the Package Center using the “Manual Install” button:

And that’s it! As with most other NAS tasks, I would be hard-pressed to make the update process any easier.

Package Number Two: Docker

If you read the first post I wrote back in February as a result of getting the DS220+, you might recall me mentioning Docker as another of the packages I was really looking forward to taking for a spin.

The concept of containerized applications has been around for a while now, and it represents an attractive way to stand up application functionality without an administrator or installer needing to understand all of the ins and outs of a particular application stack, its prerequisites and dependencies, etc. All that’s needed is a container image and a host.

So, to put it another way: there are literally millions of Docker container images available that you can download and get running with very little time invested, and no knowledge of how to install, configure, or set up the underlying application or service is required on your part.

Let's Go Digging

One container I had my eye on from the get-go was itzg’s Minecraft Server container. itzg is the online handle used by a gentleman named Geoff Bourne from Texas, and he has done all of the work of preparing a Minecraft server container that is as close to plug-and-play as containers come.

Minecraft (for those of you without children) is an immensely popular game available on many platforms and beloved by kids and parents everywhere. Minecraft has a very deep crafting system and focuses on building and construction rather than on “blowing things up” (although you can do that if you truly want to) as so many other games do.

My kids and I have played Minecraft together for years, and I’ve run various Minecraft servers in that time that friends have joined us in play. It isn’t terribly difficult to establish and expose a Minecraft server, but it does take a little time – if you do it “manually.”

I decided to take Docker for a run with itzg’s Minecraft server container, and we were up-and-running in no time. The NAS Docker package has a wonderful web-based interface, so there’s no need to drop down to a command line – something I appreciate (hey, I love my GUIs). You can easily make configuration changes (like swapping the TCP port that responds to game requests), move an existing game’s files onto/off of the NAS, and more.
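For anyone curious what the NAS GUI is doing behind the scenes (or who simply prefers a shell), here’s a minimal sketch of the equivalent Docker CLI invocation for itzg’s container. The container name, volume path, and memory size are assumptions on my part; the EULA variable and default port come from the image’s standard usage.

[code language="powershell"]
# Run itzg's Minecraft server container detached, publishing the default game port.
# /volume1/docker/minecraft is a hypothetical NAS folder used to persist world data
# outside the container; MEMORY sets the Java heap size and can be tuned to taste.
docker run -d --name minecraft `
    -e EULA=TRUE `
    -e MEMORY=2G `
    -p 25565:25565 `
    -v /volume1/docker/minecraft:/data `
    itzg/minecraft-server
[/code]

Swapping the left-hand side of the -p mapping is the command-line equivalent of the port change mentioned above, and pointing -v at a different folder is how an existing world gets moved onto (or off of) the NAS.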

I actually decided to move our active Minecraft “world” (in the form of the server data files) onto the NAS, and we ran the game from the NAS for about two months. Although we had some unexpected server stops, the NAS performed admirably with multiple players connected concurrently. I suspect the stops were updates of some kind taking place rather than an actual problem.

The NAS-based Docker server handled everything well except Elytra flight. In all fairness, though, I haven’t yet been on a server of any kind where Elytra flight works in a way I’d describe as “well,” largely because of the I/O demands associated with loading and unloading sections of the world while flying around.

Conclusion

After a number of months of running with a Synology NAS on my network, I can’t help but say again that I am seriously impressed by what it can do and how it simplifies a number of tasks.

I began the process of server consolidation years ago, and I’ve been trying to move some tasks and operations out to the cloud as it becomes feasible to do so. Where I once wouldn’t have given a second thought to adding another Windows server to my infrastructure, I’m now looking at things differently. For anything a NAS can do more easily (which is the majority of what I’ve tried), I see myself trying it on the NAS first.

I once had an abundance of free time on my hands. But that was 20 – 30 years ago. Nowadays, I’m in the business of simplifying and streamlining as much as I can. And I can’t think of a simpler approach for many infrastructure tasks and needs than using a NAS.


Quick Tips for Managing the SharePoint 2010 Office Web Applications Cache

I presented remotely to the Boston Area SharePoint User Group (BASPUG) tonight (7/13/2016), and I referenced an article that I had written that is no longer available online. This post originally appeared as a “SharePoint Smarts” article from Idera. Idera is out of the SharePoint business nowadays, but the information I shared in that article is still relevant to those who use SharePoint 2010. So if you have a SharePoint 2010 environment and use the Office Web Apps, this post (and more specifically, the scripts contained within) is for you.

One of the hotly anticipated items in SharePoint 2010’s feature set is the introduction of the Microsoft Office Web Applications, or “Office Web Apps” for short. The release of the Office Web Apps opens up new possibilities for those who work with documents and files that are tied to Microsoft Word and other applications in the Microsoft Office family.

What Are the Office Web Apps?

In prior versions of SharePoint, viewing and editing Office documents that existed in SharePoint document libraries normally required a client computer possessing the Microsoft Office suite of applications. If you wanted to view or edit a Word document that existed in SharePoint, for example, you needed Microsoft Word (or an equivalent application) installed on your computer.

That situation changes with the arrival of the Office Web Apps. When a SharePoint 2010 farm is properly set up and configured with the Office Web Apps, it becomes possible to view and edit several different Office document types directly from within a browser as shown in Figure 1 below.

Open Document

Figure 1: Browser-based editing of a Microsoft Word document

The Office Web Apps provide browser-based viewing and editing support for Microsoft Excel, OneNote, PowerPoint, and Word document types, and this support extends to more than just Internet Explorer. Firefox 3.x, Safari 4.x, and Google Chrome browser types are also supported for viewing and editing – making the Office Web Apps an enabler of cross-platform collaboration that centers on Office documents.

A Word about the Plumbing

As you might imagine, browser-based rendering and editing of Office documents involves a number of complex processes that engage a variety of front-end, middle-tier, and back-end components. The front-end and middle-tier tasks that are tied to document viewing and editing are handled primarily by a new set of service applications that appear when the Office Web Apps are installed. These service applications (and their associated pages, handlers, and worker processes) take care of the business of document conversion, load-balancing, and rendering for browser consumption.

Document conversion and rendering typically generate a combination of images, HTML, JavaScript, and XAML (or eXtensible Application Markup Language) that are sent to consuming browsers. The creation of these document resources is an expensive process, both in terms of CPU cycles and storage. To improve performance levels, it makes sense to generate these document resources only as needed and reuse them whenever possible. That’s where the Office Web Apps cache comes in.

The Office Web Apps cache is the back-end store that is responsible for housing images, HTML, JavaScript, and XAML resources once they have been created for a document. Each time a document is converted into a set of these resources, the resources are stored in the Office Web Apps cache. When a request for a document comes into SharePoint, the cache is checked to see if the document has been previously requested and rendered. If it has, and the cached document resources are up-to-date, then the request is served from the cache instead of engaging the Office Web Apps to convert and re-render the document. Serving document resources from the Office Web Apps cache can yield significant performance improvements over scenarios where no cache is employed.

Quick side note before going too far: the Office Web Apps cache is only employed for Word and PowerPoint document types. It is not used for OneNote or Excel documents.

Inside the Office Web Apps Cache

The Office Web Apps cache takes the form of a single site collection for each Web application within a SharePoint farm. When the Office Web Apps are installed and configured in a SharePoint environment, a couple of new timer jobs are installed and run regularly within the farm. One of those timer jobs, the Office Web Apps Cache Creation timer job, ensures that each Web application where the Office Web Apps are running has a site collection like the one shown below in Figure 2.

Site Collection

Figure 2: The Office_Viewing_Service_Cache site collection

The Office_Viewing_Service_Cache site collection is a standard Team Site, and it is the location where resources are stored following the conversion and rendering of either a Word or PowerPoint document by the Office Web Apps.
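If you’d rather locate the cache site collection from PowerShell instead of browsing for it, the same cmdlet used by the scripts later in this article will hand it back to you. A quick sketch (the Web application URL is a placeholder):

[code language="powershell"]
# Load the SharePoint cmdlets if they aren't already available.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Grab the Office Web Apps cache site collection for a Web application and
# report where it currently lives.
$cacheSite = Get-SPOfficeWebAppsCache -WebApplication "http://yourwebapp"
Write-Host "Cache site collection URL: $($cacheSite.Url)"
Write-Host "Housed in content database: $($cacheSite.ContentDatabase.Name)"
[/code]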

The Team Site can be accessed just like any other SharePoint Team Site, and a glimpse inside the All Documents library (showing a number of document resources) appears below in Figure 3.

Cache Library

Figure 3: All Documents library in an Office Web Apps cache site collection

Managing the Cache

For such a complex system, the Office Web Apps components do a pretty good job of maintaining themselves without external intervention. This extends to the site collections that are used by Office Web Apps for caching purposes, as well. For example, the Office Web Apps Expiration timer job that is installed with the Office Web Apps removes old document resources from cache site collections once they’ve hit a certain age. The timer job also ensures that each of the site collections responsible for caching has adequate space to serve its purpose.
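If you want to confirm that those timer jobs are present in your farm and see when they last ran, a quick PowerShell check does the trick. This is just a sketch; the display-name filter assumes the jobs follow the standard “Office Web Apps” naming.

[code language="powershell"]
# Load the SharePoint cmdlets if they aren't already available.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# List the Office Web Apps timer jobs along with their schedules and last run times.
Get-SPTimerJob |
    Where-Object { $_.DisplayName -like "*Office Web Apps*" } |
    Select-Object DisplayName, Schedule, LastRunTime
[/code]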

This doesn’t mean that there aren’t opportunities for tuning and maintenance, though. In fact, there are a couple of things that every administrator should do and review when it comes to the Office Web Apps cache.

Tip #1: Relocate the Cache to a New Database

By default, the Office Web Apps Cache Creation timer job creates an Office_Viewing_Service_Cache site collection in a content database that is collocated with one or more of the “real” site collections within each of your content Web applications. Since the cache site collection is allowed to grow to a beefy 100GB by default, it makes sense to relocate the cache site collection to its own (new) content database. By relocating the cache site collection to its own content database, it becomes easy to exclude it from other maintenance such as backups.

Relocating the cache site collection is pretty straightforward, and it can be accomplished easily with the following RelocateOwaCache.ps1 PowerShell script. Simply save the script, execute it, and supply the URL of a Web application within your farm where the Office Web Apps are running. The script will take care of creating a new content database within the Web application, and it will then move the Web application’s Office Web Apps cache site collection to the newly created content database.

[code language="powershell"]
<#
.SYNOPSIS
RelocateOwaCache.ps1
.DESCRIPTION
Relocates the Office Web Apps cache for a specified Web application to a new content database that is created by the script
.NOTES
Author: Sean McDonough
Last Revision: 07-June-2011
.PARAMETER targetUrl
A Web application where Office Web Apps are in use
.EXAMPLE
RelocateOwaCache.ps1 http://www.TargetWebApplication.com
#>
param
(
[string]$targetUrl = "$(Read-Host 'Target Web application URL [e.g. http://hostname]')"
)

function RelocateCache($targetUrl)
{
# Ensure that the SharePoint cmdlets are loaded before continuing
$spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
if ($spCmdlets -eq $Null)
{ Add-PSSnapin Microsoft.SharePoint.PowerShell }

# Get the name of the current database where the cache is located; it
# will serve as the basis for a new content database name.
$cacheSite = Get-SPOfficeWebAppsCache -WebApplication $targetUrl -ErrorAction stop
$newDbName = $cacheSite.ContentDatabase.Name + "_OWACache"

# Create a new content database and relocate the cache to it. Make sure the
# user knows what’s happening each step of the way.
Write-Host "- creating a new content database …"
$cacheDb = New-SPContentDatabase -Name $newDbName -WebApplication $targetUrl -ErrorAction stop
Write-Host "- moving the Office Web Apps cache …"
Move-SPSite $cacheSite -DestinationDatabase $cacheDb -Confirm:$false -ErrorAction stop
Write-Host "- performing required IISRESET …"
iisreset | Out-Null

# Let the user know where the cache is now located
Write-Host "Cache successfully relocated to the ‘$newDbName’ database."

# Abort script processing in the event an exception occurs.
trap
{
Write-Warning "`n*** Script execution aborting. See below for problem encountered during execution. ***"
$_.Message
break
}
}

# Launch script
RelocateCache $targetUrl
[/code]

Tip #2: Review Size and Expiration Settings

When an Office_Viewing_Service_Cache site collection is provisioned within a Web application by the Office Web Apps Cache Creation timer job, it is initially configured to hold cached document resources for 30 days. As mentioned in Tip #1, a cache site collection can also grow to a maximum of 100GB by default.

Whether or not these default settings are appropriate for a Web application depends primarily upon the nature of the site collections housed within the Web application. When site collections contain primarily static documents or content that changes infrequently, it makes sense to allow the cache to grow larger and expire content less often than normal. This maximizes the benefit obtained from caching since document content turns over less frequently.

On the other hand, site collections that experience frequent document turnover and heavy collaboration traffic tend to benefit very little from large cache sizes and long expiration periods. In site collections of this nature, cached content tends to become stale quickly. Little benefit is derived from holding onto document resources that may only be good for days or even hours, so it makes sense to reduce the maximum cache size and shorten the expiration period.

Tip #3: Give Yourself Some Warning

Since each Office Web Apps cache is a Team Site like any other site collection, you can leverage standard SharePoint site collection features and capabilities to help you out. One such mechanism that can be of assistance is the ability to have an e-mail warning sent to site collection owners once a site collection’s size hits a predefined threshold. In the case of the Office Web Apps cache, such a warning could be a cue to increase the maximum size of the cache site collection or perhaps lower the expiration period for document resources housed within the site collection.

Like the maximum cache size setting described in Tip #2, the ability to send e-mail warnings once the cache reaches a threshold is actually tied to SharePoint’s site collection quota capabilities. The maximum size of the cache site collection is handled as a storage quota, and the warning threshold maps directly to the quota’s warning threshold as shown below in Figure 4. In the case of Figure 4, a maximum cache size of 50GB is in effect for the cache site collection, and the e-mail warning threshold is set for 25GB.

Quota

Figure 4: Quota settings for an Office Web Apps cache site collection

Knobs and Dials

Tips #2 and #3 discussed some of the more straightforward Office Web Apps cache settings that are available to you, but you might be wondering how you actually go about changing them.

The AdjustOwaCache.ps1 PowerShell script that appears below provides you with an easy way to review and change the settings discussed. Simply save the script, execute it, and supply the URL of the Web application containing the Office Web Apps cache you’d like to adjust. The script will show you the cache’s current settings and give you the opportunity to modify them.

[code language="powershell"]
<#
.SYNOPSIS
AdjustOwaCache.ps1
.DESCRIPTION
Dumps several common OWA cache settings to the console for a selected Web application and provides a mechanism for altering those values
.NOTES
Author: Sean McDonough
Last Revision: 08-June-2011
.PARAMETER targetUrl
A Web application where Office Web Apps are in use
.EXAMPLE
AdjustOwaCache.ps1 http://www.TargetWebApplication.com
#>
param
(
[string]$targetUrl = "$(Read-Host 'Target Web application URL [e.g. http://hostname]')"
)

function AdjustCache($targetUrl)
{
# Ensure that the SharePoint cmdlets are loaded before continuing
$spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
if ($spCmdlets -eq $Null)
{ Add-PSSnapin Microsoft.SharePoint.PowerShell }

# Create an easy converter for GB to bytes
$GBtoBytes = 1024 * 1024 * 1024

# Get a reference to the cache site collection and extract the values we’ll be
# working with and (potentially) altering.
$cacheSite = Get-SPOfficeWebAppsCache -WebApplication $targetUrl -ErrorAction stop
$wacSize = $cacheSite.Quota.StorageMaximumLevel / $GBtoBytes
$wacWarn = $cacheSite.Quota.StorageWarningLevel / $GBtoBytes
$wacExpire = 30
if ($cacheSite.RootWeb.Properties.ContainsKey("waccacheexpirationperiod"))
{ $wacExpire = $cacheSite.RootWeb.Properties["waccacheexpirationperiod"] }
Write-Host "Current OWA cache values for ‘$targetUrl’"
Write-Host "- Maximum Cache Size (GB): $wacSize"
Write-Host "- Warning Threshold (GB): $wacWarn"
Write-Host "- Expiration Period (Days): $wacExpire"

# Give the user the option to make changes.
$yesOrNo = Read-Host "Would you like to change one or more values? [y/n]"
if ($yesOrNo -eq "y")
{
[Int64]$newWacSize = Read-Host "- Maximum Cache Size (GB)"
Write-Host "- Warning Threshold (GB)"
[Int64]$newWacWarn = Read-Host " (supply 0 for no warning)"
[int]$newWacExpire = Read-Host "- Expiration Period (Days)"

# Convert GB values to bytes and set the cache
$newWacSize = ($newWacSize * $GBtoBytes)
$newWacWarn = ($newWacWarn * $GBtoBytes)
Set-SPOfficeWebAppsCache -WebApplication $targetUrl -ExpirationPeriodInDays $newWacExpire -MaxSizeInBytes $newWacSize -WarningSizeInBytes $newWacWarn -ErrorAction stop
}

# Abort script processing in the event an exception occurs.
trap
{
Write-Warning "`n*** Script execution aborting. See below for problem encountered during execution. ***"
$_.Message
break
}
}

# Launch script
AdjustCache $targetUrl
[/code]

Conclusion

The Office Web Apps are a powerful addition to SharePoint 2010 and pave the way for greater collaboration on Office documents without the need for the Microsoft Office suite of client applications. The Office Web Apps cache is an important part of the larger Office Web Apps equation, and the cache is generally pretty good about taking care of itself. As shown in this article, though, it is still a good idea to relocate the cache from its default location. At the same time, a little bit of tuning and e-mail alerting can go a long way towards ensuring that the cache operates optimally for you in your environment.

Is a Higher SharePoint Backup Thread Count Better?

Many administrators have noted that SharePoint 2010 allows them to tune the number of threads that can be used for farm backup and restore operations, but very few have played with the settings. In this post, I share some results I compiled while testing the settings in my own environments. I also share the PowerShell script I assembled for my testing so you can tune the backup and restore thread settings in your own SharePoint farm.

Scalability in the hardware and software space is all about parallel computing nowadays. Consider our modern hardware: it used to be that all we really cared about was how fast our CPU could run (“how many GHz?”). Now, we care more about how many cores our CPU has, whether or not those cores support Hyper-threading, how many memory channels our CPU has available to it, etc. Scale-out beats scale-up.

The same is largely true in the software space. Most IT folks learned some time ago that “multithreading” and “higher performance” tended to go hand-in-hand or were at least associated in some way. Multiple threads of execution meant better scheduling of limited processor resources and fewer chances that one long-running operation would bottleneck an entire application.

Configuring SharePoint 2010 Farm Backup and Restore

When I first saw the following options in the “Configure Backup Settings” section of SharePoint 2010’s Central Administration site, it brought a big grin to my face:

Thread Configuration

In SharePoint 2007 and earlier, administrators had no real levers to pull to try and tune the performance of farm backup and restore operations. This obviously changed with SharePoint 2010. We were basically being handed a way to adjust those processes as we saw fit – for better or worse.

Strangely enough, though, I never really took the time to explore the impact of those settings in my SharePoint environments. I always left the number of assigned threads for backup and restore operations at three. I would have liked to mess around with the values, but something else was always more important in the grand scheme of things.

Why Now?

I’ve been working on a new “backup tips and tricks” whitepaper, and I found myself looking for backup and restore concerns within the SharePoint platform that I may not have given much attention to in the past. It didn’t take much wading through Central Administration before I once again found myself looking at thread counts for backup and restore operations.

Doing a little bit of Internet (background) research confirmed what I had suspected: no one else had really spent any time on the topic either. In fact, the only “fresh” and non-copyright-infringing material I found came from a Microsoft TechNet post titled Backup and recovery best practices (SharePoint Server 2010) … and to tell you the truth, the following paragraph from the section titled “Configure SharePoint settings for better backup or restore performance” really bugged me:

If you are using the Backup-SPFarm cmdlet, you can use the BackupThreads parameter to specify how many threads SharePoint Server 2010 will use during the backup process. The more threads you specify, the more resources that backup operation will take, but the faster that it will finish, if sufficient resources are available. However, each thread is reported individually in the log files, so using fewer threads makes interpreting the log files easier. By default, three threads are used. The maximum number of threads available is 10.

Without an understanding of how multithreading (in general) and SharePoint backup (specifically) work, this could easily be interpreted as follows:

The greater the number of threads you assign, the faster your backups will complete.

I realize that my summary is an oversimplification, but I believe that many administrators see the TechNet paragraph as I summarized it. And that concerns me.

I’ve always told people that increasing the backup thread count could yield better performance, but any adjustments would need to be tested in the target farm where they are to be implemented. Realistically speaking, there are several participants and a lot of moving parts in any SharePoint farm backup. Besides the SharePoint server where the backup operation is being coordinated, there is the performance of one or more SQL Servers to consider. The capabilities and restrictions of the backup destination location (typically a UNC file share) also need to be factored-in since that destination is being written to by both the SharePoint Server and one or more SQL Servers.

Setting the number of backup threads to 10 on a SharePoint Server of infinite capability and resources doesn’t guarantee a fast backup, because the farm might have a slow SQL Server, a less-capable backup destination location, a slow or congested network, or a host of other complicating factors.
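For reference, the thread count is just a parameter on the same cmdlet used for any full farm backup. The test script later in this post loops over it, but a single invocation looks like the following (the UNC path is a placeholder):

[sourcecode language="powershell"]
# Load the SharePoint cmdlets if they aren't already available.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Run one full farm backup using six backup threads instead of the default of three.
Backup-SPFarm -BackupMethod Full -Directory "\\FileShare\Backups" -BackupThreads 6
[/sourcecode]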

Oh Yeah? Prove It.

Of course, all of this is just a bunch of hand-waving without proof. So, the scientist in me (yeah, I actually used to be a chemist) decided to take over and devise a series of simple tests to see if there is any real weight to the arguments I’ve been making.

I began with the hypothesis that the easiest and most visible way to gauge the performance of a farm backup operation is to measure how long a backup takes to run; e.g., a farm backup that takes 10 minutes to run is faster than a backup that takes 20 minutes to run if farm content, hardware, configuration, and other factors remain constant. Since SharePoint 2010 provides the ability to specify anywhere from one to 10 backup threads, running a series of backups where the only variable is backup thread count should determine if greater or fewer backup threads yield better performance.

You might recall that I also mentioned that farm topology is a factor in the overall backup equation. As part of my experiment, I decided to run the tests on two different farms I have available to me. General descriptions for each farm:

  • Single-Server Farm: my single server farm environment is a VM running on my laptop. The VM houses SharePoint, SQL Server, and the backup location being targeted. The laptop hardware is a Core-i7 quad-core processor, and the underlying storage for the VM is a solid-state drive (SSD). Hardware bottlenecks should be minimized, and network latency isn’t a factor since backup operations are conducted against a local drive within the VM.
  • Multi-Server Farm: my multi-server environment is the “production” environment on my home network. It consists of a SharePoint Server VM running on a Hyper-V host that also hosts other VMs. The SQL Server instance backing the farm is a non-virtualized SQL Server housing all of the SharePoint databases as well as a few databases for other applications. The backup destination location is a virtualized file server with a pass-through drive array (eSATA with RAID-5). Overall hardware, in this case, is “okay” but obviously not dedicated purely to SharePoint. In addition, network latency and bandwidth (GbE) are also in-play as potential sources of impact.

These two environments have pretty different overall topologies, and it was my hope that I’d see some effect on the performance numbers as a result.

The Script

To run the tests reproducibly, I needed a PowerShell script. So, I put the following script together while I had a bit of free time one night. Feel free to pluck this out to use for testing in your SharePoint environment, as well.

[sourcecode language="powershell"]
<#
.SYNOPSIS
TestBackupThreads.ps1
.DESCRIPTION
This script is used to conduct and time a series of backups using different thread counts.
The output can then be used to make an educated decision on the number of backup threads to
assign for use in farm-level backups.
.NOTES
Author: Sean McDonough
Last Revision: 25-July-2012
.PARAMETER TestLocation
A UNC path to a location that can be used to create test backup sets
.EXAMPLE
TestBackupThreads \\FileShare\TestLocation
#>
param
(
[string]$TestLocation = "$(Read-Host 'UNC path to test backup location [e.g. \\FileShare\TestLocation]')"
)

function TestThreads($backupLocation)
{
# Ensure that the SharePoint cmdlets are loaded before continuing
$spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
if ($spCmdlets -eq $Null)
{ Add-PSSnapin Microsoft.SharePoint.PowerShell }

# Setup some variables we’ll need for execution.
$threadTimes = @{} # Hash table to hold timing results
$backupItems = Join-Path $backupLocation "spbr*" # Used to delete temp backup files

# We need to execute a full farm backup for each thread count 1 through 10
Clear-Host
Write-Host "`nBackup thread count testing process beginning."
for ($threads = 1; $threads -lt 11; $threads++)
{
# Clean out any backup contents from the test location
Remove-Item $backupItems -recurse

# Grab the starting date/time (for later comparison), kick-off a farm backup, and then
# grab the stop date/time.
Write-Host "`nInitiating a backup with $threads thread(s) …"
$startPoint = Get-Date
Backup-SPFarm -BackupMethod Full -Directory $backupLocation -BackupThreads $threads
$stopPoint = Get-Date

# Store and report results
$keyName = "Backup with {0} thread(s)" -f $threads
$elapsedSeconds = "{0:N0}" -f ($stopPoint - $startPoint).TotalSeconds
$threadTimes[$keyName] = $elapsedSeconds
Write-Host "Backup with $threads thread(s) complete"
Write-Host ("- time to complete (in seconds): {0}" -f $elapsedSeconds)
}

# Do a final sweep of the test backup location to clean out backup items
Remove-Item $backupItems -recurse

# Dump the results sorted in order of quickest to longest
Write-Host "`nBackup thread count testing process complete."
$threadTimes.GetEnumerator() | Sort-Object Value

# Abort script processing in the event an exception occurs.
trap
{
Write-Warning "`n*** Script execution aborting. See below for problem encountered during execution. ***"
$_.Message
break
}
}

# Launch script
TestThreads $TestLocation
[/sourcecode]

The script is fairly straightforward in what it does. You supply a TestLocation parameter to specify where farm backup test data should be written, and the script will run a series of full farm backups using the supplied location as the backup destination. The script starts with a full backup using one backup thread; at the end of each full farm backup, the script notes how long the backup took (in seconds) and cleans up the contents of the TestLocation folder. The number of backup threads is then incremented, and the next test is run. When the script has completed all backup tests, it sorts the results from the “quickest backup” (i.e., the backup thread count requiring the least amount of time) to the slowest.

Test Results

I ran a series of three tests in each of the aforementioned environments for a total of six test runs. Although there’s still quite a bit of variability between individual results within a backup thread series, some trends did appear to emerge.

Single-Server Farm

Backup Times for the Single-Server Environment

With the single-server environment, increasing the number of backup threads did appear to have a directional impact on performance. A single backup thread proved to be the slowest option for the farm backup, and “greater than one” thread resulted in better performance.

If you look at the average values, though, there wasn’t a tremendous difference between the slowest thread count (410 seconds for one thread) and the fastest (388 seconds for 10 threads). We’re only talking about a 5% to 6% difference overall. To truly find the optimum number of backup threads in an environment like this would require more than three test runs to account for standard deviation and establish significance.

Oh, and for those that might be wondering: I’m sure I introduced some of my own variability into the results. Although I didn’t do anything processor or disk intensive during the test runs, I didn’t go out of my way to minimize the impact of services, background operations, etc. To repeat: more testing (with better controls) would be needed for truly conclusive results. The only thing I started to show with this particular set of tests is that multithreading seemed to improve backup performance.

Multi-Server Farm

Things got quite a bit more interesting (to me) when I switched over to multi-server farm testing.

Backup Times for the Multi-Server Environment

In the multi-server environment, the average for using just one backup thread (1413 seconds) appeared to be significantly faster than the next best option (1747 seconds for seven backup threads) – in the neighborhood of 20% or so faster. Just like the single-server results, additional trials would be needed to completely validate the observations, but the results are less ambiguous (given the relatively greater precision of the samples) than with the single-server runs.

Do you find this surprising? Given my multi-server environment and what I know about it, I can’t really say that I was caught flat-footed by the results. Going into the tests, my hypothesis was that my backup destination location would likely be the “weak link” in my overall farm and backup topology. The SharePoint Server was doing well, the SQL Server was relatively robust … but all of that backup activity was hard on my (virtualized) file server. Multiple servers trying to write to the backup location were swamping it and the network, and adding additional backup threads to the mix didn’t end up helping or improving the overall backup process.

The Take-Away

At the end of the day, I recognize that these tests of mine didn’t prove anything conclusively. Frankly, conclusive proof wasn’t my goal. The intent of these experiments wasn’t to say “more threads are better” or “more threads are worse.”

The only point I’m making (I hope) by sharing these results is this: until you run some real tests of your own in your SharePoint environment, you really don’t know where your backup thread sweet spot is. You can try to guess it, but it’s just a guess. And guessing is really no better than simply leaving the backup thread count set to its default value of three.

References and Resources

  1. Wikipedia: Parallel Computing
  2. Wikipedia: Hyper-threading
  3. Wikipedia: Thread (computing) and Multithreading
  4. TechNet: Backup and recovery best practices (SharePoint Server 2010)

Kicking-Off 2012: SharePoint Style

My SharePoint community activities are off to a roaring start in 2012. In this post, I’ll be recapping a couple of events from the end of 2011, as well as covering new activities taking place during the first couple of months of 2012.

I don’t know how 2011 ended for most of you, but the year closed without much of a bang for me. I’m not complaining about that; the general slow-down gave me an opportunity to get caught up on a few things, and it was nice to spend some quality time with my friends and family.

While 2011 went out relatively quietly, 2012 seems to have arrived with a vengeance. In fact, I was doing some joking on Twitter with Brian Jackett and Rob Collie shortly after the start of the year about #NYN, or “New Year’s Nitrous.” It’s been nothing but pedal-to-the-metal and then some since the start of the year, and there’s absolutely no sign of it letting up anytime soon. I like staying busy, but in some ways I’m wondering whether or not there will be enough time to fit everything in. One day at a time …

Here’s a recap of some stuff from the tail end of 2011, as well as what I’ve got going on for the first couple of months in 2012. After February, things actually get even crazier … but I’ll save events beyond February for a later post.

SPTV

During the latter part of 2011, I had a conversation with Michael Hiles and Jon Breyfogle of DSC Consulting, a technical consulting and media services company based here in Cincinnati, Ohio. Michael and Jon had an idea: they wanted to develop a high-quality, high-production-value television program that centered on SharePoint and the larger SharePoint ecosystem/community. The initial idea was that the show would feature an interview segment, coverage of community events, SharePoint news, and some other stuff thrown in.

It was all very preliminary stuff when they initially shared the idea with me, but I told them that I thought they might be on to something. The idea of a professional show that centered on SharePoint wasn’t something that was being done, and I was really curious to see how they would do it if they elected to move forward.

Just before Christmas, Jon contacted me to let me know that they were indeed moving forward with the idea … and he asked if I’d be the show’s first SharePoint guest. I told him I’d love to help out, and so the bulk of the pilot episode was shot at the Village Tavern in Montgomery one afternoon with host Mark Tiderman and co-host Craig Pereira. Mark and I shot some pool, discussed disaster recovery, and just talked SharePoint for a fair bit. It was really a lot of fun.

The pilot isn’t yet available (publicly), but a teaser for the show is available on the SPTV web site. All in all, I think the DSC folks have done a tremendous job creating a quality, professional program. Check out the SPTV site for a taste of what’s to come!

SharePoint Saturday Columbus Kick-Off

Around the time of the SPTV shooting, the planning committee for SharePoint Saturday Columbus (Brian Jackett, Jennifer Mason, Nicola Young, and I) had a checkpoint conversation to figure out what, if anything, we were going to do about SharePoint Saturday Columbus in 2012. Were we going to try to do it again? If so, were we going to change anything? What was our plan?

Everything with SPSColumbus in 2012 is still very preliminary, of course, but I can tell you that we are looking forward to having the event once again! We expect that we’ll attempt to hold the event during roughly the same part of the year as we’ve had it in the past (i.e., late summer). As we start to nail things down and come up with concrete plans, I’ll share those. Until then, keep your eyes on the SharePoint Saturday site and the SPSColumbus account on Twitter!

SharePointCincy

Those of us who reside in and around Cincinnati, Ohio, are very fortunate when it comes to SharePoint events and opportunities. In the past we’ve had SharePoint Saturday Indianapolis just to the west of us, SharePoint Saturday Columbus to the northeast, and last year we had our first ever SharePoint Saturday Cincinnati (which was a huge success!). On top of that, last year was the first ever SharePointCincy event.

SharePointCincy was similar in some ways to a SharePoint Saturday, but it was different in others. It was a day full of SharePoint sessions, but we also had Fred Studer (the General Manager for the Information Worker product group at Microsoft) come out and speak. Kroger, a local company whose SharePoint implementation I’m very familiar with, also shared their experience with SharePoint. Rather than go into too much detail, though, I encourage you to check out the SharePointCincy site yourself to see what it was all about.

Of course, the whole reason I’m mentioning SharePointCincy is that it’s coming again in March of this year! Last year’s success (the event was attended by hundreds) pretty much guaranteed that the event would happen again.

I’m part of a planning team that includes Geoff Smith, Steve Caravajal of Microsoft, Mike Smith from MAX Technical Training, and the infamous Shane Young of SharePoint911 (which, in case you didn’t know it, is based here in Cincinnati). Four of the five of us met last Friday for a kick-off meeting and to discuss how the event might go this year. It was a good breakfast and a productive meeting. I don’t have much more to share at this point (other than the fact that, “yes, it’s happening”), but I will share information as it becomes available. Stay tuned!

Secrets of SharePoint Webcast

It’s been a few months since my last webcast on SharePoint caching, so my co-workers at Idera approached me about doing another webcast. I guess I was due.

This Wednesday, January 18th, I’ll be delivering a Secrets of SharePoint webcast titled “The Essentials of SharePoint Disaster Recovery.” Here’s the abstract:

“Are my nightly SQL Server backups good enough?” “Do I need an off-site disaster recovery facility?” “How do I even start the process of disaster recovery planning?” These are just a few of the more common questions that arise when the topic of SharePoint disaster recovery comes up. As with most things SharePoint, the real answer to each question is oftentimes “it depends …”

In this business and process-centric session, we will be taking a look at the topic of SharePoint disaster recovery from multiple perspectives: business continuity planner, technical architect, platform owner, and others. Critical concepts and terms will be explained and defined, and an effective process for analyzing and formulating a disaster recovery plan will be discussed. We’ll also highlight some common mistakes that take place when working to build a disaster recovery strategy and how you can avoid them. By the end of this session, you will be armed with the knowledge needed to plan or review a disaster recovery strategy for your SharePoint environment.

For those of you who have heard me speak and/or attended my webcasts in the past, you’ll probably find this session to be a bit different than ones you’ve seen or heard. The main reason I say that is because the content is primarily business-centric rather than nuts-and-bolts admin content.

That doesn’t mean that SharePoint administrators shouldn’t attend, though; on the contrary, the webcast includes a number of very important messages for admins (e.g., why DR must be driven from the business angle rather than the technical/admin angle) that could really help them in their jobs. The session expands the scope of the DR discussion, though, to include the business aspects that are so tremendously important during the DR planning process.

If what I’ve shared sounds interesting, please sign up! The webcast is free, and I’ll be doing Q&A after the session.

SharePoint Saturday Austin

This upcoming weekend, I’ll be heading down to Austin, Texas, for the first SharePoint Saturday Austin event! The event is taking place on January 21st, and it is being coordinated by Jim Bob Howard (of Juniper Strategy) and Matthew Lathrop (of Rackspace). Boy oh boy – do they have an amazing line-up of speakers and contributors. It’s quite impressive; check out the site to see what I mean.

The guys are giving me the opportunity to present “The Essentials of SharePoint Disaster Recovery” session, and I’m looking forward to it. I’m also looking forward to catching up with many of my friends … and some of my Idera co-workers (who will be coming in from Houston, Texas).

If you’re in the Austin area and looking for something to do this upcoming Saturday, come to the event. It’s free, and it’s a great chance to take in some phenomenal sessions, win some prizes, and be a part of the larger SharePoint community!

SharePoint Pro Demo Booth Session

On Monday, February 20th at 12pm EST, I’m going to be doing a “demo booth” session through SharePoint Pro Magazine. The demo booth is titled “Backup Basics: SharePoint’s Backup and Restore Capabilities and Beyond.” Here’s the description for the demo booth:

SharePoint ships with a number of tools and capabilities that are geared toward protecting content and configuration. These tools provide basic coverage for your SharePoint environment and the content it contains, but they can quickly become cumbersome in real world scenarios. In this session, we will look at SharePoint’s backup and restore capabilities, discuss how they work, and identify where they fall short in common usage scenarios. We will also highlight how Idera’s SharePoint backup solution picks up where the SharePoint platform tools leave off in order to provide complete protection that is cost-effective and easy to use.

The ā€œdemo boothā€ concept is something new for me; it’s part ā€œplatform educationā€ (which is where I normally spend the majority of my time and energy) and part ā€œproduct educationā€ – in this case, education about Idera’s SharePoint backup product. Being both the product manager for Idera SharePoint backup and a co-author for the SharePoint 2010 Disaster Recovery Guide leaves me in something of a unique position to talk about SharePoint’s built-in backup/restore capabilities, where gaps exist, and how Idera SharePoint backup can pick up where the SharePoint platform tools leave off.

If you’re interested in learning more about Idera’s SharePoint backup product and/or how far you can reasonably push SharePoint’s built-in capabilities, check out the demo booth.

SPTechCon 2012 San Francisco

February comes to a close with a big bang when SPTechCon rolls into San Francisco for the first of two stops in 2012. For those of you who check my blog now and again, you may have noticed the SPTechCon “I’ll be speaking at” badge and link on the right-hand side of the page. Yes, that means I’ll be delivering a session at the event! The BZ Media folks always put on a great show, and I’m certainly proud to be a part of SPTechCon and presenting again this time around.

At this point, I know that I’ll be presenting “The Essentials of SharePoint Disaster Recovery.” I think I’m also going to be doing another lightning talk; I need to check up on that, though, to confirm it.

I also found out that John Ferringer (my co-author and partner-in-crime) and I are going to have the opportunity to do an SPTechCon-sponsored book signing (for our SharePoint 2010 Disaster Recovery Guide) on the morning of Wednesday the 29th.

If you’re at SPTechCon, please swing by to say hello – either at my session, at the Idera booth, the book signing, or wherever you see me!

Additional Reading and Resources

  1. Blog: Brian Jackett’s Frog Pond of Technology
  2. Blog: Rob Collie’s PowerPivotPro
  3. Company: DSC Consulting
  4. Site: SPTV
  5. LinkedIn: Mark Tiderman
  6. LinkedIn: Craig Pereira
  7. Event: SharePoint Saturday Columbus
  8. Blog: Jennifer Mason
  9. Twitter: Nicola Young
  10. Site: SharePoint Saturday
  11. Twitter: SharePoint Saturday Columbus
  12. Event: SharePoint Saturday Cincinnati
  13. Event: SharePointCincy
  14. LinkedIn: Geoff Smith
  15. Blog: Steve Caravajal’s Ramblings
  16. Blog: Mike Smith’s Tech Training Notes
  17. Company: MAX Technical Training
  18. Blog: Shane Young’s SharePoint Farmer’s Almanac
  19. Company: SharePoint911
  20. Webcast: “Caching-In” for SharePoint Performance
  21. Site: Secrets of SharePoint
  22. Webcast: The Essentials of SharePoint Disaster Recovery
  23. Event: SharePoint Saturday Austin
  24. Blog: Jim Bob Howard
  25. Company: Juniper Strategy
  26. LinkedIn: Matthew Lathrop
  27. Company: Rackspace
  28. Company: Idera
  29. Event: SharePoint Pro Demo Booth Session
  30. Site: SharePoint Pro Magazine
  31. Product: Idera SharePoint backup
  32. Book: SharePoint 2010 Disaster Recovery Guide
  33. Event: SPTechCon 2012 San Francisco
  34. Company: BZ Media
  35. Blog: John Ferringer’s My Central Admin

Wrapping Up 2011

After some time away, I’m getting back to blogging with a recap of the last several months’ worth of events. I cover a couple of SharePoint Saturdays, a webcast, my new whitepaper, and a new CodePlex project for SharePoint administrators.

Over the last several months, I haven’t been blogging as much as I’d hoped to; in reality, I haven’t blogged at all. There are a couple of reasons for that: one of them was our recent house move (and the aftermath), and the other was a little more personal. Without going into too much detail: we were contending with a very serious health issue in our family, and that took top priority.

The good news is that the clouds are finally parting, and I’m heading into the close of 2011 on a much better note (and with more time) than I’ve had over the last several months. To get back into some blogging, I figured I’d wrap up the last several months’ worth of activities that took place since SharePoint Saturday Columbus.

Secrets of SharePoint (SoS) Webcast

A lot of things started coming together towards the end of October, and the first of those was another webcast that I did for Idera titled “’Caching-In’ for SharePoint Performance.” The webcast covered each of SharePoint’s built-in caching mechanisms (object caching, BLOB caching, and page output caching) as well as the Office Web Applications’ cache. I provided a rundown on each mechanism, how it worked, how it could be leveraged, and some watch-outs that came with its use.

The webcast was basically a lightweight version (40 minutes or so) of the longer (75 minute) presentation I like to present at SharePoint Saturday events. It was something of a challenge to squeeze all of the regular session’s content into 40 minutes, and I had to cut some of the material I would have liked to keep … but the final result turned out pretty well.

If you’re interested in seeing the webcast, you can watch it on-demand from the SoS webcast archive. I also posted the slides in the Resources section of this blog.

SharePoint Saturday Cincinnati

On Saturday October 29th, Cincinnati had its first-ever SharePoint Saturday Cincinnati event. The event took place at the Kingsgate Marriott on Goodman Drive (near University Hospital), and it was very well attended – so much so that Stacy Deere and the other folks who organized the event are planning to do so again next year!

Many people from the local SharePoint community came out to support the event, and we had a number of folks from out of town come rolling in as well to help ensure that the event was a big success. I ended up delivering two sessions: my “’Caching-In’ for SharePoint Performance” session and my “SharePoint 2010 Disaster Recovery: New Capabilities, New Possibilities!” session.

I had a great time at the event, and I’m hoping I’ll be fortunate enough to participate again on the next go ’round!

New Disaster Recovery WhitePaper

My co-author and good friend John Ferringer and I were hard at work throughout the summer and early Fall putting together a new disaster recovery whitepaper for Idera. The whitepaper is titled “New Features in SharePoint 2010: A Disaster Recovery Love Story,” and it’s a bromance novel that only a couple of goofballs like John and I could actually write …

Okay, there’s actually no romance in it whatsoever (thank heavens for prospective readers – no one needs us doing that to them), but there is a solid chunk of coverage on SharePoint 2010’s new platform capabilities pertaining to disaster recovery. We also review some disaster recovery basics in the whitepaper, cover things that have changed since SharePoint 2007, and identify some new watch-out areas in SharePoint 2010 that could have an impact on your disaster recovery planning.

The whitepaper is pretty substantial at 13 pages, but it’s a good read if you want to understand your platform-level disaster recovery options in SharePoint 2010. It’s a free download, so please grab a copy if it sounds interesting. John and I would certainly love to hear your feedback, as well.

SharePoint Backup Augmentation Cmdlets (SharePointBAC)

Many of my friends in the SharePoint community have heard me talk about some of the projects I’ve wanted to undertake to extend the SharePoint platform. I’m particularly sensitive to the plight of the administrator who is constrained (typically due to lack of resources) to use only the out-of-the-box (OOTB) tools that are available for data protection. While I think the OOTB tools do a solid job in most small and mid-size farm scenarios, there are some clear gaps that need to be addressed.

Since I’d been big on promises and short on delivery in helping these administrators, I finally started on a project to address some of the backup and restore gaps I see in the SharePoint platform. The evolving and still-under-development result is my SharePoint Backup Augmentation Cmdlets (SharePointBAC) project that is available on CodePlex.

With the PowerShell cmdlets that I’m developing for SharePoint 2010, I’m trying to introduce some new capabilities that SharePoint administrators need in order to make backup scripting with the OOTB tools a simpler and more straightforward experience. For example, one big gap that exists with the OOTB tools is that there is no way to groom a backup set. Each backup you create using Backup-SPFarm, for instance, adds to the backups that existed before it. There’s no way to groom (or remove) older backups you no longer want to keep, so disk consumption grows unless manual steps are taken to do something about it. That’s where my cmdlets come in. With Remove-SPBackupCatalog, for example, you could trim backups to retain only a certain number of them; you could also trim backups to ensure that they consume no more disk space (e.g., 100GB) than you’d like.
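
To make the grooming gap a little more concrete, here’s a minimal sketch of the kind of scheduled script an administrator ends up writing with only the OOTB tools. The share path and the 14-day retention window are made-up examples, and the crude folder sweep at the end is precisely the sort of thing a cmdlet like Remove-SPBackupCatalog is meant to replace with something safer.

[sourcecode language="powershell"]
# Minimal sketch: nightly full farm backup with the OOTB Backup-SPFarm cmdlet,
# followed by a crude retention sweep. Share path and retention window are hypothetical.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$backupRoot = "\\BACKUP01\SPFarmBackups"
Backup-SPFarm -Directory $backupRoot -BackupMethod Full

# The OOTB tools never remove older backup sets, so something like this ends up in the script.
# Note that simply deleting spbrXXXX folders leaves the backup catalog (spbrtoc.xml) pointing
# at sets that no longer exist; that is the gap Remove-SPBackupCatalog is intended to close.
Get-ChildItem $backupRoot | Where-Object {
    $_.PSIsContainer -and $_.Name -like "spbr*" -and $_.CreationTime -lt (Get-Date).AddDays(-14)
} | Remove-Item -Recurse -Force
[/sourcecode]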

The CodePlex project is in alpha form right now (it’s brand spankin’ new), and it’s far from complete. I’ve already gotten some great suggestions for what I could do to continue development, though. When I combine those ideas with the ones I already had, I’m pretty sure I’ll be able to shape the project into something truly useful for SharePoint administrators.

If you or someone you know is a SharePoint administrator using the OOTB tools for backup scripting, please check out the project. I’d really love to hear from you!

SharePoint Saturday Denver

As I type this, I’m in Colorado at the close of the third (annual) SharePoint Saturday Denver event. This year’s event was phenomenal – a full two days of SharePoint goodness! Held on Friday November 11th and Saturday November 12th at the Colorado Convention Center, this year’s event was capped at 350 participants for Saturday. A full 350 people signed up, and the event even had a wait list.

On the first day of the event, I delivered a brand new session that I put together (in Prezi format) titled The Essentials of SharePoint Disaster Recovery. Here’s the amended abstract (and I’ll explain why it’s amended in a second) for the session:

“Are my nightly SQL Server backups good enough?” “Do I need an off-site disaster recovery facility?” “How do I even start the process of disaster recovery planning?” These are just a few of the more common questions that arise when the topic of SharePoint disaster recovery comes up. As with most things SharePoint, the real answer to each question is oftentimes “it depends.” In this business and process-centric session, we will be taking a look at the topic of SharePoint disaster recovery from multiple perspectives: business continuity planner, technical architect, platform owner, and others. Critical concepts and terms will be explained and defined, and an effective process for analyzing and formulating a disaster recovery plan will be discussed. We’ll also highlight some common mistakes that take place when working to build a disaster recovery strategy and how you can avoid them. By the end of this session, you will be armed with the knowledge needed to plan or review a disaster recovery strategy for your SharePoint environment.

The reason I amended the abstract is because the previous abstract for the session didn’t do enough to call out the fact that the presentation is primarily business-centric rather than technically focused. Many of the folks who initially came to the session were SharePoint IT pros and administrators looking for information on backup/restore, mirroring, configuration, etc. Although I cover those items at a high level in this new talk, they’re only a small part of what I discuss during the session.

On Saturday, I delivered my “’Caching-In’ for SharePoint Performance” talk during the first slot of the day. I really enjoy delivering the session; it’s probably my favorite one. I had a solid turn-out, and I had some good discussions with folks both during and after the presentation.

As I mentioned, this year’s event was a two day event. That’s a little unusual, but multi-day SharePoint Saturday events appear to be getting some traction in the community – starting with SharePoint Saturday The Conference a few months back. Some folks in the community don’t care much for this style of event, probably because there’s some nominal cost that participants typically bear for the extra day of sessions. I expect that we’ll probably continue to see more hybrid events, though, because I think they meet an unaddressed need that falls somewhere between “give up my Saturday for free training” and “pay a lot of money for a multi-day weekday conference.” Only time will tell, though.

On the Horizon

Even though 2011 isn’t over yet, I’m slowing down on some of my activities save for SharePointBAC (my new extracurricular pastime). 2012 is already looking like it’s going to be a big year for SharePoint community activities. In January I’ll be heading down to Texas for SharePoint Saturday Austin, and in February I’ll be heading to San Francisco for SPTechCon. I’ll certainly cover those activities (and others) as we approach 2012.

Additional Reading and Resources

  1. Event: SharePoint Saturday Columbus
  2. Company: Idera
  3. Webcast: “Caching-In” for SharePoint Performance
  4. Webcast Slides: “Caching-In” for SharePoint Performance
  5. Location: My blog’s Resources section
  6. Event: SharePoint Saturday Cincinnati
  7. Blog: Stacy Deere and Stephanie Donahue’s “Not Just SharePoint”
  8. SPS Cincinnati Slides: “Caching-In” for SharePoint Performance
  9. SPS Cincinnati Slides: SharePoint 2010 Disaster Recovery: New Capabilities, New Possibilities!
  10. Blog: John Ferringer’s “My Central Admin”
  11. Whitepaper: New Features in SharePoint 2010: A Disaster Recovery Love Story
  12. CodePlex: SharePoint Backup Augmentation Cmdlets (SharePointBAC)
  13. Event: SharePoint Saturday Denver
  14. Tool: Prezi
  15. SPS Denver Slides: The Essentials of SharePoint Disaster Recovery
  16. SPS Denver Slides: “Caching-In” for SharePoint Performance
  17. Event: SharePoint Saturday The Conference
  18. Event: SharePoint Saturday Austin
  19. Event: SPTechCon 2012 San Francisco

Bare Metal Bugaboos

Having recently recovered from a firewall outage using Windows Server 2008’s bare metal restore capabilities, I figured I’d write a quick post to cover how I did it. I also cover one really big learning I picked up as a result of the process.

I had one of those “aw nuts” moments last night.

At some point yesterday afternoon, I noticed that none of the computers in the house could get out to the Internet.  After verifying that my wireless network was fine and that internal DNS was up-and-running, I traced the problem back to my Forefront Threat Management Gateway (TMG) firewall.  Attempting to RDP into it proved fruitless, and when I went downstairs and looked at the front of the server, I noticed the hard drive activity light was constantly lit.

So, I powered the server off and brought it back on.  Problem solved … well, not really. It happened again a couple of hours later, so I repeated the process and made a mental note that I was going to have to look at the server when I had a chance.

Demanding My Attention

Well, things didn’t “stay fixed.”  Later in the evening, the same lack of connectivity surfaced again.  I went to the basement, powered the server off, and brought it back up.  That time, though, the server wouldn’t start and complained about having nothing to boot from.

As I did a reset and watched it boot again, I could see the problem: although the server knew that something was plugged in for boot purposes, it couldn’t tell that what was plugged in was a 250GB SATA drive.  Ugh.

When I run into these types of situations, the remedy is pretty clear: a new hard drive.  I always have a dozen or more hard drives sitting around (comes from running a server farm in the basement), and I grabbed a 500GB Hitachi drive that I had left over from another machine.  Within five minutes, the drive was in the server and everything was hooked back up.

Down to the Metal

Of course, a new hard drive was only half of the solution.  The other half of the equation involved restoring from backup.  In this case, a bare metal restore from backup was the most appropriate course of action since I was starting with a blank disk.

For those who may not be familiar with the concept of bare metal restoration, you can get a quick primer from Wikipedia.  I use Microsoft’s System Center Data Protection Manager 2010 (DPM) to protect the servers in my environment, so I knew that I had an image from which I could restore my TMG box.  I just dreaded the thought of doing so.

Why the worry?  Well, I think Arthur C. Clarke summed it up best with the following quote:

Any sufficiently advanced technology is indistinguishable from magic.

The Cold Sweats

Now bare metal restore isn’t “magic,” but it is relatively sophisticated technology … and it’s still an area that seems plagued with uncertainties.

I have to believe that I’m not the only one who feels this way.  I’ve co-authored two books on SharePoint disaster recovery, and the second book includes a chapter I wrote that covers bare metal restore on a Windows 2008 server.  My experience with bare metal restores can be summarized as follows: when it works, it’s awesome … but it doesn’t always work as we’d want it to.  When it doesn’t work, it’s plain ol’ annoying in that it doesn’t explain why.

So, it’s with that mindset that I started the process of trying to clear away my server’s lobotomized state.  These are the steps I carried out to get ready for the restore:

  1. I went into the DPM console, selected the most recent bare metal restore recovery point available to me (as shown on the right), and restored the contents of the folder to a network file share – in my case, \\VMSS-FILE1\RESTORE. Note: you’ll notice a couple of restore points available after the one I selected; those were created in the time since I did the restore but before I wrote this post.
  2. The approximately 21GB bare metal restore image was created on the share.  I do have gigabit Ethernet on my network, and since I recently built-out a new DPM server with faster hardware, it really didn’t take too long to get the image restored to the designated file share – maybe five minutes or so.  The result was a single folder in the designated file share.
  3. I carried out a little manipulation on the folder that DPM created; specifically, I cut out two levels of sub-folders and made sure that the WindowsImageBackup folder was available directly from the top of the share as shown at the left.  The Windows Recovery Environment (or WinRE) is picky about this detail; if it doesn’t see the folder structure it expects when restoring from a network share, it will declare that nothing is available for you to restore from – even though you know better.  A rough sketch of this folder shuffle appears just after this list.
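
If you’d rather script that shuffle than drag folders around in Explorer, here’s a rough sketch of the idea. It assumes the DPM restore buried the WindowsImageBackup folder a couple of levels down under the share; the exact depth will vary, which is why the sketch simply searches for it.

[sourcecode language="powershell"]
# Minimal sketch: surface the WindowsImageBackup folder at the top of the restore share,
# which is where the WinRE expects to find it. Paths and folder depth are assumptions.
$shareRoot = "\\VMSS-FILE1\RESTORE"

# Find the WindowsImageBackup folder wherever the DPM restore buried it ...
$buried = Get-ChildItem $shareRoot -Recurse |
    Where-Object { $_.PSIsContainer -and $_.Name -eq "WindowsImageBackup" } |
    Select-Object -First 1

# ... and move it up so it sits directly under the root of the share.
Move-Item $buried.FullName $shareRoot
[/sourcecode]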

In Recovery

With my actual restore image ready to go on the file share, I booted into the WinRE using a bootable USB memory stick with Windows 2008 R2 Server on it.  I walked through the process of selecting Repair your computer, navigating out to the file share, choosing my restore image, etc.  The process is relatively easy to stumble through, but if you want it in a lot of detail, I’d encourage you to read Chapter 5 (Windows Server 2008 Backup and Restore) in our SharePoint 2010 Disaster Recovery Guide.  In that chapter, I walk through the restore process in step-by-step fashion with screenshots.

I got to the point in the wizard where I was prompted to select additional options for restore as shown on the left.  By default, the WinRE will format and repartition disks as needed.  In my case, that’s what I wanted; after all, I was putting a brand new drive in (one that was larger than the original), so formatting and partitioning was just what the doctor ordered.  I also had the ability to exclude some drives (through Exclude disks) from the recovery process – not something I had to worry about given that my system image only covered one hard drive.  If my hard drive required additional drivers (as might be needed with a drive array, RAID card, or something equivalent), I also had the opportunity to supply them with the Install drivers option.  Again, this was a basic in-place restore; the only thing I needed was a clean-up of the hard drive I supplied, so I clicked Next.

I confirmed the details of the operation on the next page, and everything looked right to me.  I then paused to mentally double-check everything I was about to do.

In my experience, the dialog on the left is the last point of easily grasped normal wizard activity before the WinRE restore wizard takes off and we enter “magic land.”  As I mentioned, when restores work … they just chug right along and it looks easy.  When bare metal and system state restores don’t work, though, the error messages are often unintelligible and downright useless from a troubleshooting and remediation perspective.  I hoped that my restore would be one of the happy restores that chugged right along and made me proud of my backup and restore prowess.

I crossed my fingers and clicked the Next button.

<Insert Engine Dying Noises Here>

The screenshot on the right shows what happened almost immediately after I clicked Next.

Well, you knew this blog post would be a whole lot less interesting if everything went according to plan.

Once I worked through my panic and settled down, I looked a little closer.  I understood “The system image restore failed” without much interpretation, but I had no idea what to make of

Error details: The parameter is incorrect. (0x80070057)

That was the extent of what I had to work with.  All I could do was close out and try again.  Sheesh.

Head Scratching

Let’s face it: there aren’t a whole lot of options to play with in the WinRE when it comes to bare metal restore.  The screenshot on the left shows the Advanced options you have available to you, but there really isn’t much to them.  I experimented with the Automatically check and update disk error information checkbox, but it really didn’t have an effect on the process.  Nevertheless, I tried restores with all combinations of the checkboxes set and cleared.  No dice.

With the Advanced options out of the way, there was really only one other place to look: the Exclude disks dialog.  I knew Install drivers wasn’t needed, because I had no trouble accessing my disks and wasn’t using anything like a RAID card or some other advanced disk configuration.

I popped open the disk exclusion dialog (shown on the right) and tried running a restore after excluding all of the disks except the Hitachi disk to which I would be writing data (Disk 2).  Again, no dice – I still continued to get the aforementioned error and couldn’t move forward.

I knew that DPM created usable bare metal images, and I knew that the WinRE worked when it came to restoring those images, so I knew that I had to be doing something wrong.  After another half an hour of goofing around, I stopped my thrashing and took stock of what I had been doing.

My Inner Archimedes

My eureka moment came when I put a few key pieces of information together:

  • While writing the chapter on Windows Server 2008 Backup and Restore for the SharePoint 2010 DR book, I’d learned that image restores from WinRE are very persnickety about the number of disks you have and the configuration of those disks.
  • When DPM was creating backups, only three hard drives were attached to the server: the original 250GB system drive and two 30GB SSD caching drives.
  • Booting into WinRE from a memory stick was causing a distinctly visible fourth “drive” to show up in the list of available disks.

The bootable USB stick had to be a factor, so I put it away and pulled out a Windows Server 2008 R2 installation disk.  I then booted into the WinRE from the DVD and walked through the entire restore process again.  When I got to the confirmation dialog and pressed the Next button this time around, I received no “The parameter is incorrect” errors – just a progress bar that tracked the restore operation.

Takeaway

The one point that’s going to stick with me from here on out is this: if I’m doing a bare metal restore, I need to be booting into the WinRE from a DVD or from some other source that doesn’t affect my drives list.  I knew that the disks list was sensitive on restore, but I didn’t expect USB drives to have any sort of effect on whether or not I could actually carry out the desired operation.  I’m glad I know better now.

Additional Reading and References

  1. Product Overview: Forefront Threat Management Gateway 2010
  2. Wikipedia: Bare-metal restore
  3. Product Overview: System Center Data Protection Manager 2010
  4. Book: SharePoint 2010 Disaster Recovery Guide

Recent and Upcoming SharePoint Activities

In this post I talk about the recent SharePoint Cincy event. I also cover some of my upcoming activities for the next month, including a trip to Denver and SharePoint Saturday Twin Cities.

There have been some great SharePoint events recently, and quite a few more are coming up.  Here are some of the events I have been (or will be) involved in recently/soon:

SharePoint Cincy

On March 18th, Cincinnati saw its first (arguably) “major” SharePoint event.  SharePoint Cincy was put together by Geoff Smith, the Cincinnati CIO Roundtable, and MAX Training … and it was a huge success by any measure.  I took the picture on the right during the introductory speech by Geoff Smith, and it should give you an idea of how well-attended the event was.

I was fortunate enough to deliver a talk on disaster recovery during the day, and I also helped Geoff and the organizing committee in advance of the event with some of the speaker round-up for the IT professional / administrator track.

I enjoyed the event because the audience composition was different than one would typically find at a SharePoint Saturday event.  Many of those in attendance were IT decision makers and managers rather than implementers and developers.  I attribute the high numbers in my DR session (typically not a big pull with technical crowds) to that demographic difference.

The next SharePoint Cincy event is already planned for next year (on March 12th, I believe), so keep your eyes peeled at the beginning of next year!

SharePoint Saturday Twin Cities

Some fine folks in the Minneapolis and St. Paul area (Jim Ferguson, Sarah Haase, and Wes Preston) have worked to assemble the upcoming SharePoint Saturday Twin Cities on April 9, 2011.  Like all SharePoint Saturdays, the event is free for attendees.  There’s plenty of good education and giveaways to make it worth your while to spend a Saturday with other SharePoint enthusiasts.

I’ll be heading out to the event to deliver my IT pro caching talk titled “’Caching-In’ for SharePoint Performance.”  The abstract for the session appears below:

Caching is a critical variable in the SharePoint scalability and performance equation, but it’s one that’s oftentimes misunderstood or dismissed as being needed only in Internet-facing scenarios.  In this session, we’ll build an understanding of the caching options that exist within the SharePoint platform and how they can be leveraged to inject some pep into most SharePoint sites.  We’ll also cover some sample scenarios, caching pitfalls, and watch-outs that every administrator should know.

If you’re in the Twin Cities area and available on Saturday April 9th, come on by the following address …

Normandale Community College
9700 France Avenue South
Bloomington, MN 55431

… for a day of high-quality and free training.  You can register here on Eventbrite!

Lunch and Learn with Prinomic

Idera partners with a number of different companies in order to make its software more available, and one of those companies is Prinomic Technologies.  Prinomic is a consulting company based out of Denver, Colorado, and they focus on the creation of solutions that employ and target SharePoint.  They are somewhat unique in that they offer a combination of both services and products, and it affords them a great deal of flexibility when addressing customer needs.

I’ll actually be traveling out to Denver to deliver a lunch-and-learn in conjunction with Prinomic titled “SharePoint Disaster Recovery Options” on Wednesday, April 13th, 2011.  The lunch and learn is open to the public; simply follow the link (above) to register if you’re interested.

Prinomic is located at the following address:

4600 S Syracuse
9th floor
Denver, CO 80237

Colorado Springs SharePoint User Group

Knowing that I’d be out in the Denver area on April 13th, I reached out to some of the folks I know there to see if I might coordinate something with one of the local user groups.  I love speaking, and it was my hope that someone would actually grant me some more time with additional SharePoint geeks!

I was very fortunate to get a reply back from Dave Milner.  Dave and Gary Lapointe run the Colorado Springs SharePoint User Group, and they mentioned that it would be okay for me to come by and present to their group on the evening of the 13th.  So, it looks like I’ll be heading down to Colorado Springs after the lunch and learn with Prinomic!

I’ll be presenting my 2010 DR talk titled “SharePoint 2010 and Your DR Plan: New Capabilities, New Possibilities!”

Disaster recovery planning for a SharePoint 2010 environment is something that must be performed to ensure the safety of your data and the continuity of business operations. Microsoft made significant enhancements to the disaster recovery landscape with SharePoint 2010, and we’ll be taking a good look at how the platform has evolved in this session. We’ll dive inside the improvements to the native backup and restore capabilities that were present in the SharePoint 2007 platform to see what has been changed and enhanced. We’ll also look at the array of exciting new capabilities that have been integrated into the SharePoint 2010 platform, such as unattended content database recovery, SQL Server snapshot integration, and configuration-only backup and restore. By the time we’re done, you will possess a solid understanding of how the disaster recovery landscape has changed with SharePoint 2010.

If you’re in the Colorado Springs area on Wednesday, April 13th, come on by the user group!  The user group meets at the following address:

Cobham Analytics
985 Space Center Drive
Suite 100
Colorado Springs, CO

Meet-and-greet activities are from 5:30pm until 6pm, and the session begins at 6pm!

TechNet Events: Transforming IT from Virtualization to the Cloud

Finally, I wanted to mention a series of events that are both going on right now and coming soon.  My good friend Matt Hester, our region’s IT Pro Evangelist with Microsoft, is traveling around putting on a series of TechNet events titled “Transforming IT from Virtualization to the Cloud.”  These events are free training and center on cloud computing, why it is important, private vs. public cloud options, etc.

The event looks really cool, and it’s being offered in a number of different cities in the region.  I’ve already signed up for the Cincinnati event on April 6th (a Wednesday).  Check out Matt’s blog post (linked above) for additional details.  If you want to sign up for the Cincinnati event on April 6th, you can use this link directly.

Additional Reading and References

  1. Event: SharePoint Cincy
  2. People: Geoff Smith
  3. Blog: Jim Ferguson
  4. Twitter: Sarah Haase
  5. Blog: Wes Preston
  6. Event: SharePoint Saturday Twin Cities
  7. Registration: SharePoint Saturday Twin Cities on Eventbrite
  8. Company: Idera
  9. Company: Prinomic Technologies
  10. Lunch and Learn: “SharePoint Disaster Recovery Options”
  11. LinkedIn: Dave Milner
  12. LinkedIn: Gary Lapointe
  13. TechNet Event: “Transforming IT from Virtualization to the Cloud”
  14. TechNet Event: Cincinnati cloud event

February’s Rip-Roarin’ SharePoint Activities

February is a really busy month for SharePoint activities. In this post, I cover some speaking engagements I have during the month. I also talk about the release of Idera SharePoint backup 3.0 — the product we’ve been working so hard to build and make available!

Holy smokes!  2011 is off to a fast start, and February is already here.  Now that our product release is out (see below), I’m going to make good on my promise to get more “real” blog content out.  Before I do that, though, I want to highlight all of the great SharePoint stuff I’m fortunate enough to be involved with during the month of February.

Idera SharePoint backup 3.0 Release

As some of you know, I’m currently a Product Manager for SharePoint products at Idera.  Although it isn’t something that is strictly community focused, Idera SharePoint backup has been a large part of my life for most of the last year.  We’ve been doing some really cool development and product work, and I want to share a piece of good news: We just released version 3.0 of Idera SharePoint backup!

Idera SharePoint backup is “my” product from a management standpoint, and I’m really proud of all the effort that our team put in to make the release a reality.  There are a lot of people in many different areas who busted their butts to get the product out-the-door: development, quality assurance, information development, marketing, product marketing management, public relations, web site management, sales, sales engineering, and more.

To everyone who contributed to making the release a success: you have my heartfelt thanks.  To the rest of you who might be shopping for a SharePoint backup product, please check out what we’ve put together!

SPTechCon San Francisco

I’ll be heading out to BZ Media’s SPTechCon in San Francisco for most of the week of February 7th.  Although I will be delivering a lightning talk titled “Backup/Restore Knowledge Nuggets: What’s True, What’s Not?” (more-or-less the same talk I delivered at last Fall’s SPTechCon event in Boston) on Monday the 7th, that’s only a small part of why I’ll be at the conference.

The big stuff?  Well, first off is the big “public release” of Idera SharePoint backup 3.0.  I’ll be talking with conference participants, seeing what they like (and what they don’t like), explaining the new capabilities and features, etc.

My good friend (and co-author) John Ferringer and I will also be doing an Idera-sponsored book signing for our SharePoint 2010 Disaster Recovery Guide.  If you’re going to be at the conference and want to get a free (and signed!) copy of our book, come by booth #302 on Wednesday morning (2/9) at 10am during the coffee and donuts.  We’ll be around and easy to find: John will be the thin bald guy, and I’ll be the mostly bald guy next to him shoveling donuts into his mouth.  I have a tremendous weak spot for donuts …

Presenting for the Rochester SharePoint User Group

Rick Black is one of the organizers of the Rochester (New York) SharePoint User Group.  I met Rick late in 2009 at SharePoint Saturday Cleveland in Cleveland, Ohio; he and I were both presenting sessions.  We talked a bit during the event, and we pinged each other now and again on Twitter and Facebook in the time after the event.

At one point in time, Rick tossed out the idea of having me present for the Rochester SPUG.  I told him I’d certainly be up for it; after all, I really enjoy hanging out with SharePoint users and talking shop.  The trick, of course, would be getting me to Rochester.

Recently, I asked Rick if he thought a virtual SPUG presentation might work for his group.  I spend quite a bit of time presenting on Live Meeting these days, so I figured it might be an option.  It sounded like a good idea to Rick, so I’m on the schedule to (virtually) present for the Rochester SPUG on Thursday, February 10th, 2011.  I’ll be presenting Saving SharePoint – a time-tested and refined SharePoint disaster recovery talk.  The abstract reads as follows:

In this session, we will be discussing disaster recovery (DR) concepts and strategies for SharePoint in a format that highlights a combination of both business and technical concerns.  We will define some critical planning terms such as “recovery time objectives” and “recovery point objectives,” and we’ll see why they are so important to understand when trying to formulate a DR strategy.  We will also identify the capabilities and limitations of the Microsoft tools that can be used for backing up, restoring, and making SharePoint highly available for disaster recovery purposes.  Changes that have arrived with SharePoint Server 2010 and how they affect DR planning and implementation will also be covered.

I’ll be presenting the night that I get home on a red-eye flight from SPTechCon, so I could be a bit weary … but it will be fun.  I’m really looking forward to it!

SharePoint Saturday San Diego

For the last weekend of February, I’ll be heading back out to the west coast for SharePoint Saturday San Diego.  The event itself will be held at the San Diego Convention Center on Saturday, February 26th.  The event has filled up once already, but Chris Givens (who is organizing the event) was able to add another 75 tickets thanks to some additional support from sponsors.

In addition to my Saving SharePoint session (which is described earlier in this post), I’ll be delivering another session called “Caching-In” for SharePoint Performance.  The abstract for the session reads as follows:

Caching is a critical variable in the SharePoint scalability and performance equation, but it’s one that’s oftentimes misunderstood or dismissed as being needed only in Internet-facing scenarios.  In this session, we’ll build an understanding of the caching options that exist within the SharePoint platform and how they can be leveraged to inject some pep into most SharePoint sites.  We’ll also cover some sample scenarios, caching pitfalls, and watch-outs that every administrator should know.

As always, SharePoint Saturday events are free to the public.  They’re a great way to get a day’s worth of free training, access to SharePoint experts, and plenty of swag and info from vendors.  If you live in or around San Diego and are free on 2/26, consider signing up!

Two trips to the west coast in one month is definitely a first for me, but I’m looking forward to it.  I hope to see you out there!

Additional Reading and References

  1. Company: Idera
  2. Product: Idera SharePoint backup
  3. Company: BZ Media
  4. Event: SPTechCon San Francisco
  5. Event: SPTechCon SharePoint Lightning Talks
  6. Blog: John Ferringer’s My Central Admin
  7. Book: SharePoint 2010 Disaster Recovery Guide
  8. Twitter: Rick Black (@ricknology)
  9. User Group: Rochester SharePoint User Group
  10. Events: SharePoint Saturday San Diego
  11. Twitter: Chris Givens (@givenscj)

SPTechCon Boston Lightning Talk

This post contains my SPTechCon Boston lightning talk titled “Backup/Restore Knowledge Nuggets: What’s True, What’s Not?” I also spend a few moments talking about Prezi and how it compares to PowerPoint.

Last week, I was in Boston for BZ Media’s SPTechCon Boston event.  It was a great opportunity to see and spend time with many of my friends in the SharePoint community, do a book signing with John Ferringer and Idera, and take in a few sessions.

Although I wasn’t technically a presenter at the conference, I did deliver a “lightning talk” on the first day of the conference.  Lightning talks are five minute presentations that are typically given by sponsors and designed to expose audiences (who are usually chowing down on food) to the sponsors’ services, products, etc.

I was given Idera’s slot to speak, and I was also given the latitude to basically do whatever I wanted … so, I decided to have some fun with it!

The Lightning Talk Itself

The five minute presentation that appears below is titled Backup/Restore Knowledge Nuggets: What’s True, What’s Not?  If you weren’t at SPTechCon and can spare a few minutes, I think you’ll find the presentation to be both amusing and informative.

Follow the link and try it out!  You’ll find that the play button allows you to step through the presentation from start to finish pretty easily.

Prezi has a very slick mechanism for embedding actual presentations directly into a website, but that isn’t an option here on my blog.  WordPress.com hosts my blog, and they strip out anything with <object> tags.  I tried to embed the presentation directly, but it got me nowhere  :-(

Wait, What’s Prezi?

I recently became hooked on Prezi (the product + service that drove both the lightning talk and the link that I included above) when I saw Peter Serzo use it to deliver his service application session at SharePoint Saturday Columbus.  Prior to Prezi, I did everything with PowerPoint.  Once I saw Prezi in action and got some additional details from Peter, though, I knew that I’d be using Prezi before long.

I don’t see Prezi as a replacement for PowerPoint, but I do see it as a nice complement.  PowerPoint is great for presenting sequential sets of data and information, and it excels in situations where a linear delivery is desirable.

Prezi, on the other hand, is fantastic for talks and presentations where jumping around happens a lot – such as you might do when trying to tie several points back to a central theme or concept.  Prezi isn’t nearly as full-featured as PowerPoint, but I find that it can be more visually engaging and simply “fun.”

Wrapping It Up

The lightning talk at SPTechCon was the perfect arena for a test run with Prezi, and I think the presentation went wonderfully… and was a whole lot of fun to deliver, as well.  I certainly see myself using Prezi more in the future.  SharePoint Saturday Dallas is coming up in just a couple of weeks …

If you take the time to watch the presentation, I’d really love to hear your feedback!

Additional Reading and References

  1. Company: BZ Media LLC
  2. Book: SharePoint 2010 Disaster Recovery Guide
  3. Blog: John Ferringer’s MyCentralAdmin
  4. Company: Idera
  5. Presentation: Backup/Restore Knowledge Nuggets: What’s True, What’s Not?
  6. Product: Prezi
  7. Twitter: Peter Serzo (@pserzo)
  8. Event: SharePoint Saturday Columbus
  9. Event: SharePoint Saturday Dallas

A Tale of Two Cmdlets

In this post I investigate the differences between the Backup-SPFarm and Backup-SPConfigurationDatabase cmdlets in the process of configuration-only backup in SharePoint 2010. I also extrapolate a bit on how the Backup-SPConfigurationDatabase may have come to be.

I recently authored a blog post titled “Configuration-Only Backup and Restore in SharePoint 2010,” and in that post I tried to address some of the false hopes and misunderstandings I saw arising around SharePoint 2010’s configuration-only backup and restore capabilities.

While I was putting the post together, I was reminded of another head-scratcher that I’ve seen confuse some folks on a handful of occasions; specifically, what the differences are between the Backup-SPFarm and Backup-SPConfigurationDatabase PowerShell cmdlets in SharePoint 2010 when it comes to configuration-only backup.

Before I go too far, I should probably rewind a bit and explain a few things.

Many Paths to the Destination

If you aren’t yet familiar with configuration-only backup and restore in SharePoint 2010, the basics of it are covered in this TechNet article and in my previous post.  I’d recommend checking both out before continuing.

Configuration-only backups in SharePoint 2010 can be generated in several different ways:

  1. Using the “Back up only configuration settings” option when running a backup using Central Administration’s Farm Backup and Restore capabilities.
  2. Through STSADM.exe, using the -o backup operation with the -ConfigurationOnly switch.
  3. By running the Backup-SPFarm PowerShell cmdlet along with the -ConfigurationOnly switch.
  4. By executing the Backup-SPConfigurationDatabase PowerShell cmdlet.

Option #1 is obviously designed for the “UI-oriented” administrator who wants to accomplish a configuration-only backup with a point-and-click interface.  Option #2 works, but Microsoft has been pretty clear that STSADM.exe is on its way out and should generally be avoided in favor of the PowerShell cmdlets shown in Options #3 and #4.

Options #3 and #4 are where I’ve actually seen some administrative head-scratching start.  Both options leverage PowerShell, and both produce a configuration-only backup.  The Backup-SPConfigurationDatabase cmdlet would seem most appropriate for the job … but is it?  If it is most appropriate, then why the redundant capability with the Backup-SPFarm cmdlet?

Two Cmdlets, One Function?

We have two PowerShell cmdlets that produce the same type of backup set.  Understanding how the cmdlets differ starts with an analysis of the syntax and parameters for each one.  Let’s start by looking at the syntax for each of the cmdlets in full.

First, the Backup-SPFarm cmdlet.

[sourcecode language="powershell"]Backup-SPFarm -BackupMethod <String> -Directory <String> [-AssignmentCollection <SPAssignmentCollection>] [-BackupThreads <Int32>] [-ConfigurationOnly <SwitchParameter>] [-Confirm [<SwitchParameter>]] [-Force <SwitchParameter>] [-Item <String>] [-Percentage <Int32>] [-WhatIf [<SwitchParameter>]] [<CommonParameters>][/sourcecode]

Next, the Backup-SPConfigurationDatabase cmdlet.

[sourcecode language="powershell"]Backup-SPConfigurationDatabase -Directory <String> [-AssignmentCollection <SPAssignmentCollection>] [-DatabaseCredentials <PSCredential>] [-DatabaseName <String>] [-DatabaseServer <String>] [-Item <String>] [<CommonParameters>][/sourcecode]

Each of the cmdlets requires that you specify where the backup set should be created using the -Directory switch, and each cmdlet permits the selection of either the entire configuration hierarchy (the default) or a specific subset of it through the -Item switch.

There’s quite a bit of noise in the full syntax for each cmdlet, particularly for Backup-SPFarm, so let’s distill things down a bit and look at each cmdlet in turn.

Backup-SPFarm

For the purposes of this discussion, the following represents the core syntactical elements of interest for configuration-only backup using the Backup-SPFarm cmdlet:

[sourcecode language="powershell"]Backup-SPFarm [-ConfigurationOnly <SwitchParameter>][/sourcecode]

All you need to do is specify the -ConfigurationOnly switch and you’re ready to go.  There’s really not much more to it than that.

Under the hood, this method of configuration-only backup creation is the same as running a configuration-only backup operation from within SharePoint Central Administration.  Backup-SPFarm assumes that you’ve got a live SharePoint farm, that it’s operating properly, and that you want to capture its configuration with the backup process.
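
By way of illustration, a configuration-only backup of a live farm boils down to a one-liner from the SharePoint 2010 Management Shell. The target share below is just a placeholder.

[sourcecode language="powershell"]
# Minimal sketch: capture the live farm's configuration to a (hypothetical) backup share.
# -BackupMethod is a required parameter per the full syntax shown earlier; Full is the safe choice here.
Backup-SPFarm -Directory \\BACKUP01\SPConfig -BackupMethod Full -ConfigurationOnly
[/sourcecode]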

Backup-SPConfigurationDatabase

So then, what’s the deal with Backup-SPConfigurationDatabase?

[sourcecode language="powershell"]Backup-SPConfigurationDatabase [-DatabaseCredentials <PSCredential>] [-DatabaseName <String>] [-DatabaseServer <String>][/sourcecode]

Clearly there’s something more going on with this cmdlet.  Looking at the parameters, it should be clear that the -DatabaseName and -DatabaseServer switches allow you to specify an alternate configuration database and server location as the target of the configuration-only backup operation you intend to perform.  If you happen to require SQL Server authentication to access the configuration database, then you can specify connection credentials with the -DatabaseCredentials switch.
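
Pulling those switches together, a run against a configuration database that isn’t the local farm’s might look something like the sketch below. The server, database, and share names are made up, and the credential prompt is only needed if SQL Server authentication is in play.

[sourcecode language="powershell"]
# Sketch: target a configuration database other than the local farm's.
# Server, database, and share names are hypothetical.
$sqlCred = Get-Credential   # SQL login; skip -DatabaseCredentials entirely for Windows authentication
Backup-SPConfigurationDatabase -DatabaseServer SQL01 -DatabaseName SharePoint_Config_Old `
    -DatabaseCredentials $sqlCred -Directory \\BACKUP01\SPConfig
[/sourcecode]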

In most cases where I’ve seen the Backup-SPConfigurationDatabase cmdlet described, these extra database-centric parameters have been passed off as giving you the ability to back up configuration databases that reside in other (non-local) farms.  I’ve also seen it suggested that it would be possible to centralize configuration-only backups for multiple farms using this cmdlet.  While I think that those are certainly possibilities, I think that they fail to consider the bigger picture and backup/restore as an end-to-end process.

It’s All About Recovery

First, let me close the case on the Backup-SPFarm cmdlet.  Under normal circumstances, Backup-SPFarm is how you should be running a configuration-only backup if you need it.  The TechNet documentation spells this out in the description for both the Backup-SPFarm and Backup-SPConfigurationDatabase cmdlets.  Although Backup-SPConfigurationDatabase can be used to back up the configuration database of an operational farm, that’s not really its intended use (as I see it, anyway).

To understand the real value of the Backup-SPConfigurationDatabase cmdlet, you need to think past the execution of backups and consider the process of recovery.  In many of the organizations I’ve consulted for, the SharePoint environments were not protected using the native backup and restore capabilities that come with the platform.  Quite a few of these mid-size and enterprise organizations handled SharePoint farm protection using SQL Server backups, third-party tools, or a combination of the two.  Usability concerns with the native backup capabilities were sometimes the reason for avoiding SharePoint’s built-in tools; in other cases, backups were controlled and managed by a different internal group that had already standardized on their own (non-SharePoint) backup tool or product.

In each of the aforementioned backup scenarios, the primary goal was protection of SharePoint’s SQL Server databases.  Content databases were utterly critical targets in these backup scenarios since nothing would bring them back if they were lost.  Whether or not other databases were backed up depended on the recovery strategy.  In the event of catastrophic farm failure, some administrators try to recover all parts of the farm; others prefer to rebuild the farm from scratch and bolt the content databases back in.  Many different approaches to the recovery challenge exist, and each has benefits and disadvantages.

Regardless of the restore approach taken for database backups, the SharePoint farm configuration database has generally been regarded as relatively useless.  After all, farm configuration databases are not normally portable.  When you rebuild a SharePoint farm, you end up with a new farm configuration database.  In SharePoint 2007, it was generally accepted that the farm configuration database was “throwaway”; backing it up wasn’t even necessary unless you had some very specific (and oftentimes proprietary) use case for it.

The Restore Process: SharePoint 2010-Style

With SharePoint 2010, the question of “should I back up the farm configuration database?” should generally be answered with a “yes.”  The same restrictions regarding the use of a farm configuration database backup still apply from SharePoint 2007 (i.e., you can’t simply “drop it into” a live SharePoint farm and go), but we now have configuration-only backup and restore with SharePoint 2010.

If you think about the database or file-based approach to backup, and you consider the process of restoring or rebuilding a SharePoint farm, then you’ll probably understand where the Backup-SPConfigurationDatabase cmdlet actually fits in.  The cmdlet is less about backup than it is about restore.

If you’re trying to put a farm back together after a catastrophic failure using database backups, then you’re probably going to follow a series of steps that starts out like this:

  1. Rebuild your servers with their operating systems
  2. Install SharePoint
  3. Create a new SharePoint farm (with a new farm configuration database)
  4. Restore your old farm’s databases

Once you’ve completed step #3, you actually have a working SharePoint farm – it just doesn’t look anything like the farm you’re trying to rebuild/restore yet.  You’ll probably still need to re-provision services and service applications, re-establish all of your farm’s configuration settings, etc.

Assuming you were capturing your farm configuration database as part of the backup cycle for your old farm, then step #4 is where the Backup-SPConfigurationDatabase cmdlet can be brought in to work its magic.  The cmdlet does actually require a functional farm to properly operate (even against another configuration database), but it can be used to execute a configuration-only backup against the old (pre-catastrophe) farm configuration database that was restored into SQL Server from backup in step #4.  The configuration-only backup set that is generated through the Backup-SPConfigurationDatabase action can then be brought into the rebuilt farm almost immediately using the Restore-SPFarm cmdlet with the -ConfigurationOnly switch engaged.
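
Sketched out end-to-end, that “bridge” step might look something like the following when run from the rebuilt farm. The database and share names are placeholders; the point is simply the pairing of Backup-SPConfigurationDatabase (against the restored copy of the old configuration database) with Restore-SPFarm and its -ConfigurationOnly switch.

[sourcecode language="powershell"]
# Step 1 (sketch): run a configuration-only backup against the OLD farm's configuration
# database, which was restored into SQL Server from backup. Names are hypothetical.
Backup-SPConfigurationDatabase -DatabaseServer SQL01 -DatabaseName SharePoint_Config_Old `
    -Directory \\BACKUP01\SPConfig

# Step 2 (sketch): pull that configuration into the rebuilt farm.
Restore-SPFarm -Directory \\BACKUP01\SPConfig -RestoreMethod Overwrite -ConfigurationOnly
[/sourcecode]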

The Data Protection Manager 2010 Connection

I’ll be clear and state that I don’t have hard evidence to say with certainty that my take on the Backup-SPFarm and Backup-SPConfigurationDatabase division of responsibilities is what Microsoft envisioned when they created them, but I am relatively confident based on what I’ve seen and know – especially when I factor in Microsoft’s data protection product offerings and enterprise backup/restore strategy.

In particular, I’m talking about Microsoft’s own System Center Data Protection Manager (DPM) product line.  The DPM product line conducts its backup and restore operations through the Volume Shadow Copy Service (VSS) that is built into the Windows operating system.  VSS is a very powerful mechanism for the creation of consistent, point-in-time file snapshots … but at its core, VSS is file-based.

Even though DPM advertises some integration capabilities with SharePoint, it isn’t “aware” of SharePoint beyond its interface with the SharePoint Foundation Volume Shadow Copy Service Writer (SPF VSS Writer) and the SPF VSS Writer’s subordinate search index writer.  For all practical purposes, this means that DPM can’t really treat SharePoint backups as much more than file-based backups.  DPM’s approach to SharePoint farm protection is to back up SQL Server databases (including the configuration database) and the farm’s search index partitions.  It doesn’t understand farm metadata, IIS configuration settings, the SharePoint Root (aka “14 hive”), etc.  These additional targets can be backed up using DPM’s file system protection capabilities, but they aren’t associated in any way with other SharePoint backups.

So even though DPM can protect a SharePoint farm configuration database, it can’t do anything more with it than you or I can do with a standard SQL Server database restore.  If you realize that DPM only works with files and doesn’t have much in the way of application-level intelligence, this chart that compares its capabilities against other SharePoint protection mechanisms makes quite a bit of sense.  It should also make it clear why even Microsoft’s products have a need for the Backup-SPConfigurationDatabase cmdlet.

Closing Thoughts

Don’t interpret what I’m saying as DPM 2010-bashing.  On the contrary, I’ve been using DPM since its 2007 release in my home network environment, and I think it’s a pretty good product.  I only brought up DPM to make my point about the Backup-SPConfigurationDatabase cmdlet – not to beat up the product.

Since both SQL Server backups and DPM are capable of restoring a SharePoint configuration database – but not much more than that – the Backup-SPConfigurationDatabase cmdlet fills a very important role in the SharePoint restore process.  It’s the “bridge” from many backup/restore solutions to SharePoint itself for purposes of getting back configuration data.

The general rule of thumb that I give people is this: use Backup-SPFarm if you’re trying to extract configuration data from a live farm, and use Backup-SPConfigurationDatabase if you’re trying to extract configuration data from a restored (and otherwise unassociated) configuration database.

Additional Resources and References

  1. Blog Post: Configuration-Only Backup and Restore in SharePoint 2010
  2. TechNet: Backup-SPFarm PowerShell cmdlet
  3. TechNet: Backup-SPConfigurationDatabase PowerShell cmdlet
  4. TechNet: Backup and recovery overview (SharePoint Server 2010)
  5. TechNet: Restore-SPFarm PowerShell cmdlet
  6. Product: Microsoft System Center Data Protection Manager 2010
  7. MSDN: SharePoint Foundation VSS Writer
  8. TechNet: Plan for backup and recovery (SharePoint Server 2010)