Quick Tips for Managing the SharePoint 2010 Office Web Applications Cache

I presented remotely to the Boston Area SharePoint User Group (BASPUG) tonight (7/13/2016), and I referenced an article I had written that is no longer available online. This post originally appeared as a “SharePoint Smarts” article from Idera. Idera is out of the SharePoint business nowadays, but the information I shared in that article is still relevant to those who use SharePoint 2010. So if you have a SharePoint 2010 environment and use the Office Web Apps, this post (and more specifically, the scripts contained within it) is for you.

One of the hotly anticipated items in SharePoint 2010’s feature set is the introduction of the Microsoft Office Web Applications, or “Office Web Apps” for short. The release of the Office Web Apps opens up new possibilities for those who work with documents and files that are tied to Microsoft Word and other applications in the Microsoft Office Family.

What Are the Office Web Apps?

In prior versions of SharePoint, viewing and editing Office documents that existed in SharePoint document libraries normally required a client computer possessing the Microsoft Office suite of applications. If you wanted to view or edit a Word document that existed in SharePoint, for example, you needed Microsoft Word (or an equivalent application) installed on your computer.

That situation changes with the arrival of the Office Web Apps. When a SharePoint 2010 farm is properly set up and configured with the Office Web Apps, it becomes possible to view and edit several different Office document types directly from within a browser as shown in Figure 1 below.


Figure 1: Browser-based editing of a Microsoft Word document

The Office Web Apps provide browser-based viewing and editing support for Microsoft Excel, OneNote, PowerPoint, and Word document types, and this support extends to more than just Internet Explorer. Firefox 3.x, Safari 4.x, and Google Chrome browser types are also supported for viewing and editing – making the Office Web Apps an enabler of cross-platform collaboration that centers on Office documents.

A Word about the Plumbing

As you might imagine, browser-based rendering and editing of Office documents involves a number of complex processes that engage a variety of front-end, middle-tier, and back-end components. The front-end and middle-tier tasks that are tied to document viewing and editing are handled primarily by a new set of service applications that appear when the Office Web Apps are installed. These service applications (and their associated pages, handlers, and worker processes) take care of the business of document conversion, load-balancing, and rendering for browser consumption.

Document conversion and rendering typically generate a combination of images, HTML, JavaScript, and XAML (or eXtensible Application Markup Language) that are sent to consuming browsers. The creation of these document resources is an expensive process, both in terms of CPU cycles and storage. To improve performance levels, it makes sense to generate these document resources only as needed and reuse them whenever possible. That’s where the Office Web Apps cache comes in.

The Office Web Apps cache is the back-end store that is responsible for housing images, HTML, JavaScript, and XAML resources once they have been created for a document. Each time a document is converted into a set of these resources, the resources are stored in the Office Web Apps cache. When a request for a document comes into SharePoint, the cache is checked to see if the document has been previously requested and rendered. If it has, and the cached resources are still up-to-date for the document, then the request is served from the cache instead of engaging the Office Web Apps to convert and re-render the document. Serving document resources from the Office Web Apps cache can yield significant performance improvements over scenarios where no cache is employed.
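To make that flow a bit more concrete, here is a minimal conceptual sketch of the cache-check logic in PowerShell. To be clear, this is not SharePoint’s actual implementation; every name in it is hypothetical, and the “conversion” step is just a stand-in string.

# Conceptual illustration only - not SharePoint's real code.
$renderCache = @{}

function Get-RenderedDocument([string]$docUrl, [datetime]$docLastModified)
{
	# Check the cache for a previously rendered copy of the document.
	$entry = $renderCache[$docUrl]
	if ($entry -ne $null -and $entry.RenderedAt -ge $docLastModified)
	{
		# Cache hit: the stored resources are at least as new as the document.
		return $entry.Resources
	}

	# Cache miss (or stale entry): "convert" the document, then cache the result.
	$resources = "images, HTML, JavaScript, and XAML for $docUrl"
	$renderCache[$docUrl] = @{ RenderedAt = (Get-Date); Resources = $resources }
	return $resources
}

# First call renders and caches; later calls for an unchanged document hit the cache.
Get-RenderedDocument "http://hostname/Shared%20Documents/Report.docx" (Get-Date).AddDays(-1)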

Quick side note before going too far: the Office Web Apps cache is only employed for Word and PowerPoint document types. It is not used for OneNote or Excel documents.

Inside the Office Web Apps Cache

The Office Web Apps cache takes the form of a single site collection for each Web application within a SharePoint farm. When the Office Web Apps are installed and configured in a SharePoint environment, a couple of new timer jobs are installed and run regularly within the farm. One of those timer jobs, the Office Web Apps Cache Creation timer job, ensures that each Web application where the Office Web Apps are running has a site collection like the one shown below in Figure 2.
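If you’d like to verify that the timer jobs are present in your own farm, a quick check along these lines should do the trick. The display name filter is an assumption on my part; adjust it if the job names in your farm differ slightly.

# Load the SharePoint cmdlets if they aren't already available.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# List the Office Web Apps timer jobs along with their schedules.
Get-SPTimerJob | Where-Object { $_.DisplayName -like "*Office Web Apps*" } |
	Select-Object DisplayName, Schedule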


Figure 2: The Office_Viewing_Service_Cache site collection

The Office_Viewing_Service_Cache site collection is a standard Team Site, and it is the location where resources are stored following the conversion and rendering of either a Word or PowerPoint document by the Office Web Apps.

The Team Site can be accessed just like any other SharePoint Team Site, and a glimpse inside the All Documents library (showing a number of document resources) appears below in Figure 3.
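If you’d rather poke around the cache site collection from PowerShell instead of the browser, a sketch like the following should work. It leans on the same Get-SPOfficeWebAppsCache cmdlet that the scripts later in this post use; the URL is a placeholder for one of your own Web applications.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Grab the cache site collection for a Web application and list its libraries.
$cacheSite = Get-SPOfficeWebAppsCache -WebApplication "http://hostname"
$cacheSite.RootWeb.Lists | Select-Object Title, ItemCount

# SPSite objects should be disposed of when you're finished with them.
$cacheSite.Dispose()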


Figure 3: All Documents library in an Office Web Apps cache site collection

Managing the Cache

For such a complex system, the Office Web Apps components do a pretty good job of maintaining themselves without external intervention. This extends to the site collections that are used by Office Web Apps for caching purposes, as well. For example, the Office Web Apps Expiration timer job that is installed with the Office Web Apps removes old document resources from cache site collections once they’ve hit a certain age. The timer job also ensures that each of the site collections responsible for caching has adequate space to serve its purpose.

This doesn’t mean that there aren’t opportunities for tuning and maintenance, though. In fact, there are a couple of things that every administrator should do and review when it comes to the Office Web Apps cache.

Tip #1: Relocate the Cache to a New Database

By default, the Office Web Apps Cache Creation timer job creates an Office_Viewing_Service_Cache site collection in a content database that is co-located with one or more of the “real” site collections within each of your content Web applications. Since the cache site collection is allowed to grow to a beefy 100GB by default, it makes sense to relocate the cache site collection to its own (new) content database. By relocating the cache site collection to its own content database, it becomes easy to exclude it from other maintenance operations such as backups.
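Before making any changes, you can quickly check which content database currently houses the cache. This one-off check uses the same cmdlet and property that the relocation script below relies on; the URL is a placeholder.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Report the content database that currently houses the OWA cache site collection.
$cacheSite = Get-SPOfficeWebAppsCache -WebApplication "http://hostname"
Write-Host ("Cache currently resides in the '{0}' database" -f $cacheSite.ContentDatabase.Name)
$cacheSite.Dispose()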

Relocating the cache site collection is pretty straightforward, and it can be accomplished with the following RelocateOwaCache.ps1 PowerShell script. Simply save the script, execute it, and supply the URL of a Web application within your farm where the Office Web Apps are running. The script will take care of creating a new content database within the Web application, and it will then move the Web application’s Office Web Apps cache site collection to the newly created content database.

<#
.SYNOPSIS
   RelocateOwaCache.ps1
.DESCRIPTION
   Relocates the Office Web Apps cache for a specified Web application to a new content database that is created by the script
.NOTES
   Author: Sean McDonough
   Last Revision: 07-June-2011
.PARAMETER targetUrl
   A Web application where Office Web Apps are in use
.EXAMPLE
   RelocateOwaCache.ps1 http://www.TargetWebApplication.com
#>
param 
(
	[string]$targetUrl = "$(Read-Host 'Target Web application URL [e.g. http://hostname]')"
)

function RelocateCache($targetUrl)
{
	# Ensure that the SharePoint cmdlets are loaded before continuing
	$spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
	if ($spCmdlets -eq $Null)
	{ Add-PSSnapin Microsoft.SharePoint.PowerShell }
	
	# Get the name of the current database where the cache is located; it
	# will serve as the basis for a new content database name.
	$cacheSite = Get-SPOfficeWebAppsCache -WebApplication $targetUrl -ErrorAction stop
	$newDbName = $cacheSite.ContentDatabase.Name + "_OWACache"
	
	# Create a new content database and relocate the cache to it.  Make sure the
	# user knows what's happening each step of the way.
	Write-Host "- creating a new content database ..."
	$cacheDb = New-SPContentDatabase -Name $newDbName -WebApplication $targetUrl -ErrorAction stop
	Write-Host "- moving the Office Web Apps cache ..."
	Move-SPSite $cacheSite -DestinationDatabase $cacheDb -Confirm:$false -ErrorAction stop
	Write-Host "- performing required IISRESET ..."
	iisreset | Out-Null
	
	# Let the user know where the cache is now located
	Write-Host "Cache successfully relocated to the '$newDbName' database."

	# Abort script processing in the event an exception occurs.
	trap
	{
		Write-Warning "`n*** Script execution aborting. See below for problem encountered during execution. ***"
		$_.Message
		break
	}
}

# Launch script
RelocateCache $targetUrl

Tip #2: Review Size and Expiration Settings

When an Office_Viewing_Service_Cache site collection is provisioned within a Web application by the Office Web Apps Cache Creation timer job, it is initially configured to hold cached document resources for 30 days. As mentioned in Tip #1, a cache site collection can also grow to a maximum of 100GB by default.

Whether or not these default settings are appropriate for a Web application depends primarily upon the nature of the site collections housed within the Web application. When site collections contain primarily static documents or content that changes infrequently, it makes sense to allow the cache to grow larger and expire content less often than normal. This maximizes the benefit obtained from caching since document content turns over less frequently.

On the other hand, site collections that experience frequent document turnover and heavy collaboration traffic tend to benefit very little from large cache sizes and long expiration periods. In site collections of this nature, cached content tends to become stale quickly. Little benefit is derived from holding onto document resources that may only be good for days or even hours, so maximum cache size is reduced and expiration periods are shortened.

Tip #3: Give Yourself Some Warning

Since each Office Web Apps cache site collection is a standard Team Site, you can leverage standard SharePoint site collection features and capabilities to help you out. One such mechanism that can be of assistance is the ability to have an e-mail warning sent to site collection owners once a site collection’s size hits a predefined threshold. In the case of the Office Web Apps cache, such a warning could be a cue to increase the maximum size of the cache site collection or perhaps lower the expiration period for document resources housed within it.

Like the maximum cache size setting described in Tip #2, the ability to send e-mail warnings once the cache reaches a threshold is actually tied to SharePoint’s site collection quota capabilities. The maximum size of the cache site collection is handled as a storage quota, and the warning threshold maps directly to the quota’s warning threshold as shown below in Figure 4. In the case of Figure 4, a maximum cache size of 50GB is in effect for the cache site collection, and the e-mail warning threshold is set for 25GB.


Figure 4: Quota settings for an Office Web Apps cache site collection
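If you prefer PowerShell to Central Administration for this sort of change, the settings covered in Tips #2 and #3 can also be applied with the same Set-SPOfficeWebAppsCache cmdlet used by the script in the next section. Here’s a quick sketch that reproduces the Figure 4 values; the URL is a placeholder, and the values are illustrative rather than recommendations.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Apply the Figure 4 settings: 50GB maximum cache size, 25GB warning threshold,
# and a 30 day expiration period. PowerShell's GB suffix handles the byte math.
Set-SPOfficeWebAppsCache -WebApplication "http://hostname" `
	-MaxSizeInBytes 50GB `
	-WarningSizeInBytes 25GB `
	-ExpirationPeriodInDays 30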

Knobs and Dials

Tips #2 and #3 discussed some of the more straightforward Office Web Apps cache settings that are available to you, but you might be wondering how you actually go about changing them.

The AdjustOwaCache.ps1 PowerShell script that appears below provides you with an easy way to review and change the settings discussed. Simply save the script, execute it, and supply the URL of the Web application containing the Office Web Apps cache you’d like to adjust. The script will show you the cache’s current settings and give you the opportunity to modify them.

<#
.SYNOPSIS
   AdjustOwaCache.ps1
.DESCRIPTION
   Dumps several common OWA cache settings to the console for a selected Web application and provides a mechanism for altering those values
.NOTES
   Author: Sean McDonough
   Last Revision: 08-June-2011
.PARAMETER targetUrl
   A Web application where Office Web Apps are in use
.EXAMPLE
   AdjustOwaCache.ps1 http://www.TargetWebApplication.com
#>
param 
(
	[string]$targetUrl = "$(Read-Host 'Target Web application URL [e.g. http://hostname]')"
)

function AdjustCache($targetUrl)
{
	# Ensure that the SharePoint cmdlets are loaded before continuing
	$spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
	if ($spCmdlets -eq $Null)
	{ Add-PSSnapin Microsoft.SharePoint.PowerShell }
	
	# Create an easy converter for GB to bytes
	$GBtoBytes = 1024 * 1024 * 1024
	
	# Get a reference to the cache site collection and extract the values we'll be
	# working with and (potentially) altering.
	$cacheSite = Get-SPOfficeWebAppsCache -WebApplication $targetUrl -ErrorAction stop
	$wacSize = $cacheSite.Quota.StorageMaximumLevel / $GBtoBytes
	$wacWarn = $cacheSite.Quota.StorageWarningLevel / $GBtoBytes
	$wacExpire = 30
	if ($cacheSite.RootWeb.Properties.ContainsKey("waccacheexpirationperiod"))
	{ $wacExpire = $cacheSite.RootWeb.Properties["waccacheexpirationperiod"] }
	Write-Host "Current OWA cache values for '$targetUrl'"
	Write-Host "-  Maximum Cache Size (GB): $wacSize"
	Write-Host "-   Warning Threshold (GB): $wacWarn"
	Write-Host "- Expiration Period (Days): $wacExpire"
	
	# Give the user the option to make changes.
	$yesOrNo = Read-Host "Would you like to change one or more values? [y/n]"
	if ($yesOrNo -eq "y")
	{
		[Int64]$newWacSize = Read-Host "-  Maximum Cache Size (GB)"
		Write-Host  "-   Warning Threshold (GB)"
		[Int64]$newWacWarn = Read-Host " (supply 0 for no warning)"
		[int]$newWacExpire = Read-Host "- Expiration Period (Days)"
		
		# Convert GB values to bytes and set the cache
		$newWacSize = ($newWacSize * $GBtoBytes)
		$newWacWarn = ($newWacWarn * $GBtoBytes)
		Set-SPOfficeWebAppsCache -WebApplication $targetUrl -ExpirationPeriodInDays $newWacExpire -MaxSizeInBytes $newWacSize -WarningSizeInBytes $newWacWarn -ErrorAction stop
	}

	# Abort script processing in the event an exception occurs.
	trap
	{
		Write-Warning "`n*** Script execution aborting. See below for problem encountered during execution. ***"
		$_.Message
		break
	}
}

# Launch script
AdjustCache $targetUrl

Conclusion

The Office Web Apps are a powerful addition to SharePoint 2010 and pave the way for greater collaboration on Office documents without the need for the Microsoft Office suite of client applications. The Office Web Apps cache is an important part of the larger Office Web Apps equation, and the cache is generally pretty good about taking care of itself. As shown in this article, though, it is still a good idea to relocate the cache from its default location. At the same time, a little bit of tuning and e-mail alerting can go a long way towards ensuring that the cache operates optimally for you in your environment.

Is a Higher SharePoint Backup Thread Count Better?

Many administrators have noted that SharePoint 2010 allows them to tune the number of threads that can be used for farm backup and restore operations, but very few have played with the settings. In this post, I share some results I compiled while testing the settings in my own environments. I also share the PowerShell script I assembled for my testing so you can tune the backup and restore thread settings in your own SharePoint farm.

Scalability in the hardware and software space is all about parallel computing nowadays. Consider our modern hardware: it used to be that all we really cared about was how fast our CPU could run (“how many GHz?”). Now, we care more about how many cores our CPU has, whether or not those cores support Hyper-threading, how many memory channels our CPU has available to it, etc. Scale-out beats scale-up.

The same is largely true in the software space. Most IT folks learned some time ago that “multithreading” and “higher performance” tended to go hand-in-hand or were at least associated in some way. Multiple threads of execution meant better scheduling of limited processor resources and fewer chances that one long-running operation would bottleneck an entire application.

Configuring SharePoint 2010 Farm Backup and Restore

When I first saw the following options in the “Configure Backup Settings” section of SharePoint 2010’s Central Administration site, it brought a big grin to my face:

Thread Configuration

In SharePoint 2007 and earlier, administrators had no real levers to pull to try and tune the performance of farm backup and restore operations. This obviously changed with SharePoint 2010. We were basically being handed a way to adjust those processes as we saw fit – for better or worse.

Strangely enough, though, I never really took the time to explore the impact of those settings in my SharePoint environments. I always left the number of assigned threads for backup and restore operations at three. I would have liked to mess around with the values, but something else was always more important in the grand scheme of things.

Why Now?

I’ve been working on a new “backup tips and tricks” whitepaper, and I found myself looking for backup and restore concerns within the SharePoint platform that I may not have given much attention to in the past. It didn’t take much wading through Central Administration before I once again found myself looking at thread counts for backup and restore operations.

Doing a little bit of Internet (background) research confirmed what I had suspected: no one else had really spent any time on the topic either. In fact, the only “fresh” and non-copyright-infringing material I found came from a Microsoft TechNet post titled Backup and recovery best practices (SharePoint Server 2010) … and to tell you the truth, the following paragraph from the section titled “Configure SharePoint settings for better backup or restore performance” really bugged me:

If you are using the Backup-SPFarm cmdlet, you can use the BackupThreads parameter to specify how many threads SharePoint Server 2010 will use during the backup process. The more threads you specify, the more resources that backup operation will take, but the faster that it will finish, if sufficient resources are available. However, each thread is reported individually in the log files, so using fewer threads makes interpreting the log files easier. By default, three threads are used. The maximum number of threads available is 10.

Without an understanding of how multithreading (in general) and SharePoint backup (specifically) work, this could easily be interpreted as follows:

The greater the number of threads you assign, the faster your backups will complete.

I realize that my summary is an oversimplification, but I believe that many administrators see the TechNet paragraph as I summarized it. And that concerns me.

I’ve always told people that increasing the backup thread count could yield better performance, but any adjustments would need to be tested in the target farm where they are to be implemented. Realistically speaking, there are several participants and a lot of moving parts in any SharePoint farm backup. Besides the SharePoint server where the backup operation is being coordinated, there is the performance of one or more SQL Servers to consider. The capabilities and restrictions of the backup destination location (typically a UNC file share) also need to be factored-in since that destination is being written to by both the SharePoint Server and one or more SQL Servers.

Setting the number of backup threads to 10 on a SharePoint Server of infinite capability and resources doesn’t guarantee a fast backup, because the farm might have a slow SQL Server, a less-capable backup destination location, a slow or congested network, or a host of other complicating factors.
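For reference, here is what specifying the thread count looks like when a farm backup is kicked off from PowerShell. It mirrors the Backup-SPFarm call used by the test script later in this post; the backup path is a placeholder.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Kick off a full farm backup using five backup threads instead of the default three.
Backup-SPFarm -BackupMethod Full -Directory "\\FileShare\Backups" -BackupThreads 5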

Oh Yeah? Prove It.

Of course, all of this is just a bunch of hand-waving without proof. So, the scientist in me (yeah, I actually used to be a chemist) decided to take over and devise a series of simple tests to see if there is any real weight to the arguments I’ve been making.

I began with the hypothesis that the easiest and most visible way to gauge the performance of a farm backup operation is to measure how long a backup takes to run; e.g., a farm backup that takes 10 minutes to run is faster than a backup that takes 20 minutes to run if farm content, hardware, configuration, and other factors remain constant. Since SharePoint 2010 provides the ability to specify anywhere from one to 10 backup threads, running a series of backups where the only variable is backup thread count should determine if greater or fewer backup threads yield better performance.

You might recall that I also mentioned that farm topology is a factor in the overall backup equation. As part of my experiment, I decided to run the tests on two different farms I have available to me. General descriptions for each farm:

  • Single-Server Farm: my single server farm environment is a VM running on my laptop. The VM houses SharePoint, SQL Server, and the backup location being targeted. The laptop hardware is a Core-i7 quad-core processor, and the underlying storage for the VM is a solid-state drive (SSD). Hardware bottlenecks should be minimized, and network latency isn’t a factor since backup operations are conducted against a local drive within the VM.
  • Multi-Server Farm: my multi-server environment is the “production” environment on my home network. It consists of a SharePoint Server VM running on a Hyper-V host that also hosts other VMs. The SQL Server instance backing the farm is a non-virtualized SQL Server housing all of the SharePoint databases as well as a few databases for other applications. The backup destination location is a virtualized file server with a pass-through drive array (eSATA with RAID-5). Overall hardware, in this case, is “okay” but obviously not dedicated purely to SharePoint. In addition, network latency and bandwidth (GbE) are also in-play as potential sources of impact.

These two environments have pretty different overall topologies, and it was my hope that I’d see some effect on the performance numbers as a result.

The Script

To run the tests reproducibly, I needed a PowerShell script. So, I put the following script together while I had a bit of free time one night. Feel free to pluck this out to use for testing in your SharePoint environment, as well.

<#
.SYNOPSIS
   TestBackupThreads.ps1
.DESCRIPTION
   This script is used to conduct and time a series of backups using different thread counts.
   The output can then be used to make an educated decision on the number of backup threads to
   assign for use in farm-level backups.
.NOTES
   Author: Sean McDonough
   Last Revision: 25-July-2012
.PARAMETER TestLocation
   A UNC path to a location that can be used to create test backup sets
.EXAMPLE
   TestBackupThreads \\FileShare\TestLocation
#>
param 
(
	[string]$TestLocation = "$(Read-Host 'UNC path to test backup location [e.g. \\FileShare\TestLocation]')"
)

function TestThreads($backupLocation)
{
	# Ensure that the SharePoint cmdlets are loaded before continuing
	$spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
	if ($spCmdlets -eq $Null)
	{ Add-PSSnapin Microsoft.SharePoint.PowerShell }
	
	# Setup some variables we'll need for execution.
	$threadTimes = @{}									# Hash table to hold timing results
	$backupItems = Join-Path $backupLocation "spbr*"	# Used to delete temp backup files
	
	# We need to execute a full farm backup for each thread count 1 through 10
	Clear-Host
	Write-Host "`nBackup thread count testing process beginning."
	for ($threads = 1; $threads -lt 11; $threads++)
	{
		# Clean out any backup contents from the test location
		Remove-Item $backupItems -recurse

		# Grab the starting date/time (for later comparison), kick-off a farm backup, and then
		# grab the stop date/time.
		Write-Host "`nInitiating a backup with $threads thread(s) ..."
		$startPoint = Get-Date
		Backup-SPFarm -BackupMethod Full -Directory $backupLocation -BackupThreads $threads
		$stopPoint = Get-Date

		# Store and report results. Keep the elapsed time numeric (rather than a
		# pre-formatted string) so the final Sort-Object call sorts numerically.
		$keyName = "Backup with {0} thread(s)" -f $threads
		$elapsedSeconds = [Math]::Round(($stopPoint - $startPoint).TotalSeconds)
		$threadTimes[$keyName] = $elapsedSeconds
		Write-Host "Backup with $threads thread(s) complete"
		Write-Host ("- time to complete (in seconds): {0:N0}" -f $elapsedSeconds)
	}
	
	# Do a final sweep of the test backup location to clean out backup items
	Remove-Item $backupItems -recurse

	# Dump the results sorted in order of quickest to longest
	Write-Host "`nBackup thread count testing process complete."
	$threadTimes.GetEnumerator() | Sort-Object Value

	# Abort script processing in the event an exception occurs.
	trap
	{
		Write-Warning "`n*** Script execution aborting. See below for problem encountered during execution. ***"
		$_.Message
		break
	}
}

# Launch script
TestThreads $TestLocation

The script is fairly straightforward in what it does. You supply a TestLocation parameter to specify where farm backup test data should be written to, and the script will run a series of full farm backups using the supplied location as the backup destination. The script starts with a full backup using one backup thread; at the end of each full farm backup, the script notes how long the backup took (in seconds) and cleans-up the contents of the TestLocation folder. The number of backup threads is then incremented, and the next test is run. When the script has completed running all backup tests, it sorts the results from “quickest backup” (i.e., the backup thread count requiring the least amount of time) to the slowest backup.

Test Results

I ran a series of three tests in each of the aforementioned environments for a total of six test runs. Although there’s still quite a bit of variability between individual results within a backup thread series, some trends did appear to emerge.

Single-Server Farm

Backup Times for the Single-Server Environment

With the single-server environment, increasing the number of backup threads did appear to have a directional impact on performance. A single backup thread proved to be the slowest option for the farm backup, and “greater than one” thread resulted in better performance.

If you look at the average values, though, there wasn’t a tremendous difference between the slowest thread count (410 seconds for one thread) and the fastest (388 seconds for 10 threads). We’re only talking about a 5% to 6% difference overall. To truly find the optimum number of backup threads in an environment like this would require more than three test runs to account for standard deviation and establish significance.

Oh, and for those that might be wondering: I’m sure I introduced some of my own variability into the results. Although I didn’t do anything processor or disk intensive during the test runs, I didn’t go out of my way to minimize the impact of services, background operations, etc. To repeat: more testing (with better controls) would be needed for truly conclusive results. The only thing I started to show with this particular set of tests is that multithreading seemed to improve backup performance.

Multi-Server Farm

Things got quite a bit more interesting (to me) when I switched over to multi-server farm testing.

Backup Times for the Multi-Server Environment

In the multi-server environment, the average for using just one backup thread (1413 seconds) appeared to be significantly faster than the next best option (1747 seconds for seven backup threads) – in the neighborhood of 20% or so faster. Just like the single-server results, additional trials would be needed to completely validate the observations, but the results are less ambiguous (given the relatively greater precision of the samples) than with the single-server runs.

Do you find this surprising? Given my multi-server environment and what I know about it, I can’t really say that I was caught flat-footed by the results. Going into the tests, my hypothesis was that my backup destination location would likely be the “weak link” in my overall farm and backup topology. The SharePoint Server was doing well, the SQL Server was relatively robust … but all of that backup activity was hard on my (virtualized) file server. Multiple servers trying to write to the backup location were swamping it and the network, and adding additional backup threads to the mix didn’t end up helping or improving the overall backup process.

The Take-Away

At the end of the day, I recognize that these tests of mine didn’t prove anything conclusively. Frankly, conclusive proof wasn’t my goal. The intent of these experiments wasn’t to say “more threads are better” or “more threads are worse.”

The only point I’m making (I hope) by sharing these results is this: until you run some real tests of your own in your SharePoint environment, you really don’t know where your backup thread sweet spot is. You can try to guess it, but it’s just a guess. And guessing is really no better than simply leaving the backup thread count set to its default value of three.

References and Resources

  1. Wikipedia: Parallel Computing
  2. Wikipedia: Hyper-threading
  3. Wikipedia: Thread (computing) and Multithreading
  4. TechNet: Backup and recovery best practices (SharePoint Server 2010)

Kicking-Off 2012: SharePoint Style

My SharePoint community activities are off to a roaring start in 2012. In this post, I’ll be recapping a couple of events from the end of 2011, as well as covering new activities taking place during the first couple of months of 2012.

I don’t know how 2011 ended for most of you, but the year closed without much of a bang for me. I’m not complaining about that; the general slow-down gave me an opportunity to get caught up on a few things, and it was nice to spend some quality time with my friends and family.

While 2011 went out relatively quietly, 2012 seems to have arrived with a vengeance. In fact, I was doing some joking on Twitter with Brian Jackett and Rob Collie shortly after the start of the year about #NYN, or “New Year’s Nitrous.” It’s been nothing but pedal-to-the-metal and then some since the start of the year, and there’s absolutely no sign of it letting up anytime soon. I like staying busy, but in some ways I’m wondering whether or not there will be enough time to fit everything in. One day at a time …

Here’s a recap of some stuff from the tail end of 2011, as well as what I’ve got going on for the first couple of months in 2012. After February, things actually get even crazier … but I’ll save events beyond February for a later post.

SPTV

During the latter part of 2011, I had a conversation with Michael Hiles and Jon Breyfogle of DSC Consulting, a technical consulting and media services company based here in Cincinnati, Ohio. Michael and Jon had an idea: they wanted to develop a high-quality, high-production-value television program that centered on SharePoint and the larger SharePoint ecosystem/community. The initial idea was that the show would feature an interview segment, coverage of community events, SharePoint news, and some other stuff thrown in.

It was all very preliminary stuff when they initially shared the idea with me, but I told them that I thought they might be on to something. The idea of a professional show that centered on SharePoint wasn’t something that was being done, and I was really curious to see how they would do it if they elected to move forward.

Just before Christmas, Jon contacted me to let me know that they were indeed moving forward with the idea … and he asked if I’d be the show’s first SharePoint guest. I told him I’d love to help out, and so the bulk of the pilot episode was shot at the Village Tavern in Montgomery one afternoon with host Mark Tiderman and co-host Craig Pereira. Mark and I shot some pool, discussed disaster recovery, and just talked SharePoint for a fair bit. It was really a lot of fun.

The pilot isn’t yet available (publicly), but a teaser for the show is available on the SPTV web site. All in all, I think the DSC folks have done a tremendous job creating a quality, professional program. Check out the SPTV site for a taste of what’s to come!

SharePoint Saturday Columbus Kick-Off

Around the time of the SPTV shooting, the planning committee for SharePoint Saturday Columbus (Brian Jackett, Jennifer Mason, Nicola Young, and I) had a checkpoint conversation to figure out what, if anything, we were going to do about SharePoint Saturday Columbus in 2012. Were we going to try to do it again? If so, were we going to change anything? What was our plan?

Everything with SPSColumbus in 2012 is still very preliminary, of course, but I can tell you that we are looking forward to having the event once again! We expect that we’ll attempt to hold the event during roughly the same part of the year as we’ve had it in the past (i.e., late summer). As we start to nail things down and come up with concrete plans, I’ll share those. Until then, keep your eyes on the SharePoint Saturday site and the SPSColumbus account on Twitter!

SharePointCincy

Those of us who reside in and around Cincinnati, Ohio, are very fortunate when it comes to SharePoint events and opportunities. In the past we’ve had SharePoint Saturday Indianapolis just to the west of us, SharePoint Saturday Columbus to the northeast, and last year we had our first ever SharePoint Saturday Cincinnati (which was a huge success!). On top of that, last year also brought the first ever SharePointCincy event.

SharePointCincy was similar in some ways to a SharePoint Saturday, but it was different in others. It was a day full of SharePoint sessions, but we also had Fred Studer (the General Manager for the Information Worker product group at Microsoft) come out and speak. Kroger, a local company whose SharePoint implementation I’m very familiar with, also shared their experience with SharePoint. Rather than go into too much detail, though, I encourage you to check out the SharePointCincy site yourself to see what it was all about.

Of course, the whole reason I’m mentioning SharePointCincy is that it’s coming again in March of this year! Last year’s success (the event was attended by hundreds) pretty much guaranteed that the event would happen again.

I’m part of a planning team that includes Geoff Smith, Steve Caravajal of Microsoft, Mike Smith from MAX Technical Training, and the infamous Shane Young of SharePoint911 (which, in case you didn’t know it, is based here in Cincinnati). Four of the five of us met last Friday for a kick-off meeting and to discuss how the event might go this year. It was a good breakfast and a productive meeting. I don’t have much more to share at this point (other than the fact that, “yes, it’s happening”), but I will share information as it becomes available. Stay tuned!

Secrets of SharePoint Webcast

It’s been a few months since my last webcast on SharePoint caching, so my co-workers at Idera approached me about doing another webcast. I guess I was due.

This Wednesday, January 18th, I’ll be delivering a Secrets of SharePoint webcast titled “The Essentials of SharePoint Disaster Recovery.” Here’s the abstract:

“Are my nightly SQL Server backups good enough?” “Do I need an off-site disaster recovery facility?” “How do I even start the process of disaster recovery planning?” These are just a few of the more common questions that arise when the topic of SharePoint disaster recovery comes up. As with most things SharePoint, the real answer to each question is oftentimes “it depends…”

In this business and process-centric session, we will be taking a look at the topic of SharePoint disaster recovery from multiple perspectives: business continuity planner, technical architect, platform owner, and others. Critical concepts and terms will be explained and defined, and an effective process for analyzing and formulating a disaster recovery plan will be discussed. We’ll also highlight some common mistakes that take place when working to build a disaster recovery strategy and how you can avoid them. By the end of this session, you will be armed with the knowledge needed to plan or review a disaster recovery strategy for your SharePoint environment.

For those of you who have heard me speak and/or attended my webcasts in the past, you’ll probably find this session to be a bit different than ones you’ve seen or heard. The main reason I say that is because the content is primarily business-centric rather than nuts-and-bolts admin content.

That doesn’t mean that SharePoint administrators shouldn’t attend, though; on the contrary, the webcast includes a number of very important messages for admins (e.g., why DR must be driven from the business angle rather than the technical/admin angle) that could really help them in their jobs. The session expands the scope of the DR discussion, though, to include the business aspects that are so tremendously important during the DR planning process.

If what I’ve shared sounds interesting, please sign-up! The webcast is free, and I’ll be doing Q&A after the session.

SharePoint Saturday Austin

This upcoming weekend, I’ll be heading down to Austin, Texas, for the first SharePoint Saturday Austin event! The event is taking place on January 21st, and it is being coordinated by Jim Bob Howard (of Juniper Strategy) and Matthew Lathrop (of Rackspace). Boy oh boy – do they have an amazing line-up of speakers and contributors. It’s quite impressive; check out the site to see what I mean.

The guys are giving me the opportunity to present “The Essentials of SharePoint Disaster Recovery” session, and I’m looking forward to it. I’m also looking forward to catching up with many of my friends … and some of my Idera co-workers (who will be coming in from Houston, Texas).

If you’re in the Austin area and looking for something to do this upcoming Saturday, come to the event. It’s free, and it’s a great chance to take in some phenomenal sessions, win some prizes, and be a part of the larger SharePoint community!

SharePoint Pro Demo Booth Session

On Monday, February 20th at 12pm EST, I’m going to be doing a “demo booth” session through SharePoint Pro Magazine. The demo booth is titled “Backup Basics: SharePoint’s Backup and Restore Capabilities and Beyond.” Here’s the description for the demo booth:

SharePoint ships with a number of tools and capabilities that are geared toward protecting content and configuration. These tools provide basic coverage for your SharePoint environment and the content it contains, but they can quickly become cumbersome in real world scenarios. In this session, we will look at SharePoint’s backup and restore capabilities, discuss how they work, and identify where they fall short in common usage scenarios. We will also highlight how Idera’s SharePoint backup solution picks up where the SharePoint platform tools leave off in order to provide complete protection that is cost-effective and easy to use.

The “demo booth” concept is something new for me; it’s part “platform education” (which is where I normally spend the majority of my time and energy) and part “product education” – in this case, education about Idera’s SharePoint backup product. Being both the product manager for Idera SharePoint backup and a co-author for the SharePoint 2010 Disaster Recovery Guide leaves me in something of a unique position to talk about SharePoint’s built-in backup/restore capabilities, where gaps exist, and how Idera SharePoint backup can pick up where the SharePoint platform tools leave off.

If you’re interested in learning more about Idera’s SharePoint backup product and/or how far you can reasonably push SharePoint’s built-in capabilities, check out the demo booth.

SPTechCon 2012 San Francisco

February comes to a close with a big bang when SPTechCon rolls into San Francisco for the first of two stops in 2012. For those of you who check my blog now and again, you may have noticed the SPTechCon “I’ll be speaking at” badge and link on the right-hand side of the page. Yes, that means I’ll be delivering a session at the event! The BZ Media folks always put on a great show, and I’m certainly proud to be a part of SPTechCon and presenting again this time around.

At this point, I know that I’ll be presenting “The Essentials of SharePoint Disaster Recovery.” I think I’m also going to be doing another lightning talk; I need to check up on that, though, to confirm it.

I also found out that John Ferringer (my co-author and partner-in-crime) and I are going to have the opportunity to do an SPTechCon-sponsored book signing (for our SharePoint 2010 Disaster Recovery Guide) on the morning of Wednesday the 29th.

If you’re at SPTechCon, please swing by to say hello – either at my session, at the Idera booth, the book signing, or wherever you see me!

Additional Reading and Resources

  1. Blog: Brian Jackett’s Frog Pond of Technology
  2. Blog: Rob Collie’s PowerPivotPro
  3. Company: DSC Consulting
  4. Site: SPTV
  5. LinkedIn: Mark Tiderman
  6. LinkedIn: Craig Pereira
  7. Event: SharePoint Saturday Columbus
  8. Blog: Jennifer Mason
  9. Twitter: Nicola Young
  10. Site: SharePoint Saturday
  11. Twitter: SharePoint Saturday Columbus
  12. Event: SharePoint Saturday Cincinnati
  13. Event: SharePointCincy
  14. LinkedIn: Geoff Smith
  15. Blog: Steve Caravajal’s Ramblings
  16. Blog: Mike Smith’s Tech Training Notes
  17. Company: MAX Technical Training
  18. Blog: Shane Young’s SharePoint Farmer’s Almanac
  19. Company: SharePoint911
  20. Webcast: “Caching-In” for SharePoint Performance
  21. Site: Secrets of SharePoint
  22. Webcast: The Essentials of SharePoint Disaster Recovery
  23. Event: SharePoint Saturday Austin
  24. Blog: Jim Bob Howard
  25. Company: Juniper Strategy
  26. LinkedIn: Matthew Lathrop
  27. Company: Rackspace
  28. Company: Idera
  29. Event: SharePoint Pro Demo Booth Session
  30. Site: SharePoint Pro Magazine
  31. Product: Idera SharePoint backup
  32. Book: SharePoint 2010 Disaster Recovery Guide
  33. Event: SPTechCon 2012 San Francisco
  34. Company: BZ Media
  35. Blog: John Ferringer’s My Central Admin

Wrapping Up 2011

After some time away, I’m getting back to blogging with a recap of the last several months’ worth of events. I cover a couple of SharePoint Saturdays, a webcast, my new whitepaper, and a new CodePlex project for SharePoint administrators.

Over the last several months, I haven’t been blogging as much as I’d hoped to; in reality, I haven’t blogged at all. There are a couple of reasons for that: one of them was our recent house move (and the aftermath), and the other was a little more personal. Without going into too much detail: we were contending with a very serious health issue in our family, and that took top priority.

The good news is that the clouds are finally parting, and I’m heading into the close of 2011 on a much better note (and with more time) than I’ve spent the last several months. To get back into some blogging, I figured I’d wrap-up the last several months’ worth of activities that took place since SharePoint Saturday Columbus.

Secrets of SharePoint (SoS) Webcast

A lot of things started coming together towards the end of October, and the first of those was another webcast that I did for Idera titled “’Caching-In’ for SharePoint Performance.” The webcast covered each of SharePoint’s built-in caching mechanisms (object caching, BLOB caching, and page output caching) as well as the Office Web Applications’ cache. I provided a rundown on each mechanism, how it worked, how it could be leveraged, and some watch-outs that came with its use.

The webcast was basically a lightweight version (40 minutes or so) of the longer (75 minute) presentation I like to present at SharePoint Saturday events. It was something of a challenge to squeeze all of the regular session’s content into 40 minutes, and I had to cut some of the material I would have liked to have kept in … but the final result turned-out pretty well.

If you’re interested in seeing the webcast, you can watch it on-demand from the SoS webcast archive. I also posted the slides in the Resources section of this blog.

SharePoint Saturday Cincinnati

On Saturday October 29th, Cincinnati had its first-ever SharePoint Saturday Cincinnati event. The event took place at the Kingsgate Marriott on Goodman Drive (near University Hospital), and it was very well attended – so much so that Stacy Deere and the other folks who organized the event are planning to do so again next year!

Many people from the local SharePoint community came out to support the event, and we had a number of folks from out of town come rolling in as well to help ensure that the event was a big success. I ended up delivering two sessions: my “’Caching-In’ for SharePoint Performance” session and my “SharePoint 2010 Disaster Recovery: New Capabilities, New Possibilities!” session.

I had a great time at the event, and I’m hoping I’ll be fortunate enough to participate again on the next go ‘round!

New Disaster Recovery WhitePaper

My co-author and good friend John Ferringer and I were hard at work throughout the summer and early fall putting together a new disaster recovery whitepaper for Idera. The whitepaper is titled “New Features in SharePoint 2010: A Disaster Recovery Love Story,” and it’s a bromance novel that only a couple of goofballs like John and me could actually write …

Okay, there’s actually no romance in it whatsoever (thank heavens for prospective readers – no one needs us doing that to them), but there is a solid chunk of coverage on SharePoint 2010’s new platform capabilities pertaining to disaster recovery. We also review some disaster recovery basics in the whitepaper, cover things that have changed since SharePoint 2007, and identify some new watch-out areas in SharePoint 2010 that could have an impact on your disaster recovery planning.

The whitepaper is pretty substantial at 13 pages, but it’s a good read if you want to understand your platform-level disaster recovery options in SharePoint 2010. It’s a free download, so please grab a copy if it sounds interesting. John and I would certainly love to hear your feedback, as well.

SharePoint Backup Augmentation Cmdlets (SharePointBAC)

Many of my friends in the SharePoint community have heard me talk about some of the projects I’ve wanted to undertake to extend the SharePoint platform. I’m particularly sensitive to the plight of the administrator who is constrained (typically due to lack of resources) to use only the out-of-the-box (OOTB) tools that are available for data protection. While I think the OOTB tools do a solid job in most small and mid-size farm scenarios, there are some clear gaps that need to be addressed.

Since I’d been big on promises and short on delivery in helping these administrators, I finally started on a project to address some of the backup and restore gaps I see in the SharePoint platform. The evolving and still-under-development result is my SharePoint Backup Augmentation Cmdlets (SharePointBAC) project that is available on CodePlex.

With the PowerShell cmdlets that I’m developing for SharePoint 2010, I’m trying to introduce some new capabilities that SharePoint administrators need in order to make backup scripting with the OOTB tools a simpler and more straightforward experience. For example, one big gap that exists with the OOTB tools is that there is no way to groom a backup set. Each backup you create using Backup-SPFarm, for instance, adds to the backups that existed before it. There’s no way to groom (or remove) older backups you no longer want to keep, so disk consumption grows unless manual steps are taken to do something about it. That’s where my cmdlets come in. With Remove-SPBackupCatalog, for example, you could trim backups to retain only a certain number of them; you could also trim backups to ensure that they consume no more disk space (e.g., 100GB) than you’d like.

The CodePlex project is in alpha form right now (it’s brand spankin’ new), and it’s far from complete. I’ve already gotten some great suggestions for what I could do to continue development, though. When I combine those ideas with the ones I already had, I’m pretty sure I’ll be able to shape the project into something truly useful for SharePoint administrators.

If you or someone you know is a SharePoint administrator using the OOTB tools for backup scripting, please check out the project. I’d really love to hear from you!

SharePoint Saturday Denver

As I type this, I’m in Colorado at the close of the third (annual) SharePoint Saturday Denver event. This year’s event was phenomenal – a full two days of SharePoint goodness! Held on Friday November 11th and Saturday November 12th at the Colorado Convention Center, this year’s event was capped at 350 participants for Saturday. A full 350 people signed-up, and the event even had a wait list.

On the first day of the event, I delivered a brand new session that I put together (in Prezi format) titled The Essentials of SharePoint Disaster Recovery. Here’s the amended abstract (and I’ll explain why it’s amended in a second) for the session:

“Are my nightly SQL Server backups good enough?” “Do I need an off-site disaster recovery facility?” “How do I even start the process of disaster recovery planning?” These are just a few of the more common questions that arise when the topic of SharePoint disaster recovery comes up. As with most things SharePoint, the real answer to each question is oftentimes “it depends.” In this business and process-centric session, we will be taking a look at the topic of SharePoint disaster recovery from multiple perspectives: business continuity planner, technical architect, platform owner, and others. Critical concepts and terms will be explained and defined, and an effective process for analyzing and formulating a disaster recovery plan will be discussed. We’ll also highlight some common mistakes that take place when working to build a disaster recovery strategy and how you can avoid them. By the end of this session, you will be armed with the knowledge needed to plan or review a disaster recovery strategy for your SharePoint environment.

The reason I amended the abstract is because the previous abstract for the session didn’t do enough to call out the fact that the presentation is primarily business-centric rather than technically focused. Many of the folks who initially came to the session were SharePoint IT pros and administrators looking for information on backup/restore, mirroring, configuration, etc. Although I cover those items at a high level in this new talk, they’re only a small part of what I discuss during the session.

On Saturday, I delivered my “’Caching-In’ for SharePoint Performance” talk during the first slot of the day. I really enjoy delivering the session; it’s probably my favorite one. I had a solid turn-out, and I had some good discussions with folks both during and after the presentation.

As I mentioned, this year’s event was a two day event. That’s a little unusual, but multi-day SharePoint Saturday events appear to be getting some traction in the community – starting with SharePoint Saturday The Conference a few months back. Some folks in the community don’t care much for this style of event, probably because there’s some nominal cost that participants typically bear for the extra day of sessions. I expect that we’ll probably continue to see more hybrid events, though, because I think they meet an unaddressed need that falls somewhere between “give up my Saturday for free training” and “pay a lot of money for a multi-day weekday conference.” Only time will tell, though.

On the Horizon

Even though 2011 isn’t over yet, I’m slowing down on some of my activities save for SharePointBAC (my new extracurricular pastime). 2012 is already looking like it’s going to be a big year for SharePoint community activities. In January I’ll be heading down to Texas for SharePoint Saturday Austin, and in February I’ll be heading to San Francisco for SPTechCon. I’ll certainly cover those activities (and others) as 2012 approaches.

Additional Reading and Resources

  1. Event: SharePoint Saturday Columbus
  2. Company: Idera
  3. Webcast: “Caching-In” for SharePoint Performance
  4. Webcast Slides: “Caching-In” for SharePoint Performance
  5. Location: My blog’s Resources section
  6. Event: SharePoint Saturday Cincinnati
  7. Blog: Stacy Deere and Stephanie Donahue’s “Not Just SharePoint”
  8. SPS Cincinnati Slides: “Caching-In” for SharePoint Performance
  9. SPS Cincinnati Slides: SharePoint 2010 Disaster Recovery: New Capabilities, New Possibilities!
  10. Blog: John Ferringer’s “My Central Admin”
  11. Whitepaper: New Features in SharePoint 2010: A Disaster Recovery Love Story
  12. CodePlex: SharePoint Backup Augmentation Cmdlets (SharePointBAC)
  13. Event: SharePoint Saturday Denver
  14. Tool: Prezi
  15. SPS Denver Slides: The Essentials of SharePoint Disaster Recovery
  16. SPS Denver Slides: “Caching-In” for SharePoint Performance
  17. Event: SharePoint Saturday The Conference
  18. Event: SharePoint Saturday Austin
  19. Event: SPTechCon 2012 San Francisco

Bare Metal Bugaboos

Having recently recovered from a firewall outage using Windows Server 2008’s bare metal restore capabilities, I figured I’d write a quick post to cover how I did it. I also cover one really big learning I picked up as a result of the process.

I had one of those “aw nuts” moments last night.

At some point yesterday afternoon, I noticed that none of the computers in the house could get out to the Internet.  After verifying that my wireless network was fine and that internal DNS was up-and-running, I traced the problem back to my Forefront Threat Management Gateway (TMG) firewall.  Attempting to RDP into it proved fruitless, and when I went downstairs and looked at the front of the server, I noticed the hard drive activity light was constantly lit.

So, I powered the server off and brought it back on.  Problem solved … well, not really. It happened again a couple of hours later, so I repeated the process and made a mental note that I was going to have to look at the server when I had a chance.

Demanding My Attention

Well, things didn’t “stay fixed.”  Later in the evening, the same lack of connectivity surfaced again.  I went to the basement, powered the server off, and brought it back up.  That time, though, the server wouldn’t start and complained about having nothing to boot from.

As I did a reset and watched it boot again, I could see the problem: although the server knew that something was plugged in for boot purposes, it couldn’t tell that what was plugged in was a 250GB SATA drive.  Ugh.

When I run into those types of situations, the remedy is pretty clear: a new hard drive.  I always have a dozen or more hard drives sitting around (comes from running a server farm in the basement), and I grabbed a 500GB Hitachi drive that I had left over from another machine.  Within five minutes, the drive was in the server and everything was hooked back up.

Down to the Metal

Of course, a new hard drive was only half of the solution.  The other half of the equation involved restoring from backup.  In this case, a bare metal restore from backup was the most appropriate course of action since I was starting with a blank disk.

For those who may not be familiar with the concept of bare metal restoration, you can get a quick primer from Wikipedia.  I use Microsoft’s System Center Data Protection Manager 2010 (DPM) to protect the servers in my environment, so I knew that I had an image from which I could restore my TMG box.  I just dreaded the thought of doing so.

Why the worry?  Well, I think Arthur C. Clarke summed it up best with the following quote:

Any sufficiently advanced technology is indistinguishable from magic.

The Cold Sweats

Now bare metal restore isn’t “magic,” but it is relatively sophisticated technology … and it’s still an area that seems plagued with uncertainties.

I have to believe that I’m not the only one who feels this way.  I’ve co-authored two books on SharePoint disaster recovery, and the second book includes a chapter I wrote that covers bare metal restore on a Windows 2008 server.  My experience with bare metal restores can be summarized as follows: when it works, it’s awesome … but it doesn’t always work as we’d want it to.  When it doesn’t work, it’s plain ol’ annoying in that it doesn’t explain why.

So, it’s with that mindset that I started the process of trying to clear away my server’s lobotomized state.  These are the steps I carried out to get ready for the restore:

  1. I went into the DPM console, selected the most recent bare metal restore recovery point available to me (as shown on the right), and restored its contents to a network file share – in my case, \\VMSS-FILE1\RESTORE.  Note: you’ll notice a couple of restore points available after the one I selected; those were created in the time since I did the restore but before I wrote this post.
  2. The approximately 21GB bare metal restore image was created on the share.  I do have gigabit Ethernet on my network, and since I recently built out a new DPM server with faster hardware, it really didn’t take too long to get the image over to the share – maybe five minutes or so.  The result was a single folder in the designated file share.
  3. I carried out a little manipulation on the folder that DPM created; specifically, I cut out two levels of sub-folders and made sure that the WindowsImageBackup folder was available directly from the top of the share as shown at the left.  The Windows Recovery Environment (or WinRE) is picky about this detail; if it doesn’t see the folder structure it expects when restoring from a network share, it will declare that nothing is available for you to restore from – even though you know better.  (If you’d rather script this shuffle than drag folders around, see the sketch just after this list.)
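
For the script-inclined, here is a minimal PowerShell sketch of that folder shuffle.  The share path is the one from my environment, and since the intermediate folder names DPM generates will vary, the script hunts for the WindowsImageBackup folder rather than assuming a fixed path:

  # Sketch: hoist WindowsImageBackup to the top of the restore share.
  # The share path is from my environment; adjust it to match your own.
  $share = '\\VMSS-FILE1\RESTORE'
  $wib = Get-ChildItem -Path $share -Recurse |
      Where-Object { $_.PSIsContainer -and $_.Name -eq 'WindowsImageBackup' } |
      Select-Object -First 1
  if ($wib -and $wib.Parent.FullName -ne $share) {
      # Move the folder (and everything beneath it) to the share root,
      # which is where the WinRE expects to find it.
      Move-Item -LiteralPath $wib.FullName -Destination $share
  }

However you get there, the end state is what matters: the WindowsImageBackup folder needs to sit directly under the top of the share.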

In Recovery

With my actual restore image ready to go on the file share, I booted into the WinRE using a bootable USB memory stick with Windows Server 2008 R2 on it.  I walked through the process of selecting Repair your computer, navigating out to the file share, choosing my restore image, etc.  The process is relatively easy to stumble through, but if you want it in a lot of detail, I’d encourage you to read Chapter 5 (Windows Server 2008 Backup and Restore) in our SharePoint 2010 Disaster Recovery Guide.  In that chapter, I walk through the restore process in step-by-step fashion with screenshots.
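
One sanity check worth running before walking the wizard: confirm that the WinRE can actually see your image.  From a command prompt inside the recovery environment, wbadmin can enumerate the recovery points on a share.  The line below is a sketch – the share path is mine, the machine name is a placeholder for your server’s name, and you may need a net use against the share first so that the WinRE has credentials for it:

  wbadmin get versions -backupTarget:\\VMSS-FILE1\RESTORE -machine:YOUR-SERVER-NAME

If that command comes back empty-handed, the restore wizard isn’t going to find anything either; the WindowsImageBackup folder placement described earlier is the first thing to double-check.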

I got to the point in the wizard where I was prompted to select additional options for restore as shown on the left.  By default, the WinRE will format and repartition disks as needed.  In my case, that’s what I wanted; after all, I was putting a brand new drive in (one that was larger than the original), so formatting and partitioning was just what the doctor ordered.  I also had the ability to exclude some drives (through Exclude disks) from the recovery process – not something I had to worry about given that my system image only covered one hard drive.  If my hard drive required additional drivers (as might be needed with a drive array, RAID card, or something equivalent), I also had the opportunity to supply them with the Install drivers option.  Again, this was a basic in-place restore; the only thing I needed was a clean-up of the hard drive I supplied, so I clicked Next.

I confirmed the details of the operation on the next page, and everything looked right to me.  I then paused to mentally double-check everything I was about to do.

In my experience, the dialog on the left is the last bit of normal, easily understood wizard activity before the WinRE restore wizard takes off and we enter “magic land.”  As I mentioned, when restores work … they just chug right along and it looks easy.  When bare metal and system state restores don’t work, though, the error messages are often unintelligible and downright useless from a troubleshooting and remediation perspective.  I hoped that my restore would be one of the happy restores that chugged right along and made me proud of my backup and restore prowess.

I crossed my fingers and clicked the Next button.

<Insert Engine Dying Noises Here>

The screenshot on the right shows what happened almost immediately after I clicked Next.

Well, you knew this blog post would be a whole lot less interesting if everything went according to plan.

Once I worked through my panic and settled down, I looked a little closer.  I understood The system image restore failed without much interpretation, but I had no idea what to make of

Error details: The parameter is incorrect. (0x80070057)

That was the extent of what I had to work with.  All I could do was close out and try again.  Sheesh.
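
In hindsight, there was one small crumb of information buried in that code.  HRESULTs of the form 0x8007xxxx wrap ordinary Win32 error codes, so the low 16 bits – 0x57 in hex, or 87 in decimal – can be translated at any command prompt:

  net helpmsg 87

Of course, all that dutifully returns is “The parameter is incorrect.”  The code confirms the message and nothing more; it still says nothing about which parameter is wrong.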

Head Scratching

Let’s face it: there aren’t a whole lot of options to play with in the WinRE when it comes to bare metal restore.  The screenshot on the left shows the Advanced options available to you, but there really isn’t much to them.  I experimented with the Automatically check and update disk error information checkbox, but it really didn’t have an effect on the process.  Nevertheless, I tried restores with every combination of the checkboxes set and cleared.  No dice.

With the Advanced options out of the way, there was really only one other place to look: the Exclude disks dialog.  I knew Install drivers wasn’t needed, because I had no trouble accessing my disks and wasn’t using anything like a RAID card or some other advanced disk configuration.

I popped open the disk exclusion dialog (shown on the right) and tried running a restore after excluding all of the disks except the Hitachi disk to which I would be writing data (Disk 2).  Again, no dice – I still got the aforementioned error and couldn’t move forward.

I knew that DPM created usable bare metal images, and I knew that the WinRE worked when it came to restoring those images, so I knew that I had to be doing something wrong.  After another half an hour of goofing around, I stopped my thrashing and took stock of what I had been doing.

My Inner Archimedes

My eureka moment came when I put a few key pieces of information together:

  • While writing the chapter on Windows Server 2008 Backup and Restore for the SharePoint 2010 DR book, I’d learned that image restores from WinRE are very persnickety about the number of disks you have and the configuration of those disks.
  • When DPM was creating backups, only three hard drives were attached to the server: the original 250GB system drive and two 30GB SSD caching drives.
  • Booting into WinRE from a memory stick was causing a distinctly visible fourth “drive” to show up in the list of available disks.

The bootable USB stick had to be a factor, so I put it away and pulled out a Windows Server 2008 R2 installation disk.  I then booted into the WinRE from the DVD and walked through the entire restore process again.  When I got to the confirmation dialog and pressed the Next button this time around, I received no The parameter is incorrect errors – just a progress bar that tracked the restore operation.
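
Had I thought to look, the interloper would have been easy to spot from inside the recovery environment before ever clicking Next.  From a WinRE command prompt, diskpart shows the same disk list that the restore wizard works from:

  diskpart
  list disk
  exit

If diskpart reports more disks than existed when the backup image was taken – and a bootable USB stick shows up as one – that’s a cue to boot from media that doesn’t add itself to the list.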

Takeaway

The one point that’s going to stick with me from here on out is this: if I’m doing a bare metal restore, I need to be booting into the WinRE from a DVD or from some other source that doesn’t affect my drives list.  I knew that the disks list was sensitive on restore, but I didn’t expect USB drives to have any sort of effect on whether or not I could actually carry out the desired operation.  I’m glad I know better now.

Additional Reading and References

  1. Product Overview: Forefront Threat Management Gateway 2010
  2. Wikipedia: Bare-metal restore
  3. Product Overview: System Center Data Protection Manager 2010
  4. Book: SharePoint 2010 Disaster Recovery Guide

Recent and Upcoming SharePoint Activities

In this post I talk about the recent SharePoint Cincy event. I also cover some of my upcoming activities for the next month, including a trip to Denver and SharePoint Saturday Twin Cities.

There have been some great SharePoint events recently, and quite a few more are coming up.  Here are some of the events I have recently been (or will soon be) involved in:

SharePoint Cincy

On March 18th, Cincinnati saw its first (arguably) “major” SharePoint event.  SharePoint Cincy was put together by Geoff Smith, the Cincinnati CIO Roundtable, and MAX Training … and it was a huge success by any measure.  I took the picture on the right during the introductory speech by Geoff Smith, and it should give you an idea of how well-attended the event was.

I was fortunate enough to deliver a talk on disaster recovery during the day, and I also helped Geoff and the organizing committee in advance of the event with some of the speaker round-up for the IT professional / administrator track.

I enjoyed the event because the audience composition was different from what one would typically find at a SharePoint Saturday event.  Many of those in attendance were IT decision makers and managers rather than implementers and developers.  I attribute the high attendance in my DR session (typically not a big draw with technical crowds) to that demographic difference.

The next SharePoint Cincy event is already planned (for March 12th of next year, I believe), so keep your eyes peeled at the beginning of next year!

SharePoint Saturday Twin Cities

Some fine folks in the Minneapolis and St. Paul area (Jim Ferguson, Sarah Haase, and Wes Preston) have worked to assemble the upcoming SharePoint Saturday Twin Cities on April 9, 2011.  Like all SharePoint Saturdays, the event is free for attendees.  There’s plenty of good education and giveaways to make it worth your while to spend a Saturday with other SharePoint enthusiasts.

I’ll be heading out to the event to deliver my IT pro caching talk titled “Caching-In” for SharePoint Performance.  The abstract for the session appears below:

Caching is a critical variable in the SharePoint scalability and performance equation, but it’s one that’s oftentimes misunderstood or dismissed as being needed only in Internet-facing scenarios.  In this session, we’ll build an understanding of the caching options that exist within the SharePoint platform and how they can be leveraged to inject some pep into most SharePoint sites.  We’ll also cover some sample scenarios, caching pitfalls, and watch-outs that every administrator should know.

If you’re in the Twin Cities area and available on Saturday, April 9th, come on by the following address …

Normandale Community College
9700 France Avenue South
Bloomington, MN 55431

… for a day of high-quality and free training.  You can register here on Eventbrite!

Lunch and Learn with Prinomic

Idera partners with a number of different companies in order to make its software more available, and one of those companies is Prinomic Technologies.  Prinomic is a consulting company based out of Denver, Colorado, and they focus on the creation of solutions that employ and target SharePoint.  They are somewhat unique in that they offer a combination of both services and products, which affords them a great deal of flexibility when addressing customer needs.

I’ll actually be traveling out to Denver to deliver a lunch-and-learn in conjunction with Prinomic titled “SharePoint Disaster Recovery Options” on Wednesday, April 13th, 2011.  The lunch and learn is open to the public; simply follow the link (above) to register if you’re interested.

Prinomic is located at the following address:

4600 S Syracuse
9th floor
Denver, CO 80237

Colorado Springs SharePoint User Group

Knowing that I’d be out in the Denver area on April 13th, I reached out to some of the folks I know there to see if I might coordinate something with one of the local user groups.  I love speaking, and it was my hope that someone would actually grant me some more time with additional SharePoint geeks!

I was very fortunate to get a reply back from Dave Milner.  Dave and Gary Lapointe run the Colorado Springs SharePoint User Group, and they mentioned that it would be okay for me to come by and present to their group on the evening of the 13th.  So, it looks like I’ll be heading down to Colorado Springs after the lunch and learn with Prinomic!

I’ll be presenting my 2010 DR talk titled “SharePoint 2010 and Your DR Plan: New Capabilities, New Possibilities!”

Disaster recovery planning for a SharePoint 2010 environment is something that must be performed to protect your data and ensure the continuity of business operations. Microsoft made significant enhancements to the disaster recovery landscape with SharePoint 2010, and we’ll be taking a good look at how the platform has evolved in this session. We’ll dive inside the improvements that have been made to the native backup and restore capabilities found in the SharePoint 2007 platform to see what has been changed and enhanced. We’ll also look at the array of exciting new capabilities that have been integrated into the SharePoint 2010 platform, such as unattended content database recovery, SQL Server snapshot integration, and configuration-only backup and restore. By the time we’re done, you will possess a solid understanding of how the disaster recovery landscape has changed with SharePoint 2010.

If you’re in the Colorado Springs area on Wednesday, April 13th, come on by the user group!  The user group meets at the following address:

Cobham Analytics
985 Space Center Drive
Suite 100
Colorado Springs, CO

Meet-and-greet activities are from 5:30pm until 6pm, and the session begins at 6pm!

TechNet Events: Transforming IT from Virtualization to the Cloud

Finally, I wanted to mention a series of events that are both going on right now and coming soon.  My good friend Matt Hester, our region’s IT Pro Evangelist with Microsoft, is traveling around putting on a series of TechNet events titled “Transforming IT from Virtualization to the Cloud.”  These events offer free training and center on cloud computing: why it is important, private vs. public cloud options, and more.

The event looks really cool, and it’s being offered in a number of different cities in the region.  I’ve already signed up for the Cincinnati event on April 6th (a Wednesday).  Check out Matt’s blog post (linked above) for additional details.  If you want to sign up for the Cincinnati event on April 6th, you can use this link directly.

Additional Reading and References

  1. Event: SharePoint Cincy
  2. People: Geoff Smith
  3. Blog: Jim Ferguson
  4. Twitter: Sarah Haase
  5. Blog: Wes Preston
  6. Event: SharePoint Saturday Twin Cities
  7. Registration: SharePoint Saturday Twin Cities on Eventbrite
  8. Company: Idera
  9. Company: Prinomic Technologies
  10. Lunch and Learn: “SharePoint Disaster Recovery Options”
  11. LinkedIn: Dave Milner
  12. LinkedIn: Gary Lapointe
  13. TechNet Event: “Transforming IT from Virtualization to the Cloud”
  14. TechNet Event: Cincinnati cloud event

February’s Rip-Roarin’ SharePoint Activities

February is a really busy month for SharePoint activities. In this post, I cover some speaking engagements I have during the month. I also talk about the release of Idera SharePoint backup 3.0 — the product we’ve been working so hard to build and make available!

Holy smokes!  2011 is off to a fast start, and February is already here.  Now that our product release is out (see below), I’m going to make good on my promise to get more “real” blog content out.  Before I do that, though, I want to highlight all of the great SharePoint stuff I’m fortunate enough to be involved with during the month of February.

Idera SharePoint backup 3.0 Release

As some of you know, I’m currently a Product Manager for SharePoint products at Idera.  Although it isn’t something that is strictly community focused, Idera SharePoint backup has been a large part of my life for most of the last year.  We’ve been doing some really cool development and product work, and I want to share a piece of good news: We just released version 3.0 of Idera SharePoint backup!

Idera SharePoint backup is “my” product from a management standpoint, and I’m really proud of all the effort that our team put in to make the release a reality.  A lot of people in many different areas busted their butts to get the product out the door: development, quality assurance, information development, marketing, product marketing management, public relations, web site management, sales, sales engineering, and more.

To everyone who contributed to making the release a success: you have my heartfelt thanks.  To the rest of you who might be shopping for a SharePoint backup product, please check out what we’ve put together!

SPTechCon San Francisco

I’ll be heading out to BZ Media’s SPTechCon in San Francisco for most of the week of February 7th.  Although I will be delivering a lightning talk titled “Backup/Restore Knowledge Nuggets: What’s True, What’s Not?” (more-or-less the same talk I delivered at last Fall’s SPTechCon event in Boston) on Monday the 7th, that’s only a small part of why I’ll be at the conference.

The big stuff?  Well, first off is the big “public release” of Idera SharePoint backup 3.0.  I’ll be talking with conference participants, seeing what they like (and what they don’t like), explaining the new capabilities and features, etc.

My good friend (and co-author) John Ferringer and I will also be doing an Idera-sponsored book signing for our SharePoint 2010 Disaster Recovery Guide.  If you’re going to be at the conference and want to get a free (and signed!) copy of our book, come by booth #302 on Wednesday morning (2/9) at 10am during the coffee and donuts.  We’ll be around and easy to find: John will be the thin bald guy, and I’ll be the mostly bald guy next to him shoveling donuts into his mouth.  I have a tremendous weak spot for donuts …

Presenting for the Rochester SharePoint User Group

Rick Black is one of the organizers of the Rochester (New York) SharePoint User Group.  I met Rick late in 2009 at SharePoint Saturday Cleveland in Cleveland, Ohio; he and I were both presenting sessions.  We talked a bit during the event, and we pinged each other now and again on Twitter and Facebook in the time after the event.

At one point in time, Rick tossed out the idea of having me present for the Rochester SPUG.  I told him I’d certainly be up for it; after all, I really enjoy hanging out with SharePoint users and talking shop.  The trick, of course, would be getting me to Rochester.

Recently, I asked Rick if he thought a virtual SPUG presentation might work for his group.  I do quite a bit of presenting over Live Meeting these days, so I figured it might be an option.  It sounded like a good idea to Rick, so I’m on the schedule to (virtually) present for the Rochester SPUG on Thursday, February 10th, 2011.  I’ll be presenting Saving SharePoint – a time-tested and refined SharePoint disaster recovery talk.  The abstract reads as follows:

In this session, we will be discussing disaster recovery (DR) concepts and strategies for SharePoint in a format that highlights a combination of both business and technical concerns.  We will define some critical planning terms such as “recovery time objectives” and “recovery point objectives,” and we’ll see why they are so important to understand when trying to formulate a DR strategy.  We will also identify the capabilities and limitations of the Microsoft tools that can be used for backing up, restoring, and making SharePoint highly available for disaster recovery purposes.  Changes that have arrived with SharePoint Server 2010 and how they affect DR planning and implementation will also be covered.

I’ll be presenting the night that I get home on a red-eye flight from SPTechCon, so I could be a bit weary … but it will be fun.  I’m really looking forward to it!

SharePoint Saturday San Diego

For the last weekend of February, I’ll be heading back out to the west coast for SharePoint Saturday San Diego.  The event itself will be held at the San Diego Convention Center on Saturday, February 26th.  The event has filled up once already, but Chris Givens (who is organizing the event) was able to add another 75 tickets thanks to some additional support from sponsors.

In addition to my Saving SharePoint session (which is described earlier in this post), I’ll be delivering another session called “Caching-In” for SharePoint Performance.  The abstract for the session reads as follows:

Caching is a critical variable in the SharePoint scalability and performance equation, but it’s one that’s oftentimes misunderstood or dismissed as being needed only in Internet-facing scenarios.  In this session, we’ll build an understanding of the caching options that exist within the SharePoint platform and how they can be leveraged to inject some pep into most SharePoint sites.  We’ll also cover some sample scenarios, caching pitfalls, and watch-outs that every administrator should know.

As always, SharePoint Saturday events are free to the public.  They’re a great way to get a day’s worth of free training, access to SharePoint experts, and plenty of swag and info from vendors.  If you live in or around San Diego and are free on 2/26, consider signing up!

Two trips to the west coast in one month is definitely a first for me, but I’m looking forward to it.  I hope to see you out there!

Additional Reading and References

  1. Company: Idera
  2. Product: Idera SharePoint backup
  3. Company: BZ Media
  4. Event: SPTechCon San Francisco
  5. Event: SPTechCon SharePoint Lightning Talks
  6. Blog: John Ferringer’s My Central Admin
  7. Book: SharePoint 2010 Disaster Recovery Guide
  8. Twitter: Rick Black (@ricknology)
  9. User Group: Rochester SharePoint User Group
  10. Events: SharePoint Saturday San Diego
  11. Twitter: Chris Givens (@givenscj)