The rest of this summer is a busy one for me, and it is chock-full of SharePoint presentation events. In this blog post, I discuss my upcoming SharePoint Saturday event schedule as well as a webcast I’ll be delivering in the middle of August.
I just finished writing the conclusion for the SharePoint 2010 Disaster Recovery Guide, so it’s safe to say that the marathon writing and revising sessions are nearing an end. Book writing is one of those things that most people don’t do, so it’s hard to describe the feeling that comes when you truly internalize the realization that the light at the end of the tunnel isn’t another oncoming train. To me, it means that I may once again have some free time to spend with my family, play videogames, take care of some much-needed home network maintenance, and actually work on some cool SharePoint projects.
The end of the book couldn’t have come a moment too soon, either, because it seems that I’m about to launch into what I’m starting to jokingly call my “SharePoint Summer Whirlwind Tour.”
SharePoint Saturday New York
The first stop on my tour is the Big Apple. SharePoint Saturday New York will be held on Saturday, July 31st, at the Microsoft office in Manhattan. The event “sold out” in almost no time at all, and based on some recent tweets, it is currently wait-listed about 80 people deep.
The event is scheduled with a whopping nine concurrent tracks, so there will be plenty of SharePoint goodness for everyone in attendance. I’ll be presenting on SharePoint disaster recovery for both SharePoint 2007 and SharePoint 2010 with “Saving SharePoint,” so stop in to see me if you want to talk about DR!
Since Idera is sponsoring the event, I’ll also be in and around the Idera booth answering questions, showing off our backup tools, and hopefully meeting some of the local SharePoint community. It should be a lot of fun!
SharePoint Saturday Denver
The week after SharePoint Saturday New York is SharePoint Saturday Denver in (surprise) Denver, Colorado, on August 7th. The folks organizing SharePoint Saturday Denver have planned for six tracks: two for architecture, two for development, one for admins/IT pros, and one for end users/decision makers. I feel fortunate in that I’ll be delivering two presentations in the admin/IT pro track. The first is “Saving SharePoint” (on SharePoint disaster recovery), and the second is titled “’Caching-In’ for SharePoint Performance.”
“’Caching-In’ for SharePoint Performance” is a relatively new session that I put together based on experience I gained while consulting for a “particular” client, and it dives into the platform caching mechanisms that are built into SharePoint. The abstract sums it up pretty well.
Caching is a critical variable in the SharePoint scalability and performance equation, but it’s one that’s oftentimes misunderstood or dismissed as being needed only in Internet-facing scenarios. In this session, we’ll build an understanding of the caching options that exist within the SharePoint platform and how they can be leveraged to inject some pep into most SharePoint sites. We’ll also cover some sample scenarios, caching pitfalls, and watch-outs that every administrator should know.
Quite a few folks from Idera will be present at the event, and since we’re a sponsor, we’ll have a booth. When I’m not in a session, I’m sure I’ll be milling around the booth.
I’m particularly excited about the Denver event because it represents a chance to interact with a portion of the SharePoint community I don’t see or talk to very often. The folks putting on the event are a top-notch group in my mind, and many of the speakers are folks I’ve wanted to meet for some time.
SharePoint Saturday Columbus
The SharePoint Saturday tour concludes for a while (for me, anyway) with SharePoint Saturday Columbus on August 14th, 2010, in Columbus, Ohio. This particular SharePoint Saturday is special to me because I’m a member of the committee that is currently working to put the event together.
We have some fantastic sessions lined up, and I’ll be delivering “Saving SharePoint” in one of the IT pro session slots throughout the day. For those of you who are in and around the Cincinnati, Columbus, and Cleveland areas, I hope you’ll show up for a great day of SharePoint sessions and community connection building! You can sign up for the event with this Eventbrite link.
Idera is a sponsor for the event and will have a booth. I’m sure I’ll be in and around the booth, but being both a speaker and an organizer means that I’ll probably be doing quite a bit of extra running around, too.
Secrets of SharePoint Webcast
“But wait, there’s more!” I was imagining one of those old made-for-TV gadget commercials from many years ago as I typed that …
The Wednesday after SharePoint Saturday Columbus (August 18th), I’ll be delivering a webcast for Idera as part of their Secrets of SharePoint series. Idera regularly seeks the help of SharePoint community members to give webcasts, and John Ferringer and I actually delivered one of these titled “SharePoint Disaster Recovery Essential Guidelines” back in 2009.
SharePoint 2010 is here, and many organizations are hard at work building their implementation roadmap. Some organizations are starting fresh with SharePoint 2010 while many others are contemplating a migration strategy from SharePoint 2007. Regardless of how an organization arrives at SharePoint 2010, disaster recovery planning for their SharePoint environment is something that must be included to ensure the protection of their data and the continuity of business operations.
Microsoft made significant enhancements to the disaster recovery landscape with SharePoint 2010, and in this webcast we’ll be taking a good look at many of those new features. We’ll dive into enhancements to the existing backup and restore capabilities that were present in the SharePoint 2007 platform to see what has changed and been enhanced. We’ll also look at many of the exciting new capabilities that have been integrated into the SharePoint 2010 platform, such as unattended content database recoveries, SQL Server snapshot integration, and configuration-only backup and restore. By the time we’re done, you will possess a solid understanding of how the disaster recovery landscape has changed for the better with SharePoint 2010.
If you’re free at 1pm EDT (12pm CDT) on August 18th, I encourage you to sign up and listen. Just like the SharePoint Saturday events, there is no charge to attend the webcast. Just sign up and you’re ready to go!
And After That?
After that, I’ll be cooling my jets for a while and taking a much-needed break to remind my kids of what their father looks like. There are many backlogged blog posts I’ve been planning to write, though, so hopefully I’ll be able to start sharing more soon!
In this post, I discuss my quest to determine whether or not site collection backups properly capture workflow information in SharePoint 2010. TechNet made a point of saying they didn’t, but Joel Oleson said they did. Who was right?
Do you trust TechNet? I generally do, as I figure the good folks at Microsoft are doing their best to disseminate reliable information to those of us working with their products. As I recently learned, though, even the information that appears on TechNet needs some cross-checking once in a while.
Bear with me, as this post is equal parts narrative and data discussion. If you don’t like stories and want to cut straight to the chase, though, simply scroll down to the section titled “The Conclusion” for the key takeaway.
Site Collection Backup Primer
For those who aren’t overly familiar with site collection backups, it’s probably worth spending a moment discussing them a bit before going any further. Site collection backups are, after all, at the heart of this blog post.
What is a site collection backup? It is basically what you would expect from its name: a backup of a specific SharePoint site collection. These backups can be used to restore or overwrite a site collection if it becomes lost or corrupt, and they can also be used to copy site collections from one web application (or farm) to another.
Anytime you execute one of the following operations, you’re performing a site collection backup:
From the command line: STSADM.exe -o backup -url <url> -filename <filename>
Through PowerShell in SharePoint 2010: Backup-SPSite <url> -Path <filepath>
Using the “Perform a site collection backup” link in SharePoint 2010 Central Administration
When a site collection backup is executed, a single file with a .bak extension is generated that contains the entire contents of the site collection targeted. This file can be freely copied and moved around as needed. Aside from some recommendations regarding the maximum size of the site collection captured using this approach (15GB and under in SharePoint 2007, 85GB and under in SharePoint 2010), the backups themselves are really quite handy for both protection and site collection migration operations.
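To make the round trip concrete, here’s roughly what a backup followed by a restore into a different web application looks like with the SharePoint 2010 cmdlets (the URLs and file paths below are placeholders, not values from an actual farm):

```powershell
# Capture the target site collection into a single .bak file
Backup-SPSite http://intranet.contoso.com/sites/teamsite `
    -Path D:\Backups\teamsite.bak

# Restore that backup into a different web application; -Force
# overwrites an existing site collection at the destination URL
Restore-SPSite http://restore.contoso.com/sites/teamsite `
    -Path D:\Backups\teamsite.bak -Force
```

Both cmdlets must be run from the SharePoint 2010 Management Shell (or a session with the Microsoft.SharePoint.PowerShell snap-in loaded) on a farm server.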
A Little Background
John Ferringer and I have been plugging away at the SharePoint 2010 Disaster Recovery Guide for quite some time. As you might imagine, the writing process involves a lot of research, hands-on experimentation, and fact-checking. This is especially true for a book that’s being written about a platform (SharePoint 2010) that is basically brand new in the marketplace.
While researching backup-related changes for the book, I made a special mental note of the following change regarding site collection backups in SharePoint 2010:
Workflows are not included in site collection backups
This stuck with me when I read it, because I hadn’t recalled any such statement being made with regard to site collection backups in SharePoint 2007. Since Microsoft made a special note of pointing out this limitation for SharePoint 2010, though, I figured it was important to keep in mind. Knowing that workflows had changed from 2007 to 2010, I reasoned that the new limitation was probably due to some internal workflow plumbing alterations that adversely affected the backup process.
A couple of weeks back, I was presenting at SharePoint Saturday Ozarks alongside an awesome array of other folks (including Joel Oleson) from the SharePoint community. Due to a speaker no-show in an early afternoon slot, Mark Rackley (the event’s one-man force-of-nature organizer) decided to hold an “ask the experts” panel where attendees could pitch questions at those of us who were willing to share what we knew.
A number of good questions came our way, and we all did our best to supply our experiences and usable advice. Though I don’t recall the specific question that was asked in one particular case, I do remember advising someone to perform a site collection backup before attempting to do whatever it was they wanted to do. After sharing that advice, though, things got a little sketchy. The following captures the essence of the exchange that took place between Joel and me:
Me: <to the attendee> Site collection backups don’t capture everything in SharePoint 2010, though, so be careful.
Joel: No, site collection backups are full-fidelity.
Me: TechNet specifically indicates that workflows aren’t covered in site collection backups with SharePoint 2010.
Joel: No, the backups are still full fidelity.
Me: <blank stare>
The discussion topic and associated questions for the panel quickly changed, but my brain was still stripping a few gears trying to reconcile what I’d read on TechNet with what Joel was saying.
After the session, I forwarded the TechNet link I had quoted to Joel and asked if he happened to have an “inside track” or perhaps some information I didn’t have access to. We talked about the issue for a while at the hotel a little later on, but the only thing that we could really conclude was that more research was needed to see if site collection backups had in fact changed with SharePoint 2010. Before taking off that weekend, we decided to stay in contact and work together to get some answers.
Under The Hood
To understand why this issue bothered me so much, remember that I’m basically in the middle of co-authoring a book on the topic of disaster recovery – a topic that is intimately linked to backup and restore operations. The last thing I would ever want to do is write a book that contains ambiguous or (worse) flat-out wrong information about the book’s central topic.
To get to the heart of the matter, I decided to start where most developers would: with the SharePoint object model. In both SharePoint 2007 and SharePoint 2010, the object model types that are used to back up and export content typically fall into one of two general categories:
Catastrophic Backup and Restore API. These types are located in the Microsoft.SharePoint.Administration.Backup namespace, and they provide SharePoint’s full-fidelity backup and restore functions. Backup and restore operations take place on content components such as content databases, service applications, and the entire SharePoint farm. Catastrophic backup and restore operations are full-fidelity, meaning that no data is lost or selectively ignored during a backup and subsequent restore. By default, catastrophic backup and restore operations don’t get any more granular than a content database. If you want to protect something within a content database, such as a site collection, sub-site, or list, you have to back up the entire content database containing the target object(s).
Content Deployment API. The member types of this API (also known internally at Microsoft as the PRIME API) reside within the Microsoft.SharePoint.Deployment namespace and are used for granular content export and import operations. The exports that are created by the types in this namespace target objects from the site collection level all the way down to the field level – typically webs, lists, list items, etc. Content Deployment exports are not full-fidelity and are commonly used for moving content around more than they are for actual backup and restore operations.
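To illustrate the granular side of that divide, here’s a rough sketch of a web-level export using the Content Deployment API (the URL and file paths are placeholders; error handling is omitted for brevity):

```csharp
using Microsoft.SharePoint.Deployment;

// Granular (non-full-fidelity) export via the Content Deployment API.
// This produces a .cmp package that can later be imported with SPImport.
SPExportSettings settings = new SPExportSettings
{
    SiteUrl = "http://intranet.contoso.com/sites/teamsite",
    ExportMethod = SPExportMethodType.ExportAll,
    FileLocation = @"D:\Exports",
    BaseFileName = "teamsite-export.cmp"
};

new SPExport(settings).Run();
```

Note the namespace: this is Microsoft.SharePoint.Deployment, not the catastrophic API’s Microsoft.SharePoint.Administration.Backup, and the package it produces doesn’t carry everything the source site did.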
So, where does this leave site collection backups? In truth, site collection backups don’t fit into either of these categories. They are a somewhat unusual case, both in SharePoint 2007 and SharePoint 2010.
Whether a site collection backup is initiated through STSADM, PowerShell, or Central Administration, a single method is called on the SPSiteCollection type, which resides in the Microsoft.SharePoint.Administration namespace. This is basically the signature of the method:
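From memory (so the parameter names below are approximations), it boils down to a single call:

```csharp
// Member of Microsoft.SharePoint.Administration.SPSiteCollection
public void Backup(string strSiteUrl, string strFilename, bool bOverwrite)
```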
To carry out a site collection backup, all that is needed is the URL of the site collection, the filename that will be used for the resultant backup file, and a TRUE or FALSE to indicate whether an overwrite should occur if the selected file already exists.
If you were to pop open Reflector and drill into the Backup method on the SPSiteCollection type, you wouldn’t get very far before running into a wall at the SPRequest type. SPRequest is a managed wrapper that serves as the take-off point for a whole host of calls into unmanaged code, and the execution of the Backup method is actually handled in unmanaged legacy code. Examining the internals of what actually takes place during a site collection backup (or restore, for that matter) simply isn’t possible with Reflector.
Since the internals of the Backup method weren’t available for reflective analysis, I was forced to drop back and punt in order to determine how site collection backups and workflow interacted within SharePoint 2010.
I knew that I was going to have to execute backup and restore tests at some point; I was just hoping that I would be a bit more informed (through object model inspection) about where I needed to focus my efforts. Without any visibility into the internals of the site collection backup process, though, I didn’t really have much to start with.
Going into the testing process, I knew that I wasn’t going to have enough time to perform exhaustive testing for every scenario, execution path, variable, and edge-case that could be relevant to the backup and restore processes. I had to develop a testing strategy that would hit the likely problem areas as quickly (and with as few runs) as possible.
After some thought, I decided that these points were important facets to consider and account for while testing:
Workflow Types. Testing the most common workflow types was important. I knew that I would need to test at least one out of the box (OOTB) workflow type. I also decided that I needed to test at least one instance of each type of workflow that could be created through SharePoint Designer (SPD) 2010; that meant testing a list-bound workflow, a site collection workflow, and a reusable workflow. I decided that custom code workflows, such as those that might be created through Visual Studio, were outside the scope of my testing.
Workflow Data. In order to test the impact of backup and restore operations on a workflow, I obviously had to ensure that one or more workflows were in-place within the site collection targeted for backup. Having a workflow attached to a list would obviously test the static data portions of the workflow, but there was other workflow-related data that had to be considered. In particular, I decided that the testing of both workflow history information and in-process workflow state were important. More on the workflow state in a bit …
Backup and Restore Isolation. While testing, it would be important to ensure that backup operations and restore operations impacted one another (or rather, had the potential to impact one another) as little as possible. Though backups and restores occurred within the same virtual farm, I isolated them to the extent that I could. Backups were performed in one web application, and restores were performed in a separate web application. I even placed each web application in its own (IIS) application pool – just to be sure. I also established a single VM snapshot starting point; after each backup and restore test, I rolled back to the snapshot point to ensure that nothing remained in the farm (or VM, for that matter) that was tied to the previous round of testing.
I created a single Publishing Portal, bolted a couple of sub-sites and Document Libraries into it, and used it as the target for my site collection backup operations. The Document Library that I used for workflow testing was not held constant; it varied according to the needs of each specific test.
I ran four different workflow test scenarios. My OOTB workflow scenario involved testing the page approval workflow for publishing pages. My other three SPD workflow tests (list-bound, site collection, and reusable workflow) all involved the same basic set of workflow steps:
Wait five minutes
Create a To Do item (which had to be completed to move on)
Wait five more minutes
Add a comment to the workflow target
In both the OOTB workflow and SPD workflow scenarios, I wanted to perform backups while workflows were basically “in flight” to see how workflow state would or wouldn’t be impacted by the backup and restore processes. For the publishing approval workflow, this meant taking a site collection backup while at least one page was pending approval. For the SPD workflows, it meant capturing a backup while at least one workflow instance was in a five minute wait period and another was waiting on the completion of the To Do item.
Prior to executing a backup in each test case, I ran a couple of workflow instances from start to finish. This ensured that I had some workflow history information to capture and restore.
Once site collection backups were captured in each test case, I restored them into the empty web application. I then opened the restored site collection to determine what did and didn’t get transcribed through the backup and restore process.
Results Of Testing
In each workflow case (OOTB and all three SPD workflows), all workflow information that I could poke and prod appeared to survive the backup and restore process without issue. Workflow definition data was preserved, and workflow history came over intact. Even more impressive, though, was the fact that in-process workflow state was preserved. SPD workflow steps that were in the middle of a wait period when a backup was taken completed their wait period after restore and moved on. To Do items that were waiting for user intervention continued to wait and then proceeded to the next step when they were marked as completed in the restored site collection.
In addition, new instances of each workflow type could be created and started in both site collections following the backup and restore operations. The backup and subsequent restore didn’t appear to have any effect on either the source or destination.
Though my testing wasn’t exhaustive, it did cast doubt on the absolute nature of the statement made on TechNet regarding site collection backups failing to include workflows.
While I was conducting my research and testing, Joel was leveraging his network of contacts and asking folks at Microsoft for the real story behind site collection backups and workflow. He made a little progress with each person he spoke to, and in the end, he managed to get someone to go on the record.
The official word from Microsoft is that the TechNet note indicating that site collection backups don’t include workflows is a misprint. In reality, the point that should have been conveyed through TechNet was that content exports (via the Content Deployment API) don’t include workflows – a point that is perfectly understandable considering that the Content Deployment API doesn’t export or import with full-fidelity. Microsoft indicated that they’ll be correcting the error, and TechNet may have been corrected by the time you read this.
My takeaway on this: if something on TechNet (or anywhere else on the web) doesn’t quite add up, it never hurts to test and seek additional information from others in the community who are knowledgeable on the subject matter. In this case, it made a huge difference.
This post introduces SharePoint Saturday Columbus which will be taking place on August 14, 2010. Several of us are putting the event together, and we’re seeking both speakers and sponsors. I will also be speaking at SharePoint Saturday Ozarks this Saturday, June 12th, and delivering my new talk titled “‘Caching-In’ for SharePoint Performance.”
The last couple of months have been exceptionally busy, so this blog hasn’t been getting the attention it deserves. All of my time has been spent writing chapters for the SharePoint 2010 Disaster Recovery Guide that John Ferringer and I are putting together. The good news is that John and I have rounded the bend and are heading towards home on completion of the book, so I will be getting back to blogging about topics of greater substance towards the middle of the summer.
Announcing SharePoint Saturday Columbus!
Yesterday we (the planning committee) announced that SharePoint Saturday Columbus will be taking place at the Conference Center at OCLC in Dublin, Ohio on August 14th, 2010. For those of you not familiar with the central Ohio region, Dublin is a suburb on the north side of Columbus.
Brian Jackett, Jennifer Mason, Nicola Young, and I have been pulling the pieces together over the last several months, and we finally have enough done that we can announce the event. We’re very excited to be bringing a SharePoint Saturday event to this region of the Midwest!
We are actively seeking both speakers and sponsors for the event. If you or someone you know falls into either or both of these categories, please head out to the SharePoint Saturday Columbus site for sponsorship information, session submission forms, and other resources. You can also follow @SPSColumbus on Twitter for more information and announcements in the time leading up to the event.
Speaking of SharePoint Saturdays …
SharePoint Saturday Ozarks
It’s funny to think that the whole SharePoint Saturday experience started about a year ago for me. I’ll be going back to the scene of the crime this weekend when I head to Harrison, Arkansas, for SharePoint Saturday Ozarks.
Mark Rackley is reminding the SharePoint community that he is a force of nature by putting all the pieces together to make this event happen. Most SharePoint Saturday events have an organizing committee, but Mark plays all the instruments in this band. It’s simply amazing.
This time around, I’ll actually be delivering a session on something other than SharePoint disaster recovery. The session is titled “’Caching-In’ for SharePoint Performance,” and it’s a new one for me. I’m really looking forward to giving the talk, because caching within SharePoint is something I am both passionate about and have deep experience with. Here’s the abstract for my session:
Caching is a critical variable in the SharePoint scalability and performance equation, but it’s one that’s oftentimes misunderstood or dismissed as being needed only in Internet-facing scenarios. In this session, we’ll build an understanding of the caching options that exist within the SharePoint platform and how they can be leveraged to inject some pep into most SharePoint sites. We’ll also cover some sample scenarios, caching pitfalls, and watch-outs that every administrator should know.
If you happen to be in the Harrison, AR region on Saturday, June 12th, swing by the North Arkansas College. There will be one heck of a SharePoint party going on!
In this quick post, I talk about my presentation of “Saving SharePoint” at SharePoint Saturday Houston in a few days (Saturday, May 1st).
I’d normally have posted some information about this a bit earlier, but the last few weeks have been a bit of a whirlwind given the new job.
This Saturday, May 1st, I’ll be speaking at SharePoint Saturday Houston. I’m already here (in Houston) on business this week, and SharePoint Saturday Houston represents a great way to wrap up the week before heading back to Cincinnati!
I’ll be presenting “Saving SharePoint,” the talk that I’ve given (both solo and with my cohort in crime, John Ferringer) at a number of SharePoint Saturday events. In the talk, I discuss SharePoint disaster recovery, key terms and concepts for speaking the “DR lingo,” and the tools that SharePoint comes with to help you protect your data. A substantial portion of the talk also focuses on DR procedures and business practices that anyone tasked with DR responsibilities needs to understand to effectively carry out their duties.
If you check this blog with any degree of regularity, then you know that I’ve been relatively quiet for the last couple of months. I haven’t really posted anything new in some time, my tweets have been fewer in number (not that I’m a generator of high traffic on Twitter anyway), and I’ve generally been lying low. This is due in part to writing for the upcoming SharePoint 2010 Disaster Recovery Guide, but writing isn’t really the largest reason I’ve been “sparse” as of late.
For a few months now, I’ve been in a state of transition with regard to both my career and my employer. Now that all of the discussions are over, the details have been finalized, and I’m on my way to Houston for a week, I’m excited to announce that I’ve joined Idera as their Product Manager for SharePoint Products! The press release with some additional details can be found at this link.
For those of you who may not be familiar with the name, Idera is a software company that is based out of Houston, Texas. Idera makes tools for SharePoint, SQL Server, and PowerShell. In my new role with them, I’ll be part of the team that is working to craft the next generation of Idera’s backup and restore tools. This excites me on so many levels!
Given the degree to which many of my “extracurricular” activities (that is, writing and speaking) have focused on disaster recovery and the SharePoint platform, I think the new position is going to be a great fit. The match-up is wonderful in a number of ways:
Though I worked with SharePoint as a consultant with my previous company, I was always one step removed from the platform. With Idera, I’ll be working on products that specifically target SharePoint – a big win in my book.
About a year and a half ago, I made it a goal to get more involved in the SharePoint community. I wanted to participate more, give back some of what I had gotten, and a host of other things. I see this position as a great way to continue those efforts in a way that helps both me and the company I work for.
When it comes to SharePoint, I’ve always had one foot in the development world and one foot in the infrastructure/IT pro world. Most of the development work I’ve done for SharePoint has focused on core plumbing, interop with other systems, performance improvement, and general tools. I’d be hard-pressed to find a better fit in this regard than Idera!
Though Idera is headquartered in Houston, I’ll still be staying in Cincinnati. I will be in Texas all week, though, to meet with my team, discuss strategies, and get myself “into the game,” so to speak.
If you see me around at a conference, SharePoint Saturday event, or anywhere else, please stop me and let me know what you think of Idera’s products. Make sure you share your thoughts on what you think should be done to make them better, too. From now on, I’ll be in a unique position to do something with the feedback!
In this post, I discuss some of my activities for the next couple of months. These include the INTERalliance TechOlympics, SharePoint Saturday Michigan, and continuing efforts to get the SharePoint 2010 Disaster Recovery Guide ready for product launch.
2010 is in full-swing, and there seems to be no shortage of activities for me to jump into! If anything, I need more free time to take on some of the stuff I really want to sink my teeth into (such as a SharePoint 2010 CodePlex project I want to have ready for RTM). Until I have something more tangible in hand, though, I’ll avoid talking about that topic any further.
Here are some of the things occupying my free time in the short-to-mid term:
TechOlympics Expo 2010
The TechOlympics Expo is the type of event every adult geek wishes they had when they were in high school – a weekend lock-in featuring technical competitions, cool toys, games of every imaginable sort, and pretty much everything else that would get a teenage gearhead jazzed-up. The underlying goal of the event is to get high school kids interested in technology, careers in technology, and technical opportunities in the Cincinnati area.
The event (on March 5-7) is being put on by the INTERalliance of Greater Cincinnati, and my involvement in the event is kind of a curious thing. My primary client of the past 2+ years is a big backer of (and heavily invested in) the INTERalliance, so naturally they kick in help whenever events come up. I helped the INTERalliance through a last-minute (and somewhat ugly) technical hurdle involving SMS voting for their PharaohFest event last October, and I suspect that played a part in my being asked to help out with the TechOlympics.
With the TechOlympics, I’m part of a team that’s working to make all the “technical stuff” (behind-the-scenes and otherwise) happen. My responsibilities seem to shift a bit each day, but the bulk of what I’ve been working on is coordinating network logistics and services, translating “the vision” into technical infrastructure, providing some guidance on applications being written to support the event, and generally doing my best at “collision avoidance” to ensure that we don’t miss anything important for the event.
I’m confident that the event is going to be incredible, and it’s been a lot of fun doing the planning thus far. Seeing everything come together is going to be neat – both for me and for everyone else who has been laboring to make the magic happen!
SharePoint Saturday Michigan
What would an “Upcoming Activities” post be without a SharePoint Saturday announcement! The next one I’ll be attending is SharePoint Saturday Michigan in Ann Arbor on March 13th. I’ll be presenting “Saving SharePoint,” the disaster recovery talk that John Ferringer and I have been delivering at various SharePoint Saturday events around the region. I’ll be flying solo this time around, though, as John has some other things going on that weekend.
As always, SharePoint Saturday events are free and open to the public. If you have any interest in learning more about SharePoint, getting some free training, or simply networking and meeting other professionals in the SharePoint space, please sign up!
SharePoint 2010 Disaster Recovery Guide
This announcement is last, but it’s definitely not least. Some of you are aware, but for those who aren’t: John and I have been working on the SharePoint 2010 Disaster Recovery Guide for a while now. I’m not going to lie – it’s slow going. Personally, I’m a very slow writer, and the process itself is exceptionally labor-intensive. Nevertheless, we’re making progress – one page at a time.
Our goal (and Cengage’s goal for us) is to have the book ready for SharePoint 2010 RTM. I haven’t seen or heard anything official from Microsoft, but rumor has it that SharePoint 2010 will probably be out sometime in June. If that’s the case, then John and I are on-track.
If you have suggestions for us, particularly if you read the first book, we would love to hear them. We’re incorporating a few that we already received (for example, a chapter that covers some real world use-cases), but our ears are open and listening. We know that DR isn’t a topic that gets everyone overly hot and bothered (unless they’ve lost everything at some point, of course), but our goal is to make the book as useful as possible. We’d love your help!
In this post, I discuss a couple of DCOM/RPC snags I ran into while configuring Microsoft’s Data Protection Manager (DPM) 2007 client protection agent to run on my new Forefront Threat Management Gateway (TMG) 2010 server. I also walk through the troubleshooting approach I took to resolve the issues that appeared.
I completed the move from my old ISA servers to TMG about a week ago, and I’ve been very happy with TMG thus far. TMG’s ISP redundancy and load balancing features have been fantastic, and they’ve allowed me to use my Internet connections much more effectively.
As a user of ISA since its original 2000 release, I also had no problem jumping in and working with the TMG management GUI. It was all very familiar from the get-go. Call me “very satisfied.”
This afternoon, I took a few moments to un-join the old ISA servers from the domain, power them down, and clean things up. I had also planned to take a little time integrating the new TMG box into my Data Protection Manager (DPM) 2007 backup rotation. Unfortunately, though, the DPM integration took a bit longer than expected …
For those unfamiliar with the operation of DPM, I’ll take a couple of moments to explain a bit about how it works. In order for DPM to do its thing, any computer that is going to be protected must have the DPM 2007 Protection Agent installed on it. Once the DPM Protection Agent is installed and configured, the DPM server communicates through the agent (which operates as a Windows service) to leverage the protected system’s Volume Shadow Copy Service (VSS) for backups.
Installing the DPM agent typically isn’t a big challenge for common client computers, and it can be accomplished directly from within the DPM management GUI itself. When the agent is installed through the GUI, DPM connects to the computer to be protected, installs the agent, and configures it to point back to the DPM server. No manual intervention is required.
On some systems, though, it’s simply easier to install and configure the agent directly on the to-be-protected system itself. A locked-down server (like a TMG box) falls into this category, so I manually copied the agent installation package to the TMG server, ran it, and then ran the follow-up Attach-ProductionServer cmdlet from the DPM Management Shell on the DPM server. The install proceeded without issue, and the attach went off without a hitch. I thought I was good to go.
I fired up the management GUI on the DPM Server, went into the Agents tab under Management, and discovered that I couldn’t connect to the TMG server.
The fact that I couldn’t connect to the TMG server (SS-TMG1) from my DPM box was a bit of an eyebrow lifter, but it wasn’t entirely unexpected. Communication between a DPM server and the DPM agent leverages DCOM, and I’d had to jump through a few hoops to ensure that the DPM server could communicate with the ISA boxes previously.
I suspected that an RPC/DCOM issue was in play, but I was having trouble seeing where the problem might be. So, I reviewed where I was at.
Unless an exception is configured for it, Windows Firewall will block communication between a DPM server and its agents. I confirmed that Windows Firewall wasn’t in play and that TMG itself was handling all of the firewall action.
Examining TMG, I confirmed that I had a rule in place that permitted all traffic between my DPM server (SS-TOOLS1) and the TMG box itself.
Strict RPC compliance is another potential problem for DPM on both ISA and TMG, because requiring strict compliance blocks all DCOM traffic. Unless the Enforce strict RPC compliance checkbox is unchecked, the RPC Filter drops DCOM traffic (along with any other traffic that doesn’t explicitly begin its RPC exchange by contacting the RPC endpoint mapper on the target server). I confirmed that my rule wasn’t requiring strict compliance (as shown on the right).
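As a mental model, strict RPC compliance boils down to a first-packet check: a session that starts by contacting the endpoint mapper (port 135) is fine, while one that jumps straight to a dynamically assigned port gets dropped. The sketch below is purely illustrative; the function and port-handling are my own simplification, not TMG's actual filter implementation.

```python
# Conceptual sketch of "Enforce strict RPC compliance" behavior.
# Port 135 is the well-known RPC endpoint mapper port; everything else
# here (names, return values) is illustrative, not TMG's real engine.

RPC_ENDPOINT_MAPPER_PORT = 135

def filter_rpc_session(first_packet_dst_port: int, strict_compliance: bool) -> str:
    """Decide whether an RPC-style session is allowed through the filter."""
    if first_packet_dst_port == RPC_ENDPOINT_MAPPER_PORT:
        return "allowed"  # proper RPC exchange: endpoint mapper contacted first
    if strict_compliance:
        return "dropped"  # DCOM-style traffic fails the strict check
    return "allowed"      # relaxed mode: DCOM traffic passes

# A DCOM connection going straight to a dynamic port (e.g., 5718):
print(filter_rpc_session(5718, strict_compliance=True))   # dropped
print(filter_rpc_session(5718, strict_compliance=False))  # allowed
```

This is why unchecking the box matters for DPM: the agent's DCOM traffic never touches the endpoint mapper first, so the strict filter discards it.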
I made sure that my DPM server wasn’t listed as a member of either the Enterprise Remote Management Computers or Remote Management Computers Computer Sets in TMG. A couple of TMG System Policy rules apply specifically to these two Computer Sets and can affect their members’ ability to call into TMG via RPC and DCOM.
I reviewed all System Policy rules that might impact inbound RPC calls to the TMG server, and I couldn’t find any that would (or should) be influencing DPM’s ability to connect to its agent.
I also went out to the Forefront TMG Product Team’s Blog to see what advice they might have to offer, and I found this excellent article on RPC and TMG – well worth a read if you’re trying to troubleshoot RPC problems. Unfortunately, it didn’t offer me any tips that would help in my situation.
Watching The Traffic
I may have simply had tunnel vision, but I was still obsessed with the notion that strict RPC checking was causing my problems. To see if it was, I decided to fire up TMG’s live logging and watch what happened when DPM tried to connect to its agent. I set the logging to display only traffic originating from the DPM box, and this is what I saw.
There was nothing wrong that I could see. My access rule was clearly being utilized (the one that doesn’t enforce strict RPC checking), and I wasn’t seeing any errors – just connection initiations and closes. Traffic from DPM to TMG looked clean.
Taking A Step Back
I was frustrated, and I clearly needed to consider the possibility that I didn’t have a good read on the problem. So, I went to the Windows Application event log to see if it might provide some insight. I probably should have started with the event logs instead of TMG itself and firewall rules … but better late than never, I figured.
Popping open Event Viewer, I was greeted with the image you see on the left. What I saw was enlightening, for I did have a problem with communication between the DPM agent and the DPM server. The part that intrigued me, though, was the fact that the problem was with outbound communication (that is, from TMG server to DPM server) – not the other way around as I had originally suspected. All of my focus had been on troubleshooting traffic coming into TMG because I’d been interpreting the errors I’d seen to mean that the DPM server couldn’t reach the agent – not that the agent couldn’t “phone home,” so to speak.
I knew for a fact that the DPM Server, SS-TOOLS1, didn’t have the Windows Firewall service running. Since the service wasn’t running, there was no way that the agent’s attempts to communicate with DPM could (or rather, should) be getting blocked at the destination. That left the finger of blame pointing at TMG.
On The Way Out
I decided to repeat the traffic watching exercise I’d conducted earlier, but instead of watching traffic coming into the TMG box from my DPM server, I elected to watch traffic going the other direction – from TMG to DPM. Here’s what I saw:
The “a-ha” moment for me came when I saw the firewall rule that was actually governing RPC traffic to the DPM box from TMG. It wasn’t the DPM All <=> SS-TMG1 rule I’d established — it was a system policy rule called [System] Allow RPC from Forefront TMG to trusted servers.
System policy rules are normally hidden in the firewall policy tab, so I had to explicitly show them to review them. Once I did, there it was – rule 22.
Note that this rule applies to all traffic from the TMG server to the Internal network; I’ll be talking about that more in a bit.
I couldn’t edit the rule in-place; I needed to use the System Policy editor. So, I fired up the System Policy Editor and traced the rule back to its associated configuration group. As it turned out, the rule was tied to the Active Directory configuration group under Authentication Services.
As the picture on the left clearly shows, the Enforce strict RPC compliance checkbox was checked. Once I unchecked it and applied the configuration change, the DPM agent began communicating with the DPM server without issue. Problem solved.
I was fairly sure that I hadn’t experienced this sort of trouble installing the DPM Protection Agent under ISA Server 2006, so I tried to figure out what might have happened.
I didn’t recall having to adjust the target system policy under ISA when installing the DPM agent originally, but a quick boot and check of my old ISA server revealed that the checkbox was indeed unchecked (meaning that strict RPC compliance wasn’t being enforced). I’d apparently made the change at some point and forgotten about it. I suspect I adjusted it in the distant past while working on passing AD information through ISA, getting VPN functionality up and running, or perhaps something else.
Bottom line: TMG enforces strict compliance for RPC traffic that originates on the TMG server (Local Host) and is destined for the Internal network. Since System Policy Rules are applied before administrator-defined Firewall Policy Rules, RPC traffic from the TMG server to the Internal network will always be governed by the system policy unless that policy is disabled.
In this particular scenario, the DPM 2007 Protection Agent’s operation was impacted. Even though I’d created a rule that I thought would govern interactions between DPM and TMG, the reality is that it only governed RPC traffic coming into TMG – not traffic going out.
In reality, any service or application that sends DCOM traffic originating on the TMG server to the Internal network is going to be affected by the Allow RPC from Forefront TMG to trusted servers rule unless the associated system policy is adjusted.
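The precedence at work here can be sketched as simple first-match rule evaluation: system policy rules are consulted before administrator-defined firewall policy rules, so my custom rule never saw the outbound RPC traffic. The model below is an illustration with simplified rule fields, not a representation of TMG's actual rule engine.

```python
# Illustrative first-match evaluation: system policy rules are applied
# before administrator-defined firewall policy rules. Rule fields are
# simplified for the sketch.

system_policy_rules = [
    {"name": "[System] Allow RPC from Forefront TMG to trusted servers",
     "source": "Local Host", "dest": "Internal", "protocol": "RPC",
     "strict_rpc": True},
]

firewall_policy_rules = [
    {"name": "DPM All <=> SS-TMG1",
     "source": "Local Host", "dest": "Internal", "protocol": "RPC",
     "strict_rpc": False},
]

def match_rule(traffic, rules):
    """Return the first rule whose source/dest/protocol match the traffic."""
    for rule in rules:
        if (rule["source"] == traffic["source"]
                and rule["dest"] == traffic["dest"]
                and rule["protocol"] == traffic["protocol"]):
            return rule
    return None

def governing_rule(traffic):
    # System policy is evaluated first; the first matching rule wins.
    return (match_rule(traffic, system_policy_rules)
            or match_rule(traffic, firewall_policy_rules))

outbound_rpc = {"source": "Local Host", "dest": "Internal", "protocol": "RPC"}
rule = governing_rule(outbound_rpc)
print(rule["name"])        # the system policy rule wins, not the custom one
print(rule["strict_rpc"])  # True -> outbound DCOM traffic gets dropped
```

The takeaway: a permissive administrator-defined rule is irrelevant for this traffic as long as an earlier system policy rule matches it first.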
The core findings of this post have been documented by others (in a variety of forms and scenarios) for ISA, but this go-round with TMG and DPM caught me off-guard enough that I thought it would be worth sharing my experience with other firewall administrators. If anyone else moving to TMG takes the “build it from the ground up” approach that I did, then the system policy I’ve been discussing may get missed. Hopefully this post will serve as a good lesson for those new to TMG (or a reminder for veteran firewall administrators).
In this post, I take a small detour from SharePoint to talk about my home network, how it has helped me to grow my skill set, and where I see it going.
Whenever I’m speaking to other technology professionals about what I do for a living, there’s always a decent chance that the topic of my home network will come up. This seems to be particularly true when talking with up-and-coming technologists, as I’m commonly asked by them how I managed to get from “Point A” (having transitioned into IT from my previous life as a polymer chemist) to “Point B” (consulting as a SharePoint architect).
I thought it would be fun (and perhaps informative) to share some information, pictures, and other geek tidbits on the thing that seems to consume so much of my “free time.” This post also allows me to make good on the promise I made to a few people to finally put something online for them to see.
Wait … “Basement Datacenter?”
For those on Twitter who may have seen my occasional use of the hashtag #BasementDatacenter: I can’t claim to have originated the term, though I fully embrace it these days. The first time I heard the term was when I was having one of the aforementioned “home network” conversations with a friend of mine, Jason Ditzel. Jason is a Principal Consultant with Microsoft, and we were working together on a SharePoint project for a client a couple of years back. He was describing his love for his recently acquired Windows Home Server (WHS) and how I should have a look at the product. I described why WHS probably wouldn’t fit into my network, and that led Jason to comment that Microsoft would have to start selling “Basement Datacenter Editions” of its products. The term stuck.
So, What Does It Look Like?
Two pictures appear on the right. The left-most shot is a picture of my server shelves from the front. Each of the computing-related items in the picture is labeled in the right-most shot. There are obviously other things in the pictures, but I tried to call out the items that might be of some interest or importance to my fellow geeks.
Generally speaking, things look relatively tidy from the front. Of course, I can’t claim to have the same degree of organization in the back. The shot on the left displays how things look behind and to the right of the shots that were taken above. All of the power, network, and KVM cabling runs are in the back … and it’s messy. I originally had things nicely organized with cables of the proper length, zip ties, and other aids. Unfortunately, servers and equipment shift around enough that the organization system wasn’t sustainable.
While doing the network planning and subsequent setup, I’m happy that I at least had the foresight to leave myself ample room to move around behind the shelves. If I hadn’t, my life would be considerably more difficult.
On the topic of shelves: if you ever find yourself in need of extremely heavy duty, durable industrial shelves, I highly recommend this set of shelves from Gorilla Rack. They’re pretty darn heavy, but they’ll accept just about any amount of weight you want to put on them.
I had to include the shot below to give you a sense of the “ambiance.”
Anyone who’s been to my basement (which I lovingly refer to as “the bunker”) knows that I have a thing for dim but colorful lighting. I normally illuminate my basement area with Christmas lights, colored light bulbs, etc. Frankly, things in the basement are entirely too ugly (and dusty) to be viewed under normal lighting. It may be tough to see from this shot, but the servers themselves contribute some light of their own.
Why On Earth Do You Have So Many Servers?
After seeing my arrangement, the most common question I get is “why?” It’s actually an easy one to answer, but to do so requires rewinding a bit.
Many years ago, when I was a “young and hungry” developer, I was trying to build a skill set that would allow me to work in the enterprise – or at least on something bigger than a single desktop. Networking was relatively new to me, as was the notion of servers and server-side computing. The web had only been visual for a while (anyone remember text-based surfing? Quite a different experience …), HTML 3 was the rage, Microsoft was trying to get traction with ASP, ActiveX was the cool thing to talk about (or so we thought), etc.
It was around that time that I set up my first Windows NT4 server. I did so on the only hardware I had leftover from my first Pentium purchase – a humble 486 desktop. I eventually got the server running, and I remember it being quite a challenge. Remember: Google and “answers at your fingertips” weren’t available a decade or more ago. Servers and networking also weren’t as forgiving and self-correcting as they are nowadays. I learned an awful lot while troubleshooting and working on that server.
Before long, though, I wanted to learn more than was possible on a single box. I wanted to learn about Windows domains, I wanted to figure out how proxies and firewalls worked (anyone remember Proxy Server 2.0?), and I wanted to start hosting online Unreal Tournament and Half Life games for my friends. With everything new I learned, I seemed to pick up some additional hardware.
When I moved out of my old apartment and into the house that my wife and I now have, I was given the bulk of the basement for my “stuff.” My network came with me during the move, and shortly after moving in I re-architected it. The arrangement changed, and of course I ended up adding more equipment.
Fast-forward to now. At this point in time, I actually have more equipment than I want. When I was younger and single, maintaining my network was a lot of fun. Now that I have a wife, kids, and a great deal more responsibility both in and out of work, I’ve been trying to re-engineer things to improve reliability, reduce size, and keep maintenance costs (both time and money) down.
I can’t complain too loudly, though. Without all of this equipment, I wouldn’t be where I’m at professionally. Reading about Windows Server, networking, SharePoint, SQL Server, firewalls, etc., has been important for me, but what I’ve gained from reading pales in comparison to what I’ve learned by *doing*.
How Is It All Set Up?
I actually have documentation for most of what you see (ask my Cardinal SharePoint team), but I’m not going to share that here. I will, however, mention a handful of bullets that give you an idea of what’s running and how it’s configured.
I’m running a Windows 2008 domain (recently upgraded from Windows 2003)
With only a couple of exceptions, all the computers in the house are domain members
I have redundant ISP connections (DSL and BPL) with static IP addresses so I can do things like my own DNS resolution
My primary internal network is gigabit Ethernet; I also have two 802.11g access points
All my equipment is UPS protected because I used to lose a lot of equipment to power irregularities and brown-outs.
I believe in redundancy. Everything is backed up with Microsoft Data Protection Manager, and in some cases I even have redundant backups (e.g., with SharePoint data).
There’s certainly a lot more I could cover, but I don’t want to turn this post into more of a document than I’ve already made it.
Fun And Random Facts
Some of these are configuration related, some are just tidbits I feel like sharing. All are probably fleeting, as my configuration and setup are constantly in flux:
Beefiest Server: My SQL Server, a Dell T410 with quad-core Xeon and about 4TB worth of drives (in a couple of RAID configurations)
Wimpiest Server: I’ve got some straggling Pentium 3, 1.13GHz, 512MB RAM systems. I’m working hard to phase them out as they’re of little use beyond basic functions these days.
Preferred Vendor: Dell. I’ve heard plenty of stories from folks who don’t like Dell, but quite honestly, I’ve had very good luck with them over the years. About half of my boxes are Dell, and that’s probably where I’ll continue to shop.
Uptime During Power Failure: With my oversize UPS units, I’m actually good for about an hour’s worth of uptime across my whole network during a power failure. Of course, I have to start shutting down well before that (to ensure graceful power-off).
Most Common Hardware Failure: Without a doubt, I lose power supplies far more often than any other component. I think that’s due in part to the age of my machines, the fact that I haven’t always bought the best equipment, and a couple of other factors. When a machine goes down these days, the first thing I test and/or swap out is a power supply. I keep at least a couple spares on-hand at all times.
Backup Storage: I have a ridiculous amount of drive space allocated to backups. My DPM box alone has 5TB worth of dedicated backup storage, and many of my other boxes have additional internal drives that are used as local backup targets.
Server Paraphernalia: Okay, so you may have noticed all the “junk” on top of the servers. Trinkets tend to accumulate there. I’ve got a set of Matrix characters (Mr. Smith and Neo), a PIP boy (of Fallout fame), Cheshire Cat and Alice (from American McGee’s Alice game), a Warhammer mech (one of the Battletech originals), a “cat in the bag” (don’t ask), a multimeter, and other assorted stuff.
Cost Of Operation: I couldn’t begin to tell you, though my electric bill is ridiculous (last month’s was about $400). Honestly, I don’t want to try to calculate it for fear of the result inducing some severe depression.
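The “good for about an hour” uptime figure above is the kind of thing you can sanity-check with quick back-of-the-envelope arithmetic. The capacity, load, and efficiency numbers below are made-up examples for illustration, not measurements from my network:

```python
# Rough UPS runtime estimate. All figures here are hypothetical examples,
# not measurements from my actual UPS units or equipment.

def estimated_runtime_minutes(battery_wh: float, load_w: float,
                              inverter_efficiency: float = 0.85) -> float:
    """Usable runtime in minutes for a given battery capacity and load.

    battery_wh: total battery capacity in watt-hours
    load_w: steady-state load in watts
    inverter_efficiency: fraction of battery energy delivered to the load
    """
    return (battery_wh * inverter_efficiency) / load_w * 60

# Example: UPS units totaling ~1,300 Wh of battery feeding a ~900 W load:
print(round(estimated_runtime_minutes(1300, 900)))  # roughly 74 minutes
```

The real lesson in the estimate is the shutdown margin: if the math says an hour, graceful power-off needs to start well before that, since battery capacity degrades with age and runtime drops sharply as load rises.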
Where Is It All Going?
As I mentioned, I’m actively looking for ways to get my time and financial costs down. I simply don’t have the same sort of time I used to have.
Given rising storage capacities and processor capabilities, it probably comes as no surprise to hear me say that I’ve started turning towards virtualization. I have two servers that act as dedicated Hyper-V hosts, and I fully expect the trend to continue.
Here are a few additional plans I have for the not-so-distant future:
I just purchased a Dell T110 that I’ll be configuring as a Microsoft Forefront Threat Management Gateway 2010 (TMG) server. I currently have two Internet Security and Acceleration Server 2006 servers (one for each of my ISP connections) and a third Windows Server 2008 for SSL VPN connectivity. I can get rid of all three boxes with the feature set supplied by one TMG server. I can also dump some static routing rules and confusing firewall configuration in the process. That’s hard to beat.
I’m going to see about virtualizing my two domain controllers (DCs) over the course of the year. Even though the machines are backed-up, the hardware is near the end of its usable life. Something is eventually going to fail that I can’t replace. By virtualizing the DCs, I gain a lot of flexibility (I can move them around on physical hardware) and can get rid of two more physical boxes. Box reduction is the name of the game these days! I’ll probably build a new (virtual) DC on Windows Server 2008 R2; migrate FSMO roles, DNS, and DHCP responsibilities to it; and then phase out the physical DCs – rather than try a P2V move.
With SharePoint Server 2010 coming, I’m going to need to get some even beefier server hardware. I’m learning and working just fine with the aid of desktop virtualization right now (my desktop is a Core i7-920 with 12GB RAM), but that won’t cut it for “production use” and testing scenarios when SharePoint Server 2010 goes RTM.
If the past has taught me anything, it’s that additional needs and situations will arise that I haven’t anticipated. I’m relatively confident that the infrastructure I have in place will be a solid base for any “coming attractions,” though.
If you have any questions or wonder how I did something, feel free to ask! I can’t guarantee an answer (good or otherwise), but I do enjoy discussing what I’ve worked to build.
In this post, I cover the upcoming SharePoint Saturday Indianapolis event and a couple of its sessions (including one of my own).
You can’t turn a corner these days without running into a SharePoint Saturday event! At the end of this month, Indianapolis will be holding its SharePoint Saturday on January 30th.
My disaster recovery (DR) cohort-in-crime, John Ferringer, and I will be presenting “Saving SharePoint” within the event’s IT Pro track. We’ve given the talk together a handful of times, and the session tries to communicate some of the more important concepts from our DR book, such as the importance of understanding RPO/RTO, tools that are available for DR out-of-the-box, and more. We’ll also be covering how the landscape will be changing a bit for DR in the upcoming SharePoint 2010 release.
One of my team members, Steve Pietrek, will also be presenting his new SharePoint and Silverlight presentation – one that I am very anxious to see. Steve’s been doing an exceptional amount of work in “constrained” SharePoint environments recently, and he’s found all sorts of ways to bend Silverlight to his will. I’m sure developers will walk away with some novel ideas.
As always, SharePoint Saturday events are free to the public; all they’ll cost you is some time. Sign up today!
Recent failures with Microsoft Office Picture Manager and SharePoint Explorer View led me to dive under-the-hood to better understand how SharePoint 2007’s WebDAV and IIS7’s WebDAV Publishing role service interact. This post summarizes my findings, as well as how I eliminated my 405 errors.
Several months ago, I decided that a rebuild of my primary MOSS environment here at home was in order. My farm consisted of a couple of Windows Server 2003 R2 VMs (one WFE, one app server) that were backed by a non-virtualized SQL Server. I wanted to free up some cycles on my Hyper-V boxes, and I had an “open physical box” … so, I elected to rebuild my farm on a single, non-virtualized box running (the then newly released) Windows Server 2008 R2.
The rebuild went relatively smoothly, and bringing my content databases over from the old farm posed no particular problems. Everything was good.
Fast forward to just a few weeks ago.
One of the site collections in my farm is used to store and share pictures that we take of our kids. The site collection is, in effect, a huge multimedia repository …
… and allow me a moment to address the concerns of the savvy architects and administrators out there. I do understand SharePoint BLOB (binary large object) storage and the implications (and potential effects) that large multimedia libraries can have on scalability. I wouldn’t recommend what I’m doing to most clients – at least not until remote BLOB storage (RBS) gets here with SharePoint 2010. Remember, though, that my wife and I are just two people – not a company of hundreds or thousands. The benefits of centralized, tagged, searchable, nicely presented content outweigh scalability and performance concerns for us.
Back to the pictures site. I was getting set to upload a batch of pictures, so I did what I always do: I went into the Upload menu of the target pictures library in the site collection and selected Upload Multiple Pictures as shown on the right. For those who happen to have Microsoft Office 2007 installed (as I do), this action normally results in the Microsoft Office Picture Manager getting launched as shown below.
From within the Microsoft Office Picture Manager, uploading pictures is simply a matter of navigating to the folder containing the pictures, selecting the ones that are to be pushed into SharePoint, and pressing the Upload and Close button. From there, the application itself takes care of rounding up the pictures that have been selected and getting them into the picture library within SharePoint. SharePoint pops up a page that provides a handy “Go back to …” link that can then be used to navigate back to the library for viewing and working with the newly uploaded pictures.
This time, when I selected the Upload Multiple Pictures menu item, SharePoint navigated to the infopage.aspx page shown above. I waited, and waited … but the Microsoft Office Picture Manager never launched. I hit my browser’s back button and tried the operation again. Same result: no Picture Manager.
Trouble In River City
Picture Manager’s failure to launch was obviously a concern, and I wanted to know why I was encountering problems … but more than anything, I simply wanted to get my pictures uploaded and tagged. My wife had been snapping pictures of our kids left and right, and I had 131 JPEG files waiting for me to do something.
I figured that there was more than one way to skin a cat, so I initiated my backup plan: Explorer View. If you aren’t familiar with SharePoint’s Explorer View, then you need not look any further than the name to understand what it is and how it operates. By opening the Actions menu of a library (such as a Document Library or Picture Library) and selecting the Open with Windows Explorer menu item as shown on the right, a Windows Explorer window is opened to the library. The contents of the library can then be examined and manipulated using a file system paradigm – even though SharePoint libraries are not based in (or housed in) any physical file system.
The mechanisms through which the Explorer View is prepared, delivered, and rendered are really quite impressive from a technical perspective. I’m not going to go into the details, but if you want to learn more, I highly recommend a whitepaper authored by Steve Sheppard. Steve is an escalation engineer with Microsoft whom I’ve worked with in the past, and his knowledge and attention to detail are second to none – qualities that really come through in the whitepaper.
Unfortunately for me, though, my attempts to open the picture library in Explorer View also led nowhere. Simply put, nothing happened. I tried the Open with Windows Explorer option several times, and I was greeted with no action, error, or visible sign that anything was going on.
SharePoint and WebDAV
I was 0 for 2 on my attempts to get at the picture library for uploading. I wasn’t sure what was going on, but I was pretty sure that WebDAV (Web Distributed Authoring and Versioning) was mixed-up in the behavior I was seeing. WebDAV is implemented by SharePoint and typically employed to provide the Explorer View operations it supports. I was under the impression that the Microsoft Office Picture Manager leveraged WebDAV to provide some or all of its upload capabilities, too.
After a few moments of consideration, the notion that WebDAV might be involved wasn’t a tremendous mental leap. In rebuilding my farm on Windows Server 2008 R2, I had moved from Internet Information Services (IIS) version 6 (in Windows Server 2003 R2) to IIS7. WebDAV is different in IIS7 versus previous versions … I just hadn’t heard about SharePoint WebDAV-based functions operating any differently.
Playing a Client-Side Tune
My gut instincts regarding WebDAV hardly qualified as “objective troubleshooting information,” so I fired up Fiddler2 to get a look at what was happening between my web browser and the rebuilt SharePoint farm. When I attempted an Open with Windows Explorer against the picture library, I was greeted with a bunch of HTTP 405 errors.
To be completely honest, I’d never actually seen an HTTP 405 status code before. It was obviously an error (since it was in the 400-series), but beyond that, I wasn’t sure. A couple of minutes of digging through the W3C’s status code definitions, though, revealed that a 405 status code is returned whenever a requested method or verb isn’t supported.
I dug a little deeper and compared the request headers my browser had sent with the response headers I’d received from SharePoint. Doing that spelled-out the problem pretty clearly.
Here’s an example of one of the HTTP headers that was sent:
PROPFIND was the method that my browser was passing to SharePoint, and the request was failing because the server didn’t include the PROPFIND verb in its list of supported methods as stated in the Allow: portion of the response. PROPFIND was further evidence that WebDAV was in the mix, too, given its limited usage scenarios (and since the bulk of browser web requests employ either the GET or POST verb).
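The status/Allow contract behind a 405 is easy to model: a request fails when its method isn't in the resource's supported set, and the 405 response advertises what is supported via the Allow header. The sketch below is my own simplification of that contract, not how IIS actually dispatches requests; the verb set mirrors a server that handles common browser verbs but not WebDAV's PROPFIND.

```python
# Minimal sketch of the HTTP 405 contract: unsupported method -> 405,
# and the response's Allow header lists what the resource does support.
# This models the status-code behavior only, not IIS's real pipeline.

def dispatch(method: str, allowed_methods: set[str]) -> tuple[int, dict]:
    """Return (status_code, headers) the way a spec-compliant server would."""
    if method.upper() in allowed_methods:
        return 200, {}
    # Per the HTTP spec, a 405 response must include an Allow header
    # listing the methods the target resource supports.
    return 405, {"Allow": ", ".join(sorted(allowed_methods))}

allowed = {"GET", "POST", "HEAD", "OPTIONS"}
status, headers = dispatch("PROPFIND", allowed)
print(status)            # 405
print(headers["Allow"])  # GET, HEAD, OPTIONS, POST
```

Reading the Allow header in a captured 405 response is exactly what gave the game away here: PROPFIND simply wasn't on the server's list.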
So what was going on? The operations I was attempting had worked without issue under IIS6 and Windows Server 2003 R2, and I was pretty confident that I hadn’t seen any issues with other Windows Server 2008 (R2 and non-R2) farms running IIS7. I’d either botched something on my farm rebuild or run into an esoteric problem of some sort; experience (and common sense) pointed to the former.
Doing Some Legwork
I turned to the Internet to see if anyone else had encountered HTTP 405 errors with SharePoint and WebDAV. Though I quickly found a number of posts, questions, and other information that seemed related to my situation, none of it really described my particular scenario or what I was seeing.
After some additional searching, I eventually came across a discussion on the MSDN forums that revolved around whether WebDAV should be enabled within IIS for servers that serve up SharePoint content. The back and forth was a bit disjointed, but my relevant takeaway was that enabling WebDAV within IIS7 seemed to cause problems for SharePoint.
I decided to have a look at the server housing my rebuilt farm to see if I had enabled the WebDAV Publishing role service. I didn’t think I had, but I needed to check. I opened the Server Manager applet and looked at the role services that were enabled for the Web Server (IIS) role. The results are shown in the image at right; apparently, I had enabled WebDAV Publishing. My guess is that I did it because I thought it would be a good idea at the time, but it was starting to look like a pretty bad idea all around.
I was tempted to simply remove the WebDAV Publishing role service and cross my fingers, but instead of messing with my live “production” farm, I decided to play it safe and study the effects of enabling and disabling WebDAV Publishing in a controlled environment. I fired up a VM that more or less matched my production box (Windows Server 2008 R2, 64-bit, same Windows and SharePoint patch levels) to play around.
When I started the VM, a quick check of the enabled role services for IIS showed that WebDAV Publishing was not enabled – further proof that I had gotten a bit overzealous in enabling role services on my rebuilt farm. I quickly went into the VM’s SharePoint Central Administration site and created a new web application (http://spsdev:18480). Within the web application, I created a team site called Sample Team Site. Within that team site, I then created a picture library called Sample Picture Library for testing.
When It Works (without the WebDAV Publishing Role Service)
I fired up Fiddler2 in the VM, opened Internet Explorer 8, navigated to the Sample Picture Library, and attempted to execute an Open with Windows Explorer operation. Windows Explorer opened right up, so I knew that things were working as they should within the VM. The pertinent capture for the exchange between Internet Explorer and SharePoint (from Fiddler2) appears below.
Reviewing the dialog between client and server, there appeared to be two distinct “stages” in this sequence. The first stage was an HTTP request that was made to the root of the site collection using the OPTIONS method, and the entire HTTP request looked like this:
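A representative reconstruction of that OPTIONS request (the host and port come from the VM’s web application; the remaining header values are illustrative):

```http
OPTIONS / HTTP/1.1
User-Agent: Microsoft-WebDAV-MiniRedir/6.1.7600
translate: f
Host: spsdev:18480
Connection: Keep-Alive
```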
In response to the request, the SharePoint server passed back an HTTP 200 status that looked similar to the block that appears below. Note the permitted methods/verbs (as Allow:) that the server said it would accept, and that the PROPFIND verb appeared within the list:
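A reconstructed illustration of that response follows (the date and version stamps are representative values, but the Allow: list matches what a WSSv3/MOSS 2007 server typically advertises):

```http
HTTP/1.1 200 OK
Date: Mon, 21 Dec 2009 21:15:09 GMT
Server: Microsoft-IIS/7.5
MicrosoftSharePointTeamServices: 12.0.0.6421
Allow: GET, POST, OPTIONS, HEAD, MKCOL, PUT, PROPFIND, PROPPATCH, DELETE, MOVE, COPY, GETLIB, LOCK, UNLOCK
Content-Length: 0
```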
It was these PROPFIND requests (or rather, the 207 responses to the PROPFIND requests) that gave the client-side WebClient service (directed by Internet Explorer) the information it needed to determine what was in the picture library, which operations the library supported, and so on.
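In fragment form, one of those second-stage exchanges looks something like the following (the path is illustrative, and the 207 response body is an XML property listing omitted here for brevity):

```http
PROPFIND /Sample%20Picture%20Library HTTP/1.1
Depth: 1
translate: f
Host: spsdev:18480

HTTP/1.1 207 MULTI-STATUS
Content-Type: text/xml
Server: Microsoft-IIS/7.5
MicrosoftSharePointTeamServices: 12.0.0.6421
```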
When It Doesn’t Work (i.e., WebDAV Publishing Enabled)
When the WebDAV Publishing role service was enabled within IIS7, the very same request (to open the picture library in Explorer View) yielded a very different series of exchanges (again, captured within Fiddler2):
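A condensed, reconstructed version of the Fiddler2 session list (the status codes are the ones I observed; the exact request paths are illustrative):

```http
OPTIONS /                             ->  200 OK
PROPFIND /                            ->  405 Method Not Allowed
PROPFIND /Sample%20Picture%20Library  ->  405 Method Not Allowed
```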
The initial OPTIONS request returned an HTTP 200 status that was identical to the one previously shown, and it even included the PROPFIND verb amongst its list of accepted methods:
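Reconstructed for illustration (the date and version stamps are representative values):

```http
HTTP/1.1 200 OK
Date: Mon, 21 Dec 2009 22:04:30 GMT
Server: Microsoft-IIS/7.5
MicrosoftSharePointTeamServices: 12.0.0.6421
Allow: GET, POST, OPTIONS, HEAD, MKCOL, PUT, PROPFIND, PROPPATCH, DELETE, MOVE, COPY, GETLIB, LOCK, UNLOCK
Content-Length: 0
```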
Even though the PROPFIND verb was supposedly permitted, though, subsequent requests resulted in an HTTP 405 status and failure:
HTTP/1.1 405 Method Not Allowed
Allow: GET, HEAD, OPTIONS, TRACE
Date: Mon, 21 Dec 2009 22:04:31 GMT
Unfortunately, these behind-the-scenes failures didn’t seem to generate any noticeable error or message in client browsers. When testing (locally) in the VM environment, I was at least prompted to authenticate and eventually shown a form of “unsupported” error message. When connecting (remotely) to my production environment, though, the failure was silent. Only Fiddler2 told me what was really occurring.
The solution to this issue, it seems, is to ensure that the WebDAV Publishing role service is not installed on WFEs serving up SharePoint content in Windows Server 2008 / IIS7 environments. The mechanism by which SharePoint 2007 handles WebDAV requests is still something of a mystery to me, but it doesn’t appear to involve the IIS7-based WebDAV Publishing role service at all.
Steve Sheppard’s troubleshooting whitepaper (introduced earlier) mentions that enabling or disabling the WebDAV functionality supplied by IIS6 (under Windows Server 2003) has no appreciable effect on SharePoint operation. Steve even mentions that SharePoint’s internal WebDAV implementation is provided by an ISAPI filter housed in Stsfilt.dll. Though this was true in WSSv2 and SharePoint Portal Server 2003 (the platforms addressed by Steve’s whitepaper), it’s no longer the case with SharePoint 2007 (WSSv3 and MOSS 2007). The OPTIONS and PROPFIND verbs are mapped to the Microsoft.SharePoint.ApplicationRuntime.SPHttpHandler type in SharePoint web.config files (see below) – the Stsfilt.dll library doesn’t even appear anywhere within the file system of MOSS servers (or at least not on mine).
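The relevant portion of a MOSS 2007 web.config looks roughly like this (the assembly version and public key token shown are the standard WSSv3 values; verify against your own web.config before relying on them):

```xml
<httpHandlers>
  <remove verb="GET,HEAD,POST" path="*" />
  <add verb="GET,HEAD,POST" path="*"
       type="Microsoft.SharePoint.ApplicationRuntime.SPHttpHandler, Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" />
  <add verb="OPTIONS,PROPFIND" path="*"
       type="Microsoft.SharePoint.ApplicationRuntime.SPHttpHandler, Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" />
</httpHandlers>
```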
Regardless of how it is implemented, the fact that the two verbs of interest (OPTIONS and PROPFIND) are mapped to a SharePoint type indicates that WebDAV functionality is still handled privately within SharePoint for its own purposes. When the WebDAV Publishing role service is enabled in IIS7, IIS7 takes over (or at least takes precedence for) PROPFIND requests … and that’s where things appear to break.
To Sum Up
After toggling the WebDAV Publishing role service on and off a couple of times in my VM, I became convinced that my production environment would start behaving the way I wanted it to if I simply disabled IIS7’s WebDAV Publishing functionality. I uninstalled the WebDAV Publishing role service, and both Microsoft Office Picture Manager and Explorer View started behaving again.
I also made a note to myself to avoid installing role services I thought I might need before I actually needed them :-)