Archive

Author Archive

Do You Know What’s Going to Happen When You Enable the SharePoint BLOB Cache?

March 12, 2012 32 comments

The topic of the SharePoint BLOB Cache and how it operates jumped back into the front of my brain recently given some conversations I’ve had and things I’ve seen (e.g., a promising CodePlex project called the SharePoint 2010 BlobCache Manager).

SharePoint PSA

"Just Do It" Post-It NoteThis post is my way of doing something akin to a SharePoint public service announcement. I’ve recently seen some caching-related functionality and topics – especially the BLOB Cache – getting some real traction in different circles, and I think that the attention and love is generally a good thing. I am somewhat concerned, though, by the fact that the discussions and projects that have been surfacing don’t seem to say much beyond the Post-It on the right.

What do I mean by “Just do it?” Well, here’s the high-level summary of what I’ve been seeing people say, post, and practice with the SharePoint BLOB Cache:

  • The SharePoint BLOB Cache can lighten the load on your SQL Servers by caching BLOB (binary large object) data such as images, video, audio, CSS, etc., on your web front-ends (WFEs).
  • BLOB assets are then served directly from the WFEs. This prevents regular round trips from the WFEs to SQL Servers for every BLOB item needed, and this conserves network bandwidth and reduces SQL Server load.
  • To realize the benefits of the BLOB Cache, simply turn it on and you’re good to go. Nothing to it!

To be fair, I think that I’ve done a disservice by contributing to the perception that all you need to do to kick-start BLOB caching is change this web.config line …

<BlobCache location="C:\BlobCache\14" path="\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$" maxSize="10" enabled="false" />

… to this:

<BlobCache location="C:\BlobCache\14" path="\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$" maxSize="10" enabled="true" />

If you look closely, you’ll see that the only difference between the two XML elements is that the enabled attribute is changed from false to true in the second example.

As you might have guessed, I wouldn’t be writing this blog post if simply changing the BlobCache element’s enabled attribute to true didn’t cause potential problems.

The Small Print

[Image: Disclaimer slide containing some BLOB Cache usage warnings]

At the recent SPTechCon in San Francisco, I gave a five-minute lightning talk called Pushing SharePoint’s ‘Go Faster’ Button. It was a lighthearted look at SharePoint performance, and it focused on a couple of caching changes that could be easily implemented to speed things up. One of the recommended changes was (surprise surprise) to simply “turn on” SharePoint’s BLOB Cache.

I only had five minutes to deliver the lightning talk, so I had to cram all of the disclaimers for what I was recommending into the legal-style slide that appears on the left. Although the slide got a chuckle from the crowd (the print did look pretty small on-screen), I actually did invest some time in its warnings and watch-outs for anyone who wanted to go and dig them up later.

Of the two tips I delivered in the lightning talk, Tip #2 dealt with the SharePoint BLOB cache. I included a very specific warning in the “Disclaimer of Liability” aimed at those who sought to simply “set it and forget it.” The text of that warning read:

Failure to specify a max-age attribute in the BlobCache element of the web.config will result in the default value of 86,400 seconds (24 hours) being used. Use of a non-zero max-age attribute will result in the attachment of client-side cacheability headers to assets that are being BLOB cached, and such headers can result in BLOB assets being cached on the client beyond the duration of the current user session; such caching can easily result in "stale" BLOB resources being used from the client rather than newer ones being fetched from the WFE, so adjust max-age values carefully.

Put another way: if you simply enable the BLOB cache and do nothing else, your users may be getting a SharePoint behavior change that you hadn’t intended for them to have.

Why Did You Have To Bring Age Into This?

The sticking point with SharePoint’s default BlobCache element and attribute settings is that a max-age of 24 hours is assumed and used when the max-age attribute isn’t explicitly specified or set. What does that mean? I wrote a separate post a while back titled Client-Server Interactions and the max-age Attribute with SharePoint BLOB Caching, and that post addressed the effect that explicit and implicit max-age attribute value specifications have on BLOB Caching. I recommend checking out the post for the full background; for anyone who needs a quick summary, though, I can distill it down to two bullet points:

  • Enabling the BLOB Cache without specifying a max-age attribute means that BLOBs will be cached on both the WFEs in your farm and within users’ browser caches (through the use of Cache-Control HTTP headers).
  • In collaboration environments and anyplace else where BLOB assets may be edited or turn over frequently (within the course of a day), the default client-side caching behavior can mess with the UI/UX of your SharePoint site in all sorts of interesting ways.

What does this mean for the average user of SharePoint? Well, let me walk through a fictitious scenario with supporting detail – as told from the perspective of a SharePoint end user. If you already understand the problem, you’re short on time, and you want to get right to what I recommend, jump down to the “Recommendations Before You Enable the BLOB Cache” section.

Acme Online Goes Live!

Welcome to the Acme Corporation! The Acme Corporation recently completed a “webification” of its entire product catalog, and the end result is a publishing site collection that is implemented in SharePoint 2010. The site collection houses all of Acme’s products, and those products are available for the public to browse and order. Acme’s web content management team is responsible for maintaining the product catalog as it appears on the site, and that team is led by a crafty old fellow named Wile E. Coyote (who we’ll simply refer to as “Wiley” from here on out).

Wiley has many years of experience with Acme’s products and has tried nearly all of them personally; he’s something of a legend. He and his team worked diligently to get Acme’s products into SharePoint before the launch. Not all of the products made it into SharePoint before the launch, though, so a phased approach was taken to rolling out the entire catalog.

The Launch

[Image: A SharePoint article page featuring a bundle of dynamite]

The first products that Wiley and his team worked to get into SharePoint were Acme’s line of explosives. To prepare for the launch of the new online catalog, Wiley wrote up an article on Acme’s top-selling “Bundle o’ Dynamite” product. The article featured a picture of the Bundle o’ Dynamite, along with some descriptive text about the product, how it operates, a few safety warnings, and a couple of other informational points. When Wiley finished, a mockup of the article page looked like the screenshot seen on the left.

[Image: A Fiddler trace of the first request for the dynamite article page]

Unbeknownst to Wiley, the Acme product catalog site collection is served up by one Web application through one zone (the Default zone) on one WFE. This means that all product catalog requests, whether they come from customers or Wiley’s team, go to one IIS site on one server. The first time that someone (or more specifically, someone’s browser) requests the article page that Wiley put together, a series of web requests is kicked off to pull down the page content, images, scripts, CSS, and everything else needed to render the page in a browser. This series of interactions (captured using Fiddler) is shown on the top right.

[Image: A Fiddler trace of the second request for the dynamite article page]

Subsequent requests for the same article page (within the context of a single browser session) will follow the series of interactions seen directly to the right. One thing that you may notice upon inspecting the Fiddler trace is that subsequent page requests result in fewer calls back to the server. This is because SharePoint applies per-session caching to many of the items it passes back to the browser, and this caching (which is not the same as BLOB caching) removes the need for constant re-fetching of items that haven’t changed.

In both of the Fiddler traces above, the focus is on the newsarticleimage.jpg file – the file that houses a picture of the Bundle o’ Dynamite. The first time the browser requests the image within a session, a successful HTTP 200 response is returned to the browser along with the image. Also important to note is the Cache-Control header that comes back with the image:

Cache-Control: private,max-age=0

The private part of the Cache-Control header tells the client browser to cache the image locally for the duration of the browser session. The max-age=0 portion says, in effect, that subsequent uses of the image by the browser (from its cache) should be validated with a call back to the WFE to ensure that the image hasn’t changed.

And that’s what is shown happening in the second Fiddler trace. When subsequent page requests attempt to use the image, a GET request from the browser is answered by the WFE with

HTTP/1.1 304 NOT MODIFIED

This response code tells the browser that the image hasn’t changed and that it’s safe to use the locally cached copy. If the image were to change, then an HTTP 200 would be returned instead and the new/updated version of the image would be sent to the browser.
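
Conceptually, that re-validation exchange looks something like this; the path and date shown here are illustrative rather than lifted from the actual trace:

GET /PublishingImages/newsarticleimage.jpg HTTP/1.1
If-Modified-Since: Mon, 12 Mar 2012 09:30:00 GMT

HTTP/1.1 304 NOT MODIFIED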

When the browser is closed, the locally cached copy of the image is flushed and the process begins anew the next time the browser opens.

Meep Meep

Not long after the launch of Acme’s online product catalog, customers began complaining that browsing the catalog was simply too slow. After some discussion, Management decided to bring in Roadrunner Consulting to assess the site and make suggestions that would improve performance.

Roadrunner’s team raced around (as they are wont to do), ran some tests, made some observations, and provided a list of suggestions. At the top of the list was “Implement SharePoint BLOB Caching.”

So, Acme’s SharePoint administrators jumped right in and turned on BLOB caching. Since the site is served up through a single IIS site (SharePoint zone), the admins set enabled="true" in the BlobCache element of the site’s web.config file. No other changes were made to the BlobCache element.

So, what happened? Well, things got snappier! The administrators watching their back-end performance noticed that the file system on the WFE started to cache BLOBs that were being requested by users. Each request to the WFE for one of those BLOBs resulted in the BLOB being served back directly from the WFE without a round-trip to the SQL Server. Internal network bandwidth utilization dropped significantly, and the SQL Servers started breathing a bit easier. The administrators were most definitely happy with the change they’d made … and it was as easy as setting enabled="true" in the BlobCache element of the web.config file. Talk about the greatest thing since sliced bread! Everyone exchanged a round of high-fives after the change was made, and talks of how the geeks would rise up to dominate the world resumed.

[Image: Dynamite article page – first request with BLOB caching enabled]

So, how do things look on the client side after enabling the BLOB Cache? Well, when someone goes to retrieve Wiley’s article for the first time, the first browser request series for the page looks much like it did without the BLOB Cache enabled. See the Fiddler trace on the right.

There is one very important difference when retrieving items with the BLOB Cache enabled, though, and you have to look closely to see it. Do you see the Cache-Control HTTP header that is returned with the request for the newsarticleimage.jpg image? It’s different than it was before the BLOB Cache was enabled. Now it says

Cache-Control: public, max-age=86400

Whoa … what does this mean? Well, it means two important things. First, the public designation means that when the image is cached by the browser, it will no longer be private to the current session. It can be re-used across sessions, so it won’t necessarily “go away” when the browser is closed.

Second, the max-age=86400 means that the image will continue to “live” in the browser’s cache for 86400 seconds, or 24 hours. For that period of time, the browser won’t even attempt to contact the WFE to see if the image has changed; it will just use the copy that it holds onto. Nothing short of a browser cache flush (which is manual intervention by the user) will change this behavior.

[Image: Dynamite article page – subsequent page requests with BLOB caching enabled]

And that’s what we see with the Fiddler trace on the right. This trace represents what subsequent page requests look like for the next 24 hours. Notice that the newsarticleimage.jpg image doesn’t get re-requested or checked. There are no HTTP 304 response codes coming back, because the browser simply isn’t requesting the image; it’s using its cached copy.

Admittedly, the Fiddler trace will look a little different when the browser is closed and re-opened … but a re-fetch of the newsarticleimage.jpg file will not take place for a full 24 hours unless a user clears the browser cache.

What does this change in behavior mean for actual users of the site? Read on to find out …

Running Off the Edge of the Cliff

[Image: The corrected article page showing the TNT barrel]

Shortly after the BLOB Cache changes were made, Wiley got an (unrelated) call from the Fulfillment Department. They were furious because they’d been getting all sorts of returns for the Bundle o’ Dynamite. The reason for the returns? It’s because Wiley put the wrong image in his article page!

Even though Acme sells a product called the “Bundle o’ Dynamite,” the actual product that ships is a barrel of TNT. Since the product image was wrong, customers were incorrectly concluding that they’d get several sticks of dynamite instead of a barrel, and this was rubbing many of them the wrong way. Who knew?

Wiley went out to SharePoint, checked the article that he wrote, and saw that he did indeed use a series of dynamite sticks for an image. The page should have actually appeared as it does in the screenshot that is above and to the left. After a quick facepalm, Wiley realized that he needed to make a change – and fast.

Wiley went out to the Publishing Images library for the site collection and uploaded a new version of the newsarticleimage.jpg image file – one that contained a barrel of TNT instead of a bundle of dynamite. He then browsed to the article page and did a refresh.

Nothing changed.

Wiley hit F5 in his browser. Still nothing changed.

Over the course of the hour that followed, Wiley grew increasingly bewildered and panicked as he tried in vain to get the new TNT barrel to show up on the article page. He uploaded the image several more times, closed and re-opened his browser, deleted and then reloaded the image, re-published and re-approved the actual article page, and even got the administrators to flush the SharePoint BLOB Cache. None of the actions made a difference.

The Coyote Never Wins

Why didn’t any of Wiley’s efforts make a difference? Because what Wiley didn’t understand was that there was nothing he could do short of flushing his cache that would prompt the browser to re-request the updated image. The browser started using the cached copy of the image after the first request Wiley made in the morning; i.e., the request to verify that the image on the page was incorrect as Fulfillment indicated. For another 24 hours (86400 seconds), the browser would continue to use the cached image.

Wiley’s image problem was just one of the potential issues that might surface as a result of the BLOB Cache change. It was also one of the more visible problems. In looking at the path attribute of the BlobCache element, you might have noticed some of the other file types that got cached by default – file types with js (JavaScript) and css (Cascading Style Sheets) extensions, for example. Any of those file types which were served from site collection lists and libraries would also be impacted by the “fetch once and use for 24 hours” behavior.

Recommendations Before You Enable the BLOB Cache

[Image: A frustrated end user]

I hope the example featuring Wiley did an adequate job of explaining why I think that blindly turning on the BLOB Cache can be a bad thing for end users. Having seen first-hand what an improperly configured BLOB Cache can do to the user experience, I’d like to offer up a handful of suggestions based on my own experience.

1. Don’t just “enable” the BLOB Cache with its out-of-the-box (OOTB) default settings. There are a couple of OOTB settings that you should really think hard about changing. I mentioned the default max-age value you get if you don’t actually specify the attribute value. I’m going to talk more about that one in a bit. Also: do you really want the BLOB Cache using your system drive (C:) as its target location for cached files? Most admins I know aren’t particularly friendly with that idea, so relocate the BLOB Cache to another drive.

2. If your Web application has only one zone (i.e., the Default zone), strongly consider specifying a max-age attribute value of zero (max-age="0"). Why do I say this? Because it avoids the situation I described with Wiley above, and it’s a compromise that gives administrators some of the performance boosts they seek without completely shafting users in the process.

[Image: Dynamite article page – max-age=0 in effect]

When the BLOB Cache is enabled and a max-age attribute value of 0 is explicitly specified, things change a bit. BLOB caching and offloading still happens on the WFEs, so administrators get the internal performance boosts they were probably seeking in the first place. On the other side of the equation (i.e., the “user side”), persistent client-side caching ceases, as shown on the left. Although the Cache-Control header still specifies public cacheability, the max-age=0 forces the browser to round-trip to the server each time it intends to use a locally cached resource, verifying that the copy in its cache is still the most up-to-date version available. This will keep users like Wiley from going off the deep end due to the wonky and inconsistent user experience that afflicts users who need to edit and proof a site that employs persistent client-side caching.
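
To make recommendations #1 and #2 concrete, here is what the resulting BlobCache element might look like; the D: drive is just an example, so substitute whatever non-system drive makes sense in your environment:

<BlobCache location="D:\BlobCache\14" path="\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$" maxSize="10" max-age="0" enabled="true" />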

3. If you have a Web application that is extended to two or more zones, apply BLOB Cache settings that are appropriate for each zone. This is relatively common in public-facing SharePoint site collections and Web applications where anonymous access is in-use. In these particular scenarios, there are usually at least two SharePoint zones per Web application: an internal zone (typically the Default zone) through which editors and other users may authenticate to carry out content work, and an external zone (e.g., the Internet zone) which is set up for anonymous access and “external consumption.”

In this dual-zone scenario, it makes sense to configure each zone (IIS site) differently since usage patterns differ between zones. The BlobCache element in the web.config for the internal (Default) zone, for example, should probably be configured according to #2 (above – the one zone scenario with a max-age attribute value of zero). For the web.config that is used in the external zone, though, it may make sense to apply a non-zero max-age value for use with the BLOB Cache – especially since anonymous users aren’t (normally) content editors. A non-zero max-age means fewer trips (overall) to your WFEs from outside the LAN environment, and this helps to keep bandwidth utilization down on your Internet connection. There is still a risk that external users may see “stale” content, but the impact is generally more acceptable for straight viewers since they aren’t actively working on content.
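
As an illustration, the BlobCache elements in the two zones’ web.config files might end up looking like this; the max-age values shown are reasonable starting points rather than hard rules, so tune them to match your content turnover:

Internal (Default) zone:

<BlobCache location="D:\BlobCache\14" path="\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$" maxSize="10" max-age="0" enabled="true" />

External (Internet) zone:

<BlobCache location="D:\BlobCache\14" path="\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$" maxSize="10" max-age="86400" enabled="true" />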

4. Consider changing the path expression to restrict what goes into the BLOB Cache. The default path expression for SharePoint 2010’s BlobCache element looks like this:

\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$

Most administrators are savvy enough to add and remove file extensions from this expression as needed; for example, taking |wmv out of the path expression means that the BLOB Cache will no longer store and serve files with a .wmv extension. Adding and removing extensions really only scratches the surface of what can be done, though. The path attribute value is actually a regular expression, so the full power of regular expressions can be applied to select and exclude files for use with the BLOB Cache.

Suppose you want to explicitly control which images, videos, and other files (that match the list of extensions) end up in the BLOB Cache. Maybe you want to specially name files you intend to cache with an additional .cache extension before the actual file type extension (e.g., .gif). To accomplish this, you could change the path expression to this:

\.cache\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$

With this path expression, filenames like these would be included in the BLOB Cache:

  • SampleImage.cache.jpg
  • MyVideo.cache.wmv

… but anything without the additional .cache qualifier would get omitted, such as:

  • AnotherImage.jpg
  • ExcludeThisVideo.wmv

This is just a simple example, but hopefully it gives you an idea of what you could do with the path regular expression to control the contents of the BLOB Cache.
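
If you want to sanity-check a path expression before committing it to your web.config, a few lines of PowerShell make for a quick test harness. The file names below are made up, and PowerShell’s -match operator is case-insensitive by default, which is handy for this sort of check:

# Candidate BlobCache path expression (the .cache variant from above)
$pathExpression = '\.cache\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$'

# Some made-up file names to test against the expression
$testFiles = 'SampleImage.cache.jpg', 'MyVideo.cache.wmv', 'AnotherImage.jpg', 'ExcludeThisVideo.wmv'

# True means the file would be eligible for the BLOB Cache; False means it would be skipped
foreach ($file in $testFiles)
{
	Write-Host ("{0} -> {1}" -f $file, ($file -match $pathExpression))
}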

Summing It Up

The SharePoint BLOB Cache is a powerful mechanism to improve farm performance and scalability, but it shouldn’t be turned on without some forethought and a couple of changes to the default BlobCache element attribute values.

If you are an administrator and have enabled the BLOB Cache with its default values, check with your users. They might have some feedback for you …

Additional Reading and Resources

  1. CodePlex: SharePoint 2010 BlobCache Manager
  2. Event: SPTechCon San Francisco 2012
  3. Prezi: Pushing SharePoint’s ‘Go Faster’ Button
  4. Blog Post: Client-Server Interactions and the max-age Attribute with SharePoint BLOB Caching
  5. Tool: Fiddler Web Debugging Proxy

Kicking-Off 2012: SharePoint Style

January 16, 2012 Leave a comment

I don’t know how 2011 ended for most of you, but the year closed without much of a bang for me. I’m not complaining about that; the general slow-down gave me an opportunity to get caught up on a few things, and it was nice to spend some quality time with my friends and family.

While 2011 went out relatively quietly, 2012 seems to have arrived with a vengeance. In fact, I was doing some joking on Twitter with Brian Jackett and Rob Collie shortly after the start of the year about #NYN, or “New Year’s Nitrous.” It’s been nothing but pedal-to-the-metal and then some since the start of the year, and there’s absolutely no sign of it letting up anytime soon. I like staying busy, but in some ways I’m wondering whether or not there will be enough time to fit everything in. One day at a time …

Here’s a recap of some stuff from the tail end of 2011, as well as what I’ve got going on for the first couple of months in 2012. After February, things actually get even crazier … but I’ll save events beyond February for a later post.

SPTV

During the latter part of 2011, I had a conversation with Michael Hiles and Jon Breyfogle of DSC Consulting, a technical consulting and media services company based here in Cincinnati, Ohio. Michael and Jon had an idea: they wanted to develop a high-quality, high-production-value television program that centered on SharePoint and the larger SharePoint ecosystem/community. The initial idea was that the show would feature an interview segment, coverage of community events, SharePoint news, and some other stuff thrown in.

It was all very preliminary stuff when they initially shared the idea with me, but I told them that I thought they might be on to something. A professional show centered on SharePoint wasn’t something anyone was doing at the time, and I was really curious to see how they would pull it off if they elected to move forward.

Just before Christmas, Jon contacted me to let me know that they were indeed moving forward with the idea … and he asked if I’d be the show’s first SharePoint guest. I told him I’d love to help out, and so the bulk of the pilot episode was shot at the Village Tavern in Montgomery one afternoon with host Mark Tiderman and co-host Craig Pereira. Mark and I shot some pool, discussed disaster recovery, and just talked SharePoint for a fair bit. It was really a lot of fun.

The pilot isn’t yet available (publicly), but a teaser for the show is available on the SPTV web site. All in all, I think the DSC folks have done a tremendous job creating a quality, professional program. Check out the SPTV site for a taste of what’s to come!

SharePoint Saturday Columbus Kick-Off

Around the time of the SPTV shooting, the planning committee for SharePoint Saturday Columbus (Brian Jackett, Jennifer Mason, Nicola Young, and I) had a checkpoint conversation to figure out what, if anything, we were going to do about SharePoint Saturday Columbus in 2012. Were we going to try to do it again? If so, were we going to change anything? What was our plan?

Everything with SPSColumbus in 2012 is still very preliminary, of course, but I can tell you that we are looking forward to having the event once again! We expect that we’ll attempt to hold the event during roughly the same part of the year as we’ve had it in the past (i.e., late summer). As we start to nail things down and come up with concrete plans, I’ll share those. Until then, keep your eyes on the SharePoint Saturday site and the SPSColumbus account on Twitter!

SharePointCincy

Those of us who reside in and around Cincinnati, Ohio, are very fortunate when it comes to SharePoint events and opportunities. In the past we’ve had SharePoint Saturday Indianapolis just to the west of us, SharePoint Saturday Columbus to the northeast, and last year we had our first-ever SharePoint Saturday Cincinnati (which was a huge success!). On top of that, last year was the first-ever SharePointCincy event.

SharePointCincy was similar in some ways to a SharePoint Saturday, but it was different in others. It was a day full of SharePoint sessions, but we also had Fred Studer (the General Manager for the Information Worker product group at Microsoft) come out and speak. Kroger, a local company whose SharePoint implementation I’m very familiar with, also shared their experience with SharePoint. Rather than go into too much detail, though, I encourage you to check out the SharePointCincy site yourself to see what it was all about.

Of course, the whole reason I’m mentioning SharePointCincy is that it’s coming again in March of this year! Last year’s success (the event was attended by hundreds) pretty much guaranteed that the event would happen again.

I’m part of a planning team that includes Geoff Smith, Steve Caravajal of Microsoft, Mike Smith from MAX Technical Training, and the infamous Shane Young of SharePoint911 (which, in case you didn’t know it, is based here in Cincinnati). Four of the five of us met last Friday for a kick-off meeting and to discuss how the event might go this year. It was a good breakfast and a productive meeting. I don’t have much more to share at this point (other than the fact that, “yes, it’s happening”), but I will share information as it becomes available. Stay tuned!

Secrets of SharePoint Webcast

It’s been a few months since my last webcast on SharePoint caching, so my co-workers at Idera approached me about doing another webcast. I guess I was due.

This Wednesday, January 18th, I’ll be delivering a Secrets of SharePoint webcast titled “The Essentials of SharePoint Disaster Recovery.” Here’s the abstract:

“Are my nightly SQL Server backups good enough?” “Do I need an off-site disaster recovery facility?” “How do I even start the process of disaster recovery planning?” These are just a few of the more common questions that arise when the topic of SharePoint disaster recovery comes up. As with most things SharePoint, the real answer to each question is oftentimes “it depends…”

In this business and process-centric session, we will be taking a look at the topic of SharePoint disaster recovery from multiple perspectives: business continuity planner, technical architect, platform owner, and others. Critical concepts and terms will be explained and defined, and an effective process for analyzing and formulating a disaster recovery plan will be discussed. We’ll also highlight some common mistakes that take place when working to build a disaster recovery strategy and how you can avoid them. By the end of this session, you will be armed with the knowledge needed to plan or review a disaster recovery strategy for your SharePoint environment.

For those of you who have heard me speak and/or attended my webcasts in the past, you’ll probably find this session to be a bit different than ones you’ve seen or heard. The main reason I say that is because the content is primarily business-centric rather than nuts-and-bolts admin content.

That doesn’t mean that SharePoint administrators shouldn’t attend, though; on the contrary, the webcast includes a number of very important messages for admins (e.g., why DR must be driven from the business angle rather than the technical/admin angle) that could really help them in their jobs. The session expands the scope of the DR discussion, though, to include the business aspects that are so tremendously important during the DR planning process.

If what I’ve shared sounds interesting, please sign up! The webcast is free, and I’ll be doing Q&A after the session.

SharePoint Saturday Austin

This upcoming weekend, I’ll be heading down to Austin, Texas, for the first SharePoint Saturday Austin event! The event is taking place on January 21st, and it is being coordinated by Jim Bob Howard (of Juniper Strategy) and Matthew Lathrop (of Rackspace). Boy oh boy – do they have an amazing line-up of speakers and contributors. It’s quite impressive; check out the site to see what I mean.

The guys are giving me the opportunity to present “The Essentials of SharePoint Disaster Recovery” session, and I’m looking forward to it. I’m also looking forward to catching up with many of my friends … and some of my Idera co-workers (who will be coming in from Houston, Texas).

If you’re in the Austin area and looking for something to do this upcoming Saturday, come to the event. It’s free, and it’s a great chance to take in some phenomenal sessions, win some prizes, and be a part of the larger SharePoint community!

SharePoint Pro Demo Booth Session

On Monday, February 20th at 12pm EST, I’m going to be doing a “demo booth” session through SharePoint Pro Magazine. The demo booth is titled “Backup Basics: SharePoint’s Backup and Restore Capabilities and Beyond.” Here’s the description for the demo booth:

SharePoint ships with a number of tools and capabilities that are geared toward protecting content and configuration. These tools provide basic coverage for your SharePoint environment and the content it contains, but they can quickly become cumbersome in real world scenarios. In this session, we will look at SharePoint’s backup and restore capabilities, discuss how they work, and identify where they fall short in common usage scenarios. We will also highlight how Idera’s SharePoint backup solution picks up where the SharePoint platform tools leave off in order to provide complete protection that is cost-effective and easy to use.

The “demo booth” concept is something new for me; it’s part “platform education” (which is where I normally spend the majority of my time and energy) and part “product education” – in this case, education about Idera’s SharePoint backup product. Being both the product manager for Idera SharePoint backup and a co-author for the SharePoint 2010 Disaster Recovery Guide leaves me in something of a unique position to talk about SharePoint’s built-in backup/restore capabilities, where gaps exist, and how Idera SharePoint backup can pick up where the SharePoint platform tools leave off.

If you’re interested in learning more about Idera’s SharePoint backup product and/or how far you can reasonably push SharePoint’s built-in capabilities, check out the demo booth.

SPTechCon 2012 San Francisco

February comes to a close with a big bang when SPTechCon rolls into San Francisco for the first of two stops in 2012. For those of you who check my blog now and again, you may have noticed the SPTechCon “I’ll be speaking at” badge and link on the right-hand side of the page. Yes, that means I’ll be delivering a session at the event! The BZ Media folks always put on a great show, and I’m certainly proud to be a part of SPTechCon and presenting again this time around.

At this point, I know that I’ll be presenting “The Essentials of SharePoint Disaster Recovery.” I think I’m also going to be doing another lightning talk; I need to check up on that, though, to confirm it.

I also found out that John Ferringer (my co-author and partner-in-crime) and I are going to have the opportunity to do an SPTechCon-sponsored book signing (for our SharePoint 2010 Disaster Recovery Guide) on the morning of Wednesday the 29th.

If you’re at SPTechCon, please swing by to say hello – either at my session, at the Idera booth, the book signing, or wherever you see me!

Additional Reading and Resources

  1. Blog: Brian Jackett’s Frog Pond of Technology
  2. Blog: Rob Collie’s PowerPivotPro
  3. Company: DSC Consulting
  4. Site: SPTV
  5. LinkedIn: Mark Tiderman
  6. LinkedIn: Craig Pereira
  7. Event: SharePoint Saturday Columbus
  8. Blog: Jennifer Mason
  9. Twitter: Nicola Young
  10. Site: SharePoint Saturday
  11. Twitter: SharePoint Saturday Columbus
  12. Event: SharePoint Saturday Cincinnati
  13. Event: SharePointCincy
  14. LinkedIn: Geoff Smith
  15. Blog: Steve Caravajal’s Ramblings
  16. Blog: Mike Smith’s Tech Training Notes
  17. Company: MAX Technical Training
  18. Blog: Shane Young’s SharePoint Farmer’s Almanac
  19. Company: SharePoint911
  20. Webcast: “Caching-In” for SharePoint Performance
  21. Site: Secrets of SharePoint
  22. Webcast: The Essentials of SharePoint Disaster Recovery
  23. Event: SharePoint Saturday Austin
  24. Blog: Jim Bob Howard
  25. Company: Juniper Strategy
  26. LinkedIn: Matthew Lathrop
  27. Company: Rackspace
  28. Company: Idera
  29. Event: SharePoint Pro Demo Booth Session
  30. Site: SharePoint Pro Magazine
  31. Product: Idera SharePoint backup
  32. Book: SharePoint 2010 Disaster Recovery Guide
  33. Event: SPTechCon 2012 San Francisco
  34. Company: BZ Media
  35. Blog: John Ferringer’s My Central Admin

Mirror, Mirror, In the Farm …

November 20, 2011 6 comments

This is a post I’ve been meaning to write for some time, but I’m only now getting around to it. It’s a quick one, and it’s intended to share a couple of observations and a script that may be of use to those of you who are SharePoint 2010 administrators.

Mirroring and SharePoint

The use of SQL Server mirroring isn’t something that’s unique to SharePoint, and it was possible to leverage mirroring with SharePoint 2007 … though I tended to steer people away from trying it unless they had a very specific reason for doing so and no other approach would work. There were simply too many hoops you needed to jump through in order to get mirroring to work with SharePoint 2007, primarily because SharePoint 2007 wasn’t mirroring-aware. Even if you got it working, it was … finicky.

SharePoint 2010, on the other hand, is fully mirroring-aware through the use of the Failover Partner keyword in connection strings used by SharePoint to connect to its databases.

(Side note: if you aren’t familiar with the Failover Partner keyword, here’s an excellent breakdown by Michael Aspengren on how the SQL Server Native Provider leverages it in mirroring configurations.)
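
For illustration, a mirroring-aware connection string follows this general pattern (the server and database names here are fabricated):

Data Source=SPSQL1;Failover Partner=SPSQL2;Initial Catalog=WSS_Content;Integrated Security=True

If the native client can’t reach the Data Source instance, it attempts the Failover Partner instead – all without the calling application (SharePoint, in this case) having to get involved.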

There are plenty of blog posts, articles (like this one from TechNet), and books (like the SharePoint 2010 Disaster Recovery Guide that John Ferringer and I wrote) that talk about how to configure mirroring. It’s not particularly tough to do, and it can really help you in situations where you need a SQL Server-based high availability and/or remote redundancy solution for SharePoint databases.

This isn’t a blog post about setting up mirroring; rather, it’s a post to share some of what I’ve learned (or think I’ve learned) and related “ah-ha” moments when it comes to mirroring.

What Are You Pointing At?

This all started when Jay Strickland (one of the Quality Assurance (QA) folks on my team at Idera) ran into some problems with one of our SharePoint 2010 farms that was used for QA purposes. The farm contained two SQL Server instances, and the database instances were set up such that the databases on the second instance mirrored the databases on the first (principal) instance. Jay had configured SharePoint’s service applications and Web applications for mirroring, so all was good.

But not really. The farm had been running properly for quite some time, but something had gone wrong with the farm’s mirroring configuration – or so it seemed. That’s when Jay pinged me on Skype one day with a question (which I’m paraphrasing here):

Is there any way to tell (from within SharePoint) which SQL Server instance is in-use by SharePoint at any given time for a database that is being mirrored?

It seemed like a simple question that should have a simple answer, but I was at a loss to give Jay anything usable off the top of my head. I told Jay that I’d get back to him and started doing some digging.

The SPDatabase Type

Putting on my developer hat for a second, I recalled that all SharePoint databases are represented by an instance of the SPDatabase type (Microsoft.SharePoint.Administration.Database specifically) or one of the other classes that derive from it, such as SPContentDatabase. Running down the available members for the SPDatabase type, I came up with the following properties and methods that were tied to mirroring in some way:

  • FailoverServer
  • FailoverServiceInstance
  • AddFailoverServiceInstance()

What I thought I would find (but didn’t) was one or more properties and/or methods that would allow me to determine which SQL Server instance was serving as the active connection point for SharePoint requests.

In fact, the more digging that I did, the more that it appeared that SharePoint had no real knowledge of where it was actually connecting to for data in mirrored setups. It was easy enough to specify which database instances should be used for mirroring configurations, but there didn’t appear to be any way to determine (from within SharePoint) if the principal was in-use or if failover to the mirrored instance had taken place.

The Key Takeaway

If you’re familiar with SQL Server mirroring and how it’s implemented, then the following diagram (which I put together for discussion) probably looks familiar:

[Diagram: SharePoint connecting to a mirrored database]

This diagram illustrates a couple of key points:

  1. SharePoint connects to SQL Server databases using the SQL Server Native Client.
  2. SharePoint supplies a connection string that tells the native client which SQL Server instances (as Data Source and Failover Partner) should be used as part of a mirroring configuration.
  3. It’s the SQL Server Native Client that actually determines where connections are made, and the results of the Client’s decisions don’t directly surface through SharePoint.

Number 3 was the point that I kept getting stuck on. I knew that it was possible to go into SQL Server Management Studio or use SQL Server’s Management Objects (SMO) directly to gain more insight around a mirroring configuration and what was happening in real-time, but I thought that SharePoint must surely surface that information in some form.

Apparently not.

Checking with the Experts

I hate when I can’t nail down a definitive answer. Despite all my reading, I wanted to bounce the conclusions I was drawing off of a few people to make sure I wasn’t missing something obvious (or hidden) with my interpretation.

  • I shot Bill Baer (Senior Technical Product Manager for SharePoint and an MCM) a note with my question about information surfacing through SharePoint. If anyone could have given me a definitive answer, it would have been him. Unfortunately, I didn’t hear back from him. In his defense, he’s pretty doggone busy.
  • I put a shout out on Twitter, and I did hear back from my good friend Todd Klindt. While he couldn’t claim with absolute certainty that my understanding was on the mark, he did indicate that my understanding was in-line with everything he’d read and conclusions he had drawn.
  • I turned to Enrique Lima, another good friend and SQL Server MCM, with my question. Enrique confirmed that SQL SMO would provide some answers, but he didn’t have additional thoughts on how that information might surface through SharePoint.

Long and short: I didn’t receive rock-solid confirmation on my conclusions, but my understanding appeared to be on-the-mark. If anyone knows otherwise, though, I’d love to hear about it (and share the information here – with proper recognition for the source, of course!)
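
For anyone who wants to peek at mirroring from the SQL Server side of the fence, here’s a minimal SMO-based sketch; the instance name is made up, and it assumes the SMO assemblies are available on the machine where you run it:

# Load SMO and connect to a (fabricated) SQL Server instance
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "SPSQL1"

# List each mirrored database along with its partner and current mirroring status
$server.Databases | Where-Object {$_.IsMirroringEnabled} |
	Format-Table -Property Name, MirroringPartner, MirroringStatus -AutoSize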

Back to the Farm

In the end, I wasn’t really able to give Jay much help with the QA farm that he was trying to diagnose. Since I couldn’t determine where SharePoint was pointing from within SharePoint itself, I did the next best thing: I threw together a PowerShell script that would dump the (mirroring) configuration for each database in the SharePoint farm.

<#
.SYNOPSIS
   SPDBMirrorInfo.ps1
.DESCRIPTION
   Examines each of the databases in the SharePoint environment to identify which have failover partners and which don't.
.NOTES
   Author: Sean McDonough
   Last Revision: 19-August-2011
#>
function DumpMirroringInfo ()
{
	# Make sure we have the required SharePoint snap-in loaded.
	$spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
	if ($spCmdlets -eq $Null)
	{ Add-PSSnapin Microsoft.SharePoint.PowerShell }

	# Grab databases and determine which have failover support (and which don't)
	$allDatabases = Get-SPDatabase
	$dbsWithoutFailover = $allDatabases | Where-Object {$_.FailoverServer -eq $null} | Sort-Object -Property Name
	$dbsWithFailover = $allDatabases | Where-Object {$_.FailoverServer -ne $null} | Sort-Object -Property Name
	
	# Write out unmirrored databases
	if ($dbsWithoutFailover -eq $null)
	{ Write-Host "`n`nNo databases are configured without a mirroring partner." }
	else
	{ 
		Write-Host ("`n`nDatabases without a mirroring partner: {0}" -f $dbsWithoutFailover.Count) 
		$dbsWithoutFailover | Format-Table -Property Name, Server -AutoSize 
	}

	# Dump results for mirrored databases
	if ($dbsWithFailover -eq $null)
	{ Write-Host "`nNo databases are configured with a mirroring partner." }
	else
	{ 
		Write-Host ("`nDatabases with a mirroring partner: {0}" -f $dbsWithFailover.Count) 
		$dbsWithFailover | Format-Table -Property Name, Server, FailoverServer -AutoSize
	}
	
	# For ease of reading
	Write-Host ("`n`n")
}
DumpMirroringInfo

The script itself isn’t rocket science, but it did actually prove helpful in identifying some databases that had apparently “lost” their failover partners.

Additional Reading and Resources

  1. MSDN: Using Database Mirroring
  2. Whitepaper: Using database mirroring (Office SharePoint Server)
  3. Blog Post: Clarification on the Failover Partner in the connectionstring in Database Mirror setup
  4. TechNet: Configure availability by using SQL Server database mirroring (SharePoint Server 2010)
  5. Book: The SharePoint 2010 Disaster Recovery Guide
  6. Blog: John Ferringer’s “My Central Admin”
  7. Blog: Jay Strickland’s “Slinger’s Thoughts”
  8. Company: Idera
  9. MSDN: SPDatabase members
  10. MSDN: SQL Server Management Objects (SMO)
  11. Blog: Bill Baer
  12. Blog: Todd Klindt’s SharePoint Admin Blog
  13. Blog: Enrique Lima’s Intentional Thinking

Wrapping Up 2011

November 13, 2011 1 comment

Over the last several months, I haven’t been blogging as much as I’d hoped to; in reality, I haven’t blogged at all. There are a couple of reasons for that: one of them was our recent house move (and the aftermath), and the other was a little more personal. Without going into too much detail: we were contending with a very serious health issue in our family, and that took top priority.

The good news is that the clouds are finally parting, and I’m heading into the close of 2011 on a much better note (and with more time) than I’ve had over the last several months. To get back into some blogging, I figured I’d wrap up the last several months’ worth of activities that took place since SharePoint Saturday Columbus.

Secrets of SharePoint (SoS) Webcast

A lot of things started coming together towards the end of October, and the first of those was another webcast that I did for Idera titled “’Caching-In’ for SharePoint Performance.” The webcast covered each of SharePoint’s built-in caching mechanisms (object caching, BLOB caching, and page output caching) as well as the Office Web Applications’ cache. I provided a rundown on each mechanism, how it worked, how it could be leveraged, and some watch-outs that came with its use.

The webcast was basically a lightweight version (40 minutes or so) of the longer (75 minute) presentation I like to deliver at SharePoint Saturday events. It was something of a challenge to squeeze all of the regular session’s content into 40 minutes, and I had to cut some of the material I would have liked to have kept in … but the final result turned out pretty well.

If you’re interested in seeing the webcast, you can watch it on-demand from the SoS webcast archive. I also posted the slides in the Resources section of this blog.

SharePoint Saturday Cincinnati

On Saturday October 29th, Cincinnati had its first-ever SharePoint Saturday Cincinnati event. The event took place at the Kingsgate Marriott on Goodman Drive (near University Hospital), and it was very well attended – so much so that Stacy Deere and the other folks who organized the event are planning to do so again next year!

Many people from the local SharePoint community came out to support the event, and we had a number of folks from out of town come rolling in as well to help ensure that the event was a big success. I ended up delivering two sessions: my “’Caching-In’ for SharePoint Performance” session and my “SharePoint 2010 Disaster Recovery: New Capabilities, New Possibilities!” session.

I had a great time at the event, and I’m hoping I’ll be fortunate enough to participate again on the next go ‘round!

New Disaster Recovery WhitePaper

My co-author and good friend John Ferringer and I were hard at work throughout the summer and early Fall putting together a new disaster recovery whitepaper for Idera. The whitepaper is titled “New Features in SharePoint 2010: A Disaster Recovery Love Story,” and it’s a bromance novel that only a couple of goofballs like John and I could actually write …

Okay, there’s actually no romance in it whatsoever (thank heavens for prospective readers – no one needs us doing that to them), but there is a solid chunk of coverage on SharePoint 2010’s new platform capabilities pertaining to disaster recovery. We also review some disaster recovery basics in the whitepaper, cover things that have changed since SharePoint 2007, and identify some new watch-out areas in SharePoint 2010 that could have an impact on your disaster recovery planning.

The whitepaper is pretty substantial at 13 pages, but it’s a good read if you want to understand your platform-level disaster recovery options in SharePoint 2010. It’s a free download, so please grab a copy if it sounds interesting. John and I would certainly love to hear your feedback, as well.

SharePoint Backup Augmentation Cmdlets (SharePointBAC)

Many of my friends in the SharePoint community have heard me talk about some of the projects I’ve wanted to undertake to extend the SharePoint platform. I’m particularly sensitive to the plight of the administrator who is constrained (typically due to lack of resources) to use only the out-of-the-box (OOTB) tools that are available for data protection. While I think the OOTB tools do a solid job in most small and mid-size farm scenarios, there are some clear gaps that need to be addressed.

Since I’d been big on promises and short on delivery in helping these administrators, I finally started on a project to address some of the backup and restore gaps I see in the SharePoint platform. The evolving and still-under-development result is my SharePoint Backup Augmentation Cmdlets (SharePointBAC) project that is available on CodePlex.

With the PowerShell cmdlets that I’m developing for SharePoint 2010, I’m trying to introduce some new capabilities that SharePoint administrators need in order to make backup scripting with the OOTB tools a simpler and more straightforward experience. For example, one big gap that exists with the OOTB tools is that there is no way to groom a backup set. Each backup you create using Backup-SPFarm, for instance, adds to the backups that existed before it. There’s no way to groom (or remove) older backups you no longer want to keep, so disk consumption grows unless manual steps are taken to do something about it. That’s where my cmdlets come in. With Remove-SPBackupCatalog, for example, you could trim backups to retain only a certain number of them; you could also trim backups to ensure that they consume no more disk space (e.g., 100GB) than you’d like.
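
To give you a feel for the scripting experience I’m after, here’s a hypothetical example of how grooming might fit into a nightly backup script. Backup-SPFarm and its parameters are the real OOTB cmdlet; Remove-SPBackupCatalog comes from SharePointBAC, and the parameter names shown are illustrative and subject to change while the project is in alpha:

# Run from the SharePoint 2010 Management Shell.
# First, take a full farm backup using the out-of-the-box cmdlet ...
Backup-SPFarm -Directory \\BackupServer\SPBackups -BackupMethod Full

# ... and then groom the backup catalog so that only the five most recent
# backup sets are retained (illustrative SharePointBAC parameters)
Remove-SPBackupCatalog -BackupDirectory \\BackupServer\SPBackups -RetainCount 5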

The CodePlex project is in alpha form right now (it’s brand spankin’ new), and it’s far from complete. I’ve already gotten some great suggestions for what I could do to continue development, though. When I combine those ideas with the ones I already had, I’m pretty sure I’ll be able to shape the project into something truly useful for SharePoint administrators.

If you or someone you know is a SharePoint administrator using the OOTB tools for backup scripting, please check out the project. I’d really love to hear from you!

SharePoint Saturday Denver

As I type this, I’m in Colorado at the close of the third (annual) SharePoint Saturday Denver event. This year’s event was phenomenal – a full two days of SharePoint goodness! Held on Friday November 11th and Saturday November 12th at the Colorado Convention Center, the event was capped at 350 participants for Saturday. A full 350 people signed up, and the event even had a wait list.

On the first day of the event, I delivered a brand new session that I put together (in Prezi format) titled The Essentials of SharePoint Disaster Recovery. Here’s the amended abstract (and I’ll explain why it’s amended in a second) for the session:

“Are my nightly SQL Server backups good enough?” “Do I need an off-site disaster recovery facility?” “How do I even start the process of disaster recovery planning?” These are just a few of the more common questions that arise when the topic of SharePoint disaster recovery comes up. As with most things SharePoint, the real answer to each question is oftentimes “it depends.” In this business and process-centric session, we will be taking a look at the topic of SharePoint disaster recovery from multiple perspectives: business continuity planner, technical architect, platform owner, and others. Critical concepts and terms will be explained and defined, and an effective process for analyzing and formulating a disaster recovery plan will be discussed. We’ll also highlight some common mistakes that take place when working to build a disaster recovery strategy and how you can avoid them. By the end of this session, you will be armed with the knowledge needed to plan or review a disaster recovery strategy for your SharePoint environment.

The reason I amended the abstract is because the previous abstract for the session didn’t do enough to call out the fact that the presentation is primarily business-centric rather than technically focused. Many of the folks who initially came to the session were SharePoint IT pros and administrators looking for information on backup/restore, mirroring, configuration, etc. Although I cover those items at a high level in this new talk, they’re only a small part of what I discuss during the session.

On Saturday, I delivered my “’Caching-In’ for SharePoint Performance” talk during the first slot of the day. I really enjoy delivering the session; it’s probably my favorite one. I had a solid turn-out, and I had some good discussions with folks both during and after the presentation.

As I mentioned, this year’s event was a two day event. That’s a little unusual, but multi-day SharePoint Saturday events appear to be getting some traction in the community – starting with SharePoint Saturday The Conference a few months back. Some folks in the community don’t care much for this style of event, probably because there’s some nominal cost that participants typically bear for the extra day of sessions. I expect that we’ll probably continue to see more hybrid events, though, because I think they meet an unaddressed need that falls somewhere between “give up my Saturday for free training” and “pay a lot of money for a multi-day weekday conference.” Only time will tell, though.

On the Horizon

Even though 2011 isn’t over yet, I’m slowing down on some of my activities save for SharePointBAC (my new extracurricular pastime). 2012 is already looking like it’s going to be a big year for SharePoint community activities. In January I’ll be heading down to Texas for SharePoint Saturday Austin, and in February I’ll be heading to San Francisco for SPTechCon. I’ll certainly cover those activities (and others) as we approach 2012.

Additional Reading and Resources

  1. Event: SharePoint Saturday Columbus
  2. Company: Idera
  3. Webcast: “Caching-In” for SharePoint Performance
  4. Webcast Slides: “Caching-In” for SharePoint Performance
  5. Location: My blog’s Resources section
  6. Event: SharePoint Saturday Cincinnati
  7. Blog: Stacy Deere and Stephanie Donahue’s “Not Just SharePoint”
  8. SPS Cincinnati Slides: “Caching-In” for SharePoint Performance
  9. SPS Cincinnati Slides: SharePoint 2010 Disaster Recovery: New Capabilities, New Possibilities!
  10. Blog: John Ferringer’s “My Central Admin”
  11. Whitepaper: New Features in SharePoint 2010: A Disaster Recovery Love Story
  12. CodePlex: SharePoint Backup Augmentation Cmdlets (SharePointBAC)
  13. Event: SharePoint Saturday Denver
  14. Tool: Prezi
  15. SPS Denver Slides: The Essentials of SharePoint Disaster Recovery
  16. SPS Denver Slides: “Caching-In” for SharePoint Performance
  17. Event: SharePoint Saturday The Conference
  18. Event: SharePoint Saturday Austin
  19. Event: SPTechCon 2012 San Francisco

SharePoint Summer Fun

July 5, 2011 1 comment

My family recently relocated from the west side of Cincinnati to the east side, and it’s been a major undertaking – as anyone who’s familiar with Jim Borgman’s comic series on the east and west sides of Cincinnati can appreciate. Between the move and some other issues, I had planned on taking it easy with SharePoint activities for a while.

Despite that goal, it seems I still have a handful of SharePoint-related things planned this summer. Here’s what’s going on.

Office Web Apps’ Cache Article

As a product manager for Idera, I occasionally author articles for the company’s SharePoint Smarts e-newsletter. A couple of weeks back, I wrote an article titled Quick Tips for Managing the SharePoint 2010 Office Web Apps’ Cache. The article basically provides an overview of the Office Web Apps’ cache and how it can be maintained for optimal performance.

The main reason I’m calling the article out here (in my blog) is because I put together a couple of PowerShell scripts that I included in the article. The first script relocates the Office Web Apps’ cache site collection to a different content database for any given Web application. The second script displays current values for some common cache settings and gives you the opportunity to change them directly.

The scripts (and article contents) are helpful for anyone trying to manage the Office Web Apps in SharePoint 2010. Check them out!
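
For the curious, here’s a minimal PowerShell sketch of the same two ideas – not the article’s actual scripts. The Web application URL and destination content database name are placeholders, and the sketch assumes the Office Web Apps cache site collection sits at its usual well-known URL:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$waUrl = "http://intranet"   # hypothetical Web application URL

# The Office Web Apps cache lives in a hidden site collection at a well-known
# URL beneath the Web application. Move-SPSite relocates it to a different
# content database attached to the same Web application (name is a placeholder).
Move-SPSite "$waUrl/sites/Office_Viewing_Service_Cache" -DestinationDatabase "WSS_Content_OwaCache"

# Common cache settings (maximum size and expiration) can be adjusted in one
# shot with the dedicated cmdlet.
Set-SPOfficeWebAppsCache -WebApplication $waUrl -MaxSizeInGB 50 -ExpirationPeriodInDays 30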

Talk TechNet Appearance

On Wednesday, July 6th (tomorrow!), I’ll be on Talk TechNet with Keith Combs and Matt Hester. I’m going to be talking with Keith and Matt about SharePoint, disaster recovery, and anything else that they want to shoot the breeze about. 60 minutes seems like a long time, but I know how quickly it can pass once my mouth starts going …

Here’s the fun part (for you): the episode is presented live, and anyone who registers for the event can “call in” with questions, comments, etc. Feel free to call in and throw me a softball question … or heckle me, if that’s your style! Although I don’t know Keith personally (yet), I do know Matt – and knowing Matt, things will be lighthearted and lively.

Evansville SPUG

On Thursday the 7th (yeah, this is a busy week), I’ll be heading down to Evansville, Indiana, to speak at the Evansville user group. This is something that Rob Wilson and I have been discussing for quite some time, and I’m glad that it’s finally coming to fruition!

I’ll be presenting my SharePoint 2010 and Your DR Plan: New Capabilities, New Possibilities! session. The abstract reads as follows:

Disaster recovery planning for a SharePoint 2010 environment is something that must be performed to ensure the safety of your data and the continuity of business operations. Microsoft made significant enhancements to the disaster recovery landscape with SharePoint 2010, and we’ll be taking a good look at how the platform has evolved in this session. We’ll dive into the improvements to the native backup and restore capabilities that were present in the SharePoint 2007 platform to see what has been changed and enhanced. We’ll also look at the array of exciting new capabilities that have been integrated into the SharePoint 2010 platform, such as unattended content database recovery, SQL Server snapshot integration, and configuration-only backup and restore. By the time we’re done, you will possess a solid understanding of how the disaster recovery landscape has changed with SharePoint 2010.

It’ll be a bit of a drive from here to Evansville and back, but I’m really looking forward to talking shop with Rob and his crew on Thursday!

SharePoint Saturday New York City (SPSNYC)

I’ll be heading up to New York City at the end of the month to present at SharePoint Saturday New York City on July 30th. I’ll be presenting my SharePoint 2010 and Your DR Plan: New Capabilities, New Possibilities! session, and it should be a lot of fun.

Amazingly enough, the primary registration (400 seats) for the event “sold out” in a little over three days. Holy smokes – that’s fast! The event is now wait-listed, so if you haven’t yet signed up … you probably won’t get a spot :-(

CincySPUG

On August 4th, I’ll be heading back up to Mason, Ohio, to present for my friends at the Cincinnati SharePoint User Group. My presentation topic this time around will be “Caching-In” for SharePoint Performance. Here’s the abstract:

Caching is a critical variable in the SharePoint scalability and performance equation, but it’s one that’s oftentimes misunderstood or dismissed as being needed only in Internet-facing scenarios. In this session, we’ll build an understanding of the caching options that exist within the SharePoint platform and how they can be leveraged to inject some pep into most SharePoint sites. We’ll also cover some sample scenarios, caching pitfalls, and watch-outs that every administrator should know.

Like most of my presentations, this one started as a PowerPoint. I converted it over to Prezi format some time ago, and I’ve been having a lot of fun with it since. I hope the CincySPUG folks enjoy it, as well!

SharePoint Saturday The Conference (SPSTC)

If you haven’t heard of SharePoint Saturday The Conference yet, then the easiest way for me to describe it is this: it’s a SharePoint Saturday event on steroids. Instead of being just one Saturday, the event is three days long. Expected attendance is 2,500 to 3,000 people. It’s going to be huge.

I submitted a handful of abstracts for consideration, and I know that I’ll be speaking at the event. I just don’t know what I’ll be talking about at this point.  If you’re going to be in the Washington, DC area on August 11th through 13th, though, consider signing up for the conference!

SharePoint Saturday Columbus (SPSColumbus)

The 2nd SharePoint Saturday Columbus event will be held on August 20th, 2011, at the OCLC Conference Center in Columbus, Ohio. Registration is now open, and session submissions are being accepted through the end of the day tomorrow (7/6).

Along with Brian Jackett, Jennifer Mason, and Nicola Young, I’m helping to plan and execute the event on the 20th. I’m handling speaker coordination again this year – a role that I do enjoy! We’ve had a number of great submissions thus far; in the next week or so, we (the organizing committee) will be putting our heads together to make selections for the event. Once those selections have been made, I’ll be communicating with everyone who submitted a session.

If you live in Ohio and don’t find Columbus to be an exceptionally long drive, I encourage you to head out to the SharePoint Saturday site and sign up for the event. It’s free, and the training you’ll get will be well worth the Saturday you spend!

Additional Reading and References

  1. Jim Borgman: East Side/West Side of Cincinnati comic series
  2. Company: Idera
  3. Article: Quick Tips for Managing the SharePoint 2010 Office Web Apps’ Cache
  4. Event: Talk TechNet Webcast, Episode 43
  5. Blog: Keith Combs
  6. Blog: Matt Hester
  7. User Group: Evansville SPUG site
  8. Blog: Rob Wilson
  9. Event: SharePoint Saturday New York City
  10. User Group: CincySPUG site
  11. Software/Service: Prezi
  12. Event: SharePoint Saturday The Conference
  13. Event: SharePoint Saturday Columbus
  14. Blog: Brian Jackett
  15. Blog: Jennifer Mason
  16. Twitter: Nicola Young

The Spring SharePoint Activities Run-Down

It’s turning out to be a very busy Spring – more so than I would have originally guessed (or planned).  That’s okay, though: there’s very little else I’d rather be doing than getting out and spending time with the SharePoint community at large!  Spending time with the community also means I’m getting out of my basement, and that’s really good for my Vitamin D levels …

Here’s what I have coming up (or just passed) this Spring:

DBTechCon

The SQL Server Worldwide User Group (SSWUG) recently put on their entirely virtual DBTechCon event.  One of the original speakers for the event ended up having to cancel just before the event was due to take place, so the SSWUG folks had a gap and asked if I could fill it.  It took some scrambling, but I was able to pull together three sessions for them on a combination of SharePoint disaster recovery (DR) and performance topics.

Although the event has passed, it’s still possible to access the sessions on-demand.

SharePoint Saturday Saint Louis

I’ll actually be heading over to Saint Louis today (Friday, April 29th) for tomorrow’s SharePoint Saturday Saint Louis.  My co-author and good buddy John Ferringer and I will be getting the band back together to do our “Saving SharePoint” session on SharePoint disaster recovery. 

Like all other SharePoint Saturday events, the event is free to the public.  Come on out if you’re in the Saint Louis area for a free day of training, socializing, and giveaways!

SharePoint Saturday Houston

Houston, Texas, will be hosting its SharePoint Saturday Houston event next Saturday on May 7th.  I’ll be traveling down to Houston on Thursday for some business-related items, but I’ll be speaking at the event on Saturday.  I’ll be giving my “’Caching-In’ for SharePoint Performance” talk – now in Prezi form.

Houston’s SharePoint Saturday event is one of the bigger ones that take place, and the line-up of speakers is phenomenal.  I hope to see you there!

Dayton SharePoint User Group

On the evening of Tuesday, May 10th, I’ll be heading up to Dayton to spend some time with the Dayton SharePoint User Group.  I met Tony Maddin (who heads up the group) after a Cincinnati SPUG meeting, and Tony asked if I would come up and speak to the recently formed Dayton SPUG.  I jump at just about any opportunity to speak, so on Tuesday the 10th I’ll be delivering a SharePoint DR session to the group.

SharePoint Saturday Michigan

To wrap up the SharePoint Saturday hat trick, I’ll be heading up to Troy, Michigan, for SharePoint Saturday Michigan on May 14th.  Peter Serzo and crew are sure to put on another stellar event this year, and I’ll be presenting a session on SharePoint disaster recovery – specifics still unknown.  Stay tuned, and be sure to head over to the event on 5/14 if you’re in the area.

SPTechCon

I feel very fortunate to have a spot at the mid-year 2011 SPTechCon event in Boston from June 1st through June 3rd.  My session (Session 702) will be on June 3rd from 11:30am until 12:45pm, and I’ll be delivering “SharePoint 2010 Disaster Recovery: New Capabilities, New Possibilities.”  I’m really looking forward to it, and I hope that I’ll see some of you there!

Additional Reading and References

  1. Event: DBTechCon
  2. Event: SharePoint Saturday Saint Louis
  3. People: John Ferringer
  4. Event: SharePoint Saturday Houston
  5. Services: Prezi
  6. User Group: Dayton SharePoint User Group
  7. People: Tony Maddin
  8. Event: SharePoint Saturday Michigan
  9. Twitter: Peter Serzo
  10. Event: SPTechCon

Bare Metal Bugaboos

April 24, 2011 7 comments

I had one of those “aw nuts” moments last night.

At some point yesterday afternoon, I noticed that none of the computers in the house could get out to the Internet.  After verifying that my wireless network was fine and that internal DNS was up-and-running, I traced the problem back to my Forefront Threat Management Gateway (TMG) firewall.  Attempting to RDP into it proved fruitless, and when I went downstairs and looked at the front of the server, I noticed the hard drive activity light was constantly lit.

So, I powered the server off and brought it back on.  Problem solved … well, not really. It happened again a couple of hours later, so I repeated the process and made a mental note that I was going to have to look at the server when I had a chance.

Demanding My Attention

Well, things didn’t “stay fixed.”  Later in the evening, the same lack of connectivity surfaced again.  I went to the basement, powered the server off, and brought it back up.  That time, though, the server wouldn’t start and complained about having nothing to boot from.

As I did a reset and watched it boot again, I could see the problem: although the server knew that something was plugged in for boot purposes, it couldn’t tell that what was plugged in was a 250GB SATA drive.  Ugh.

When I run into those types of situations, the remedy is pretty clear: a new hard drive.  I always have a dozen or more hard drives sitting around (comes from running a server farm in the basement), and I grabbed a 500GB Hitachi drive that I had left over from another machine.  Within five minutes, the drive was in the server and everything was hooked back up.

Down to the Metal

Of course, a new hard drive was only half of the solution.  The other half of the equation involved restoring from backup.  In this case, a bare metal restore was the most appropriate course of action since I was starting with a blank disk.

For those who may not be familiar with the concept of bare metal restoration, you can get a quick primer from Wikipedia.  I use Microsoft’s System Center Data Protection Manager 2010 (DPM) to protect the servers in my environment, so I knew that I had an image from which I could restore my TMG box.  I just dreaded the thought of doing so.

Why the worry?  Well, I think Arthur C. Clarke summed it up best with the following quote:

Any sufficiently advanced technology is indistinguishable from magic.

The Cold Sweats

Now bare metal restore isn’t “magic,” but it is relatively sophisticated technology … and it’s still an area that seems plagued with uncertainties.

I have to believe that I’m not the only one who feels this way.  I’ve co-authored two books on SharePoint disaster recovery, and the second book includes a chapter I wrote that covers bare metal restore on a Windows 2008 server.  My experience with bare metal restores can be summarized as follows: when it works, it’s awesome … but it doesn’t always work as we’d want it to.  When it doesn’t work, it’s plain ol’ annoying in that it doesn’t explain why.

So, it’s with that mindset that I started the process of trying to clear away my server’s lobotomized state.  These are the steps I carried out to get ready for the restore:

  1. I went into the DPM console, selected the most recent bare metal restore recovery point available to me (as shown on the right), and restored the contents of the folder to a network file share – in my case, \\VMSS-FILE1\RESTORE.  Note: you’ll notice a couple of restore points available after the one I selected; those were created in the time since I did the restore but before I wrote this post.
  2. The approximately 21GB bare metal restore image was created on the share.  I do have gigabit Ethernet on my network, and since I recently built out a new DPM server with faster hardware, it really didn’t take too long to get the image restored – maybe five minutes or so.  The result was a single folder at the top of the designated file share.
  3. I carried out a little manipulation on the folder that DPM created; specifically, I cut out two levels of sub-folders and made sure that the WindowsImageBackup folder was available directly from the top of the share as shown at the left.  The Windows Recovery Environment (WinRE) is picky about this detail; if it doesn’t see the folder structure it expects when restoring from a network share, it will declare that nothing is available to restore from – even though you know better.  A sketch of that reshuffling appears just after this list.
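
For illustration, here’s a minimal PowerShell sketch of that flattening step.  It assumes the same share path from above; the nesting that DPM creates varies, so the sketch hunts for the WindowsImageBackup folder rather than hard-coding its location:

$shareRoot = "\\VMSS-FILE1\RESTORE"

# DPM restores the recovery point into nested sub-folders; find the
# WindowsImageBackup folder wherever it landed beneath the share root.
$imageFolder = Get-ChildItem -Path $shareRoot -Recurse |
    Where-Object { $_.PSIsContainer -and $_.Name -eq "WindowsImageBackup" } |
    Select-Object -First 1

# Hoist the folder to the top of the share; the WinRE won't look for it
# anywhere else.
if ($imageFolder -ne $null -and $imageFolder.Parent.FullName -ne $shareRoot)
{
    Move-Item -Path $imageFolder.FullName -Destination $shareRoot
}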

In Recovery

With my actual restore image ready to go on the file share, I booted into the WinRE using a bootable USB memory stick with Windows Server 2008 R2 on it.  I walked through the process of selecting Repair your computer, navigating out to the file share, choosing my restore image, etc.  The process is relatively easy to stumble through, but if you want a lot more detail, I’d encourage you to read Chapter 5 (Windows Server 2008 Backup and Restore) in our SharePoint 2010 Disaster Recovery Guide.  In that chapter, I walk through the restore process in step-by-step fashion with screenshots.

I got to the point in the wizard where I was prompted to select additional options for restore, as shown on the left.  By default, the WinRE will format and repartition disks as needed.  In my case, that’s what I wanted; after all, I was putting in a brand new drive (one that was larger than the original), so formatting and partitioning was just what the doctor ordered.  I also had the ability to exclude some drives (through Exclude disks) from the recovery process – not something I had to worry about, given that my system image only covered one hard drive.  If my hard drive required additional drivers (as might be needed with a drive array, RAID card, or something equivalent), I also had the opportunity to supply them with the Install drivers option.  Again, this was a basic in-place restore; the only thing I needed was a clean-up of the hard drive I supplied, so I clicked Next.

I confirmed the details of the operation on the next page, and everything looked right to me.  I then paused to mentally double-check everything I was about to do.

In my experience, the dialog on the left is the last stretch of easily understood wizard activity before the WinRE restore wizard takes off and we enter “magic land.”  As I mentioned, when restores work … they just chug right along, and the whole thing looks easy.  When bare metal and system state restores don’t work, though, the error messages are often unintelligible and downright useless from a troubleshooting and remediation perspective.  I hoped that my restore would be one of the happy ones that chugged right along and made me proud of my backup and restore prowess.

I crossed my fingers and clicked the Next button.

<Insert Engine Dying Noises Here>

The screenshot on the right shows what happened almost immediately after I clicked Next.

Well, you knew this blog post would be a whole lot less interesting if everything went according to plan.

Once I worked through my panic and settled down, I looked a little closer.  I understood “The system image restore failed” without much interpretation, but I had no idea what to make of the rest:

Error details: The parameter is incorrect. (0x80070057)

That was the extent of what I had to work with.  All I could do was close out and try again.  Sheesh.

Head Scratching

Let’s face it: there aren’t a whole lot of options to play with in the WinRE when it comes to bare metal restore.  The screenshot on the left shows the Advanced options you have available to you, but there really isn’t much to them.  I experimented with the Automatically check and update disk error information checkbox, but it really didn’t have an effect on the process.  Nevertheless, I tried restores with all combinations of the checkboxes set and cleared.  No dice.

With the Advanced options out of the way, there was really only one other place to look: the Exclude disks dialog.  I knew Install drivers wasn’t needed, because I had no trouble accessing my disks and wasn’t using anything like a RAID card or some other advanced disk configuration.

I popped open the disk exclusion dialog (shown on the right) and tried running a restore after excluding all of the disks except the Hitachi disk to which I would be writing data (Disk 2).  Again, no dice – I still continued to get the aforementioned error and couldn’t move forward.

I knew that DPM created usable bare metal images, and I knew that the WinRE worked when it came to restoring those images, so I knew that I had to be doing something wrong.  After another half an hour of goofing around, I stopped my thrashing and took stock of what I had been doing.

My Inner Archimedes

My eureka moment came when I put a few key pieces of information together:

  • While writing the chapter on Windows Server 2008 Backup and Restore for the SharePoint 2010 DR book, I’d learned that image restores from WinRE are very persnickety about the number of disks you have and the configuration of those disks.
  • When DPM was creating backups, only three hard drives were attached to the server: the original 250GB system drive and two 30GB SSD caching drives.
  • Booting into WinRE from a memory stick was causing a distinctly visible fourth “drive” to show up in the list of available disks.

The bootable USB stick had to be a factor, so I put it away and pulled out a Windows Server 2008 R2 installation disk.  I then booted into the WinRE from the DVD and walked through the entire restore process again.  When I got to the confirmation dialog and pressed the Next button this time around, I received no “The parameter is incorrect” errors – just a progress bar that tracked the restore operation.

Takeaway

The one point that’s going to stick with me from here on out is this: if I’m doing a bare metal restore, I need to boot into the WinRE from a DVD or from some other source that doesn’t affect my drives list.  I knew that the disks list was sensitive on restore, but I didn’t expect USB drives to have any sort of effect on whether or not I could actually carry out the desired operation.  I’m glad I know better now.

Additional Reading and References

  1. Product Overview: Forefront Threat Management Gateway 2010
  2. Wikipedia: Bare-metal restore
  3. Product Overview: System Center Data Protection Manager 2010
  4. Book: SharePoint 2010 Disaster Recovery Guide

Finding Duplicate GUIDs in Your SharePoint Site Collection

April 3, 2011 14 comments

This is a bit of an oldie, but I figured it might help one or two random readers.

Let me start by saying something right off the bat: you should never need what I’m about to share.  Of course, how many times have you heard “you shouldn’t ever really need this” when it comes to SharePoint?  I’ve been at it a while, and I can tell you that things that never should happen seem to find a way into reality – and into my crosshairs for troubleshooting.

Disclaimer

The story and situation I’m about to share are true.  I’m going to speak in generalities when it comes to the identities of the parties and software involved, though, to “protect the innocent” and avoid upsetting anyone.

The Predicament

I was part of a team that was working with a client to troubleshoot problems that the client was encountering when they attempted to run some software that targeted SharePoint site collections.  The errors that were returned by the software were somewhat cryptic, but they pointed to a problem handling certain objects in a SharePoint site collection.  The software ran fine when targeting all other site collections, so we naturally suspected that something was wrong with only one specific site collection.

After further examination of logs that were tied to the software, it became clear that we had a real predicament.  Apparently, the site collection in question contained two or more objects with the same identity; that is, the objects had ID properties possessing the same GUID.  This isn’t anything that should ever happen, but it had.  SharePoint continued to run without issue (interestingly enough), but the duplication of object GUIDs made it downright difficult for any software that depended on unique object identities being … well, unique.

Although the software logs told us which GUID was being duplicated, we didn’t know which SharePoint object or objects the GUID was tied to.  We needed a relatively quick and easy way to figure out the name(s) of the object or objects which were being impacted by the duplicate GUIDs.

Tackling the Problem

It is precisely in times like those described that PowerShell comes to mind.

My solution was to whip up a PowerShell script (FindDuplicateGuids.ps1) that processed each of the lists (SPList) and webs (SPWeb) in a target site collection.  The script simply collected the identities of each list and web and reported back any GUIDs that appeared more than once.

The script works with both SharePoint 2007 and SharePoint 2010, and it has no specific dependencies beyond SharePoint being installed and available on the server where the script is run.

########################
# FindDuplicateGuids.ps1
# Author: Sean P. McDonough (sean@sharepointinterface.com)
# Blog: http://SharePointInterface.com
# Last Update: August 29, 2013
#
# Usage from prompt: ".\FindDuplicateGuids.ps1 <siteUrl>"
#   where <siteUrl> is site collection root.
########################


#########
# IMPORTS
# Import/load common SharePoint assemblies that house the types we'll need for operations.
#########
Add-Type -AssemblyName "Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"


###########
# FUNCTIONS
# Leveraged throughout the script for one or more calls.
###########
function SpmBuild-WebAndListIdMappings {param ($siteUrl)
	$targetSite = New-Object Microsoft.SharePoint.SPSite($siteUrl)
	$allWebs = $targetSite.AllWebs
	$mappings = New-Object System.Collections.Specialized.NameValueCollection
	foreach ($spWeb in $allWebs)
	{
		$webTitle = "WEB '{0}'" -f $spWeb.Title
		$mappings.Add($spWeb.ID, $webTitle)
		$allListsForWeb = $spWeb.Lists
		foreach ($currentList in $allListsForWeb)
		{
			$listEntry = "LIST '{0}' in Web '{1}'" -f $currentList.Title, $spWeb.Title
			$mappings.Add($currentList.ID, $listEntry)
		}
		$spWeb.Dispose()
	}
	$targetSite.Dispose()
	return ,$mappings
}

function SpmFind-DuplicateMembers {param ([System.Collections.Specialized.NameValueCollection]$nvMappings)
	$duplicateMembers = New-Object System.Collections.ArrayList
	$allkeys = $nvMappings.AllKeys
	foreach ($keyName in $allKeys)
	{
		$valuesForKey = $nvMappings.GetValues($keyName)
		if ($valuesForKey.Length -gt 1)
		{
			[void]$duplicateMembers.Add($keyName)
		}
	}
	return ,$duplicateMembers
}


########
# SCRIPT
# Execution of actual script logic begins here
########
$siteUrl = $Args[0]
if ($siteUrl -eq $null)
{
	$siteUrl = Read-Host "`nYou must supply a site collection URL to execute the script"
}
if ($siteUrl.EndsWith("/") -eq $false)
{
	$siteUrl += "/"
}
Clear-Host
Write-Output ("Examining " + $siteUrl + " ...`n")
$combinedMappings = SpmBuild-WebAndListIdMappings $siteUrl
Write-Output ($combinedMappings.Count.ToString() + " GUIDs processed.")
Write-Output ("Looking for duplicate GUIDs ...`n")
$duplicateGuids = SpmFind-DuplicateMembers $combinedMappings
if ($duplicateGuids.Count -eq 0)
{
	Write-Output ("No duplicate GUIDs found.")
}
else
{
	Write-Output ($duplicateGuids.Count.ToString() + " duplicate GUID(s) found.")
	Write-Output ("Non-unique GUIDs and associated objects appear below.`n")
	foreach ($keyName in $duplicateGuids)
	{
		$siteNames = $combinedMappings[$keyName]
		Write-Output($keyName + ": " + $siteNames)
	}
}
$dumpData = Read-Host "`nDo you want to send the collected data to a file? (Y/N)"
if ($dumpData -match "y")
{
	$fileName = Read-Host "  Output file path and name"
	Write-Output ("Results for " + $siteUrl) | Out-File -FilePath $fileName
	$allKeys = $combinedMappings.AllKeys
	foreach ($currentKey in $allKeys)
	{
		Write-Output ($currentKey + ": " + $combinedMappings[$currentKey]) | Out-File -FilePath $fileName -Append
	}
}
Write-Output ("`n")
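
Usage is straightforward: run the script from a SharePoint server and hand it the root URL of the suspect site collection (the URL below is hypothetical).

.\FindDuplicateGuids.ps1 http://intranet/sites/troubled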

Running this script in the client’s environment quickly identified the two lists that contained the same ID GUIDs.  How did they get that way?  I honestly don’t know, nor am I going to hazard a guess …

What Next?

If you’re in the unfortunate position of owning a site collection that contains objects possessing duplicate ID GUIDs, let me start by saying “I feel for you.”

Having said that: the quickest fix seemed to be deleting the objects that possessed the same GUIDs.  Those objects were then rebuilt.  I believe we handled the delete and rebuild manually, but there’s nothing to say that an export and subsequent import (via the Content Deployment API) couldn’t be used to get content out and then back in with new object IDs. 

A word of caution: if you do leverage the Content Deployment API and do so programmatically, simply make sure that object identities aren’t retained on import; that is, make sure that SPImportSettings.RetainObjectIdentity = false – not true.
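
For illustration, here’s a minimal PowerShell sketch of the import side of that approach, assuming content has already been exported to a .cmp file via the Content Deployment API.  The URL, folder, and file name are placeholders:

Add-Type -AssemblyName "Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"

$importSettings = New-Object Microsoft.SharePoint.Deployment.SPImportSettings
$importSettings.SiteUrl = "http://intranet/sites/target"   # hypothetical target site collection
$importSettings.FileLocation = "C:\Export"                 # hypothetical folder holding the export
$importSettings.BaseFileName = "ExportedList.cmp"          # hypothetical export package name

# The critical setting: leave RetainObjectIdentity set to $false so that
# imported objects receive brand new IDs instead of their exported GUIDs.
$importSettings.RetainObjectIdentity = $false

$import = New-Object Microsoft.SharePoint.Deployment.SPImport($importSettings)
$import.Run()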

Additional Reading and References

  1. TechNet: Import and export: STSADM operations
  2. MSDN: SPImportSettings.RetainObjectIdentity

Review of “SharePoint 2010 Six-In-One”

March 27, 2011 1 comment

I read a lot.  Honestly, I assume that most people who work with technology spend a fair bit of their time reading.  Maybe it’s books, maybe it’s blogs – whatever.  There’s simply too much knowledge out there – and the human brain is only so big – not to be brushing up on the ol’ technical skill set on a fairly regular basis.

When free books are dangled in front of me, naturally I jump.  I jump even higher when they’re books that I probably would have ended up buying had they not been given to me gratis.

The Opportunity

Several months ago, I received an e-mail from Becky Bertram.  Becky is an exceptionally knowledgeable SharePoint MVP and all-around wonderful woman.  Becky and I first met (albeit briefly) at the MS SharePoint Conference in Las Vegas (2009), and since that time we’ve spoken at a couple of the same events.

In my conversations with Becky and through Twitter, I knew that she was part of a team that was working to assemble a book on SharePoint 2010.  In her e-mail to me, she asked if I’d be interested in a copy of it.  Given what I’ve said about reading, it should come as no surprise to see me say that I jumped at her offer.

Fast forward a bit.  I’ve had SharePoint 2010 Six-In-One for a couple of months now, and I’ve managed to read a solid 80% of its 500+ pages thus far.  Unfortunately, I’m a very slow reader.  I always have been, and I probably always will be.  I probably should have told Becky that before she agreed to send me a copy of the book …

Top-Level Assessment

Let me start by saying that, simply put, I think this book is an excellent SharePoint resource.  The reasons that one would find the book useful will likely vary based on their existing knowledge of SharePoint, but I believe that everyone across the spectrum, from newcomer to SharePoint journeyman, will find the book helpful in some way.

The rest of this post/review explains the book, its intended audience, what it conveys, and some of my additional thoughts.

The Authors

First, let me give credit where it’s due.  SharePoint 2010 Six-In-One is the collaborative effort of seven different and active members of the larger SharePoint community.

I know several of these folks personally, and that’s one of the reasons why I was so excited to review the book.  Most of the authors are active in user groups.  Nearly all contribute socially through Twitter and other channels.  Many speak at SharePoint Saturdays and other events.  Some are designated Most Valuable Professionals (MVPs) by Microsoft.  All are darn good at what they do.

Target Audience

This book was written primarily for relative newcomers to SharePoint 2010, and this demographic is the one that will undoubtedly get the most value out of the book.  As the title of the book indicates, the authors covered six of the core SharePoint areas that anyone wrangling with SharePoint 2010 would need information on:

  • Branding
  • Business Connectivity Services
  • Development
  • Search
  • Social Networking
  • Workflow

The book devotes a few chapters to each topic, and each topic is covered solidly from an introductory perspective.  Many of the common questions and concerns associated with each topic are also addressed in some way, and particulars for some of the topics (like development) are actually covered at a significantly deeper level.

Although it might get glossed over by some, I want to call attention to a particularly valuable inclusion: the first three chapters.  These chapters do a fantastic job of explaining the essence of SharePoint – what it is, how to plan for it, concerns that implementers should have, and more.  Given SharePoint’s complexity and “tough to define” nature, I have to applaud the authors on managing to sum up SharePoint so well in only 60 pages.  Anyone getting started with SharePoint will find these chapters to be an excellent on-ramp and starting point.

Contents

The following is the per-chapter breakdown for the book’s content:

  • Chapter 1: SharePoint Overview
  • Chapter 2: Planning for SharePoint
  • Chapter 3: Getting Started with SharePoint
  • Chapter 4: Master Pages
  • Chapter 5: SharePoint Themes
  • Chapter 6: Cascading Style Sheets and SharePoint
  • Chapter 7: Features and Solutions
  • Chapter 8: Introducing SharePoint Development
  • Chapter 9: Publishing in SharePoint Server 2010
  • Chapter 10: Introducing Business Connectivity Services
  • Chapter 11: Building Solutions Using Business Connectivity Services
  • Chapter 12: Why Social Networking Is Important in SharePoint 2010
  • Chapter 13: Tagging and Ratings
  • Chapter 14: My Site
  • Chapter 15: Workflow Introduction and Background
  • Chapter 16: Building and Using Workflow in SharePoint 2010
  • Chapter 17: Visual Studio: When SharePoint Designer Is Not Enough
  • Chapter 18: Introduction to Enterprise Search
  • Chapter 19: Administering and Customizing
  • Chapter 20: FAST Search
  • Chapter 21: Wrapping It All Up

The Experienced SharePoint Reader

So, what if you happen to know a bit about SharePoint and/or have been working with SharePoint 2010 for some time?  I’m in this particular boat, and I have good news: this book strikes just the right balance of breadth and depth so as to be useful as a reference source.  Although the book doesn’t provide really deep dives into its topic areas (not its intent), I found myself reaching for it on a handful of occasions to get myself going on some SharePoint tasks I had to accomplish.  A quick review of Cathy’s chapters on branding, for instance, gave me just the right amount of information needed to get started on a small side project of my own.

Summary

Bottom line: SharePoint 2010 Six-In-One contains just the right mix of breadth and depth so as to be immediately informative to newcomers but also useful as a reference source in the longer term.   I’d recommend this book for anyone working with SharePoint, and I’d especially recommend it to those who are new to SharePoint 2010 and/or seeking to get a grasp on its core aspects. 

Additional Reading and References

  1. People: Becky Bertram
  2. Book: SharePoint 2010 Six-In-One
  3. Author (Twitter): Chris Geier
  4. Author (blog): Cathy Dew
  5. Author (blog): Wes Preston
  6. Author (blog): Raymond Mitchell
  7. Author (blog): Becky Bertram
  8. Author (blog): Ken Schaefer
  9. Author (Twitter): Andrew Clark
  10. Events: SharePoint Saturday
  11. Designation: Most Valuable Professional (MVP)