Wrap-Up, Roll-Up, and Move-On!

My time with Idera has come to an end, so I wanted to aggregate some of the resources I assembled with them. I also wanted to share some information about my new company, Bitstream Foundry LLC.

Sometimes it’s hard to believe just how quickly time flies by. At the tail end of last December, I announced that some big changes were coming for me – namely, that I would be transitioning into something new from an employment perspective. Today is the last normal business day of March 2013, and that means my time with Idera is at an end.

My last three years with Idera have been quite a whirlwind of activity. I feel very fortunate and am extremely thankful to Idera for the opportunities they’ve afforded me – especially over the last year in my role as their Chief SharePoint Evangelist. In that role, I was given the latitude to spend a significant chunk of my time focusing on an area that is very important to me personally: the SharePoint Community.

The Roll-Up

In thinking about my role and some of what I’ve done over the last three years, it occurred to me that it might be nice to summarize and link to some of the materials I assembled while at Idera. I’ve occasionally referenced these items in the past, but I don’t think I’ve ever tried to aggregate them into one post or in one place.

Blog Posts

For the last half a year or so, my regular content generation efforts were funneled into Idera’s SharePoint “Geek Stuff” blog. Here’s a rundown of the posts I’ve written there:

  • March 19, 2013: Plan Your SharePoint Farm Right with a SQL Server Alias
  • February 8, 2013: Do You Have a SharePoint Backup Strategy?
  • January 17, 2013: The Five Minute Cheat-Sheet on SharePoint 2013’s Distributed Cache Service
  • December 20, 2012: Why Administrators Will Giggle Like Schoolgirls About SharePoint 2013’s New App Model
  • November 20, 2012: Sean’s Thoughts on the Microsoft SharePoint Conference 2012
  • October 19, 2012: Getting the Permissions Wired-Up Properly When Attaching a Content Database to a SharePoint Farm
  • September 21, 2012: Okay, Really – What Can I Do With a SharePoint Farm Configuration Database Backup?
  • August 24, 2012: Do I Really Need to Back Up the SharePoint Root?
  • June 20, 2012: Interview with John Ferringer
  • June 8, 2012: TechEd – Why Should You Care?

SharePoint Smarts

There was a point in the past when Idera was publishing a sort of newsletter called “SharePoint Smarts,” and I wrote a couple of articles for the newsletter before it eventually rode off into the sunset.

Whitepapers

Over the years, I’ve also written or co-authored a handful of whitepapers for Idera. At the time I’m writing this post, it appears that a couple of those whitepapers are still available.

And although it isn’t available just yet, sometime soon Idera will be releasing another whitepaper I wrote that had the working title of “SharePoint Caching Implementation Guide.” If that sounds at all interesting, keep an eye on the Whitepapers section of Idera’s Resources page.

Moving-On

Although I’m going to miss my friends at Idera and wish them the best of luck going forward, I’m very excited about some things I’ve got cooking – particularly with my new company!

A couple of months back, I launched Bitstream Foundry, LLC, with the intention of getting back into more hands-on SharePoint work. My plan is to focus initially on a combination of custom SharePoint development work and SharePoint App Store product development. In the past, I’ve been a “switch hitter” when it comes to SharePoint, and I’ve gone back and forth between development and administration roles fairly regularly. Although I’m not abandoning my admin “comrades in arms,” I have to admit that I tend to get the greatest enjoyment out of development work. Between custom solutions and App Model development, I’m pretty sure I’ll be able to keep myself busy.

Things are falling into place with the new company, as well. I applied for membership in Microsoft’s BizSpark program yesterday, and within hours I was accepted – much to my surprise. Why was I surprised? Well, my company website is (at the moment) being redirected to a “coming soon” page I put together on the new Microsoft Azure Web Sites offering. I’ve been waiting for an Office 365 tenant upgrade so that I can build out a proper site on SharePoint 2013, but the upgrade seems to be taking much longer than originally expected …

I also learned today that my application to get Bitstream Foundry listed in the SharePoint App Store was approved, so the way is paved for me to roll out Apps. Now I just need to write them!

One Thing That Won’t Change

Despite all of the recent changes, one aspect of my professional life that won’t be changing is my commitment to sharing with (and giving back to) the SharePoint community. My confidence in my current situation would probably be substantially lower if it weren’t for all of you – my (SharePoint) friends. Over the last several months, my belief in “professional karma” has been strongly reinforced. I’ve always tried to help those who’ve asked for my time and assistance, and I’ve seen that goodwill return to me as I’ve sought input and worked to figure out “what’s next.” To those of you who have offered advice, provided feedback, written endorsements/recommendations, and more, you have my most heartfelt thanks.

I love interacting with all of you, and I still get tremendous enjoyment out of blogging, speaking, teaching, and sharing with everyone in the SharePoint space. My “official” days as a full-time evangelist may be behind me, but that won’t really change anything for me going forward as far as community involvement goes. I’ll continue to answer emails, blog when I have information worth sharing, assemble tools/widgets, help organize events, and generally do what I can to help all of you as you’ve helped me. I’m also honored to be a part of several upcoming events, and I hope to see some of you when I’m “on tour.” If we haven’t met, please say hi and introduce yourself. Making new friends and connections is one of the most rewarding aspects of being out-and-about :-)

References and Resources

  1. Blog Post: Big Changes and Resolutions for 2013
  2. Company: Idera
  3. Yahoo! Finance: Press Release
  4. Idera: SharePoint Geek Stuff Blog
  5. Idera: Resources Page
  6. Microsoft: BizSpark
  7. Company Site: Bitstream Foundry, LLC
  8. Microsoft: Azure Web Sites
  9. Microsoft: Office 365 Enterprise E3
  10. Microsoft: SharePoint App Store
  11. SharePoint Interface: Events and Activities

Custom Ribbon Button Image Limitations with SharePoint 2013 Apps

What started as a simple attempt to use the ~appWebUrl token in an image URL became a deep dive into SharePoint’s internal processing of custom actions and the App deployment process. In this post, I cover what will and won’t work for custom action image URLs in your own SharePoint 2013 Apps.

A custom action button with image

My adventures in SharePoint 2013 App Model Land have been going pretty well, but I recently encountered a limitation that left me sort of scratching my head.

The limitation applies to the creation of custom actions for SharePoint apps. To be more specific: the problem I’ve encountered is that there doesn’t appear to be a way to package and reference (using relative links) custom images for ribbon buttons like the one that’s circled in the image above and to the left. This doesn’t mean that custom images can’t be used, of course, but the work-around isn’t something I’m particularly fond of – nor is it even feasible in some application scenarios.

If you’re not familiar with the new SharePoint 2013 App Model, then you may want to do a little reading before proceeding with this post. I’m only going to cover the App Model concepts that are relevant to the limitation I observed and how to address or work around it. However, if you are familiar with the new 2013 App Model and creating custom actions in SharePoint 2010, then you may want to jump straight down to the section titled Where the Headaches Begin.

One more warning: this post does some heavy digging into SharePoint’s internal processing of custom ribbon actions and URL tokens. If you want to skip all of that and head straight to the practical take-away, jump down to the What About the Image32by32 and Image16by16 Attributes section.

Adding a Ribbon Custom Action

First, let me do a quick run-through on custom actions. They aren’t unique to SharePoint 2013 or its new “Cloud App Model.” In fact, the type of custom action I’m talking about (i.e., extending the ribbon) became available when the Ribbon was introduced with SharePoint 2010.

With a SharePoint 2013 App, adding a new button to the ribbon is a relatively simple affair. It starts with choosing the Ribbon Custom Action option from the Add New Item dialog as shown below and to the left. Once a name is provided for the custom action and the Add button is clicked, the Create Custom Action for Ribbon dialog appears as shown below and to the right. There’s a third dialog page that further assists in setting some properties for a custom action, but I’m going to skip over it since it isn’t relevant to the point I’m trying to make.

Adding a Ribbon Custom Action

Create Custom Action for Ribbon

I want to call attention to one of the selections I made on the Create Custom Action for Ribbon dialog, though; specifically, the decision to expose the custom action in the Host Web rather than in the App Web.

Why is this choice so important? Well, the new App Model enforces a relatively strict boundary of separation between SharePoint sites and any custom applications (running under the new App Model) that they may contain. A SharePoint site (Host Web) can technically “host” applications, but those applications operate in an isolated App Web that may have components running on an entirely different server. Under the new App Model, no custom app code is running in the Host Web.

App Webs (where custom applications exist after installation) don’t have direct access to the Host Web in which they’re contained, either. In fact, App Webs are logically isolated from their Host Web parents. If App Webs want to communicate with their Host Web parent to interact with site collection data, for example, they have to do so through SharePoint’s Client-Side Object Model (CSOM) or the Representational State Transfer (REST) interface. The old full-trust, server-side object model isn’t available; everything is “client-side.”
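For readers who haven’t made that jump yet, here’s a minimal, purely illustrative JSOM sketch of what a page in the App Web has to do just to read something as simple as the Host Web’s title. The SPHostUrl query string parsing and the callback bodies below are my own placeholder plumbing; the important players are SP.ClientContext and SP.AppContextSite, which broker the call across the App Web / Host Web boundary – assuming, of course, that the App has been granted permission to read the Host Web.

[sourcecode language="javascript"]
// Illustrative sketch only: read the Host Web's title from a page in the App Web.
// Assumes sp.js has loaded and the App has (at least) read permission on the Host Web.
var hostWebUrl = decodeURIComponent(
    new RegExp('[?&]SPHostUrl=([^&#]*)').exec(window.location.search)[1]);

var context = SP.ClientContext.get_current();              // context for the App Web
var hostSite = new SP.AppContextSite(context, hostWebUrl); // proxy into the Host Web
var hostWeb = hostSite.get_web();

context.load(hostWeb);
context.executeQueryAsync(
    function () { alert('Host Web title: ' + hostWeb.get_title()); },
    function (sender, args) { alert('Request failed: ' + args.get_message()); }
);
[/sourcecode]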

There are some exceptions to this model of isolation, and one of those exceptions is the use of custom actions to allow an App (residing in an App Web) to partially wire itself into the Host Web. The Create Custom Action for Ribbon dialog shown above, for instance, adds a new button to the ribbon for each of the Document Libraries in the Host Web. This gives users a way to navigate directly from Document Libraries (in the Host Web) to a page in the App Web, for example.

The Elements.xml file that gets generated for the custom action once the Visual Studio wizard has finished running looks something like the following:

[sourcecode language="XML" autolinks="false"]
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
<CustomAction Id="1470c964-6b8a-4d79-9817-4d32c898ffbe.RibbonCustomAction1"
RegistrationType="List"
RegistrationId="101"
Location="CommandUI.Ribbon"
Sequence="10001"
Title="Invoke &apos;LibraryDetailsCustomAction&apos; action">
<CommandUIExtension>
<!--
Update the UI definitions below with the controls and the command actions
that you want to enable for the custom action.
-->
<CommandUIDefinitions>
<CommandUIDefinition Location="Ribbon.Library.Actions.Controls._children">
<Button Id="Ribbon.Library.Actions.LibraryDetailsCustomActionButton"
Alt="Examine Library Details"
Sequence="100"
Command="Invoke_LibraryDetailsCustomActionButtonRequest"
LabelText="Examine Library Details"
TemplateAlias="o1"
Image32by32="_layouts/15/images/placeholder32x32.png"
Image16by16="_layouts/15/images/placeholder16x16.png" />
</CommandUIDefinition>
</CommandUIDefinitions>
<CommandUIHandlers>
<CommandUIHandler Command="Invoke_LibraryDetailsCustomActionButtonRequest"
CommandAction="LibraryManager\Pages\LibraryDetails.aspx"/>
</CommandUIHandlers>
</CommandUIExtension >
</CustomAction>
</Elements>
[/sourcecode]

Deploying the App that contains the custom action markup shown above creates a new button in the ribbon of each Host Web Document Library. By default, each button looks like the following:

Custom Ribbon Button

There are a few attributes in the previous XML that I’m going to repeatedly come back to, so it’s worth taking a closer look at each one’s purpose and associated value(s):

  • Image32by32 and Image16by16 for the <Button /> element. These two attributes specify the images that are used when rendering the custom action button on the ribbon. By default, they point to an orange dot placeholder image that lives in the farm’s _layouts folder.
  • CommandAction for the <CommandUIHandler /> element. In its simplest form, this is the URL of the page to which the user is redirected upon pressing the custom ribbon button.

The Problem with the Default CommandAction

When a user clicks on a custom ribbon button in one of the Host Web document libraries, the goal is to send them over to a page in the App Web where the custom action can be processed. Unfortunately, the default CommandAction isn’t set up in a way that permits this.

[sourcecode language="XML" autolinks="false"]
CommandAction="LibraryManager\Pages\LibraryDetails.aspx"
[/sourcecode]

In fact, attempting to deploy the solution to Office 365 with this default CommandAction results in failure; the App package doesn’t pass validation.

To understand why the failure occurs, it’s important to remember the isolation that exists between the Host Web and the App Web. To illustrate how the Host Web and App Web differ from simply a hostname perspective, consider the project I’ve been working on as an example:

  • Host Web URL: https://mcdonough.sharepoint.com/sites/dev2
  • App Web URL: https://mcdonough-bc920dbeb7ecd3.sharepoint.com/sites/dev2

Notice that although the /sites/dev2 relative path portion is the same for both the Host Web and App Web URLs, the hostname portion of each URL is different. This is by design, and it helps to enforce the logical separation between the Host Web and App Web – even though the App Web technically resides within the Host Web.

Looking again at the default CommandAction attribute reveals that its value is just an ASPX page that is identified with a relative URL. Rather than pointing to where we want it to point …

[sourcecode language="XML" autolinks="false"]
https://mcdonough-bc920dbeb7ecd3.sharepoint.com/sites/dev2/LibraryManager/Pages/LibraryDetails.aspx
[/sourcecode]

… it ends up pointing to a non-existent destination in the Host Web:

[sourcecode language="XML" autolinks="false"]
https://mcdonough.sharepoint.com/sites/dev2/LibraryManager/Pages/LibraryDetails.aspx
[/sourcecode]

And this is exactly what should happen. After all, the custom action is launched from within the Host Web, so a relative path specification should resolve to a location in the Host Web – not the location we actually want to target in the App Web.

Fixing the CommandAction

Thankfully, it isn’t a major undertaking to correct the CommandAction attribute value so that it points to the App Web instead of the Host Web. If you’ve worked with SharePoint at all in the past, then you may know that the key to making everything work (in this situation) is the judicious use of tokens.

What are tokens? In this case, tokens are specific string sequences that SharePoint parses at run-time and replaces with a value based on the run-time environment, action that was performed, associated list, or some other context-sensitive value that isn’t known at design-time.

To illustrate how this works, consider the default CommandAction attribute:

[sourcecode language="XML" autolinks="false"]
CommandAction="LibraryManager\Pages\LibraryDetails.aspx"
[/sourcecode]

Modifying the attribute as follows changes the destination URL of the button so that the user is redirected to the desired page in the App Web rather than the Host Web:

[sourcecode language="XML" autolinks="false"]
CommandAction="~appWebUrl/Pages/LibraryDetails.aspx"
[/sourcecode]

The ~appWebUrl token is replaced at run-time with the actual URL of the associated App Web (https://mcdonough-bc920dbeb7ecd3.sharepoint.com/sites/dev2) to build the desired destination link.

SharePoint defines a whole host of URL strings and tokens for use in Apps. As it turns out, a fairly complete list has been aggregated and defined in a handy little page on MSDN. Thanks to the always-helpful Andrew Clark for pointing this out to me; I hadn’t realized Microsoft had pulled so many tokens together in one place!
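As a related aside, ~appWebUrl can be combined with other context tokens on the query string. The deployed markup we’ll look at later in this post, for example, passes along {ListId} and {SiteUrl} so that the App Web page knows which library and site the button was clicked in. Expressed in the Elements.xml attribute (where the ampersand has to be entity-encoded), that looks something like this:

[sourcecode language="XML" autolinks="false"]
CommandAction="~appWebUrl/Pages/LibraryDetails.aspx?ListID={ListId}&amp;SiteUrl={SiteUrl}"
[/sourcecode]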

Where the Headaches Begin

Since tokens are the key to inserting context-dependent values at run-time, you’d think they’d have been implemented and usable anywhere a developer needs to cross the Host Web / App Web divide.

Apparently not. To be more specific (and fair), I should instead say “not consistently.”

Since this blog post is about image limitations with custom ribbon buttons, you can probably guess where I’m headed with all of this. So, let’s take a look at the Image16by16 and Image32by32 attributes.

By default, the Image16by16 and Image32by32 attributes point to a location in the _layouts folder for the farm. Each attribute value references an image that is nothing more than a little round orange dot:

[sourcecode language="XML" autolinks="false"]
Image32by32="_layouts/15/images/placeholder32x32.png"
Image16by16="_layouts/15/images/placeholder16x16.png"
[/sourcecode]

Much like the CommandAction attribute, it stands to reason that developers would want to replace the placeholder image attribute values with URLs of their choosing. In my case, I wanted to use a set of images I was deploying with the rest of the application assets in my App Web. So, I updated my image attributes to look like the following:

[sourcecode language="XML" autolinks="false"]
Image32by32="~appWebUrl/Images/sharepoint-library-analyzer_32x32-a.png"
Image16by16="~appWebUrl/Images/sharepoint-library-analyzer_16x16-a.png"
[/sourcecode]

Tokens Do Not Work for Image Attributes

I deployed my App to my Office 365 Preview tenant, watched my browser launch into my App Web, hopped back to the Host Web, navigated to a document library, and looked at the toolbar. I was not happy with what I saw (on the left).

The image I had specified for use by the button wasn’t being used. All I had was a broken image link.

Examining the properties for the broken image quickly confirmed my fear: the ~appWebUrl token was not being processed for either of the Image32by32 or Image16by16 attributes. The token was being output directly into the image references.

I tried changing the image attributes to reference the App Web a couple of different ways (and with a couple of different tokens), but none of them seemed to work.

I did a little digging, and I saw that Chris Hopkins (over at Microsoft) covered this very topic for sandboxed solutions in SharePoint 2010. In Chris’ article, though, it was clear that tokens such as ~site and ~sitecollection were valid for use by the Image32by32 and Image16by16 attributes.

To see if I was losing my mind, I decided to try a little experiment. Although I knew it wouldn’t solve my particular problem, I decided to try using the ~site token just to see if it would be parsed properly. Lo and behold, it was parsed and replaced. ~site worked. So, ~site worked … but ~appWebUrl didn’t?
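For reference, that experiment amounted to nothing more than swapping the token in one of the image attributes, along these lines. Since ~site resolves to the Host Web (where my images don’t actually live), it was never going to fix the broken image – it simply demonstrated that the token got parsed:

[sourcecode language="XML" autolinks="false"]
Image16by16="~site/Images/sharepoint-library-analyzer_16x16-a.png"
[/sourcecode]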

That didn’t make any sense. If it isn’t possible to use the ~appWebUrl token, how are developers supposed to reference custom images for the buttons they deploy in their Apps? Without the ~appWebUrl, there’s no practical way to reference an item in the App Web from the Host Web.

Token Forensics

When I find myself in situations where I’m holding results that don’t make sense, I can’t help myself: I pull out Reflector and start poking around for clues inside SharePoint’s plumbing. If I dig really hard, sometimes I find answers to my questions.

After some poking around with Reflector, I discovered that the “journey to enlightenment” (in this case) started with the RegisterCommandUIWithRibbon method on the SPCustomActionElement type. It is in this method that the Image16by16 and Image32by32 attributes are read in from the XML file in which they are defined. Before assignment for use, they’re passed through a couple of methods that carry out token parsing:

  • ReplaceUrlTokens on the SPCustomActionElement type
  • UrlFromPrefixedUrlCore on the SPUtility type

Although these methods together are capable of recognizing and replacing many different token types (including some I hadn’t seen listed in existing documentation; e.g., ~siteCollectionLayouts), none of the new SharePoint 2013 tokens, like the ~appWebUrl and ~remoteAppUrl tokens, appear in these methods.

Interestingly enough, I didn’t see any noteworthy differences between the path of execution for processing image attributes and the sequence of calls through which CommandAction attributes are handled in the RegisterCommandUIExtension method of the SPRibbon type. The RegisterCommandUIExtension method eventually “punches down” to the ReplaceUrlTokens and UrlFromPrefixedUrlCore methods, as well.

The differences I was seeing in how tokens were handled between the CommandAction and Image32by32/Image16by16 attributes had to be originating somewhere else – not in the processing of the custom action XML.

Deployment Modifications

After some more digging in Reflector to determine where the ~appWebUrl actually showed up and was being processed, I came across evidence suggesting that “something special” was happening on App deployment rather than at run-time. The ~appWebUrl token was being processed as part of a BuildTokenMap call in the SPAppInstance type; looking at the call chain for the BuildTokenMap method revealed that it was getting called during some App deployment operations processing.

App Deployment Hierarchy to BuildTokenMap

If changes were taking place on App deployment, then I had a hunch I might find what I was looking for in the content database housing the Host Web to which my App was being deployed. After all, Apps get deployed to App Webs that reside within a Host Web, and Host Webs live in content databases … so, all of the pieces of my App had to exist (in some form) in the content database. 

I fired-up Visual Studio, stopped deploying to Office 365, and started deploying my App to a site collection on my local SharePoint 2013 VM farm. Once my App was deployed, I launched SQL Management Studio on the SQL Server housing the SharePoint databases and began poking around inside the content database where the target site collection was located.

Brief aside: standard rules still apply in SharePoint 2013, so I’ll mention them here for those who may not know them. Don’t poke around inside content databases (or any other databases) in live SharePoint environments you care about. As with previous versions, querying and working against live databases may hurt performance and lead to bigger problems. If you want to play with the contents of a SharePoint database, either create a SQL snapshot of it (and work against the snapshot) or mount a backup copy of the database in a test environment.

I wasn’t sure what I was looking for, so I quickly examined the contents of each table in the content database. I hit paydirt when I opened-up the CustomActions table. It had a single row, and the Properties field of that row contained some XML that looked an awful lot like the Elements.xml which defined my custom action:

[sourcecode language="XML" autolinks="false"]
<?xml version="1.0" encoding="utf-16"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
<CustomAction Title="Invoke 'LibraryDetailsCustomAction' action" Id="4f835c73-a3ab-4671-b142-83304da0639f.LibraryDetailsCustomAction" Location="CommandUI.Ribbon" RegistrationId="101" RegistrationType="List" Sequence="10001">
<CommandUIExtension xmlns="http://schemas.microsoft.com/sharepoint/">
<!--
Update the UI definitions below with the controls and the command actions
that you want to enable for the custom action.
-->
<CommandUIDefinitions>
<CommandUIDefinition Location="Ribbon.Library.Actions.Controls._children">
<Button Id="Ribbon.Library.Actions.LibraryDetailsCustomActionButton" Alt="Examine Library Details" Sequence="100" Command="Invoke_LibraryDetailsCustomActionButtonRequest" LabelText="Examine Library Details" Image16by16="~site/Images/sharepoint-library-analyzer_16x16-a.png" Image32by32="~appWebUrl/Images/sharepoint-library-analyzer_32x32-a.png" TemplateAlias="o1"/>
</CommandUIDefinition>
</CommandUIDefinitions>
<CommandUIHandlers>
<CommandUIHandler Command="Invoke_LibraryDetailsCustomActionButtonRequest" CommandAction="javascript:LaunchApp('709d9f25-bb39-4e6a-97d5-6e1d7c855f38', 'i:0i.t|ms.sp.int|a441fa2c-8c5f-4152-9085-3930239ab21b@9db0b916-0dd6-4d6c-be49-41f72f5dfc02', '~appWebUrl\u002fPages\u002fLibraryDetails.aspx?ListID={ListId}\u0026SiteUrl={SiteUrl}', null);"/>
</CommandUIHandlers>
</CommandUIExtension>
</CustomAction>
</Elements>
[/sourcecode]

There were some differences, though, between the Elements.xml I had defined earlier and what actually appeared in the Properties field. I narrowed my focus to the differences that existed between the non-working Image32by32/Image16by16 attributes

[sourcecode language="XML" autolinks="false"]
Image16by16="~appWebUrl/Images/sharepoint-library-analyzer_16x16-a.png"
Image32by32="~appWebUrl/Images/sharepoint-library-analyzer_32x32-a.png"
[/sourcecode]

… and the CommandAction attribute.

[sourcecode language="XML" autolinks="false"]
CommandAction="javascript:LaunchApp('709d9f25-bb39-4e6a-97d5-6e1d7c855f38', 'i:0i.t|ms.sp.int|a441fa2c-8c5f-4152-9085-3930239ab21b@9db0b916-0dd6-4d6c-be49-41f72f5dfc02', '~appWebUrl\u002fPages\u002fLibraryDetails.aspx', null);"
[/sourcecode]

As suspected, some deployment-time processing had been performed on the CommandAction attribute but not on the image attributes. The CommandAction still contained an ~appWebUrl token, but it was wrapped as a parameter in a call to a LaunchApp JavaScript function that appeared to be handled (or rather, executed) client-side, in the browser.

Jumping into my App in Internet Explorer and opening IE’s debugging tools via <F12>, I did a search for the LaunchApp function within the referenced scripts and found it in the core.js library/script. Examining the LaunchApp function revealed that it called the LaunchAppInternal function; LaunchAppInternal, in turn, called back to the SharePoint server’s /_layouts/15/appredirect.aspx page with the parameters that were supplied to the original LaunchApp method – including the URL with the ~appWebUrl token.

To complete the journey, I opened up the Microsoft.SharePoint.ApplicationPages.dll assembly back on the server and dug into the AppRedirectPage class that provides the code-behind support for the AppRedirect.aspx page. When the AppRedirect.aspx page is loaded, control passes to the page’s OnLoad event and then to the HandleRequest method. HandleRequest then uses the ReplaceAppTokensAndFixLaunchUrl method of the SPTenantAppUtils class to process tokens.

The ReplaceAppTokensAndFixLaunchUrl method is noteworthy because it includes parsing and replacement support for the ~appWebUrl token, the ~remoteAppUrl token, and other tokens that were introduced with SharePoint 2013. The deployment-time processing that is performed on the CommandAction attribute is what ultimately wires-up the CommandAction to the ReplaceAppTokensAndFixLaunchUrl method. The Image32by32 and Image16by16 attributes don’t get this treatment, and so the new 2013 tokens (like ~appWebUrl) can’t be used by these attributes.

What About the Image32by32 and Image16by16 Attributes?

Now that some of the key differences in processing between the CommandAction attribute and image attributes have been identified, let me jump back to the original problem. Is there anything that can be done with the Image32by32 and Image16by16 attributes that are specified in a custom action to get them to reference assets that exist in the App Web? Since tokens like ~appWebUrl (and ~remoteAppUrl for all you Autohosted and Provider-hosted application builders) aren’t parsed and processed, are there alternatives?

My response is a somewhat wishy-washy “doubtful.” In my estimation, you’d need to hack SharePoint with something like a javascript: tag for an image attribute (which, interestingly enough, doesn’t appear to be expressly blocked), find some way to obtain the App Web URL base, formulate the proper path to the image, and more. If it could be done, you’d be gaming SharePoint … and I could easily see a cumulative update or service pack breaking this type of elaborate work-around.

The safest and most pragmatic way to handle this situation, it seems, is to use absolute URLs for the desired image resources and forget about deploying them to the App Web altogether. For example, I placed the images I was trying to use on the ribbon buttons here on my blog and referenced them as follows:

[sourcecode language="XML" autolinks="false"]
Image16by16="http://sharepointinterface.com/wp-content/uploads/2013/01/sharepoint-library-analyzer_16x16-a.png"
Image32by32="http://sharepointinterface.com/wp-content/uploads/2013/01/sharepoint-library-analyzer_32x32-a.png"
[/sourcecode]

Working Custom Button Image

I had some initial concerns that I might inadvertently bump into some security boundaries, such as those that sometimes arise when an asset is referenced via HTTP from a site that is being served up under HTTPS. This didn’t prove to be the case, however. I tested the use of absolute URLs in both my development VM environment (served up under HTTP) and through one of my Office 365 Preview site collections (accessed via HTTPS), and no browser security warnings popped up. The target image appeared on the custom button as desired (shown on the left) in both cases.

Although the use of absolute URLs will work in many cases, I have to admit that I’m still not a big fan of this approach – especially for SharePoint-hosted apps like the one I’ve been working on. Even though Office 365 entails an “always connected” scenario, I can easily envision on-premises deployment environments that are taken offline some or all of the time. I can also see (and have seen in the past) SharePoint environments where unfettered Internet access is the exception rather than the rule.

In these environments, users won’t see image buttons at all – just blank placeholders or broken image links. After all, without Internet access there is no way to resolve and download the referenced button images.

Wrapping It Up

At some point in the future, I hope that Microsoft considers extending token parsing for URL-based attributes like Image32by32 and Image16by16 to include ~appWebUrl, ~remoteAppUrl, and the other new tokens used by the SharePoint 2013 App Model. In the meantime, though, you should probably consider getting an easily accessible online location (SkyDrive, Dropbox, a blog, etc.) for images and other similar assets if you’re building apps under the new SharePoint 2013 App Model and intend to use custom actions.

Update (1/27/2013)

I need to issue a couple of updates and clarifications. First, I need to be very clear and state that SharePoint-hosted apps were the focus of this post. In a SharePoint-hosted app, what I’ve written is correct: there is no processing of “new” 2013 tokens (like ~appWebUrl and ~remoteAppUrl) for the Image32by32 and Image16by16 attributes. Interestingly enough, though, there does appear to be processing of the ~remoteAppUrl in the Image32by32 and Image16by16 attributes specifically for the other application types such as provider-hosted apps and autohosted apps. Jamie Rance mentioned this in a comment (below), and I verified it with an autohosted app that I quickly spun-up.

I double-checked to see if the ~remoteAppUrl token would even be recognized/processed (despite the lack of a remote web component) for SharePoint-hosted apps, and it is not … nor is the ~appWebUrl token processed for autohosted apps. The selective implementation of only the ~remoteAppUrl token for certain app types has me baffled; I hope that we’ll eventually see some clarification or changes. If you’re building provider-hosted or autohosted apps, though, this does give you a way to redirect image requests to your remote web application rather than an absolute endpoint. Thank you, Jamie, for the information!

And now for some good news for SharePoint-hosted app creators. Prior to writing this post, I had posted a question about the tokens over in the SharePoint Exchange forums. At the time I wrote this post, there hadn’t been any activity to suggest that a solution or workaround existed. F. Aquino recently supplied an incredibly creative answer, though, that involves using a data URI to Base64-encode the images and package them directly into the Image32by32 and Image16by16 attributes themselves! Although this means that some image pre-processing will be required to package images, it gets around the requirement of being “always-connected.” This is an awesome technique, and I’ll certainly be adding it to my arsenal. Thank you, F. Aquino!
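For those who want to try the data URI route, the basic shape of the markup (with the Base64 payloads truncated here for readability – you’d generate the real strings from your actual PNG files) looks something like the following:

[sourcecode language="XML" autolinks="false"]
<!-- Base64 payloads truncated for readability -->
Image16by16="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..."
Image32by32="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..."
[/sourcecode]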

References and Resources

  1. MSDN: How to: Create custom actions to deploy with apps for SharePoint
  2. MSDN: Apps for SharePoint overview
  3. MSDN: Customizing and Extending the SharePoint 2010 Server Ribbon
  4. MSDN: How to: Complete basic operations using SharePoint 2013 client library code
  5. MSDN: How to: Complete basic operations using SharePoint 2013 REST endpoints
  6. MSDN: URL strings and tokens in apps for SharePoint
  7. Twitter: Andrew Clark
  8. Chris Hopkins’ Visilog: Using images on your ribbon buttons from a sandboxed solution in SharePoint 2010
  9. Software: Red Gate’s Reflector
  10. Service: Microsoft’s SkyDrive
  11. Service: Dropbox

Big Changes and Resolutions for 2013

2013 promises to be a year of big changes. In this post, I cover career changes and some official resolutions I’m making for the new year.

2012 is coming to a close, and 2013 is just around the corner. I’ve been thinking about the year that has gone by, but I’ve been thinking even more about the year to come. 2013 promises to be a year of great personal change – for reasons that will become clear with a little more reading.

But first: I’ve got this friend, and many of you probably know him. His name is Brian Jackett, and nowadays he works for Microsoft as a member of their premier field engineering (PFE) team. For the last couple of years, I’ve watched (with envy, I might add) as Brian has blogged about his year-gone-by and assembled a list of goals for the coming year. He even challenged me (directly) to do the same at one point in the past, but sadly I didn’t rise to the challenge.

I’ve decided that year-end 2012 is going to be different. 2012 was a very busy year for me, and a lot of great things happened throughout the year. Despite these great things, I’m going into 2013 knowing that a lot is going to change (and frankly has to change).

Biggest Things First

Let me start with the most impactful change-up: my full-time role as Chief SharePoint Evangelist for Idera is coming to a close by the end of March 2013. I’ve been with Idera for over two and a half years now, and I’m sad to be moving on from such a great group of folks.

I’m leaving because Idera is undergoing some changes, and the company is in the process of adjusting its strategy on a few different levels. One of the resultant changes brought about by the shift in strategy involves the company getting back to more of an Internet/direct sales-based approach. Since a large part of my role involves community-based activities and other activities that don’t necessarily align with the strategy change, it doesn’t make a whole lot of sense for me to remain – at least in the full-time capacity that I currently operate in.

To be honest, I didn’t expect my role or position to be around forever. As many of you heard me declare publicly, though: I wanted to make the most of it while I had the role and the backing. I got a lot out of working with my friends at Idera, and I greatly appreciate the opportunities they afforded me. I hope it’s been as much fun for them as it has been for me.

What’s Next?

Even with my full-time role coming to a close, I’ve already had a couple of conversations about continuing to do some work with/for Idera. Despite my full-time focus on Idera over the last 2+ years, I have actually been operating as a contractor/consultant – not a full-time employee. This has left me free to take on other SharePoint work when it made sense (and when my schedule permitted). Going forward, my situation will probably just do a flip-flop: Idera will become the “side work” (if it makes sense), and something else will take center stage.

I don’t yet know what will be “showing on the main screen,” though. That’s been on my mind quite a bit recently, and I’ve been spending a lot of time trying to figure out what I really want to do next. Take a full-time role with a local organization? Do contract development work and continue to work from home? Wiggle my way into becoming the first Starbucks SharePoint barista? Something else entirely? If my preliminary assessment of what’s out there is accurate, there are quite a few different options. I’ll certainly be busy evaluating them and comparing them against my ever-evolving “what I want to do” checklist.

Can You Help Me Out?

Many of you know that I do a lot of speaking, blogging, answering of questions/emails, etc. Giving back to the community and sharing what I’ve learned are a part of my DNA, and I’ll continue to do those things to the extent that I can going forward. I normally don’t ask for anything in return; I just like to know that I’m helping others.

As I try to figure out what’s next, I’d like to ask a favor: if you feel that I’ve helped you in some significant or meaningful way (through one of my sessions, in an email I’ve answered, etc.) over the last few years, would you be willing to endorse my skills or recommend me on LinkedIn? I see a wealth of opportunities “out there,” and sometimes an endorsement or recommendation can make the difference when it comes to employment or landing a client.

Resolutions

Employment and the ability to support my family aside, this is the first year (in quite a few) that I’ve made some resolutions for the new year. Although it’s an artificial break-point, I’ve separated my resolutions into “work-related” and “non-work” categories. And although I can think of lots of things I want to change, I’ve picked only three in each category to focus on.

Work-Related

1. Manage Distractions More Effectively. Working at home can be a dual-edged sword. If I were single, unmarried, and better-disciplined, I’d see working at home as the ability to do whatever I wanted without distraction. That’s not the reality in my world, though. Where I can remove distractions, I intend to.

Some of you chimed-in (positively) when I recently made a comment on Facebook about unsubscribing from a lot of junk email. Over time, I’ve come to realize that all of the extra email I’ve been getting is just a distraction. I can do something about that.

The same goes for email in general. I have multiple email accounts, and mail streams into those accounts throughout the day. Rather than constantly trying to stay on top of my inbox, I’m going to shift to a “let it sit” mentality. If I’m honest with myself, 95% of the email I receive can go unanswered for a while. I’ll attend to those items that require my attention, but some of the quasi real-time email discussions I’m known to have don’t really matter in the greater scheme of getting real work done.

Social networking tools are another great example. I think they can be a very positive and helpful force (especially for someone who’s at home all day, like me), but they can very easily become a full-time distraction. I cut down my Twitter use dramatically a couple of years back. I won’t even set foot “on” Yammer because of the huge, sucking, time-consuming noise it appears to make. Going forward, I’m going to attempt to use other tools (Facebook, LinkedIn, etc.) during specific windows rather than having them open all day, every day – even if I’m not “actively” on them.

For distractions that can’t be removed (e.g., children running around), my only option is to better manage the distractions. My home office has doors; I’ve already begun using them more. I’ll be wearing headphones more often. These are the sorts of things I can do to ensure that I remain better focused.

2. Thoughtfully Choose Work. I had to come clean with myself on this one, and that’s why I chose to word the resolution the way I did. Work is important to me, and it’s in my nature to always be working on something – even if that work is “for fun.” While I’d like to be the type of person who could cut back and work less, I don’t know that I’d be able to do so without incurring substantial anxiety.

Knowing this about myself, I’ve settled on trying to be more thoughtful about doing work. Make it a choice, not the default. Being a workaholic who labors from home, work became my default mode rather quickly and naturally. I remember a time when weekends were filled with fun activities – and leaving work meant “leaving” in both the physical and mental sense. Even if I can’t maintain boundaries that are quite that clear nowadays, I can be more conscientious about my choices and actually make work a conscious choice. That may sound like nothing more than semantics or babble, but I suspect other work-at-home types will get what I’m saying.

For me, this mentality needs to extend to “extracurricular” work-like activities, as well. I just went back through my 2012 calendar, and I counted 19 weekends where I was traveling or engaged in (SharePoint) community activities. That’s over a third of the weekends for the year. Many of those events are things I just sort of “fell” into without thinking too much about it. Perhaps I’d choose to do them all anyway, but again – it needs to be a choice, not the default course of action.

3. Spend Time on Impactful Efforts. Of all my work-related resolutions, this is the one that’s been on my mind the most. As I already mentioned (and many of you know), I spend a lot of time answering questions in email, speaking at and organizing SharePoint events, writing, blogging, etc. Although I originally viewed all of these activities as equally “good things,” in the past year or so I’ve begun to see that some of those activities are more impactful (and thus “more good”) to a wider audience than others.

In 2013, I intend to focus more of my time on efforts that are going to help “the many” rather than “the few.” No, that doesn’t mean I’m going to stop answering email and cease meaningful one-on-one interactions, but I do intend to choose where I spend my time more carefully.

In broader terms, I also intend to focus my capabilities on topics and areas that are generally more meaningful in nature. For example, my wife and her co-worker started a project a while back that has been gaining a lot of traction at a regional level – and the scope of the project is growing. Their effort, The Schizophrenia Oral History Project, profoundly impacts the lives of people living with schizophrenia and those caring for them, providing services to them, and others. I’ve been providing “technical support” (via an introduction to Prezi, registering domain names, etc.) for the project for a while, and I’m currently building a web site for the project using SharePoint and the Office 365 Preview. This sort of work is much more meaningful and fulfilling than some of the other things I’ve spent my time on, and so I want to do more of it.

Non-Work

1. Lose Another Ten Pounds. My weight has gone up and down a few times in the past. At the beginning of 2012, I was pretty heavy … and I felt it. I was out of shape, lethargic, and pretty miserable. Over the course of 2012, I lost close to 30 pounds through a combination of diet (I have Mark Rackley to thank for the plan) and exercise. Now at the end of the year, I’ve been bouncing around at roughly the same weight for a month or two – something I attribute primarily to the holidays and all the good food that’s been around. In 2013, I plan to lose another ten pounds to get down to (what I feel) is an optimal weight.

2. Take Up a Martial Art Once Again. This will undoubtedly help with #1 directly above. I practiced a couple of different martial arts in the past. Before and during college, I practiced Tae Kwon Do. A few years back, I had to reluctantly cease learning Hapkido after only a couple of years in. Martial arts are something I’ve always enjoyed (well, except when I was doing something like separating a shoulder), and I’ve found that life generally feels more balanced when I’m practicing. With the recent enrollment of my five-year-old son in a martial arts program, I’m once again feeling the pull. I’ve wanted to learn more about Krav Maga for a while; since there’s a school nearby, I intend to check it out.

3. Prioritize My Home Life. This may be last on my list, but it’s certainly not least. With everything I’ve described so far, it’s probably no surprise to read that I do a pretty poor job of prioritizing home life and family activities. That’s going to change in 2013. Provided I make some headway with my other resolutions, it will become easier to focus on my wife, my kids, and my own interests without feelings of guilt.

Wrap-Up

I’ve written these resolutions down on a Post-It, and that Post-It has been placed on one of my monitors. That’ll ensure that it stays “in my face.”

Do you have any resolutions you’re making? Big changes?

References and Resources

  1. Blog: Brian Jackett
  2. Microsoft: Premier Field Engineering (PFE) Team
  3. Blog Post: Brian Jackett – Goals for 2010
  4. Company: Idera
  5. Company: Starbucks
  6. LinkedIn: Sean McDonough
  7. Facebook: Sean McDonough
  8. LinkedIn: Dr. Tracy McDonough
  9. LinkedIn: Dr. Lynda Crane
  10. Prezi: The Schizophrenia Oral History Project
  11. Prezi: Home Page
  12. Microsoft: Office 365 Preview
  13. Blog: Mark Rackley (The SharePoint Hillbilly)
  14. Wikipedia: Taekwondo
  15. Wikipedia: Hapkido
  16. Wikipedia: Krav Maga

Workflow 1.0 Beta and SQL Server Aliases Do Not Play Nicely Together

My recent attempts to configure the Windows Azure Workflow service (Workflow 1.0 Beta) with a SQL Server alias didn’t go so well. If you’re playing with Workflow 1.0 Beta, stay away from aliases!

I’ve been doing a bit of build-out with the new SharePoint 2013 Preview in anticipation of some development work, and I’ve documented a few snags that I’ve hit along the way. Although I ran into some additional problems with the SharePoint 2013 Preview yesterday, this post isn’t about SharePoint specifically; it’s about the Windows Azure Workflow service – also known (at this point in time) simply as Workflow 1.0 Beta.

A Bit of Background

If you’re brand-new to the SharePoint 2013 scene, you may not yet have heard: the future for workflow lies outside of SharePoint, not within it. The Windows Azure Workflow service (yes, it even has “Azure” in the name if you’re running it on-premise and not in the cloud) is industrial-strength stuff, and it promises all sorts of improvements over workflow as we know it (and use it) right now.

To take advantage of Windows Azure Workflow at this point in the SharePoint 2013 release cycle requires the installation of the Workflow 1.0 Beta. The installation is not a particularly complicated process, but that’s probably because I’ve been using a solid resource.

Note: the “solid resource” I’m referring to is CriticalPath Training’s VM setup guide. I’ve been using it as a reference as I’ve been doing my SharePoint 2013 build-outs; the guide itself is fantastic and comes with some supporting PowerShell scripts to help things along. The guide and scripts are freely available here – you just need to create an account on the CriticalPath Training site to download them. I recommend them if you’re just getting started with the SharePoint 2013 Preview.

So, what’s my beef with the Workflow 1.0 Beta? To summarize it in a few words: Workflow 1.0 Beta doesn’t seem to work with SQL Server aliases. I certainly tried, but in the end I was forced to abandon using an alias.

How I Initially Configured It

If you read my previous “An unexpected error has occurred” post, then you know that there are four different VMs I’m configuring for a SharePoint 2013 environment. Two of those VMs are of interest in the discussion about Workflow 1.0 Beta configuration:

  • SP2013-SQL. A SQL Server 2012 Enterprise VM
  • SP2013-APPS. A utility server for running Workflow 1.0 Beta and other “off-box” services

As a general rule of thumb, anytime I need to establish a SQL Server connection, I try to create a SQL Server alias to avoid tightly coupling my SQL Server consumers/clients directly to a SQL Server instance. This buys me some flexibility in the unfortunate event that a server dies, I need to relocate databases, etc.

SQL Server Alias Configuration

I was planning to install the Workflow 1.0 Beta on my SP2013-APPS virtual machine, and I knew that Workflow 1.0 Beta would need to connect to my SP2013-SQL SQL Server. So, I created both a 32-bit alias and a 64-bit alias called SpSqlAlias for the default SQL Server instance residing on SP2013-SQL (which happened to be at IP address 172.16.0.2), as shown on the left.

Trying to configure with a SQL alias

Once the alias was created and all other prerequisites were addressed, I started the Workflow 1.0 Beta installation process. In the Workflow Configuration Wizard, I supplied my SQL Server alias in place of a server name, checked the connection, and was given a green check-mark. As the configuration process started, everything looked good. Even the Service Bus farm management and gateway databases were created without issue.

The problems started shortly thereafter, though, during the creation of a default container. Basically, I didn’t get any further. I literally stared at the screen on the right for a full ten (10) minutes without seeing any meaningful activity in the Details box. After 10 minutes had elapsed, the configuration process failed and I was treated to an exception message and stack trace. Omitting the inner exception detail, here’s what I was told:

[sourcecode language="text"]
System.Management.Automation.CmdletInvocationException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) ---> System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) ---> System.ComponentModel.Win32Exception: The system cannot find the file specified
[/sourcecode]

Validating the Alias

Of course, the first thing I double-checked was the SQL Server to ensure that it was responding. It was. I even backed through the configuration wizard a couple of steps and verified (with the “Test Connection” button) that I could reach the SQL Server. No issues there: my SQL Server alias was valid as far as the configuration wizard was concerned.

Looking more closely at the exception message left me suspicious. This part in particular made me raise my eyebrow:

(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

Named Pipes Provider? I had specified a TCP/IP alias, not Named Pipes. Changing the permitted 32-bit and 64-bit client protocols (again, via the SQL Server Configuration Manager) to make sure that TCP/IP was enabled and Named Pipes was disabled …

Permitted Client Protocols

… made no difference, either – I’d still get an exception from the Named Pipes Provider. It looked as though one or more steps in the configuration process were “doing their own thing,” ignoring my alias and client protocols configuration, and (as a result) having trouble reaching the SQL Server.

Trying to Go with the Flow

The thought that entered my mind was, “Ok – don’t fight it if you don’t have to.” If the configuration wizard was going to fall back to using Named Pipes, then I’d go ahead and set up a Named Pipes alias. I wasn’t thrilled about the idea, but I’d rather have the SQL Server alias in place than no alias at all.

So much for that thought.

I played with the actual Named Pipes alias format quite a bit, but in the end the result was always the same.

Trying to configure with SQL alias (named pipes) and failing

Attempts to use a TCP/IP alias always failed partway through configuration, and attempts to use a Named Pipes alias never even got started.

The Result

I gave it some more thought … and came up empty. So, I dumped any remaining aliases, ensured that all client protocols were back to their fully enabled state, and tried to do the configuration with just the SQL Server host name (to connect to the default instance).

The result?

 Successful completion of configuration

Using just the host name, I had no issues performing the configuration.

The Conclusion

If you are setting up Workflow 1.0 Beta, stay away from SQL Server aliases. As best as I can tell, they aren’t (yet) supported. I’m hopeful that this is just a beta bug or limitation.

On the other hand, if you think I’ve gone off the deep end and can find some way to get the Workflow 1.0 Beta configuration to run with SQL Server aliases, please let me know – I’d love to hear about it!

References and Resources

  1. Blog Post: “An unexpected error has occurred” after Installing SharePoint 2013
  2. Microsoft Download Center: Workflow 1.0 Beta
  3. TechNet: What’s new in workflow in SharePoint Server 2013
  4. CriticalPath Training: SharePoint Server 2013 Preview Virtual Machine Setup Guide
  5. MSDN: Create or Delete a Server Alias for Use by a Client (SQL Server Configuration Manager)

“An unexpected error has occurred” after Installing SharePoint 2013

After installing the current SharePoint 2013 preview build, I was greeted by an “An unexpected error has occurred” message while trying to navigate to the Central Administration site. This post walks through the steps I took to troubleshoot the problem and implement a least-privileges fix for it.

You’ve undoubtedly heard the news: SharePoint 2013 is coming. The preview is available right now, and you can download it from TechNet if you want to join in the fun. Just make sure you can meet the hardware and environmental prerequisites. They’re somewhat brutal.

As you might have guessed from the title of this post, I’ve been trying to get in on the SharePoint 2013 fun. There are a number of things I’m supposed to be working on for SharePoint 2013, so building out a SharePoint 2013 environment with the new preview build has been high on my list of things to do.

This post is about a very recent experience with a SharePoint 2013 installation and configuration … and yes, it’s one that had me looking long and hard for a happy pill.

As with many of my other blog posts, this post takes a winding, iterative approach towards analyzing problems and trying to find solutions. Please bear with me or jump to the “Implementing the Change” section near the end if you want to blindly apply a change (based on the blog post title) and hope for the best.

Hitting a Small Snag

An unexpected error has occurred

This blog post would be something of a disappointment if all it said was “… SharePoint 2013 installed without issue, and my environment lived happily ever after.”

No such luck; just look at the screenshot on the left. Sometimes I feel like I’m a magnet for “bad technology karma” despite my attempts to keep a clean slate in that area. Of course, SharePoint 2013 is only in the preview stages of release, so hiccups are bound to occur. I accept that. Like many of you, I went through it with SharePoint 2010 and SharePoint 2007, as well.

Strangely, though, I built-out a SharePoint 2013 environment with an earlier build (prior to the release of the current preview) some time ago. That’s why I was really surprised to see the message shown in the screenshot immediately upon completing a run of the SharePoint 2013 Products Configuration Wizard:

An unexpected error has occurred.

That’s it. No additional information, no qualification – just a technological “whoops” accompanied by the equivalent of a shoulder shrug from my VM environment.

The Setup

Let me take a step back to describe the environment I had put into place before trying to install and configure the SharePoint 2013 binaries.

One major difference between my latest SharePoint 2013 setup attempt and the previous (successful) attempt was the make-up of the server environment. After learning of some of the install restrictions that are specific to SharePoint 2013 (for example, Office Web Apps require their own server), I decided to build out the following virtual servers on my laptop and assemble them into a domain:

  • SP2013-DC: a Windows 2008 R2 Enterprise domain controller (for my virtual spdc.com domain)
  • SP2013-SQL: a Windows 2008 R2 Enterprise server running SQL Server 2012 Enterprise
  • SP2013-WFE: a Windows 2008 R2 Enterprise all-in-one SharePoint 2013 Server
  • SP2013-APPS: a Windows 2008 R2 Enterprise “extra” server for roles/components that couldn’t be installed alongside SharePoint

Overkill? Perhaps, but I wanted to get a feel for how the different components might interact in a “real” production environment.

I also opted for a least privileges install so that I could start to understand where some of the security boundaries had shifted versus SharePoint 2010. Since I planned to use the farm for my development efforts, I didn’t want to make the common developer mistake of shoehorning everything onto one server with unrestricted privileges. Such an approach dodges security-related issues during development, but it also tends to yield code that falls apart (or at least generates security concerns) upon first contact with a “real” SharePoint environment.

Failed Troubleshooting

As stated earlier, my setup problems started after I installed the SharePoint 2013 bits and ran the SharePoint 2013 Products Configuration Wizard. The browser window that popped-up following the configuration wizard’s run was trying to take me to the Farm Configuration wizard that lives inside the Central Administration site. Clearly I hadn’t gotten very far in configuring my environment.

I started looking in some of the usual locations for additional troubleshooting hints. Strangely, I couldn’t quickly find any:

  • The Central Administration site application pool looked okay and was spun-up
  • My Application and System event logs were pretty doggone clean – exceptionally few errors and warnings, and none that appeared relevant to the current problem
  • I didn’t see anything in the Security log to suggest problems

I tried an IISRESET. I rebooted the VM. I checked my SQL alias to make sure nothing was messed-up there. I checked my farm service account permissions in SQL Server to ensure that the account had the dbcreator and securityadmin role assignments as well as rights to the associated databases. Heck, I even deprovisioned the server and re-ran the SharePoint 2013 Products Configuration Wizard twice – once with a complete wipe of the databases. Nothing I did seemed to make a difference. Time after time, I kept getting “An unexpected error has occurred.”
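As a side note, that SQL Server permission check is easy to script if you would rather not click through Management Studio each time. The following is just a sketch: it assumes Invoke-Sqlcmd is available (from the SQLPS or SqlServer module), and the server and account names are the ones from my environment.

[sourcecode language="powershell"]
# Sketch: list the server-level role memberships (e.g., dbcreator, securityadmin)
# held by the farm service account. Swap in your own SQL Server and account names.
$query = @"
SELECT r.name AS RoleName
FROM sys.server_role_members m
JOIN sys.server_principals r ON r.principal_id = m.role_principal_id
JOIN sys.server_principals p ON p.principal_id = m.member_principal_id
WHERE p.name = N'SPDC\svcSPFarm'
"@
Invoke-Sqlcmd -ServerInstance "SP2013-SQL" -Query $query | Format-Table -AutoSize
[/sourcecode]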

Some Insight

Maybe it was my go ‘rounds with previous SharePoint beta releases, or maybe it was a combination of Eric Harlan’s and Todd Klindt’s spirits reaching out to me (the point of commonality between Todd and Eric: the two of them are fond of saying “it’s always permissions”). Whatever the source, I decided to start playing around with some account rights. Since I was setting up a least-privileges environment, it made sense that rights and permissions (or some lack of them) could be a factor.

Application Pools

The benefit of having gotten nearly nowhere on my farm configuration task was that there wasn’t much to really troubleshoot. Only a handful of application pools had been created (as shown on the right), and only one or two accounts were actually in-play. Since my Central Administration site was having trouble coming up, and knowing that the Central Administration site runs in the context of the farm service/timer service account, I focused my efforts there.

In my farm, I had assigned SPDC\svcSPFarm for use by the timer service. This account was a basic domain account at the start – nothing special, and no interesting rights to speak of. To see if I could make any progress on getting the Central Administration site to come up, I dropped the account into the Domain Admins group and tried to access the Central Administration site again.

I had no luck at first … but after an IISRESET and a re-launch of the site, Central Administration came up. I pulled the account out of the Domain Admins group and re-tried the site. It came up, but again – after an IISRESET, I was back to “An unexpected error has occurred.”

I repeated the process, but the second time around I used the local (SP2013-WFE) Administrators group instead of the Domain Admins group. The results were the same: adding SPDC\svcSPFarm to the Administrators group allowed me to bring Central Administration up, and removing the account from the Administrators group brought things back down.

Hunch confirmed: it looked like I was dealing with some sort of rights or permissions issue.

Of course, knowing that there is a rights or permissions issue and knowing what the specific issue is are two very different things. The practical part of me screamed “just leave the account in the Administrators group and move on.”

Unfortunately, I don’t deal well with not knowing why something doesn’t work. It’s a personal hang-up that I have. So, I started with some low-impact/low-effort troubleshooting: I adjusted my VM’s Audit Policy settings (via the Local Security Policy MMC snap-in) to report on all failures that might pop-up.

Unfortunately, the only thing this change actually did for me was reveal that some sort of WinHttpAutoProxySvc service issue was popping-up when SPDC\svcSPFarm wasn’t an administrator. After a few minutes of researching the service, I decided that it probably wasn’t an immediate factor in the problem I was trying to troubleshoot.

So much for finding a quick answer.

Wading Into the Muck

I knew that I needed to dig deeper, and I knew where my troubleshooting was going to take me next. Honestly, I wasn’t too excited.

I went into my SysInternals folder and dug out Process Monitor. For those of you who aren’t familiar with Process Monitor, I’ll sum it up this way: it’s the “nuclear option” when you need diagnostic information regarding what’s happening with the applications and services running on your system. Process Monitor collects file system activity, Registry reads/writes, network calls – pretty much everything that’s happening at a process level. It’s a phenomenal tool, but it generates a tremendous amount of information. And you need to wade through that information to find what you’re looking for.

I did an IISRESET, fired-up Process Monitor, and tried to bring up the Central Administration site once again. Since the SPDC\svcSPFarm account was no longer an administrator, I knew that the site would fail to come up. My hope was that Process Monitor would provide some insight into where things were getting stuck.

Over the course of the roughly 30 seconds it took the application pool to spin-up and then hand me a failure page, Process Monitor collected over 220,000 events.

Gulp.

I don’t know how you feel about it, but 220,000 events was downright intimidating to me. “Browsing” 220,000 events wasn’t going to be feasible. I’d worked with Process Monitor before, though, and I knew that the trick to making headway with the tool was in judicious use and application of its filtering capabilities.

Initially, I created filters to rule out a handful of processes that I knew wouldn’t be involved – things like Internet Explorer (iexplore.exe), Windows Explorer (Explorer.EXE), etc. Each filter that I added brought the number of events down, but I was still dealing with thousands upon thousands of events.

ProcMon Filter

After a little thinking, I got a bit smarter with my filtering. First, I knew that I was dealing with an ASP.NET application pool; that was, after all, where Central Administration ran. That meant that the activity in which I was interested was probably taking place within an IIS worker process (w3wp.exe). I set a filter to show only those events that were tied to w3wp.exe activity.

Second, I knew that my farm service account (SPDC\svcSPFarm) was at the heart of my rights and permissions issue. So, I decided to filter out any activity that wasn’t tied to this account.

Applying those two filters got me down to roughly 50,000 events. Excluding SUCCESS results dropped me to 10,000 events. Some additional tinkering and exclusions brought the number down even lower. I was still wading through a large number of results, though, and I didn’t see anything that I could put my finger on.

Next, I decided to place SPDC\svcSPFarm back into the Administrators group and do another Process Monitor capture. As expected, I captured a few hundred thousand events. I went through the process of applying filters and whittling things down as I had done the first time. Then I spent a lot of time going back and forth between the successful and unsuccessful runs looking for differences that might explain what I was seeing.

Two Bit Comedy

After doing a number of comparisons, I began to focus on a series of entries that were tagged with a result message of BAD IMPERSONATION (as seen below). I was seeing 145 of these entries (out of 220,000+ events) when the Central Administration site was failing to come up. When SPDC\svcSPFarm was part of the local Administrators group, though, I wasn’t seeing any of the entries.

BAD IMPERSONATION entries in Process Monitor

My gut told me that these BAD IMPERSONATION entries were probably a factor in my situation, so I started looking at them a bit more closely.

System.ServiceModel.Web Event

Many of the entries were seemingly non-specific attempts to access the Registry, but I did notice a handful of file and Registry accesses where an explicit impersonation attempt was being made with the current user’s account context. In the example on the right, for instance, an attempt was being made by the worker process to use my account context (SPDC\s0ladmin) for a CreateFile operation – and that attempt was failing.

This led me to formulate a (perhaps obvious) hypothesis: seeing the BAD IMPERSONATION results, I suspected that the SPDC\svcSPFarm account was missing a right such as the ability to replace a process-level token or to log on interactively. I’m certainly no expert when it comes to the specific boundaries and abilities associated with each rights assignment, but again – my gut was telling me that I should probably play around with some of the User Rights Assignments (via Local Security Policy) to see if I might get lucky.

A Fortunate Discovery

I popped open the Local Security Policy MMC snap-in on the SP2013-WFE VM once again, and I navigated down to the User Rights Assignment node. At first glance, I feared that my gut feeling was off-the-mark. Looking through the rights assignments available, I saw that SPDC\svcSPFarm had already been granted the ability to Replace a process level token and Log on as a service – presumably by the SharePoint 2013 Products Configuration Wizard.

Impersonate a client after authentication

I continued looking at the various rights assignments, though, and I discovered one that looked promising: Impersonate a client after authentication. SPDC\svcSPFarm hadn’t been granted that right in my environment, and it seemed to me that such a right might be handy in getting rid of the BAD IMPERSONATION results I was seeing with Process Monitor. I took a leap, granted SPDC\svcSPFarm the ability to Impersonate a client after authentication (as shown on the left), performed an IISRESET, and tried to reach the Central Administration site.

And I’ll be darned if it didn’t actually work.

I don’t normally get lucky like that, but hey – I wasn’t going to argue with it. I browsed around the Central Administration site for a bit to see if the site would remain responsive, and I didn’t notice anything out of the ordinary. I also performed an IISRESET and brought the Central Administration site back up with Process Monitor running just to double-check things. Sure enough, the BAD IMPERSONATION results were gone.

The Fix?

SharePoint 2013 Central Administration Site

I honestly have no idea whether this problem was specific to my environment or something that might be occurring in other SharePoint 2013 preview environments. I also don’t know if my solution is the “appropriate” solution to resolve the issue. It works for now, but I still have a lot of configuration and actual development work left to do to validate what I’ve implemented.

Since I’m trying to maintain a least-privileges install, though, I’m willing to try this out for a while instead of falling back to placing my farm service account (SPDC\svcSPFarm) in the Administrators group. Placing the account in that group is a last resort for me.

In case you were wondering: I did perform some level of verification on this change. Since the account I was running as (SPDC\s0ladmin) was itself a member of Domain Admins, I created a standard domain user account (SPDC\joe.nobody – he’s always my go-to guy in these situations) and added it to the Farm Administrators group in Central Administration. I then did an IISRESET and opened a browser to the Central Administration site from the domain controller (SP2013-DC) to see if SPDC\joe.nobody could indeed access the site. No troubles. The fact that the SPDC\joe.nobody account wasn’t a member of either Domain Admins or the local Administrators group (on SP2013-WFE) did not block the account from reaching Central Administration. No “An unexpected error has occurred” reared its head.

Implementing the Change

If you are of a similar mindset to me (i.e., you don’t like to elevate privileges unnecessarily) and find yourself unable to reach Central Administration with the same symptoms I’ve described, here is the quick run-through on how to grant your farm/timer service account the Impersonate a client after authentication right as I did:

  1. On your SharePoint Server, go to Start > Administrative Tools > Local Security Policy to open the Local Security Policy MMC snap-in.
  2. When the snap-in opens, navigate (in the left Tree view) to the Security Settings > Local Policies > User Rights Assignment node.
  3. Locate the Impersonate a client after authentication policy in the right-hand pane.
  4. Right-click the policy and select the Properties item that appears in the pop-up menu.
  5. A dialog box will appear. Click the Add User or Group … button on the dialog box.
  6. In the Select Users, Computers, Service Accounts, or Groups dialog box that appears, add your farm service/timer service account.
  7. Click the OK button on each of the two open dialog boxes to exit out of them.
  8. Close the Local Security Policy MMC snap-in.
  9. Perform an IISRESET and verify that the Central Administration site actually comes up instead of “An unexpected error has occurred.”
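If you would rather script the change than click through the snap-in, the right in question maps to the SeImpersonatePrivilege constant. There is no built-in PowerShell cmdlet for user rights assignments, so the rough sketch below leans on secedit; the file paths are placeholders, and editing the exported .inf is a manual step.

[sourcecode language="powershell"]
# Rough sketch: export the local user rights assignments, add the farm account to the
# SeImpersonatePrivilege line ("Impersonate a client after authentication"), and re-import.
secedit /export /cfg C:\Temp\UserRights.inf /areas USER_RIGHTS

# Manually edit C:\Temp\UserRights.inf and append the farm account to the end of the
# existing SeImpersonatePrivilege line, e.g.:
#   SeImpersonatePrivilege = <existing entries>,SPDC\svcSPFarm
# ... then apply the modified template:
secedit /configure /db C:\Windows\security\database\secedit.sdb /cfg C:\Temp\UserRights.inf /areas USER_RIGHTS
[/sourcecode]

An IISRESET is still needed afterwards (step 9 above) before the change takes effect for the Central Administration application pool.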

Conclusion

If the change that I described in this post and implemented in my environment causes problems or requires further adjustment, I’ll update this post. My goal certainly isn’t to mislead – only to share and hopefully help those who may find themselves in the same situation as me.

If you’ve seen this problem in your SharePoint 2013 preview environment, please let me know. I’d love to hear about it, as well as how you worked through (or around) it!

UPDATE (9/4/2012)

I ran into the same issue with the account that was being used to serve up non-Central Admin site collections; i.e., the account that I was using as the identity for the application pools servicing the web applications I created. In my environment, this was SPDC\svcSpContentWebs as seen below (for the SharePoint – 80 application pool):

IIS Application Pools

Attempts to bring up a site collection without the Impersonate a client after authentication privilege being assigned to the SPDC\svcSpContentWebs account would usually yield nothing more than a blank screen. As with the farm service account, there was very little to troubleshoot until I went in with Process Monitor to look for a bunch of BAD IMPERSONATION results:

ProcMon for svcSpContentWebs

At this point, I’m willing to bet that any other accounts that are assigned as application pool identities will need to be granted the Impersonate a client after authentication privilege, as well.

In addition to the Impersonate a client after authentication privilege, I also ended up having to grant the SPDC\svcSpContentWebs account the Log on as a batch job privilege from within the Local Security Policy MMC snap-in. Without the privilege to Log on as a batch job, I was receiving an HTTP 503 error every time I tried to bring up a site collection. Troubleshooting this problem wasn’t as difficult, though; the System event log contained a WAS (Windows Process Activation Service) Event 5021 warning with the following description:

The identity of application pool SharePoint – 80 is invalid. The user name or password that is specified for the identity may be incorrect, or the user may not have batch logon rights. If the identity is not corrected, the application pool will be disabled when the application pool receives its first request.  If batch logon rights are causing the problem, the identity in the IIS configuration store must be changed after rights have been granted before Windows Process Activation Service (WAS) can retry the logon. If the identity remains invalid after the first request for the application pool is processed, the application pool will be disabled. The data field contains the error number.

In my case, my account credentials were correct, but for some reason the Log on as a batch job right hadn’t been assigned to the SPDC\svcSpContentWebs account. Each time the application pool tried to spin up, it failed and was stopped; I’d then get two warnings from WAS (5021 and 5057) in my System event log, and that would be followed by a WAS 5059 error.
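If you would rather not scroll through the Event Viewer to find those entries, a quick bit of PowerShell can pull them out of the System log. This is just a sketch; the provider-name filter is deliberately loose in case WAS events are registered under a slightly different provider name on your build.

[sourcecode language="powershell"]
# Sketch: surface recent WAS 5021/5057/5059 entries from the System event log.
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 5021, 5057, 5059 } -ErrorAction SilentlyContinue |
    Where-Object { $_.ProviderName -like '*WAS*' } |
    Select-Object -First 10 TimeCreated, Id, LevelDisplayName, Message |
    Format-List
[/sourcecode]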

References and Resources

  1. TechNet: Download Microsoft SharePoint 2013 Preview
  2. TechNet: Plan Office Web Apps Server Preview
  3. Blog: Eric Harlan
  4. Blog: Todd Klindt
  5. TechNet: Windows Sysinternals Process Monitor

Is a Higher SharePoint Backup Thread Count Better?

Many administrators have noted that SharePoint 2010 allows them to tune the number of threads that can be used for farm backup and restore operations, but very few have played with the settings. In this post, I share some results I compiled while testing the settings in my own environments. I also share the PowerShell script I assembled for my testing so you can tune the backup and restore thread settings in your own SharePoint farm.

Balls of purple, orange and grey yarn or wool

Scalability in the hardware and software space is all about parallel computing nowadays. Consider our modern hardware: it used to be that all we really cared about was how fast our CPU could run (“how many GHz?”). Now, we care more about how many cores our CPU has, whether or not those cores support Hyper-threading, how many memory channels our CPU has available to it, etc. Scale-out beats scale-up.

The same is largely true in the software space. Most IT folks learned some time ago that “multithreading” and “higher performance” tended to go hand-in-hand or were at least associated in some way. Multiple threads of execution meant better scheduling of limited processor resources and fewer chances that one long-running operation would bottleneck an entire application.

Configuring SharePoint 2010 Farm Backup and Restore

When I first saw the following section in the “Configure Backup Settings” section of SharePoint 2010’s Central Administration site, it brought a big grin to my face:

Thread Configuration

In SharePoint 2007 and earlier, administrators had no real levers to pull to try and tune the performance of farm backup and restore operations. This obviously changed with SharePoint 2010. We were basically being handed a way to adjust those processes as we saw fit – for better or worse.

Strangely enough, though, I never really took the time to explore the impact of those settings in my SharePoint environments. I always left the number of assigned threads for backup and restore operations at three. I would have liked to mess around with the values, but something else was always more important in the grand scheme of things.

Why Now?

I’ve been working on a new “backup tips and tricks” whitepaper, and I found myself looking for backup and restore concerns within the SharePoint platform that I may not have given much attention to in the past. It didn’t take much wading through Central Administration before I once again found myself looking at thread counts for backup and restore operations.

Doing a little bit of Internet (background) research confirmed what I had suspected: no one else had really spent any time on the topic either. In fact, the only “fresh” and non-copyright-infringing material I found came from a Microsoft TechNet post titled Backup and recovery best practices (SharePoint Server 2010) … and to tell you the truth, the following paragraph from the section titled “Configure SharePoint settings for better backup or restore performance” really bugged me:

If you are using the Backup-SPFarm cmdlet, you can use the BackupThreads parameter to specify how many threads SharePoint Server 2010 will use during the backup process. The more threads you specify, the more resources that backup operation will take, but the faster that it will finish, if sufficient resources are available. However, each thread is reported individually in the log files, so using fewer threads makes interpreting the log files easier. By default, three threads are used. The maximum number of threads available is 10.

Without an understanding of how multithreading (in general) and SharePoint backup (specifically) work, this could easily be interpreted as follows:

The greater the number of threads you assign, the faster your backups will complete.

I realize that my summary is an oversimplification, but I believe that many administrators see the TechNet paragraph as I summarized it. And that concerns me.

I’ve always told people that increasing the backup thread count could yield better performance, but any adjustments would need to be tested in the target farm where they are to be implemented. Realistically speaking, there are several participants and a lot of moving parts in any SharePoint farm backup. Besides the SharePoint server where the backup operation is being coordinated, there is the performance of one or more SQL Servers to consider. The capabilities and restrictions of the backup destination location (typically a UNC file share) also need to be factored-in since that destination is being written to by both the SharePoint Server and one or more SQL Servers.

Setting the number of backup threads to 10 on a SharePoint Server of infinite capability and resources doesn’t guarantee a fast backup, because the farm might have a slow SQL Server, a less-capable backup destination location, a slow or congested network, or a host of other complicating factors.
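For reference, this is the cmdlet call that the TechNet paragraph above is describing – a single full farm backup with an explicit thread count. The destination path is a placeholder.

[sourcecode language="powershell"]
# One-off full farm backup using five backup threads (placeholder destination path).
Backup-SPFarm -BackupMethod Full -Directory \\FileShare\SharePointBackups -BackupThreads 5
[/sourcecode]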

Oh Yeah? Prove It.

Of course, all of this is just a bunch of hand-waving without proof. So, the scientist in me (yeah, I actually used to be a chemist) decided to take over and devise a series of simple tests to see if there is any real weight to the arguments I’ve been making.

I began with the hypothesis that the easiest and most visible way to gauge the performance of a farm backup operation is to measure how long a backup takes to run; e.g., a farm backup that takes 10 minutes to run is faster than a backup that takes 20 minutes to run if farm content, hardware, configuration, and other factors remain constant. Since SharePoint 2010 provides the ability to specify anywhere from one to 10 backup threads, running a series of backups where the only variable is backup thread count should determine if greater or fewer backup threads yield better performance.

You might recall that I also mentioned that farm topology is a factor in the overall backup equation. As part of my experiment, I decided to run the tests on two different farms I have available to me. General descriptions for each farm:

  • Single-Server Farm: my single server farm environment is a VM running on my laptop. The VM houses SharePoint, SQL Server, and the backup location being targeted. The laptop hardware is a Core-i7 quad-core processor, and the underlying storage for the VM is a solid-state drive (SSD). Hardware bottlenecks should be minimized, and network latency isn’t a factor since backup operations are conducted against a local drive within the VM.
  • Multi-Server Farm: my multi-server environment is the “production” environment on my home network. It consists of a SharePoint Server VM running on a Hyper-V host that also hosts other VMs. The SQL Server instance backing the farm is a non-virtualized SQL Server housing all of the SharePoint databases as well as a few databases for other applications. The backup destination location is a virtualized file server with a pass-through drive array (eSATA with RAID-5). Overall hardware, in this case, is “okay” but obviously not dedicated purely to SharePoint. In addition, network latency and bandwidth (GbE) are also in-play as potential sources of impact.

These two environments have pretty different overall topologies, and it was my hope that I’d see some effect on the performance numbers as a result.

The Script

To run the tests reproducibly, I needed a PowerShell script. So, I put the following script together while I had a bit of free time one night. Feel free to pluck this out to use for testing in your SharePoint environment, as well.

[sourcecode language="powershell"]
<#
.SYNOPSIS
TestBackupThreads.ps1
.DESCRIPTION
This script is used to conduct and time a series of backups using different thread counts.
The output can then be used to make an educated decision on the number of backup threads to
assign for use in farm-level backups.
.NOTES
Author: Sean McDonough
Last Revision: 25-July-2012
.PARAMETER TestLocation
A UNC path to a location that can be used to create test backup sets
.EXAMPLE
TestBackupThreads \\FileShare\TestLocation
#>
param
(
[string]$TestLocation = "$(Read-Host 'UNC path to test backup location [e.g. \\FileShare\TestLocation]')"
)

function TestThreads($backupLocation)
{
# Ensure that the SharePoint cmdlets are loaded before continuing
$spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
if ($spCmdlets -eq $Null)
{ Add-PSSnapin Microsoft.SharePoint.PowerShell }

# Setup some variables we’ll need for execution.
$threadTimes = @{} # Hash table to hold timing results
$backupItems = Join-Path $backupLocation "spbr*" # Used to delete temp backup files

# We need to execute a full farm backup for each thread count 1 through 10
Clear-Host
Write-Host "`nBackup thread count testing process beginning."
for ($threads = 1; $threads -lt 11; $threads++)
{
# Clean out any backup contents from the test location
Remove-Item $backupItems -recurse

# Grab the starting date/time (for later comparison), kick-off a farm backup, and then
# grab the stop date/time.
Write-Host "`nInitiating a backup with $threads thread(s) …"
$startPoint = Get-Date
Backup-SPFarm -BackupMethod Full -Directory $backupLocation -BackupThreads $threads
$stopPoint = Get-Date

# Store and report results
$keyName = "Backup with {0} thread(s)" -f $threads
$elapsedSeconds = "{0:N0}" -f ($stopPoint - $startPoint).TotalSeconds
$threadTimes[$keyName] = $elapsedSeconds
Write-Host "Backup with $threads thread(s) complete"
Write-Host ("- time to complete (in seconds): {0}" -f $elapsedSeconds)
}

# Do a final sweep of the test backup location to clean out backup items
Remove-Item $backupItems -recurse

# Dump the results sorted in order of quickest to longest
Write-Host "`nBackup thread count testing process complete."
$threadTimes.GetEnumerator() | Sort-Object Value

# Abort script processing in the event an exception occurs.
trap
{
Write-Warning "`n*** Script execution aborting. See below for problem encountered during execution. ***"
$_.Message
break
}
}

# Launch script
TestThreads $TestLocation
[/sourcecode]

The script is fairly straightforward in what it does. You supply a TestLocation parameter to specify where farm backup test data should be written to, and the script will run a series of full farm backups using the supplied location as the backup destination. The script starts with a full backup using one backup thread; at the end of each full farm backup, the script notes how long the backup took (in seconds) and cleans-up the contents of the TestLocation folder. The number of backup threads is then incremented, and the next test is run. When the script has completed running all backup tests, it sorts the results from “quickest backup” (i.e., the backup thread count requiring the least amount of time) to the slowest backup.

Test Results

I ran a series of three tests for each of the aforementioned environments, for a total of six test runs. Although there’s still quite a bit of variability between individual results within a backup thread series, some trends did appear to emerge.

Single-Server Farm

Backup Times for the Single-Server Environment

With the single-server environment, increasing the number of backup threads did appear to have a directional impact on performance. A single backup thread proved to be the slowest option for the farm backup, and “greater than one” thread resulted in better performance.

If you look at the average values, though, there wasn’t a tremendous difference between the slowest thread count (410 seconds for one thread) and the fastest (388 seconds for 10 threads). We’re only talking about a 5% to 6% difference overall. To truly find the optimum number of backup threads in an environment like this would require more than three test runs to account for standard deviation and establish significance.

Oh, and for those that might be wondering: I’m sure I introduced some of my own variability into the results. Although I didn’t do anything processor or disk intensive during the test runs, I didn’t go out of my way to minimize the impact of services, background operations, etc. To repeat: more testing (with better controls) would be needed for truly conclusive results. The only thing I started to show with this particular set of tests is that multithreading seemed to improve backup performance.

Multi-Server Farm

Things got quite a bit more interesting (to me) when I switched over to multi-server farm testing.

Backup Times for the Multi-Server Environment

In the multi-server environment, the average for using just one backup thread (1413 seconds) appeared to be significantly faster than the next best option (1747 seconds for seven backup threads) – in the neighborhood of 20% or so faster. Just like the single-server results, additional trials would be needed to completely validate the observations, but the results are less ambiguous (given the relatively greater precision of the samples) than with the single-server runs.

Do you find this surprising? Given my multi-server environment and what I know about it, I can’t really say that I was caught flat-footed by the results. Going into the tests, my hypothesis was that my backup destination location would likely be the “weak link” in my overall farm and backup topology. The SharePoint Server was doing well, the SQL Server was relatively robust … but all of that backup activity was hard on my (virtualized) file server. Multiple servers trying to write to the backup location were swamping it and the network, and adding additional backup threads to the mix didn’t end up helping or improving the overall backup process.

The Take-Away

At the end of the day, I recognize that these tests of mine didn’t prove anything conclusively. Frankly, conclusive proof wasn’t my goal. The intent of these experiments wasn’t to say “more threads are better” or “more threads are worse.”

The only point I’m making (I hope) by sharing these results is this: until you run some real tests of your own in your SharePoint environment, you really don’t know where your backup thread sweet spot is. You can try to guess it, but it’s just a guess. And guessing is really no better than simply leaving the backup thread count set to its default value of three.

References and Resources

  1. Wikipedia: Parallel Computing
  2. Wikipedia: Hyper-threading
  3. Wikipedia: Thread (computing) and Multithreading
  4. TechNet: Backup and recovery best practices (SharePoint Server 2010)

Finding a GUID in a SharePoint Haystack

In this post, I share a PowerShell script that I recently wrote to help out a friend. The script allows you to search and identify content items in SharePoint site collections by object ID (GUID). The script isn’t something you’d probably use every day, but it might be handy to keep in the script library “just in case.”

Haystack

Here’s another blog post to file in the “I’ll probably never need it, but you never know” bucket of things you’ve seen from me.

Admittedly, I don’t get to spend as much time as I’d like playing with PowerShell and assembling scripts. So when the opportunity to whip-up a “quick hit” script comes along, I usually jump at it.

The Situation

I feel very fortunate to have made so many friends in the SharePoint community over the last several years. One friend who has been with me since the beginning (i.e., since my first presentation at the original SharePoint Saturday Ozarks) is Kirk Talbot. Kirk has become something of a “regular” on the SharePoint Saturday circuit, and many of you may have seen him at a SharePoint Saturday event someplace in the continental United States. To tell you the truth, I’ve seen Kirk as far north as Michigan and as far south as New Orleans. Yes, he really gets around.

Kirk and I keep up fairly regular correspondence, and he recently found himself in a situation where he needed to determine which objects (in a SharePoint site collection) were associated with a handful of GUIDs. Put a different way: Kirk had a GUID (for example, 89b66b71-afc8-463f-b5ed-9770168996a6) and wanted to know – was it a web? A list? A list item? And what was the identity of the item?

PowerShell to the Rescue

I pointed Kirk to a script I had previously written (in my Finding Duplicate GUIDs in Your SharePoint Site Collection post) and indicated that it could probably be adapted for his purpose. Kirk was up to the challenge, but like so many other SharePoint administrators was short on time.

I happened to find myself with a bit of free time in the last week and was due to run into Kirk at SharePoint Saturday Louisville last weekend, so I figured “what the heck?” I took a crack at modifying the script I had written earlier so that it might address Kirk’s need. By the time I was done, I had basically thrown out my original script and started over. So much for following my own advice.

The Script

The PowerShell script that follows is relatively straightforward in its operation. You supply it with a site collection URL and a target object GUID. The script then searches through the webs, lists/libraries, and list items of the site collection for an object with an ID that matches the GUID specified. If it finds a match, it reports some information about the matching object.

A sample run of the script appears below. In the case of this example, a list item match was found in the target site collection for the supplied GUID.

Sample Script Execution

 

This script leverages the SharePoint object model directly, so it can be used with either SharePoint 2007 or SharePoint 2010. Its search algorithm is relatively efficient, as well, so match results should be obtained in seconds to maybe minutes – not hours.

[sourcecode language="powershell"]
<#
.SYNOPSIS
FindObjectByGuid.ps1
.DESCRIPTION
This script attempts to locate a SharePoint object by its unique ID (GUID) within
a site collection. The script first attempts to locate a match by examining webs;
following webs, lists/libraries are examined. Finally, individual items within
lists and libraries are examined. If an object with the ID is found, information
about the object is reported back.
.NOTES
Author: Sean McDonough
Last Revision: 27-July-2012
.PARAMETER SiteUrl
The URL of the site collection that will be searched
.PARAMETER ObjectGuid
The GUID that identifies the object to be located
.EXAMPLE
FindObjectByGuid -SiteUrl http://mysitecollection.com -ObjectGuid 91ce5bbf-eebb-4988-9964-79905576969c
#>
param
(
[string]$SiteUrl = "$(Read-Host 'The URL of the site collection to search [e.g. http://mysitecollection.com]')",
[Guid]$ObjectGuid = "$(Read-Host 'The GUID of the object you are trying to find [e.g. 91ce5bbf-eebb-4988-9964-79905576969c]')"
)

function FindObject($startingUrl, $targetGuid)
{
# To work with SP2007, we need to go directly against the object model
Add-Type -AssemblyName "Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"

# Grab the site collection and all webs associated with it to start
$targetSite = New-Object Microsoft.SharePoint.SPSite($startingUrl)
$matchObject = $false
$itemsTotal = 0
$listsTotal = 0
$searchStart = Get-Date

Clear-Host
Write-Host ("INITIATING SEARCH FOR GUID: {0}" -f $targetGuid)

# Step 1: see if we can find a matching web.
$allWebs = $targetSite.AllWebs
Write-Host ("`nPhase 1: Examining all webs ({0} total)" -f $allWebs.Count)
foreach ($spWeb in $allWebs)
{
$listsTotal += $spWeb.Lists.Count
if ($spWeb.ID -eq $targetGuid)
{
Write-Host "`nMATCH FOUND: Web"
Write-Host ("- Web Title: {0}" -f $spWeb.Title)
Write-Host ("- Web URL: {0}" -f $spWeb.Url)
$matchObject = $true
break
}
$spWeb.Dispose()
}

# If we don’t yet have a match, we’ll continue with list iteration
if ($matchObject -eq $false)
{
Write-Host ("Phase 2: Examining all lists and libraries ({0} total)" -f $listsTotal)
$allWebs = $targetSite.AllWebs
foreach ($spWeb in $allWebs)
{
$allLists = $spWeb.Lists
foreach ($spList in $allLists)
{
$itemsTotal += $spList.Items.Count
if ($spList.ID -eq $targetGuid)
{
Write-Host "`nMATCH FOUND: List/Library"
Write-Host ("- List Title: {0}" -f $spList.Title)
Write-Host ("- List Default View URL: {0}" -f $spList.DefaultViewUrl)
Write-Host ("- Parent Web Title: {0}" -f $spWeb.Title)
Write-Host ("- Parent Web URL: {0}" -f $spWeb.Url)
$matchObject = $true
break
}
}
if ($matchObject -eq $true)
{
break
}

}
$spWeb.Dispose()
}

# No match yet? Look at list items (which includes folders)
if ($matchObject -eq $false)
{
Write-Host ("Phase 3: Examining all list and library items ({0} total)" -f $itemsTotal)
$allWebs = $targetSite.AllWebs
foreach ($spWeb in $allWebs)
{
$allLists = $spWeb.Lists
foreach ($spList in $allLists)
{
try
{
$listItem = $spList.GetItemByUniqueId($targetGuid)
}
catch
{
$listItem = $null
}
if ($listItem -ne $null)
{
Write-Host "`nMATCH FOUND: List/Library Item"
Write-Host ("- Item Name: {0}" -f $listItem.Name)
Write-Host ("- Item Type: {0}" -f $listItem.FileSystemObjectType)
Write-Host ("- Site-Relative Item URL: {0}" -f $listItem.Url)
Write-Host ("- Parent List Title: {0}" -f $spList.Title)
Write-Host ("- Parent List Default View URL: {0}" -f $spList.DefaultViewUrl)
Write-Host ("- Parent Web Title: {0}" -f $spWeb.Title)
Write-Host ("- Parent Web URL: {0}" -f $spWeb.Url)
$matchObject = $true
break
}
}
if ($matchObject -eq $true)
{
break
}

}
$spWeb.Dispose()
}

# No match yet? Too bad; we’re done.
if ($matchObject -eq $false)
{
Write-Host ("`nNO MATCH FOUND FOR GUID: {0}" -f $targetGuid)
}

# Dispose of the site collection
$targetSite.Dispose()
Write-Host ("`nTotal seconds to execute search: {0}`n" -f ((Get-Date) - $searchStart).TotalSeconds)

# Abort script processing in the event an exception occurs.
trap
{
Write-Warning "`n*** Script execution aborting. See below for problem encountered during execution. ***"
$_.Message
break
}
}

# Launch script
FindObject $SiteUrl $ObjectGuid
[/sourcecode]
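Assuming you save the script as FindObjectByGuid.ps1, kicking it off from a SharePoint server looks something like the line below; the URL and GUID are simply the example values used earlier in this post.

[sourcecode language="powershell"]
# Sample invocation using the example site collection URL and GUID from this post.
.\FindObjectByGuid.ps1 -SiteUrl "http://mysitecollection.com" -ObjectGuid "89b66b71-afc8-463f-b5ed-9770168996a6"
[/sourcecode]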

Conclusion

Again, I don’t envision this being something that everyone needs, but I want to share it anyway. One thing I learned with the “duplicate GUID” script I referenced earlier is that I generally underestimate the number of people who might find something like this useful.

Have fun with it, and please feel free to share your feedback!

Additional Reading and Resources

  1. Event: SharePoint Saturday Ozarks
  2. Twitter: Kirk Talbot (@kctElgin)
  3. Post: Finding Duplicate GUIDs in Your SharePoint Site Collection
  4. Event: SharePoint Saturday Louisville

Do You Know What’s Going to Happen When You Enable the SharePoint BLOB Cache?

The SharePoint BLOB Cache can be a very powerful tool for use in improving farm performance and scalability, but some planning should take place before the BLOB Cache is enabled. In this post, I explain how end users can suffer if BLOB Cache planning isn’t performed. I also make some recommendations on how to configure the BLOB Cache to provide administrators with performance benefits that don’t come at the cost of a negative end user experience.

The topic of the SharePoint BLOB Cache and how it operates jumped back into the front of my brain recently given some conversations I’ve had and things I’ve seen (e.g., a promising CodePlex project called the SharePoint 2010 BlobCache Manager).

SharePoint PSA

"Just Do It" Post-It NoteThis post is my way of doing something akin to a SharePoint public service announcement. I’ve recently seen some caching-related functionality and topics – especially the BLOB Cache – getting some real traction in different circles, and I think that the attention and love is generally a good thing. I am somewhat concerned, though, by the fact that the discussions and projects that have been surfacing don’t seem to say much beyond the Post-It on the right.

What do I mean by “Just do it?” Well, here’s the high-level summary of what I’ve been seeing people say, post, and practice with the SharePoint BLOB Cache:

  • The SharePoint BLOB Cache can lighten the load on your SQL Servers by caching BLOB (binary large object) data such as images, video, audio, CSS, etc., on your web front-ends (WFEs)
  • BLOB assets are then served directly from the WFEs. This prevents regular round trips from the WFEs to SQL Servers for every BLOB item needed, and this conserves network bandwidth and reduces SQL Server load.
  • To realize the benefits of the BLOB Cache, simply turn it on and you’re good to go. Nothing to it!

To be fair, I think that I’ve done a disservice by contributing to the perception that all you need to do to kick-start BLOB caching is change this web.config line …

[sourcecode language="xml"]
<BlobCache location="C:\BlobCache\14" path="\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$" maxSize="10" enabled="false" />
[/sourcecode]

… to this:

[sourcecode language="xml"]
<BlobCache location="C:\BlobCache\14" path="\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$" maxSize="10" enabled="true" />
[/sourcecode]

If you look closely, you’ll see that the only difference between the two XML elements is that the enabled attribute is changed from false to true in the second example.

As you might have guessed, I wouldn’t be writing this blog post if simply changing the BlobCache element’s enabled attribute to true didn’t cause potential problems.

The Small Print

Disclaimer text that includes some BLOB cache usage warnings

At the recent SPTechCon in San Francisco, I gave a five-minute lightning talk called Pushing SharePoint’s ‘Go Faster’ Button. It was a lighthearted look at SharePoint performance, and it focused on a couple of caching changes that could be easily implemented to improve SharePoint performance. One of the recommended changes was (surprise surprise) to simply “turn on” SharePoint’s BLOB Cache.

I only had five minutes to deliver the lightning talk, so I had to cram all of the disclaimers for what I was recommending into the legal style slide that appears on the left. Although the slide got a chuckle from the crowd (the print did look pretty small on-screen), I actually did invest some time in its warnings and watch-outs for anyone who wanted to go and dig them up later.

Of the two tips I delivered in the lightning talk, Tip #2 dealt with the SharePoint BLOB cache. I included a very specific warning in the “Disclaimer of Liability” aimed at those who sought to simply “set it and forget it.” The text of that warning read:

Failure to specify a max-age attribute in the BlobCache element of the web.config will result in the default value of 86,400 seconds (24 hours) being used. Use of a non-zero max-age attribute will result in the attachment of client-side cacheability headers to assets that are being BLOB cached, and such headers can result in BLOB assets being cached on the client beyond the duration of the current user session; such caching can easily result in "stale" BLOB resources being used from the client rather than newer ones being fetched from the WFE, so adjust max-age values carefully.

Put another way: if you simply enable the BLOB cache and do nothing else, your users may be getting a SharePoint behavior change that you hadn’t intended for them to have.

Why Did You Have To Bring Age Into This?

The sticking point with SharePoint’s default BlobCache element and attribute settings is that a max-age of 24 hours is assumed and used when the max-age attribute isn’t explicitly specified or set. What does that mean? I wrote a separate post a while back titled Client-Server Interactions and the max-age Attribute with SharePoint BLOB Caching, and that post addressed the effect that explicit and implicit max-age attribute value specifications have on BLOB Caching. I recommend checking out the post for the full background; for anyone who needs a quick summary, though, I can distill it down to two bullet points:

  • Enabling the BLOB Cache without specifying a max-age attribute means that BLOBs will be cached on both the WFEs in your farm and within users’ browser caches (through the use of Cache-Control HTTP headers).
  • In collaboration environments and anyplace else where BLOB assets may be edited or turn over frequently (within the course of a day), the default client-side caching behavior can mess with the UI/UX of your SharePoint site in all sorts of interesting ways.

What does this mean for the average user of SharePoint? Well, let me walk through a fictitious scenario with supporting detail – as told from the perspective of a SharePoint end user. If you already understand the problem, you’re short on time, and you want to get right to what I recommend, jump down to the “Recommendations Before You Enable the BLOB Cache” section.

Acme Online Goes Live!

Welcome to the Acme Corporation! The Acme Corporation recently completed a “webification” of its entire product catalog, and the end result is a publishing site collection that is implemented in SharePoint 2010. The site collection houses all of Acme’s products, and those products are available for the public to browse and order. Acme’s web content management team is responsible for maintaining the product catalog as it appears on the site, and that team is led by a crafty old fellow named Wile E. Coyote (who we’ll simply refer to as “Wiley” from here on out).

Wiley has many years of experience with Acme’s products and has tried nearly all of them personally; he’s something of a legend. He and his team worked diligently to get Acme’s products into SharePoint before the launch. Not all of the products made it into SharePoint before the launch, though, so a phased approach was taken to rolling out the entire catalog.

The Launch

A SharePoint article page featuring a bundle of dynamite

The first products that Wiley and his team worked to get into SharePoint were Acme’s line of explosives. To prepare for the launch of the new online catalog, Wiley wrote up an article on Acme’s top-selling “Bundle o’ Dynamite” product. The article featured a picture of the Bundle o’ Dynamite, along with some descriptive text about the product, how it operates, a few safety warnings, and a couple of other informational points. When Wiley finished, a mockup of the article page looked like the screenshot seen on the left.

A Fiddler trace of the first request for the dynamite article page

Unbeknownst to Wiley, the Acme product catalog site collection is served-up by one Web application through one zone (the Default zone) on one WFE. This means that all product catalog requests, whether they come from customers or Wiley’s team, go to one IIS site on one server. The first time that someone (or more specifically, someone’s browser) requests the article page that Wiley put together, a series of web requests are kicked-off to pull down the page content, images, scripts, CSS, and everything else needed to render the page in a browser. This series of interactions (captured using Fiddler) is shown on the top right.

A Fiddler trace of the second request for the dynamite article page

Subsequent requests for the same article page (within the context of a single browser session) will follow the series of interactions seen directly to the right. One thing that you may notice upon inspecting the Fiddler trace is that subsequent page requests result in fewer calls back to the server. This is because SharePoint applies per session caching to many of the items it passes back to the browser, and this caching (which is not the same as BLOB caching) removes the need for constant re-fetching of items that haven’t changed.

In both of the Fiddler traces above, the focus is on the newsarticleimage.jpg file  – the file which houses a picture of the Bundle o’ Dynamite. The first time the browser requests the image within a session, a successful HTTP 200 response is returned to the browser along with the image. Also important to note is the Cache-Control header that comes back with the image:

[sourcecode language="text"]
Cache-Control: private,max-age=0
[/sourcecode]

The private part of the Cache-Control header tells the client browser to cache the image locally for the duration of the browser session. The max-age=0 portion says, in effect, that subsequent uses of the image by the browser (from its cache) should be validated with a call back to the WFE to ensure that the image hasn’t changed.

And that’s what is shown happening in the second Fiddler trace. When subsequent page requests attempt to use the image, a GET request from the browser is answered by the WFE with

[sourcecode language="text"]
HTTP/1.1 304 NOT MODIFIED
[/sourcecode]

This response code tells the browser that the image hasn’t changed and that it’s safe to use the locally cached copy. If the image were to change, then an HTTP 200 would be returned instead and the new/updated version of the image would be sent to the browser.

When the browser is closed, the locally cached copy of the image is flushed and the process begins anew the next time the browser opens.
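Incidentally, you don’t need Fiddler just to check which Cache-Control header your own farm hands back for a given asset. A quick sketch from PowerShell 3.0 or later (the URL below is made up) does the trick:

[sourcecode language="powershell"]
# Sketch: inspect the Cache-Control header returned for a publishing image (placeholder URL).
$response = Invoke-WebRequest -Uri "http://acme/PublishingImages/newsarticleimage.jpg" -UseDefaultCredentials
$response.StatusCode
$response.Headers["Cache-Control"]
[/sourcecode]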

Meep Meep

Not long after the launch of Acme’s online product catalog, customers began complaining that browsing the catalog was simply too slow. After some discussion, Management decided to bring in Roadrunner Consulting to assess the site and make suggestions that would improve performance.

Roadrunner’s team raced around (as they are wont to do), ran some tests, made some observations, and provided a list of suggestions. At the top of the list was “Implement SharePoint BLOB Caching.”

So, Acme’s SharePoint administrators jumped right in and turned on BLOB caching. Since the site is served up through a single IIS site (SharePoint zone), the admins set enabled=“true” in the BlobCache element of the site’s web.config file. No other changes were made to the BlobCache element.

So, what happened? Well, things got snappier! The administrators watching their back-end performance noticed that the file system on the WFE started to cache BLOBs that were being requested by users. Each request to the WFE for one of those BLOBs resulted in the BLOB being served back directly from the WFE without a round-trip to the SQL Server. Internal network bandwidth utilization dropped significantly, and the SQL Servers started breathing a bit easier. The administrators were most definitely happy with the change they’d made … and it was as easy as setting enabled=”true” in the BlobCache element of the web.config file. Talk about the greatest thing since sliced bread! Everyone exchanged a round of high-fives after the change was made, and talks of how the geeks would rise up to dominate the world resumed.

Dynamite Article Page - First Request with BLOB Caching enabled

So, how do things look on the client side after enabling the BLOB Cache? Well, when someone goes to retrieve Wiley’s article for the first time, the first browser request series for the page looks much like it did without the BLOB Cache enabled. See the Fiddler trace on the right.

There is one very important difference when retrieving items with the BLOB Cache enabled, though, and you have to look closely to see it. Do you see the Cache-Control HTTP header that is returned with the request for the newsarticleimage.jpg image? It’s different than it was before the BLOB Cache was enabled. Now it says

[sourcecode language="text"]
Cache-Control: public, max-age=86400
[/sourcecode]

Whoa … what does this mean? Well, it means two important things. First, the public designation means that when the image is cached by the browser, it will no longer be private to the current session. It can be re-used across sessions, so it won’t necessarily “go away” when the browser is closed.

Second, the max-age=86400 means that the image will continue to “live” in the browser’s cache for 86400 seconds, or 24 hours. For that period of time, the browser won’t even attempt to contact the WFE to see if the image has changed; it will just use the copy that it holds onto. Nothing short of a browser cache flush (which is manual intervention by the user) will change this behavior.

Dynamite Article Page - Subsequent page requests with BLOB Caching enabled

And that’s what we see with the Fiddler trace on the right. This trace represents what subsequent page requests look like for the next 24 hours. Notice that the newsarticleimage.jpg image doesn’t get re-requested or checked. There are no HTTP 304 response codes coming back, because the browser simply isn’t requesting the image; it’s using its cached copy.

Admittedly, the Fiddler trace will look a little different when the browser is closed and re-opened … but a re-fetch of the newsarticleimage.jpg file will not take place for a full 24 hours unless a user clears the browser cache.

What does this change in behavior mean for actual users of the site? Read on to find out …

Running Off the Edge of the Cliff

The corrected article page showing the TNT barrel

Shortly after the BLOB Cache changes were made, Wiley got an (unrelated) call from the Fulfillment Department. They were furious because they’d been getting all sorts of returns for the Bundle o’ Dynamite. The reason for the returns? It’s because Wiley put the wrong image in his article page!

Even though Acme sells a product called the “Bundle o’ Dynamite,” the actual product that ships is a barrel of TNT. Since the product image was wrong, customers were incorrectly concluding that they’d get several sticks of dynamite instead of a barrel, and this was rubbing many of them the wrong way. Who knew?

Wiley went out to SharePoint, checked the article that he wrote, and saw that he did indeed use a series of dynamite sticks for an image. The page should have actually appeared as it does in the screenshot that is above and to the left. After a quick facepalm, Wiley realized that he needed to make a change – and fast.

Wiley went out to the Publishing Images library for the site collection and uploaded a new version of the newsarticleimage.jpg image file – one that contained a barrel of TNT instead of a bundle of dynamite. He then browsed to the article page and did a refresh.

Nothing changed.

Wiley hit F5 in his browser. Still nothing changed.

Over the course of the hour that followed, Wiley grew increasingly bewildered and panicked as he tried in vain to get the new TNT barrel to show up on the article page. He uploaded the image several more times, closed and re-opened his browser, deleted and then reloaded the image, re-published and re-approved the actual article page, and even got the administrators to flush the SharePoint BLOB Cache. None of the actions made a difference.

The Coyote Never Wins

Why didn’t any of Wiley’s efforts make a difference? Because what Wiley didn’t understand was that there was nothing he could do short of flushing his cache that would prompt the browser to re-request the updated image. The browser started using the cached copy of the image after the first request Wiley made in the morning; i.e., the request to verify that the image on the page was incorrect as Fulfillment indicated. For another 24 hours (86400 seconds), the browser would continue to use the cached image.

Wiley’s image problem was just one of the potential issues that might surface as a result of the BLOB Cache change. It was also one of the more visible problems. In looking at the path attribute of the BlobCache element, you might have noticed some of the other file types that get cached by default – file types with js (JavaScript) and css (Cascading Style Sheets) extensions, for example. Any files of those types that are served from site collection lists and libraries would also be subject to the same “fetch once and use for 24 hours” behavior.

Recommendations Before You Enable the BLOB Cache

I hope the example featuring Wiley did an adequate job of explaining why I think that blindly turning on the BLOB Cache can be a bad thing for end users. Having seen first-hand what an improperly configured BLOB Cache can do to the user experience, I’d like to offer up a handful of suggestions based on my own experience.

1. Don’t just “enable” the BLOB Cache with its out-of-the-box (OOTB) default settings. There are a couple of OOTB settings that you should really think hard about changing. I mentioned the default max-age value you get if you don’t actually specify the attribute value. I’m going to talk more about that one in a bit. Also: do you really want the BLOB Cache using your system drive (C:) as its target location for cached files? Most admins I know aren’t particularly friendly with that idea, so relocate the BLOB Cache to another drive.

2. If your Web application has only one zone (i.e., the Default zone), strongly consider specifying a max-age attribute value of zero (max-age=”0”). Why do I say this? Because it avoids the situation I described with Wiley above, and it’s a compromise that gives administrators some of the performance boosts they seek without completely shafting users in the process.

[Figure: Dynamite article page - max-age = 0 in effect]

When the BLOB Cache is enabled and a max-age attribute value of 0 is explicitly specified, things change a bit. BLOB caching and offloading still happen on the WFEs, so administrators get the internal performance boost they were probably seeking in the first place. On the other side of the equation (i.e., the “user side”), persistent client-side caching ceases, as shown on the left. Although the Cache-Control header still specifies public cacheability, max-age=0 forces the browser to round-trip to the server each time it wants to use a locally cached resource, verifying that the copy it holds is still the most up-to-date one. This keeps users like Wiley from going off the deep end due to the wonky, inconsistent experience that afflicts anyone who needs to edit and proof a site employing persistent client-side caching.
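For those who like to see it spelled out, here is a minimal sketch of how the relevant BlobCache attributes might be adjusted on a single WFE. The virtual directory path and drive letter are assumptions (yours will almost certainly differ), and in a real farm you would back up web.config and repeat the change on every WFE (or push it out via SPWebConfigModification) rather than hand-editing one server:

[sourcecode language=”powershell”]
# Minimal sketch only: adjusts the BlobCache element in a single web.config.
# The path and drive letter below are assumptions; back the file up first, and
# remember that every WFE serving the Web application needs the same change.
$webConfigPath = "C:\inetpub\wwwroot\wss\VirtualDirectories\80\web.config"
$config = [xml](Get-Content $webConfigPath)
$blobCache = $config.configuration.SharePoint.BlobCache

$blobCache.SetAttribute("enabled", "true")              # turn the BLOB Cache on
$blobCache.SetAttribute("location", "D:\BlobCache\14")  # keep cached files off the system drive
$blobCache.SetAttribute("max-age", "0")                 # avoid persistent client-side caching

$config.Save($webConfigPath)
[/sourcecode]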

3. If you have a Web application that is extended to two or more zones, apply BLOB Cache settings that are appropriate for each zone. This is relatively common in public-facing SharePoint site collections and Web applications where anonymous access is in-use. In these particular scenarios, there are usually at least two SharePoint zones per Web application: an internal zone (typically the Default zone) through which editors and other users may authenticate to carry out content work, and an external zone (e.g., the Internet zone) which is set up for anonymous access and “external consumption.”

In this dual-zone scenario, it makes sense to configure each zone (IIS site) differently since usage patterns differ between zones. The BlobCache element in the web.config for the internal (Default) zone, for example, should probably be configured according to #2 (above – the one-zone scenario with a max-age attribute value of zero). For the web.config that is used in the external zone, though, it may make sense to apply a non-zero max-age value for use with the BLOB Cache – especially since anonymous users aren’t (normally) content editors. A non-zero max-age means fewer trips (overall) to your WFEs from outside the LAN environment, and this helps to keep bandwidth utilization on your Internet connection in check. There is still a risk that external users may see “stale” content, but the impact is generally more acceptable for straight viewers since they aren’t actively working on content.

4. Consider changing the path expression to restrict what goes into the BLOB Cache. The default path expression for SharePoint 2010’s BlobCache element looks like this:

[sourcecode language=”text”]
\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$
[/sourcecode]

Most administrators are savvy enough to add and remove file extensions from this expression as needed; for example, taking |wmv out of the path expression means that the BLOB Cache will no longer store and serve files with a .wmv extension. Adding and removing extensions really only scratches the surface of what can be done, though. The path attribute value is actually a regular expression, so the full power of regular expressions can be applied to select and exclude files for use with the BLOB Cache.

Suppose you want to explicitly control which images, videos, and other files (among those matching the list of extensions) end up in the BLOB Cache. Maybe you want to specially name the files you intend to cache with an additional .cache extension in front of the actual file type extension (e.g., .gif). To accomplish this, you could change the path expression to this:

[sourcecode language=”text”]
\.cache\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$
[/sourcecode]

With this path expression, filenames like these would be included in the BLOB Cache:

  • SampleImage.cache.jpg
  • MyVideo.cache.wmv

… but anything without the additional .cache qualifier would get omitted, such as:

  • AnotherImage.jpg
  • ExcludeThisVideo.wmv

This is just a simple example, but hopefully it gives you an idea of what you could do with the path regular expression to control the contents of the BLOB Cache.
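Before rolling a modified path expression out to your WFEs, it’s easy to sanity-check it from PowerShell. The -match operator uses the .NET regular expression engine, so it’s a reasonable stand-in for how SharePoint will interpret the expression. The file names below are just the made-up examples from above:

[sourcecode language=”powershell”]
# Sanity-check the modified path expression against a few (made-up) file names
# before deploying it; True means the file name would be eligible for the BLOB Cache.
$pathRegex = '\.cache\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$'
'SampleImage.cache.jpg', 'MyVideo.cache.wmv', 'AnotherImage.jpg', 'ExcludeThisVideo.wmv' |
    ForEach-Object { "{0} -> cached: {1}" -f $_, ($_ -match $pathRegex) }
[/sourcecode]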

Summing It Up

The SharePoint BLOB Cache is a powerful mechanism to improve farm performance and scalability, but it shouldn’t be turned on without some forethought and a couple of changes to the default BlobCache element attribute values.

If you are an administrator and have enabled the BLOB Cache with its default values, check with your users. They might have some feedback for you …

Additional Reading and Resources

  1. CodePlex: SharePoint 2010 BlobCache Manager
  2. Event: SPTechCon San Francisco 2012
  3. Prezi: Pushing SharePoint’s ‘Go Faster’ Button
  4. Blog Post: Client-Server Interactions and the max-age Attribute with SharePoint BLOB Caching
  5. Tool: Fiddler Web Debugging Proxy

Kicking-Off 2012: SharePoint Style

My SharePoint community activities are off to a roaring start in 2012. In this post, I’ll be recapping a couple of events from the end of 2011, as well as covering new activities taking place during the first couple of months of 2012.

I don’t know how 2011 ended for most of you, but the year closed without much of a bang for me. I’m not complaining about that; the general slow-down gave me an opportunity to get caught up on a few things, and it was nice to spend some quality time with my friends and family.

While 2011 went out relatively quietly, 2012 seems to have arrived with a vengeance. In fact, I was doing some joking on Twitter with Brian Jackett and Rob Collie shortly after the start of the year about #NYN, or “New Year’s Nitrous.” It’s been nothing but pedal-to-the-metal and then some since the start of the year, and there’s absolutely no sign of it letting up anytime soon. I like staying busy, but in some ways I’m wondering whether or not there will be enough time to fit everything in. One day at a time …

Here’s a recap of some stuff from the tail end of 2011, as well as what I’ve got going on for the first couple of months in 2012. After February, things actually get even crazier … but I’ll save events beyond February for a later post.

SPTV

During the latter part of 2011, I had a conversation with Michael Hiles and Jon Breyfogle of DSC Consulting, a technical consulting and media services company based here in Cincinnati, Ohio. Michael and Jon had an idea: they wanted to develop a high-quality, high-production-value television program that centered on SharePoint and the larger SharePoint ecosystem/community. The initial idea was that the show would feature an interview segment, coverage of community events, SharePoint news, and some other stuff thrown in.

It was all very preliminary stuff when they initially shared the idea with me, but I told them that I thought they might be on to something. A professionally produced show centered on SharePoint wasn’t something that was being done, and I was really curious to see how they would do it if they elected to move forward.

Just before Christmas, Jon contacted me to let me know that they were indeed moving forward with the idea … and he asked if I’d be the show’s first SharePoint guest. I told him I’d love to help out, and so the bulk of the pilot episode was shot at the Village Tavern in Montgomery one afternoon with host Mark Tiderman and co-host Craig Pereira. Mark and I shot some pool, discussed disaster recovery, and just talked SharePoint for a fair bit. It was really a lot of fun.

The pilot isn’t yet available (publicly), but a teaser for the show is available on the SPTV web site. All in all, I think the DSC folks have done a tremendous job creating a quality, professional program. Check out the SPTV site for a taste of what’s to come!

SharePoint Saturday Columbus Kick-Off

Around the time of the SPTV shooting, the planning committee for SharePoint Saturday Columbus (Brian Jackett, Jennifer Mason, Nicola Young, and I) had a checkpoint conversation to figure out what, if anything, we were going to do about SharePoint Saturday Columbus in 2012. Were we going to try to do it again? If so, were we going to change anything? What was our plan?

Everything with SPSColumbus in 2012 is still very preliminary, of course, but I can tell you that we are looking forward to having the event once again! We expect that we’ll attempt to hold the event during roughly the same part of the year as we’ve had it in the past (i.e., late summer). As we start to nail things down and come up with concrete plans, I’ll share those. Until then, keep your eyes on the SharePoint Saturday site and the SPSColumbus account on Twitter!

SharePointCincy

Those of us who reside in and around Cincinnati, Ohio, are very fortunate when it comes to SharePoint events and opportunities. In the past we’ve had SharePoint Saturday Indianapolis just to the west of us, SharePoint Saturday Columbus to the northeast, and last year we had our first-ever SharePoint Saturday Cincinnati (which was a huge success!). On top of that, last year also brought the first-ever SharePointCincy event.

SharePointCincy was similar in some ways to a SharePoint Saturday, but it was different in others. It was a day full of SharePoint sessions, but we also had Fred Studer (the General Manager for the Information Worker product group at Microsoft) come out and speak. Kroger, a local company whose SharePoint implementation I’m very familiar with, also shared its experience with SharePoint. Rather than go into too much detail, though, I encourage you to check out the SharePointCincy site yourself to see what it was all about.

Of course, the whole reason I’m mentioning SharePointCincy is that it’s coming again in March of this year! Last year’s success (the event was attended by hundreds) pretty much guaranteed that the event would happen again.

I’m part of a planning team that includes Geoff Smith, Steve Caravajal of Microsoft, Mike Smith from MAX Technical Training, and the infamous Shane Young of SharePoint911 (which, in case you didn’t know it, is based here in Cincinnati). Four of the five of us met last Friday for a kick-off meeting and to discuss how the event might go this year. It was a good breakfast and a productive meeting. I don’t have much more to share at this point (other than the fact that, “yes, it’s happening”), but I will share information as it becomes available. Stay tuned!

Secrets of SharePoint Webcast

It’s been a few months since my last webcast on SharePoint caching, so my co-workers at Idera approached me about doing another webcast. I guess I was due.

This Wednesday, January 18th, I’ll be delivering a Secrets of SharePoint webcast titled “The Essentials of SharePoint Disaster Recovery.” Here’s the abstract:

“Are my nightly SQL Server backups good enough?” “Do I need an off-site disaster recovery facility?” “How do I even start the process of disaster recovery planning?” These are just a few of the more common questions that arise when the topic of SharePoint disaster recovery comes up. As with most things SharePoint, the real answer to each question is oftentimes “it depends…”

In this business and process-centric session, we will be taking a look at the topic of SharePoint disaster recovery from multiple perspectives: business continuity planner, technical architect, platform owner, and others. Critical concepts and terms will be explained and defined, and an effective process for analyzing and formulating a disaster recovery plan will be discussed. We’ll also highlight some common mistakes that take place when working to build a disaster recovery strategy and how you can avoid them. By the end of this session, you will be armed with the knowledge needed to plan or review a disaster recovery strategy for your SharePoint environment.

For those of you who have heard me speak and/or attended my webcasts in the past, you’ll probably find this session to be a bit different from the ones you’ve seen or heard. The main reason is that the content is primarily business-centric rather than nuts-and-bolts admin material.

That doesn’t mean that SharePoint administrators shouldn’t attend; on the contrary, the webcast includes a number of very important messages for admins (e.g., why DR must be driven from the business angle rather than the technical/admin angle) that could really help them in their jobs. The session simply expands the scope of the DR discussion to include the business aspects that are so tremendously important during the DR planning process.

If what I’ve shared sounds interesting, please sign up! The webcast is free, and I’ll be doing Q&A after the session.

SharePoint Saturday Austin

This upcoming weekend, I’ll be heading down to Austin, Texas, for the first SharePoint Saturday Austin event! The event is taking place on January 21st, and it is being coordinated by Jim Bob Howard (of Juniper Strategy) and Matthew Lathrop (of Rackspace). Boy oh boy – do they have an amazing line-up of speakers and contributors. It’s quite impressive; check out the site to see what I mean.

The guys are giving me the opportunity to present “The Essentials of SharePoint Disaster Recovery” session, and I’m looking forward to it. I’m also looking forward to catching up with many of my friends … and some of my Idera co-workers (who will be coming in from Houston, Texas).

If you’re in the Austin area and looking for something to do this upcoming Saturday, come to the event. It’s free, and it’s a great chance to take in some phenomenal sessions, win some prizes, and be a part of the larger SharePoint community!

SharePoint Pro Demo Booth Session

On Monday, February 20th at 12pm EST, I’m going to be doing a “demo booth” session through SharePoint Pro Magazine. The demo booth is titled “Backup Basics: SharePoint’s Backup and Restore Capabilities and Beyond.” Here’s the description for the demo booth:

SharePoint ships with a number of tools and capabilities that are geared toward protecting content and configuration. These tools provide basic coverage for your SharePoint environment and the content it contains, but they can quickly become cumbersome in real world scenarios. In this session, we will look at SharePoint’s backup and restore capabilities, discuss how they work, and identify where they fall short in common usage scenarios. We will also highlight how Idera’s SharePoint backup solution picks up where the SharePoint platform tools leave off in order to provide complete protection that is cost-effective and easy to use.

The “demo booth” concept is something new for me; it’s part “platform education” (which is where I normally spend the majority of my time and energy) and part “product education” – in this case, education about Idera’s SharePoint backup product. Being both the product manager for Idera SharePoint backup and a co-author for the SharePoint 2010 Disaster Recovery Guide leaves me in something of a unique position to talk about SharePoint’s built-in backup/restore capabilities, where gaps exist, and how Idera SharePoint backup can pick up where the SharePoint platform tools leave off.

If you’re interested in learning more about Idera’s SharePoint backup product and/or how far you can reasonably push SharePoint’s built-in capabilities, check out the demo booth.

SPTechCon 2012 San Francisco

February comes to a close with a big bang when SPTechCon rolls into San Francisco for the first of two stops in 2012. For those of you who check my blog now and again, you may have noticed the SPTechCon “I’ll be speaking at” badge and link on the right-hand side of the page. Yes, that means I’ll be delivering a session at the event! The BZ Media folks always put on a great show, and I’m certainly proud to be a part of SPTechCon and presenting again this time around.

At this point, I know that I’ll be presenting “The Essentials of SharePoint Disaster Recovery.” I think I’m also going to be doing another lightning talk; I need to check up on that, though, to confirm it.

I also found out that John Ferringer (my co-author and partner-in-crime) and I are going to have the opportunity to do an SPTechCon-sponsored book signing (for our SharePoint 2010 Disaster Recovery Guide) on the morning of Wednesday the 29th.

If you’re at SPTechCon, please swing by to say hello – either at my session, at the Idera booth, the book signing, or wherever you see me!

Additional Reading and Resources

  1. Blog: Brian Jackett’s Frog Pond of Technology
  2. Blog: Rob Collie’s PowerPivotPro
  3. Company: DSC Consulting
  4. Site: SPTV
  5. LinkedIn: Mark Tiderman
  6. LinkedIn: Craig Pereira
  7. Event: SharePoint Saturday Columbus
  8. Blog: Jennifer Mason
  9. Twitter: Nicola Young
  10. Site: SharePoint Saturday
  11. Twitter: SharePoint Saturday Columbus
  12. Event: SharePoint Saturday Cincinnati
  13. Event: SharePointCincy
  14. LinkedIn: Geoff Smith
  15. Blog: Steve Caravajal’s Ramblings
  16. Blog: Mike Smith’s Tech Training Notes
  17. Company: MAX Technical Training
  18. Blog: Shane Young’s SharePoint Farmer’s Almanac
  19. Company: SharePoint911
  20. Webcast: “Caching-In” for SharePoint Performance
  21. Site: Secrets of SharePoint
  22. Webcast: The Essentials of SharePoint Disaster Recovery
  23. Event: SharePoint Saturday Austin
  24. Blog: Jim Bob Howard
  25. Company: Juniper Strategy
  26. LinkedIn: Matthew Lathrop
  27. Company: Rackspace
  28. Company: Idera
  29. Event: SharePoint Pro Demo Booth Session
  30. Site: SharePoint Pro Magazine
  31. Product: Idera SharePoint backup
  32. Book: SharePoint 2010 Disaster Recovery Guide
  33. Event: SPTechCon 2012 San Francisco
  34. Company: BZ Media
  35. Blog: John Ferringer’s My Central Admin

Mirror, Mirror, In the Farm …

SQL Server mirroring support is a welcome addition to SharePoint 2010. Although SharePoint 2010 makes use of the Failover Partner keyword in its connection strings, SharePoint itself doesn’t appear to know whether or not SQL Server has failed over for any given database. This post explores the topic in more depth and provides a PowerShell script to dump a farm’s mirroring configuration.

This is a post I’ve been meaning to write for some time, but I’m only now getting around to it. It’s a quick one, and it’s intended to share a couple of observations and a script that may be of use to those of you who are SharePoint 2010 administrators.

Mirroring and SharePoint

The use of SQL Server mirroring isn’t something that’s unique to SharePoint, and it was possible to leverage mirroring with SharePoint 2007 … though I tended to steer people away from trying it unless they had a very specific reason for doing so and no other approach would work. There were simply too many hoops you needed to jump through in order to get mirroring to work with SharePoint 2007, primarily because SharePoint 2007 wasn’t mirroring-aware. Even if you got it working, it was … finicky.

SharePoint 2010, on the other hand, is fully mirroring-aware through the use of the Failover Partner keyword in connection strings used by SharePoint to connect to its databases.

(Side note: if you aren’t familiar with the Failover Partner keyword, here’s an excellent breakdown by Michael Aspengren on how the SQL Server Native Provider leverages it in mirroring configurations.)
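To make that a bit more concrete, here’s a hedged illustration (with made-up server and database names, and using System.Data.SqlClient purely for demonstration) of the kind of mirroring-aware connection string that gets handed to the client:

[sourcecode language=”powershell”]
# Illustration only: hypothetical server/database names in a mirroring-aware connection string.
$connectionString = "Data Source=SQLNODE1;Failover Partner=SQLNODE2;Initial Catalog=WSS_Content;Integrated Security=True"
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)

# If SQLNODE1 (the principal) cannot be reached when the connection is opened, the client
# transparently tries SQLNODE2 (the mirror) instead; the calling application never has to know.
[/sourcecode]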

There are plenty of blog posts, articles (like this one from TechNet), and books (like the SharePoint 2010 Disaster Recovery Guide that John Ferringer and I wrote) that talk about how to configure mirroring. It’s not particularly tough to do, and it can really help you in situations where you need a SQL Server-based high availability and/or remote redundancy solution for SharePoint databases.

This isn’t a blog post about setting up mirroring; rather, it’s a post to share some of what I’ve learned (or think I’ve learned) and related “ah-ha” moments when it comes to mirroring.

What Are You Pointing At?

This all started when Jay Strickland (one of the Quality Assurance (QA) folks on my team at Idera) ran into some problems with one of our SharePoint 2010 farms that was used for QA purposes. The farm contained two SQL Server instances, and they were set up so that the databases on the second instance mirrored the databases on the first (principal) instance. Jay had configured SharePoint’s service applications and Web applications for mirroring, so all was good.

But not really. The farm had been running properly for quite some time, but something had gone wrong with the farm’s mirroring configuration – or so it seemed. That’s when Jay pinged me on Skype one day with a question (which I’m paraphrasing here):

Is there any way to tell (from within SharePoint) which SQL Server instance is in-use by SharePoint at any given time for a database that is being mirrored?

It seemed like a simple question that should have a simple answer, but I was at a loss to give Jay anything usable off the top of my head. I told Jay that I’d get back to him and started doing some digging.

The SPDatabase Type

Putting on my developer hat for a second, I recalled that all SharePoint databases are represented by an instance of the SPDatabase type (Microsoft.SharePoint.Administration.SPDatabase, specifically) or one of the other classes that derive from it, such as SPContentDatabase. Running down the available members of the SPDatabase type, I came up with the following properties and methods that were tied to mirroring in some way:

  • FailoverServer
  • FailoverServiceInstance
  • AddFailoverServiceInstance()

What I thought I would find (but didn’t) was one or more properties and/or methods that would allow me to determine which SQL Server instance was serving as the active connection point for SharePoint requests.

In fact, the more digging I did, the more it appeared that SharePoint had no real knowledge of where it was actually connecting for data in mirrored setups. It was easy enough to specify which database instances should be used for mirroring configurations, but there didn’t appear to be any way to determine (from within SharePoint) if the principal was in-use or if failover to the mirrored instance had taken place.
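As a quick illustration of what is exposed, the following sketch (run from the SharePoint 2010 Management Shell) lists the configured failover partner for each database. Useful information, to be sure, but notice that nothing in the output indicates which instance is actively servicing requests:

[sourcecode language=”powershell”]
# Shows each database's home server and configured failover partner (if any);
# nothing here reveals whether a failover has actually occurred.
Get-SPDatabase |
    Select-Object Name, Server, FailoverServer |
    Sort-Object Name |
    Format-Table -AutoSize
[/sourcecode]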

The Key Takeaway

If you’re familiar with SQL Server mirroring and how it’s implemented, then the following diagram (which I put together for discussion) probably looks familiar:

SharePoint connecting to mirrored database

This diagram illustrates a couple of key points:

  1. SharePoint connects to SQL Server databases using the SQL Server Native Client.
  2. SharePoint supplies a connection string that tells the native client which SQL Server instances (as Data Source and Failover Partner) should be used as part of a mirroring configuration.
  3. It’s the SQL Server Native Client that actually determines where connections are made, and the results of the Client’s decisions don’t directly surface through SharePoint.

Number 3 was the point that I kept getting stuck on. I knew that it was possible to go into SQL Server Management Studio or use SQL Server’s Management Objects (SMO) directly to gain more insight around a mirroring configuration and what was happening in real-time, but I thought that SharePoint must surely surface that information in some form.

Apparently not.

Checking with the Experts

I hate when I can’t nail down a definitive answer. Despite all my reading, I wanted to bounce the conclusions I was drawing off of a few people to make sure I wasn’t missing something obvious (or hidden) with my interpretation.

  • I shot Bill Baer (Senior Technical Product Manager for SharePoint and an MCM) a note with my question about information surfacing through SharePoint. If anyone could have given me a definitive answer, it would have been him. Unfortunately, I didn’t hear back from him. In his defense, he’s pretty doggone busy.
  • I put a shout out on Twitter, and I did hear back from my good friend Todd Klindt. While he couldn’t claim with absolute certainty that my understanding was on the mark, he did indicate that my understanding was in-line with everything he’d read and conclusions he had drawn.
  • I turned to Enrique Lima, another good friend and SQL Server MCM, with my question. Enrique confirmed that SQL SMO would provide some answers, but he didn’t have additional thoughts on how that information might surface through SharePoint.

Long and short: I didn’t receive rock-solid confirmation on my conclusions, but my understanding appeared to be on the mark. If anyone knows otherwise, though, I’d love to hear about it (and share the information here – with proper recognition for the source, of course!).

Back to the Farm

In the end, I wasn’t really able to give Jay much help with the QA farm that he was trying to diagnose. Since I couldn’t determine where SharePoint was pointing from within SharePoint itself, I did the next best thing: I threw together a PowerShell script that would dump the (mirroring) configuration for each database in the SharePoint farm.

[sourcecode language=”powershell”]
<#
.SYNOPSIS
SPDBMirrorInfo.ps1
.DESCRIPTION
Examines each of the databases in the SharePoint environment to identify which have failover partners and which don’t.
.NOTES
Author: Sean McDonough
Last Revision: 19-August-2011
#>
function DumpMirroringInfo ()
{
    # Make sure we have the required SharePoint snap-in loaded.
    $spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
    if ($spCmdlets -eq $Null)
    { Add-PSSnapin Microsoft.SharePoint.PowerShell }

    # Grab databases and determine which have failover support (and which don’t)
    $allDatabases = Get-SPDatabase
    $dbsWithoutFailover = $allDatabases | Where-Object {$_.FailoverServer -eq $null} | Sort-Object -Property Name
    $dbsWithFailover = $allDatabases | Where-Object {$_.FailoverServer -ne $null} | Sort-Object -Property Name

    # Write out unmirrored databases
    if ($dbsWithoutFailover -eq $null)
    { Write-Host "`n`nNo databases are configured without a mirroring partner." }
    else
    {
        Write-Host ("`n`nDatabases without a mirroring partner: {0}" -f $dbsWithoutFailover.Count)
        $dbsWithoutFailover | Format-Table -Property Name, Server -AutoSize
    }

    # Dump results for mirrored databases
    if ($dbsWithFailover -eq $null)
    { Write-Host "`nNo databases are configured with a mirroring partner." }
    else
    {
        Write-Host ("`nDatabases with a mirroring partner: {0}" -f $dbsWithFailover.Count)
        $dbsWithFailover | Format-Table -Property Name, Server, FailoverServer -AutoSize
    }

    # For ease of reading
    Write-Host ("`n`n")
}
DumpMirroringInfo
[/sourcecode]

The script itself isn’t rocket science, but it did actually prove helpful in identifying some databases that had apparently “lost” their failover partners.

Additional Reading and Resources

  1. MSDN: Using Database Mirroring
  2. Whitepaper: Using database mirroring (Office SharePoint Server)
  3. Blog Post: Clarification on the Failover Partner in the connectionstring in Database Mirror setup
  4. TechNet: Configure availability by using SQL Server database mirroring (SharePoint Server 2010)
  5. Book: The SharePoint 2010 Disaster Recovery Guide
  6. Blog: John Ferringer’s “My Central Admin”
  7. Blog: Jay Strickland’s “Slinger’s Thoughts”
  8. Company: Idera
  9. MSDN: SPDatabase members
  10. MSDN: SQL Server Management Objects (SMO)
  11. Blog: Bill Baer
  12. Blog: Todd Klindt’s SharePoint Admin Blog
  13. Blog: Enrique Lima’s Intentional Thinking