In my last post, I promised those who attended my Content Search Web Part session (at SPTechCon Austin 2016) that I’d deliver videos of the demos I normally perform during that session. This post contains links to those demo videos as well as some additional commentary.
As I discussed in my last post titled “SPTechCon Austin 2016 And Death By Demo,” the demonstrations I intended to deliver at SPTechCon in Austin a week and a half ago didn’t go very well. In fact, they didn’t really go at all due to some extremely odd technical circumstances. To make up for the lack of demo content, I promised attendees that I would put together video walk-throughs for each of the demos I had intended to deliver at SPTechCon.
It took a little longer than initially anticipated, but the half-dozen links below represent the demo material I would normally walk through during a delivery of my “SharePoint’s New Swiss Army Knife: The Content Search Web Part” session. If there’s a silver lining to the fact that I’m doing the demos after the actual presentation, it’s that I was able to take more time than I normally have (within the context of a 75 minute session) to show some extra content and go off the beaten path a bit more.
So, for those of you who have been waiting … here are the goods!
These videos were recorded with Camtasia and rendered directly out to YouTube. I made every attempt to keep the quality high, but if something gets “lost in translation” or you have other issues, please let me know.
I enjoyed putting these videos together, and in the past I’ve tossed around the idea of doing more videos like this. If these CSWP videos were helpful to you and/or you’d like to see more, please let me know. If enough of you find value in these, I’d be willing to put together additional videos for some of the other presentations and workshops I deliver.
Enjoy, and as with everything else, I welcome your feedback!
I just got back from SPTechCon Austin 2016, and I had some “trouble” (putting it mildly) with demos I gave during one of my sessions. This post is a note – and a promise – to those who attended my Content Search Web Part (CSWP) session during the conference.
I used to write posts to sum-up the various conferences at which I’ve spoken. That was feasible when I was only speaking at a conference or event here or there, but writing about every event is somewhat time-consuming nowadays. And besides, most of the posts would look about the same: “great event,” “lots of fun,” “awesome attendees,” etc.
Well, I got back from SPTechCon Austin 2016 yesterday … and I felt compelled to write something today. Yes, it was a great conference, lots of fun, and filled with awesome attendees. But there was something more to this conference that motivated me – no, compelled me – to write this post.
Compelled By What?
That “thing” that compelled me was this: death by demo.
I delivered two sessions during the event: a new one on performance troubleshooting with SharePoint Online, and one of my “standards” that is an introduction to the Content Search Web Part (CSWP). I delivered the troubleshooting session on Tuesday, and although it went long (I still need to tune it up), it went pretty well – no real issues. I can’t claim the same about the CSWP session yesterday (Wednesday) morning.
Simply put, the demos for my CSWP session were a disaster. I’d gotten everything ready to go on the Tuesday night before the session; despite that, things went off-the-rails almost immediately. I was RDP’ing back to my home desktop system where I had VMware Workstation running, and all of that (i.e., the RDP and VMware Workstation parts) seemed fine. The fashion in which things blew up was not something I’d ever seen before.
Kerplunk? Kerplooey!
What went wrong? It’s hard to describe, but the simplest way to put it is that left-clicking didn’t work properly in the development VM I was using. Sometimes my clicks would visibly register (e.g., on a window close button) – but nothing would happen. Other times, my left-clicks seemed to register somewhere else on the screen (other than where the mouse pointer was located). And at other times still, a left-click would highlight some weird section in the web browser window.
Because of this aberrant mouse behavior, I couldn’t show the demo material. I certainly tried enough times, and I even hobbled through one demo with the audience members helping me by shouting out keyboard shortcuts when I asked … but it was a total wreck.
If you were in attendance for this session (and there were quite a few folks, as shown in the picture on the right – taken a handful of minutes before I started), I truly apologize. An apology alone, though, isn’t enough (in my opinion).
My Attempt to Make Up
As the demos were slamming into walls and catching on fire, I commented a couple of times that I’d find some way to share the demo materials with the audience at a later time. I was initially thinking I’d try to do a webcast – kind of a do-over – but I thought about it some more on the plane ride home last night and decided on something else.
Here’s what I’m going to do: rather than do the whole session over again, I’m going to work through each of the demos I intended to show and record those as a Camtasia/video that can be viewed whenever someone has the time to do so. Doing this sort of video cuts straight to the chase and is ultimately more flexible than trying to round everyone up for a webcast. It can also be re-watched as desired.
“When is this video going to be ready?” you might ask. I need to do some catching-up after having been out of town for a while, but I’m hoping to find the time this coming weekend to put it together. If I can do that, then the video will be available sometime early next week.
How Will We Know?
Once everything is ready to go, I’ll put together another blog post to announce the availability and provide a link. I’ve also been in contact with David Rubinstein at BZ Media about this, and he said that he’d blast the information out to attendees and newsletter subscribers, as well.
Summary
So, once again: my sincere apologies to those who attended my CSWP session at SPTechCon. It’ll be a few days after the actual session, but hopefully the video will make up for the demos that went nowhere during the session.
In this post, I cover the topic of exposing paging controls within the display templates used by SharePoint 2013’s Content Search Web Part (CSWP). In addition to looking at the underlying paging mechanisms used by the CSWP, I make available a couple of display templates that I created which include advanced paging support.
I’ve had an opportunity to play with SharePoint 2013’s Content Search Web Part (CSWP) on a number of occasions in the last couple of years, and I have to say that I like it a lot. The CSWP can be employed to address a whole host of different use cases; in fact, in many situations I’ve found that it can be used to solve problems that were previously addressable only through custom code.
Search-driven content in SharePoint isn’t anything new, of course, but the display templates that are used to format search results in SharePoint 2013 are a large part of the CSWP’s “special sauce.” Through the use of a Control display template and an Item display template, it is possible to select, arrange, and style the search results that are returned and shown to users in a highly customizable fashion. With some knowledge of HTML and the help of SharePoint 2013’s Design Manager, it’s possible to produce some pretty impressive looking content. And if you know JavaScript, well … the sky is the limit on what you can produce.
Control And Item Display Templates
Before I dive into paging and how I’ve tried to stretch what can be done with the CSWP, I want to briefly examine some display template basics. Although this post isn’t intended to be a primer on the CSWP and its associated display templates, there are a few items worth reviewing.
On the right is a snippet of a ToolPart containing some configuration data for a particular CSWP displaying some very basic data. The examples that follow (i.e., the two images shown below) use the configuration shown in the ToolPart, so refer to it if needed.
The first example below is how a CSWP showing three results might appear to an end user when the List with Paging Control template and Two lines Item template are applied. The second example (to the right of the first) differentiates which content regions on the CSWP are being driven by the List with Paging Control template (shown with green highlighting) and which are being driven by the Two lines Item template (shown in red).
Through these images, I’m trying to convey a very simple point: the Control template dictates how the search results’ “container” appears, and the Item template determines how each individual search result within the container is displayed. The contents of any CSWP are rendered by the output of a single Control display template and zero or more Item display templates (again, one for each search result/item shown).
Nothing To See Here
As you might have guessed from the title of this post, one functional area that seems relatively unexplored and underdeveloped (in my experience) is that of paging and how paging controls are made available to end users of the CSWP. This is unfortunate, because whenever more results are returned than can be displayed at once – a very common scenario – some form of paging is needed.
Out of the box (OOTB), SharePoint 2013 comes with only a handful of Control and Item display templates that can be used with the CSWP to format your search results. Of these display templates, only one contains paging controls: the List with Paging Control template. If all you require is the basic forward/backward paging offered by the two buttons it contains, then the List with Paging display template may be adequate for your needs.
Personally, I think the List with Paging template is bland and kind of … ugly. It works, sure, but it doesn’t display several pieces of information that I find important; for example, the total number of search results and the result page that the user is currently on. Worse is the fact that it doesn’t provide any sort of mechanism to jump to a specific page. The best that an end-user can do is page forward or backward one page at a time.
Better Paging Support
One of the talks I’ve been giving recently at various SharePoint events and conferences is titled SharePoint’s New Swiss Army Knife: The Content Search Web Part. During that talk, I demonstrate a set of CSWP display templates that I put together to generate something decidedly “non-search” in appearance – like a directory of files to which the current user has access within the current site collection. An image of that CSWP example appears on the left.
This file directory CSWP instance was generated with three simple files: two custom display templates (one for the Control, and one for each Item shown) and a cascading style sheet. The actual result data isn’t particularly remarkable, but the manner in which the paging is implemented is what tends to catch people’s attention. With the Control template I created, end-users can:
See the total number of documents (i.e., search results) to which they have access
Clearly see which page of search results they’re on through the boxed page number and the “Page xx of yy” label
Identify which items/results they’re viewing through the “Items xxx to yyy of zzz” label
Jump directly to a page of results by clicking the specified page number
Finding all of this information and displaying it through the CSWP Control template took some research and tinkering. Some of the information was available directly within the search results that were passed to the CSWP, but some of it wasn’t. In the case where some desired information wasn’t available, it was computed with some basic math.
Overview Of The CSWP HTML
By default, the CSWP implements a client-side processing and paging model. Search results that are displayed are controlled by JavaScript functions contained within the Control and Item display templates that have been assigned to the CSWP. When a set of search results is being processed for display within the client browser, the Control template (which is the results container) gets called first to render the HTML that will frame or house the search items/results. For each search result or item that is passed to the CSWP, the Item display template then gets called to create a snippet of HTML for the result/item that can be inserted into the “frame” (commonly a <div>) created by the Control display template.
An example of the rendered HTML for the search results shown in the previous “Better Paging Support” section appears below.
[code language="html" autolinks="false" collapse="true" title="HTML From CSWP Using New Display Templates (click to expand)"]
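<!-- Reconstructed sketch of the rendered output; the original listing did not
     survive republication. The element types and CSS classes match the
     description that follows; everything else is placeholder. -->
<div>
  <ul class="sas-table">
    <li>
      <div class="sas-tablecell"><!-- Item template output for result 1 --></div>
    </li>
    <li>
      <div class="sas-tablecell"><!-- Item template output for result 2 --></div>
    </li>
    <li>
      <div class="sas-tablecell"><!-- Item template output for result 3 --></div>
    </li>
  </ul>
  <div class="sas-paging-table">
    <!-- Clickable page numbers plus the "Page xx of yy" and
         "Items xxx to yyy of zzz" labels -->
  </div>
</div>
[/code]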
The Control template (SwissArmy_Control_Template.js) creates the top-level <div> and a child <ul> element (with a CSS class of “sas-table”) for the overall HTML structure, as well as the individual <li> elements used to contain each search/result item. For each search result/item, the Item template (SwissArmy_Item_Template.js) creates a <div> block that is inserted as a child within an <li> element. Each of the individual item <div> blocks created by the Item template has a CSS class of “sas-tablecell” for easier styling.
Below the <ul> block housing the search results is another <div> with a CSS class of “sas-paging-table.” As suggested by the class name, this <div> is used to house the clickable page numbers and additional paging information seen at the bottom of the CSWP control. And like the rest of the container information, this HTML is generated within the Control template.
Fetching Search Results
Each time a set of search results is needed, either on initial page rendering or when the user moves to a new page, the CSWP calls back to its SharePoint site collection for the data it needs. By default, this call occurs asynchronously; however, the CSWP can be configured to make such calls synchronously within the normal page processing sequence.
For example, suppose I had a page containing a CSWP at a URL like the following (a made-up address for illustration): http://www.thesite.com/SitePages/Search.aspx#s=11
As part of its request, the CSWP passes an XML structure to the client.svc web service that looks something like the XML that appears below. This structure contains all of the information the ProcessQuery method needs to determine which search results should be sent back:
[code language="xml" collapse="true" autolinks="false" title="XML Request To Client.svc (click to expand)"]
Once the server finds the desired search results and processes them, it packages them up as a JSON object and returns that object to the browser for further action. The following is an example of a JSON structure (containing ten results) that was returned for the Swiss Army Knife CSWP as it was shown earlier in the “Better Paging Support” section:
This JSON structure contains a lot of information – certainly more information than is being displayed by the CSWP. The trick, of course, is in figuring out exactly “what” is “where” for purposes of building a paging system.
Page-Related Processing Within The Control Template
Thankfully, the CSWP provides us with a relatively easy mechanism for getting at the search result data we care about within the JSON object that is returned from the call to client.svc. When a Control display template is invoked by the CSWP, it is passed a context object as follows:
[code language="javascript" light="true"]
function DisplayTemplate_2aa45a743fd94e75a3e53c940628e2ec(ctx) { … }
[/code]
The ctx context object contains a number of useful methods, properties, and subordinate objects we can leverage in our attempts to manipulate search results and calculate paging information. We can use ctx to interact directly with the CSWP (which is accessible through the ClientControl property), as well as obtain the search results themselves (and information about them) through the ListData property.
Note: Even though Microsoft’s List with Paging Control template doesn’t provide anything more than relative paging forward and backward, the CSWP does appear to support some form of more advanced paging scheme through its get_pagingInfo() method. When this method is called, it returns an object that contains expanded paging information that includes information about the current page, as well as a subset of pages before and after the current page. The object does not, however, contain all of the information needed to determine the total number of items in the search result set, how to access those pages, etc.
Fortunately for us, though, the ctx.ListData object contains everything we need to implement an end-to-end paging system. Here’s how each of the critical paging information pieces is located or computed within the Control template so that the portion of the control seen below can be rendered:
Total number of all search results available. The totalRowCount variable is used within the Control display template scripting to store this value. And the value itself is readily available through the ctx.ListData.ResultTables[0].TotalRows property.
Maximum number of items to display per page. This value (represented within the Control display template scripting as rowsPerPageCount) is easily obtained through the ctx.ListData.Properties.RowLimit property. It identifies the maximum number of rows that may be returned in the search results table (through ctx.ListData.ResultTables[0]), and it is the value which should be used to determine the number of individual search results shown in the CSWP.
Number of items to display on the current page. In most cases, this value (defined through the rowsOnCurrentPageCount variable within the Control display template scripting) will be the same as the maximum number of items per page. On the last page of results, though, it isn’t uncommon for there to be fewer results available than the maximum number possible. To determine how many items should be shown on the current page of search results, the script simply determines how many rows are present in the current search results table (ctx.ListData.ResultTables[0].RowCount).
First page number for the result set. As my kids would say, “Easy peasy lemon squeezy.” This variable (firstPageNumber) always has the intuitive value of 1. This variable should not be confused with the firstPage variable that commonly appears in the OOTB display template scripting. The reference which is assigned to the firstPage variable is a PagingLink object that is meaningful within the context of the pagingInfo object returned from the call to ctx.ClientControl.get_pagingInfo() – not the page calculations in the display templates associated with this post.
Last page number for the result set. This value (represented by the lastPageNumber variable) isn’t readily available in a way that can be “plucked” out of the search data, so the display template scripting simply does a little math to calculate it: Math.ceil(totalRowCount / rowsPerPageCount). In non-code terms: the last page of the total search results set is the total number of results in the set divided by the number of results displayed per page, rounded up. As with the firstPageNumber and firstPage variables, don’t confuse lastPageNumber with the OOTB lastPage object reference.
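Pulled together, the lookups and calculations just described boil down to a handful of assignments like the following (a condensed sketch; the property paths and variable names come from the list above, and everything else is trimmed away):
[code language="javascript"]
// Locate or compute the critical paging values described above
var totalRowCount = ctx.ListData.ResultTables[0].TotalRows;          // all available results
var rowsPerPageCount = ctx.ListData.Properties.RowLimit;             // max items per page
var rowsOnCurrentPageCount = ctx.ListData.ResultTables[0].RowCount;  // items on the current page
var firstPageNumber = 1;                                             // always 1
var lastPageNumber = Math.ceil(totalRowCount / rowsPerPageCount);    // computed via a little math
[/code]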
In addition to the values described above, a series of calculations are performed to determine the currentPageStartItem and currentPageEndItem variables that represent the first and last item numbers on the current page. Once these values are known and assigned to their respective variables, it becomes a relatively straightforward exercise to display the paging information as is done within the SwissArmy_Control_Template.js Control template. The (not-so-heavy) lifting is accomplished with the following Javascript:
[code language="javascript" collapse="true" title="JavaScript Snippet To Render Paging Information (click to expand)"]
// Figure out the current page information (pagingInfo comes from the
// ctx.ClientControl.get_pagingInfo() call described above)
for (var i = 0; i < pagingInfo.length; i++)
{
    var pl = pagingInfo[i];
    if (!$isNull(pl))
    {
        // Within pagingInfo, the current page is flagged with a startItem of -1
        if (pl.startItem == -1)
        {
            currentPage = pl;
            currentPageNumber = pl.pageNumber;
            currentPageStartItem = ((currentPageNumber - 1) * rowsPerPageCount) + 1;
            if (currentPageNumber == lastPageNumber) {
                currentPageEndItem = totalRowCount;
            } else {
                currentPageEndItem = currentPageNumber * rowsPerPageCount;
            }
        }
    }
}
// Generate page divs and their links
var getPageNumberDivs = function() {
    var currentDiv, divBlocks, pageStartItem, cellStyle;
    divBlocks = '';
    for (var i = firstPageNumber; i <= lastPageNumber; i++) {
        cellStyle = 'sas-paging-tablecell';
        if (i == currentPageNumber) {
            // NOTE: the published snippet was truncated at this point; the rest of
            // the function below is a reconstruction of the behavior described in
            // this post. The current page renders as plain (non-clickable) text ...
            currentDiv = '<div class="' + cellStyle + '">' + i + '</div>';
        } else {
            // ... while every other page number becomes a link that asks the CSWP
            // client control to page to that page's first item
            pageStartItem = ((i - 1) * rowsPerPageCount) + 1;
            currentDiv = '<div class="' + cellStyle + '"><a href="javascript:{}" onclick="$getClientControl(this).page(' + pageStartItem + ');return false;">' + i + '</a></div>';
        }
        divBlocks += currentDiv;
    }
    return divBlocks;
};
[/code]
One area does warrant a little more explanation, though, and that is how the hyperlinks are created for each of the clickable page numbers at the bottom of the CSWP. There really isn’t a whole lot of magic behind the creation of the hyperlinks themselves; the results are achieved on line 101 of the SwissArmy_Control_Template.js file:
… where i is the starting item number of the new result page being clicked by the user. For example, if the user clicks on “17” (to indicate a desire to move to page 17 of the search results), the value of i will be 161 when there are 10 items per page ((17 - 1) * 10 + 1 = 161).
On the server side, SharePoint translates the request for start item 161 (a location that can also be reached directly via re-post by appending #s=161 to the URL, as shown above) into a search results table that it packages-up and sends down containing all of the page 17 result items.
How Do I Use The Sample (Downloadable) Files?
Once you’ve downloaded the SwissArmyTemplates.zip file and opened it up, you’ll find that it contains five files:
SwissArmy_Control_Template.html
SwissArmy_Control_Template.js
SwissArmy_Item_Template.html
SwissArmy_Item_Template.js
SwissArmy_Styles.css
The two HTML files are each of the display templates in their HTML form. These HTML files can be processed through SharePoint 2013’s Design Manager to generate the corresponding JavaScript files that are actually used by the CSWP. Alternatively, the JS files that are included in the ZIP file can be used as-is and dropped directly into the Master Page Gallery > Display Templates > Content Web Parts folder within a site collection to make them available to the CSWP.
The remaining file is a CSS style sheet, and it provides the look and feel that is employed by the display templates. The style sheet itself is referenced from within the Control template files (SwissArmy_Control_Template.html and SwissArmy_Control_Template.js), and the Control template files assume that the style sheet will be available at a specific location within the site collection.
If you place the style sheet somewhere else in the site collection, ensure that the $includeCSS() calls in the SwissArmy_Control_Template.html file (line 25) and the SwissArmy_Control_Template.js file (line 164) are updated accordingly.
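For reference, each of those calls looks something like the following (the style sheet path shown here is illustrative; point it at whatever location you actually choose):
[code language="javascript"]
// $includeCSS is the standard display template helper for pulling in a style
// sheet; the Style Library path below is illustrative
$includeCSS(this.templateUrl, "~sitecollection/Style Library/SwissArmy_Styles.css");
[/code]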
Final Request
As with all of the resources I make available, please feel free to use the display templates and style sheet I’ve provided within your own projects, either as-is or in a form that you’ve modified to suit your needs.
If you do share or redistribute what I’ve provided in some form, whether or not you’ve made modifications, I simply ask that you reference the original source (i.e., me and/or this blog post) in what you’re sharing. I do believe in citing sources where appropriate.
Thanks, and have fun paging through search results!
SharePoint 2013 introduces a new Active Directory Import mode for user profiles, but can the creation of the required synchronization connections be easily scripted? Heck no. Out of the box (OOTB), the PowerShell support for creating user profile AD “direct mode” import connections isn’t what it needs to be. In this post, I offer a scripting alternative that allows you to specify display names and LDAP filters.
I’ve been operating under the Bitstream Foundry banner for a few months now, and I have to admit that it’s been absolutely wonderful to return to hands-on work with SharePoint. It really didn’t take long for me to realize just how much I missed solving problems and making SharePoint do what customers/clients wanted it to do. Some of you may think I’m crazy for saying this, but it’s good to be back in the trenches.
Brief note: I lump the User Profile Service (UPS), the User Profile Service Application (UPA), and related pieces under the “UPS” acronym for this post. It’s not technically accurate, but it keeps things simpler for readability.
Back in the SharePoint 2010 days, the UPS was buggy, problematic, and easily broke if you looked at it the wrong way. I know that the bugs were ultimately ironed-out and that the UPS was “fixed” to a large extent, but I stopped playing with it before it got to that point. I never really needed it for anything I wanted to do, and so I did my best ostrich impersonation anytime anything related to the User Profile Service and its underlying plumbing came up.
It’s somewhat ironic, then, that I was only back into hands-on work for about a month before I found myself thrown into the UPS with SharePoint 2013. A client wanted me to put together a script that they could use to perform the post-setup configuration for the UPS in a farm once the UPS instance had been created (through AutoSPInstaller – which also happens to be my “go to” tool for SharePoint setup). The client wanted the script to configure built-in profile properties, create custom properties, create user sub-types, and most importantly: configure synchronization connections. End-to-end, the script needed to do everything that an admin might normally have to do by hand in order to get the UPS ready for production usage.
The Twist
UPS configuration via PowerShell isn’t new territory. There’s a pretty substantial base of online material to draw from for guidance and inspiration (starting with TechNet), but my task was novel in one very important regard: the synchronization connections I was setting up were using SharePoint 2013’s new Active Directory Import mode.
If you’re not familiar with Active Directory Import mode (“AD Direct Mode”), pause your reading here and head over to Spence Harbar’s blog for a thorough and extremely well-written overview. It is a top-notch post on Active Directory Import Mode and provides a foundation for the rest of this post.
If you’ve scripted-out the creation of UPS synchronization connections through PowerShell under SharePoint 2010, then you’re undoubtedly familiar with the Add-SPProfileSyncConnection cmdlet and its syntax (documented on TechNet). A representative call using the cmdlet’s core parameters looks something like this (all values illustrative):
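[sourcecode language="powershell"]
# Illustrative only: the core parameters of a SharePoint 2010-style sync connection
Add-SPProfileSyncConnection -ProfileServiceApplication $upa `
    -ConnectionForestName "yourdomain.com" `
    -ConnectionDomain "YOURDOMAIN" `
    -ConnectionUserName "spUpsSync" `
    -ConnectionPassword (ConvertTo-SecureString "PlaceholderPassword" -AsPlainText -Force) `
    -ConnectionSynchronizationOU "OU=Employees,DC=yourdomain,DC=com"
[/sourcecode]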
As I started putting together my script and looking for usage examples covering the Add-SPProfileSyncConnection cmdlet, I naturally came to another of Spence Harbar’s posts. Spence’s post was the first sign of trouble for me. Even though Spence’s post was written in the SharePoint 2010 time frame, he noted a number of limitations with the Add-SPProfileSyncConnection cmdlet versus creating sync connections within Central Administration. For example, there was no way to specify a DisplayName parameter with Add-SPProfileSyncConnection; you were stuck with a name that was automatically assigned based on the connection’s associated domain name.
The Wall
The DisplayName limitation was annoying, but it wasn’t a show-stopper for me. With a little more investigation, though, the real roadblocks quickly made themselves known.
If you were to compare the fields and options that are available when adding a sync connection through SharePoint Central Administration (shown on the right) with those that are available when adding a connection with the Add-SPProfileSyncConnection cmdlet, you’d see some significant gaps. The ones that stood between me and a configuration script that did what I needed it to do were the following:
The inability to use the “Filter out disabled users” switch
The inability to specify an LDAP filter to restrict imported users
The ability to filter disabled users was really a convenience; I could just as easily extend my LDAP filter to include the following:
(!(userAccountControl:1.2.840.113556.1.4.803:=2))
… if I wanted to filter disabled users without using the checkbox that was available in Central Admin. Without the ability to specify an LDAP filter in the Add-SPProfileSyncConnection call, though, I was stuck. I needed some way to specify an LDAP filter to restrict imported users for the connections I was going to be creating.
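For the curious, a combined filter of that sort might look like the following (a hypothetical example; tailor it to your own directory before using it):
[sourcecode language="powershell"]
# Hypothetical combined filter: import only user objects that are not disabled
$ldapFilter = "(&(objectCategory=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))"
[/sourcecode]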
I spent a lot of time hunting through forums and looking for information online, but the hunt proved fruitless. No matter how I approached the problem and tinkered with PowerShell, I came up empty. Falling back to the SharePoint Server Object Model, the ConnectionManager class (in the Microsoft.Office.Server.UserProfiles namespace), and its related types got me no further, either. Absolutely nothing in the APIs seemed to expose a way to add Active Directory Import sync connections in a way that exposed the properties I needed to specify.
I had a hard time believing it, but it seemed that the only way to create an Active Directory Import-based sync connection that allowed me to specify a DisplayName and LDAP filter was to use SharePoint Central Administration.
Huh?
The Source of the Problem
Since Central Administration supplied the mechanism to create sync connections with the desired parameters, I knew that SharePoint had the plumbing to support what I wanted to do somewhere in its assemblies. I just needed to figure out where that functionality was. So, I did what I typically do in these sorts of situations: I pulled out Reflector and went diving into SharePoint’s internals.
I took a look at both the EditDSServer.aspx page in the _layouts/15 folder (which is the Central Admin page for creating sync connections) and the Add-SPProfileSyncConnection cmdlet, did some backtracking, and eventually found myself back at the ConnectionManager class – no big surprise there. The somewhat surprising part, though, was what I found (shown to the left).
It appeared that the ConnectionManager class sported several methods to support the addition of sync connections: AddActiveDirectoryConnection, AddActiveDirectoryImportConnection, AddBusinessDataCatalogConnection, and AddLdapConnection. The one that I was interested in was the second one – AddActiveDirectoryImportConnection:
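[sourcecode language="csharp" autolinks="false"]
// Reconstructed signature (the return type and parameter names are assumed; the
// parameter types and their order match the reflection call shown later in this post)
internal Connection AddActiveDirectoryImportConnection(
    ConnectionType type, Guid connectionId, string displayName, string server,
    bool useSSL, bool useDisabledFilter, string ldapFilter, string accountDomain,
    string accountUsername, SecureString accountPassword,
    List<DirectoryServiceNamingContext> namingContexts, string claimProviderType,
    string claimProviderId, string claimIDMapAttribute)
[/sourcecode]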
The big problem was that method was marked internal – meaning it wasn’t publicly exposed to outside callers.
Tracing things through, I found that the AddActiveDirectoryImportConnection method was ultimately accessible through two different public entry points: the Central Admin EditDSServer.aspx page and the Add-SPProfileSyncConnection PowerShell cmdlet. The EditDSServer.aspx page call was pretty direct, but the PowerShell cmdlet call ended up channeling through the UpdateCreateSyncConnection method on the Microsoft.Office.Server.Administration.UserProfileApplication class before making its way to the AddActiveDirectoryImportConnection method. And the UpdateCreateSyncConnection method was the source of the problem.
UpdateCreateSyncConnection is a method that coordinates the creation and modification of different sync connection types – not just the Active Directory Import connection type I was focused on. Most of the parameters that are needed to create a sync connection are available on the method signature, but not all of them.
Looking through the method to where it actually calls into the AddActiveDirectoryImportConnection internal method reveals how it handles passing-in an LDAP filter … or rather, how it doesn’t handle it:
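[sourcecode language="csharp" autolinks="false"]
// Paraphrased from the decompiled method (variable names assumed). Note the
// hard-coded string.Empty sitting in the position where
// AddActiveDirectoryImportConnection expects an LDAP filter.
this.ConnectionManager.AddActiveDirectoryImportConnection(ConnectionType.ActiveDirectoryImport,
    connectionId, displayName, server, useSSL, useDisabledFilter, string.Empty,
    accountDomain, accountUsername, accountPassword, namingContexts,
    claimProviderType, claimProviderId, claimIDMapAttribute);
[/sourcecode]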
See the string.Empty parameter in the method call? It’s in the same position where the AddActiveDirectoryImportConnection method expects an LDAP filter. The UpdateCreateSyncConnection method basically “drops the ball” and does absolutely nothing with an LDAP filter. If code had attitude, I envisioned this as the UpdateCreateSyncConnection method throwing up its virtual arms and saying “whatever.”
That crashing noise? My efforts thus far slamming into a wall.
Working Around the Limitation
Since I’d officially proven (to myself, anyway) that the Add-SPProfileSyncConnection cmdlet wasn’t going to work for my purposes, and I wasn’t going to be able to script out the desired action through Central Administration, I was looking at going against the AddActiveDirectoryImportConnection method myself directly from within my PowerShell script.
Insert a giant “ewwwwwwwww” sound here. Why? Because the method was marked internal. Since the method wasn’t publicly accessible, I’d need to use Reflection to call it.
I’ve used Reflection when I’ve had to in the past, and it certainly works, but it’s always had something of a “dark art” feel to me. And the times when I have used Reflection heavily have been within managed C# code – not PowerShell. Reflection is very persnickety regarding object types and method signatures, and C# (with its strict typing) makes it relatively painless to manage these issues. PowerShell, on the other hand, is pretty loose with typing and coercion – not ideal for working with Reflection-based parameter specification and method invocation.
Dark arts or not, I was bound and determined to provision my sync connections through PowerShell. So onward I marched.
The Result
I’m happy to say that this story does have a happy ending. The script that I’m sharing (Add-SPProfileDirectSyncConnection) to provision Active Directory Import connections appears below. It’s not the prettiest thing around, but hey – it works.
A word of caution before you pluck this out and start using it: the script goes directly against an internal method on the ConnectionManager class instead of a publicly exposed method, so I want to stress that this is a “use at your own risk” script. Anytime I have the option of using publicly accessible APIs, I do … but I had no other choice in this case.
[sourcecode language="powershell" toolbar="true" wraplines="true"]
<#
.SYNOPSIS
Add-SPProfileDirectSyncConnection.ps1
.DESCRIPTION
Establishes a new Active Directory Direct user profile synchronization connection for SharePoint 2013.
.NOTES
Author: Sean McDonough
Last Revision: 27-June-2013
#>
param (
[parameter(mandatory=$true,position=1)][object]$userProfileApp,
[parameter(mandatory=$true)][string]$displayName,
[parameter(mandatory=$true)][string]$forestName,
[parameter(mandatory=$true)][string]$syncOU,
[parameter(mandatory=$false)][switch]$useSsl = $false,
[parameter(mandatory=$false)][switch]$useDisabledFilter = $false,
[parameter(mandatory=$false)][string]$ldapFilter = "",
[parameter(mandatory=$true)][string]$domain,
[parameter(mandatory=$true)][string]$username,
[parameter(mandatory=$true)][string]$password,
[parameter(mandatory=$false)][string]$claimProviderType = "Windows",
[parameter(mandatory=$false)][string]$claimProviderId = "Windows",
[parameter(mandatory=$false)][string]$claimIdMapAttribute = "samAccountName"
)
# Perform an LDAP lookup to get the object ID for the target domain.
$ldapLookupContext = "LDAP://" + $forestName
$ldapUsername = $domain + "\" + $username
$objDomain = New-Object System.DirectoryServices.DirectoryEntry($ldapLookupContext, $ldapUsername, $password)
if ($useSsl) {
$objDomain.AuthenticationType = [System.DirectoryServices.AuthenticationTypes]::SecureSocketsLayer
}
$ldapDomainDn = $objDomain.distinguishedName
$ldapDomainGuid = New-Object Guid($objDomain.objectGUID)
# Values needed for the naming context creation below. These assignments were
# not preserved in this copy of the script, so sensible defaults are assumed here.
$isDomain = $true
$includedOU = New-Object System.Collections.Generic.List[string]
$includedOU.Add($syncOU)
$excludedOU = New-Object System.Collections.Generic.List[string]
$preferredDCs = $null
$useOnlyPreferredDCs = $false
# Creation of the objects needed to properly specify the OU for the sync connection
$dnCtx = New-Object Microsoft.Office.Server.UserProfiles.DirectoryServiceNamingContext(
$ldapDomainDn, $forestName, $isDomain, $ldapDomainGuid, $includedOU, $excludedOU, $preferredDCs, $useOnlyPreferredDCs)
$namingContext = New-Object System.Collections.Generic.List[[Microsoft.Office.Server.UserProfiles.DirectoryServiceNamingContext]]
$namingContext.Add($dnCtx)
$ncParam = [System.Collections.Generic.List[Microsoft.Office.Server.UserProfiles.DirectoryServiceNamingContext]]$namingContext
# Additional values needed for the reflection-based call. Like the block above,
# these assignments were not preserved in this copy of the script; the enum value,
# binding flags, and config manager lookup shown here are assumed.
$connType = [Microsoft.Office.Server.UserProfiles.ConnectionType]::ActiveDirectoryImport
$dcId = [System.Guid]::NewGuid()
$isSslUsed = [System.Boolean]$useSsl
$isDisabledFilterUsed = [System.Boolean]$useDisabledFilter
$securePassword = ConvertTo-SecureString $password -AsPlainText -Force
$INVOKE_ATTRIBUTES_NON_PUBLIC_MEMBERS = [System.Reflection.BindingFlags]"Instance,NonPublic"
$serviceContext = [Microsoft.SharePoint.SPServiceContext]::GetContext(
$userProfileApp.ServiceApplicationProxyGroup, [Microsoft.SharePoint.SPSiteSubscriptionIdentifier]::Default)
$upConfigMgr = New-Object Microsoft.Office.Server.UserProfiles.UserProfileConfigManager($serviceContext)
# If a connection with the same display name already exists, exit rather than
# attempt to create a duplicate (behavior described in the notes below)
if (($upConfigMgr.ConnectionManager | Where-Object { $_.DisplayName -eq $displayName }) -ne $null) {
Write-Warning "A synchronization connection named '$displayName' already exists."
return
}
# Since the method we’re about to invoke is internal, some hoops have to be jumped through
# to call it via PowerShell and Reflection
$paramTypes = @([Microsoft.Office.Server.UserProfiles.ConnectionType], [System.Guid], `
[System.String], [System.String], [System.Boolean], [System.Boolean], [System.String], `
[System.String], [System.String], [System.Security.SecureString], `
[System.Collections.Generic.List[Microsoft.Office.Server.UserProfiles.DirectoryServiceNamingContext]], `
[System.String], [System.String], [System.String])
$addConnMethodInfo = [Microsoft.Office.Server.UserProfiles.ConnectionManager].GetMethod("AddActiveDirectoryImportConnection", `
$INVOKE_ATTRIBUTES_NON_PUBLIC_MEMBERS, $null, $paramTypes, $null)
$methodParams = @($connType, $dcId, $displayName, $forestName, $isSslUsed, $isDisabledFilterUsed, $ldapFilter, `
$domain, $username, $securePassword, $ncParam, $claimProviderType, $claimProviderId, $claimIdMapAttribute)
$addConnMethodInfo.Invoke($upConfigMgr.ConnectionManager, $methodParams)
[/sourcecode]
The script accepts a variety of parameters and has a couple of switches:
-userProfileApp. A reference to the User Profile Service Application instance for which the sync connection will be provisioned. This parameter is required.
-displayName. A string containing the “friendly name” that will appear for the sync connection within Central Administration. This parameter is required.
-forestName. A string that contains the fully qualified domain name (FQDN) for the domain that hosts the user profiles that will be imported by the sync connection. This parameter is required.
-syncOU. A string that contains the full distinguished specification of the Active Directory organizational unit (OU) where target user accounts (for import) reside. This parameter is required.
-useSsl. A switch that can be used to direct the script to use Secure Sockets Layer (SSL) transport encryption when connecting to Active Directory for synchronization operations and LDAP lookups. If this switch is not specified, connections are not made using SSL.
-useDisabledFilter. If this switch is included, the user profile selection process that takes place for the sync connection being established will filter out user accounts that are disabled. If this switch is not specified, both active and inactive user accounts will be imported during synchronization.
-ldapFilter. A string that contains an LDAP (Lightweight Directory Access Protocol) filter that will be applied to restrict the accounts which will be imported during synchronization. Creating this type of filter is made easier with the use of a tool like ADSI Edit. This parameter is optional.
-domain. A string containing the NetBIOS name of the domain where the profile synchronization account resides. This string is combined with the -username parameter to form a domain + username account specification. This parameter is required.
-username. A string containing the username of the profile synchronization account. This parameter should include the username only without a domain name qualifier. A domain name is specified separately through the -domain parameter. This parameter is required.
-password. A string containing the plaintext password of the profile synchronization account. This parameter is required.
-claimProviderType. A string identifying the type of authentication provider that is used to encode user profile account names. Common values for this parameter are Windows and Trusted. This parameter is optional; if unspecified, a value of Windows is used.
-claimProviderId. A string specifying the name or ID of the claims provider in-use. When a -claimProviderType of Windows is used, this parameter value is also typically Windows. If a -claimProviderType of Trusted is used, then this parameter value is used in conjunction with that value to identify the specific auth provider instance used. This parameter is optional and will default to Windows if unspecified.
-claimIdMapAttribute. A string that specifies the claims ID or field used by the claim provider. This parameter is specific to the claim provider, and in Windows authentication scenarios will usually be samAccountName. This parameter is optional.
The following is an example of how you might use the Add-SPProfileDirectSyncConnection script in a larger script that provisions user profile Active Directory Direct sync connections.
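Something along these lines would do the trick (every name, account, and OU below is illustrative):
[sourcecode language="powershell"]
# Illustrative calling sequence for the Add-SPProfileDirectSyncConnection script
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$upa = Get-SPServiceApplication | Where-Object { $_.TypeName -like "User Profile*" }
.\Add-SPProfileDirectSyncConnection.ps1 -userProfileApp $upa `
    -displayName "Corporate AD Import" `
    -forestName "yourdomain.com" `
    -syncOU "OU=Employees,DC=yourdomain,DC=com" `
    -useDisabledFilter `
    -ldapFilter "(&(objectCategory=person)(objectClass=user))" `
    -domain "YOURDOMAIN" `
    -username "spUpsSync" `
    -password "PlaceholderPassword"
[/sourcecode]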
Some tips and information regarding the script and its usage:
Before executing this script, ensure that the account under which the script is being run has Full Control connection permission to the User Profile Service Application. This is set by highlighting the UPA instance in the “Manage Service Applications” page (ServiceApplications.aspx) in Central Admin, clicking the Permissions button on the toolbar, and granting Full Control permissions to the desired account as shown on the right. If this step is not performed, the script will error-out. A scripted alternative for this permission grant appears after this list.
If a sync connection with the supplied -displayName already exists, the script will exit. It will not attempt to create another connection with the same name.
Ensure that the user profile synchronization account specified has the necessary replication permissions with the target user profile store. If the account lacks these permissions, the sync connection will be properly established but profiles will not be imported. Both TechNet and Spence Harbar cover this permission requirement in their various articles/posts, and it’s the same one that was needed in SharePoint 2010.
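If you’d rather script the Full Control grant mentioned in the first tip than click through Central Admin, something like the following works (a sketch; swap in your own account):
[sourcecode language="powershell"]
# Grant an account Full Control on the User Profile Service Application
# (the scripted equivalent of the Central Admin "Permissions" toolbar button)
$upa = Get-SPServiceApplication | Where-Object { $_.TypeName -like "User Profile*" }
$security = Get-SPServiceApplicationSecurity $upa
$principal = New-SPClaimsPrincipal "YOURDOMAIN\scriptAccount" -IdentityType WindowsSamAccountName
Grant-SPObjectSecurity $security $principal "Full Control"
Set-SPServiceApplicationSecurity $upa $security
[/sourcecode]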
Final Word
My goal with this post was to (hopefully) save some of you the hassle I encountered while trying to automate a task that I thought would have been trivial … but wasn’t. Please let me know if this ends up helping you!
And although this script has been tested, I don’t have the facilities to test every combination of parameter and environment. If you find something that’s wrong or should be refactored, please let me know and I’ll perform an update.
What started as a simple attempt to use the ~appWebUrl token in an image URL became a deep dive into SharePoint’s internal processing of custom actions and the App deployment process. In this post, I cover what will and won’t work for custom action image URLs in your own SharePoint 2013 Apps.
My adventures in SharePoint 2013 App Model Land have been going pretty well, but I recently encountered a limitation that left me sort of scratching my head.
The limitation applies to the creation of custom actions for SharePoint apps. To be more specific: the problem I’ve encountered is that there doesn’t appear to be a way to package and reference (using relative links) custom images for ribbon buttons like the one that’s circled in the image above and to the left. This doesn’t mean that custom images can’t be used, of course, but the work-around isn’t exactly something I’m particularly fond of (nor is it even feasible) in some application scenarios.
If you’re not familiar with the new SharePoint 2013 App Model, then you may want to do a little reading before proceeding with this post. I’m only going to cover the App Model concepts that are relevant to the limitation I observed and how to address/work-around it. However, if you are familiar with the new 2013 App Model and creating custom actions in SharePoint 2010, then you may want to jump straight down to the section titled Where the Headaches Begin.
One more warning: this post does some heavy digging into SharePoint’s internal processing of custom ribbon actions and URL tokens. If you want to skip all of that and head straight to the practical take-away, jump down to the What About the Image32by32 and Image16by16 Attributes section.
Adding a Ribbon Custom Action
First, let me do a quick run-through on custom actions. They aren’t unique to SharePoint 2013 or its new “Cloud App Model.” In fact, the type of custom action I’m talking about (i.e., extending the ribbon) became available when the Ribbon was introduced with SharePoint 2010.
With a SharePoint 2013 App, adding a new button to the ribbon is a relatively simple affair. It starts with choosing the Ribbon Custom Action option from the Add New Item dialog as shown below and to the left. Once a name is provided for the custom action and the Add button is clicked, the Create Custom Action for Ribbon dialog appears as shown below and to the right. There’s a third dialog page that further assists in setting some properties for a custom action, but I’m going to skip over it since it isn’t relevant to the point I’m trying to make.
I want to call attention to one of the selections I made on the Create Custom Action for Ribbon dialog, though; specifically, the decision to expose the custom action in the Host Web rather than in the App Web.
Why is this choice so important? Well, the new App Model enforces a relatively strict boundary of separation between SharePoint sites and any custom applications (running under the new App Model) that they may contain. A SharePoint site (Host Web) can technically “host” applications, but those applications operate in an isolated App Web that may have components running on an entirely different server. Under the new App Model, no custom app code is running in the Host Web.
App Webs (where custom applications exist after installation) don’t have direct access to the Host Web in which they’re contained, either. In fact, App Webs are logically isolated from their Host Web parents. If App Webs want to communicate with their Host Web parent to interact with site collection data, for example, they have to do so through SharePoint’s Client-Side Object Model (CSOM) or the Representational State Transfer (REST) interface. The old full-trust, server-side object isn’t available; everything is “client-side.”
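To make that concrete, here’s a minimal sketch (not taken from this post’s project) of App Web script reading something as simple as the Host Web’s title through REST and the cross-domain library; the getQueryStringParameter helper is the usual sample boilerplate rather than a built-in:
[sourcecode language="javascript"]
// Read the Host Web title from App Web script via SP.RequestExecutor
var hostUrl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));
var appUrl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));
var executor = new SP.RequestExecutor(appUrl);
executor.executeAsync({
    url: appUrl + "/_api/SP.AppContextSite(@target)/web/title?@target='" + hostUrl + "'",
    method: "GET",
    headers: { "Accept": "application/json; odata=verbose" },
    success: function (data) { alert(JSON.parse(data.body).d.Title); },
    error: function () { alert("Could not read the Host Web title."); }
});
[/sourcecode]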
There are some exceptions to this model of isolation, and one of those exceptions is the use of custom actions to allow an App (residing in an App Web) to partially wire itself into the Host Web. The Create Custom Action for Ribbon dialog shown above, for instance, adds a new button to the ribbon for each of the Document Libraries in the Host Web. This gives users a way to navigate directly from Document Libraries (in the Host Web) to a page in the App Web, for example.
The Elements.xml file that gets generated for the custom action once the Visual Studio wizard has finished running looks something like the following:
[sourcecode language="XML" autolinks="false"]
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
<CustomAction Id="1470c964-6b8a-4d79-9817-4d32c898ffbe.RibbonCustomAction1"
RegistrationType="List"
RegistrationId="101"
Location="CommandUI.Ribbon"
Sequence="10001"
Title="Invoke 'LibraryDetailsCustomAction' action">
<CommandUIExtension>
<!--
Update the UI definitions below with the controls and the command actions
that you want to enable for the custom action.
-->
<CommandUIDefinitions>
<CommandUIDefinition Location="Ribbon.Library.Actions.Controls._children">
<Button Id="Ribbon.Library.Actions.LibraryDetailsCustomActionButton"
Alt="Examine Library Details"
Sequence="100"
Command="Invoke_LibraryDetailsCustomActionButtonRequest"
LabelText="Examine Library Details"
TemplateAlias="o1"
Image32by32="_layouts/15/images/placeholder32x32.png"
Image16by16="_layouts/15/images/placeholder16x16.png" />
</CommandUIDefinition>
</CommandUIDefinitions>
<CommandUIHandlers>
<CommandUIHandler Command="Invoke_RibbonCustomAction1ButtonRequest"
CommandAction="LibraryManager\Pages\LibraryDetails.aspx"/>
</CommandUIHandlers>
</CommandUIExtension >
</CustomAction>
</Elements>
[/sourcecode]
Deploying the App that contains the custom action markup shown above creates a new button in the ribbon of each Host Web Document Library. By default, each button looks like the following:
There are a few attributes in the previous XML that I’m going to repeatedly come back to, so it’s worth taking a closer look at each one’s purpose and associated value(s):
Image32by32 and Image16by16 for the <Button /> element. These two attributes specify the images that are used when rendering the custom action button on the ribbon. By default, they point to an orange dot placeholder image that lives in the farm’s _layouts folder.
CommandAction for the <CommandUIHandler /> element. In its simplest form, this is the URL of the page to which the user is redirected upon pressing the custom ribbon button.
The Problem with the Default CommandAction
When a user clicks on a custom ribbon button in one of the Host Web document libraries, the goal is to send them over to a page in the App Web where the custom action can be processed. Unfortunately, the default CommandAction isn’t set up in a way that permits this.
In fact, attempting to deploy the solution to Office 365 with this default CommandAction results in failure; the App package doesn’t pass validation.
To understand why the failure occurs, it’s important to remember the isolation that exists between the Host Web and the App Web. To illustrate how the Host Web and App Web are different from simply a hostname perspective, consider the project I’ve been working on as an example:
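Host Web URL (the hostname is an illustrative stand-in): https://spdev.contoso.com/sites/dev2
App Web URL (also illustrative): https://app-af1e22ab543f75.apps.contoso.com/sites/dev2/LibraryManager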
Notice that although the /sites/dev2 relative path portion is the same for both the Host Web and App Web URLs, the hostname portion of each URL is different. This is by design, and it helps to enforce the logical separation between the Host Web and App Web – even though the App Web technically resides within the Host Web.
Looking again at the default CommandAction attribute reveals that its value is just an ASPX page identified with a relative URL. Rather than pointing to the page we want in the App Web, that relative URL ends up resolving to a location in the Host Web.
And this is exactly what should happen. After all, the custom action is launched from within the Host Web, so a relative path specification should resolve to a location in the Host Web – not the location we actually want to target in the App Web.
Fixing the CommandAction
Thankfully, it isn’t a major undertaking to correct the CommandAction attribute value so that it points to the App Web instead of the Host Web. If you’ve worked with SharePoint at all in the past, then you may know that the key to making everything work (in this situation) is the judicious use of tokens.
What are tokens? In this case, tokens are specific string sequences that SharePoint parses at run-time and replaces with a value based on the run-time environment, action that was performed, associated list, or some other context-sensitive value that isn’t known at design-time.
To illustrate how this works, consider the default CommandAction attribute:
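[sourcecode language="XML" autolinks="false"]
CommandAction="LibraryManager\Pages\LibraryDetails.aspx"
[/sourcecode]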
Modifying the attribute as follows changes the destination URL of the button so that the user is redirected to the desired page in the App Web rather than the Host Web:
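[sourcecode language="XML" autolinks="false"]
CommandAction="~appWebUrl/Pages/LibraryDetails.aspx"
[/sourcecode]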
Since tokens are the key to inserting context-dependent values at run-time, you’d think they’d have been implemented and usable anywhere a developer needs to cross the Host Web / App Web divide.
Apparently not. To be more specific (and fair), I should instead say “not consistently.”
Since this blog post is about image limitations with custom ribbon buttons, you can probably guess where I’m headed with all of this. So, let’s take a look at the Image16by16 and Image32by32 attributes.
What About the Image32by32 and Image16by16 Attributes
By default, the Image16by16 and Image32by32 attributes point to a location in the _layouts folder for the farm. Each attribute value references an image that is nothing more than a little round orange dot:
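[sourcecode language="XML" autolinks="false"]
Image32by32="_layouts/15/images/placeholder32x32.png"
Image16by16="_layouts/15/images/placeholder16x16.png"
[/sourcecode]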
Much like the CommandAction attribute, it stands to reason that developers would want to replace the placeholder image attribute values with URLs of their choosing. In my case, I wanted to use a set of images I was deploying with the rest of the application assets in my App Web. So, I updated my image attributes to look like the following:
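[sourcecode language="XML" autolinks="false"]
Image32by32="~appWebUrl/Images/sharepoint-library-analyzer_32x32-a.png"
Image16by16="~appWebUrl/Images/sharepoint-library-analyzer_16x16-a.png"
[/sourcecode]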
I deployed my App to my Office 365 Preview tenant, watched my browser launch into my App Web, hopped back to the Host Web, navigated to a document library, and looked at the toolbar. I was not happy with what I saw (on the left).
The image I had specified for use by the button wasn’t being used. All I had was a broken image link.
Examining the properties for the broken image quickly confirmed my fear: the ~appWebUrl token was not being processed for either of the Image32by32 or Image16by16 attributes. The token was being output directly into the image references.
I tried changing the image attributes to reference the App Web a couple of different ways (and with a couple of different tokens), but none of them seemed to work.
To see if I was losing my mind, I decided to try a little experiment. Although I knew it wouldn’t solve my particular problem, I decided to try the ~site token just to see if it would be parsed properly. Lo and behold, it was parsed and replaced. So, ~site worked … but ~appWebUrl didn’t?
That didn’t make any sense. If it isn’t possible to use the ~appWebUrl token, how are developers supposed to reference custom images for the buttons they deploy in their Apps? Without the ~appWebUrl, there’s no practical way to reference an item in the App Web from the Host Web.
Token Forensics
When I find myself in situations where I’m holding results that don’t make sense, I can’t help myself: I pull out Reflector and start poking around for clues inside SharePoint’s plumbing. If I dig really hard, sometimes I find answers to my questions.
After some poking around with Reflector, I discovered that the “journey to enlightenment” (in this case) started with the RegisterCommandUIWithRibbon method on the SPCustomActionElement type. It is in this method that the Image16by16 and Image32by32 attributes are read-in from the XML file in which they are defined. Before assignment for use, they’re passed through a couple of methods that carry out token parsing:
ReplaceUrlTokens on the SPCustomActionElement type
UrlFromPrefixedUrlCore on the SPUtility type
Although these methods together are capable of recognizing and replacing many different token types (including some I hadn’t seen listed in existing documentation; e.g., ~siteCollectionLayouts), none of the new SharePoint 2013 tokens, like the ~appWebUrl and ~remoteAppUrl tokens, appear in these methods.
Interestingly enough, I didn’t see any noteworthy differences between the path of execution for processing image attributes and the sequence of calls through which CommandAction attributes are handled in the RegisterCommandUIExtension method of the SPRibbon type. The RegisterCommandUIExtension method eventually “punches down” to the ReplaceUrlTokens and UrlFromPrefixedUrlCore methods, as well.
The differences I was seeing in how tokens were handled between the CommandAction and Image32by32/Image16by16 attributes had to be originating somewhere else – not in the processing of the custom action XML.
Deployment Modifications
After some more digging in Reflector to determine where the ~appWebUrl actually showed-up and was being processed, I came across evidence suggesting that “something special” was happening on App deployment rather than at run-time. The ~appWebUrl token was being processed as part of a BuildTokenMap call in the SPAppInstance type; looking at the call chain for the BuildTokenMap method revealed that it was getting called during some App deployment operations processing.
If changes were taking place on App deployment, then I had a hunch I might find what I was looking for in the content database housing the Host Web to which my App was being deployed. After all, Apps get deployed to App Webs that reside within a Host Web, and Host Webs live in content databases … so, all of the pieces of my App had to exist (in some form) in the content database.
I fired-up Visual Studio, stopped deploying to Office 365, and started deploying my App to a site collection on my local SharePoint 2013 VM farm. Once my App was deployed, I launched SQL Management Studio on the SQL Server housing the SharePoint databases and began poking around inside the content database where the target site collection was located.
Brief aside: standard rules still apply in SharePoint 2013, so I’ll mention them here for those who may not know them. Don’t poke around inside content databases (or any other databases) in live SharePoint environments you care about. As with previous versions, querying and working against live databases may hurt performance and lead to bigger problems. If you want to play with the contents of a SharePoint database, either create a SQL snapshot of it (and work against the snapshot) or mount a backup copy of the database in a test environment.
I wasn’t sure what I was looking for, so I quickly examined the contents of each table in the content database. I hit paydirt when I opened-up the CustomActions table. It had a single row, and the Properties field of that row contained some XML that looked an awful lot like the Elements.xml which defined my custom action:
[sourcecode language="XML" autolinks="false"]
<?xml version="1.0" encoding="utf-16"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
<CustomAction Title="Invoke 'LibraryDetailsCustomAction' action" Id="4f835c73-a3ab-4671-b142-83304da0639f.LibraryDetailsCustomAction" Location="CommandUI.Ribbon" RegistrationId="101" RegistrationType="List" Sequence="10001">
<CommandUIExtension xmlns="http://schemas.microsoft.com/sharepoint/">
<!--
Update the UI definitions below with the controls and the command actions
that you want to enable for the custom action.
-->
<CommandUIDefinitions>
<CommandUIDefinition Location="Ribbon.Library.Actions.Controls._children">
<Button Id="Ribbon.Library.Actions.LibraryDetailsCustomActionButton" Alt="Examine Library Details" Sequence="100" Command="Invoke_LibraryDetailsCustomActionButtonRequest" LabelText="Examine Library Details" Image16by16="~site/Images/sharepoint-library-analyzer_16x16-a.png" Image32by32="~appWebUrl/Images/sharepoint-library-analyzer_32x32-a.png" TemplateAlias="o1"/>
</CommandUIDefinition>
</CommandUIDefinitions>
<CommandUIHandlers>
<CommandUIHandler Command="Invoke_LibraryDetailsCustomActionButtonRequest" CommandAction="javascript:LaunchApp('709d9f25-bb39-4e6a-97d5-6e1d7c855f38', 'i:0i.t|ms.sp.int|a441fa2c-8c5f-4152-9085-3930239ab21b@9db0b916-0dd6-4d6c-be49-41f72f5dfc02', '~appWebUrl\u002fPages\u002fLibraryDetails.aspx?ListID={ListId}\u0026SiteUrl={SiteUrl}', null);"/>
</CommandUIHandlers>
</CommandUIExtension>
</CustomAction>
</Elements>
[/sourcecode]
There were some differences, though, between the Elements.xml I had defined earlier and what actually appeared in the Properties field. I narrowed my focus to the differences between the non-working Image32by32/Image16by16 attributes and the working CommandAction attribute.
As suspected, some deployment-time processing had been performed on the CommandAction attribute but not on the image attributes. The CommandAction still contained an ~appWebUrl token, but it was wrapped up as a parameter to a LaunchApp JavaScript function that appeared to be executed client-side in the browser.
Jumping into my App in Internet Explorer and opening IE’s debugging tools via <F12>, I did a search for the LaunchApp function within the referenced scripts and found it in the core.js library/script. Examining the LaunchApp function revealed that it called the LaunchAppInternal function; LaunchAppInternal, in turn, called back to the SharePoint server’s /_layouts/15/appredirect.aspx page with the parameters that were supplied to the original LaunchApp method – including the URL with the ~appWebUrl token.
To complete the journey, I opened up the Microsoft.SharePoint.ApplicationPages.dll assembly back on the server and dug into the AppRedirectPage class that provides the code-behind support for the AppRedirect.aspx page. When the AppRedirect.aspx page is loaded, control passes to the page’s OnLoad event and then to the HandleRequest method. HandleRequest then uses the ReplaceAppTokensAndFixLaunchUrl method of the SPTenantAppUtils class to process tokens.
The ReplaceAppTokensAndFixLaunchUrl method is noteworthy because it includes parsing and replacement support for the ~appWebUrl token, the ~remoteAppUrl token, and other tokens that were introduced with SharePoint 2013. The deployment-time processing that is performed on the CommandAction attribute is what ultimately wires-up the CommandAction to the ReplaceAppTokensAndFixLaunchUrl method. The Image32by32 and Image16by16 attributes don’t get this treatment, and so the new 2013 tokens (like ~appWebUrl) can’t be used by these attributes.
What About the Image32by32 and Image16by16 Attributes?
Now that some of the key differences in processing between the CommandAction attribute and image attributes have been identified, let me jump back to the original problem. Is there anything that can be done with the Image32by32 and Image16by16 attributes that are specified in a custom action to get them to reference assets that exist in the App Web? Since tokens like ~appWebUrl (and ~remoteAppUrl for all you Autohosted and Provider-hosted application builders) aren’t parsed and processed, are there alternatives?
My response is a somewhat wishy-washy “doubtful.” In my estimation, you’d need to hack SharePoint with something like a javascript: tag for an image attribute (which, interestingly enough, doesn’t appear to be expressly blocked), find some way to obtain the App Web URL base, formulate the proper path to the image, and more. If it could be done, you’d be gaming SharePoint … and I could easily see a cumulative update or service pack breaking this type of elaborate work-around.
The safest and most pragmatic way to handle this situation, it seems, is to use absolute URLs for the desired image resources and forget about deploying them to the App Web altogether. For example, I placed the images I was trying to use on the ribbon buttons here on my blog and referenced them as follows:
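The markup ended up looking something like the snippet below; the example.com URLs shown here are placeholders standing in for the actual blog-hosted image locations:
[sourcecode language="XML"]
<!-- Button definition using absolute image URLs instead of App Web assets.
     The example.com addresses are placeholders; substitute wherever you
     host your images. -->
<Button Id="Ribbon.Library.Actions.LibraryDetailsCustomActionButton"
        Alt="Examine Library Details"
        Sequence="100"
        Command="Invoke_LibraryDetailsCustomActionButtonRequest"
        LabelText="Examine Library Details"
        Image16by16="http://www.example.com/images/sharepoint-library-analyzer_16x16-a.png"
        Image32by32="http://www.example.com/images/sharepoint-library-analyzer_32x32-a.png"
        TemplateAlias="o1"/>
[/sourcecode]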
I had some initial concerns that I might inadvertently bump into some security boundaries, such as those that sometimes arise when an asset is referenced via HTTP from a site that is being served up under HTTPS. This didn’t prove to be the case, however. I tested the use of absolute URLs in both my development VM environment (served up under HTTP) and through one of my Office 365 Preview site collections (accessed via HTTPS), and no browser security warnings popped up. The target image appeared on the custom button as desired (shown on the left) in both cases.
Although the use of absolute URLs will work in many cases, I have to admit that I’m still not a big fan of this approach – especially for SharePoint-hosted apps like the one I’ve been working on. Even though Office 365 entails an “always connected” scenario, I can easily envision on-premises deployment environments that are taken offline some or all of the time. I can also see (and have seen in the past) SharePoint environments where unfettered Internet access is the exception rather than the rule.
In these environments, users won’t see image buttons at all – just blank placeholders or broken image links. After all, without Internet access there is no way to resolve and download the referenced button images.
Wrapping It Up
At some point in the future, I hope that Microsoft considers extending token parsing for URL-based attributes like Image32by32 and Image16by16 to include the ~appWebUrl, ~remoteAppUrl, and other new tokens used by the SharePoint 2013 App Model. In the meantime, though, you should probably consider getting an easily accessible online location (SkyDrive, Dropbox, a blog, etc.) for images and other similar assets if you’re building apps under the new SharePoint 2013 App Model and intend to use custom actions.
Update (1/27/2013)
I need to issue a couple of updates and clarifications. First, I need to be very clear and state that SharePoint-hosted apps were the focus of this post. In a SharePoint-hosted app, what I’ve written is correct: there is no processing of “new” 2013 tokens (like ~appWebUrl and ~remoteAppUrl) for the Image32by32 and Image16by16 attributes. Interestingly enough, though, there does appear to be processing of the ~remoteAppUrl in the Image32by32 and Image16by16 attributes specifically for the other application types such as provider-hosted apps and autohosted apps. Jamie Rance mentioned this in a comment (below), and I verified it with an autohosted app that I quickly spun-up.
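For provider-hosted and autohosted apps, then, an image reference along the following lines should work (a hypothetical sketch – the ID, command, and paths are made up):
[sourcecode language="XML"]
<!-- Provider-hosted/autohosted apps only: the ~remoteAppUrl token does get
     processed for the image attributes. All names here are hypothetical. -->
<Button Id="Ribbon.Library.Actions.RemoteImageButton"
        Command="Sample_RemoteCommand"
        LabelText="Remote-Hosted Images"
        Image16by16="~remoteAppUrl/Images/button_16x16.png"
        Image32by32="~remoteAppUrl/Images/button_32x32.png"
        TemplateAlias="o1"/>
[/sourcecode]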
I double-checked to see if the ~remoteAppUrl token would even be recognized/processed (despite the lack of a remote web component) for SharePoint-hosted apps, and it is not … nor is the ~appWebUrl token processed for autohosted apps. The selective implementation of only the ~remoteAppUrl token for certain app types has me baffled; I hope that we’ll eventually see some clarification or changes. If you’re building provider-hosted or autohosted apps, though, this does give you a way to redirect image requests to your remote web application rather than an absolute endpoint. Thank you, Jamie, for the information!
And now for some good news for SharePoint-hosted app creators. Prior to writing this post, I had posted a question about the tokens over in the SharePoint Stack Exchange forums. At the time I wrote this post, there hadn’t been any activity to suggest that a solution or workaround existed. F. Aquino recently supplied an incredibly creative answer, though, that involves using a data URI to Base64-encode the images and package them directly into the Image32by32 and Image16by16 attributes themselves! Although this means that some image pre-processing will be required to package images, it gets around the requirement of being “always-connected.” This is an awesome technique, and I’ll certainly be adding it to my arsenal. Thank you, F. Aquino!
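To give you an idea of the shape of the technique, the attribute ends up carrying the encoded image inline, something like the sketch below (the Base64 payload is truncated placeholder data, not a real image):
[sourcecode language="XML"]
<!-- The image travels inline as a data URI, so no server request is needed
     to render it. The Base64 string below is truncated placeholder data. -->
<Button Id="Ribbon.Library.Actions.InlineImageButton"
        Command="Sample_InlineCommand"
        LabelText="Inline Image"
        Image16by16="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA..."
        TemplateAlias="o1"/>
[/sourcecode]
You’d Base64-encode each PNG ahead of time (any encoder will do) and paste the resulting string into the attribute value.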
After several months of working with the Office 365 Preview, I’ve shifted my thinking on SharePoint in the Cloud. In this post, I share my thoughts and “revelations” about what’s coming with SharePoint 2013, Office 365, and usage of SharePoint in the Cloud.
It was about a year and a half ago when someone dialed-up the volume on “The SharePoint Cloud Message” in my world. It’s not that I hadn’t heard people talking about SharePoint in the Cloud prior to that; I guess it’s just that I started listening more closely because Microsoft was turning into one of the Cloud’s most vocal proponents.
My relationship with Microsoft and Microsoft technologies goes back to the days of MS-DOS. As a result, I’ve always seen Microsoft as a company that was primarily interested in one thing: selling software. I worked for a Microsoft managed systems integration (SI) partner – Cardinal Solutions Group – for several years. During my years with Cardinal, my goal was to help others who had purchased Microsoft software make use of that software. In many cases, customer leads came from Microsoft either directly or indirectly. Microsoft sold the software, and we set up/customized/serviced/configured that software based on what a customer was trying to accomplish. It was a symbiotic relationship, and it was pretty easy for me to grasp.
Then the whole “Cloud thing” started. Cloud-based SharePoint and other Azure-branded services seemed like a somewhat confusing move for Microsoft at first – at least to me. Even before Office 365, Microsoft offered hosted SharePoint through BPOS – or the Business Productivity Online Suite. When BPOS was first released, I viewed it as something of a niche market for Microsoft. I had plenty of friends who worked at places like Rackspace and Fpweb.net, so the part I found unusual wasn’t really that “someone else” was hosting SharePoint and focusing on it as a service. The fact that Microsoft itself was getting serious about hosting SharePoint and other services was the eyebrow-raiser.
For Microsoft, it wasn’t just about selling software anymore.
The Biggest Hurdle
Of course, when Microsoft wants to succeed at something, they invest considerable planning and resources in it. Since Microsoft is essentially betting the farm (pun intended) on Office 365 and SharePoint in the Cloud, they’re pushing it very hard on multiple fronts. Redmond’s marketing machine has been talking up Office 365 frequently and loudly for at least the last year. With each new release, developer tools like Visual Studio get more Cloud-friendly. Partners have incentives to get customers onto Office 365 and Azure services. Competitive price points make it difficult to ignore Microsoft’s Cloud offerings. For me (and I’m sure for many of you), it’s a lot to process.
I’d also be remiss if I didn’t say that I think Office 365 has a very compelling value proposition, even without SharePoint. SharePoint itself is a complex platform, though, and many organizations struggle with administrative needs like data protection, performance optimization, high availability, and basic day-to-day management. The idea of turning these concerns over to someone else (or some other entity) who better understands them makes sense to me.
After working with SharePoint 2013 for several months now, I can easily say that the platform isn’t getting any easier. SharePoint 2013 has quite a few more “moving parts” relative to SharePoint 2010, just as SharePoint 2010 demonstrated itself to be significantly more complex than SharePoint 2007.
Despite the compelling nature of Office 365, I always seemed to come back around to fixate on one thought. This thought constantly reverberated through my head anytime “SharePoint in the Cloud” became a topic of conversation:
Most companies using SharePoint have made significant investments in hardware, software, personnel, and services to get SharePoint up-and-running. They aren’t going to simply “dump” those on-premises investments and go to the Cloud tomorrow. The Cloud will happen, but it’s going to take longer than Microsoft thinks.
In discussions with many friends and respected professionals in the SharePoint community, I knew that I wasn’t completely alone in my way of thinking. In the conversations I’d had, there was almost always agreement that a shift to the Cloud and Cloud-based services would happen over time. The greatest debate seemed to be over whether it would happen within the next year or take half a decade.
Breakthrough
I’d say my “breakthrough moment” came after I started playing with the Office 365 Preview more extensively a few months back. I initially set up a preview tenant to familiarize myself with what was coming, how SharePoint 2013 would be exposed, how to configure Office 365 tenants, etc. The more I played with the tenant, the more I thought about how truly useful Office 365 could be, particularly for non-enterprise customers, home users, and others who didn’t fit into SharePoint’s “big deployment picture” previously.
That’s when the pieces started to click into place for me. All along I had been thinking about Office 365 and Cloud-based SharePoint deployments along the lines of the bar chart seen above and to the right. Numbers and proportions are all relative, but the key concept I’m trying to convey with the chart is this: for some reason, I had always thought that the proponents of Cloud-based SharePoint were suggesting that Cloud adoption would come at the cost of on-premises deployments; i.e., on-premises users would “convert” to the Cloud. If Cloud-based deployments grew, that meant that on-premises deployments had to shrink. In short: I was inadvertently assuming that the overall number of SharePoint deployments had hit saturation and was remaining static.
I don’t think that way at all anymore.
After I’d done some playing with my first tenant, it wasn’t long before I was setting up another two Office 365 tenants for other side projects. In conversations with friends in the SharePoint community, I was discovering that “everyone” was setting up tenants for their families, for their spouse’s business, etc. In almost all cases where tenants were being set up, the use cases were ones that didn’t align with traditional enterprise-scale on-premises SharePoint deployment and usage. In fact, the use cases were typically the types of things that would eventually find a home on Google Apps or its equivalent because Microsoft (previously) had nothing strong to offer in that space.
The more I think about it, the more I feel that Office 365 growth – once the new 2013 Preview goes live – will be aggressive and look something more like what I’ve charted above and to the left. While Office 365 might replace some on-premises deployments, particularly for smaller organizations, I don’t see that as its primary market (initially) or its strong suit. Office 365 is going to gain the greatest traction with users who need a Google Apps-like solution but for whom buying the required infrastructure and expertise for Exchange, SharePoint, etc., is cost-prohibitive.
So, I stopped thinking “replacement” and started thinking “complement.” That’s my assessment and working outlook for the Office 365 (Preview) right now.
Why Not Everyone?
I’m sure that plenty of folks who’ve believed in “Cloud Power” since Day One probably think that I’m still being too conservative in my outlook for SharePoint on Office 365, and that may be true. However, I still see plenty of concerns that are near-and-dear to most enterprise and larger business customers, and I believe that they will be Cloud adoption blockers until they’re addressed directly and decisively. Here are just a few that come to mind.
1. What about disasters? Many people point to the Cloud as a solution for business continuity and disaster recovery (DR) concerns. The Cloud can certainly help, but I’ll tell you (somewhat authoritatively) that the Cloud doesn’t make DR concerns “go away” – especially for SharePoint. For one thing, you’re locked into your provider’s terms of service; if you need more aggressive RPO and RTO windows, then you need to be looking elsewhere. Even Cloud data centers themselves go down; what’s your plan then?
2. Can I leave my provider? Everyone is quick to talk about moving to the Cloud, and many companies are happy to talk about migration strategies. What if you want to leave or change providers, though? Do those migration strategies work? What do you lose? How long would it even take? These may not seem like important questions now, but they will become increasingly important as Cloud adoption grows and more companies get in on the action. It stands to reason that some portion of those companies will fail, close up shop, be bought, etc. When that happens, what do you do … and what happens to your SharePoint?
Wrap Up
Of course, my perspective on Office 365 uptake in the next several years could be completely off-the-mark. After all, I don’t really have any numbers to back up my hypotheses. They’re just my opinions, but they are in line with my gut feel.
After applying some recently-released patches for SharePoint 2013, my farm’s App infrastructure went belly-up. This post describes my troubleshooting and resolution.
I’ve been doing a lot of work with the new SharePoint 2013 App Model in the last few months. Specifically, I’ve been working on a free tool (for Idera) that will be going into the SharePoint App Marketplace sometime soon. The tool itself is not too terribly complicated – just a SharePoint-hosted app that will allow users to analyze library properties, compare library configuration settings, etc.
The development environment that I was using to put the new application together had been humming along just fine … until today. It seems I tempted fate by applying a handful of RTM patches to my environment.
What Happened?
I’d heard that some patches for SharePoint 2013 RTM had been released, so I pulled them down and applied them to my development environment.
After all binaries had been installed and a reboot was performed, I ran the SharePoint 2013 Products Configuration Wizard. The wizard ran and completed without issue, Central Administration popped-up afterwards, and life seemed to be going pretty well.
I went back to working on my SharePoint-hosted app, and that’s when things went south. When I tried to deploy the application to my development site collection from Visual Studio 2012, it failed with the following error message:
Error occurred in deployment step ‘Install app for SharePoint’: We’re sorry, we weren’t able to complete the operation, please try again in a few minutes. If you see this message repeatedly, contact your administrator.
Okay, I thought, that’s odd. Let’s give it a second.
Three failed redeploys later, I rebooted the VM to see if that might fix things. No luck.
Troubleshooting
My development wasn’t going to move forward until I figured out what was going on, so I did a quick hunt online to see if anyone had encountered this problem. The few entries I found indicated that I should verify my App settings in Central Administration, so I tried that. Strangely, I couldn’t even get those settings to come up – just error pages.
All of this was puzzling. Remember: my farm was doing just fine with the entire app infrastructure just a day earlier, and all of a sudden things were dead in the water. Something had to have happened as a result of the patches that were applied.
Not finding any help on the Internet, I fired-up ULSViewer to see what was happening as I attempted to access the farm App settings from Central Administration. These were the errors I was seeing:
Insufficient SQL database permissions for user ‘Name: SPDC\svcSpServices SID: S-1-5-21-1522874658-601840234-4276112424-1115 ImpersonationLevel: None’ in database ‘SP2013_AppManagement’ on SQL Server instance ‘SpSqlAlias’. Additional error information from SQL Server is included below. The EXECUTE permission was denied on the object ‘proc_GetDataRange’, database ‘SP2013_AppManagement’, schema ‘dbo’.
Seeing that my service account (SPDC\svcSpServices) didn’t have the access it needed to run the proc_GetDataRange stored procedure left me scratching my head. I didn’t know what sort of permissions the service account actually required or how they were specifically granted. So, I hopped over to my SQL Server to see if anything struck me as odd or out-of-place.
Looking at the SP2013_AppManagement database, I saw that members of the SPDataAccess role had rights to execute the proc_GetDataRange stored procedure. SPDC\svcSpServices didn’t appear to be a direct member of that role (as far as I could tell), so I added it. Bazinga! Adding the account to the role permitted me to once again review the App settings in Central Administration.
Unfortunately, I still couldn’t deploy my Apps from Visual Studio. Going back to the ULS logs, I found the following:
Insufficient SQL database permissions for user ‘Name: NT AUTHORITY\IUSR SID: S-1-5-17 ImpersonationLevel: Impersonation’ in database ‘SP2013_AppManagement’ on SQL Server instance ‘SpSqlAlias’. Additional error information from SQL Server is included below. The EXECUTE permission was denied on the object ‘proc_AM_PutAppPrincipal’, database ‘SP2013_AppManagement’, schema ‘dbo’.
It was obvious to me that more than just a single account was out of whack since the proc_AM_PutAppPrincipal stored procedure was now in-play. Rather than try to manually correct all possible permission issues, I decided to try and get SharePoint to do the heavy lifting for me.
Resolution
Knowing that the problem was tied to the Application Management Service, I figured that one (possible) easy way to resolve the problem was to simply have SharePoint reprovision the Application Management Service service application. To do this, I carried out the following:
Deleted my App Management Service Application instance (which I happened to call “Application Management Service”) in Central Administration. I checked the box for Delete data associated with the Service Applications when it appeared to ensure that I got a new app management database.
Once the service application was deleted, I created a new App Management Service service application. I named it the same thing I had called it before (“Application Management Service”) and re-used the same database name I had been using (“SP2013_AppManagement”). I re-used the shared services application pool I had been using previously, too.
After completing these steps, I was able to successfully deploy my application to the development site collection through Visual Studio. I no longer saw the stored procedure access errors appearing in the ULS logs.
What Happened?
I don’t know what happened exactly, but what I observed seems to suggest that one of the patches I applied messed with the App Management service application database. Specifically, rights and permissions that one or more accounts possessed were somehow revoked by removing those accounts from the SPDataAccess role. Additional role and/or permission changes could have been made, as well – I just don’t know.
Once everything was running again, I went back into my SQL Server and had a look at the (new) SP2013_AppManagement database. Examining the role membership for SPDC\svcSpServices (which was one of the accounts that was blocked from accessing stored procedures earlier), I saw that the account had been put (back) into the SPDataAccess role. This seemed to confirm my observation that somehow things became “unwired” during the patching and/or configuration wizard run process.
My recent attempts to configure the Windows Azure Workflow service (Workflow 1.0 Beta) with a SQL Server alias didn’t go so well. If you’re playing with Workflow 1.0 Beta, stay away from aliases!
I’ve been doing a bit of build-out with the new SharePoint 2013 Preview in anticipation of some development work, and I’ve documented a few snags that I’ve hit along the way. Although I ran into some additional problems with the SharePoint 2013 Preview yesterday, this post isn’t about SharePoint specifically; it’s about the Windows Azure Workflow service – also known (at this point in time) simply as Workflow 1.0 Beta.
A Bit of Background
If you’re brand-new to the SharePoint 2013 scene, you may not yet have heard: the future for workflow lies outside of SharePoint, not within it. The Windows Azure Workflow service (yes, it even has “Azure” in the name if you’re running it on-premises and not in the cloud) is industrial-strength stuff, and it promises all sorts of improvements over workflow as we know it (and use it) right now.
Taking advantage of Windows Azure Workflow at this point in the SharePoint 2013 release cycle requires installing the Workflow 1.0 Beta. The installation is not a particularly complicated process, but that’s probably because I’ve been using a solid resource.
Note: the “solid resource” I’m referring to is CriticalPath Training’s VM setup guide. I’ve been using it as a reference as I’ve been doing my SharePoint 2013 build-outs; the guide itself is fantastic and comes with some supporting PowerShell scripts to help things along. The guide and scripts are freely available here – you just need to create an account on the CriticalPath Training site to download them. I recommend them if you’re just getting started with the SharePoint 2013 Preview.
So, what’s my beef with the Workflow 1.0 Beta? To summarize it in a few words: Workflow 1.0 Beta doesn’t seem to work with SQL Server aliases. I certainly tried, but in the end I was forced to abandon using an alias.
How I Initially Configured It
If you read my previous “An unexpected error has occurred” post, then you know that there are four different VMs I’m configuring for a SharePoint 2013 environment. Two of those VMs are of interest in the discussion about Workflow 1.0 Beta configuration:
SP2013-SQL. A SQL Server 2012 Enterprise VM
SP2013-APPS. A utility server for running Workflow 1.0 Beta and other “off-box” services
As a general rule of thumb, anytime I need to establish a SQL Server connection, I try to create a SQL Server alias to avoid tightly coupling my SQL Server consumers/clients directly to a SQL Server instance. This buys me some flexibility in the unfortunate event that a server dies, I need to relocate databases, etc.
I was planning to install the Workflow 1.0 Beta on my SP2013-APPS virtual machine, and I knew that Workflow 1.0 Beta would need to connect to my SP2013-SQL SQL Server. So, I created both a 32-bit alias and a 64-bit alias called SpSqlAlias for the default SQL Server instance residing on SP2013-SQL (which happened to be at IP address 172.16.0.2) as shown on the left.
Once the alias was created and all other prerequisites were addressed, I started the Workflow 1.0 Beta installation process. In the Workflow Configuration Wizard, I supplied my SQL Server alias in place of a server name, checked the connection, and was given a green check-mark. As the configuration process started, everything looked good. Even the Service Bus farm management and gateway databases were created without issue.
The problems started shortly thereafter, though, during the creation of a default container. Basically, I didn’t get any further. I literally stared at the screen on the right for a full ten (10) minutes without seeing any meaningful activity in the Details box. After 10 minutes had elapsed, the configuration process failed and I was treated to an exception message and stack trace. Omitting the inner exception detail, here’s what I was told:
[sourcecode language="text"]
System.Management.Automation.CmdletInvocationException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) ---> System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) ---> System.ComponentModel.Win32Exception: The system cannot find the file specified
[/sourcecode]
Validating the Alias
Of course, the first thing I double-checked was the SQL Server to ensure that it was responding. It was. I even backed through the configuration wizard a couple of steps and verified (with the “Test Connection” button) that I could reach the SQL Server. No issues there: my SQL Server alias was valid as far as the configuration wizard was concerned.
Looking more closely at the exception message left me suspicious. This part in particular made me raise my eyebrow:
(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
Named Pipes Provider? I had specified a TCP/IP alias, not Named Pipes. Changing the permitted 32-bit and 64-bit client protocols (again, via the SQL Server Configuration Manager) to make sure that TCP/IP was enabled and Named Pipes was disabled …
… made no difference, either – I’d still get an exception from the Named Pipes Provider. It looked as though one or more steps in the configuration process were “doing their own thing,” ignoring my alias and client protocols configuration, and (as a result) having trouble reaching the SQL Server.
Trying to Go with the Flow
The thought that entered my mind was, “Ok – don’t fight it if you don’t have to.” If the configuration wizard was going to fall back to using Named Pipes, then I’d go ahead and set up a Named Pipes alias. I wasn’t thrilled about the idea, but I’d rather have the SQL Server alias in place than no alias at all.
So much for that thought.
I played with the actual Named Pipes alias format quite a bit, but in the end the result was always the same.
Attempts to use a TCP/IP alias always failed partway through configuration, and attempts to use a Named Pipes alias never even got started.
The Result
I gave it some more thought … and came up empty. So, I dumped any remaining aliases, ensured that all client protocols were back to their fully enabled state, and tried to do the configuration with just the SQL Server host name (to connect to the default instance).
The result?
Using just the host name, I had no issues performing the configuration.
The Conclusion
If you are setting up Workflow 1.0 Beta, stay away from SQL Server aliases. As best as I can tell, they aren’t (yet) supported. I’m hopeful that this is just a beta bug or limitation.
On the other hand, if you think I’ve gone off the deep end and can find some way to get the Workflow 1.0 Beta configuration to run with SQL Server aliases, please let me know – I’d love to hear about it!
After installing the current SharePoint 2013 preview build, I was greeted by an “An unexpected error has occurred” message while trying to navigate to the Central Administration site. This post describes the steps I took to troubleshoot the problem and implement a least-privileges fix for it.
You’ve undoubtedly heard the news: SharePoint 2013 is coming. The preview is available right now, and you can download it from TechNet if you want to join in the fun. Just make sure you can meet the hardware and environmental prerequisites. They’re somewhat brutal.
As you might have guessed from the title of this post, I’ve been trying to get in on the SharePoint 2013 fun. There are a number of things I’m supposed to be working on for SharePoint 2013, so building out a SharePoint 2013 environment with the new preview build has been high on my list of things to do.
This post is about a very recent experience with a SharePoint 2013 installation and configuration … and yes, it’s one that had me looking long and hard for a happy pill.
As with many of my other blog posts, this post takes a winding, iterative approach towards analyzing problems and trying to find solutions. Please bear with me or jump to the “Implementing the Change” section near the end if you want to blindly apply a change (based on the blog post title) and hope for the best.
Hitting a Small Snag
This blog post would be something of a disappointment if all it said was “… SharePoint 2013 installed without issue, and my environment lived happily ever after.”
No such luck; just look at the screenshot on the left. Sometimes I feel like I’m a magnet for “bad technology karma” despite my attempts to keep a clean slate in that area. Of course, SharePoint 2013 is only in the preview stages of release, so hiccups are bound to occur. I accept that. Like many of you, I went through it with SharePoint 2010 and SharePoint 2007, as well.
Strangely, though, I built-out a SharePoint 2013 environment with an earlier build (prior to the release of the current preview) some time ago. That’s why I was really surprised to see the message shown in the screenshot immediately upon completing a run of the SharePoint 2013 Products Configuration Wizard:
An unexpected error has occurred.
That’s it. No additional information, no qualification – just a technological “whoops” accompanied by the equivalent of a shoulder shrug from my VM environment.
The Setup
Let me take a step back to describe the environment I had put into place before trying to install and configure the SharePoint 2013 binaries.
One major difference between my latest SharePoint 2013 setup attempt and the previous (successful) attempt was the make-up of the server environment. After learning of some of the install restrictions that are specific to SharePoint 2013 (for example, Office Web Apps require their own server), I decided to build out the following virtual servers on my laptop and assemble them into a domain:
SP2013-DC: a Windows 2008 R2 Enterprise domain controller (for my virtual spdc.com domain)
SP2013-SQL: a Windows 2008 R2 Enterprise server running SQL Server 2012 Enterprise
SP2013-WFE: a Windows 2008 R2 Enterprise all-in-one SharePoint 2013 Server
SP2013-APPS: a Windows 2008 R2 Enterprise “extra” server for roles/components that couldn’t be installed alongside SharePoint
Overkill? Perhaps, but I wanted to get a feel for how the different components might interact in a “real” production environment.
I also opted for a least-privileges install so that I could start to understand where some of the security boundaries had shifted versus SharePoint 2010. Since I planned to use the farm for my development efforts, I didn’t want to make the common developer mistake of shoehorning everything onto one server with unrestricted privileges. Such an approach dodges security-related issues during development, but it also tends to yield code that falls apart (or at least generates security concerns) upon first contact with a “real” SharePoint environment.
Failed Troubleshooting
As stated earlier, my setup problems started after I installed the SharePoint 2013 bits and ran the SharePoint 2013 Products Configuration Wizard. The browser window that popped-up following the configuration wizard’s run was trying to take me to the Farm Configuration wizard that lives inside the Central Administration site. Clearly I hadn’t gotten very far in configuring my environment.
I started looking in some of the usual locations for additional troubleshooting hints. Strangely, I couldn’t quickly find any:
The Central Administration site application pool looked okay and was spun-up
My Application and System event logs were pretty doggone clean – exceptionally few errors and warnings, and none that appeared relevant to the current problem
I didn’t see anything in the Security log to suggest problems
I tried an IISRESET. I rebooted the VM. I checked my SQL alias to make sure nothing was messed-up there. I checked my farm service account permissions in SQL Server to ensure that the account had the dbcreator and securityadmin role assignments as well as rights to the associated databases. Heck, I even deprovisioned the server and re-ran the SharePoint 2013 Products Configuration Wizard twice – once with a complete wipe of the databases. Nothing I did seemed to make a difference. Time after time, I kept getting “An unexpected error has occurred.”
Some Insight
Maybe it was my go ‘rounds with previous SharePoint beta releases, or maybe it was a combination of Eric Harlan’s and Todd Klindt’s spirits reaching out to me (the point of commonality between Todd and Eric: the two of them are fond of saying “it’s always permissions”). Whatever the source, I decided to start playing around with some account rights. Since I was setting up a least-privileges environment, it made sense that rights and permissions (or some lack of them) could be a factor.
The benefit of having gotten nearly nowhere on my farm configuration task was that there wasn’t much to really troubleshoot. Only a handful of application pools had been created (as shown on the right), and only one or two accounts were actually in-play. Since my Central Administration site was having trouble coming up, and knowing that the Central Administration site runs in the context of the farm service/timer service account, I focused my efforts there.
In my farm, I had assigned SPDC\svcSPFarm for use by the timer service. This account was a basic domain account at the start – nothing special, and no interesting rights to speak of. To see if I could make any progress on getting the Central Administration site to come up, I dropped the account into the Domain Admins group and tried to access the Central Administration site again.
I had no luck at first … but after an IISRESET and a re-launch of the site, Central Administration came up. I pulled the account out of the Domain Admins group and re-tried the site. It came up, but again – after an IISRESET, I was back to “An unexpected error has occurred.”
I repeated the process again, but the second time around I used the local (SP2013-WFE) Administrators group instead of the Domain Admins group. The results were the same: adding SPDC\svcSPFarm to the Administrators group allowed me to bring Central Administration up, and removing the account from the Administrators group brought things back down.
Hunch confirmed: it looked like I was dealing with some sort of rights or permissions issue.
Of course, knowing that there is a rights or permissions issue and knowing what the specific issue is are two very different things. The practical part of me screamed “just leave the account in the Administrators group and move on.”
Unfortunately, I don’t deal well with not knowing why something doesn’t work. It’s a personal hang-up that I have. So, I started with some low-impact/low-effort troubleshooting: I adjusted my VM’s Audit Policy settings (via the Local Security Policy MMC snap-in) to report on all failures that might pop-up.
Unfortunately, the only thing this change actually did for me was reveal that some sort of WinHttpAutoProxySvc service issue was popping-up when SPDC\svcSPFarm wasn’t an administrator. After a few minutes of researching the service, I decided that it probably wasn’t an immediate factor in the problem I was trying to troubleshoot.
So much for finding a quick answer.
Wading Into the Muck
I knew that I needed to dig deeper, and I knew where my troubleshooting was going to take me next. Honestly, I wasn’t too excited.
I went digging in my SysInternals folder and pulled out Process Monitor. For those of you who aren’t familiar with Process Monitor, I’ll sum it up this way: it’s the “nuclear option” when you need diagnostic information regarding what’s happening with the applications and services running on your system. Process Monitor collects file system activity, Registry reads/writes, network calls – pretty much everything that’s happening at a process level. It’s a phenomenal tool, but it generates a tremendous amount of information. And you need to wade through that information to find what you’re looking for.
I did an IISRESET, fired-up Process Monitor, and tried to bring up the Central Administration site once again. Since the SPDC\svcSPFarm account was no longer an administrator, I knew that the site would fail to come up. My hope was that Process Monitor would provide some insight into where things were getting stuck.
Over the course of the roughly 30 seconds it took the application pool to spin-up and then hand me a failure page, Process Monitor collected over 220,000 events.
Gulp.
I don’t know how you feel about it, but 220,000 events was downright intimidating to me. “Browsing” 220,000 events wasn’t going to be feasible. I’d worked with Process Monitor before, though, and I knew that the trick to making headway with the tool was in judicious use and application of its filtering capabilities.
Initially, I created filters to rule out a handful of processes that I knew wouldn’t be involved – things like Internet Explorer (iexplore.exe), Windows Explorer (Explorer.EXE), etc. Each filter that I added brought the number of events down, but I was still dealing with thousands upon thousands of events.
After a little thinking, I got a bit smarter with my filtering. First, I knew that I was dealing with an ASP.NET application pool; that was, after all, where Central Administration ran. That meant that the activity in which I was interested was probably taking place within an IIS worker process (w3wp.exe). I set a filter to show only those events that were tied to w3wp.exe activity.
Second, I knew that my farm service account (SPDC\svcSPFarm) was at the heart of my rights and permissions issue. So, I decided to filter out any activity that wasn’t tied to this account.
Applying those two filters got me down to roughly 50,000 events. Excluding SUCCESS results dropped me to 10,000 events. Some additional tinkering and exclusions brought the number down even lower. I was still wading through a large number of results, though, and I didn’t see anything that I could put my finger on.
Next, I decided to place SPDC\svcSPFarm back into the Administrators group and do another Process Monitor capture. As expected, I captured a few hundred thousand events. I went through the process of applying filters and whittling things down as I had done the first time. Then I spent a lot of time going back and forth between the successful and unsuccessful runs looking for differences that might explain what I was seeing.
Two-Bit Comedy
After doing a number of comparisons, I began to focus on a series of entries that were tagged with a result message of BAD IMPERSONATION (as seen below). I was seeing 145 of these entries (out of 220,000+ events) when the Central Administration site was failing to come up. When SPDC\svcSPFarm was part of the local Administrators group, though, I wasn’t seeing any of the entries.
My gut told me that these BAD IMPERSONATION entries were probably a factor in my situation, so I started looking at them a bit more closely.
Many of the entries were seemingly non-specific attempts to access the Registry, but I did notice a handful of file and Registry accesses where an explicit impersonation attempt was being made with the current user’s account context. In the example on the right, for instance, an attempt was being made by the worker process to use my account context (SPDC\s0ladmin) for a CreateFile operation – and that attempt was failing.
This led me to formulate what may seem like an obvious hypothesis: seeing the BAD IMPERSONATION results, I suspected that the SPDC\svcSPFarm account was lacking something like the ability to replace a process-level token or log on interactively. I’m certainly no expert when it comes to the specific boundaries and abilities associated with each rights assignment, but again – my gut was telling me that I should probably play around with some of the User Rights Assignments (via Local Security Policy) to see if I might get lucky.
A Fortunate Discovery
I popped open the Local Security Policy MMC snap-in on the SP2013-WFE VM once again, and I navigated down to User Rights Assignment node. At first glance, I feared that my gut feeling was off-the-mark. Looking through the rights assignments available, I saw that SPDC\svcSPFarm had already been granted the ability to Replace a process level token and Log on as a service – presumably by the SharePoint 2013 Products Configuration Wizard.
I continued looking at the various rights assignments, though, and I discovered one that looked promising: Impersonate a client after authentication. SPDC\svcSPFarm hadn’t been granted that right in my environment, and it seemed to me that such a right might be handy in getting rid of the BAD IMPERSONATION results I was seeing with Process Monitor. I took a leap, granted SPDC\svcSPFarm the ability to Impersonate a client after authentication (as shown on the left), performed an IISRESET, and tried to reach the Central Administration site.
And I’ll be darned if it didn’t actually work.
I don’t normally get lucky like that, but hey – I wasn’t going to argue with it. I browsed around the Central Administration site for a bit to see if the site would remain responsive, and I didn’t notice anything out of the ordinary. I also performed an IISRESET and brought the Central Administration site back up with Process Monitor running just to double-check things. Sure enough, the BAD IMPERSONATION results were gone.
The Fix?
I honestly have no idea whether this problem was specific to my environment or something that might be occurring in other SharePoint 2013 preview environments. I also don’t know if my solution is the “appropriate” solution to resolve the issue. It works for now, but I still have a lot of configuration and actual development work left to do to validate what I’ve implemented.
Since I’m trying to maintain a least-privileges install, though, I’m willing to try this out for a while instead of falling back to placing my farm service account (SPDC\svcSPFarm) in the Administrators group. Placing the account in that group is a last resort for me.
In case you were wondering: I did perform some level of verification on this change. Since the account I was running as (SPDC\s0ladmin) was itself a member of Domain Admins, I created a standard domain user account (SPDC\joe.nobody – he’s always my go-to guy in these situations) and added it to the Farm Administrators group in Central Administration. I then did an IISRESET and opened a browser to the Central Administration site from the domain controller (SP2013-DC) to see if SPDC\joe.nobody could indeed access the site. No troubles. The fact that the SPDC\joe.nobody account wasn’t a member of either Domain Admins or the local Administrators group (on SP2013-WFE) did not block the account from reaching Central Administration. No “An unexpected error has occurred” reared its head.
Implementing the Change
If you are of a similar mindset to me (i.e., you don’t like to elevate privileges unnecessarily) and find yourself unable to reach Central Administration with the same symptoms I’ve described, here is the quick run-through on how to grant your farm/timer service account the Impersonate a client after authentication right as I did:
On your SharePoint Server, go to Start > Administrative Tools > Local Security Policy to open the Local Security Policy MMC snap-in.
When the snap-in opens, navigate (in the left Tree view) to the Security Settings > Local Policies > User Rights Assignment node.
Locate the Impersonate a client after authentication policy in the right-hand pane.
Right-click the policy and select the Properties item that appears in the pop-up menu.
A dialog box will appear. Click the Add User or Group … button on the dialog box.
In the Select Users, Computers, Service Accounts, or Groups dialog box that appears, add your farm service/timer service account.
Click the OK button on each of the two open dialog boxes to exit out of them.
Close the Local Security Policy MMC snap-in.
Perform an IISRESET and verify that the Central Administration site actually comes up instead of “An unexpected error has occurred.”
Conclusion
If the change that I described in this post and implemented in my environment causes problems or requires further adjustment, I’ll update this post. My goal certainly isn’t to mislead – only to share and hopefully help those who may find themselves in the same situation as me.
If you’ve seen this problem in your SharePoint 2013 preview environment, please let me know. I’d love to hear about it, as well as how you worked through (or around) it!
UPDATE (9/4/2012)
I ran into the same issue with the account that was being used to serve up non-Central Admin site collections; i.e., the account that I was using as the identity for the application pools servicing the web applications I created. In my environment, this was SPDC\svcSpContentWebs as seen below (for the SharePoint - 80 application pool):
Attempts to bring up a site collection without the Impersonate a client after authentication privilege being assigned to the SPDC\svcSpContentWebs account would usually yield nothing more than a blank screen. As with the farm service account, there was very little to troubleshoot until I went in with Process Monitor to look for a bunch of BAD IMPERSONATION results:
At this point, I’m willing to bet that any other accounts that are assigned as application pool identities will need to be granted the Impersonate a client after authentication privilege, as well.
In addition to the Impersonate a client after authentication privilege, I also ended up having to grant the SPDC\svcSpContentWebs account the Log on as a batch job privilege from within the Local Security Policy MMC snap-in. Without the privilege to Log on as a batch job, I was receiving an HTTP 503 error every time I tried to bring up a site collection. Troubleshooting this problem wasn’t as difficult, though; the System event log contained the following description for the WAS (Windows Process Activation Service) Event 5021 warning that kept appearing:
The identity of application pool SharePoint - 80 is invalid. The user name or password that is specified for the identity may be incorrect, or the user may not have batch logon rights. If the identity is not corrected, the application pool will be disabled when the application pool receives its first request. If batch logon rights are causing the problem, the identity in the IIS configuration store must be changed after rights have been granted before Windows Process Activation Service (WAS) can retry the logon. If the identity remains invalid after the first request for the application pool is processed, the application pool will be disabled. The data field contains the error number.
In my case, my account credentials were correct, but for some reason the Log on as a batch job right hadn’t been assigned to the SPDC\svcSpContentWebs account. Each time the application pool tried to spin up, it failed and was stopped; I’d then get two warnings from WAS (5021 and 5057) in my System event log, and that would be followed by a WAS 5059 error.