In all honesty, this post is quite overdue. The topic is one that I started digging into before the end of last year (2020), and in a “normal year” I’d have been more with it and shared a post sooner. To be fair, I’m not even sure what a “normal year” is, but I do know this: I’d be extremely hard-pressed to find anyone who felt that 2020 was a normal year …
I need to rewind a little to explain “the gift” and the backstory behind it. Technically speaking, “the gift” in question wasn’t so much a gift as it was something I received on loan. I do have hopes that I’ll be allowed to keep it … but let me avoid putting the cart before the horse.
I received the DS220+ during the latter quarter of 2020, and I’ve had it running since roughly Christmastime.
How did I manage to come into possession of this little beauty? Well, that’s a bit of a story …
Back in October 2020, about a week or two before Halloween, I was checking my email one day and found a new email from a woman named Sarah Lien in my inbox. In that email, Sarah introduced herself and explained that she was with Synology’s Field and Alliance Marketing. She went on to share some information about Synology and the company’s offerings, both hardware and software.
I’m used to receiving emails of this nature semi-regularly, and I use them as an opportunity to learn and sometimes expand my network. This email was slightly different, though, in that Sarah was reaching out to see if we might collaborate in some way around Synology’s NAS offerings and software written specifically for NAS that could back up and protect Microsoft 365 data.
Normally, these sorts of situations and arrangements don’t work out all that well for me. Like everyone else, I’ve got a million things I’m working on at any given time. As a result, I usually can’t commit to arrangements like the one Sarah was suggesting – as interesting as I think some of those cooperative efforts might ultimately be.
Nevertheless, I was intrigued by Sarah’s email and offer. So, I decided to take the plunge and schedule a meeting with her to see where a discussion might lead.
One thing I learned pretty quickly about Sarah: she’s a very friendly and incredibly understanding person. One would have to be to remain so good-natured when some putz (me) completely stands her up for a scheduled call. Definitely not the first impression I wanted to make …
I’m happy to say that the second time was a charm: I managed to actually show up on-time (still embarrassed) and Sarah and I, along with her coworker Patrick, had a really good conversation.
Synology has been in the NAS business for quite some time. I’d been familiar with the company by name, but I didn’t have any familiarity with their NAS devices.
Long story short: Sarah wanted to change that.
The three of us discussed the variety of software available for the NAS – like Active Backup for Microsoft 365 – as well as some of the capabilities of the NAS devices themselves.
Interestingly enough, the bulk of our conversation didn’t revolve around Microsoft 365 backup as I had expected. What really caused Patrick and me to geek-out was a conversation about Plex and the Synology app that turned a NAS into a Plex Server.
The Plex Flex
Not familiar with Plex? Have you been living under a rock for the last half-decade?
Plex is an ever-evolving media server, and it has been around for quite some time. I bought my Plex Lifetime Pass (not required for use, but affords some nice benefits) back in September of 2013 for $75. The system was more of a promise at that point in time than a usable, reliable media platform. A lifetime pass goes for $120 these days, and the platform is highly capable and evolved.
Plex gives me a system to host and serve my media (movies, music, miscellaneous videos, etc.), and it makes it ridiculously easy to both consume and share that media with friends. Nearly every smart device has a Plex client built-in or available as a free download these days. Heck, if you’ve got a browser, you can watch media on Plex:
I’m a pretty strong advocate for Plex, and I share my media with many of my friends (including a lot of folks in the M365 community). I even organized a Facebook group around Plex to update folks on new additions to my library, host relevant conversations, share server invites, and more.
An Opportunity To Play
I’ve had my Plex Server up-and-running for years, so the idea of a NAS doing the same thing wasn’t something that was going to change my world. But I did like the idea of being able to play with a NAS to put it through its paces. Plex just became the icing on the cake.
After a couple of additional exchanges and discussions, I got lucky (note: one of the few times in my life): Sarah offered to ship me the DS220+ seen at the top of this post for me to play with and put through its paces! I’m sure it comes as no surprise to hear me say that I eagerly accepted Sarah’s generous offer.
Sarah got my address information, confirmed a few things, and a week or so later I was informed that the NAS was on its way to me. Not long after that, I found this box on my front doorstep.
Finally Setting It Up
The box arrived … and then it sat for a while.
The holidays were approaching, and I was preoccupied with holiday prep and seasonal events. I had at least let Sarah know that the NAS made it to me without issue, but I had to admit in a subsequent conversation that I hadn’t yet “made time” to start playing around with it.
Sarah was very understanding and didn’t pressure me for feedback, input, or anything. In fact, her being so nice about the whole thing really started to make me feel guilty.
Guilt can be a powerful motivator, and so I finally made the time to unbox the NAS, set it up, and play around with it a little.
Here’s a series of shots I took as I was unpacking the DS220+ and getting it set up.
It was very easy to get up-and-running … which is a good thing, because the instructions in the package were literally just the small foldout shown in the slides above. I’d say the Synology folks did an excellent job simplifying what had the potential to be a confusing process for those who might not be technical powerhouses.
And eventually … power-on!
Once I got the DS220+ running, I started paying a little more attention to all the ports, capabilities in the interface, etc. And to tell you the truth, I was simply floored.
First off, the DS220+ is a surprisingly capable NAS – much more than I originally envisioned or expected. I’ve had NAS devices before, but my experience – like those NAS devices – is severely dated. I had an old Buffalo Linkstation which I never really took a liking to. I also had a couple of Linksys Network Storage Link devices. They worked “well enough,” but the state of the art has advanced quite a bit in the last 15+ years.
Two 3.5″ drive bays with RAID-1 (mirroring) support
It’s worth noting that the 2GB of RAM that is soldered into the device can be expanded to 6GB with the addition of a 4GB SODIMM. Also, the two RJ-45 ports support Link Aggregation.
I’m planning to expand the RAM ASAP (I’ve already ordered a chip from Amazon). And given that I’ve got 10Gbps optical networking in my house, and that the switch next to me is pretty darned advanced (it seems to support every standard under the sun), I’m looking forward to seeing if I can “goose things” a bit with the Link Aggregation capability.
What I’m sharing here just scratches the surface of what the device is capable of. Seriously – check out the datasheet to see what I’m talking about!
But Wait - There's More!
I realize I’m probably giving off something of a fanboy vibe right now, and I’m really kind of okay with that … because I haven’t even really talked about the applications yet.
Once powered-on, the basic interface for the NAS is a browser-based pseudo desktop that appears as follows:
This interface is immediately available following setup and startup of the NAS, and it provides all manner of monitoring, logging, and performance tracking within the NAS itself. The interface can also be customized a fair bit to fit preferences and/or needs.
The cornerstone of any NAS is its ability to handle files, and the DS220+ is capable with files on so many levels. Opening the NAS Control Panel and checking-out related services in the Info Center, we see file basics like NFS and SMB … and so much more.
The above screen is dense; there is a lot of information shown and communicated. And each of the tabs and nodes in the Control Panel is similarly dense with information. Hardware geeks and numbers freaks have plenty to keep themselves busy with when examining a DS220+.
But the applications are what truly have me jazzed about the DS220+. I briefly mentioned the Office 365 backup app and the Plex Server app earlier. But those are only two from an extensive list:
Many of these apps aren’t lightweight fare by any stretch. In addition to the two I already mentioned having an interest in, I really want to put the following apps through their paces:
Audio Station. An audio-specific media server that can be linked with Amazon Alexa (important in our house). I don’t see myself using this long term, but I want to try it out.
Glacier Backup. Provides the NAS with an interface into Amazon Glacier storage – something I’ve found interesting for ages but never had an easy way to play with or test.
Docker. Yes, a full-on Docker container host! If something isn’t available as a NAS app, chances are it can be found as a Docker container. I’m actually going to see how well the NAS might do as a Minecraft Server. The VM my kids and I (and Anders Rask) play on has some I/O issues. Wouldn’t it be cool if we could move it into a lighter-weight but better-performing NAS/Docker environment?
Part of the reason for ordering the memory expansion was that I expect the various server apps and advanced capabilities to work the NAS pretty hard. My understanding is that the Celeron chip the DS220+ employs is fairly capable, but tripling the memory to 6GB is my way of helping it along.
I could go on and on about all the cool things I seem to keep finding in the DS220+ … and I might in future posts. I’d really like to be a little more directed and deliberate about future NAS posts, though. Although I believe many of you can understand and perhaps share in my excitement, this post doesn’t do much to help anyone or answer specific questions.
I suspect I’ll have at least another post or two summarizing some of the experiments (e.g., the Minecraft Docker container) I indicated I’d like to conduct. I will also be seriously evaluating the Microsoft 365 backup application and its operation, since I suspect that’s a topic many of you would be interested in reading about.
Stay tuned in the coming weeks/months. I plan to cover other topics besides the NAS, but I also want to maximize my time and experience with my “gift of NAS.”
What started as a simple attempt to use the ~appWebUrl token in an image URL became a deep dive into SharePoint’s internal processing of custom actions and the App deployment process. In this post, I cover what will and won’t work for custom action image URLs in your own SharePoint 2013 Apps.
My adventures in SharePoint 2013 App Model Land have been going pretty well, but I recently encountered a limitation that left me sort of scratching my head.
The limitation applies to the creation of custom actions for SharePoint apps. To be more specific: the problem I’ve encountered is that there doesn’t appear to be a way to package and reference (using relative links) custom images for ribbon buttons like the one that’s circled in the image above and to the left. This doesn’t mean that custom images can’t be used, of course, but the work-around isn’t exactly something I’m particularly fond of (nor is it even feasible) in some application scenarios.
If you’re not familiar with the new SharePoint 2013 App Model, then you may want to do a little reading before proceeding with this post. I’m only going to cover the App Model concepts that are relevant to the limitation I observed and how to address/work-around it. However, if you are familiar with the new 2013 App Model and creating custom actions in SharePoint 2010, then you may want to jump straight down to the section titled Where the Headaches Begin.
One more warning: this post does some heavy digging into SharePoint’s internal processing of custom ribbon actions and URL tokens. If you want to skip all of that and head straight to the practical take-away, jump down to the What About the Image32by32 and Image16by16 Attributes section.
Adding a Ribbon Custom Action
First, let me do a quick run-through on custom actions. They aren’t unique to SharePoint 2013 or its new “Cloud App Model.” In fact, the type of custom action I’m talking about (i.e., extending the ribbon) became available when the Ribbon was introduced with SharePoint 2010.
With a SharePoint 2013 App, adding a new button to the ribbon is a relatively simple affair. It starts with choosing the Ribbon Custom Action option from the Add New Item dialog as shown below and to the left. Once a name is provided for the custom action and the Add button is clicked, the Create Custom Action for Ribbon dialog appears as shown below and to the right. There’s a third dialog page that further assists in setting some properties for a custom action, but I’m going to skip over it since it isn’t relevant to the point I’m trying to make.
I want to call attention to one of the selections I made on the Create Custom Action for Ribbon dialog, though; specifically, the decision to expose the custom action in the Host Web rather than in the App Web.
Why is this choice so important? Well, the new App Model enforces a relatively strict boundary of separation between SharePoint sites and any custom applications (running under the new App Model) that they may contain. A SharePoint site (Host Web) can technically “host” applications, but those applications operate in an isolated App Web that may have components running on an entirely different server. Under the new App Model, no custom app code is running in the Host Web.
App Webs (where custom applications exist after installation) don’t have direct access to the Host Web in which they’re contained, either. In fact, App Webs are logically isolated from their Host Web parents. If an App Web wants to communicate with its Host Web parent to interact with site collection data, for example, it has to do so through SharePoint’s Client-Side Object Model (CSOM) or the Representational State Transfer (REST) interface. The old full-trust, server-side object model isn’t available; everything is “client-side.”
There are some exceptions to this model of isolation, and one of those exceptions is the use of custom actions to allow an App (residing in an App Web) to partially wire itself into the Host Web. The Create Custom Action for Ribbon dialog shown above, for instance, adds a new button to the ribbon for each of the Document Libraries in the Host Web. This gives users a way to navigate directly from Document Libraries (in the Host Web) to a page in the App Web, for example.
The Elements.xml file that gets generated for the custom action once the Visual Studio wizard has finished running looks something like the following:
[sourcecode language="XML" autolinks="false"]
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <CustomAction Id="4f835c73-a3ab-4671-b142-83304da0639f.LibraryDetailsCustomAction"
                RegistrationType="List"
                RegistrationId="101"
                Location="CommandUI.Ribbon"
                Sequence="10001"
                Title="Invoke 'LibraryDetailsCustomAction' action">
    <CommandUIExtension>
      <!--
      Update the UI definitions below with the controls and the command actions
      that you want to enable for the custom action.
      -->
      <CommandUIDefinitions>
        <CommandUIDefinition Location="Ribbon.Library.Actions.Controls._children">
          <Button Id="Ribbon.Library.Actions.LibraryDetailsCustomActionButton"
                  Alt="Examine Library Details"
                  Sequence="100"
                  Command="Invoke_LibraryDetailsCustomActionButtonRequest"
                  LabelText="Examine Library Details"
                  Image16by16="_layouts/15/images/placeholder16x16.png"
                  Image32by32="_layouts/15/images/placeholder32x32.png"
                  TemplateAlias="o1" />
        </CommandUIDefinition>
      </CommandUIDefinitions>
      <CommandUIHandlers>
        <CommandUIHandler Command="Invoke_LibraryDetailsCustomActionButtonRequest"
                          CommandAction="Pages/Default.aspx" />
      </CommandUIHandlers>
    </CommandUIExtension>
  </CustomAction>
</Elements>
[/sourcecode]
Deploying the App that contains the custom action markup shown above creates a new button in the ribbon of each Host Web Document Library. By default, each button looks like the following:
There are a few attributes in the previous XML that I’m going to repeatedly come back to, so it’s worth taking a closer look at each one’s purpose and associated value(s):
Image32by32 and Image16by16 for the <Button /> element. These two attributes specify the images that are used when rendering the custom action button on the ribbon. By default, they point to an orange dot placeholder image that lives in the farm’s _layouts folder.
CommandAction for the <CommandUIHandler /> element. In its simplest form, this is the URL of the page to which the user is redirected upon pressing the custom ribbon button.
The Problem with the Default CommandAction
When a user clicks on a custom ribbon button in one of the Host Web document libraries, the goal is to send them over to a page in the App Web where the custom action can be processed. Unfortunately, the default CommandAction isn’t set up in a way that permits this.
In fact, attempting to deploy the solution to Office 365 with this default CommandAction results in failure; the App package doesn’t pass validation.
To understand why the failure occurs, it’s important to remember the isolation that exists between the Host Web and the App Web. To illustrate how the Host Web and App Web are different from simply a hostname perspective, consider the project I’ve been working on as an example:
Notice that although the /sites/dev2 relative path portion is the same for both the Host Web and App Web URLs, the hostname portion of each URL is different. This is by design, and it helps to enforce the logical separation between the Host Web and App Web – even though the App Web technically resides within the Host Web.
Looking again at the default CommandAction attribute reveals that its value is just an ASPX page that is identified with a relative URL. Rather than pointing to where we want it to point …
And this is exactly what should happen. After all, the custom action is launched from within the Host Web, so a relative path specification should resolve to a location in the Host Web – not the location we actually want to target in the App Web.
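To make that concrete, here’s a sketch of the resolution behavior. The hostnames below are hypothetical stand-ins (yours will differ), but the mechanics are the same:

[sourcecode language="XML" autolinks="false"]
<!--
  Hypothetical URLs for illustration only:
    Host Web: https://mytenant.sharepoint.com/sites/dev2
    App Web:  https://mytenant-apphash.sharepoint.com/sites/dev2/LibraryDetailsApp

  A relative CommandAction such as
      CommandAction="Pages/Default.aspx"
  resolves against the Host Web (where the button actually lives):
      https://mytenant.sharepoint.com/sites/dev2/Pages/Default.aspx
  ... rather than against the App Web, which is where we want the
  user to land:
      https://mytenant-apphash.sharepoint.com/sites/dev2/LibraryDetailsApp/Pages/Default.aspx
-->
[/sourcecode]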
Fixing the CommandAction
Thankfully, it isn’t a major undertaking to correct the CommandAction attribute value so that it points to the App Web instead of the Host Web. If you’ve worked with SharePoint at all in the past, then you may know that the key to making everything work (in this situation) is the judicious use of tokens.
What are tokens? In this case, tokens are specific string sequences that SharePoint parses at run-time and replaces with a value based on the run-time environment, action that was performed, associated list, or some other context-sensitive value that isn’t known at design-time.
To illustrate how this works, consider the default CommandAction attribute:
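As a quick sketch (the attribute values here are hypothetical rather than lifted from my project), the general pattern is that tokens sit in the markup at design-time and SharePoint swaps in real values at run-time:

[sourcecode language="XML" autolinks="false"]
<!-- Design-time markup: tokens stand in for values not known until run-time. -->
<CommandUIHandler Command="Invoke_LibraryDetailsCustomActionButtonRequest"
                  CommandAction="~appWebUrl/Pages/Default.aspx?{StandardTokens}" />

<!-- At run-time, SharePoint substitutes context-sensitive values, e.g.:
     ~appWebUrl       -> https://mytenant-apphash.sharepoint.com/sites/dev2/LibraryDetailsApp
     {StandardTokens} -> SPHostUrl=...&SPAppWebUrl=...&SPLanguage=... -->
[/sourcecode]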
Since tokens are the key to inserting context-dependent values at run-time, you’d think they’d have been implemented and usable anywhere a developer needs to cross the Host Web / App Web divide.
Apparently not. To be more specific (and fair), I should instead say “not consistently.”
Since this blog post is about image limitations with custom ribbon buttons, you can probably guess where I’m headed with all of this. So, let’s take a look at the Image16by16 and Image32by32 attributes.
By default, the Image16by16 and Image32by32 attributes point to a location in the _layouts folder for the farm. Each attribute value references an image that is nothing more than a little round orange dot:
Much like the CommandAction attribute, it stands to reason that developers would want to replace the placeholder image attribute values with URLs of their choosing. In my case, I wanted to use a set of images I was deploying with the rest of the application assets in my App Web. So, I updated my image attributes to look like the following:
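The updated attributes looked roughly like this (a reconstruction; the image file names match the assets I was deploying with the App):

[sourcecode language="XML" autolinks="false"]
<Button Id="Ribbon.Library.Actions.LibraryDetailsCustomActionButton"
        Alt="Examine Library Details"
        Sequence="100"
        Command="Invoke_LibraryDetailsCustomActionButtonRequest"
        LabelText="Examine Library Details"
        Image16by16="~appWebUrl/Images/sharepoint-library-analyzer_16x16-a.png"
        Image32by32="~appWebUrl/Images/sharepoint-library-analyzer_32x32-a.png"
        TemplateAlias="o1" />
[/sourcecode]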
I deployed my App to my Office 365 Preview tenant, watched my browser launch into my App Web, hopped back to the Host Web, navigated to a document library, and looked at the toolbar. I was not happy with what I saw (on the left).
The image I had specified for use by the button wasn’t being used. All I had was a broken image link.
Examining the properties for the broken image quickly confirmed my fear: the ~appWebUrl token was not being processed for either of the Image32by32 or Image16by16 attributes. The token was being output directly into the image references.
I tried changing the image attributes to reference the App Web a couple of different ways (and with a couple of different tokens), but none of them seemed to work.
To see if I was losing my mind, I decided to try a little experiment. Although I knew it wouldn’t solve my particular problem, I decided to try using the ~site token just to see if it would be parsed properly. Lo and behold, it was parsed and replaced. So, ~site worked … but ~appWebUrl didn’t?
That didn’t make any sense. If it isn’t possible to use the ~appWebUrl token, how are developers supposed to reference custom images for the buttons they deploy in their Apps? Without the ~appWebUrl, there’s no practical way to reference an item in the App Web from the Host Web.
When I find myself in situations where I’m holding results that don’t make sense, I can’t help myself: I pull out Reflector and start poking around for clues inside SharePoint’s plumbing. If I dig really hard, sometimes I find answers to my questions.
After some poking around with Reflector, I discovered that the “journey to enlightenment” (in this case) started with the RegisterCommandUIWithRibbon method on the SPCustomActionElement type. It is in this method that the Image16by16 and Image32by32 attributes are read-in from the XML file in which they are defined. Before assignment for use, they’re passed through a couple of methods that carry out token parsing:
ReplaceUrlTokens on the SPCustomActionElement type
UrlFromPrefixedUrlCore on the SPUtility type
Although these methods together are capable of recognizing and replacing many different token types (including some I hadn’t seen listed in existing documentation; e.g., ~siteCollectionLayouts), none of the new SharePoint 2013 tokens, like the ~appWebUrl and ~remoteWebUrl/~remoteAppUrl tokens, appear in these methods.
Interestingly enough, I didn’t see any noteworthy differences between the path of execution for processing image attributes and the sequence of calls through which CommandAction attributes are handled in the RegisterCommandUIExtension method of the SPRibbon type. The RegisterCommandUIExtension method eventually “punches down” to the ReplaceUrlTokens and UrlFromPrefixedUrlCore methods, as well.
The differences I was seeing in how tokens were handled between the CommandAction and Image32by32/Image16by16 attributes had to be originating somewhere else – not in the processing of the custom action XML.
After some more digging in Reflector to determine where the ~appWebUrl actually showed-up and was being processed, I came across evidence suggesting that “something special” was happening on App deployment rather than at run-time. The ~appWebUrl token was being processed as part of a BuildTokenMap call in the SPAppInstance type; looking at the call chain for the BuildTokenMap method revealed that it was getting called during some App deployment operations processing.
If changes were taking place on App deployment, then I had a hunch I might find what I was looking for in the content database housing the Host Web to which my App was being deployed. After all, Apps get deployed to App Webs that reside within a Host Web, and Host Webs live in content databases … so, all of the pieces of my App had to exist (in some form) in the content database.
I fired-up Visual Studio, stopped deploying to Office 365, and started deploying my App to a site collection on my local SharePoint 2013 VM farm. Once my App was deployed, I launched SQL Management Studio on the SQL Server housing the SharePoint databases and began poking around inside the content database where the target site collection was located.
Brief aside: standard rules still apply in SharePoint 2013, so I’ll mention them here for those who may not know them. Don’t poke around inside content databases (or any other databases) in live SharePoint environments you care about. As with previous versions, querying and working against live databases may hurt performance and lead to bigger problems. If you want to play with the contents of a SharePoint database, either create a SQL snapshot of it (and work against the snapshot) or mount a backup copy of the database in a test environment.
I wasn’t sure what I was looking for, so I quickly examined the contents of each table in the content database. I hit paydirt when I opened-up the CustomActions table. It had a single row, and the Properties field of that row contained some XML that looked an awful lot like the Elements.xml which defined my custom action:
[sourcecode language="XML" autolinks="false"]
<?xml version="1.0" encoding="utf-16"?>
<CustomAction Title="Invoke 'LibraryDetailsCustomAction' action" Id="4f835c73-a3ab-4671-b142-83304da0639f.LibraryDetailsCustomAction" Location="CommandUI.Ribbon" RegistrationId="101" RegistrationType="List" Sequence="10001">
  <CommandUIExtension>
    <!--
    Update the UI definitions below with the controls and the command actions
    that you want to enable for the custom action.
    -->
    <CommandUIDefinitions>
      <CommandUIDefinition Location="Ribbon.Library.Actions.Controls._children">
        <Button Id="Ribbon.Library.Actions.LibraryDetailsCustomActionButton" Alt="Examine Library Details" Sequence="100" Command="Invoke_LibraryDetailsCustomActionButtonRequest" LabelText="Examine Library Details" Image16by16="~site/Images/sharepoint-library-analyzer_16x16-a.png" Image32by32="~appWebUrl/Images/sharepoint-library-analyzer_32x32-a.png" TemplateAlias="o1"/>
      </CommandUIDefinition>
    </CommandUIDefinitions>
  </CommandUIExtension>
</CustomAction>
[/sourcecode]
There were some differences, though, between the Elements.xml I had defined earlier and what actually appeared in the Properties field. The CommandAction attribute was no longer a simple URL; it had been rewritten at deployment time into a JavaScript call that invoked a LaunchApp function. The Image32by32/Image16by16 attributes, by contrast, had been copied over with their tokens intact, so I narrowed my focus to the differences between the rewritten CommandAction and those non-working image attributes.
Jumping into my App in Internet Explorer and opening IE’s debugging tools via <F12>, I did a search for the LaunchApp function within the referenced scripts and found it in the core.js library/script. Examining the LaunchApp function revealed that it called the LaunchAppInternal function; LaunchAppInternal, in turn, called back to the SharePoint server’s /_layouts/15/appredirect.aspx page with the parameters that were supplied to the original LaunchApp method – including the URL with the ~appWebUrl token.
To complete the journey, I opened up the Microsoft.SharePoint.ApplicationPages.dll assembly back on the server and dug into the AppRedirectPage class that provides the code-behind support for the AppRedirect.aspx page. When the AppRedirect.aspx page is loaded, control passes to the page’s OnLoad event and then to the HandleRequest method. HandleRequest then uses the ReplaceAppTokensAndFixLaunchUrl method of the SPTenantAppUtils class to process tokens.
The ReplaceAppTokensAndFixLaunchUrl method is noteworthy because it includes parsing and replacement support for the ~appWebUrl token, the ~remoteWebUrl/~remoteAppUrl tokens, and other tokens that were introduced with SharePoint 2013. The deployment-time processing that is performed on the CommandAction attribute is what ultimately wires-up the CommandAction to the ReplaceAppTokensAndFixLaunchUrl method. The Image32by32 and Image16by16 attributes don’t get this treatment, and so the new 2013 tokens (like ~appWebUrl) can’t be used by these attributes.
What About the Image32by32 and Image16by16 Attributes?
Now that some of the key differences in processing between the CommandAction attribute and image attributes have been identified, let me jump back to the original problem. Is there anything that can be done with the Image32by32 and Image16by16 attributes that are specified in a custom action to get them to reference assets that exist in the App Web? Since tokens like ~appWebUrl (and ~remoteWebUrl for all you Autohosted and Provider-hosted application builders) aren’t parsed and processed, are there alternatives?
The safest and most pragmatic way to handle this situation, it seems, is to use absolute URLs for the desired image resources and forget about deploying them to the App Web altogether. For example, I placed the images I was trying to use on the ribbon buttons here on my blog and referenced them as follows:
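In attribute form, that looks something like the following (with a placeholder hostname standing in for my blog’s actual URL):

[sourcecode language="XML" autolinks="false"]
<!-- Absolute URLs need no token parsing at all.
     The hostname below is a hypothetical stand-in. -->
Image16by16="http://www.example.com/images/sharepoint-library-analyzer_16x16-a.png"
Image32by32="http://www.example.com/images/sharepoint-library-analyzer_32x32-a.png"
[/sourcecode]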
I had some initial concerns that I might inadvertently bump into some security boundaries, such as those that sometimes arise when an asset is referenced via HTTP from a site that is being served up under HTTPS. This didn’t prove to be the case, however. I tested the use of absolute URLs in both my development VM environment (served up under HTTP) and through one of my Office 365 Preview site collections (accessed via HTTPS), and no browser security warnings popped up. The target image appeared on the custom button as desired (shown on the left) in both cases.
Although the use of absolute URLs will work in many cases, I have to admit that I’m still not a big fan of this approach – especially for SharePoint-hosted apps like the one I’ve been working on. Even though Office 365 entails an “always connected” scenario, I can easily envision on-premises deployment environments that are taken offline some or all of the time. I can also see (and have seen in the past) SharePoint environments where unfettered Internet access is the exception rather than the rule.
In these environments, users won’t see image buttons at all – just blank placeholders or broken image links. After all, without Internet access there is no way to resolve and download the referenced button images.
Wrapping It Up
At some point in the future, I hope that Microsoft considers extending token parsing for URL-based attributes like Image32by32 and Image16by16 to include the ~appWebUrl, ~remoteWebUrl, and other new tokens used by the SharePoint 2013 App Model. In the meantime, though, you should probably consider getting an easily accessible online location (SkyDrive, Dropbox, a blog, etc.) for images and other similar assets if you’re building apps under the new SharePoint 2013 App Model and intend to use custom actions.
I need to issue a couple of updates and clarifications. First, I need to be very clear and state that SharePoint-hosted apps were the focus of this post. In a SharePoint-hosted app, what I’ve written is correct: there is no processing of “new” 2013 tokens (like ~appWebUrl and ~remoteAppUrl) for the Image32by32 and Image16by16 attributes. Interestingly enough, though, there does appear to be processing of the ~remoteAppUrl in the Image32by32 and Image16by16 attributes specifically for the other application types such as provider-hosted apps and autohosted apps. Jamie Rance mentioned this in a comment (below), and I verified it with an autohosted app that I quickly spun-up.
I double-checked to see if the ~remoteAppUrl token would even be recognized/processed (despite the lack of a remote web component) for SharePoint-hosted apps, and it is not … nor is the ~appWebUrl token processed for autohosted apps. The selective implementation of only the ~remoteAppUrl token for certain app types has me baffled; I hope that we’ll eventually see some clarification or changes. If you’re building provider-hosted or autohosted apps, though, this does give you a way to redirect image requests to your remote web application rather than an absolute endpoint. Thank you, Jamie, for the information!
And now for some good news for SharePoint-hosted app creators. Prior to writing this post, I had posted a question about the tokens over in the SharePoint StackExchange forums. At the time I wrote this post, there hadn’t been any activity to suggest that a solution or workaround existed. F. Aquino recently supplied an incredibly creative answer, though, that involves using a data URI to Base64-encode the images and package them directly into the Image32by32 and Image16by16 attributes themselves! Although this means that some image pre-processing will be required to package images, it gets around the requirement of being “always-connected.” This is an awesome technique, and I’ll certainly be adding it to my arsenal. Thank you, F. Aquino!
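For the curious, the technique looks something like this (the Base64 payload is truncated for readability; a real one embeds the full image bytes):

[sourcecode language="XML" autolinks="false"]
<!-- The image bytes travel inside the attribute itself, so nothing needs
     to be fetched from a server at run-time. Payload truncated here. -->
Image16by16="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..."
Image32by32="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..."
[/sourcecode]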
After applying some recently-released patches for SharePoint 2013, my farm’s App infrastructure went belly-up. This post describes my troubleshooting and resolution.
I’ve been doing a lot of work with the new SharePoint 2013 App Model in the last few months. Specifically, I’ve been working on a free tool (for Idera) that will be going into the SharePoint App Marketplace sometime soon. The tool itself is not too terribly complicated – just a SharePoint-hosted app that will allow users to analyze library properties, compare library configuration settings, etc.
The development environment that I was using to put the new application together had been humming along just fine … until today. It seems I tempted fate by applying a handful of RTM patches to the environment.
I’d heard that some patches for SharePoint 2013 RTM had been released, so I pulled them down and applied them to my development environment. Those patches were:
After all binaries had been installed and a reboot was performed, I ran the SharePoint 2013 Products Configuration Wizard. The wizard ran and completed without issue, Central Administration popped up afterwards, and life seemed to be going pretty well.
I went back to working on my SharePoint-hosted app, and that’s when things went south. When I tried to deploy the application to my development site collection from Visual Studio 2012, it failed with the following error message:
Error occurred in deployment step ‘Install app for SharePoint’: We’re sorry, we weren’t able to complete the operation, please try again in a few minutes. If you see this message repeatedly, contact your administrator.
Okay, I thought, that’s odd. Let’s give it a second.
Three failed redeploys later, I rebooted the VM to see if that might fix things. No luck.
My development work wasn’t moving forward until I figured out what was going on, so I did a quick hunt online to see if anyone else had encountered this problem. The few entries I found indicated that I should verify my App settings in Central Administration, so I tried that. Strangely, I couldn’t even get those settings to come up – just error pages.
All of this was puzzling. Remember: my farm and its entire app infrastructure had been doing just fine a day earlier, and all of a sudden things were dead in the water. Something had to have happened as a result of the patches that were applied.
Not finding any help on the Internet, I fired up ULSViewer to see what was happening as I attempted to access the farm App settings from Central Administration. These were the errors I was seeing:
Insufficient SQL database permissions for user ‘Name: SPDC\svcSpServices SID: S-1-5-21-1522874658-601840234-4276112424-1115 ImpersonationLevel: None’ in database ‘SP2013_AppManagement’ on SQL Server instance ‘SpSqlAlias’. Additional error information from SQL Server is included below. The EXECUTE permission was denied on the object ‘proc_GetDataRange’, database ‘SP2013_AppManagement’, schema ‘dbo’.
Seeing that my service account (SPDC\svcSpServices) didn’t have the access it needed to run the proc_GetDataRange stored procedure left me scratching my head. I didn’t know what sort of permissions the service account actually required or how they were specifically granted. So, I hopped over to my SQL Server to see if anything struck me as odd or out-of-place.
Looking at the SP2013_AppManagement database, I saw that members of the SPDataAccess role had rights to execute the proc_GetDataRange stored procedure. SPDC\svcSpServices didn’t appear to be a direct member of that role (as far as I could tell), so I added it. Bazinga! Adding the account to the role permitted me to once again review the App settings in Central Administration.
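For reference, this is roughly how the check and fix look on the SQL side. The server, database, role, and account names below come from my environment (adjust them for yours), and I’m assuming Invoke-Sqlcmd is available; SQL Server Management Studio works just as well for running the same T-SQL:

```powershell
# List current members of the SPDataAccess role in the App Management database
Invoke-Sqlcmd -ServerInstance "SpSqlAlias" -Database "SP2013_AppManagement" -Query @"
SELECT r.name AS RoleName, m.name AS MemberName
FROM sys.database_role_members drm
JOIN sys.database_principals r ON drm.role_principal_id = r.principal_id
JOIN sys.database_principals m ON drm.member_principal_id = m.principal_id
WHERE r.name = 'SPDataAccess';
"@

# Add the service account back into the role if it's missing
Invoke-Sqlcmd -ServerInstance "SpSqlAlias" -Database "SP2013_AppManagement" -Query @"
EXEC sp_addrolemember N'SPDataAccess', N'SPDC\svcSpServices';
"@
```

As it turned out, though, patching up one account by hand wasn’t going to be enough.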
Unfortunately, I still couldn’t deploy my Apps from Visual Studio. Going back to the ULS logs, I found the following:
Insufficient SQL database permissions for user ‘Name: NT AUTHORITY\IUSR SID: S-1-5-17 ImpersonationLevel: Impersonation’ in database ‘SP2013_AppManagement’ on SQL Server instance ‘SpSqlAlias’. Additional error information from SQL Server is included below. The EXECUTE permission was denied on the object ‘proc_AM_PutAppPrincipal’, database ‘SP2013_AppManagement’, schema ‘dbo’.
It was obvious to me that more than just a single account was out of whack, since the proc_AM_PutAppPrincipal stored procedure was now in play. Rather than try to manually correct every possible permission issue, I decided to try to get SharePoint to do the heavy lifting for me.
Knowing that the problem was tied to the App Management service application, I figured that one (possible) easy way to resolve it was simply to have SharePoint reprovision that service application. To do this, I carried out the following:
1. Deleted my App Management Service Application instance (which I happened to call “Application Management Service”) in Central Administration. I checked the box for Delete data associated with the Service Applications when it appeared, to ensure that I got a new app management database.
2. Created a new App Management Service service application once the old one was gone. I named it the same thing I had called it before (“Application Management Service”) and re-used the same database name I had been using (“SP2013_AppManagement”). I re-used the shared services application pool I had been using previously, too.
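The same two steps can also be carried out from the SharePoint 2013 Management Shell. This is only a sketch based on my service application and database names; in particular, the application pool name shown here is an assumption, so substitute whatever shared pool your farm actually uses:

```powershell
# Load the SharePoint snap-in if running from a plain PowerShell console
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Step 1: delete the existing App Management service application and its database
$appMgmt = Get-SPServiceApplication | Where-Object { $_.DisplayName -eq "Application Management Service" }
Remove-SPServiceApplication -Identity $appMgmt -RemoveData -Confirm:$false

# Step 2: recreate it with the same names (the pool name below is hypothetical)
$pool = Get-SPServiceApplicationPool -Identity "SharePoint Web Services Default"
$newAppMgmt = New-SPAppManagementServiceApplication -Name "Application Management Service" `
    -ApplicationPool $pool -DatabaseName "SP2013_AppManagement"

# Don't forget the proxy, or web applications won't be able to consume the service
New-SPAppManagementServiceApplicationProxy -ServiceApplication $newAppMgmt
```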
After completing these steps, I was able to successfully deploy my application to the development site collection through Visual Studio. I no longer saw the stored procedure access errors appearing in the ULS logs.
I don’t know what happened exactly, but what I observed seems to suggest that one of the patches I applied messed with the App Management service application database. Specifically, rights and permissions that one or more accounts possessed were somehow revoked by removing those accounts from the SPDataAccess role. Additional role and/or permission changes could have been made, as well – I just don’t know.
Once everything was running again, I went back into my SQL Server and had a look at the (new) SP2013_AppManagement database. Examining the role membership for SPDC\svcSpServices (one of the accounts that had been blocked from executing stored procedures earlier), I saw that the account had been put (back) into the SPDataAccess role. This seemed to confirm my suspicion that things somehow became “unwired” during the patching and/or configuration wizard run.