Microsoft Power Automate: The Teenage Years

In this post, I chart the growth of Power Automate over the course of its relatively short but meaningful existence. I also discuss some IT concerns that Power Automate, and the citizen developers building solutions with it, need to acknowledge and address in order to reach maturity and realize their full potential.

Power Automate: The Early Years

This post isn’t a “what is Power Automate” writeup – I have to assume you understand (or at least have heard about) Power Automate to some degree. That being said, if you feel like you could use a little more background or information about Power Automate, use the inline links I’m supplying and the References and Resources section at the bottom of this post to investigate further.

Long story short, Power Automate is a member of Microsoft’s Power Platform collection of tools. The goal of the Power Platform is to make business solution capabilities that were previously available only to software developers available to the masses. These “citizen developers” can use Power Platform tools to build low-code or no-code business solutions and address needs without necessarily involving their organization’s formal IT group/department.

Power Automate is of particular significance to those of us who consider ourselves SharePoint practitioners. Power Automate has been fashioned and endorsed by Microsoft as the replacement for SharePoint workflows – including those that may have been created previously in SharePoint Designer. This is an important fact to know, especially since SharePoint 2010 workflows are no longer available for use in SharePoint Online (SPO), and SharePoint 2013 workflows are slated for the chopping block at some point in the not-so-distant future.

Such Promise!

Given that Power Automate was first released in 2016, it has been very successful in a relatively short time – both as a workflow replacement and as a highly effective way to enable citizen developers (and others) to address their own business process needs without needing to involve formal IT.

Power Automate (and the Power Platform in general) represents a tremendous amount of potential value for average users consuming Microsoft 365 services in some form or fashion. Power Automate’s reach and applications go well beyond SharePoint alone. Assuming you have a Microsoft 365 subscription, just look at your Power Automate home/launch page to get an idea of what you can do with it. Here’s a screenshot of my Power Automate home page.

Looking at the top four Power Automate email templates, we can see that the tool goes beyond mere SharePoint workflows:

  • Save Outlook.com email attachments to your OneDrive
  • Save Gmail attachments to your Google Drive
  • Notify me on Teams when I receive an email with negative sentiment
  • Analyze incoming emails and route them to the right person

Checking out the Files and Documents prebuilt templates, we find:

  • Start an approval for new file to move it to a different folder
  • Read information from invoices
  • Start approval for new documents and notify via Teams
  • Track emails in an Excel Online (Business) spreadsheet

The list of prebuilt templates goes on for miles. There are literally thousands of them that can be used as-is or as a starting point for the process you are trying to build and/or automate.

And to further drive home the value proposition, Power Automate can connect to nearly any system with a web service. Microsoft also maintains a list of pre-built connectors that can be used to tie data from other (non-Microsoft) systems into Power Automate. Just a few examples of useful connectors:

  • Adobe PDF Services
  • DocuSign
  • Google Calendar
  • JIRA
  • MySQL
  • Pinterest
  • Salesforce
  • YouTube

Power Automate is typically thought of as a cloud-only tool, but that’s really not true. The lesser-known reality is that Power Automate can be used with on-premises environments and data provided the right hybrid data gateway is set up between the on-premises environment and the cloud.

But Such a Rebellious Streak!

Like all children, Power Automate started out representing so much promise and goodness, and that really hasn’t changed at all. But as Power Automate has grown in adoption and gained widespread usage, we have been seeing signs of “teen rebellion” and the frustration that comes with it. Sure, Power Automate has demonstrated ample potential … but it has some valuable lessons to learn and needs to mature in several areas before it will be recognized and accepted as an adult.

Put another way: in terms of providing business process modeling and execution capabilities that are accessible to a wide audience, Power Automate is a resounding success. Where Power Automate needs help growing (or obtaining support) is with a number of ALM (application lifecycle management) concerns.

These concerns – things like source (code) management, documentation, governance, deployment support, and more – are the areas that formalized IT organizations typically consider “part of the job” and have processes/solutions to address. The average citizen developer, likely not having been a member of an IT organization, is oftentimes blissfully unaware of these concerns until something goes wrong.

Previous workflow and business process tools (like SharePoint Designer) struggled with these IT-centric non-functional requirements. This is one of the reasons SharePoint Designer was “lovingly” called SharePoint Destroyer by those of us who worked with and (more importantly) had to support what had been created with it. Microsoft was aware of this perceived deficit, and so Power Automate (and the rest of the Power Platform) was handled differently.

A Growth Plan

Microsoft is in a challenging position with Power Automate and the overall Power Platform. One of Power Automate’s most compelling aspirations is enabling the creation of low- and no-code solutions by average users (citizen developers). Prior to the Power Platform, these solutions typically had to be constructed by developers and other formalized IT groups. And since IT departments were typically juggling many of these requests and other demands like them, the involvement of IT oftentimes introduced unexpected delays, costs, bureaucracy, etc., into the solutioning process.

But by (potentially) taking formalized IT out of the solutioning loop, how do these non-functional requirements and project needs get addressed? In the past, many of these needs would only get addressed if someone had the foresight, motivation, and training to address them – if they were even recognized as requirements in the first place.

With the Power Platform, Microsoft has acknowledged the need to educate and assist citizen developers with non-functional requirements. It has released a number of tools, posts, and other materials to help organizations and their citizen developers who are trying to do the right thing. Here are a handful of resources I found particularly helpful:

  • Microsoft Power Platform guidance documentation
  • Power Platform adoption maturity model: Goals and opportunities
  • Admin and governance best practices
  • Introduction: Planning a Power Automate project
  • Application lifecycle management (ALM) with Microsoft Power Platform
  • Create packages for the Package Deployer tool
  • SolutionPackager tool
  • Source control with solution files

Some Guidance Counseling

When I was in high school (many, many years ago), I was introduced to the concept of a guidance counselor. A guidance counselor is someone who can provide assistance and advice to high school students and their parents. Since high school students are oftentimes caught between two worlds (childhood and adulthood), a guidance counselor can help students figure out their next steps and act as supportive and objective sounding boards for the questions and decisions teenagers commonly face.

The “high school” part of the analogy isn’t a perfect fit for Power Automate, but it makes more sense if we swap out “high school” and insert “technical.” After all, citizen developers understand their business needs and the problems they’re trying to solve. Oftentimes, though, they could use some advice and assistance in the end-to-end solutioning process – especially with non-functional requirements. They need help to ensure that they don’t sabotage their own self-interests by building something that can’t be maintained, isn’t documented, can’t be deployed, or will run afoul of their IT partners and overarching IT policies/governance.

The aim of the links in the A Growth Plan section (above) is to provide some basis and a starting point for the non-functional concerns we’ve discussed a bit thus far. Generally speaking, Microsoft has done a solid job covering many technical and non-technical non-functional requirements surrounding Power Automate and solutions built from Power Automate.

I give my “solid job” thumbs-up on the basis of what I know and have focused on over the years. If I review the supplied links and the material they share through the eyes of a citizen developer, though, I find myself getting confused quickly – especially as we get into the last few links and the content they contain. I suspect some citizen developers may have heard of Git, GitHub, Azure DevOps, Visual Studio Code, and the various other acronyms and products frequently mentioned in the linked resources. But is it realistic to expect citizen developers to know how to use (or even recognize) a CLI, or to be well-versed in properly formed JSON? In my frank opinion: “no.”

The Microsoft docs and articles I’ve perused (and shared links to above) have been written with a slant towards the IT crowd and their domain knowledge, and that’s not particularly helpful for the people I envision citizen developers to be. The documents and technical guides tend to assume a little too much knowledge to be helpful to those trying to build no-code and low-code business solutions.

Thankfully, citizen developers have additional allies and tools becoming available to them on an ever-increasing basis.

Pulling The Trigr

One of the most useful tools I’ve been introduced to more recently doesn’t come from Microsoft. It comes from Encodian, a Microsoft Partner building tools and solutions for the Microsoft 365 platform and various Azure workloads. The specific Encodian tool that is of interest to me (a self-described SharePoint practitioner) is called Trigr, and in Encodian’s words Trigr can “Make Power Automate Flows available across multiple and targeted SharePoint Online sites. Possible via a SharePoint Framework (SPFx) Extension, users can access Flows from within SharePoint Online libraries and lists.”

A common challenge with SharePoint-based Power Automate flows is that they are built in situ and attached to the list they are intended to operate against. This makes them hard to repurpose, and if you want to re-use a Power Automate flow on one or more other lists, there’s a fair bit of manual recreation/manipulation necessary to adjust steps, change flow parameter values, etc.

Trigr allows Power Automate flow creators to design a flow one time and then re-use that flow wherever they’d like. Trigr takes care of handling and passing parameters, attaching new instances of the Power Automate flow to additional lists, and handling a lot of the grunt work that takes the sparkle away from Power Automate.

Have a look at the following video for a more concrete demonstration.

As I see it, Trigr is part of a new breed of tools that tries (and successfully manages) to walk the fine line between citizen developers’ limited technical knowledge/capabilities and organizational IT’s needs/requirements that are geared towards keeping Power Automate-based solutions documented, controlled, repeatably deployable, operating reliably/consistently, and generally “under control.”

Trigr is a SaaS application/service with its own web-based administrative console that provides installation resources, “how to” guidance, the mechanism for parameterizing your Power Automate flows, deployment support, and other service functionality. A one-time installation and setup within the target SPO tenant is necessary, but after that, subsequent Trigr functionality is in the hands of the citizen developer(s) responsible for the business process(es) modeled in Power Automate. Trigr’s documentation, guidance, and general product language reflect the (largely) non-technical or minimally technical orientation of most citizen developers.

Reaching Maturity

I have faith that Power Automate is going to continue to grow, gain greater adoption, and mature. Microsoft has given us some solid guidance and tools to address non-functional requirements, and interested third-parties (like Encodian) are also meeting citizen developers where they currently operate – rather than trying to “drag them” someplace they might not want to be.

At the end of the day, though, one key piece of Power Automate (Power Platform) usage and the role citizen developers occupy is best summed up by this quote from rabbi and author Joshua L. Liebman:

Maturity is achieved when a person postpones immediate pleasures for long-term values.

Helping Power Automate and citizen developers mature responsibly requires patience and guidance from those of us in formalized IT. We need to aid and guide citizen developers as they are exposed to and try to understand the value and purpose that non-functional requirements serve in the solutions they’re creating. We can’t just assume they’ll inherently know. The reason we do the things we do isn’t necessarily obvious – at times, even to many of us.

Those of us in IT also need to see Power Automate and the larger Power Platform as another enabling tool in the organizational tool chest that can be used to address business process needs. Rather than adopting an “us versus them” mentality, everyone would benefit from us embracing the role of guidance counselor rather than adversarial sibling or disapproving parent.

References and Resources

    1. Microsoft: Power Automate
    2. Microsoft: Power Platform
    3. Gartner: Citizen Developer Definition
    4. Microsoft Support: Overview of workflows included with SharePoint
    5. Microsoft Support: Introducing SharePoint Designer
    6. Microsoft Support: SharePoint 2010 workflow retirement
    7. Microsoft Learn: All about the product retirement plan (workflows, designer etc.)
    8. Microsoft Learn: Plan and prepare for Power Automate in 2022 release wave 2
    9. Microsoft: Office is becoming Microsoft 365
    10. Link: Power Automate home/launch page
    11. Microsoft Power Automate: Save Outlook.com email attachments to your OneDrive
    12. Microsoft Power Automate: Save Gmail attachments to your Google Drive
    13. Microsoft Power Automate: Notify me on Teams when I receive an email with negative sentiment
    14. Microsoft Power Automate: Analyze incoming emails and route them to the right person
    15. Microsoft Power Automate: Start an approval for new file to move it to a different folder
    16. Microsoft Power Automate: Read information from invoices
    17. Microsoft Power Automate: Start approval for new documents and notify via Teams
    18. Microsoft Power Automate: Track emails in an Excel Online (Business) spreadsheet
    19. Microsoft Power Automate: Templates
    20. Microsoft Learn: List of all Power Automate connectors
    21. Microsoft Learn: Adobe PDF Services
    22. Microsoft Learn: DocuSign
    23. Microsoft Learn: Google Calendar
    24. Microsoft Learn: JIRA
    25. Microsoft Learn: MySQL
    26. Microsoft Learn: Pinterest
    27. Microsoft Learn: Salesforce
    28. Microsoft Learn: YouTube
    29. Microsoft Learn: What is an on-premises data gateway?
    30. Red Hat: What is application lifecycle management (ALM)?
    31. Splunk: What Is Source Code Management?
    32. CIO: What is IT governance? A formal way to align IT & business strategy
    33. Wikipedia: Non-functional requirement
    34. TechNet: SharePoint 2010 Best Practices: Is SharePoint Designer really pure evil?
    35. Microsoft Learn: Microsoft Power Platform guidance documentation
    36. Microsoft Learn: Power Platform adoption maturity model: Goals and opportunities
    37. Microsoft Learn: Admin and governance best practices
    38. Microsoft Learn: Introduction: Planning a Power Automate project
    39. Microsoft Learn: Application lifecycle management (ALM) with Microsoft Power Platform
    40. Microsoft Learn: Create packages for the Package Deployer tool
    41. Microsoft Learn: SolutionPackager tool
    42. Microsoft Learn: Source control with solution files
    43. Betterteam: Guidance Counselor Job Description
    44. Atlassian Bitbucket: What is Git
    45. TechCrunch: What Exactly Is GitHub Anyway?
    46. Microsoft Learn: What is Azure DevOps?
    47. Wikipedia: Visual Studio Code
    48. TechTarget: command-line interface (CLI)
    49. JSON.org: Introducing JSON
    50. Nintex: What is a citizen developer and where do they come from?
    51. Encodian: An award-winning Microsoft partner
    52. Microsoft Power Automate Community Forums: Reuse flow
    53. YouTube: ‘When a user runs a Trigr’ Deep Dive
    54. Encodian: Encodian Trigr App Deployment and Installation
    55. Encodian: (Trigr) General Guidance
    56. Encodian: ‘When a user runs a Trigr’ overview
    57. Wikipedia: Joshua L. Liebman

The Gift of NAS

Ah, the holidays … In all honesty, this post is quite overdue. The topic is one that I started digging into before the end of last year (2020), and in a “normal year” I’d have been more with it and shared a post sooner. To be fair, I’m not even sure what a “normal year” is, but I do know this: I’d be extremely hard-pressed to find anyone who felt that 2020 was a normal year …

The Gift?

I need to rewind a little to explain “the gift” and the backstory behind it. Technically speaking, “the gift” in question wasn’t so much a gift as it was something I received on loan. I do have hopes that I’ll be allowed to keep it … but let me avoid putting the cart ahead of the horse.

The item I’m referring to as a “gift” is a Synology NAS (Network Attached Storage) device. Specifically speaking, it’s a Synology DiskStation DS220+ with a couple of 2TB red drives (rated for NAS conditions) to provide storage. A picture of it up-and-running appears below.

I received the DS220+ during the latter quarter of 2020, and I’ve had it running since roughly Christmastime.

How did I manage to come into possession of this little beauty? Well, that’s a bit of a story …

Brainstorming

Back in October 2020, about a week or two before Halloween, I was checking my email one day and found a new email from a woman named Sarah Lien in my inbox. In that email, Sarah introduced herself and explained that she was with Synology’s Field and Alliance Marketing. She went on to share some information about Synology and the company’s offerings, both hardware and software.

I’m used to receiving emails of this nature semi-regularly, and I use them as an opportunity to learn and sometimes expand my network. This email was slightly different, though, in that Sarah was reaching out to see if we might collaborate in some way around Synology’s NAS offerings and software written specifically for NAS devices that could back up and protect Microsoft 365 data.

Normally, these sorts of situations and arrangements don’t work out all that well for me. Like everyone else, I’ve got a million things I’m working on at any given time. As a result, I usually can’t commit to most arrangements like the one Sarah was suggesting – as interesting as I think some of those cooperative efforts might turn out to ultimately be.

Nevertheless, I was intrigued by Sarah’s email and offer. So, I decided to take the plunge and schedule a meeting with her to see where a discussion might lead.

Rocky Beginnings

One thing I learned pretty quickly about Sarah: she’s a very friendly and incredibly understanding person. One would have to be to remain so good-natured when some putz (me) completely stands you up for a scheduled call. Definitely not the first impression I wanted to make …

I’m happy to say that the second time was a charm: I managed to actually show up on-time (still embarrassed) and Sarah and I, along with her coworker Patrick, had a really good conversation.

Synology has been in the NAS business for quite some time. I’d been familiar with the company by name, but I didn’t have any familiarity with their NAS devices.

Long story short: Sarah wanted to change that.

The three of us discussed the variety of software available for the NAS – like Active Backup for Microsoft 365 – as well as some of the capabilities of the NAS devices themselves.

Interestingly enough, the bulk of our conversation didn’t revolve around Microsoft 365 backup as I had expected. What really caused Patrick and me to geek-out was a conversation about Plex and the Synology app that turned a NAS into a Plex Server.

The Plex Flex

Not familiar with Plex? Have you been living under a rock for the last half-decade?

Plex is an ever-evolving media server, and it has been around for quite some time. I bought my Plex Lifetime Pass (not required for use, but affords some nice benefits) back in September of 2013 for $75. The system was more of a promise at that point in time than a usable, reliable media platform. A lifetime pass goes for $120 these days, and the platform is highly capable and evolved.

Plex gives me a system to host and serve my media (movies, music, miscellaneous videos, etc.), and it makes it ridiculously easy to both consume and share that media with friends. Nearly every smart device has a Plex client built-in or available as a free download these days. Heck, if you’ve got a browser, you can watch media on Plex:

I’m a pretty strong advocate for Plex, and I share my media with many of my friends (including a lot of folks in the M365 community). I even organized a Facebook group around Plex to update folks on new additions to my library, host relevant conversations, share server invites, and more.

An Opportunity To Play

I’ve had my Plex Server up-and-running for years, so the idea of a NAS doing the same thing wasn’t something that was going to change my world. But I did like the idea of being able to play with a NAS and put it through its paces. Plex just became the icing on the cake.

After a couple of additional exchanges and discussions, I got lucky (note: one of the few times in my life): Sarah offered to ship me the DS220+ seen at the top of this post for me to play with and put through its paces! I’m sure it comes as no surprise to hear me say that I eagerly accepted Sarah’s generous offer.

Sarah got my address information, confirmed a few things, and a week or so later I was informed that the NAS was on its way to me. Not long after that, I found this box on my front doorstep.

The Package

Finally Setting It Up

The box arrived … and then it sat for a while.

The holidays were approaching, and I was preoccupied with holiday prep and seasonal events. I had at least let Sarah know that the NAS made it to me without issue, but I had to admit in a subsequent conversation that I hadn’t yet “made time” to start playing around with it.

Sarah was very understanding and didn’t pressure me for feedback, input, or anything. In fact, her being so nice about the whole thing really started to make me feel guilty.

Guilt can be a powerful motivator, and so I finally made the time to unbox the NAS, set it up, and play around with it a little.

Here are a series of shots I took as I was unpacking the DS220+ and getting it set up.

It was very easy to get up-and-running … which is a good thing, because the instructions in the package were literally just the little foldout shown in the slides above. I’d say the Synology folks did an excellent job simplifying what had the potential to be a confusing process for those who might not be technical powerhouses.

And eventually … power-on!

Holy Smokes!

Once I got the DS220+ running, I started paying a little more attention to all the ports, capabilities in the interface, etc. And to tell you the truth, I was simply floored.

First off, the DS220+ is a surprisingly capable NAS – much more than I originally envisioned or expected. I’ve had NAS devices before, but my experience – like those NAS devices – is severely dated. I had an old Buffalo Linkstation which I never really took a liking to. I also had a couple of Linksys Network Storage Link devices. They worked “well enough,” but the state of the art has advanced quite a bit in the last 15+ years.

Here are the basics of the DS220+:

  • Intel Celeron J4025 2-core 2GHz CPU
  • 2GB DDR4 RAM
  • Two USB 3.0 ports
  • Two gigabit RJ-45 ports
  • Two 3.5″ drive bays with RAID-1 (mirroring) support

It’s worth noting that the 2GB of RAM that is soldered into the device can be expanded to 6GB with the addition of a 4GB SODIMM. Also, the two RJ-45 ports support Link Aggregation.

I’m planning to expand the RAM ASAP (I’ve already ordered a chip from Amazon). And given that I’ve got 10Gbps optical networking in my house and a pretty darned advanced switch next to me (it seems to support every standard under the sun), I’m looking forward to seeing if I can “goose things” a bit with the Link Aggregation capability.

What I’m sharing here just scratches the surface of what the device is capable of. Seriously – check out the datasheet to see what I’m talking about!

But Wait - There's More!

I realize I’m probably giving off something of a fanboy vibe right now, and I’m really kind of okay with that … because I haven’t even really talked about the applications yet.

Once powered-on, the basic interface for the NAS is a browser-based pseudo desktop that appears as follows:

This interface is immediately available following setup and startup of the NAS, and it provides all manner of monitoring, logging, and performance tracking within the NAS itself. The interface can also be customized a fair bit to fit preferences and/or needs.

The cornerstone of any NAS is its ability to handle files, and the DS220+ handles files on many levels. Opening the NAS Control Panel and checking out related services in the Info Center, we see file basics like NFS and SMB … and so much more.

The above screen is dense; there is a lot of information shown and communicated. And each of the tabs and nodes in the Control Panel is similarly dense with information. Hardware geeks and numbers freaks have plenty to keep themselves busy with when examining a DS220+.

But the applications are what truly have me jazzed about the DS220+. I briefly mentioned the Office 365 backup app and the Plex Server app earlier. But those are only two from an extensive list:

Many of these apps aren’t lightweight fare by any stretch. In addition to the two I already mentioned having an interest in, I really want to put the following apps through the paces:

  • Audio Station. An audio-specific media server that can be linked with Amazon Alexa (important in our house). I don’t see myself using this long term, but I want to try it out.
  • Glacier Backup. Provides the NAS with an interface into Amazon Glacier storage – something I’ve found interesting for ages but never had an easy way to play with or test.
  • Docker. Yes, a full-on Docker container host server! If something isn’t available as a NAS app, chances are it can be found as a Docker container. I’m actually going to see how well the NAS might do as a Minecraft server. The VM my kids and I (and Anders Rask) play on has some I/O issues. Wouldn’t it be cool if we could move it into a lighter-weight but better-performing NAS/Docker environment? (See the sketch after this list for how that move might look.)
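Since the Docker angle is the one I’m most eager to test, here’s a minimal sketch of what moving the Minecraft server might look like. This is my own illustration, assuming the Docker SDK for Python (“pip install docker”) and the popular itzg/minecraft-server community image; the /volume1/docker/minecraft path is a typical Synology volume location and just an assumption on my part:

[sourcecode language="Python"]
import docker  # Docker SDK for Python; assumes the NAS's Docker daemon is reachable

client = docker.from_env()

# Run the community Minecraft server image. The image requires accepting the
# Minecraft EULA via an environment variable; names and paths are illustrative.
container = client.containers.run(
    "itzg/minecraft-server",
    detach=True,
    name="minecraft",
    ports={"25565/tcp": 25565},  # expose the default Minecraft port
    environment={"EULA": "TRUE", "MEMORY": "1G"},
    volumes={"/volume1/docker/minecraft": {"bind": "/data", "mode": "rw"}},
)
print(container.name, container.status)
[/sourcecode]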

Part of the reason for ordering the memory expansion was that I expect the various server apps and advanced capabilities to work the NAS pretty hard. My understanding is that the Celeron chip the DS220+ employs is fairly capable, but tripling the memory to 6GB is doing what I can to help it along.

(Partial) Conclusion

I could go on and on about all the cool things I seem to keep finding in the DS220+ … and I might in future posts. I’d really like to be a little more directed and deliberate about future NAS posts, though. Although I believe many of you can understand and perhaps share in my excitement, this post doesn’t do much to help anyone or answer specific questions.

I suspect I’ll have at least another post or two summarizing some of the experiments (e.g., with the Minecraft Docker container) I indicated I’d like to conduct. I will also be seriously evaluating the Microsoft 365 Backup Application and its operation, as I think that is a topic many of you would be interested in reading my summary and assessment of.

Stay tuned in the coming weeks/months. I plan to cover other topics besides the NAS, but I also want to maximize my time and experience with my “gift of NAS.”

Custom Ribbon Button Image Limitations with SharePoint 2013 Apps

What started as a simple attempt to use the ~appWebUrl token in an image URL became a deep dive into SharePoint’s internal processing of custom actions and the App deployment process. In this post, I cover what will and won’t work for custom action image URLs in your own SharePoint 2013 Apps.

My adventures in SharePoint 2013 App Model Land have been going pretty well, but I recently encountered a limitation that left me sort of scratching my head.

The limitation applies to the creation of custom actions for SharePoint apps. To be more specific: the problem I’ve encountered is that there doesn’t appear to be a way to package and reference (using relative links) custom images for ribbon buttons like the one circled in the image above and to the left. This doesn’t mean that custom images can’t be used, of course, but the work-around isn’t something I’m particularly fond of, and it isn’t even feasible in some application scenarios.

If you’re not familiar with the new SharePoint 2013 App Model, then you may want to do a little reading before proceeding with this post. I’m only going to cover the App Model concepts that are relevant to the limitation I observed and how to address/work-around it. However, if you are familiar with the new 2013 App Model and creating custom actions in SharePoint 2010, then you may want to jump straight down to the section titled Where the Headaches Begin.

One more warning: this post does some heavy digging into SharePoint’s internal processing of custom ribbon actions and URL tokens. If you want to skip all of that and head straight to the practical take-away, jump down to the What About the Image32by32 and Image16by16 Attributes section.

Adding a Ribbon Custom Action

First, let me do a quick run-through on custom actions. They aren’t unique to SharePoint 2013 or its new “Cloud App Model.” In fact, the type of custom action I’m talking about (i.e., extending the ribbon) became available when the Ribbon was introduced with SharePoint 2010.

With a SharePoint 2013 App, adding a new button to the ribbon is a relatively simple affair. It starts with choosing the Ribbon Custom Action option from the Add New Item dialog as shown below and to the left. Once a name is provided for the custom action and the Add button is clicked, the Create Custom Action for Ribbon dialog appears as shown below and to the right. There’s a third dialog page that further assists in setting some properties for a custom action, but I’m going to skip over it since it isn’t relevant to the point I’m trying to make.

Adding a Ribbon Custom Action

Create Custom Action for Ribbon

I want to call attention to one of the selections I made on the Create Custom Action for Ribbon dialog, though; specifically, the decision to expose the custom action in the Host Web rather than in the App Web.

Why is this choice so important? Well, the new App Model enforces a relatively strict boundary of separation between SharePoint sites and any custom applications (running under the new App Model) that they may contain. A SharePoint site (Host Web) can technically “host” applications, but those applications operate in an isolated App Web that may have components running on an entirely different server. Under the new App Model, no custom app code is running in the Host Web.

App Webs (where custom applications exist after installation) don’t have direct access to the Host Web in which they’re contained, either. In fact, App Webs are logically isolated from their Host Web parents. If App Webs want to communicate with their Host Web parent to interact with site collection data, for example, they have to do so through SharePoint’s Client-Side Object Model (CSOM) or the Representational State Transfer (REST) interface. The old full-trust, server-side object model isn’t available; everything is “client-side.”

There are some exceptions to this model of isolation, and one of those exceptions is the use of custom actions to allow an App (residing in an App Web) to partially wire itself into the Host Web. The Create Custom Action for Ribbon dialog shown above, for instance, adds a new button to the ribbon for each of the Document Libraries in the Host Web. This gives users a way to navigate directly from Document Libraries (in the Host Web) to a page in the App Web, for example.

The Elements.xml file that gets generated for the custom action once the Visual Studio wizard has finished running looks something like the following:

[sourcecode language="XML" autolinks="false"]
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
<CustomAction Id="1470c964-6b8a-4d79-9817-4d32c898ffbe.RibbonCustomAction1"
RegistrationType="List"
RegistrationId="101"
Location="CommandUI.Ribbon"
Sequence="10001"
Title="Invoke &apos;LibraryDetailsCustomAction&apos; action">
<CommandUIExtension>
<!--
Update the UI definitions below with the controls and the command actions
that you want to enable for the custom action.
-->
<CommandUIDefinitions>
<CommandUIDefinition Location="Ribbon.Library.Actions.Controls._children">
<Button Id="Ribbon.Library.Actions.LibraryDetailsCustomActionButton"
Alt="Examine Library Details"
Sequence="100"
Command="Invoke_LibraryDetailsCustomActionButtonRequest"
LabelText="Examine Library Details"
TemplateAlias="o1"
Image32by32="_layouts/15/images/placeholder32x32.png"
Image16by16="_layouts/15/images/placeholder16x16.png" />
</CommandUIDefinition>
</CommandUIDefinitions>
<CommandUIHandlers>
<CommandUIHandler Command="Invoke_LibraryDetailsCustomActionButtonRequest"
CommandAction="LibraryManager\Pages\LibraryDetails.aspx"/>
</CommandUIHandlers>
</CommandUIExtension>
</CustomAction>
</Elements>
[/sourcecode]

Deploying the App that contains the custom action markup shown above creates a new button in the ribbon of each Host Web Document Library. By default, each button looks like the following:

Custom Ribbon Button

There are a few attributes in the previous XML that I’m going to repeatedly come back to, so it’s worth taking a closer look at each one’s purpose and associated value(s):

  • Image32by32 and Image16by16 for the <Button /> element. These two attributes specify the images that are used when rendering the custom action button on the ribbon. By default, they point to an orange dot placeholder image that lives in the farm’s _layouts folder.
  • CommandAction for the <CommandUIHandler /> element. In its simplest form, this is the URL of the page to which the user is redirected upon pressing the custom ribbon button.

The Problem with the Default CommandAction

When a user clicks on a custom ribbon button in one of the Host Web document libraries, the goal is to send them over to a page in the App Web where the custom action can be processed. Unfortunately, the default CommandAction isn’t set up in a way that permits this.

[sourcecode language="XML" autolinks="false"]
CommandAction="LibraryManager\Pages\LibraryDetails.aspx"
[/sourcecode]

In fact, attempting to deploy the solution to Office 365 with this default CommandAction results in failure; the App package doesn’t pass validation.

To understand why the failure occurs, it’s important to remember the isolation that exists between the Host Web and the App Web. To illustrate how the Host Web and App Web differ from simply a hostname perspective, consider the project I’ve been working on as an example:

  • Host Web: https://mcdonough.sharepoint.com/sites/dev2
  • App Web: https://mcdonough-bc920dbeb7ecd3.sharepoint.com/sites/dev2

Notice that although the /sites/dev2 relative path portion is the same for both the Host Web and App Web URLs, the hostname portion of each URL is different. This is by design, and it helps to enforce the logical separation between the Host Web and App Web – even though the App Web technically resides within the Host Web.

Looking again at the default CommandAction attribute reveals that its value is just an ASPX page that is identified with a relative URL. Rather than pointing to where we want it to point …

[sourcecode language="XML" autolinks="false"]
https://mcdonough-bc920dbeb7ecd3.sharepoint.com/sites/dev2/LibraryManager/Pages/LibraryDetails.aspx
[/sourcecode]

… it ends up pointing to a non-existent destination in the Host Web:

[sourcecode language="XML" autolinks="false"]
https://mcdonough.sharepoint.com/sites/dev2/LibraryManager/Pages/LibraryDetails.aspx
[/sourcecode]

And this is exactly what should happen. After all, the custom action is launched from within the Host Web, so a relative path specification should resolve to a location in the Host Web – not the location we actually want to target in the App Web.

Fixing the CommandAction

Thankfully, it isn’t a major undertaking to correct the CommandAction attribute value so that it points to the App Web instead of the Host Web. If you’ve worked with SharePoint at all in the past, then you may know that the key to making everything work (in this situation) is the judicious use of tokens.

What are tokens? In this case, tokens are specific string sequences that SharePoint parses at run-time and replaces with a value based on the run-time environment, action that was performed, associated list, or some other context-sensitive value that isn’t known at design-time.

To illustrate how this works, consider the default CommandAction attribute:

[sourcecode language="XML" autolinks="false"]
CommandAction="LibraryManager\Pages\LibraryDetails.aspx"
[/sourcecode]

Modifying the attribute as follows changes the destination URL of the button so that the user is redirected to the desired page in the App Web rather than the Host Web:

[sourcecode language="XML" autolinks="false"]
CommandAction="~appWebUrl/Pages/LibraryDetails.aspx"
[/sourcecode]

The ~appWebUrl token is replaced at run-time with the actual URL of the associated App Web (https://mcdonough-bc920dbeb7ecd3.sharepoint.com/sites/dev2) to build the desired destination link.

SharePoint defines a whole host of URL strings and tokens for use in Apps. As it turns out, a fairly complete list has been aggregated and defined in a handy little page on MSDN. Thanks to the always-helpful Andrew Clark for pointing this out to me; I hadn’t realized Microsoft had pulled so many tokens together in one place!

Where the Headaches Begin

Since tokens are the key to inserting context-dependent values at run-time, you’d think they’d have been implemented and usable anywhere a developer needs to cross the Host Web / App Web divide.

Apparently not. To be more specific (and fair), I should instead say “not consistently.”

Since this blog post is about image limitations with custom ribbon buttons, you can probably guess where I’m headed with all of this. So, let’s take a look at the Image16by16 and Image32by32 attributes.

By default, the Image16by16 and Image32by32 attributes point to a location in the _layouts folder for the farm. Each attribute value references an image that is nothing more than a little round orange dot:

[sourcecode language="XML" autolinks="false"]
Image32by32="_layouts/15/images/placeholder32x32.png"
Image16by16="_layouts/15/images/placeholder16x16.png"
[/sourcecode]

Much like the CommandAction attribute, it stands to reason that developers would want to replace the placeholder image attribute values with URLs of their choosing. In my case, I wanted to use a set of images I was deploying with the rest of the application assets in my App Web. So, I updated my image attributes to look like the following:

[sourcecode language="XML" autolinks="false"]
Image32by32="~appWebUrl/Images/sharepoint-library-analyzer_32x32-a.png"
Image16by16="~appWebUrl/Images/sharepoint-library-analyzer_16x16-a.png"
[/sourcecode]

I deployed my App to my Office 365 Preview tenant, watched my browser launch into my App Web, hopped back to the Host Web, navigated to a document library, and looked at the toolbar. I was not happy with what I saw (on the left).

The image I had specified for use by the button wasn’t being used. All I had was a broken image link.

Examining the properties for the broken image quickly confirmed my fear: the ~appWebUrl token was not being processed for either of the Image32by32 or Image16by16 attributes. The token was being output directly into the image references.

I tried changing the image attributes to reference the App Web a couple of different ways (and with a couple of different tokens), but none of them seemed to work.

I did a little digging, and I saw that Chris Hopkins (over at Microsoft) covered this very topic for sandboxed solutions in SharePoint 2010. In Chris’ article, though, it was clear that tokens such as ~site and ~sitecollection were valid for use by the Image32by32 and Image16by16 attributes.

To see if I was losing my mind, I decided to try a little experiment. Although I knew it wouldn’t solve my particular problem, I decided to try using the ~site token just to see if it would be parsed properly. Lo and behold, it was parsed and replaced. So, ~site worked … but ~appWebUrl didn’t?

That didn’t make any sense. If it isn’t possible to use the ~appWebUrl token, how are developers supposed to reference custom images for the buttons they deploy in their Apps? Without the ~appWebUrl, there’s no practical way to reference an item in the App Web from the Host Web.

Token Forensics

When I find myself in situations where I’m holding results that don’t make sense, I can’t help myself: I pull out Reflector and start poking around for clues inside SharePoint’s plumbing. If I dig really hard, sometimes I find answers to my questions.

After some poking around with Reflector, I discovered that the “journey to enlightenment” (in this case) started with the RegisterCommandUIWithRibbon method on the SPCustomActionElement type. It is in this method that the Image16by16 and Image32by32 attributes are read in from the XML file in which they are defined. Before assignment for use, they’re passed through a couple of methods that carry out token parsing:

  • ReplaceUrlTokens on the SPCustomActionElement type
  • UrlFromPrefixedUrlCore on the SPUtility type

Although these methods together are capable of recognizing and replacing many different token types (including some I hadn’t seen listed in existing documentation; e.g., ~siteCollectionLayouts), none of the new SharePoint 2013 tokens, like the ~appWebUrl and ~remoteAppUrl tokens, appear in these methods.

Interestingly enough, I didn’t see any noteworthy differences between the path of execution for processing image attributes and the sequence of calls through which CommandAction attributes are handled in the RegisterCommandUIExtension method of the SPRibbon type. The RegisterCommandUIExtension method eventually “punches down” to the ReplaceUrlTokens and UrlFromPrefixedUrlCore methods, as well.

The differences I was seeing in how tokens were handled between the CommandAction and Image32by32/Image16by16 attributes had to be originating somewhere else – not in the processing of the custom action XML.

Deployment Modifications

After some more digging in Reflector to determine where the ~appWebUrl actually showed up and was being processed, I came across evidence suggesting that “something special” was happening on App deployment rather than at run-time. The ~appWebUrl token was being processed as part of a BuildTokenMap call in the SPAppInstance type; looking at the call chain for the BuildTokenMap method revealed that it was getting called during some App deployment operations processing.

App Deployment Hierarchy to BuildTokenMap

If changes were taking place on App deployment, then I had a hunch I might find what I was looking for in the content database housing the Host Web to which my App was being deployed. After all, Apps get deployed to App Webs that reside within a Host Web, and Host Webs live in content databases … so, all of the pieces of my App had to exist (in some form) in the content database. 

I fired-up Visual Studio, stopped deploying to Office 365, and started deploying my App to a site collection on my local SharePoint 2013 VM farm. Once my App was deployed, I launched SQL Management Studio on the SQL Server housing the SharePoint databases and began poking around inside the content database where the target site collection was located.

Brief aside: standard rules still apply in SharePoint 2013, so I’ll mention them here for those who may not know them. Don’t poke around inside content databases (or any other databases) in live SharePoint environments you care about. As with previous versions, querying and working against live databases may hurt performance and lead to bigger problems. If you want to play with the contents of a SharePoint database, either create a SQL snapshot of it (and work against the snapshot) or mount a backup copy of the database in a test environment.

I wasn’t sure what I was looking for, so I quickly examined the contents of each table in the content database. I hit paydirt when I opened up the CustomActions table. It had a single row, and the Properties field of that row contained some XML that looked an awful lot like the Elements.xml that defined my custom action:

[sourcecode language="XML" autolinks="false"]
<?xml version="1.0" encoding="utf-16"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
<CustomAction Title="Invoke 'LibraryDetailsCustomAction' action" Id="4f835c73-a3ab-4671-b142-83304da0639f.LibraryDetailsCustomAction" Location="CommandUI.Ribbon" RegistrationId="101" RegistrationType="List" Sequence="10001">
<CommandUIExtension xmlns="http://schemas.microsoft.com/sharepoint/">
<!--
Update the UI definitions below with the controls and the command actions
that you want to enable for the custom action.
-->
<CommandUIDefinitions>
<CommandUIDefinition Location="Ribbon.Library.Actions.Controls._children">
<Button Id="Ribbon.Library.Actions.LibraryDetailsCustomActionButton" Alt="Examine Library Details" Sequence="100" Command="Invoke_LibraryDetailsCustomActionButtonRequest" LabelText="Examine Library Details" Image16by16="~site/Images/sharepoint-library-analyzer_16x16-a.png" Image32by32="~appWebUrl/Images/sharepoint-library-analyzer_32x32-a.png" TemplateAlias="o1"/>
</CommandUIDefinition>
</CommandUIDefinitions>
<CommandUIHandlers>
<CommandUIHandler Command="Invoke_LibraryDetailsCustomActionButtonRequest" CommandAction="javascript:LaunchApp('709d9f25-bb39-4e6a-97d5-6e1d7c855f38', 'i:0i.t|ms.sp.int|a441fa2c-8c5f-4152-9085-3930239ab21b@9db0b916-0dd6-4d6c-be49-41f72f5dfc02', '~appWebUrl\u002fPages\u002fLibraryDetails.aspx?ListID={ListId}\u0026SiteUrl={SiteUrl}', null);"/>
</CommandUIHandlers>
</CommandUIExtension>
</CustomAction>
</Elements>
[/sourcecode]

There were some differences, though, between the Elements.xml I had defined earlier and what actually appeared in the Properties field. I narrowed my focus to the differences that existed between the non-working Image32by32/Image16by16 attributes

[sourcecode language="XML" autolinks="false"]
Image16by16="~appWebUrl/Images/sharepoint-library-analyzer_16x16-a.png"
Image32by32="~appWebUrl/Images/sharepoint-library-analyzer_32x32-a.png"
[/sourcecode]

… and the CommandAction attribute.

[sourcecode language="XML" autolinks="false"]
CommandAction="javascript:LaunchApp('709d9f25-bb39-4e6a-97d5-6e1d7c855f38', 'i:0i.t|ms.sp.int|a441fa2c-8c5f-4152-9085-3930239ab21b@9db0b916-0dd6-4d6c-be49-41f72f5dfc02', '~appWebUrl\u002fPages\u002fLibraryDetails.aspx', null);"
[/sourcecode]

As suspected, some deployment-time processing had been performed on the CommandAction attribute but not on the image attributes. The CommandAction still contained an ~appWebUrl token, but it was wrapped as a parameter to a LaunchApp JavaScript function that is executed client-side in the browser.

Jumping into my App in Internet Explorer and opening IE’s debugging tools via <F12>, I did a search for the LaunchApp function within the referenced scripts and found it in the core.js library/script. Examining the LaunchApp function revealed that it called the LaunchAppInternal function; LaunchAppInternal, in turn, called back to the SharePoint server’s /_layouts/15/appredirect.aspx page with the parameters that were supplied to the original LaunchApp method – including the URL with the ~appWebUrl token.

To complete the journey, I opened up the Microsoft.SharePoint.ApplicationPages.dll assembly back on the server and dug into the AppRedirectPage class that provides the code-behind support for the AppRedirect.aspx page. When the AppRedirect.aspx page is loaded, control passes to the page’s OnLoad event and then to the HandleRequest method. HandleRequest then uses the ReplaceAppTokensAndFixLaunchUrl method of the SPTenantAppUtils class to process tokens.

The ReplaceAppTokensAndFixLaunchUrl method is noteworthy because it includes parsing and replacement support for the ~appWebUrl token, the ~remoteAppUrl token, and other tokens that were introduced with SharePoint 2013. The deployment-time processing that is performed on the CommandAction attribute is what ultimately wires-up the CommandAction to the ReplaceAppTokensAndFixLaunchUrl method. The Image32by32 and Image16by16 attributes don’t get this treatment, and so the new 2013 tokens (like ~appWebUrl) can’t be used by these attributes.
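To make that asymmetry concrete, here’s a toy illustration of the behavior just described. This is my own sketch, not SharePoint’s actual code (the real logic lives in methods like ReplaceUrlTokens and ReplaceAppTokensAndFixLaunchUrl); it just models which tokens get resolved for which attributes:

[sourcecode language="Python"]
# Toy model of the token-handling asymmetry described above. Illustrative only.
CLASSIC_TOKENS = {"~site": "https://mcdonough.sharepoint.com/sites/dev2"}
APP_TOKENS = {"~appWebUrl": "https://mcdonough-bc920dbeb7ecd3.sharepoint.com/sites/dev2"}

def resolve_image_attribute(url):
    # Image32by32/Image16by16: only the classic tokens are ever replaced.
    for token, value in CLASSIC_TOKENS.items():
        url = url.replace(token, value)
    return url  # an ~appWebUrl token survives untouched -> broken image link

def resolve_command_action(url):
    # CommandAction: deployment-time wiring also resolves the new app tokens.
    for token, value in {**CLASSIC_TOKENS, **APP_TOKENS}.items():
        url = url.replace(token, value)
    return url

print(resolve_image_attribute("~appWebUrl/Images/icon_32x32.png"))    # token survives
print(resolve_command_action("~appWebUrl/Pages/LibraryDetails.aspx")) # token resolved
[/sourcecode]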

What About the Image32by32 and Image16by16 Attributes?

Now that some of the key differences in processing between the CommandAction attribute and the image attributes have been identified, let me jump back to the original problem. Is there anything that can be done with the Image32by32 and Image16by16 attributes specified in a custom action to get them to reference assets that exist in the App Web? Since tokens like ~appWebUrl (and ~remoteAppUrl for all you Autohosted and Provider-hosted application builders) aren’t parsed and processed, are there alternatives?

My response is a somewhat wishy-washy “doubtful.” In my estimation, you’d need to hack SharePoint with something like a javascript: tag for an image attribute (which, interestingly enough, doesn’t appear to be expressly blocked), find some way to obtain the App Web URL base, formulate the proper path to the image, and more. If it could be done, you’d be gaming SharePoint … and I could easily see a cumulative update or service pack breaking this type of elaborate work-around.

The safest and most pragmatic way to handle this situation, it seems, is to use absolute URLs for the desired image resources and forget about deploying them to the App Web altogether. For example, I placed the images I was trying to use on the ribbon buttons here on my blog and referenced them as follows:

[sourcecode language="XML" autolinks="false"]
Image16by16="http://sharepointinterface.com/wp-content/uploads/2013/01/sharepoint-library-analyzer_16x16-a.png"
Image32by32="http://sharepointinterface.com/wp-content/uploads/2013/01/sharepoint-library-analyzer_32x32-a.png"
[/sourcecode]

I had some initial concerns that I might inadvertently bump into some security boundaries, such as those that sometimes arise when an asset is referenced via HTTP from a site that is being served up under HTTPS. This didn’t prove to be the case, however. I tested the use of absolute URLs in both my development VM environment (served up under HTTP) and through one of my Office 365 Preview site collections (accessed via HTTPS), and no browser security warnings popped up. The target image appeared on the custom button as desired (shown on the left) in both cases.

Although the use of absolute URLs will work in many cases, I have to admit that I’m still not a big fan of this approach – especially for SharePoint-hosted apps like the one I’ve been working on. Even though Office 365 entails an “always connected” scenario, I can easily envision on-premises deployment environments that are taken offline some or all of the time. I can also see (and have seen in the past) SharePoint environments where unfettered Internet access is the exception rather than the rule.

In these environments, users won’t see image buttons at all – just blank placeholders or broken image links. After all, without Internet access there is no way to resolve and download the referenced button images.

Wrapping It Up

At some point in the future, I hope that Microsoft considers extending token parsing for URL-based attributes like Image32by32 and Image16by16 to include the ~appWebUrl, ~remoteAppUrl, and other new tokens used by the SharePoint 2013 App Model. In the meantime, though, you should probably consider getting an easily accessible online location (SkyDrive, Dropbox, a blog, etc.) for images and other similar assets if you’re building apps under the new SharePoint 2013 App Model and intend to use custom actions.

Update (1/27/2013)

I need to issue a couple of updates and clarifications. First, I need to be very clear and state that SharePoint-hosted apps were the focus of this post. In a SharePoint-hosted app, what I’ve written is correct: there is no processing of “new” 2013 tokens (like ~appWebUrl and ~remoteAppUrl) for the Image32by32 and Image16by16 attributes. Interestingly enough, though, there does appear to be processing of the ~remoteAppUrl token in the Image32by32 and Image16by16 attributes specifically for the other application types, such as provider-hosted apps and autohosted apps. Jamie Rance mentioned this in a comment (below), and I verified it with an autohosted app that I quickly spun up.

I double-checked to see if the ~remoteAppUrl token would even be recognized/processed (despite the lack of a remote web component) for SharePoint-hosted apps, and it is not … nor is the ~appWebUrl token processed for autohosted apps. The selective implementation of only the ~remoteAppUrl token for certain app types has me baffled; I hope that we’ll eventually see some clarification or changes. If you’re building provider-hosted or autohosted apps, though, this does give you a way to redirect image requests to your remote web application rather than an absolute endpoint. Thank you, Jamie, for the information!

And now for some good news for SharePoint-hosted app creators. Prior to writing this post, I had posted a question about the tokens over in the SharePoint Stack Exchange forums. At the time I wrote this post, there hadn’t been any activity to suggest that a solution or workaround existed. F. Aquino recently supplied an incredibly creative answer, though, that involves using a data URI to Base64-encode the images and package them directly into the Image32by32 and Image16by16 attributes themselves! Although this means that some image pre-processing will be required to package images, it gets around the requirement of being “always-connected.” This is an awesome technique, and I’ll certainly be adding it to my arsenal. Thank you, F. Aquino!
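If you want to try F. Aquino’s approach yourself, the pre-processing step is straightforward. Here’s a minimal sketch of how it might be done (my own illustration, assuming the button images sit next to the script as local PNG files):

[sourcecode language="Python"]
import base64
from pathlib import Path

def to_data_uri(png_path):
    """Base64-encode a PNG and wrap it as a data URI that can be embedded
    directly in the Image16by16/Image32by32 attribute values."""
    encoded = base64.b64encode(Path(png_path).read_bytes()).decode("ascii")
    return "data:image/png;base64," + encoded

# Emit attribute values that are ready to paste into Elements.xml.
print(f'Image16by16="{to_data_uri("sharepoint-library-analyzer_16x16-a.png")}"')
print(f'Image32by32="{to_data_uri("sharepoint-library-analyzer_32x32-a.png")}"')
[/sourcecode]

Keep in mind that the encoded strings get large quickly, so this works best for small ribbon-sized images like the 16×16 and 32×32 PNGs used here.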

References and Resources

  1. MSDN: How to: Create custom actions to deploy with apps for SharePoint
  2. MSDN: Apps for SharePoint overview
  3. MSDN: Customizing and Extending the SharePoint 2010 Server Ribbon
  4. MSDN: How to: Complete basic operations using SharePoint 2013 client library code
  5. MSDN: How to: Complete basic operations using SharePoint 2013 REST endpoints
  6. MSDN: URL strings and tokens in apps for SharePoint
  7. Twitter: Andrew Clark
  8. Chris Hopkins’ Visilog: Using images on your ribbon buttons from a sandboxed solution in SharePoint 2010
  9. Software: Red Gate’s Reflector
  10. Service: Microsoft’s SkyDrive
  11. Service: Dropbox

Whaddaya Mean I Can’t Deploy My SharePoint App?

After applying some recently-released patches for SharePoint 2013, my farm’s App infrastructure went belly-up. This post describes my troubleshooting and resolution.

I’ve been doing a lot of work with the new SharePoint 2013 App Model in the last few months. Specifically, I’ve been working on a free tool (for Idera) that will be going into the SharePoint App Marketplace sometime soon. The tool itself is not too terribly complicated – just a SharePoint-hosted app that will allow users to analyze library properties, compare library configuration settings, etc.

The development environment that I was using to put the new application together had been humming along just fine … until today. It seems that I tempted fate today by applying a handful of RTM patches to my environment.

What Happened?

I’d heard that some patches for SharePoint 2013 RTM had been released, so I pulled them down and applied them to my development environment. Those patches were:

  • KB2737983
  • KB2752001
  • KB2752058
  • KB2760355

After all binaries had been installed and a reboot was performed, I ran the SharePoint 2013 Products Configuration Wizard. The wizard ran and completed without issue, Central Administration popped-up afterwards, and life seemed to be going pretty well.

I went back to working on my SharePoint-hosted app, and that’s when things went south. When I tried to deploy the application to my development site collection from Visual Studio 2012, it failed with the following error message:

Error occurred in deployment step ‘Install app for SharePoint’: We’re sorry, we weren’t able to complete the operation, please try again in a few minutes. If you see this message repeatedly, contact your administrator.

Okay, I thought, that’s odd. Let’s give it a second.

Three failed redeploys later, I rebooted the VM to see if that might fix things. No luck.

Troubleshooting

My development wasn’t moving forward until I figured out what was going on, so I did a quick hunt online to see if anyone had encountered this problem. The few entries I found indicated that I should verify my App settings in Central Administration, so I tried that. Strangely, I couldn’t even get those settings to come up – just error pages.

All of this was puzzling. Remember: my farm was doing just fine with the entire app infrastructure just a day earlier, and all of a sudden things were dead in the water. Something had to have happened as a result of the patches that were applied.

Not finding any help on the Internet, I fired-up ULSViewer to see what was happening as I attempted to access the farm App settings from Central Administration. These were the errors I was seeing:

Insufficient SQL database permissions for user ‘Name: SPDC\svcSpServices SID: S-1-5-21-1522874658-601840234-4276112424-1115 ImpersonationLevel: None’ in database ‘SP2013_AppManagement’ on SQL Server instance ‘SpSqlAlias’. Additional error information from SQL Server is included below.  The EXECUTE permission was denied on the object ‘proc_GetDataRange’, database ‘SP2013_AppManagement’, schema ‘dbo’.

Seeing that my service account (SPDC\svcSpServices) didn’t have the access it needed to run the proc_GetDataRange stored procedure left me scratching my head. I didn’t know what sort of permissions the service account actually required or how they were specifically granted. So, I hopped over to my SQL Server to see if anything struck me as odd or out-of-place.

Looking at the SP2013_AppManagement database, I saw that members of the SPDataAccess role had rights to execute the proc_GetDataRange stored procedure. SPDC\svcSpServices didn’t appear to be a direct member of that role (as far as I could tell), so I added it. Bazinga! Adding the account to the role permitted me to once again review the App settings in Central Administration.
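As an aside, the same role change can be scripted rather than clicked through SQL Server Management Studio. Here’s a minimal sketch (my own, not something SharePoint supplies) using the pyodbc package and the classic sp_addrolemember procedure; the driver, server alias, database, and account names are this farm’s values and would need adjusting for another environment:

[sourcecode language="Python"]
import pyodbc  # assumes a SQL Server ODBC driver is installed on the machine

# Connect to the App Management database using Windows authentication.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=SpSqlAlias;DATABASE=SP2013_AppManagement;Trusted_Connection=yes;",
    autocommit=True,
)

# Put the service account (back) into the SPDataAccess database role.
conn.cursor().execute(
    r"EXEC sp_addrolemember N'SPDataAccess', N'SPDC\svcSpServices';"
)
conn.close()
[/sourcecode]

That said, re-provisioning the service application (described below) turned out to be the more reliable way to get all of the permissions straightened out at once.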

Unfortunately, I still couldn’t deploy my Apps from Visual Studio. Going back to the ULS logs, I found the following:

Insufficient SQL database permissions for user ‘Name: NT AUTHORITY\IUSR SID: S-1-5-17 ImpersonationLevel: Impersonation’ in database ‘SP2013_AppManagement’ on SQL Server instance ‘SpSqlAlias’. Additional error information from SQL Server is included below.  The EXECUTE permission was denied on the object ‘proc_AM_PutAppPrincipal’, database ‘SP2013_AppManagement’, schema ‘dbo’.

It was obvious to me that more than just a single account was out of whack, since the proc_AM_PutAppPrincipal stored procedure was now in play. Rather than try to manually correct all possible permission issues, I decided to try and get SharePoint to do the heavy lifting for me.

Resolution

Knowing that the problem was tied to the Application Management Service, I figured that one (possible) easy way to resolve the problem was to simply have SharePoint reprovision the Application Management Service service application. To do this, I carried out the following:

  1. Deleted my App Management Service Application instance (which I happened to call “Application Management Service”) in Central Administration. I checked the box for Delete data associated with the Service Applications when it appeared to ensure that I got a new app management database.
  2. Once the service application was deleted, I created a new App Management Service service application. I named it the same thing I had called it before (“Application Management Service”) and re-used the same database name I had been using (“SP2013_AppManagement”). I re-used the shared services application pool I had been using previously, too.

After completing these steps, I was able to successfully deploy my application to the development site collection through Visual Studio. I no longer saw the stored procedure access errors appearing in the ULS logs.

What Happened?

I don’t know what happened exactly, but what I observed seems to suggest that one of the patches I applied messed with the App Management service application database. Specifically, rights and permissions that one or more accounts possessed were somehow revoked by removing those accounts from the SPDataAccess role. Additional role and/or permission changes could have been made, as well – I just don’t know.

Once everything was running again, I went back into my SQL Server and had a look at the (new) SP2013_AppManagement database. Examining the role membership for SPDC\svcSpServices (which was one of the accounts that was blocked from accessing stored procedures earlier), I saw that the account had been put (back) into the SPDataAccess role. This seemed to confirm my observation that somehow things became “unwired” during the patching and/or configuration wizard run process.

References and Resources

  1. MSDN: Apps for SharePoint overview
  2. Company: Idera
  3. Microsoft: SharePoint App Marketplace
  4. MSDN: How to: Create a basic SharePoint-hosted app
  5. SharePoint 2013 Update: KB2737983
  6. SharePoint 2013 Update: KB2752001
  7. SharePoint 2013 Update: KB2752058
  8. SharePoint 2013 Update: KB2760355
  9. MSDN: ULSViewer