Microsoft Teams Ownership Changes – The Bulk PowerShell Way

As someone who spends most days working with (and thinking about) SharePoint, there’s one thing I can say with certainty: Microsoft Teams has taken off like a rocket bound for low Earth orbit. It’s rare these days for me to discuss SharePoint without some mention of Teams.

I’m confident that many of you know the reason for this. Besides serving as a replacement for Skype, Teams bases many of its back-end support systems and dependent service implementations in – you guessed it – SharePoint Online (SPO).

As one might expect, any technology product that is rapidly evolving and seeing enterprise adoption reveals gaps and imperfect implementations as it grows – and Teams is no different. I’m confident that Teams will reach a point of maturity and eventually address the shortcomings people are currently finding, but until it does, there will be those of us who attempt to address the gaps we find with the tools at our disposal.

Administrative Pain

One of those Teams pain points we discussed recently on the Microsoft Community Office Hours webcast was the challenge of changing ownership for a large number of Teams at once. We took on a question from Mark Diaz who posed the following:

May I ask how do you transfer the ownership of all Teams that a user is managing if that user is leaving the company? I know how to change the owner of the Teams via Teams admin center if I know already the Team that I need to update. Just consulting if you do have an easier script to fetch what teams he or she is an owner so I can add this to our SOP if a user is leaving the company.

Mark Diaz

We discussed Mark’s question (amidst our normal joking around) and posited that PowerShell could provide an answer. And since I like to goof around with PowerShell and scripting, I agreed to take on Mark’s question as “homework.”

The rest of this post is my direct response to Mark’s question and request for help. I hope this does the trick for you, Mark!

Teams PowerShell

Anyone who has spent any time as an administrator in the Microsoft ecosystem of cloud offerings knows that Microsoft is very big on automating administrative tasks with PowerShell. And being a cloud workload in that ecosystem, Teams is no different.

Microsoft Teams has its own PowerShell module, and it can be installed and referenced in your script development environment in a number of different ways that Microsoft has documented. This MicrosoftTeams module is a prerequisite for some of the cmdlets you’ll see me use a bit further down in this post.
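If you haven’t already worked with the module, getting it in place is typically a two-liner. This is just a minimal sketch of one of the documented approaches (installing for the current user only; adjust the scope to suit your environment):

Install-Module -Name MicrosoftTeams -Scope CurrentUser
Import-Module MicrosoftTeams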

The MicrosoftTeams module isn’t the only way to work with Teams in PowerShell, though. I would have loved to build my script upon the Microsoft Graph PowerShell module … but it’s still in what is termed an “early preview” release. Given that bit of information, I opted to use the “older but safer/more mature” MicrosoftTeams module.

The Script: ReplaceTeamsOwners.ps1

Let me just cut to the chase. I put together my ReplaceTeamsOwners.ps1 script to address the specific scenario Mark Diaz asked about. The script accepts a handful of parameters (this next bit lifted straight from the script’s internal documentation):

.PARAMETER currentTeamsOwner
    A string that contains the UPN of the user who will be replaced in the 
    ownership changes. This parameter is mandatory. Example: bob@EvilCorp.com

.PARAMETER newTeamsOwner
    A string containing the UPN of the user who will be assigned as the new
    owner of Teams teams (i.e., in place of currentTeamsOwner). Example:
    jane@AcmeCorp.com
    
.PARAMETER confirmEachUpdate
    A switch parameter that, if specified, requires the user executing the
    script to confirm each ownership change before it happens; this helps
    ensure that only the desired changes get made.

.PARAMETER isTest
    A boolean that indicates whether or not the script will actually run
    against and/or make changes to Teams teams and associated structures.
    This value defaults to TRUE, so actual script runs must explicitly set
    isTest to FALSE to effect changes to Teams team ownership.

So both currentTeamsOwner and newTeamsOwner must be specified, and that’s fairly intuitive to understand. If the -confirmEachUpdate switch is supplied, you’ll be prompted before each possible ownership change, allowing you to agree (or not) on a case-by-case basis.

The one parameter that might be a little confusing is the script’s isTest parameter. If unspecified, this parameter defaults to TRUE … and this is something I’ve been putting in my scripts for ages. It’s sort of like PowerShell’s -WhatIf switch in that it allows you to understand the path of execution without actually making any changes to the environment and targeted systems/services. In essence, it’s a “dry run.”

The difference between my isTest and PowerShell’s -WhatIf is that you have to explicitly set isTest to FALSE to “run the script for real” (i.e., make changes) rather than remembering to include -WhatIf to ensure that changes aren’t made. If someone forgets about the isTest parameter and runs my script, no worries – the script is in test mode by default. My scripts fail safe by default rather than relying on an admin’s memory the way -WhatIf does.
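To make the distinction concrete, here’s how calls to the script’s ReplaceOwners function (defined below) would look in each mode, using the example UPNs from the documentation:

# Dry run: isTest defaults to TRUE, so this reports what would change but changes nothing
ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com

# Live run: changes are only made when isTest is explicitly set to FALSE
ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com -isTest $false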

And now … the script!

<#  

.SYNOPSIS  
    This script is used to replace all instances of a Teams team owner with the
    identity of another account. This might be necessary in situations where a
    user leaves an organization, administrators change, etc.

.DESCRIPTION  
    Anytime a Microsoft Teams team is created, an owner must be associated with
    it. Oftentimes, the team owner is an administrator or someone who has no
    specific tie to the team.

    Administrators tend to change over time; at the same time, teams (as well as
    other IT "objects", like SharePoint sites) undergo transitions in ownership
    as an organization evolves.

    Although it is possible to change the owner of a Microsoft Teams team through
    the M365 Teams admin center, the process only works for one team at a time. If
    someone leaves an organization, it's often necessary to transfer all objects
    for which that user had ownership.

    That's what this script does: it accepts a handful of parameters and provides
    an expedited way to transition ownership of Teams teams from one user to 
    another very quickly.

.PARAMETER currentTeamsOwner
    A string that contains the UPN of the user who will be replaced in the 
    ownership changes. This parameter is mandatory. Example: bob@EvilCorp.com

.PARAMETER newTeamsOwner
    A string containing the UPN of the user who will be assigned as the new
    owner of Teams teams (i.e., in place of currentTeamsOwner). Example:
    jane@AcmeCorp.com
    
.PARAMETER confirmEachUpdate
    A switch parameter that, if specified, requires the user executing the
    script to confirm each ownership change before it happens; this helps
    ensure that only the desired changes get made.

.PARAMETER isTest
    A boolean that indicates whether or not the script will actually run
    against and/or make changes to Teams teams and associated structures.
    This value defaults to TRUE, so actual script runs must explicitly set
    isTest to FALSE to effect changes to Teams team ownership.
	
.NOTES  
    File Name  : ReplaceTeamsOwners.ps1
    Author     : Sean McDonough - sean@sharepointinterface.com
    Last Update: September 2, 2020

#>
Function ReplaceOwners {
    param(
        [Parameter(Mandatory=$true)]
        [String]$currentTeamsOwner,
        [Parameter(Mandatory=$true)]
        [String]$newTeamsOwner,
        [Parameter(Mandatory=$false)]
        [Switch]$confirmEachUpdate,
        [Parameter(Mandatory=$false)]
        [Boolean]$isTest = $true
    )

    # Perform prerequisite checks before attempting any real work.
    Clear-Host
    Write-Host ""
    Write-Host "Attempting prerequisite operations ..."
    $paramCheckPass = $true
    
    # First - see if we have the MSOnline module installed.
    try {
        Write-Host "- Checking for presence of MSOnline PowerShell module ..."
        $checkResult = Get-InstalledModule -Name "MSOnline"
        if ($null -ne $checkResult) {
            Write-Host "  - MSOnline module already installed; now importing ..."
            Import-Module -Name "MSOnline" | Out-Null
        }
        else {
            Write-Host "- MSOnline module not installed. Attempting installation ..."            
            Install-Module -Name "MSOnline" | Out-Null
            $checkResult = Get-InstalledModule -Name "MSOnline"
            if ($null -ne $checkResult) {
                Import-Module -Name "MSOnline" | Out-Null
                Write-Host "  - MSOnline module successfully installed and imported."    
            }
            else {
                Write-Host ""
                Write-Host -ForegroundColor Yellow "  - MSOnline module not installed or loaded."
                $paramCheckPass = $false            
            }
        }
    } 
    catch {
        Write-Host -ForegroundColor Red "- Unexpected problem encountered with MSOnline import attempt."
        $paramCheckPass = $false
    }

    # Our second order of business is to make sure we have the PowerShell cmdlets we need
    # to execute this script.
    try {
        Write-Host "- Checking for presence of MicrosoftTeams PowerShell module ..."
        $checkResult = Get-InstalledModule -Name "MicrosoftTeams"
        if ($null -ne $checkResult) {
            Write-Host "  - MicrosoftTeams module installed; will now import it ..."
            Import-Module -Name "MicrosoftTeams" | Out-Null
        }
        else {
            Write-Host "- MicrosoftTeams module not installed. Attempting installation ..."            
            Install-Module -Name "MicrosoftTeams" | Out-Null
            $checkResult = Get-InstalledModule -Name "MicrosoftTeams"
            if ($null -ne $checkResult) {
                Import-Module -Name "MicrosoftTeams" | Out-Null
                Write-Host "  - MicrosoftTeams module successfully installed and imported."    
            }
            else {
                Write-Host ""
                Write-Host -ForegroundColor Yellow "  - MicrosoftTeams module not installed or loaded."
                $paramCheckPass = $false            
            }
        }
    } 
    catch {
        Write-Host -ForegroundColor Yellow "- Unexpected problem encountered with MicrosoftTeams import attempt."
        $paramCheckPass = $false
    }

    # Have we taken care of all necessary prerequisites?
    if ($paramCheckPass) {
        Write-Host -ForegroundColor Green "Prerequisite check passed. Press <ENTER> to continue."
        Read-Host
    } else {
        Write-Host -ForegroundColor Red "One or more prerequisite operations failed. Script terminating."
        Exit
    }

    # We can now begin. First step will be to get the user authenticated so they can
    # actually do something (and we'll have a tenant context).
    Clear-Host
    try {
        Write-Host "Please authenticate to begin the owner replacement process."
        $creds = Get-Credential
        Write-Host "- Credentials gathered. Connecting to Azure Active Directory ..."
        Connect-MsolService -Credential $creds | Out-Null
        Write-Host "- Now connecting to Microsoft Teams ..."
        Connect-MicrosoftTeams -Credential $creds | Out-Null
        Write-Host "- Required connections established. Proceeding with script."
        
        # We need the list of AAD users to validate our target and replacement.
        Write-Host "Retrieving list of Azure Active Directory users ..."
        $currentUserUPN = $null
        $currentUserId = $null
        $currentUserName = $null
        $newUserUPN = $null
        $newUserId = $null
        $newUserName = $null
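        # Note: Get-MsolUser caps the number of results it returns by default (500).
        # In a larger tenant, consider adding -All so that every user comes back.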
        $allUsers = Get-MsolUser
        Write-Host "- Users retrieved. Validating ID of current Teams owner ($currentTeamsOwner)"
        $currentAADUser = $allUsers | Where-Object {$_.SignInName -eq $currentTeamsOwner}
        if ($null -eq $currentAADUser) {
            Write-Host -ForegroundColor Red "- Current Teams owner could not be found in Azure AD. Halting script."
            Exit
        } 
        else {
            $currentUserUPN = $currentAADUser.UserPrincipalName
            $currentUserId = $currentAADUser.ObjectId
            $currentUserName = $currentAADUser.DisplayName
            Write-Host "  - Current user found. Name='$currentUserName', ObjectId='$currentUserId'"
        }
        Write-Host "- Now Validating ID of new Teams owner ($newTeamsOwner)"
        $newAADUser = $allUsers | Where-Object {$_.SignInName -eq $newTeamsOwner}
        if ($null -eq $newAADUser) {
            Write-Host -ForegroundColor Red "- New Teams owner could not be found in Azure AD. Halting script."
            Exit
        }
        else {
            $newUserUPN = $newAADUser.UserPrincipalName
            $newUserId = $newAADUser.ObjectId
            $newUserName = $newAADUser.DisplayName
            Write-Host "  - New user found. Name='$newUserName', ObjectId='$newUserId'"
        }
        Write-Host "Both current and new users exist in Azure AD. Proceeding with script."

        # If we've made it this far, then we have valid current and new users. We need to
        # fetch all Teams to get their associated GroupId values, and then examine each
        # GroupId in turn to determine ownership.
        $allTeams = Get-Team
        $teamCount = $allTeams.Count
        Write-Host
        Write-Host "Begin processing of teams. There are $teamCount total team(s)."
        foreach ($currentTeam in $allTeams) {
            
            # Retrieve basic identification information
            $groupId = $currentTeam.GroupId
            $groupName = $currentTeam.DisplayName
            $groupDescription = $currentTeam.Description
            Write-Host "- Team name: '$groupName'"
            Write-Host "  - GroupId: '$groupId'"
            Write-Host "  - Description: '$groupDescription'"

            # Get the users associated with the team and determine if the target user is
            # currently an owner of it.
            $currentIsOwner = $null
            $groupOwners = (Get-TeamUser -GroupId $groupId) | Where-Object {$_.Role -eq "owner"}
            $currentIsOwner = $groupOwners | Where-Object {$_.UserId -eq $currentUserId}

            # Do we have a match for the targeted user?
            if ($null -eq $currentIsOwner) {
                # No match; we're done for this cycle.
                Write-Host "  - $currentUserName is not an owner."
            }
            else {
                # We have a hit. Is confirmation needed?
                $performUpdate = $false
                Write-Host "  - $currentUserName is currently an owner."
                if ($confirmEachUpdate) {
                    $response = Read-Host "  - Change ownership to $newUserName (Y/N)?"
                    if ($response.Trim().ToLower() -eq "y") {
                        $performUpdate = $true
                    }
                }
                else {
                    # Confirmation not needed. Do the update.
                    $performUpdate = $true
                }
                
                # Change ownership if the appropriate flag is set
                if ($performUpdate) {
                    # We need to check if we're in test mode.
                    if ($isTest) {
                        Write-Host -ForegroundColor Yellow "  - isTest flag is set. No ownership change processed (although it would have been)."
                    }
                    else {
                        Write-Host "  - Adding '$newUserName' as an owner ..."
                        Add-TeamUser -GroupId $groupId -User $newUserUPN -Role owner
                        Write-Host "  - '$newUserName' is now an owner. Removing old owner ..."
                        Remove-TeamUser -GroupId $groupId -User $currentUserUPN -Role owner
                        Write-Host "  - '$currentUserName' is no longer an owner."
                    }
                }
                else {
                    Write-Host "  - No changes in ownership processed for $groupName."
                }
                Write-Host ""
            }
        }

        # We're done let the user know.
        Write-Host -ForegroundColor Green "All Teams processed. Script concluding."
        Write-Host ""

    } 
    catch {
        # One or more problems encountered during processing. Halt execution.
        Write-Host -ForegroundColor Red "-" $_
        Write-Host -ForegroundColor Red "- Script execution halted."
        Exit
    }
}

ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com -isTest $true -confirmEachUpdate

Don’t worry if you don’t feel like trying to copy and paste that whole block. I zipped up the script and you can download it here.

A Brief Script Walkthrough

I like to make an admin’s life as simple as possible, so the first part of the script (after the comments/documentation) is an attempt to import (and if necessary, first install) the PowerShell modules needed for execution: MSOnline and MicrosoftTeams.

From there, the current owner and new owner identities are verified before the script goes through the process of getting Teams and determining which ones to target. I believe that the inline comments are written in relatively plain English, and I include a lot of output to the host to spell out what the script is doing each step of the way.

The last line in the script is simply the invocation of the ReplaceOwners function with the parameters I wanted to use. You can leave this line in and change the parameters, take it out, or use the script however you see fit.
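If you’d rather drive it yourself, one hypothetical variation is to remove (or comment out) that final line, dot-source the script, and then call the function with confirmation prompts enabled:

# Load the function definition, then invoke it on your own terms
. .\ReplaceTeamsOwners.ps1
ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com -confirmEachUpdate -isTest $false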

Here’s a screenshot of a full script run in my family’s tenant (mcdonough.online) where I’m attempting to see which Teams my wife (Tracy) currently owns that I want to assume ownership of. Since the script is run with isTest being TRUE, no ownership is changed – I’m simply alerted to where an ownership change would have occurred if isTest were explicitly set to FALSE.

ReplaceTeamsOwners.ps1 execution run

Conclusion

So there you have it. I put this script together during a relatively slow afternoon. I tested it and made it as error-free as I could with the tenants I have access to, but you should still test it yourself (with an isTest value of TRUE, at least) before executing it “for real” against your production system(s).

And Mark D: I hope this meets your needs.

References and Resources

  1. Microsoft: Microsoft Teams
  2. buckleyPLANET: Microsoft Community Office Hours, Episode 24
  3. YouTube: Excerpt from Microsoft Community Office Hours Episode 24
  4. Microsoft Docs: Microsoft Teams PowerShell Overview
  5. Microsoft Docs: Install Microsoft Teams PowerShell
  6. Microsoft 365 Developer Blog: Microsoft Graph PowerShell Preview
  7. Microsoft Tech Community: PowerShell Basics: Don’t Fear Hitting Enter with -WhatIf
  8. Zipped Script: ReplaceTeamsOwners.zip

A Quick Look At The Get-PnPGroup Cmdlet And Its Operation

Why This Particular Topic?

I wouldn’t be surprised if some of you might be saying and asking, “Okay, that’s an odd choice for a post – even for you. Why?”

If you’re one of those people wondering, I would say that the sentiment and question are certainly fair. I’m actually writing this as part of my agreed-upon “homework” from last Monday’s broadcast of the Community Office Hours podcast (I think that’s what we’re calling them). If you’re not immediately familiar with this particular podcast and its purpose, I’ll take two seconds out to describe it.

I was approached one day by Christian Buckley (so many “interesting experiences” seem to start with Christian Buckley) about a thought he had. He wanted to start doing a series of podcasts each week to address questions, concerns, problems, and other “things” related to Office 365, Microsoft Teams, and all the O365/M365 associated workloads. He wanted to open it up as a panel-style podcast, and although anyone could join, he was interested in rounding up a handful of Microsoft MVPs to “staff” the podcast in an ongoing capacity. The idea sounded good to me, so I said “Count me in” even before he finished his thoughts and pitch.

I wasn’t sure what to expect initially … but we just finished our 22nd episode this past Monday, and we are still going strong. The cast on the podcast rotates a bit, but there are a few of us that are part of what I’d consider the “core group” of entertainers …

The podcast has actually become something I look forward to every Monday, especially with the pandemic and the general lack of in-person social contact I seem to have (or rather, don’t have). We do two sessions of the podcast every Monday: one for EMEA at 11:00am EST and the other for APAC at 9:00pm EST. You can find out more about the podcast in general through the Facebook group that’s maintained. Alternatively, you can send questions and things you’d like to see us address on the podcast to OfficeHours@CollabTalk.com.

If you don’t want (or have the time) to watch the podcast live, an archive of past episodes exists on Christian’s site, I maintain an active playlist of the recorded episodes on YouTube, and I’m sure there are other repositories available.

Ok, Got It. “Your Homework,” You Say?

The broadcasts we do normally have no fixed format or agenda, so we (mostly Christian) tend to pull questions and topics to address from the Facebook group and other places. And since the topics are generally so wide-ranging, it goes without saying that we have viable answers for some topics … but there are plenty of things we’re not good at (like telephony), and we freely tell you so.

Whenever we get to a question or topic that should be dealt with outside the scope of the podcast (oftentimes to do some research or contact a resource who knows the domain), we’ll avoid BSing too much … and someone will take the time to research the topic and report back the following week with what they found or put together. We’re trying to tackle a bunch of questions and topics each week, and none of us is well-versed in the entire landscape of M365. Things just change so darn fast these days ….

So, my “homework” from last week was one of these topics. And I’m trying to do one better than just report back to the podcast with an answer. The topic and research may be of interest to plenty of people – not just the person who asked about it originally. Since today is Sunday, I’m racing against the clock to put this together before tomorrow’s podcast episodes …

The Topic

Rather than trying to supply a summary of the topic, I’m simply going to share the post and then address it. The inquiry/post itself was made in the Office 365 Community Facebook group by Bilal Bajwa. Bilal is from Milwaukee, Wisconsin, and he was seeking some PowerShell-related help:

Being the lone developer in our group of podcast regulars (and having worked a fair bit with the SharePointPnP Cmdlets for PowerShell and PowerShell in general), I offered to take Bilal’s post for homework and come back with something to share. As of today (Sunday, 8/23/2020), the post is still sitting in the Facebook group without comment – something I hope to change once this blog post goes live in a bit.

SharePointPnP Cmdlets And The Get-PnPGroup Cmdlet Specifically

If you’re a SharePoint administrator and you’re unfamiliar with the SharePoint Patterns and Practices group and the PowerShell cmdlets they maintain, I’M giving YOU a piece of homework: read the Microsoft Docs to familiarize yourself with what they offer and how they operate. They will only help make your job easier. That’s right: RTFM. Few people truly enjoy reading documentation, but it’s hard to find a better and more complete reference medium.

If you are already familiar with the PnP cmdlets … awesome! As you undoubtedly know, they add quite a bit of functionality and extend a SharePoint administrator’s range of control and options within just about any SharePoint environment. The PnP group that maintains the cmdlets (and many other tools) is a collection of very bright and very giving folks.

Vesa Juvonen is one name I associate with pretty much anything PnP. He’s a Principal Program Manager at Microsoft these days, and he directs many of the PnP efforts in addition to being an exceptionally nice (and resourceful!) guy.

The SharePoint Developer Blog covers PnP topics, and the team regularly summarizes and updates PnP resource material – as well as explains it. Check out this post for additional background and detail.

Cmdlet: Get-PnPGroup

Now that I’ve said all that, let’s get started with looking at the Get-PnPGroup cmdlet that is part of the SharePointPnP PowerShell module. I will assume that you have some skill with PowerShell and have access to a (SharePoint) environment to run the cmdlets successfully. If you’re new to all this, then I would suggest reviewing the Microsoft Docs link I provide in this blog post, as they cover many different topics, including how to get set up to use the SharePoint PnP cmdlets.

In his question/post, Bilal didn’t specify whether he was trying to run the Get-PnPGroup cmdlet against a SharePoint Online (SPO) site or a SharePoint on-premises farm. The operation of the SharePointPnP cmdlets, while fairly consistent and predictable from cmdlet to cmdlet, sometimes varies a bit depending on the version of SharePoint in use (on-premises) or whether SPO is being targeted. In my experience, the exposed APIs and development surfaces went through some enhancement after SharePoint 2013 in specific areas. One such area was data pertaining to site users and their alerts; the data is available in SharePoint 2016 and 2019 (as well as in SPO), but it’s inaccessible in 2013.

Because of this, it is best to review the online documentation for any cmdlet you’re going to use. At a minimum, remember that the documentation is there if you encounter issues or unexpected behavior.
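The documentation is also available right from the shell if that’s more convenient; for example:

# Built-in help for the cmdlet (swap -Detailed for -Online to open the web docs)
Get-Help Get-PnPGroup -Detailed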

If we do this for Get-PnPGroup, we frankly don’t get too much. The online documentation at Microsoft Docs is relatively sparse and just slightly better than auto-generated docs. But we do get a little helpful info:

We can see from the docs that this cmdlet runs against all versions of SharePoint starting with SharePoint 2013. I would therefore expect operations to be generally consistent across versions (and locations) of SharePoint.

A little further down in the documentation for Get-PnPGroup (in Example 1), we find that simply running the cmdlet is said to return all SharePoint groups in a site. Let’s see that in practice.

Running Wild

I fired up a VM-based SharePoint 2019 farm I have to serve as the target for on-prem tests. For SPO, I decided to use my family’s tenant as a test target. Due to time constraints, I didn’t get a chance to run anything against my VM environment, so I’m assuming (dangerous, I know) that on-prem results will match SPO. If they don’t, I’m sure someone will tell me below (in the Comments) …

Going against SPO involves connecting to the tenant and then executing Get-PnPGroup.
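If you want to follow along, the sequence looks something like this (the site URL is a placeholder for your own, and any of the SharePointPnP connection options will do in place of -UseWebLogin):

Connect-PnPOnline -Url "https://yourtenant.sharepoint.com" -UseWebLogin
Get-PnPGroup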

Running Get-PnPGroup returned something, and it’s initially presented to us in a somewhat condensed table format that includes ID, (group) Title, and LoginName.

But there’s definitely more under the hood than is being shown here, and that “under the hood” part is what I suspect might have been causing Bilal some issues when he looked at his results.

We’ve all probably heard it before at some point: PowerShell is an object-oriented scripting language. This means that PowerShell manipulates and works with Microsoft .NET objects behind-the-scenes for most things. What may appear as a scalar value or simple text data on first inspection could be just the tip of the “object iceberg” when it comes to PowerShell.

Going A Bit Deeper

To learn a bit more about what the function actually returns upon execution, I ran the Get-PnPGroup cmdlet again and assigned its return value to a variable I called $group. Performing this variable assignment would allow me to continue working with the function output (i.e., the SharePoint groups) without the need to keep querying my SharePoint environment.
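In PowerShell terms, that’s simply:

$group = Get-PnPGroup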

To display the contents of $group with additional detail, the PowerShell I executed might appear a little cryptic for those who don’t live in PowerShellLand:

$group | fl

There’s some shorthand in play with that last bit of PowerShell, so I’ll spell everything out. First, fl is the shorthand notation for the Format-List cmdlet. I could have just as easily typed …

$group | Format-List

… but that’s more typing! I’m no different than anyone else, and I like to get more done with less when possible.

Next, the pipe (“|”) will be familiar to most PowerShell practitioners, and here it’s used to send the contents of the $group variable to the Format-List cmdlet. The Format-List cmdlet then expands the data piped to it (i.e., the SharePoint groups in $group) and shows all the property values that exist for each SharePoint group.

If you’re not familiar with .NET objects or object-oriented development, I should point out that the SharePoint groups returned and assigned to our $group variable are .NET objects. Knowing this might help your understanding – or maybe not. Try not to worry if you’re not a dev and don’t speak dev. I know that to many admins, devs might as well be speaking jive …

For our purposes today, we’re going to limit our discussion and analysis of objects to just their properties – nothing more. The focus still remains PowerShell.

What Are The Actual Properties Available To Us?

If you’re asking the question just posed, then you’re following along and hopefully making some kind of sense of what I’m sharing.

So, what are the properties that are exposed by each of the SharePoint groups? Looking at the output of the $group variable sent to the Format-List command (shown earlier) gives you an idea, but there’s a much quicker and more reliable way to get the listing of properties.

You may not like what I’m about to say, but it probably won’t surprise you: those properties are documented (for everyone to learn about) in Microsoft Docs. Yes, another documentation reference!

How did I know what to look/search for? If you refer to the end of the reference for the Get-PnPGroup cmdlet, there is a section that describes the “Outputs” from running the cmdlet. That output is only one line of text, and it’s exactly what we need to make the next hop in our hunt for property details:

List<Microsoft.SharePoint.Client.Group>

A List is a .NET collection class, but that’s not important for our purposes. Simply put, you can think of a .NET List as a “bucket” into which we put other objects – including our SharePoint groups. The class/type identified between the “<” and “>” after List specifies the type of each object in the List. In our case, each item in the List is of type Microsoft.SharePoint.Client.Group.

If you search for that class type, you’ll get a reference in your search results that points to a Microsoft Docs link serving as a reference for the SharePoint Group type we’re interested in. And if we look at the “Properties” link of that particular reference, each of the properties that appear in our returned groups are spelled out with additional information – in most cases, at least basic usage information is included.

A quick look at those properties and a review of one of the groups in the $group variable (shown below) should convince you that you’re looking at the right reference.
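If you’d rather interrogate the objects directly instead of (or in addition to) reading the docs, PowerShell itself can confirm the type and enumerate the properties:

# Confirm the underlying .NET type of a group object
$group[0].GetType().FullName

# List every property that the Group type exposes
$group[0] | Get-Member -MemberType Property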

What Do We Do Now?

You might recall that we’re going through this exercise of learning about the output from the Get-PnPGroup cmdlet because Bilal asked the question, “Any idea how to filter?”

Hopefully the output that’s returned from the cmdlet makes some amount of sense, and I’ve convinced you (and Bilal) that it’s not “garbage” but a List collection of .NET objects that are all of the Microsoft.SharePoint.Client.Group type.

At this point, we can leave our discussion of .NET objects behind (for the most part) and transition back to PowerShell proper to talk about filtering. We could do our filtering without leaving .NET, but that wouldn’t be considered the “PowerShell way” of doing it. Just remember, though: there’s almost always more than one way to get the results you need from PowerShell …

Filtering The Results

In the case of my family’s SPO tenant, there are a total of seven (7) SharePoint groups in the main site collection:

Looking at a test case for filtering, I’m going to try to get any group that has “McDonough” in its name.

A SharePoint group’s name is the value of the Title property, and a very straightforward way to filter a collection of objects (which we have identified exists within our $group variable) is through the use of the Where-Object cmdlet.

Let’s set up some PowerShell that should return only the subset of groups that I’m interested in (i.e., those with “McDonough” in the Title). Reviewing the seven groups in my site collection, I note that only three (3) of them contain my last name. So, after filtering, we should have precisely three groups listed.

Preparing the PowerShell …

$group | where-object {$_.Title -like "*McDonough*"}

… and executing this, we get back the filtered results predicted and expected; i.e., three SharePoint groups:

For those that could use a little extra clarification, I will summarize what transpired when I executed that last line of PowerShell.

  1. From our previous Get-PnPGroup operation, we knew that the $group variable contained the seven groups that exist in my site collection.
  2. We piped (“|”) that unfiltered collection of groups to the Where-Object cmdlet. It’s worth pointing out that the cmdlets and most of the other strings/text in PowerShell are case-insensitive (Where-Object, where-object, and WhErE-oBjEcT are all the same from a PowerShell processing perspective).
  3. The curly braces after the where-object cmdlet define the logic that will be processed for each object (i.e., SharePoint group) that is passed to the where-object cmdlet.
  4. Within the curly braces, we indicated that we wanted to filter and keep each group with a Title like “*McDonough*”. This was accomplished with the -like operator (PowerShell has many other operators, too; see the variation after this list). The asterisks before and after “McDonough” are simply wildcards that will match anything with “McDonough” in the Title – regardless of any text or characters appearing before and/or after it.
  5. Also worth noting within the curly braces is the “$_” notation. When iterating through the collection of SharePoint groups, “$_” denotes the current object/group we’re evaluating – each one in turn.
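To illustrate the operator and case-insensitivity points above, here’s a hypothetical variation that returns the same three groups; -match treats its argument as a regular expression and, like -like, is case-insensitive by default:

$group | where-object {$_.Title -match "mcdonough"}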

Round Two

Let’s try another one before pulling the plug (figuratively and literally – it’s close to my bedtime …).

Let’s filter and keep only the groups where the members of the group can also edit the group membership. This is an uncommon scenario, and we might wish to know this information for some potential security tightening.

Looking at the properties available on the Group type, I see the one I’m interested in: AllowMembersEditMembership. It’s a boolean value, and I want back the groups that have a value of true (which is represented as $true in PowerShell) for this property.

$group | where-object {$_.AllowMembersEditMembership -eq $true}

Running the PowerShell just presented, we get only one matching group back:

Frankly, that’s one more group than I originally expected, so I should probably take a closer look in the ol’ family site collection …

Summary

I hope this helped you (and Bilal) understand that there is a method to PowerShell’s madness. We just need to lean on .NET and object-oriented concepts a bit to help us get what we want.

The filtering I demonstrated was pretty basic, and there are numerous ways to take it further and get more specific in your filtering logic/expressions. If you weren’t already comfortable with filtering, I hope you now know that it isn’t really that hard.

If I happened to skip or gloss over something important, please leave me a note in the Comments section below. My goal was to provide a complete-enough picture to build some confidence – so that the next time you need to work with objects and filter them in PowerShell, you’ll feel comfortable doing so.

Have fun PowerShelling!

References And Resources

  1. LinkedIn: Christian Buckley
  2. Podcast History: Microsoft Community Office Hours from 8/18/2020
  3. BuckleyPLANET: Community category and activities
  4. Facebook Group: Office 365 Community
  5. Email Group: OfficeHours@CollabTalk.com
  6. YouTube: Microsoft Community Office Hours playlist
  7. Microsoft Docs: PnP PowerShell Overview
  8. LinkedIn: Vesa Juvonen
  9. Blog: SharePoint Developer Blog
  10. Blog Post: Microsoft 365 & SharePoint Ecosystem (PnP) – July 2020 Update
  11. Microsoft Docs: Get-PnPGroup
  12. Microsoft: What Is .NET Framework?
  13. Microsoft Docs: Format-List
  14. Microsoft Docs: List<T> Class
  15. Microsoft Docs: Group Class
  16. Microsoft Docs: Group Properties
  17. Microsoft Docs: Where-Object
  18. Microsoft Docs: About Comparison Operators

What CDN Usage Does for SharePoint Online (SPO) Performance

If you need the what’s what on CDNs (content delivery networks), this is a bit of quick reading that will get you up to speed with what a CDN is, how to configure your SPO tenant to use a CDN, and the benefits that CDNs can bring.

The (Not Entirely Obvious) TL;DR Answer

Since I’m taking the time to write about the topic, you can safely guess that yes, CDNs make a difference with SPO page operations. In many cases, proper CDN configuration will make a substantial difference in SPO page performance. So enable CDN use NOW!

The Basis For That Answer: Introduction

Knowing that some folks simply want the answer up-front, I hope that I’ve satisfied their curiosity. The rest of this post is dedicated to explaining content delivery networks (CDNs), how they operate, and how you can easily enable them for use within your SharePoint Online (SPO) sites.

Let me first address a misconception I’ve sometimes encountered among SPO administrators and developers (including some MVPs): that CDNs don’t really “do a whole lot” to help site and/or page performance. Sure, usage of a CDN is recommended … but a common misunderstanding is that a CDN is more of a “nice-to-have” than a “need-to-have” for SPO sites. Oftentimes that judgment comes without any real research, knowledge, or testing. Skeptics typically haven’t read the documentation (the “non-RTFM crowd”) and haven’t actually spent any time profiling and troubleshooting the performance of SPO sites. Since I enjoy addressing performance problems and challenges, I’ve been fortunate to experience firsthand the benefits that CDNs can bring. By the end of this post, I hope I’ll have made converts of a CDN skeptic or two.

What Is A CDN?

A CDN is a Content Delivery Network. There are a lot of (good) web resources that describe and illustrate what CDNs are and how they generally operate (like this one and this one), so I’m not going to attempt to “add value” with my own spin. I will simply call attention to a couple of the key characteristics that we really care about in our use of CDNs with SPO.

  1. A CDN, at its core, can be thought of as a system of distributed (typically geographically so) servers for caching and offloading of SPO content. Rather than needing to go to the Microsoft network and data center where your tenant is located in order to fetch certain files from SPO, your browser can instead go to a (geographically) closer CDN server to get those same files.
  2. By virtue of going to a closer CDN server instead of the Microsoft network, the chance that you’ll have a “bigger pipe” with more bandwidth – and less latency/delay – is greater. This usually translates directly to an improvement in performance.
  3. In addition to giving us the opportunity to download certain SPO files faster and with less delay, CDNs can do other things to improve the experience for the SPO files they serve. For instance, CDN servers can pass files back to the browser with cache-control headers that allow browsers to re-serve downloaded files to other users (i.e., to users who haven’t actually downloaded the files), store downloaded files locally (to avoid having to download them again for a period of time), and more.

If you didn’t know about CDNs prior to this post, or didn’t understand how they could help you, I hope you’re beginning to see the possibilities!

The Arrival Of The Office 365 CDN

It wasn’t all that long ago that Microsoft was a bit more “modest” in its use of CDNs. Microsoft certainly made use of them, but prior to the implementation of its own content delivery networks, Microsoft frequently turned to a company called Akamai for CDN support.

When I first started presenting on SharePoint and its built-in caching mechanisms, I often spoke about Akamai and their edge network when talking about BLOB caching and how the max-age cache-control header could be configured and misconfigured. Back then, “Akamai” was basically synonymous with “CDN,” and that’s how many of us thought about the company. They were certainly leading the pack in the CDN service space.

Back then, if you were attempting to download a large file from Microsoft (think DVD images, ISO files, etc.), then there was a good chance that the download link your browser would receive (from Microsoft’s servers) would actually point to an Akamai edge node near your location geographically instead of a Microsoft destination.

Fast forward to today. In addition to utilizing third-party CDNs like those deployed by Akamai, Microsoft has built (and is improving) its own first-party CDNs. There are a couple of benefits to this. First, many concerns stemming from data regulations that prevent third-party housing of your data (yes, even in temporary locations like a CDN) can be largely avoided. In the case of CDNs that Microsoft is running, there is no hand-off to a third party and thus much less practical concern regarding who is housing your data.

Second, with its own CDNs, Microsoft has a lot more latitude and ability to extend the specifics of CDN configuration and operation to its customers. And that’s what it has done with the Office 365 CDN.

Set Up The O365 CDN For Your Tenant’s Use

Now we’re talking! This next part is particularly important, and it’s what drove the creation of this post. It’s also the one bit of information that I promised Scott Stewart at Microsoft that I would try to get “out in the wild” as quickly and as visibly as possible.

So, if you remember nothing else from this post, please remember this:

Set-SPOTenantCdnEnabled -CdnType Public -Enable $true

That is the line of PowerShell that needs to be executed (against your SPO tenant, so you need to have a connection to your tenant established first) to enable transparent CDN support for public files. Run that, and non-sensitive files of public origin from SPO will begin getting cached in a CDN and served from there.

The line of PowerShell I shared goes through the SharePoint Online Management Shell – something most organizations using SPO (and their admins in particular) have installed somewhere.
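Putting the pieces together, a minimal end-to-end sketch with the SPO management shell might look like the following (the admin URL is a placeholder for your own tenant):

# Connect to the tenant admin endpoint first ...
Connect-SPOService -Url "https://yourtenant-admin.sharepoint.com"

# ... then enable CDN support for files of public origin
Set-SPOTenantCdnEnabled -CdnType Public -Enable $true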

It is also possible to enable CDN support with the PnP PowerShell module, if that’s your preference, by executing the following PowerShell:

Set-PnPTenantCdnEnabled -CdnType Public -Enable $true

No matter how you enable the CDN, it should be noted that the PowerShell I’ve elected to share (above) enables CDN usage for files of public origin only. It is easy enough to alter the parameters in our PowerShell commands to cover all files, public and private, by switching -CdnType to Both (with the SPO management shell) or by executing another line of PowerShell after the first that swaps -CdnType Public for -CdnType Private (in the case of the SharePointPnP PowerShell module).

The reason I chose only public enablement is because your organization may be bound by restrictions or policies that prohibit or limit CDN use with private files. This is discussed a bit in the O365 CDN post originally cited, but it’s best to do your own research.

Enabling CDN support for public files, however, is considered to be safe in general.
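If you’ve done your research and want private origins covered as well, the adjustments are small, and you can verify the tenant’s current state afterward:

# SPO management shell: cover both public and private origins in one shot
Set-SPOTenantCdnEnabled -CdnType Both -Enable $true

# SharePointPnP equivalent for private origins only
Set-PnPTenantCdnEnabled -CdnType Private -Enable $true

# Check what's currently enabled
Get-SPOTenantCdnEnabled -CdnType Public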

What Sort Of Improvements Can I Potentially See?

I’ve got a series of images that I use to illustrate performance improvements when files are served via a CDN instead of an SPO list/library, and those images come from Microsoft. Thankfully, MS makes them (and a discussion of them) freely available, and they are presented at this link for your reading and reference.

The example called out in the link I just shared involves offloading the jQuery JavaScript library from SPO to a CDN. The real-world numbers that were captured reduced fetch-and-load time from just over 1.5 seconds to less than half a second (<500ms). That is no small change … and that’s for just one file!

The Other (Secret) Benefit Of CDNs

I guess “Secret” is technically the wrong choice of term here. A more accurate description would be to say that I seldom hear or see anyone talking about another CDN benefit I consider to be very important and significant. That benefit, quite simply, involves improving file fetching and retrieval parallelism when a web page and associated assets (CSS, JS, images, etc.) are requested for download by your browser. In plain English: CDNs typically improve file downloading by allowing the browser to issue a greater number of concurrent file requests.

To help with this concept and its explanation, I’ve created a couple of diagrams that I’ll share with you. The first one appears below, and it is meant to represent the series of steps a browser might execute when retrieving everything needed to show a (SharePoint/SPO) page. As we’ve talked about, what is commonly thought of as a single page in a SharePoint site is, more accurately, a page containing all sorts of dependent assets: image files, JavaScript files, cascading style sheets, and a whole bunch more.

A request for a SharePoint page housed at http://www.thesite.com might start out with one request, but your browser is going to need all of the files referenced within the context of that page (default.aspx, in our case) to render correctly. See below:

To get what’s needed to successfully render the example SharePoint page without CDN support, we follow the numbers:

  1. Your browser issues an HTTP request for the page you want to load – http://www.thesite.com/default.aspx in the case of example above.
  2. That page request goes to (and is served by) the web server/front-end that can return the page.
  3. Our page needs other files to render properly, like styling.css, logo.png, functions.js, and more. These get queued-up and returned according to some rules – more on this in a minute.
  4. In step four (4), files get returned to the browser. Notice I say “no more than six at a time” in the illustration. That’s important and will come into play once we start introducing CDN support to the page/site.

You might be wondering, “Only six files at a time? Really? Why the limitation?” Well, I should start by saying the limit is probably six … maybe a bit more, perhaps a bit less. The specific number depends on the browser you’re using. There was a good summary answer on StackOverflow to a related (but slightly different) question that provides some additional discussion.

Section eight (8) of the HTTP specification (RFC 2616) specifically addresses HTTP connections, how they should be handled, how proxies should be negotiated, etc. For our purposes, the practical implementation of the HTTP specification by modern browsers generally limits the number of concurrent/active connections a browser can have to any given host or URL to six (6).

Notice how I worded that last sentence. Since you folks are smart cookies, I’ll bet you’re already thinking “Wait a minute. CDNs typically have different URLs/hosts from the sites they cache” and you’re imagining what happens (or can happen) when a new source (i.e., different host/URL) is introduced.

This illustration roughly outlines the fetch process when a CDN is involved:

Steps one (1) through four (4) of the fetch process with a CDN are basically still the same as was illustrated without a CDN a bit earlier. When the page is served-up in step three (3) and returned in step four (4), though, there are some differences and additional activity taking place:

  1. Since at least one CDN is in use for the SPO environment, some of the resource links within the page that is returned will have different URLs. For instance, whereas styling.css was previously served from the SPO environment in the non-CDN example, it might now be referenced through the CDN host shown as http://cdn.source.com/styling.css
  2. The requested file is retrieved, and …
  3. Files come back to the client browser from the CDN at the same time they’re being passed-back from the SPO environment.

Since we’re dealing with two different URLs/hosts in our CDN example (http://www.thesite.com and cdn.source.com), our original six (6) file concurrent download limitation transforms into a 12 file limitation (two hosts serving six files a time, 2 x 6 = 12).

Whether or not the CDN-based process is ultimately faster than without a CDN depends on a great many factors: your Internet bandwidth, the performance of your computer, the complexity/structure of the page being served-up, and more. In the majority of cases, though, at least some performance improvement is observed. In many cases, the improvement can be quite substantial (as referenced and discussed earlier).

Additional Note: 8/24/2020

In a bit of laziness on my part, I didn’t do a prior article search before writing this post. As fate would have it, Bob German (a friend and fellow MVP – well, he was an MVP prior to joining Microsoft a couple of years back) wrote a great post at the end of 2017 that I became aware of this morning through a series of tweets. Bob’s post is called “Choosing a CDN for SharePoint Client Solutions” and is a bit more developer-oriented. That being said, it’s a fantastic post with good information that is a great additional read if you’re looking for more material and/or a slightly different perspective. Nice work, Bob!

Post Update: 8/26/2020

Anders Rask was kind enough to point out that the PnP PowerShell line I originally had listed wasn’t, in fact, PnP PowerShell. That specific line of PowerShell has since been updated to reflect the correct way of altering a tenant’s CDN with the PnP PowerShell cmdlets. Many thanks for the catch, Anders!

Conclusion

So, to sum-up: enable CDN use within your SPO tenant. The benefits are compelling!

References

  1. Microsoft Docs: Use The Office 365 Content Delivery Network (CDN) With SharePoint Online
  2. Imperva: What Is A CDN?
  3. Akamai: What Does CDN Stand For?
  4. MDN Web Docs: Cache-Control
  5. Company: Akamai
  6. Presentations: Caching-In For SharePoint Performance
  7. Akamai: Download Delivery
  8. Microsoft Docs: Configure Cache Settings For A Web Application In SharePoint Server
  9. Blog Post: Do You Know What’s Going To Happen When You Enable The SharePoint BLOB Cache?
  10. LinkedIn: Scott Stewart
  11. Microsoft Docs: Enabling O365 CDN support for public origin files.
  12. Microsoft Docs: Get Started With SharePoint Online Management Shell
  13. Microsoft Docs: PnP PowerShell Overview
  14. Microsoft Docs: Set Up And Configure The Office 365 CDN By Using PnP PowerShell
  15. Microsoft Docs: What Performance Gains Does A CDN Provide?
  16. Push Technologies: Browser Connection Limitations
  17. StackOverflow: How many maximum number of simultaneous Chrome connections/threads I can start through Selenium WebDriver?
  18. W3.org: RFC 2616, Section 8: Connection

Quick Tips for Managing the SharePoint 2010 Office Web Applications Cache

I presented remotely to the Boston Area SharePoint User Group (BASPUG) tonight (7/13/2016), and I referenced an article that I had written that is no longer available online. This post originally appeared as a “SharePoint Smarts” article from Idera. Idera is out of the SharePoint business nowadays, but the information I shared in that article is still relevant to those who use SharePoint 2010. So if you have a SharePoint 2010 environment and use the Office Web Apps, this post (and more specifically, the scripts contained within) is for you.

One of the hotly anticipated items in SharePoint 2010’s feature set is the introduction of the Microsoft Office Web Applications, or “Office Web Apps” for short. The release of the Office Web Apps opens up new possibilities for those who work with documents and files that are tied to Microsoft Word and other applications in the Microsoft Office family.

What Are the Office Web Apps?

In prior versions of SharePoint, viewing and editing Office documents that existed in SharePoint document libraries normally required a client computer possessing the Microsoft Office suite of applications. If you wanted to view or edit a Word document that existed in SharePoint, for example, you needed Microsoft Word (or an equivalent application) installed on your computer.

That situation changes with the arrival of the Office Web Apps. When a SharePoint 2010 farm is properly set up and configured with the Office Web Apps, it becomes possible to view and edit several different Office document types directly from within a browser as shown in Figure 1 below.

Figure 1: Browser-based editing of a Microsoft Word document

The Office Web Apps provide browser-based viewing and editing support for Microsoft Excel, OneNote, PowerPoint, and Word document types, and this support extends to more than just Internet Explorer. Firefox 3.x, Safari 4.x, and Google Chrome browser types are also supported for viewing and editing – making the Office Web Apps an enabler of cross-platform collaboration that centers on Office documents.

A Word about the Plumbing

As you might imagine, browser-based rendering and editing of Office documents involves a number of complex processes that engage a variety of front-end, middle-tier, and back-end components. The front-end and middle-tier tasks that are tied to document viewing and editing are handled primarily by a new set of service applications that appear when the Office Web Apps are installed. These service applications (and their associated pages, handlers, and worker processes) take care of the business of document conversion, load-balancing, and rendering for browser consumption.

Document conversion and rendering typically generate a combination of images, HTML, JavaScript, and XAML (or eXtensible Application Markup Language) that are sent to consuming browsers. The creation of these document resources is an expensive process, both in terms of CPU cycles and storage. To improve performance levels, it makes sense to generate these document resources only as needed and reuse them whenever possible. That’s where the Office Web Apps cache comes in.

The Office Web Apps cache is the back-end store that is responsible for housing images, HTML, JavaScript, and XAML resources once they have been created for a document. Each time a document is converted into a set of these resources, the resources are stored in the Office Web Apps cache. When a request for a document comes into SharePoint, the cache is checked to see if the document had been previously requested and rendered. If it had, and the cached document resources are up-to-date for the document, then the document request is served from the cache instead of engaging the Office Web Apps to convert and re-render it. Serving document resources from the Office Web Apps cache can yield significant performance improvements over scenarios where no cache is employed.

Quick side note before going too far: the Office Web Apps cache is only employed for Word and PowerPoint document types. It is not used for OneNote or Excel documents.

Inside the Office Web Apps Cache

The Office Web Apps cache takes the form of a single site collection for each Web application within a SharePoint farm. When the Office Web Apps are installed and configured in a SharePoint environment, a couple of new timer jobs are installed and run regularly within the farm. One of those timer jobs, the Office Web Apps Cache Creation timer job, ensures that each Web application where the Office Web Apps are running has a site collection like the one shown below in Figure 2.

Figure 2: The Office_Viewing_Service_Cache site collection

The Office_Viewing_Service_Cache site collection is a standard Team Site, and it is the location where resources are stored following the conversion and rendering of either a Word or PowerPoint document by the Office Web Apps.

The Team Site can be accessed just like any other SharePoint Team Site, and a glimpse inside the All Documents library (showing a number of document resources) appears below in Figure 3.

Figure 3: All Documents library in an Office Web Apps cache site collection
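
If you'd rather locate the cache site collections with PowerShell than click around in the browser, a sketch like the following should do it. Note that filtering on the Office_Viewing_Service_Cache URL segment is just the naming convention observed above, not an official API:

[code language="powershell"]
# Load the SharePoint cmdlets if they aren't already available.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# List every Office Web Apps cache site collection in the farm along with
# the amount of storage each one is currently consuming.
Get-SPSite -Limit All | Where-Object { $_.Url -like "*Office_Viewing_Service_Cache*" } |
    Select-Object Url, @{Name="StorageMB"; Expression={[Math]::Round($_.Usage.Storage / 1MB, 1)}}
[/code]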

Managing the Cache

For such a complex system, the Office Web Apps components do a pretty good job of maintaining themselves without external intervention. This extends to the site collections that are used by Office Web Apps for caching purposes, as well. For example, the Office Web Apps Expiration timer job that is installed with the Office Web Apps removes old document resources from cache site collections once they’ve hit a certain age. The timer job also ensures that each of the site collections responsible for caching has adequate space to serve its purpose.
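
If you'd like to see when those timer jobs last ran in your own farm, a quick check along these lines should surface them. The display name filter below is an assumption on my part; adjust it to match the job names you see in your environment:

[code language="powershell"]
# Load the SharePoint cmdlets if they aren't already available.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Review the Office Web Apps timer jobs like any other farm timer job.
Get-SPTimerJob | Where-Object { $_.DisplayName -like "*Office Web Apps*" } |
    Select-Object DisplayName, LastRunTime | Format-Table -AutoSize
[/code]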

This doesn’t mean that there aren’t opportunities for tuning and maintenance, though. In fact, there are a couple of things that every administrator should do and review when it comes to the Office Web Apps cache.

Tip #1: Relocate the Cache to a New Database

By default, the Office Web Apps Cache Creation timer job creates an Office_Viewing_Service_Cache site collection in a content database that is collocated with one or more of the "real" site collections within each of your content Web applications. Since the cache site collection is allowed to grow to a beefy 100GB by default, it makes sense to relocate the cache site collection to its own (new) content database. By relocating the cache site collection to its own content database, it becomes easy to exclude it from other maintenance such as backups.

Relocating the cache site collection is pretty straightforward, and it can be accomplished easily with the following RelocateOwaCache.ps1 PowerShell script. Simply save the script, execute it, and supply the URL of a Web application within your farm where the Office Web Apps are running. The script will take care of creating a new content database within the Web application, and it will then move the Web application's Office Web Apps cache site collection to the newly created content database.

[code language="powershell"]
<#
.SYNOPSIS
RelocateOwaCache.ps1
.DESCRIPTION
Relocates the Office Web Apps cache for a specified Web application to a new content database that is created by the script
.NOTES
Author: Sean McDonough
Last Revision: 07-June-2011
.PARAMETER targetUrl
A Web application where Office Web Apps are in use
.EXAMPLE
RelocateOwaCache.ps1 http://www.TargetWebApplication.com
#>
param
(
[string]$targetUrl = "$(Read-Host 'Target Web application URL [e.g. http://hostname]')"
)

function RelocateCache($targetUrl)
{
# Ensure that the SharePoint cmdlets are loaded before continuing
$spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
if ($spCmdlets -eq $Null)
{ Add-PSSnapin Microsoft.SharePoint.PowerShell }

# Get the name of the current database where the cache is located; it
# will serve as the basis for a new content database name.
$cacheSite = Get-SPOfficeWebAppsCache -WebApplication $targetUrl -ErrorAction stop
$newDbName = $cacheSite.ContentDatabase.Name + "_OWACache"

# Create a new content database and relocate the cache to it. Make sure the
# user knows what’s happening each step of the way.
Write-Host "- creating a new content database …"
$cacheDb = New-SPContentDatabase -Name $newDbName -WebApplication $targetUrl -ErrorAction stop
Write-Host "- moving the Office Web Apps cache …"
Move-SPSite $cacheSite -DestinationDatabase $cacheDb -Confirm:$false -ErrorAction stop
Write-Host "- performing required IISRESET …"
iisreset | Out-Null

# Let the user know where the cache is now located
Write-Host "Cache successfully relocated to the '$newDbName' database."

# Abort script processing in the event an exception occurs.
trap
{
Write-Warning "`n*** Script execution aborting. See below for problem encountered during execution. ***"
$_.Exception.Message
break
}
}

# Launch script
RelocateCache $targetUrl
[/code]

Tip #2: Review Size and Expiration Settings

When an Office_Viewing_Service_Cache site collection is provisioned within a Web application by the Office Web Apps Cache Creation timer job, it is initially configured to hold cached document resources for 30 days. As mentioned in Tip #1, a cache site collection can also grow to a maximum of 100GB by default.

Whether or not these default settings are appropriate for a Web application depends primarily upon the nature of the site collections housed within the Web application. When site collections contain primarily static documents or content that changes infrequently, it makes sense to allow the cache to grow larger and expire content less often than normal. This maximizes the benefit obtained from caching since document content turns over less frequently.

On the other hand, site collections that experience frequent document turnover and heavy collaboration traffic tend to benefit very little from large cache sizes and long expiration periods. In site collections of this nature, cached content tends to become stale quickly. Little benefit is derived from holding onto document resources that may only be good for days or even hours, so maximum cache size is reduced and expiration periods are shortened.
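
To make that concrete, here's a hypothetical adjustment for a collaboration-heavy Web application using the same Set-SPOfficeWebAppsCache cmdlet that appears in the script further down. The URL and values are illustrative only; your own numbers should come from testing:

[code language="powershell"]
# Load the SharePoint cmdlets if they aren't already available.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Hypothetical tuning for a collaboration-heavy Web application: a smaller
# 10GB cache (with an 8GB warning) and a shorter 7-day expiration period.
Set-SPOfficeWebAppsCache -WebApplication http://collab -ExpirationPeriodInDays 7 `
    -MaxSizeInBytes 10GB -WarningSizeInBytes 8GB
[/code]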

Tip #3: Give Yourself Some Warning

Since each Office Web Apps cache is a Team Site like any other site collection, you can leverage standard SharePoint site collection features and capabilities to help you out. One such mechanism that can be of assistance is the ability to have an e-mail warning sent to site collection owners once a site collection's size hits a predefined threshold. In the case of the Office Web Apps cache, such a warning could be a cue to increase the maximum size of the cache site collection or perhaps lower the expiration period for document resources housed within the site collection.

Like the maximum cache size setting described in Tip #2, the ability to send e-mail warnings once the cache reaches a threshold is actually tied to SharePoint’s site collection quota capabilities. The maximum size of the cache site collection is handled as a storage quota, and the warning threshold maps directly to the quota’s warning threshold as shown below in Figure 4. In the case of Figure 4, a maximum cache size of 50GB is in effect for the cache site collection, and the e-mail warning threshold is set for 25GB.

Figure 4: Quota settings for an Office Web Apps cache site collection

Knobs and Dials

Tips #2 and #3 discussed some of the more straightforward Office Web Apps cache settings that are available to you, but you might be wondering how you actually go about changing them.

The AdjustOwaCache.ps1 PowerShell script that appears below provides you with an easy way to review and change the settings discussed. Simply save the script, execute it, and supply the URL of the Web application containing the Office Web Apps cache you’d like to adjust. The script will show you the cache’s current settings and give you the opportunity to modify them.

[code language="powershell"]
<#
.SYNOPSIS
AdjustOwaCache.ps1
.DESCRIPTION
Dumps several common OWA cache settings to the console for a selected Web application and provides a mechanism for altering those values
.NOTES
Author: Sean McDonough
Last Revision: 08-June-2011
.PARAMETER targetUrl
A Web application where Office Web Apps are in use
.EXAMPLE
AdjustOwaCache.ps1 http://www.TargetWebApplication.com
#>
param
(
[string]$targetUrl = "$(Read-Host 'Target Web application URL [e.g. http://hostname]')"
)

function AdjustCache($targetUrl)
{
# Ensure that the SharePoint cmdlets are loaded before continuing
$spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
if ($spCmdlets -eq $Null)
{ Add-PSSnapin Microsoft.SharePoint.PowerShell }

# Create an easy converter for GB to bytes
$GBtoBytes = 1024 * 1024 * 1024

# Get a reference to the cache site collection and extract the values we’ll be
# working with and (potentially) altering.
$cacheSite = Get-SPOfficeWebAppsCache -WebApplication $targetUrl -ErrorAction stop
$wacSize = $cacheSite.Quota.StorageMaximumLevel / $GBtoBytes
$wacWarn = $cacheSite.Quota.StorageWarningLevel / $GBtoBytes
$wacExpire = 30
if ($cacheSite.RootWeb.Properties.ContainsKey("waccacheexpirationperiod"))
{ $wacExpire = $cacheSite.RootWeb.Properties["waccacheexpirationperiod"] }
Write-Host "Current OWA cache values for '$targetUrl'"
Write-Host "- Maximum Cache Size (GB): $wacSize"
Write-Host "- Warning Threshold (GB): $wacWarn"
Write-Host "- Expiration Period (Days): $wacExpire"

# Give the user the option to make changes.
$yesOrNo = Read-Host "Would you like to change one or more values? [y/n]"
if ($yesOrNo -eq "y")
{
[Int64]$newWacSize = Read-Host "- Maximum Cache Size (GB)"
Write-Host "- Warning Threshold (GB)"
[Int64]$newWacWarn = Read-Host " (supply 0 for no warning)"
[int]$newWacExpire = Read-Host "- Expiration Period (Days)"

# Convert GB values to bytes and set the cache
$newWacSize = ($newWacSize * $GBtoBytes)
$newWacWarn = ($newWacWarn * $GBtoBytes)
Set-SPOfficeWebAppsCache -WebApplication $targetUrl -ExpirationPeriodInDays $newWacExpire -MaxSizeInBytes $newWacSize -WarningSizeInBytes $newWacWarn -ErrorAction stop
}

# Abort script processing in the event an exception occurs.
trap
{
Write-Warning "`n*** Script execution aborting. See below for problem encountered during execution. ***"
$_.Exception.Message
break
}
}

# Launch script
AdjustCache $targetUrl
[/code]

Conclusion

The Office Web Apps are a powerful addition to SharePoint 2010 and pave the way for greater collaboration on Office documents without the need for the Microsoft Office suite of client applications. The Office Web Apps cache is an important part of the larger Office Web Apps equation, and the cache is generally pretty good about taking care of itself. As shown in this article, though, it is still a good idea to relocate the cache from its default location. At the same time, a little bit of tuning and e-mail alerting can go a long way towards ensuring that the cache operates optimally for you in your environment.

Is a Higher SharePoint Backup Thread Count Better?

Many administrators have noted that SharePoint 2010 allows them to tune the number of threads that can be used for farm backup and restore operations, but very few have played with the settings. In this post, I share some results I compiled while testing the settings in my own environments. I also share the PowerShell script I assembled for my testing so you can tune the backup and restore thread settings in your own SharePoint farm.

Scalability in the hardware and software space is all about parallel computing nowadays. Consider our modern hardware: it used to be that all we really cared about was how fast our CPU could run ("how many GHz?"). Now, we care more about how many cores our CPU has, whether or not those cores support Hyper-threading, how many memory channels our CPU has available to it, etc. Scale-out beats scale-up.

The same is largely true in the software space. Most IT folks learned some time ago that "multithreading" and "higher performance" tended to go hand-in-hand or were at least associated in some way. Multiple threads of execution meant better scheduling of limited processor resources and fewer chances that one long-running operation would bottleneck an entire application.

Configuring SharePoint 2010 Farm Backup and Restore

When I first saw the following thread settings in the "Configure Backup Settings" section of SharePoint 2010's Central Administration site, it brought a big grin to my face:

Thread Configuration

In SharePoint 2007 and earlier, administrators had no real levers to pull to try and tune the performance of farm backup and restore operations. This obviously changed with SharePoint 2010. We were basically being handed a way to adjust those processes as we saw fit – for better or worse.

Strangely enough, though, I never really took the time to explore the impact of those settings in my SharePoint environments. I always left the number of assigned threads for backup and restore operations at three. I would have liked to mess around with the values, but something else was always more important in the grand scheme of things.

Why Now?

I’ve been working on a new "backup tips and tricks" whitepaper, and I found myself looking for backup and restore concerns within the SharePoint platform that I may not have given much attention to in the past. It didn’t take much wading through Central Administration before I once again found myself looking at thread counts for backup and restore operations.

Doing a little bit of Internet (background) research confirmed what I had suspected: no one else had really spent any time on the topic either. In fact, the only "fresh" and non-copyright-infringing material I found came from a Microsoft TechNet post titled Backup and recovery best practices (SharePoint Server 2010) … and to tell you the truth, the following paragraph from the section titled "Configure SharePoint settings for better backup or restore performance" really bugged me:

If you are using the Backup-SPFarm cmdlet, you can use the BackupThreads parameter to specify how many threads SharePoint Server 2010 will use during the backup process. The more threads you specify, the more resources that backup operation will take, but the faster that it will finish, if sufficient resources are available. However, each thread is reported individually in the log files, so using fewer threads makes interpreting the log files easier. By default, three threads are used. The maximum number of threads available is 10.

Without an understanding of how multithreading (in general) and SharePoint backup (specifically) work, this could easily be interpreted as follows:

The greater the number of threads you assign, the faster your backups will complete.

I realize that my summary is an oversimplification, but I believe that many administrators see the TechNet paragraph as I summarized it. And that concerns me.

I’ve always told people that increasing the backup thread count could yield better performance, but any adjustments would need to be tested in the target farm where they are to be implemented. Realistically speaking, there are several participants and a lot of moving parts in any SharePoint farm backup. Besides the SharePoint server where the backup operation is being coordinated, there is the performance of one or more SQL Servers to consider. The capabilities and restrictions of the backup destination location (typically a UNC file share) also need to be factored-in since that destination is being written to by both the SharePoint Server and one or more SQL Servers.

Setting the number of backup threads to 10 on a SharePoint Server of infinite capability and resources doesn’t guarantee a fast backup, because the farm might have a slow SQL Server, a less-capable backup destination location, a slow or congested network, or a host of other complicating factors.
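
For reference, the thread count is simply a parameter on the Backup-SPFarm cmdlet. A one-off full farm backup with an explicit thread count looks something like this (the destination path is a placeholder):

[sourcecode language="powershell"]
# Load the SharePoint cmdlets if they aren't already available.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Full farm backup using five backup threads.
Backup-SPFarm -BackupMethod Full -Directory \\FileShare\Backups -BackupThreads 5
[/sourcecode]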

Oh Yeah? Prove It.

Of course, all of this is just a bunch of hand-waving without proof. So, the scientist in me (yeah, I actually used to be a chemist) decided to take over and devise a series of simple tests to see if there is any real weight to the arguments I’ve been making.

I began with the hypothesis that the easiest and most visible way to gauge the performance of a farm backup operation is to measure how long a backup takes to run; e.g., a farm backup that takes 10 minutes to run is faster than a backup that takes 20 minutes to run if farm content, hardware, configuration, and other factors remain constant. Since SharePoint 2010 provides the ability to specify anywhere from one to 10 backup threads, running a series of backups where the only variable is backup thread count should determine if greater or fewer backup threads yield better performance.

You might recall that I also mentioned that farm topology is a factor in the overall backup equation. As part of my experiment, I decided to run the tests on two different farms I have available to me. General descriptions for each farm:

  • Single-Server Farm: my single server farm environment is a VM running on my laptop. The VM houses SharePoint, SQL Server, and the backup location being targeted. The laptop hardware is a Core-i7 quad-core processor, and the underlying storage for the VM is a solid-state drive (SSD). Hardware bottlenecks should be minimized, and network latency isn’t a factor since backup operations are conducted against a local drive within the VM.
  • Multi-Server Farm: my multi-server environment is the "production" environment on my home network. It consists of a SharePoint Server VM running on a Hyper-V host that also hosts other VMs. The SQL Server instance backing the farm is a non-virtualized SQL Server housing all of the SharePoint databases as well as a few databases for other applications. The backup destination location is a virtualized file server with a pass-through drive array (eSATA with RAID-5). Overall hardware, in this case, is "okay" but obviously not dedicated purely to SharePoint. In addition, network latency and bandwidth (GbE) are also in-play as potential sources of impact.

These two environments have pretty different overall topologies, and it was my hope that I’d see some effect on the performance numbers as a result.

The Script

To run the tests reproducibly, I needed a PowerShell script. So, I put the following script together while I had a bit of free time one night. Feel free to pluck this out to use for testing in your SharePoint environment, as well.

[sourcecode language="powershell"]
<#
.SYNOPSIS
TestBackupThreads.ps1
.DESCRIPTION
This script is used to conduct and time a series of backups using different thread counts.
The output can then be used to make an educated decision on the number of backup threads to
assign for use in farm-level backups.
.NOTES
Author: Sean McDonough
Last Revision: 25-July-2012
.PARAMETER TestLocation
A UNC path to a location that can be used to create test backup sets
.EXAMPLE
TestBackupThreads \\FileShare\TestLocation
#>
param
(
[string]$TestLocation = "$(Read-Host 'UNC path to test backup location [e.g. \\FileShare\TestLocation]')"
)

function TestThreads($backupLocation)
{
# Ensure that the SharePoint cmdlets are loaded before continuing
$spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
if ($spCmdlets -eq $Null)
{ Add-PSSnapin Microsoft.SharePoint.PowerShell }

# Setup some variables we’ll need for execution.
$threadTimes = @{} # Hash table to hold timing results
$backupItems = Join-Path $backupLocation "spbr*" # Used to delete temp backup files

# We need to execute a full farm backup for each thread count 1 through 10
Clear-Host
Write-Host "`nBackup thread count testing process beginning."
for ($threads = 1; $threads -lt 11; $threads++)
{
# Clean out any backup contents from the test location
Remove-Item $backupItems -recurse

# Grab the starting date/time (for later comparison), kick-off a farm backup, and then
# grab the stop date/time.
Write-Host "`nInitiating a backup with $threads thread(s) …"
$startPoint = Get-Date
Backup-SPFarm -BackupMethod Full -Directory $backupLocation -BackupThreads $threads
$stopPoint = Get-Date

# Store and report results
$keyName = "Backup with {0} thread(s)" -f $threads
$elapsedSeconds = "{0:N0}" -f ($stopPoint - $startPoint).TotalSeconds
$threadTimes[$keyName] = $elapsedSeconds
Write-Host "Backup with $threads thread(s) complete"
Write-Host ("- time to complete (in seconds): {0}" -f $elapsedSeconds)
}

# Do a final sweep of the test backup location to clean out backup items
Remove-Item $backupItems -recurse

# Dump the results sorted in order of quickest to longest
Write-Host "`nBackup thread count testing process complete."
$threadTimes.GetEnumerator() | Sort-Object Value

# Abort script processing in the event an exception occurs.
trap
{
Write-Warning "`n*** Script execution aborting. See below for problem encountered during execution. ***"
$_.Exception.Message
break
}
}

# Launch script
TestThreads $TestLocation
[/sourcecode]

The script is fairly straightforward in what it does. You supply a TestLocation parameter to specify where farm backup test data should be written to, and the script will run a series of full farm backups using the supplied location as the backup destination. The script starts with a full backup using one backup thread; at the end of each full farm backup, the script notes how long the backup took (in seconds) and cleans up the contents of the TestLocation folder. The number of backup threads is then incremented, and the next test is run. When the script has completed running all backup tests, it sorts the results from "quickest backup" (i.e., the backup thread count requiring the least amount of time) to the slowest backup.
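
If you want to hold onto the output from several passes for comparison, one simple approach is to redirect each run's sorted summary to a time-stamped file. The progress messages are written to the console with Write-Host, so only the final sorted results land in the file; the file naming below is just my convention:

[sourcecode language="powershell"]
# Capture the sorted thread-count results from one test pass to a file.
$stamp = Get-Date -Format "yyyyMMdd-HHmmss"
.\TestBackupThreads.ps1 -TestLocation \\FileShare\TestLocation |
    Out-File ".\ThreadTest-$stamp.txt"
[/sourcecode]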

Test Results

I ran a series of three tests for each of the aforementioned environments for a total of six test runs. Although there’s still quite a bit of variability between individual results within a backup thread series, some trends did appear to emerge.

Single-Server Farm

Backup Times for the Single-Server Environment

With the single-server environment, increasing the number of backup threads did appear to have a directional impact on performance. A single backup thread proved to be the slowest option for the farm backup, and "greater than one" thread resulted in better performance.

If you look at the average values, though, there wasn’t a tremendous difference between the slowest thread count (410 seconds for one thread) and the fastest (388 seconds for 10 threads). We’re only talking about a 5% to 6% difference overall. To truly find the optimum number of backup threads in an environment like this would require more than three test runs to account for standard deviation and establish significance.

Oh, and for those who might be wondering: I’m sure I introduced some of my own variability into the results. Although I didn’t do anything processor- or disk-intensive during the test runs, I didn’t go out of my way to minimize the impact of services, background operations, etc. To repeat: more testing (with better controls) would be needed for truly conclusive results. The only thing this particular set of tests started to show is that multithreading seemed to improve backup performance.

Multi-Server Farm

Things got quite a bit more interesting (to me) when I switched over to multi-server farm testing.

Backup Times for the Multi-Server Environment

In the multi-server environment, the average for using just one backup thread (1413 seconds) appeared to be significantly faster than the next best option (1747 seconds for seven backup threads) – in the neighborhood of 20% or so faster. Just like the single-server results, additional trials would be needed to completely validate the observations, but the results are less ambiguous (given the relatively greater precision of the samples) than with the single-server runs.

Do you find this surprising? Given my multi-server environment and what I know about it, I can’t really say that I was caught flat-footed by the results. Going into the tests, my hypothesis was that my backup destination location would likely be the "weak link" in my overall farm and backup topology. The SharePoint Server was doing well, the SQL Server was relatively robust … but all of that backup activity was hard on my (virtualized) file server. Multiple servers trying to write to the backup location were swamping it and the network, and adding additional backup threads to the mix didn’t end up helping or improving the overall backup process.

The Take-Away

At the end of the day, I recognize that these tests of mine didn’t prove anything conclusively. Frankly, conclusive proof wasn’t my goal. The intent of these experiments wasn’t to say "more threads are better" or "more threads are worse."

The only point I’m making (I hope) by sharing these results is this: until you run some real tests of your own in your SharePoint environment, you really don’t know where your backup thread sweet spot is. You can try to guess it, but it’s just a guess. And guessing is really no better than simply leaving the backup thread count set to its default value of three.

References and Resources

  1. Wikipedia: Parallel Computing
  2. Wikipedia: Hyper-threading
  3. Wikipedia: Thread (computing) and Multithreading
  4. TechNet: Backup and recovery best practices (SharePoint Server 2010)

Finding a GUID in a SharePoint Haystack

In this post, I share a PowerShell script that I recently wrote to help out a friend. The script allows you to search and identify content items in SharePoint site collections by object ID (GUID). The script isn’t something you’d probably use every day, but it might be handy to keep in the script library “just in case.”

Here’s another blog post to file in the "I’ll probably never need it, but you never know" bucket of things you’ve seen from me.

Admittedly, I don’t get to spend as much time as I’d like playing with PowerShell and assembling scripts. So when the opportunity to whip up a "quick hit" script comes along, I usually jump at it.

The Situation

I feel very fortunate to have made so many friends in the SharePoint community over the last several years. One friend who has been with me since the beginning (i.e., since my first presentation at the original SharePoint Saturday Ozarks) is Kirk Talbot. Kirk has become something of a "regular" on the SharePoint Saturday circuit, and many of you may have seen him at a SharePoint Saturday event someplace in the continental United States. To tell you the truth, I’ve seen Kirk as far north as Michigan and as far south as New Orleans. Yes, he really gets around.

Kirk and I keep up fairly regular correspondence, and he recently found himself in a situation where he needed to determine which objects (in a SharePoint site collection) were associated with a handful of GUIDs. Put a different way: Kirk had a GUID (for example, 89b66b71-afc8-463f-b5ed-9770168996a6) and wanted to know – was it a web? A list? A list item? And what was the identity of the item?

PowerShell to the Rescue

I pointed Kirk to a script I had previously written (in my Finding Duplicate GUIDs in Your SharePoint Site Collection post) and indicated that it could probably be adapted for his purpose. Kirk was up to the challenge, but like so many other SharePoint administrators, he was short on time.

I happened to find myself with a bit of free time in the last week and was due to run into Kirk at SharePoint Saturday Louisville last weekend, so I figured "what the heck?" I took a crack at modifying the script I had written earlier so that it might address Kirk’s need. By the time I was done, I had basically thrown out my original script and started over. So much for following my own advice.

The Script

The PowerShell script that follows is relatively straightforward in its operation. You supply it with a site collection URL and a target object GUID. The script then searches through the webs, lists/libraries, and list items of the site collection for an object with an ID that matches the GUID specified. If it finds a match, it reports some information about the matching object.

A sample run of the script appears below. In the case of this example, a list item match was found in the target site collection for the supplied GUID.

Sample Script Execution

This script leverages the SharePoint object model directly, so it can be used with either SharePoint 2007 or SharePoint 2010. Its search algorithm is relatively efficient, as well, so match results should be obtained in seconds to maybe minutes – not hours.

[sourcecode language="powershell"]
<#
.SYNOPSIS
FindObjectByGuid.ps1
.DESCRIPTION
This script attempts to locate a SharePoint object by its unique ID (GUID) within
a site collection. The script first attempts to locate a match by examining webs;
following webs, lists/libraries are examined. Finally, individual items within
lists and libraries are examined. If an object with the ID is found, information
about the object is reported back.
.NOTES
Author: Sean McDonough
Last Revision: 27-July-2012
.PARAMETER SiteUrl
The URL of the site collection that will be searched
.PARAMETER ObjectGuid
The GUID that identifies the object to be located
.EXAMPLE
FindObjectByGuid -SiteUrl http://mysitecollection.com -ObjectGuid 91ce5bbf-eebb-4988-9964-79905576969c
#>
param
(
[string]$SiteUrl = "$(Read-Host 'The URL of the site collection to search [e.g. http://mysitecollection.com]')",
[Guid]$ObjectGuid = "$(Read-Host 'The GUID of the object you are trying to find [e.g. 91ce5bbf-eebb-4988-9964-79905576969c]')"
)

function FindObject($startingUrl, $targetGuid)
{
# To work with SP2007, we need to go directly against the object model
Add-Type -AssemblyName "Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"

# Grab the site collection and all webs associated with it to start
$targetSite = New-Object Microsoft.SharePoint.SPSite($startingUrl)
$matchObject = $false
$itemsTotal = 0
$listsTotal = 0
$searchStart = Get-Date

Clear-Host
Write-Host ("INITIATING SEARCH FOR GUID: {0}" -f $targetGuid)

# Step 1: see if we can find a matching web.
$allWebs = $targetSite.AllWebs
Write-Host ("`nPhase 1: Examining all webs ({0} total)" -f $allWebs.Count)
foreach ($spWeb in $allWebs)
{
$listsTotal += $spWeb.Lists.Count
if ($spWeb.ID -eq $targetGuid)
{
Write-Host "`nMATCH FOUND: Web"
Write-Host ("- Web Title: {0}" -f $spWeb.Title)
Write-Host ("- Web URL: {0}" -f $spWeb.Url)
$matchObject = $true
break
}
$spWeb.Dispose()
}

# If we don't yet have a match, we'll continue with list iteration
if ($matchObject -eq $false)
{
Write-Host ("Phase 2: Examining all lists and libraries ({0} total)" -f $listsTotal)
$allWebs = $targetSite.AllWebs
foreach ($spWeb in $allWebs)
{
$allLists = $spWeb.Lists
foreach ($spList in $allLists)
{
$itemsTotal += $spList.Items.Count
if ($spList.ID -eq $targetGuid)
{
Write-Host "`nMATCH FOUND: List/Library"
Write-Host ("- List Title: {0}" -f $spList.Title)
Write-Host ("- List Default View URL: {0}" -f $spList.DefaultViewUrl)
Write-Host ("- Parent Web Title: {0}" -f $spWeb.Title)
Write-Host ("- Parent Web URL: {0}" -f $spWeb.Url)
$matchObject = $true
break
}
}
if ($matchObject -eq $true)
{
break
}

}
$spWeb.Dispose()
}

# No match yet? Look at list items (which includes folders)
if ($matchObject -eq $false)
{
Write-Host ("Phase 3: Examining all list and library items ({0} total)" -f $itemsTotal)
$allWebs = $targetSite.AllWebs
foreach ($spWeb in $allWebs)
{
$allLists = $spWeb.Lists
foreach ($spList in $allLists)
{
try
{
$listItem = $spList.GetItemByUniqueId($targetGuid)
}
catch
{
$listItem = $null
}
if ($listItem -ne $null)
{
Write-Host "`nMATCH FOUND: List/Library Item"
Write-Host ("- Item Name: {0}" -f $listItem.Name)
Write-Host ("- Item Type: {0}" -f $listItem.FileSystemObjectType)
Write-Host ("- Site-Relative Item URL: {0}" -f $listItem.Url)
Write-Host ("- Parent List Title: {0}" -f $spList.Title)
Write-Host ("- Parent List Default View URL: {0}" -f $spList.DefaultViewUrl)
Write-Host ("- Parent Web Title: {0}" -f $spWeb.Title)
Write-Host ("- Parent Web URL: {0}" -f $spWeb.Url)
$matchObject = $true
break
}
}
if ($matchObject -eq $true)
{
break
}

}
$spWeb.Dispose()
}

# No match yet? Too bad; we’re done.
if ($matchObject -eq $false)
{
Write-Host ("`nNO MATCH FOUND FOR GUID: {0}" -f $targetGuid)
}

# Dispose of the site collection
$targetSite.Dispose()
Write-Host ("`nTotal seconds to execute search: {0}`n" -f ((Get-Date) - $searchStart).TotalSeconds)

# Abort script processing in the event an exception occurs.
trap
{
Write-Warning "`n*** Script execution aborting. See below for problem encountered during execution. ***"
$_.Exception.Message
break
}
}

# Launch script
FindObject $SiteUrl $ObjectGuid
[/sourcecode]

Conclusion

Again, I don’t envision this being something that everyone needs, but I want to share it anyway. One thing I learned with the "duplicate GUID" script I referenced earlier is that I generally underestimate the number of people who might find something like this useful.

Have fun with it, and please feel free to share your feedback!

Additional Reading and Resources

  1. Event: SharePoint Saturday Ozarks
  2. Twitter: Kirk Talbot (@kctElgin)
  3. Post: Finding Duplicate GUIDs in Your SharePoint Site Collection
  4. Event: SharePoint Saturday Louisville

Mirror, Mirror, In the Farm …

SQL Server mirroring support is a welcome addition to SharePoint 2010. Although SharePoint 2010 makes use of the Failover Partner keyword in its connection strings, SharePoint itself doesn’t appear to know whether or not SQL Server has failed over for any given database. This post explores this topic in more depth and provides a PowerShell script to dump a farm’s mirroring configuration.

This is a post I’ve been meaning to write for some time, but I’m only now getting around to it. It’s a quick one, and it’s intended to share a couple of observations and a script that may be of use to those of you who are SharePoint 2010 administrators.

Mirroring and SharePoint

The use of SQL Server mirroring isn’t something that’s unique to SharePoint, and it was possible to leverage mirroring with SharePoint 2007 … though I tended to steer people away from trying it unless they had a very specific reason for doing so and no other approach would work. There were simply too many hoops you needed to jump through in order to get mirroring to work with SharePoint 2007, primarily because SharePoint 2007 wasn’t mirroring-aware. Even if you got it working, it was … finicky.

SharePoint 2010, on the other hand, is fully mirroring-aware through the use of the Failover Partner keyword in connection strings used by SharePoint to connect to its databases.

(Side note: if you aren’t familiar with the Failover Partner keyword, here’s an excellent breakdown by Michael Aspengren on how the SQL Server Native Provider leverages it in mirroring configurations.)
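
For illustration, here's the general shape of a mirroring-aware connection string. The server and database names below are placeholders, not values that SharePoint itself exposes:

[sourcecode language="powershell"]
# The Failover Partner keyword in action (server/database names are placeholders).
# If SQLPRIMARY is unavailable, the native client transparently tries SQLMIRROR.
$connString = "Data Source=SQLPRIMARY;Failover Partner=SQLMIRROR;" +
    "Initial Catalog=WSS_Content;Integrated Security=True"
$connection = New-Object System.Data.SqlClient.SqlConnection($connString)
[/sourcecode]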

There are plenty of blog posts, articles (like this one from TechNet), and books (like the SharePoint 2010 Disaster Recovery Guide that John Ferringer and I wrote) that talk about how to configure mirroring. It’s not particularly tough to do, and it can really help you in situations where you need a SQL Server-based high availability and/or remote redundancy solution for SharePoint databases.

This isn’t a blog post about setting up mirroring; rather, it’s a post to share some of what I’ve learned (or think I’ve learned) and related "ah-ha" moments when it comes to mirroring.

What Are You Pointing At?

This all started when Jay Strickland (one of the Quality Assurance (QA) folks on my team at Idera) ran into some problems with one of our SharePoint 2010 farms that was used for QA purposes. The farm contained two SQL Server instances, and the database instances were set up such that the databases on the second instance mirrored the databases on the first (principal) instance. Jay had configured SharePoint’s service applications and Web applications for mirroring, so all was good.

But not really. The farm had been running properly for quite some time, but something had gone wrong with the farm’s mirroring configuration – or so it seemed. That’s when Jay pinged me on Skype one day with a question (which I’m paraphrasing here):

Is there any way to tell (from within SharePoint) which SQL Server instance is in-use by SharePoint at any given time for a database that is being mirrored?

It seemed like a simple question that should have a simple answer, but I was at a loss to give Jay anything usable off the top of my head. I told Jay that I’d get back to him and started doing some digging.

The SPDatabase Type

Putting on my developer hat for a second, I recalled that all SharePoint databases are represented by an instance of the SPDatabase type (Microsoft.SharePoint.Administration.Database specifically) or one of the other classes that derive from it, such as SPContentDatabase. Running down the available members for the SPDatabase type, I came up with the following properties and methods that were tied to mirroring in some way:

  • FailoverServer
  • FailoverServiceInstance
  • AddFailoverServiceInstance()

What I thought I would find (but didn’t) was one or more properties and/or methods that would allow me to determine which SQL Server instance was serving as the active connection point for SharePoint requests.

In fact, the more digging that I did, the more that it appeared that SharePoint had no real knowledge of where it was actually connecting to for data in mirrored setups. It was easy enough to specify which database instances should be used for mirroring configurations, but there didn’t appear to be any way to determine (from within SharePoint) if the principal was in-use or if failover to the mirrored instance had taken place.

The Key Takeaway

If you’re familiar with SQL Server mirroring and how it’s implemented, then the following diagram (which I put together for discussion) probably looks familiar:

SharePoint connecting to mirrored database

This diagram illustrates a couple of key points:

  1. SharePoint connects to SQL Server databases using the SQL Server Native Client
  2. SharePoint supplies a connection string that tells the native client which SQL Server instances (as Data Source and Failover Partner) should be used as part of a mirroring configuration.
  3. It’s the SQL Server Native Client that actually determines where connections are made, and the results of the Client’s decisions don’t directly surface through SharePoint.

Number 3 was the point that I kept getting stuck on. I knew that it was possible to go into SQL Server Management Studio or use SQL Server’s Management Objects (SMO) directly to gain more insight around a mirroring configuration and what was happening in real-time, but I thought that SharePoint must surely surface that information in some form.

Apparently not.
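
SQL Server itself, on the other hand, is perfectly willing to tell you which role each partner currently holds. If you have the SQL Server client tools available, a quick query against the sys.database_mirroring catalog view (run against each instance) does the trick. This sketch assumes the Invoke-Sqlcmd cmdlet is installed, and the instance name is a placeholder:

[sourcecode language="powershell"]
# Ask SQL Server (not SharePoint) which mirroring role each database holds.
Invoke-Sqlcmd -ServerInstance "SQLPRIMARY" -Query @"
SELECT DB_NAME(database_id) AS DatabaseName,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_partner_name
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL
"@
[/sourcecode]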

Checking with the Experts

I hate when I can’t nail down a definitive answer. Despite all my reading, I wanted to bounce the conclusions I was drawing off of a few people to make sure I wasn’t missing something obvious (or hidden) with my interpretation.

  • I shot Bill Baer (Senior Technical Product Manager for SharePoint and an MCM) a note with my question about information surfacing through SharePoint. If anyone could have given me a definitive answer, it would have been him. Unfortunately, I didn’t hear back from him. In his defense, he’s pretty doggone busy.
  • I put a shout out on Twitter, and I did hear back from my good friend Todd Klindt. While he couldn’t claim with absolute certainty that my understanding was on the mark, he did indicate that my understanding was in-line with everything he’d read and conclusions he had drawn.
  • I turned to Enrique Lima, another good friend and SQL Server MCM, with my question. Enrique confirmed that SQL SMO would provide some answers, but he didn’t have additional thoughts on how that information might surface through SharePoint.

Long and short: I didn’t receive rock-solid confirmation on my conclusions, but my understanding appeared to be on-the-mark. If anyone knows otherwise, though, I’d love to hear about it (and share the information here – with proper recognition for the source, of course!)

Back to the Farm

In the end, I wasn’t really able to give Jay much help with the QA farm that he was trying to diagnose. Since I couldn’t determine where SharePoint was pointing from within SharePoint itself, I did the next best thing: I threw together a PowerShell script that would dump the (mirroring) configuration for each database in the SharePoint farm.

[sourcecode language="powershell"]
<#
.SYNOPSIS
SPDBMirrorInfo.ps1
.DESCRIPTION
Examines each of the databases in the SharePoint environment to identify which have failover partners and which don’t.
.NOTES
Author: Sean McDonough
Last Revision: 19-August-2011
#>
function DumpMirroringInfo ()
{
# Make sure we have the required SharePoint snap-in loaded.
$spCmdlets = Get-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction silentlycontinue
if ($spCmdlets -eq $Null)
{ Add-PSSnapin Microsoft.SharePoint.PowerShell }

# Grab databases and determine which have failover support (and which don’t)
$allDatabases = Get-SPDatabase
$dbsWithoutFailover = $allDatabases | Where-Object {$_.FailoverServer -eq $null} | Sort-Object -Property Name
$dbsWithFailover = $allDatabases | Where-Object {$_.FailoverServer -ne $null} | Sort-Object -Property Name

# Write out unmirrored databases
if ($dbsWithoutFailover -eq $null)
{ Write-Host "`n`nNo databases are configured without a mirroring partner." }
else
{
Write-Host ("`n`nDatabases without a mirroring partner: {0}" -f $dbsWithoutFailover.Count)
$dbsWithoutFailover | Format-Table -Property Name, Server -AutoSize
}

# Dump results for mirrored databases
if ($dbsWithFailover -eq $null)
{ Write-Host "`nNo databases are configured with a mirroring partner." }
else
{
Write-Host ("`nDatabases with a mirroring partner: {0}" -f $dbsWithFailover.Count)
$dbsWithFailover | Format-Table -Property Name, Server, FailoverServer -AutoSize
}

# For ease of reading
Write-Host ("`n`n")
}
DumpMirroringInfo
[/sourcecode]

The script itself isn’t rocket science, but it did actually prove helpful in identifying some databases that had apparently "lost" their failover partners.

Additional Reading and Resources

  1. MSDN: Using Database Mirroring
  2. Whitepaper: Using database mirroring (Office SharePoint Server)
  3. Blog Post: Clarification on the Failover Partner in the connectionstring in Database Mirror setup
  4. TechNet: Configure availability by using SQL Server database mirroring (SharePoint Server 2010)
  5. Book: The SharePoint 2010 Disaster Recovery Guide
  6. Blog: John Ferringer’s "My Central Admin"
  7. Blog: Jay Strickland’s "Slinger’s Thoughts"
  8. Company: Idera
  9. MSDN: SPDatabase members
  10. MSDN: SQL Server Management Objects (SMO)
  11. Blog: Bill Baer
  12. Blog: Todd Klindt’s SharePoint Admin Blog
  13. Blog: Enrique Lima’s Intentional Thinking

Wrapping Up 2011

After some time away, I’m getting back to blogging with a recap of the last several months’ worth of events. I cover a couple of SharePoint Saturdays, a webcast, my new whitepaper, and a new CodePlex project for SharePoint administrators.

Over the last several months, I haven’t been blogging as much as I’d hoped to; in reality, I haven’t blogged at all. There are a couple of reasons for that: one of them was our recent house move (and the aftermath), and the other was a little more personal. Without going into too much detail: we were contending with a very serious health issue in our family, and that took top priority.

The good news is that the clouds are finally parting, and I’m heading into the close of 2011 on a much better note (and with more time) than I’ve spent the last several months. To get back into some blogging, I figured I’d wrap up the last several months’ worth of activities that took place since SharePoint Saturday Columbus.

Secrets of SharePoint (SoS) Webcast

A lot of things started coming together towards the end of October, and the first of those was another webcast that I did for Idera titled "'Caching-In' for SharePoint Performance." The webcast covered each of SharePoint’s built-in caching mechanisms (object caching, BLOB caching, and page output caching) as well as the Office Web Applications’ cache. I provided a rundown on each mechanism, how it worked, how it could be leveraged, and some watch-outs that came with its use.

The webcast was basically a lightweight version (40 minutes or so) of the longer (75 minute) presentation I like to present at SharePoint Saturday events. It was something of a challenge to squeeze all of the regular session’s content into 40 minutes, and I had to cut some of the material I would have liked to keep … but the final result turned out pretty well.

If you’re interested in seeing the webcast, you can watch it on-demand from the SoS webcast archive. I also posted the slides in the Resources section of this blog.

SharePoint Saturday Cincinnati

On Saturday October 29th, Cincinnati had its first-ever SharePoint Saturday Cincinnati event. The event took place at the Kingsgate Marriott on Goodman Drive (near University Hospital), and it was very well attended – so much so that Stacy Deere and the other folks who organized the event are planning to do so again next year!

Many people from the local SharePoint community came out to support the event, and we had a number of folks from out of town come rolling in as well to help ensure that the event was a big success. I ended up delivering two sessions: my "'Caching-In' for SharePoint Performance" session and my "SharePoint 2010 Disaster Recovery: New Capabilities, New Possibilities!" session.

I had a great time at the event, and I’m hoping I’ll be fortunate enough to participate again on the next go 'round!

New Disaster Recovery WhitePaper

My co-author and good friend John Ferringer and I were hard at work throughout the summer and early Fall putting together a new disaster recovery whitepaper for Idera. The whitepaper is titled "New Features in SharePoint 2010: A Disaster Recovery Love Story," and it’s a bromance novel that only a couple of goofballs like John and I could actually write …

Okay, there’s actually no romance in it whatsoever (thank heavens for prospective readers – no one needs us doing that to them), but there is a solid chunk of coverage on SharePoint 2010’s new platform capabilities pertaining to disaster recovery. We also review some disaster recovery basics in the whitepaper, cover things that have changed since SharePoint 2007, and identify some new watch-out areas in SharePoint 2010 that could have an impact on your disaster recovery planning.

The whitepaper is pretty substantial at 13 pages, but it’s a good read if you want to understand your platform-level disaster recovery options in SharePoint 2010. It’s a free download, so please grab a copy if it sounds interesting. John and I would certainly love to hear your feedback, as well.

SharePoint Backup Augmentation Cmdlets (SharePointBAC)

Many of my friends in the SharePoint community have heard me talk about some of the projects I’ve wanted to undertake to extend the SharePoint platform. I’m particularly sensitive to the plight of the administrator who is constrained (typically due to lack of resources) to use only the out-of-the-box (OOTB) tools that are available for data protection. While I think the OOTB tools do a solid job in most small and mid-size farm scenarios, there are some clear gaps that need to be addressed.

Since I’d been big on promises and short on delivery in helping these administrators, I finally started on a project to address some of the backup and restore gaps I see in the SharePoint platform. The evolving and still-under-development result is my SharePoint Backup Augmentation Cmdlets (SharePointBAC) project that is available on CodePlex.

With the PowerShell cmdlets that I’m developing for SharePoint 2010, I’m trying to introduce some new capabilities that SharePoint administrators need in order to make backup scripting with the OOTB tools a simpler and more straightforward experience. For example, one big gap that exists with the OOTB tools is that there is no way to groom a backup set. Each backup you create using Backup-SPFarm, for instance, adds to the backups that existed before it. There’s no way to groom (or remove) older backups you no longer want to keep, so disk consumption grows unless manual steps are taken to do something about it. That’s where my cmdlets come in. With Remove-SPBackupCatalog, for example, you could trim backups to retain only a certain number of them; you could also trim backups to ensure that they consume no more disk space (e.g., 100GB) than you’d like.

The CodePlex project is in alpha form right now (it’s brand spankin’ new), and it’s far from complete. I’ve already gotten some great suggestions for what I could do to continue development, though. When I combine those ideas with the ones I already had, I’m pretty sure I’ll be able to shape the project into something truly useful for SharePoint administrators.

If you or someone you know is a SharePoint administrator using the OOTB tools for backup scripting, please check out the project. I’d really love to hear from you!

SharePoint Saturday Denver

As I type this, I’m in Colorado at the close of the third (annual) SharePoint Saturday Denver event. This year’s event was phenomenal – a full two days of SharePoint goodness! Held on Friday November 11th and Saturday November 12th at the Colorado Convention Center, this year’s event was capped at 350 participants for Saturday. A full 350 people signed up, and the event even had a wait list.

On the first day of the event, I delivered a brand new session that I put together (in Prezi format) titled The Essentials of SharePoint Disaster Recovery. Here’s the amended abstract (and I’ll explain why it’s amended in a second) for the session:

"Are my nightly SQL Server backups good enough?" "Do I need an off-site disaster recovery facility?" "How do I even start the process of disaster recovery planning?" These are just a few of the more common questions that arise when the topic of SharePoint disaster recovery comes up. As with most things SharePoint, the real answer to each question is oftentimes "it depends." In this business and process-centric session, we will be taking a look at the topic of SharePoint disaster recovery from multiple perspectives: business continuity planner, technical architect, platform owner, and others. Critical concepts and terms will be explained and defined, and an effective process for analyzing and formulating a disaster recovery plan will be discussed. We’ll also highlight some common mistakes that take place when working to build a disaster recovery strategy and how you can avoid them. By the end of this session, you will be armed with the knowledge needed to plan or review a disaster recovery strategy for your SharePoint environment.

The reason I amended the abstract is because the previous abstract for the session didn’t do enough to call out the fact that the presentation is primarily business-centric rather than technically focused. Many of the folks who initially came to the session were SharePoint IT pros and administrators looking for information on backup/restore, mirroring, configuration, etc. Although I cover those items at a high level in this new talk, they’re only a small part of what I discuss during the session.

On Saturday, I delivered my "'Caching-In' for SharePoint Performance" talk during the first slot of the day. I really enjoy delivering the session; it’s probably my favorite one. I had a solid turn-out, and I had some good discussions with folks both during and after the presentation.

As I mentioned, this year’s event was a two day event. That’s a little unusual, but multi-day SharePoint Saturday events appear to be getting some traction in the community – starting with SharePoint Saturday The Conference a few months back. Some folks in the community don’t care much for this style of event, probably because there’s some nominal cost that participants typically bear for the extra day of sessions. I expect that we’ll probably continue to see more hybrid events, though, because I think they meet an unaddressed need that falls somewhere between "give up my Saturday for free training" and "pay a lot of money for a multi-day weekday conference." Only time will tell, though.

On the Horizon

Even though 2011 isn’t over yet, I’m slowing down on some of my activities save for SharePointBAC (my new extracurricular pastime). 2012 is already looking like it’s going to be a big year for SharePoint community activities. In January I’ll be heading down to Texas for SharePoint Saturday Austin, and in February I’ll be heading to San Francisco for SPTechCon. I’ll certainly cover those activities (and others) as we approach 2012.

Additional Reading and Resources

  1. Event: SharePoint Saturday Columbus
  2. Company: Idera
  3. Webcast: "Caching-In" for SharePoint Performance
  4. Webcast Slides: "Caching-In" for SharePoint Performance
  5. Location: My blog’s Resources section
  6. Event: SharePoint Saturday Cincinnati
  7. Blog: Stacy Deere and Stephanie Donahue’s "Not Just SharePoint"
  8. SPS Cincinnati Slides: "Caching-In" for SharePoint Performance
  9. SPS Cincinnati Slides: SharePoint 2010 Disaster Recovery: New Capabilities, New Possibilities!
  10. Blog: John Ferringer’s "My Central Admin"
  11. Whitepaper: New Features in SharePoint 2010: A Disaster Recovery Love Story
  12. CodePlex: SharePoint Backup Augmentation Cmdlets (SharePointBAC)
  13. Event: SharePoint Saturday Denver
  14. Tool: Prezi
  15. SPS Denver Slides: The Essentials of SharePoint Disaster Recovery
  16. SPS Denver Slides: "Caching-In" for SharePoint Performance
  17. Event: SharePoint Saturday The Conference
  18. Event: SharePoint Saturday Austin
  19. Event: SPTechCon 2012 San Francisco