Launching Your SPO Site or Portal
In this short post I cover the SharePoint Online (SPO) Launch Scheduling Tool and why you should get familiar with it before you launch a new SPO site or portal.
In this brief post, I talk about my first in-person event (SPFest Chicago) since COVID hit. I also talk about and include a recent interview with the M365 Developer Podcast.
As someone who spends most days working with (and thinking about) SharePoint, there’s one thing I can say without any doubt: Microsoft Teams has taken off like a rocket bound for low Earth orbit. It’s rare these days for me to discuss SharePoint without some mention of Teams.
I’m confident that many of you know the reason for this. Beyond replacing Skype for Business, Teams builds many of its back-end support systems and dependent service implementations on – you guessed it – SharePoint Online (SPO).
As one might expect, any rapidly evolving technology product being adopted by the enterprise reveals gaps and imperfect implementations as it grows – and Teams is no different. I’m confident that Teams will reach a point of maturity and eventually address the shortcomings people are currently finding, but until it does, some of us will try to close the gaps we find with the tools at our disposal.
One of the Teams pain points we discussed recently on the Microsoft Community Office Hours webcast was the challenge of changing ownership for a large number of teams at once. We took on a question from Mark Diaz, who posed the following:
May I ask how do you transfer the ownership of all Teams that a user is managing if that user is leaving the company? I know how to change the owner of the Teams via Teams admin center if I know already the Team that I need to update. Just consulting if you do have an easier script to fetch what teams he or she is an owner so I can add this to our SOP if a user is leaving the company.
Mark Diaz
We discussed Mark’s question (amidst our normal joking around) and posited that PowerShell could provide an answer. And since I like to goof around with PowerShell and scripting, I agreed to take on Mark’s question as “homework” as seen below:
The rest of this post is my direct response to Mark’s question and request for help. I hope this does the trick for you, Mark!
Anyone who has spent any time as an administrator in the Microsoft ecosystem of cloud offerings knows that Microsoft is very big on automating administrative tasks with PowerShell. And being a cloud workload in that ecosystem, Teams is no different.
Microsoft Teams has its own PowerShell module, which can be installed and referenced in your script development environment in a number of different ways that Microsoft has documented. This MicrosoftTeams module is a prerequisite for some of the cmdlets you’ll see me use a bit further down in this post.
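If you just want the short version, here’s a minimal sketch of one documented approach for getting the module in place (the -Scope value shown is simply one common choice):

# Install the MicrosoftTeams module from the PowerShell Gallery (one-time),
# then import it and establish a tenant context.
Install-Module -Name MicrosoftTeams -Scope CurrentUser
Import-Module -Name MicrosoftTeams
Connect-MicrosoftTeams   # prompts for credentials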
The MicrosoftTeams module isn’t the only way to work with Teams in PowerShell, though. I would have loved to build my script upon the Microsoft Graph PowerShell module … but it’s still in what is termed an “early preview” release. Given that bit of information, I opted to use the “older but safer/more mature” MicrosoftTeams module.
Let me just cut to the chase. I put together my ReplaceTeamOwners.ps1 script to address the specific scenario Mark Diaz asked about. The script accepts a handful of parameters (this next bit lifted straight from the script’s internal documentation):
.PARAMETER currentTeamsOwner
A string that contains the UPN of the user who will be replaced in the ownership changes. This property is mandatory. Example: bob@EvilCorp.com

.PARAMETER newTeamsOwner
A string containing the UPN of the user who will be assigned as the new owner of Teams teams (i.e., in place of the currentTeamsOwner). Example: jane@AcmeCorp.com

.PARAMETER confirmEachUpdate
A switch parameter that, if specified, will require the user executing the script to confirm each ownership change before it happens; this helps to ensure that only the desired changes get made.

.PARAMETER isTest
A boolean that indicates whether the script will actually make changes to Teams teams and associated structures. This value defaults to TRUE, so actual script runs must explicitly set isTest to FALSE to effect changes on Teams teams ownership.
So both currentTeamsOwner and newTeamsOwner must be specified, and that’s fairly intuitive to understand. If the -confirmEachUpdate switch is supplied, then each possible ownership change triggers a confirmation prompt, allowing you to agree to changes on a case-by-case basis.
The one parameter that might be a little confusing is the script’s isTest parameter. If unspecified, this parameter defaults to TRUE … something I’ve been putting in my scripts for ages. It’s sort of like PowerShell’s -WhatIf switch in that it allows you to understand the path of execution without actually making any changes to the environment and targeted systems/services. In essence, it’s a “dry run.”
The difference between my isTest and PowerShell’s -WhatIf is that you have to explicitly set isTest to FALSE to “run the script for real” (i.e., make changes), rather than remembering to include -WhatIf to ensure that changes aren’t made. If someone forgets about the isTest parameter and runs my script, no worries – the script is in test mode by default. Unlike -WhatIf, my scripts fail safe without relying on an admin’s memory.
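To make the distinction concrete, here’s a minimal sketch of the fail-safe pattern (Set-Something is a hypothetical function, not part of my script):

# Hypothetical function illustrating the fail-safe default.
Function Set-Something {
    param(
        [Parameter(Mandatory=$false)]
        [Boolean]$isTest = $true
    )
    if ($isTest) {
        Write-Host "TEST MODE: reporting what would change; no changes made."
    }
    else {
        Write-Host "Making real changes ..."
    }
}

Set-Something                  # omitting isTest yields a safe dry run
Set-Something -isTest $false   # changes require an explicit opt-in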
And now … the script!
<#
.SYNOPSIS
This script is used to replace all instances of a Teams team owner with the identity of another account. This might be necessary in situations where a user leaves an organization, administrators change, etc.

.DESCRIPTION
Anytime a Microsoft Teams team is created, an owner must be associated with it. Oftentimes, the team owner is an administrator or someone who has no specific tie to the team. Administrators tend to change over time; at the same time, teams (as well as other IT "objects", like SharePoint sites) undergo transitions in ownership as an organization evolves.

Although it is possible to change the owner of a Microsoft Teams team through the M365 Teams console, the process only works for one site at a time. If someone leaves an organization, it's often necessary to transfer all objects for which that user had ownership. That's what this script does: it accepts a handful of parameters and provides an expedited way to transition ownership of Teams teams from one user to another very quickly.

.PARAMETER currentTeamsOwner
A string that contains the UPN of the user who will be replaced in the ownership changes. This property is mandatory. Example: bob@EvilCorp.com

.PARAMETER newTeamsOwner
A string containing the UPN of the user who will be assigned as the new owner of Teams teams (i.e., in place of the currentTeamsOwner). Example: jane@AcmeCorp.com

.PARAMETER confirmEachUpdate
A switch parameter that, if specified, will require the user executing the script to confirm each ownership change before it happens; this helps to ensure that only the desired changes get made.

.PARAMETER isTest
A boolean that indicates whether the script will actually make changes to Teams teams and associated structures. This value defaults to TRUE, so actual script runs must explicitly set isTest to FALSE to effect changes on Teams teams ownership.

.NOTES
File Name  : ReplaceTeamsOwners.ps1
Author     : Sean McDonough - sean@sharepointinterface.com
Last Update: September 2, 2020
#>
Function ReplaceOwners {
    param(
        [Parameter(Mandatory=$true)]
        [String]$currentTeamsOwner,
        [Parameter(Mandatory=$true)]
        [String]$newTeamsOwner,
        [Parameter(Mandatory=$false)]
        [Switch]$confirmEachUpdate,
        [Parameter(Mandatory=$false)]
        [Boolean]$isTest = $true
    )

    # Perform a prerequisite check before anything else.
    Clear-Host
    Write-Host ""
    Write-Host "Attempting prerequisite operations ..."
    $paramCheckPass = $true

    # First - see if we have the MSOnline module installed.
    try {
        Write-Host "- Checking for presence of MSOnline PowerShell module ..."
        $checkResult = Get-InstalledModule -Name "MSOnline"
        if ($null -ne $checkResult) {
            Write-Host "  - MSOnline module already installed; now importing ..."
            Import-Module -Name "MSOnline" | Out-Null
        }
        else {
            Write-Host "- MSOnline module not installed. Attempting installation ..."
            Install-Module -Name "MSOnline" | Out-Null
            $checkResult = Get-InstalledModule -Name "MSOnline"
            if ($null -ne $checkResult) {
                Import-Module -Name "MSOnline" | Out-Null
                Write-Host "  - MSOnline module successfully installed and imported."
            }
            else {
                Write-Host ""
                Write-Host -ForegroundColor Yellow "  - MSOnline module not installed or loaded."
                $paramCheckPass = $false
            }
        }
    }
    catch {
        Write-Host -ForegroundColor Red "- Unexpected problem encountered with MSOnline import attempt."
        $paramCheckPass = $false
    }

    # Our second order of business is to make sure we have the PowerShell cmdlets we need
    # to execute this script.
    try {
        Write-Host "- Checking for presence of MicrosoftTeams PowerShell module ..."
        $checkResult = Get-InstalledModule -Name "MicrosoftTeams"
        if ($null -ne $checkResult) {
            Write-Host "  - MicrosoftTeams module installed; will now import it ..."
            Import-Module -Name "MicrosoftTeams" | Out-Null
        }
        else {
            Write-Host "- MicrosoftTeams module not installed. Attempting installation ..."
            Install-Module -Name "MicrosoftTeams" | Out-Null
            $checkResult = Get-InstalledModule -Name "MicrosoftTeams"
            if ($null -ne $checkResult) {
                Import-Module -Name "MicrosoftTeams" | Out-Null
                Write-Host "  - MicrosoftTeams module successfully installed and imported."
            }
            else {
                Write-Host ""
                Write-Host -ForegroundColor Yellow "  - MicrosoftTeams module not installed or loaded."
                $paramCheckPass = $false
            }
        }
    }
    catch {
        Write-Host -ForegroundColor Yellow "- Unexpected problem encountered with MicrosoftTeams import attempt."
        $paramCheckPass = $false
    }

    # Have we taken care of all necessary prerequisites?
    if ($paramCheckPass) {
        Write-Host -ForegroundColor Green "Prerequisite check passed. Press Enter to continue."
        Read-Host
    }
    else {
        Write-Host -ForegroundColor Red "One or more prerequisite operations failed. Script terminating."
        Exit
    }

    # We can now begin. First step will be to get the user authenticated so they can actually
    # do something (and we'll have a tenant context)
    Clear-Host
    try {
        Write-Host "Please authenticate to begin the owner replacement process."
        $creds = Get-Credential
        Write-Host "- Credentials gathered. Connecting to Azure Active Directory ..."
        Connect-MsolService -Credential $creds | Out-Null
        Write-Host "- Now connecting to Microsoft Teams ..."
        Connect-MicrosoftTeams -Credential $creds | Out-Null
        Write-Host "- Required connections established. Proceeding with script."

        # We need the list of AAD users to validate our target and replacement.
        Write-Host "Retrieving list of Azure Active Directory users ..."
        $currentUserUPN = $null
        $currentUserId = $null
        $currentUserName = $null
        $newUserUPN = $null
        $newUserId = $null
        $newUserName = $null
        $allUsers = Get-MsolUser
        Write-Host "- Users retrieved. Validating ID of current Teams owner ($currentTeamsOwner)"
        $currentAADUser = $allUsers | Where-Object {$_.SignInName -eq $currentTeamsOwner}
        if ($null -eq $currentAADUser) {
            Write-Host -ForegroundColor Red "- Current Teams owner could not be found in Azure AD. Halting script."
            Exit
        }
        else {
            $currentUserUPN = $currentAADUser.UserPrincipalName
            $currentUserId = $currentAADUser.ObjectId
            $currentUserName = $currentAADUser.DisplayName
            Write-Host "  - Current user found. Name='$currentUserName', ObjectId='$currentUserId'"
        }
        Write-Host "- Now validating ID of new Teams owner ($newTeamsOwner)"
        $newAADUser = $allUsers | Where-Object {$_.SignInName -eq $newTeamsOwner}
        if ($null -eq $newAADUser) {
            Write-Host -ForegroundColor Red "- New Teams owner could not be found in Azure AD. Halting script."
            Exit
        }
        else {
            $newUserUPN = $newAADUser.UserPrincipalName
            $newUserId = $newAADUser.ObjectId
            $newUserName = $newAADUser.DisplayName
            Write-Host "  - New user found. Name='$newUserName', ObjectId='$newUserId'"
        }
        Write-Host "Both current and new users exist in Azure AD. Proceeding with script."

        # If we've made it this far, then we have valid current and new users. We need to
        # fetch all Teams to get their associated GroupId values, and then examine each
        # GroupId in turn to determine ownership.
        $allTeams = Get-Team
        $teamCount = $allTeams.Count
        Write-Host
        Write-Host "Begin processing of teams. There are $teamCount total team(s)."
        foreach ($currentTeam in $allTeams) {
            # Retrieve basic identification information
            $groupId = $currentTeam.GroupId
            $groupName = $currentTeam.DisplayName
            $groupDescription = $currentTeam.Description
            Write-Host "- Team name: '$groupName'"
            Write-Host "  - GroupId: '$groupId'"
            Write-Host "  - Description: '$groupDescription'"

            # Get the users associated with the team and determine if the target user is
            # currently an owner of it.
            $currentIsOwner = $null
            $groupOwners = (Get-TeamUser -GroupId $groupId) | Where-Object {$_.Role -eq "owner"}
            $currentIsOwner = $groupOwners | Where-Object {$_.UserId -eq $currentUserId}

            # Do we have a match for the targeted user?
            if ($null -eq $currentIsOwner) {
                # No match; we're done for this cycle.
                Write-Host "  - $currentUserName is not an owner."
            }
            else {
                # We have a hit. Is confirmation needed?
                $performUpdate = $false
                Write-Host "  - $currentUserName is currently an owner."
                if ($confirmEachUpdate) {
                    $response = Read-Host "  - Change ownership to $newUserName (Y/N)?"
                    if ($response.Trim().ToLower() -eq "y") {
                        $performUpdate = $true
                    }
                }
                else {
                    # Confirmation not needed. Do the update.
                    $performUpdate = $true
                }

                # Change ownership if the appropriate flag is set
                if ($performUpdate) {
                    # We need to check if we're in test mode.
                    if ($isTest) {
                        Write-Host -ForegroundColor Yellow "  - isTest flag is set. No ownership change processed (although it would have been)."
                    }
                    else {
                        Write-Host "  - Adding '$newUserName' as an owner ..."
                        Add-TeamUser -GroupId $groupId -User $newUserUPN -Role owner
                        Write-Host "  - '$newUserName' is now an owner. Removing old owner ..."
                        Remove-TeamUser -GroupId $groupId -User $currentUserUPN -Role owner
                        Write-Host "  - '$currentUserName' is no longer an owner."
                    }
                }
                else {
                    Write-Host "  - No changes in ownership processed for $groupName."
                }
                Write-Host ""
            }
        }
        # We're done; let the user know.
        Write-Host -ForegroundColor Green "All Teams processed. Script concluding."
        Write-Host ""
    }
    catch {
        # One or more problems encountered during processing. Halt execution.
        Write-Host -ForegroundColor Red "-" $_
        Write-Host -ForegroundColor Red "- Script execution halted."
        Exit
    }
}
ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com -isTest $true -confirmEachUpdate
Don’t worry if you don’t feel like trying to copy and paste that whole block. I zipped up the script and you can download it here.
I like to make an admin’s life as simple as possible, so the first part of the script (after the comments/documentation) is an attempt to import (and if necessary, first install) the PowerShell modules needed for execution: MSOnline and MicrosoftTeams.
From there, the current owner and new owner identities are verified before the script goes through the process of getting Teams and determining which ones to target. I believe that the inline comments are written in relatively plain English, and I include a lot of output to the host to spell out what the script is doing each step of the way.
The last line in the script is simply the invocation of the ReplaceOwners function with the parameters I wanted to use. You can leave this line in and change the parameters, take it out, or use the script however you see fit.
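For example, reusing the placeholder UPNs from the script’s documentation, a cautious progression might look like this:

# 1. Dry run (isTest defaults to TRUE): see what would change without changing it.
ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com

# 2. Real run, prompting for confirmation on each team.
ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com -isTest $false -confirmEachUpdate

# 3. Real run with no prompts (once you trust the results).
ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com -isTest $false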
Here’s a screenshot of a full script run in my family’s tenant (mcdonough.online) where I’m attempting to see which Teams my wife (Tracy) currently owns that I want to assume ownership of. Since the script is run with isTest being TRUE, no ownership is changed – I’m simply alerted to where an ownership change would have occurred if isTest were explicitly set to FALSE.
So there you have it. I put this script together during a relatively slow afternoon. I tested it and made it as error-free as I could with the tenants that I have, but you should still test it yourself (leaving isTest at TRUE, at least) before executing it “for real” against your production system(s).
And Mark D: I hope this meets your needs.
If you need the what’s what on CDNs (content delivery networks), this is a bit of quick reading that will get you up to speed with what a CDN is, how to configure your SPO tenant to use a CDN, and the benefits that CDNs can bring.
Since I’m taking the time to write about the topic, you can safely guess that yes, CDNs make a difference with SPO page operations. In many cases, proper CDN configuration will make a substantial difference in SPO page performance. So enable CDN use NOW!
Knowing that some folks simply want the answer up-front, I hope that I’ve satisfied their curiosity. The rest of this post is dedicated to explaining content delivery networks (CDNs), how they operate, and how you can easily enable them for use within your SharePoint Online (SPO) sites.
Let me first address a misconception I’ve sometimes encountered among SPO administrators and developers (including some MVPs): that CDNs don’t really “do a whole lot” to help site and/or page performance. Sure, CDN usage is recommended … but a common misunderstanding is that a CDN is more of a “nice-to-have” than a “need-to-have” for SPO sites. Oftentimes that judgment comes without any real research, knowledge, or testing. Skeptics typically haven’t read the documentation (the “non-RTFM crowd”) and haven’t actually spent time profiling and troubleshooting the performance of SPO sites. Since I enjoy tackling performance problems and challenges, I’ve been fortunate to experience firsthand the benefits that CDNs can bring. By the end of this post, I hope to have made converts of a CDN skeptic or two.
A CDN is a Content Delivery Network. There are a lot of (good) web resources that describe and illustrate what CDNs are and how they generally operate (like this one and this one), so I’m not going to attempt to “add value” with my own spin. I will simply call attention to a couple of the key characteristics that we really care about in our use of CDNs with SPO.
If you didn’t know about CDNs prior to this post, or didn’t understand how they could help you, I hope you’re beginning to see the possibilities!
It wasn’t all that long ago that Microsoft was a bit more “modest” in its use of CDNs. Microsoft certainly made use of them, but prior to the implementation of its own content delivery networks, Microsoft frequently turned to a company called Akamai for CDN support.
When I first started presenting on SharePoint and its built-in caching mechanisms, I often spoke about Akamai and their edge network when talking about BLOB caching and how the max-age cache-control header could be configured and misconfigured. Back then, “Akamai” was basically synonymous with “CDN,” and that’s how many of us thought about the company. They were certainly leading the pack in the CDN service space.
Back then, if you were attempting to download a large file from Microsoft (think DVD images, ISO files, etc.), there was a good chance that the download link your browser received (from Microsoft’s servers) would actually point to an Akamai edge node geographically near your location instead of a Microsoft destination.
Fast forward to today. In addition to utilizing third-party CDNs like those deployed by Akamai, Microsoft has built (and is improving) its own first-party CDNs. There are a couple of benefits to this. First, many concerns stemming from data regulations that prohibit third-party housing of your data (yes, even in temporary locations like a CDN) are largely avoided. With CDNs that Microsoft runs, there is no hand-off to a third party and thus much less practical concern about who is housing your data.
Second, with its own CDNs, Microsoft has much more latitude and ability to extend the specifics of CDN configuration and operation to its customers. And that’s what it has done with the Office 365 CDN.
Now we’re talking! This next part is particularly important, and it’s what drove the creation of this post. It’s also the one bit of information that I promised Scott Stewart at Microsoft that I would try to get “out in the wild” as quickly and as visibly as possible.
So, if you remember nothing else from this post, please remember this:
Set-SPOTenantCdnEnabled -CdnType Public -Enable $true
That is the line of PowerShell that needs to be executed (against your SPO tenant, so you need to have a connection to your tenant established first) to enable transparent CDN support for public files. Run that, and non-sensitive files of public origin from SPO will begin getting cached in a CDN and served from there.
The line of PowerShell I shared goes through the SharePoint Online Management Shell – something most organizations using SPO (and their admins in particular) have installed somewhere.
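If you haven’t used it before, the end-to-end sequence looks roughly like the following; note that “contoso” is a placeholder for your own tenant name:

# Connect to the tenant admin endpoint (replace "contoso" with your tenant name).
Connect-SPOService -Url https://contoso-admin.sharepoint.com

# Enable CDN support for files of public origin.
Set-SPOTenantCdnEnabled -CdnType Public -Enable $true

# Optional: verify the change and review the configured origins.
Get-SPOTenantCdnEnabled -CdnType Public
Get-SPOTenantCdnOrigins -CdnType Public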
It is also possible to enable CDN support with the PnP PowerShell module, if that’s your preference, by executing the following PowerShell:
Set-PnPTenantCdnEnabled -CdnType Public -Enable $true
No matter how you enable the CDN, note that the PowerShell I’ve elected to share (above) enables CDN usage for files of public origin only. It is easy enough to alter the parameters to cover all files, public and private: switch -CdnType to Both (with the SPO Management Shell), or execute a second line of PowerShell that swaps -CdnType Public for -CdnType Private (in the case of the PnP PowerShell module).
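As a sketch, covering both origin types with each module might look like this (again, “contoso” is a placeholder, and the exact Connect-PnPOnline authentication parameters vary by PnP module version):

# SharePoint Online Management Shell: public and private origins in one call.
Set-SPOTenantCdnEnabled -CdnType Both -Enable $true

# PnP PowerShell: connect first, then enable each origin type.
Connect-PnPOnline -Url https://contoso-admin.sharepoint.com -Interactive
Set-PnPTenantCdnEnabled -CdnType Public -Enable $true
Set-PnPTenantCdnEnabled -CdnType Private -Enable $true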
The reason I chose only public enablement is because your organization may be bound by restrictions or policies that prohibit or limit CDN use with private files. This is discussed a bit in the O365 CDN post originally cited, but it’s best to do your own research.
Enabling CDN support for public files, however, is considered to be safe in general.
I’ve got a series of images that I use to illustrate performance improvements when files are served via CDN instead of from an SPO list/library, and those images come from Microsoft. Thankfully, MS makes the images I tend to use (and a discussion of them) freely available, and they are presented at this link for your reading and reference.
The example called out in the link I just shared involves offloading the jQuery JavaScript library from SPO to a CDN. The real-world numbers that were captured reduced fetch-and-load time from just over 1.5 seconds to less than half a second (<500 ms). That is no small change … and that’s for just one file!
I guess “Secret” is technically the wrong choice of term here. A more accurate description would be to say that I seldom hear or see anyone talking about another CDN benefit I consider to be very important and significant. That benefit, quite simply, involves improving file fetching and retrieval parallelism when a web page and associated assets (CSS, JS, images, etc.) are requested for download by your browser. In plain English: CDNs typically improve file downloading by allowing the browser to issue a greater number of concurrent file requests.
To help with this concept and its explanation, I’ve created a couple of diagrams that I’ll share with you. The first one appears below, and it is meant to represent the series of steps a browser might execute when retrieving everything needed to show a (SharePoint/SPO) page. As we’ve talked about, what is commonly thought of as a single page in a SharePoint site is, more accurately, a page containing all sorts of dependent assets: image files, JavaScript files, cascading style sheets, and a whole bunch more.
A request for a SharePoint page housed at http://www.thesite.com might start out with one request, but your browser is going to need all of the files referenced within the context of that page (default.aspx, in our case) to render correctly. See below:
To get what’s needed to successfully render the example SharePoint page without CDN support, we follow the numbers:
You might be wondering, “Only six files at a time? Really? Why the limitation?” Well, I should start by saying the limit is probably six … maybe a bit more, perhaps a bit less. The specific number depends on the browser you’re using. There was a good summary answer on StackOverflow to a related (but slightly different) question that provides some additional discussion.
Section eight (8) of the HTTP specification (RFC 2616) specifically addresses HTTP connections, how they should be handled, how proxies should be negotiated, etc. For our purposes, the practical implementation of the HTTP specification by modern browsers generally limits the number of concurrent/active connections a browser can have to any given host or URL to six (6).
Notice how I worded that last sentence. Since you folks are smart cookies, I’ll bet you’re already thinking “Wait a minute. CDNs typically have different URLs/hosts from the sites they cache” and imagining what happens (or can happen) when a new source (i.e., a different host/URL) is introduced.
This illustration roughly outlines the fetch process when a CDN is involved:
Steps one (1) through four (4) of the fetch process with a CDN are basically still the same as was illustrated without a CDN a bit earlier. When the page is served-up in step three (3) and returned in step four (4), though, there are some differences and additional activity taking place:
Since we’re dealing with two different URLs/hosts in our CDN example (http://www.thesite.com and cdn.source.com), our original six (6) file concurrent download limitation transforms into a 12 file limitation (two hosts serving six files a time, 2 x 6 = 12).
Whether or not the CDN-based process is ultimately faster than without a CDN depends on a great many factors: your Internet bandwidth, the performance of your computer, the complexity/structure of the page being served-up, and more. In the majority of cases, though, at least some performance improvement is observed. In many cases, the improvement can be quite substantial (as referenced and discussed earlier).
Additional Note: 8/24/2020
In a bit of laziness on my part, I didn’t do a prior-article search before writing this post. As fate would have it, Bob German (a friend and fellow MVP – well, he was an MVP prior to joining Microsoft a couple of years back) wrote a great post at the end of 2017 that I became aware of this morning through a series of tweets. Bob’s post is called “Choosing a CDN for SharePoint Client Solutions” and is a bit more developer-oriented. That being said, it’s a fantastic post with good information – a great additional read if you’re looking for more material and/or a slightly different perspective. Nice work, Bob!
Post Update: 8/26/2020
Anders Rask was kind enough to point out that the PnP PowerShell line I originally had listed wasn’t, in fact, PnP PowerShell. That specific line of PowerShell has since been updated to reflect the correct way of altering a tenant’s CDN with the PnP PowerShell cmdlets. Many thanks for the catch, Anders!
So, to sum-up: enable CDN use within your SPO tenant. The benefits are compelling!
In this post, I share a brief bit of magic that Outlook exhibited for me recently. I don’t know where it came from or if it is even is an indication of things to come … but I liked what I saw!
I recently returned from SPTechCon. While I was in San Francisco, I delivered a few sessions (including a new advanced PowerShell session) and managed to make it to Muir Woods to visit the Redwoods once again. The entire time I was in San Francisco, I was riding around in a rental car from Enterprise. I usually get my rental cars from Enterprise, but something weird happened when I was getting this rental car.
Outlook did me a favor.
When I booked the rental car with Enterprise, I received the following email:
Do you see the part stating “This event was automatically added to your calendar from email by Outlook?” That caught my attention. Outlook had never taken any action on my behalf prior to this trip, and I can’t say that I’ve seen it do anything since. But for some strange reason, this one car reservation got Outlook to do something new and cool.
I checked my calendar, and sure enough, there were events for both pickup and drop off.
I’ll be honest: I don’t know how these events got onto my calendar, and I don’t even know who wrote the code to make the magic happen. But in this one single instance, I feel like I’ve had a taste of what’s to come … and I really like it.
I did a little digging as I was writing this post to see if I could figure out where Outlook got its smarts from. I didn’t find a whole lot, but I did find this one post on Microsoft’s acquisition of Genee to accelerate intelligent experiences in Office 365. Maybe that had something to do with what I was seeing?
I like the idea of Outlook getting some intelligence and being able to look at my email to ascertain when things will happen. Maybe Delta will send me a trip confirmation and my flight times will end up on my calendar. Or maybe Mark will send me an email about a great Baconfest happening in Harrison, Arkansas, and that event will get parsed and entered into my records so that I’ll know when I need to leave my house to make it there on time.
I see a lot of potential for this sort of processing and assistance, but I think I’d like to understand it all a bit better before things move on. Heck, right now I’m not even sure if what happened to me is something that’s going to roll out more broadly … or if it was just a blip/test. As I indicated, I haven’t seen anything appear on my calendar since the car reservation, so I’m not even sure that it’s something that “someone” is rolling out.
But I like this. If it’s done right, it has the potential to simplify a lot of things we manually push ourselves to do today.
I’m okay with Outlook becoming my secretary. How about you?
My friend Tom Resing reached out via Facebook after I shared this blog post, and he opened the door to a world of settings I was simply unaware of. He pointed me to a link titled Automatically add travel and package delivery events to your calendar. It discusses how to control the behavior with Outlook online, and it’s definitely worth checking out. I’m always happy to discover new knobs and levers!
Here we are in 2016. If you’ve been following my blog for a while, you might recall a post I threw together back in 2010 called Portrait of a Basement Datacenter. Back in 2010, I was living on the west side of Cincinnati with my wife (Tracy) and three-year-old twins (Brendan and Sabrina). We were kind of shoehorned into that house; there just wasn’t a lot of room. Todd Klindt visited once and had dinner with us. He didn’t say it, but I’m sure he thought it: “gosh, there’s a lot of stuff in this little house.”
The image on the right is how things looked in 2010. Just looking at the picture brings back a bunch of memories for me, and it also reminds me a bit of what we (as server administrators) could and couldn’t easily do. For example, nowadays we virtualize nearly everything without a second thought. Six years ago, virtualization technology certainly existed … but it hadn’t hit the level of adoption that it’s cruising at today. I look at all the boxes on the right and think “holy smokes – that’s a lot of hardware. I’m glad I don’t have all of that anymore.” It seemed like I had drives and computers everywhere, and they were all sucking down juice. I had two APC 1600W UPS units that were acting as battery backups back then. With all the servers plugged-in, they were drawing quite a bit of power. And yeah – I had the electric bill to prove it.
For starters, we now live on the east side of Cincinnati and have a much bigger house than we had way back when. Whenever friends come over and get a tour of the house, they inevitably head downstairs and get to see what’s in the unfinished portion of the basement. That’s where the servers are nowadays, and this is what my basement datacenter looks like in 2016:
In reality, quite a bit has changed. We have much more space in our new house, and although the “server area” is smaller overall, it’s basically a dedicated working area where all I really do is play with tech, fix machines, store parts, etc. If I need to sit at a computer, I go into the gaming area or upstairs to my office. But if I need to fix a computer? I do it here.
In terms of capabilities, the last six years have been good to me.
Back on the west side of town, I had a BPL (broadband-over-powerline) Internet hookup from Duke Energy and The CURRENT Group. Nowadays, I don’t even know what’s happening with that technology. It looks like Duke Energy may be trying to move away from it? In any case, I know it gave me a symmetric pipe to the Internet, and I think I had about 10Mbps up and down. I also had a secondary DSL connection (from Cincinnati Bell) that was about 2.5Mbps down and 1Mbps up.
Once I moved back to the east side of Cincinnati and Anderson Township, the doors were blown off of the barn in terms of bandwidth. Initially, I signed with Time Warner Cable for a 50Mbps download / 5Mbps upload primary connection to my house. I made the mistake of putting in a business circuit (well, I was running a business), so while it gave me some static IP address options, it ended up costing a small fortune.
These days, my primary connection is Cincinnati Bell fiber. I do keep a backup connection with Time Warner Cable in case the fiber goes down, and my TMG firewall does a great job of failing over to that backup connection if something goes wrong. And yes, I’ve had a problem with the fiber once or twice. But it’s been resolved quickly, and I was back up in no time. Frankly, I love Cincinnati Bell’s fiber.
I have a bunch of storage downstairs, and frankly I’m pretty happy with it. I’ve graduated from the random drives and NAS appliances that used to occupy my basement. These days, I use Mediasonic RAID enclosures. You pop some drives in, connect an eSATA cable (or USB cable, if you have to), and away you go. They’ve been great self-contained pass-through drive arrays for specific virtual machines running on my Hyper-V hosts. I’ve been running the Mediasonic arrays for quite a few years now, and although this isn’t a study in “how to build a basement datacenter,” I’d recommend them to anyone looking for reliable storage enclosures. I keep one as a backup unit (because eventually one will die), and as a group they seem to be in good shape at this point in time. The enclosures supply the RAID-5 that I want (and yeah, I’ve had *plenty* of drives die), so I’ve got highly-available, hot-swappable storage where I need it.
Oh, and don’t mind the minions on my enclosures. Those of you with children will understand. Those who don’t have children (or who don’t have children in the appropriate age range) should either just wait it out or go watch Despicable Me.
First off, I love the cloud. For enterprise scale engagements, the cloud (and Microsoft’s Azure capabilities, in particular) are awesome. Microsoft has done a lot to make it easier (not “easy,” but “easier”) for us to build for the cloud, put our stuff (like pictures, videos, etc.) in the cloud, and get things off of our thumb drives and backup boxes and into a place where they are protected, replicated, and made highly available.
What I’m doing in my basement doesn’t mean I’m “avoiding” the cloud. Actually, I moved my family onto an Office 365 plan to give them email and capabilities they didn’t have before. My kids have their first email address now, and they’re learning how to use email through Office 365. I’m going to move the SharePoint site collection that I maintain for our family (yes, I’m that big of a geek) over to SharePoint Online because I don’t want to wrangle with it at home any longer. Keeping SharePoint running is a pain-in-the-butt, and I’m more than happy to hand that over to the Office 365 folks.
I’ll still be tinkering with SharePoint VMs for sure with the work I do, but I’m happy to turn over operational responsibility to Microsoft for my family’s site collection.
Accessing stuff at home is only part of it, though. The other part is just knowing that I’m going through my network, interacting with my systems, and still feeling like I have some control in our increasingly disconnected world. My Plex server is there, and my file shares are available, and I can RDP into my desktop to leverage its power for something I’m working on. There’s a comfort in knowing my stuff is on my network and servers.
Critical data makes it to the cloud via OneDrive, Dropbox, etc., but I still can’t afford to pay for all of my stuff to be in the cloud. Prices are dropping all of the time, though. Will I ever give up my basement datacenter? Probably not, because maintaining it helps me keep my technical skills sharpened … but it’s also a labor of love.
After working with the Office 365 Preview over the last several months, I shifted my thoughts on SharePoint in the Cloud. In this post, I share my thoughts and “revelations” about what’s coming with SharePoint 2013, Office 365, and usage of SharePoint in the Cloud.
Around the summer of 2010, it was becoming clear to me that Cloud-based SharePoint wasn’t just a passing trend. With Microsoft clearly stating its intention to make the Cloud a cornerstone of its business, I needed to start paying attention.
My relationship with Microsoft and Microsoft technologies goes back to the days of MS-DOS. As a result, I’ve always seen Microsoft as a company that was primarily interested in one thing: selling software. I worked for a Microsoft managed systems integration (SI) partner – Cardinal Solutions Group – for several years. During my years with Cardinal, my goal was to help others who had purchased Microsoft software make use of that software. In many cases, customer leads came from Microsoft either directly or indirectly. Microsoft sold the software, and we set up, customized, serviced, and configured that software based on what a customer was trying to accomplish. It was a symbiotic relationship, and it was pretty easy for me to grasp.
Then the whole “Cloud thing” started. Cloud-based SharePoint and other Azure-branded services seemed like a somewhat confusing move for Microsoft at first – at least to me. Even before Office 365, Microsoft offered hosted SharePoint through BPOS – or the Business Productivity Online Suite. At the time when BPOS was first released, I viewed it as something of a niche market for Microsoft. I had plenty of friends who worked at places like Rackspace and Fpweb.net, so the part I found unusual wasn’t really that “someone else” was hosting SharePoint and focusing on it as a service. The fact that Microsoft itself was getting serious about hosting SharePoint and other services was the eyebrow raiser.
For Microsoft, it wasn’t just about selling software anymore.
I’d also be remiss if I didn’t say that I think Office 365 has a very compelling value proposition, even without SharePoint. SharePoint itself is a complex platform, though, and many organizations struggle with administrative needs like data protection, performance optimization, high availability, and basic day-to-day management. The idea of turning these concerns over to someone else (or some other entity) who better-understands them makes sense to me.
After working with SharePoint 2013 for several months now, I can easily say that the platform isn’t getting any easier. SharePoint 2013 has quite a few more “moving parts” relative to SharePoint 2010, just as SharePoint 2010 demonstrated itself to be significantly more complex than SharePoint 2007.
Despite the compelling nature of Office 365, I always seemed to come back around to fixate on one thought. This thought constantly reverberated through my head anytime “SharePoint in the Cloud” became a topic of conversation:
Most companies using SharePoint have made significant investments in hardware, software, personnel, and services to get SharePoint up-and-running. They aren’t going to simply “dump” those on-premises investments and go to the Cloud tomorrow. The Cloud will happen, but it’s going to take longer than Microsoft thinks.
In discussions with many friends and respected professionals in the SharePoint community, I knew that I wasn’t completely alone in my way of thinking. In the conversations I’d had, there was almost always agreement that a shift to the Cloud and Cloud-based services would happen over time. The greatest debate seemed to be over whether it would happen next year or take the next half-decade.
That’s when the pieces started to click into place for me. All along I had been thinking about Office 365 and Cloud-based SharePoint deployments along the lines of the bar chart seen above and to the right. Numbers and proportions are all relative, but the key concept I’m trying to convey with the chart is this: for some reason, I had always thought that the proponents of Cloud-based SharePoint were suggesting that Cloud adoption would come at the cost of on-premises deployments; i.e., on-premises users would “convert” to the Cloud. If Cloud-based deployments grew, that meant that on-premises deployments had to shrink. In short: I was inadvertently assuming that the overall number of SharePoint deployments had hit saturation and was remaining static.
I don’t think that way at all anymore.
The more I think about it, the more I feel that Office 365 growth – once the new 2013 Preview goes live – will be aggressive and look something more like what I’ve charted above and to the left. While Office 365 might replace some on-premises deployments, particularly for smaller organizations, I don’t see that as its primary market (initially) or its strong suit. The greatest degree of Office 365 traction is going to be obtained with users who need a Google Apps-like solution but for whom buying the required infrastructure and expertise for Exchange, SharePoint, etc., is cost-prohibitive.
So, I stopped thinking “replacement” and started thinking “complement.” That’s my assessment and working outlook for the Office 365 (Preview) right now.
I’m sure that plenty of folks who’ve believed in “Cloud Power” since Day One probably think that I’m still being too conservative in my outlook for SharePoint on Office 365, and that may be true. However, I still see plenty of concerns that are near-and-dear to most enterprise and larger business customers, and I believe that they will be Cloud adoption blockers until they’re addressed directly and decisively. Here are just a few that come to mind.
1. Who owns the data? Sure, it’s your tenant … but do you own the data? Common sense would seem to suggest “yes,” but this is still uncharted legal territory. Don’t believe me? Do some background reading on the Megaupload situation and see how users of that Cloud-based service are faring in their attempts to get “their data” back.
2. What about disasters? Many people point to the Cloud as a solution for business continuity and disaster recovery (DR) concerns. The Cloud can certainly help, but I’ll tell you (somewhat authoritatively) that the Cloud doesn’t make DR concerns “go away” – especially for SharePoint. For one thing, you’re locked into your provider’s terms of service; if you need more aggressive RPO and RTO windows, then you need to be looking elsewhere. Even Cloud data centers themselves go down; what’s your plan then?
3. Can I leave my provider? Everyone is quick to talk about moving to the Cloud, and many companies are happy to talk about migration strategies. What if you want to leave or change providers, though? Do those migration strategies work? What do you lose? How long would it even take? These may not seem like important questions now, but they will become increasingly important as Cloud adoption grows and more companies get in on the action. It stands to reason that some portion of those companies will fail, close up shop, be bought, etc. When that happens, what do you do … and what happens to your SharePoint?
Of course, my perspective on Office 365 uptake in the next several years could be completely off-the-mark. After all, I don’t really have any numbers to back up my hypotheses. They’re just my opinions, but they are in-line with my gut feel.
And I’ve learned to trust my gut.