The Threadripper

This post is me finally doing what I told so many people I was going to do a handful of weeks back: sharing the “punch list” (i.e., the parts list) I used to put together my new workstation. And unsurprisingly, I chose to build that workstation around AMD’s Threadripper CPU.

Getting Old

I make a living and support my family through work that depends on a computer, as I’m sure many of you do. And I’m sure that many of you can understand when I say that working on a computer day-in and day-out, one develops a “feel” for its performance characteristics.

While undertaking project work and other “assignments” over the last bunch of months, I began to feel like my computer wasn’t performing with the same “pep” that it once had. It was subtle at first, but I began to notice it more and more often – and that bugged me.

So, I attempted to uninstall some software, kill off some boot-time services and apps that were of questionable use, etc. Those efforts sometimes got me some performance back, but the outcome wasn’t sustained or consistent enough to really make a difference. I was seriously starting to feel like I was wading through quicksand anytime I tried to get anything done.

The Last Straw

There isn’t any one event that made me think “Jeez, I really need a new computer” – but I still recall the turning point because it’s pretty vivid in my mind.

I subscribe to the Adobe Creative Cloud. Yes, it costs a small fortune each year, and each time I pay the bill, I wonder if I get enough use out of it to justify the expense. I invariably decide that I do end up using it quite a bit, though, so I keep re-upping for another year. At least I can write it off as a business expense.

Well, I was trying to go through a recent batch of digital photos using Adobe Lightroom, and my system was utterly dragging. And whenever my system does that for a prolonged period, I hop over to the Windows Task Manager and start monitoring. And when I did that with Lightroom, this is what I saw:

Note the 100% CPU utilization in the image. Admittedly, Rambox Pro looks like the culprit here, and it was using a fair bit of memory … but that’s not the whole story.

Since the start of this ordeal, I’ve become more judicious about how many active tabs I spin up in Rambox Pro. It’s a great utility, but like every Chromium-based tool, it’s an absolute pig when it comes to memory usage. Have you ever looked at your memory consumption when you have a lot of Google Chrome tabs open? That’s what’s happening with Rambox Pro. So be warned and be careful.

I’m used to the CPU spiking for brief periods of time, but the CPU sat pegged at 100% utilization for the duration that Lightroom was running – literally the entire time. And not until I shut down Lightroom did the utilization start to settle back down.

I thought about this for a while. I know that Adobe does some work to optimize/enhance its applications to make the most of systems with multiple CPU cores and symmetric multiprocessing when it’s available to the applications. The type of tasks most Adobe applications deal with are the sort that people tend to buy beefy machines for, after all: video editing, multimedia creation, image manipulation, etc.

After observing Lightroom and how it brought my processor to its knees, I decided to do a bit of research.

Research and Realization

At the time, my primary workstation was built around an Intel Core i7-5960X Extreme processor. When I originally put the system together, there was no consumer desktop processor that was faster or had more cores (that I recall). Based on Intel’s (then) brand-new Haswell-E series, the i7-5960X had eight cores, each supporting Hyper-Threading. It had an oversized 20MB L3 cache, “new” virtualization support and extensions, 40 PCIe lanes, and all sorts of goodies baked in. I figured it was more than up to handling modern workstation tasks.

Yeah – not quite.

In researching that processor, I learned that it had been released in September of 2014 – roughly six years prior. Boy, six years flies by when you’re not paying attention. Life moves on, but like a new car that’s just been driven off the lot, that shiny new PC you just put together starts losing value as soon as you power it up.

The Core i7 chip and the system built around it are still very good at most things today – in fact, I’m going to set my son up with that old workstation as an upgrade from his Core i5 (which he uses primarily for video watching and gaming). But for the things I regularly do day in and day out – running VMs, multimedia creation and editing, etc. – that Core i7 system is significantly behind the times. With six years under its belt, a computer system tends to start receiving email from AARP.

The Conversation and Approval

So, my wife and I had “the conversation,” and I ultimately got her buy-in on the construction of a new PC. Let me say, for the record, that I love my wife. She’s a rational person, and as long as I can effectively plead my case that I need something for my job (being able to write it off helps), she’s behind me and supports the decision.

Tracy and I have been married for 17 years, so she knows me well. We both knew that the new system was going to likely cost quite a bit of money to put together … because my general thinking on new computer systems (desktops, servers, or whatever) boils down to a few key rules and motivators:

  1. Nine times out of ten, I prefer to build a system (from parts) over buying one pre-assembled. This approach ensures that I get exactly what I want in the system, and it also helps with the “continuing education” associated with system assembly. It also forces me to research what’s currently available at the time of construction, and that invariably ends up helping at least one or two friends in the assembly of new systems that they want to put together or purchase.
  2. I generally try to build the best performing system I can with what’s available at the time. I’ll often opt for a more expensive part if it’s going to keep the system “viable” for a longer period of time, because getting new systems isn’t something I do very often. I would absolutely love to get new systems more often, but I’ve got to make these last as long as I can – at least until I’m independently wealthy (heh … don’t hold your breath – I’m certainly not).
  3. As an adjunct to point #2 (above), I tend to opt for more expensive parts and components if they will result in a system build that leaves room for upgrades/part swaps down the road. Base systems may roll over only every half-dozen years or so, but parts and upgrades tend to flow into the house at regular intervals. Nothing simply gets thrown out or decommissioned. Old systems and parts go to the rest of the family, get donated to a friend in need, etc.
  4. When I’m building a system, I have a use in mind. I’m fortunate that I can build different computers for different purposes, and I have two main systems that I use: a primary workstation for business, and a separate machine for gaming. That doesn’t mean I won’t game on my workstation (and vice-versa), but any such usage is secondary; I select parts for a system’s intended purpose.
  5. Although I strive to be on the cutting edge, I’ve learned that it’s best to stay off the bleeding edge when it comes to my primary workstation. I’ve been burned a time or two by trying to get the absolute best and newest tech. When you depend on something to earn a living, it’s typically not a bad idea to prioritize stability and reliability over the “shiny new objects” that aren’t proven yet.

Threadripper: The Parts List

At last – the moment that some of you may have been waiting for: the big reveal!

I want to say this at the outset: I’m sharing this selection of parts (and some of my thinking while deciding what to get) because others have specifically asked. I don’t value religious debates over why component “xyz” is inferior to “abc” nearly as much as I once did in my youth.

So, general comments and questions on my choice of parts are certainly welcome, but if you hope to engage me in a debate, the only thing you’ll hear is crickets chirping …

The choice of which processor to go with wasn’t all that difficult. Well, maybe a little.

Given that this machine would be swapped in as my new workstation, I figured most medium-to-high-end current processors would do the job. Many of the applications I use can get more done with a greater number of processing cores, and I’ve been known to keep a significant number of applications open on my desktop. I also continue to run a number of virtual machines (on my workstation) in my day-to-day work.

In recent years, AMD has been flogging Intel in many different benchmarks – specifically, the high-end desktop (non-gaming) benchmarks that are the domain of multi-core systems. AMD’s manufacturing process is also more advanced (Intel is still stuck on 10nm–14nm while AMD has been on 7nm), and they’ve finally left the knife at home and brought a gun to the fight – especially with Threadripper. It reminds me of a period decades ago when AMD was able to outperform Intel with the Athlon 64 FX series (I loved the FX-based system I built!).

I realize benchmarks are won by one company one day and another the next. The bottom line for me: AMD’s Ryzen chips have held the performance-per-core crown at a given price point for a while now. I briefly considered a Ryzen 5 or 9, but I opted for the Threadripper once I acknowledged that the system would have to last me a fairly long time. Yes, it’s a chunk of change … but Threadripper was worth it for my computing tasks.

Had I been building a gaming machine, it’s worth noting that I probably would have gone Intel, as their chips still tend to perform better for single-threaded loads that are common in games.

First off, you should know that I generally don’t worry about motherboard performance. Yes, I know that differences exist, and motherboard “A” may be 5% faster than motherboard “B.” At the end of the day, they’re all going to be in the same ballpark (except for maybe a few stinkers – and reviews tend to frown on those offerings …)

For me, motherboard selection is all about capabilities and options. I want storage options, and I especially want robust USB support. Features and capabilities tend to become more available as cost goes up (imagine that!), and I knew right off that I was going to probably spend a pretty penny for the appropriate motherboard to drop that Threadripper chip into.

I’ve always had good luck with ASUS motherboards, and it doesn’t hurt that the ROG Zenith II Extreme Alpha was highly rated and reviewed. After all, it has a name that sounds like a next-generation Terminator, so how could I go wrong?!

Everything about the board says high end, and it satisfies the handful of requirements I had – plus some I didn’t know I had (but later found nice, like that 10Gbps Ethernet port …)

“Memory, all alone in the moonlight …”

Be thankful you’re reading that instead of listening to me sing it. Barbra Streisand I am not.

Selecting memory doesn’t involve as many decision points as other components in a new system, but there are still a few to consider. There is, of course, the overall amount of memory you want in the system. My motherboard and processor supported up to 256GB, but that would be overkill for anything I’d be doing. I settled on 128GB, and I decided to get it as 4x32GB DIMMs rather than 8x16GB so I could expand (easily) later if needed.

Due to their architecture, it has been noted that the performance of Ryzen chips can be impacted significantly by memory speed. The “sweet spot” before prices grew beyond my desire to purchase appeared to be about 3200MHz. And if possible, I wanted memory with the lowest CAS (column address strobe) latency I could find, as that number tends to matter most among the memory timings (CAS, tRAS, tRP, and tRCD).
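
If you want to compare kits on real latency rather than marketing numbers, the advertised timings convert easily: divide the CAS cycle count by the actual clock, which is half the DDR transfer rate. A quick sketch of the arithmetic – Python purely for illustration, and the example kits below are hypothetical, not the ones I bought:

```python
def first_word_latency_ns(cas_cycles: int, ddr_rate_mt_s: int) -> float:
    """Approximate first-word latency for a DDR memory kit.

    DDR transfers data twice per clock cycle, so the real clock (MHz)
    is half the advertised MT/s rating; CAS latency is counted in
    those clock cycles.
    """
    clock_mhz = ddr_rate_mt_s / 2
    return cas_cycles / clock_mhz * 1000  # cycles / MHz -> nanoseconds

# DDR4-3200 CL16: 16 cycles at 1600 MHz -> 10.0 ns
print(first_word_latency_ns(16, 3200))
# DDR4-3600 CL18: faster clock, higher CL -> the same 10.0 ns
print(first_word_latency_ns(18, 3600))
```

This is why a lower CL at the same speed is a genuine win: DDR4-3200 CL14 works out to 8.75 ns against CL16’s 10 ns.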

I found what I wanted with the Corsair Vengeance RGB series. I’ve had a solid experience with Corsair memory in the past, so once I confirmed the numbers it was easy to pull the trigger on the purchase.

There are 50 million cases and case makers out there. I’ve had experience with many of them, but getting a good case (in my experience) is as much about timing as any other factor (like vendor, cost, etc).

Because I was a bit more focused on the other components, I didn’t want to spend a whole lot of time on the case. I knew I could find one of those diamonds in the rough (i.e., cheap and awesome) if I were willing to spend some time combing reviews and product slicks … but I’ll confess: I dropped back and punted on this one. I pulled open my Maximum PC and/or PC Gamer magazines (I’ve been subscribing for years) and looked at what they recommended.

And that was as hard as it got. Sure, the Cosmos C700P was pricy, but it looked easy enough to work with. Great reviews, too.

When the thing was delivered, the one thing I *wasn’t* prepared for was the sheer SIZE of the case. Holy shnikes – this is a BIG case. Easily the biggest non-server case I’ve ever owned. It almost doesn’t fit under my desk, but thankfully it just makes it with enough clearance that I don’t worry.

Oh yeah, there’s something else I realized with this case: I was accruing quite the “bling show” of RGB lighting-capable components. Between the case, the memory, and the motherboard, I had my own personal 4th of July show brewing.

Power supplies aren’t glamorous, but they’re critical to any stable and solid system. 25 years ago, I lived in an old apartment with atrocious power. I would go through cheap power supplies regularly. It was painful and expensive, but it was instructional. Now, I do two things: buy an uninterruptible power supply (UPS) for everything electronic, and purchase a good power supply for any new build. Oh, and one more thing: always have another PSU on-hand.

I started buying high-end Corsair power supplies around the time I built my first gaming machine, which utilized video cards in SLI. That was the point in nVidia’s history when the cards had horrible power consumption stats … and putting two of them in a case was a quick trip to the scrap heap for anything less than 1000W.

That PSU survived and is still in-use in one of my machines, and that sealed the deal for me for future PSU needs.

This PSU can support more than I would ever throw at it, and it’s fully modular *and* relatively high efficiency. Fully modular is the only way to go these days; it definitely cuts down on cable sprawl.

Much like power supplies, CPU coolers tend not to be glamorous. The most significant decision point is “air cooled” versus “liquid cooled.” Traditionally, I’ve gone with air coolers, since I don’t overclock my systems and I opt for highly ventilated cases. It’s easier (in my opinion) and tends to be quite a bit cheaper.

I have started evolving my thinking on the topic, though – at least a little bit. I’m not about to start building custom open-loop cooling runs like some of the extreme builders out there, but there are a host of sealed closed-loop coolers that are well-regarded and highly rated.

Unsurprisingly, Corsair makes one of the best (is there anything they don’t do?). I believe Maximum PC put the H100i PRO all-in-one at the top of their list. It was a hair more than I wanted to spend, but in the context of the project’s budget (growing with each piece), it wasn’t bad.

And oh yeah: it *also* had RGB lighting built-in. What the heck?

I initially had no plans (honestly) of buying another video card. My old workstation had two GeForce 1080s (in SLI) in it, and my thinking was that I would re-use those cards to keep costs down.

Ha. Ha ha. “Keep costs down” – that’s funny! Hahahahahahaha…

At first, I did start with one of the 1080s in the case. But there were other factors in the mix I hadn’t foreseen. Those two cards were going to take up a lot of room in the case and limit access to the remaining PCI Express slots. There’s also the time-honored tradition of passing one of the 1080s down to my son Brendan, who is also a gamer.

Weak arguments, perhaps, but they were enough to push me over the edge into the purchase of an RTX 2080Ti. I actually picked it up at the local Micro Center, and there’s a bit of a story behind it. I originally purchased the wrong card (one that had fittings for an open-loop cooling system), so I returned it and picked up the right card while doing so. That card (the right one) was only available as an open-box item (at a substantially reduced price). Shortly after powering on my system with the card installed, it was clear why it was open-box: it had hardware problems.

Thus began the dance with EVGA support and the RMA process. I’d done the dance before, so I knew what to expect. EVGA has fantastic support anyway, so I was able to RMA the card back (shipping nearly killed me – ouch!), and I got a new RTX 2080Ti at an ultimately “reasonable” price.

Now my son will get a 1080, I’ve got a shiny new 2080Ti … and nVidia just released the new 30 series. Dang it!

Admittedly, this was a Micro Center “impulse buy.” That is, the specific choice of card was the impulse buy; I knew I was going to get a separate sound card (i.e., aside from the motherboard-integrated sound) before I’d really made any other decision tied to the new system.

For years I’ve been hearing that the integrated sound chips they’re now putting on motherboards have gotten good enough that a separate, discrete sound card is no longer necessary for those wanting high-quality audio. Forget about SoundBlaster – no longer needed!

I disagree.

I’ve tried using integrated sound on a variety of motherboards, and there’s always been something … sub-standard. In many cases, the chips and electronics simply weren’t shielded enough to keep powerline hum and other interference out. In other cases, the DSP associated with the audio would chew CPU cycles and slow things down.

Given how much I care about my music – and my picky listening habits (we’ll say “discerning audiophile tendencies”) – I’ve found that I’m only truly happy with a sound card.

I’d always gotten SoundBlaster cards in the past, but I’ve been kinda wondering about SoundBlaster for a while. They were still making good (or at least “okay”) cards in my opinion, but their attempts to stay relevant seemed to be taking them down some weird avenues. So, I was open to the idea of another vendor.

The ASUS card looked to be the right combination of high signal-to-noise ratio and low distortion in a minimalist card. And thus far, it’s been fantastic. An impulse buy that actually worked out!

Much like the choice of CPU, picking the SSD that would be used as my Windows system (boot) drive wasn’t overly difficult. This was the device that my system would be booting from, using for memory swapping, and other activities that would directly impact perceived speed and “nimbleness.” For those reasons alone, I wanted to find the fastest SSD I could reasonably purchase.

Historically, I’ve purchased Samsung Pro SSDs for boot drive purposes and have remained fairly brand loyal. If it “ain’t broke, ya don’t fix it.” But when I saw that Seagate had a new M.2 SSD out that was supposed to be pretty doggone quick, I took notice. I picked one up, and I can say that it’s a really sweet SSD.

The only negative things Tom’s Hardware had to say about it were that it was “costly” and had “no heatsink.” In the plus column, Tom’s said that it had “solid performance,” a “large write cache,” that it was “power efficient,” had “class-leading endurance,” and they liked its “aesthetics.” They also said it “should be near the top of your best SSDs list.”

And about the cost: Micro Center actually had the drive for substantially less than its list price, so I jumped at it. I’m glad I did, because I’ve been very happy with its performance. That happiness is based on nothing more than my perception, though. Full disclosure: I haven’t actually benchmarked system performance (yet), so I don’t have numbers to share. Maybe a future post …

Unsurprisingly, my motherboard selection came with built-in RAID capability. That RAID capability actually extended to NVMe drives (a first for one of my systems), so I decided to take advantage of it.

Although it’s impractical from a data stability and safety standpoint, I decided that I was going to put together a RAID-0 (striped) array with two M.2 drives. I figured I didn’t need maximum per-drive performance (as I did with my boot/system drive), so I opted to pull back a bit and be a little more cost-efficient.

It’s no surprise (or at least, I don’t think it should be), then, that I opted to go with Samsung and a pair of 970 EVO Plus M.2 NVMe drives for that array. I got a decent deal on them (another Micro Center purchase), and with the two drives I put together a pretty-darn-quick 4TB array – great for multimedia editing, recording, temporary work space … and oh yeah: a place to host my virtual machine disks. Smooth as butta!

For more of my “standard storage” needs – where data safety trumps speed – I opted for a pair of Seagate IronWolf 6TB NAS drives in a RAID-1 (mirrored) configuration. I’ve been relatively happy with Seagate’s NAS series. Truthfully, both Seagate and Western Digital did a wonderful thing by offering their NAS/Red series of drives. The companies acknowledged the reality that a large segment of the computing population leaves machines and devices running 24/7, and they built products for that market. I don’t think I’ve had a single Red/NAS-series drive fail yet … and I’ve been using them for years now.

In any case, there’s nothing amazing about these drives. They do what they’re supposed to do. If I lose one, I just need to swap in a replacement and let the array rebuild itself. Sure, I’ll be running in degraded fashion for a while, but that’s a small price to pay for a little data safety.
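
The capacity math behind the two arrays is simple but worth spelling out: striping (RAID-0) adds the members together but any single drive failure loses everything, while mirroring (RAID-1) gives you only the smallest member’s capacity in exchange for surviving a drive loss. A minimal sketch (Python for illustration only):

```python
def usable_capacity_tb(level: str, drive_sizes_tb: list[float]) -> float:
    """Usable capacity for the two simple RAID levels discussed here.

    raid0 (striping): member capacities add; no redundancy at all.
    raid1 (mirroring): capacity of the smallest member; the array
    survives the loss of all but one drive.
    """
    if level == "raid0":
        return sum(drive_sizes_tb)
    if level == "raid1":
        return min(drive_sizes_tb)
    raise ValueError(f"unsupported RAID level: {level}")

print(usable_capacity_tb("raid0", [2.0, 2.0]))  # two 2TB 970 EVO Plus drives -> 4.0 TB
print(usable_capacity_tb("raid1", [6.0, 6.0]))  # two 6TB IronWolf drives -> 6.0 TB
```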

I believe in protection in layers – especially for data. That’s a mindset that comes out of my experience doing disaster recovery and business continuity work. Some backup process that you “set and forget” isn’t good enough for any data – yours or mine. That’s a perspective I tried to share and convey in the DR guides that John Ferringer and I wrote back in the SharePoint 2007 and 2010 days, and it’s a philosophy I adhere to even today.

The mirroring of the 6TB IronWolf drives provides one layer of data protection. The additional 10TB Western Digital Red drive I added as a system level backup target provides another. I’ve been using Acronis True Image as a backup tool for quite a few years now, and I’m generally pretty happy with the application, how it has operated, and how it has evolved. About the only thing that still bugs me (on a minor level) is the relative lack of responsiveness of UI/UX elements within the application. I know the application is doing a lot behind the scenes, but as a former product manager for a backup product myself (Idera SharePoint Backup), I have to believe that something could be done about it.

Thoughts on backup product/tool aside, I back up all the drives in my system to my Z: drive (the 10TB WD drive) a couple of times per week:

Acronis Backup Intervals

I use Acronis’ incremental backup scheme and maintain about a month’s worth of backups at any given time; that seems to strike a good balance between capturing data changes and conserving disk space.

I have one more backup layer in addition to the ones I’ve already described: off-machine. Another topic for another time …

Last but not least, I have to mention my trusty Blu-ray optical drive. Yes, it can write discs … but I only ever use it to read media. If I didn’t have a large collection of Blu-rays that I maintain for my Plex Server, I probably wouldn’t even need the drive. With today’s Internet speeds and the ease of moving large files around, optical media is quickly going the way of the floppy disk.

I had two optical drives in my last workstation, and I have plenty of additional drives downstairs, so it wasn’t hard at all to find one to throw in the machine.

And that’s all I have to say about that.

Some Assembly Required

Of course, I’d love to have just purchased the parts and had the “assembly elves” show up one night while I was sleeping, do their thing, and leave me to wake up the next morning to a fully functioning system. In reality, it was just a tad more involved than that.

I enjoy putting new systems together, but I enjoy it a whole lot less when it’s a system that I rely upon to get my job done. There was a lot of back and forth, as well as plenty of hiccups and mistakes along the way.

I took a lot of pictures and even a small amount of video while putting things together, and I chronicled the journey to a fair extent on Facebook. Some of you may have even been involved in the ongoing critique and ribbing (“Is it built yet?”). If so, I want to say thanks for making the process enjoyable; I hope you found it as funny and generally entertaining as I did. Without you folks, it wouldn’t have been nearly as much fun. Now, if I can just find a way to magically pay the whole thing off …

The Media Chronicle

I’ll close this post out with some of the images associated with building Threadripper (or for Spencer Harbar: THREADRIPPER!!!)

Definitely a Step Up

I’ll conclude this post with one last image, and that’s the image I see when I open Windows Device Manager and look at the “Processors” node:

Device Manager

I will admit that the image gives me all sorts of warm fuzzies inside. Seeing eight cores with Hyper-Threading used to be impressive, but now that I’ve got 32 cores, I get a bit giddy.

Thanks for reading!

Microsoft Teams Ownership Changes – The Bulk PowerShell Way

As someone who spends most days working with (and thinking about) SharePoint, there’s one thing I can say without any uncertainty or doubt: Microsoft Teams has taken off like a rocket bound for low Earth orbit. It’s rare these days for me to discuss SharePoint without some mention of Teams.

I’m confident that many of you know the reason for this. Besides being a replacement for Skype for Business, many of Teams’ back-end support systems and dependent service implementations are based in – you guessed it – SharePoint Online (SPO).

As one might expect, any technology product that is rapidly evolving and seeing enterprise adoption reveals gaps and imperfect implementations as it grows – and Teams is no different. I’m confident that Teams will mature and eventually address the shortcomings people are currently finding, but until it does, there will be those of us who attempt to fill the gaps we find with the tools at our disposal.

Administrative Pain

One of the Teams pain points we discussed recently on the Microsoft Community Office Hours webcast was the challenge of changing ownership for a large number of Teams at once. We took on a question from Mark Diaz, who posed the following:

May I ask how do you transfer the ownership of all Teams that a user is managing if that user is leaving the company? I know how to change the owner of the Teams via Teams admin center if I know already the Team that I need to update. Just consulting if you do have an easier script to fetch what teams he or she is an owner so I can add this to our SOP if a user is leaving the company.

Mark Diaz

We discussed Mark’s question (amidst our normal joking around) and posited that PowerShell could provide an answer. And since I like to goof around with PowerShell and scripting, I agreed to take on Mark’s question as “homework” as seen below:

The rest of this post is my direct response to Mark’s question and request for help. I hope this does the trick for you, Mark!

Teams PowerShell

Anyone who has spent any time as an administrator in the Microsoft ecosystem of cloud offerings knows that Microsoft is very big on automating administrative tasks with PowerShell. And being a cloud workload in that ecosystem, Teams is no different.

Microsoft Teams has its own PowerShell module, which can be installed and referenced in your script development environment in a number of different ways that Microsoft has documented. This MicrosoftTeams module is a prerequisite for some of the cmdlets you’ll see me use a bit further down in this post.

The MicrosoftTeams module isn’t the only way to work with Teams in PowerShell, though. I would have loved to build my script upon the Microsoft Graph PowerShell module … but it’s still in what is termed an “early preview” release. Given that bit of information, I opted to use the “older but safer/more mature” MicrosoftTeams module.

The Script: ReplaceTeamsOwners.ps1

Let me just cut to the chase. I put together my ReplaceTeamsOwners.ps1 script to address the specific scenario Mark Diaz asked about. The script accepts a handful of parameters (this next bit lifted straight from the script’s internal documentation):

.PARAMETER currentTeamOwner
    A string that contains the UPN of the user who will be replaced in the 
    ownership changes. This property is mandatory. Example: bob@EvilCorp.com

.PARAMETER newTeamOwner
    A string containing the UPN of the user who will be assigned as the new
    owner of Teams teams (i.e., in place of the currentTeamOwner). Example:
    jane@AcmeCorp.com.
    
.PARAMETER confirmEachUpdate
    A switch parameter that if specified will require the user executing the
    script to confirm each ownership change before it happens; helps to ensure
    that only the changes desired get made.

.PARAMETER isTest
    A boolean that indicates whether or not the script will actually run against
    and/or make changes to Teams teams and associated structures. This value defaults
    to TRUE, so actual script runs must explicitly set isTest to FALSE to effect
    changes to Teams teams ownership.

Both currentTeamOwner and newTeamOwner must be specified, and that’s fairly intuitive. If the -confirmEachUpdate switch is supplied, then each potential ownership change will present a confirmation prompt, allowing you to approve changes on a case-by-case basis.

The one parameter that might be a little confusing is the script’s isTest parameter. If unspecified, this parameter defaults to TRUE … and this is something I’ve been putting in my scripts for ages. It’s akin to PowerShell’s -WhatIf switch in that it allows you to see the path of execution without actually making any changes to the environment and targeted systems/services. In essence, it’s a “dry run.”

The difference between my isTest and PowerShell’s -WhatIf is that you have to explicitly set isTest to FALSE to “run the script for real” (i.e., make changes) rather than remembering to include -WhatIf to ensure that changes aren’t made. If someone forgets about the isTest parameter and runs my script, no worries – the script is in test mode by default. My scripts fail safe without relying on an admin’s memory, unlike -WhatIf.
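
The pattern itself is language-agnostic. Here’s a minimal sketch of the same fail-safe default in Python (the function and names here are mine for illustration, not lifted from the actual script):

```python
def replace_owner(team_name: str, new_owner: str, is_test: bool = True) -> str:
    """Dry-run by default: a caller must explicitly opt in to real changes."""
    if is_test:
        # Report what *would* happen and touch nothing.
        return f"[TEST] would make {new_owner} the owner of '{team_name}'"
    # Only an explicit is_test=False reaches the mutating path.
    return f"made {new_owner} the owner of '{team_name}'"

# Forgetting the parameter is safe ...
print(replace_owner("Finance Team", "jane@AcmeCorp.com"))
# ... and real changes require deliberate intent.
print(replace_owner("Finance Team", "jane@AcmeCorp.com", is_test=False))
```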

And now … the script!

<#  

.SYNOPSIS  
    This script is used to replace all instances of a Teams team owner with the
    identity of another account. This might be necessary in situations where a
    user leaves an organization, administrators change, etc.

.DESCRIPTION  
    Anytime a Microsoft Teams team is created, an owner must be associated with
    it. Oftentimes, the team owner is an administrator or someone who has no
    specific tie to the team.

    Administrators tend to change over time; at the same time, teams (as well as
    other IT "objects", like SharePoint sites) undergo transitions in ownership
    as an organization evolves.

    Although it is possible to change the owner of a Microsoft Teams team through
    the M365 Teams console, the process only works for one team at a time. If
    someone leaves an organization, it's often necessary to transfer all objects
    for which that user had ownership.

    That's what this script does: it accepts a handful of parameters and provides
    an expedited way to transition ownership of Teams teams from one user to
    another.

.PARAMETER currentTeamsOwner
    A string containing the UPN of the user who will be replaced in the
    ownership changes. This parameter is mandatory. Example: bob@EvilCorp.com

.PARAMETER newTeamsOwner
    A string containing the UPN of the user who will be assigned as the new
    owner of Teams teams (i.e., in place of the currentTeamsOwner). This
    parameter is mandatory. Example: jane@AcmeCorp.com
    
.PARAMETER confirmEachUpdate
    A switch parameter that, if specified, requires the user executing the
    script to confirm each ownership change before it happens; this helps
    ensure that only the desired changes get made.

.PARAMETER isTest
    A boolean that indicates whether or not the script will actually run against
    and make changes to Teams teams and associated structures. This value defaults
    to TRUE, so actual script runs must explicitly set isTest to FALSE to effect
    changes to Teams teams ownership.
	
.NOTES  
    File Name  : ReplaceTeamsOwners.ps1
    Author     : Sean McDonough - sean@sharepointinterface.com
    Last Update: September 2, 2020

#>
Function ReplaceOwners {
    param(
        [Parameter(Mandatory=$true)]
        [String]$currentTeamsOwner,
        [Parameter(Mandatory=$true)]
        [String]$newTeamsOwner,
        [Parameter(Mandatory=$false)]
        [Switch]$confirmEachUpdate,
        [Parameter(Mandatory=$false)]
        [Boolean]$isTest = $true
    )

    # Perform a parameter check. Start with the site spec.
    Clear-Host
    Write-Host ""
    Write-Host "Attempting prerequisite operations ..."
    $paramCheckPass = $true
    
    # First - see if we have the MSOnline module installed.
    try {
        Write-Host "- Checking for presence of MSOnline PowerShell module ..."
        $checkResult = Get-InstalledModule -Name "MSOnline"
        if ($null -ne $checkResult) {
            Write-Host "  - MSOnline module already installed; now importing ..."
            Import-Module -Name "MSOnline" | Out-Null
        }
        else {
            Write-Host "- MSOnline module not installed. Attempting installation ..."            
            Install-Module -Name "MSOnline" | Out-Null
            $checkResult = Get-InstalledModule -Name "MSOnline"
            if ($null -ne $checkResult) {
                Import-Module -Name "MSOnline" | Out-Null
                Write-Host "  - MSOnline module successfully installed and imported."    
            }
            else {
                Write-Host ""
                Write-Host -ForegroundColor Yellow "  - MSOnline module not installed or loaded."
                $paramCheckPass = $false            
            }
        }
    } 
    catch {
        Write-Host -ForegroundColor Red "- Unexpected problem encountered with MSOnline import attempt."
        $paramCheckPass = $false
    }

    # Our second order of business is to make sure we have the PowerShell cmdlets we need
    # to execute this script.
    try {
        Write-Host "- Checking for presence of MicrosoftTeams PowerShell module ..."
        $checkResult = Get-InstalledModule -Name "MicrosoftTeams"
        if ($null -ne $checkResult) {
            Write-Host "  - MicrosoftTeams module installed; will now import it ..."
            Import-Module -Name "MicrosoftTeams" | Out-Null
        }
        else {
            Write-Host "- MicrosoftTeams module not installed. Attempting installation ..."            
            Install-Module -Name "MicrosoftTeams" | Out-Null
            $checkResult = Get-InstalledModule -Name "MicrosoftTeams"
            if ($null -ne $checkResult) {
                Import-Module -Name "MicrosoftTeams" | Out-Null
                Write-Host "  - MicrosoftTeams module successfully installed and imported."    
            }
            else {
                Write-Host ""
                Write-Host -ForegroundColor Yellow "  - MicrosoftTeams module not installed or loaded."
                $paramCheckPass = $false            
            }
        }
    } 
    catch {
        Write-Host -ForegroundColor Yellow "- Unexpected problem encountered with MicrosoftTeams import attempt."
        $paramCheckPass = $false
    }

    # Have we taken care of all necessary prerequisites?
    if ($paramCheckPass) {
        Write-Host -ForegroundColor Green "Prerequisite check passed. Press Enter to continue."
        Read-Host
    } else {
        Write-Host -ForegroundColor Red "One or more prerequisite operations failed. Script terminating."
        Exit
    }

    # We can now begin. First step will be to get the user authenticated so they can
    # actually do something (and we'll have a tenant context).
    Clear-Host
    try {
        Write-Host "Please authenticate to begin the owner replacement process."
        $creds = Get-Credential
        Write-Host "- Credentials gathered. Connecting to Azure Active Directory ..."
        Connect-MsolService -Credential $creds | Out-Null
        Write-Host "- Now connecting to Microsoft Teams ..."
        Connect-MicrosoftTeams -Credential $creds | Out-Null
        Write-Host "- Required connections established. Proceeding with script."
        
        # We need the list of AAD users to validate our target and replacement.
        Write-Host "Retrieving list of Azure Active Directory users ..."
        $currentUserUPN = $null
        $currentUserId = $null
        $currentUserName = $null
        $newUserUPN = $null
        $newUserId = $null
        $newUserName = $null
        $allUsers = Get-MsolUser
        Write-Host "- Users retrieved. Validating ID of current Teams owner ($currentTeamsOwner)"
        $currentAADUser = $allUsers | Where-Object {$_.SignInName -eq $currentTeamsOwner}
        if ($null -eq $currentAADUser) {
            Write-Host -ForegroundColor Red "- Current Teams owner could not be found in Azure AD. Halting script."
            Exit
        } 
        else {
            $currentUserUPN = $currentAADUser.UserPrincipalName
            $currentUserId = $currentAADUser.ObjectId
            $currentUserName = $currentAADUser.DisplayName
            Write-Host "  - Current user found. Name='$currentUserName', ObjectId='$currentUserId'"
        }
        Write-Host "- Now validating ID of new Teams owner ($newTeamsOwner)"
        $newAADUser = $allUsers | Where-Object {$_.SignInName -eq $newTeamsOwner}
        if ($null -eq $newAADUser) {
            Write-Host -ForegroundColor Red "- New Teams owner could not be found in Azure AD. Halting script."
            Exit
        }
        else {
            $newUserUPN = $newAADUser.UserPrincipalName
            $newUserId = $newAADUser.ObjectId
            $newUserName = $newAADUser.DisplayName
            Write-Host "  - New user found. Name='$newUserName', ObjectId='$newUserId'"
        }
        Write-Host "Both current and new users exist in Azure AD. Proceeding with script."

        # If we've made it this far, then we have valid current and new users. We need to
        # fetch all Teams to get their associated GroupId values, and then examine each
        # GroupId in turn to determine ownership.
        $allTeams = Get-Team
        $teamCount = $allTeams.Count
        Write-Host
        Write-Host "Begin processing of teams. There are $teamCount total team(s)."
        foreach ($currentTeam in $allTeams) {
            
            # Retrieve basic identification information
            $groupId = $currentTeam.GroupId
            $groupName = $currentTeam.DisplayName
            $groupDescription = $currentTeam.Description
            Write-Host "- Team name: '$groupName'"
            Write-Host "  - GroupId: '$groupId'"
            Write-Host "  - Description: '$groupDescription'"

            # Get the users associated with the team and determine if the target user is
            # currently an owner of it.
            $currentIsOwner = $null
            $groupOwners = (Get-TeamUser -GroupId $groupId) | Where-Object {$_.Role -eq "owner"}
            $currentIsOwner = $groupOwners | Where-Object {$_.UserId -eq $currentUserId}

            # Do we have a match for the targeted user?
            if ($null -eq $currentIsOwner) {
                # No match; we're done for this cycle.
                Write-Host "  - $currentUserName is not an owner."
            }
            else {
                # We have a hit. Is confirmation needed?
                $performUpdate = $false
                Write-Host "  - $currentUserName is currently an owner."
                if ($confirmEachUpdate) {
                    $response = Read-Host "  - Change ownership to $newUserName (Y/N)?"
                    if ($response.Trim().ToLower() -eq "y") {
                        $performUpdate = $true
                    }
                }
                else {
                    # Confirmation not needed. Do the update.
                    $performUpdate = $true
                }
                
                # Change ownership if the appropriate flag is set
                if ($performUpdate) {
                    # We need to check if we're in test mode.
                    if ($isTest) {
                        Write-Host -ForegroundColor Yellow "  - isTest flag is set. No ownership change processed (although it would have been)."
                    }
                    else {
                        Write-Host "  - Adding '$newUserName' as an owner ..."
                        Add-TeamUser -GroupId $groupId -User $newUserUPN -Role owner
                        Write-Host "  - '$newUserName' is now an owner. Removing old owner ..."
                        Remove-TeamUser -GroupId $groupId -User $currentUserUPN -Role owner
                        Write-Host "  - '$currentUserName' is no longer an owner."
                    }
                }
                else {
                    Write-Host "  - No changes in ownership processed for $groupName."
                }
                Write-Host ""
            }
        }

        # We're done; let the user know.
        Write-Host -ForegroundColor Green "All Teams processed. Script concluding."
        Write-Host ""

    } 
    catch {
        # One or more problems encountered during processing. Halt execution.
        Write-Host -ForegroundColor Red "-" $_
        Write-Host -ForegroundColor Red "- Script execution halted."
        Exit
    }
}

ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com -isTest $true -confirmEachUpdate

Don’t worry if you don’t feel like trying to copy and paste that whole block. I zipped up the script and you can download it here.

A Brief Script Walkthrough

I like to make an admin’s life as simple as possible, so the first part of the script (after the comment-based help) attempts to import – and, if necessary, first install – the PowerShell modules needed for execution: MSOnline and MicrosoftTeams.
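Stripped of its logging and error handling, that bootstrap logic boils down to a pattern like this (condensed from the script above; the -ErrorAction SilentlyContinue argument is my addition here, to suppress the error Get-InstalledModule raises when a module is absent):

```powershell
# Import a module if it's present; otherwise install it first, then import.
$moduleName = "MicrosoftTeams"  # the same pattern is applied to "MSOnline"
if (Get-InstalledModule -Name $moduleName -ErrorAction SilentlyContinue) {
    Import-Module -Name $moduleName
}
else {
    Install-Module -Name $moduleName
    Import-Module -Name $moduleName
}
```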

From there, the current owner and new owner identities are verified before the script goes through the process of getting Teams and determining which ones to target. I believe that the inline comments are written in relatively plain English, and I include a lot of output to the host to spell out what the script is doing each step of the way.

The last line in the script is simply the invocation of the ReplaceOwners function with the parameters I wanted to use. You can leave this line in and change the parameters, take it out, or use the script however you see fit.

Here’s a screenshot of a full script run in my family’s tenant (mcdonough.online) where I’m attempting to see which Teams my wife (Tracy) currently owns that I want to assume ownership of. Since the script is run with isTest being TRUE, no ownership is changed – I’m simply alerted to where an ownership change would have occurred if isTest were explicitly set to FALSE.

ReplaceTeamsOwners.ps1 execution run

Conclusion

So there you have it. I put this script together during a relatively slow afternoon. I tested it and made it as error-free as I could with the tenants that I have, but you should still test it yourself (with an isTest value of TRUE, at least) before executing it “for real” against your production system(s).

And Mark D: I hope this meets your needs.

References and Resources

  1. Microsoft: Microsoft Teams
  2. buckleyPLANET: Microsoft Community Office Hours, Episode 24
  3. YouTube: Excerpt from Microsoft Community Office Hours Episode 24
  4. Microsoft Docs: Microsoft Teams PowerShell Overview
  5. Microsoft Docs: Install Microsoft Teams PowerShell
  6. Microsoft 365 Developer Blog: Microsoft Graph PowerShell Preview
  7. Microsoft Tech Community: PowerShell Basics: Don’t Fear Hitting Enter with -WhatIf
  8. Zipped Script: ReplaceTeamsOwners.zip