Faster Access to Office Files in Microsoft Teams

While we were answering (or more appropriately, attempting to answer) questions on this week’s webcast of the Microsoft Community Office Hours, one particular question popped up that got me thinking and playing around a bit. The question was from David Cummings, and here is what David submitted in its entirety:

with the new teams meeting experience, not seeing Teams under Browse for PowerPoint, I’m aware that they are constantly changing the file sharing experience, it seems only way to do it is open sharepoint ,then sync to onedrive and always use upload from computer and select the location,but by this method we will have to sync for most of our users that use primarily teams at our office

Reading David’s question/request, I thought I understood the situation he was struggling with. There didn’t seem to be a way to add an arbitrary location to the list of OneDrive for Business locations and SharePoint sites that his Office accounts were signed into … and that was causing him some pain and (seemingly) unnecessary extra work steps.

What I’m about to present isn’t groundbreaking information, but it is something I’d forgotten about until recently (when prompted by David’s post) and was happy to still find present in some of the Office product dialogs.

Can't Get There From Here

I opened up PowerPoint and started poking around the initial page that had options to open, save, export, etc., for PowerPoint presentations. Selecting the Open option on the far left yielded an “Open” column like the one seen on the left.

The “Open” column provided me with the option to save/load/etc. from a OneDrive location or any of the SharePoint sites associated with an account that had been added/attached to Office, but not from an arbitrary Microsoft Teams or SharePoint site.

SharePoint and OneDrive weren’t the only locations from which files could be saved or loaded. There were also a handful of other location types that could be integrated, and the options to add those locations appeared below the “Open” column: This PC, Add a Place, and Browse.

Selecting This PC swapped out the column of documents to the right of the “Open” column with what I regarded as a less-functional local file system browser. Selecting Add a Place showed some promise, but upon further investigation I realized it was a glorified OneDrive browser:

But selecting Browse gave me what appeared to be a Windows common file dialog. As I suspected, though, there were actually some special things that could be done with the dialog that went beyond the local file system:

It was readily apparent upon opening the Browse file dialog that I could access local and mapped drives to save, load, or perform other operations with PowerPoint presentations, and this was consistent across Microsoft Office. What wasn’t immediately obvious, though, was that the file dialog had unadvertised goodies.

Dialog on Steroids

What wasn’t readily apparent from the dialog’s appearance and labels was that it had the ability to open SharePoint-resident files directly. It could also be used to browse SharePoint site structures and document libraries to find a file (or file location) I wished to work with.

Why should I care (or more appropriately, why should David care) that this can be done? Because SharePoint is the underlying storage location for much of the data – including files – that exists and is surfaced in Microsoft Teams.

Don’t believe me? Follow along as I run through a scenario, drawn from a recent need of my own, that highlights the SharePoint functionality in action.

Accounts Accounts Everywhere

As someone who works with quite a few different organizations and IT shops, it probably comes as no real surprise for me to share that I have a couple dozen sets of Microsoft 365 credentials (i.e., usernames and associated passwords). I’m willing to bet that many of you are in a similar situation and wish there were a faster way to switch between accounts since it seems like everything we need to work with is protected by a different login.

Office doesn’t allow me to add every Microsoft 365 account and credential set to the “quick access” list that appears in Word, PowerPoint, Excel, etc. I have about five different accounts and associated locations that I added to my Office quick access location list. This covers me in the majority of daily circumstances, but there are times when I want to work with a Teams site or other repository that isn’t on my quick access list and/or is associated with a seldom-used credential set.

A Personal Example

Not too long ago, I had the privilege of delivering a SharePoint Online performance troubleshooting session at our recent M365 Cincinnati & Tri-State Virtual Friday event. Fellow MVP Stacy Deere-Strole and her team over at Focal Point Solutions have been organizing these sorts of events for the Cincinnati area for the last bunch of years, but the pandemic necessitated some changes this year. So Stacy and team spun up a Microsoft Team in the Microsoft Community Teams environment to coordinate sessions and speaker activities (among other things).

Like a lot of speakers who present on Microsoft 365 topics, I have a set of credentials in the msftcommunity.com domain, and those are what I used to access the Teams team associated with the M365 Cincinnati virtual event:

When I was getting my presentation ready for the event, I needed access to a couple of PowerPoint presentations that were stored in the Teams file area (aka, the associated SharePoint Online document library). These PowerPoint files contained slides about the event, the sponsors, and other important information that needed to be included with my presentation:

At the point when I located the files in the Teams environment, I could have downloaded them to my local system for reference and usage. If I did that, though, I wouldn’t have seen any late-breaking changes that might have been introduced to the slides just prior to the virtual event.

So, I decided to get a SharePoint link to each PowerPoint file through the ellipses that appeared after each file like this:

Choosing Copy Link from the context-sensitive menu popped-up another dialog that allowed me to choose either a Microsoft Teams link or a SharePoint file link. In my case, I wanted the SharePoint file link specifically:

Going back to PowerPoint, choosing Open, selecting Browse, and supplying the link I just copied from Teams …

… got me this dialog:

Well, that wasn’t what I was hoping to see at the time.

I remembered the immortal words of Douglas Adams, “Don’t Panic” and reviewed the link more closely. I realized that the “can’t open” dialog was actually expected behavior, and it served to remind me that there was just a bit of cleanup I needed to do before the link could be used.

Reviewing the SharePoint link in its entirety, this is what I saw:

https://msftcommunity.sharepoint.com/sites/M365CincinnatiTriStateUserGroup-Speakers/_layouts/15/Doc.aspx?OR=teams&action=edit&sourcedoc={C8FF1D53-3238-44EA-8ECF-AD1914ECF6FA}

Breaking down this link, I had a reference to a SharePoint site’s Doc.aspx page in the site’s _LAYOUTS special folder. That was obviously not the PowerPoint presentation of interest. I actually only cared about the site portion of the link, so I modified the link by truncating everything from /_layouts to the end. That left me with:

https://msftcommunity.sharepoint.com/sites/M365CincinnatiTriStateUserGroup-Speakers
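If you find yourself doing this sort of cleanup regularly, a couple of lines of PowerShell can handle the truncation (a minimal sketch; the variable names are mine):

$teamsLink = "https://msftcommunity.sharepoint.com/sites/M365CincinnatiTriStateUserGroup-Speakers/_layouts/15/Doc.aspx?OR=teams&action=edit&sourcedoc={C8FF1D53-3238-44EA-8ECF-AD1914ECF6FA}"
# Keep everything before /_layouts; what remains is the site URL
$siteUrl = $teamsLink.Substring(0, $teamsLink.IndexOf("/_layouts"))
$siteUrl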

I went back into PowerPoint with the modified site link and dropped it in the File name: textbox (it could be placed in either the File name: textbox or the path textbox at the top of the dialog; i.e., either of the two areas boxed in red below):

When I clicked the Open button after copying in the modified link, I experienced some pauses and prompts to login. When I supplied the right credentials for the login prompt(s) (in my case, my @msftcommunity.com credentials), I eventually saw the SharePoint virtual file system of the associated Microsoft Team:

The PowerPoint files of interest to me were going to be in the Documents library. When I drilled into Documents, I was aware that I would encounter a layer of folders: one folder for each Channel in the Team that had files associated with it (i.e., for each channel that has files on its Files tab). It turned out that only the Speakers channel had files, so I saw:

Drilling into the Speakers folder revealed the two PowerPoint presentations I was interested in:

And when I selected the desired file (boxed above) and clicked the Open button, I was presented with what I wanted to see in PowerPoint:

Getting Back

At this point, you might be thinking, “That seems like a lot of work to get to a PowerPoint file in SharePoint.” And honestly, I couldn’t argue with that line of reasoning. 

Where this approach starts to pay dividends, though, is when we want to get back to that SharePoint document library to work with additional files – like the other PowerPoint file I didn’t open when I initially went into the document library.

Upon closing the original PowerPoint file containing the slides I needed to integrate, PowerPoint was kind enough to place a file reference in the Presentations area/list of the Open page:

That file reference would hang around for quite some time, depending on how many different files I opened over time. If I wanted the file I just worked with to hang around longer, I always had the option of pinning it to the list.

But if I was done with that specific file, what do I care? Well, you may recall that there’s still another file I needed to work with that resides in the same SharePoint location … so while the previous file reference wasn’t of any more use to me, the location where it was stored was something I had an interest in.

Fun fact: each entry in the Presentations tab has a context-sensitive menu associated with it. When I right-clicked the highlighted filename/entry, I saw:

And when I clicked the Open file location menu selection, I was taken back to the document library where both of the PowerPoint files resided:

Re-opening the SharePoint document library may necessitate re-authenticating a time or two along the way … but if I’m still within the same PowerPoint session and authenticated to the SharePoint site housing the files at the time, I won’t be prompted.

Either way, I find this “repeat experience” more streamlined than making lots of local file copies, remembering specific locations where files are stored, etc.

Conclusion

This particular post didn’t really break any new ground and may be common information to many of you. My memory isn’t what it once was, though, and I’d forgotten about the “file dialogs on steroids” when I stopped working regularly with SharePoint Designer a number of years back. I was glad to be reminded thanks to David.

If nothing else, I hope this post served as a reminder to some that there’s more than one way to solve common problems and address recurring needs. Sometimes all that is required is a bit of experimentation.


The Threadripper

This post is me finally doing what I told so many people I was going to do a handful of weeks back: share the “punch list” (i.e., the parts list) I used to put together my new workstation. And unsurprisingly, I chose to build my workstation around AMD’s Threadripper CPU.

Getting Old

I make a living and support my family through work that depends on a computer, as I’m sure many of you do. And I’m sure that many of you can understand when I say that working on a computer day-in and day-out, one develops a “feel” for its performance characteristics.

While undertaking project work and other “assignments” over the last bunch of months, I began to feel like my computer wasn’t performing with the same “pep” that it once had. It was subtle at first, but I began to notice it more and more often – and that bugged me.

So, I attempted to uninstall some software, kill off some boot-time services and apps that were of questionable use, etc. Those efforts sometimes got me some performance back, but the outcome wasn’t sustained or consistent enough to really make a difference. I was seriously starting to feel like I was wading through quicksand anytime I tried to get anything done.

The Last Straw

There isn’t any one event that made me think “Jeez, I really need a new computer” – but I still recall the turning point for me because it’s pretty vivid in my mind.

I subscribe to the Adobe Creative Cloud. Yes, it costs a small fortune each year, and each time I pay the bill, I wonder if I get enough use out of it to justify the expense. I invariably decide that I do end up using it quite a bit, though, so I keep re-upping for another year. At least I can write it off as a business expense.

Well, I was trying to go through a recent batch of digital photos using Adobe Lightroom, and my system was utterly dragging. And whenever my system does that for a prolonged period, I hop over to the Windows Task Manager and start monitoring. And when I did that with Lightroom, this is what I saw:

Note the 100% CPU utilization in the image. Admittedly, Rambox Pro looks like the culprit here, and it was using a fair bit of memory … but that’s not the whole story.

Since the start of this ordeal, I’ve become more judicious in how many active tabs I spin up in Rambox Pro. It’s a great utility, but like every Chromium-based tool, it’s an absolute pig when it comes to memory usage. Have you ever looked at your memory consumption when you have a lot of Google Chrome tabs open? That’s what’s happening with Rambox Pro. So be warned and be careful.

I’m used to the CPU spiking for brief periods of time, but the CPU sat pegged at 100% utilization for the duration that Lightroom was running – literally the entire time. And not until I shut down Lightroom did the utilization start to settle back down.

I thought about this for a while. I know that Adobe does some work to optimize/enhance its applications to make the most of systems with multiple CPU cores and symmetric multiprocessing when it’s available to the applications. The type of tasks most Adobe applications deal with are the sort that people tend to buy beefy machines for, after all: video editing, multimedia creation, image manipulation, etc.

After observing Lightroom and how it brought my processor to its knees, I decided to do a bit of research.

Research and Realization

At the time, my primary workstation was built around an Intel Core i7-5960X Extreme processor. When I originally built the system, there was no consumer desktop processor that was faster or had more cores (that I recall). Based on the (then) brand new Haswell-E series from Intel, the i7-5960X had eight cores that each supported hyperthreading. It had an oversized L3 cache of 20MB, “new” virtualization support and extensions, 40 PCIe lanes, and all sorts of goodies baked-in. I figured it was more than up to handling current, modern day workstation tasks.

Yeah – not quite.

In researching that processor, I learned that it had been released in September of 2014 – roughly six years prior. Boy, six years flies by when you’re not paying attention. Life moves on, but like a new car that’s just been driven off the lot, that shiny new PC you just put together starts losing value as soon as you power it up.

The Core i7 chip and the system based around it are still very good at most things today – in fact, I’m going to set my son up with that old workstation as an upgrade from his Core i5 (which he uses primarily for video watching and gaming). But for the things I regularly do day in and day out – running VMs, multimedia creation and editing, etc. – that Core i7 system is significantly behind the times. With six years under its belt, a computer system tends to start receiving email from AARP.

The Conversation and Approval

So, my wife and I had “the conversation,” and I ultimately got her buy-in on the construction of a new PC. Let me say, for the record, that I love my wife. She’s a rational person, and as long as I can effectively plead my case that I need something for my job (being able to write it off helps), she’s behind me and supports the decision.

Tracy and I have been married for 17 years, so she knows me well. We both knew that the new system was going to likely cost quite a bit of money to put together … because my general thinking on new computer systems (desktops, servers, or whatever) boils down to a few key rules and motivators:

  1. Nine times out of ten, I prefer to build a system (from parts) over buying one pre-assembled. This approach ensures that I get exactly what I want in the system, and it also helps with the “continuing education” associated with system assembly. It also forces me to research what’s currently available at the time of construction, and that invariably ends up helping at least one or two friends in the assembly of new systems that they want to put together or purchase.
  2. I generally try to build the best performing system I can with what’s available at the time. I’ll often opt for a more expensive part if it’s going to keep the system “viable” for a longer period of time, because getting new systems isn’t something I do very often. I would absolutely love to get new systems more often, but I’ve got to make these last as long as I can – at least until I’m independently wealthy (heh … don’t hold your breath – I’m certainly not).
  3. As an adjunct to point #2 (above), I tend to opt for more expensive parts and components if they will result in a system build that leaves room for upgrades/part swaps down the road. Base systems may roll over only every half-dozen years or so, but parts and upgrades tend to flow into the house at regular intervals. Nothing simply gets thrown out or decommissioned. Old systems and parts go to the rest of the family, get donated to a friend in need, etc.
  4. When I’m building a system, I have a use in mind. I’m fortunate that I can build different computers for different purposes, and I have two main systems that I use: a primary workstation for business, and a separate machine for gaming. That doesn’t mean I won’t game on my workstation and vice-versa, but any such usage is secondary; I select parts for a system’s intended purpose.
  5. Although I strive to be on the cutting edge, I’ve learned that it’s best to stay off the bleeding edge when it comes to my primary workstation. I’ve been burned a time or two by trying to get the absolute best and newest tech. When you depend on something to earn a living, it’s typically not a bad idea to prioritize stability and reliability over the “shiny new objects” that aren’t proven yet.

Threadripper: The Parts List

At last – the moment that some of you may have been waiting for: the big reveal!

I want to say this at the outset: I’m sharing this selection of parts (and some of my thinking while deciding what to get) because others have specifically asked. I don’t value religious debates over “why component ‘xyz’ is inferior to ‘abc'” nearly as much as I once did in my youth.

So, general comments and questions on my choice of parts are certainly welcome, but the only thing you’ll hear is crickets chirping if you hope to engage me in a debate …

The choice of which processor to go with wasn’t all that difficult. Well, maybe a little.

Given that this was going into the machine that would be swapped-in as my new workstation, I figured most medium-to-high end current processors available would do the job. Many of the applications I utilize can get more done with a greater number of processing cores, and I’ve been known to keep a significant number of applications open on my desktop. I also continue to run a number of virtual machines (on my workstation) in my day-to-day work.

In recent years, AMD has been flogging Intel in many different benchmarks – more specifically, the high-end desktop (non-gaming) performance range of benchmarks that are the domain of multi-core systems. AMD’s manufacturing processes are also more advanced (Intel is still stuck on 10nm-14nm while AMD has been on 7nm), and they’ve finally left the knife at home and brought a gun to the fight – especially with Threadripper. It reminds me of a period of time years ago when AMD was able to outperform Intel with the Athlon FX series (I loved the FX-based system I built!).

I realize benchmarks are won by one company one day and another the next. Bottom line for me: performance per core at a specific price point has belonged to AMD’s Ryzen chips for a while. I briefly considered a Ryzen 5 or 9 for a bit, but I opted for the Threadripper when I acknowledged that the system would have to last me a fairly long time. Yes, it’s a chunk of change … but Threadripper was worth it for my computing tasks.

Had I been building a gaming machine, it’s worth noting that I probably would have gone Intel, as their chips still tend to perform better for single-threaded loads that are common in games.

First off, you should know that I generally don’t worry about motherboard performance. Yes, I know that differences exist and motherboard “A” may be 5% faster than motherboard “B.” At the end of the day, they’re all going to be in the same ballpark (except for maybe a few stinkers – and ratings tend to frown on those offerings …)

For me, motherboard selection is all about capabilities and options. I want storage options, and I especially want robust USB support. Features and capabilities tend to become more available as cost goes up (imagine that!), and I knew right off that I was going to probably spend a pretty penny for the appropriate motherboard to drop that Threadripper chip into.

I’ve always had good luck with ASUS motherboards, and it doesn’t hurt that the ROG Zenith II Extreme Alpha was highly rated and reviewed. After all, it has a name that sounds like the next-generation terminator, so how could I go wrong?!?!?!

Everything about the board says high end, and it satisfies the handful of requirements I had. And some I didn’t have (but later found nice, like that 10Gbps Ethernet port …)

“Memory, all alone in the moonlight …”

Be thankful you’re reading that instead of listening to me sing it. Barbra Streisand I am not.

Selecting memory doesn’t involve as many decision points as other components in a new system, but there are still a few to consider. There is, of course, the overall amount of memory you want to include in the system. My motherboard and processor supported up to 256GB, but that would be overkill for anything I’d be doing. I settled on 128GB, and I decided to get it as 4x32GB DIMMs rather than 8x16GB so I could expand (easily) later if needed.

Due to their architecture, it has been noted that the performance of Ryzen chips can be impacted significantly by memory speeds. The “sweet spot” before prices grew beyond my desire to purchase appeared to be about 3200MHz. And if possible, I wanted to get memory with the lowest possible CAS (column address strobe) latency I could find, as that number tends to matter the most with memory timings (of CAS, tRAS, tRP, and tRCD).

I found what I wanted with the Corsair Vengeance RGB series. I’ve had a solid experience with Corsair memory in the past, so once I confirmed the numbers it was easy to pull the trigger on the purchase.

There are 50 million cases and case makers out there. I’ve had experience with many of them, but getting a good case (in my experience) is as much about timing as any other factor (like vendor, cost, etc).

Because I was a bit more focused on the other components, I didn’t want to spend a whole lot of time on the case. I knew I could get one of those diamonds in the rough (i.e., cheap and awesome) if I were willing to spend some time combing reviews and product slicks … but I’ll confess: I dropped back and punted on this one. I pulled open my Maximum PC and/or PC Gamer magazines (I’ve been subscribing for years) and looked at what they recommended.

And that was as hard as it got. Sure, the Cosmos C700P was pricy, but it looked easy enough to work with. Great reviews, too.

When the thing was delivered, the one thing I *wasn’t* prepared for was the sheer SIZE of the case. Holy shnikes – this is a BIG case. Easily the biggest non-server case I’ve ever owned. It almost doesn’t fit under my desk, but thankfully it just makes it with enough clearance that I don’t worry.

Oh yeah, there’s something else I realized with this case: I was accruing quite the “bling show” of RGB lighting-capable components. Between the case, the memory, and the motherboard, I had my own personal 4th of July show brewing.

Power supplies aren’t glamorous, but they’re critical to any stable and solid system. 25 years ago, I lived in an old apartment with atrocious power. I would go through cheap power supplies regularly. It was painful and expensive, but it was instructional. Now, I do two things: buy an uninterruptible power supply (UPS) for everything electronic, and purchase a good power supply for any new build. Oh, and one more thing: always have another PSU on hand.

I started buying high-end Corsair power supplies around the time I built my first gaming machine, which utilized video cards in SLI. That was the point in nVidia’s history when the cards had horrible power consumption stats … and putting two of them in a case was a quick trip to the scrap heap for anything less than 1000W.

That PSU survived and is still in-use in one of my machines, and that sealed the deal for me for future PSU needs.

This PSU can support more than I would ever throw at it, and it’s fully modular *and* relatively high efficiency. Fully modular is the only way to go these days; it definitely cuts down on cable sprawl.

Much like power supplies, CPU coolers tend not to be glamorous. The most significant decision point is “air cooled” or “liquid cooled.” Traditionally, I’ve gone with air coolers since I don’t overclock my systems and opt for highly ventilated cases. It’s easier (in my opinion) and tends to be quite a bit cheaper.

I have started evolving my thinking on the topic, though – at least a little bit. I’m not about to start building custom open-loop cooling runs like some of the extreme builders out there, but there are a host of sealed closed-loop coolers that are well-regarded and highly rated.

Unsurprisingly, Corsair makes one of the best (is there anything they don’t do?). I believe Maximum PC put the H100i PRO all-in-one at the top of their list. It was a hair more than I wanted to spend, but in the context of the project’s budget (growing with each piece), it wasn’t bad.

And oh yeah: it *also* had RGB lighting built-in. What the heck?

I initially had no plans (honestly) of buying another videocard. My old workstation had two GeForce 1080s (in SLI) in it, and my thinking was that I would re-use those cards to keep costs down.

Ha. Ha ha. “Keep costs down” – that’s funny! Hahahahahahaha…

At first, I did start with one of the 1080s in the case. But there were other factors in the mix I hadn’t foreseen. Those two cards were going to take up a lot of room in the case and limit access to the remaining PCI Express slots. There’s also the time-honored tradition of passing one of the 1080s down to my son Brendan, who is also a gamer.

Weak arguments, perhaps, but they were enough to push me over the edge into the purchase of another RTX 2080Ti. I actually picked it up at the local Micro Center, and there’s a bit of a story behind it. I originally purchased the wrong card (one that had connectors for an open-loop cooling system), so I returned it and picked up the right card while doing so. That card (the right one) was only available as an open box item (at a substantially reduced price). Shortly after powering my system on with the card plugged in, it was clear why it was open-box: it had hardware problems.

Thus began the dance with EVGA support and the RMA process. I’d done the dance before, so I knew what to expect. EVGA has fantastic support anyway, so I was able to RMA the card back (shipping nearly killed me – ouch!), and I got a new RTX 2080Ti at an ultimately “reasonable” price.

Now my son will get a 1080, I’ve got a shiny new 2080Ti … and nVidia just released the new 30 series. Dang it!

Admittedly, this was a Micro Center “impulse buy.” That is, the specific choice of card was the impulse buy. I knew I was going to get an external sound card (i.e., aside from the motherboard-integrated sound) before I’d really made any other decision tied to the new system.

For years I’ve been hearing that the integrated sound chips now put on motherboards have gotten good enough that a separate, discrete sound card is no longer necessary for those wanting high-quality audio. Forget about SoundBlaster – no longer needed!

I disagree.

I’ve tried using integrated sound on a variety of motherboards, and there’s always been something … sub-standard. In many cases, the chips and electronics simply weren’t shielded enough to keep powerline hum and other interference out. In other cases, the DSP associated with the audio would chew CPU cycles and slow things down.

Given how much I care about my music – and my picky listening habits (we’ll say “discerning audiophile tendencies”) – I’ve found that I’m only truly happy with a sound card.

I’d always gotten SoundBlaster cards in the past, but I’ve been kinda wondering about SoundBlaster for a while. They were still making good (or at least “okay”) cards in my opinion, but their attempts to stay relevant seemed to be taking them down some weird avenues. So, I was open to the idea of another vendor.

The ASUS card looked to be the right combo: a high signal-to-noise ratio, low-distortion, minimalist card. And thus far, it’s been fantastic. An impulse buy that actually worked out!

Much like the choice of CPU, picking the SSD that would be used as my Windows system (boot) drive wasn’t overly difficult. This was the device that my system would boot from, swap memory to, and lean on for other activities that would directly impact perceived speed and “nimbleness.” For those reasons alone, I wanted to find the fastest SSD I could reasonably purchase.

Historically, I’ve purchased Samsung Pro SSD drives for boot drive purposes and have remained fairly brand loyal. If something “ain’t broke, ya don’t fix it.” But when I saw that Seagate had a new M.2 SSD out that was supposed to be pretty doggone quick, I took notice. I picked one up, and I can say that it’s a really sweet SSD.

The only negative thing or two that Tom’s Hardware had to say about it was that it was “costly” and had “no heatsink.” In the plus category, Tom’s said that it had “solid performance,” a “large write cache,” that it was “power efficient,” had “class-leading endurance,” and they liked its “aesthetics.” They also said it “should be near the top of your best ssds list.”

And about the cost: Micro Center actually had the drive for substantially less than its list price, so I jumped at it. I’m glad I did, because I’ve been very happy with its performance. That happiness is based on nothing more than my perception, though. Full disclosure: I haven’t actually benchmarked system performance (yet), so I don’t have numbers to share. Maybe a future post …

Unsurprisingly, my motherboard selection came with built-in RAID capability. That RAID capability actually extended to NVMe drives (a first for one of my systems), so I decided to take advantage of it.

Although it’s impractical from a data stability and safety standpoint, I decided that I was going to put together a RAID-0 (striped) “disk” array with two M.2 drives. I figured I didn’t need maximum performance (as I did with my boot/system drive), so I opted to pull back a bit and be a little more cost-efficient.

It’s no surprise (or at least, I don’t think it should be a surprise), then, that I opted to go with Samsung and a pair of 970 EVO Plus M.2 NVMe drives for that array. I got a decent deal on them (another Micro Center purchase), and so with the two drives I put together a pretty-darn-quick 4TB array – great for multimedia editing, recording, a temporary area … and oh yeah: a place to host my virtual machine disks. Smooth as butta!

For more of my “standard storage” needs – where data safety trumped speed of operations – I opted for a pair of Seagate IronWolf 6TB NAS drives in a RAID-1 (mirrored) array configuration. I’ve been relatively happy with Seagate’s NAS series. Truthfully, both Seagate and Western Digital did a wonderful thing by offering their NAS/Red series of drives. The companies acknowledged the reality that a large segment of the computing population is leaving machines and devices running 24/7, and they built products to work for that market. I don’t think I’ve had a single Red/NAS-series drive fail yet … and I’ve been using them for years now.

In any case, there’s nothing amazing about these drives. They do what they’re supposed to do. If I lose one, I just need to get another in and let the array rebuild itself. Sure, I’ll be running in degraded fashion for a while, but that’s a small price to pay for a little data safety.

I believe in protection in layers – especially for data. That’s a mindset that comes out of my experience doing disaster recovery and business continuity work. Some backup process that you “set and forget” isn’t good enough for any data – yours or mine. That’s a perspective I tried to share and convey in the DR guides that John Ferringer and I wrote back in the SharePoint 2007 and 2010 days, and it’s a philosophy I adhere to even today.

The mirroring of the 6TB IronWolf drives provides one layer of data protection. The additional 10TB Western Digital Red drive I added as a system level backup target provides another. I’ve been using Acronis True Image as a backup tool for quite a few years now, and I’m generally pretty happy with the application, how it has operated, and how it has evolved. About the only thing that still bugs me (on a minor level) is the relative lack of responsiveness of UI/UX elements within the application. I know the application is doing a lot behind the scenes, but as a former product manager for a backup product myself (Idera SharePoint Backup), I have to believe that something could be done about it.

Thoughts on backup product/tool aside, I back up all the drives in my system to my Z: drive (the 10TB WD drive) a couple of times per week:

Acronis Backup Intervals

I use Acronis’ incremental backup scheme and maintain about a month’s worth of backups at any given time; that seems to strike a good balance between capturing data changes and conserving disk space.

I have one more backup layer in addition to the ones I’ve already described: off-machine. Another topic for another time …

Last but not least, I have to mention my trusty Blu-ray optical drive. Yes, it does do writing … but I only ever use it to read media. If I didn’t have a large collection of Blu-rays that I maintain for my Plex Server, I probably wouldn’t even need the drive. With today’s Internet speeds and the ease of moving large files around, optical media is quickly going the way of the floppy disk.

I had two optical drives in my last workstation, and I have plenty of additional drives downstairs, so it wasn’t hard at all to find one to throw in the machine.

And that’s all I have to say about that.

Some Assembly Required

Of course, I’d love to have just purchased the parts and have the “assembly elves” show up one night while I was sleeping, do their thing, and wake up the next morning to a fully functioning system. In reality, it was just a tad bit more involved than that.

I enjoy putting new systems together, but I enjoy it a whole lot less when it’s a system that I rely upon to get my job done. There was a lot of back and forth, as well as plenty of hiccups and mistakes along the way.

I took a lot of pictures and even a small amount of video while putting things together, and I chronicled the journey to a fair extent on Facebook. Some of you may have even been involved in the ongoing critique and ribbing (“Is it built yet?”). If so, I want to say thanks for making the process enjoyable; I hope you found it as funny and generally entertaining as I did. Without you folks, it wouldn’t have been nearly as much fun. Now, if I can just find a way to magically pay the whole thing off …

The Media Chronicle

I’ll close this post out with some of the images associated with building Threadripper (or for Spencer Harbar: THREADRIPPER!!!)

Definitely a Step Up

I’ll conclude this post with one last image, and that’s the image I see when I open Windows Device Manager and look at the “Processors” node:

Device Manager

I will admit that the image gives me all sorts of warm fuzzies inside. Seeing eight hyperthreading cores used to be impressive, but now that I’ve got 32 cores, I get a bit giddy.

Thanks for reading!


Microsoft Teams Ownership Changes – The Bulk PowerShell Way

As someone who spends most days working with (and thinking about) SharePoint, there’s one thing I can say without any uncertainty or doubt: Microsoft Teams has taken off like a rocket bound for low Earth orbit. It’s rare these days for me to discuss SharePoint without some mention of Teams.

I’m confident that many of you know the reason for this. Besides being a replacement for Skype for Business, many of Teams’ back-end support systems and dependent service implementations are based in – you guessed it – SharePoint Online (SPO).

As one might expect, any technology product that is rapidly evolving and seeing enterprise adoption reveals gaps and imperfect implementations as it grows – and Teams is no different. I’m confident that Teams will reach a point of maturity and eventually address all of the shortcomings that people are currently finding, but until it does, there will be those of us who attempt to address the gaps we find with the tools at our disposal.

Administrative Pain

One of those Teams pain points we discussed recently on the Microsoft Community Office Hours webcast was the challenge of changing ownership for a large number of Teams at once. We took on a question from Mark Diaz, who posed the following:

May I ask how do you transfer the ownership of all Teams that a user is managing if that user is leaving the company? I know how to change the owner of the Teams via Teams admin center if I know already the Team that I need to update. Just consulting if you do have an easier script to fetch what teams he or she is an owner so I can add this to our SOP if a user is leaving the company.

Mark Diaz

We discussed Mark’s question (amidst our normal joking around) and posited that PowerShell could provide an answer. And since I like to goof around with PowerShell and scripting, I agreed to take on Mark’s question as “homework” as seen below:

The rest of this post is my direct response to Mark’s question and request for help. I hope this does the trick for you, Mark!

Teams PowerShell

Anyone who has spent any time as an administrator in the Microsoft ecosystem of cloud offerings knows that Microsoft is very big on automating administrative tasks with PowerShell. And being a cloud workload in that ecosystem, Teams is no different.

Microsoft Teams has its own PowerShell module, and it can be installed and referenced in your script development environment in a number of different ways that Microsoft has documented. This MicrosoftTeams module is a prerequisite for some of the cmdlets you’ll see me use a bit further down in this post.
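If you’ve never installed the module before, pulling it down from the PowerShell Gallery is typically a one-liner (a minimal sketch; adjust the scope to suit your environment):

# Install the MicrosoftTeams module from the PowerShell Gallery ...
Install-Module -Name MicrosoftTeams -Scope CurrentUser
# ... and bring it into the current session
Import-Module -Name MicrosoftTeams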

The MicrosoftTeams module isn’t the only way to work with Teams in PowerShell, though. I would have loved to build my script upon the Microsoft Graph PowerShell module … but it’s still in what is termed an “early preview” release. Given that bit of information, I opted to use the “older but safer/more mature” MicrosoftTeams module.

The Script: ReplaceTeamsOwners.ps1

Let me just cut to the chase. I put together my ReplaceTeamsOwners.ps1 script to address the specific scenario Mark Diaz asked about. The script accepts a handful of parameters (this next bit lifted straight from the script’s internal documentation):

.PARAMETER currentTeamsOwner
    A string that contains the UPN of the user who will be replaced in the 
    ownership changes. This property is mandatory. Example: bob@EvilCorp.com

.PARAMETER newTeamsOwner
    A string containing the UPN of the user who will be assigned as the new
    owner of Teams teams (i.e., in place of the currentTeamsOwner). Example:
    jane@AcmeCorp.com.
    
.PARAMETER confirmEachUpdate
    A switch parameter that, if specified, will require the user executing the
    script to confirm each ownership change before it happens; this helps ensure
    that only the desired changes get made.

.PARAMETER isTest
    A boolean that indicates whether or not the script will actually run against
    and/or make changes to Teams teams and associated structures. This value defaults 
    to TRUE, so actual script runs must explicitly set isTest to FALSE to effect 
    changes on Teams teams ownership.

So both currentTeamsOwner and newTeamsOwner must be specified, and that’s fairly intuitive to understand. If the -confirmEachUpdate switch is supplied, then for each possible ownership change there will be a confirmation prompt allowing you to agree to an ownership change on a case-by-case basis.

The one parameter that might be a little confusing is the script’s isTest parameter. If unspecified, this parameter defaults to TRUE … and this is something I’ve been putting in my scripts for ages. It’s sort of like PowerShell’s -WhatIf switch in that it allows you to understand the path of execution without actually making any changes to the environment and targeted systems/services. In essence, it’s a “dry run.”

The difference between my isTest and PowerShell’s -WhatIf is that you have to explicitly set isTest to FALSE to “run the script for real” (i.e., make changes) rather than remembering to include -WhatIf to ensure that changes aren’t made. If someone forgets about the isTest parameter and runs my script, no worries – the script is in test mode by default. My scripts fail safe and without relying on an admin’s memory, unlike -WhatIf.
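If you want to borrow the fail-safe pattern for your own scripts, it boils down to a parameter declaration like this (a stripped-down sketch, separate from the full script below):

param(
    [Parameter(Mandatory=$false)]
    [Boolean]$isTest = $true    # Fail safe: callers must explicitly pass $false to make real changes
)

if ($isTest) {
    Write-Host "isTest is TRUE - reporting what would happen, but changing nothing."
}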

And now … the script!

<#  

.SYNOPSIS  
    This script is used to replace all instances of a Teams team owner with the
    identity of another account. This might be necessary in situations where a
    user leaves an organization, administrators change, etc.

.DESCRIPTION  
    Anytime a Microsoft Teams team is created, an owner must be associated with
    it. Oftentimes, the team owner is an administrator or someone who has no
    specific tie to the team.

    Administrators tend to change over time; at the same time, teams (as well as
    other IT "objects", like SharePoint sites) undergo transitions in ownership
    as an organization evolves.

    Although it is possible to change the owner of a Microsoft Teams team through
    the M365 Teams console, the process only works for one team at a time. If
    someone leaves an organization, it's often necessary to transfer all objects
    for which that user had ownership.

    That's what this script does: it accepts a handful of parameters and provides
    an expedited way to transition ownership of Teams teams from one user to 
    another very quickly.

.PARAMETER currentTeamsOwner
    A string that contains the UPN of the user who will be replaced in the 
    ownership changes. This property is mandatory. Example: bob@EvilCorp.com

.PARAMETER newTeamsOwner
    A string containing the UPN of the user who will be assigned as the new
    owner of Teams teams (i.e., in place of the currentTeamsOwner). Example:
    jane@AcmeCorp.com.
    
.PARAMETER confirmEachUpdate
    A switch parameter that, if specified, will require the user executing the
    script to confirm each ownership change before it happens; this helps ensure
    that only the desired changes get made.

.PARAMETER isTest
    A boolean that indicates whether or not the script will actually run against
    and/or make changes to Teams teams and associated structures. This value defaults 
    to TRUE, so actual script runs must explicitly set isTest to FALSE to effect 
    changes on Teams teams ownership.
	
.NOTES  
    File Name  : ReplaceTeamsOwners.ps1
    Author     : Sean McDonough - sean@sharepointinterface.com
    Last Update: September 2, 2020

#>
Function ReplaceOwners {
    param(
        [Parameter(Mandatory=$true)]
        [String]$currentTeamsOwner,
        [Parameter(Mandatory=$true)]
        [String]$newTeamsOwner,
        [Parameter(Mandatory=$false)]
        [Switch]$confirmEachUpdate,
        [Parameter(Mandatory=$false)]
        [Boolean]$isTest = $true
    )

    # Perform prerequisite checks before doing anything else.
    Clear-Host
    Write-Host ""
    Write-Host "Attempting prerequisite operations ..."
    $paramCheckPass = $true
    
    # First - see if we have the MSOnline module installed.
    try {
        Write-Host "- Checking for presence of MSOnline PowerShell module ..."
        $checkResult = Get-InstalledModule -Name "MSOnline"
        if ($null -ne $checkResult) {
            Write-Host "  - MSOnline module already installed; now importing ..."
            Import-Module -Name "MSOnline" | Out-Null
        }
        else {
            Write-Host "- MSOnline module not installed. Attempting installation ..."            
            Install-Module -Name "MSOnline" | Out-Null
            $checkResult = Get-InstalledModule -Name "MSOnline"
            if ($null -ne $checkResult) {
                Import-Module -Name "MSOnline" | Out-Null
                Write-Host "  - MSOnline module successfully installed and imported."    
            }
            else {
                Write-Host ""
                Write-Host -ForegroundColor Yellow "  - MSOnline module not installed or loaded."
                $paramCheckPass = $false            
            }
        }
    } 
    catch {
        Write-Host -ForegroundColor Red "- Unexpected problem encountered with MSOnline import attempt."
        $paramCheckPass = $false
    }

    # Our second order of business is to make sure we have the PowerShell cmdlets we need
    # to execute this script.
    try {
        Write-Host "- Checking for presence of MicrosoftTeams PowerShell module ..."
        $checkResult = Get-InstalledModule -Name "MicrosoftTeams"
        if ($null -ne $checkResult) {
            Write-Host "  - MicrosoftTeams module installed; will now import it ..."
            Import-Module -Name "MicrosoftTeams" | Out-Null
        }
        else {
            Write-Host "- MicrosoftTeams module not installed. Attempting installation ..."            
            Install-Module -Name "MicrosoftTeams" | Out-Null
            $checkResult = Get-InstalledModule -Name "MicrosoftTeams"
            if ($null -ne $checkResult) {
                Import-Module -Name "MicrosoftTeams" | Out-Null
                Write-Host "  - MicrosoftTeams module successfully installed and imported."    
            }
            else {
                Write-Host ""
                Write-Host -ForegroundColor Yellow "  - MicrosoftTeams module not installed or loaded."
                $paramCheckPass = $false            
            }
        }
    } 
    catch {
        Write-Host -ForegroundColor Yellow "- Unexpected problem encountered with MicrosoftTeams import attempt."
        $paramCheckPass = $false
    }

    # Have we taken care of all necessary prerequisites?
    if ($paramCheckPass) {
        Write-Host -ForegroundColor Green "Prerequisite check passed. Press <ENTER> to continue."
        Read-Host
    } else {
        Write-Host -ForegroundColor Red "One or more prerequisite operations failed. Script terminating."
        Exit
    }

    # We can now begin. First step will be to get the user authenticated so they can actually
    # do something (and we'll have a tenant context).
    Clear-Host
    try {
        Write-Host "Please authenticate to begin the owner replacement process."
        $creds = Get-Credential
        Write-Host "- Credentials gathered. Connecting to Azure Active Directory ..."
        Connect-MsolService -Credential $creds | Out-Null
        Write-Host "- Now connecting to Microsoft Teams ..."
        Connect-MicrosoftTeams -Credential $creds | Out-Null
        Write-Host "- Required connections established. Proceeding with script."
        
        # We need the list of AAD users to validate our target and replacement.
        Write-Host "Retrieving list of Azure Active Directory users ..."
        $currentUserUPN = $null
        $currentUserId = $null
        $currentUserName = $null
        $newUserUPN = $null
        $newUserId = $null
        $newUserName = $null
        # Note: -All ensures every user is returned; without it, Get-MsolUser only returns the first batch of users.
        $allUsers = Get-MsolUser -All
        Write-Host "- Users retrieved. Validating ID of current Teams owner ($currentTeamsOwner)"
        $currentAADUser = $allUsers | Where-Object {$_.SignInName -eq $currentTeamsOwner}
        if ($null -eq $currentAADUser) {
            Write-Host -ForegroundColor Red "- Current Teams owner could not be found in Azure AD. Halting script."
            Exit
        } 
        else {
            $currentUserUPN = $currentAADUser.UserPrincipalName
            $currentUserId = $currentAADUser.ObjectId
            $currentUserName = $currentAADUser.DisplayName
            Write-Host "  - Current user found. Name='$currentUserName', ObjectId='$currentUserId'"
        }
        Write-Host "- Now Validating ID of new Teams owner ($newTeamsOwner)"
        $newAADUser = $allUsers | Where-Object {$_.SignInName -eq $newTeamsOwner}
        if ($null -eq $newAADUser) {
            Write-Host -ForegroundColor Red "- New Teams owner could not be found in Azure AD. Halting script."
            Exit
        }
        else {
            $newUserUPN = $newAADUser.UserPrincipalName
            $newUserId = $newAADUser.ObjectId
            $newUserName = $newAADUser.DisplayName
            Write-Host "  - New user found. Name='$newUserName', ObjectId='$newUserId'"
        }
        Write-Host "Both current and new users exist in Azure AD. Proceeding with script."

        # If we've made it this far, then we have valid current and new users. We need to
        # fetch all Teams to get their associated GroupId values, and then examine each
        # GroupId in turn to determine ownership.
        $allTeams = Get-Team
        $teamCount = $allTeams.Count
        Write-Host
        Write-Host "Begin processing of teams. There are $teamCount total team(s)."
        foreach ($currentTeam in $allTeams) {
            
            # Retrieve basic identification information
            $groupId = $currentTeam.GroupId
            $groupName = $currentTeam.DisplayName
            $groupDescription = $currentTeam.Description
            Write-Host "- Team name: '$groupName'"
            Write-Host "  - GroupId: '$groupId'"
            Write-Host "  - Description: '$groupDescription'"

            # Get the users associated with the team and determine if the target user is
            # currently an owner of it.
            $currentIsOwner = $null
            $groupOwners = (Get-TeamUser -GroupId $groupId) | Where-Object {$_.Role -eq "owner"}
            $currentIsOwner = $groupOwners | Where-Object {$_.UserId -eq $currentUserId}

            # Do we have a match for the targeted user?
            if ($null -eq $currentIsOwner) {
                # No match; we're done for this cycle.
                Write-Host "  - $currentUserName is not an owner."
            }
            else {
                # We have a hit. Is confirmation needed?
                $performUpdate = $false
                Write-Host "  - $currentUserName is currently an owner."
                if ($confirmEachUpdate) {
                    $response = Read-Host "  - Change ownership to $newUserName (Y/N)?"
                    if ($response.Trim().ToLower() -eq "y") {
                        $performUpdate = $true
                    }
                }
                else {
                    # Confirmation not needed. Do the update.
                    $performUpdate = $true
                }
                
                # Change ownership if the appropriate flag is set
                if ($performUpdate) {
                    # We need to check if we're in test mode.
                    if ($isTest) {
                        Write-Host -ForegroundColor Yellow "  - isTest flag is set. No ownership change processed (although it would have been)."
                    }
                    else {
                        Write-Host "  - Adding '$newUserName' as an owner ..."
                        Add-TeamUser -GroupId $groupId -User $newUserUPN -Role owner
                        Write-Host "  - '$newUserName' is now an owner. Removing old owner ..."
                        Remove-TeamUser -GroupId $groupId -User $currentUserUPN -Role owner
                        Write-Host "  - '$currentUserName' is no longer an owner."
                    }
                }
                else {
                    Write-Host "  - No changes in ownership processed for $groupName."
                }
                Write-Host ""
            }
        }

        # We're done; let the user know.
        Write-Host -ForegroundColor Green "All Teams processed. Script concluding."
        Write-Host ""

    } 
    catch {
        # One or more problems encountered during processing. Halt execution.
        Write-Host -ForegroundColor Red "-" $_
        Write-Host -ForegroundColor Red "- Script execution halted."
        Exit
    }
}

ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com -isTest $true -confirmEachUpdate

Don’t worry if you don’t feel like trying to copy and paste that whole block. I zipped up the script and you can download it here.

A Brief Script Walkthrough

I like to make an admin’s life as simple as possible, so the first part of the script (after the comments/documentation) is an attempt to import (and if necessary, first install) the PowerShell modules needed for execution: MSOnline and MicrosoftTeams.

From there, the current owner and new owner identities are verified before the script goes through the process of getting Teams and determining which ones to target. I believe that the inline comments are written in relatively plain English, and I include a lot of output to the host to spell out what the script is doing each step of the way.

The last line in the script is simply the invocation of the ReplaceOwners function with the parameters I wanted to use. You can leave this line in and change the parameters, take it out, or use the script however you see fit.
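As a point of reference, a dry run and a “real” run of that last line would look something like this (the UPNs are placeholders, of course):

# Dry run: report what would change without changing anything (isTest defaults to $true anyway)
ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com -confirmEachUpdate

# Live run: isTest must be explicitly set to $false for ownership changes to be made
ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com -isTest $false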

Here’s a screenshot of a full script run in my family’s tenant (mcdonough.online) where I’m attempting to see which Teams my wife (Tracy) currently owns that I want to assume ownership of. Since the script is run with isTest being TRUE, no ownership is changed – I’m simply alerted to where an ownership change would have occurred if isTest were explicitly set to FALSE.

ReplaceTeamsOwners.ps1 execution run

Conclusion

So there you have it. I put this script together during a relatively slow afternoon. I tested it and made it as error-free as I could with the tenants that I have, but you should still test it yourself (using an isTest value of TRUE, at least) before executing it “for real” against your production system(s).

And Mark D: I hope this meets your needs.

References and Resources

  1. Microsoft: Microsoft Teams
  2. buckleyPLANET: Microsoft Community Office Hours, Episode 24
  3. YouTube: Excerpt from Microsoft Community Office Hours Episode 24
  4. Microsoft Docs: Microsoft Teams PowerShell Overview
  5. Microsoft Docs: Install Microsoft Teams PowerShell
  6. Microsoft 365 Developer Blog: Microsoft Graph PowerShell Preview
  7. Microsoft Tech Community: PowerShell Basics: Don’t Fear Hitting Enter with -WhatIf
  8. Zipped Script: ReplaceTeamsOwners.zip

A Quick Look At The Get-PnPGroup Cmdlet And Its Operation

Why This Particular Topic?

I wouldn’t be surprised if some of you might be saying and asking, “Okay, that’s an odd choice for a post – even for you. Why?”

If you’re one of those people wondering, I would say that the sentiment and question are certainly fair. I’m actually writing this as part of my agreed-upon “homework” from last Monday’s broadcast of the Community Office Hours podcast (I think that’s what we’re calling them). If you’re not immediately familiar with this particular podcast and its purpose, I’ll take two seconds out to describe it.

I was approached one day by Christian Buckley (so many “interesting experiences” seem to start with Christian Buckley) about a thought he had. He wanted to start doing a series of podcasts each week to address questions, concerns, problems, and other “things” related to Office 365, Microsoft Teams, and all the O365/M365 associated workloads. He wanted to open it up as a panel-style podcast, and although anyone could join, he was interested in rounding-up a handful of Microsoft MVPs to “staff” the podcast in an ongoing capacity. The idea sounded good to me, so I said “Count me in” even before he finished his thoughts and pitch.

I wasn’t sure what to expect initially … but we just finished our 22nd episode this past Monday, and we are still going strong. The cast on the podcast rotates a bit, but there are a few of us that are part of what I’d consider the “core group” of entertainers …

The podcast has actually become something I look forward to every Monday, especially with the pandemic and the general lack of in-person social contact I seem to have (or rather, don’t have). We do two sessions of the podcast every Monday: one for EMEA at 11:00am EST and the other for APAC at 9:00pm EST. You can find out more about the podcast in general through the Facebook group that’s maintained. Alternatively, you can send questions and things you’d like to see us address on the podcast to OfficeHours@CollabTalk.com.

If you don’t want (or have the time) to watch the podcast live, an archive of past episodes exists on Christian’s site, I maintain an active playlist of the recorded episodes on YouTube, and I’m sure there are other repositories available.

Ok, Got It. “Your Homework,” You Say?

The broadcasts we do normally have no fixed format or agenda, so we (mostly Christian) tend to pull questions and topics to address from the Facebook group and other places. And since the topics are generally so wide-ranging, it goes without saying that we have viable answers for some topics … but there are plenty of things we’re not good at (like telephony) and freely tell you so.

Whenever we get to a question or topic that should be dealt with outside the scope of the podcast (oftentimes because we need to do some research or contact a resource who knows the domain), we’ll avoid BSing too much … and someone will take the time to research the topic and come back the following week with what they found or put together. We’re trying to tackle a bunch of questions and topics each week, and none of us is well-versed in the entire landscape of M365. Things just change so darn fast these days ….

So, my “homework” from last week was one of these topics. And I’m trying to do one better than just report back to the podcast with an answer. The topic and research may be of interest to plenty of people – not just the person who asked about it originally. Since today is Sunday, I’m racing against the clock to put this together before tomorrow’s podcast episodes …

The Topic

Rather than trying to supply a summary of the topic, I’m simply going to share the post and then address it. The inquiry/post itself was made in the Office 365 Community Facebook group by Bilal Bajwa. Bilal is from Milwaukee, Wisconsin, and he was seeking some PowerShell-related help:

Being the lone developer in our group of podcast regulars (and having worked a fair bit with the SharePointPnP Cmdlets for PowerShell and PowerShell in general), I offered to take Bilal’s post for homework and come back with something to share. As of today (Sunday, 8/23/2020), the post is still sitting in the Facebook group without comment – something I hope to change once this blog post goes live in a bit.

SharePointPnP Cmdlets And The Get-PnPGroup Cmdlet Specifically

If you’re a SharePoint administrator and you’re unfamiliar with the SharePoint Patterns and Practices group and the PowerShell cmdlets they maintain, I’M giving YOU a piece of homework: read the Microsoft Docs to familiarize yourself with what they offer and how they operate. They will only help make your job easier. That’s right: RTFM. Few people truly enjoy reading documentation, but it’s hard to find a better and more complete reference medium.

If you are already familiar with the PnP cmdlets … awesome! As you undoubtedly know, they add quite a bit of functionality and extend a SharePoint administrator’s range of control and options within just about any SharePoint environment. The PnP group that maintains the cmdlets (and many other tools) is a group of very bright and very giving folks.

Vesa Juvonen is one name I associate with pretty much anything PnP. He’s a Principal Program Manager at Microsoft these days, and he directs many of the PnP efforts in addition to being an exceptionally nice (and resourceful!) guy.

The SharePoint Developer Blog regularly covers PnP topics, and they regularly summarize and update PnP resource material – as well as explain it. Check out this post for additional background and detail.

Cmdlet: Get-PnPGroup

Now that I’ve said all that, let’s get started with looking at the Get-PnPGroup cmdlet that is part of the SharePointPnP PowerShell module. I will assume that you have some skill with PowerShell and have access to a (SharePoint) environment where you can run the cmdlets successfully. If you’re new to all this, then I would suggest reviewing the Microsoft Docs link I provide in this blog post, as the docs cover many different topics – including how to get set up to use the SharePoint PnP cmdlets.

In his question/post, Bilal didn’t specify whether he was trying to run the Get-PnPGroup cmdlet against a SharePoint Online (SPO) site or a SharePoint on-premises farm. The operation of the SharePointPnP cmdlets, while fairly consistent and predictable from cmdlet to cmdlet, sometimes varies a bit depending on the version of SharePoint in use (on-premises) or whether SPO is being targeted. In my experience, the exposed APIs and development surfaces went through some enhancement after SharePoint 2013 in specific areas. One such area was data pertaining to site users and their alerts; the data is available in SharePoint 2016 and 2019 (as well as in SPO), but it’s inaccessible in 2013.

Because of this, it is best to review the online documentation for any cmdlet you’re going to use. Barring that, make sure you remember the availability of the documentation if you encounter any issues or behavior that isn’t expected.

If we do this for Get-PnPGroup, we frankly don’t get too much. The online documentation at Microsoft Docs is relatively sparse and just slightly better than auto-generated docs. But we do get a little helpful info:

We can see from the docs that this cmdlet runs against all versions of SharePoint starting with SharePoint 2013. I would therefore expect operations to be generally consistent across versions (and locations) of SharePoint.

A little further down in the documentation for Get-PnPGroup (in Example 1), we find that simply running the cmdlet is said to return all SharePoint groups in a site. Let’s see that in practice.

Running Wild

I fired up a VM-based SharePoint 2019 farm I have to serve as the target for on-prem tests. For SPO, I decided to use my family’s tenant as a test target. Due to time constraints, I didn’t get a chance to run anything against my VM environment, so I’m assuming (dangerous, I know) that on-prem results will match SPO. If they don’t, I’m sure someone will tell me below (in the Comments) …
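For those following along at home, the connect-and-query sequence is only a couple of lines. Consider this a sketch – the URL is a placeholder for your own site, and your authentication options (credentials, web login, etc.) will vary by environment:

# Connect to the target SPO site, then grab every SharePoint group in it.
Connect-PnPOnline -Url "https://yourtenant.sharepoint.com" -UseWebLogin
$group = Get-PnPGroup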

Going against SPO involves connecting to the tenant and then executing Get-PnPGroup. The initial results:

Running Get-PnPGroup returned something, and it’s initially presented to us in a somewhat condensed table format that includes ID, (group) Title, and LoginName.

But there’s definitely more under the hood than is being shown here, and that “under the hood” part is what I suspect might have been causing Bilal some issues when he looked at his results.

We’ve all probably heard it before at some point: PowerShell is an object-oriented scripting language. This means that PowerShell manipulates and works with Microsoft .NET objects behind-the-scenes for most things. What may appear as a scalar value or simple text data on first inspection could be just the tip of the “object iceberg” when it comes to PowerShell.

Going A Bit Deeper

To learn a bit more about what the function is actually returning upon execution, I ran the Get-PnPGroup cmdlet again and assigned the function return to a variable I called $group (which you can see in the screen capture earlier). Performing this variable assignment would allow me to continue working with the function output (i.e., the SharePoint groups) without the need to keep querying my SharePoint environment.

To display the contents of $group with additional detail, the PowerShell I executed might appear a little cryptic for those who don’t live in PowerShellLand:

$group | fl

There’s some shorthand in play with that last bit of PowerShell, so I’ll spell everything out. First, fl is the shorthand notation for the Format-List cmdlet. I could have just as easily typed …

$group | Format-List

… but that’s more typing! I’m no different than anyone else, and I like to get more done with less when possible.

Next, the pipe (“|”) will be familiar to most PowerShell practitioners, and here it’s used to send the contents of the $group variable to the Format-List cmdlet. The Format-List cmdlet then expands the data piped to it (i.e., the SharePoint groups in $group) and shows all the property values that exist for each SharePoint group.

If you’re not familiar with .NET objects or object-oriented development, I should point out that the SharePoint groups returned and assigned to our $group variable are .NET objects. Knowing this might help your understanding – or maybe not. Try not to worry if you’re not a dev and don’t speak dev. I know that to many admins, devs might as well be speaking jive …

For our purposes today, we’re going to limit our discussion and analysis of objects to just their properties – nothing more. The focus still remains PowerShell.

What Are The Actual Properties Available To Us?

If you’re asking the question just posed, then you’re following along and hopefully making some kind of sense of what I’m sharing.

So, what are the properties that are exposed by each of the SharePoint groups? Looking at the output of the $group variable sent to the Format-List command (shown earlier) gives you an idea, but there’s a much quicker and more reliable way to get the listing of properties.

You may not like what I’m about to say, but it probably won’t surprise you: those properties are documented (for everyone to learn about) in Microsoft Docs. Yes, another documentation reference!

How did I know what to look/search for? If you refer to the end of the reference for the Get-PnPGroup cmdlet, there is a section that describes the “Outputs” from running the cmdlet. That output is only one line of text, and it’s exactly what we need to make the next hop in our hunt for properties details:

List<Microsoft.SharePoint.Client.Group>

A List is a .NET collection class, but that’s not important for our purposes. Simply put, you can think of a .NET List as a “bucket” into which we put other objects – including our SharePoint groups. The class/type identified between the “<” and “>” after List specifies the type of each object in the List. In our case, each item in the List is of type Microsoft.SharePoint.Client.Group.

If you search for that class type, you’ll get a reference in your search results that points to a Microsoft Docs link serving as a reference for the SharePoint Group type we’re interested in. And if we look at the “Properties” link of that particular reference, each of the properties that appear in our returned groups are spelled out with additional information – in most cases, at least basic usage information is included.

A quick look at those properties and a review of one of the groups in the $group variable (shown below) should convince you that you’re looking at the right reference.

What Do We Do Now?

You might recall that we’re going through this exercise of learning about the output from the Get-PnPGroup cmdlet because Bilal asked the question, “Any idea how to filter?”

Hopefully the output that’s returned from the cmdlet makes some amount of sense, and I’ve convinced you (and Bilal) that it’s not “garbage” but a List collection of .NET objects that are all of the Microsoft.SharePoint.Client.Group type.

At this point, we can leave our discussion of .NET objects behind (for the most part) and transition back to PowerShell proper to talk about filtering. We could do our filtering without leaving .NET, but that wouldn’t be considered the “PowerShell way” of doing it. Just remember, though: there’s almost always more than one way to get the results you need from PowerShell …

Filtering The Results

In the case of my family’s SPO tenant, there are a total of seven (7) SharePoint groups in the main site collection:

Looking at a test case for filtering, I’m going to try to get any group that has “McDonough” in its name.

A SharePoint group’s name is the value of the Title property, and a very straightforward way to filter a collection of objects (which we have identified exists within our $group variable) is through the use of the Where-Object cmdlet.

Let’s set up some PowerShell that should return only the subset of groups I’m interested in (i.e., those with “McDonough” in the Title). Reviewing the seven groups in my site collection, I note that only three (3) of them contain my last name. So, after filtering, we should have precisely three groups listed.

Preparing the PowerShell …

$group | where-object {$_.Title -like "*McDonough*"}

… and executing this, we get back the filtered results predicted and expected; i.e., three SharePoint groups:

For those that could use a little extra clarification, I will summarize what transpired when I executed that last line of PowerShell.

  1. From our previous Get-PnPGroup operation, we knew that the $group variable contained the seven groups that exist in my site collection.
  2. We piped (“|”) that unfiltered collection of groups to the Where-Object cmdlet. It’s worth pointing out that the cmdlets and most of the other strings/text in PowerShell are case-insensitive (Where-Object, where-object, and WhErE-oBjEcT are all the same from a PowerShell processing perspective).
  3. The curly braces after the where-object cmdlet define the logic that will be processed for each object (i.e., SharePoint group) that is passed to the where-object cmdlet.
  4. Within the curly braces, we indicated that we wanted to filter and keep each group with a Title like “*McDonough*”. This was accomplished with the -like operator (PowerShell has many other operators, too). The asterisks before and after “McDonough” are simply wildcards that will match anything with “McDonough” in the Title – regardless of any text or characters appearing before and/or after “McDonough”.
  5. Also worth noting within the curly braces is the “$_.” notation. When iterating through the collection of SharePoint groups, the “$_” denotes the current object/group being evaluated – each one in turn.
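As an aside, PowerShell 3.0 and later also support a simplified Where-Object syntax for single-property comparisons like this one. It drops the curly braces and the “$_.” notation entirely:

# Equivalent filtering with the simplified comparison syntax.
$group | Where-Object Title -like "*McDonough*"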

Round Two

Let’s try another one before pulling the plug (figuratively and literally – it’s close to my bedtime …).

Let’s filter and keep only the groups where the members of the group can also edit the group membership. This is an uncommon scenario, and we might wish to know this information for some potential security tightening.

Looking at the properties available on the Group type, I see the one I’m interested in: AllowMembersEditMembership. It’s a boolean value, and I want back the groups that have a value of true (which is represented as $true in PowerShell) for this property.

$group | where-object {$_.AllowMembersEditMembership -eq $true}

Running the PowerShell just presented, we get only one matching group back:

Frankly, that’s one more group than I originally expected, so I should probably take a closer look in the ol’ family site collection …
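If you ever find yourself in the same spot and want to identify the matching group at a glance, piping the filtered results to Select-Object will surface just the identifying properties (a quick sketch using properties we saw in the earlier output):

# Show just enough detail to identify the group(s) needing review.
$group | Where-Object {$_.AllowMembersEditMembership -eq $true} |
    Select-Object Id, Title, LoginName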

Summary

I hope this helped you (and Bilal) understand that there is a method to PowerShell’s madness. We just need to lean on .NET and object-oriented concepts a bit to help us get what we want.

The filtering I demonstrated was pretty basic, and there are numerous ways to take it further and get more specific in your filtering logic/expressions. If you weren’t already comfortable with filtering, I hope you now know that it isn’t really that hard.

If I happened to skip or gloss over something important, please leave me a note in the Comments section below. My goal was to provide a complete-enough picture to build some confidence – so that the next time you need to work with objects and filter them in PowerShell, you’ll feel comfortable doing so.

Have fun PowerShelling!

References And Resources

  1. LinkedIn: Christian Buckley
  2. Podcast History: Microsoft Community Office Hours from 8/18/2020
  3. BuckleyPLANET: Community category and activities
  4. Facebook Group: Office 365 Community
  5. Email Group: OfficeHours@CollabTalk.com
  6. YouTube: Microsoft Community Office Hours playlist
  7. Microsoft Docs: PnP PowerShell Overview
  8. LinkedIn: Vesa Juvonen
  9. Blog: SharePoint Developer Blog
  10. Blog Post: Microsoft 365 & SharePoint Ecosystem (PnP) – July 2020 Update
  11. Microsoft Docs: Get-PnPGroup
  12. Microsoft: What Is .NET Framework?
  13. Microsoft Docs: Format-List
  14. Microsoft Docs: List<T> Class
  15. Microsoft Docs: Group Class
  16. Microsoft Docs: Group Properties
  17. Microsoft Docs: Where-Object
  18. Microsoft Docs: About Comparison Operators

What CDN Usage Does for SharePoint Online (SPO) Performance

If you need the what’s what on CDNs (content delivery networks), this is a bit of quick reading that will get you up to speed with what a CDN is, how to configure your SPO tenant to use a CDN, and the benefits that CDNs can bring.

The (Not Entirely Obvious) TL;DR Answer


Since I’m taking the time to write about the topic, you can safely guess that yes, CDNs make a difference with SPO page operations. In many cases, proper CDN configuration will make a substantial difference in SPO page performance. So enable CDN use NOW!

The Basis For That Answer: Introduction

Knowing that some folks simply want the answer up-front, I hope that I’ve satisfied their curiosity. The rest of this post is dedicated to explaining content delivery networks (CDNs), how they operate, and how you can easily enable them for use within your SharePoint Online (SPO) sites.

Let me first address a misconception I’ve sometimes encountered among SPO administrators and developers (including some MVPs) – namely, that CDNs don’t really “do a whole lot” to help site and/or page performance. Sure, usage of a CDN is recommended … but a common misunderstanding is that a CDN is more of a “nice-to-have” than a “need-to-have” for SPO sites. That judgment oftentimes comes without any real research, knowledge, or testing. Skeptics typically haven’t read the documentation (the “non-RTFM crowd”) and haven’t actually spent any time profiling and troubleshooting the performance of SPO sites. Since I enjoy addressing performance problems and challenges, I’ve been fortunate to experience firsthand the benefits that CDNs can bring. By the end of this post, I hope I’ll have made converts of a CDN skeptic or two.

What Is A CDN?


A CDN is a Content Delivery Network. There are a lot of (good) web resources that describe and illustrate what CDNs are and how they generally operate (like this one and this one), so I’m not going to attempt to “add value” with my own spin. I will simply call attention to a couple of the key characteristics that we really care about in our use of CDNs with SPO.

  1. A CDN, at its core, can be thought of as a system of distributed (typically geographically so) servers for caching and offloading of SPO content. Rather than needing to go to the Microsoft network and data center where your tenant is located in order to fetch certain files from SPO, your browser can instead go to a (geographically) closer CDN server to get those same files.
  2. By virtue of going to a closer CDN server instead of the Microsoft network, the chance that you’ll have a “bigger pipe” with more bandwidth – and less latency/delay – is greater. This usually translates directly to an improvement in performance.
  3. In addition to giving us the opportunity to download certain SPO files faster and with less delay, CDNs can do other things to improve the experience for the SPO files they serve. For instance, CDN servers can pass files back to the browser with cache-control headers that allow browsers to re-serve downloaded files to other users (i.e., to users who haven’t actually downloaded the files), store downloaded files locally (to avoid having to download them again for a period of time), and more. A quick way to peek at those headers appears just after this list.
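If you’d like to see those cache-control headers for yourself, a couple of lines of PowerShell will do it. This is just a quick sketch, and the URL below is a hypothetical stand-in for a real CDN-served asset address:

# Request only the headers for a CDN-hosted file, then inspect Cache-Control.
$response = Invoke-WebRequest -Uri "https://cdn.example.com/styling.css" -Method Head
$response.Headers["Cache-Control"]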

If you didn’t know about CDNs prior to this post, or didn’t understand how they could help you, I hope you’re beginning to see the possibilities!

The Arrival Of The Office 365 CDN

It wasn’t all that long ago that Microsoft was a bit more “modest” in its use of CDNs. Microsoft certainly made use of them, but prior to the implementation of its own content delivery networks, Microsoft frequently turned to a company called Akamai for CDN support.

When I first started presenting on SharePoint and its built-in caching mechanisms, I often spoke about Akamai and their edge network when talking about BLOB caching and how the max-age cache-control header could be configured and misconfigured. Back then, “Akamai” was basically synonymous with “CDN,” and that’s how many of us thought about the company. They were certainly leading the pack in the CDN service space.

Back then, if you were attempting to download a large file from Microsoft (think DVD images, ISO files, etc.), then there was a good chance that the download link your browser would receive (from Microsoft’s servers) would actually point to an Akamai edge node near your geographic location instead of a Microsoft destination.

Fast forward to today. In addition to utilizing third-party CDNs like those deployed by Akamai, Microsoft has built (and is improving) its own first-party CDNs. There are a couple of benefits to this. First, many data-regulation concerns that might otherwise prevent third-party housing of your data (yes, even in temporary locations like a CDN) can be largely avoided. In the case of CDNs that Microsoft is running, there is no hand-off to a third party and thus much less practical concern regarding who is housing your data.

Second, with its own CDNs, Microsoft has a lot more latitude and ability to extend the specifics of CDN configuration and operation to its customers. And that’s what it has done with the Office 365 CDN.

Set Up The O365 CDN For Your Tenant’s Use

Now we’re talking! This next part is particularly important, and it’s what drove the creation of this post. It’s also the one bit of information that I promised Scott Stewart at Microsoft that I would try to get “out in the wild” as quickly and as visibly as possible.

So, if you remember nothing else from this post, please remember this:

Set-SPOTenantCdnEnabled -CdnType Public -Enable $true

That is the line of PowerShell that needs to be executed (against your SPO tenant, so you need to have a connection to your tenant established first) to enable transparent CDN support for public files. Run that, and non-sensitive files of public origin from SPO will begin getting cached in a CDN and served from there.

The line of PowerShell I shared goes through the SharePoint Online Management Shell – something most organizations using SPO (and their admins in particular) have installed somewhere.
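Putting it all together, a typical end-to-end sequence with the management shell looks something like the following (the admin URL is a placeholder for your own tenant’s):

# Connect to the tenant admin endpoint, enable the public CDN, then verify.
Connect-SPOService -Url "https://yourtenant-admin.sharepoint.com"
Set-SPOTenantCdnEnabled -CdnType Public -Enable $true
Get-SPOTenantCdnEnabled -CdnType Public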

If you prefer the PnP PowerShell module, it is also possible to enable CDN support there by executing the following PowerShell:

Set-PnPTenantCdnEnabled -CdnType Public -Enable $true

No matter how you enable the CDN, note that the PowerShell I’ve elected to share (above) enables CDN usage for files of public origin only. It is easy enough to alter the parameters being passed in either PowerShell command to cover all files, public and private: switch -CdnType to Both, or execute a second line after the first that swaps -CdnType Public for -CdnType Private. A sketch of the “Both” variant follows.
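For completeness, here’s what the “Both” variant might look like in each module (again: check your policies before enabling private-origin caching):

# SPO Management Shell: cover both public and private origins in one shot.
Set-SPOTenantCdnEnabled -CdnType Both -Enable $true

# PnP PowerShell equivalent.
Set-PnPTenantCdnEnabled -CdnType Both -Enable $true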

The reason I chose only public enablement is because your organization may be bound by restrictions or policies that prohibit or limit CDN use with private files. This is discussed a bit in the O365 CDN post originally cited, but it’s best to do your own research.

Enabling CDN support for public files, however, is considered to be safe in general.

What Sort Of Improvements Can I Potentially See?

I’ve got a series of images that I use to illustrate performance improvements when files are served via CDN instead of an SPO list/library, and those files are from Microsoft. Thankfully, MS makes the images I tend to use (and a discussion of them) freely available, and they are presented at this link for your reading and reference.

The example called out in the link I just shared involves offloading the jQuery JavaScript library from SPO to a CDN. The real-world numbers that were captured reduced fetch-and-load time from just over 1.5 seconds to less than half a second (<500ms). That is no small change … and that’s for just one file!

The Other (Secret) Benefit Of CDNs

I guess “Secret” is technically the wrong choice of term here. A more accurate description would be to say that I seldom hear or see anyone talking about another CDN benefit I consider to be very important and significant. That benefit, quite simply, involves improving file fetching and retrieval parallelism when a web page and associated assets (CSS, JS, images, etc.) are requested for download by your browser. In plain English: CDNs typically improve file downloading by allowing the browser to issue a greater number of concurrent file requests.

To help with this concept and its explanation, I’ve created a couple of diagrams that I’ll share with you. The first one appears below, and it is meant to represent the series of steps a browser might execute when retrieving everything needed to show a (SharePoint/SPO) page. As we’ve talked about, what is commonly thought of as a single page in a SharePoint site is, more accurately, a page containing all sorts of dependent assets: image files, JavaScript files, cascading style sheets, and a whole bunch more.

A request for a SharePoint page housed at http://www.thesite.com might start out with one request, but your browser is going to need all of the files referenced within the context of that page (default.aspx, in our case) to render correctly. See below:

To get what’s needed to successfully render the example SharePoint page without CDN support, we follow the numbers:

  1. Your browser issues an HTTP request for the page you want to load – http://www.thesite.com/default.aspx in the case of example above.
  2. That page request goes to (and is served by) the web server/front-end that can return the page.
  3. Our page needs other files to render properly, like styling.css, logo.png, functions.js, and more. These get queued-up and returned according to some rules – more on this in a minute.
  4. In step four (4), files get returned to the browser. Notice I say “no more than six at a time” in the illustration. That’s important and will come into play once we start introducing CDN support to the page/site.

You might be wondering, “Only six files at a time? Really? Why the limitation?” Well, I should start by saying the limit is probably six … maybe a bit more, perhaps a bit less. The specific number depends on the browser you’re using. There was a good summary answer on StackOverflow to a related (but slightly different) question that provides some additional discussion.

Section eight (8) of the HTTP specification (RFC 2616) specifically addresses HTTP connections, how they should be handled, how proxies should be negotiated, etc. For our purposes, the practical implementation of the HTTP specification by modern browsers generally limits the number of concurrent/active connections a browser can have to any given host or URL to six (6).

Notice how I worded that last sentence. Since you folks are smart cookies, I’ll bet you’re already thinking “Wait a minute. CDNs typically have different URLs/hosts from the sites they cache” and imagining what happens (or can happen) when a new source (i.e., a different host/URL) is introduced.

This illustration roughly outlines the fetch process when a CDN is involved:

Steps one (1) through four (4) of the fetch process with a CDN are basically still the same as was illustrated without a CDN a bit earlier. When the page is served-up in step three (3) and returned in step four (4), though, there are some differences and additional activity taking place:

  1. Since at least one CDN is in-use for the SPO environment, some of the resource links within the page that is returned will have different URLs. For instance, whereas styling.css was previously served from the SPO environment in the non-CDN example, it might now be referenced through the CDN host shown as http://cdn.source.com/styling.css
  2. The requested file is retrieved, and …
  3. Files come back to the client browser from the CDN at the same time they’re being passed-back from the SPO environment.

Since we’re dealing with two different URLs/hosts in our CDN example (http://www.thesite.com and cdn.source.com), our original six (6) file concurrent download limitation transforms into a 12-file limitation (two hosts serving six files at a time, 2 x 6 = 12).

Whether or not the CDN-based process is ultimately faster than without a CDN depends on a great many factors: your Internet bandwidth, the performance of your computer, the complexity/structure of the page being served-up, and more. In the majority of cases, though, at least some performance improvement is observed. In many cases, the improvement can be quite substantial (as referenced and discussed earlier).

Additional Note: 8/24/2020

In a bit of laziness on my part, I didn’t do a prior article search before writing this post. As fate would have it, Bob German (a friend and fellow MVP – well, he was an MVP prior to joining Microsoft a couple of years back) wrote a great post at the end of 2017 that I became aware of this morning with a series of tweets. Bob’s post is called “Choosing a CDN for SharePoint Client Solutions” and is a bit more developer-oriented. That being said, it’s a fantastic post with good information that is a great additional read if you’re looking for more material and/or a slightly different perspective. Nice work, Bob!

Post Update: 8/26/2020

Anders Rask was kind enough to point out that the PnP PowerShell line I originally had listed wasn’t, in fact, PnP PowerShell. That specific line of PowerShell has since been updated to reflect the correct way of altering a tenant’s CDN with the PnP PowerShell cmdlets. Many thanks for the catch, Anders!

Conclusion

So, to sum-up: enable CDN use within your SPO tenant. The benefits are compelling!

References

  1. Microsoft Docs: Use The Office 365 Content Delivery Network (CDN) With SharePoint Online
  2. Imperva: What Is A CDN?
  3. Akamai: What Does CDN Stand For?
  4. MDN Web Docs: Cache-Control
  5. Company: Akamai
  6. Presentations: Caching-In For SharePoint Performance
  7. Akamai: Download Delivery
  8. Microsoft Docs: Configure Cache Settings For A Web Application In SharePoint Server
  9. Blog Post: Do You Know What’s Going To Happen When You Enable The SharePoint BLOB Cache?
  10. LinkedIn: Scott Stewart
  11. Microsoft Docs: Enabling O365 CDN support for public origin files.
  12. Microsoft Docs: Get Started With SharePoint Online Management Shell
  13. Microsoft Docs: PnP PowerShell Overview
  14. Microsoft Docs: Set Up And Configure The Office 365 CDN By Using PnP PowerShell
  15. Microsoft Docs: What Performance Gains Does A CDN Provide?
  16. Push Technologies: Browser Connection Limitations
  17. StackOverflow: How many maximum number of simultaneous Chrome connections/threads I can start through Selenium WebDriver?
  18. W3.org: RFC 2616, Section 8: Connection

2020 Goals + Recent and Upcoming Happenings

2020 is here and we are screaming through a new decade at light speed. In this post, I share my goals for 2020 (gulp!) and why they matter to me. I also share some upcoming events and recent efforts that may be of interest (and are free!)

Aircraft carrier launch

The new decade is in full swing at this point, and it is certainly moving along without showing any signs of slowing down soon. Like many others, January began for me a little like a jet being thrown off an aircraft carrier. I’m adapting to the uptick in activity and the pace at which things are moving, but whoa – whiplash!

Planning for a New Year

A good friend of mine and Microsoft PFE extraordinaire, Brian Jackett, has done something over the last handful of years that I both admire and have tried to emulate with limited success. Brian is an extremely thoughtful guy who regularly tries to lay out his goals and track his progress against those goals in various ways. One of the things he has done in the past is start off a new calendar year with some form of assessment of the previous year’s goals. He then proceeds to lay out what he’s going to be working on in the year ahead and why he’s chosen those goals. Brian has typically done this in blog post format. I haven’t seen one from him yet this year (I hope he writes one – I’m always interested in where he’s focused), but he did write up a post going into the year 2019.

I’ve attempted to follow suit in the past, because it forces me to focus on the year ahead and set some goals.

At Best, Limited Success

It probably comes as no surprise for me to say that like so many others, I’ve set a bunch of goals in the past and then fallen dramatically short of achieving them. I attribute this outcome to many things: shifting priorities, lack of time (or more appropriately, lack of prioritization), and any number of other factors. But if I look back at previous years and try to summarize my results/outcome in one statement: In general, I think I’d set the bar too high.

In response to that, you may be thinking …

“Shoot for the moon. Even if you miss, you’ll land among the stars.”

Norman Vincent Peale

Peale was very big on positive thinking, in case that previous quote doesn’t make it readily apparent. While I agree with him in an optimistic and energetic way, I’ll be honest: I’m getting to the age and point in life where my energy is starting to flag a bit. I also have many more people, efforts, and things vying for my attention and energy than I did earlier in my life/career.

I try to remain optimistic day-to-day (with varying degrees of success), but to guarantee that I truly remain focused and make progress on goals I set nowadays, I have to be realistic and pragmatic in the specific goals I set for myself. In general, I need to think more in terms of small steps rather than massive undertakings.

(Pragmatic) Goals for 2020

So, with some trepidation, I make my public declaration (here, in this blog) of 2020 goals. These are things I truly believe I can achieve, can objectively and tangibly measure progress toward, can stick to, and can stay motivated about (very important!). These aren’t the only things I expect I’ll do, but they deserve to be called out as particular areas of focus.

In order of importance:

  1. Do at least one thing to delight my wife. Surprised to see a non-SharePoint goal listed first (or even listed)? I feel that I am a caring individual who thinks regularly of others, but a romantic I am not. Nor am I particularly good at surprising people (particularly those I’m close to) with something wonderful and delightful “out of the blue” – particularly something that comes from the heart. To be more cognizant of this deficiency (or “opportunity for growth,” if you prefer) is the most important thing for me to focus on this year. The measure of achievement will be (at least) one thing, one situation, one undertaking, one whatever – where I surprise my wife and she feels special, loved, and truly touched. That probably is an easy undertaking for many of you, but believe me when I say it’s something that will take a lot of thinking and planning on my end.
  2. Grow and strengthen my relationship with my children. My son and daughter, Brendan and Sabrina, turn 13 in March … meaning we’ll officially have teenagers in the house. Many times, it feels like they have been teens for a while now (especially Sabrina), but that’s not the reality. As a parent, I’ve been encountering a growing number of challenges (in general) with my kids. I’m less certain how to respond and react to them in many ways. I try to be a loving and engaged father, but that’s harder for me at times and in certain situations. My parents divorced when I was in third grade, and my middle school years (which is where my kids are in age now) were something of a mess with parents, step-parents, and other authority figures in my life. I didn’t have a lot of consistency or reliability in terms of role models and relationships. So, I’ve found myself looking to my wife more and more for guidance in situations that have been popping-up with the kids … and I’d like to be a better and confident father while relying a little less for outside help. I’m not saying that I want to totally go it alone, but I would say that I do want to develop a better compass and sense of “internal guidance.” Of all of the goals I’ve selected for myself this year, this is probably the toughest one to measure objectively … so my plan is to share it with my kids and then check in with them at various points over the year to see what they think and where they feel I am.
  3. Learn the SharePoint Framework (SPFx) enough to be truly competent with it. For those of you who might be reading this and aren’t familiar with SPFx, the easiest way to describe it is this way: it’s the only truly viable path forward for developers seeking to stay relevant with SharePoint development. I consider myself a SharePoint developer, but I come from the ranks of the “old guard” who began development with compiled code and SharePoint’s server-side object model. SPFx is Microsoft’s cloud-ready development model and is grounded firmly in JavaScript, along with frameworks, libraries, and other enabling client-side technologies like TypeScript, React, npm, gulp, yeoman, webpack … the list goes on. For classical devs like me, this is traditionally foreign territory; compiled code “it ain’t.” Until now, I’ve known enough SPFx to be dangerous, but I’m on a mission to learn it inside out. There have been many things motivating me and pushing me forward with that goal for quite some time now, but 2020 is when I’m going to internalize SPFx and become proficient with it. Measurement of that goal should be easy in that I’ll be doing project work based on SPFx development, creating client-side web parts, extensions, etc.
  4. Complete a redesign and relaunch of www.bitstreamfoundry.com. This is something that’s been needed for quite some time – longer than I’d care to admit, actually. All that’s out there right now is a contact form with a mention of a new site “coming soon.” Well, that new site will arrive in 2020 – with a little luck and prioritization, sooner rather than later. I’ve picked-up some tools to make WordPress (which this blog and my Bitstream Foundry sites use as a web platform) editing truly WYSIWYG in approach, and I’ve got some ideas on how to put things together. The biggest challenge is (as I tell people), “I’m a plumber, not a painter” – meaning I do web development, but I focus on putting the underlying sites together, not on how they look.
There are many days working with CSS when I feel this way. I’m sure many other development “plumbers” like myself can relate.

Sure, I have some ability to style sites … but I’m not particularly creative in that regard and certainly no ace with CSS. My daughter Sabrina is extremely creative and talented; maybe I can enlist her help in beautifying the redesign … In any regard, a re-launched Bitstream Foundry site will be the measure of success for this goal.

  5. Before the end of 2020, contribute one project to the public domain. There was a time (years ago, at this point) when I wrote software and tools for fun and shared those with the world at large – see my Tools section in the right-hand column. I still write tools, scripts, and develop other (I think) useful “things” … but I’ve not done a great job about sharing them broadly. This year, I want to return to my roots (a little) and get at least one project into the public domain. The measure of success for this one is pretty objective: did I release a tool or project … or not?
  6. Post regularly on this blog. And so we come to the final “Jeez, I’ve said that before” goal. For purposes of measurement, I’m going to shoot for once per month on this … but I’ll allow myself to “slide” to twice every three months if I get slammed. Much like projects and other development, I regularly come up with (and across) things I think would help others. Many of these would make at least decent blog posts, but they don’t always make it here. Wish me luck (and give me a swift kick in the butt) if you see me falling behind.

I could easily go on and on, but I want to stop at no more than a half-dozen goals. Remember: these are goals that I think are attainable, that I can stay focused on, and on which I can measure progress. They’re also important to me for the various reasons cited. They aren’t the only things I’ll be doing, but they will be points of focus.

Recent Developments and Plans

There are a few things I wanted to share so far for this year … and no, I don’t consider these as counting towards my stated goals in any way. These are just more-or-less informational items in case you or someone you know is interested.

Free report – Your Office 365 Journey: Securing Every Stage

A short time ago, I was among a handful of industry folks and other Microsoft MVPs (like Ragnar Heil) who were approached to participate in an effort to share guidance and strategies regarding Office 365 migration, experiences I’ve had with it, lessons I’ve learned along the way, etc. The results of that effort have been compiled into a report that is free to download from the folks at Censornet, so check it out!

Upcoming Digital Workplace webinar with Akumina

On Tuesday, February 12th, I’ll be teaming up with Akumina‘s president David Maffei to deliver a webinar titled “Modern SharePoint + Akumina’s EXP = A True Digital Workplace Experience.” Over the last several years, I’ve done a lot of work building and implementing digital workplaces using the Akumina platform both on-premises and in SharePoint Online. I’ve also done a lot of work with SharePoint Online and on-premises without an accelerator or platform like Akumina in the mix. The purpose of this webinar is to explain what SharePoint, in its modern form, is capable of … and what a platform like Akumina can do to enhance “vanilla” SharePoint and enable a true digital workplace experience. It’s a free webinar, so sign up if it sounds interesting!

Chicago Suburbs M365 2020

Are you familiar with SharePoint Saturday events? They’ve been around for quite a few years now. They are gatherings of people who work with Microsoft SharePoint, regularly speak/educate on it, sell products and services oriented around it … and other folks who are simply SharePoint enthusiasts in some way. The events, which are normally held on Saturdays, are a source of education and information for the SharePoint platform, and they’re (almost) always free. Speakers and other presenters donate their time to deliver sessions, and food and prizes are obtained with the help of sponsor dollars.

In recent years, SharePoint Saturday events have been diversifying and becoming more inclusive than just SharePoint, and this change has been driven by SharePoint Online and the larger Office 365 / Microsoft 365 cloud suite and associated offerings. SharePoint is integrated into so many of the O365 workloads that the lines are really blurry around where SharePoint “stops” and another workload “begins.” As a result, many SharePoint Saturdays have adapted and become “Cloud Saturdays,” “Office 365 Saturdays,” etc., to reflect their broader nature and inclusion of non-SharePoint-specific topics.

I like to volunteer and donate my time and energy at as many of these events as I can get to (and that will accept me), and the next one on my list – my first for 2020 – is the next Chicago Suburbs M365 event on February 29th. If you are in or around the ‘burbs of Chicago, consider coming to the event, meeting others in our space, and learning something along the way. I’ll be presenting a session titled “Getting the Best Performance Out of Your SharePoint Online Site,” and I hope to see some of you there!

References and Resources

  1. Microsoft TechCommunity: How to become a Premier Field Engineer (PFE)
  2. Blog: The Frog Pond of Technology
  3. Blog Post: Looking Ahead To 2019
  4. Wikipedia: Norman Vincent Peale
  5. Microsoft Docs: Overview of the SharePoint Framework
  6. Site: TypeScript
  7. Site: React
  8. Site: npm
  9. Site: gulp
  10. Site: yeoman
  11. Site: webpack
  12. Site: Bitstream Foundry
  13. Site: WordPress
  14. W3C: Cascading Style Sheets
  15. Censornet Report: Your Office 365 Journey: Securing Every Stage
  16. Blog: Ragnar Heil
  17. Company: Censornet
  18. Company: Akumina
  19. Webinar Registration: Modern SharePoint + Akumina EXP = A True Digital Workplace Experience
  20. LinkedIn: David Maffei
  21. Site: SPSEvents.org
  22. Event: Chicago Suburbs M365 2020

The Six Essential Soft Skills for IT Professionals

Not too long ago, I was approached by someone wanting to share some thoughts and information more broadly. After an email conversation with this person, I decided to take a chance and afford her a guest blogger hat.

If you feel particularly moved or inspired by the write-up that follows (regardless of direction), please let me know. If you would like to see more of this type of content, please leave a comment to that effect.

And now I turn things over to our guest writer/blogger, Lisa. Sean out!


Having high levels of technical skill in IT is great. In fact, CNBC reports that big companies like Facebook and Google are looking at skills rather than college degrees, which is a big step towards providing opportunities for deserving candidates who can’t afford college tuition.

But in any industry, soft skills are as necessary as technical skills, since they can ensure enjoyable collaboration with your teammates and a better work experience overall. Unfortunately, these skills are often undervalued, and corporations don’t hold as many training seminars on them compared to hard skills. Thus, we’ve compiled a list of soft skills IT professionals should have to interact well with other people and thrive in their careers.

Empathy

Often seen as a given attribute, empathy is a social skill that needs to be exercised like any muscle. However, in an era dominated by science, technology, engineering, and mathematics, the importance of the social sciences is often overlooked, especially in an increasingly digital world. But make no mistake: Maryville University’s overview of the social sciences highlights how these disciplines can empower you to be better at changing society — it’s not just about the tech you build or the tools you use, but about why you do things at all. The social sciences highlight the connections that bind our society together, and they help you develop a sense of empathy that will make your work valuable not only for whoever uses it, but also for the wider social groups within your team and customer base.

Teamwork

Have you ever heard the phrase “teamwork makes the dream work”? While some IT professionals may prefer to work alone, being able to work well with others and recognize each other’s strengths is a skill that is invaluable in IT. Teamwork fosters a positive environment, and also motivates not just yourself, but the people around you to align with each other and get the ball rolling. After all, behind every successful project is an equally successful team.

Being detail-oriented

In the IT industry (or any industry, for that matter), it pays to be detail-oriented — and in some IT roles, this is a vital skill. This is where you have the ability to repeatedly achieve a level of accuracy, thoroughness, and consistency when doing and accomplishing your tasks. It also means making a conscious effort to understand not just the effects, but the causes of the problems you encounter. As you practice this skill, it will eventually become second nature, and you’ll end up knowing how to pay attention to all the little details without noticing it.

Creativity

Creativity is listed by LinkedIn as the number one soft skill in demand by employers; organizations are looking for creative employees who can build new solutions and bring fresh perspective to the workplace. You may think being creative just means thinking outside the box, but it also manifests in having the enthusiasm to approach new projects in a way that is different from how you or others typically would.

Clear communication

Our founder Sean McDonough explains that communication is the most critical skill in everything we do — and this doesn’t just mean having good verbal communication, but also clear and compelling written communication skills. After all, with the sheer number of emails, proposals, and documents you have to go through, being able to communicate with co-workers and clients on what exactly has to happen is crucial.

Negotiation

Last but not least is knowing how to negotiate. When you’re working with clients or even just your boss, sometimes you’ll have to negotiate deadline extensions or even your salary — and knowing how to do this well is an essential skill to achieve a win-win outcome. People with successful negotiation skills often come in with a goal and some persuasive data, along with a ready ear to listen to the other person’s side. That’s because negotiation isn’t just about winning it your way, but about meeting halfway and reaching a compromise.

There’s no better way to start learning these skills than to simply do them. Do your best to communicate clearly with your peers, show empathy in situations, and pay attention to detail. Practice your negotiation skills where possible, and don’t be afraid to show your creativity. When you know how to appreciate and gain these skills, you’ll be fully equipped to stand out as an IT professional.

Written exclusively for SharePointInterface.com
by Lisa Martin

One Tool to Rule Them All

Microsoft released the second iteration of its Page Diagnostics Tool for SharePoint. If you have an SPO site, you NEED this tool in your toolbox!

Last week, on Wednesday, September 18th, 2019, Microsoft released the second iteration of its Page Diagnostics Tool for SharePoint. An announcement was made, and the Microsoft Docs site was updated, but the day passed with very little fanfare in most circles.

“The One Ring” by Mateus Amaral is licensed under CC BY-NC-ND 4.0 

In my opinion, there should have been fireworks. Lots of fireworks.

What is it?

If you’re not familiar with the Page Diagnostics Tool for SharePoint, then I need to share a little history on how I came to “meet” this tool.

Back in 2018, the SharePoint Conference North America (SPCNA) was rebooted after having been shut down as part of Microsoft’s consolidation of product-specific conferences a number of years earlier. I had the good fortune of making the cut to deliver a couple of sessions at the conference: “Making the Most of OneDrive for Business and SharePoint Online” and “Understanding and Avoiding Performance Pitfalls with SharePoint Online.”

Sometime in the months leading up to the conference, I received an out-of-the-blue email from a guy named Scott Stewart – who at the time was a Senior Program Manager for OneDrive and SharePoint Engineering. In the email, Scott introduced himself and what he did in his role, and he suggested that we collaborate on the performance session I was slated to deliver at SPCNA.

I came to understand that Scott and his team were responsible for addressing and remedying many of the production performance issues that arose in SharePoint Online (SPO). The more that Scott and I chatted, the more it sounded like we were preaching many of the same things when it came to performance.

One thing Scott revealed to me was that at the time, his team had been working on a tool to help diagnose SPO performance issues. The tool was projected to be ready around the time that SPCNA was happening, so I asked him if he’d like to co-present the performance session with me and announce the tool to an audience that would undoubtedly be eager to hear the news. Thankfully, he agreed!

The audience for our performance talk at SPCNA 2018

Scott demo’d version one (really it was more like a beta) during our talk, and the demo demons got the better of him … but shortly after the conference, v1.0 of the tool went live and was available to download as a Chrome browser extension.

So, what does it do?

Simply put, the Page Diagnostics Tool for SharePoint analyzes your browser’s interaction with SPO and points out conditions and configurations that might be adversely affecting your page’s performance.

The first version of the tool only worked for classic publishing pages. And as a tool, it was only available as a Google Chrome Extension:

The Page Diagnostics for SharePoint extension in the Google Chrome Store

The second iteration of the tool that was released last Wednesday addresses one of those limitations: it analyzes both modern and classic SharePoint pages. So you’re covered no matter what’s on your SPO site.

What Can the Tool Tell Me?

For one thing, the tool can get you the metrics I’ve highlighted that are relevant to diagnosing basic page performance issues – most notably, SPRequestDuration and SPIisLatency. But it can do so much more than that!

Many of the adverse performance conditions and scenarios I’ve covered while speaking and in blog posts (such as this one here) are analyzed and called-out by the tool, as well as many other things/conditions, such as navigational style used, whether or not content deployment networks (CDNs) are used by your pages, and quite a few more.

And finally, the tool provides a simple mechanism for retrieving round-trip times for pages and page resource requests. It eliminates the need to pull up Fiddler or your browser’s debug tools to try and track down the right numbers from a scrolling list of potentially hundreds of requests and responses.

How Do I Use It?

It’s easy, but I’ll summarize it for you here.

1. Open the Chrome Web Store. Currently, the extension is only available for Google Chrome. Open Chrome and navigate to https://chrome.google.com/webstore/search/sharepoint directly or search for “SharePoint” in the Chrome Web Store. However you choose to do it, you should see the Page Diagnostics Tool for SharePoint entry within the list of results as shown below.

2. Add the Extension to Chrome. Click the Add to Chrome button. You’ll be taken directly to the diagnostic tool’s specific extension page, and then Chrome will pop up a dialog like the one seen below. The dialog will describe what the tool will be able to do once you install it, and yes: you have to click Add Extension to accept what the dialog is telling you and to actually activate the extension in your browser.

3. Navigate to a SharePoint Online page to begin diagnosing it. Once you’ve got the extension installed, you should have the following icon in the tool area to the right of the URL/address bar in Chrome:

To illustrate how the tool works, I navigated to a modern Communication Site in my Bitstream Foundry tenant:

I then clicked on the SharePoint Page Diagnostics Tool icon in the upper right of the browser (as shown above). Doing so brings up the Page Diagnostics dialog and gives me some options:

Kicking off an analysis of the current page is as simple as clicking the Start button as shown above. Once you do so, the page will reload and the Tool dialog will change several times over the course of a handful of seconds based on what it’s loading, analyzing, and attempting to do.

When the tool has completed its analysis and is ready to share some recommendations, the dialog will change once again to show something similar to what appears below.

Right off the bat, you can see that the Page Diagnostics Tool supplies you with important metrics like SPRequestDuration and SPIisLatency – two measures that are critical to determining where you might have some slowdown, as called out in a previous blog post. But the tool doesn’t stop there.

The tool does many other things – like looking at the size of your images, checking whether or not you’re using structural navigation (because structural navigation is oh so bad for your SPO site performance), determining if you’re using content delivery networks (CDNs) for frequently used scripts and resources, and a whole lot more.

Let’s drill into one of the problem items it calls out on one of my pages:

The tool explains to me, in plain English, what is wrong: Large images detected. An image I’m using is too large (i.e., larger than 300KB). It supplies the URL of the image in question so that I’m not left wondering which image it’s calling out. And if I want to know why 300KB is special or simply learn about the best way to handle images in SharePoint Online, there’s a Learn More link. Clicking that link takes me to this page in Microsoft Docs:

Targeted and detailed guidance – exactly what you need in order to do some site fixup/cleanup in the name of improving performance.
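Incidentally, the heart of this particular check is simple enough to sketch out. The snippet below is a simplified approximation of large-image detection, not the extension’s actual logic – find_large_images is a hypothetical helper, it assumes an authenticated session, and it only sees images for which the server reports a Content-Length:

```python
# A simplified sketch of the large-image check: find the images a page
# references and flag any larger than 300KB. Real SPO pages require
# authentication, and the tool's actual logic is more involved -- this just
# illustrates the idea. Requires the requests and beautifulsoup4 packages.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

MAX_IMAGE_BYTES = 300 * 1024  # the 300KB threshold the tool calls out

def find_large_images(page_url: str, session: requests.Session) -> list[str]:
    html = session.get(page_url).text
    soup = BeautifulSoup(html, "html.parser")
    offenders = []

    for img in soup.find_all("img", src=True):
        image_url = urljoin(page_url, img["src"])
        # A HEAD request is enough to read Content-Length without
        # downloading the whole image (when the server reports it).
        head = session.head(image_url, allow_redirects=True)
        size = int(head.headers.get("Content-Length", 0))
        if size > MAX_IMAGE_BYTES:
            offenders.append(f"{image_url} ({size / 1024:.0f} KB)")

    return offenders
```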

Wrapping-Up

There’s more that the tool can do – like provide round trip times for pages and assets within those pages, as well as supply a couple of data export options if you want to look at the client/server page conversation in a tool that has more capabilities.

As a one-stop shop, though, I’m going to start recommending the tool to everyone with an SPO site for use within their own tenants. I know of no other tool for SharePoint Online that combines this much diagnostic power with this little setup. And the price point is perfect: FREE!

The next time you see Scott Stewart, buy him a beer to thank him for giving us something usable in the fight against poorly performing SPO sites.

References and Resources

  1. Company: Microsoft
  2. Browser Extension: Page Diagnostics for SharePoint
  3. Microsoft Docs: Use the Page Diagnostics for SharePoint tool
  4. Conference: The SharePoint Conference North America
  5. Presentation Resource: Making the Most of OneDrive for Business and SharePoint Online
  6. Presentation Resource: Understanding and Avoiding Performance Pitfalls with SharePoint Online
  7. LinkedIn: Scott Stewart
  8. Blog Post: The Five-Minute Page Performance Troubleshooting Guide for SharePoint Online
  9. Blog Post: Caching, You Ain’t No Friend of Mine
  10. Tool: Telerik Fiddler
  11. Web Page: Chrome Web Store Extensions
  12. Microsoft Docs: Optimize images in SharePoint Online modern site pages

Running As Administrator All The Time

In this post, I review the process of creating Taskbar and Start Menu shortcuts that automatically “Run as Administrator” with a left-click or two.

UPDATE (6/9/2019): Jonathan Mast, who happens to be a pretty sharp guy and friend of mine, saw this post and enlightened me with another tip (which I've tried and verified). If you want to launch an application as an Administrator, you can also press <CTRL><SHIFT> while left-clicking the shortcut. Microsoft officially lists this shortcut among its list of Taskbar keyboard shortcuts here. It just so happens that Jonathan now works for Microsoft!

This post is nothing earth-shattering, and my only hope is that it exposes a person or two to a less-than-obvious technique that might yield some incremental time savings when building shortcuts.

I was building some virtual machines the other day, and I was dropping shortcuts onto the Windows Start Menu and the Windows Taskbar with abandon. Creating shortcuts is relatively easy, but I wanted the applications associated with the shortcuts to run with Administrator privileges.

To launch an application from an associated shortcut, we typically do one of the following:

  • Single left-click an application shortcut icon (for applications on the Start Menu or Taskbar)
  • Double left-click an application shortcut icon (in the case of desktop application shortcuts)

We’ve been doing this for decades now to execute an application. But when we want to launch an application within the security context of an account with Administrator rights, we’ve got to do that right-click thing to select “Run as Administrator” from the list of menu options we’re presented with. It’s a trivial step, I know, but it’s annoying as all get out. My index finger wants to do the clicking, dammit …

Well, there’s a better way to handle this situation. Wouldn’t you like to set up your Start Menu and Taskbar shortcuts to automatically “Run as Administrator” whenever you launch them in the standard left-click (or left double-click) fashion?

The Task at Hand

This is actually relatively easy to do, but I’m sure that there are at least a few out there for whom this will be new knowledge.

For Taskbar-pinned application shortcuts that you always want to launch with Administrator privileges, begin with the same right-click you’d normally use to select “Run as Administrator,” as demonstrated in the image above and to the right for the Windows PowerShell icon I have pinned to my Taskbar. Instead of left-clicking “Run as Administrator” as you normally would, though, right-click the name of the application you want to configure to run in the context of the Administrator account.

In the case of my example, that’s Windows PowerShell. So, I’d right-click once to open the context-sensitive menu seen above, and then I’d right click the “Windows PowerShell” option to open the second context-sensitive menu seen on the left.

Upon selecting “Properties” with a left-click from the second context-sensitive menu shown above and to the left, the Properties dialog box appears for the application (as shown below).

Upon seeing this dialog box, left-click the “Advanced …” button that appears approximately two-thirds of the way down the dialog on the right. When you click the “Advanced …” button, you’ll see an “Advanced Properties” dialog open as seen below.

At this point, simply click on the “Run as administrator” checkbox and click the “OK” button on all of the open dialogs to apply your changes. From this point forward, whenever you left-click on the Taskbar shortcut you’ve just configured, the associated application will launch in the context of the Administrator account!

What About Start Menu Items?

Setting up Start Menu shortcuts to “Run as Administrator” is really just a variation on the theme we’ve already established. As with the Taskbar shortcuts, we begin by right-clicking the desired shortcut. In this example, I’m going to use the “.Net Reflector 9.0” shortcut:

After the first right-click, I then hover over or expand the “More” menu item and select the “Open file location” option:

This will open Windows Explorer to the location in the local file system of the shortcut we’re interested in configuring.

From this point onward, it’s the same as it was with the Taskbar shortcuts: right-click the shortcut, select “Properties,” click the “Advanced…” button, and check the “Run as administrator” box for the shortcut to have the associated application launch in the Administrator context. And if you’d rather script the change than click through it, see the sketch below.
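As an aside for anyone staring down dozens of shortcuts (as I was with my new VMs): that “Run as administrator” checkbox ultimately just sets the RunAsUser flag (0x2000) in the LinkFlags field of the .lnk file’s header, per the [MS-SHLLINK] file format specification. Here’s a rough Python sketch that flips the bit directly – a sketch under that assumption rather than a supported approach, with set_run_as_admin and the example path being my own hypothetical names:

```python
# The "Run as administrator" checkbox sets the RunAsUser bit (0x2000) in
# the 4-byte LinkFlags field of a shortcut's header ([MS-SHLLINK]).
# LinkFlags starts at offset 0x14; little-endian, so bit 0x2000 lives in
# the byte at offset 0x15 as 0x20. This rewrites the file in place --
# back up your shortcuts before experimenting.
from pathlib import Path

RUN_AS_USER_BYTE = 0x15  # second byte of the LinkFlags field
RUN_AS_USER_BIT = 0x20   # 0x2000 in the little-endian LinkFlags value

def set_run_as_admin(lnk_path: str) -> None:
    data = bytearray(Path(lnk_path).read_bytes())
    data[RUN_AS_USER_BYTE] |= RUN_AS_USER_BIT
    Path(lnk_path).write_bytes(data)

# Hypothetical usage -- point it at one of your own shortcuts:
# set_run_as_admin(r"C:\Users\YourUser\AppData\Roaming\Microsoft\Windows"
#                  r"\Start Menu\Programs\Windows PowerShell\Windows PowerShell.lnk")
```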

Wrap-up

This post wasn’t rocket science, but when I was reminded of the shortcut configuration process (by the recent creation of a new batch of SharePoint 2019 VMs and all of their shortcuts), I figured sharing it might help a person or two. And after all, that’s what it’s all about! Besides, it gave me the chance to write something up, so I consider it an all-around win. I hope that you feel the same way!