In this post, I highlight one of the lesser-understood requirements of the Windows 11 install process.
The arrival of Windows 11 is imminent – that much you are probably aware of. If you didn’t know, well, now you do …
Windows 11 promises to do everything that Windows didn’t do before. It’s been “redesigned for productivity, creativity, and ease.” I have no doubt that it will bring some new capabilities and features with it, but I’m not entirely sure how far the changes will extend.
Because I’m part of the Windows Insider program (I suspect many of you are, as well), I’ve been getting regular OS updates that have extended beyond standard Windows Updates for some time now. In fact, the Beta Channel that I’ve been keeping my machines in gives me early access to Windows 11 builds, and I did get an obvious Windows 11 build installed on my laptop just a couple of days ago.
I didn’t, however, get the same build on my primary workstation. After a little checking, I realized my primary workstation had been “demoted” to the Release Preview Channel within the Windows Insider program:
The Release Preview Channel gets you features and fixes in advance, but it doesn’t get you Windows 11.
It wasn’t immediately clear to me why my primary workstation had been recategorized. I had to read through some old email to understand what had happened.
Do You Trust Me?
The reason for the Threadripper’s demotion can best be summed up this way: it was an issue of trust.
More accurately: Microsoft couldn’t detect an active Trusted Platform Module (TPM) within my system, and so I didn’t appear to meet the minimum hardware requirements for Windows 11 seen below:
Platform security is an important topic and a concern of mine, but I need to be forthcoming with you: in the past, I really didn’t care too much about what TPMs did or how they worked. I knew that they were present in a lot of hardware (particularly laptops). If anything, that TPM hardware caused me headaches on systems that I simply wanted to set up without the need to “secure boot.” It seemed like it was never as easy to simply install an OS on hardware that included a TPM as it was on other hardware.
TPM hardware has matured over time (we’re on v2.0), and if you want to install Windows 11, you’re going to need to turn that TPM on, so you should learn a little about it.
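For the curious, the published minimums can be expressed as a simple check. The sketch below is illustrative only – the authoritative check is Microsoft’s PC Health Check tool – and the workstation spec values are placeholders standing in for my machine:

```python
# Illustrative sketch of the published Windows 11 hardware minimums.
# The real gatekeeper is Microsoft's PC Health Check tool; this just
# encodes the commonly published floor values.

def meets_windows11_minimums(spec: dict) -> list:
    """Return a list of requirements the machine fails (empty = passes)."""
    failures = []
    if spec.get("tpm_version", 0) < 2.0:
        failures.append("TPM 2.0")
    if spec.get("cpu_cores", 0) < 2 or spec.get("cpu_ghz", 0) < 1.0:
        failures.append("1 GHz / 2-core 64-bit CPU")
    if spec.get("ram_gb", 0) < 4:
        failures.append("4 GB RAM")
    if spec.get("storage_gb", 0) < 64:
        failures.append("64 GB storage")
    if not spec.get("uefi_secure_boot", False):
        failures.append("UEFI with Secure Boot capability")
    return failures

# A beefy workstation (placeholder numbers) with its firmware TPM
# disabled in the BIOS -- so the TPM reports as absent:
threadripper = {
    "tpm_version": 0,          # fTPM off -> not detected by Windows
    "cpu_cores": 24, "cpu_ghz": 3.8,
    "ram_gb": 128, "storage_gb": 2000,
    "uefi_secure_boot": True,
}
print(meets_windows11_minimums(threadripper))   # -> ['TPM 2.0']
```

This is exactly the situation I found myself in: one disabled chip was enough to fail an otherwise massively overqualified machine.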
It seems that TPM chips do quite a bit. If you want to turn on Windows BitLocker these days (a good idea), the TPM gets involved. In essence, the TPM is your system’s crypto companion: a dedicated chip that generates and safeguards the cryptographic keys used for things like drive encryption and verifying your boot process. I’m sure it does more than key management, but that fact alone earns my respect. What a thankless job!
As you folks who follow me on this blog know, the Threadripper was built about a year ago. It naturally has a TPM, but I hadn’t enabled it. While browsing posts on the net, I learned that Asus had been hard at work on BIOS updates that would more easily enable TPMs for DIY builders (like myself), make them visible to Windows, and clear the path to a Windows 11 upgrade. So, I did things the Asus way and rebooted my system with a USB drive that had an updated firmware image on it:
… and got my system BIOS up to v1502. It was a piece of cake, and when I went back to my Windows Insider settings (post-upgrade), it looked like I was sitting pretty:
But most importantly, it made the presence and the function of the TPM on the mainboard visible:
So if you want to be Windows 11 ready and ensure a smooth experience, make sure your TPM is visible in the system:
The last time I wrote about the network-attached storage (NAS) appliance that the good folks at Synology had sent my way, I spent a lot of time talking about how amazed I was at all the things that NAS appliances could do these days. They truly have come a very long way in the last decade or so.
Once I got done gushing about the DiskStation DS220+ that I had sitting next to my primary work area, I realized that I should probably do a post about it that amounted to more than a “fanboy rant.”
This is an attempt at “that post” and contains some relevant specifics on the DS220+’s capabilities as well as some summary words about my roughly five or six months of use.
First Up: Business
As the title of this post alluded to, I’ve found uses for the NAS that would be considered “work/business,” others that would be considered “play/entertainment,” and some that sit in-between. I’m going to start by outlining the way I’ve been using it in my work … or more accurately, “for non-play purposes.”
But first: one of the things I found amazing about the NAS that really isn’t a new concept is the fact that Synology maintains an application site (they call it the “Package Center”) that is available directly from within the NAS web interface itself:
Much like the application marketplaces that have become commonplace for mobile phones, or the Microsoft Store which is available by default to Windows 10 installations, the Package Center makes it drop-dead-simple to add applications and capabilities to a Synology NAS appliance. The first time I perused the contents of the Package Center, I kind of felt like a kid in a candy store.
Backup and restore, as well as Disaster Recovery (DR) in general, are concepts I have some history and experience with. What I don’t have a ton of experience with is the way that companies are handling their DR and BCP (business continuity planning) for the cloud-centric services they consume.
What little experience I do have generally leads me to categorize people into two different camps:
Those who rely upon their cloud service provider for DR. As a generalization, there are plenty of folks that rely upon their cloud service provider for DR and data protection. Sometimes folks in this group wholeheartedly believe, right or wrong, that their cloud service’s DR protection and support are robust. Oftentimes, though, the choice is simply made by default, without solid information, or simply because building one’s own DR plan and implementing it is not an inexpensive endeavor. Whatever the reason(s), folks in this group are attached at the hip to whatever their cloud service provider has for DR and BCP – for better or for worse.
Those who don’t trust the cloud for DR. There are numerous reasons why someone may choose to augment a cloud service provider’s DR approach with something supplemental. Maybe they simply don’t trust their provider. Perhaps the provider has a solid DR approach, but the RTO and RPO values quoted by the provider don’t line up with the customer’s specific requirements. It may also be that the customer simply doesn’t want to put all of their DR eggs in one basket and wants options they control.
In reality, I recognize that this type of down-the-middle split isn’t entirely accurate. Most people fall somewhere along the spectrum between the two extremes.
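To make the RTO/RPO point a bit more concrete, here’s some back-of-the-envelope math. The numbers are made up purely for illustration: RPO is bounded by how often backups run, and RTO is bounded by how fast you can pull data back.

```python
# Illustrative only: back-of-the-envelope RPO/RTO math with made-up numbers.

def worst_case_rpo_hours(backup_interval_hours: float) -> float:
    """If backups run every N hours, you can lose up to N hours of data."""
    return backup_interval_hours

def estimated_rto_hours(data_gb: float, restore_mb_per_sec: float) -> float:
    """Rough time to pull the data back at a given restore throughput."""
    return (data_gb * 1024) / restore_mb_per_sec / 3600

# A hypothetical provider that snapshots daily and restores at ~50 MB/s:
print(worst_case_rpo_hours(24))                 # up to 24 hours of lost data
print(round(estimated_rto_hours(500, 50), 1))   # roughly 2.8 hours for 500 GB
```

If your requirement is an RPO of one hour, a provider’s daily snapshot doesn’t cut it no matter how fast the restore runs – which is exactly the sort of mismatch that pushes people toward a supplemental backup they control.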
Microsoft 365 Data Protection
On the specific topic of Microsoft 365 data protection, I tend to sit solidly in the middle of the two extremes I just described. I know that Microsoft takes steps to protect 365 data, but good luck finding a complete description or metrics around the measures they take. If I had to recover some data, I’m relatively (but not entirely) confident I could open a service ticket, make the request, and eventually get the data back in some form.
The problem with this approach is that it’s filled with assumptions and not a lot of objective data. I suspect part of the reason for this is that actual protection windows and numbers are always evolving, but I just don’t know.
In all honesty, I wasn’t actually looking for supplemental Microsoft 365 data protection at the time. Knowing the price tag on some of the services and packages that are sold to address protection needs, I couldn’t justify (as a “home user”) the cost.
I was pleasantly surprised to learn that the Synology solution/package was “free” – or rather, if you owned one of Synology’s NAS devices, you had free access to download and use the package on your NAS.
The price was right, so I decided to install the package on my DS220+ and take it for a spin.
Kicking The Tires
First impressions and initial experiences mean a lot to me. For the brief period of time when I was a product manager, I knew that a bad first experience could shape someone’s entire view of a product.
I am therefore very happy to say that the Synology backup application was a breeze to get set up – something I initially felt might not be the case. The reason for my initial hesitancy was the fact that applications and products that work with Microsoft 365 need to be registered as trusted applications within the M365 tenant they’re targeting. Most of the products I’ve worked with that need to be set up in this capacity involve a fair amount of manual legwork: certificate preparation, finding and granting permissions within a created app registration, etc.
Not Synology’s backup package. From the moment you press the “Create” button and indicate that you want to establish a new backup of Microsoft 365 data, you’re provided with solid guidance and hand-holding throughout the entire setup and app registration process. Of all of the apps I’ve registered in Azure, Synology’s process and approach has been the best – hands-down. It took no more than five minutes to establish a recurring backup against a tenant of mine.
I’ve included a series of screenshots (below) that walk through the backup setup process.
What Goes In, SHOULD Come Out ...
When I would regularly speak on data protection and DR topics, I had a saying that I would frequently share: “Backup is science, but Restore is an art.” A decade or more ago, those tasked with backing up server-resident data often took a “set it and forget it” approach to data backups. And when it came time to restore some piece of data from those backups, many of the folks who took such an approach would discover (to their horror) that their backups had been silently failing for weeks or months.
Moral of the story (and a 100-level lesson in DR): If you establish backups, you need to practice your restore operations until you’re convinced they will work when you need them.
Synology approaches restoration in a very straightforward fashion that works very well (at least in my use case). There is a separate web portal from which restores and exports (from backup sets) are conducted.
And in case you’re wondering: yes, this means that you can grant some or all of your organization (or your family, if you’re like me) self-service restore capabilities. Backup and restore are handled separately from one another.
As the series of screenshots below illustrates, there are five slightly different restore presentations for each of the five areas backed up by the Synology package: (OneDrive) Files, Email, SharePoint Sites, Contacts, and Calendars. Restores can be performed from any backup set and offer the ability to select the specific files/items to recover. The ability to do an in-place restore or an export (which is downloaded by the browser) is also available for all items being recovered. Pretty handy.
Will It Work For You?
I’ve got to fall back on the SharePoint consultant’s standard answer: it depends.
I see something like this working exceptionally well for small-to-mid-sized organizations that have smaller budgets and already overburdened IT staff. Setting up automated backups is a snap, and enabling users to get their data back without a service ticket and/or IT becoming the bottleneck is a tremendous load off of support personnel.
My crystal ball stops working when we’re talking about larger companies and enterprise scale. All sorts of other factors come into play with organizations in this category. A NAS, regardless of capabilities, is still “just” a NAS at the end of the day.
My DS220+ has two 2TB drives in it. I/O to the device is snappy, but I’m only one user. Enterprise-scale performance isn’t something I’m really equipped to evaluate.
Then there are the questions of identity and Active Directory implementation. I’ve got a very basic AD implementation here at my house, but larger organizations typically have alternate identity stores, enforced group policy objects (GPOs), and all sorts of other complexities that tend to produce a lot of “what if” questions.
Larger organizations are also typically interested in advanced features, like integration with existing enterprise backup systems, different backup modes (differential/incremental/etc.), deduplication, and other similar optimizations. The Synology package, while complete in terms of its general feature set, doesn’t necessarily possess all the levers, dials, and knobs an enterprise might want or need.
So, I happily stand by my “solid for small-to-mid-sized companies” outlook … and I’ll leave it there. For no additional cost, Synology’s Active Backup for Microsoft 365 is a great value in my book, and I’ve implemented it for three tenants under my control.
Rounding Things Out: Entertainment
I did mention some “play” along with the work in this post’s title – not something that everyone thinks about when envisioning a network storage appliance. Or rather, I should say that it’s not something I had considered very much.
My conversations with the Synology folks and trips through the Package Center convinced me that there were quite a few different ways to have fun with a NAS. There are two packages I installed on my NAS to enable a little fun.
Package Number One: Plex Server
Admittedly, this is one capability I knew existed prior to getting my DS220+. I’ve been an avid Plex user and advocate for quite a few years now. When I first got on the Plex train in 2013, it represented more potential than actual product.
Nowadays (after years of maturity and expanding use), Plex is a solid media server for hosting movies, music, TV, and other media. It has become our family’s digital video recorder (DVR), our Friday night movie host, and a great way to share media with friends.
I’ve hosted a Plex Server (self-hosted virtual machine) for years, and I have several friends who have done the same. At least a few of my friends are hosting from NAS devices, so I’ve always had some interest in seeing how Plex would perform on a NAS device versus my VM.
As with everything else I’ve tried with my DS220+, it’s a piece of cake to actually get a Plex Server up-and-running. Install the Plex package, and the NAS largely takes care of the rest. The server is accessible through a browser, a Plex client, or directly from the NAS web console.
I’ve tested a bit, but I haven’t decommissioned the virtual machine (VM) that is my primary Plex Server – and I probably won’t. A lot of people connect to my Plex Server, and that server has had multiple transcodes going while serving up movies to multiple concurrent users – tasks that are CPU, I/O, and memory intensive. So while the NAS does a decent job in my limited testing here at the house, I don’t have data that convinces me that I’d continue to see acceptable performance with everyone accessing it at once.
One thing that’s worth mentioning: if you’re familiar with Plex, you know that they have a pretty aggressive release schedule. I’ve seen new releases drop on a weekly basis at times, so it feels like I’m always updating my Plex VM.
What about the NAS package and updates? Well, the NAS is just as easy to update. Updated packages don’t appear in the Package Center with the same frequency as the new Plex Server releases, and you won’t get the same one-click server update support (a feature that never worked for me since I run Plex Server non-interactively in a VM), but you do get a link to download a new package from the NAS’s update notification:
The “Download Now” button initiates the download of an .SPK file – a Synology/NAS package file. The package file then needs to be uploaded from within the Package Center using the “Manual Install” button:
And that’s it! As with most other NAS tasks, I would be hard-pressed to make the update process any easier.
Package Number Two: Docker
If you read the first post I wrote back in February as a result of getting the DS220+, you might recall me mentioning Docker as another of the packages I was really looking forward to taking for a spin.
The concept of containerized applications has been around for a while now, and it represents an attractive way to stand up application functionality without an administrator or installer needing to understand all of the ins and outs of a particular application stack, its prerequisites and dependencies, etc. All that’s needed is a container image and a host.
So, to put it another way: there are literally millions of Docker container images available that you could download and get running in Docker with very little time invested on your part to make a service or application available. No knowledge of how to install, configure, or setup the application or service is required on your part.
Let's Go Digging
One container I had my eye on from the get-go was itzg’s Minecraft Server container. itzg is the online handle used by a gentleman named Geoff Bourne from Texas, and he has done all of the work of preparing a Minecraft server container that is as close to plug-and-play as containers come.
Minecraft (for those of you without children) is an immensely popular game available on many platforms and beloved by kids and parents everywhere. Minecraft has a very deep crafting system and focuses on building and construction rather than on “blowing things up” (although you can do that if you truly want to) as so many other games do.
My kids and I have played Minecraft together for years, and I’ve run various Minecraft servers in that time that friends have joined us in play. It isn’t terribly difficult to establish and expose a Minecraft server, but it does take a little time – if you do it “manually.”
I decided to take Docker for a run with itzg’s Minecraft server container, and we were up-and-running in no time. The NAS Docker package has a wonderful web-based interface, so there’s no need to drop down to a command line – something I appreciate (hey, I love my GUIs). You can easily make configuration changes (like swapping the TCP port that responds to game requests), move an existing game’s files onto/off of the NAS, and more.
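To give a sense of how little configuration is involved, here’s roughly what the same setup looks like expressed as a docker-compose file. This is a hedged sketch: the volume path and memory value are placeholders you’d adjust for your own hardware, and on the Synology you’d enter the equivalent settings through the Docker package’s web UI rather than writing YAML.

```yaml
# Minimal sketch of running itzg's Minecraft server container.
# The volume path and MEMORY value are placeholders for your own setup.
version: "3"
services:
  minecraft:
    image: itzg/minecraft-server
    ports:
      - "25565:25565"           # default Minecraft port; remappable
    environment:
      EULA: "TRUE"              # required; accepts Mojang's EULA
      MEMORY: "2G"              # server heap -- tune to your hardware
    volumes:
      - ./minecraft-data:/data  # world files live here, easy to migrate
    restart: unless-stopped
```

Keeping the world files in a mapped volume like `/data` is what makes moving an existing game onto (or off of) the NAS so painless.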
I actually decided to move our active Minecraft “world” (in the form of the server data files) onto the NAS, and we ran the game from the NAS for about two months. Although we had some unexpected server stops, the NAS performed admirably with multiple players concurrently. I suspect the server stops were actually updates of some form taking place rather than a problem of some sort.
The NAS-based Docker server performed admirably for everything except Elytra flight. In all fairness, though, I haven’t been on a server of any kind yet where Elytra flight works in a way I’d describe as “well” largely because of the I/O demands associated with loading/unloading sections of the world while flying around.
After a number of months of running with a Synology NAS on my network, I can’t help but say again that I am seriously impressed by what it can do and how it simplifies a number of tasks.
I began the process of server consolidation years ago, and I’ve been trying to move some tasks and operations out to the cloud as it becomes feasible to do so. Where I once wouldn’t have thought twice about adding another Windows server to my infrastructure, I’m now looking at things differently. Anything a NAS can do more easily (which is the majority of what I’ve tried), I see myself trying there first.
I once had an abundance of free time on my hands. But that was 20 – 30 years ago. Nowadays, I’m in the business of simplifying and streamlining as much as I can. And I can’t think of a simpler approach for many infrastructure tasks and needs than using a NAS.
In all honesty, this post is quite overdue. The topic is one that I started digging into before the end of last year (2020), and in a “normal year” I’d have been more with it and shared a post sooner. To be fair, I’m not even sure what a “normal year” is, but I do know this: I’d be extremely hard-pressed to find anyone who felt that 2020 was a normal year …
I need to rewind a little to explain “the gift” and the backstory behind it. Technically speaking, “the gift” in question wasn’t so much a gift as it was something I received on loan. I do have hopes that I’ll be allowed to keep it … but let me avoid putting the cart before the horse.
I received the DS220+ during the latter quarter of 2020, and I’ve had it running since roughly Christmastime.
How did I manage to come into possession of this little beauty? Well, that’s a bit of a story …
Back in October 2020, about a week or two before Halloween, I was checking my email one day and found a new email from a woman named Sarah Lien in my inbox. In that email, Sarah introduced herself and explained that she was with Synology’s Field and Alliance Marketing. She went on to share some information about Synology and the company’s offerings, both hardware and software.
I’m used to receiving emails of this nature semi-regularly, and I use them as an opportunity to learn and sometimes expand my network. This email was slightly different, though, in that Sarah was reaching out to see if we might collaborate in some way around Synology’s NAS offerings and software written specifically for NAS that could back up and protect Microsoft 365 data.
Normally, these sorts of situations and arrangements don’t work out all that well for me. Like everyone else, I’ve got a million things I’m working on at any given time. As a result, I usually can’t commit to most arrangements like the one Sarah was suggesting – as interesting as I think some of those cooperative efforts might turn out to ultimately be.
Nevertheless, I was intrigued by Sarah’s email and offer. So, I decided to take the plunge and schedule a meeting with her to see where a discussion might lead.
One thing I learned pretty quickly about Sarah: she’s a very friendly and incredibly understanding person. One would have to be to remain so good-natured when some putz (me) completely stands you up for a scheduled call. Definitely not the first impression I wanted to make …
I’m happy to say that the second time was a charm: I managed to actually show up on-time (still embarrassed) and Sarah and I, along with her coworker Patrick, had a really good conversation.
Synology has been in the NAS business for quite some time. I’d been familiar with the company by name, but I didn’t have any familiarity with their NAS devices.
Long story short: Sarah wanted to change that.
The three of us discussed the variety of software available for the NAS – like Active Backup for Microsoft 365 – as well as some of the capabilities of the NAS devices themselves.
Interestingly enough, the bulk of our conversation didn’t revolve around Microsoft 365 backup as I had expected. What really caused Patrick and me to geek-out was a conversation about Plex and the Synology app that turned a NAS into a Plex Server.
The Plex Flex
Not familiar with Plex? Have you been living under a rock for the last half-decade?
Plex is an ever-evolving media server, and it has been around for quite some time. I bought my Plex Lifetime Pass (not required for use, but affords some nice benefits) back in September of 2013 for $75. The system was more of a promise at that point in time than a usable, reliable media platform. A lifetime pass goes for $120 these days, and the platform is highly capable and evolved.
Plex gives me a system to host and serve my media (movies, music, miscellaneous videos, etc.), and it makes it ridiculously easy to both consume and share that media with friends. Nearly every smart device has a Plex client built-in or available as a free download these days. Heck, if you’ve got a browser, you can watch media on Plex:
I’m a pretty strong advocate for Plex, and I share my media with many of my friends (including a lot of folks in the M365 community). I even organized a Facebook group around Plex to update folks on new additions to my library, host relevant conversations, share server invites, and more.
An Opportunity To Play
I’ve had my Plex Server up-and-running for years, so the idea of a NAS doing the same thing wasn’t something that was going to change my world. But I did like the idea of being able to play with a NAS and put it through its paces. Plex just became the icing on the cake.
After a couple of additional exchanges and discussions, I got lucky (note: one of the few times in my life): Sarah offered to ship me the DS220+ seen at the top of this post for me to play with and put through its paces! I’m sure it comes as no surprise to hear me say that I eagerly accepted Sarah’s generous offer.
Sarah got my address information, confirmed a few things, and a week or so later I was informed that the NAS was on its way to me. Not long after that, I found this box on my front doorstep.
Finally Setting It Up
The box arrived … and then it sat for a while.
The holidays were approaching, and I was preoccupied with holiday prep and seasonal events. I had at least let Sarah know that the NAS made it to me without issue, but I had to admit in a subsequent conversation that I hadn’t yet “made time” to start playing around with it.
Sarah was very understanding and didn’t pressure me for feedback, input, or anything. In fact, her being so nice about the whole thing really started to make me feel guilty.
Guilt can be a powerful motivator, and so I finally made the time to unbox the NAS, set it up, and play around with it a little.
Here are a series of shots I took as I was unpacking the DS220+ and getting it setup.
It was very easy to get up-and-running … which is a good thing, because the instructions in the package were literally just the small foldout shown in the slides above. I’d say the Synology folks did an excellent job simplifying what had the potential to be a confusing process for those who might not be technical powerhouses.
And eventually … power-on!
Once I got the DS220+ running, I started paying a little more attention to all the ports, capabilities in the interface, etc. And to tell you the truth, I was simply floored.
First off, the DS220+ is a surprisingly capable NAS – much more than I originally envisioned or expected. I’ve had NAS devices before, but my experience – like those NAS devices – is severely dated. I had an old Buffalo Linkstation which I never really took a liking to. I also had a couple of Linksys Network Storage Link devices. They worked “well enough,” but the state of the art has advanced quite a bit in the last 15+ years.
The unit sports two 3.5″ drive bays with RAID-1 (mirroring) support.
It’s worth noting that the 2GB of RAM that is soldered into the device can be expanded to 6GB with the addition of a 4GB SODIMM. Also, the two RJ-45 ports support Link Aggregation.
I’m planning to expand the RAM ASAP (I’ve already ordered a chip from Amazon). And given that I’ve got 10Gbps optical networking in my house – and the switch next to me is pretty darned advanced and seems to support every standard under the sun – I’m looking forward to seeing if I can “goose things” a bit with the Link Aggregation capability.
What I’m sharing here just scratches the surface of what the device is capable of. Seriously – check out the datasheet to see what I’m talking about!
But Wait - There's More!
I realize I’m probably giving off something of a fanboy vibe right now, and I’m really kind of okay with that … because I haven’t even really talked about the applications yet.
Once powered-on, the basic interface for the NAS is a browser-based pseudo desktop that appears as follows:
This interface is immediately available following setup and startup of the NAS, and it provides all manner of monitoring, logging, and performance tracking within the NAS itself. The interface can also be customized a fair bit to fit preferences and/or needs.
The cornerstone of any NAS is its ability to handle files, and the DS220+ is capable with files on so many levels. Opening the NAS Control Panel and checking out related services in the Info Center, we see file basics like NFS and SMB … and so much more.
The above screen is dense; there is a lot of information shown and communicated. And each of the tabs and nodes in the Control Panel is similarly dense with information. Hardware geeks and numbers freaks have plenty to keep themselves busy with when examining a DS220+.
But the applications are what truly have me jazzed about the DS220+. I briefly mentioned the Office 365 backup app and the Plex Server app earlier. But those are only two from an extensive list:
Many of these apps aren’t lightweight fare by any stretch. In addition to the two I already mentioned having an interest in, I really want to put the following apps through their paces:
Audio Station. An audio-specific media server that can be linked with Amazon Alexa (important in our house). I don’t see myself using this long term, but I want to try it out.
Glacier Backup. Provides the NAS with an interface into Amazon Glacier storage – something I’ve found interesting for ages but never had an easy way to play with or test.
Docker. Yes, a full-on Docker container host server! If something isn’t available as a NAS app, chances are it can be found as a Docker container. I’m actually going to see how well the NAS might do as a Minecraft Server. The VM my kids and I (and Anders Rask) play on has some I/O issues. Wouldn’t it be cool if we could move it into a lighter-weight but better-performing NAS/Docker environment?
Part of the reason for ordering the memory expansion was that I expect the various server apps and advanced capabilities to work the NAS pretty hard. My understanding is that the Celeron chip the DS220+ employs is fairly capable, but tripling the memory to 6GB is doing what I can to help it along.
I could go on and on about all the cool things I seem to keep finding in the DS220+ … and I might in future posts. I’d really like to be a little more directed and deliberate about future NAS posts, though. Although I believe many of you can understand and perhaps share in my excitement, this post doesn’t do much to help anyone or answer specific questions.
I suspect I’ll have at least another post or two summarizing some of the experiments (e.g., with the Minecraft Docker container) I indicated I’d like to conduct. I will also be seriously evaluating the Microsoft 365 Backup Application and its operation, as I think that is a topic many of you would be interested in reading my summary and assessment of.
Stay tuned in the coming weeks/months. I plan to cover other topics besides the NAS, but I also want to maximize my time and experience with my “gift of NAS.”
This post is me finally doing what I told so many people I was going to do a handful of weeks back: share the “punch list” (i.e., the parts list) I used to put together my new workstation. And unsurprisingly, I chose to build my workstation around AMD’s Threadripper CPU.
I make a living and support my family through work that depends on a computer, as I’m sure many of you do. And I’m sure that many of you can understand when I say that working on a computer day-in and day-out, one develops a “feel” for its performance characteristics.
While undertaking project work and other “assignments” over the last bunch of months, I began to feel like my computer wasn’t performing with the same “pep” that it once had. It was subtle at first, but I began to notice it more and more often – and that bugged me.
So, I attempted to uninstall some software, kill off some boot-time services and apps that were of questionable use, etc. Those efforts sometimes got me some performance back, but the outcome wasn’t sustained or consistent enough to really make a difference. I was seriously starting to feel like I was wading through quicksand anytime I tried to get anything done.
The Last Straw
There isn’t any one event that made me think “Jeez, I really need a new computer” – but I still recall the turning point for me because it’s pretty vivid in my mind.
I subscribe to the Adobe Creative Cloud. Yes, it costs a small fortune each year, and each time I pay the bill, I wonder if I get enough use out of it to justify the expense. I invariably decide that I do end up using it quite a bit, though, so I keep re-upping for another year. At least I can write it off as a business expense.
Well, I was trying to go through a recent batch of digital photos using Adobe Lightroom, and my system was utterly dragging. And whenever my system does that for a prolonged period, I hop over to the Windows Task Manager and start monitoring. And when I did that with Lightroom, this is what I saw:
Note the 100% CPU utilization in the image. Admittedly, Rambox Pro looks like the culprit here, and it was using a fair bit of memory … but that’s not the whole story.
Since the start of this ordeal, I’ve become more judicious in how many active tabs I spin-up in Rambox Pro. It’s a great utility, but like every Chromium-based tool, it’s an absolute pig when it comes to memory usage. Have you ever looked at your memory consumption when you have a lot of Google Chrome tabs open? That’s what’s happening with Rambox Pro. So be warned and be careful.
I’m used to the CPU spiking for brief periods of time, but the CPU sat pegged at 100% utilization for the duration that Lightroom was running – literally the entire time. And not until I shut down Lightroom did the utilization start to settle back down.
I thought about this for a while. I know that Adobe does some work to optimize/enhance its applications to make the most of systems with multiple CPU cores and symmetric multiprocessing when it’s available to the applications. The type of tasks most Adobe applications deal with are the sort that people tend to buy beefy machines for, after all: video editing, multimedia creation, image manipulation, etc.
After observing Lightroom and how it brought my processor to its knees, I decided to do a bit of research.
Research and Realization
At the time, my primary workstation was operating based on an Intel Core i7-5960X Extreme processor. When I originally built the system, there was no consumer desktop processor that was faster or had more cores (that I recall). Based on the (then) brand-new Haswell-E series from Intel, the i7-5960X had eight cores that each supported hyperthreading. It had an oversized L3 cache of 20MB, “new” virtualization support and extensions, 40 PCIe lanes, and all sorts of goodies baked-in. I figured it was more than up to handling current, modern-day workstation tasks.
Yeah – not quite.
In researching that processor, I learned that it had been released in September of 2014 – roughly six years prior. Boy, six years flies by when you’re not paying attention. Life moves on, but like a new car that’s just been driven off the lot, that shiny new PC you just put together starts losing value as soon as you power it up.
The Core i7 chip and the system based around it are still very good at most things today – in fact, I’m going to set my son up with that old workstation as an upgrade from his Core i5 (which he uses primarily for video watching and gaming). But for the things I regularly do day in and day out – running VMs, multimedia creation and editing, etc. – that Core i7 system is significantly behind the times. With six years under its belt, a computer system tends to start receiving email from AARP …
The Conversation and Approval
So, my wife and I had “the conversation,” and I ultimately got her buy-in on the construction of a new PC. Let me say, for the record, that I love my wife. She’s a rational person, and as long as I can effectively plead my case that I need something for my job (being able to write it off helps), she’s behind me and supports the decision.
Tracy and I have been married for 17 years, so she knows me well. We both knew that the new system was going to likely cost quite a bit of money to put together … because my general thinking on new computer systems (desktops, servers, or whatever) boils down to a few key rules and motivators:
1. Nine times out of ten, I prefer to build a system (from parts) over buying one pre-assembled. This approach ensures that I get exactly what I want in the system, and it also helps with the “continuing education” associated with system assembly. It also forces me to research what’s currently available at the time of construction, and that invariably ends up helping at least one or two friends in the assembly of new systems that they want to put together or purchase.
2. I generally try to build the best-performing system I can with what’s available at the time. I’ll often opt for a more expensive part if it’s going to keep the system “viable” for a longer period of time, because getting new systems isn’t something I do very often. I would absolutely love to get new systems more often, but I’ve got to make these last as long as I can – at least until I’m independently wealthy (heh … don’t hold your breath – I’m certainly not).
3. As an adjunct to point #2 (above), I tend to opt for more expensive parts and components if they will result in a system build that leaves room for upgrades/part swaps down the road. Base systems may roll over only every half-dozen years or so, but parts and upgrades tend to flow into the house at regular intervals. Nothing simply gets thrown out or decommissioned. Old systems and parts go to the rest of the family, get donated to a friend in need, etc.
4. When I’m building a system, I have a use in mind. I’m fortunate that I can build different computers for different purposes, and I have two main systems that I use: a primary workstation for business, and a separate machine for gaming. That doesn’t mean I won’t game on my workstation (and vice-versa), but any such usage is secondary; I select parts for a system’s intended purpose.
5. Although I strive to be on the cutting edge, I’ve learned that it’s best to stay off the bleeding edge when it comes to my primary workstation. I’ve been burned a time or two by trying to get the absolute best and newest tech. When you depend on something to earn a living, it’s typically not a bad idea to prioritize stability and reliability over the “shiny new objects” that aren’t proven yet.
Threadripper: The Parts List
At last – the moment that some of you may have been waiting for: the big reveal!
I want to say this at the outset: I’m sharing this selection of parts (and some of my thinking while deciding what to get) because others have specifically asked. I don’t value religious debates over “why component ‘xyz’ is inferior to ‘abc'” nearly as much as I once did in my youth.
So, general comments and questions on my choice of parts are certainly welcome, but the only thing you’ll hear are crickets chirping if you hope to engage me in a debate …
The choice of which processor to go with wasn’t all that difficult. Well, maybe a little.
Given that this was going into the machine that would be swapped-in as my new workstation, I figured most medium-to-high end current processors available would do the job. Many of the applications I utilize can get more done with a greater number of processing cores, and I’ve been known to keep a significant number of applications open on my desktop. I also continue to run a number of virtual machines (on my workstation) in my day-to-day work.
I realize benchmarks are won by one company one day and another the next. Bottom line for me: the performance-per-dollar crown at my price point has belonged to AMD’s Ryzen chips for a while. I considered a Ryzen 5 or 9 for a bit, but I opted for the Threadripper once I acknowledged that the system would have to last me a fairly long time. Yes, it’s a chunk of change … but Threadripper was worth it for my computing tasks.
Had I been building a gaming machine, it’s worth noting that I probably would have gone Intel, as their chips still tend to perform better for single-threaded loads that are common in games.
First off, you should know that I generally don’t worry about motherboard performance. Yes, I know that differences exist and motherboard “A” may be 5% faster than motherboard “B.” At the end of the day, they’re all going to be in the same ballpark (except for maybe a few stinkers – and ratings tend to frown on those offerings …)
For me, motherboard selection is all about capabilities and options. I want storage options, and I especially want robust USB support. Features and capabilities tend to become more available as cost goes up (imagine that!), and I knew right off that I was going to probably spend a pretty penny for the appropriate motherboard to drop that Threadripper chip into.
I’ve always had good luck with ASUS motherboards, and it doesn’t hurt that the ROG Zenith II Extreme Alpha was highly rated and reviewed. After all, it has a name that sounds like the next-generation terminator, so how could I go wrong?!?!?!
Everything about the board says high end, and it satisfies the handful of requirements I had – and some I didn’t know I had (but later found nice, like that 10Gbps Ethernet port …)
Be thankful you’re reading that instead of listening to me sing it. Barbra Streisand I am not.
Selecting memory doesn’t involve as many decision points as other components in a new system, but there are still a few to consider. There is, of course, the overall amount of memory you want to include in the system. My motherboard and processor supported up to 256GB, but that would be overkill for anything I’d be doing. I settled on 128GB, and I decided to get that as 4x32GB DIMMs rather than 8x16GB so I could expand (easily) later if needed.
I found what I wanted with the Corsair Vengeance RGB series. I’ve had a solid experience with Corsair memory in the past, so once I confirmed the numbers it was easy to pull the trigger on the purchase.
There are 50 million cases and case makers out there. I’ve had experience with many of them, but getting a good case (in my experience) is as much about timing as any other factor (like vendor, cost, etc.).
Because I was a bit more focused on the other components, I didn’t want to spend a whole lot of time on the case. I knew I could get one of those diamonds in the rough (i.e., cheap and awesome) if I were willing to spend some time combing reviews and product slicks … but I’ll confess: I dropped back and punted on this one. I pulled open my Maximum PC and/or PC Gamer magazines (I’ve been subscribing for years) and looked at what they recommended.
And that was as hard as it got. Sure, the Cosmos C700P was pricey, but it looked easy enough to work with. Great reviews, too.
When the thing was delivered, the one thing I *wasn’t* prepared for was the sheer SIZE of the case. Holy shnikes – this is a BIG case. Easily the biggest non-server case I’ve ever owned. It almost doesn’t fit under my desk, but thankfully it just makes it, with enough clearance that I don’t worry.
Oh yeah, there’s something else I realized with this case: I was accruing quite the “bling show” of RGB lighting-capable components. Between the case, the memory, and the motherboard, I had my own personal 4th of July show brewing.
Power supplies aren’t glamorous, but they’re critical to any stable and solid system. 25 years ago, I lived in an old apartment with atrocious power. I would go through cheap power supplies regularly. It was painful and expensive, but it was instructional. Now, I do two things: buy an uninterruptible power supply (UPS) for everything electronic, and purchase a good power supply for any new build. Oh, and one more thing: always have another PSU on-hand.
I started buying high-end Corsair power supplies around the time I built my first gaming machine which utilized videocards in SLI. That was the point in nVidia’s history when the cards had horrible power consumption stats … and putting two of them in a case was a quick trip to the scrap heap for anything less than 1000W.
That PSU survived and is still in-use in one of my machines, and that sealed the deal for me for future PSU needs.
This PSU can support more than I would ever throw at it, and it’s fully modular *and* relatively high efficiency. Fully modular is the only way to go these days; it definitely cuts down on cable sprawl.
Much like power supplies, CPU coolers tend not to be glamorous. The most significant decision point is “air cooled” or “liquid cooled.” Traditionally, I’ve gone with air coolers, since I don’t overclock my systems and I opt for highly ventilated cases. It’s easier (in my opinion) and tends to be quite a bit cheaper.
I have started evolving my thinking on the topic, though – at least a little bit. I’m not about to start building custom open-loop cooling runs like some of the extreme builders out there, but there are a host of sealed closed-loop coolers that are well-regarded and highly rated.
Unsurprisingly, Corsair makes one of the best (is there anything they don’t do?). I believe Maximum PC put the H100i PRO all-in-one at the top of their list. It was a hair more than I wanted to spend, but in the context of the project’s budget (growing with each piece), it wasn’t bad.
And oh yeah: it *also* had RGB lighting built-in. What the heck?
I initially had no plans (honestly) of buying another videocard. My old workstation had two GeForce 1080s (in SLI) in it, and my thinking was that I would re-use those cards to keep costs down.
Ha. Ha ha. “Keep costs down” – that’s funny! Hahahahahahaha…
At first, I did start with one of the 1080s in the case. But there were other factors in the mix I hadn’t foreseen. Those two cards were going to take up a lot of room in the case and limit access to the remaining PCI Express slots. There’s also the time-honored tradition of passing one of the 1080s down to my son Brendan, who is also a gamer.
Weak arguments, perhaps, but they were enough to push me over the edge into the purchase of another RTX 2080Ti. I actually picked it up at the local Micro Center, and there’s a bit of a story behind it. I originally purchased the wrong card (one that had fittings for an open-loop cooling system), so I returned it and picked up the right card. That card (the right one) was only available as an open-box item (at a substantially reduced price). Shortly after powering my system on with the card plugged in, it was clear why it was open-box: it had hardware problems.
Thus began the dance with EVGA support and the RMA process. I’d done the dance before, so I knew what to expect. EVGA has fantastic support anyway, so I was able to RMA the card back (shipping nearly killed me – ouch!), and I got a new RTX 2080Ti at an ultimately “reasonable” price.
Now my son will get a 1080, I’ve got a shiny new 2080Ti … and nVidia just released the new 30 series. Dang it!
Admittedly, this was a Micro Center “impulse buy.” That is, the specific choice of card was the impulse buy. I knew I was going to get an external sound card (i.e., aside from the motherboard-integrated sound) before I’d really made any other decision tied to the new system.
For years I’ve been hearing that the integrated sound chips now being put on motherboards have gotten good enough that a separate, discrete sound card is no longer necessary for those wanting high-quality audio. Forget about SoundBlaster – no longer needed!
I’ve tried using integrated sound on a variety of motherboards, and there’s always been something … sub-standard. In many cases, the chips and electronics simply weren’t shielded enough to keep powerline hum and other interference out. In other cases, the DSP associated with the audio would chew CPU cycles and slow things down.
Given how much I care about my music – and my picky listening habits (we’ll say “discerning audiophile tendencies”) – I’ve found that I’m only truly happy with a sound card.
I’d always gotten SoundBlaster cards in the past, but I’ve been kinda wondering about SoundBlaster for a while. They were still making good (or at least “okay”) cards in my opinion, but their attempts to stay relevant seemed to be taking them down some weird avenues. So, I was open to the idea of another vendor.
The ASUS card looked to be the right combination: a high signal-to-noise-ratio, low-distortion, minimalist card. And thus far, it’s been fantastic. An impulse buy that actually worked out!
Much like the choice of CPU, picking the SSD that would be used as my Windows system (boot) drive wasn’t overly difficult. This was the device that my system would be booting from, using for memory swapping, and other activities that would directly impact perceived speed and “nimbleness.” For those reasons alone, I wanted to find the fastest SSD I could reasonably purchase.
The only negative things Tom’s Hardware had to say about it were that it was “costly” and had “no heatsink.” In the plus category, Tom’s said that it had “solid performance,” a “large write cache,” that it was “power efficient,” had “class-leading endurance,” and they liked its “aesthetics.” They also said it “should be near the top of your best SSDs list.”
And about the cost: Micro Center actually had the drive for substantially less than its list price, so I jumped at it. I’m glad I did, because I’ve been very happy with its performance – though that happiness is based on nothing more than my perception. Full disclosure: I haven’t actually benchmarked system performance (yet), so I don’t have numbers to share. Maybe a future post …
Unsurprisingly, my motherboard selection came with built-in RAID capability. That RAID capability actually extended to NVMe drives (a first for one of my systems), so I decided to take advantage of it.
Although it’s impractical from a data stability and safety standpoint, I decided that I was going to put together a RAID-0 (striped) “disk” array with two M.2 drives. I figured I didn’t need maximum performance (as I did with my boot/system drive), so I opted to pull back a bit and be a little more cost-efficient.
It’s no surprise (or at least, I don’t think it should be a surprise), then, that I opted to go with Samsung and a pair of 970 EVO plus M.2 NVMe drives for that array. I got a decent deal on them (another Micro Center purchase), and so with two of the drives I put together a 4TB pretty-darn-quick array – great for multimedia editing, recording, a temporary area … and oh yeah: a place to host my virtual machine disks. Smooth as butta!
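For anyone weighing the same tradeoff, the basic RAID-0 arithmetic is easy to sketch: capacity sums across the stripe, but so does the risk, because losing any one member loses the whole array. The per-drive failure rate below is a made-up illustrative number, not a 970 EVO Plus spec:

```python
# Back-of-the-envelope numbers for a two-drive RAID-0 (striped) array.
# The 1.5% annual per-drive failure probability is purely illustrative.

def raid0_capacity_tb(drive_tb: float, count: int) -> float:
    """RAID-0 capacity is simply the sum of the member drives."""
    return drive_tb * count

def raid0_annual_failure(p_drive: float, count: int) -> float:
    """The array is lost if ANY member fails: 1 - (1 - p)^n."""
    return 1 - (1 - p_drive) ** count

print(raid0_capacity_tb(2.0, 2))                  # 4.0 TB usable
print(round(raid0_annual_failure(0.015, 2), 4))   # 0.0298 – nearly double a single drive
```

That doubled failure exposure is exactly why I keep nothing on the striped array that isn’t backed up or easily re-created.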
For more of my “standard storage” needs – where data safety trumped speed of operations – I opted for a pair of Seagate IronWolf 6TB NAS drives in a RAID-1 (mirrored) array configuration. I’ve been relatively happy with Seagate’s NAS series. Truthfully, both Seagate and Western Digital did a wonderful thing by offering their NAS/Red series of drives. The companies acknowledged the reality that a large segment of the computing population is leaving machines and devices running 24/7, and they built products to work for that market. I don’t think I’ve had a single Red/NAS-series drive fail yet … and I’ve been using them for years now.
In any case, there’s nothing amazing about these drives. They do what they’re supposed to do. If I lose one, I just need to get another one in and let the array rebuild itself. Sure, I’ll be running in degraded fashion for a while, but that’s a small price to pay for a little data safety.
I believe in protection in layers – especially for data. That’s a mindset that comes out of my experience doing disaster recovery and business continuity work. Some backup process that you “set and forget” isn’t good enough for any data – yours or mine. That’s a perspective I tried to share and convey in the DR guides that John Ferringer and I wrote back in the SharePoint 2007 and 2010 days, and it’s a philosophy I adhere to even today.
The mirroring of the 6TB IronWolf drives provides one layer of data protection. The additional 10TB Western Digital Red drive I added as a system level backup target provides another. I’ve been using Acronis True Image as a backup tool for quite a few years now, and I’m generally pretty happy with the application, how it has operated, and how it has evolved. About the only thing that still bugs me (on a minor level) is the relative lack of responsiveness of UI/UX elements within the application. I know the application is doing a lot behind the scenes, but as a former product manager for a backup product myself (Idera SharePoint Backup), I have to believe that something could be done about it.
Thoughts on backup product/tool aside, I back up all the drives in my system to my Z: drive (the 10TB WD drive) a couple of times per week:
I use Acronis’ incremental backup scheme and maintain about a month’s worth of backups at any given time; that seems to strike a good balance between capturing data changes and conserving disk space.
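If you’re sizing a backup target for a scheme like this, the footprint math is simple: one full image plus every retained incremental. The sizes below are made-up illustrative figures, not measurements from my system:

```python
# Rough disk-space estimate for an incremental backup scheme:
# one full image plus incrementals, with roughly a month retained.
# All GB figures here are hypothetical, for illustration only.

def backup_footprint_gb(full_gb: float, inc_gb: float,
                        runs_per_week: int, weeks_kept: int) -> float:
    """Space consumed = one full backup + every retained incremental."""
    return full_gb + inc_gb * runs_per_week * weeks_kept

# e.g., a 1.5TB full image with ~40GB of changes captured per run,
# twice a week, kept for four weeks:
print(backup_footprint_gb(1500, 40, 2, 4))  # 1820.0 GB – plenty of headroom on a 10TB target
```

In practice the incrementals vary in size with how much data actually changed, so it pays to estimate from your largest typical week.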
I have one more backup layer in addition to the ones I’ve already described: off-machine. Another topic for another time …
Last but not least, I have to mention my trusty Blu-ray optical drive. Yes, it does do writing … but I only ever use it to read media. If I didn’t have a large collection of Blu-rays that I maintain for my Plex Server, I probably wouldn’t even need the drive. With today’s Internet speeds and the ease of moving large files around, optical media is quickly going the way of the floppy disk.
I had two optical drives in my last workstation, and I have plenty of additional drives downstairs, so it wasn’t hard at all to find one to throw in the machine.
And that’s all I have to say about that.
Some Assembly Required
Of course, I’d love to have just purchased the parts and had the “assembly elves” show up one night while I was sleeping, do their thing, and then have woken up the next morning to a fully functioning system. In reality, it was just a tad more involved than that.
I enjoy putting new systems together, but I enjoy it a whole lot less when it’s a system that I rely upon to get my job done. There was a lot of back and forth, as well as plenty of hiccups and mistakes along the way.
I took a lot of pictures and even a small amount of video while putting things together, and I chronicled the journey to a fair extent on Facebook. Some of you may have even been involved in the ongoing critique and ribbing (“Is it built yet?”). If so, I want to say thanks for making the process enjoyable; I hope you found it as funny and generally entertaining as I did. Without you folks, it wouldn’t have been nearly as much fun. Now, if I can just find a way to magically pay the whole thing off …
The Media Chronicle
I’ll close this post out with some of the images associated with building Threadripper (or for Spencer Harbar: THREADRIPPER!!!)
Definitely a Step Up
I’ll conclude this post with one last image, and that’s the image I see when I open Windows Device Manager and look at the “Processors” node:
I will admit that the image gives me all sorts of warm fuzzies inside. Seeing eight hyperthreading cores used to be impressive, but now that I’ve got 32 cores, I get a bit giddy.
Here we are in 2016. If you’ve been following my blog for a while, you might recall a post I threw together back in 2010 called Portrait of a Basement Datacenter. Back in 2010, I was living on the west side of Cincinnati with my wife (Tracy) and three-year-old twins (Brendan and Sabrina). We were kind of shoehorned into that house; there just wasn’t a lot of room. Todd Klindt visited once and had dinner with us. He didn’t say it, but I’m sure he thought it: “gosh, there’s a lot of stuff in this little house.”
All of my computer equipment (or rather, nearly all of my computer equipment) was in the basement. I had what I called a “basement datacenter,” and it was quite a collection of PCs and servers in varying form factors and with a variety of capabilities.
The image on the right is how things looked in 2010. Just looking at the picture brings back a bunch of memories for me, and it also reminds me a bit of what we (as server administrators) could and couldn’t easily do. For example, nowadays we virtualize nearly everything without a second thought. Six years ago, virtualization technology certainly existed … but it hadn’t hit the level of adoption that it’s cruising at today. I look at all the boxes on the right and think “holy smokes – that’s a lot of hardware. I’m glad I don’t have all of that anymore.” It seemed like I had drives and computers everywhere, and they were all sucking down juice. I had two APC 1600W UPS units that were acting as battery backups back then. With all the servers plugged-in, they were drawing quite a bit of power. And yeah – I had the electric bill to prove it.
So, What’s Changed?
For starters, we now live on the east side of Cincinnati and have a much bigger house than we had way back when. Whenever friends come over and get a tour of the house, they inevitably head downstairs and get to see what’s in the unfinished portion of the basement. That’s where the servers are nowadays, and this is what my basement datacenter looks like in 2016:
In reality, quite a bit has changed. We have much more space in our new house, and although the “server area” is smaller overall, it’s basically a dedicated working area where all I really do is play with tech, fix machines, store parts, etc. If I need to sit at a computer, I go into the gaming area or upstairs to my office. But if I need to fix a computer? I do it here.
In terms of capabilities, the last six years have been good to me.
All Hail The Fiber
Back on the west side of town, I had a BPL (broadband-over-powerline) Internet hookup from Duke Energy and The CURRENT Group. Nowadays, I don’t even know what’s happening with that technology. It looks like Duke Energy may be trying to move away from it? In any case, I know it gave me a symmetric pipe to the Internet, and I think I had about 10Mbps up and down. I also had a secondary DSL connection (from Cincinnati Bell) that was about 2.5Mbps down and 1Mbps up.
Once I moved back to the east side of Cincinnati and Anderson Township, the doors were blown off of the barn in terms of bandwidth. Initially, I signed with Time Warner Cable for a 50Mbps download / 5Mbps upload primary connection to my house. I made the mistake of putting in a business circuit (well, I was running a business), so while it gave me some static IP address options, it ended up costing a small fortune.
My costly agreement with Time Warner ended last year, and for that I’m thankful. Nowadays, I have Cincinnati Bell Fiber coming to my house (Fioptics), and it’s a full-throttle connection. I pay for gigabit download speeds and have roughly a 250Mbps upload pipe. Realistically, the bandwidth varies … but there’s a ton of it, even on a bad day. The image on the right shows the bandwidth to my desktop as I’m typing this post. No, it’s not gigabit (at this moment) … but really, should I complain about 330Mbps download speeds from the Internet? Realistically speaking, some of the slowdown is likely due to my equipment. Running full gigabit Ethernet takes good wiring, quality switches, fast firewalls, and more. You’re only as fast as your slowest piece of equipment.
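When comparing numbers like these, it helps to remember that line rates are quoted in megabits per second while file transfers are usually shown in megabytes per second. A quick sketch of the conversion (ignoring protocol overhead; the 330Mbps figure is the measurement mentioned above):

```python
# Converting line rate (megabits/s) into file-transfer terms (megabytes/s),
# and estimating transfer time for a given file size. Overhead is ignored.

def megabits_to_megabytes(mbps: float) -> float:
    """8 bits per byte."""
    return mbps / 8

def transfer_seconds(file_gb: float, mbps: float) -> float:
    """Seconds to move file_gb gigabytes at the given line rate."""
    return (file_gb * 1000 * 8) / mbps

print(megabits_to_megabytes(330))        # 41.25 MB/s
print(round(transfer_seconds(5, 330), 1))  # 121.2 – about two minutes for a 5GB file
```

So even a “mere” 330Mbps moves a multi-gigabyte file in a couple of minutes, which is why I’m not inclined to complain.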
I do keep a backup connection with Time Warner Cable in case the fiber goes down, and my TMG firewall does a great job of failing over to that backup connection if something goes wrong. And yes, I’ve had a problem with the fiber once or twice. But it’s been resolved quickly, and I was back up in no time. Frankly, I love Cincinnati Bell’s fiber.
What About Storage?
In the last handful of years, storage limits have been blown past over and over again. You can buy 8TB drives on Amazon.com right now, and they’re not prohibitively expensive. We’ve come a long way in just a half-dozen years, and the limits just keep expanding.
I have a bunch of storage downstairs, and frankly I’m pretty happy with it. I’ve graduated from the random drives and NAS appliances that used to occupy my basement. These days, I use Mediasonic RAID enclosures. You pop some drives in, connect an eSATA cable (or USB cable, if you have to), and away you go. They’ve been great self-contained pass-through drive arrays for specific virtual machines running on my Hyper-V hosts. I’ve been running the Mediasonic arrays for quite a few years now, and although this isn’t a study in “how to build a basement datacenter,” I’d recommend them to anyone looking for reliable storage enclosures. I keep one as a backup unit (because eventually one will die), and as a group they seem to be in good shape at this point in time. The enclosures supply the RAID-5 that I want (and yeah, I’ve had *plenty* of drives die), so I’ve got highly-available, hot-swappable storage where I need it.
Oh, and don’t mind the minions on my enclosures. Those of you with children will understand. Those who don’t have children (or who don’t have children in the appropriate age range) should either just wait it out or go watch Despicable Me.
Hey? What About The Cloud?
The astute will ask “why are you putting all this hardware in your house instead of shifting to the cloud?” You know, that’s a good question. I work for Cardinal Solutions Group, and we’re a Microsoft managed partner with a lot of Office 365 and Azure experience. Heck, I’m Cardinal’s National Solution Manager for Office 365, so The Cloud is what I think about day-in and day-out.
First off, I love the cloud. For enterprise scale engagements, the cloud (and Microsoft’s Azure capabilities, in particular) are awesome. Microsoft has done a lot to make it easier (not “easy,” but “easier”) for us to build for the cloud, put our stuff (like pictures, videos, etc.) in the cloud, and get things off of our thumb drives and backup boxes and into a place where they are protected, replicated, and made highly available.
What I’m doing in my basement doesn’t mean I’m “avoiding” the cloud. Actually, I moved my family onto an Office 365 plan to give them email and capabilities they didn’t have before. My kids have their first email addresses now, and they’re learning how to use email through Office 365. I’m going to move the SharePoint site collection that I maintain for our family (yes, I’m that big of a geek) over to SharePoint Online because I don’t want to wrangle with it at home any longer. Keeping SharePoint running is a pain-in-the-butt, and I’m more than happy to hand that over to the Office 365 folks.
I’ll still be tinkering with SharePoint VMs for sure with the work I do, but I’m happy to turn over operational responsibility to Microsoft for my family’s site collection.
The Private Cloud
So even though I believe in The Cloud (i.e., “the big cloud that’s out there with all of our data”), I also believe in the “private cloud,” “personal cloud,” or whatever you want to call it. When I work from the Cardinal office, my first order of business is to VPN back to my house (again, through my TMG Firewall – they’ll have to pry it from my cold, dead hands) so that I have access to all of my files and systems at home.
Accessing stuff at home is only part of it, though. The other part is just knowing that I’m going through my network, interacting with my systems, and still feeling like I have some control in our increasingly disconnected world. My Plex server is there, and my file shares are available, and I can RDP into my desktop to leverage its power for something I’m working on. There’s a comfort in knowing my stuff is on my network and servers.
Critical data makes it to the cloud via OneDrive, Dropbox, etc., but I still can’t afford to pay for all of my stuff to be in the cloud. Prices are dropping all of the time, though. Will I ever give up my basement datacenter? Probably not, because maintaining it helps me keep my technical skills sharpened … but it’s also a labor of love.
In this post, I take a small detour from SharePoint to talk about my home network, how it has helped me to grow my skill set, and where I see it going.
Whenever I’m speaking to other technology professionals about what I do for a living, there’s always a decent chance that the topic of my home network will come up. This seems to be particularly true when talking with up-and-coming technologists, who commonly ask how I managed to get from “Point A” (having transitioned into IT from my previous life as a polymer chemist) to “Point B” (consulting as a SharePoint architect).
I thought it would be fun (and perhaps informative) to share some information, pictures, and other geek tidbits on the thing that seems to consume so much of my “free time.” This post also allows me to make good on the promise I made to a few people to finally put something online for them to see.
Wait … “Basement Datacenter?”
For those on Twitter who may have seen my occasional use of the hashtag #BasementDatacenter: I can’t claim to have originated the term, though I fully embrace it these days. The first time I heard the term was when I was having one of the aforementioned “home network” conversations with a friend of mine, Jason Ditzel. Jason is a Principal Consultant with Microsoft, and we were working together on a SharePoint project for a client a couple of years back. He was describing his love for his recently acquired Windows Home Server (WHS) and how I should have a look at the product. I described why WHS probably wouldn’t fit into my network, and that led Jason to comment that Microsoft would have to start selling “Basement Datacenter Editions” of its products. The term stuck.
So, What Does It Look Like?
Two pictures appear on the right. The left-most shot is a picture of my server shelves from the front. Each of the computing-related items in the picture is labeled in the right-most shot. There are obviously other things in the pictures, but I tried to call out the items that might be of some interest or importance to my fellow geeks.
Generally speaking, things look relatively tidy from the front. Of course, I can’t claim to have the same degree of organization in the back. The shot on the left displays how things look behind and to the right of the shots that were taken above. All of the power, network, and KVM cabling runs are in the back … and it’s messy. I originally had things nicely organized with cables of the proper length, zip ties, and other aids. Unfortunately, servers and equipment shift around enough that the organization system wasn’t sustainable.
While doing the network planning and subsequent setup, I’m happy that I at least had the foresight to leave myself ample room to move around behind the shelves. If I hadn’t, my life would be considerably more difficult.
On the topic of shelves: if you ever find yourself in need of extremely heavy-duty, durable industrial shelves, I highly recommend this set of shelves from Gorilla Rack. They’re pretty darn heavy, but they’ll accept just about any amount of weight you want to put on them.
I had to include the shot below to give you a sense of the “ambiance.”
Anyone who’s been to my basement (which I lovingly refer to as “the bunker”) knows that I have a thing for dim but colorful lighting. I normally illuminate my basement area with Christmas lights, colored light bulbs, etc. Frankly, things in the basement are entirely too ugly (and dusty) to be viewed under normal lighting. It may be tough to see from this shot, but the servers themselves contribute some light of their own.
Why On Earth Do You Have So Many Servers?
After seeing my arrangement, the most common question I get is “why?” It’s actually an easy one to answer, but to do so requires rewinding a bit.
Many years ago, when I was a “young and hungry” developer, I was trying to build a skill set that would allow me to work in the enterprise – or at least on something bigger than a single desktop. Networking was relatively new to me, as was the notion of servers and server-side computing. The web had only been graphical for a little while (anyone remember text-based surfing? Quite a different experience …), HTML 3 was all the rage, Microsoft was trying to get traction with ASP, ActiveX was the cool thing to talk about (or so we thought), etc.
It was around that time that I set up my first Windows NT4 server. I did so on the only hardware I had left over from my first Pentium purchase – a humble 486 desktop. I eventually got the server running, and I remember it being quite a challenge. Remember: Google and “answers at your fingertips” weren’t available a decade or more ago. Servers and networking also weren’t as forgiving and self-correcting as they are nowadays. I learned an awful lot while troubleshooting and working on that server.
Before long, though, I wanted to learn more than was possible on a single box. I wanted to learn about Windows domains, I wanted to figure out how proxies and firewalls worked (anyone remember Proxy Server 2.0?), and I wanted to start hosting online Unreal Tournament and Half Life games for my friends. With everything new I learned, I seemed to pick up some additional hardware.
When I moved out of my old apartment and into the house that my wife and I now have, I was given the bulk of the basement for my “stuff.” My network came with me during the move, and shortly after moving in I re-architected it. The arrangement changed, and of course I ended up adding more equipment.
Fast-forward to now. At this point in time, I actually have more equipment than I want. When I was younger and single, maintaining my network was a lot of fun. Now that I have a wife, kids, and a great deal more responsibility both in and out of work, I’ve been trying to re-engineer things to improve reliability, reduce size, and keep maintenance costs (both time and money) down.
I can’t complain too loudly, though. Without all of this equipment, I wouldn’t be where I’m at professionally. Reading about Windows Server, networking, SharePoint, SQL Server, firewalls, etc., has been important for me, but what I’ve gained from reading pales in comparison to what I’ve learned by *doing*.
How Is It All Setup?
I actually have documentation for most of what you see (ask my Cardinal SharePoint team), but I’m not going to share that here. I will, however, mention a handful of bullets that give you an idea of what’s running and how it’s configured.
I’m running a Windows 2008 domain (recently upgraded from Windows 2003)
With only a couple of exceptions, all the computers in the house are domain members
I have redundant ISP connections (DSL and BPL) with static IP addresses so I can do things like my own DNS resolution
My primary internal network is gigabit Ethernet; I also have two 802.11g access points
All of my equipment is UPS-protected, because I used to lose a lot of hardware to power irregularities and brownouts.
I believe in redundancy. Everything is backed-up with Microsoft Data Protection Manager, and in some cases I even have redundant backups (e.g., with SharePoint data).
There’s certainly a lot more I could cover, but I don’t want to turn this post into more of a document than I’ve already made it.
Fun And Random Facts
Some of these are configuration related, some are just tidbits I feel like sharing. All are probably fleeting, as my configuration and setup are constantly in flux:
Beefiest Server: My SQL Server, a Dell T410 with quad-core Xeon and about 4TB worth of drives (in a couple of RAID configurations)
Wimpiest Server: I’ve got some straggling Pentium III systems (1.13GHz, 512MB RAM). I’m working hard to phase them out, as they’re of little use beyond basic functions these days.
Preferred Vendor: Dell. I’ve heard plenty of stories from folks who don’t like Dell, but quite honestly, I’ve had very good luck with them over the years. About half of my boxes are Dell, and that’s probably where I’ll continue to shop.
Uptime During Power Failure: With my oversized UPS units, I’m actually good for about an hour of uptime across my whole network during a power failure. Of course, I have to start shutting down well before that to ensure a graceful power-off.
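If you’re curious where an estimate like “about an hour” comes from, here’s a back-of-the-envelope sketch. The battery capacity, load, and efficiency figures below are hypothetical placeholders, not measurements from my setup:

```python
# Back-of-the-envelope UPS runtime estimate.
# All numbers here are hypothetical examples, not actual measurements.

def runtime_minutes(battery_wh: float, load_w: float, inverter_eff: float = 0.9) -> float:
    """Estimated minutes of runtime for a given load.

    battery_wh:   usable battery capacity in watt-hours
    load_w:       total equipment load in watts
    inverter_eff: fraction of battery energy actually delivered to the load
    """
    return (battery_wh * inverter_eff) / load_w * 60

# Example: 1500 Wh of usable battery feeding an 800 W rack
# works out to roughly 101 minutes on paper -- but you'd want to
# start a graceful shutdown long before the batteries run dry.
print(round(runtime_minutes(1500, 800)))  # -> 101
```

Real-world runtime is always worse than the nameplate math suggests (battery age, temperature, and surge loads all eat into it), which is another reason to trigger shutdown early.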
Most Common Hardware Failure: Without a doubt, I lose power supplies far more often than any other component. I think that’s due in part to the age of my machines, the fact that I haven’t always bought the best equipment, and a couple of other factors. When a machine goes down these days, the first thing I test and/or swap out is a power supply. I keep at least a couple spares on-hand at all times.
Backup Storage: I have a ridiculous amount of drive space allocated to backups. My DPM box alone has 5TB worth of dedicated backup storage, and many of my other boxes have additional internal drives that are used as local backup targets.
Server Paraphernalia: Okay, so you may have noticed all the “junk” on top of the servers. Trinkets tend to accumulate there. I’ve got a set of Matrix characters (Mr. Smith and Neo), a PIP boy (of Fallout fame), Cheshire Cat and Alice (from American McGee’s Alice game), a Warhammer mech (one of the Battletech originals), a “cat in the bag” (don’t ask), a multimeter, and other assorted stuff.
Cost Of Operation: I couldn’t begin to tell you, though my electric bill is ridiculous (last month’s was about $400). Honestly, I don’t want to try to calculate it for fear of the result inducing some severe depression.
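For anyone braver than I am, the depressing math isn’t hard to sketch. The wattage and electric rate below are hypothetical examples, not my actual numbers:

```python
# Rough monthly electricity cost for an always-on server rack.
# The load and rate figures are hypothetical placeholders.

def monthly_cost(avg_watts: float, cents_per_kwh: float, hours: float = 730) -> float:
    """Approximate monthly cost in dollars for a constant load.

    avg_watts:     average continuous draw in watts
    cents_per_kwh: utility rate in cents per kilowatt-hour
    hours:         hours per month (730 is the average)
    """
    kwh = avg_watts / 1000 * hours
    return kwh * cents_per_kwh / 100

# Example: a constant 2000 W draw at 12 cents/kWh
# comes to roughly $175 a month for the rack alone --
# so a $400 household bill isn't hard to believe.
print(round(monthly_cost(2000, 12)))  # -> 175
```

Note that this ignores the air conditioning needed to remove all that heat in the summer, which effectively taxes every watt a second time.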
Where Is It All Going?
As I mentioned, I’m actively looking for ways to get my time and financial costs down. I simply don’t have the same sort of time I used to have.
Given rising storage capacities and processor capabilities, it probably comes as no surprise to hear me say that I’ve started turning towards virtualization. I have two servers that act as dedicated Hyper-V hosts, and I fully expect the trend to continue.
Here are a few additional plans I have for the not-so-distant future:
I just purchased a Dell T110 that I’ll be configuring as a Microsoft Forefront Threat Management Gateway 2010 (TMG) server. I currently have two Internet Security and Acceleration Server 2006 boxes (one for each of my ISP connections) and a third Windows Server 2008 box for SSL VPN connectivity. I can get rid of all three with the feature set supplied by one TMG server. I can also dump some static routing rules and confusing firewall configuration in the process. That’s hard to beat.
I’m going to see about virtualizing my two domain controllers (DCs) over the course of the year. Even though the machines are backed-up, the hardware is near the end of its usable life. Something is eventually going to fail that I can’t replace. By virtualizing the DCs, I gain a lot of flexibility (I can move them around on physical hardware) and can get rid of two more physical boxes. Box reduction is the name of the game these days! I’ll probably build a new (virtual) DC on Windows Server 2008 R2; migrate FSMO roles, DNS, and DHCP responsibilities to it; and then phase out the physical DCs – rather than try a P2V move.
With SharePoint Server 2010 coming, I’m going to need to get some even beefier server hardware. I’m learning and working just fine with the aid of desktop virtualization right now (my desktop is a Core i7-920 with 12GB RAM), but that won’t cut it for “production use” and testing scenarios when SharePoint Server 2010 goes RTM.
If the past has taught me anything, it’s that additional needs and situations will arise that I haven’t anticipated. I’m relatively confident that the infrastructure I have in place will be a solid base for any “coming attractions,” though.
If you have any questions or wonder how I did something, feel free to ask! I can’t guarantee an answer (good or otherwise), but I do enjoy discussing what I’ve worked to build.