Windows 10 isn’t scheduled to “sunset” for another two years (and a month, since I’m writing this in early September), but I’m already executing on a hardware replacement plan for my Microsoft Surface Pro 5, which isn’t sanctioned to survive the transition to Windows 10’s successor, Windows 11. I could try hacking my way around the newer OS’s TPM 2.0 (the planned focus of another upcoming post) and CPU requirements, but I’m not confident that Microsoft will sustain update deliveries long-term to unsupported hardware configurations.
The SP5 is getting a bit “long in the tooth” at this point, anyway; more frequently than I’d prefer, for example, it overheats and automatically downclocks to unusable-performance levels until it cools back down. And its limited system memory (8 GBytes) and storage (256 GBytes), both non-user-upgradeable, are increasingly constraining (although everything’s relative).
Speaking of storage…I’ve actually acquired two different upgrade platforms, a Surface Pro 7+ and a Surface Pro 8, both LTE-inclusive “for Business” variants, for varying reasons that I’ll delve into in full detail in another upcoming blog post.
Here’s the SP7+:
If you’re thinking it looks a lot like the SP5, you’d be right, and here’s a hint: that resemblance is key to why I’m migrating to it. Stand by for more details in that upcoming piece.
And here’s the SP8:
Keen-eyed readers may notice that the power and volume rocker switches have moved from the top edge in the SP5 and SP7+ to the right and left sides, respectively, in this product generation. Note, too, that there are now two USB-C connectors on the right side of the SP8, and this time they’re Thunderbolt 4-enhanced, which—here’s another hint—is key to why I’m migrating to it in parallel, too.
One key enhancement of both systems versus the SP5, beyond their integrated DRAM allocation (bumped up to 16 GBytes), is that like their Surface Pro X sibling also in my stable (and unlike their SP5 predecessor), they support user-upgradeable M.2 2230 form factor NVMe SSDs. I initially bought a 1 TByte Corsair MP600 NVMe PCIe 4.0 drive for $70 plus domestic shipping and tax off eBay (SSDs are much cheaper than they were 1.5 years ago!), specifically with the beefier SP8 system in mind. However, I subsequently went with two 1 TByte Samsung PM991a NVMe PCIe 3.0 drives (one for each system, both of which originally came with 256 GByte SSDs), for $61 plus tax each (free overseas shipping), again from an eBay reseller.
As for the reason why, I’ll first direct you to a detailed, regularly updated, and otherwise excellent blog post from Dan S. Charlton. I found out about Charlton’s reference guide via upfront perusal of the r/Surface subreddit (Reddit is increasingly my go-to first stop for info on various tech-and-other topics such as, in this case, the question “what’s the best SSD for a Surface Pro?”). What I learned was an effective reminder of the importance of assessing not only average power consumption over time but also instantaneous—specifically, peak—power draw.
Quoting from Charlton’s piece:
If your device/laptop crashes or the SSD unmounts periodically after installing a Gen4 SSD, it could be a symptom of a power or signal integrity issue. While average power consumption is usually lower on Gen4 SSDs compared to Gen3, peak power use may be ~25% higher. For example, the Kioxia BG5 1TB uses up to 4.5 watts while the BG4 1TB uses 3.5 watts; the WD SN740 2TB uses up to 6.3 watts while the 1TB SN530 used up to 5 watts, and the Micron 2450 1TB uses up to 5.5 watts. Older laptop mainboards may not be designed with this increased power draw in mind. Likewise, several Intel and AMD mobile platforms technically support Gen4 data throughput, but not well enough to be reliable across all SSD models.
Thankfully, as I’d already suspected (an opinion further bolstered by various Reddit commenters’ benchmark results), there’s little to no real-life performance difference between PCIe 3.0 and PCIe 4.0 SSDs, at least in systems like these with modest overall PCIe bus loading (no discrete PCIe-based GPUs, for example).
Now let’s bring HDDs into the discussion, in the process expanding beyond power consumption to the energy draw (power multiplied by the timespan across which that power is drawn) also noted in this post’s title. SSDs, as I’ve discussed in the past, generally deliver higher performance than legacy rotating-media HDDs, especially in usage scenarios where random (versus sequential) reads and/or writes dominate the overall access profile. Conversely, HDDs are particularly appealing in ultra-high-capacity storage scenarios, where their low media cost/bit can dominate the total comparative cost equation versus SSDs (even after factoring in an HDD’s higher capacity-independent “fixed” cost: platters, housing, motor, arm and head assemblies, etc.). Of course, as time goes on, the capacity threshold at which either an SSD or HDD delivers the lowest total cost varies, as both options strive to squeeze ever more data storage capability into a given form factor.
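To make the power-versus-energy distinction concrete, here’s a minimal sketch in Python. All of the wattage and duration figures are purely hypothetical, for illustration only, and aren’t measurements from any particular drive:

```python
# Energy (joules) = average power (watts) x duration (seconds).
# All figures below are hypothetical, purely for illustration.

def energy_joules(avg_power_watts: float, duration_seconds: float) -> float:
    """Energy consumed at a given average power over a given duration."""
    return avg_power_watts * duration_seconds

# A drive drawing 5 W for 10 s consumes more total energy than one
# drawing 8 W that finishes the same job in only 4 s.
print(energy_joules(5.0, 10.0))  # 50.0 (joules)
print(energy_joules(8.0, 4.0))   # 32.0 (joules)
```

The point: a device with higher instantaneous power draw can still come out ahead on energy if it completes the work in sufficiently less time.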
But what about power and energy consumption? You might automatically think that SSDs are superior to HDDs here as well; in all cases, in fact, for reasons such as their lack of spinning platters, motors, and mechanically actuated arm and head assemblies.
You might think…but as the results of a recent study suggest, you might not always be right. Then again, given the increasing prevalence of heat sinks on SSDs, perhaps in retrospect I shouldn’t have been surprised.
Upfront qualifiers first: Scality (the firm that did the study) focuses on petabyte-level storage for “cloud” and other servers. This emphasis is reflected in the mass storage devices compared in the report: NVMe SSDs in the 2.5” U.3 form factor versus 22 TByte 3.5” 7,200 RPM SATA HDDs. I’ve recently learned about (and used) flash memory subsystems based on the precursor U.2 format in my home office setup, more details on which I’ll save for another post on another day:
That all said, the results are thought-provoking. A few high-level excerpts:
Our findings: HDDs provide 19-94% better power density per drive than SSD based on specific workload patterns and today’s drive densities.
For details on how we calculated these comparisons, see the table below.
This clearly demonstrates that the perception of high-density QLC flash SSDs as having a power efficiency advantage over HDDs isn’t accurate today. And, based on our read-intensive workload profile above, HDDs actually provide 19% better power density than SSDs. For the write-intensive workload profile, the advantage rises to 94% for HDDs.
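For readers unfamiliar with the “power density” metric in the excerpt, it’s simply capacity delivered per watt of power drawn. Here’s an illustrative calculation in Python; the capacity and wattage figures below are hypothetical stand-ins, not numbers taken from Scality’s report:

```python
# Power density: storage capacity per watt of power draw (TBytes/W).
# Higher is better. All numbers are hypothetical, not from the report.

def power_density_tbytes_per_watt(capacity_tbytes: float, power_watts: float) -> float:
    """Capacity delivered per watt consumed, for one drive."""
    return capacity_tbytes / power_watts

hdd_density = power_density_tbytes_per_watt(22.0, 9.5)   # e.g., a 22 TByte HDD at ~9.5 W
ssd_density = power_density_tbytes_per_watt(30.7, 15.0)  # e.g., a 30.7 TByte QLC SSD at ~15 W

# Percentage advantage of the HDD over the SSD on this metric
hdd_advantage_pct = (hdd_density / ssd_density - 1) * 100
print(f"HDD power-density advantage: {hdd_advantage_pct:.0f}%")
```

With these invented inputs the HDD comes out ahead; Scality’s actual 19–94% range depends on its specific drives and workload-derated power figures.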
For the “rest of the story”, check out Scality’s report, as well as the company’s prior overview post. Both are well worth your time. And to the author’s credit, his company doesn’t seem vested in any particular option or outcome. Take this, for example:
Reminder: Scality products work with both SSD and HDD, so we don’t have a horse in this race. Ultimately, in all comparisons, the data and access patterns of the user/customer’s application workload will determine the best storage platform and media.
This [result set] will obviously vary with other workload pattern assumptions and is certainly subject to change as SSD densities increase in the coming years. Moreover, there are additional considerations for enclosure-level (servers and disk shelves) density and power consumption metrics, and how the cost of power affects each customer’s overall storage TCO.
Although I’m generally impressed with the methodology, including the thoroughness with which assumptions and other parameters were documented, I’ve got a few quibbles:
- I have no idea whether the assumed read and write access patterns were highly sequential, mostly random, or a mix of the two types. As previously mentioned, random accesses particularly play to SSDs’ rotating media-free performance strengths.
- More concerning: the focus was seemingly exclusively on power consumption; the time element (which translates into energy consumption) wasn’t considered, as far as I can tell. Even if an access profile test were to generate higher power draw when run on an SSD, it’d likely complete in a fraction of the time the HDD needed, thereby enabling the SSD to drop into its lowest power-consuming mode for the remainder of the time the HDD was still chugging away. I’d argue this’d be an SSD “win” for all but the most peak power-strapped systems. Said another way: even if the SSD were actively reading and writing the whole time, thereby burning more power than the HDD, it’d still deliver a whole lot more accesses in the process, translating into higher overall system performance.
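The energy-versus-power quibble above can be sketched quantitatively. In the hypothetical model below, both drives complete the same workload within a fixed wall-clock window, with the faster drive spending its leftover time idling at low power. Every throughput and wattage figure is invented for illustration, not a vendor specification:

```python
# Sketch of the energy argument: for a fixed amount of work observed over a
# fixed wall-clock window, a faster drive finishes sooner and then idles at
# low power. All throughput and power figures are hypothetical.

def total_energy_joules(workload_gbytes: float,
                        throughput_gbytes_per_s: float,
                        active_power_w: float,
                        idle_power_w: float,
                        window_s: float) -> float:
    """Energy over a fixed window: active while transferring, idle afterward."""
    active_s = workload_gbytes / throughput_gbytes_per_s
    assert active_s <= window_s, "window must be long enough to cover the transfer"
    idle_s = window_s - active_s
    return active_power_w * active_s + idle_power_w * idle_s

# Hypothetical scenario: move 100 GBytes within a 600-second window.
hdd_energy = total_energy_joules(100, 0.25, 6.0, 4.0, 600)  # slow, modest active power
ssd_energy = total_energy_joules(100, 3.0, 8.0, 0.05, 600)  # fast, higher active power
print(f"HDD: {hdd_energy:.0f} J, SSD: {ssd_energy:.0f} J")
```

Under these made-up numbers the SSD draws more power while active yet consumes roughly a tenth of the HDD’s total energy over the window, because it spends most of that window idle.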
And of course, were you to drop down to a more space-constrained client computer, such as a desktop or (especially) a laptop, there wouldn’t be sufficient room inside for a 3.5” or even 2.5” mass storage form factor, meaning that a flash memory-based M.2 module would be the only option. But I digress. If nothing else, Scality’s study is an effective reminder of the enduring importance of regularly questioning your assumptions. Let me know your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.