The Great MicroSD Card Survey: One Year Later

It’s been exactly one year since I started this project. I’ve set up 8 machines with close to 70 card readers running around the clock. I’m writing 101 terabytes of data per day. I’ve written 18 petabytes of data to 181 microSD cards and destroyed 51 of them. Don’t believe me? Check out my graveyard:

On the anniversary of this project, I thought it might be good to do a retrospective and cover things I’ve learned in the last year.

I started this project because I wanted to see how practical it would be to expand a ZFS array using cheap microSD cards. I pretty much abandoned that goal after discovering that refurbished 12TB enterprise-grade drives could be had for cheap. But by that point, I realized something: there’s not a lot of empirical data out there on just how long microSD cards actually last. Manufacturers don’t exactly publish this information unless you’re buying a high-endurance or industrial-grade card. Most of the rest of what’s out there is anecdotes and speculation. So I decided to continue and search for the answer to a question: “just how long should you expect a microSD card to last?”

In the last year, I’ve learned quite a bit — not just from the data I’ve collected, but also through studying the subject. I’ve read through the SD card specification in detail. I’ve also started working on an FPGA design to perform some of the testing on these cards — particularly speed testing, as the SD Card standard actually has quite a few specifics on how cards are supposed to be tested, and you can’t properly run those tests with your average SD card reader.

However, speed is a secondary concern: the primary goal was to measure endurance. I’ve learned a few things on that front as well, so let’s dig in.

By The Numbers

  • Total cards: 216
    • Total unique brands: 38
    • Total unique models: 73
    • Total name-brand cards¹: 89
    • Total off-brand cards¹: 74
      • Total authentic cards¹: 65
      • Total fake cards¹: 9
    • Total knock-off cards¹: 18
      • Total authentic cards¹: 1
      • Total fake cards¹: 17
    • Total cards currently in testing: 129
    • Total cards destroyed: 52
    • Total cards waiting to be tested: 34
  • Total data written (approx.): 18.63PiB (20.97PB)
  • Average time to complete one erase/program/verify cycle: 46 minutes
|                                                                                 | Minimum | Maximum | Average |
| ------------------------------------------------------------------------------- | ------- | ------- | ------- |
| Price paid²                                                                     | $1.03 | $14.58 | $6.49 |
| Price per gigabyte³                                                             | $0.029 | $1.732 | $0.241 |
| Sequential read speed (MB/sec)                                                  | 0.77 | 176.50 | 75.72 |
| Sequential write speed (MB/sec)                                                 | 0.75 | 81.98 | 31.25 |
| Random read speed (IOPS)                                                        | 126.80 | 4,429.84 | 1,743.79 |
| Random write speed (IOPS)                                                       | 0.36 | 1,412.99 | 398.69 |
| Number of days tested                                                           | 0 | 366 | 140.36 |
| Total data written per card                                                     | 0 bytes | 462.38TiB (508.40TB) | 88.32TiB (97.10TB) |
| Number of erase/program/verify cycles completed per card                        | 0 | 63,632 | 4,463 |
| Number of erase/program/verify cycles completed per card without errors         | 0 | 63,632 | 2,221 |
| Number of erase/program/verify cycles completed per card before total failure⁴  | 0 | 26,041 | 4,468 |
| Number of days tested per card before total failure⁴                            | 0 | 260 | 100.50 |

¹ Does not include cards that have not yet undergone any testing.

² Including shipping. Excluding tax.

³ Calculated based on the actual physical capacity of the card (e.g., after determining how big fake flash cards actually were, not how big they said they were).

⁴ Only includes those cards that have experienced a total failure.

Fake Flash Is Bad, Mmmkay

One day, I had an idea pop into my head. It’s probably the one thought that caused me to kick off this entire project. MicroSD cards are getting pretty cheap — but there’s a lot of fake flash out there, and it’s even cheaper. Fake flash isn’t entirely worthless — if you can determine how much space a piece of fake flash media actually has, you can take steps to make sure you don’t go over that. At that point, you have a working microSD card — just in a smaller capacity than you were led to believe you were getting. So, I wondered: is it more economical to use a bunch of fake flash cards than to use a single, larger card?

Well…no, as it turns out. Across 25 cards, I paid an average of $0.432 per gigabyte for fake flash cards. Authentic flash, on the other hand, cost less than half as much: across 152 cards, I paid an average of only $0.21 per gigabyte. This is primarily because the true capacities of the fake flash cards I purchased were in the range of about 4GB-32GB, but they were being sold for an average of about $6.46 — and for that price, I was able to find authentic 8GB, 16GB, 32GB, and even 64GB and 128GB cards.

But there’s another problem: fake flash turned out to be the least reliable of the cards I tested. Some cards were dead on arrival; others exhibited issues after just a handful of read/write cycles. Most made it only a few hundred read/write cycles before showing errors. A few have actually gone for some time without issues — but they tend to be the exception rather than the rule.

In terms of actual data:

  • Fake flash cards experienced their first error after just 696 read/write cycles, on average, while authentic cards experienced their first error after an average of 2,535 read/write cycles.
  • Fake flash cards lasted 2,127 read/write cycles, on average, before failing or reaching the 50% error threshold. Meanwhile, authentic cards lasted an average of 5,522 read/write cycles.

It’s also interesting to me to see the ways in which flash tends to fail. In my testing, fake flash tended to be more prone to bit rot than authentic flash. And, once a card starts to suffer from bit rot, it only tends to accelerate as time goes on. Why is that? Well, my hypothesis is that:

  • Fake flash tends to be made from low-quality flash media — media that would normally be rejected for use in authentic cards. This increases the likelihood of errors cropping up on any given round of testing compared to authentic cards.
  • Fake flash lacks any sort of error correction algorithms or wear leveling algorithms that are present in many name-brand cards. While this doesn’t increase the likelihood of errors in the underlying media, it does increase the likelihood of errors in general — simply because authentic cards would be able to detect minor errors and correct them, staving off more serious errors for later; whereas fake flash would have to expose the corrupted data to the host.

Some cards also had a tendency to simply stop working after a few hundred read/write cycles. I have a hypothesis for this as well: flash media often contains a small amount of storage space dedicated to storing control information. This includes identification information (such as its manufacturing date, serial number, etc.), data on which portions of the flash are allocated, wear leveling data, etc. It also contains firmware for the card itself. (Yes, SD cards have their own CPU and RAM and have a program that they’re running when you plug them in.) In authentic media, this storage space is generally subject to reading and writing more frequently than the rest of the card, so it’s designed to be more durable than the flash core. But I’m not sure this is the case with fake flash — if you skimped on other aspects of the card, why not skimp on this one? And when this storage area begins to fail, and the card’s firmware starts suffering from bit rot, the rest of the card will stop working shortly after.

But these are just my hypotheses — I don’t have any data to back them up.

Finally, fake flash consistently performed poorly on performance tests compared to authentic media. Most fake flash got sequential read speeds below 25MB/sec and sequential write speeds below 20MB/sec. What’s really appalling, however, is that most fake flash cards got under 10 IOPS on random write tests, with many of them managing less than 1 IOPS. For comparison, authentic flash got an average of about 85MB/sec on sequential read tests, 35MB/sec on sequential write tests, and 452 IOPS on random write tests.

So…yeah. Fake flash sucks on just about every metric I examined.

USB Isn’t Perfect, But It’s Pretty Much the Only Option

All of the card readers that I use are hooked up to their host systems via USB. Even the internal card readers built into the laptops I’m using are hooked up internally via USB. USB is ubiquitous nowadays, and it’s hard to find card readers that don’t hook up to their host through USB.

I run most of my testing rigs with 16 card readers each. (One of my laptops is a little older — and while it still supports USB 3, its USB host controller isn’t able to handle as many devices as the others, so it only has 14 card readers hooked up to it.) The maximum throughput of the USB 3 bus is 5,000Mb/sec — or 625MB/sec; and that 625MB/sec, split among 16 card readers, comes out to about 39MB/sec per card reader. And, I run each card reader with two cards — so that means there’s about 19.5MB/sec of bandwidth available per card. (And note that none of this takes the USB protocol overhead into account.) Meanwhile, most cards — on their own — can get read/write speeds that are much higher. All of this is to say that I’m running these machines with the USB bus constantly saturated.
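That per-card arithmetic is easy to sanity-check. Here’s a quick back-of-the-envelope sketch (raw signaling rate only; it deliberately ignores 8b/10b encoding and other protocol overhead, so real-world numbers would be lower still):

```python
def per_card_bandwidth_mb(bus_mbit: float, readers: int, cards_per_reader: int) -> float:
    """Split the raw USB bus bandwidth evenly across every card."""
    bus_mb = bus_mbit / 8  # megabits/sec -> megabytes/sec
    return bus_mb / (readers * cards_per_reader)

print(per_card_bandwidth_mb(5000, 16, 1))  # 39.0625  -> ~39MB/sec per reader
print(per_card_bandwidth_mb(5000, 16, 2))  # 19.53125 -> ~19.5MB/sec per card
```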

Most of the time, this isn’t a problem. The USB 3 standard is pretty mature — having been around for 15 years now — and chip manufacturers and driver developers have had plenty of time to work out most of the bugs. Every once in a while, however, there are issues that come up that I can’t attribute to the cards themselves.

Flaky Connectors

One issue has to do with the card readers I use. I primarily use the JJS CR-UTC4AC (link) for a few reasons:

  1. They support the SD 4.0 standard — which means they support UHS-II-enabled cards;
  2. They don’t have quite the same issues with randomly disconnecting themselves after a few days of continuous use (an issue that I had with the SmartQ Singles); and
  3. They’re relatively cheap as far as UHS-II-enabled readers go.

For the most part, these readers have been pretty rock solid; however, there’s one glaring issue with them. They feature a standard USB A connector; however, this connector is on a hinge and can be flipped up to reveal a USB micro-B (or “OTG”) connector. You can see this in action in the product pictures:

It’s an innovative design for sure, but it has an unintended side effect: the connection to the USB A connector is kinda flaky, which can make it sensitive to even the smallest movements. Sometimes just lightly touching the card reader is enough to cause it to disconnect from the host.

Fortunately, I was able to find a workaround: these readers have USB C connectors on the other end. I just bought a boatload of USB C to USB A adapters and plugged them in using those instead. (One of these days, I might switch to using hubs with all USB C ports; however, I haven’t found any 16-port hubs that aren’t hella huge and hella expensive.)

Hub Issues

Another problem has to do with the hubs I’m using. For the record, I’m using Sabrent 16-port hubs (link). Internally, these are 4-port hubs, with 4-port hubs wired into each port. Again, most times these things work just fine, but every once in a while they’ll decide that they’ve had enough of my bullshit and will simply disconnect themselves from the host. Not physically of course — rather, they’ll run into an error condition that will cause them to logically disconnect from the USB bus. Most times this can be solved by just disconnecting the hub and plugging it back in — my program is resilient enough that it’ll resume automatically from where it left off.

However, there’s one issue that confuses me. It’s a phenomenon I’m calling “device mangling”. The basic idea is pretty simple: you go to issue a read operation from one device, but some (or all) of the data you get back is actually from a different device. I discovered this after I started embedding round number, sector number, and CRC32 information into each sector, and started seeing errors where the round number coming back was far ahead of where the card actually was — but lined up with where another card, on the same machine, was at the time. I added a unique card ID to each sector, and — sure enough — I started seeing instances where the data read back from one card actually had another card’s ID in it.
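To give a feel for the approach, here’s an illustrative Python sketch of that kind of per-sector stamping and classification. To be clear, this is not my actual testing code; the 512-byte sector layout, field widths, and little-endian packing are all assumptions I’ve made for the example:

```python
import struct
import zlib

SECTOR_SIZE = 512
HEADER = struct.Struct("<16sQQ")  # card ID, round number, sector number

def stamp_sector(card_id: bytes, round_no: int, sector_no: int, payload: bytes) -> bytes:
    """Build a 512-byte sector: identifying header + payload + trailing CRC32."""
    body = HEADER.pack(card_id.ljust(16, b"\0"), round_no, sector_no) + payload
    body = body.ljust(SECTOR_SIZE - 4, b"\0")
    return body + struct.pack("<I", zlib.crc32(body))

def check_sector(expected_id: bytes, sector: bytes) -> str:
    """Classify a read-back sector as ok, corrupt, or mangled (another card's data)."""
    body, crc = sector[:-4], struct.unpack("<I", sector[-4:])[0]
    if zlib.crc32(body) != crc:
        return "corrupt"  # bit rot: data no longer matches its own checksum
    card_id, _, _ = HEADER.unpack(body[:HEADER.size])
    if card_id.rstrip(b"\0") != expected_id:
        return "mangled"  # intact data, but it belongs to a different card
    return "ok"
```

With a scheme like this, a CRC failure points to plain corruption, while a valid sector carrying the wrong card ID is the smoking gun for device mangling.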

When this device mangling happens, it typically involves cards on separate card readers — which leads me to believe it’s not the card readers that are causing this to happen. I’ve also seen this issue happen on different brands of USB host controllers, so I don’t think that’s to blame either. This leaves either the Linux USB stack or the hubs I’m using — and I’m starting to think that it’s an issue with the hubs combined with the fact that the USB bus on these machines is constantly saturated.

The next logical step in this process of elimination would be to replace the hubs. The hubs I’m using all seem to be using Genesys Logic hub chips, so I’d want to try to find something with a chip made by another manufacturer. Unfortunately, whose chip a hub is using isn’t something a lot of manufacturers tend to advertise — so at the moment, I’m not even sure where to look.

SanDisk Cards Have a Disturbing Tendency to Randomly Fail

If you’ve read through every single part of my results, you’ll know that I’ve spoken about embedded solutions manufacturer embeddedTS and the test they performed where they set up 40 SanDisk microSD cards and tested them to the point of failure. One of the things they mentioned was that “SD cards do not like getting power brownouts” and that “[s]everal cards have permanently destroyed themselves with a precisely timed power disconnection”. Up until I sat down to write this, I had read that as “SanDisk cards do not like getting power brownouts” — and assuming that’s the case, I would have to agree. (Seeing as how the page was about testing SanDisk cards, I don’t think it’s an unreasonable assumption to make.)

I have 24 SanDisk cards, of various sizes and models, participating in this survey. Six of them have completely failed so far — five of them under circumstances that any other brand of card would have shrugged off:

  • Two died when another card reader was plugged into another USB port on the same hub.
  • One failed when a circuit breaker tripped and turned off both the hub and the host machine it was plugged into.
  • Two died after the host machine was gracefully shut down and restarted.
  • One — a SanDisk Industrial 8GB — just randomly died in the middle of endurance testing.

This is disappointing for a couple of reasons: first, I’ve been using SanDisk cards for years now and — before now — have only had this happen maybe one other time. I had higher hopes for them going into this project. Second, these cards had been doing quite well up until they failed — some of them not having experienced any errors at all.

Off-Brands Come and Go

There are a lot of cards — off-brand cards especially — that have been de-listed or discontinued by the time I get around to testing them and posting the results. The Hiksemi NEOs are a perfect example: I liked these cards — they performed pretty well in some areas, they struck me as a good middle-of-the-road option, and they were cheap. However, they’ve since disappeared from the Hiksemi store on AliExpress. When I went in search of a third sample for the Hiksemi NEO 32GB, I had to search around for quite a while before I found another seller that still had them.

I also feel like there are several new brands that popped up shortly after the New Year — for example, AUGAOKE, Bliksem, and Hsthe sea (all of which I have in my backlog). I wonder if this is a regular or semi-regular cycle for some of these sellers — “your brand has gained a bad rap, time to fire up a new brand”. Or maybe it’s cheap enough to get a boatload of custom microSD cards that everyone and their dog has thrown their hat into that ring and will continue to do so.

What’s Been The Best Card So Far?

Well…depends on how you want to look at it. I evaluated these cards in three areas: skimp (which is basically a measure of “how much space did they advertise on the package vs. how much space do you actually have available”), read/write performance (which is split into sequential read and write and random read and write speeds), and endurance.
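The skimp number itself is just simple math. Here’s a hedged sketch — the exact rounding and capacity-measurement details in my real scoring may differ, and the byte counts in the example are made up for illustration:

```python
def skimp_percent(advertised_bytes: int, usable_bytes: int) -> float:
    """Percentage of advertised capacity that's missing; negative means bonus space."""
    return (advertised_bytes - usable_bytes) / advertised_bytes * 100

# A card advertised as 8GB (8 billion bytes) that actually exposes a bit more:
print(round(skimp_percent(8_000_000_000, 8_052_800_000), 2))  # -0.66
```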

  • In the skimp category: The Auotkn Extreme 8GB and the QEEDNS 8GB tied here, with a score of -0.66%. (Negative scores mean that you actually got more space than what was advertised.) However, I would like to give a shoutout to Samsung, who consistently got negative skimp scores on all of the cards I tested. They were the only name brand card to do so.
  • In the performance category: the Kingston Canvas Go! Plus 64GB is the clear winner here. Not only did it take the top spot in three out of the four metrics I tested for, but each individual card’s scores in those three metrics were higher than any other card I tested.
  • In the endurance category: the Hiksemi NEO 8GB has lasted the longest here, with sample #1 having gone for over 63,000 read/write cycles without experiencing a single error. That puts it 37,000 read/write cycles ahead of its next-closest competitor — so it’s skewing the averages just a bit. With all three cards averaged together, it comes down to about 29,700 read/write cycles completed overall, and 21,000 read/write cycles without any errors. If I were to take this model out of the equation — or even just take sample #1 out of the equation — the SanDisk Industrial 8GB would be the winner, with an average of about 19,000 read/write cycles completed without errors.

Are Some Brands Better Than Others?

I want to draw conclusions here. That’s the whole goal of this project, right? Figure out which cards are reliable and which ones aren’t? Here’s the problem: so many cards are still going through endurance testing. I have different cards of different sizes that started testing at different times. I was going to include a table of each brand and the average number of rounds of endurance testing they managed to complete — but as I started putting that table together, I realized that the numbers for most brands simply reflected where they are right now — they’re not reflective of where they’ll eventually end up. For example, it ranked one major brand way below several other name brands, off-brands, and even knockoffs — simply because I started testing them later and they haven’t had time to complete as many read/write cycles as the others.

So…I’m not ready to make any calls on which brands are better than others. I think I have enough data to say for sure that fake flash is less reliable than authentic flash, but that’s about it.

What’s Next?

I have a few goals for this project going forward:

  • Add more testing rigs. I’d like to get to the point where the USB bus on any given testing rig isn’t constantly saturated. I think realistically, the limit probably needs to be 8 cards per rig. I’m hoping that will cut down on some of the errors that these cards are seeing.
  • Finish my FPGA design. I’d really like to get to the point where I can have something that I can use to “certify” a card — e.g., something that will run a card through a series of tests to ensure it’s properly supporting the SD card communication protocol, that it supports the features it says it does, test it against the various speed classes, etc.
  • Clean up my code. My testing program is a mess. I wrote the original program as a pretty monolithic program — e.g., everything in one file, and way too much code in my main() function. Heck, as of right now, most of the endurance testing code is still in main(). I’ve started breaking things out, moving sections of related code to other files, moving some pieces of logic to their own functions, etc., but I still feel like I have a long way to go. I also have some things I want to work into the program — for example, I want to turn it into a client/server architecture, so that I can view/manage all of my SD card testing in one place. In short — at the end of the day, I want to be proud of the code I’ve put out into the world.
  • Get more cards. There are still plenty of cards out there that aren’t in my backlog at the moment. I’ve tested a pretty wide array of cards so far — more than I think anyone else has tested in a single review like this — but there are still plenty of other options out there. A lot of them cost more than my self-imposed “$15 per card” limit; some of them are just garbage, and I’ve been reluctant to buy them because I know they’re going to be garbage. I’d also like to add full-size SD cards and USB flash drives to this at some point.

What’s stopping me? Two things:

  • Money. I’ve already gotten flak from my family around how much I’ve spent on this project — especially when the end goal for these cards is “destroy them”. Don’t get me wrong, I’m not destitute — I have a full-time job (that, sadly, doesn’t involve microSD cards), and, all things considered, I live pretty comfortably. But that said, if anyone would like to donate money, equipment, or new microSD cards to this project, please leave a note in the comments — I’ll reach out to you to coordinate it.
  • Mental health issues. I have depression and what I feel like is a pretty bad case of ADHD. Don’t worry, I’m being medicated for both — but there are a lot of days where it’s hard to find the motivation to work on this project. And when I do work on it, it’s hard to stay focused on it for more than a couple hours. The motivation is still there…the energy just isn’t.

So what’s keeping me going? I still believe in this project. I don’t see anyone else out there running endurance tests on microSD cards like I am. That still inspires me to continue what I’m doing.

So with that…here’s to another year.

And don’t forget to check out the Great MicroSD Card Survey here.

3 Replies to “The Great MicroSD Card Survey: One Year Later”

  1. Excellent article!

    One thing I would recommend (and I know this is mostly dependent on FPGA or more PCs) is eliminating USB hubs.

    Simply put, many are cheaply made, not great at power delivery, etc. But even when you get past that, there’s a topology issue. How are the USB hubs wired up? Most hubs are based on 4-port chips. That’s why 4-, 7- and 10-port hubs are so common: they are wired serially, with hub 1 having a host connection, hub 2 taking a port on hub 1, hub 3 taking a port on hub 2, etc.

    The topology of the 16-port may be 5 internal hubs with hub 1 being a direct upstream for the other 4, but this is difficult to wire up. Even if you can guarantee a consistent topology, each hub jump is a chance for the errors you’ve seen, or for slowdowns. Daisy-chained hubs have bitten me in the ass more times than I care to count. Toss in the often-cheap power delivery circuits providing relatively noisy power and it’s a mess.

    I’d be interested to see the difference between a direct connection and the hubs on good and bad cards. You can view the topology on Windows using USBTreeView; on Linux it would likely be discernible from lsusb. There’s also internal hub topology: is a port direct from the CPU, from the chipset, or from an on-motherboard hub chip?

    I also have heard of discrepancies in UHS-II behavior and UHS-I or regular readers. This would be due to card controllers optimized for specific use cases or inadvertently designed in specific ways. This is interesting for the parts of the community not using UHS-II, typically because a device doesn’t support it (cheap game emulation boxes, some single board computers, etc.).

    There are so many variables to control for, it’s a mess. It appears you’ve controlled a lot of them – you are at least using a good hub!

    1. Thank you!

      Yeah, I’m trying to work on the topology — but it’s going to be a long-term project. Basically, I’m doing two things:

      1. First, I’m trying to pare down to only having one card in each card reader.

        Since the card readers are dual-LUN, I’ve been putting two cards in each card reader and testing them simultaneously. This causes a big performance hit, but when I started doing this, I was going by the philosophy of “more cards is better, even if they don’t run as fast”. But as I’ve studied the SD specs, I’ve realized how card readers pull this off: the spec allows for multiple cards to be hooked up to a single controller in a bus fashion, with all of them sharing command and data lines. Each card has a unique ID, and when the card reader wants to talk to a particular card, it issues a command to activate the card with that ID. Every card on the bus is supposed to look at that command: if the ID being activated matches that card’s ID, it’s supposed to set itself to be active; if it doesn’t match, it’s supposed to set itself to be inactive. (Inactive cards aren’t supposed to respond to commands or do anything else until they see another activation command that has their ID.) So when you first pick up a dual-LUN card reader, you’re tempted to think that it’s talking to both cards simultaneously, but the reality is that that’s not the case — it’s switching back and forth between the two. This means that if one of the cards is really slow to respond to commands, it’s going to impact the performance of the other card — because the card reader has to wait for the slow card to finish what it’s doing before it can switch to the other card. I also suspect that cards sometimes “miss” the command to activate another card — so they end up talking over the other card(s), which causes scrambled data to be returned. The fix for that? Only have one card in each card reader.

      2. Second, I’m adding more testing machines — because even with one card in each card reader, the USB bus is still going to be constantly saturated.

      So as cards have been dying and card readers have been freeing up, I’ve been moving the empty card readers to new machines and only putting one card each in them.
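The “cards talking over each other” scenario is easy to model. Here’s a toy sketch — not real SD protocol code; in the actual spec, selection is done with CMD7 against relative card addresses assigned during enumeration — just to show why one missed deselect scrambles reads:

```python
class Card:
    """Toy model of one card sitting on a shared command/data bus."""
    def __init__(self, card_id: int, data: bytes):
        self.card_id, self.data, self.active = card_id, data, False

    def see_select(self, card_id: int) -> None:
        # A card activates on its own ID and deactivates on anyone else's.
        self.active = (card_id == self.card_id)

def read_bus(cards, card_id, missed=()):
    """Select one card, then read. A card in `missed` never saw the command."""
    for c in cards:
        if c not in missed:
            c.see_select(card_id)
    replies = [c.data for c in cards if c.active]
    # Anything other than exactly one responder puts garbage on the bus.
    return replies[0] if len(replies) == 1 else b"<scrambled>"

a, b = Card(1, b"A"), Card(2, b"B")
print(read_bus([a, b], 2))               # b'B': clean switch to card 2
print(read_bus([a, b], 1, missed=(b,)))  # card 2 missed the deselect: b'<scrambled>'
```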

      I do have one machine where I’ve used no hubs and only had one card per card reader. I’ve used it for 5 cards in total so far:

      • SanDisk Extreme PRO 64GB #2
      • SanDisk Industrial 8GB #3
      • SanDisk Extreme 64GB #2
      • Amazon Basics 64GB #1
      • Transcend 350V 64GB #2

      It does kinda seem like I see a lower incidence of errors on that machine — which suggests I’m on the right track. I guess time will tell.

      1. Oh wow, I didn’t even know about the dual-LUN card readers – I had always assumed that under the hood they would just have a USB hub chip and 2 separate SD reader chips. The idea makes sense, but we’ve largely moved away from this sort of bus: the only other case I can think of is NVMe, which can theoretically have the same sort of multiple storage devices per enumerated PCI-E device.

        My C is pretty rusty, so it’d be tricky to help, but it’d be nice to have a chain of relevant HW with a benchmark result. Your code appears to account for the difference in random number generation by prepping the buffer in advance; that said different CPUs, memory configurations and USB controllers will still have different characteristics on just the read/write part. It could be useful to compare the same card in these situations; for example, how big is the difference between CPU and northbridge, or adding 1 or more hubs? If certain chipset or hub combinations are very performant, that feels like it’s worth knowing too.

        I also didn’t realize – until I saw the live updating note on the main list – that you are endurance testing all of these simultaneously (or at least have a bunch running at once?). That’s impressive, but raises another interesting question: Will an endurance test at, say, 100MB/sec burn out a card in fewer cycles than another at 10MB/sec? I’d think so, because the flash management on the SoC in the card would be running hotter and likely need to do more ‘management overhead’. For example, if a card is a TLC card and using an SLC cache for quick small writes, blowing through that cache with a full-speed write-op may cause more wear.

        I may try testing a few of my cards with your app soon. Hopefully the vengeful ghosts of the microSD cards you’ve killed don’t keep you up at night! 🙂
