Some of the specs listed on the following page are wrong.
Could you please elaborate a bit more about this?
Well, it might be nice if there was some information about the single bay MyClouds, for instance.
No joke. They don't tell you anything about the single-bay My Clouds on that thing, aside from the firmware versions.
Granted, they are intended for consumer in-house NAS applications, whereas the multi-bay My Cloud offerings are for small to midsize businesses, but really? Not a single bit of data on CPU type, installed RAM, or anything? Blech.
We have passed this along to the responsible parties.
Some info on processors and memory here:
On a spec list like this one, it would be useful to have a “maximum allowed drive storage” as a combined total of all internal and externally connected drives.
This is particularly important in today's market, since multi-terabyte external hard drives in the 8 TB+ range are becoming increasingly common. It is also of critical importance because exceeding the maximum allowable drive storage for a device can totally bork the system due to file-system wrap-around.
Filesystem wrap-around? Could you explain what filesystem wrap-around is?
Indeed. I have yet to see a disk with more capacity than is addressable with 48-bit LBA, which would be the only way I could see a sector getting "wrapped around", at least as far as single disks go, anyway.
Sorry, I misspoke. The phenomenon is "hard drive wrap-around", which Wired_w addressed wonderfully on the "Maximum total drive storage allowed on My Cloud (v2)?" page. (His excellent explanation can be found here.)
In short, the phenomenon of hard drive wrap-around occurs when either the hardware or the operating system does not have enough address bits to access the entire drive.
In Windows XP, there was an operating system limitation of 137 gigs, even though the NTFS file system could, theoretically, support drives in the petabyte range. The problem was that hard drives had grown to the point where 500 GB+ drives were available. People were plopping them into systems, as I had done, and discovering that the "petabytes of storage" hard drive limit was an illusion.
What would happen is that the address for the “next” block of data would “overflow” the address bits maintained by Windows, causing it to wrap around to sector zero and continue from there. Result: An unrecoverable hard drive.
Modern operating systems, post-XP, are designed to avoid this problem by using a larger addressing scheme with 48 bits of address space instead of 28, which adds a huge amount of room.
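As a toy illustration (not the actual driver or firmware code, just a sketch of the failure mode), here is how a fixed-width block address silently wraps back to sector zero when it overflows; the 28-bit width used here is the old ATA LBA limit:

```python
# Toy model of hard drive wrap-around: the block address lives in a
# fixed-width register, so incrementing past the top bit wraps to zero.
ADDRESS_BITS = 28                  # the old 28-bit ATA LBA limit (~137 GB)
MASK = (1 << ADDRESS_BITS) - 1     # highest addressable logical block

def next_block(lba: int) -> int:
    """Advance to the next logical block, as fixed-width hardware would."""
    return (lba + 1) & MASK        # overflow silently wraps around

print(next_block(MASK))            # -> 0: the "next" write lands on sector zero
```

Past the limit, writes that should go to fresh blocks land back at the start of the disk, which is exactly why the result is an unrecoverable drive rather than a clean error.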
What complicates matters even further is that the new SATA/eSATA interface spec doesn’t care what size the drive is or how many bits are used. All it expects is that the devices at each end know what they’re talking about.
Ultimately it becomes a matter of what the hardware controllers can address (as address bits are not an unlimited resource), or what the operating system can handle given the limitations of whatever modifications were made to it or its available memory.
Using Wired_w's Windows XP example from his excellent explanation within that article, Windows would wrap at 137 gigs because of the 28-bit LBA addressing scheme, a problem that 48-bit LBA solved.
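The arithmetic behind those two limits is easy to verify. Assuming the traditional 512-byte logical sector size, a few lines of Python give the capacity ceiling implied by a given LBA width:

```python
SECTOR_BYTES = 512                         # traditional logical sector size

def max_capacity_bytes(lba_bits: int) -> int:
    """Largest capacity addressable with the given number of LBA bits."""
    return (1 << lba_bits) * SECTOR_BYTES

print(max_capacity_bytes(28) // 10**9)     # -> 137 (GB): the Windows XP barrier
print(max_capacity_bytes(48) // 2**50)     # -> 128 (PiB) with 48-bit LBA
```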
When discussing embedded hardware, simply because the spec provides 48 addressing bits doesn't mean that all of them will be implemented in expensive silicon real estate. Both cost and physical size are constraints that limit what the chip designer can do.
If there are fewer bits in the hardware registers, or if there is a lack of available controller/system memory, the actual limitation of the system can fall far short of the Petabytes and Exabytes that are theoretically possible.
The result can range from a crashed system, with potential drive corruption, to an address wrap-around in a controller chip somewhere that causes all the data on the hard drive to be lost.
Since I do not know which - if any - of those possibilities are true for the My Cloud, I have to assume that there is some limit, as yet unknown. And as I said on the other thread, we will probably not find out until someone adds more space to his device than it can handle and he borks his box. Which will be an absolute shame.
What say ye?
With 6 TB drives inside these devices now, we can make a safe assumption about the maximum supported size, since we can guarantee that a certain number of address bits are implemented in hardware to reach that disk capacity.
A 6 TiB disk needs 12,884,901,888 logical blocks to be fully addressed (assuming you use actual binary values for your units, and not decimal ones. Stupid HDD makers and their marketing shenanigans.). That is a 34-bit number. With that knowledge, we can safely presume that a disk of up to 8 TiB can be addressed by the hardware.
If we use a disk bigger than this, and can successfully write to 100% of the volume reliably, we can increase our known working bit depth threshold, and have a much larger theoretical max disk size.
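That block-count arithmetic can be checked in a few lines of Python (512-byte logical sectors and binary units assumed):

```python
SECTOR_BYTES = 512
capacity = 6 * 2**40                        # 6 TiB, using binary units
blocks = capacity // SECTOR_BYTES           # logical blocks needed
highest_lba = blocks - 1                    # LBAs count from zero
bits = highest_lba.bit_length()             # address bits the hardware must implement
print(blocks)                               # -> 12884901888
print(bits)                                 # -> 34
print((1 << bits) * SECTOR_BYTES // 2**40)  # -> 8: at least 8 TiB is addressable
```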
That’s assuming that the hardware implements all those bits.
I’ve done enough embedded controller work, and have seen enough device spec-sheets, to know that this is a dangerous assumption.
I've done embedded hardware development, software QA, consulting, etc., etc., for more years than I want to count. If there is only one thing I've learned from all that, it is to assume stupidity (either the designer's or, more often, the managers' and bean-counters') unless and until proven otherwise.
This is why I like to see things like this hard spec’d; and even then not believe it until it’s been thoroughly tested at that limit.
You are not understanding me. In order for it to address a 6 TiB drive, it MUST implement ALL 34 bits, or it will not be able to see the full drive. 6 TiB is not the largest capacity that needs 34 bits; that is the 8 TiB value I quoted. Anything greater than that will require more active bits, which we do not KNOW are available. (We KNOW there are at least 34 bits of active address, because the 6 TiB drive works.)
I bow to your superior intellect.
Seriously, you make an excellent point. I had not made that calculation, and being as naturally paranoid as I am, I don't believe anything until I see it proved at least twice. And then I still don't trust it. (Though I may have to use it anyway, despite my lack of trust!)
Thanks for clarifying that.