Increasing ShareSpace capacity from 4x1 TB to 4x2 TB

I’d like to know if I can increase the capacity of my ShareSpace, which has 4x1 TB drives, by using 4x2 TB drives.

Its current configuration is RAID 5.

If so, what are the right steps to do it?

Can the new WD Red WD20EFRX drives be used?

Thanks for your reply.

Hello,

I found this in the WD ShareSpace user manual:

“The WD ShareSpace enclosure is designed to support only WD Caviar® Green™ hard drive assemblies. Visit support.wdc.com and search the knowledge base article 2569 for detailed instructions on obtaining a replacement drive for this product. Use only WD hard drive assemblies of the same size or your warranty will be voided.”

Check page #177 of the user manual for more information.

http://www.wdc.com/wdproducts/library/UM/ENG/4779-705006.pdf

And here is knowledge base article 2569:

How to replace a drive in a WD ShareSpace 

Hope this helps.

Hello.

I’d like to expand on this topic a bit.

Will the ShareSpace work if I replace the 2 TB HDDs with 4 TB ones? I think it was written somewhere (can’t find it right now) that only 1 TB or 2 TB HDDs are acceptable. Is that correct?

Will it actually work (yes, I’ve read the manual) if I use Red HDDs instead of Green ones? It’s hard for me to buy a 4 TB WD Green where I live.

Thanks to anyone who can help and has reliable info.

Please, WD support, answer my simple question!

I’ve been waiting for three weeks already!

Will the ShareSpace work if I replace the 2 TB HDDs with 4 TB ones? I guess it is written somewhere (can’t find it right now) that only 1 TB or 2 TB HDDs are acceptable. Is that correct?

 Thanks.

yuriker wrote:

Please, WD support, answer my simple question!

I’ve been waiting for three weeks already!

Will the ShareSpace work if I replace the 2 TB HDDs with 4 TB ones? I guess it is written somewhere (can’t find it right now) that only 1 TB or 2 TB HDDs are acceptable. Is that correct?

 Thanks.

Hello yuriker,

Please bear in mind that if you need support, it’s better to go through the support line.

To Contact WD for Technical Support:
http://support.wdc.com/contact/index.asp?lang=en

The type of configuration you are suggesting is not supported, since the controller might not work with bigger-capacity drives.

Support will be able to give you more information. 

Robothief,

thank you for the link to WD Technical Support. I’ll try my luck with my question there.

Hope I’ll find the answer. :wink:

Hi Yuriker,

Did you get any response from support? Mind sharing it here?

I wish to do the same.

yuriker wrote:

Robothief,

thank you for the link to WD Technical Support. I’ll try my luck with my question there.

Hope I’ll find the answer. :wink:

Since this will likely be a future project here, I would like to reopen this thread.

A great deal of this depends on where the Linux operating system is actually located.

I don’t actually believe this:

Important: The WD ShareSpace enclosure is designed to support only
WD Caviar® Green™ hard drive assemblies. Visit support.wdc.com and
search the knowledge base article 2569 for detailed instructions on
obtaining a replacement drive for this product. Use only WD hard drive
assemblies of the same size or your warranty will be voided.

I have always believed the manufacturers use these tactics in order to generate sales of their own items.

I do, however, believe you can’t just intermix different-size drives in a RAID 5 or the like. I am not very proficient with the RAID aspects, but I believe you can JBOD different-size drives.

If the Linux OS is not installed on the hard drive but rather on a chip, then this should be relatively easy.

You should be able to just replace all the drives with bigger ones, OR you should be able to use just one single drive like a WD Red WD60EFRX 6 TB IntelliPower 64 MB Cache SATA 6.0 Gb/s 3.5" NAS hard drive (bare drive).

I already have another thread going about a single drive with bad sectors, so I will be playing around with this over the next few days to see what works and what doesn’t.

I took a little time to play around with this. 

One thing I can confirm is that WD has hard-coded the script file to only accept WDC as the vendor code. While I have found hacks to get around this, they are quite cumbersome and I have not attempted them myself. Here you can see me trying to use a 3 TB Seagate drive. I can also confirm that the drive-size limit is 2 TB, or at least that is the largest volume it will create automatically. I misplaced the picture, but using that 3 TB Seagate drive the operating system automatically made a 2 TB partition and an unused 1 TB partition; since it was not a WD drive, though, it would not create the volume.

It is quite possible this trick would work with a 3+ TB WD drive: let it do its thing and create the 2 TB partition and volume, then use something like GParted or EaseUS Partition Master to merge the two together. But it does not appear that I have a 3+ TB WD drive, as mine are from different manufacturers.
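
If anyone wants to see what the firmware is actually matching on, here is a rough sketch over SSH (assuming SSH is enabled; I have not located the exact script that does the vendor check, so the grep paths below are only guesses):

# model/vendor strings the kernel reports for the drive in the first bay
cat /sys/block/sda/device/vendor
cat /sys/block/sda/device/model

# hunt for where the firmware scripts match on "WDC" (paths are a guess)
grep -r "WDC" /etc /usr/local 2>/dev/null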

On the other hand, I can confirm that the statement that a 1 TB WD Green drive must be used is false, as pictured below with the drive(s) that came with it.

So I decided to investigate this further. Some NAS units, like my old Maxtor Shared Storage II, had the boot partition on the hard drive itself, but the WD ShareSpace has it embedded on the motherboard. I did learn that, no matter what, the unit will not boot up completely unless it has a hard drive in it. It doesn’t even matter whether the drive is compatible, just so long as it is in the bottom SATA slot.

Tip: The drive(s) you plan on using must be MBR, not GPT, and completely blank with NO partitions. Some places say you must zero-write your drive. That’s bogus! You only need to delete all the partitions.
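
If you want to do that prep from a Linux PC first, a minimal sketch (double-check the device letter, since this wipes the whole partition table on /dev/sdX):

parted /dev/sdX print            # confirm which disk it is and what label it currently has
parted /dev/sdX mklabel msdos    # fresh, empty MBR (msdos) label - removes all partitions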

Working:

Screenshot_5.png

Not Working:

Screenshot_8.png

Notice the drive size I am using here: this is a 320 GB WD Black drive.

This works perfectly without a problem. 

So the end result I have come up with so far is that it will support ANY WD drive of ANY size, but it will only create a 2 TB partition, leaving the rest unused. It also confirms that the unit will only accept a WD (WDC) vendor code.

I would also like to add, as a personal opinion, that there is really no such thing as a 2 TB, 4 TB or 8 TB edition; the unit is exactly the same in all cases, just shipped with different drive sizes and numbers of drives.

I am willing to bet the 4 TB model that comes with 4x1 TB drives will work just fine and dandy with 4x2 TB drives, making it an 8 TB unit. Just my personal opinion, though.

Screenshot_10.png


Interesting and valuable info, thanks Boujii.

I was just doing a search here in Australia with one of my suppliers and quite by accident found this:

http://www.umart.com.au/umart1/pro/Products-details.phtml?id=10&id2=129&bid=5&sid=95164

Interesting how WD says these Red-spec disks are designed for NAS. It doesn’t talk about disk speed though; I guess they are variable as well, but it does state SATA 6 Gb/s.

I am thinking of putting in 4x 2 TB disks and running them as RAID 10 (half the write penalty of RAID 5, if I recall? A small random write costs roughly 4 I/Os on RAID 5 versus 2 on RAID 10). Probably I am just paranoid about how super slow the Green suckers are.

Thoughts?

Indeed, I was just looking at that and other drives last night as well and said the exact same thing about not seeing the speeds on the site I was on.

These did give me a little comparison though. 

 

The 2 Green drives I have that are bad are just 1 year past their 3-year manufacturer warranty. That’s pretty poor if you ask me. One of them developed bad sectors and actually failed the RAID just after 3 years. The second one, I believe, is what has been causing the critical issue as well, but it just has not failed the array yet. It has surely developed bad sectors, though.

I could never understand when someone blames the unit for being slow. I would generally blame network performance before the unit. Even with a 1 Gbit connection, you’re still limited by how fast you can send the data; network speed is no match for drive speed. I’ve also always gotten better throughput and less of a bottleneck with a switch than with a router.


OK, just a quick update:

I opted for just one new disk right now in order to test/attempt rebuilding the RAID 5 data across all 4 working disks (mainly because read access is super slow in the degraded state; my intention is to back up the data after the rebuild and replace these Green disks with the newer Red NAS disk types).

Since no one has the original Green WD10EARS-00MVWB0 (in this part of the world, at least), I bought the later-release WD10EZRX-00L4HB0 disk. It seems to be rebuilding OK… (thankfully).

Actually, I don’t know enough about the RAID handling on these disks, but so far it looks like we are lucky that the new one is accepted.

Rebuild is underway… slowly but surely.
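
If you want to see how far along it is, you can watch it over SSH (these are just the standard Linux md tools, nothing ShareSpace-specific):

cat /proc/mdstat            # shows the rebuild progress as a percentage with a finish estimate
mdadm --detail /dev/md2     # shows the array state and which member is resyncing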

   

Thanks again for your help!!

While waiting on my two 2 TB Reds to come in, I decided to play more with this 3 TB Seagate I have been testing with.

As mentioned before, the unit will not create a volume larger than 2 TB. So I cloned my WD 320 GB drive that works fine onto that 3 TB Seagate. Naturally, there was then about 2.3 TB of unused space. My original theory was to just resize that 320 GB partition up to 3 TB in Linux. Piece of cake, right? Not quite, lol. Before anything else you need a newer version of GParted to even think about it; the ones that come with most live CDs and Ubuntu distros are not even close.

After about 3 hours I finally threw in the towel, went back to basics, and asked myself why I couldn’t get larger than 2 TB. Then it hit me: MBR vs GPT. MBR will not support anything larger than 2 TB. Well, **bleep**. And remember I said before that the unit won’t initialize a drive that has data on it OR is GPT. So at this point I was stuck. After some discussions with myself, lol, I started thinking: what if I don’t use the unit to init the drive, and set it up manually instead? Well, this surely is going to **bleep**, I can see that already.
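
For what it’s worth, the 2 TB wall falls straight out of the MBR format: a partition entry stores its size as a 32-bit sector count, and with 512-byte sectors that tops out at 2 TiB. Quick sanity check:

echo $((2**32 * 512))    # 2199023255552 bytes = 2 TiB, the biggest an MBR entry can describe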

So the first thing I need to do is back up my 320 GB drive using Acronis backup. Of course, you can use basically any Linux-based backup program; I might suggest Clonezilla. This way I can easily restore the partitions I need, since I won’t be able to use the WD ShareSpace to do it once the drive is GPT instead of MBR.

Now we can see the 4 partitions it backed up. 

So the next step is to take the 3 TB Seagate and initialize it as GPT. If it is already MBR, just convert it to GPT.

Now we restore the first 3 partitions. Then, for the remaining space we want to use, we just create a partition set to “unformatted”.

I found that in order to create an LVM format, the drive had to be partitioned first so there is a /dev/sda# to work with. That is why I chose to partition it as “unformatted”.

I have been up to this part 50 times already but was only able to create an LVM format up to 2 TB. That’s when I tried ext2, ext3, ext4, ntfs, etc. and always hit that 2 TB limit, and when I realized it was MBR vs GPT that was restricting me.

So at this point I have my 3 partitions restored and the remaining space partitioned to the max (2.73 TB in this case).

Now it’s time to try and format this thing again as an LVM2 PV. So I load a newer copy of GParted, right-click on the 2.73 TB partition and choose Format to → lvm2 pv. You will see this in the list along with several other format types such as ext2, etc.

SUCCESS, whereas many earlier attempts had failed at that capacity and would only go up to 2 TB.

You can also use this method instead of GParted to create an LVM partition:

How To Create LVM Using vgcreate, lvcreate, and lvextend lvm2 Commands
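
For reference, here is a command-line sketch of roughly what I did in GParted (the partition number and start offset are placeholders; the real values depend on where the three restored system partitions end):

parted /dev/sdb mklabel gpt              # wipe the disk and start with a GPT label
# ...restore the first three system partitions from the backup here...
parted /dev/sdb mkpart data 6GiB 100%    # placeholder offset for the big data partition
pvcreate /dev/sdb4                       # tag it as an LVM2 physical volume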

Now it’s time to put it back in the ShareSpace and fire it up.

Sad news :frowning:

The unit will not boot completely; it’s stuck on the orange HD1 light. After a little while I concluded that this is pretty much where it was going to remain. It was the same symptom I got in my original tests with a clean drive that was partitioned as GPT.

I just think the unit is forced to work only with a drive that it initialized as MBR itself. So this was a failed experiment, but I taught myself a few things and learned that NO MATTER WHAT, MBR will not support greater than 2 TB. So until someone finds a workaround to make the ShareSpace work with a GPT partition, we are stuck at the 2 TB limit per drive.

Unburying this topic…

Okay, I have read a lot of this EXTREMELY interesting and useful topic; now I have some questions.

First, here is the basis of my reasoning:

After I logged in through ssh:

1/ “fdisk -l /dev/sdb” didn’t give me a lot of interesting info, but:

 - fdisk works, okay, it’s definitely MBR

2/ “mdadm --detail --scan” gives me useful and quite nice info:

/dev/md0 and /dev/md1 are raid1 

/dev/md2 is raid5

Why is this nice?

/dev/md0 (~200 MB, mirrored RAID 1) is the root filesystem, as shown by “mount” or “df -h”

/dev/md1 (~1 GB, mirrored RAID 1) is ??? I don’t know… maybe a log for md2; the size seems to fit…?

/dev/md2 (~3 TB, RAID 5) is where all the data seems to be stored.

3/ “pvdisplay” shows, as expected, that LVM is used on top of the RAID 5 md2.

“vgdisplay” and “lvdisplay” didn’t show anything interesting, except that all of the space is in use.
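
For anyone who wants to poke around the same way, here are those (read-only) commands again, plus a couple of obvious extras:

fdisk -l                          # partition layout of each disk (MBR)
cat /proc/mdstat                  # which md arrays exist and their state
mdadm --detail --scan             # RAID level and members of md0/md1/md2
mdadm --detail /dev/md2           # the big RAID 5 data array in detail
pvdisplay; vgdisplay; lvdisplay   # the LVM layer sitting on top of md2
mount; df -h                      # where it all ends up mounted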

Second, here is my reasoning… (and I don’t have the space to back up my data to test the theory).

This is quite an advanced theory, I know.

Basically, the ShareSpace is a “simple” “LVM over RAID 5” system.

If I remove a 1 TB disk and replace it with a 4 TB disk from WD, I assume I will get 4 partitions (big assumption):

  • sdX1 (for the root partition on /dev/md0), sdX2 (for a log?), sdX3 (for ???, maybe a log of sdX1) and sdX4 (for data on /dev/md2)

Is this sequence possible? (A command sketch follows after step 8 below.)

1/ I unmount all /shares/* and /DataVolume

2/ I set faulty and remove, let’s say, /dev/sda4

/dev/md2 will go into a “clean, degraded” state

3/ I use fdisk on /dev/sda to:

3.1/ remove the sda4 partition

3.2/ create an extended partition

3.3/ create a new 1.9 TB logical partition (/dev/sda5)

3.4/ create another 1.9 TB logical partition (/dev/sda6) of the SAME size as /dev/sda5

That’s the first (and weakest) point of my theory: is it possible to run fdisk on a disk that is in use??? In my experience yes, but it depends…

4/ I should now have 5 partitions: /dev/sda1 (untouched), /dev/sda2 (untouched), /dev/sda3 (untouched), /dev/sda5 and /dev/sda6.

Using mdadm, I add /dev/sda5 and /dev/sda6 to /dev/md2.

Here is the hard technical point: we have to modify the array to take all partitions into account, here 5 partitions: /dev/sda5, /dev/sda6, /dev/sdb4, /dev/sdc4 and /dev/sdd4 (if not done properly, /dev/sda5 will be added to the array and /dev/sda6 may end up as a spare disk… maybe an idea…). “mdadm --grow …” will do the trick.

This will be a RAID 5 array with 5 used partitions and no spare.

5/ Once the array is synced (waiting is probably not even necessary), and if it works as I expect, “vgdisplay” should show some free space (the size of the smallest partition used in the md2 array). A pvresize on /dev/md2 will likely be needed first so LVM sees the grown device.

6/ Grow the logical volume with lvextend, lvresize or any other LVM tool.

7/ Finally, run resize2fs so the filesystem takes the change into account.

8/ Mount the /shares directories and /DataVolume again.
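
Putting steps 1-8 together as a command sketch (untested on a ShareSpace; the volume group and logical volume names are placeholders, the real ones show up in vgdisplay/lvdisplay):

umount /shares/* /DataVolume                         # 1/ free the data volume
mdadm /dev/md2 --fail /dev/sda4 --remove /dev/sda4   # 2/ md2 drops to clean,degraded
fdisk /dev/sda                                       # 3/ delete sda4, create extended + sda5/sda6 (~1.9 TB each)
mdadm /dev/md2 --add /dev/sda5 /dev/sda6             # 4/ new members (one sits as a spare for now)
mdadm --grow /dev/md2 --raid-devices=5 --backup-file=/tmp/md2-grow
cat /proc/mdstat                                     # wait for the reshape to finish
pvresize /dev/md2                                    # 5/ let LVM see the bigger PV; vgdisplay now shows free space
lvextend -l +100%FREE /dev/vg0/lv0                   # 6/ grow the logical volume (placeholder VG/LV names)
resize2fs /dev/vg0/lv0                               # 7/ grow the filesystem into the new space
mount -a                                             # 8/ remount /shares and /DataVolume (per fstab)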

The second weak point of this theory… will all these modifications survive a reboot?

* If the assembly and start of the array is hard-coded, it will definitely fail.

* If the hard disks are checked before use (manufactured by WDC, partitioned with EXACTLY 4 partitions), it will definitely fail.

But if it behaves cleverly and lets the system make its own decisions based on “whatever is at hand” (as I have done on other systems), the array may remain intact.

If it works, and the operation is repeated on all disks, we would have 4 disks × 2 partitions × ~2 TB, thus a ~16 TB server… not a good idea in MY opinion, as a RAID array should never be 100% trusted… bad hard drives happen: trust me.

*** IMPORTANT *** the last weak point *** 

If it works, it WILL BE NECESSARY to change from RAID 5 to RAID 6: think about it for a second…

If one hard disk fails… 2 partitions of the array fail at once… an apocalypse on RAID 5, but manageable on RAID 6.
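
In principle mdadm can do that conversion online, but it needs one extra member for the second parity and a reasonably recent mdadm, which the stock ShareSpace firmware may well not have. A sketch of what it would look like, continuing from the 5-member RAID 5 above:

mdadm /dev/md2 --add /dev/sdX4                  # spare that will hold the extra parity (placeholder device)
mdadm --grow /dev/md2 --level=6 --raid-devices=6 --backup-file=/tmp/md2-raid6
cat /proc/mdstat                                # a long reshape follows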

What is sure is that the web interface will no longer be useful or usable (unless the wix"Files".class files are tweaked), at least for the Disk/RAID management pages.

Theory B

I’ll go straight to the end result:

 - same as before, 5 partitions on each disk

 - Using LVM, I create one logical volume spanning /dev/sda5 and /dev/sda6:

pvcreate /dev/sda5 /dev/sda6

vgcreate disk_a /dev/sda5 /dev/sda6

lvcreate -l 100%FREE -n disk disk_a

  • Using mdadm, I create/modify a 4-disk array using the LVM logical volumes.

 - Using LVM,… no wait, the ShareSpace already uses LVM to create the shares, so the full stack would become:

the per-disk LVM (to manage the “big” disks) at the bottom, mdadm (to manage failures) on top of that, and the share LVM (to manage the shares) on top of everything.
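
Roughly, for the first disk and then the array (again untested; disk_a and /dev/md3 are placeholder names):

# bottom LVM layer: glue sda5 + sda6 into one big logical "disk"
pvcreate /dev/sda5 /dev/sda6
vgcreate disk_a /dev/sda5 /dev/sda6
lvcreate -l 100%FREE -n disk disk_a

# ...repeat for disk_b, disk_c, disk_d, then build the RAID 5 on top of the four LVs
mdadm --create /dev/md3 --level=5 --raid-devices=4 \
      /dev/disk_a/disk /dev/disk_b/disk /dev/disk_c/disk /dev/disk_d/disk

# top LVM layer for the shares, as the ShareSpace already does on md2
pvcreate /dev/md3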

The weak point: beware at boot time; I have no idea whether this LVM-over-RAID-over-LVM stack will be initialized in the right order.

I fear the web admin tools will no longer be useful or usable for volume management.