"No space left on device" while installing scripts

Hi guys,

I just bought the MyCloud and updated to the latest firmware. I first installed the NZBget script and then wanted to install the Sickbeard script, but got the following message while installing in /usr/share/: “No space left on device”.

I ran a df -h and got this result:

Filesystem Size Used Avail Use% Mounted on
rootfs 1.9G 1.9G 0 100% /
/dev/root 1.9G 1.9G 0 100% /
tmpfs 23M 964K 22M 5% /run
tmpfs 40M 4.0K 40M 1% /run/lock
tmpfs 10M 0 10M 0% /dev
tmpfs 5.0M 0 5.0M 0% /run/shm
tmpfs 100M 140K 100M 1% /tmp
/dev/root 1.9G 1.9G 0 100% /var/log.hdd
ramlog-tmpfs 20M 1.8M 19M 9% /var/log
/dev/sda4 1.8T 11G 1.8T 1% /DataVolume
/dev/sda4 1.8T 11G 1.8T 1% /CacheVolume
/dev/sda4 1.8T 11G 1.8T 1% /shares
/dev/sda4 1.8T 11G 1.8T 1% /nfs/Public
/dev/sda4 1.8T 11G 1.8T 1% /nfs/SmartWare
/dev/sda4 1.8T 11G 1.8T 1% /nfs/TimeMachineBackup

I’m a bit shocked to run into free-space problems so soon. I assume the 1.9G partition is the problem. Any ideas on how I can free up some space here? Surely I’m missing something, because I know others run both scripts simultaneously.

Cheers

Ok, so I gave it another shot today and everything suddenly installed just fine. This is without changing anything!?

I thought I was finally figuring Linux out, but I guess I was wrong :neutral_face:

joskevermeulen wrote:

Ok, so I gave it another shot today and everything suddenly installed just fine. This is without changing anything!?

The root partition’s usage is transient. The space available will vary over time.

Pretty much everything is in that small partition EXCEPT share data, the ramlogs, and the /run tree.

When I started writing this post, the root partition was 23% used.   Now it’s at 26%.

But like any other volume, if it ever fills up, bad things can happen…
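If you want to keep an eye on it over SSH, something along these lines works (a sketch; the 90% threshold is an arbitrary choice, not part of the firmware):

```shell
# Warn when the root filesystem passes 90% used.
# df -P gives POSIX single-line-per-filesystem output, so awk can
# reliably grab the "Capacity" column ($5) from the second line.
PCT=$(df -P / | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
if [ "$PCT" -gt 90 ]; then
    echo "WARNING: root filesystem at ${PCT}% - installs may fail" >&2
fi
```

You could run that from cron if you keep installing scripts onto the root partition.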

I’m usually very sceptical when it comes to updating, but didn’t really think it over this time.

You guys only have 23 and 33% used space… How on earth is it possible that I’m using 100%? It’s not like I installed over a gigabyte worth of scripts.

Well, since I just started using the drive, I don’t mind wiping all data and starting over while keeping an eye on the free space.

Will a Full Restore to factory from the wdmycloud browser interface be enough? Or will this only affect the public data?

To see what’s filling up the partition, you can look at the directory tree to see where the space has gone.

Here’s mine for comparison.

CloudNAS:/# du -x -d 1
4 ./DataVolume
5540 ./sbin
312168 ./usr
4 ./home
3932 ./bin
4 ./mnt
4 ./media
0 ./run
16 ./root
4 ./opt
102408 ./var
0 ./dev
5756 ./etc
4 ./selinux
8 ./srv
0 ./sys
44 ./nfs
0 ./proc
4 ./shares
16 ./lost+found
3656 ./boot
14884 ./lib
4 ./CacheVolume
0 ./tmp
448464 .
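When one entry dominates (here it would be /usr), the same idea works one level down; a sketch, sorted so the biggest offenders end up last:

```shell
# Drill into /usr and show its five largest subdirectories.
# -x stays on this filesystem, -d 1 limits output to one level,
# sort -n orders by the size column du prints first.
du -x -d 1 /usr 2>/dev/null | sort -n | tail -n 5
```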

joskevermeulen wrote:

Hi guys,

I just bought the MyCloud and updated to the latest firmware. I first installed the NZBget script and then wanted to install the Sickbeard script, but got the following message while installing in /usr/share/: “No space left on device”.

I ran a df -h and got this result:

Just noticed a detail here… Where are these install instructions? Were they tailored to this NAS?

The /usr/share path is on the root partition, so if the script was occupying space there (particularly if it was downloading to the same path), it could easily have filled up root.
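An easy way to check where a path actually lives before installing into it (a sketch using POSIX `df`; on the stock firmware /usr/share resolves to the ~2 GB root partition, not the 1.8 TB data volume under /shares):

```shell
# Print the device, mount point, and blocks available for the
# filesystem that /usr/share lives on. If the device is the small
# root partition, big downloads belong somewhere else.
df -P /usr/share | awk 'NR==2 {print $1, $6, $4}'
```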

TonyPh12345 wrote:

Just noticed a detail here… Where are these install instructions? Were they tailored to this NAS?

I don’t know. They came from this forum though: NZBget and Sickbeard

I did a full restore, but the process gave an error and stopped. After that I couldn’t access my public files and the disk showed 0 kB capacity in the browser. Two system restores and errors later, I couldn’t access the disk at all (no browser, no SSH) and the LED is showing red…

I believe the phrase “wow that escalated quickly” is pretty suited here :neutral_face:

LinAdmin wrote:

Probably “full restore” also needs enough free space on the root partition.

You were right. After a “hard reset” (pressing the reset button for 40 seconds), my PC recognized the drive and I got an SSH connection. It turns out a folder from my public shares was also inside the root folder (no idea why or how), and it had clogged up my root partition. I deleted it, and afterwards the system restore succeeded without errors.

However… now when I access the drive through the browser, I get a message that my DataVolume failed to mount, and my capacity shows as 1.2 GB. Checking my disk space through SSH gave me:

/dev/root 1.9G 645M 1.2G 36% /
tmpfs 23M 328K 23M 2% /run
tmpfs 40M 4.0K 40M 1% /run/lock
tmpfs 10M 0 10M 0% /dev
tmpfs 5.0M 0 5.0M 0% /run/shm
tmpfs 100M 116K 100M 1% /tmp
/dev/root 1.9G 645M 1.2G 36% /var/log.hdd
ramlog-tmpfs 20M 3.4M 17M 17% /var/log
/dev/root 1.9G 645M 1.2G 36% /CacheVolume
/dev/root 1.9G 645M 1.2G 36% /nfs/TimeMachineBackup
/dev/root 1.9G 645M 1.2G 36% /nfs/Public
/dev/root 1.9G 645M 1.2G 36% /nfs/SmartWare

This doesn’t look right at all; my 2 TB storage is nowhere to be seen. I really don’t know what to do next.

PS: I’m pretty sure I erased all traces of my custom-installed scripts

Well, apparently not.  

It’s quite possible that while the root partition was full, some other configuration files got clobbered.

That’s most likely why your DataVolume partition isn’t mounting, and why the nfs mountpoints are in the wrong place.

It looks like your /dev/sda4 partition is GONE.

If another System Restore doesn’t correct that, your firmware is now corrupt and you’ll have to run the Debrick guide.
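One way to confirm whether the partition really is gone at the block level (device names here are the ones from this thread; yours may differ):

```shell
# /proc/partitions lists every partition the kernel currently sees.
# If sda4 is absent, the partition table itself is damaged and a
# System Restore alone won't bring the data volume back.
cat /proc/partitions
grep -q 'sda4' /proc/partitions && echo "sda4 present" || echo "sda4 missing"
```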

The system restore didn’t work, but I’m glad a quick restore did. The DataVolume partition is mounted again and back at 2 TB capacity.

I’m going to install those scripts again, and this time I’ll keep a close eye on the root partition!