BUG: Created one user and one private share, yet the firmware still processes every share

I have created one user and directed the MyBook Live to create only one private share for that user, yet the scripts that get executed appear to work on every share defined on the MyBook Live.  This delays completion of this simple task and may cause the Dashboard UI to time out more often.  When I discovered this issue the Dashboard UI (this time) did not time out.

I think the issue is with /usr/local/sbin/genApacheAccessRules.sh.
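To make the point concrete, here is a rough sketch, in shell, of the difference I suspect is at play.  This is NOT the real genApacheAccessRules.sh (I'm not reproducing WD's code here); list_shares and gen_rules_for_share are made-up placeholder names, just to show the shape of the problem.

```
# Hypothetical sketch only -- not the real WD script.
# 'list_shares' and 'gen_rules_for_share' are invented placeholder names.

# What the firmware appears to do on every change: walk ALL shares.
for share in $(list_shares); do
    gen_rules_for_share "$share"
done

# What would keep the operation cheap: regenerate the rules only for the
# share that was just created or modified (passed in as $1).
changed_share="$1"
gen_rules_for_share "$changed_share"
```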

(MyBookLive 02.02.02-020 : Core F/W)

Now this is something I have noticed as well! Dude, you're finding more bugs than WD, that's awesome!! :smiley:

Haven't I told everyone that my profession for many years has been computer systems and network administrator, programmer and software quality control?  Try to imagine my frustration when I write common-sense stuff and get little, no or negative response.

I've had time to examine how some things are strung together within the MyBook Live, and I can see a few programmers somewhere within Western Digital are being a bit lazy.  If the quality control were robust I would not have started this thread describing a problem, because there would be no problem to describe!

The concept of the MyBook Live is good.  In a way I quite like it, as it sits in a corner doing its stuff, and you do have direct access to the internal Linux operating system, which has its uses.  For example, if I can't find a file I use Linux's find command directly on the MBL and it does a fine, VERY FAST job of locating files.  The rule of thumb here is to be very careful what you do once you have direct access to the Linux operating system.  This SSH feature has already saved me from having to perform a factory format on two occasions: once when Twonky's database kept becoming corrupted, and again when, with the previous firmware, a simple reboot rendered the Dashboard/Web UI totally inaccessible.  I had to directly invoke the upgrade script, which redid the upgrade and brought the Web UI back to life.  Without SSH access I would have been forced to send the MBL back to Western Digital.
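For what it's worth, the sort of search I mean looks like this.  The /DataVolume/shares path is simply how my unit is laid out; adjust it to wherever your own shares live:

```
# Search every share for a file name, case-insensitively,
# hiding "permission denied" noise from system folders.
find /DataVolume/shares -iname '*holiday*.jpg' 2>/dev/null
```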

It's just a pity that the SSH facility is disabled by default, but there are people out there who have numb nuts for brains and, to be able to access the MBL from the internet, will stupidly put their MBL into the router's DMZ!!!  So in that case maybe it is better for the SSH facility to be disabled by default.

Oh yeah…  Put two MBLs with up-to-date firmware on the same LAN and the nmbd processes on both MBLs will battle endlessly over which one gets to be the network's browse master!   (This behaviour is described in the Samba documentation.)
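If anyone wants to stop one of the two boxes from contesting that election, the standard Samba way is to tell its nmbd not to stand as local master.  The three settings below are stock Samba options; the smb.conf location and reload step on the MBL are my assumptions, so check before pasting:

```
# Standard Samba settings to stop this box competing for browse master.
# The smb.conf path on the MBL is an assumption -- confirm it with:
#   smbd -b | grep CONFIGFILE
cat >> /etc/samba/smb.conf <<'EOF'
[global]
    local master = no
    preferred master = no
    os level = 0
EOF

# Ask the running daemons to pick up the change (or just restart Samba).
smbcontrol all reload-config
```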

Yup…   and if one DELETES FILES through SSH, it will now “corrupt” the file system, making it impossible to copy files back into the share, requiring the box to be rebooted.

I’ve not come across what you’ve described.  Please do elaborate.

TonyPh12345 wrote:

Yup…   and if one DELETES FILES through SSH, it will now “corrupt” the file system, making it impossible to copy files back into the share, requiring the box to be rebooted.

 

Let’s say I go into the MBL via SSH and delete a whole folder… my iTunes folder contains about 10,000 tracks.

If I then open that share in Windows, it does see that the share is EMPTY, but if I then copy my iTunes folder back over to it, Windows starts throwing all sorts of errors: the file system being full, the file being too large, the filename being invalid, the file already existing, etc. etc.

If, instead, I delete the folder from within WINDOWS (which takes a VERY long time) and then copy the same folder back, it copies without issue.

So, if indeed there were ILLEGAL FILENAMES, or files that are TOO LARGE, it shouldn’t matter how the folder was DELETED beforehand…  

So there is apparently something inside the MBL that does NOT get updated if a user deletes files or folders by way of the SSH shell.

Trying to find out why will keep me occupied for a while.  Thanks.  :smiley:
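My first step will probably be something along these lines.  These are plain Linux tools, nothing MBL-specific, and the theory that some usage accounting above the filesystem goes stale is only a guess at this point; the share path is from my own setup:

```
# Reproduce the problem while watching what the OS itself reports.
SHARE=/DataVolume/shares/Public   # adjust to the affected share

df -h "$SHARE"          # free space as the kernel sees it, before
du -sh "$SHARE/iTunes"  # size of the folder about to be deleted

rm -rf "$SHARE/iTunes"  # the SSH-side delete that supposedly "corrupts" things
sync

df -h "$SHARE"          # free space after -- if this looks correct but SMB
                        # copies still fail with "disk full" style errors,
                        # the stale state is in a layer above the filesystem
                        # (Samba/WD bookkeeping), not in the filesystem itself.
```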