Sorry, wptski, to quote you, but you made me think.
When I was younger, I created turnkey systems that had no buttons except the power button that turned the system on or off. I used Debian and copied the OS to a RAM disk at boot to prevent write corruption of the OS. It was for a company that provided a service playing background music in large stores.
The player was built from old computers with old hard drives, so as long as the system booted up with a quick fsck, it was good to go. It didn’t matter if you pulled the plug or simply turned off the power mid-stream while music was playing; the system was robust.
However, because of Windows and all the systems that occasionally get corrupted, and because I’m getting older, I am much more careful about shutting down my systems.
So in thinking about this… just how robust is the Cloud OS?
It has no power switch
the shutdown option in 3.04 was buried under Settings/Utilities/Device Maintenance/Device Power… and even then the device stayed powered up until you pulled the plug (possibly the drive had been unmounted and parked).
we often just pull the plug when something goes wrong, and I did that at least a dozen times in my early days before I realized I was pulling the plug on an operating system.
fsck’ing should correct most problems
it is the Linux journaling file system that safeguards against this.
Updating file systems to reflect changes to files and directories usually requires many separate write operations. This makes it possible for an interruption (like a power failure or system crash) between writes to leave data structures in an invalid intermediate state.
For example, deleting a file on a Unix file system involves three steps:
Removing its directory entry.
Releasing the inode to the pool of free inodes.
Returning all its used disk blocks to the pool of free disk blocks.
If a crash occurs after step 1 and before step 2, there will be an orphaned inode and hence a storage leak. On the other hand, if step 2 is performed before step 1 and a crash occurs in between, the inode of the not-yet-deleted file will be marked free and may be reallocated and overwritten by something else.
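The multi-step hazard above can be sketched with a toy in-memory model (purely illustrative; the names and structures are made up, nothing like a real on-disk layout):

```python
# Toy model of an inode-based file system (illustrative names only).
directory = {"file.txt": 7}      # file name -> inode number
inodes = {7: [100, 101]}         # inode number -> list of disk blocks
free_inodes = set()              # pool of free inodes
free_blocks = set()              # pool of free disk blocks

def delete(name, crash_after_step=None):
    """Delete a file in three separate writes; optionally 'crash' mid-way."""
    inode = directory.pop(name)          # step 1: remove directory entry
    if crash_after_step == 1:
        return                           # simulated power failure
    blocks = inodes.pop(inode)           # step 2: release the inode
    free_inodes.add(inode)
    if crash_after_step == 2:
        return
    free_blocks.update(blocks)           # step 3: return the disk blocks

delete("file.txt", crash_after_step=1)

# The directory no longer references inode 7, but the inode and its
# blocks were never freed: an orphaned inode, i.e. a storage leak.
print("file.txt" in directory)   # False
print(7 in inodes)               # True  -> leaked
print(free_inodes)               # set() -> never reclaimed
```

Only a full scan of every structure (what fsck does) can notice that inode 7 is no longer reachable from any directory.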
Detecting and recovering from such inconsistencies normally requires a complete walk of the file system’s data structures, for example by a tool such as fsck (the file system checker). This must typically be done before the file system is next mounted for read-write access. If the file system is large and there is relatively little I/O bandwidth, this can take a long time and result in longer downtime if it blocks the rest of the system from coming back online.
To prevent this, a journaled file system allocates a special area—the journal—in which it records the changes it will make ahead of time. After a crash, recovery simply involves reading the journal from the file system and replaying changes from this journal until the file system is consistent again. The changes are thus said to be atomic (not divisible) in that they either succeed (succeeded originally or are replayed completely during recovery), or are not replayed at all (are skipped because they had not yet been completely written to the journal before the crash occurred).
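A write-ahead journal of this kind can be sketched in a few lines (again a toy model under simplifying assumptions; a real ext3/ext4 journal works with on-disk transactions, not Python lists):

```python
import json

journal = []                    # in a real FS this is a reserved on-disk area
state = {"file.txt": "old"}     # the main file system structures

def journaled_write(name, data, crash_before_apply=False):
    # 1. Record the complete intended change in the journal first.
    journal.append(json.dumps({"op": "write", "name": name, "data": data}))
    if crash_before_apply:
        return                  # power fails after the journal write
    # 2. Apply the change to the main structures.
    state[name] = data

def recover():
    # Replay every complete journal entry; an entry that was only
    # partially written before the crash would simply be skipped.
    for entry in journal:
        rec = json.loads(entry)
        if rec["op"] == "write":
            state[rec["name"]] = rec["data"]
    journal.clear()

journaled_write("file.txt", "new", crash_before_apply=True)
print(state["file.txt"])    # "old" -- main structures untouched by the crash
recover()
print(state["file.txt"])    # "new" -- replay completes the change atomically
```

Either the whole change lands (replayed from the journal) or none of it does, so recovery never needs the slow full-disk walk.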
Thus with all that said, perhaps @wptski is right to put it on a power strip and simply power it off every night without consequences?
So what does Western Digital say? Is the Cloud built to have the plug pulled? Is it designed to be user-proof?
What are your thoughts?
For me, I’m still safety conscious ever since the rampant 3.04 firmware upgrade, whose blinking white light left me scarred with caution. I’m even afraid to turn off my Cloud for fear that it may red-light on me due to OS or hardware corruption. Is it time to take off the gloves?