A plan for a backup solution with 100 per cent reliability

A system comprising hardware and software elements and practices is needed which will provide 100 per cent reliability.

That means the system will be used to store backup data. The system is expected, indeed required, to always provide healthy data stored on it. In other words, it must never happen that the system cannot provide healthy data.

That means, for instance, if some day data gets corrupted due to errors in the file system, bad sectors or similar, the system must be able to recover from it. The system must also be able to detect possible problems which can lead to loss of the data stored on it and take appropriate maintenance measures to avoid data damage.

How should such a system look, and what elements must be included? I am talking about hardware, software, methods, practices and others.

Let's use a NAS and additional external data storage as a starting point for the discussion.
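
To make the detection requirement more concrete, here is a minimal sketch in Python of what I have in mind, assuming the backup lives under a hypothetical path: checksums of all files are recorded in a manifest after each backup, and a later verification pass reports any file whose checksum no longer matches (silent corruption).

# Minimal sketch: detect silent corruption in a backup tree by comparing
# SHA-256 checksums against a previously written manifest.
# The paths below are hypothetical placeholders.
import hashlib, json, os

BACKUP_DIR = "/mnt/nas/backup"         # hypothetical backup location
MANIFEST = "/mnt/nas/backup.manifest"  # hypothetical manifest file

def file_hash(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest():
    # Run this after each backup to record the expected checksums.
    manifest = {}
    for root, _, files in os.walk(BACKUP_DIR):
        for name in files:
            path = os.path.join(root, name)
            manifest[path] = file_hash(path)
    with open(MANIFEST, "w") as f:
        json.dump(manifest, f, indent=2)

def verify_manifest():
    # Run this periodically; a mismatch means the stored data changed silently.
    with open(MANIFEST) as f:
        manifest = json.load(f)
    for path, expected in manifest.items():
        if not os.path.exists(path) or file_hash(path) != expected:
            print("CORRUPT OR MISSING:", path)

if __name__ == "__main__":
    build_manifest()
    # verify_manifest()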

Hi, you can share any ideas on the network ideas section. 

http://community.wd.com/t5/Network-Product-Ideas/idb-p/network_ideas

No such thing as 100% reliability.


A system used to store backups of other data must be 100% reliable.

Otherwise one can forget the whole concept of backups and spare oneself all the effort of maintaining them!

What use is a backup system/concept/plan that is not able to provide the data
in case of an emergency with the original data? None!

Therefore once again: The backup system must be 100% reliable.

Just for fun, nothing can be 100% reliable.  Look at the budget NASA has.  A hard drive has an MTBF.  It will die.

100%, does that cover fire, flood, space debris?

Just for conversation: 2 drives provide the same “concept” as 10 drives.  You are protected if a disc dies.  With a three-drive RAID 5, you have the same protection if one disc dies.
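
To illustrate why a three-drive RAID 5 tolerates one failed disc, here is a toy sketch (Python, made-up data blocks): the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the remaining two.

# Toy illustration of RAID 5 parity: with three "drives", the parity block
# is the XOR of the two data blocks, so any one lost block can be rebuilt.
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data1 = b"AAAAAAAA"                 # block on drive 1 (made-up data)
data2 = b"BBBBBBBB"                 # block on drive 2 (made-up data)
parity = xor_blocks(data1, data2)   # parity block on drive 3

# Drive 1 dies: rebuild its block from drive 2 and the parity block.
rebuilt = xor_blocks(data2, parity)
assert rebuilt == data1
print("rebuilt block:", rebuilt)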

The next concept is a self-healing array.  As in, if a drive gets a bad spot, it gets the good data from the other drive(s) and relocates it.  This takes huge overhead as the drives are no longer “Mirrored” and it requires a database to keep up with all the relocations.  Where do you put this database?  What happens to this database when the power is removed from it?
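
A rough sketch of that self-healing idea, assuming two copies kept as ordinary files and a hypothetical JSON file standing in for the relocation/repair database mentioned above:

# Rough sketch of the self-healing idea: verify a file against its mirror
# copy, repair the primary from the mirror if they differ, and record the
# repair in a small on-disk "database".  All paths are hypothetical.
import hashlib, json, os, shutil

PRIMARY = "/mnt/nas/primary"          # hypothetical primary copy
MIRROR = "/mnt/nas/mirror"            # hypothetical mirror copy
REPAIR_DB = "/mnt/nas/repairs.json"   # hypothetical repair database

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def heal(relpath):
    # Simplification: the mirror is trusted as the "good" copy; a real
    # scheme would decide that with per-block checksums.
    primary = os.path.join(PRIMARY, relpath)
    mirror = os.path.join(MIRROR, relpath)
    if sha256(primary) == sha256(mirror):
        return
    shutil.copy2(mirror, primary)     # repair the primary from the mirror
    db = json.load(open(REPAIR_DB)) if os.path.exists(REPAIR_DB) else []
    db.append({"file": relpath, "action": "restored from mirror"})
    with open(REPAIR_DB, "w") as f:
        json.dump(db, f, indent=2)

Note that this sketch simply keeps the database as a file on disk, which is exactly the weak point raised above: if power is lost while it is being rewritten, the record of past repairs can be lost too.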

And just for fun, if you say backups: how do you get the data from the PC to the NAS?  The only way to get a “perfect” backup is to take the PC offline and do a cold image.  No way a vendor can provide VSS writers for every application out there.

The best concept is multiple backup targets with one possibly being offsite and then you have to do regular test restores.
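
A minimal, tool-agnostic sketch of the “regular test restores” part, assuming the backup target is reachable at a hypothetical mount point: a random sample of files is restored into a temporary directory and compared byte for byte.

# Sketch of a regular test restore: restore a random sample of files from
# the backup target into a temporary directory and verify they read back
# identically.  The backup path is a hypothetical placeholder.
import filecmp, os, random, shutil, tempfile

BACKUP_DIR = "/mnt/nas/backup"   # hypothetical backup target
SAMPLE_SIZE = 20                 # files to test-restore each run

all_files = [os.path.join(root, name)
             for root, _, names in os.walk(BACKUP_DIR) for name in names]
sample = random.sample(all_files, min(SAMPLE_SIZE, len(all_files)))

with tempfile.TemporaryDirectory() as restore_dir:
    for i, src in enumerate(sample):
        dst = os.path.join(restore_dir, f"{i}_{os.path.basename(src)}")
        shutil.copy2(src, dst)                    # the "restore" step
        if not filecmp.cmp(src, dst, shallow=False):
            print("TEST RESTORE FAILED:", src)
    print("test restore of", len(sample), "files finished")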


Therefore - and as mentioned in the initial inquiry - it is an inquiry about a SYSTEM.

A system normally comprises several elements; they can be elements of different kinds, material or non-material.

You are right, all kinds of risks must be considered, fire, flood, …

I am aware of it. I forgot to ask in the question to skip such a discussion,

just to keep the discussion as modularized as possible. My fault.

I am not asking for “perfect” things. Just working and satisfying the stated requirements will be good enough;

that means 100% reliability in terms of the stated requirements. Nothing more.

 The best concept is multiple backup targets with one possibly being offsite and then you have to do regular test restores.

I agree. However, I am not sure it is sufficient.

How about detecting and recovering from errors in the file system of the backup storage?

My opinion is - based on life experience - that file system error protection/recovery is as essential

as the backup itself. It should be an integral element of the backup plan.
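
On the file system side, this is what a checksumming file system's scrub is for. A small sketch of scheduling it, assuming the backup volume is btrfs mounted at a hypothetical path (on ZFS the equivalent would be zpool scrub):

# Sketch of a periodic file system scrub on the backup volume, assuming a
# btrfs volume mounted at a hypothetical path.  Intended to be run from
# cron, e.g. weekly; any non-zero exit status is treated as "investigate".
import subprocess, sys

MOUNTPOINT = "/mnt/nas"   # hypothetical btrfs-backed backup volume

# "-B" keeps the scrub in the foreground so the exit code reflects the run.
result = subprocess.run(["btrfs", "scrub", "start", "-B", MOUNTPOINT])
if result.returncode != 0:
    print("scrub reported a problem on", MOUNTPOINT, file=sys.stderr)
    sys.exit(1)
print("scrub finished on", MOUNTPOINT)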

What about a people backup to go along with the hardware? Do you have 2 people that are fully knowledgeable about the systems?

I am not sure what your “stated requirements” are, but could you not use DFS on a few DS6100s using ReFS?

Gramps wrote:

The best concept is multiple backup targets with one possibly being offsite and then you have to do regular test restores.

^this^