Disk migration/replacement


#1

Perhaps this was answered somewhere: I understand that taking data offline causes “revenue” loss.

How long before an offline vault is considered destroyed by the protocol?

I am curious because there will be people who want to migrate their vaults from OS1 to OS2, or retire an HDD, hopefully all without losing revenue.


#2

Each data element held will have a different “death date”. It should be very random, but within a range particular to your online status over time. So the answer will likely be in the weeks timeframe.


#3

Sounds good, a simple script with rsync + a checksum check will do!

I was concerned that it might be hours or worse.
I’m planning to set up a node (first on test net 2) and I was wondering if I’ll be able to migrate its data to a better node later on without losing vault data.
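For what it’s worth, here is a minimal sketch of the checksum half of such a script, assuming the vault data is just a directory of files (the paths and function names are hypothetical, not part of any MaidSafe API): rsync does the copy, and the Python below confirms the copy is byte-identical before the old disk is retired.

```python
import hashlib
from pathlib import Path

def dir_digests(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        f.relative_to(root): hashlib.sha256(f.read_bytes()).hexdigest()
        for f in sorted(root.rglob("*"))
        if f.is_file()
    }

def dirs_match(src: Path, dst: Path) -> bool:
    """True if dst holds a byte-identical copy of every file in src, and nothing extra."""
    return dir_digests(src) == dir_digests(dst)
```

Usage would be something like `rsync -a /mnt/old-vault/ /mnt/new-vault/`, then `dirs_match(Path("/mnt/old-vault"), Path("/mnt/new-vault"))`, and only wipe the old disk if it returns `True`.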


#4

Should be no problem at all :smile:


#5

@dirvine Almost related question ;-)… will a node or vault (not sure which is the correct term for this) be able to allocate space on different disks/partitions, and to modify this later (shrink, expand, add, remove)?


#6

Yes, this should be easy to put in place. @benjaminbollen @Ross, can you perhaps add it to the BEFORE_RELEASE issues?


#7

Alright! We’ll track the feature. It’s a good suggestion to have a GUI for the production vault code that makes disk migration easy to manage.


#8

Can I add :grin:… the ability both to have multiple vault locations on a machine and to prioritise their use: as in, use all of vault1, then all of vault2, etc…
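That “fill vault1 first, then vault2” policy could be sketched roughly as below. This is purely illustrative, assuming each vault location is a directory on some disk; none of these names exist in the actual vault code.

```python
import shutil
from dataclasses import dataclass

@dataclass
class VaultLocation:
    path: str               # directory / mount point backing this vault
    priority: int           # lower number = fill this location first
    reserve_bytes: int = 0  # free space to keep untouched on the disk

def pick_location(locations, chunk_size):
    """Choose the highest-priority location that can still hold a chunk."""
    for loc in sorted(locations, key=lambda l: l.priority):
        free = shutil.disk_usage(loc.path).free
        if free - loc.reserve_bytes >= chunk_size:
            return loc
    return None  # every configured location is full
```

Each incoming chunk just asks `pick_location(...)` where to go, so a fast eMMC vault drains first and the HDD vault takes the overflow.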


#9

No, you can’t, and you’re hijacking my topic.
Just kidding.

Since this is my thread, I will offer my unsolicited opinion:
Your request is trying to make MaidSAFE do a volume manager’s or filesystem’s job…
The next thing you’ll request will be alerts for “vault almost full” situations, then soft and hard vault quotas, then vault auto-rebalancing, followed by something else.


#10

Doh… too late, I done it guv :wink:

And again… what I’m thinking is that people (me me me) will have different disks on a machine, with different performance (e.g. I will have Odroids with 64GB eMMC storage and an HDD). Others might have an SSD and an HDD.

One might want to have an eMMC vault and an HDD vault, etc., and to have one filled first (e.g. because it’s a faster disk and will get better farming performance, or because it has lots of free space, etc.).

Later, I can imagine machine resource allocations needing to change - your disk migration/replacement - and also adding another disk, meaning it would be useful to be able to move, expand, shrink, add, and remove vaults, etc. The priorities part helps a user manage their resources, both to maximise farming (which benefits the network) and as things change over time.

I know it sounds a bit like making the system handle system functions, but it’s really more about optimising the farming resources on a particular machine, and enabling these to be adjusted later.


#11

The vault itself should focus on its core business: farming. These feature requests probably come down to the necessary API or interface that would make them possible for a GUI vault manager to implement, i.e. easy for an end user to manage.


#12

Agreed, so long as the underlying code works in a way that facilitates this (i.e. more than one vault per machine account).


#13

I think you can create dozens of vaults, but a handful (one per HDD, if that’s what you’re considering) shouldn’t be a problem even on a recent x86 box with a few GB of RAM.