Auto-adjusting farming allocation to match account resource consumption

I’m in the middle of a UX thought experiment, for potential future apps.

Would it be reasonable for a client app to dynamically/automatically allocate farming resources in order to match a user’s consumption of network resources?

I.e. the client app assesses usage patterns of PUT data, and then allocates more or less farming resource on the device (mobile, let’s say) in order to balance safecoin credit/debit, reducing the need for manual top-ups or ring-fencing too much resource?

There’d need to be predictions based on usage patterns, thresholds, and some sort of multiplier built in, I’m presuming. But is it workable from a network point of view?
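To make the idea concrete, here’s a rough sketch of the balancing logic in Python. Everything in it is an assumption on my part: the units, the idea that an app can read its own PUT spend and farming yield, and that a farming allocation is settable at all.

```python
# Hypothetical sketch of the balancing idea: names, units and the
# existence of a settable farming allocation are all assumptions.

from dataclasses import dataclass

@dataclass
class UsageSample:
    puts_per_day: float       # observed PUT spend, in safecoin/day
    earned_per_gb_day: float  # observed farming income per GB allocated

def recommend_allocation(history: list[UsageSample],
                         current_gb: float,
                         multiplier: float = 1.5,
                         step_gb: float = 1.0,
                         max_gb: float = 32.0) -> float:
    """Suggest a farming allocation whose predicted income covers
    the predicted PUT spend times a safety multiplier."""
    if not history:
        return current_gb
    avg_spend = sum(s.puts_per_day for s in history) / len(history)
    avg_yield = sum(s.earned_per_gb_day for s in history) / len(history)
    if avg_yield <= 0:
        return current_gb
    target_gb = (avg_spend * multiplier) / avg_yield
    # Only recommend upward moves, in whole steps; downward resizing
    # turns out to be a separate problem (see the rest of the thread).
    if target_gb > current_gb:
        return min(max_gb, current_gb + step_gb)
    return current_gb
```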

Jim

I had heard somewhere that vaults were not going to be dynamic in sizing.

And there is definitely no ability to trim a vault, meaning you cannot remove chunks to reduce vault size. Doing that would mark your vault as misbehaving and it would be removed from the network.

Have you also considered including a program that can limit the bandwidth allocated to various programs running on your computer? That way, if the node is consuming too much bandwidth, you reduce its available bandwidth a little. You just need to keep the bandwidth high enough for it to remain a useful node.
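The usual shape of that is a token bucket. The sketch below assumes the vault’s traffic could be routed through a cooperating wrapper, which is not how the real vault works; in practice an OS-level tool (e.g. tc on Linux) would enforce a per-process cap.

```python
# Rough token-bucket sketch of the bandwidth-capping idea.
# Assumes the vault's uploads pass through this wrapper, which is an
# assumption; an OS tool would normally enforce per-process limits.

import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def throttle(self, nbytes: int) -> None:
        """Block until nbytes may be sent without exceeding the cap."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# e.g. cap vault traffic at 1 MB/s while the user is active:
bucket = TokenBucket(rate_bytes_per_s=1_000_000, burst_bytes=256_000)
```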

It might be a non-starter then.

What happens if a vault needs to change in size? Is it erased and started over? Could there be a pseudo-dynamic resizing at regular daily/weekly intervals, say, to meet a pattern of use?

I’m considering mobile first, so perhaps data transfer over time is a more useful control, if folk are on restricted data plans.

I’m thinking of ways to simplify these parameters for end users, with the goal of a more set-and-forget UX that avoids the requirement (or the perception of a requirement) for regular top-ups, inconvenient or abrupt loss of service, or over-the-top resource usage.

Jim

The only possible change in size is to the space that has not been used yet. In other words, if they allow it, you should be able to reduce or increase the spare space. Obviously there will be a limit on reducing the spare space in your vault, since there are triggers such as a full vault being removed from the network. If there is not enough reserve then it might be seen as misbehaving, as the reserve is used in case other nodes go offline.

So I would say that as long as you never reduce the spare space beyond those limits, you might be able to change the vault size.
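As a rough illustration of that rule (the reserve fraction is a made-up number, and the real misbehaviour triggers may work quite differently):

```python
# Illustrative only: the reserve fraction is invented, and the network's
# actual "misbehaving" triggers may not look like this at all.

MIN_RESERVE_FRACTION = 0.2  # assumed: keep at least 20% of the allocation spare

def can_reduce_allocation(stored_gb: float, new_allocated_gb: float) -> bool:
    """Allow shrinking only the unused portion, and only while the
    remaining spare space stays above an assumed reserve."""
    if new_allocated_gb < stored_gb:
        return False  # would require deleting chunks: not allowed
    spare = new_allocated_gb - stored_gb
    return spare >= MIN_RESERVE_FRACTION * new_allocated_gb
```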

Not sure how this helps in your dynamics though unless you are running out of disk space for your normal PC work.

The bandwidth control would be a powerful tool for reducing the effects of the vault on your network during the times you are using the internet link.

Top-ups would be one thing I would ask to be implemented in the vault. But then you could just set the vault size to all the space you will ever allocate to the vault from the start.

I may be missing something, but as a simple manual solution, when you create a vault you could be asked how much space you are willing to allocate - say 2GB. Then when the vault fills up with chunks to 1GB a warning could be issued with a prompt to increase your allotted space or risk being downgraded. In other words it would be done locally rather than by the network. Would that work?
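That local check would be simple enough; as a sketch, with the 50% warning level just mirroring the 2GB/1GB example:

```python
# Local warning check mirroring the 2GB allocation / 1GB warning example.
WARN_FRACTION = 0.5  # assumed warning threshold

def check_vault_space(allocated_gb: float, used_gb: float) -> str | None:
    """Return a warning message once usage crosses the threshold,
    otherwise None. Purely local; the network is not involved."""
    if used_gb >= WARN_FRACTION * allocated_gb:
        return (f"Vault is {used_gb / allocated_gb:.0%} full "
                f"({used_gb:.1f} of {allocated_gb:.1f} GB). "
                "Increase the allocation or risk being downgraded.")
    return None
```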

I was thinking it would be done locally, but automatically/automagically.

I’d prefer a solution where a user is confronted as little as possible with warnings, decisions on re-allocation, or adding safecoin to a wallet to ensure the continuing service of an app.

So what I was imagining was a system where farming resources were ramped up or down to provide the income needed to balance the safecoin expenditure on PUTs from an app, based on calculations/guesstimates from patterns of use of said app.

You do know the space is not locked up? So if you set your vault to 2GB, that 2GB is not locked up; you could still use it. The risk is that as chunks come in there might not be enough actual space on your disk to store them, and your vault would then become a misbehaving vault.

The ‘down’ direction is the problem. Removing stored chunks (ramping down) will mark your vault as bad. So the only way is to restart the vault with a smaller size and go through the whole process of joining and filling up again.

Cheers. Seems like I may have to go with a system of ongoing recommendations/notifications in the ‘up’ direction, while leaving downward reallocation as a manual decision for the user based on their other app space requirements.

I’d still be interested to know what the downward process would look like from a UX perspective, and whether it could be made to feel easy/seamless.

Presumably a downward re-size could be requested by the user with the restart > join > fill-up being a background task?

Downward resize would be a command-line “kill vault”, removing the directory storing the chunk files, then changing the requested size and a command-line “start_vault”. The start > join > fill is all part of the vault code.
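Wrapped up, the whole downward resize could look like a single action to the user. The command names and config key in this sketch are placeholders, not the real vault CLI:

```python
# Placeholder orchestration of the manual steps described above.
# stop_vault_cmd, start_vault_cmd and the config layout are all
# hypothetical; substitute whatever the real vault ships with.

import json
import shutil
import subprocess
from pathlib import Path

def resize_vault_down(new_size_gb: float,
                      chunk_dir: Path,
                      config_path: Path,
                      stop_vault_cmd: list[str],
                      start_vault_cmd: list[str]) -> None:
    """Kill the vault, discard its chunk store, lower the requested
    size in the config, and restart so it rejoins as a fresh vault."""
    subprocess.run(stop_vault_cmd, check=True)
    shutil.rmtree(chunk_dir, ignore_errors=True)
    config = json.loads(config_path.read_text())
    config["max_capacity_gb"] = new_size_gb  # assumed config key
    config_path.write_text(json.dumps(config, indent=2))
    subprocess.run(start_vault_cmd, check=True)  # join + fill happens in the background
```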

A few thoughts/opinions that came to mind:

It’s very logical that vaults should be fixed-size for network KISS, especially considering XOR space. You want a nice contiguous block of storage with no fragmentation. In other fields, Linux swap partitions and most virtual machines use a preallocated fixed size for best performance.

For the user to get the same effect as dynamic vault allocation, it seems like the easiest thing to do would be to run a script at the APP level that automatically spins up a new vault process with the user’s credentials. Taking advantage of some virtualization might make this easier or more intuitive for the user. A lightweight virtual machine image could be set up with all of the user credentials and a fixed but reasonable vault size (e.g. ~32GB to ~128GB for a 1TB HDD). Copies of this machine image could be spun up or down in parallel as desired. Some sort of virtualization like this would probably be standard practice for vaults running on server hardware anyway. The only downside from the user’s perspective would be that the new vaults would start out as infants, rather than borrowing the nodal age of the original. However, I think this is a good thing for keeping the network healthy. Hypothetically I could also see the farming rewards balancing out, since some of the younger nodes would be storing proportionately larger amounts of hot data. Time and experimentation will tell.

Is anything even approximating this approach feasible on mobile devices?

Multiple VMs would be asking too much of mobile devices; Google says there was some tech like that around 2010 but it didn’t go anywhere, it seems. I think Android apps already run as a sort of container on top of Linux. I’m just guessing, but for mobile I think you would want to run a chroot, and then have a “real” Linux distro inside there. Then you could easily manage the spawning of multiple vault processes (~1GB each) using the same account/wallet credentials. When you want to recover resources, the script/app would just start killing processes inside the chroot, starting with the most recent. An Android guru might know of easier ways… but going the chroot route might mean that no special development is needed to get the vault software running on Android. What has already been developed should just work. I could be wrong though… I don’t use chroots very often, so there could be other details I am unaware of.
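The spawn/kill-most-recent policy itself is only a few lines; the vault invocation and per-process size in this sketch are placeholders:

```python
# Sketch of the spawn/kill-most-recent idea. The vault binary name,
# its arguments and the ~1GB-per-process size are placeholders.

import subprocess

VAULT_CMD = ["./safe_vault"]  # hypothetical invocation, ~1GB per process
vaults: list[subprocess.Popen] = []

def scale_to(target_count: int) -> None:
    """Spawn extra vault processes, or terminate the most recently
    started ones first when resources need to be recovered."""
    while len(vaults) < target_count:
        vaults.append(subprocess.Popen(VAULT_CMD))
    while len(vaults) > target_count:
        proc = vaults.pop()  # most recent first
        proc.terminate()
        proc.wait()
```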

You would be able to run multiple copies of the vault as separate tasks.

If Android has issues with that, then you could make app1, app2, app3, etc. and just start up the apps as needed. No need for containers.