Update August 5th, 2021

Raspberry Pis could be very important as dedicated hardware, if they are powerful enough. Phones are laggy, slow things with small batteries. In the medium term, phones seem useless for doing work for the network.

2 Likes

Yeah, it's just that I used to be of the opinion that it would be a better idea to make it work on Linux first and then expand from there. But then someone explained that it is worthwhile to work on Windows and macOS at the same time, and I was convinced, though I don't remember the reasons anymore.

And because I don't have a true understanding, I still have some tendency to think that maybe one system first could be better. It sometimes seems that making it work on several systems is more work than just one. For example, you often see tests failing on one operating system while passing on others. And when thinking of all the re-arrangements going on now and gone before, I wonder if it really is worthwhile to keep all the different variations in parallel?

Thus it seemed to me that one more eccentric system in parallel might not be the best use of resources right now, even though in the end it would be a good thing to include.

6 Likes

I understand the thinking. From my side, here are the plus points:

  1. When set up in CI it becomes automatic (so a few days' work, then done)
  2. Later, when we wish to expand the user base, we won't have large issues converting code to work there
  3. Related to the above, it forces us to think about not using customised and sometimes simpler code just to get launched, then telling folks it will be X months/years to make it work everywhere.

The bad points

  1. It is still a few days' work, so there is a cost
  2. Errors on these tier-2 platforms can still be distracting, but generally they mean we have written some bad code (not always)

So it is work, but on balance I feel it's probably valuable to make sure we write code in ways that all hardware can use eventually. A deeper part is making all this wasm/wasi compatible, but the Rust ecosystem is leading the way there and many libs are starting to achieve this. Then folk can run a node while they browse etc. It's just too much for us right now, but we do watch that.
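Writing code that every platform can use usually comes down to isolating the OS-specific parts behind compile-time gates. A minimal sketch of that pattern (the function name and paths here are illustrative, not from the actual codebase):

```rust
// Platform-specific behaviour lives behind cfg attributes, so the shared
// logic compiles unchanged on every tier-1 and tier-2 target.
#[cfg(target_os = "windows")]
fn default_data_dir() -> &'static str {
    "C:\\ProgramData\\safe"
}

#[cfg(not(target_os = "windows"))]
fn default_data_dir() -> &'static str {
    "/var/safe"
}

fn main() {
    // Callers never need to know which OS they are running on.
    println!("data dir: {}", default_data_dir());
}
```

With this shape, a failing tier-2 CI job points at one small gated function rather than logic scattered through the codebase.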

22 Likes

Amen is all I have to say!

5 Likes

Good points, I share your concerns; more systems seems like more work. For me, when people mention phones I almost panic inside, because my head goes: let's get a working network going that people want to use, and then look at sub-par solutions.

For me, Windows, Linux and ARM seem like the most important, and if other systems mean more work, then focus on them after the network goes live. But that is just my inner feeling; it might not be right for various reasons.

Very good points, excellent logical thoughts!

4 Likes

I am also a bit baffled as to why connectivity is such a problem. Isn't qp2p a kind of off-the-shelf solution for connections, or at least supposed to be one? Is it popular software (if that is the right term) in general? Is it working for others? If it works for others, why not for us?

And one more fringe thing I wonder: why is the PR size checker still in place in CI, when so many PRs are larger than the limit and it is just annoying to see that red X there :sweat_smile:

3 Likes

Bugs. It's not QUIC itself but how we used it in some cases. A few reasons: misconfiguration, bad connection params, trying too hard to maintain connections (which with QUIC is not necessary), and message handling that was all over the place. In addition, our number of messages per operation was wild.

To remind us that we are still doing too much work in each PR. It's related to the above, but we must get to PRs of less than 200 lines as soon as we can.
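A size gate like that is simple to state mechanically. A toy sketch of the rule, with the 200-line threshold from the post (the function name is hypothetical, not the actual CI check):

```rust
// A PR passes the size gate only if its total churn (lines added plus
// lines removed) stays within the agreed limit.
const MAX_PR_LINES: usize = 200;

fn pr_within_limit(added: usize, removed: usize) -> bool {
    added + removed <= MAX_PR_LINES
}

fn main() {
    // A diff with 150 added and 30 removed lines still fits...
    assert!(pr_within_limit(150, 30));
    // ...but 250 added lines would get the red X.
    assert!(!pr_within_limit(250, 0));
    println!("size gate ok");
}
```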

12 Likes

There's a tiny ray of hope for mobile pocket computers too. Maybe people have seen this, but just in case it has slipped under the radar: PINEPHONE | PINE64.

Long story short, this phone will be a great candidate for the ARM builds being worked on. It's a 200-dollar phone that runs a variety of Linuxy operating systems, and it's gaining more and more attention from ordinary tinkerers and hackers.

I've yet to pick one up, but if someone gets one and gets connected to one of the next test nets, they could be claiming a world first? :smile:

13 Likes

Sounds interesting, good luck!

4 Likes

Hmm, that's very interesting. I may well try one of these. To be honest, I've kind of got bored of all the bling on smartphones now anyway.

6 Likes

This is just paradigm-shifting stuff. Besides being beneficial to the network by making it more client-side, it is also more empowering and flexible for the client. Absolutely amazing! Looking forward to the testing of parallel processing of chunks, batching, etc. with DBCs.

Register, multimap and other CRDTs are also great to see being worked in after all this time.

I wonder if upload arbitrage will be a thing? Or if that even really makes sense?

10 Likes

Thx for the update, Maidsafe devs.

Keep hacking, super ants!

3 Likes

It should be fine. Paying for many chunks/data identifies the data name, so you have paid for specific names, even though you have not necessarily uploaded them yet. So arbitrage won't work here, AFAIK. I may have misunderstood your notion though, so let me know if you are thinking of something different.
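The reason payment pins down specific data is that chunk names are derived from content. A toy illustration of the idea using the standard library's hasher (the real network uses a cryptographic hash and self-encryption; this just shows why a paid-for name can't be swapped for other data):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a chunk "name" from its content: the same bytes always map to
// the same name, so paying for a name is paying for that exact data.
fn chunk_name(content: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    content.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Identical content yields an identical name...
    assert_eq!(chunk_name(b"hello"), chunk_name(b"hello"));
    // ...while different content yields a different name.
    assert_ne!(chunk_name(b"hello"), chunk_name(b"world"));
    println!("names are content-derived");
}
```

Because a quote or payment is bound to these names, it can't be resold as generic storage for arbitrary other data.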

12 Likes

I think you're spot on. I didn't put much thought into it besides the time and money element. Say upload cost is low early on and then high later, or the other way around. But like you mention, these are unique chunks, not some generic or blank storage block/batch, so what value or market is there really? None. Plus I'm assuming storage cost will for the most part get cheaper over time, so storage bought now will be worth less later. There is no financial benefit or gain to holding off on uploading a batch, but there is the benefit of the flexibility to do so when you need to, which is very cool, IMO.

6 Likes

@dirvine, a quick question. Say someone wants to upload some famous work that is copyrighted, and they get the batch quote from the network. Does getting that quote prevent others from uploading that same data?

My inclination is no, but I feel like that would be an undesirable edge case.

4 Likes

How that would work is: they can get a quote as well, then it's first to upload.

6 Likes

Well, if it's immutable data then it doesn't matter, since both can upload and neither can prove who was first.

My only thought is the quote-to-pay process. If the data upload cost (SNT) is extremely low at some time, they can get a quote and then pay even after it rises. Then they could upload when spare space is low, yet have only paid the low price.

Thus the following scenario is possible, however unlikely or likely it may be for a group of attackers to do:

  • All start when the price (SNT) of uploading is extremely low
  • Get quotes for PBs' worth of data to upload
  • Wait until they see the price skyrocket because the network's spare space is low
  • Pay and upload. Very cheap for them, even when it's expensive to upload.

Obviously not effective when the network is so large that a PB is a fraction of the spare space.

The issue is the ability to pre-quote the uploads and, weeks/months later, pay and do the upload without the network being able to charge more as space becomes scarce.

The solution I can see is to limit the number of network events between quoting and paying. If they have to pay almost immediately (within hours/a few events), then the network has benefited, and that at least mitigates the issue. But being able to get quotes for cheap uploads of that data and pay when they attack amplifies their attack.

The quote is issued, then the person needs to pay within a small number of network events or else get an updated quote.
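The mitigation described above could look roughly like this: a quote carries the network event count at issue time and is only honoured within a small window. A sketch under those assumptions (struct, field names, and the window size are all hypothetical):

```rust
// A quote is only payable within MAX_QUOTE_AGE network events of being
// issued; after that the client must fetch a fresh, repriced quote.
const MAX_QUOTE_AGE: u64 = 10;

struct Quote {
    price: u64,           // price in SNT at issue time
    issued_at_event: u64, // network event counter when the quote was issued
}

impl Quote {
    fn is_valid(&self, current_event: u64) -> bool {
        current_event.saturating_sub(self.issued_at_event) <= MAX_QUOTE_AGE
    }
}

fn main() {
    let quote = Quote { price: 3, issued_at_event: 100 };
    assert!(quote.is_valid(105));  // paid promptly: honoured at the old price
    assert!(!quote.is_valid(500)); // weeks later: stale, must re-quote
    println!("stale quotes rejected; price {} no longer applies", quote.price);
}
```

This closes the window where an attacker could hoard cheap quotes and redeem them when space is scarce.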

4 Likes

Sounds like arbitrage to me, which makes the price more stable and consistent. Some companies could buy storage when it is cheap and sell it when it is worth more. This would ensure that people are always paying farmers, even when there is little interest at a given moment.

2 Likes

It's not a pre-quote though, it's payment. Unless I am missing something, @oetyng, but we are giving people paid receipts that allow them to store at any time; however they need to have "paid", i.e. they have minted the DBCs and burned their key (single use). So they have the DBCs in hand and use or lose them; they can't be re-issued, and if they don't upload they lose both the payment and the upload.

Interesting discussion though

To add, for clarity.

  • You can show you have made DBCs for payment.
  • This means you must have written a key revocation packet with the transaction hash in it.
  • The network can see the whole transaction (all input and output DBCs).
  • You then have DBCs made out to the network, but have not given them to the correct section (yet).

i.e. you can get a quote, but you need to have created the DBC transaction for those payments.
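The single-use property in the bullets above can be sketched as a spent-key set: once a key is burned in one transaction, it can never authorise another. A toy model, not the actual DBC code (the `Mint` type and string keys are stand-ins):

```rust
use std::collections::HashSet;

// A minimal model of single-use keys: spending records the key as burned,
// and any second spend attempt with the same key is rejected.
struct Mint {
    burned_keys: HashSet<String>,
}

impl Mint {
    fn new() -> Self {
        Mint { burned_keys: HashSet::new() }
    }

    /// Returns true if the spend is accepted, false if the key was
    /// already burned (HashSet::insert is false on a duplicate).
    fn spend(&mut self, key: &str) -> bool {
        self.burned_keys.insert(key.to_string())
    }
}

fn main() {
    let mut mint = Mint::new();
    assert!(mint.spend("alice-key-1"));  // first use succeeds
    assert!(!mint.spend("alice-key-1")); // re-issue attempt fails
    println!("keys are single use");
}
```

This is why "use or lose" holds: the burned key can't be rolled back to reclaim or re-spend the payment.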

9 Likes

Only Apple iPhones. :clown_face:

5 Likes