SAFE Network Dev Update - March 26, 2020

The train keeps on chugging! Liking those images a lot! And everything else, which I’ve been reading with somewhat more comprehension, because of reading so many of these updates consistently, among other reasons. Though I wish I could help more by testing, without wanting to pass out from Florida heat/humidity + working on everything else in my life.

I saw Node Ageing was completed on the roadmap. And there’s no mention of it here in the update! That’s what I like to see: major milestone checked off—yet there’s so much else to talk about that it doesn’t get a mention! Unless that’s part of the new iteration (or I overlooked something).

11 Likes

Testing is not that difficult - it can’t be, I can just about manage it. It helps greatly if you run Linux, but I am sure the simple scripts I provided could be easily adapted for PowerShell.
Have a go and let us know where you fail, we’ll try to help you along.

8 Likes

Long post follows… from a fun couple of hours playing :smiley:

So, I’ve made some progress on the back of this update, understanding some of the errors that follow from different uploads.

Below, the detail of three flavours of trigger.

A. ERRORs from uploading the same file more than once
B. ERRORs from LARGE files (1GB)
C. ERRORs from HUGE files (3GB+)


A. ERRORs from uploading the same file more than once

#AccessDenied is caused by uploading the same file more than once.
The FilesContainer is new each time, but the file itself trips the AccessDenied error below.

# upload
FilesContainer created at: "safe://hnyynywzeag1rj869d4xpom8jpnamoejosdy9w85rx1e7nrr1rqhwt6paobnc"
E  ./to-upload/file.dat  <[Error] NetDataError - Failed to PUT Published ImmutableData: CoreError(Self-encryption error: Storage error: Data error -> Access denied - CoreError::SelfEncryption -> Storage(SEStorageError(Data error -> Access denied - CoreError::DataError -> AccessDenied)))> 
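This would fit with self-encryption being content-addressed: identical bytes map to identical chunk names, so a second PUT of the same file targets data the network already holds. A minimal sketch of the idea, using sha256sum as a stand-in for the network's chunk naming (not the actual SAFE code path):

```shell
# Two files with identical bytes get identical content hashes, so a
# second PUT of the same data would collide with chunks already stored.
head -c 1024 /dev/zero > /tmp/a.dat
head -c 1024 /dev/zero > /tmp/b.dat
sha256sum /tmp/a.dat /tmp/b.dat | awk '{print $1}' | sort -u | wc -l
# prints 1 -- only one unique hash for the two files
```

Regenerating the file with fresh random data between runs avoids the collision, which is why putting the dd inside the loop matters.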

B. ERRORs from LARGE files (1GB)

#Three different errors tonight on notionally the same upload

[2020-03-26T22:06:09Z ERROR safe] safe-cli error: Failed to connect: [Error] InvalidInput - Failed to decode the credentials: InvalidMsg

[2020-03-26T22:18:59Z ERROR safe] safe-cli error: [Error] NetDataError - Failed to PUT Sequenced Append Only Data: CoreError(RequestTimeout - CoreError::RequestTimeout)

Third attempt worked fine

  • ./to-upload/file.dat safe://hbkyyodicekb4febtk3j7c7r9rky4wxprwbz898m71c7imh9qiozrpu7k7

I haven’t time to do this over again, but I expect to see the same results trying the same again… I’m tired and didn’t see anything that should have prompted the InvalidInput instance,
but note the one that succeeded likely did so only by being quicker than the RequestTimeout.

So, for the RequestTimeout, I saw times of

real	9m31.551s
user	4m59.597s
sys	0m38.782s

whereas the successful upload was just quicker at

real	8m15.798s
user	5m11.232s
sys	0m37.689s

1of3 attempt at 1GB =>

############
file: 1
size: 977M
# upload
[2020-03-26T22:06:09Z ERROR safe] safe-cli error: Failed to connect: [Error] InvalidInput - Failed to decode the credentials: InvalidMsg

real	0m0.009s
user	0m0.000s
sys	0m0.005s

# vault size
safe-vault-1	188K
safe-vault-2	188K
safe-vault-3	180K
safe-vault-4	124K
safe-vault-5	124K
safe-vault-6	68K
safe-vault-7	100K
safe-vault-8	100K

2of3 attempt at 1GB =>

############
file: 1
size: 977M
# upload
[2020-03-26T22:18:59Z ERROR safe] safe-cli error: [Error] NetDataError - Failed to PUT Sequenced Append Only Data: CoreError(RequestTimeout - CoreError::RequestTimeout)

real	9m31.551s
user	4m59.597s
sys	0m38.782s

# vault size
safe-vault-1	882M
safe-vault-2	871M
safe-vault-3	863M
safe-vault-4	879M
safe-vault-5	880M
safe-vault-6	875M
safe-vault-7	881M
safe-vault-8	68K

3of3 attempt at 1GB =>

############
file: 1
size: 977M
# upload
FilesContainer created at: "safe://hnyynywmnewjdqyqq1oetc5o577gok6irqs95zi9ryqceiprmm1xpkgemobnc"
+  ./to-upload/file.dat  safe://hbkyyodicekb4febtk3j7c7r9rky4wxprwbz898m71c7imh9qiozrpu7k7 

real	8m15.798s
user	5m11.232s
sys	0m37.689s

# vault size
safe-vault-1	981M
safe-vault-2	981M
safe-vault-3	100K
safe-vault-4	100K
safe-vault-5	981M
safe-vault-6	100K
safe-vault-7	100K
safe-vault-8	68K
############

C. ERRORs from HUGE files (3GB+)

#Repeatable error

The script I use creates a delay at the start, which is simply the creation of the random data file. Beyond that, this huge-file attempt fails quickly, as below, with the vaults hardly starting to fill. Watching the System Monitor, it is obvious that RAM and swap are chewed through at a rate of knots before it fails.

attempt at 3GB =>

############
file: 1
size: 2.9G
# upload
Well, this is embarrassing.

safe-cli had a problem and crashed. To help us diagnose the problem you can send us a crash report.

We have generated a report file at "/tmp/report-325577f4-6d7e-44fe-ac47-e9ceb4e77f7d.toml". Submit an issue or email with the subject of "safe-cli Crash Report" and include the report as an attachment.

- Authors: bochaco <gabrielviganotti@gmail.com>, Josh Wilson <joshuef@gmail.com>, Calum Craig <calum.craig@maidsafe.net>, Chris O'Neil <chris.oneil@gmail.com>

We take privacy seriously, and do not perform any automated error collection. In order to improve the software, we rely on people to submit reports.

Thank you kindly!
[2020-03-26T22:36:01Z ERROR safe] safe-cli error: [Error] Unexpected - Failed to retrieve account's public BLS key: Unexpected (probably a logic error): send failed because receiver is gone

real	0m29.455s
user	0m13.479s
sys	0m5.309s

# vault size
safe-vault-1	168K
safe-vault-2	164K
safe-vault-3	100K
safe-vault-4	160K
safe-vault-5	100K
safe-vault-6	100K
safe-vault-7	100K
safe-vault-8	68K

Potentially the errors relate to file size versus RAM (it seems to me the swap is simply swamped after the RAM fills, which is what that oversized file triggers).
So, the above was on:
CPU(s): 4
Model name: Intel(R) Core™ i3-7100U CPU @ 2.40GHz
16254420 K total memory
10239996 K total swap
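One way to test that theory would be to log memory alongside the upload. A one-shot snapshot of the relevant figures, straight from /proc/meminfo (Linux only); loop it with sleep in a background job next to the safe files put if you want a timeline:

```shell
# Snapshot available RAM and free swap -- the two numbers the
# System Monitor shows being chewed through during a huge-file PUT
grep -E '^(MemAvailable|SwapFree):' /proc/meminfo
```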

===================================

“Method”

My method is roughly noted below, and the script I use is below that.

# https://github.com/maidsafe/safe-api/blob/master/safe-cli/README.md#description
#rough:

#START HERE FRESH
./install.sh 
safe auth install
safe vault install
#OR START HERE UPGRADE
safe update
safe auth update
safe vault install
#START-VAULTS
safe vault run-baby-fleming
safe auth start
safe auth create-acc --test-coins
#phrase/word

safe auth login
#phrase/word

safe auth subscribe

#another terminal
safe auth
#back to subscribed terminal
#Enter and copy auth allow
auth allow 9999999999


## stop with
safe vault killall
safe auth stop
# remove all vaults (= folders in ~/.safe/vault/baby-fleming-vaults/)

GOTO START-VAULTS
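For the "remove all vaults" step, a guarded one-liner sketch (the :? expansion aborts if the variable is ever empty, so a blank path can never wipe /); the scratch-dir fallback is purely for illustration on machines with no real vaults:

```shell
VAULT_DIR="$HOME/.safe/vault/baby-fleming-vaults"
# Illustration only: fall back to a scratch dir if no real vaults exist
[ -d "$VAULT_DIR" ] || { VAULT_DIR=$(mktemp -d); mkdir -p "$VAULT_DIR/safe-vault-1"; }
rm -rf "${VAULT_DIR:?}"/*            # :? guards against an unset/empty path
ls -A "$VAULT_DIR" | wc -l           # prints 0 -- vault folders gone
```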

Simple bash script for uploads

#!/bin/bash 

#Simple script to upload COUNTER x SIZE files and log result
#change count=6000 for different size of file
#change while [ $COUNTER -lt 10 ]; do for different number of attempts

## Setup
#Expects safe baby-fleming to be setup and running
mkdir ./zzz_log 2>/dev/null
mkdir ./to-upload 2>/dev/null

## Base state
#log base state
echo "### START" > ./zzz_log/report
date >> ./zzz_log/report
lscpu | grep -P 'Model name|^CPU\(s\)' >> ./zzz_log/report
vmstat -s | grep -P 'total memory|total swap' >> ./zzz_log/report
echo "# initial vault size" >> ./zzz_log/report
du -sh ~/.safe/vault/baby-fleming-vaults/* | sed 's#^\([^\t]*\).*/\([^/]*\)#\2\t\1#' | sed 's/genesis/1/' | sort >> ./zzz_log/report

## Standard file creation here
#dd if=/dev/urandom of=./to-upload/file.dat bs=1k count=6000 2>/dev/null
### CAUSES ERROR: Using that line above atm causes error=
### E  ./to-upload/file.dat  <[Error] NetDataError - Failed to PUT Published ImmutableData: CoreError(Self-encryption error: Storage error: Data error -> Access denied - CoreError::SelfEncryption -> Storage(SEStorageError(Data error -> Access denied - CoreError::DataError -> AccessDenied)))>


## Start
COUNTER=0
while [ $COUNTER -lt 10 ]; do
let COUNTER=COUNTER+1 

## non standard file creation here
dd if=/dev/urandom of=./to-upload/file.dat bs=1k count=6000 2>/dev/null

echo "file: "$COUNTER 
echo "############" >> ./zzz_log/report
echo "file: "$COUNTER >> ./zzz_log/report
echo "size: "$(ls -hs ./to-upload/file.dat | sed 's/^\([^ ]*\).*/\1/') >> ./zzz_log/report

echo "# upload" >> ./zzz_log/report
(time safe files put ./to-upload/file.dat ) &>> ./zzz_log/report 

echo >> ./zzz_log/report
echo "# vault size" >> ./zzz_log/report
du -sh ~/.safe/vault/baby-fleming-vaults/* | sed 's#^\([^\t]*\).*/\([^/]*\)#\2\t\1#' | sed 's/genesis/1/' | sort >> ./zzz_log/report

echo "upload: "$COUNTER" complete"
done

date >> ./zzz_log/report
echo "### END" >> ./zzz_log/report

## Summary pivot
echo -ne "\tfile:\t0\tsize: 0\t#\t\t\t\treal\t0\tuser\t0\tsys\t0\t\t" > ./zzz_log/summary_table_report; tail -n +7 ./zzz_log/report | tr '\n' '@' | sed 's/############/\n/g' | sed 's/@/\t/g' | sed 's/file: /file:\t/' >> ./zzz_log/summary_table_report

exit
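The summary pivot's trick is worth a note: tr folds the whole report onto one line, the first sed splits it back into one line per ############ block, and the second sed turns the @ separators into tabs. A toy run of the same pipeline, on hypothetical input:

```shell
# Each ############-delimited block becomes one tab-separated row
printf 'a\nb\n############\nc\nd\n' |
  tr '\n' '@' | sed 's/############/\n/g' | sed 's/@/\t/g'
```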

:+1: :+1: :+1:

23 Likes

oh… and perhaps useful to note: for larger file uploads, you can watch the vault sizes with

du -sh ~/.safe/vault/baby-fleming-vaults/* | sed 's#^\([^\t]*\).*/\([^/]*\)#\2\t\1#' | sed 's/genesis/1/' | sort

6 Likes

And talking of help, I need some myself…

So far I have a script that asks the questions
willie@gagarin:~/projects/maidsafe/nappies$ ./test_and_report.sh

----------------------------------------------------------------------

    --  Test baby-fleming network and provide reports --

    @davidpbrown and @southside of the SAFE community March 2020
              https://safenetwork.org

       Is your baby-fleming network running?

    If not press Ctrl-C, start your network and run this script again.

----------------------------------------------------------------------



How much random data do you want to put to the network (kb) ? : 20

How many test runs do you want? : 20


PUTing  20 kb of random data to the network  20 times
--------------------------------------------------------------------------

which gives the following output …
### START
Thu 26 Mar 23:10:23 GMT 2020
CPU(s): 4
Model name: Intel(R) Core™ i5-2500K CPU @ 3.30GHz
16378512 K total memory
14648316 K total swap
DISTRIB_DESCRIPTION="Ubuntu 18.04.4 LTS"
Linux 4.15.0-91-generic x86_64

PUT  20 kb of random data to the network  20 times

1,20K,	0:00.99 
FilesContainer
+

2,20K,	0:01.02 
FilesContainer
+

3,20K,	0:01.04 
FilesContainer
+

4,20K,	0:01.01 
FilesContainer
+

the lines that do the real work are

  printf $COUNTER","$(ls -hs ./to-upload/file.dat | sed 's/^\([^ ]*\).*/\1/')"," >> ./zzz_log/report

/usr/bin/time -o ./zzz_log/report -a -f "\t%E "  safe files put ./to-upload/file.dat | sed 's/^\([^ ]*\).*/\1/' |tee -a ./zzz_log/report   

With thanks to @davidpbrown for the sed command, which I still cannot fathom, we are nearly there. A bit of tidying up and we will have something that can generate a graph, either from a spreadsheet or usable by R.
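For anyone else who can't fathom that sed: it captures the size field (everything up to the first tab) as \1 and the last path component as \2, then emits them swapped as NAME, tab, SIZE. A standalone demo on a hypothetical du line (the path is made up):

```shell
# du prints "SIZE<tab>/full/path/to/vault-name"; the sed swaps that
# around to "vault-name<tab>SIZE" so the list sorts nicely by vault
printf '882M\t/home/u/.safe/vault/baby-fleming-vaults/safe-vault-2\n' |
  sed 's#^\([^\t]*\).*/\([^/]*\)#\2\t\1#'
# prints safe-vault-2, a tab, then 882M
```

The `#` delimiters are just to avoid escaping the slashes in the path; `.*` is greedy, so it always reaches the last `/`.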

9 Likes

The test_and_report.sh script is available at GitHub - willief/change-nappies: wee script to run new baby Fleming network

PLEASE fork it and improve on it.

5 Likes

How many more babies are we gonna have before Fleming?

4 Likes

Dunno - are you offering to have my babies?
Please provide a photo (undraped) and evidence of your COVID-19 antibody status and we might talk about it.

2 Likes

UPDATE: I have a very kludgy grep hack that gives me the CSV output I so earnestly desire.
Right now I have a batch of 1000 PUTs of 50k running, so I’m reading up on R to make use of this data.

2 Likes

1000 runs of 50k had one failure. Avg time to PUT 50kb: 1.227 secs.
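An average like that can be pulled straight from the CSV with awk. A sketch, assuming rows shaped like the run,size,elapsed output above (elapsed in /usr/bin/time's M:SS.ss format); the file name and sample values are illustrative:

```shell
# Build a tiny sample log in the run,size,M:SS.ss shape
printf '1,50K,0:01.10\n2,50K,0:01.30\n3,50K,0:01.20\n' > /tmp/put_times.csv

# Split M:SS.ss into minutes and seconds, then average the elapsed times
awk -F, '{ split($3, t, ":"); sum += t[1]*60 + t[2]; n++ }
         END { printf "avg PUT: %.3f secs over %d runs\n", sum/n, n }' /tmp/put_times.csv
# prints: avg PUT: 1.200 secs over 3 runs
```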

4 Likes

@ravinderjangra this is a huge improvement to the user experience, very nice!

@JimCollinson and everyone else @maidsafe regarding the UI/UX and network features within the mockups. Not to sound too vulgar but if you listen really closely I’m pretty sure you can hear Dropbox shitting themselves. Being able to publish to a site on mobile is pretty damn epic I must say.

The work @adam did in routing is extensive, huge props!

This sounds handy.

Looking forward to hearing more cause this sounds so cool and it’s great it’s making its way in now. The road to Fleming seems so much more focused than anything in the past. Very promising.

The world needs SAFE and a new economy more than ever. If we can get these tools into peoples hands then people will have powers the current internet lacks to revolutionize our lives. Keep up the great work Maidsafe!

16 Likes

Thanks so much this will be very useful :raised_hands:

This helps to pin it down! :+1:

The first error seems to be something related to the CLI reading the credentials. Perhaps @bochaco or @joshuef can jump in here.

The second one is probably because all the vaults are running on the same machine and hence it’s taking a while to process it. Just a guess though.

Could you please share the crash report file so that I can take a look at this?

9 Likes

Doesn’t it seem a bit strange to say that publishing is “Permanent and cannot be undone”, followed by a little box that shows undo? I get how it works behind the screen, but the average person may not, and could assume it could be undone anyway.

And didn’t renaming, moving, and stuff like that all cost a (minute) amount of safecoin? Shouldn’t that be mentioned somewhere? I think that may get fairly expensive if you’re going to be moving mountains of data.

1 Like

Will do, but it’ll be this evening, +10hrs from now. I wonder if it’ll be the same as the one I emailed a few days ago, as per that error, to

Authors: bochaco <gabrielviganotti@gmail.com>, Josh Wilson <joshuef@gmail.com>, Calum Craig <calum.craig@maidsafe.net>, Chris O'Neil <chris.oneil@gmail.com>

4 Likes

No worries. Send it over whenever you can.
It’s probably not the same, but I guess we’ll see :slight_smile:

3 Likes

I think, @lionel.faber, this will be related to the artificial limit in self-encryption. Removing the wait until close() and uploading chunks as we go will remove such limits there. It may not be the case here, but likely it is.

10 Likes

If you send me a sample file or paste it into a PM I’ll see what I can do with it in VisLab.

3 Likes

I’ll try to replicate this later and send on the results
@lionel.faber

4 Likes

I’ll forward the report that @davidpbrown @digipl sent to you, @lionel.faber. From looking at the report, it seems to be an issue with self-encryption not handling such a scenario:

   7:     0x55db51246c51 - core::panicking::panic_fmt::hb1f3e14b86a3520c
                               at src/libcore/panicking.rs:85
   8:     0x55db5124a806 - core::slice::slice_index_len_fail::h47cc3c9d8ac81271
                               at src/libcore/slice/mod.rs:2674
   9:     0x55db50b00fe3 - self_encryption::self_encryptor::State<S>::create_data_map::h6ca5edf9d377a862
5 Likes

This is the crash report I got after I tried to feed it a 4GB+ Win10 .iso

-rw-rw-r-- 1 willie willie 4697362432 Jan 27 2018 Win10_1709_English_x64.iso

willie@sputnik:/tmp$ cat report-e0ac5e81-1a22-4770-b47a-9dea8c5008af.toml 
name = 'safe-cli'
operating_system = 'unix:Ubuntu'
crate_version = '0.11.0'
explanation = '''
Panic occurred in file 'src/libcore/slice/mod.rs' at line 2674
'''
cause = 'index 1074790400 out of range for slice of length 1073741824'
method = 'Panic'
backtrace = '''

   0: 0x565295557105 - self_encryption::self_encryptor::SelfEncryptor<S>::close::hd0bc46fce2f7a775
   1: 0x5652956a4eac - futures::future::chain::Chain<A,B,C>::poll::h85415fb91dc5395d
   2: 0x5652956741cc - <futures::future::map_err::MapErr<A,F> as futures::future::Future>::poll::hd6d5d3dad4b5f994
   3: 0x565295696960 - futures::future::chain::Chain<A,B,C>::poll::h0e248840901bfbe2
   4: 0x56529564fb99 - <futures::future::and_then::AndThen<A,B,F> as futures::future::Future>::poll::h4d5b42930eb9ee5d
   5: 0x5652956ab105 - futures::future::chain::Chain<A,B,C>::poll::hc35e735daf20f07c
   6: 0x5652956b0fe8 - futures::future::chain::Chain<A,B,C>::poll::hec08b8d120bef074
   7: 0x5652955f81c2 - futures::task_impl::std::set::h02a75ca631421c63
   8: 0x56529558195b - tokio_current_thread::CurrentRunner::set_spawn::hb2c5e6e4a0439cfc
   9: 0x5652956c8469 - tokio_current_thread::scheduler::Scheduler<U>::tick::h5d5826a31c3352dc
  10: 0x565295581b57 - tokio_current_thread::Entered<P>::block_on::he5e6231e4e9afe32
  11: 0x56529552aa78 - std::thread::local::LocalKey<T>::with::h08cbdfec957ada51
  12: 0x56529555351c - tokio_reactor::with_default::hfd3f19b173f2787a
  13: 0x565295669b51 - tokio::runtime::current_thread::runtime::Runtime::block_on::hfffff426363ff283
  14: 0x56529563ef23 - std::sys_common::backtrace::__rust_begin_short_backtrace::he6af4114c30d0c59
  15: 0x565295581fe4 - std::panicking::try::do_call::h79c8d9e128694f1e
  16: 0x565295d8be77 - __rust_maybe_catch_panic
                at src/libpanic_unwind/lib.rs:86
  17: 0x56529555cec3 - core::ops::function::FnOnce::call_once{{vtable.shim}}::h7c7d9ad44a01d792
  18: 0x565295d6d00f - <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once::he3e3bc9932f56404
                at /rustc/b8cedc00407a4c56a3bda1ed605c6fc166655447/src/liballoc/boxed.rs:1015
  19: 0x565295d8afd0 - <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once::h0d82364c11057a62
                at /rustc/b8cedc00407a4c56a3bda1ed605c6fc166655447/src/liballoc/boxed.rs:1015
                 - std::sys_common::thread::start_thread::h6cf2238254b521b3
                at src/libstd/sys_common/thread.rs:13
                 - std::sys::unix::thread::Thread::new::thread_start::he70a06005b4d03f8
                at src/libstd/sys/unix/thread.rs:80
  20: 0x7f17d3e066db - start_thread
                at /build/glibc-OTsEL5/glibc-2.27/nptl/pthread_create.c:463
  21: 0x7f17d434388f - __clone
  22:        0x0 - <unknown>'''
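The numbers in that panic are telling: the slice length is exactly 1 GiB, and the out-of-range index overshoots it by exactly 1 MiB, which fits the artificial 1GB limit in self-encryption mentioned upthread. A quick check:

```shell
# Slice length from the panic is exactly 2^30 bytes (1 GiB)...
echo $(( 1 << 30 ))                    # prints 1073741824
# ...and the failing index overshoots it by exactly 1 MiB
echo $(( 1074790400 - (1 << 30) ))     # prints 1048576
```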

It’s a shame Discourse won’t let me attach .toml or .zip files.

1 Like