InstallNet [2023-06-19 Testnet] [Offline]

Thanks for the feedback here. Will look into this bug.

4 Likes

You should be able to install a node if you run safeup node --version 0.83.37.

The latest release was problematic and didn’t end up with assets attached.

4 Likes

Chunks are now stored by default at ~/.local/share/safe/node/record_store, but when running multiple nodes on one machine there don’t seem to be any sub-directories created (e.g. ~/.local/share/safe/node/record_store/safenode1 .. /safenode2 etc.). Are all chunks dumped into the same pot?

Edit: Answered my own question. Currently all chunks end up in the record_store folder. In future iterations it would be nice to have one subfolder per node created automatically.
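A quick way to see the shared pot in action is to count the chunk files in the record store. This is just a sketch: the default path is the one mentioned above, and the optional directory argument is only there so you can point it elsewhere (e.g. at a custom --root-dir).

```shell
#!/bin/bash
# Count chunk files in a record store directory.
# Defaults to the path mentioned above; pass a different dir to override.
store="${1:-$HOME/.local/share/safe/node/record_store}"
find "$store" -maxdepth 1 -type f 2>/dev/null | wc -l
```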

6 Likes

Good point. I’ll look into that.

4 Likes

Small files upload result: 4133 files uploaded (log4133.zip (167.3 KB)); 5 chunks appeared at my node.
If every file goes to 8 nodes at once, and roughly 2000 nodes are currently online, then every node should receive approximately 16 of them.
5 is not much smaller than 16 (given the random nature of IDs), so the results are reasonably good.
Does anyone with a large number of nodes want to check the uniformity of the spread?
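For what it’s worth, the back-of-envelope expectation in the post works out like this (all three inputs are the post’s own estimates, not measured values, and each small file is assumed to be roughly one chunk):

```python
# Rough expectation from the post: each small file is ~1 chunk,
# each chunk is replicated to 8 nodes, ~2000 nodes were online.
files = 4133      # files uploaded (assumed ~1 chunk each)
replicas = 8      # copies stored per chunk
nodes = 2000      # approximate nodes online

expected_per_node = files * replicas / nodes
print(f"expected records per node: {expected_per_node:.1f}")  # ~16.5
```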

7 Likes

’Cause I didn’t try hard enough!
Safe on my Android phone. :fire:

But @neik’s blog is letting me down with a missing chunk.

Going to try running a node next; I’ll need to wait a bit. Ridiculous as I am, I have been sitting in a parking lot, insistent on getting this working.

7 Likes

I noticed a strange thing:
I searched the log files for GET requests, and every request I found was followed by “Record not found locally”.
Is this a bug or normal behaviour?

2 Likes

Here is the breakdown of records across my 100 nodes:-

Node  Records    Node  Records    Node  Records    Node  Records
   1        1      26      646      51        4      76        0
   2      704      27      438      52        1      77        0
   3        0      28      766      53        2      78        0
   4        0      29       15      54      993      79        5
   5      605      30        0      55        0      80        0
   6      504      31       15      56      363      81      669
   7       29      32        0      57      624      82      519
   8      293      33      359      58        1      83        1
   9      596      34      554      59      734      84       78
  10       12      35        0      60        0      85        0
  11      379      36      772      61      857      86      865
  12       12      37        0      62      521      87        0
  13        0      38      434      63      286      88      344
  14        0      39      571      64      552      89        8
  15        5      40      434      65        5      90        0
  16     1250      41        0      66      437      91        3
  17        0      42      602      67        3      92      615
  18        5      43      589      68     1075      93      403
  19      942      44      449      69        3      94        0
  20       55      45      490      70        1      95        0
  21      342      46        0      71        3      96        5
  22     1000      47      515      72     1226      97      292
  23        2      48        1      73        4      98      208
  24      240      49     1006      74        2      99      781
  25      665      50        1      75      546     100        0
7 Likes
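One way to check that uniformity is with summary statistics over the per-node counts above. A minimal sketch (counts copied from the table; nothing here is official tooling):

```python
from statistics import mean, pstdev

# Records per node, copied from the table above (nodes 1..100).
counts = [
    1, 704, 0, 0, 605, 504, 29, 293, 596, 12,
    379, 12, 0, 0, 5, 1250, 0, 5, 942, 55,
    342, 1000, 2, 240, 665, 646, 438, 766, 15, 0,
    15, 0, 359, 554, 0, 772, 0, 434, 571, 434,
    0, 602, 589, 449, 490, 0, 515, 1, 1006, 1,
    4, 1, 2, 993, 0, 363, 624, 1, 734, 0,
    857, 521, 286, 552, 5, 437, 3, 1075, 3, 1,
    3, 1226, 4, 2, 546, 0, 0, 0, 5, 0,
    669, 519, 1, 78, 0, 865, 0, 344, 8, 0,
    3, 615, 403, 0, 0, 5, 292, 208, 781, 0,
]

empty = sum(1 for c in counts if c == 0)
print(f"nodes: {len(counts)}, empty: {empty}")
print(f"mean: {mean(counts):.1f}, stdev: {pstdev(counts):.1f}")
```

The large spread (many empty nodes next to nodes holding 1000+ records) is what the histogram below shows visually.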

Histogram for it:

[image: snhist]

10 Likes

This way it does not give an installation error:

*                          *
*     Installing safe      *
*                          *

Installing safe for x86_64-apple-darwin at /usr/local/bin…
[########################################] 4.75 MiB/4.75 MiB safe 0.77.43 is now available at /usr/local/bin/safe
Please run ‘safeup --help’ to see how to install network components.

1 Like

I think it will be essential going forward to have each node’s records in its own folder so they can be cleared out if there is a problem with an individual node. I’m starting my nodes with the following script, which is just a modification of what @southside was using for a previous testnet:-

cat start_safenodes
#!/bin/bash

for i in {1..100}
do
    SN_LOG=all /home/ubuntu/.local/bin/safenode --log-dir=/tmp/safenode/$i --root-dir=/home/ubuntu/.local/share/safe/node/$i &
    sleep 5
done
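With per-node root dirs like that, a single problem node can be wiped without touching the other 99. A sketch (node number and paths are illustrative, following the start script above):

```shell
#!/bin/bash
# Clear out the record store of one misbehaving node (here node 42),
# leaving the other nodes' data intact. Paths follow the start script.
i=42
rm -rf "/home/ubuntu/.local/share/safe/node/$i/record_store"
```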
5 Likes

Just thinking out loud here - bear with me and please correct any misconceptions I may have…

Any and every file >1 KB uploaded to the network is split into chunks and self-encrypted, and the chunks are stored at multiple locations (4, 8, whatever), depending on the first few characters of the actual chunk content being closest to the node addresses.
It does not matter whether we store a single 1 GB file or a few thousand files of several hundred KB each; we are storing approximately the same number of chunks in random locations, dependent solely on the first few bytes of the actual chunk content matching the relevant node addresses.

Is this basically correct?
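As I understand it, placement works Kademlia-style on the hash of each chunk. A toy sketch of that idea (not the actual safenode code; SHA-256 and the copy count are assumptions for illustration):

```python
import hashlib

def chunk_address(chunk: bytes) -> int:
    # A chunk's network address is derived from a hash of its
    # (self-encrypted) content, independent of the parent file.
    return int.from_bytes(hashlib.sha256(chunk).digest(), "big")

def closest_nodes(address: int, node_ids: list[int], k: int = 8) -> list[int]:
    # Kademlia-style closeness: the k node IDs with the smallest
    # XOR distance to the chunk address hold the copies.
    return sorted(node_ids, key=lambda nid: nid ^ address)[:k]

# Toy network of 20 nodes with hash-derived IDs.
node_ids = [chunk_address(f"node-{i}".encode()) for i in range(20)]
holders = closest_nodes(chunk_address(b"some chunk"), node_ids, k=4)
print(len(holders))  # 4
```

Since the address depends only on the chunk's own bytes, two chunks from the same file land in unrelated places.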

4 Likes

I hope that is not true.
Most likely the dependency is on the hash.

1 Like

Apologies, I should have been clearer:

dependent solely on the first few bytes of the actual hashed chunk content matching the relevant node addresses.

3 Likes

Could be normal behaviour, or a hangover from testing when running a local network, or a bit of both: check whether the file is available locally before going out to the network. Maybe basic functionality is, or was, tested on something that stores data locally.

iMac, High Sierra 10.13.6

I have this error message:

Successfully stored file “termux-app_v0.118.0+github-debug_arm64-v8a.apk” to 0b6d3d46935b2b1526b186046d0e7d1c378487b972df663e13fa1996cc995be6
Writing 94 bytes to “/Users/imac27/.safe/client/uploaded_files/file_names_2023-06-20_14-03-29”
> Error: Too many open files (os error 24)
>
> Location:
> sn_cli/src/subcommands/files.rs:145:5
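“os error 24” means the client hit the per-process open-file limit, which defaults to a low value on macOS. A common workaround (not an official fix; 4096 is an arbitrary illustrative value) is to raise the soft limit in the shell before re-running the upload:

```shell
# Show the current per-process open-file limit.
ulimit -n

# Raise the soft limit for this shell session before running the upload.
ulimit -n 4096
```

The change only lasts for the current shell session.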

3 Likes

This is on an AWS instance (t4g.medium) with 2 vCPUs and 4 GB RAM. I didn’t do any port forwarding or specify a port with --port, so the nodes all have a port that was decided on by safenode.

1 Like

Sometimes I need to step back, take a breath and restate the fundamentals to myself.
It is often a good idea to check that what I think are the fundamentals are still valid.

==============================================
Meanwhile will somebody please say YES or NO?

SAFE URLs are ALWAYS EXACTLY 65 hexadecimal characters long - no ifs, no buts

4 Likes

I’m wondering whether each chunk is spread across a number of nodes according to that chunk’s hash, or whether the chunks for a file are spread across nodes according to the hash of the file?

3 Likes

It’s the hash of the chunk.

2 Likes