So I'm working on a script that autocompiles from the source code, without asking the Maidsafe team if they would pretty please release aarch64 binaries of their sn_cli, auth and node.

I forgot to ask this first:

“Could you guys at team Maidsafe release aarch64 versions of sn_cli/node/launch_tool,
or are you too busy / does it cost too much time?”

I’m actually thinking of instead having the script send the compiled binaries back to a safenetwork-community sn_cli/node/launch_tool repo somehow.
Or maybe have it upload to Storj DCS until the safenetwork itself is online.


You’re welcome to raise a PR to the sn_api and sn_node repos. These are the build and release workflows in sn_node and sn_api

I imagine it should be straightforward to add one more target. You are welcome to give it a try :slight_smile:
We will assist you in any way possible!


I’m not even familiar with workflows, so I’ll have to look into that first.

But I take it it’s

  1. Fork sn_node/api
  2. Add something like this:
--      os: [ubuntu-latest]
++      os: [ubuntu-latest, ubuntu-arm64]
          - os: ubuntu-latest
            build-script: make musl
            target: x86_64-unknown-linux-musl
++        - os: ubuntu-arm64
++          build-script: make musl
++          target: arm64-unknown-linux-musl
++          arch: arm64
  3. Run/test the workflow
  4. Merge request?

Sorry I missed this earlier. I just took a look at this and it needs a little more than that, including the relevant scripts to upload to GH and S3. The workflow is mostly bash commands that run one after the other. A good way to find out everything that needs to be added is to take a look at a release workflow for ubuntu (this is a good example) and add the relevant steps to be run for the aarch64 binaries as well. We’ll need to make sure the new target is included in those steps too, to ensure the release is uploaded correctly.
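To illustrate the kind of extra steps involved, here is a rough sketch of uploading an aarch64 archive to a GitHub release and to S3; the action, file and secret names below are assumptions, not taken from the actual workflow:

```yaml
# Hypothetical sketch only: step names, asset paths and secrets are assumed.
- name: Upload aarch64 archive to the GitHub release
  uses: actions/upload-release-asset@v1
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  with:
    upload_url: ${{ steps.create_release.outputs.upload_url }}
    asset_path: ./sn_node-aarch64-unknown-linux-musl.tar.gz
    asset_name: sn_node-aarch64-unknown-linux-musl.tar.gz
    asset_content_type: application/gzip

- name: Upload aarch64 archive to S3
  run: aws s3 cp ./sn_node-aarch64-unknown-linux-musl.tar.gz s3://sn-node/
```

The point being: it isn’t just the build matrix, the release jobs need to know about the new target’s artifacts too.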


I’ve noticed that sn_api no longer includes a deploy section in its master.yml workflow. Is it done manually these days?

It’s been split into components, sn_api is now a library, not a binary for release.

Have a look at sn_cli and sn_authd which are the two binaries that used to be within sn_api.


That clears things up.

I’m still not able to get the actions to work.
I’m stuck at two issues, building linux-musl and deployment.

For musl it’s:

error: failed to run custom build command for `openssl-sys v0.9.63`

For deployment, I think I’m getting a message that the release tag already exists:

Error: Validation Failed: {"resource":"Release","code":"already_exists","field":"tag_name"}

Is that because tag_release.yml has already created a tag?
Or is it because github_release.yml has already created a release?

These tag/release/main/bump workflows are beginning to confuse me.

What are their exact functions and do they function as intended?

It feels to me like there should only be a master/main.yml.


  • bump bumps the version in Cargo.toml and fills the changelog automatically according to the latest batch of commits, then makes a PR for this. This PR is automatically merged
  • tag creates a tag for code pushed to master (with a specific commit message… the one generated by bump). This also sets up an empty GitHub release
  • release uses this newly pushed tag to generate the release content (also using the changelog to generate a release message)
  • master is the actions we want to run when anything is pushed to master. This can run tests, or, depending on some conditions, do other things (build the code and/or deploy)
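As a sketch of how the tag and release workflows chain together (the trigger pattern below is an assumption, not copied from the real files), the release workflow fires on the version tag that the tag workflow pushes:

```yaml
# release.yml (hypothetical sketch): run when a version tag lands,
# then populate the empty GitHub release that the tag workflow created.
on:
  push:
    tags:
      - 'v*'
```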

More general explanations of Github actions can be found here


Okay, so bump automatically triggers tag, and also release and master if pushed to master?

Thus normally one just pushes their code to master and it becomes a new version.

If bump creates a pull request, won’t it trigger pr and commitlint as well?


if: "!startsWith(github.event.pull_request.title, 'Automated version bump')"

Those workflows have exceptions for a lot of jobs. So yeah, the workflow is started, but we don’t run tests etc. on these auto-generated PRs.

An irk with GHA is that you cannot have the creator (GitHub key) also approve the PR in an automated fashion. So you’ll need two accounts set up with repo perms for this flow.

Alright, so it looks like I have three issues to tackle now despite having copy-pasted the action files:

version bump

HttpError: Validation Failed: {"resource":"PullRequest","field":"base","code":"invalid"}

Main Linux build

error: failed to run custom build command for `openssl-sys v0.9.63`

Main deploy

Error: Validation Failed: {"resource":"Release","code":"already_exists","field":"tag_name"}

I’m not 100% sure, but my guess here is that in L8 there’s a dependency on reqwest, which will drag in openssl, and openssl is always troublesome with musl.

Is it possible to try with no musl? Or is musl really essential to the project?


Maidsafe uses musl.
I have never asked why, but I assume it is essential for them.

This is a little dilemma when my own tiny project uses something that has a problem with it, but I guess I’ll just comment it out. It’s a ‘joshuaf is gonna pull the rug out from underneath my coding attempts a year later like the Duniter founder did to me’ paranoia project anyway.

I forgot, but there is a simple way to get reqwest to build with musl (by avoiding openssl and using rustls)

In Cargo.toml L25

-reqwest = { version = "0.11", features = ["json"]}
+reqwest = { version = "0.11", features = ["json", "rustls-tls"]}

which is a feature also used by sn_cli here:
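One hedged caveat on the diff above: reqwest 0.11’s default features still pull in native-tls (and therefore openssl), so to keep openssl out of a musl build entirely it may be necessary to disable the defaults as well. A sketch, not the project’s actual Cargo.toml line:

```toml
# Sketch: opt out of reqwest's default native-tls backend, use rustls only.
reqwest = { version = "0.11", default-features = false, features = ["json", "rustls-tls"] }
```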


I’d like to point out, by the way, that one of the issues I encountered is that GitHub’s default branch has been changed from master to main; it can be renamed in Settings->Branches.

I now just commented out that whole section, but I’m stuck adding a runner.
There are only three runner OSes hosted on GitHub, and none of them are aarch64.

I would have to self-host a runner or ask team Maidsafe to self-host one.
This is going to require some discussion because I don’t know what Maidsafe wants and I assume it just doesn’t want to bother.


I’m a little out of the loop on what you’re trying to achieve, but I don’t think you need the host to have the same architecture. The cargo GitHub Action supports flags that enable cross-compiling using cross. Perhaps that’s something to look at? I’ve compiled Safe successfully in the past for aarch64 for the Raspberry Pi using cross.
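For reference, a minimal sketch of what a cross-based build step could look like in a workflow (the step layout is assumed; cross is the cross-rs tool, installable via cargo):

```yaml
# Hypothetical workflow step: build for aarch64 on an x86_64 runner
# using cross, which runs the build inside a Docker image that already
# contains the right C cross toolchain.
- name: Build aarch64 with cross
  shell: bash
  run: |
    cargo install cross
    cross build --release --target aarch64-unknown-linux-musl
```

Because cross ships the C toolchain in its build image, it also sidesteps the sort of missing-cross-gcc errors that crates like ring tend to produce.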


What I’m trying to achieve is this:


+ifneq ($(UNAME_S),Linux)
+	@echo "This target only applies to Linux ARM 64-bit architecture - please use the `build` target."
+	@exit 1
+endif
+	rm -rf target
+	rm -rf artifacts
+	mkdir artifacts
+	sudo apt update -y && sudo apt install -y musl-tools
+	rustup target add aarch64-unknown-linux-musl
+	cargo build --release --target aarch64-unknown-linux-musl --verbose
+	find target/aarch64-unknown-linux-musl/release \
+		-maxdepth 1 -type f -exec cp '{}' artifacts \;


+    if: github.repository_owner == 'safenetwork-community'
+    name: Build_Linux_arm
+    runs-on: ${{ matrix.os }}
+    strategy:
+      matrix:
+        os: [ubuntu-latest]
+        include:
+          - os: ubuntu-latest
+            build-script: make aarch64
+            target: aarch64-unknown-linux-musl
+    steps:
+      - uses: actions/checkout@v2
+      # Install Rust
+      - uses: actions-rs/toolchain@v1
+        with:
+          profile: minimal
+          toolchain: stable
+          override: true
+      # Cache.
+      - name: Cargo cache registry, index and build
+        uses: actions/cache@v2.1.4
+        with:
+          path: |
+            ~/.cargo/registry
+            ~/.cargo/git
+            target
+          key: ${{ runner.os }}-cargo-cache-${{ hashFiles('**/Cargo.lock') }}
+      # Run build.
+      - shell: bash
+        run: ${{ matrix.build-script }}
+      # Upload artifacts.
+      - uses: actions/upload-artifact@main
+        with:
+          name: sn_grufs-${{ matrix.target }}-prod
+          path: artifacts

and so far, it fails with this error:

     Running `/home/runner/work/safe_network/safe_network/target/release/build/ring-77067fdb672b35d5/build-script-build`
error: failed to run custom build command for `ring v0.16.20`

Caused by:
  process didn't exit successfully: `/home/runner/work/safe_network/safe_network/target/release/build/ring-77067fdb672b35d5/build-script-build` (exit status: 101)
  --- stdout
  OPT_LEVEL = Some("3")
  TARGET = Some("aarch64-unknown-linux-musl")
  HOST = Some("x86_64-unknown-linux-gnu")
  CC_aarch64-unknown-linux-musl = None
  CC_aarch64_unknown_linux_musl = None
  TARGET_CC = None
  CC = None
  CFLAGS_aarch64-unknown-linux-musl = None
  CFLAGS_aarch64_unknown_linux_musl = None
  CFLAGS = None
  DEBUG = Some("false")

  --- stderr
  running "aarch64-linux-musl-gcc" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-I" "include" "-Wall" "-Wextra" "-pedantic" "-pedantic-errors" "-Wall" "-Wextra" "-Wcast-align" "-Wcast-qual" "-Wconversion" "-Wenum-compare" "-Wfloat-equal" "-Wformat=2" "-Winline" "-Winvalid-pch" "-Wmissing-field-initializers" "-Wmissing-include-dirs" "-Wredundant-decls" "-Wshadow" "-Wsign-compare" "-Wsign-conversion" "-Wundef" "-Wuninitialized" "-Wwrite-strings" "-fno-strict-aliasing" "-fvisibility=hidden" "-fstack-protector" "-g3" "-U_FORTIFY_SOURCE" "-DNDEBUG" "-c" "-o/home/runner/work/safe_network/safe_network/target/aarch64-unknown-linux-musl/release/build/ring-dee54d344aa34ccd/out/aesv8-armx-linux64.o" "/home/runner/.cargo/registry/src/"
  thread 'main' panicked at 'failed to execute ["aarch64-linux-musl-gcc" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-I" "include" "-Wall" "-Wextra" "-pedantic" "-pedantic-errors" "-Wall" "-Wextra" "-Wcast-align" "-Wcast-qual" "-Wconversion" "-Wenum-compare" "-Wfloat-equal" "-Wformat=2" "-Winline" "-Winvalid-pch" "-Wmissing-field-initializers" "-Wmissing-include-dirs" "-Wredundant-decls" "-Wshadow" "-Wsign-compare" "-Wsign-conversion" "-Wundef" "-Wuninitialized" "-Wwrite-strings" "-fno-strict-aliasing" "-fvisibility=hidden" "-fstack-protector" "-g3" "-U_FORTIFY_SOURCE" "-DNDEBUG" "-c" "-o/home/runner/work/safe_network/safe_network/target/aarch64-unknown-linux-musl/release/build/ring-dee54d344aa34ccd/out/aesv8-armx-linux64.o" "/home/runner/.cargo/registry/src/"]: No such file or directory (os error 2)', /home/runner/.cargo/registry/src/
  stack backtrace:
     0: rust_begin_unwind
               at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/
     1: std::panicking::begin_panic_fmt
               at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/
     2: build_script_build::run_command::{{closure}}
     3: core::result::Result<T,E>::unwrap_or_else
     4: build_script_build::run_command
     5: build_script_build::compile
     6: build_script_build::build_library::{{closure}}
     7: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &mut F>::call_once
     8: core::option::Option<T>::map
     9: <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::next
    10: <alloc::vec::Vec<T> as alloc::vec::spec_from_iter_nested::SpecFromIterNested<T,I>>::from_iter
    11: <alloc::vec::Vec<T> as alloc::vec::spec_from_iter::SpecFromIter<T,I>>::from_iter
    12: <alloc::vec::Vec<T> as core::iter::traits::collect::FromIterator<T>>::from_iter
    13: core::iter::traits::iterator::Iterator::collect
    14: build_script_build::build_library
    15: build_script_build::build_c_code::{{closure}}
    16: <core::slice::iter::Iter<T> as core::iter::traits::iterator::Iterator>::for_each
    17: build_script_build::build_c_code
    18: build_script_build::ring_build_rs_main
    19: build_script_build::main
    20: core::ops::function::FnOnce::call_once
  note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
warning: build failed, waiting for other jobs to finish...
error: build failed
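For what it’s worth, the panic above is ring’s build script failing to find a C cross-compiler named aarch64-linux-musl-gcc on the runner. One hedged workaround (the package and variable names below are assumptions; a dedicated aarch64 musl cross toolchain, or the cross tool mentioned earlier, is the cleaner fix) is to install an aarch64 cross-gcc and point the build at it:

```yaml
# Hypothetical sketch: install an aarch64 cross-gcc and point cargo/ring
# at it via the target-specific CC and linker environment variables.
- name: Install aarch64 cross toolchain
  run: sudo apt-get update -y && sudo apt-get install -y gcc-aarch64-linux-gnu
- name: Build for aarch64
  env:
    CC_aarch64_unknown_linux_musl: aarch64-linux-gnu-gcc
    CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_LINKER: aarch64-linux-gnu-gcc
  run: cargo build --release --target aarch64-unknown-linux-musl
```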

musl is used by Maidsafe, so for raising a PR that adds aarch64 zip and tar binary files, it’s essential, unless Maidsafe decides musl isn’t essential after all.

What musl does is disconnect the binaries from the local libc. Why it’s important is that without it you are bound to a particular glibc, and different distributions ship different glibc versions.

i.e. If you compile without musl and against glibc, then the resulting ARM binary may only work on the particular distribution you built on (say Ubuntu version X). Somebody uses any other distribution or version and you will almost certainly get “this does not work” all over the place. Probably 90%+ of your audience will have issues, all down to glibc. (Caveat: you can build against a dead old glibc and get forward compatibility, but then you’re in a losing game of whack-a-mole chess.)

However, compile with musl instead and your binary will work on all distributions and all versions of Linux on your chosen platform (ARM in this case).

So essential, not so much, but if you don’t want horrendous compatibility errors then yes, it’s kinda essential.

Hope that helps