Is there a list of apps people are working on?

People should get paid in native app tokens to test new apps. First come first served.

Apps can attract early adopters by rewarding them with tokens that represent shares of the app itself. In the case of a DAO, it would mean the DAO rewards, on a first-come-first-served basis, the human beings who interact with it. This includes the humans who test it out, the humans who are the first users, the humans who develop the DAO, and so on.
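As a sketch, a first-come-first-served reward could be a simple decaying schedule, where earlier participants receive larger token shares from a fixed pool. The pool size and decay rate below are made-up numbers for illustration, not a proposal from any actual app:

```python
# Sketch of a first-come-first-served token reward schedule (hypothetical numbers):
# each successive early adopter receives a smaller share of a fixed reward pool.

def reward_for_position(n, pool=10_000, decay=0.99):
    """Token reward for the n-th participant (1-indexed), geometric decay."""
    first = pool * (1 - decay)          # first participant's share of the pool
    return first * decay ** (n - 1)

# Earlier participants always earn more than later ones.
assert reward_for_position(1) > reward_for_position(100)

# The first 1,000 participants together never exhaust the pool.
total = sum(reward_for_position(n) for n in range(1, 1001))
assert total < 10_000
```

A geometric decay like this is just one choice; any monotonically decreasing schedule preserves the first-come-first-served incentive.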

It’s called the bee pollination algorithm or flower pollination algorithm. It’s ultimately how apps or DAOs can market themselves and attract attention. It’s based on biomimicry: flowers attract bees in a similar way, and while here we are dealing with swarms of humans, it’s still a swarm.

Project Decorum, built on Structured Data, will do all this, it seems.


Regarding Operation Delego, in my opinion it’s more important to track down the producers of child pornography because it’s in the production process that children are harmed. The people caught in the sting in my opinion should be offered leniency in return for their cooperation and assistance to catch the producers.

The issue with SAFE Network is that PtP (pay the producer) would pay any producer of anything, which is not cool. It could put SAFE Network itself in the crosshairs unnecessarily. The benefits of PtP do not outweigh the risks, because you can accomplish the same thing by running PtP on top of a curation layer, so that people can at least vote payments up or down and determine their flow, letting SAFE Network adapt to changing social norms.
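A minimal sketch of such a curation layer, assuming a simple approval-ratio rule and an arbitrary cutoff (neither is part of any SAFE specification):

```python
# Sketch of a curation layer on top of pay-the-producer (PtP), as suggested above:
# producer payouts are scaled by community votes, so payment flow can adapt to
# changing norms. The cutoff and scaling rule are illustrative assumptions.

def curated_payout(base_reward, upvotes, downvotes, cutoff=0.2):
    """Scale a PtP reward by the content's approval ratio; pay nothing below cutoff."""
    total = upvotes + downvotes
    if total == 0:
        return 0.0                      # unrated content earns nothing yet
    approval = upvotes / total
    return base_reward * approval if approval >= cutoff else 0.0

assert curated_payout(100, 90, 10) == 90.0
assert curated_payout(100, 1, 99) == 0.0   # heavily downvoted: payments stop
```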

An inability of SAFE Network to adapt to changing social norms is actually a vulnerability in the design itself, because it could reduce the survivability of SAFE Network in the long term. My opinion on these issues is known.

Project Decorum? What is that?


That is private enterprise. Cautious people would go to, for example, a repository of deterministic and audited software, while the incautious could use whatever.

Private enterprise isn’t a magic wand, and consumers are often stupid and ignorant. They scarf down GMOs and factory food, and they often have the same attitude to software.

What if this were done client-side? Say there was an app/add-on you could install locally that would let you allow only apps with certain features, similar to how, when you install an app, it tells you all the entitlements it needs.

A user could select the settings they are comfortable with and then locally control what they are able to download.

This way any malicious apps can be caught on the client side instead of being vetted by a third party, group, or app store.

This, in combination with a rating/review system for apps, seems to be an idea at least… I am not sure if they would want to, but the devs could even make a “Safe Installer”, an official SAFE app for vetting, installing and controlling SAFE apps… user-controlled of course! :wink:
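A minimal sketch of the client-side check described above, with hypothetical manifest fields and entitlement names:

```python
# Sketch of a local, user-controlled entitlement filter: the user whitelists
# entitlements, and any app manifest requesting more is refused before install.
# The manifest format and entitlement names here are hypothetical.

ALLOWED = {"read_own_data", "network_access"}     # user-chosen settings

def can_install(manifest):
    """Permit install only if every requested entitlement is user-approved."""
    requested = set(manifest.get("entitlements", []))
    return requested <= ALLOWED

assert can_install({"entitlements": ["read_own_data"]})
assert not can_install({"entitlements": ["read_own_data", "read_all_data"]})
```

The same check could feed a rating/review overlay, so refusals are logged locally rather than reported to any central app store.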


Yes, it sounds like a project idea where consumers would pay for membership to a vetted app download service.

Now that we have a network that rewards popular content, I would think a vetted app repository could do well. Maybe an SD coin could be attached to vetted apps and spent upon install, with Decorum being tapped for feedback on an app’s functionality and bill of health.

No, it doesn’t sound like that.

The deterministic builds of Tor, Bitcoin and Debian are a built-in feature of those products.

Fair enough, so where in the SAFE stack are you proposing to have these deterministic builds actioned?

SAFE Network is more a protocol or a platform than an app.

This is akin to saying the actual internet should have a built-in website approver that validates and certifies websites, which is exactly the kind of thing that MaidSafe is trying to prevent.

SAFE provides the means, we (the users and free market) provide a way to solve a problem or provide a product to fill a market gap and add features and value to the underlying system (the network).


Still interested to know, where in the SAFE stack are you proposing to have these deterministic builds actioned?

I wasn’t that specific, as a beginner. But it needs doing.

[EDIT]
Further thoughts (all of ten minutes’ worth) :slight_smile:

The GNU/Linux Distro Model:

People, including myself, commonly speak of “Linux” distributions, but actually Linux is just the kernel of an operating system. All the rest of the things that one finds in a distribution (“distro”): the command-line environment, graphical desktop and applications, are added by parties other than the Linux development team, much of it GNU software.

Almost all Linux users use a distro rather than compiling their own kernel; some do roll their own though, and they often become distros of their own.

All the distros I’ve encountered had convenient (to me) package managers.

On a related note, I recently had occasion to try out MSYS2, a Unix-style shell and terminal environment for Windows, which was very nice (although it couldn’t quite do what I wanted) because of its package manager and many easily installed packages. The package manager (“pacman”) is a port of the one used by the Arch Linux distro.

So what is my point?

I predict that eventually (which means, sooner rather than later), SAFE users will use a SAFE distro of some kind, because they’re not going to want to mess around with the low-level stuff.

Yes! There will be a SAFE distro, maybe more than one. And the apps we are talking about here will be added as packages to be as easily installed as “apt-get install dapp.”

Now, that allows me to circle back to your question, by citing what the Debian project are doing with their reproducible builds sub-project, as a working example of how such a requirement (for deterministic and signed software) is already done.

A SAFE distro would be much, much smaller for now, so it seems do-able.
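To illustrate the Debian approach, here is a toy sketch of why pinning randomizing inputs makes builds reproducible; Debian’s convention for this is the SOURCE_DATE_EPOCH variable, but the `build` function below is a stand-in for a real toolchain, not Debian’s actual machinery:

```python
# Sketch of reproducible builds: once randomizing inputs such as timestamps
# are pinned (Debian pins the embedded date via SOURCE_DATE_EPOCH), two
# independent builds of the same source hash identically.
import hashlib
import os

os.environ["SOURCE_DATE_EPOCH"] = "1430000000"   # pin the embedded build date

def build(source):
    """Pretend build step: embeds the pinned epoch instead of the wall clock."""
    stamp = os.environ["SOURCE_DATE_EPOCH"]
    binary = f"{source}|built@{stamp}".encode()
    return hashlib.sha256(binary).hexdigest()

# Anyone following the published recipe gets bit-identical output,
# so the distro's signed hash can be independently re-derived.
assert build("fn main() {}") == build("fn main() {}")
```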


Thanks, I understand where you’re coming from now.


Well, first of all, don’t get your hopes up, we still have a couple hurdles to cross first.

As @dirvine has mentioned since the beginning, vaults and safecoins are only the beginning. Next comes computation.

Computation will allow (parallelized?) computations to be made on vaults. That’s where having a big CPU (a GPU, even?) will come in handy. Right now the Network is optimized for traffic handling. Computation will shift that and balance storage space against computation.

But computation alone isn’t enough. We have to have a type of Random Access Memory - or at least instructions that handle those computation outputs. That’s where smart contracts come into play. They (IIRC) can provide the instructions to the computation and return the answer.

Now, with this mechanism, there exists the ability for the Network to produce those builds and create its own identifier based on the deterministic build.

This also will eventually allow us to create a number of auxiliary authentication mechanisms for our packages … Interesting examples include … listing the package hashes in the Tor consensus …
mikeperry - Deterministic Builds - torproject.org

So we have the source code, which is verified by the Network (once Computation and RAM are added) against the public key (PKI having yet to be established), and the hash of that same source code, which is stored as the key in a key-value pair whose value is the source code itself, ensuring the downloaded source code’s integrity. So as long as the key verification succeeds, we can assume that we have the correct source code. (And this is only one way to do it.)
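A minimal sketch of that key-value integrity check, with the hash of the source used as the key (this is the generic content-addressing pattern, not SAFE’s actual API):

```python
# Sketch of content-addressed storage: the key is the hash of the source,
# so a client can re-hash what it downloaded and detect any tampering.
import hashlib

store = {}

def put(source: bytes) -> str:
    """Store source under its own SHA-256 hash and return that key."""
    key = hashlib.sha256(source).hexdigest()
    store[key] = source
    return key

def get_verified(key: str) -> bytes:
    """Fetch by key and re-verify the hash before trusting the bytes."""
    source = store[key]
    if hashlib.sha256(source).hexdigest() != key:
        raise ValueError("source code failed integrity check")
    return source

k = put(b"print('hello safe')")
assert get_verified(k) == b"print('hello safe')"
```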

Now here’s where it gets tricky. Also where the Put Incentive Model leaks in a bit.

When I was considering the Put Incentive Model, I had to realize that the rewards had to be sent to some wallet. So are you going to send safecoin to an individual or to a community-curated wallet? Neither, in truth. With smart contracts available on the Network, you can create smart wallets - or autonomous wallets. They are automated by the code that drives them, and since that code can be built into the application itself, it is part of that verified code - though it doesn’t have to be, if that’s insecure. I am just a crypto fan-boy; I’m not developing any primitives or anything.

Now this allows wallets to be switched by the developers, but having the right to do so doesn’t mean that they will. By having the deterministic build take the previous release’s wallet address as raw input, each build can be verified against past releases.

So we now use that wallet address as our public key. Since it’s still reliant on the initial release, we can’t call that absolute assurance, but rather relative assurance: we can know that it’s based on an older version.

Guess what, Petname Systems work great with PKI.

So, given that wallet addresses can serve as permanent pointers for devs who wish to get paid, and that forks of a project will use different wallets, you can conclude that the source code is the one you wanted to download.
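A toy sketch of that relative-assurance lineage check, with hypothetical release fields: each release records its predecessor’s wallet address, so a client can walk the chain back to the first release it trusted:

```python
# Sketch of "relative assurance": trust is anchored at the initial release's
# wallet, and every later release must point at its predecessor's wallet.
# Field names are hypothetical, not a SAFE or Decorum format.

def verify_lineage(releases, trusted_first_wallet):
    """Walk releases in order; each must point at its predecessor's wallet."""
    prev = trusted_first_wallet
    for rel in releases:
        if rel["prev_wallet"] != prev:
            return False               # lineage broken: possibly a fork or tamper
        prev = rel["wallet"]
    return True

chain = [
    {"wallet": "walletB", "prev_wallet": "walletA"},   # v2 points at v1's wallet
    {"wallet": "walletC", "prev_wallet": "walletB"},   # v3 points at v2's wallet
]
assert verify_lineage(chain, "walletA")
assert not verify_lineage(chain, "walletX")            # unknown origin fails
```

A fork that switches wallets fails this check by construction, which is what distinguishes it from the original project.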


@smacz I have to admit that I found that scheme too complex to understand right now. I’ll come back to it later.

In the meantime, my feeling (and I could be quite wrong) is that a self-authenticating scheme such as that seems like a snake swallowing its own tail, an impossibility.

My suggestion of deterministic/reproducible builds was to copy what Debian is doing: adjusting source and toolchains so that randomizing factors (such as datestamps) are eliminated from the binaries shipped to the end user, so that the distro managers can publish a recipe allowing anyone to compile the exact same binaries. The private signing keys/certificates are held at Debian. So it is a tree-like structure, not a loop.


TBH, I’m still clarifying it for myself as well. However, I am very hopeful about the opportunity it presents.