The demise of Facebook (Meta) is a BIG opportunity

Cory Doctorow thinks Facebook is in trouble, which is a massive opportunity for a user-focused social network… on Safe Network.

14 Likes

I would love to have Safe book for my social network 🙂

5 Likes

This is one I’ve been thinking about for the past couple of years, particularly in the context of Ron Deibert’s book, “Reset: Reclaiming the Internet for Civil Society”. What is needed is a true “public square”, one that is operated for the benefit of the users, not advertisers or any other commercial or political elites. I’ve played about on Mastodon, but there does not appear to be much momentum there to establish the “network effect”, although the idea of federation seems plausible.

What I have not yet seen is a model that protects privacy and supports freedom of expression while avoiding hate speech, spam, trolling and, for lack of a better word, unacceptable content. I know this is a controversial question, since it can very easily become a question of censorship. If anyone is aware of any good articles on the problem of content moderation, I would be interested to hear about them.

2 Likes

I’ve been thinking about this for a number of years as well and have many notes.

Basically I am imagining an app much like a feed reader, but interactive, with: the ability to subscribe to other users’ feeds (if those users make their feeds public, basically by sharing their xor address); a content tagging system; and a rewards mechanism for users to evaluate/rate various metrics (plus some spam-reducing ideas to deal with bots gaming the rewards).
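To make that concrete, here is a minimal sketch in Rust (the Safe ecosystem’s language) of the data model I’m picturing. Every name here is hypothetical, and the XOR address is just a 32-byte stand-in rather than a real Safe API type - it only shows the three pieces: subscriptions, tagged posts, and per-metric ratings.

```rust
// Hypothetical data model for a feed-reader-style social app.
// Nothing here is a real Safe Network API; it is an illustration only.

use std::collections::HashMap;

/// Stand-in for a Safe Network XOR address (32 bytes).
type XorAddress = [u8; 32];

/// A post in a user's public feed.
#[derive(Debug)]
struct Post {
    author: XorAddress,
    body: String,
    tags: Vec<String>, // community-verified content tags
}

/// A subscription to another user's public feed, keyed by its xor address.
#[derive(Debug)]
struct Subscription {
    feed: XorAddress,
    label: String,
}

/// Per-metric ratings a reader assigns to a post (accuracy, civility, ...).
#[derive(Debug, Default)]
struct Rating {
    scores: HashMap<String, u8>, // metric name -> 0..=10
}

fn main() {
    let alice: XorAddress = [1u8; 32];
    let sub = Subscription { feed: alice, label: "alice".into() };
    let post = Post {
        author: alice,
        body: "Hello from my public feed".into(),
        tags: vec!["introduction".into()],
    };
    let mut rating = Rating::default();
    rating.scores.insert("civility".into(), 9);
    println!("{sub:?}\n{post:?}\n{rating:?}");
}
```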

Of course it would be open source and not-for-profit - I think there will be plenty of devs who would maintain it for free.

I’m happy to share my notes here, but it’s a large outline, most ideas aren’t too well fleshed out, and it’s hard to work out too many details until Safe is in beta. Perhaps a separate thread if anyone is interested.

4 Likes

This is impossible because what counts as “unacceptable content” is highly subjective. The only model I can see working is a free network without any filtering on input, where it is up to you as a user what you pick from the sea of information.
Today we have social network feeds, news feeds and spam filtering all done by closed-source private algorithms that work for the platforms, not for users. A better model would be to have “personal agents”, or AI or whatever we would call it, that work for us and not for profit. Not my idea btw; if I remember right, it comes from Snow Crash (the 1992 sci-fi novel by Neal Stephenson).
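As a toy illustration of the “personal agent” idea - all names invented, and a real agent could wrap an ML model rather than a keyword blocklist - even something this small inverts the usual arrangement, because the filter runs on the reader’s own machine and answers only to the reader:

```rust
// Minimal sketch of a user-side "personal agent": a filter that runs on
// the reader's own machine and works only for the reader. Names are
// illustrative, not any real API.

/// Anything that decides, on the user's behalf, whether to show an item.
trait PersonalAgent {
    fn admit(&self, text: &str) -> bool;
}

/// The simplest possible agent: a user-maintained keyword blocklist.
struct KeywordFilter {
    blocked: Vec<String>,
}

impl PersonalAgent for KeywordFilter {
    fn admit(&self, text: &str) -> bool {
        let lower = text.to_lowercase();
        !self.blocked.iter().any(|w| lower.contains(w))
    }
}

fn main() {
    let agent = KeywordFilter { blocked: vec!["spam".into()] };
    let feed = ["genuinely useful post", "buy spam coins now"];
    for item in feed {
        if agent.admit(item) {
            println!("show: {item}");
        } else {
            println!("hide: {item}");
        }
    }
}
```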

2 Likes

Personal agents - I should have thought of that! In effect, I already have a rough approximation in my news feed, using Inoreader (and Feedly before that) to select sources and filter based on keywords. A more evolved, AI-enhanced version - under the user’s control and acting on the user’s behalf - filtering social media content would be ideal. That would leave the truly criminal material (child porn, snuff films, etc.) for law enforcement to worry about.

Might there also be a mechanism for users to collectively micro-fund content moderation agents, built into the transaction much like the mechanism that funds farmers for providing storage? The financial incentive might spur further development in AI filtering of image and audio content (think of the Joe Rogan / Neil Young / Spotify controversy) and support the mental health of the humans working in the content moderation sector.
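For what it’s worth, the funding split could be as simple as the sketch below - the ratio, the unit and the function are purely invented to illustrate the idea, not anything from the actual farming economics:

```rust
// Back-of-envelope sketch of the micro-funding idea: each upload payment
// is split between storage (farmers) and a moderation fund. The split
// ratio and the nano-token unit are invented for illustration only.

/// Split a payment (in nano-tokens) between farmers and a moderation fund.
/// `moderation_bps` is the moderation share in basis points (1/100 of 1%).
fn split_payment(total_nanos: u64, moderation_bps: u64) -> (u64, u64) {
    let moderation = total_nanos * moderation_bps / 10_000;
    (total_nanos - moderation, moderation)
}

fn main() {
    // e.g. 2% (200 basis points) of a 1_000_000 nano payment to moderation
    let (farmers, moderators) = split_payment(1_000_000, 200);
    println!("farmers: {farmers}, moderation fund: {moderators}");
}
```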

Thanks Eduard (@peca) for the suggestion and the pointer to the Neal Stephenson book, which I will look for.

David

3 Likes

Thanks Tyler - I see you anticipated some of the ideas that I put in my reply to @peca. I didn’t see your reply until after I’d written mine (perils of reading the thread LIFO). I’d be happy to see your outline and give you feedback on it.

p.s. Are we heading into territory that is being covered in the Decorum project?

1 Like

Define hate speech. What if I find your content hateful? How can you possibly create something that can make that determination, and how does it set those parameters? What if one views something as scathing whilst another doesn’t? Where does that end?

Yes, that is a problem. Hence the suggestion @peca made earlier to have the content filters under user control, so each user can implement their own definition of what to filter. Since my earlier post I have found a good essay on decentralized content moderation that identifies the key requirements of such a system.

1 Like

I will work on it some more and post it soon - when I had a look yesterday, I had a bunch of new thoughts on it.

Perhaps this can be a community project, and if some devs are interested in making it real, then perhaps the community will support them directly.

Why even bother going down that road? It always becomes censorship, so why even try?

What’s wrong with hate speech anyway? Hate is a normal human emotion. I should be able to express that emotion. If people don’t like it, let them vote with their feet. If you cannot handle hurty words, just go somewhere else.

4 Likes

What I have in mind is simply a word and tag filter that users can choose to apply (or not), so they can hide posts they are not ready to confront. Aside from that, a method for paying people to verify tags on content is also part of the scheme I have in mind. So no censorship, but simply a way of agreeing on what content actually is, then letting each user hide or show it.
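Something like this hypothetical sketch is all I mean - the tag and word lists live with the viewer, and “filtering” only ever hides things on that viewer’s own screen (all names are illustrative):

```rust
// Sketch of the opt-in word-and-tag filter: the user keeps a personal set
// of hidden tags and muted words; nothing is deleted, posts are only
// hidden for that user. Names are invented for illustration.

use std::collections::HashSet;

struct Post {
    body: String,
    tags: Vec<String>, // tags verified by paid reviewers in this scheme
}

struct ViewerPrefs {
    hidden_tags: HashSet<String>,
    muted_words: Vec<String>,
}

impl ViewerPrefs {
    /// Returns true if this viewer has chosen to hide the post.
    fn hides(&self, post: &Post) -> bool {
        post.tags.iter().any(|t| self.hidden_tags.contains(t))
            || self
                .muted_words
                .iter()
                .any(|w| post.body.to_lowercase().contains(w))
    }
}

fn main() {
    let prefs = ViewerPrefs {
        hidden_tags: HashSet::from(["graphic-violence".to_string()]),
        muted_words: vec!["spoiler".into()],
    };
    let post = Post {
        body: "Match recap (spoiler inside)".into(),
        tags: vec!["sport".into()],
    };
    println!("{}", if prefs.hides(&post) { "hidden" } else { "shown" });
}
```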

1 Like

As I noted in an earlier reply, a better solution is decentralized content moderation that enables users to apply their own filters. Nonetheless, when hate speech escalates to incitement to violence, it is a criminal offense in most jurisdictions and subject to enforcement by the proper authorities (not platform providers).

1 Like