Is anybody already building a YouTube-like platform on SAFE, or a Twitter-like one?

I just came up with a possibly annoying idea, haha. @danda and others, feel free to shoot this down, but people are used to ad interruptions on YouTube, Spotify, and so on, and then something like the Brave browser comes along and PAYS you to be interrupted! Woohoo! /sarcasm. But what if the short interruptions were there to rank/ask about the video, pay you, and build up some kind of scorecard that, once above a reasonable threshold, unlocks some other kind of perk like a premium service? More thought would be needed on how to offer a premium service when what people probably really want is to not be interrupted at all. Then again, maybe once you build up your score you get fewer interruptions?
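Just to make that concrete, a toy sketch of the scoring loop; every name and number here is made up, obviously:

```python
from dataclasses import dataclass, field

# Purely illustrative numbers.
PERK_THRESHOLD = 50        # score needed before the perk kicks in
BASE_INTERRUPT_EVERY = 5   # rate-this-video prompt every N videos

@dataclass
class RaterProfile:
    score: int = 0
    ratings: list = field(default_factory=list)

    def record_rating(self, content_id: str, value: int) -> None:
        """User answered a rating prompt: store it and bump their score."""
        self.ratings.append((content_id, value))
        self.score += 1

    def interrupt_every(self) -> int:
        """Higher scores earn fewer interruptions (the 'perk')."""
        if self.score >= PERK_THRESHOLD:
            return BASE_INTERRUPT_EVERY * 4   # prompt far less often
        return BASE_INTERRUPT_EVERY
```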

I'm not a fan of attention hijacking and the like, but it seems like a possible way to increase participation by making it more mandatory. I think the UI could make it feel less intrusive by being straightforward and fun somehow, too.

@Nigel I think that you or anyone could write an app like that atop an established rating API, and let the best model win. In other words, such a model seems fine, so long as it is opt-in, users have choices, etc.

The entire rating/filtering system (API, UI, etc.) could (should?) itself be a layer above the core network, possibly atop RDF/Solid, though not necessarily so. It would likely get more initial mindshare if MaidSafe built it, or even the community atop the initial SAFE Network releases, so it's kind of baked in from the get-go.
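To give a feel for what such a layer-above might store, a rough sketch of a rating record and a naive aggregate; the field names and the idea of keying on a content XOR-URL are my own assumptions, not any existing SAFE API:

```python
from dataclasses import dataclass
from statistics import mean

# One rating record, kept in an app-level layer above the core network.
# Field names and the XOR-URL key are illustrative assumptions.
@dataclass
class Rating:
    content_xorurl: str   # address of the rated content
    rater: str            # pseudonymous rater id
    criterion: str        # e.g. "nsfw", "violence", "quality"
    value: float          # 0.0 .. 1.0

def aggregate(ratings: list[Rating], criterion: str) -> float | None:
    """Average score for one criterion across all raters."""
    values = [r.value for r in ratings if r.criterion == criterion]
    return mean(values) if values else None
```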

Also, I don't think such a system needs to worry too much about Sybil attacks/gaming initially. Sooner or later people will game it, yes, and strategies can be brainstormed to deal with that, technical and/or social. But we shouldn't let perfect be the enemy of the good here, especially as SAFE Network is inherently pseudonymous with throw-away accounts. Eventually entire industries may spring up around rating content, governments may sponsor it, etc, etc.

4 Likes

I would say so. The core of the network shouldn't be worried about human things besides securely and privately handling data. I say worried as if it will be sentient :flushed: haha

That is a good attitude, IMO. That little brainstorm I just had could be mandatory within a specific app, but as you said there would certainly be other apps, so in the sense that there are other app options, it'd still be opt-in.

I would love to discuss these thoughts more with anyone, but especially those who might be interested in building. @danda I think there is an opportunity to draw more interest to things like these as, say, modular plug-ins that are themselves dapps earning PtD, so they could be made by any dev or company and earn them money when others use their plug-in.

I remember Jim talking about a network-wide shopping cart, and it just seems like something that could be a plug-in that earns @maidsafe money.

4 Likes

We could use a new Twitter-like platform right now. Active censorship is a fact. What are the alternatives?

2 Likes

Obviously, there is a significant difference between rating for quality and opinion about a topic.

Simple binary choices are easier than some range on a scale.
So, inclusion/exclusion against some significant factor.
Introducing any multiple choice greatly reduces likely participation.

So, "should it be excluded (y/n)?" is simple; another exclusion question might be "is it porn?". The two don't necessarily overlap.
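A rough sketch of how those yes/no votes might be tallied, keeping the two questions separate; the quorum and threshold numbers are just placeholders:

```python
def should_exclude(votes: list[bool], quorum: int = 10, threshold: float = 0.6) -> bool:
    """Binary 'should it be excluded (y/n)' votes: only act once enough
    people have answered, then go with the clear majority."""
    if len(votes) < quorum:
        return False                      # not enough opinions yet
    return sum(votes) / len(votes) >= threshold

# "is it porn?" would be tallied the same way but kept as its own
# question, since the two don't necessarily overlap.
porn_votes = [True, True, False, True, True, True, True, False, True, True]
print(should_exclude(porn_votes))         # True with these illustrative votes
```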

There will always be disruptive contributions, and rightly so, but majority opinion can provide a good solution for what serves everyone by default, with the difference becoming a choice… and a choice beyond the individual too, for services like directories to choose what they do and don't list. My suggestion of .gov above simply reflects that group think, whether organised in the traditional way or, more abstractly, as the sum of all.

The question of how to suggest what something is ("is it humour or not?", for example) rapidly becomes difficult and unsolvable. So recommending content of interest should be considered a distinct activity from excluding certain content.

Binary is also easier for a computer to resolve, ahead of humans doing what requires opinion.

People are lazy and reasonably do not want to contribute to what is obvious, but for some classes of content auto-filters might work well to reduce the burden. As above, it's the difference of opinion that becomes interesting.

There are ways that classification of content could work well, but it's a different problem from excluding objectionable content.

2 Likes

When, or if, this starts to work, you don't need to worry. There will be more content in a month than anyone can dream of. This is also why there is no need for any advertisement in such a case. The only point of trying to raise awareness of this project is funding and attracting more talent to work on it before launch. If it works, it will rampage away beyond anyone's wildest dreams and fears.

1 Like

In terms of filtering things, it assumes a sort of workflow where you search for everything and then filter out what you don't want.

I've been coming round more and more to a model of search (influenced by @happybeing and @joshuef) where you just look for what is trusted first, or recommended by trusted sources, and anything undesirable will be further away and therefore buried below other results.

It will be a really difficult thing to pull off in an efficient manner, but I think it could solve a lot of problems of the internet in one fell swoop. It's much closer, I think, to how we trust people and information in real life.
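Roughly, the idea is something like this sketch: order results by distance from whoever you trust, so the undesirable stuff isn't removed, it's just buried. The trust graph and result shapes here are invented for illustration:

```python
from collections import deque

def trust_distance(trust_graph: dict[str, set[str]], me: str, publisher: str) -> int:
    """Breadth-first search: 0 = myself, 1 = someone I trust directly, etc."""
    seen, queue = {me}, deque([(me, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == publisher:
            return dist
        for nxt in trust_graph.get(node, set()) - seen:
            seen.add(nxt)
            queue.append((nxt, dist + 1))
    return 10_000  # unreachable publishers are effectively "buried"

def rank(results: list[dict], trust_graph: dict, me: str) -> list[dict]:
    # Closest-to-trusted first; nothing is filtered out, only pushed down.
    return sorted(results, key=lambda r: trust_distance(trust_graph, me, r["publisher"]))
```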

9 Likes

A filtering system, one in which articles are rated by many, could make SAFE a collective mind that evolves. A topic that is taboo for many today could be a day-to-day issue in the future, even if the user pays a company to do the filtering… say, of children's content. So could this lead part of the Network to becoming a DarkSAFEweb?

Everyone is empowered by SAFE. It's like oxygen or cash or any other enabler.
The key difference is a defense of privacy, security, and freedom.
Politics and opinion will always challenge what services are provided and how.

The unSAFE does that after a fashion but in a corrupted way.

Taboo is an odd concept, typically relating to a lack of education. Shying away from reality does no good.

SAFE is not a cure-all for the problems in the world but an enabler that will provide the opportunity to make things better and to challenge those problems by giving everyone a voice. The conservative approach of stifling topics does not make them go away, and too often compounds the problem.

1 Like

yes, the uncensorable nature of it turns the base layer into a place where nothing can be taken down, by definition. Rating+filtering can then make it SAFE according to society's or one's personal definition of safe. One can easily imagine a kid-friendly SAFE based on a filtering criterion such as okay-for-kids, which itself could be computed from a composite of ratings: nsfw, violence, porn, obscenity, profanity, adult, child, legal, etc.
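For example, a minimal sketch of computing a composite okay-for-kids criterion from individual ratings; the 0-to-1 scale and the cut-off are illustrative assumptions, and the criterion names are the ones from above:

```python
def okay_for_kids(ratings: dict[str, float]) -> bool:
    """Composite criterion: every blocking rating must be near zero."""
    blockers = ["nsfw", "violence", "porn", "obscenity", "profanity", "adult"]
    return all(ratings.get(c, 0.0) < 0.2 for c in blockers)

print(okay_for_kids({"nsfw": 0.05, "violence": 0.1}))   # True
print(okay_for_kids({"nsfw": 0.9}))                     # False
```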

Of course, one could also tweak the filters in the other direction, i.e. only show porn, violence, racism, illegal content, etc., to get a super dark personalized view of things.

Further, it seems inevitable that rating criteria for political and religious ideologies would become available, so it would be possible to filter things in a way that favors one's own world view… for better or worse.

3 Likes

So this is what we need… We have to face reality and get more mature by dealing with the truth!

In fact, I think of a school system that would not be under the thumb of corrupt systems that hold a person in endless circles of useless learning time and wasted resources. Most systems in the world are dictated by big states that enslave people! SAFE could provide the perfect environment to test new concepts. Nice!

1 Like

The worse is more likely to be an impression than a reality.

Control of what others do is, of course, at odds with notions of freedom, but it's important to acknowledge the consensus that arises over time. It would be an error to ignore that human history has repeatedly called for forms of authority, and my thought above that some .gov opinion be offered might see it required that users notionally subscribe to at least one .gov - an option for more sensible governments to evidence their worth! Notionally, because there's likely always going to be a minority voiding the wishes of the majority. The challenge will be for gov to evidence its value, and for service providers to consider what they offer and what they choose to exclude.

Point being: that SAFE is for everyone suggests it does cater for the conservative mind - and the normal, reasoned call for some norm before the individual has to engage - people have more important things to do than fight a war of attrition with spam etc. People are lazy and busy with alt; it is no surprise that they proxy their interest to others. Still, SAFE is for everyone and must also be a provocateur for the more liberal, progressive edge that pushes us forward.

Such considerations are always made more difficult by words twisted back to front by propaganda and politiks, but reality cares not for the stupidity, and SAFE, being natural, aligns with reality in a way that is likely to be a better option than what we have now.

Incidentally, my interest in seeing the CLI working from cgi-bin was in part the thought that, if the header of a site lived in the real unSAFE world, normal domain ownership would allow for certain limitation of extreme content and be more easily worked with - aside from the bigger aspect of being a bridge for pulling unSAFE body content onto SAFE. (I have yet to see the CLI able to run from cgi-bin, as it seems to call on config files that are perhaps not visible from cgi-bin??)
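One guess about the cgi-bin issue: CGI servers usually run with a stripped environment (no HOME), so a CLI that reads config from the user's home directory can't find it. A sketch of a cgi-bin wrapper that sets HOME before invoking the CLI - the path and the `safe cat` invocation are assumptions on my part, not verified against the current CLI:

```python
#!/usr/bin/env python3
import os
import subprocess

# Give the CLI a HOME so it can locate its config files
# (the path here is just an example).
env = dict(os.environ)
env["HOME"] = "/home/safeuser"

result = subprocess.run(
    ["safe", "cat", "safe://example-target"],   # assumed CLI invocation
    env=env, capture_output=True, text=True,
)

print("Content-Type: text/plain\n")
print(result.stdout if result.returncode == 0 else result.stderr)
```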

1 Like

I've mentioned this before. Maybe we could use rings of trust.

You would trust someone and/or those they've trusted. If, as a parent, I were interested in children's programs and I found someone whose content I liked, I could trust them and/or those they trust. This would likely keep horror movies from popping up in my feed.

If I were religious, I could trust someone I think has reasonable content, and they'd be less likely to have something offensive to religious people.

If you run into something in your feed that is objectionable, you could untrust the creator and those they trust, or just the specific content.
We could have some pre-manufactured rings to start things off, so there is reasonable content to search for, and let it grow organically from there.

There will be room for services to provide rings of trusted content. These could be community funded and free or privately created for profit.
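A minimal sketch of how such a ring might be computed and applied to a feed, assuming a simple who-trusts-whom map; all the data shapes here are invented:

```python
def ring(trust_graph: dict[str, set[str]], start: set[str], depth: int = 2) -> set[str]:
    """Trusting someone pulls in those they trust, out to a chosen depth."""
    trusted = set(start)
    frontier = set(start)
    for _ in range(depth):
        frontier = {t for person in frontier for t in trust_graph.get(person, set())} - trusted
        trusted |= frontier
    return trusted

def feed(items: list[dict], trusted: set[str], blocked: set[str]) -> list[dict]:
    """Only show content from the ring, minus anyone explicitly untrusted."""
    return [i for i in items if i["creator"] in trusted and i["creator"] not in blocked]
```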

2 Likes

The sum of opinions is very powerful very quickly… the difference with choosing those you trust, rather than the sum of all, is that it is perhaps less liable to corruption - if you consider any quality-rating website at the moment, it's obvious there's a problem with that model, driven by the greed-for-profit motive, with fake rating submissions abounding. Still, if individuals do contribute their opinions to a group that is then subscribed to, perhaps there is stability there that can be trusted. An option perhaps to subscribe to the opinion of an individual; a group; a .gov.
But how to action that logging of opinion is far less obvious… it's a lot of metadata if so many opinions are cast on so much. That tends then towards a few powerful actors and a repeat of what is not ideal, but perhaps it is inevitable that what is now opinion does age.

1 Like

I had the same thought last night and was thinking: what if an AI were trained by such a filter? That could be bad, but it could also become good at identifying exactly what you don't want, in an opposing filter.

Web of Trust is one of those things that sounds great in theory, but is very difficult to make work in practice.

When I worked at Epinions.com circa 2000, the company was very in love with WOT, as the inventor of RDF was a founder and championed it greatly. We used an in-memory RDF graph database that was very fast for looking things up. Even so, WOT was computationally intensive. Later, when we moved to a relational database, the WOT had to be scrapped, or close to it, as it just required too many joins/lookups and/or massive batch jobs for recomputing summary tables. And this was in a centralized environment with a lot of (for the time) compute power. Also, the added value vs a simpler rating/filtering system, which we also had, just didn't seem to be there, IMO.

I'm not saying these issues are insurmountable, but they certainly are challenging, especially in a decentralized environment. Doubtless more research into WOT has been done since that time in the semantic-web community, and I haven't followed the state of the art.

6 Likes

yes, the key thing about all this is generating the granular ratings data on an ongoing basis.

Once you have that data in all sorts of dimensions, many interesting things can be done with it. For one thing, that data can be stored as RDF triples… i.e. we are generating a worldwide crowd-sourced knowledge base.
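For instance, a single rating could look something like this with rdflib; the safe:// vocabulary and predicate names are made up for illustration, not an agreed ontology:

```python
from rdflib import BNode, Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

RATE = Namespace("safe://ratings/vocab#")   # made-up vocabulary
g = Graph()

rating = BNode()                             # one rating event
g.add((rating, RATE.of, URIRef("safe://some-video")))
g.add((rating, RATE.criterion, Literal("nsfw")))
g.add((rating, RATE.score, Literal(0.1, datatype=XSD.double)))
g.add((rating, RATE.by, URIRef("safe://some-pseudonymous-rater")))

print(g.serialize(format="turtle"))
```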

RDF originally came out of AI research. A key problem has always been how to generate a useful ontology (classification system) to enable AI to get a meaningful representation of the world.

Possibly such a rating/filtering system can move that ball forward.

More to your point regarding training AI, yes, it would seem that if you fed an AI ratings data plus the original content, then the AI could learn to identify and rate new unrated content. Though one has to be careful here… do we want the AI overlords deciding for us?
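Caveats aside, a toy version of that idea: train a classifier on already-rated items, then score unrated ones (scikit-learn here, with an obviously fake dataset just to show the shape):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Items that humans have already rated (1 = flagged).
rated_texts  = ["family picnic video", "graphic violence scene", "cooking show", "explicit adult clip"]
rated_labels = [0, 1, 0, 1]

vec = TfidfVectorizer()
X = vec.fit_transform(rated_texts)
clf = LogisticRegression().fit(X, rated_labels)

# Score new, unrated content.
unrated = ["violent fight footage"]
print(clf.predict_proba(vec.transform(unrated)))   # probability of being flagged
```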

6 Likes

Fascinating background @danda, and I'm ashamed to say I'd not heard of Ramanathan Guha, but from his profile he seems to have been busy!

Here's something I found which is a nice summary leading up to the invention of RDF:

https://web.archive.org/web/19991117062212/http://builder.cnet.com/Business/Innovators97/ss10.html

3 Likes

@happybeing yeah, that article brings back some memories. In particular of spending 1+ month ripping the RDF code out of Netscape 5, where it was deeply embedded/entangled, because it was the only working RDF engine in the world, and we wanted to use it as a standalone RSS parser/validator/processor.

4 Likes

Just poking around for some content-flagging-type stuff and came across this for detecting nudity. Since SAFE won't have compute available as a network feature right away, how could an app (particularly a web app) scan videos for a neural net to process and flag such content? I'm completely in the dark on how that could possibly be achieved at the moment.
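The general shape of it, I think, would be: sample frames from the video and run each through a classifier the app ships with. A sketch with OpenCV and a placeholder `nudity_model`; in an actual web app the same idea would presumably run client-side with something like TensorFlow.js, which I haven't worked out:

```python
import cv2

def flag_video(path: str, nudity_model, every_n_frames: int = 30, threshold: float = 0.8) -> bool:
    """Sample every Nth frame and flag the video if any frame scores too high.
    `nudity_model` is hypothetical: assumed to return a 0..1 score per frame."""
    cap = cv2.VideoCapture(path)
    frame_idx = 0
    flagged = False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n_frames == 0:
            score = nudity_model.predict(frame)
            if score >= threshold:
                flagged = True
                break
        frame_idx += 1
    cap.release()
    return flagged
```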

Btw, absolutely nothing against adult content, but I think for many reasons a video publishing application should just remove all nudity, full stop. That way CP is not as large a concern, given that it's all captured by one large net. Adult websites will need this same kind of tool, except for detecting minors, which I believe there are solutions for, but let's just say searching for them is uncomfortable enough.

1 Like