Brainstorm how to end online tribalism/echo chambers

Exactly. It lets the town loony find a whole town of loonies. Worse yet, Google/YouTube help feed that bias, and YouTube in particular recommends it to the general population like a funnel, because it has proven to be very engaging (a rabbit hole of conspiracy theories) and addictive. I can be on just about any device, on a different IP, not signed in, and at some point I’m recommended Joe Rogan content that leads to conspiracies, ancient-alien stuff and flat earth.

@danda I see your point. But given that these social tools and algorithms allow people from anywhere in the world to connect (which is unnatural) and steer them into addictive patterns, all while big tech companies let it happen, advertisers line their pockets, state actors sow disruption, etc., I don’t consider it benign. It happened, and maybe you can consider that natural since it emerged on its own, but the positive feedback loop is bad for our long-term health.
Maybe we do find our way out of it, but what if that only happens because of a deliberate solution or stabilising effort? That’s why I think this discussion is important.

2 Likes

Maybe we’re hitting a fundamental problem with democratisation here.

Society has always ended up with some kind of pyramid, and I think we’ll always have a pyramid with respect to looniness, or should we say “understanding of the world”, or “intelligence” etc.

So maybe the problem with social media is that it doesn’t have enough hierarchy: it makes everyone equal, granting its power (reach) without any mechanism that filters based on non-looniness (or whatever).

2 Likes

This does seem pretty fundamental to the problem. I hear a lot on different podcasts that interview both sides of the aisle that it’s hard to know what’s true. Like @danda previously mentioned, there used to be maybe three big media companies and a lot of people trusted Walter Cronkite for their evening news. Now information is democratized, but because there is no credibility, literally anyone can make up a story or a theory and pass it along, and no one ever really looks for citations, because there is too much information all the time to bother, especially when they’re just trying to make ends meet. Most people say they get their news from the internet. I also hear a lot of people say they do “internet research”, and when I ask how they do that, they say they google what they are looking for :man_facepalming:t3:. Then I have to explain that Google will give you exactly what you’re looking for, and not the other side of the story, because it wasn’t relevant to your search. You are searching for your bias, and Google is just doing its job. I think Google’s design is part of the problem, as it efficiently feeds our natural human biases.

So we’re missing credibility, sources (citations), etc. What’s crazy is that Facebook is “trying” to tag fake news as not credible and so on, but the same people using the platform don’t believe those warnings anyway.

Thanks for bouncing thoughts Mark.

1 Like

The problem with too much social media is the hierarchy… the dull controlling the conversation. Mods who think they are Gods… bad comedy too often.

Reality has a conceptual hierarchy that filters relative to truth.

I wonder if the problem too often is that people encounter others who think differently and cannot, or care not to, hold two or more thoughts at once to resolve the difference… and people do tend toward what is not challenging. Still, there are many issues apparent in the current state of discourse… and a long way to go toward better understanding and compassion for others; atm too many conservative defaults ending in error.

This is an example of the lack of what I called hierarchy. Perhaps “filter” is less contentious, but the result is hierarchy, so I think it’s useful to acknowledge it.

Social media disables filters that worked differently, and better in many ways, than the filters that have been substituted, which are commercially driven attention stealing.

I call it stealing because the systems are designed to minimise awareness and therefore consent, by tapping into prejudice and triggering emotion at the expense of awareness and quality discourse.

There are of course those who manage this better than others, but they are drowned out by the preferences of the algorithms.

Hierarchy tempts a qualification of who has a voice, and a tendency toward exclusion and singular perspectives. The Illuminati approve of hierarchy!

I approve of hierarchy in many contexts, my point is that we are losing a way to understand why social media has the hierarchies it does and how it would be improved with different filters to create different hierarchies. I can’t think of much we humans do that doesn’t make good use of hierarchy.

1 Like

It’s often just politics lite, people nominating representatives to moderate and defend the common interest. That’s corrupted in many forums now by a large effort to drive conversation, be that the ego of individuals; organisations; or even organised crime that is MIC.

Also, hierarchies don’t need approval; they form naturally. It’s like all arenas, tending to a natural distribution. Power, if you want to call it that, tends to create what people will interpret as hierarchy, but that is then mistaken as deliberate rather than just happenstance… that is, it can easily change. (Perhaps something about media being fickle reflects that - what is popular can change in a moment.)

Still, the solution is driving down the power, so that individuals have the opportunity to engage - and by engaging, progress their own understanding.

One of the better approaches, I wonder, was reddit in the early days. It has become toxic and over-moderated now, but early on, small groups with common interests just got on and chatted without limits, and it worked well. The problem came when the mainstream drowned out good conversation, and so many moved on, looking for those simpler niches. So anything close to a generic topic became, perhaps necessarily, moderated, then controlled, then conservatively dull… and it’s that first step perhaps that needs inhibiting: preventing one channel from having so many people that it’s no longer a conversation but noise.

The hierarchies of social media are basically those of teenagers: who’s got the best haircut, who looks coolest in shades, and ultimately who’s got the most likes. Which is fine, until the same criteria apply to who has the best take on climate change, or epidemiology, or the best way to manage an economy. In that case the guy with the coolest haircut tends to win the algorithmic wars against the balding dude with reading glasses, even though he’s only 19 and knows very little about any of those things. This is a big problem, because it’s a hierarchy of popularity, and that’s something which is easily manipulated.

1 Like

I think there are other hierarchies constructed by the feed algorithms based on reactions (likes, reposts, comment etc.) and unknown criteria that probably go way beyond this (mood, personality type, topics etc).

Definitely. My point is that the hierarchies are increasingly decoupled from expertise.

2 Likes

It’s an interesting point about the feedback. If forums are centered on kudos, the hierarchy follows that ‘power’.

Coupled with the thought that large social media suffers because of the volume of people packing into a topic, I wonder what could be better.

So, if it’s perceived merit and then based on who is friends with whom, perhaps that would be different? I wonder if you could name friends and talk freely among those and others on topics of shared interest; the reward would be the conversation itself. Limit it perhaps to friends + 2 degrees, so the circle is not too wide but still introduces some difference. Then access to subforums is reinforced by that feedback of who is aligned with whom; if you disagree, you unfriend them, and they risk losing access to that forum if they have no other friends in it. That would limit the size of the problem too and keep the focus on shared interests.

So, I have in mind Telegram with access enabled by who you know. That would be more real-world-like and perhaps more stable?? I wonder whether, on the internet, friendships (or whatever term is coined) could be more fluid than in real life… so that might work as a feedback, where needed. Some maximum number of friends is perhaps needed to prevent huge groups?? The size problem reminds me of the thought that one human can only manage knowing on the order of 100 people… and only a few close friends. None of that is thought through, just off the top of my head, but perhaps it would be interesting to see, and there surely must be other ways of limiting size and encouraging the positive and negative feedback that supports good conversations.
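To make the friends + 2 degrees idea concrete, here is a minimal sketch (my own illustration; the graph shape and all names are made up): access to a circle is just a depth-limited breadth-first search over who has named whom as a friend.

```python
from collections import deque

def within_degrees(friends, member, candidate, max_degrees=2):
    """Is `candidate` within `max_degrees` friendship hops of `member`?
    `friends` maps a name to the set of names that person has friended."""
    if member == candidate:
        return True
    seen = {member}
    frontier = deque([(member, 0)])
    while frontier:
        person, depth = frontier.popleft()
        if depth == max_degrees:
            continue  # don't expand beyond the allowed circle
        for friend in friends.get(person, set()):
            if friend == candidate:
                return True
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, depth + 1))
    return False

# Hypothetical friendship graph: alice-bob-carol-dave is a chain.
friends = {
    "alice": {"bob"},
    "bob": {"alice", "carol"},
    "carol": {"bob", "dave"},
    "dave": {"carol"},
}
```

With this graph, carol (a friend-of-a-friend) is inside alice’s circle, while dave (three hops away) is not - so unfriending really does reshape who can reach which conversation.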

Great point. If I were to add to the hierarchy subject, I would just say the hierarchy is one of feeding information/misinformation. Someone who provides value, whether true or false, is influential, period. Given that people like to feed their biases, that lends itself to building an ever larger social media influence hierarchy.

1 Like

I think that truth resonates with people… eventually. Even if unpopular at the time.

Most real scientific advances throughout history have challenged the dogma and beliefs of the era, and taken some while to become generally accepted. I personally am aware of many scientific advances today that are being blocked by a kind of stranglehold on science journals, textbooks, and grant monies. Kind of a garbage in, garbage out situation.

I identify with the minority on a great number of matters, scientific, economic, and social. Sometimes it comes down to keeping the flame of truth alive for a later, hopefully more enlightened generation to discover and fan into a larger fire.

Thus, the key thing is to avoid creating a system where the majority can censor (delete, remove, ban, block, etc) the minority, for any reason whatsoever.

imho, it is fine (and necessary) to enable personalized filtering/blocking. ie, I might decide that I don’t want to see any more posts by some troll and so my user-agent can filter those out. But the platform itself should not remove the posts for anyone that DOES want to view them.

I apply this principle to content that I personally abhor, because that is the best test of the principle.

As for tribalism and echo chambers, I don’t worry about it so long as the platform is not actively censoring content. Unfortunately, there are few, if any, examples of such a platform today.

Safe Network, by virtue of being an uncensorable platform, will need to provide adequate filtering tools. I suspect they will be built of necessity by the community before any real mass adoption could realistically occur.

It is a worry that if a given filter is used by enough of the population, that it becomes kind of indistinguishable from outright removal of the filtered-out content. Thus, it is very important that a filter is always something that an individual can override, customize, opt-out of. ie, take the rose-tinted glasses off.
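A toy sketch of the kind of user-agent-side filter described above (the data shapes and names are invented for illustration; this is not any real Safe Network API): the platform keeps every post, the client decides what to hide, and the individual can always override the filter.

```python
def apply_filter(posts, blocked_authors, show_all=False):
    """Client-side filtering: hide posts from authors this user has
    blocked. The platform never deletes anything; pass show_all=True
    to override the filter (take the rose-tinted glasses off)."""
    if show_all:
        return list(posts)
    return [p for p in posts if p["author"] not in blocked_authors]

# Hypothetical posts as the platform stores them, unfiltered.
posts = [
    {"author": "troll42", "text": "inflammatory spam"},
    {"author": "dana", "text": "a thoughtful reply"},
]
```

The same `posts` list is served to everyone; one user’s block list only changes that user’s view, and flipping `show_all` restores the unfiltered feed.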

4 Likes

I get what you’re saying, and in that filtering thread I think we covered a lot of why it is so important. The rose-tinted glasses, though, could be intoxicating. Having so many people living in their own versions of reality seems troubling and far from consensus. Local consensus maybe, but not global consensus, which is an interesting and maybe ironic observation given that the Safe Network works this way.

Perhaps the filtering portion is inevitable.

1 Like

yeah… speaking only for myself, I am not interested in consensus as a goal.

During most of my lifetime I have been opposed to many so-called “consensus” views on things I care about. And usually, the more I read up on a topic, the more I find out about how nuanced a given topic is and how little agreement there actually is amongst supposed experts. And that time and time again the most innovative thinkers, those that could really move humanity forward, are harassed and silenced, careers ruined, imprisoned, etc, etc.

In many aspects of life, locally speaking consensus can be good, eg for decision making in a small business, family, etc. But it is never really possible outside of a small group and I think it becomes quite dangerous when any type of so-called consensus is claimed at a large scale, because usually it just means that one group has more power and control to silence opposition of those that do not agree.

I definitely fall in the camp of “let everyone speak and individuals can figure out for themselves what they believe”. And really, any other approach is incompatible with liberty, afaict.

1 Like

Interesting talk which highlights how important it is to break down our own echo-chambers

4 Likes

I’m not sure if these communities were so like-minded. Or if the like-mindedness was the reason for them to stick together. I think they stuck together, because they needed each other in order to survive. Maybe they would have wanted to kick some individuals out, but couldn’t because they knew they needed them in some situations.

Nowadays we - who can afford to not need others that much, or at least to not see our dependency in the bigger picture - easily define “community” as like-mindedness, because we can choose. But there are also communities that are based not on choice but on need, and they probably have greater tolerance for different people. They have to stand each other, and they maybe learn patience and other skills to do so out of necessity.

2 Likes

Common interest is different from like-mindedness.

As interests evolve, so do alliances. The sum of the overlap of interests makes for a community, with all the variety of normal distributions in the glue that keeps those together.

This idea comes from Stuart Russell’s book Human Compatible (it’s a really good book).

This section from the book outlines his idea:

… consider how content-selection algorithms function on social media. They aren’t particularly intelligent, but they are in a position to affect the entire world because they directly influence billions of people. Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user’s preferences so that they become more predictable. A more predictable user can be fed items that they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they will click on. (Possibly there is a category of articles that die-hard centrists are likely to click on, but it’s not easy to imagine what this category consists of.) Like any rational entity, the algorithm learns how to modify the state of its environment—in this case, the user’s mind—in order to maximize its own reward.
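Russell’s point can be caricatured in a few lines of Python. This is my own toy model, not the book’s code and not any real platform’s algorithm; the click model and drift rate are invented for the example. If clicks are most predictable at the extremes and each shown item nudges the user toward it, a greedy click-maximiser drives preferences to an extreme.

```python
def greedy_feed(preference, items, rounds=30, drift=0.2):
    """Toy click-maximising recommender.

    `preference` and each item position live in [-1, 1].
    Assumed click model: P(click) = 0.5 + 0.5 * preference * item,
    so a fully extreme user clicks on matching extreme items with
    probability 1 - i.e. extreme users are the most predictable.
    Each shown item also drags the preference toward itself.
    """
    for _ in range(rounds):
        # Greedily show the item with the highest predicted click rate.
        best = max(items, key=lambda item: 0.5 + 0.5 * preference * item)
        # Side effect: the user's preference shifts toward what is shown.
        preference += drift * (best - preference)
    return preference

# Items spread across a one-dimensional political spectrum.
items = [-1.0, -0.5, 0.0, 0.5, 1.0]
```

Start the simulated user mildly right-leaning (0.1) and they end up near +1; start them mildly left-leaning and they end up near -1. The optimiser never sets out to polarise anyone; polarisation is just the profitable fixed point.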

There’s a section in the book with the subtitle ‘Tribalism’, here is the part I consider relevant to this topic:

To varying degrees, all the major technological issues of the twentieth century—nuclear power, genetically modified organisms (GMOs), and fossil fuels—succumbed to tribalism. On each issue, there are two sides, pro and anti. The dynamics and outcomes of each have been different, but the symptoms of tribalism are similar: mutual distrust and denigration, irrational arguments, and a refusal to concede any (reasonable) point that might favor the other tribe. On the pro-technology side, one sees denial and concealment of risks combined with accusations of Luddism; on the anti side, one sees a conviction that the risks are insuperable and the problems unsolvable. A member of the pro-technology tribe who is too honest about a problem is viewed as a traitor, which is particularly unfortunate as the pro-technology tribe usually includes most of the people qualified to solve the problem. A member of the anti-technology tribe who discusses possible mitigations is also a traitor, because it is the technology itself that has come to be viewed as evil, rather than its possible effects. In this way, only the most extreme voices—those least likely to be listened to by the other side—can speak for each tribe.

It seems to imply a symptom of tribalism is casting out ‘your own kind’, as well as the more obvious symptom of casting out ‘the opposition’.

I reckon a lot of progress can be made when we respect the sources of belief; this allows us to disagree with the belief itself but still have some common ground, or agreement about why people choose to believe it in the first place. For example, I’m strongly opposed to gun ownership, but I can sympathize with the desire for freedom from oppression, which seems to be a significant part of the case for gun ownership.

We can’t pretend ‘bad is good’ but looking deeper to the source might allow some sort of common ground to appear where it otherwise might not have.

6 Likes