Disinformation campaigns are specifically designed to undermine the reasoning capabilities of people by inveigling them into believing (usually emotionally provocative) falsehoods, turning them into misinformation conduits in the process.
It’s like saying that meth should be legal because reasonable people should just choose not to use it, ignoring the social and mental health issues that drive people to consume it against their best interest.
Sometimes the right thing to do is to cut off the head of the snake before it can bite you.
I get that, but Twitter isn’t based in Brazil at all. What happens if, say, China declares that certain posts are “misinformation”? Should those be taken down without complaint?
We routinely censor content to placate China; like, all the time.
I believe each country should get to have a say in what is permissible, and content deemed unacceptable should be blockable by region. I don’t think it’s reasonable to say “well it’s on the internet so it’s untouchable” simply because the server is in another country.
If a government is imposing harmful censorship, I think supporting resistance to that censorship is the right thing to do. A company that isn’t located in that country ethically shouldn’t be complying with such orders. Make them burn political capital taking extreme and implausible measures.
Define “harmful censorship”. I would argue—strongly—that censoring hate speech and misinformation is a public service.
I also think that any service (Twitter) refusing to abide by the laws of a country (Brazil) has no place in that country.
Since my argument isn’t about what should be censored, I’m intentionally leaving the boundaries of “harmful censorship” open to interpretation, save the assertion that it exists and is widely practiced.
That could be true in a literal sense (the country successfully bans the use of the service), or not (the country isn’t willing or able to prevent its use). Morally though, I’d say you have a place wherever people need your help, whether or not their government wants them to be helped.
I’m going to challenge your assertion that you’re not talking about what should be considered harmful by pointing out that you are loading your argument substantially: you assert that people need “help” being protected from “harmful” censorship. Remember that the issue addressed in this thread is Brazil banning X for its promotion of misinformation and hate speech.
Censorship isn’t harmful by default. It is ok to ban people from shouting “fire” in a theater, for example, because the shout may result in real harm. Now you can argue that some censorship is harmful because of its impact on society, such as removing books from schools in a way that hampers a fair and complete education, or banning research texts that expose inconvenient truths.
But, again, the issue here is specifically an attempt to ban misinformation and hate speech; are you going to argue that these things are a positive for the community and should be defended as a moral imperative? Frankly, that’s a pretty silly stance to take.
You can interpret my words however you want, and I can’t stop you from willfully misinterpreting me, but I am telling you explicitly what I am saying and what I am not saying, because I have something specific I want to communicate. When you argue that each country should get to have a say in what is permissible, and that content deemed unacceptable should be blockable by region, you are asserting in this context that states have an apparently unconditional moral right to censor, and that this right gives third parties a duty to go along with it and not interfere. I think this is wrong as a general principle, independent of the specific example of Twitter vs Brazil. If the censorship is wrong, then it is ok to fight it.
Ok, but the question is, what can be done about it? Say a country is banning books like that. A web service defies that government by providing downloads of those books to its citizens. Are they morally bound not to do that? Should international regulations prevent what they are doing? I think not; it is ok and good to do, if the censorship is harmful.
It’s hard to have a discourse on a topic if you insist that the scope of that topic must by default be infinite.
X isn’t being threatened with litigation because they’re freedom fighters bringing literature to the huddled masses; they’re being threatened with litigation because they are a billion-dollar business sustaining themselves by selling ads alongside content that Brazil argues was misinformation and hate speech.
On the topic of freedom fighters bringing literature to the huddled masses: it may be moral in some extreme cases to defy a government, but there are means of doing that completely removed from the scope of microblogging on a corporate behemoth’s web platform. For example, there is an international organization whose sole purpose is pursuing human rights violations.
There are officially 193 countries, according to the UN, each with its own laws, and some of them (the European ones) also share common EU law. How is it humanly possible for a site to keep track of every single law of every single country?
Laws are not a worldwide consensus.
Also, who and what exactly defines what “misinformation” is? For example, belief in the supernatural (such as the demonic forces of the Goetia and Luciferianism) is not a scientifically provable thing, so if we consider “non-misinformation” to be only information that can be strictly proven, should absolutely all social network content about one’s beliefs be considered “misinformation”?
I don’t think it’s the responsibility of X to know the laws of every country; I expect them to respect a country’s wishes once those wishes are brought to their attention, if they want to continue doing business there.
Also, I think we both know that the misinformation we are talking about here has nothing to do with religious beliefs. The context of the linked article clearly indicates that harmful mistruths leading to harmful actions is the subject here.
You’ve been challenged on the definition of misinformation. Your response was to claim it’s obvious.
I think that’s entirely reasonable.
Agreed. But if I’m running a website, I’m not going to block content based on what some other country that I don’t live in wants. Why should I?
I’m not sure why it’s so tempting to think of internet content as a special entity that defies otherwise established rules. Maybe it’s simply because no special effort is needed today to get the content across the border?
Regardless, we aren’t talking about your geocities page, we’re talking about billion-dollar businesses. Would it be appropriate to take your physical storefront across international borders and insist that the government there should have zero say about what products you sell? If not, why is it appropriate to do the same with web content? X is selling content in the form of ad distribution; countries should get to decide if that content is appropriate for distribution.
Then they better figure out how to block it; I’m not going to assist the nanny-state.
I’m not sure why it’s so tempting to think that because some government wants a piece of information to disappear, that people should actually make an effort to disappear that information.