A little about me: I’m sensitive and caring. I am most certainly not a far right hateful extremist, and yet a network of bloggers, fact-checkers, anti-hate campaigners and mainstream media outlets are combining narratives to promote government censorship of the internet.
In order to do this they are applying the label of ‘hateful extremism’ to any information or reports that question government policies or actions. In particular, they allege that such material constitutes ‘disinformation’ peddled by ‘far right hateful extremists’.
GOVERNMENT MINISTERS – You ARE committing genocide.
All that you care about is money. You are wicked men and women in the eyes of God. Stay at home, protect the NHS, save lives is CAUSING UNTOLD SUFFERING AND DEATH.
The truth will prevail. Government policies are killing people, fact checkers; you can’t fact-check your way out of that.
“Woe unto them that call evil good, and good evil; that put darkness for light, and light for darkness; that put bitter for sweet, and sweet for bitter!” – Isaiah 5:20
According to the Word of God (Romans 13), our leaders (higher powers) are ordained by God, and yet if they do evil, it says they should be afraid. They will have to give account for all of the wasted taxpayers’ money, and for refusing people access to cheap treatments: Hydroxychloroquine, Zinc, and Ivermectin. This is a great video of Dr Peter Breggin interviewing Dr Vladimir ‘Zev’ Zelenko.
All those Government ministers I could email have received the gospel message, and yet they continue to lie to the public, push the experimental vaccines, and they clearly have no fear of God.
Iain Davis from In This Together writes an investigative piece published at UK Column.
Copyright: UK Column
The open source investigative blogging platform Bellingcat recently published an article titled “The Websites Sustaining Britain’s Far-Right Influencers.” In the article Bellingcat openly advocates the censorship of the Internet:
This investigation … comes as the British government prepares key legislation to compel tech companies to make the internet a safer space … Tech platforms’ amplification of harmful ideologies with real-world consequences is undeniable … This year, the UK will bring its Online Harms Bill before parliament, legislation empowering an independent regulator to fine platforms that don’t restrict unsafe online content. Civil society has urged for smaller platforms to be included …
Bellingcat journalists are funded by, among others, the National Endowment for Democracy (NED). Despite its NGO status, NED is directly financed by the U.S. State Department and numerous private philanthropic foundations that are supportive of U.S. foreign policy objectives.
Theresa May’s Rapid Response Mechanism (RRM), agreed at the 2018 G7 meeting, is very much part of British and U.S. foreign policy. The objective is to provide a unified narrative of global events to the public: apportioning blame rapidly, seemingly without either investigation or evidence.
While the RRM is apparently focused upon international relations, seeking to declare the guilt of hostile states, it gave rise to the UK Cabinet Office’s Rapid Response Unit (RRU). The RRU’s focus is upon the UK public.
It coordinates the activities of information warfare units, such as 77th Brigade and @HutEighteen, who monitor what we do and say on social media and actively combat what they identify as online disinformation. Their opponents in this hybrid information war are ordinary people who question government narratives, including so-called alternative media and those who share their reports.
The UK government is among those who consider these efforts alone to be insufficient. They are formulating Online Harms legislation to effectively censor the Internet.
This intended curtailment of free speech will be difficult to sell to the public unless they are convinced of its necessity. NED-funded media platforms like Bellingcat are part of the campaign to convince us.
Bellingcat sit within a much wider media network of government and corporate-backed blogs, fact checkers, anti-hate campaigners and mainstream media outlets using a set of techniques designed to support the often weak narratives of governments who wish to have global control of information.
The latest term to be weaponised by this network is hateful extremism.
The aim is to convince the public that the online activities of said hateful extremists put their safety at risk. Censorship is being touted as the only solution.
Establishing The Threat
Bellingcat claim that “tech platforms’ amplification of harmful ideologies with real-world consequences is undeniable.” They allege that online activity is a driver of radicalisation and in particular “far right” extremism, suggesting this potentially leads to terrorism.
We need to be clear about what Bellingcat is claiming here.
Throughout their article they rely heavily on the research of Hope Not Hate (HNH). They cite the HNH report “State of Hate” as a source.
State of Hate claims “Far Right Terrorism Is On The Rise.” This claim is central to the argument Bellingcat is making.
However, State of Hate makes many claims without offering evidence to support them. It provides nothing to substantiate its headline assertion that “Far Right Terrorism” is increasing. Instead it offers apparent contradictions throughout:
Organisationally, the movement is weaker than it has been for 25 years. Membership of far right groups is down to an estimated 600-700 people. Traditional far right parties like the British National Party (BNP) and the National Front (NF) are now almost extinct … Yet, at the same time, the far right poses a bigger threat – in terms of violence and promotion of its vile views (particularly anti-Muslim views) – than it has in many years. The threat is evolving.
On the one hand, HNH acknowledge that far right groups have practically disappeared, but then they claim far right violence is increasing. They also blame this imperceptible rise in far right violence on the Internet.
The growth in global Internet use over the last 25 years has been marked. In 1995 there were an estimated 16 million people using the Internet worldwide. In 2020 that figure eclipsed 5 billion. According to the World Economic Forum, by 2016 this growth had indeed corresponded to a notable increase in terrorism.
However, their statistics show that this increase was overwhelmingly due to Islamist extremism and terrorism, much of it in effective or actual war zones.
Terrorist activity in destabilised regions of the world like Iraq, Libya, Syria, Afghanistan, Somalia, Pakistan etc. can’t be said to have been caused by people using social media. The determinants are numerous, interconnected and complex.
Indeed, if we look at terrorist activity in Western Europe, most notably the UK, the Internet age has corresponded to a remarkable reduction in extremist violence and nationalist terrorism. The 1970s and 1980s were far more dangerous than the 21st century.
In an attempt to establish the idea of this far right threat, Bellingcat linked to an article by CNBC which alleged that the “Australian terrorist who killed 51 people at two mosques in Christchurch was radicalized by YouTube.” In turn, CNBC cited a report by the New Zealand Royal Commission.
Why did Bellingcat not link to the primary source?
It turns out that CNBC’s claim that the Christchurch gunman was radicalized by YouTube wasn’t accurate. Bellingcat’s undeniable fact is, in fact, quite deniable. This is acknowledged in the Royal Commission report.
Firstly, the report highlights that possible online radicalisation isn’t the uncomplicated process that Bellingcat claims. The Commission notes:
Radicalisation to violence is highly individualised and there is not one model that can explain why people choose to commit violence. Rather, a person’s individual characteristics (their background, life experiences and personality), the social groupings they are part of and the wider socio-economic and political environment they live in, all interact in unique ways [to] influence a person’s likelihood of radicalising.
The Commissioners did not have a clear picture of the terrorist’s online activity, nor any evidence that he had been “radicalized by YouTube.” Yet, despite acknowledging their doubt, they too sought to blame the Internet, although they offered considerably more qualification than Bellingcat:
We have no doubt that the individual’s Internet activity was considerably greater than we have been able to reconstruct … His exposure to such content may have contributed to his actions on 15 March 2019 – indeed, it is plausible to conclude that it did. We have, however, seen no evidence to suggest anything along the lines of personalised encouragement or the like.
They had no evidence that he had been radicalized by YouTube. Ultimately they concluded that the terrorist’s radicalisation wasn’t a product of surfing the Internet, but rather a complex mixture of a wide range of influences:
His life experiences appear to have fuelled resentment and he became radicalised, forming extreme right-wing views about people he considered a threat.
They had good reason to be uncertain about the role of the Internet in his radicalisation. In 2016 the U.N. Special Rapporteur Ben Emmerson issued a report noting the lack of any plausible evidence which adequately explains the radicalisation process:
Many programmes directed at radicalisation [are] based on a simplistic understanding of the process as a fixed trajectory to violent extremism with identifiable markers along the way … there is no authoritative statistical data on the pathways towards individual radicalisation.
A team of researchers from Australia’s Deakin University corroborated Emmerson’s view in 2018. Their peer-reviewed article, The 3 Ps of Radicalisation, considered the available academic literature on radicalisation. They found:
Radicalisation, for the most part, [takes place] in social settings … This means that factors such as the consumption of propaganda, narratives or political grievances do not operate by themselves but rather have effect within specific social settings … the lack of rigorous methods in the field also leaves unanswered the questions about the causal relations between the factors … There is no definitive answer to the question whether the adoption of an extreme ideology precedes engagement in violence.
Associating the Threat To The Target Of The Censorship
Throughout their article Bellingcat consistently refer to extremism “AND” conspiracy theory:
“His white supremacist conspiracy theories.”
“Brits promoting hate and conspiracy theories.”
“Gab contained hate and conspiracy theories.”
The term “conspiracy theory” is nothing more than a label to avoid discussion. Usually the person applying the label doesn’t wish to acknowledge the evidence of governmental crime. As Christopher Hitchens observed:
One has become used to this stolid, complacent return serve: so apparently grounded in reason and scepticism but so often naive and one-dimensional.
Operation Gladio is one, but there are numerous other well documented, proven examples of governmental crime. Iran Contra, the Watergate scandal, Operation Mockingbird, Operation Paperclip, MK Ultra, the COINTELPRO program and the WMD deception, to name just a few. It is by no means irrational to suspect such crimes continue.
U.S. political scientists Joseph Uscinski and Joseph Parent have conducted perhaps the most extensive research into the demographics of people labelled conspiracy theorists.
They found no difference in gender distribution, that black and Hispanic people are the predominant ethnic groups and, with 23% university-educated, educational patterns were consistent with the wider population. They found no particular political bias, though ‘conspiracy theorists’ tended to favour independent electoral candidates.
There is no evidence that people who are concerned about possible government crime are unusually disposed to racism, white supremacy or “far right” politics.
Bellingcat have exploited a composition fallacy to make an unfounded claim. But the reality is that, just as with any large cross section of the UK population, a small minority of “conspiracy theorists” may have racist beliefs.
Using this same composition fallacy, we might claim that censors are fascists, therefore everyone who advocates for censorship of free speech, be it online or off, is a fascist. That would be equally incorrect.
Censorship As The Only Solution To Hateful Extremism
In their recent report, Operating With Impunity, the Commission for Countering Extremism (CCE) claims that there is a lack of laws capable of capturing hateful extremists.
They allege that hateful extremists can exploit gaps in legislation and can spread their hateful extremism with impunity.
It has long been against the law in the UK to encourage anyone to commit a crime, either verbally or in writing (including online). Regardless of so-called “hateful extremism”, if anyone encourages anyone else to commit a crime, they are guilty of one themselves.
Until the Serious Crime Act 2007, incitement was a common-law crime. Under the 2007 Act it became an inchoate offence (an offence of preparing a crime) to encourage or assist in any crime. Once libel, anti-terrorism, sedition and defamation laws etc. are taken into account, our online protections are already in place.
Despite the fact that, as we have already seen, terrorism in the UK has declined significantly during the age of the Internet, the CCE also allege:
Hateful extremists intend to create a climate conducive to terrorism, hate crime and violence; or seek to erode and destroy our democratic freedoms and rights.
The problem for CCE and the government is that neither they (hateful extremists), nor the crimes they will be found guilty of, exist yet.
Therefore the government intends to create them.
People who previously were not guilty of encouraging or abetting any crime will be made into criminals — not because anything they do or say presents any genuine threat to public safety, but because the government, with the support of the likes of Bellingcat, want to silence them.
We can anticipate that this new crime wave is going to be enormous because, once again, composition fallacies are littered throughout the CCE report:
“Several million users globally were in groups which promoted the QAnon conspiracy theory”
“42 channels dedicated to this widely discredited conspiracy theory”
“Extremist narratives underpin some of the best-known and most recent conspiracies.”
“Repeatedly uploading videos online containing anti-Arab conspiracies”
So we understand from this that whatever the CCE choose to label a conspiracy theory constitutes hateful extremism. But what is their definition of hateful extremism?
- Behaviours which incite and amplify hate, or engage in persistent hatred, or equivocate about and make the moral case for violence
- which draw on hateful, hostile or supremacist beliefs directed at an out-group who are perceived as a threat to the well-being, survival or success of an in-group; and
- cause, or are likely to cause, harm to individuals, communities or wider society
From this we can glean that hateful extremism can be anything that might be harmful to wider society. Consequently we need to understand what the government intends “online harm” to mean. For this we can go to the Online Harms White Paper:
Inaccurate information, regardless of intent, can be harmful.
Inaccurate information can be harmful? What else does the government consider to be hateful extremism?
…The spread of inaccurate anti-vaccination messaging online poses a risk to public health. The government is particularly worried about disinformation (information which is created or disseminated with the deliberate intent to mislead; this could be to cause harm, or for personal, political or financial gain).
Questioning vaccine safety is therefore an act of hateful extremism and new laws are required to punish the newly created anti-vaxxer criminals and censor their hate.
It seems the government intend to use their loosely defined legislative construct of “hateful extremism” to censor public opinion. It looks like a fairly arbitrary process and, thanks to the apparent vagaries of the legal concepts, won’t require much effort. Any opinion can be described as hateful extremism, and the only real criterion for establishing it is that the government don’t like it.
Bellingcat claim that “civil society has urged for smaller platforms to be included” in the suggested censorship grid. By “civil society” Bellingcat don’t mean the people. Instead they cite HNH as an exemplar of stakeholder civil society. This is a civil society from which ordinary men and women are excluded. It is a network of hand-picked NGOs, charities, academic institutions and corporations.
Looking again at the HNH “State of Hate” report, we can see how civil society applies the Hate Creep method. It isn’t very far into HNH’s pamphlet before we start seeing the term hateful extremism being applied to perfectly legitimate political opinion:
Even UKIP, which as recently as 2015 took 14% of the General Election vote, has virtually collapsed.
The political party UKIP are defined by HNH as far right extremists. Therefore, they are hateful extremists according to the CCE and cause harm according to the government.
By this logic, the millions of people who voted for UKIP are all hateful extremists. Questioning the government’s immigration policy is an act of hate and supporting Brexit is extremism, implying that everyone who supported Brexit is also a far right extremist and a potential terrorist or terrorist sympathiser.
Hate Creep
“Hate Creep” can thus be defined as:
The false attribution of ‘hateful extremism’ to an opinion you oppose for the purpose of illegitimately and unlawfully censoring it.
Hate Creep is the real focus of the work of HNH and Bellingcat. Hate Creep is the driving methodology of the CCE and Hate Creep is the basis of the proposed online harms legislation.
Bellingcat and HNH are working in partnership with state legislatures to exploit a deliberately vague definition of “extremism” and “hate” in order to censor free speech and marginalise, with a view to silencing, any criticism of government policies and actions.
The Online Harms White Paper states:
Our society is built on confidence in public institutions, trust in electoral processes, a robust, lively and plural media, and hard-won democratic freedoms that allow different voices, views and opinions to freely and peacefully contribute to public discourse … Disinformation threatens these values and principles, and can threaten public safety, undermine national security, fracture community cohesion and reduce trust.
Our society is not built upon confidence in public institutions. Our society is based upon our inalienable rights enshrined in our constitution, not our belief, or trust, in the government. Our constitutionally protected rights exist to defend us against the worst excesses of the government. The trust that matters is the trust we place in each other.
Our rights include the rights to freedom of speech and freedom of expression, which the proposed Online Harms legislation seems to directly oppose.
In response to COVID-19, our hard-won democratic freedoms have been removed wholesale by the government. This does not appear to be a temporary situation.
We need different voices, views and opinions to freely and peacefully contribute to public discourse.
In order to defend ourselves against this onslaught a robust, lively and plural media is absolutely essential. As far as the government are concerned that means the mainstream media, their favoured online platforms like Bellingcat and representatives of their civil society, which most of us are excluded from.
Winston Churchill once said:
“Criticism may not be agreeable, but it is necessary. It fulfils the same function as pain in the human body. It calls attention to an unhealthy state of things.”
And we are in a very unhealthy state.