
Why AI 'misinformation' algorithms and research are mostly expensive garbage

Updated: Aug 30

If ever there was a case of 'garbage in, garbage out' then this is it. And, ultimately it has all been driven by the objective of censoring information that does not fit the politically correct narrative.


The Hunter Biden laptop story is just one of many stories which were deemed by the mainstream media (and most academics) to be 'misinformation' but which were subsequently revealed as true. Indeed, Mark Zuckerberg has now admitted that Facebook (Meta), along with the other big tech companies, was pressured into censoring the story before the 2020 US election, and was subsequently pressured by the Biden/Harris administration to censor stories about Covid which were wrongly classified as misinformation.




The problem is that the same kind of people who decided what was and was not misinformation (generally people on the political Left) were also the ones who were funded to produce AI algorithms to 'learn':


a) which people were 'spreaders of misinformation'; and

b) what new claims were 'misinformation'.


Between 2016 and 2022, I attended many research seminars in the UK on using AI and Machine Learning to 'combat misinformation and disinformation'. From 2020, the example of Hunter Biden's laptop was often used as a key 'learning' example, so algorithms classified it as 'misinformation' with subclassifications like 'Russian propaganda' or 'conspiracy theory'.


Moreover, every presentation I attended invariably started with (and was dominated by) examples of 'misinformation' claimed to be based on "Trump lies", such as those among what the Washington Post claimed were the "30,573 false or misleading claims made by Trump over 4 years". But many of these supposedly false or misleading claims were already known to be true to anybody outside the Guardian/NYT/Washington Post reading bubble. For example, they claimed that denying Trump had said "Neo-Nazis and white supremacists were very fine people" was disinformation, whereas even the far Left-leaning Snopes had debunked that claim in 2017. Similarly, they claimed that "evidence that Biden had dementia" or that "Biden liked to smell the hair of young girls" was misinformation, despite multiple videos showing exactly that - so, don't believe your lying eyes. Indeed, as recently as one week before Biden's dementia could no longer be hidden during his live Presidential debate performance, the mainstream media were adamant that such videos were misinformation 'cheap tricks'.



But the academics presenting these anti-Trump, pro-Biden, and other political examples ridiculed anybody who dared question the reliability of the self-appointed oracles who determined what was and was not misinformation. At one major conference taking place on Zoom, I posted in the chat: "Is anybody who does not hate Trump welcome in this meeting?" The answer was: "No. Trump supporters are not welcome and if you are one you should leave now". Sadly, most academics do not believe in freedom of thought, let alone freedom of expression, when it comes to any views that challenge the 'progressive' narrative on anything.


In addition to the Biden and Trump related 'misinformation' stories which turned out to be true, there were also multiple examples of Covid-related stories (such as those claiming very low fatality rates, or questioning the effectiveness and safety of the vaccines) classified as misinformation that also turned out to be true. In all these cases anybody pushing these stories was classified as a 'spreader of misinformation', a 'conspiracy theorist', etc. And it is these kinds of assumptions which drove how the AI 'misinformation' algorithms developed and implemented by organisations like Facebook and Twitter worked.


Let me give a simplified example. The algorithms generally start with a database of statements pre-classified as either 'misinformation' (even though many of these turned out to be true) or 'not misinformation' (even though many of these turned out to be false). For example, the following were classified as misinformation:


  • “Hunter Biden left a laptop with evidence of his criminal behaviour in a repair shop”

  • “The covid vaccines can cause serious injury and death”


The converse of any statement classified as misinformation was classified as 'not misinformation'.


A subset of these statements is used to "train" the algorithm and the rest to "test" it.


So, suppose the laptop statement is one of those used to train the algorithm and the vaccine statement is one of those used to test it. Then, because the laptop statement is classified as misinformation, the algorithm learns that people who repost or like a tweet containing the laptop statement are 'misinformation spreaders'. Based on other posts these people make, the algorithm might additionally classify them as, for example, 'far right'. The algorithm is likely to find that some people already classified as 'far right' or as 'misinformation spreaders' - or people they are connected to - also post a statement like "The covid vaccines can cause serious injury and death". In that case the algorithm will have 'learnt' that this statement is most likely misinformation. And, hey presto, since it gives the 'correct' classification to the 'test' statement, the algorithm is 'validated'.
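The 'guilt by association' mechanism just described can be sketched in a few lines of Python. This is purely illustrative: the statement names, user names, social graph and 50% threshold are all hypothetical, not any real platform's code or data:

```python
# Toy sketch of guilt-by-association 'misinformation' labelling.
# All data and the 50% threshold are hypothetical, for illustration only.

# Training statement with its pre-assigned label (assumed ground truth)
train_labels = {"laptop": "misinformation"}

# Who shared which statement (a tiny social graph)
shares = {
    "alice": {"laptop", "vaccine"},
    "bob":   {"laptop", "vaccine"},
    "carol": {"vaccine"},
}

# Step 1: flag anyone who shared a statement labelled 'misinformation'
flagged = {user for user, posts in shares.items()
           if any(train_labels.get(p) == "misinformation" for p in posts)}

# Step 2: a test statement shared mostly by flagged users
# inherits the 'misinformation' label
def classify(statement):
    sharers = [u for u, posts in shares.items() if statement in posts]
    flagged_fraction = sum(u in flagged for u in sharers) / len(sharers)
    return "misinformation" if flagged_fraction > 0.5 else "not misinformation"

print(classify("vaccine"))  # 2 of 3 sharers are flagged -> 'misinformation'
```

Because the 'test' statement ("vaccine", standing in for the vaccine-injury claim) is shared by the same people who shared the pre-labelled laptop statement, the classifier reproduces the pre-assigned label and so appears to be 'validated' - without ever examining whether either statement is true.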


Moreover, when presented with a new test statement such as "The covid vaccines do not stop infection from covid" (which was also pre-classified as 'misinformation'), the algorithm will also 'correctly learn' that this is 'misinformation', because it has already 'learnt' that the statement "The covid vaccines can cause serious injury and death" is misinformation, and that people who posted the latter statement - or people connected with them - also posted the former.
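This cascading effect - each newly labelled statement flags more users, who in turn taint further statements - can be sketched as a small label-propagation loop. Again, the data and the number of rounds are hypothetical, for illustration only:

```python
# Toy sketch of cascading label propagation. Hypothetical data only.

labels = {"laptop": "misinformation"}  # the single seed label

# Who shared which statement
shares = {
    "alice": {"laptop", "vaccine"},
    "bob":   {"vaccine", "no_infection"},
    "carol": {"no_infection"},
}

# Each round: flag users who shared anything labelled 'misinformation',
# then label every statement those users shared.
for _ in range(2):
    flagged = {u for u, posts in shares.items()
               if any(labels.get(p) == "misinformation" for p in posts)}
    for u in flagged:
        for p in shares[u]:
            labels.setdefault(p, "misinformation")

# Round 1 labels 'vaccine' (alice shared it alongside 'laptop');
# round 2 labels 'no_infection' (bob shared it alongside 'vaccine').
print(labels)
```

Note that a single seed label, never checked against reality, is enough to mark every connected statement as misinformation within two rounds.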


The AI process I have outlined for detecting 'misinformation' is also the way that 'world leading misinformation experts' set up their experiments to "profile" the "personality type" that is susceptible to misinformation. The same methods are now also used to profile and monitor people that the academic 'experts' claim are 'far right' or racist.


Hence, an enormous amount of research funding was (and still is) spent on developing 'clever' algorithms which simply censor the truth online or promote lies. Much of the funding for these AI algorithms is justified on the grounds that 'misinformation' is now one of the greatest threats to international security. Indeed, in January 2024 the World Economic Forum declared that "misinformation and disinformation" were the biggest short-term global risks. European Commission President Ursula von der Leyen also declared that "misinformation and disinformation" are greater threats to the global business community than war and climate change. In the UK alone, the Government has provided many hundreds of millions of pounds of funding to numerous university research labs working on misinformation. In March 2024 the Turing Institute alone (which has several dedicated teams working on this and closely related areas) was awarded £100 million of extra Government funding - it had already received some £700 million since its inception in 2015. Somewhat ironically, the UK HM Government 2023 National Risk Register includes as a chronic risk:


"artificial intelligence (AI). Advances in AI systems and their capabilities have a number of implications spanning chronic and acute risks; for example, it could cause an increase in harmful misinformation and disinformation"


Yet it continues to prioritise research funding in AI to combat this increased risk of 'harmful misinformation and disinformation'!


As Mike Benz has made clear in his recent work and interviews (backed up with detailed evidence), almost all of the funding for the universities and research institutes worldwide doing this kind of work, along with the 'fact checkers' that use it, comes from the US State Department, NATO and the British Foreign Office who, in the wake of the Brexit vote and Trump's election in 2016, were determined to stop the rise of 'populism' everywhere. It is this objective which has driven the mad AI race to censor the internet. Look at this video in which Mike Benz walks us through an event that took place in 2019; it was hosted by the Atlantic Council (a NATO front organisation) to train journalists from mainstream organisations all around the world on how to 'counter misinformation'. Note how they make it clear that, for them, 'misinformation' includes 'malinformation', which they define as information that is true but which might harm their own narrative. They explain how to muzzle such 'malinformation', especially from the (then) President Trump's social media posts in advance of the 2020 election. Despite claims that this did not happen (and indeed any such claims were themselves classified as misinformation), the journalists involved subsequently boasted very publicly not only that they did it, but that it prevented Trump's re-election in 2020.


