by Jeff Thompson
As if 2020 wasn’t crazy enough, now in 2021, we have an international organization using an AI to monitor what we say for COVID-related thoughtcrime.
What am I talking about?
The World Health Organization’s EARS program.
EARS stands for Early AI-supported Response with Social Listening. In short, it’s an artificial intelligence that combs the internet for any mention of COVID-19, analyzes discussions, and collects data on such for researchers to use. It’s everything Edward Snowden warned us about and more.
The stated intent of such a program, according to the WHO, is to monitor the “fake news” (ol’ Trump seems to have created quite the sticky phrase here) that surrounds anything related to COVID. The World Health Organization publicly refers to any information out there that stands in contrast to WHO advice as “the infodemic.”
The Status of the Infodemic
An infodemic is a new concept, but it’s one which the WHO has decided exists. The WHO defines an infodemic as follows:
An infodemic is too much information, including false or misleading information in digital and physical environments during a disease outbreak. [source]
I’m not sure why too much good information would be considered a problem. However, the WHO believes that even that is an issue according to the above statement.
The problems with this infodemic are two-fold, according to the WHO. For starters, people engage in non-WHO-approved actions. Secondly, “it leads to mistrust in health authorities and undermines the public health response.” Also, according to the WHO, social media and internet use are sources of much of the infodemic, and they “can…amplify harmful messages”. [source]
To combat this, the WHO believes they can take four broad steps.
For starters, Step 1: listen to what information the community is discussing. Next, Step 2: promote understanding.
These first two steps are rather lame. It’s the next two that are liable to pique your interest.
Step 3: Build Resilience to Misinformation. Just what exactly does that step entail, may I ask?
Step 4: Engage and Empower Communities to Take Positive Action. [source] This step is the one that should scare you. What on earth are communities being told to do here? Is this to encourage government officials to curb free speech? Are communities to take away the medical licenses of doctors who don’t march in line? What do they mean by ‘positive action’?
Let AI monitor everything.
One of the best ways to analyze massive amounts of data is through the use of artificial intelligence. It was with this understanding that the WHO launched EARS in January 2021. EARS has since tracked 41 different narratives online that pertain to COVID. EARS is currently monitoring 30 different countries, searching Facebook, Instagram, Twitter, blogs, forums, news articles, and the comments sections of websites as it attempts to harvest as much real-time data as possible.
This WHO AI-supported program monitors very specific keywords and hashtags to keep accurate tabs on what conversations are trending throughout cyberspace and, by extension, among the public.
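To get a feel for what keyword-and-hashtag "social listening" looks like in practice, here is a minimal toy sketch. The WHO has not published the actual term lists or matching logic EARS uses, so the tracked terms, sample posts, and functions below are entirely hypothetical illustrations of the general technique:

```python
# Toy sketch of keyword-based social listening: scan posts for
# tracked terms and tally which narratives are trending.
# All terms and posts here are made up for illustration only.
from collections import Counter

TRACKED_TERMS = {"#covid19", "vaccine", "lockdown", "mask mandate"}

def match_terms(post: str) -> set:
    """Return the tracked terms appearing in a post (case-insensitive)."""
    text = post.lower()
    return {term for term in TRACKED_TERMS if term in text}

def trending(posts) -> Counter:
    """Count how often each tracked term appears across a batch of posts."""
    counts = Counter()
    for post in posts:
        counts.update(match_terms(post))
    return counts

posts = [
    "New #COVID19 numbers out today",
    "Is the vaccine rollout on schedule?",
    "Another lockdown? The vaccine debate continues.",
]
print(trending(posts))
```

A real system would obviously be far more sophisticated (language models, sentiment analysis, multiple languages), but the basic shape — harvest posts, match them against a watch list, rank what's trending — is the same.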
“When people face delays getting the information they need, it can lead to a rise in speculation and conspiracy theories, and that too is an infodemic challenge that can result in harm to peoples’ health,” the WHO states in one recent article.
One strategy to fight back against the infodemic is to “saturate online conversations with high-quality health information that responds to the questions and concerns of the public.” [source]
So, are bots used to auto-generate posts across cyberspace, making it seem as if the vast majority of people believe X, when in fact, Y is what people believe?
Now, let’s think about the article I wrote about the Australian government being able to legally change people’s social media posts. EARS would be a significant benefit for those searching for posts to “correct.”
Should you be concerned?
Well, for starters, do you like the idea of an AI monitoring what you’re saying online? Do you like the idea of anyone monitoring what you’re saying online? And keep in mind that there’s a significant difference between having other people read what you say online and having somebody monitor what you say online.
In my mind, I associate monitoring with creating databases – databases used for future decision-making and policy creation.
In many ways, 2020 was the pouring of gasoline on a fire. 2021 is throwing some dynamite into the fire for good measure. Will programs such as EARS only continue the current trend of rightspeak well into the future? What do you think about an international organization determining whether what you have to say is deemed trustworthy or not? Are you okay with your local community taking “positive action” against what you enjoy listening to or saying? Sound off in the comments.
Jeff Thompson is an avid fisherman who likes to spend time sailing on his boat and reading while at sea.