
The Battle for Truth Online

  • Writer: Wade MacCallum, Founder
  • Sep 15
  • 4 min read




AI has been deciding what people see for a long time.


I’m not a writer by any stretch, so bear with me.

There are growing questions about using AI to advise and direct messaging to people. The fear is that this new technology will manipulate them into making decisions (purchases, votes) against their better judgment.

The truth is that people’s information and decisions have already been influenced by AI since the mid-90s. Everyone’s social feeds, purchases (Netflix, Amazon), and news and search results (Google) are controlled by AI systems, and have been for decades.

The problem is that the information a person sees is selected because it is the content they are most likely to interact with. This is very bad, for reasons I’ll explain in more depth further down.
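To make that mechanism concrete, here is a deliberately simplified sketch of engagement-based ranking. The names and scoring rule are hypothetical, not any platform’s actual algorithm; real systems use far richer signals, but the core idea is the same: content similar to what you already engaged with rises to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    stance: float  # -1.0 to +1.0, a toy measure of leaning on the topic

def rank_feed(posts, user_history):
    """Order posts by similarity to what the user already engaged with.

    The closer a post's stance is to the average stance of the user's
    past engagement, the higher it ranks in the feed.
    """
    if not user_history:
        return list(posts)
    avg_stance = sum(p.stance for p in user_history) / len(user_history)
    return sorted(posts, key=lambda p: abs(p.stance - avg_stance))

# A user who has only ever engaged with one side of a topic...
history = [Post("election", 0.8), Post("election", 0.9)]
candidates = [
    Post("election", -0.7),   # opposing view
    Post("election", 0.85),   # reinforcing view
    Post("election", 0.1),    # roughly neutral
]

feed = rank_feed(candidates, history)
# The reinforcing post ranks first; the opposing view sinks to the bottom.
```

Run repeatedly (each new engagement feeding back into `user_history`) and the feed narrows further each time, which is the feedback loop described below.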



Different Realities



First, here’s a simple test:


Search something on Google. Maybe something political and contentious—say, a topic about Trump (it doesn’t get more polarizing than him). Save your results.


Then activate a VPN (e.g., FreeVPN for Chrome), select a different country, and ideally switch your browser to Private or Incognito mode.


Repeat the search and scroll through the results.


If what you see is different from what you saw on your own device without a VPN, then your information is being carefully selected just for you.


And it is.


The biggest problem shows up on social media. Because of the rapid pace of information, social platforms are even more precise in sending us what they think we want to see.


Even worse, they send us information that reinforces our beliefs—rarely anything that challenges them.


This creates polarization, a drift from truth, and at its extreme, hatred for opposing opinions. People become frustrated that others can’t see “their side of things,” because to them it feels “so clear.”


The problem is: if someone gets the majority of their information online, then THEIR REALITY IS DIFFERENT from everyone else’s, except for those who share similar beliefs, of course.


We see this most clearly during elections. Divisiveness increases because AI systems are being fed a surplus of polarizing information. Polarizing content is their fuel, and it makes them even more active and powerful.


Bill C-18 & Segmentation



And then came Bill C-18 in Canada. This law was meant to support Canadian journalism by forcing big tech companies like Meta and Google to pay news outlets when their stories were shared. But instead of helping, it backfired. Meta (Facebook and Instagram) simply blocked Canadian news altogether. Google threatened to do the same until it negotiated a deal.


The result? Legitimate journalism disappeared from people’s feeds. And with credible news silenced, misinformation, clickbait, and half-truths rushed to fill the void. At the very moment when Canadians needed more access to facts, they got less.


So what does this mean for our solution—targeted advertising and AI-powered segmentation?

How we can help people make educated decisions


What we’re doing is simple: we’re countering the misinformation feed. We make sure our clients have a fair chance to share their side of the story, instead of being silenced or filtered out by the platforms.

Take healthcare for seniors as an example. If someone truly believes that a person or organization is going to harm seniors, their feed will only reinforce that belief. Whether it’s accurate or not doesn’t matter—because to them, it’s their truth as presented.

Unless we’re ensuring access to facts and alternative perspectives, that person may never see another side of the story.

At the end of the day, it’s still their choice what to believe. But at least now, they can make that choice with better information. And in the process, our clients get the chance to show that they’re human—that they care.

Our systems and tools don’t manipulate anyone. They provide the processing power to help our clients use their time and resources wisely, making sure their message reaches the people who need to hear it most. It’s about connecting people with information that actually matters to them.

(Btw: There are countless examples of how this applies beyond politics.)


Final thoughts


AI has been shaping what we see, buy, and believe for decades. The difference today is that the world is finally recognizing the scale of these systems. Left unchecked, they can polarize, misinform, and divide.

Our mission is to use AI differently. Not to manipulate, but to restore balance. Not to flood people with noise, but to ensure access to facts and alternative perspectives.

At the core of what we do are three simple principles:

  1. Transparency – People deserve to know why they’re seeing the information in front of them.

  2. Fairness – Everyone deserves a chance to share their side of the story, not just those amplified by machine algorithms (i.e., AI).

  3. Humanity – Technology should connect people, not try to divide them. AI is just the processing power; the real impact comes from how we choose to use it.

Ultimately, the choice of what to believe always belongs to the individual. Our role is to make sure that choice is informed, balanced, and fair. That’s not just good business—it’s an ethical responsibility.

If we do this right, we don’t just help our clients succeed. We can help make the world a little less divided. - Wade

 
 
 
