Relax Canada, AI Isn't Coming for Your Job

January 22, 2026

Shaquille Morgan

Relax Canada, AI isn’t coming for your job.

At least not any time soon.

I recognize how this comes off. More than skepticism, it reads as denial: rejecting the ‘inevitable’ outcomes forecast by innovation and AI experts and ignoring society’s recent technological progress.

But the rapid labour market transformation and job displacement these experts describe are exaggerated, and the prospect of AI performing highly educated jobs on its own is remote.

A seismic shift in AI over the last three years has captured our attention. Excitement from tech enthusiasts and casual users alike has fuelled exploration of AI’s capabilities and scalability. Overshadowing this excitement is the fear of mass unemployment driven by AI. Tech titans, including Elon Musk and Bill Gates, have pushed this narrative, painting a grim picture for future workers. Still, the reality of AI’s impact on Canada looks far different from the picture these people have created.

Indeed, the rapid progression of AI over the last few years is impressive. One might assume that these massive leaps would translate into changes in Canada’s GDP, productivity, and unemployment. The statistics tell a different story. GDP is stable, and productivity not only stayed essentially flat between Q3 2022 (just before ChatGPT was introduced) and Q3 2025, hovering around 104.4 points, it was significantly higher going into the pandemic, before modern AI (119 points). And while unemployment rose over that period, the increase is attributed to new workers joining the labour market, layoffs, and, more recently, tariffs.

Outside of these economic markers, AI-specific indicators paint an interesting picture: fewer and fewer jobs are AI-exposed.

A study from Statistics Canada using 2021 data found that 60% of Canadian jobs were AI-exposed, a rate similar to 2016’s. Of these, 51.9% were low-complementarity, or AI-competing, roles (where AI can perform the job), and 48.1% were high-complementarity, or AI-augmenting, roles (where AI can provide efficiency gains). Using 2024 data, a study by the Future Skills Centre found 57.4% of Canadian jobs were AI-exposed, with 49% of these being AI-competing and 51% AI-augmenting.

The 2.6-percentage-point decrease in total exposure is a positive signal of an increasingly AI-resilient job market, with a larger share of the workforce in human-centric roles: low-AI-exposure roles that AI currently cannot touch. But the shift within the exposed group is the key takeaway. There, AI is now augmenting more roles than it is competing with. In the long run, this could simply mean that as workers become more efficient with AI, they are rewarded with more work.
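For readers who want to check the arithmetic, the competing and augmenting percentages in the two studies are shares of the *exposed* group, not of all jobs. A quick sketch (the variable names are mine; the percentages come straight from the figures above) shows what that implies for the whole labour market:

```python
# Figures from the Statistics Canada (2021 data) and Future Skills
# Centre (2024 data) studies discussed above.
exposed_2021, competing_2021 = 0.60, 0.519   # share of jobs AI-exposed; AI-competing share of those
exposed_2024, competing_2024 = 0.574, 0.49

# Total exposure fell by 2.6 percentage points (60% down to 57.4%),
# not by 2.6 per cent.
drop_pp = (exposed_2021 - exposed_2024) * 100
print(f"exposure drop: {drop_pp:.1f} percentage points")

# AI-competing roles as a share of ALL Canadian jobs:
all_competing_2021 = exposed_2021 * competing_2021  # about 31.1%
all_competing_2024 = exposed_2024 * competing_2024  # about 28.1%
print(f"AI-competing share of all jobs: "
      f"{all_competing_2021:.1%} -> {all_competing_2024:.1%}")
```

On both measures the trend points the same way: the slice of the labour market where AI can actually perform the job shrank, not grew.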

It’s not just the data that tells this story. On the ground, we haven’t seen mass AI-driven workforce transformation. Given cybersecurity concerns, AI use is confined in scope, largely deployed to augment roles and promote cognitive offloading: research, analysis, and idea generation. If AI were coming for your job, we would already be seeing growing displacement given the massive strides. But we’re not.

There are numerous reasons workplaces keep AI on this short leash. For starters, the long-term memory of LLMs is limited. I’ve noticed that ChatGPT, for example, struggles to recall specific information from past conversations. Its working memory functions better when content is provided directly; otherwise, it answers from foggy recall and can produce inaccurate content. These limitations extend to LLM judgment, which may ignore flawed processes used to reach a goal so long as the result is what’s desired. That learning becomes reinforced and reproduced, leading to flawed outcomes. Without the consistent ability to recognize mistakes or apply emotional intelligence in judgment, AI cannot be self-sufficient and will make costly errors. For AI to be self-sufficient, it would likely have to attain superintelligence, something that currently seems aspirational, and that bodes well for human labourers.

The quality of AI-produced content on the internet also reveals AI’s limits. Graphite, an SEO firm, found that over 50% of online content is AI “slop”: low-quality, mass-produced content. The most troubling part about AI slop is that researchers have found it can lead to ‘model collapse’, a form of LLM failure in which a model trained on self-generated material, with its inevitable flaws, forgets the underlying human data it was trained on. Apple researchers likewise found that reasoning models collapse when solving highly complex problems. Taken together, this suggests AI cannot perform highly educated jobs, given the tact and nuance they require.

There are also policy constraints to consider. For AI to meaningfully scale across the workforce, more money, research, water, energy, microchips, and AI infrastructure are needed. As geopolitical conflict disrupts AI advances and political battles over devoting natural resources to AI become more pronounced, problems will mount. This is why researchers find that local social factors will shape AI adoption as much as technical feasibility: they can restrict or delay AI infrastructure and disrupt global tech partnerships.

And this isn’t far-fetched. In Nanaimo, B.C., for example, reports revealed that a 200,000-square-foot data centre being built could churn through 70,000 litres of potable water a day. Globally, a 2023 study estimated that data centres consumed around 140 billion litres of water for cooling, and consumption will rise as demand for AI and data centres grows. In Canada, local distaste for this resource strain is starting to spark resistance, given its impact on residents; the most immediate effects are reduced drinking water and noise pollution. As knowledge of and concern about data centres grow, we’re likely to see plans disrupted, much as in Indianapolis, Indiana, where a billion-dollar data centre was dropped after a months-long campaign against the project. Along these lines, jurisdictions in the U.S. and Europe are increasingly seeking to pass regulations that limit data centre water consumption or force transparency about how much is being used.

All of this makes for a long and tenuous journey. In that time, it’s more likely that the labour market will determine how AI can be used to support worker efficiency — ultimately adding AI competency to job descriptions.

Despite the industry’s grand predictions, the material impact and the data don’t align with those forecasts. Yes, AI will transform the labour market in the long term, but by how much? That remains to be seen. Given the geopolitical, policy, and resource concerns, I think it’s a long way off.
