AI may end some jobs. It will create others

Computer science researcher Vered Shwartz says she's taking AI hype with a grain of salt. A Tyee Q&A.
"There will definitely be some entry-level white-collar jobs lost in the short term, and we should prepare for it," says Vered Shwartz. "But I also think it will create new jobs in the long term."

In a time of AI wild enthusiasts and deep skeptics, Vered Shwartz puts herself somewhere in the middle.

The adoption of artificial intelligence models is being widely hyped as both job-destroying and liberating. Prime Minister Mark Carney is an enthusiast. Recent reports from consulting firms PricewaterhouseCoopers and McKinsey promise AI will transform the global workforce in the coming decades.

Shwartz is the Canadian Institute for Advanced Research AI chair at the Vector Institute, a Toronto-based AI research non-profit, and an assistant professor of computer science at the University of British Columbia.

She’s not convinced by some of the hype. While large language models are impressive, Shwartz said, she’s cautious about the risks of widespread adoption.

“We’re just starting to see how [AI] affects the job market,” Shwartz said. “The effect could be positive by saving workers time on repetitive or boring tasks, freeing them to practise their expertise — but it could also be negative, putting people out of jobs.”

Shwartz is the author of a book called Lost in Automatic Translation, about the widespread adoption of large language technologies like Siri, Alexa and ChatGPT. It’s scheduled to hit shelves this July.

The Tyee sat down with Shwartz for more insight into the technology and its capabilities. The conversation has been edited for length and clarity.

The Tyee: Would you consider yourself an AI skeptic? An optimist? Somewhere on — or outside — that spectrum?

Vered Shwartz: Let me focus my answer on large language models (LLMs), which I think is what most people mean when they say “AI.”

My main concern at the moment is the widespread adoption, reliance and trust in LLMs, despite a very serious technical limitation — the hallucination problem.

Large language models fabricate facts. I would like people to be more skeptical when they use LLMs. Without skepticism, the technology derails the quality of people’s work and introduces errors that are hard to find.

What is a large language model?

Large language models are a type of generative AI model that is popular right now. They are trained on vast amounts of text — almost all the text on the web — to predict the next word in a sentence, like the autocomplete on our phones.

Neural language models have been around for a while, but the larger models from recent years produce better text.

Modern LLMs like ChatGPT have also been trained to follow natural language instructions, which allows them to perform arbitrary text-based tasks based on the user prompt.

There are other types of AI that are not generative, like predictive AI — which can classify ultrasound images of tumours as benign or malignant, for example.

A Statistics Canada study in September found three-quarters of Canadian workers in sectors like finance, administration, insurance and business and IT held jobs that were more exposed to — and perhaps replaceable by — AI. What was your reaction to that study?

The findings are not surprising to me. Previous automation waves targeted manual labour, but the current generative AI wave is transforming jobs that include tasks large language models can do, such as writing and coding.

I prefer to frame AI as transforming industries rather than describing it as threatening industries or displacing workers.

White-collar jobs are already changing because many workers use AI to automate their tasks. This inevitably leads to employers looking to cut costs by replacing employees with AI. I think this is premature — LLMs still have technical limitations, they can’t perform all tasks and at the very least you would need a human expert to verify their outputs.

In the long term, I think these jobs that require higher education will change but most of them will not disappear.

What do you make of predictions like the one from Anthropic’s CEO, who says AI will cut half of all entry-level white-collar jobs within the next five years?

This prediction is much less subtle than the Statistics Canada study and for a good reason — Anthropic is trying to sell us a product. The CEO needs to convince us that the product is so powerful it’s going to put us out of our jobs. Of course, they would also say the only way to make yourself relevant in today’s job market is to use their product to enhance your skills as a worker.

I take these predictions with a grain of salt.

I think there will definitely be some entry-level white-collar jobs lost in the short term, and we should prepare for it. But I also think it will create new jobs in the long term.

What happens when companies try to replace entry-level jobs with this technology?

We have yet to see that impact. It’s possible that this experiment will not end well and companies will gradually increase the number of entry-level jobs again.

I think it’s a bit short-sighted to replace these jobs with AI at the moment, both because of the technical limitations — which may or may not be solved at some point in the near future — and because you need entry-level workers who can one day replace your mid-level employees.

What types of white-collar activities do you think these models will be good at automating?

Large language models are good at generating text, so people use them to help with writing and editing. They have also been exposed to a lot of code from open-source code-sharing platforms like GitHub and forums such as Stack Overflow, so they can generate basic code from natural language instructions.

Modern large language models can also search the web, so you can ask them complex questions and they will retrieve articles and synthesize a concise answer. That’s a useful ability for many white-collar tasks.

But since these models often get things wrong, it would be irresponsible to automate these tasks without at the very least having a human expert validate the outputs.

Unfortunately, it’s already happening, and there are countless examples. For example, last year a B.C. lawyer cited fake cases invented by ChatGPT.

You mention that in the long term certain jobs that are exposed to AI will require higher education and change, but not disappear. Do you have any predictions as to how?

I think jobs will change because workers will use AI to automate the more repetitive and basic tasks, and these tools will hopefully become more reliable with time.

My hope is that this will allow workers to focus on the more meaningful tasks that require their expertise.

Of course, it’s also possible that jobs will change in a less positive way. For example, employees may have to spend their time “supervising” the AI instead of doing the work they studied for and presumably enjoy.

To some extent, jobs are already changing because some employers expect higher productivity from their employees now that they have access to large language models.