Why the ‘godfather of AI’ says Industrial Revolution-style job changes loom

When asked about the risk artificial intelligence poses to humans, AI pioneer and Nobel Prize winner Geoffrey Hinton replies with a reference to a legendary Hollywood director and a grim joke.

“James Cameron recently said Terminator was too optimistic,” he said. “Because humans had a chance against superintelligence.”

His delivery is deadpan.

Hinton, often referred to as the “godfather of AI” for his trailblazing work on neural networks, doesn’t mince words about the seismic shift artificial intelligence has already begun to usher in, or about the risk he says it poses to humanity.

And he does it all with his trademark dry British-Canadian humour.

Hinton and fellow AI expert Jacob Steinhardt, who has flown in from Silicon Valley for a two-part lecture series in Toronto titled AI Rising: Risk vs. Reward, are gathering with about a dozen people downtown for an intimate dinner Monday after sharing the stage.

Hinton rarely grants interviews, so this is a unique opportunity to pick the 76-year-old Nobel laureate’s brain.

When asked about the disruption AI has already begun to inflict on the labour force, Hinton says it’s difficult to gauge the scale of the shift that is coming, but he likens it to the upheaval of the Industrial Revolution.

“Potentially, it’s very big. You make it so (that) average human intelligence isn’t worth much anymore because AI can do it,” he tells Global News.

“Journalists are in trouble,” he says with his signature deadpan delivery.


Video: Nobel Prize winner Geoffrey Hinton warns of dangers of AI

Though their messaging onstage seemed closely aligned, Hinton says he and machine learning expert Steinhardt “agree to disagree” over a spread of Italian dishes at a restaurant in Toronto’s Queen Street West neighbourhood.

Steinhardt, whose work experience includes a stint as a research scientist at OpenAI in 2019, is a self-described “worried optimist.” His outlook stands in stark contrast to Hinton’s proclamations, which may be tongue-in-cheek but are far from optimistic.

Hinton’s warnings about AI’s ascent always come back to the existential threat he says it presents. The only silver lining he offers is that much about AI’s trajectory remains unknown.

“It’s very hard to model what’s going to happen with AI by looking at what’s happened to other things. It’s very different,” Hinton says. “The internet wasn’t producing beings more intelligent than us.”

When asked about AI’s threat, Steinhardt likens artificial intelligence to “a large group of very smart people.” They are not smarter than humans collectively, for now, but they already outperform some humans at specific tasks, such as certain kinds of programming or playing chess.

In addition to the fact that no one can predict the outcomes generated by this group, “they behave very strangely and they have these weird capabilities,” Steinhardt says.

He points out that “they can copy themselves freely onto new computers” and multiply, much “like a virus.”


Video: Canada’s A.I. ‘godfather’ wins Nobel Prize in physics

Steinhardt says he’s very concerned about the types of cyberattacks that AI could soon unleash, from jurisdictions whose goals may not be aligned with those of western democracies.

“I don’t think we’re far off from the possibility of Russia taking down the power grid in L.A. or something,” he says.

Targets like critical infrastructure and even emergency services could be in the crosshairs between now and 2030, according to Steinhardt.

“Like fake 911 calls using voice cloning that’s very convincing,” he says. “People are already doing this on a small scale. I worry that a bunch of calls to 911 could disrupt the entire emergency response system.”

And he is, by far, the more optimistic of the two.

Hinton and Steinhardt have different views on AI safety as well.

Hinton’s view is that AI is safe, until it isn’t. And whether that turning point comes in years, decades or lifetimes doesn’t change what he views as a risky situation.

“The question is, can we ensure it never wants to take control? Because if it wants to take control, and it’s smarter than us, we’re screwed,” Hinton says.

Steinhardt’s view of the future is buoyed by Transluce, the non-profit AI research lab he launched in October out of the University of California, Berkeley. Its stated goal is “to build open, scalable technology to understand AI systems and steer them in the public interest.”


Video: Business Matters: Canada must translate AI research into profits, experts say

In essence, the idea is to use AI to monitor, improve and flag problems with other AI systems. When questioned about the prudence of letting artificial intelligence police itself, he admits the approach has its problems, but he’s convinced it’s the best available solution.

“I’m not worried about the AI systems we have today taking over. I’m worried about our ability to know what’s true. I’m worried about bad actors,” he says. “It should be more of a collaboration where AI is empowering humans.”

He remains a staunch proponent of a diverse cross-section of people coming together, transparently, to wield AI as the powerful new tool it is.

“Given the extraordinary consequences that AI has on society, what we do with AI needs to be a public conversation,” Steinhardt says.

When asked about the future of AI, Hinton brings it back to James Cameron.

“He basically said it’s worse than he thought in Terminator,” says Hinton, who then mentions Cameron’s unheeded warnings about the potential for the OceanGate Titan submersible to fail catastrophically, which it did in June 2023.

“He also said that carbon fibre submarine wasn’t any good. And he was right about that.”

A chilling statement, delivered deadpan by the godfather of AI.

