When Andrew Ng talks about AI, people listen — in classrooms, boardrooms and Silicon Valley.
The researcher-turned-educator-turned-investor has become an AI statesman of sorts. He co-founded Google Brain, which was later folded into Google DeepMind, the flagship division that now produces some of the world’s best AI systems, and served as chief scientist of the Chinese tech titan Baidu.
In today’s influencer-obsessed information landscape, Ng’s biggest claim to fame might be his status as a “Top Voice” on LinkedIn, an honor the platform gives to a few handpicked experts, where he has more than 2.3 million followers.
Armed with decades of AI experience, Ng says he remains clear-eyed about AI’s abilities. “The tricky thing about AI is that it is amazing and it is also highly limited,” Ng told NBC News in an interview on the sidelines of his AI Developers Conference in November. “And understanding that balance of how amazing and how limited it is, that’s difficult.”
Over the past few years, generative AI has attracted hundreds of billions of dollars in investment, as nearly every major tech company has pivoted toward the industry’s hottest topic. But in the last several months, many have questioned whether the surging investment has created a bubble now at risk of bursting due to persistent issues like hallucinations, AI’s involvement in mental health crises and increased regulatory scrutiny.
Ng is broadly bullish about AI’s upward trajectory, though he is quick to cast doubt on AI systems’ potential to broadly displace humans in the near future. He has repeatedly argued that artificial general intelligence (AGI), roughly defined as AI systems that can match human performance on all meaningful tasks, is a distant possibility — in contrast to other AI luminaries who envision AGI emerging in the next few years.
“I look at how complex the training recipes are and how manual AI training and development is today, and there’s no way this is going to take us all the way to AGI just by itself,” Ng said.
“When someone uses AI and the system knows some language, it took much more work to prepare the data, to train the AI, to learn that one set of things than is widely appreciated,” he added.
Ng also has stellar bona fides in the education world. In addition to teaching computer science at Stanford University, Ng founded Coursera — one of the world’s largest online learning platforms — and oversees one of the most popular AI-focused education platforms, DeepLearning.AI.
With over a decade of success in the AI-meets-education ecosystem, Ng adopts a Chef Gusteau approach to AI education, and to coding in particular, arguing that, given advances in coding tools, anybody and everybody should code.
“Some senior business leaders were recently advising others to not learn to code on the grounds that AI will automate coding,” Ng said. “We’ll look back on that as some of the worst career advice ever given. Because as coding becomes easier, as it has for decades, as technology has improved, more people should code, not fewer.”
Many experts have recently asserted that coding is the “epicenter of AI progress” and that AI’s shocking capabilities only become apparent when people use AI tools to code. Those developments have led some to theorize that traditional coding-only jobs will wither with the rise of AI, and early evidence backs up those claims.
“It’s true that I don’t want to write code by hand anymore. I want AI to do it for me. But as the barriers become lower and lower, more people should do it. For example, my best recruiters don’t screen resumes by hand. They write prompts or write code to screen resumes,” he said.
“People that use AI to write code will just be more productive, and I think have more fun than people that don’t. There will be a big societal shift towards people who code,” Ng added.
As AI systems become more powerful, Ng is aware that real downsides are emerging — but he thinks today’s risks pale in comparison to AI’s potential upside.
“I think for a lot of AI models, the benefit is so much greater than the harm,” he said.
“The death of any single person is absolutely tragic,” Ng added, referencing recent suicides that allegedly involved the use of AI. “At the same time, I am nervous about one or two anecdotes leading to stifling regulations. That means it doesn’t help save 10 lives, right? It’s a very difficult calculus for the number of people that are getting good mental health support from these systems.”
Instead of what he describes as suffocating regulation, Ng is a strong proponent of laws that demand transparency from leading AI companies, like the recently passed SB 53 in California and the RAISE Act in New York.
