
Engineering leaders need to use AI. These experts have some tips 

What can engineering leaders do to prepare their businesses to be AI-ready? Invest in data infrastructure, train up their workforce, and lean on subject matter experts, said these Northeastern researchers.

Abhishek Murthy discusses how business leaders can use AI in a panel discussion held in the Alumni Center on Wednesday, Oct. 15, 2025. Photo by Alyssa Stone/Northeastern University

Artificial intelligence — especially generative AI — isn’t all that perplexing when you look into it, explained Sam Scarpino, the AI+Life Sciences director at Northeastern University’s Institute for Experiential AI. 

All the AI tools we use today, from ChatGPT to Perplexity, are forms of machine learning — a subset of AI in which systems learn patterns from data rather than being explicitly programmed by humans.

“The reason I’m saying this,” Scarpino recently told a crowd of engineering professionals in the Alumni Center on Northeastern University’s Boston campus, “is because from a certain perspective, most of what we are doing with AI is not that mysterious.”
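As a concrete illustration of that point, here is a minimal supervised-learning sketch in Python. The synthetic dataset and model choice are illustrative assumptions, not anything discussed on the panel; the point is simply that the model's behavior comes from fitting data, not from hand-written rules.

```python
# Minimal sketch: the model "learns" from labeled examples rather than
# from hand-written rules. The data here is synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A toy labeled dataset (in practice: your organization's own data).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No rules are programmed in; the model fits its parameters to the data.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```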

Scarpino was one of three Northeastern panelists in a discussion designed to help engineering leaders demystify AI — from investing in the right data infrastructure and training up the workforce, to verifying the accuracy of AI systems’ output and dispelling common AI myths.

The panel kicked off a meeting of the American Society for Engineering Education’s Leadership Development Division, hosted by Northeastern University’s Gordon Institute of Engineering Leadership.

Other panelists included Auroop Ganguly, director of AI for Climate & Sustainability at the Institute for Experiential AI, and Abhishek Murthy, an adjunct faculty member in the Multidisciplinary Graduate Engineering Programs at Northeastern. 

So what can engineering leaders do to prepare their workforce to be AI-ready? 

Building and investing in the right data platform 

For one, it’s important to recognize that any AI system introduced into a company’s operations is only going to be as good as the data it is trained on, explained Scarpino. 

“There’s two ways to think about that,” he said. “One is how fit for purpose your data is for the problem that you are trying to solve. Second is what kind of data strategy and data platform and stack do you have in place at your organization. It starts and ends with good data.” 

Scarpino said it can be challenging to start here, since many less tech-savvy organizations may be reluctant to invest heavily in high-quality data platforms and infrastructure. It’s a challenge that can surface at every level — whether it’s a CEO trying to convince a board to go all in on building out data platforms, or that same CEO trying to convince a chief information officer.

“We’re always fighting an uphill battle when it comes to justifying the data,” he said. 

AI has the power to unlock a new level of human agency

Equally important as building up the infrastructure is letting workers know about the new level of productivity AI tools can provide, explained Murthy, who is also a senior principal machine learning and AI architect at Schneider Electric, a digital automation and energy management company. 

“I think of AI as a set of technologies that can increase agency across an organization,” he said. “Therefore, it helps us scale better — complete our work better.” 

But one harmful myth around AI tools is that they simply “work out of the box” with little setup or workflow changes. 

“AI does not work right out of the box,” he said, highlighting that countering this myth will require cross-organizational conversations — between AI subject matter experts and everyday workers. 

“The AI world speaks a language and the agency they are trying to bring on speaks another language,” he said. “Communication becomes critical in order to improve decision-making.” 

Trust, but verify

Of course, one of the problems with these AI technologies is that they frequently hallucinate, or share inaccurate information, explained Ganguly. As a climate researcher, Ganguly relies on AI modeling for weather forecasting and other critical information.

But just because these models aren’t perfect doesn’t mean we shouldn’t use them, he said. That’s where the human in the loop comes in. 

“We have to be extremely careful in how much we trust and when we trust, making sure that there are some humans somewhere always validating what the AI systems produce, and how these AI systems are used,” he said.  

He’s slightly modified the popular adage “trust, but verify” when working with these systems.

“Trust after verification, but verify continuously,” he said. 
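In engineering terms, that stance maps onto a human-in-the-loop gate around model output. The sketch below is one illustrative way to encode it in Python; the confidence threshold, audit rate, and function names are assumptions for the example, not a system any panelist described.

```python
# Illustrative human-in-the-loop gate: model output is never consumed
# directly. Low-confidence answers go to a human reviewer, and even
# accepted answers are sampled for ongoing audit ("verify continuously").
import random

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; tune per application
AUDIT_RATE = 0.05           # fraction of accepted output to spot-check

def run_model(query: str) -> tuple[str, float]:
    """Stand-in for a real model call; returns (answer, confidence)."""
    return f"answer to {query!r}", 0.8

def human_review(query: str, answer: str) -> str:
    """Stand-in for routing to a subject matter expert."""
    print(f"escalating {query!r} for expert review")
    return answer  # in a real system, the expert-corrected answer

def answer_with_oversight(query: str) -> str:
    answer, confidence = run_model(query)
    if confidence < CONFIDENCE_THRESHOLD:
        return human_review(query, answer)  # trust only after verification
    if random.random() < AUDIT_RATE:
        human_review(query, answer)         # continuous spot audits
    return answer

print(answer_with_oversight("projected storm surge for Boston Harbor"))
```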

That’s why subject matter experts with deep domain knowledge are so critical when dealing with these systems. They are the best judges of when these AI systems are failing, added Scarpino.

“What it ultimately comes down to is leveraging as much subject matter expertise as possible to design tests that are very hard for these AI systems to pass unless they have developed a more generalized model for these systems that we’re studying,” he said. “The need for subject matter expertise is not going away. If anything, we need more subject matter expertise to vet and validate these models.”
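One lightweight way to put that advice into practice is an expert-authored test suite that a model must pass before it ships. The sketch below assumes hypothetical weather-domain test cases and a made-up pass bar; real suites would be written by the subject matter experts Scarpino describes.

```python
# Sketch of an expert-authored evaluation suite: subject matter experts
# write cases that are hard to pass without genuine domain understanding.
# The cases and pass bar below are hypothetical stand-ins.

EXPERT_CASES = [
    # (prompt, checker) pairs authored by domain experts
    ("Does a Category 1 hurricane have higher wind speeds than a Category 3?",
     lambda ans: "no" in ans.lower()),
    ("Can relative humidity in a cloud exceed 100%?",
     lambda ans: "yes" in ans.lower()),  # supersaturation does occur
]

PASS_BAR = 1.0  # assumed: every expert case must pass before deployment

def evaluate(model_fn) -> float:
    passed = sum(check(model_fn(prompt)) for prompt, check in EXPERT_CASES)
    return passed / len(EXPERT_CASES)

# model_fn would wrap your real model; a trivial stub for illustration:
score = evaluate(lambda prompt: "yes")
print(f"expert-suite pass rate: {score:.0%} (deploy only if >= {PASS_BAR:.0%})")
```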