Researchers Develop Methods to Detect Hacking of 3D Printers

A team of researchers from Rutgers University-New Brunswick and the Georgia Institute of Technology has developed three strategies for determining if 3D printers have been hacked.

"They will be attractive targets because 3D-printed objects and parts are used in critical infrastructures around the world, and cyberattacks may cause failures in health care, transportation, robotics, aviation and space," said Saman Aliari Zonouz, coauthor of the study and associate professor in the electrical and computer engineering department at Rutgers University-New Brunswick, in a report on the research.

Hackers could conceivably insert into printed objects tiny defects that are too small to detect visually but that nevertheless compromise the integrity of the piece, with potentially disastrous consequences. Many organizations outsource their 3D printing needs rather than buying expensive printers themselves, which centralizes and, perhaps, exacerbates the threat.

"The results could be devastating and you would have no way of tracing where the problem came from," said Mehdi Javanmard, study coauthor and assistant professor in the electrical and computer engineering department at Rutgers-New Bruswick, in a prepared satement.

"While anti-hacking software is essential, it's never 100 percent safe against cyberattacks," the university research news organization Futurity recently reported. "So the researchers looked at the physical aspects of 3D printers."

The team eventually developed three ways to detect tampering, either during or after printing an object:

  • Compare the sound of the printer as it operates to a recording of a printer creating a print known to be correct (a rough sketch of this idea appears after the list);
  • Compare the physical movement of the printer's parts as it works to the movements of a printer as it produces a correct print; and
  • Mix gold nanorods with the printing filament, then use Raman spectroscopy and computed tomography to make sure the nanorods are dispersed throughout the object as expected.
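
To make the first, acoustic approach concrete, the sketch below compares the magnitude spectrogram of a monitored print's audio against a recording of a known-good print and flags frames whose spectra diverge. It is only an illustration of the general idea under simplifying assumptions; the frame size, the similarity threshold, and the compare_prints helper are hypothetical and are not drawn from the researchers' actual detection pipeline.

    # Illustrative sketch (not the researchers' pipeline): compare the
    # magnitude spectrogram of a monitored print's audio against a
    # reference recording of a known-good print and flag frames that
    # deviate beyond an assumed similarity threshold.
    import numpy as np

    def spectrogram(signal, frame_size=1024, hop=512):
        """Magnitude spectrogram via a simple windowed FFT."""
        window = np.hanning(frame_size)
        n_frames = 1 + (len(signal) - frame_size) // hop
        frames = np.stack([
            signal[i * hop : i * hop + frame_size] * window
            for i in range(n_frames)
        ])
        return np.abs(np.fft.rfft(frames, axis=1))

    def compare_prints(reference_audio, monitored_audio, threshold=0.85):
        """Return per-frame cosine similarity and a tamper flag."""
        ref = spectrogram(reference_audio)
        mon = spectrogram(monitored_audio)
        n = min(len(ref), len(mon))          # align to the shorter recording
        ref, mon = ref[:n], mon[:n]
        sims = np.sum(ref * mon, axis=1) / (
            np.linalg.norm(ref, axis=1) * np.linalg.norm(mon, axis=1) + 1e-12
        )
        return sims, bool(np.any(sims < threshold))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.linspace(0, 5, 5 * 44100)
        good = np.sin(2 * np.pi * 220 * t) + 0.05 * rng.standard_normal(t.size)
        # Simulate a tampered print: the motor tone shifts partway through.
        bad = good.copy()
        bad[t.size // 2 :] = np.sin(2 * np.pi * 260 * t[t.size // 2 :])
        sims, flagged = compare_prints(good, bad)
        print(f"min similarity: {sims.min():.3f}, tampering flagged: {flagged}")

In practice, the reference and monitored recordings would need to be time-aligned and captured under comparable acoustic conditions before such a frame-by-frame comparison would be meaningful.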

In the future, the group plans to find new ways to attack the printers so they can develop new defenses, transfer their methods to industry and refine the techniques they've already developed.

"These 3D printed components will be going into people, aircraft and critical infrastructure systems," said Raheem Beyah, a professor and associate chair in Georgia Tech's School of Electrical and Computer Engineering. "Malicious software installed in the printer or control computer could compromise the production process. We need to make sure that these components are produced to specification and not affected by malicious actors or unscrupulous producers."

The full study is available as a PDF here.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
