Artificial Intelligence - Engineering.com
https://www.engineering.com/category/technology/artificial-intelligence/

Register for Digital Transformation Week 2025
https://www.engineering.com/register-for-digital-transformation-week-2025/ (Tue, 09 Sep 2025)
Engineering.com’s September webinar series will focus on how to make the best strategic decisions during your digital transformation journey.

Digital transformation remains one of the hottest conversations in manufacturing in 2025. A few years ago, most companies approached digital transformation as a hardware issue. But those days are gone. Now the conversation is a strategic one, centered on data management and creating value from the data all the latest technology generates. The onrush of AI-based technologies only clouds the matter further.

This is why the editors at Engineering.com designed our Digital Transformation Week event—to help engineers unpack all the choices in front of them, and to help them do it at the speed and scale required to compete.

Join us for this series of lunch hour webinars to gain insights and ideas from people who have seen some best-in-class digital transformations take shape.

Registrations are open and spots are filling up fast. Here’s what we have planned for the week:

September 22: Building the Digital Thread Across the Product Lifecycle

12:00 PM Eastern Daylight Time

This webinar is the opening session for our inaugural Digital Transformation Week. We will address the real challenges of implementing digital transformation at any scale, focusing on when, why and how to leverage manufacturing data. We will discuss freeing data from its silos and using your bill of materials as a single source of truth. Finally, we will help you understand how data can fill in the gaps between design and manufacturing to create true end-to-end digital mastery.

September 23: Demystifying Digital Transformation: Scalable Strategies for Small & Mid-Sized Manufacturers

12:00 PM Eastern Daylight Time

Whether your organization is just beginning its digital journey or seeking to expand successful initiatives across multiple departments, understanding the unique challenges and opportunities faced by smaller enterprises is crucial. Tailored strategies, realistic resource planning, and clear objectives empower SMBs to move beyond theory and pilot phases, transforming digital ambitions into scalable reality. By examining proven frameworks and real-world case studies, this session will demystify the process and equip you with actionable insights designed for organizations of every size and level of digital maturity.

September 24: Scaling AI in Engineering: A Practical Blueprint for Companies of Every Size

12:00 PM Eastern Daylight Time

You can’t talk about digital transformation without covering artificial intelligence. Across industries, engineering leaders are experimenting with AI pilots — but many remain uncertain about how to move from experiments to production-scale adoption. The challenge is not primarily about what algorithms or tools to select but about creating the right blueprint: where to start, how to integrate with existing workflows, and how to scale in a way that engineers trust and the business can see immediate value. We will explore how companies are combining foundation models, predictive physics AI, agentic workflow automation, and open infrastructure into a stepped roadmap that works whether you are a small team seeking efficiency gains or a global enterprise aiming to digitally transform at scale.

September 25: How to Manage Expectations for Digital Transformation

12:00 PM Eastern Daylight Time

The digital transformation trend is going strong, and manufacturers of all sizes are exploring potentially game-changing investments for their companies. With so much promise and so much hype, it’s hard to know what is truly possible. Special guest Brian Zakrajsek, Smart Manufacturing Leader at Deloitte Consulting LLP, will discuss what digital transformation really is and what it looks like on the ground floor of a manufacturer trying to find its way. He will chat about some common unrealistic expectations, what a realistic expectation might be for each, and how to get there.

How small language models can advance digital transformation – part 1
https://www.engineering.com/how-small-language-models-can-advance-digital-transformation-part-1/ (Thu, 04 Sep 2025)
Comparing the characteristics of SLMs to LLMs for digital transformation projects.

Small language models (SLMs) can perform better than large language models (LLMs). This idea sounds counterintuitive because we often assume that more information technology capacity is better for search, data analytics and digital transformation. But many engineering applications of artificial intelligence (AI) simply don’t require an LLM.

SLMs offer numerous advantages for small, specialized AI applications, such as digital transformation. LLMs are more effective for large, general-purpose AI applications.

Let’s compare the characteristics of SLMs to LLMs for digital transformation projects.

SLM vs. LLM focus

SLMs are efficient, domain-specific AI models optimized for tasks that can run on smaller devices using limited resources. LLMs are powerful, general-purpose AI models that excel at complex tasks but require substantial computing resources.

SLMs are explicitly designed for small domain-specific tasks, such as digital transformation, which is critical to the work of engineers. SLMs offer high accuracy for niche AI applications. LLMs, on the other hand, are trained on enormous datasets to enable them to respond to a wide range of general-purpose tasks. LLMs sacrifice accuracy and efficiency to achieve general applicability.

Comparing language model characteristics

SLMs are quite different from LLMs, despite their similar names. The characteristics worth comparing include parameter count, training data (curated, proprietary, domain-specific data for SLMs versus vast public web data for LLMs), domain knowledge, contextual relevance, output accuracy, exposure to bias and hallucinations, and the computing resources required. Engineers can use these characteristics to determine which language model best fits their digital transformation project.

See the footnotes at the end of the story for a glossary of these terms.

Considering data privacy support

Data privacy is a significant issue for digital transformation projects because the internal data being transformed often contains intellectual property that underlies the company’s competitive advantage.

Support for data privacy depends on where the SLM or LLM is deployed. If the AI model is deployed on-premises, data privacy can be high if appropriate cybersecurity defenses are in place. If the SLM or LLM is deployed in a cloud data center, data privacy varies depending on the terms of the cloud service agreement. Some AI service vendors state that all end-user prompts will be used to train their AI model further. Other vendors commit to not using the provided data. If engineers are unsure that the vendor can meet its stated data privacy practices, or if those practices are unacceptable, then implementing the AI application on-premises is the only course of action.
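To make the on-premises option concrete, here is a minimal sketch using the Hugging Face transformers library and a small open-weight model (the model name is an example, not a recommendation). Because inference runs locally, prompts containing proprietary engineering data never leave the company network.

```python
# Minimal sketch: running a small language model entirely on-premises so that
# proprietary prompts never leave the company network. Assumes the `transformers`
# library is installed; the model name is an example open-weight SLM.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # example small model (~3.8B parameters)
    device_map="auto",                         # uses a local GPU if one is available
)

prompt = (
    "Classify this engineering change request as mechanical, electrical, or "
    "software, and justify the choice in one sentence:\n"
    "'Replace the M6x18 headlight mounting screw with M6x20 to meet the revised "
    "torque specification.'"
)

result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```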

SLMs offer many advantages for digital transformation projects because these projects use domain-specific data. LLMs are more effective for large, general-purpose AI applications that require vast data volumes. In the follow-on article, we’ll discuss the differences between SLMs and LLMs for construction and operation.

Footnotes: AI Glossary

  1. Domain knowledge is knowledge of a specific discipline, such as engineering or digital transformation, in contrast to general knowledge.
  2. Parameters are the variables that the AI model learns during its training process.
  3. Contextual relevance is the ability of the AI model to understand the broader context of the prompt text to which it is responding.
  4. Curated proprietary domain-specific data is typically data internal to the organization. Internal data is often uneven or poor in quality. It is often a constraint on the value that AI applications based on an SLM can achieve. Improving this data quality will improve the value of AI applications.
  5. Accurate output is essential to building confidence and trust in the AI model.
  6. The accuracy of LLM output is undermined by the contradictions, ambiguity, incompleteness and deliberately false statements found in public web data. LLM output is also stronger for English-language and Western contexts because that’s where most of the web data originates.
  7. Bias refers to incidents of biased AI model output caused by human biases that skew the training data. The bias leads to distorted outputs and potentially harmful outcomes.
  8. Hallucinations are false or misleading AI model outputs that are presented as factual. They can mislead or embarrass. They occur when an AI model has been trained with insufficient or erroneous data.
  9. Prompts are the text that end-users provide to AI models to interpret and generate the requested output.

Is Nvidia’s Jetson Thor the robot brain we’ve been waiting for?
https://www.engineering.com/is-nvidias-jetson-thor-the-robot-brain-weve-been-waiting-for/ (Wed, 03 Sep 2025)

Last month Nvidia launched its powerful new AI and robotics developer kit, the Nvidia Jetson AGX Thor. The chipmaker says it delivers supercomputer-level AI performance in a compact, power-efficient module that enables robots and machines to run advanced “physical AI” tasks—like perception, decision-making, and control—in real time, directly on the device without relying on the cloud.

It’s powered by the full-stack Nvidia Jetson software platform, which supports any popular AI framework and generative AI model. It is also fully compatible with Nvidia’s software stack from cloud to edge, including Nvidia Isaac for robotics simulation and development, Nvidia Metropolis for vision AI and Holoscan for real-time sensor processing.

Nvidia says it’s a big deal because it solves one of the most significant challenges in robotics: running multi-AI workflows to enable robots to have real-time, intelligent interactions with people and the physical world. Jetson Thor unlocks real-time inference, critical for highly performant physical AI applications spanning humanoid robotics, agriculture and surgical assistance.

Jetson AGX Thor delivers up to 2,070 FP4 TFLOPS of AI compute within a 40–130 W power envelope. Built on the Blackwell GPU architecture, it incorporates 2,560 CUDA cores and 96 fifth-generation Tensor Cores, enabled with technologies like Multi-Instance GPU. The system pairs a 14-core Arm Neoverse-V3AE CPU (1 MB L2 cache per core, 16 MB shared L3 cache) with 128 GB of LPDDR5X memory offering ~273 GB/s of bandwidth.

There’s a lot of hype around this particular piece of kit, but Jetson Thor isn’t the only game in town. Other players, like Intel’s Habana Gaudi, Qualcomm’s RB5 platform, and AMD/Xilinx adaptive SoCs, also target edge AI, robotics, and autonomous systems.

Here’s a comparison of what’s currently available and where each platform shines:

Edge AI robotics platform shootout

Nvidia Jetson AGX Thor

Specs & Strengths: Built on the Nvidia Blackwell GPU, Thor delivers up to 2,070 FP4 TFLOPS and includes 128 GB LPDDR5X memory—all within a 130 W envelope. That’s a 7.5 times AI compute leap and 3 times better efficiency compared to the previous Jetson Orin line. It is equipped with 2,560 CUDA cores, 96 Tensor cores, and a 14-core Arm Neoverse CPU, and features 1 TB of onboard NVMe storage, robust I/O including 100 GbE, and optimization for real-time robotics workloads with support for LLMs and generative physical AI.
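Those ratios can be sanity-checked with quick arithmetic. The baseline below is an assumption (the previous Jetson AGX Orin is commonly quoted at up to 275 sparse INT8 TOPS in a roughly 60 W envelope), and Thor’s 2,070 figure is FP4, so this mirrors Nvidia’s cross-precision framing rather than a like-for-like benchmark:

```python
# Rough sanity check of the claimed generational gains, Thor vs. Orin.
# Assumed baseline: Jetson AGX Orin at 275 sparse INT8 TOPS, ~60 W max envelope.
thor_tflops, thor_watts = 2070, 130
orin_tops, orin_watts = 275, 60

compute_leap = thor_tflops / orin_tops                                   # ~7.5x
efficiency_gain = (thor_tflops / thor_watts) / (orin_tops / orin_watts)  # ~3.5x

print(f"compute leap: {compute_leap:.1f}x")        # 7.5x, matching the claim
print(f"efficiency gain: {efficiency_gain:.1f}x")  # ~3.5x, near the claimed 3x
```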

Use Cases & Reception: Early pilots and evaluations are taking place at several companies, including Amazon Robotics, Boston Dynamics, Meta, and Caterpillar, with additional pilots at John Deere and OpenAI.

Qualcomm Robotics RB5 Platform

Specs & Strengths: Powered by the QRB5165 SoC, the RB5 combines an octa-core Kryo 585 CPU, an Adreno 650 GPU, and a Hexagon Tensor Accelerator delivering 15 TOPS, along with multiple DSPs and an advanced Spectra 480 ISP capable of handling up to seven concurrent cameras and 8K video. Connectivity is a standout—integrated 5G, Wi-Fi 6, and Bluetooth 5.1 for remote, low-latency operations. It is built for security, with a Secure Processing Unit, cryptographic support, secure boot, and FIPS certification.

Use Cases & Development Support: Ideal for use cases like SLAM, autonomy, and AI inferencing in robotics and drones. Supports Linux, Ubuntu, and ROS 2.0 with rich SDKs for vision, AI, and robotics development.

(Read more about the Qualcomm Robotics RB5 platform on The Robot Report.)

AMD Adaptive SoCs and FPGA Accelerators

Key Capabilities: AMD’s AI Engine ML (AIE-ML) architecture provides significantly higher TOPS per watt by optimizing for INT8 and bfloat16 workloads.

Innovation Highlight: Academic projects like EdgeLLM showcase CPU–FPGA architectures (using the AMD/Xilinx VCU128) outperforming GPUs in LLM tasks—achieving 1.7 times higher throughput and 7.4 times better energy efficiency than Nvidia’s A100.

Drawbacks: Powerful but requires specialized development and lacks an integrated robotics platform and ecosystem.

The Intel Habana Gaudi is more common in data centers for training and is less prevalent in embedded robotics due to form factor limitations.

Autodesk invests in AI CAM platform Toolpath
https://www.engineering.com/autodesk-invests-in-ai-cam-platform-toolpath/ (Tue, 26 Aug 2025)
The Fusion developer joins toolmaker Kennametal and CAM developer ModuleWorks as strategic investors in the cloud manufacturing tool.

Welcome to Engineering Paper and this week’s harvest of design and simulation software news.

Toolpath, the CAM startup using AI to automate toolpath creation, has a new investor, and who it is may surprise you (but it shouldn’t).

“Our new investor is Autodesk,” Al Whatmough, Toolpath CEO, told me. “This closes out all our seed funding.”

The companies didn’t disclose the amount of the investment, but Whatmough said it was part of the strategic investment round that closed in May 2025 and brought Toolpath’s funds to nearly $20 million. Toolmaker Kennametal led that round, which also included CAM kernel developer ModuleWorks.

“There was space in the round for a software leader,” Whatmough said. That Autodesk filled that space was only natural. For one thing, Whatmough was Autodesk’s director of product management for manufacturing until 2021. For another, Toolpath had an existing integration with Autodesk Fusion, which Whatmough praised as “the dominant cloud-based CAM system.”

“Nobody else is anywhere close,” he said. “Whether we had [Autodesk’s] investment or not, Fusion would still be the platform we put our automation on.”

With Autodesk’s investment, Toolpath can take the Fusion integration even further. Autodesk pointed out the potential in a blog post from Stephen Hooper, VP of cloud-based product design and manufacturing solutions.

“[Our investment] marks the start of a strategic partnership, enabling our two companies to integrate closed-loop, fully automated workflows into Autodesk Fusion. Looking ahead, combining Toolpath’s technology with Autodesk’s Manufacturing Data Model would enable Fusion users to automatically analyze manufacturability, plan machine strategies, and send complete programs to Fusion,” Hooper wrote.

A Toolpath toolpath imported into Autodesk Fusion. Note the Toolpath addon in the top right. (Image: Toolpath.)

When I last spoke with him, Whatmough told me that Toolpath planned to support other CAM systems beyond Fusion. I asked him if that’s still the case.

“Our focus is Fusion, just because there’s a core alignment in the current customers,” he said. “Fusion users, by definition, tend to be on the more innovative side. It’s the most modern CAM system. They don’t have a cloud aversion.”

That said, Whatmough emphasized that there’s nothing about Toolpath, either technically or contractually, that makes it exclusive to Fusion.

“When we think about CAM integration, it’s like a post processor for us,” he explained. “Today we output the instructions to grab onto the Fusion steering wheel. We’ll make an amazing experience there. Once we do that, we can open up to other CAM systems or directly to the machine.”

One more thing I learned from Whatmough: Toolpath is freely available for hobbyist use through an application process. If you try it out, let me know your thoughts at malba@wtwhmedia.com.

Jon on Onshape

This summer Onshape hit the memorable milestone of 200 updates. The cloud CAD platform is updated like clockwork every three weeks, so if you do the math you’ll find that time is moving a lot faster than it ought to for what I still think of as a fresh new CAD startup.

Thoughts of mortality aside, congratulations to Onshape.

To mark the occasion, I caught up with co-founder Jon Hirschtick to reflect on Onshape’s evolution and where it might go next. You can read all about it in Looking back on 200 releases of Onshape: Q&A with Jon Hirschtick.

Quick hits

  • Coreform has released the latest version of its hex meshing software, Coreform Cubit 2025.8. The update introduces a “sleeker, more modern look” and provides “more robustness, better quality elements, and improved capabilities,” according to Coreform.
  • Electromagnetic simulation software developer Nullspace announced $2.5 million in seed funding that it will use to “expand the engineering team, accelerate product development, and scale go-to-market efforts as we target growing demand across aerospace, defense, quantum computing, and AI-enabled hardware markets,” according to CEO Masha Petrova.
  • CoLab, the Canadian company developing an AI-powered design review tool, commissioned a survey of engineering leaders and discovered, in a stroke of fortuitous validation, that “100% of survey respondents said that AI would speed up design review times.”

One last link

Don’t sit down to read this one: Design World contributor Mark Jones with Finding inspiration in unlikely places.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

From chain-of-thought to agentic AI: the next inflection point
https://www.engineering.com/from-chain-of-thought-to-agentic-ai-the-next-inflection-point/ (Fri, 08 Aug 2025)
AI that thinks versus AI that acts. Autonomously. Systemically. At scale.

We have learned to prompt AI. We have trained it to explain its reasoning. And we have begun to integrate it as a co-pilot or ‘co-assistant’ in science, product design, engineering, manufacturing and beyond—to facilitate enterprise-wide decision-making.

But even as chain-of-thought (CoT) prompting reshaped how we engage with machines, it also exposed a clear limitation: AI still waits for us to tell it what to do.

Often in engineering, the hardest part is not finding the right answer—it is knowing what the right question is in the first place. This highlights a critical truth: even advanced AI tools depend on human curiosity, perspective, and framing.

CoT helps bridge that gap, but it is still a people-centered evolution. As AI begins to reason more like humans, it raises a deeper question: Can it also begin to ask the right questions, not just answer them? Can the machine help engineers make product development or manufacturing decisions?

As complexity escalates and time-to-decision contracts, reactive monolithic enterprise systems alone will no longer suffice. We are entering a new era—where AI stops assisting and starts orchestrating.

Welcome to the age of agentic AI.

Chain-of-thought: transformational but not autonomous

CoT reasoning is a breakthrough in human-AI collaboration. By enabling AI to verbalize intermediate steps and reveal transparent reasoning, CoT has reshaped AI from an opaque black box into a more interpretable partner. This evolution has bolstered trust, enabling domain experts to validate AI outputs with greater confidence. Across sectors such as engineering, R&D, and supply chain management, CoT is accelerating adoption by enhancing human cognition.

Yet CoT remains fundamentally reactive. It requires human prompts and structured queries to function, lacking autonomy or initiative. In environments rife with complexity—thousands of interdependent variables influencing product development, manufacturing, and supply chains—waiting for human direction slows response and restricts scale.

Consider a product design review with multiple engineering teams navigating dynamic regulatory demands, supplier constraints, and shifting market trends. CoT can clarify reasoning or suggest alternatives, but it cannot autonomously prioritize design changes or coordinate cross-functional decisions in real time.

CoT is just the visible tip of the iceberg. While it connects to the underlying data plumbing, the real shift lies in how AI can interrogate these relationships meaningfully—and potentially uncover new ones. That is where things start to tip from reasoning to autonomy, and the door opens to agentic AI.

From logic to autonomous action

Agentic AI represents a fundamental leap from the prompt-response paradigm. These systems initiate, prioritize, and adapt. They fuse reasoning with goal-driven autonomy—capable of contextual assessment, navigating uncertainty, and taking independent action.

Self-directed, proactive, and context-aware, agentic AI embodies a new class of intelligent software—no longer answering queries alone but orchestrating workflows, resolving issues, and closing loops across complex value chains.
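The contrast with prompt-response AI is easier to see in code. Here is a minimal, illustrative sketch (the data feeds and actions are simulated stubs, not any particular framework): rather than waiting for a prompt, the loop continuously observes, prioritizes deviations from its goals, acts on routine issues, and escalates consequential ones.

```python
# Minimal sketch of an agentic loop: it runs unprompted, prioritizes deviations
# from its goals, and initiates action itself. All feeds and actions are
# simulated stubs for illustration, not a real framework API.
import random
import time

GOALS = {"max_defect_rate": 0.02, "max_lead_time_days": 10}

def read_signals():
    # Stub for live sensor/ERP/supplier feeds.
    return {"defect_rate": random.uniform(0.0, 0.05),
            "lead_time_days": random.uniform(5.0, 15.0)}

def decide(signals):
    # Prioritize: pick the most urgent deviation from the goals, if any.
    if signals["defect_rate"] > GOALS["max_defect_rate"]:
        return "trigger_inspection"
    if signals["lead_time_days"] > GOALS["max_lead_time_days"]:
        return "propose_replan"
    return None

def act(action, signals):
    # Act autonomously on routine issues; escalate consequential ones to a human.
    if action == "trigger_inspection":
        print(f"[agent] defect rate {signals['defect_rate']:.3f} too high -> inspection queued")
    elif action == "propose_replan":
        print(f"[agent] lead time {signals['lead_time_days']:.1f} days -> replan sent for approval")

for _ in range(5):  # a deployed agent would loop indefinitely
    signals = read_signals()
    action = decide(signals)
    if action:
        act(action, signals)
    time.sleep(1)
```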

As Steven Bartlett noted in a recent DOAC podcast: “AI agents are the most disruptive technology of our lifetime.” They will not just change how we work—they will change what it means to work, reshaping roles, decisions, and entire industries in their wake.

The 2025 Trends in Artificial Intelligence report from Bond Capital highlights this transition, describing autonomous agents as evolving beyond manual interfaces into core enablers of digital workflows. The speed and scope of this transformation evoke the early days of the internet—only this time, the implications promise to be even more profound.

Redefining the digital thread

Agentic AI rewires the digital thread—from passive connectivity to proactive intelligence across the product lifecycle. No longer static, the thread becomes adaptive and autonomous. Industry applications are wide-ranging:

  • In quality, AI monitors sensor streams, predicts anomalies, and triggers resolution—preventing defects before they occur.
  • In configuration management, agents detect part-software-supplier conflicts and self-initiate change coordination.
  • In supply chain orchestration, disruptions prompt real-time replanning, compliance updates, and automated documentation.

The result: reduced cycle times, faster iteration, and proactive risk mitigation. Can the digital thread become a thinking, learning, acting ecosystem—bridging data, context, and decisions?

Nevertheless, the transformation is not just technical:

  • Trust and traceability: Autonomous decisions must be explainable, especially in regulated spaces.
  • Data readiness: Structured, accessible data is the backbone of agentic performance.
  • Integration: Agents must interface with PLM, ERP, digital twins, and legacy systems.
  • Leadership and workforce evolution: Engineers become orchestrators and interpreters. Leaders must foster new models of human-AI engagement.

This shift is from thinking better to acting faster, smarter, and more autonomously. Agentic AI will redraw the boundaries between systems, workflows, and organizational silos.

For those ready to lead, this is not just automation, it is acceleration. If digital transformation was a journey, this is the moment the wheels leave the ground.

Building trustworthy autonomy

The road ahead is not about AI replacing humans—but about shaping new hybrid ecosystems where software agents and people collaborate in real time.

  • We will see AI agents assigned persistent roles across product lifecycles—managing variants, orchestrating compliance, or continuously optimizing supply chains.
  • These agents will not just “assist” engineers. They will augment system performance, suggesting better configurations, reducing rework, and flagging design risks before they materialize.
  • Organizations will create AI observability frameworks—dashboards for tracking, auditing, and tuning the behavior of autonomous agents over time.

Going forward, we might not just review dashboards—we might be briefed by agents that curate insights, explain trade-offs, and propose resolutions. To succeed, the next wave of adoption will hinge on governance, skill development, and cultural readiness:

  • Governance that sets transparent bounds for agent behavior, and continuous purposeful adjustments.
  • Skills that blend domain expertise with human-AI fluency.
  • Cultures that treat agents not as black boxes—but as emerging teammates or human extensions.

Crucially, managing AI hallucination—where systems generate plausible but inaccurate outputs—alongside the rising entropy of increasingly complex autonomous interactions, will be essential to maintain trust, ensure auditable reasoning, and prevent system drift or unintended behaviors.

Ultimately, the goal is not to lose control—but to gain new control levers. Agentic AI will demand a rethink not just of tools—but of who decides, how, and when. The future is not man versus machine. It should be machine-empowered humanity—faster, more adaptive, and infinitely more scalable.

AI and robotics-powered microfactory rebuilds homes lost to the California wildfires
https://www.engineering.com/ai-and-robotics-powered-microfactory-rebuilds-homes-lost-to-the-california-wildfires/ (Tue, 05 Aug 2025)
This video shows a collaboration between ABB and Cosmic Buildings to build homes on-site using AI, digital twins and robotics.

ABB Robotics has partnered with construction technology company Cosmic Buildings to help rebuild areas devastated by the 2025 Southern Californian wildfires using AI-powered mobile robotic microfactories.

After the wildfires burned thousands of acres, destroying homes, infrastructure, and natural habitats, this initiative will deploy the microfactory in Pacific Palisades, California, to build modular structures onsite, offering a glimpse into the future of affordable housing construction.

The microfactory collaboration between ABB and Cosmic Buildings uses simulation, AI and robotics to build homes on-site. (Image: screen capture from YouTube video.)

Watch the video on YouTube.

“Together, Cosmic and ABB Robotics are rewriting the rules of construction and disaster recovery,” said Marc Segura, President of ABB Robotics Division. “By integrating our robots and digital twin technologies into Cosmic’s AI-powered mobile microfactory, we’re enabling real-time, precision automation ideal for remote and disaster-affected sites.”

These microfactories integrate ABB’s IRB 6710 robots and RobotStudio digital twin software with Cosmic’s Robotic Workstation Cell and AI-driven Building Information Model (BIM) – an end-to-end platform that handles design, permitting, procurement, robotic fabrication and assembly.

Housed within an on-site microfactory, these systems fabricate custom structural wall panels with millimeter precision just-in-time for assembly at the construction site.

Cosmic uses ABB’s RobotStudio with its AI BIM, allowing the entire build process to be simulated and optimized in a digital environment before deployment. Once on location, Cosmic’s AI and computer vision systems work with the robots, making real-time decisions, detecting issues, and ensuring consistent quality.

These homes are built with non-combustible materials, solar and battery backup systems, and water independence through greywater recycling and renewable water generation. Each home exceeds California’s wildfire and energy efficiency codes. By delivering a turnkey experience from permitting to final construction, Cosmic is redefining what’s possible in emergency recovery.

Cosmic says its mobile microfactory reduces construction time by up to 70% and lowers total building costs by approximately 30% compared to conventional methods. Homes can be delivered in just 12 weeks at $550–$700 per square foot, compared to Los Angeles’ typical $800–$1,000 range.

“Our mobile microfactory is fast enough for disaster recovery, efficient enough to drastically lower costs, and smart enough not to compromise on quality,” said Sasha Jokic, Founder and CEO of Cosmic Buildings. “By integrating robotic automation with AI reasoning and on-site deployment, Cosmic achieves construction speeds three times faster than traditional methods, completing projects in as little as three months.”

From software 3.0 to PLM that thinks
https://www.engineering.com/from-software-3-0-to-plm-that-thinks/ (Tue, 29 Jul 2025)
PLM is no longer just a system of record—it’s an ecosystem that learns with engineers to create “conversational” product innovation.

As Andrej Karpathy—former Director of AI at Tesla and a leading voice in applied deep learning—explains in his influential Software 3.0 talk, we are entering a new era in how software is created: not programmed line-by-line, but trained on data, shaped by prompts, and guided by intent.

This shift replaces traditional rule-based logic with inferred reasoning. Large Language Models (LLMs) no longer act as tools that execute commands—they behave more like collaborators that understand, interpret, and suggest. This is not just a software evolution—it’s a new operating paradigm for digital systems across industries.

This evolution challenges how we think about enterprise systems designed to support and enable product innovation—particularly PLM, which must now move beyond static data foundations and governance to embrace adaptive reasoning and continuous collaboration.

Legacy PLM: governance without understanding

PDM/PLM and similar systems have long played a foundational role in industrial digitalization. Built to manage complex product data, enforce compliance, and track design evolution, they act as structured systems of record. But while they govern well, they do not reason.

Most PLM platforms remain bound by rigid schemas and predefined workflows. They are transactional by design—built to secure approvals, ensure traceability, and document history. As such, PLM has often been seen as a brake pedal, not an accelerator, in the innovation process.

In today’s increasingly adaptive R&D and manufacturing environments, that model is no longer sufficient. Software 3.0 introduces a cognitive layer that can elevate PLM from reactive gatekeeping to proactive orchestration—but “only if we keep AI firmly on a leash” as Karpathy put it.

PLM that thinks

Imagine a PLM ecosystem that does not simply route change requests for approval—but asks why the change is needed, how it will impact downstream functions, and what the best alternatives might be.

This is the promise of LLM-powered PLM:

  • Conversational interfaces replace rigid forms. Engineers interact with the ecosystem through natural language, clarifying design intent and constraints.
  • Reasoning engines interpret the implications of product changes in real time—spanning design, sourcing, compliance, and sustainability.
  • Agentic capabilities emerge: AI can suggest design modifications, simulate risks, and even initiate cross-functional coordination.

PLM becomes an intelligent co-pilot—responding to prompts, adapting to context, and surfacing insight when and where it matters most. The shift is from enforcing compliance to guiding innovation—while maintaining strict guardrails to prevent runaway AI decisions.
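A sketch of what that interaction could look like is below, assuming an OpenAI-compatible chat endpoint; the model name, system prompt, and safety-critical list are illustrative stand-ins, not any vendor’s actual PLM integration. Note that the AI output stays advisory while routing remains rule-based:

```python
# Sketch of a "conversational change request" against an LLM-backed PLM assistant.
# Assumes an OpenAI-compatible endpoint (OPENAI_API_KEY in the environment);
# the model name and the safety-critical list are hypothetical examples.
from openai import OpenAI

client = OpenAI()

change_request = (
    "Change the headlight mounting screw from M6x18 to M6x20 on assembly "
    "A-1042 to meet the revised torque specification."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": (
            "You are a PLM assistant. For each proposed change, ask why it is "
            "needed, list likely downstream impacts (sourcing, compliance, "
            "manufacturing), and suggest alternatives. Flag anything needing "
            "human sign-off; never approve changes yourself."
        )},
        {"role": "user", "content": change_request},
    ],
)
print(response.choices[0].message.content)

# Guardrail: the AI's analysis is advisory; routing stays deterministic.
SAFETY_CRITICAL_ASSEMBLIES = {"A-1042"}  # hypothetical list
if any(a in change_request for a in SAFETY_CRITICAL_ASSEMBLIES):
    print("Routing to engineering review board for human approval.")
else:
    print("Routing to standard change workflow.")
```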

The cognitive thread

Software 3.0 does more than enable conversational PLM—it rewires how digital continuity is managed across the lifecycle.

Beyond the digital thread, we now see the rise of a cognitive thread: a persistent, adaptive logic that connects design intent, regulatory constraints, manufacturing realities, and in-market feedback.

  • Decisions are traced not just by timestamp, but by reasoning path.
  • Data is interpreted based on role, context, and business objective.
  • AI learns from past projects to anticipate outcomes, not just report on them.

This transforms PLM into a system of systems thinking—an orchestration layer where data, knowledge, and human expertise converge into continuous learning cycles. It reshapes how products are developed, iterated, and sustained—with AI kept in check through rigorous validation.

Preventing PLM hallucination and entropy

With intelligence comes risk. Reasoning systems can misinterpret context, hallucinate outputs, or apply flawed logic. In safety-critical or highly regulated sectors, this is not a theoretical concern—it is a business and ethical imperative.

We must now ask:

  • How do we validate AI-generated recommendations in engineering workflows?
  • How do we trace the logic behind autonomous decisions?
  • How do we ensure adaptive systems do not drift from controlled baselines?

As PDM/PLM/ERP/MES and other enterprise systems begin to think, new governance models must emerge—combining ethical AI frameworks with domain-specific validation processes. This is not just about technology. It is about trust, accountability, and responsible transformation.

Software 3.0 marks a turning point—not just for software developers, but for product innovators, engineers, and digital transformation leaders. It redefines what enterprise systems can be. In this new landscape, PLM is no longer the place where innovation is recorded after the fact. It becomes the place where innovation is shaped in real time—through intelligent dialogue, adaptive reasoning, and guided exploration—all while keeping AI safely on a leash.

Are we ready to collaborate with a PLM ecosystem that learns with products—but only within trusted boundaries? Because the next generation of product innovation will not be built on forms and workflows. It is very likely that it will be built on conversation, interpretation, and co-creation with validated AI assistance.

New MIT report reveals how manufacturers are really using AI
https://www.engineering.com/new-mit-report-reveals-how-manufacturers-are-really-using-ai/ (Fri, 18 Jul 2025)
Over the course of a year, Tata Consultancy Services and MIT Sloan Management Review studied AI’s strategic role in manufacturing.

(Image: TATA Consultancy Services Ltd.)

New research is showing us how AI is being deployed in the manufacturing sector, and the results are not exactly what you would expect. Tata Consultancy Services (TCS), a global IT consulting firm headquartered in Mumbai, working with Boston-based MIT Sloan Management Review (MIT SMR), says its research shows that AI’s role spans enterprise workflows, from automatically handling simple, repetitive decisions to improving the entire decision-making environment for company leadership.

“This shift is not just about improving processes—it is about empowering people to make better choices and building adaptive, future-ready manufacturing enterprises equipped to thrive in a changing world,” says Anupam Singhal, president of the manufacturing practice at TCS.

The study examines how global organizations are integrating predictive and generative AI to aid decision-making and gain a competitive edge, and it draws insights from experts and pioneers at manufacturers such as Cummins, Danaher, and Schneider Electric.

The report states AI is moving from a simple advisory role to more of a business architect, improving the quality of options available for decision-making rather than just optimizing processes.

This new paradigm is powered by intelligent choice architectures (ICAs). These are dynamic AI systems that combine generative and predictive AI capabilities to create, refine, and present optimal choices for human decision-makers. In manufacturing, ICAs equip leaders with better choices for driving measurable outputs and outcomes in performance, quality, and innovation.
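In outline, an ICA is a generate, score, and present pipeline. The sketch below uses simulated stubs for the generative and predictive components (hypothetical names, not from the report): the system curates a short list of high-value options for a human decision-maker instead of returning a single answer.

```python
# Sketch of an intelligent choice architecture (ICA): generative AI proposes
# options, a predictive model scores them, and the top few go to a human.
# Both components are simulated stubs for illustration.
import random

def propose_options(goal, n=8):
    # Stub for a generative model drafting candidate actions toward a goal.
    return [f"{goal}: candidate action {i}" for i in range(1, n + 1)]

def predict_value(option):
    # Stub for a predictive model estimating an option's business value.
    return random.uniform(0.0, 1.0)

def curate_choices(goal, top_n=3):
    """Create, refine, and present the best options rather than one answer."""
    scored = [(option, predict_value(option)) for option in propose_options(goal)]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

for option, score in curate_choices("Reduce powertrain time to market"):
    print(f"{score:.2f}  {option}")
```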

“ICAs flip the script. They do not just learn from decisions — they learn how to improve the environment in which decisions are made. That’s not analytics, that’s architecture,” said Michael Schrage, MIT Sloan Research Fellow and a co-author of the report.

The research highlights how ICAs address key manufacturing challenges in differing manufacturing environments.

Cummins is exploring how generative AI can simulate extreme scenarios in powertrain design, demonstrating how ICAs can improve resilience and reduce time to market by testing against exponentially more scenarios than human engineers could even conceive.

At Schneider Electric, generative and predictive AI models enhance the specificity and reliability of predictive maintenance interventions, reducing uncertainty about when and where to perform maintenance.

Lastly, Danaher is deploying ICAs to transform decision-making across its mergers and acquisitions, product strategy, and innovation roadmaps. This includes supply chain optimization, where advanced analytics can lead to substantial savings.

The study goes on to identify four key imperatives manufacturers should consider when looking to build and enable more intelligent decision environments with ICAs:

Identify, curate, and emphasize value-driving data

Perfect data is a myth. What matters is generating better choices with available data. Companies must prioritize the critical data that delivers the most business value, enabling “frugal data cultivation” and accelerating meaningful outcomes.

Design with economic clarity and business purpose

Every ICA initiative must have a clear business purpose and stated, desired outcome. Projects should deliver measurable results, not just chase technological speculation. This, as the authors put it, ensures the “juice is worth the squeeze.”

Orchestrate for intelligence

ICAs must coordinate humans, AI models, and automated workflows to maximize throughput and decision quality. This transforms siloed decisions into integrated intelligence. The report shows evidence of this using anecdotes of Danaher’s “massively better output” and Cummins’s transformation of its federal bid evaluation.

Establish a pervasive presence

ICAs must become part of the everyday flow of work. Cummins demonstrates how connecting design, production, and service functions through ICAs unlocks cross-functional insights and drives operational efficiency. ICAs that exist outside normal workflows fail to deliver sustained value.

“This isn’t AI as co-pilot. This is AI and humans working together as architects to redesign how people perceive, weigh, and act on choices,” said David Kiron, Editorial Director at MIT Sloan Management Review.

Read the full report free of charge on the TCS Insights page.

How AI agents can support design engineers
https://www.engineering.com/how-ai-agents-can-support-design-engineers/ (Mon, 14 Jul 2025)
Libraries, requirements and testing are just a few areas to get started with AI in engineering.

Design engineers number in the hundreds of thousands, if not millions, worldwide. That represents a massive pool of valuable knowledge—industry-specific processes, personal workflows, procedures and more—that could be leveraged through machine learning.

From my own experience as a design engineer, it was clear even then that parts of the job were repetitive and could be improved or automated. One traditional approach was to build up component libraries. However, these libraries were often specific to a plant or product line, and even within the same company, different teams had their own isolated systems.

Some experienced engineers had developed individual shortcuts or retained knowledge from repeated exposure to the design–rework–release cycle. While effective, this knowledge was often locked in the minds of a few senior engineers. If one of them left the company or chose not to share their methods, that expertise became difficult or impossible to scale across teams or locations.

This raises an important question: How can we reduce knowledge silos and make engineering know-how more accessible?

One answer is to use an AI agent—not to replace the engineer, but to assist them. An AI-powered digital assistant could speed up decision-making and help engineers understand the reasoning behind choices.

Take the automotive industry, for example. Consider the screw used to mount a headlight in a Ford Focus. It might be an M6x18mm screw—but why that specific part? The choice may involve testing data, torque specs, material considerations, weight limits, or economic factors. All of this information exists and could be made accessible to a machine-learning model to assist engineers during the design phase. If integrated with a CAD tool, the AI agent could suggest appropriate components based on context and past data.
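A modest first step toward that integration is a ranked lookup over a structured component library. The sketch below uses invented example parts and fields; a production version would query the company’s PDM/PLM data and testing history.

```python
# Sketch of a component-suggestion helper over a structured parts library.
# Parts, fields, and values are invented examples for illustration only.
from dataclasses import dataclass

@dataclass
class Fastener:
    part_no: str
    size: str
    max_torque_nm: float
    material: str
    test_pass_rate: float  # learned from historical test results

LIBRARY = [
    Fastener("FST-1001", "M6x18", 9.5, "steel 8.8", 0.98),
    Fastener("FST-1002", "M6x20", 9.5, "steel 8.8", 0.95),
    Fastener("FST-2001", "M8x16", 22.0, "steel 10.9", 0.99),
]

def suggest(size_prefix, required_torque_nm, top_n=3):
    """Return fasteners meeting the constraints, best test record first."""
    candidates = [f for f in LIBRARY
                  if f.size.startswith(size_prefix)
                  and f.max_torque_nm >= required_torque_nm]
    return sorted(candidates, key=lambda f: f.test_pass_rate, reverse=True)[:top_n]

# Example: the headlight-mount scenario from the text.
for f in suggest("M6", required_torque_nm=8.0):
    print(f.part_no, f.size, f"pass rate {f.test_pass_rate:.0%}")
```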

This concept can scale beyond automotive. With access to industry-specific libraries, an AI assistant could help engineers find relevant solutions quickly. It wouldn’t replace human insight but would reduce time spent on repetitive tasks—like browsing through component catalogs—and allow engineers to focus on creative problem-solving.

Companies often sit on vast data lakes of underused information. Training AI models on this data could improve efficiency and reduce costs. Consider a test engineer working in a lab: an AI agent could analyze past test results to flag potential points of failure in new assemblies, improving comparative analysis and cutting down on wasted manufacturing costs.

Other applications include:

• Suggesting library components to reduce design time.
• Recommending material thicknesses at the component level.
• Providing context to help junior engineers understand the “why” behind a design decision.

Ultimately, AI agents aren’t here to replace people. They’re tools that, when used wisely, can foster both personal and organizational growth. The future isn’t man versus machine. It’s about using these tools to create a partnership where the human factor—context, ethics, judgment—drives the machine to be more useful.

AI can sift, learn and adapt. But it’s the engineer who decides what matters.

AI governance—the unavoidable imperative of responsibility
https://www.engineering.com/ai-governance-the-unavoidable-imperative-of-responsibility/ (Tue, 08 Jul 2025)
Examining key pillars an organization should consider when developing AI governance policies.

In a recent CIMdata Leadership Webinar, my colleague Peter Bilello and I presented our thoughts on the important and emerging topic of Artificial Intelligence (AI) Governance. More specifically, we brought into focus a new term in the overheated discussions surrounding this technology, now entering general use and, inevitably, misuse. That term is “responsibility.”

For this discussion, responsibility means accepting that one will be held personally accountable for AI-related problems and outcomes—good or bad—while acting with that knowledge always in mind.

Janie Gurley, Data Governance Director, CIMdata Inc.

Every new digital technology presents opportunities for misuse, particularly in its early days when its capabilities are not fully understood and its reach is underestimated. AI, however, is unique, making its governance extra challenging, for three reasons:

• A huge proportion of AI users in product development are untrained, inexperienced, and lack the caution and self-discipline of engineers; engineers are the early users of nearly all other information technologies.
• With little or no oversight, AI users can reach into data without regard to accuracy, completeness, or even relevance. This causes many shortcomings, including AI’s “hallucinations.”
• AI has many poorly understood risks—a consequence of its power and depth—that many new AI users don’t understand.

While both AI and PLM are critical business strategies, they are hugely different. Today, PLM implementations have matured to the point where they incorporate ‘guardrails,’ mechanisms common in engineering and product development that keep organizational decisions in sync with goals and strategic objectives while holding down risks. AI often lacks such guardrails and is used in ways that solution providers cannot always anticipate.

And that’s where the AI governance challenges discussed in our recent webinar, AI Governance: Ensuring Responsible AI Development and Use, come in.

The scope of the AI problem

AI is not new; in various forms, it has been used for decades. What is new is its sudden widespread adoption, coinciding with the explosion of AI toolkits and AI-enhanced applications, solutions, systems, and platforms. A key problem is the poor quality of data fed into the Large Language Models (LLMs) that genAI (such as ChatGPT and others) uses.

During the webinar, one attendee asked if executives understand the value of data. Bilello candidly responded, “No. And they don’t understand the value of governance, either.” And why should they? Nearly all postings and articles about AI mention governance as an afterthought, if at all.

So, it is time to establish AI governance … and the task is far more than simply tracking down errors and identifying users who can be held accountable for them. CIMdata has learned from experience that even minor oversights and loopholes can undermine effective governance.

AI Governance is not just a technical issue, nor is it just a collection of policies on paper. Everyone using AI must be on the same page, so we laid out four elements in AI governance that must be understood and adopted:

Ethical AI, adhering to principles of fairness, transparency, and accountability.

AI Accountability, assigning responsibility for AI decisions and ensuring human oversight.

Human-in-the-Loop (HITL), the integration of human oversight into AI decision-making to ensure sound judgments, verifiable accountability, and authority to intercede and override when needed (a minimal sketch of this pattern follows below).

AI Compliance, aligning AI initiatives with legal requirements such as GDPR, CCPA, and the AI Act.
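For HITL in particular, the mechanics can be small. The following sketch is illustrative, with hypothetical names and thresholds: the AI recommends, a human keeps override authority over low-confidence outputs, and every recommendation and decision lands in an auditable log.

```python
# Minimal sketch of a human-in-the-loop (HITL) approval gate with an audit trail.
# Names, thresholds, and the log format are hypothetical examples.
import json
import time

AUDIT_LOG = "ai_decisions.log"

def record(entry):
    """Append every AI recommendation and human action to an auditable log."""
    entry["timestamp"] = time.time()
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")

def hitl_gate(recommendation, confidence, reviewer):
    """Low-confidence outputs require explicit human sign-off."""
    if confidence >= 0.95:
        record({"recommendation": recommendation, "confidence": confidence,
                "action": "auto-accepted"})
        return True
    answer = input(f"[{reviewer}] Approve '{recommendation}' "
                   f"(confidence {confidence:.0%})? [y/N] ")
    approved = answer.strip().lower() == "y"
    record({"recommendation": recommendation, "confidence": confidence,
            "reviewer": reviewer,
            "action": "human-approved" if approved else "human-rejected"})
    return approved

if hitl_gate("Reclassify supplier X as high risk", confidence=0.72, reviewer="analyst"):
    print("Decision applied.")
else:
    print("Decision overridden by the human reviewer.")
```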

Bilello noted, “Augmented intelligence—the use of AI technologies that extend and/or enhance human intelligence—always has a human in the loop to some extent and, despite appearances, AI is human-created.”

Next, we presented the key pillars of AI governance, namely:

• Transparency: making AI models explainable, clarifying how decisions are made, and making the results auditable.
• Fairness: proactively detecting and mitigating biases.
• Privacy and Security: protecting personal data, as well as the integrity of the model.
• Risk Management: continuous monitoring across the AI lifecycle.

The solution provider’s perspective

Now let’s consider this from the perspective of a solution provider, specifically the Hexagon Manufacturing Intelligence unit of Hexagon Metrology GmbH.

AI Governance “provides the guardrails for deploying production-ready AI solutions. It’s not just about complying with regulations—it’s about proving to our customers that we build safe, reliable systems,” according to Dr. René Cabos, Hexagon Senior Product Manager for AI.

The biggest challenge, according to Cabos, is “a lack of clear legal definitions of what is legally considered to be AI. Whether it’s a linear regression model or the now widely used Generative AI [genAI], we need traceability, explainability, and structured monitoring.”

Explainability lets users look inside AI algorithms and their underlying LLMs and renders decisions and outcomes visible, traceable, and comprehensible; explainability ensures that AI users and everyone who depends on their work can interpret and verify outcomes. This is vital for enhancing how AI users work and for establishing trust in AI; more on trust below.

Organizations are starting to make changes to generate future value from genAI, with large companies leading the way.

Industry data further supports our discussion on the necessity for robust AI governance, as seen in McKinsey & Company’s Global Survey on AI, titled The state of AI – How organizations are rewiring to capture value, published in March 2025.

The study by Alex Singla et al. found that “Organizations are beginning to create the structures and processes that lead to meaningful value from gen AI,” including putting senior leaders in critical roles overseeing AI governance, even though the technology is already in wide use.

The findings also show that organizations are working to mitigate a growing set of gen-AI-related risks. Overall, the use of AI—gen AI as well as analytical AI—continues to build momentum: more than three-quarters of respondents now say that their organizations use AI in at least one business function. The use of genAI in particular is rapidly increasing.

“Unfortunately, governance practices have not kept pace with this rewiring of work processes,” the McKinsey report noted. “This reinforces the critical need for structured, responsible AI governance. Concerns about bias, security breaches, and regulatory gaps are rising. This makes core governance principles like fairness and explainability non-negotiable.”

More recently, McKinsey observed that AI “implications are profound, especially Agentic AI. Agentic AI represents not just a new technology layer but also a new operating model,” Federico Burruti and four co-authors wrote in a June 4, 2025, report titled When can AI make good decisions? The rise of AI corporate citizens.

“And while the upside is massive, so is the risk. Without deliberate governance, transparency, and accountability, these systems could reinforce bias, obscure accountability, or trigger compliance failures,” the report says.

The McKinsey report points out that companies should “Treat AI agents as corporate citizens. That means more than building robust tech. It means rethinking how decisions are made from an end-to-end perspective. It means developing a new understanding of which decisions AI can make. And, most important, it means creating new management (and cost) structures to ensure that both AI and human agents thrive.”

In our webinar, we characterized this rewiring as a tipping point because the integration of AI into the product lifecycle is poised to dramatically reshape engineering and design practices. AI is expected to augment, not replace, human ingenuity in engineering and design; this means humans must assume the role of curator of content and decisions generated with the support of AI.

Why governance has lagged

With AI causing so much heartburn, one might assume that governance is well-established. But no, there are many challenges:

• The difficulty of validating AI model outputs when systems evolve from advisor-based recommendations to fully autonomous agents.
• The lack of rigorous model validation, ill-defined ownership of AI-generated intellectual property, and data privacy concerns.
• Evolving regulatory guidance, certification, and approval of all the automated processes being advanced by AI tools…coupled with regulatory uncertainty in a changing global landscape of compliance challenges and a poor understanding of legal restrictions.
• Bias, as shown in many unsettling case studies, and the impacts of biased AI systems on communities.
• The lack of transparency (and “explainability”) with which to challenge black-box AI models.
• Weak cybersecurity measures and iffy safety and security in the face of cyber threats and risks of adversarial attacks.
• Public confidence in AI-enabled systems, not just “trust” by users.
• Ethics and trust themes that reinforce ROI discussions.

Trust in AI is hindered by widespread skepticism, including fears of disinformation, instability, unknown unknowns, job losses, industry concentration, and regulatory conflicts/overreach.

James Markwalder, U.S. Federal Sales and Industry Manager at Prostep i.v.i.p., a product data governance association based in Germany, characterized AI development “as a runaway train—hundreds of models hatch every day—so policing the [AI] labs is a fool’s errand. In digital engineering, the smarter play is to govern use.”

AI’s fast evolution requires that we “set clear guardrails, mandate explainability and live monitoring, and anchor every decision to…values of safety, fairness, and accountability,” Markwalder added. “And if the urge to cut corners can be tamed, AI shifts from black-box risk to a trust engine that shields both ROI and reputation.”

AI is also driving a transformation in product development amid business compliance challenges, explained Dr. Henrik Weimer, Director of Digital Engineering at Airbus. In his presentation at CIMdata’s PLM Road Map & PDT North America in May 2025, Weimer spelled out four AI business compliance challenges:

Data Privacy, meaning the protection “of personal information collected, used, processed, and stored by AI systems,” which is a key issue “for ethical and responsible AI development and deployment.”

Intellectual Property, that is, “creations of the mind”; he listed “inventions, algorithms, data, patents and copyrights, trade secrets, data ownership, usage rights, and licensing agreements.”

Data Security, ensuring confidentiality, integrity, and availability, as well as protecting data in AI systems throughout the lifecycle.

Discrimination and Bias, addressing the unsettling fact that AI systems “can perpetuate and amplify biases present in the data on which they are trained,” leading to “unfair or discriminatory outcomes, disproportionately affecting certain groups or individuals.”

Add to these issues the environmental impact of AI’s tremendous power demands. In the April 2025 issue of the McKinsey Quarterly, the consulting firm calculated that “Data centers equipped to handle AI processing loads are projected to require $5.2 trillion in capital expenditures by 2030…” (The article is titled The cost of compute: A $7 trillion race to scale data centers.)

Establishing governance

So, how is governance created amid this chaos? In our webinar, we pointed out that the answer is a governance framework that:

• Establishes governance policies aligned with organizational goals, plus an AI ethics committee or oversight board.

• Develops and implements risk assessment methodologies for AI projects that monitor AI processes and results for transparency and fairness.

• Ensures continuous auditing and feedback loops for AI decision-making.

To show how this approach is effective, we offered case studies from Allied Irish Bank, IBM’s AI Ethics Governance framework, and Amazon’s AI recruiting tool (which was found to be biased against women).

Despite all these issues, AI governance across the lifecycle is cost-effective, and guidance was offered on measuring the ROI impact of responsible AI practices:

• Quantifying AI governance value in cost savings, risk reduction, and reputation management.
• Developing and implementing metrics for compliance adherence, bias reduction, and transparency.
• Justifying investment with business case examples and alignment with stakeholders’ priorities.
• Focusing continuous improvement efforts on the many ways in which AI governance drives innovation and operational efficiency.

These four points require establishing ownership and accountability through continuous monitoring and risk management, as well as prioritizing ethical design. Ethical design is the creation of products, systems, and services that prioritize benefits to society and the environment while minimizing the risks of harmful outcomes.

The meaning of ‘responsibility’ always seems obvious until one probes into it. Who is responsible? To whom? Responsible for what? Why? And when? Before the arrival of AI, the answers to these questions were usually self-evident. In AI, however, responsibility is unclear without comprehensive governance.

Also required is the implementation and fostering of a culture of responsible AI use through collaboration within the organization as well as with suppliers and field service. Effective collaboration, we pointed out, leads to diversity of expertise and cross-functional teams that strengthen accountability and reduce blind spots.

By broadening the responsibilities of AI users, collaboration adds foresight into potential problems and helps ensure practical, usable governance while building trust in AI processes and their outcomes. Governance succeeds when AI “becomes everyone’s responsibility.”

Our conclusion was summed up as: Govern Smart, Govern Early, and Govern Always.

In AI, human oversight is essential. In his concluding call to action, Bilello emphatically stated, “It’s not if we’re going to do this but when…and when is now.” Undoubtedly, professionals who proactively embrace AI and adapt to the changing landscape will be well-positioned to thrive in the years to come.

Peter Bilello, President and CEO, CIMdata and frequent Engineering.com contributor, contributed to this article.
