PLM/ERP - Engineering.com
https://www.engineering.com/category/technology/plm-erp/

Nvidia Omniverse coming to PTC Creo and Windchill
https://www.engineering.com/nvidia-omniverse-coming-to-ptc-creo-and-windchill/
Tue, 05 Aug 2025 15:34:07 +0000
Plus PTC pledged itself to the Alliance for OpenUSD, and more design and simulation software news.

The post Nvidia Omniverse coming to PTC Creo and Windchill appeared first on Engineering.com.

This is Engineering Paper, and here’s the latest design and simulation software news.

PTC has expanded its partnership with Nvidia. The Boston-based developer, which not long ago was rumored to be up for sale, says it will integrate Nvidia Omniverse technologies into Creo and Windchill.

“By connecting Windchill with Omniverse’s real-time, photorealistic simulation development platform, teams will be able to visualize and interact with the most current Creo design data in a shared, immersive environment,” reads PTC’s press release.

PTC has also joined the Alliance for OpenUSD (AOUSD), a group working to advance the Pixar-created OpenUSD file framework used in Nvidia Omniverse. Nvidia was one of the five founding members of the AOUSD alongside Pixar, Adobe, Apple, and Autodesk. In June, engineering software developer Tech Soft 3D also announced a collaboration with Nvidia and joined the AOUSD.

“By deepening our collaboration with Nvidia and joining the Alliance for OpenUSD, we’re giving our customers the ability to incorporate design and configuration data in a real-time, immersive simulation environment,” said Neil Barua, president and CEO of PTC, in the press release. “The integration of Omniverse technologies within Creo and Windchill will enable teams to accelerate development, improve product quality, and collaborate more effectively across the entire product lifecycle.”

Desktop Metal files for Chapter 11

The story of 3D printing company Desktop Metal has reached Chapter 11.

“Barely more than two years after Stratasys made a $1.8B bid for it and just a few weeks after Nano Dimension acquired it for a fraction of that price, Desktop Metal has filed for bankruptcy protection under Chapter 11 of the U.S. Bankruptcy Code,” wrote Engineering.com 3D printing editor Ian Wright in his coverage of the news.

“After much speculation about the fate of the beleaguered metal AM company… this looks like the end of what was once the darling of investors and 3D printing enthusiasts alike,” Ian wrote.

For more details, read the full article on Engineering.com: Desktop Metal files for Chapter 11.

ITC goes dark with IntelliCAD 14.0

The IntelliCAD Technology Consortium announced the release of IntelliCAD 14.0, the latest version of the member-funded CAD development platform.

IntelliCAD 14.0 introduces a dark mode, which in my opinion is an accessibility setting that belongs in every software package (I’m baffled by extremely popular applications that still lack the option—I’m looking at you, Google Docs).

“While dark is now the default, you can also choose from light or gray themes,” according to ITC’s video overview of IntelliCAD 14.0.

Screenshot of IntelliCAD 14.0. (Image: IntelliCAD Technology Consortium.)

The new release also adds faster performance for common functions including copy, break, move, and union, as well as detachable drawing windows, support for Autodesk Revit 2025 files, API enhancements, and more.

“IntelliCAD 14.0 reflects our commitment to listening to real-world feedback from our members and delivering the tools they need most,” said Shawn Lindsay, president of the IntelliCAD Technology Consortium, in the release announcement. “We remain focused on providing an open, dependable platform that developers can build on—and on offering a powerful alternative in the CAD software market.”

One last link

Engineering.com executive editor Jim Anderton’s latest episode of End of the Line discusses the rapidly changing technology of warfare: The war in Ukraine: The end of armor as we know it.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

From software 3.0 to PLM that thinks
https://www.engineering.com/from-software-3-0-to-plm-that-thinks/
Tue, 29 Jul 2025 20:51:41 +0000
PLM is no longer just a system of record—it’s an ecosystem that learns with engineers to create “conversational” product innovation.

The post From software 3.0 to PLM that thinks appeared first on Engineering.com.

As Andrej Karpathy—former Director of AI at Tesla and a leading voice in applied deep learning—explains in his influential Software 3.0 talk, we are entering a new era in how software is created: not programmed line-by-line, but trained on data, shaped by prompts, and guided by intent.

This shift replaces traditional rule-based logic with inferred reasoning. Large Language Models (LLMs) no longer act as tools that execute commands—they behave more like collaborators that understand, interpret, and suggest. This is not just a software evolution—it’s a new operating paradigm for digital systems across industries.

This evolution challenges how we think about enterprise systems designed to support and enable product innovation—particularly PLM, which must now move beyond static data foundations and governance to embrace adaptive reasoning and continuous collaboration.

Legacy PLM: governance without understanding

PDM/PLM and similar systems have long played a foundational role in industrial digitalization. Built to manage complex product data, enforce compliance, and track design evolution, they act as structured systems of record. But while they govern well, they do not reason.

Most PLM platforms remain bound by rigid schemas and predefined workflows. They are transactional by design—built to secure approvals, ensure traceability, and document history. As such, PLM has often been seen as a brake pedal, not an accelerator, in the innovation process.

In today’s increasingly adaptive R&D and manufacturing environments, that model is no longer sufficient. Software 3.0 introduces a cognitive layer that can elevate PLM from reactive gatekeeping to proactive orchestration—but “only if we keep AI firmly on a leash” as Karpathy put it.

PLM that thinks

Imagine a PLM ecosystem that does not simply route change requests for approval—but asks why the change is needed, how it will impact downstream functions, and what the best alternatives might be.

This is the promise of LLM-powered PLM:

  • Conversational interfaces replace rigid forms. Engineers interact with the ecosystem through natural language, clarifying design intent and constraints.
  • Reasoning engines interpret the implications of product changes in real time—spanning design, sourcing, compliance, and sustainability.
  • Agentic capabilities emerge: AI can suggest design modifications, simulate risks, and even initiate cross-functional coordination.
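As an illustration, the conversational pattern above can be pictured as a prompt builder that turns a structured change request into a question an LLM can reason about. This is a minimal, hypothetical sketch: the `ChangeRequest` fields and the prompt wording are assumptions for illustration, not any vendor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    """Hypothetical, minimal stand-in for a PLM change-request object."""
    part_id: str
    description: str
    affected_domains: list = field(default_factory=list)

def build_impact_prompt(cr: ChangeRequest) -> str:
    """Compose a natural-language prompt asking an LLM to reason about
    why a change is needed, its downstream impact, and alternatives."""
    domains = ", ".join(cr.affected_domains) or "design, sourcing, compliance, sustainability"
    return (
        f"An engineer proposes the following change to part {cr.part_id}: "
        f"{cr.description}. Explain why the change might be needed, assess "
        f"its impact on {domains}, and suggest the best alternatives."
    )

cr = ChangeRequest("PN-1042", "switch housing material from ABS to PC-ABS",
                   ["sourcing", "compliance"])
prompt = build_impact_prompt(cr)  # pass to the LLM of your choice
```

In practice the response would be grounded in live PLM data rather than free-form text, but the shift from rigid forms to intent expressed in language is the point.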

PLM becomes an intelligent co-pilot—responding to prompts, adapting to context, and surfacing insight when and where it matters most. The shift is from enforcing compliance to guiding innovation—while maintaining strict guardrails to prevent runaway AI decisions.

The cognitive thread

Software 3.0 does more than enable conversational PLM—it rewires how digital continuity is managed across the lifecycle.

Beyond the digital thread, we now see the rise of a cognitive thread: a persistent, adaptive logic that connects design intent, regulatory constraints, manufacturing realities, and in-market feedback.

  • Decisions are traced not just by timestamp, but by reasoning path.
  • Data is interpreted based on role, context, and business objective.
  • AI learns from past projects to anticipate outcomes, not just report on them.
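One way to picture tracing a decision by reasoning path rather than by timestamp alone is a decision record that stores the ordered steps behind a decision alongside role and objective context. The structure below is a sketch under assumed field names, not a description of any shipping PLM schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Hypothetical record pairing a decision with the path that led to it."""
    decision: str
    reasoning_path: list   # ordered steps that produced the decision
    context: dict          # role, business objective, etc.
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

rec = DecisionRecord(
    decision="approve material substitution",
    reasoning_path=[
        "supply risk flagged on current resin",
        "alternative meets flammability spec UL94 V-0",
        "unit cost delta within 2% threshold",
    ],
    context={"role": "sourcing lead", "objective": "supply continuity"},
)
```

The timestamp is still there, but it is the `reasoning_path` that makes the decision auditable and reusable by later projects.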

This transforms PLM into a system of systems thinking—an orchestration layer where data, knowledge, and human expertise converge into continuous learning cycles. It reshapes how products are developed, iterated, and sustained—with AI kept in check through rigorous validation.

Preventing PLM hallucination and entropy

With intelligence comes risk. Reasoning systems can misinterpret context, hallucinate outputs, or apply flawed logic. In safety-critical or highly regulated sectors, this is not a theoretical concern—it is a business and ethical imperative.

We must now ask:

  • How do we validate AI-generated recommendations in engineering workflows?
  • How do we trace the logic behind autonomous decisions?
  • How do we ensure adaptive systems do not drift from controlled baselines?
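The third question, drift from controlled baselines, can at least be bounded mechanically: compare every adaptive parameter against its approved baseline and flag deviations beyond a tolerance. A minimal sketch, with invented parameter names and an assumed 5% relative tolerance:

```python
def detect_drift(baseline: dict, current: dict, tolerance: float = 0.05) -> list:
    """Return parameters whose current value deviates from the approved
    baseline by more than the relative tolerance (or are missing)."""
    drifted = []
    for key, base in baseline.items():
        cur = current.get(key)
        if cur is None or abs(cur - base) > abs(base) * tolerance:
            drifted.append(key)
    return drifted

baseline = {"wall_thickness_mm": 2.0, "draft_angle_deg": 1.5}
proposed = {"wall_thickness_mm": 2.02, "draft_angle_deg": 1.8}
flags = detect_drift(baseline, proposed)  # draft angle drifted 20%
```

A real system would version the baseline and route flagged parameters back to a human reviewer rather than silently accepting or rejecting them.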

As PDM/PLM/ERP/MES and other enterprise systems begin to think, new governance models must emerge—combining ethical AI frameworks with domain-specific validation processes. This is not just about technology. It is about trust, accountability, and responsible transformation.

Software 3.0 marks a turning point—not just for software developers, but for product innovators, engineers, and digital transformation leaders. It redefines what enterprise systems can be. In this new landscape, PLM is no longer the place where innovation is recorded after the fact. It becomes the place where innovation is shaped in real time—through intelligent dialogue, adaptive reasoning, and guided exploration—all while keeping AI safely on a leash.

Are we ready to collaborate with a PLM ecosystem that learns with products—but only within trusted boundaries? Because the next generation of product innovation will not be built on forms and workflows. It is very likely that it will be built on conversation, interpretation, and co-creation with validated AI assistance.

Autodesk mulling PTC takeover to create industrial software juggernaut
https://www.engineering.com/autodesk-mulling-ptc-takeover-to-create-industrial-software-juggernaut/
Fri, 11 Jul 2025 19:07:32 +0000
The $20B bet could reshape the future of engineering software. We analyze the product mix, strategic fit and how it will affect engineers and end users.

The post Autodesk mulling PTC takeover to create industrial software juggernaut appeared first on Engineering.com.

Autodesk is reportedly considering the acquisition of PTC in what could be its largest-ever deal, rumored to be valued at more than $20 billion. Although it is still in early stages and may not materialize, the potential impact is already generating significant market and industry attention. Reports from Bloomberg, Reuters and others suggest the transaction could be structured as a mix of cash and stock, reflecting both the ambition and complexity of such a transformative move.

This is not just a transaction between two legacy software firms. It could represent a redefinition of the industrial software landscape: Autodesk, long focused on democratizing design via the cloud, meeting PTC, grounded in enterprise-scale digital transformation for manufacturers. The overlap is clear. The complementarity? Still to be proven.

Strategy, scale, and ambition

While both companies are respected in their domains, they differ significantly in size, culture, and strategic posture:

  • Autodesk reported more than $6.1 billion in FY2025 revenue (fiscal year ending January 2025), with a market cap of approximately $66.6 billion.
  • PTC reported $2.3 billion in FY2024 revenue (fiscal year ending September 2024), with a current market cap around $17 billion following the takeover speculation bump.

Autodesk is more than twice PTC’s size in revenue and has traditionally focused on AEC, creative design, and mid-market engineering. PTC, in contrast, is deeply rooted in industrial manufacturing, PLM, and IoT.

This is not a merger of equals. It reflects Autodesk’s strategic ambition to move deeper into the enterprise market. With PTC, Autodesk would gain credibility and capability in core enterprise workflows. This would mark a step change for Autodesk’s portfolio maturity—from cloud-native tools for SMBs to enterprise-scale digital thread and product lifecycle platforms.

Yet the companies have very different go-to-market approaches. Autodesk has built its SaaS business around high-volume channels, while PTC’s sales motion is enterprise direct. That contrast creates opportunity, but also serious integration risk.

Market reactions and community feedback

PTC shares surged over 17% on July 9 after Bloomberg reported Autodesk was exploring a bid. They fell 7.5% the next day. Autodesk’s stock declined nearly 8% as investors assessed the strategic rationale and integration risks. These market movements highlight the scale and sensitivity of such a transformative bet.

In professional forums and industry circles, the deal has sparked debate. Many experts have expressed skepticism about strategic alignment. They point out potential redundancy between core CAD offerings (Creo vs. Inventor/Fusion 360) and PLM solutions (Windchill vs. Fusion Manage). Others note Autodesk’s limited experience in large, complex integrations, and voice concerns about its ability to manage an enterprise-scale acquisition.

One clear thread: this would be a high-risk, high-reward move. Autodesk has never made a deal of this magnitude. It could unlock new verticals—but also strain its operating model and alienate parts of its existing base.

Analysts also speculate on regulatory hurdles. The CAD and PLM market is already concentrated. A deal of this scale may face antitrust scrutiny, particularly in the US and Europe. Financing would also be a stretch, and shareholders will expect a well-articulated synergy plan. The rumored price tag of about $20 billion raises the stakes further.

Product portfolio and strategic fit

Autodesk has invested heavily in Autodesk Platform Services (APS), with Fusion 360 acting as its design collaboration anchor. PTC’s portfolio is broader in manufacturing and enterprise engineering, with Windchill+, Arena (PLM), Onshape (cloud CAD), and ThingWorx/Kepware (IoT/edge connectivity).

While the combination would offer end-to-end coverage from SMB to enterprise, the breadth creates duplication. Customers may worry about future roadmap clarity. Will Autodesk continue Fusion Manage or prioritize Windchill+? Can Creo and Inventor coexist? And does Autodesk have a plan for ThingWorx and Kepware, which do not align with its core portfolio?

Most experts believe those IoT assets will be divested. That opens new opportunities for companies like Rockwell, Schneider Electric, or Emerson—firms more focused on industrial automation and edge connectivity. These decisions will send strong signals to the market about Autodesk’s long-term intent.

Beyond the technology, there is a broader question: is Autodesk acquiring products, a platform, and/or an extended customer base? The answer is likely all of the above. It will determine how much integration effort is required—and how much customer disruption it might cause.

Execution and leadership will define the outcome

The true test will be execution. Autodesk has evolved into a cloud-first player over the past decade, but it has little experience with large-scale enterprise integrations. PTC, though smaller, brings a strong industrial culture and a distinct go-to-market strategy that may not align with Autodesk’s creative, SMB-rooted DNA.

Cultural integration, pricing model alignment, and partner ecosystem rationalization will be complex. If poorly managed, these differences could erode customer trust and delay value realization.

Leadership will play a pivotal role. PTC’s new CEO, Neil Barua, took over in February 2024 from long-time chief Jim Heppelmann. Barua, formerly CEO of ServiceMax (acquired by PTC in 2022), brings a sharper focus on customer-driven innovation and return on investment. His strategic priorities—and openness to integration—could influence how the two companies align.

ThingWorx and Kepware, once central to PTC’s digital transformation narrative, now appear most vulnerable to divestment. Their fate may define Autodesk’s long-term industrial strategy. Rockwell Automation’s recent exit from its $1B stake in PTC in August 2023 further suggests shifting alliances and possible competitive realignments in the broader industrial software ecosystem.

This deal, if it proceeds, will not go unnoticed. Siemens, Dassault Systèmes, and other PLM leaders are likely already reassessing their positions. A successful integration would escalate the digital thread race. A failed one could reinforce the limits of M&A in an already saturated market.

In the end, the acquisition is just the beginning. The real transformation will be defined by what Autodesk chooses to keep, integrate or let go.

Editor’s update, July 14, 2025: In the days after this story was published, Autodesk declared in a regulatory filing that the deal is no longer on the table and that it will instead focus on other strategic priorities, as reported by Reuters.

Siemens Realize LIVE 2025: AI-powered digital transformation is the path forward
https://www.engineering.com/siemens-realize-live-2025-ai-powered-digital-transformation-is-the-path-forward/
Wed, 09 Jul 2025 20:15:56 +0000
Complexity is not a problem to solve, it’s an advantage to harness.

The post Siemens Realize LIVE 2025: AI-powered digital transformation is the path forward appeared first on Engineering.com.

At Siemens Realize LIVE Europe 2025, the message was clear: complexity is not a problem to solve—it’s an advantage to harness. AI, digital threads, and domain-specific PLM are no longer future concepts; they are converging into operational realities.

This year’s event illustrated how Siemens is doubling down on the strategy it has long articulated: enabling faster innovation by embedding intelligence, integration, and collaboration into the digital backbone of manufacturing and product development.

Record attendance at Siemens Realize LIVE 2025 in Amsterdam, with Jones discussing the key to mastering complexity; and of course, that includes the use of AI. (Image: Siemens.)

AI as a strategic accelerator

AI at Siemens has moved beyond pilots—now, the race is on for scale. As Bob Jones, Chief Revenue Officer and EVP of Global Sales and Customer Success, put it: “It’s not just about adopting AI—it’s about being the fastest to adopt it.” Speed matters.

Jones emphasized mastering complexity through AI. Siemens is embedding intelligence across the Xcelerator portfolio to boost speed, clarity, and confidence in decision-making:

  • Ask, find, act: Teamcenter Copilot and AI Chat allow users to query data in natural language, surfacing insights instantly.
  • Fix faster: RapidMiner spots quality issues and recommends improvements.
  • Make documents dynamic: AI extracts procedures from static PDFs to accelerate training and compliance.
  • Automate handoffs: Teamcenter, Rulestream, and Mendix streamline design-to-order workflows.

Joe Bohman, EVP of PLM Products, summed it up: Siemens is “training AI in the language of engineering and manufacturing.” This is not about generic automation—it is about embedding domain-specific intelligence aligned with physics, lifecycle context, and operational constraints.

Reinforcing that intent, Siemens appointed Vasi Philomin—former AWS VP of Generative AI—as EVP of Data and AI, reporting to CTO Peter Koerte. At Amazon, Philomin launched Amazon Bedrock and led foundation model development. His arrival signals Siemens’ commitment to scaling industrial AI as a foundational capability—not a feature—across the Xcelerator suite.

From static data to dynamic digital threads

A deeper shift is underway: from managing Bills of Materials to orchestrating Bills of Information. Is this just new language or real change? Either way, it reflects a move from static data capture to dynamic, role-based information delivery across the product lifecycle, enabling faster, more informed decisions at every stage.

Siemens is championing a PLM architecture that supports this shift, built around:

  • Secure, object-level data access tailored to specific roles and responsibilities.
  • Microservices and large language models (LLMs) delivering contextual guidance across engineering, manufacturing, and service domains.
  • A digital thread backbone connecting design, production, quality, and support in real time.
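Object-level, role-based access of the kind described in the first bullet can be pictured as a policy table mapping roles to the attributes they may read. The roles, attributes, and policy shape below are illustrative assumptions, not Teamcenter's actual access model:

```python
# Hypothetical policy: role -> attributes that role may read on a PLM object
ACCESS_POLICY = {
    "design_engineer": {"geometry", "material", "revision", "cost"},
    "supplier":        {"geometry", "revision"},
}

def filter_view(obj: dict, role: str) -> dict:
    """Return only the attributes the given role is allowed to see;
    unknown roles see nothing."""
    allowed = ACCESS_POLICY.get(role, set())
    return {k: v for k, v in obj.items() if k in allowed}

part = {"geometry": "bracket_v3.step", "material": "AlSi10Mg",
        "revision": "C", "cost": 14.20}
supplier_view = filter_view(part, "supplier")  # cost and material withheld
```

The same filter is what lets an LLM layer answer a supplier's question without ever seeing cost data: the guardrail sits below the conversational interface.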

As advertised, this approach goes well beyond traditional traceability. It unlocks the ability to deliver the right data, in the right format, to the right person—when and where it is needed. It transforms engineering knowledge from static documentation into living, operational intelligence.

For globally distributed and regulated industries, this kind of digital continuity is no longer optional—it is a minimum requirement. Engineering is being redefined not just by tools, but by how data is structured, shared, and transformed into actionable insights that drive innovation and execution at scale.

Rethinking CPG

Siemens is reimagining PLM for the CPG industry, extending Teamcenter beyond packaging to support end-to-end collaboration across R&D, regulatory, marketing, and supply chain. By integrating formulation and specification management—built on Opcenter RD&L—Siemens is positioning Teamcenter to compete with SAP PLM in process-heavy, compliance-driven sectors. The solution is promising but still maturing.

A recent partnership with FoodChain ID boosts this trajectory by embedding global regulatory intelligence into the digital thread, helping CPG companies design for compliance from the start.

Key focus areas include:

  • Formulation and specification support, bridging science-led R&D with enterprise PLM.
  • Cross-functional collaboration across R&D, regulatory, marketing, and sourcing in a shared digital workspace.
  • Recipe reuse across global sites, increasing agility and compliance—as demonstrated by Unilever.
  • Scenario modeling and digital twins, enabling design-for-supply-chain strategies.
  • Regulatory intelligence integration, to guide compliant product development from the outset.

While Siemens’ CPG capabilities are evolving, the strategy requires further clarity. The long-term goal is ambitious: to build a robust PLM backbone that accelerates product innovation while addressing regulatory compliance and supply chain complexity from day one.

Xcelerator-as-a-service and agent-driven automation

Building on this momentum, Siemens’ Xcelerator-as-a-Service approach follows a clear goal: keep things simple, flexible, and always up to date.

Key enablers include:

  • Lifecycle data management, with built-in traceability and change control.
  • Low-code tools, via Mendix, embedded across Teamcenter and Opcenter.
  • AI agents, reducing manual effort, streamlining workflows, and reinforcing governance.

The transition toward software-defined products is accelerating. Siemens is doubling down on:

  • SysML v2, enabling next-generation model-based systems engineering
  • Polarion, aligning software and hardware requirements in unified backlogs
  • Supplier frameworks, integrating BOMs and compliance for cross-domain coordination

This is more than a technical evolution—it is a strategic upgrade toward future-ready operations, where complexity is coordinated, traceable, and compatible by design.

Regulatory-ready digital twins and Industry 4.0 interoperability

Regulation is not lagging behind innovation anymore—it is driving it. The upcoming EU Digital Product Passport (DPP 4.0) marks a turning point. Siemens is preparing its customers to meet these mandates not as a constraint—but as a catalyst for trustworthy digital systems.

Their approach includes:

  • Asset Administration Shell (AAS): machine-readable digital twins that maintain continuity from design through operation.
  • OPC UA-based interoperability: enabling secure, standards-based data exchange across partners and platforms.
  • Embedded sustainability and compliance tracking: making ESG and traceability data a native part of the engineering model.

This is digital transformation built for permanence. With regulations requiring traceable, reusable digital records, AI can only accelerate what is built on the right data foundation.

Make no mistake. Complexity is not just managed; it is mined for advantage. The metaphor echoed at the event is apt: like bamboo, digital transformation takes time to root—but when it does, it grows fast and strong. For industrial leaders, the question is no longer why transform—but rather: How fast can intelligence be embedded into the product and value chain?

AI governance—the unavoidable imperative of responsibility
https://www.engineering.com/ai-governance-the-unavoidable-imperative-of-responsibility/
Tue, 08 Jul 2025 18:03:42 +0000
Examining key pillars an organization should consider when developing AI governance policies.

The post AI governance—the unavoidable imperative of responsibility appeared first on Engineering.com.

In a recent CIMdata Leadership Webinar, my colleague Peter Bilello and I presented our thoughts on the important and emerging topic of Artificial Intelligence (AI) Governance. More specifically, we brought into focus a new term in the overheated discussions surrounding this technology, now entering general use and, inevitably, misuse. That term is “responsibility.”

For this discussion, responsibility means accepting that one will be held personally accountable for AI-related problems and outcomes—good or bad—while acting with that knowledge always in mind.

Janie Gurley, Data Governance Director, CIMdata Inc.

Every new digital technology presents opportunities for misuse, particularly in its early days when its capabilities are not fully understood and its reach is underestimated. AI, however, is unique, making its governance extra challenging because of the following three reasons:

  • A huge proportion of AI users in product development are untrained, inexperienced, and lack the caution and self-discipline of engineers; engineers are the early users of nearly all other information technologies.  
  • With little or no oversight, AI users can reach into data without regard to accuracy, completeness, or even relevance. This causes many shortcomings, including AI’s “hallucinations.”
  • AI has many poorly understood risks—a consequence of its power and depth—that many new AI users don’t understand.

While both AI and PLM are critical business strategies, they are hugely different. Today, PLM implementations have matured to the point where they incorporate ‘guardrails,’ mechanisms common in engineering and product development that keep organizational decisions in sync with goals and strategic objectives while holding down risks. AI often lacks such guardrails and is used in ways that solution providers cannot always anticipate.

And that’s where the AI governance challenges discussed in our recent webinar, AI Governance: Ensuring Responsible AI Development and Use, come in.

The scope of the AI problem

AI is not new; in various forms, it has been used for decades. What is new is its sudden widespread adoption, coinciding with the explosion of AI toolkits and AI-enhanced applications, solutions, systems, and platforms. A key problem is the poor quality of data fed into the Large Language Models (LLMs) that genAI (such as ChatGPT and others) uses.

During the webinar, one attendee asked if executives understand the value of data. Bilello candidly responded, “No. And they don’t understand the value of governance, either.”  And why should they?  Nearly all postings and articles about AI mention governance as an afterthought, if at all.

So, it is time to establish AI governance … and the task is far more than simply tracking down errors and identifying users who can be held accountable for them. CIMdata has learned from experience that even minor oversights and loopholes can undermine effective governance.

AI Governance is not just a technical issue, nor is it just a collection of policies on paper. Everyone using AI must be on the same page, so we laid out four elements in AI governance that must be understood and adopted:

  • Ethical AI: adhering to principles of fairness, transparency, and accountability.
  • AI Accountability: assigning responsibility for AI decisions and ensuring human oversight.
  • Human-in-the-Loop (HITL): the integration of human oversight into AI decision-making to ensure sound judgments, verifiable accountability, and authority to intercede and override when needed.
  • AI Compliance: aligning AI initiatives with legal requirements such as GDPR, CCPA, and the AI Act.
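The human-in-the-loop element lends itself to a simple gate pattern: an AI recommendation is held until a human reviewer returns a verdict, and the resulting record keeps a human accountable either way. A hedged sketch, with function and field names invented for illustration:

```python
def hitl_gate(recommendation: dict, approve) -> dict:
    """Route an AI recommendation through a human reviewer; the human
    may accept, override, or reject before anything is committed."""
    verdict = approve(recommendation)  # the human judgment call
    return {**recommendation,
            "status": verdict,
            "accountable": "human reviewer"}

# Usage: an AI-generated change is held until a person signs off
rec = {"action": "relax tolerance on bore diameter", "source": "ai_agent"}
result = hitl_gate(rec, approve=lambda r: "rejected")
```

The `approve` callable is where a review UI, an escalation queue, or a sign-off workflow would plug in; the essential property is that nothing downstream executes without a human verdict attached.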

Bilello noted, “Augmented intelligence—the use of AI technologies that extend and/or enhance human intelligence—always has a human in the loop to some extent and, despite appearances, AI is human-created.”

Next, we presented the key pillars of AI governance, namely:

  • Transparency: making AI models explainable, clarifying how decisions are made, and making the results auditable.
  • Fairness: proactively detecting and mitigating biases.
  • Privacy and security: protecting personal data as well as the integrity of the model.
  • Risk management: continuous monitoring across the AI lifecycle.

The solution provider’s perspective

Now let’s consider this from the perspective of a solution provider, specifically the Hexagon Manufacturing Intelligence unit of Hexagon Metrology GmbH.

AI Governance “provides the guardrails for deploying production-ready AI solutions. It’s not just about complying with regulations—it’s about proving to our customers that we build safe, reliable systems,” according to Dr. René Cabos, Hexagon Senior Product Manager for AI.

The biggest challenge, according to Cabos, is “a lack of clear legal definitions of what is legally considered to be AI. Whether it’s a linear regression model or the now widely used Generative AI [genAI], we need traceability, explainability, and structured monitoring.”

Explainability lets users look inside AI algorithms and their underlying LLMs and renders decisions and outcomes visible, traceable, and comprehensible; explainability ensures that AI users and everyone who depends on their work can interpret and verify outcomes. This is vital for enhancing how AI users work and for establishing trust in AI; more on trust below.

Organizations are starting to make changes to generate future value from genAI, with large companies leading the way.

Industry data further supports our discussion on the necessity for robust AI governance, as seen in McKinsey & Company’s Global Survey on AI, titled The state of AI – How organizations are rewiring to capture value, published in March 2025.

The study by Alex Singla et al. found that “Organizations are beginning to create the structures and processes that lead to meaningful value from gen AI,” even though it is already in wide use—including putting senior leaders in critical roles overseeing AI governance.

The findings also show that organizations are working to mitigate a growing set of gen-AI-related risks. Overall, the use of AI—gen AI, as well as Analytical AI—continues to build momentum: more than three-quarters of respondents now say that their organizations use AI in at least one business function. The use of genAI in particular is rapidly increasing.

“Unfortunately, governance practices have not kept pace with this rewiring of work processes,” the McKinsey report noted. “This reinforces the critical need for structured, responsible AI governance. Concerns about bias, security breaches, and regulatory gaps are rising. This makes core governance principles like fairness and explainability non-negotiable.”

More recently, McKinsey observed that AI “implications are profound, especially Agentic AI. Agentic AI represents not just a new technology layer but also a new operating model,” Federico Burruti and four co-authors wrote in a June 4, 2025, report titled When can AI make good decisions? The rise of AI corporate citizens.

“And while the upside is massive, so is the risk. Without deliberate governance, transparency, and accountability, these systems could reinforce bias, obscure accountability, or trigger compliance failures,” the report says.

The McKinsey report points out that companies should “treat AI agents as corporate citizens.” It adds: “That means more than building robust tech. It means rethinking how decisions are made from an end-to-end perspective. It means developing a new understanding of which decisions AI can make. And, most important, it means creating new management (and cost) structures to ensure that both AI and human agents thrive.”

In our webinar, we characterized this rewiring as a tipping point because the integration of AI into the product lifecycle is poised to dramatically reshape engineering and design practices. AI is expected to augment, not replace, human ingenuity in engineering and design; this means humans must assume the role of curator of content and decisions generated with the support of AI.

Why governance has lagged

With AI causing so much heartburn, one might assume that governance is well-established. But no, there are many challenges:

  • The difficulty of validating AI model outputs when systems evolve from advisor-based recommendations to fully autonomous agents.
  • The lack of rigorous model validation, ill-defined ownership of AI-generated intellectual property, and data privacy concerns.
  • Evolving regulatory guidance, certification, and approval of all the automated processes being advanced by AI tools…coupled with regulatory uncertainty in a changing global landscape of compliance challenges and a poor understanding of legal restrictions.
  • Bias, as shown in many unsettling case studies, and the impacts of biased AI systems on communities.
  • The lack of transparency (and “explainability”), with which to challenge black-box AI models.
  • Weak cybersecurity measures and iffy safety and security in the face of cyber threats and risks of adversarial attacks.
  • Public confidence in AI-enabled systems, not just “trust” by users.
  • Ethics and trust themes that reinforce ROI discussions.

Trust in AI is hindered by widespread skepticism, including fears of disinformation, instability, unknown unknowns, job losses, industry concentration, and regulatory conflicts/overreach.

James Markwalder, U.S. Federal Sales and Industry Manager at Prostep i.v.i.p., a product data governance association based in Germany, characterized AI development “as a runaway train—hundreds of models hatch every day—so policing the [AI] labs is a fool’s errand. In digital engineering, the smarter play is to govern use.”

AI’s fast evolution requires that we “set clear guardrails, mandate explainability and live monitoring, and anchor every decision to…values of safety, fairness, and accountability,” Markwalder added. “And if the urge to cut corners can be tamed, AI shifts from black-box risk to a trust engine that shields both ROI and reputation.”

AI is also driving a transformation in product development amid business compliance challenges, as explained by Dr. Henrik Weimer, Director of Digital Engineering at Airbus. In his presentation at CIMdata’s PLM Road Map & PDT North America in May 2025, Weimer spelled out four AI business compliance challenges:

Data Privacy, meaning the protection “of personal information collected, used, processed, and stored by AI systems,” which is a key issue “for ethical and responsible AI development and deployment.”

Intellectual Property, that is, “creations of the mind;” he listed “inventions, algorithms, data, patents and copyrights, trade secrets, data ownership, usage rights, and licensing agreements.”

Data Security, ensuring confidentiality, integrity, and availability, as well as protecting data in AI systems throughout the lifecycle.

Discrimination and Bias, addressing the unsettling fact that AI systems “can perpetuate and amplify biases present in the data on which they are trained,” leading to “unfair or discriminatory outcomes, disproportionately affecting certain groups or individuals.”

Add to these issues the environmental impact of AI’s tremendous power demands. In the April 2025 issue of the McKinsey Quarterly, the consulting firm calculated that “Data centers equipped to handle AI processing loads are projected to require $5.2 trillion in capital expenditures by 2030…” (The article is titled The cost of compute: A $7 trillion race to scale data centers.)

Establishing governance

So, how is governance created amid this chaos? In our webinar, we pointed out that the answer is a governance framework that:

• Establishes governance policies aligned with organizational goals, plus an AI ethics committee or oversight board.

• Develops and implements risk assessment methodologies for AI projects that monitor AI processes and results for transparency and fairness.

• Ensures continuous auditing and feedback loops for AI decision-making.

To show how this approach is effective, we offered case studies from Allied Irish Bank, IBM’s AI Ethics Governance framework, and Amazon’s AI recruiting tool (which was found to be biased against women).

Despite all these issues, AI governance across the lifecycle is cost-effective, and guidance was offered on measuring the ROI impact of responsible AI practices:

  • Quantifying AI governance value in cost savings, risk reduction, and reputation
      management.
  • Developing and implementing metrics for compliance adherence, bias reduction, and transparency.
  • Justifying investment with business case examples and alignment with stakeholders’ priorities.
  • Focusing continuous improvement efforts on the many ways in which AI governance drives innovation and operational efficiency.

These four points require establishing ownership and accountability through continuous monitoring and risk management, as well as prioritizing ethical design. Ethical design is the creation of products, systems, and services that prioritize benefits to society and the environment while minimizing the risks of harmful outcomes.

The meaning of ‘responsibility’ always seems obvious until one probes into it. Who is responsible? To whom? Responsible for what? Why? And when? Before the arrival of AI, the answers to these questions were usually self-evident. In AI, however, responsibility is unclear without comprehensive governance.

Also required is the implementation and fostering of a culture of responsible AI use through collaboration within the organization as well as with suppliers and field service. Effective collaboration, we pointed out, leads to diversity of expertise and cross-functional teams that strengthen accountability and reduce blind spots.

By broadening the responsibilities of AI users, collaboration adds foresight into potential problems and helps ensure practical, usable governance while building trust in AI processes and their outcomes. Governance succeeds when AI “becomes everyone’s responsibility.”

Our conclusion was summed up as: Govern Smart, Govern Early, and Govern Always.

In AI, human oversight is essential. In his concluding call to action, Bilello emphatically stated, “It’s not if we’re going to do this but when…and when is now.” Undoubtedly, professionals who proactively embrace AI and adapt to the changing landscape will be well-positioned to thrive in the years to come.

Peter Bilello, President and CEO, CIMdata and frequent Engineering.com contributor, contributed to this article.

The post AI governance—the unavoidable imperative of responsibility appeared first on Engineering.com.

]]>
PTC adds supply chain intelligence to Arena PLM https://www.engineering.com/ptc-adds-supply-chain-intelligence-to-arena-plm/ Tue, 24 Jun 2025 14:53:06 +0000 https://www.engineering.com/?p=140859 The cloud native supply chain module syncs with Onshape to make a CAD-PDM-PLM hybrid for product development.

The post PTC adds supply chain intelligence to Arena PLM appeared first on Engineering.com.

]]>
PTC has released Supply Chain Intelligence (SCI), a new suite for its Arena product lifecycle management (PLM) and quality management system (QMS) solution.

Arena SCI continuously checks for emerging risks from evolving supply chain conditions, embedding real-time AI-driven component monitoring and risk mitigation insight directly into product development workflows. The goal is to manage component risks throughout the entire product lifecycle within an existing PLM environment.

Product development and introduction teams use Arena SCI to continuously monitor electronic components across bills of materials (BoMs) to identify emerging risks from changing supply chain conditions. Arena SCI then suggests alternative components based on technical compatibility to prevent sourcing interruptions before they impact production.
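Conceptually, this kind of BoM monitoring boils down to two steps: flag parts whose catalog data signals supply risk, then propose technically compatible substitutes. The Python sketch below is only a toy illustration of that idea — the part numbers, risk thresholds, and matching logic are all invented, not PTC’s or Accuris’ implementation.

```python
# Toy illustration of BoM risk monitoring: flag components whose
# lifecycle status or lead time signals supply risk, then propose
# technically compatible alternates. All data here is invented.

RISKY_STATUSES = {"EOL", "NRND"}  # end-of-life, not recommended for new designs

def flag_risks(bom, component_db):
    """Return BoM entries whose catalog data indicates supply risk."""
    flagged = []
    for part_no in bom:
        info = component_db[part_no]
        if info["status"] in RISKY_STATUSES or info["lead_time_weeks"] > 26:
            flagged.append(part_no)
    return flagged

def suggest_alternates(part_no, component_db):
    """Suggest active parts with the same footprint and electrical specs."""
    target = component_db[part_no]
    return [
        p for p, info in component_db.items()
        if p != part_no
        and info["footprint"] == target["footprint"]
        and info["specs"] == target["specs"]
        and info["status"] not in RISKY_STATUSES
    ]

component_db = {
    "CAP-100": {"status": "EOL", "lead_time_weeks": 4,
                "footprint": "0603", "specs": "10uF/25V"},
    "CAP-200": {"status": "Active", "lead_time_weeks": 6,
                "footprint": "0603", "specs": "10uF/25V"},
    "RES-300": {"status": "Active", "lead_time_weeks": 2,
                "footprint": "0402", "specs": "1k/1%"},
}

bom = ["CAP-100", "RES-300"]
print(flag_risks(bom, component_db))                # ['CAP-100']
print(suggest_alternates("CAP-100", component_db))  # ['CAP-200']
```

A production system would of course pull live component data and score risk along many more dimensions; the point here is only the shape of the check-then-substitute workflow.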

“By delivering supply chain intelligence directly where design decisions are made in a cloud-native environment, Arena Supply Chain Intelligence simplifies collaboration between design teams and suppliers and supports more proactive component sourcing decisions to help offset supply chain disruptions,” said David Katzman, General Manager of Arena and Onshape, PTC. Katzman said PTC’s investment in SCI adds a new dimension that prioritizes resiliency, and hinted at more AI-driven functionality in future Arena releases.

Arena SCI works by using electronic component data from information services provider Accuris to outline comprehensive electronic risk details and suggest alternative parts.

“Our teams face constant pressure to move faster, even as supply chain challenges become increasingly unpredictable. We are seeking ways to help us stay ahead by identifying risks early and avoiding costly last-minute changes, so we can keep projects on track and deliver on time. We see Arena SCI as an opportunity to help achieve this,” said Dan Freeman, Director of Hardware Engineering, Universal Audio.

Since its acquisition by PTC in 2021, Arena has expanded into new international markets and introduced over 16 product releases. It collaborates with PTC’s Onshape cloud-native computer-aided design (CAD) and product data management (PDM) platform, resulting in a cloud-native CAD-PDM-PLM offering that supports cross-functional product development.

The post PTC adds supply chain intelligence to Arena PLM appeared first on Engineering.com.

]]>
Duro reboots its PLM platform for AI https://www.engineering.com/duro-reboots-its-plm-platform-for-ai/ Tue, 17 Jun 2025 16:24:00 +0000 https://www.engineering.com/?p=140690 Duro Design is a ground-up revamp of Duro’s cloud PLM platform, and in other news, Onshape gets MBD.

The post Duro reboots its PLM platform for AI appeared first on Engineering.com.

]]>
You’re reading Engineering Paper, and here are the latest headlines from the world of design and simulation software.

Duro, the cloud-based PLM provider, has relaunched its platform as Duro Design.

Michael Corr, co-founder and CEO of Duro, told me that the change is more than just a new product name. “It’s really a new product… a revolution of what we’re doing, not just an incremental evolution,” he said.

Duro first launched its eponymous PLM platform in 2018, targeting startups and small-to-medium businesses looking for a quick and modern alternative to that jack-of-all-trades, Excel.

“We were very limited in functionality and very automated and opinionated, because we just helped customers implement industry best practices out of the box,” Corr said.

Since then, Corr said, the market for modern PLM tools has evolved. “The level of innovation that’s happening today is unprecedented,” he said, referring both to new hardware companies and the SaaS software startups catering to them. Duro’s customers wanted more capability, configurability and compatibility, and Corr saw that the platform could either adapt or harden into the same kind of stale PLM tool it had been built to disrupt.

“We recognized there was a unique small window to just completely revamp our platform and really meet what this market had evolved to be,” Corr said.

Duro Design is that revamp. Duro’s legacy PLM platform will be phased out and the company will help existing customers migrate to Duro Design.

So what’s the difference? A big part of it, as you might imagine, is AI. Corr describes Duro Design as “AI-native,” a phrase which I asked him to define (lest it come across as marketing fluff).

“Deep refactoring of our platform allowed us to leverage what was becoming the best practices for building AI-based tools,” Corr told me. “We changed our database structure, we changed our API structure, so that AI technologies, LLMs and even generative AI capabilities, were being built natively in the core of our platform, versus being a bolt-on after the fact.”

Screenshot of Duro Design. (Image: Duro.)

For example, Duro Design uses AI for natural language search, helping users more easily sort through heaps of design data. Users can also manage their PLM environment with AI by prompting changes to the underlying YAML configuration language (YAML ain’t markup language, if you’re a fan of backronyms). Duro Design also uses AI to analyze change orders and provide predictions and recommendations, according to Corr.

AI isn’t the only difference. Sandwich in a P and you get another tentpole of Duro Design: API.

“Following an API first approach, every single feature that we offer is exposed through the API,” Corr said, in contrast to the more limited API of the legacy platform. “[Users] can reconfigure their account as they wish. They could build their own integrations. They can even build their own front end web client if they wanted to.”
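To illustrate what an API-first design implies — every capability of the product reachable programmatically, so users can script their own integrations or even front ends — the snippet below sketches a tiny client that builds REST requests for hypothetical endpoints. The base URL, paths, and payloads are invented for illustration and are not Duro’s actual API.

```python
import json

# Minimal sketch of an API-first client: every operation reduces to an
# HTTP verb + path + JSON payload. All endpoints here are hypothetical.

class PLMClient:
    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip("/")
        self.headers = {"Authorization": f"Bearer {token}",
                        "Content-Type": "application/json"}

    def request(self, method, path, payload=None):
        """Build (not send) a description of an API call."""
        return {
            "method": method,
            "url": f"{self.base_url}{path}",
            "headers": self.headers,
            "body": json.dumps(payload) if payload is not None else None,
        }

client = PLMClient("https://api.example-plm.com/v1", "secret-token")

# Hypothetical calls: search components, then submit a change order.
search = client.request("GET", "/components?q=resistor")
change = client.request("POST", "/change-orders",
                        {"title": "Swap EOL capacitor", "parts": ["CAP-100"]})

print(search["url"])     # https://api.example-plm.com/v1/components?q=resistor
print(change["method"])  # POST
```

The design point is that the web client is just one consumer of the same API surface, so anything the UI can do, a customer script can do too.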

As far as integrations go, Duro offers plenty of its own with add-ins for Solidworks, NX (or should I say Designcenter), Altium 365, Onshape and more.

Speaking of Onshape…

Onshape gets MBD

PTC announced that its cloud CAD platform Onshape will soon be capable of model-based definition (MBD). The feature is in “an early visibility program with select customers,” according to the press release, “and is expected to be generally available in late 2025.”

What is MBD? There are many bickering definitions for this engineering acronym, but when it stands for model-based definition it refers to annotating a 3D model with manufacturing data such as materials, dimensions, tolerances and the like. It’s an alternative to the standard 2D drawings that everyone loves to hate (but that don’t seem to be going away anytime soon).
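To make the idea concrete, PMI can be thought of as structured annotations attached to model features rather than notes on a 2D sheet. The sketch below shows one plausible — and entirely hypothetical — way such annotations might be represented as data; real MBD formats such as STEP AP242 are far richer than this.

```python
# Hypothetical, simplified representation of PMI attached to a 3D model:
# each annotation references a model feature instead of a drawing view.

model = {
    "part": "bracket-v3",
    "material": "AL-6061-T6",
    "annotations": [
        {"feature": "hole_1", "type": "diameter",
         "nominal_mm": 8.0, "tolerance_mm": (-0.0, 0.05)},
        {"feature": "face_A", "type": "flatness",
         "tolerance_mm": 0.02},
        {"feature": "hole_1", "type": "position",
         "tolerance_mm": 0.1, "datums": ["A", "B"]},
    ],
}

def annotations_for(model, feature):
    """Collect all PMI attached to a given model feature."""
    return [a for a in model["annotations"] if a["feature"] == feature]

print([a["type"] for a in annotations_for(model, "hole_1")])
# ['diameter', 'position']
```

Because the tolerances live on the features themselves, downstream tools (CAM, CMM inspection, suppliers) can query them directly instead of interpreting a drawing — which is the practical appeal of MBD.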

MBD in Onshape. (Image: PTC.)

“Our new MBD capabilities remove the need to interpret 2D drawings by embedding PMI [product manufacturing information] directly into the 3D model,” David Katzman, PTC’s general manager of Onshape and Arena, said in the release. “And because Onshape is cloud-native, this information is instantly accessible to everyone who needs it, from any device and any location. It’s a major step forward in making MBD practical and scalable for real-world use.”

PTC is showing off Onshape’s MBD this week at the Paris Air Show with their customer Aura Aero (June 16 – 19 2025, Zone B4). Check it out if you’re in town (but you might want to stay away from the Louvre).

Design and Simulation Week 2025

If you’re not already counting down the days to Engineering.com’s annual Design and Simulation week, here’s your 27-day warning.

Running from July 14 – 18, 2025, this series of expert webinars will explore the top trends in engineering software from some of the leading voices in the industry (and me). You’ll learn about AI, automation, multiphysics and how to make the most of modern tools.

Register for Design and Simulation Week now and start counting.

One last link

DiffusionRenderer is a pretty cool new AI-based rendering tool from Nvidia. It takes a 2D video and strips out the geometry and material info in order to plug in different lighting conditions, like changing a scene from day to night. In addition to its creative applications, Nvidia says it’ll help generate synthetic data for robotics training and autonomous vehicle development.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

The post Duro reboots its PLM platform for AI appeared first on Engineering.com.

]]>
In the rush to digital transformation, it might be time for a rethink https://www.engineering.com/in-the-rush-to-digital-transformation-it-might-be-time-for-a-rethink/ Tue, 03 Jun 2025 15:03:32 +0000 https://www.engineering.com/?p=140223 One of the main themes from the PLM Road Map and PDT North America event was just how much we still have to learn about going digital.

The post In the rush to digital transformation, it might be time for a rethink appeared first on Engineering.com.

]]>
In the breakneck pace of digital transformation, is comprehension being left behind? Do we need a rethink? No one at PLM Road Map and PDT North America, a collaboration with BAE Systems’ Eurostep organization—a leading gathering of product lifecycle management (PLM) professionals—said that, at least not in so many words, but presentations by one user after another raised the issue.

In my opening presentation, I confronted these issues by positioning PLM as a strategic business approach, thereby joining it to digital transformation, which has been CIMdata’s focus for more than four decades. And in the conference’s thought leadership vignettes, multiple PLM solution providers stressed connectivity and new tools to aid understanding and comprehension; in these vignettes, many supported my positioning of PLM.

The issues of comprehension were presented to conference attendees from several points of view. Many presenters delved into data and information quality—accuracy, completeness, structure, ownership, possible corruption, its exploding volume, and the steady growth of regulation.

Some numbers that made many attendees uncomfortable:

• There are hundreds of engineering software tools and new ones appear every week. Every engineering organization uses dozens of tools, systems, solutions, “apps,” and platforms; their constant updates are often disruptive to users

• About 800 standards apply to engineering information and its connections to the rest of the enterprise, said Kenneth Swope, The Boeing Co.’s Senior Manager for Enterprise Interoperability Standards and Supply Chain Collaboration

• 30 terabytes of data are generated in CAD and manufacturing for each of the hundreds of engines produced by Rolls-Royce PLC every year, reported Christopher Hinds, Head of Enterprise Architecture. Some output files from CFD analyses exceed 650 GB per part, he added.

Speakers also discussed how digital transformation is revealing the shortfalls in comprehension of data and information. “If we can’t agree on what data is, we can’t use it,” observed Swope. These shortfalls are caused by accelerated product development, shorter product lifecycles, and an explosion of product modifications and differentiations thanks to the software now embedded in every product.

A graphic construction of the comprehension challenges in digital transformation. (Image: CIMdata Inc.)

In my conference-opening presentation, “PLM’s Integral Role in Digital Transformation,” I stressed that companies need to think beyond digitizing data, that merely converting analog data to digital isn’t enough. Yes, digitalization is at the core of an organization’s digital transformation … but moving to a digital business requires rethinking many organizational structures and business processes as well as understanding the growing value of data.

So how does PLM fit into this? Only by seeing PLM as a strategic business approach can its depth and breadth in the reach of digital transformation be comprehended. PLM concentrates the organization’s focus on the collaborative creation, use, management, and dissemination of product-related intellectual assets—a company’s core asset. This makes PLM the platform for integrating external entities into lifecycle processes—thereby enabling end-to-end (E2E) connectivity … and the optimization of associated business functions and entities throughout the lifecycle.

Don’t forget, I cautioned, that the data generated from your products and services often becomes more valuable than the products themselves. Why? Because product data touches all phases of a product’s life, these digital assets play a central role in an enterprise’s digital transformation. Hence I warned that digital transformation will collapse without the implementation of the right set of data governance policies, procedures, structure, roles, and responsibilities.

Many presenters also noted how PLM and digital transformation are helping them deal with the challenges of stiffer competition, rising costs, downward pressure on pricing, customer demands for more functionality and longer service lives, data-hungry Artificial Intelligence (AI), and Product as a Service (PaaS) business models.

And while all these factors aggravate the issues I addressed, speakers expressed confidence that they will eventually reap the benefits of PLM and digital transformation—starting with getting better products to market sooner and at lower cost.

Another challenge with digital transformation and comprehension is the multitude of ways that presenting companies organize and identify their engineering systems and functions. All these manufacturers use basically the same processes to develop and produce a new product or system, but these tasks are divided up in countless ways; no two companies’ product-development nomenclatures are the same.

Sorting this out is crucial to the understanding and comprehension of the enterprise’s data and information. Gaining access to departmental “silos” of data is increasingly seen as just the beginning of digging information out of obsolete “legacy” systems and outdated formats.

Dr. Martin Eigner’s concept of the extended digital thread integrated across the product lifecycle. (Image: Eigner Engineering Consult.)

In the conference’s Day One keynote presentation, Martin Eigner of Eigner Engineering Consult, Baden-Baden, Germany, spoke on “Reflecting on 40 Years of PDM/PLM: Are We Where We Wanted to Be?” The answer, of course, is both yes and no.

Dr. Eigner expressed his frustration in PLM’s fragmented landscape. We are still tied to legacy systems (ERP, MES, SCM, CRM) that depend on flawed interfaces reminiscent of outdated monolithic software, he pointed out. As digitalization demands and technologies like IoT, AI, knowledge graphs, and cloud solutions continue to grow, the key question is: Can the next generation of PLM solutions meet the challenges of digital transformation with the advanced, modern software technologies available?

“The vision of PLM still exists,” Dr. Eigner continued, “but the term was hijacked in the late 1990s while the PLM vision was still being discussed. Vendors of product data management (PDM) solutions applied the term for their PDM offerings” which “mutated from PDM to PLM virtually overnight.”

“Ultimately,” he noted, “business opportunities and ROI will be significantly boosted by the overarching Digital Thread on Premise or as a Service,” leveraged with “knowledge graphs connected with the Digital Twin.” Applying “generative AI can optionally create an Omniverse with enhanced data connectivity and traceability.”

This stage of digital transformation, he summarized, “will improve decision making and support AI application development.” In turn, these “will revolutionize product development, optimize processes, reduce costs, and position the companies implementing this at the forefront of their industries. And we are coming back to our original PLM vision as the Single Source of Truth.”

Uncomfortably ambitious productivity improvements with AI and digital transformation. (Image: GE Aerospace.)

The challenges of getting this done were addressed by Michael Carlton, Director, Digital Technology PLM Growth at GE Aerospace, Evendale, Ohio, who described “developing a best-in-class Enterprise PLM platform to increase productivity and capacity amid rising demands for digital thread capabilities, technology transformation, automation, and AI.” His remedies included “leveraging AI, cloud acceleration, observability, analytics, and automation techniques.”

“Uncomfortably ambitious productivity improvements,” Carlton continued, include “reduction in PLM environment build cycle time, parallel development programs on different timelines, shifting testing left (i.e., sooner), improved quality throughout, automated data security tests, and growing development capacity.”

IDC slide showing how PLM maintains the digital threads that define the product ecosystem by weaving together product development, manufacturing, supply chain, service to balance cost, time, and quality. (Image: IDC.)

The issue of PLM and the boardroom was raised in a presentation by John Snow, Research Director, Product Innovation Strategies, at International Data Corp. (IDC), Needham, Mass. In his data-packed Day 2 keynote, Snow detailed how complex this issue is and the “disconnect between corporate concerns and engineering priorities.”

PLM, observed Snow, “maintains the digital threads that define the product ecosystem: weaving together product development, manufacturing, the supply chain, and service to balance cost, time, and quality.”

The opportunity for engineering in the boardroom is that “80% of product costs is locked in during design”; however, the Cost of Goods Sold (COGS) is 10X to 15X higher than the Cost of R&D (CR&D), Snow explained.

“Poor product design,” Snow continued, “has an outsized impact on COGS, but good design does,” too. Thus, “increasing the engineering budget can have a big impact on profits (if properly allocated).” “Current efforts to leverage design for manufacturing & assembly (DFM/A) are falling short,” he added.
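Snow’s leverage argument is easy to see with illustrative numbers (invented here for the sake of arithmetic, not taken from the IDC presentation): if COGS runs at 10X the R&D budget and design decisions lock in most of it, even a small design-driven reduction in COGS outweighs the extra engineering spend.

```python
# Illustrative arithmetic only — these figures are invented, not IDC's.
r_and_d = 10.0            # annual R&D budget, $M
cogs = 10 * r_and_d       # COGS at 10X CR&D = $100M

extra_engineering = 0.1 * r_and_d  # +10% R&D spend = $1M
cogs_reduction = 0.03 * cogs       # yielding a 3% COGS cut = $3M

net_profit_gain = cogs_reduction - extra_engineering
print(net_profit_gain)  # 2.0 ($M): a 3:1 return on the added engineering budget
```

With COGS at 10X to 15X CR&D, the multiplier only gets more favorable at the top of that range — which is the crux of the boardroom case for funding design.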

HOLLISTER’s roller-coaster journey toward PLM showing key decision points; the loop indicates a stop and restart. (Image: Hollister Inc.)

Near the other end of the corporate size scale from GE Aerospace is Hollister Inc., Libertyville, Ill., an employee-owned medical supplies manufacturer of ostomy and continence products. Stacey Burgardt, Hollister’s Senior Program Manager for PLM, addressed PLM implementation challenges in her presentation on “The Role of Executive Sponsorship in PLM Success at Hollister.”

Burgardt, formerly R&D and Quality Leader, outlined Hollister’s PLM vision as three transformations:

• To product data centric from document centric

• To digital models from drawings, and

• To live collaboration and traceability from systems of record.

In her appeal to sponsors, Burgardt estimated total expected benefits through 2030 at $29 million. This sum included significant gains from improved associate efficiency, lower software costs, and reduced waste, scrap, and rework.

Unlike every other presenter, Hollister has yet to implement PLM, though not for lack of effort dating back to 2018. Hollister is currently finalizing PLM solution selection and planning. Burgardt focused on the need for executive sponsorship and strategies to secure it. “Identify the right executive sponsors in the governance model including the CEO and CFO,” she said, “and the leaders of the main functions that PLM will impact, and someone who has seen a successful PLM who can advocate.”

“Be persistent,” she concluded, “and be adaptable.” Address sponsors’ concerns, and “if it’s not the right time, keep the embers burning and try again.”

And this led to my conference summation topic: sponsorship. The fact that PLM and digital transformation are now recognizably tougher and will take longer than once hoped led to my Executive Spotlight panel discussion at the end of Day 2: “The Role of the Executive Sponsor in Driving a PLM Transformation.” My four panelists agreed high-level sponsorships are indispensable … and we discussed how to identify, enlist, and maintain those sponsorships.

To conclude, looking back over the two days’ presentations, I think the answer is “yes” to my questions in the first paragraph. And the sooner this rethink gets going the better.

The post In the rush to digital transformation, it might be time for a rethink appeared first on Engineering.com.

]]>
[Survey Report] Complexity Overload and Bottleneck Struggles: The Hidden Costs of Manual PLM Testing https://www.engineering.com/resources/survey-report-complexity-overload-and-bottleneck-struggles-the-hidden-costs-of-manual-plm-testing/ Wed, 28 May 2025 19:24:04 +0000 https://www.engineering.com/?post_type=resources&p=140096 Manual testing slows engineering teams down — and the data proves it. In this exclusive survey of 115 engineering professionals, discover why manual PLM software testing is no longer sustainable for today’s complex environments. Key findings include: Explore the real-world impact of manual testing bottlenecks, and see why so many organizations are urgently seeking automation […]

The post [Survey Report] Complexity Overload and Bottleneck Struggles: The Hidden Costs of Manual PLM Testing appeared first on Engineering.com.

]]>
Manual testing slows engineering teams down — and the data proves it. In this exclusive survey of 115 engineering professionals, discover why manual PLM software testing is no longer sustainable for today’s complex environments.

Key findings include:

  • 92% of teams have postponed or canceled releases due to incomplete manual testing
  • 66% require at least one week — and 34% need two or more weeks — for a full regression pass
  • 88% can only handle 1–2 integrated systems manually, limiting test coverage
  • And more

Explore the real-world impact of manual testing bottlenecks, and see why so many organizations are urgently seeking automation to accelerate releases, improve quality, and empower their teams.

Your download is sponsored by Keysight Technologies.

The post [Survey Report] Complexity Overload and Bottleneck Struggles: The Hidden Costs of Manual PLM Testing appeared first on Engineering.com.

]]>
Aras Software at 25: PLM transformation through connected intelligence https://www.engineering.com/aras-software-at-25-plm-transformation-through-connected-intelligence/ Sat, 17 May 2025 13:01:53 +0000 https://www.engineering.com/?p=139728 Its trajectory mirrors the wider PLM market shift—from rigid systems to flexible, integrated platforms.

The post Aras Software at 25: PLM transformation through connected intelligence appeared first on Engineering.com.

]]>
Roque Martin, CEO at Aras Software, opened ACE 2025 by reflecting on Aras’ 25-year evolution—from early PLM strategy roots to hands-on innovation and enterprise-wide digital thread leadership. (Image: Lionel Grealou)

Nestled in Boston’s Back Bay during the first three days of April, ACE 2025 marked a key milestone: Aras’ 25th anniversary. It was a celebration of a quarter-century of innovation in the PLM space, built on the vision of founder Peter Schroer. What began as a small gathering has grown into a global forum for transformation. Aras Innovator continues to position itself as a challenger to legacy PLM systems, offering an open and adaptable platform.

Building on the company’s “red box” concept, presented several years ago by John Sperling, SVP of Product Management, the Aras strategy is rooted in an overlay approach and containerization—designed to simplify integration and support relationship-driven data management. CEO Roque Martin described Aras’ evolution from its early roots in PDM and document control to today’s enterprise-scale PLM platform—enabling connected intelligence across functions and domains.

This trajectory mirrors the wider PLM market shift—from rigid systems to flexible, integrated platforms that support customization, adaptability, and data fluidity across engineering and operational boundaries.

AI, cloud, and the connected enterprise

Nowadays, it is close to impossible to discuss tech/IT/OT or digital transformation without exploring new opportunities from artificial intelligence (AI). Cloud and SaaS are established deployment standards across enterprise software solutions. Nevertheless, PLM tech solutions often lag when it comes to adopting modern architecture and licensing models.

The intersection of PLM and AI is rapidly redefining transformation strategies. Aras’ ACE 2025 conference embraced this momentum through the theme: “Connected Intelligence: AI, PLM, and a Future-Ready Digital Thread.” This theme reflects how AI has become more than an emerging trend—it is now central to enabling smarter decision-making, increased agility, and value creation from data.

While cloud and SaaS have become standard deployment models, PLM platforms have historically struggled to keep pace. Aras is challenging that with an architecture that emphasizes openness, extensibility, and modern integration practices—foundational enablers for enterprise-grade AI. In this landscape, the importance of aligning AI readiness with digital thread maturity is growing. PLM no longer sits at the periphery of IT/OT strategy—it is becoming the backbone for scalable, connected transformation.

Bridging old and new

Martin opened ACE 2025 by recalling that the term “digital thread” originated in aerospace back in 2013—not a new concept, but one whose visual metaphor still resonates. With the announcement of InnovatorEdge, Aras showcased the next leap in PLM evolution—designed to connect people, data, and processes using AI, low-code extensibility, and secure integrations.

With InnovatorEdge, Aras introduces a modular, API-first extension designed to modernize PLM without discarding legacy value. It balances innovation with compatibility and addresses four key areas:

  1. Seamless connections across enterprise systems and tools.
  2. AI-powered analytics to enhance decision-making capabilities.
  3. Secure data portals enabling supply chain data collaboration.
  4. Open APIs to support flexible, industry-specific configurations.
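The modular, API-first pattern behind these four areas can be illustrated with a small connector registry, where each enterprise system plugs in behind a common interface. This is a hedged sketch only; the names (`register`, `fetch_item`, the `erp`/`plm` connectors) are illustrative assumptions, not the actual InnovatorEdge API.

```python
# Illustrative sketch of a modular, API-first connector pattern.
# All names here are hypothetical, not Aras' documented interface.
from typing import Callable, Dict

# Registry mapping a system name (e.g. "erp", "mes") to a fetch function.
CONNECTORS: Dict[str, Callable[[str], dict]] = {}

def register(system: str):
    """Decorator that plugs a system-specific connector into the registry."""
    def wrap(fn: Callable[[str], dict]):
        CONNECTORS[system] = fn
        return fn
    return wrap

@register("erp")
def erp_fetch(item_id: str) -> dict:
    # In a real deployment this would call the ERP's REST endpoint.
    return {"id": item_id, "source": "erp", "cost": 12.50}

@register("plm")
def plm_fetch(item_id: str) -> dict:
    # Stand-in for a PLM REST/OData call.
    return {"id": item_id, "source": "plm", "revision": "B"}

def fetch_item(item_id: str) -> dict:
    """Merge one item's view across every registered system."""
    merged: dict = {"id": item_id}
    for system, fetch in CONNECTORS.items():
        record = fetch(item_id)
        record.pop("id", None)
        record.pop("source", None)
        merged[system] = record
    return merged
```

The point of the pattern is that adding a new system (MES, a supplier portal) is a new registered connector, not a change to the core platform.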

By maintaining its commitment to adaptability while embracing modern cloud-native patterns, Aras reinforces its position as a strategic PLM partner—not just for managing product data, but for navigating complexity, risk, and continuous innovation at scale.

Data foundations

As we stand at the intersection of AI and PLM, ACE 2025 made one thing clear: solid data foundations are essential to unlock the full potential of connected intelligence. Rob McAveney, CTO at Aras, stressed that AI is not just about automation—it is about building smarter organizations through better use of data. “AI is indeed not just about topping up data foundation,” he said, “but helping organizations transform by leveraging new data threads.”

McAveney illustrated Aras’ vision with a simple yet powerful equation:

Digital Thread + AI = Connected Intelligence

This means:

  • Discover insights across disconnected data silos.
  • Enrich fragmented data by repairing links and improving context.
  • Amplify business value using simulation, prediction, and modeling.
  • Connect people and systems into responsive feedback loops.
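The discover-and-enrich loop above can be sketched as a tiny in-memory digital thread: records from separate silos are linked by shared keys, and dangling references are surfaced for repair. A hedged illustration only; the record shapes and field names are assumptions, not any vendor's data model.

```python
# Minimal digital-thread sketch: link records across silos by shared keys
# and flag broken references for enrichment. Field names are illustrative.

requirements = {"REQ-1": {"title": "Max operating temp 85C"}}
parts = {"P-100": {"satisfies": "REQ-1"}, "P-200": {"satisfies": "REQ-9"}}

def broken_links(parts: dict, requirements: dict) -> list:
    """Return part IDs whose requirement reference cannot be resolved."""
    return [pid for pid, rec in parts.items()
            if rec["satisfies"] not in requirements]

def thread_for(req_id: str, parts: dict) -> list:
    """Discover every part connected to a requirement (one hop of the thread)."""
    return [pid for pid, rec in parts.items() if rec["satisfies"] == req_id]
```

In this toy model, `broken_links` is the "enrich fragmented data" step (it tells you which links need repairing), and `thread_for` is the "discover" step that an AI layer would reason over.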

Every mainstream PLM solution provider is racing to publish AI-enabled tools, recognizing that intelligence and adaptability are no longer optional in today’s dynamic product environments. Siemens continues to evolve its intelligent enterprise twins, embedding AI into its Xcelerator portfolio to drive predictive insights and closed-loop optimization. Dassault Systèmes recently unveiled its 3D UNIV+RSE vision for 2040, underscoring a future where AI, sustainability, and virtual twin experiences converge to reshape product innovation and societal impact. Meanwhile, PTC strengthens its suite through AI-powered generative design and analytics across Creo, Windchill, and ThingWorx. Across the board, AI is becoming the common thread—fueling a transformation from static PLM to connected, cognitive, and continuously learning platforms.

With so much movement among the established players, is Aras’ open, modular approach finally becoming the PLM disruptor the industry did not see coming? Gartner VP Analyst Sudip Pattanayak reinforced this momentum in his analysis, emphasizing the need for traceability and data context as cornerstones of digital thread value. He identified four critical areas of transformation:

  1. Collaboration via MBSE and digital engineering integration.
  2. Simulation acceleration through democratized digital twins.
  3. Customer centricity driven by IoT and usage-based insights.
  4. Strategic integration of PLM with ERP, MES, and other platforms.

Sudip Pattanayak, VP Analyst at Gartner, highlighted that “PLM supports the enterprise digital thread” by building a connected ecosystem of product information. (Image: Lionel Grealou)

From a business standpoint, this translates to strategic benefits in risk management, compliance, product quality, and brand protection. For instance, digital thread traceability supports:

  • Warranty tracking and root cause analysis for recalls.
  • Maintenance, usage, and service optimization.
  • Real-time feedback loops from market to R&D.
  • Commercial impact modeling from product failures.
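One of the traceability benefits above, root cause analysis for recalls, amounts to a backward walk along the thread: from failed serial numbers, through their production batch, to the design revision they share. The data and field names in this sketch are invented for illustration.

```python
# Sketch: trace field failures back through batch to design revision,
# the kind of backward walk a digital thread enables for recalls.
# All records and field names here are illustrative.
units = {
    "SN-001": {"batch": "B-7"},
    "SN-002": {"batch": "B-7"},
    "SN-003": {"batch": "B-8"},
}
batches = {"B-7": {"revision": "Rev-C"}, "B-8": {"revision": "Rev-D"}}

def suspect_revisions(failed_serials: list) -> set:
    """Collect the design revisions implicated by a set of field failures."""
    return {batches[units[sn]["batch"]]["revision"] for sn in failed_serials}
```

If every reported failure resolves to a single revision, the recall scope can be narrowed to units built against it instead of the whole fleet.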

Pattanayak concluded that enterprises should not aim for total digital thread coverage from day one. Instead, the priority is identifying high-value “partial threads” and scaling from there—with AI capabilities built on solid, governed, and well-connected data structures.
