Lionel Grealou, Author at Engineering.com
https://www.engineering.com/author/lionel-grealou/

How Chain of Thought drives competitive advantage
https://www.engineering.com/how-chain-of-thought-drives-competitive-advantage/
Tue, 03 Jun 2025 13:37:01 +0000
Moving beyond prompt engineering and towards AI-driven structured reasoning...for better or worse.

Building on AI prompt literacy, engineers are discovering that knowing what to ask AI is only half the equation. The breakthrough comes from structuring how to think through complex problems with AI as a reasoning partner. Chain of Thought (CoT) methodology transforms this collaboration from text generation into dynamic co-engineering systems thinking—amplifying competent engineers into super-engineers who solve problems with greater clarity and at greater scale.

CoT as structured engineering reasoning

Chain of Thought formalizes what expert engineers intuitively do: breaking complex problems into logical, sequential steps that can be examined, validated, and improved. Enhanced with AI partnership, this structured reasoning becomes scalable organizational intelligence rather than individual expertise.

At its core, leveraging AI is about mastering the art of questioning. The transformation occurs when engineers move from asking AI “What is the solution?” to guiding AI through “How do we systematically analyze this problem?” This creates transparent reasoning pathways that preserve knowledge, enable collaboration, and generate solutions teams can understand and build upon.

As such, here is a reusable CoT template for technical decision-making:

“To solve [engineering challenge], break this down systematically:

  1. Identify core constraints: [performance/cost/regulatory requirements],
  2. Analyze trade-offs between [options] considering [specific criteria],
  3. Evaluate effects on [downstream systems/processes],
  4. Assess implementation risks and mitigation strategies.”

This template works across domains—thermal management, software architecture, regulatory compliance—because it mirrors the structured thinking that defines engineering excellence.
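For teams that manage prompts programmatically, the reusable template above can be captured as a simple format string. This is a minimal Python sketch; the field names and example values are illustrative assumptions, not tied to any particular AI vendor's API.

```python
# A minimal sketch of the reusable CoT template above as a Python format
# string. Field names and example values are illustrative assumptions,
# not part of any specific AI tool's interface.
COT_TEMPLATE = (
    "To solve {challenge}, break this down systematically:\n"
    "1. Identify core constraints: {constraints}.\n"
    "2. Analyze trade-offs between {options} considering {criteria}.\n"
    "3. Evaluate effects on {downstream}.\n"
    "4. Assess implementation risks and mitigation strategies."
)

def build_cot_prompt(challenge, constraints, options, criteria, downstream):
    """Fill the template so one reasoning structure serves many domains."""
    return COT_TEMPLATE.format(
        challenge=challenge,
        constraints=constraints,
        options=options,
        criteria=criteria,
        downstream=downstream,
    )

# Example: the same skeleton applied to a thermal-management question.
prompt = build_cot_prompt(
    challenge="a battery enclosure thermal issue",
    constraints="peak cell temperature, unit cost, safety certification",
    options="liquid cooling and forced-air cooling",
    criteria="mass, serviceability, lifecycle cost",
    downstream="pack assembly and field service",
)
print(prompt)
```

Because the skeleton is fixed and only the bracketed fields change, the same four-step structure can be reused verbatim for thermal, software, or compliance questions.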

Practical applications in product innovation

CoT methodology proves most powerful in early-stage ideation, complex trade-off analysis, and compliance reasoning where traditional approaches miss critical interdependencies. Based on the target persona, this translates into various use cases, such as:

Early-stage product ideation:

“To develop [product concept], systematically explore: 1) User pain points and current solutions, 2) Technical feasibility and core challenges, 3) Market positioning and competitive advantage, 4) Minimum viable approach to validate assumptions.”

Engineering trade-off analysis:

“When choosing between [options], evaluate: 1) Performance implications on [key metrics], 2) Cost analysis including lifecycle expenses, 3) Risk assessment and failure mode mitigation, 4) Integration requirements and future modification impacts.”

Compliance and regulatory reasoning:

“To ensure [system] meets [requirements], structure analysis: 1) Requirement mapping to measurable criteria, 2) Design constraint implications, 3) Verification strategy and documentation needs, 4) Change management for ongoing compliance.”

These frameworks transform AI from answer-generator to reasoning partner, helping engineers think systematically while preserving logic for team collaboration and future reference.
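To keep such frameworks reusable rather than retyped ad hoc, they can be stored as a small, versionable library. The sketch below is hypothetical; the dictionary keys and placeholder names are assumptions, not an established convention.

```python
# Hypothetical sketch: the three persona templates above kept in a small
# registry so teams can reuse and version them consistently.
COT_LIBRARY = {
    "ideation": (
        "To develop {concept}, systematically explore: "
        "1) User pain points and current solutions, "
        "2) Technical feasibility and core challenges, "
        "3) Market positioning and competitive advantage, "
        "4) Minimum viable approach to validate assumptions."
    ),
    "trade_off": (
        "When choosing between {options}, evaluate: "
        "1) Performance implications on {metrics}, "
        "2) Cost analysis including lifecycle expenses, "
        "3) Risk assessment and failure mode mitigation, "
        "4) Integration requirements and future modification impacts."
    ),
    "compliance": (
        "To ensure {system} meets {requirements}, structure analysis: "
        "1) Requirement mapping to measurable criteria, "
        "2) Design constraint implications, "
        "3) Verification strategy and documentation needs, "
        "4) Change management for ongoing compliance."
    ),
}

def render_prompt(use_case, **fields):
    """Look up a template by use case and fill in the blanks."""
    return COT_LIBRARY[use_case].format(**fields)

print(render_prompt(
    "compliance",
    system="the battery management system",
    requirements="functional safety requirements",
))
```

A shared registry like this also makes the prompts themselves reviewable artifacts, subject to the same versioning discipline as any other engineering asset.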

PLM integration—CoT as a digital thread enabler

CoT becomes particularly powerful when integrated into Product Lifecycle Management (PLM) and related enterprise resource systems—creating data threads that preserve not just what was decided, but why decisions were made and how they connect across the development lifecycle. Consider these scenarios:

Design intent preservation:

“For [design decision], document reasoning: 1) Requirements analysis driving this choice, 2) Alternative evaluation and rejection rationale, 3) Implementation factors influencing approach, 4) Future assumptions that might affect this decision.”

Cross-functional integration:

“When [engineering decision] affects multiple disciplines, analyze: 1) Mechanical implications for structure/thermal/manufacturing, 2) Software considerations for control/interface/processing, 3) Regulatory impact and verification needs, 4) Supply chain effects on sourcing/cost/scalability.”

Digital thread connection points:

  • Link design decisions to original requirements and customer needs.
  • Connect material choices to performance targets and compliance requirements.
  • Trace software architecture to system-level performance goals.
  • Map manufacturing choices to cost targets and quality requirements.

This ensures that when teams change or requirements evolve, critical decision reasoning remains accessible and actionable rather than locked in individual expertise. From a business outcome perspective, this can contribute to continuity across product generations and reduce time spent retracing design decisions during audits, updates, or supplier transitions.
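As a thought experiment, the connection points above could be captured as lightweight decision records that link design choices back to requirements. The class and field names below are invented for illustration and do not reflect any actual PLM schema.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One node in a digital thread: what was decided, why, and what it
    links to. Field names are illustrative, not a real PLM schema."""
    decision: str
    rationale: str                        # CoT reasoning captured at decision time
    alternatives_rejected: list = field(default_factory=list)
    linked_requirements: list = field(default_factory=list)  # upstream requirement IDs
    assumptions: list = field(default_factory=list)          # may invalidate the decision later

def impacted_decisions(records, requirement_id):
    """When a requirement changes, list every decision traced to it,
    instead of retracing the reasoning from memory."""
    return [r.decision for r in records if requirement_id in r.linked_requirements]

records = [
    DecisionRecord(
        decision="Use aluminum housing",
        rationale="Meets mass target at acceptable cost per trade-off analysis",
        alternatives_rejected=["magnesium (cost)", "steel (mass)"],
        linked_requirements=["REQ-017", "REQ-042"],
        assumptions=["aluminum supply price stays within +/-10%"],
    ),
    DecisionRecord(
        decision="Passive cooling only",
        rationale="Thermal analysis showed margin at worst-case ambient",
        linked_requirements=["REQ-042"],
    ),
]
print(impacted_decisions(records, "REQ-042"))
```

Even this toy traversal shows the audit benefit: a single requirement change surfaces every dependent decision, together with the reasoning and assumptions recorded when it was made.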

Strategic reality: revolution or evolution?

While CoT methodology delivers measurable improvements, the strategic question remains whether this represents fundamental transformation or sophisticated evolution.

Evidence for transformation: Though systematic evidence remains scarce, early adopters of structured CoT approaches report measurable improvements in knowledge transfer efficiency, design review effectiveness, and decision consistency. These adopters also cite enhanced team collaboration, reduced rework cycles, and improved knowledge retention when engineering reasoning becomes explicit and traceable. These patterns suggest systematic capability enhancement rather than marginal improvement.

Case for evolution: Critics argue CoT merely formalizes what competent engineers have always done. Revolutionary breakthroughs—the transistor, World Wide Web, breakthrough materials—often emerge from intuitive leaps that defy structured frameworks, suggesting excessive systematization might constrain innovation. Regardless, the accelerating sophistication of AI demands that engineers critically assess not just what they build, but how they think.

Strategic balance: Successful engineering organizations are not choosing between structured reasoning and creative innovation—they are developing meta-skills for knowing when each approach adds value. CoT excels in complex, multi-constraint problems where systematic analysis prevents costly oversights. Pure creativity dominates breakthrough innovation where paradigm shifts matter more than optimization.

Future-proofing perspective: As AI capabilities accelerate from text generation to multimodal reasoning to autonomous design, organizations building frameworks for continuous methodology evaluation—rather than optimizing current techniques—will maintain competitive advantages through technological transitions.

Chain of Thought may represent the beginning of engineering’s AI integration rather than its culmination. The methodology’s emphasis on explicit reasoning provides tools for navigating technological uncertainty itself, perhaps its most valuable contribution to engineering’s digital future. CoT may be the missing link between today’s prompt-based AI assistants and tomorrow’s agentic co-engineers—moving from reactive support to proactive design collaboration.

Whether revolution or evolution, CoT offers engineers systematic approaches for amplifying problem-solving capabilities in an increasingly AI-integrated technical landscape.

VW’s digital journey balances bold moves with the realities of execution
https://www.engineering.com/vws-digital-journey-balances-bold-moves-with-the-realities-of-execution/
Thu, 22 May 2025 15:44:40 +0000
Volkswagen’s digital trajectory reveals both the promise of technology adoption and the hurdles of industrial-scale implementation.

Inside the line at Volkswagen’s Chattanooga manufacturing plant. (Image: Volkswagen)

Volkswagen’s recent strategic moves highlight a company at the crossroads of transformation. On one hand, VW is making bold investments in AI-driven engineering and forging strategic alliances to position itself as a leader in next-generation automotive innovation. On the other, it faces the stark realities of large-scale execution—rising manufacturing costs, operational challenges, new electric vehicle (EV) entrant competition, and financial pressures.

To stay competitive, VW has embraced generative AI, digital twins, and software-defined vehicles. Announced in December 2024, its partnership with PTC and Microsoft to develop Codebeamer Copilot aims to revolutionize Application Lifecycle Management (ALM) with AI automation. Meanwhile, the adoption of Dassault Systèmes’ 3DEXPERIENCE platform signals a commitment to integrating model-based engineering (MBE) for optimized vehicle development.

At the same time, Volkswagen’s $5.8 billion investment in an alliance with Rivian showcases a strategic bet on the future of electric mobility. However, alongside these forward-looking investments, Volkswagen must grapple with fundamental execution challenges—managing rising production costs, navigating supply chain disruptions, and ensuring that its transformation efforts deliver tangible business outcomes.

Accelerating engineering transformation

Volkswagen’s collaboration with PTC and Microsoft to develop Codebeamer Copilot signals a strong commitment to leveraging generative AI in ALM. Codebeamer is being augmented with AI-driven automation to enhance software development efficiency, a critical step as automotive manufacturers increasingly shift towards software-defined vehicles.

Software is no longer just an enabler; it is now at the heart of automotive product differentiation. For Volkswagen, a legacy automaker, competing with software-native disruptors requires a fundamental shift in how vehicle development is structured. Codebeamer Copilot represents more than an AI-enhanced ALM tool—it is part of a broader shift toward agile, continuous software deployment, ensuring that VW’s vehicles remain at the forefront of digital innovation.

Codebeamer is an ALM platform for advanced product and software development. (Image: PTC)

Simultaneously, VW’s adoption of Dassault Systèmes’ 3DEXPERIENCE platform aims to optimize vehicle development processes. This move reinforces the industry’s pivot towards integrated digital twins, where real-time collaboration and model-based engineering (MBE) accelerate product lifecycle governance. The 3DEXPERIENCE platform aligns with the growing need for cross-functional collaboration between mechanical, electrical, and software engineering teams, bridging gaps that have historically slowed down the development process. While these investments showcase Volkswagen’s intent to streamline development, execution remains key—successful deployment will hinge on cultural adoption and seamless integration with legacy systems.

Strategic EV alliances: the Rivian gambit

Volkswagen’s $5.8 billion partnership with Rivian announced in November 2024 signals a strategic hedge against legacy constraints. The alliance provides VW with access to Rivian’s advanced EV architecture, allowing the German automaker to accelerate its EV portfolio without reinventing the wheel. In return, Rivian gains the financial backing and industrial scale necessary to compete in an increasingly saturated EV market.

This collaboration is emblematic of a broader trend in the automotive industry: the shift from closed innovation models to open collaboration. OEMs are recognizing that building everything in-house is neither cost-effective nor agile enough for the rapid technological shifts defining the industry. By working with Rivian, VW positions itself to benefit from the startup’s agility while bringing its own mass-production expertise to the table.

However, alliances alone are not enough. To realize the full potential of this partnership, VW must overcome internal friction—balancing traditional automotive development processes with the more iterative, software-driven approach championed by Rivian. Success will depend on VW’s ability to integrate new ways of working without disrupting existing operations.

Executing transformation amid industrial pressures

While Volkswagen continues to push forward with its digital and electrification strategies, operational challenges remain a persistent theme. Rising material costs, supply chain bottlenecks, and production inefficiencies have placed significant financial pressure on the company. In 2024, VW reported 4.8 million vehicle deliveries—an impressive figure, but one that comes against the backdrop of increasing competition from Tesla, Chinese automakers such as local market leader BYD, and emerging EV startups.

Manufacturing complexity is another hurdle. Unlike Tesla, which designs its vehicles with highly streamlined production methods, VW is contending with legacy platforms that require significant re-engineering to accommodate next-generation propulsion systems and digital architectures. This tension between past and future is not unique to VW but serves as a reminder that digital transformation is as much about unlearning as it is about innovation.

To bridge this gap, Volkswagen must double down on operational efficiency while ensuring that its transformation investments deliver clear, measurable returns. This means refining its global production footprint, streamlining supplier relationships, and investing in workforce upskilling to ensure that its employees are equipped for the future of mobility.

Balancing disruption with execution

Volkswagen’s trajectory exemplifies the duality of digital transformation: bold investments in AI-driven engineering and strategic alliances, juxtaposed with the realities of industrial-scale execution. The success of these initiatives will depend on VW’s ability to navigate integration complexities, mitigate disruption risks, and sustain operational resilience.

For manufacturing engineering leaders, the key takeaway is clear: transformation is not just about adopting new technologies but ensuring their successful convergence with business imperatives. It requires a relentless focus on execution—aligning investments in AI, ALM, PLM, and EV strategy with pragmatic, scalable implementation roadmaps. The future of Volkswagen, and indeed the broader automotive industry, will be defined by those who can master this balancing act.

As digital and physical converge faster than ever, Volkswagen’s journey serves as a crucial case study that highlights both the promise and pitfalls of large-scale digital reinvention. The automaker’s success will hinge on its ability to harmonize technology adoption with industrial pragmatism, ensuring that innovation is not just pursued but effectively realized at scale.

Aras Software at 25: PLM transformation through connected intelligence
https://www.engineering.com/aras-software-at-25-plm-transformation-through-connected-intelligence/
Sat, 17 May 2025 13:01:53 +0000
Its trajectory mirrors the wider PLM market shift—from rigid systems to flexible, integrated platforms.

Roque Martin, CEO at Aras Software, opened ACE 2025 by reflecting on Aras’ 25-year evolution—from early PLM strategy roots to hands-on innovation and enterprise-wide digital thread leadership. (Image: Lionel Grealou)

Nestled in Boston’s Back Bay during the first three days of April, ACE 2025 marked a key milestone: Aras’ 25th anniversary. It was a celebration of a quarter-century of innovation in the PLM space, built on the vision of founder Peter Schroer. What began as a small gathering has grown into a global forum for transformation. Aras Innovator continues to position itself as a challenger to legacy PLM systems, offering an open and adaptable platform.

The Aras strategy builds on the company’s “red box” concept, presented several years ago by John Sperling, SVP of Product Management. It is rooted in an overlay approach and containerization—designed to simplify integration and support relationship-driven data management. CEO Roque Martin described Aras’ evolution from its early roots in PDM and document control to today’s enterprise-scale PLM platform—enabling connected intelligence across functions and domains.

This trajectory mirrors the wider PLM market shift—from rigid systems to flexible, integrated platforms that support customization, adaptability, and data fluidity across engineering and operational boundaries.

AI, cloud, and the connected enterprise

Nowadays, it is close to impossible to discuss tech/IT/OT or digital transformation without exploring new opportunities from artificial intelligence (AI). Cloud and SaaS are established deployment standards across enterprise software solutions. Nevertheless, PLM tech solutions often lag when it comes to adopting modern architecture and licensing models.

The intersection of PLM and AI is rapidly redefining transformation strategies. Aras’ ACE 2025 conference embraced this momentum through the theme: “Connected Intelligence: AI, PLM, and a Future-Ready Digital Thread.” This theme reflects how AI has become more than an emerging trend—it is now central to enabling smarter decision-making, increased agility, and value creation from data.

While cloud and SaaS have become standard deployment models, PLM platforms have historically struggled to keep pace. Aras is challenging that with an architecture that emphasizes openness, extensibility, and modern integration practices—foundational enablers for enterprise-grade AI. In this landscape, the importance of aligning AI readiness with digital thread maturity is growing. PLM no longer sits at the periphery of IT/OT strategy—it is becoming the backbone for scalable, connected transformation.

Bridging old and new

Martin opened ACE 2025 by recalling that the term “digital thread” originated in aerospace back in 2013—not a new concept, but one whose visual metaphor still resonates. With the announcement of InnovatorEdge, Aras showcased the next leap in PLM evolution—designed to connect people, data, and processes using AI, low-code extensibility, and secure integrations.

With InnovatorEdge, Aras introduces a modular, API-first extension designed to modernize PLM without discarding legacy value. It balances innovation with compatibility and addresses four key areas:

  1. Seamless connections across enterprise systems and tools.
  2. AI-powered analytics to enhance decision-making capabilities.
  3. Secure data portals enabling supply chain data collaboration.
  4. Open APIs to support flexible, industry-specific configurations.

By maintaining its commitment to adaptability while embracing modern cloud-native patterns, Aras reinforces its position as a strategic PLM partner—not just for managing product data, but for navigating complexity, risk, and continuous innovation at scale.

Data foundations

As we stand at the intersection of AI and PLM, ACE 2025 made one thing clear: solid data foundations are essential to unlock the full potential of connected intelligence. Rob McAveney, CTO at Aras, stressed that AI is not just about automation—it is about building smarter organizations through better use of data. “AI is indeed not just about topping up data foundation,” he said, “but helping organizations transform by leveraging new data threads.”

McAveney illustrated Aras’ vision with a simple yet powerful equation:

Digital Thread + AI = Connected Intelligence

This means:

  • Discover insights across disconnected data silos.
  • Enrich fragmented data by repairing links and improving context.
  • Amplify business value using simulation, prediction, and modeling.
  • Connect people and systems into responsive feedback loops.

Every mainstream PLM solution provider is racing to publish AI-enabled tools, recognizing that intelligence and adaptability are no longer optional in today’s dynamic product environments. Siemens continues to evolve its intelligent enterprise twins, embedding AI into its Xcelerator portfolio to drive predictive insights and closed-loop optimization. Dassault Systèmes recently unveiled its 3D UNIV+RSE vision for 2040, underscoring a future where AI, sustainability, and virtual twin experiences converge to reshape product innovation and societal impact. Meanwhile, PTC strengthens its suite through AI-powered generative design and analytics across Creo, Windchill, and ThingWorx. Across the board, AI is becoming the common thread—fueling a transformation from static PLM to connected, cognitive, and continuously learning platforms.

With so much movement among the established players, is Aras’ open, modular approach finally becoming the PLM disruptor the industry did not see coming? Gartner VP Analyst Sudip Pattanayak echoed this in his analysis, emphasizing the need for traceability and data context as cornerstones of digital thread value. He identified four critical areas of transformation:

  1. Collaboration via MBSE and digital engineering integration.
  2. Simulation acceleration through democratized digital twins.
  3. Customer centricity driven by IoT and usage-based insights.
  4. Strategic integration of PLM with ERP, MES, and other platforms.

Sudip Pattanayak, VP Analyst at Gartner, highlighted that “PLM supports the enterprise digital thread” by building a connected ecosystem of product information. (Image: Lionel Grealou)

From a business standpoint, this translates to strategic benefits in risk management, compliance, product quality, and brand protection. For instance, digital thread traceability supports:

  • Warranty tracking and root cause analysis for recalls.
  • Maintenance, usage, and service optimization.
  • Real-time feedback loops from market to R&D.
  • Commercial impact modeling from product failures.

Pattanayak concluded that enterprises should not aim for total digital thread coverage from day one. Instead, the priority is identifying high-value “partial threads” and scaling from there—with AI capabilities built on solid, governed, and well-connected data structures.

The prompt frontier—how engineers are learning to speak AI
https://www.engineering.com/the-prompt-frontier-how-engineers-are-learning-to-speak-ai/
Wed, 14 May 2025 17:03:09 +0000
Will engineers shape AI, or will AI shape them?

Microsoft defines prompt engineering as the process of creating and refining the prompt used by an artificial intelligence (AI) model. “A prompt is a natural language instruction that tells a large language model (LLM) to perform a task. The process is also known as instruction tuning. The model follows the prompt to determine the structure and content of the text it needs to generate.”

For engineers, this means understanding how to structure prompts to solve technical problems, automate tasks, and enhance decision-making. This particularly applies when working with Generative AI—referring to AI models that can create new content, such as text, images, or code, based on the input they receive.

An article from McKinsey suggests that “Prompt engineering is likely to become a larger hiring category in the next few years.” Furthermore, it highlights that “Getting good outputs is not rocket science, but it can take patience and iteration. Just like when you are asking a human for something, providing specific, clear instructions with examples is more likely to result in good outputs than vague ones.”

Why engineers should care about prompt engineering

AI is quickly becoming an integral part of engineering workflows. Whether it is for generating reports, optimizing designs, analyzing large datasets, or even automating repetitive tasks, engineers are interacting with AI tools more frequently. However, the effectiveness of these tools depends heavily on how well they are instructed.

Unlike traditional programming, where logic is explicitly defined, AI models require well-structured prompts to perform optimally. A poorly phrased question or vague instructions can lead to suboptimal or misleading outputs. Engineers must develop prompt engineering skills to maximize AI’s potential, just as they would with any other technical tool.

Interestingly, some experts argue that prompt engineering might become less critical as AI systems evolve. A recent Lifewire article suggests that AI tools are becoming more intuitive, reducing the need for users to craft highly specific prompts. Instead, AI interactions could become as seamless as using a search engine, making advanced prompt techniques less of a necessity over time.

Key prompt skills engineers need

Engineers do not need to be AI researchers, but a foundational understanding of machine learning models, natural language processing, and AI biases can help them craft better prompts. Recognizing how models interpret data and respond to inputs is crucial.

AI tools perform best when given clear, well-defined instructions. Techniques such as specifying the format of the response, using constraints, and breaking down requests into smaller components can improve output quality. For example, instead of asking, “Explain this system,” an engineer could say, “Summarize this system in three bullet points and provide an example of its application.”

Engineers must develop an experimental mindset, continuously refining prompts to get more precise and useful outputs. Testing different wordings, constraints, and levels of detail can significantly improve AI responses. Applying Chain-of-Thought Prompting encourages AI to think step-by-step, improving reasoning and accuracy. Rather than asking, “What is the best material for this component?” an engineer could use: “Consider mechanical strength, cost, and sustainability. Compare three material options and justify the best choice.”
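The refinement pattern described here, turning a direct question into a step-by-step comparison, can itself be templated. A minimal sketch follows; the helper name and the factor list are assumptions for illustration, not a standard API.

```python
def to_cot_prompt(question, factors):
    """Rewrite a direct question as a step-by-step comparison prompt,
    following the chain-of-thought pattern described above.
    The function name and structure are illustrative assumptions."""
    steps = "\n".join(
        f"{i}. Evaluate each option for {factor}."
        for i, factor in enumerate(factors, 1)
    )
    return (
        f"{question}\n"
        "Reason step by step:\n"
        f"{steps}\n"
        f"{len(factors) + 1}. Compare the options across all factors and "
        "justify the best choice."
    )

# The material-selection example from the text, restructured automatically.
print(to_cot_prompt(
    "What is the best material for this component?",
    ["mechanical strength", "cost", "sustainability"],
))
```

The same wrapper applies to any comparison question: swap the factor list and the step-by-step scaffold is regenerated, which is exactly the experimental iteration on prompts the paragraph above recommends.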

Examples of prompt engineering in action

To illustrate how effective prompt engineering works, consider these examples using your favorite Gen-AI engine:

  • Manufacturing Improvement: Instead of asking an AI tool, “How can I improve my factory efficiency?” an engineer could prompt: “Analyze this production data and suggest three changes to reduce waste by at least 10% while maintaining throughput.”
  • Material Selection: Instead of a generic prompt like “Recommend a good material,” an engineer could use: “Compare aluminum and stainless steel for a structural component, considering weight, durability, and cost.”
  • Software Debugging: Instead of “Fix this code,” a structured prompt could be: “Analyze this Python script for performance issues and suggest optimizations for reducing execution time by 20%.”
  • Compliance Checks: Engineers working with sustainability standards could ask: “Review this product lifecycle report and identify areas where it fails to meet ISO 14001 environmental standards.”
  • System Design Optimization: Instead of asking, “How can I improve this mechanical system?” a structured prompt could be: “Given the following design constraints (weight limit: 50kg, max dimensions: 1m x 1m x 1m, operational temperature range: -20°C to 80°C), suggest three alternative system configurations that maximize efficiency while minimizing cost. Provide a trade-off analysis and justify the best choice.”

Such structured prompts help AI generate more useful, targeted responses, demonstrating the value of thoughtful prompt engineering.

Applications of prompt engineering in actual engineering

Prompt engineering is not just for software developers—it has real-world applications across multiple engineering disciplines:

  • Manufacturing & Design: AI can assist in generating CAD models, optimizing designs for manufacturability, and analyzing production data for efficiency improvements.
  • Electrical & Software Engineering: Engineers can use AI for debugging code, generating test cases, and even predicting circuit failures.
  • Product Development: AI-driven tools can help in ideation, simulating product performance, and accelerating R&D workflows.
  • Sustainability & Compliance: Engineers working in sustainability can leverage AI to assess material lifecycle impacts, optimize energy usage, and ensure compliance with environmental regulations.

The future of prompt engineering in manufacturing

As AI models continue to evolve, the demand for engineers who can effectively interact with them will only grow. Mastering prompt engineering today will give professionals an edge in leveraging AI to drive innovation and efficiency.

However, the trajectory of prompt engineering is uncertain. Some predict that as AI becomes more advanced, it will require less intervention from users, shifting the focus from crafting prompts to verifying AI-generated results. This means engineers may not need to spend as much time iterating on prompts, but instead will focus on critically assessing AI outputs, filtering misinformation, and ensuring AI-driven decisions align with engineering standards and ethics.

Despite this, for the foreseeable future, engineers who master the art of prompt engineering will have a competitive advantage. Just as early adopters of CAD and simulation tools gained an edge, those who learn to effectively communicate with AI will be better positioned to innovate, optimize, and automate their workflows.

A new skill for a new era

Prompt engineering is more than just a buzzword—it is a fundamental skill for the AI-driven future of engineering. As AI tools become more embedded in daily workflows, knowing how to communicate with them effectively will set apart those who use AI passively from those who actively shape its outputs. One thing for sure: “AI will not replace engineers, but engineers who know AI will”—a quote often attributed to Mark Zuckerberg.

The engineering industry is entering a transformative era, where AI-driven tools are no longer just supplementary but central to problem-solving and innovation. This shift is not merely about learning how to phrase a question effectively—it is about rethinking how engineers interact with intelligent systems. The ability to refine, adapt, and critically assess AI-generated insights will be just as important as the ability to craft precise prompts.

This raises a key question: As AI continues to advance, will prompt engineering remain a specialized skill, or will it become an intuitive part of every engineer’s workflow? Either way, those who proactively develop their AI literacy today will be best prepared to lead in the next evolution of engineering practice.

The post The prompt frontier—how engineers are learning to speak AI appeared first on Engineering.com.

]]>
Digital transformation: a modern-day conclave? https://www.engineering.com/digital-transformation-a-modern-day-conclave/ Thu, 08 May 2025 19:56:23 +0000 https://www.engineering.com/?p=139557 In a connected business environment, binary signals are no longer sufficient.

The post Digital transformation: a modern-day conclave? appeared first on Engineering.com.

]]>
The recent Vatican conclave, steeped in centuries of tradition, offered more than just a moment of spiritual significance—it served as a striking metaphor. Behind closed doors, a small group of leaders debated, deliberated, and ultimately declared a decision to the world with a puff of white smoke. Following a speedy deliberation, Chicago-born cardinal Robert Francis Prevost was elected on the second day of the conclave. He will be known as Pope Leo XIV.

This ceremonial approach works well for Catholicism, but less so in business. Indeed, many organizations still treat digital transformation in a similar manner. Initiated within closed executive circles, strategic direction is shaped, technology investments approved, and roadmaps defined—all behind the scenes. The broader organization often sees only the outcome, not the process.

As organizations operate within increasingly agile, transparent, and connected environments, the question arises: can lasting transformation truly be driven in isolation? Or must organizations evolve toward a more open, participatory model that empowers insight and ownership from the ground up?

Communicating decisions without context or clarity

Transformation decisions are often communicated with great fanfare — announcements, all-hands meetings, slick slide decks. But for many across the organization, these moments resemble smoke signals from a distant tower: symbolic but vague. The decision is clear, but the rationale, trade-offs, and implications are not.

Whether it is the selection of a cloud platform, a shift to agile delivery, or a complete redesign of the operating model, announcements without contextual transparency create uncertainty. Teams scramble to interpret direction, while middle managers attempt to reverse-engineer the thinking behind strategic pivots.

This disconnect results in lost time, misaligned execution, and diminished trust. In a connected business environment, binary signals are no longer sufficient. What is needed is clarity on how decisions are made, why certain paths were chosen, and what success looks like.

The politics of closed rooms

Digital transformation is often framed as a technical or operational initiative. But at its core is political alignment. Like a conclave, the process often concentrates decision-making among a select group of influential stakeholders, typically executives who are both the architects and potential beneficiaries of change.

These leaders must navigate internal power dynamics. Functional heads may lobby for systems that protect their operational autonomy. Transformation officers may push for standardization that enables control and reporting. Budget holders often weigh innovation against short-term performance.

In this context, decisions are shaped not just by strategy, but by alignment of interests and trade-offs between competing priorities. Some of this is necessary. But when politics override participation, transformation becomes less…well…transformative. Excluding frontline insights, product team perspectives, or customer feedback in the early stages can result in solutions that are misaligned with operational realities. The architecture may be sound in theory but fragile in execution.

Transformation cannot succeed as a black box exercise. Governance must account for diverse inputs while avoiding paralysis. Political alignment is necessary—but not sufficient. Executive sponsorship provides critical momentum and legitimacy, but it must be matched by genuine engagement from all levels of the organization. Strategic direction set at the top should create the conditions for broad-based participation, where insight flows upward and action cascades downward in sync.

Unlocking bottom-up momentum

While leadership sets the tone and vision, execution at the edge ultimately determines success. Bottom-up momentum is not simply a cultural aspiration—it is a practical necessity.

Organizations that outperform in transformation tend to decentralize experimentation. They provide local teams with the frameworks, tools, and autonomy to adapt global strategies to real-world conditions. This includes structured experimentation with minimum viable pilots, cross-functional squads that test solutions early, and platforms that allow for scalable iteration.

Modern technologies enable this shift. Cloud-native architectures support modular deployment. Low-code platforms reduce development bottlenecks. Digital twins simulate impact before committing real-world resources. AI and analytics offer continuous feedback loops from operations and customers.

When employees have the means and mandate to contribute to transformation, they become co-creators, not just recipients. Engagement rises. Resistance drops. Execution accelerates. And critically, early warnings surface faster, enabling quicker course correction. A bottom-up approach also democratizes ownership. It cultivates a culture where individuals at all levels recognize their stake in the outcome. Transformation becomes embedded in daily work, not isolated in a PMO.

From ritual to renewal

True transformation is not a symbolic gesture. It is a system-wide renewal process that demands transparency, adaptability, and inclusion. Success depends on shifting from episodic initiatives to a continuous capability for change.

In this model, transparency is not just about sharing decisions—it is about sharing context. That includes access to roadmaps, visibility into interdependencies, and real-time updates on progress. It means creating systems where feedback is expected, not requested. Collaborative dashboards, open architecture reviews, and real-time KPI monitoring move the organization beyond annual reviews and stage gates. Governance becomes lighter and smarter.

Most importantly, the organization learns to manage tension between vision and execution, global alignment and local flexibility, leadership direction and grassroots innovation. This is the essence of digital maturity. Conclaves serve their purpose. But they are designed to select, not to transform. In business, waiting for white smoke is no longer viable. Decisions must be made in daylight, informed by insight from across the enterprise.

The post Digital transformation: a modern-day conclave? appeared first on Engineering.com.

]]>
Turning unstructured data into action with strategic AI deployment https://www.engineering.com/turning-unstructured-data-into-action-with-strategic-ai-deployment/ Fri, 02 May 2025 13:12:31 +0000 https://www.engineering.com/?p=139379 Transform industrial data from disconnected and fragmented to a more unified, actionable strategic resource.

The post Turning unstructured data into action with strategic AI deployment appeared first on Engineering.com.

]]>
Artificial Intelligence (AI) is driving profound change across the industrial sector, yet its true value lies in overcoming the challenge of transforming fragmented, siloed data into actionable insights. As AI technologies reshape industries, they offer powerful capabilities to predict outcomes, optimize processes, and enhance decision-making. However, the real potential of AI is unlocked when it is applied to the complex task of integrating unstructured, “freshly harvested” data from both IT and OT systems into a cohesive, strategic resource.

This article explores the strategic application of AI within industrial environments, where the convergence of IT and OT systems plays a critical role. From predictive maintenance to real-time process optimization, AI brings new opportunities to unify disparate data sources through an intelligent digital thread—driving smarter decisions that lead to both immediate operational improvements and long-term innovation. Insights are drawn from industry frameworks to illustrate how businesses can effectively leverage AI to transform data into a competitive advantage.

From raw data to ready insights

In an ideal world, industrial data flows seamlessly through systems and is immediately ready for AI algorithms to digest and act upon. Yet the reality is far different. Much of the data that businesses generate is fragmented, siloed, unstructured, and often not available when needed, making it difficult to extract real-time actionable insights. To realize the full potential of AI, organizations must confront this data challenge head-on.

The first hurdle is understanding the true nature of “freshly harvested” data—the new, often unrefined information generated through sensors, machines, and human input. This raw data is often incomplete, noisy, or inconsistent, making it unreliable for decision-making. The key question is: How can organizations transform this raw data into structured, meaningful insights that AI systems can leverage to drive innovation?

The role of industrial-grade data solutions

According to industrial thought leaders, the solution lies in the deployment of “industrial-grade” AI solutions that can manage the complexities of industrial data. These solutions must be tailored to meet the specific requirements of industrial environments, where data quality and consistency are non-negotiable. Seamless enterprise-wide data integration is key—whether for predictive maintenance that connects sensor data with enterprise asset management, real-time process optimization that synchronizes factory operations with ERP and MRP platforms, or supply chain resilience that links production planning with logistics and inventory.

The first step in this process is data integration—the practice of bringing together disparate data sources into a unified ecosystem. This is where many organizations fail, as they continue to operate in data silos, making it nearly impossible to get a holistic view of operations. By leveraging industrial-grade data fabrics, companies can create a single, cohesive data environment where data from multiple sources, whether from edge devices or cloud systems, can be processed together in real time.

Data structuring—the secret to actionable insights

Once raw data is integrated, it must be structured in a way that makes it interpretable and useful for AI models. Raw data points need to be cleaned, categorized, and tagged with relevant metadata to create a foundation for analysis. This is a critical step in the data preparation lifecycle and requires both human expertise and sophisticated algorithms.
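The cleaning-and-tagging step described above can be sketched in a few lines of Python. Everything in this example is hypothetical—the sensor values, the plausible range, and the machine identifier are invented for illustration, and a production pipeline would use a proper data platform rather than plain dictionaries:

```python
from statistics import mean, stdev

# Hypothetical raw readings from a vibration sensor: values may be
# missing (None) or spike far outside the plausible physical range.
raw = [0.42, 0.45, None, 0.44, 9.99, 0.43, 0.46, None, 0.41]

def clean(readings, lo=0.0, hi=2.0):
    """Drop missing values and readings outside the plausible range."""
    return [r for r in readings if r is not None and lo <= r <= hi]

def structure(readings, machine_id, unit):
    """Tag cleaned readings with metadata so downstream models can
    interpret them without guessing provenance or units."""
    vals = clean(readings)
    return {
        "machine_id": machine_id,   # provenance tag
        "unit": unit,               # unit tag for interpretability
        "n_raw": len(readings),
        "n_clean": len(vals),
        "mean": round(mean(vals), 3),
        "stdev": round(stdev(vals), 3),
    }

record = structure(raw, machine_id="press-07", unit="mm/s")
print(record)
```

The point of the sketch is the shape of the output: a structured, metadata-tagged record that an AI model (or a human) can consume without guessing where the numbers came from.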

The structuring of data enables the development of reliable AI models. These models are trained on historical data, but the real power lies in their ability to make predictions and provide insights from new, incoming data—what we might call “freshly harvested” data. For example, predictive maintenance models can alert manufacturers to potential equipment failures before they occur, while quality control models can detect deviations in production in real time, allowing for immediate intervention.

The importance of explainability cannot be overstated. For industrial AI applications to be truly valuable, stakeholders must be able to trust the insights generated. Clear, transparent, explainable AI models ensure that human operators can understand and act upon AI recommendations with confidence.

Operationalizing AI for real results

Having structured data and trained models is only part of the equation. The real test is turning AI-generated insights into actionable outcomes. This is where real-time decision-making comes into play.

Organizations need to operationalize AI by embedding it within their decision-making frameworks. Real-time AI systems need to communicate directly with production systems, supply chains, and maintenance teams to drive immediate action. For example, an AI system might detect an anomaly in production quality and automatically adjust parameters, triggering alerts to the relevant personnel. The ability to act on AI insights immediately is what separates a theoretical AI application from one that delivers real-world value.

Moreover, feedback loops are essential. The AI models should not be static but should continuously learn and adapt based on new data and operational changes. This iterative approach ensures that AI doesn’t just solve problems for today but continues to improve and optimize processes over time.
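One minimal way to picture such a feedback loop is an exponentially weighted monitor that re-learns its baseline from every new reading and flags values far outside it. The readings, learning rate, and thresholds below are illustrative assumptions, not a production algorithm (a real system would, for instance, exclude flagged anomalies from the update):

```python
class StreamingMonitor:
    """Sketch of a continuously adapting model: an exponentially
    weighted mean/variance that updates with every reading, so the
    'normal' baseline drifts with the process instead of staying static."""

    def __init__(self, alpha=0.1, threshold=3.0, warmup=5):
        self.alpha = alpha          # learning rate for the running stats
        self.threshold = threshold  # flag readings this many stdevs out
        self.warmup = warmup        # don't flag until stats stabilize
        self.n = 0
        self.mean = None
        self.var = 0.0

    def update(self, x):
        self.n += 1
        if self.mean is None:       # first reading seeds the baseline
            self.mean = x
            return False
        std = self.var ** 0.5
        anomaly = (self.n > self.warmup and std > 0
                   and abs(x - self.mean) > self.threshold * std)
        # Feedback loop: the model keeps learning from incoming data
        # (a production system might skip this step for flagged points).
        diff = x - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return anomaly

monitor = StreamingMonitor()
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 5.0, 1.0]
flags = [monitor.update(x) for x in readings]
print(flags)  # only the 5.0 spike is flagged
```

Because the baseline itself keeps updating, the monitor adapts to gradual process drift while still catching abrupt deviations—the iterative behavior the paragraph above describes.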

Generative AI: A catalyst for innovation and workforce augmentation

While AI’s predictive capabilities are often the focal point, generative AI holds particular promise for transforming industrial workflows. By augmenting human creativity and problem-solving, generative AI helps address the skill gap in the workforce. For example, AI-assisted design can produce innovative solutions that human engineers may not have considered.

However, the integration of generative AI into industrial settings requires careful consideration. As powerful as it is, generative AI can be more costly than traditional AI models. Its inclusion in industrial applications must be strategic, ensuring that the value it brings—such as faster prototyping or more efficient design—justifies the investment.

How to build a sustainable AI strategy for data insights

Turning fragmented data into actionable insights requires a strategic approach. Based on industry frameworks from ABB and ARC Advisory Group, here’s a blueprint for effective AI adoption in industrial settings:

  1. Begin by understanding what is to be achieved through AI—whether it is optimizing efficiency, reducing downtime, or improving quality control. Align AI initiatives with these objectives to ensure focused efforts.
  2. Assess the existing data infrastructure and invest in solutions that integrate and standardize data across your systems. A unified data environment is crucial for enabling AI-driven insights.
  3. Avoid generic AI solutions. Instead, select AI tools that address specific use cases—whether it is predictive maintenance or process optimization. Tailored solutions are far more likely to provide valuable, actionable insights.
  4. In highly regulated industries, transparent and explainable AI models are essential for building trust and compliance. Make sure AI systems provide insights that are understandable and auditable.
  5. AI adoption is not a one-time implementation. Begin with pilot projects, learn from the results, and scale up gradually. This approach allows businesses to optimize AI systems while minimizing risk.

Scaling AI for broader impact

Collaboration is key to successful AI adoption. Partnering with experienced software providers, AI developers, and industry experts can help organizations navigate the challenges of scaling AI across their operations. Moreover, integrating generative AI alongside traditional AI approaches allows companies to strike a balance between innovation and cost-effectiveness.

The promise of AI in transforming industries is undeniable, but to truly realize its value, organizations must overcome the data fragmentation challenges that hinder effective AI deployment. By integrating, structuring, and operationalizing data, companies can convert raw information into actionable insights that drive measurable results. The future of industrial AI is not just about predictions and optimization—it’s about continuous learning, innovation, and the strategic use of AI to create sustainable, long-term growth.

The post Turning unstructured data into action with strategic AI deployment appeared first on Engineering.com.

]]>
Managing the world’s most complex machine https://www.engineering.com/managing-the-worlds-most-complex-machine/ Mon, 28 Apr 2025 18:17:24 +0000 https://www.engineering.com/?p=139223 With 100,000 parts and a 50-year expected operational lifespan, PLM is the only option for managing CERN’s Large Hadron Collider.

The post Managing the world’s most complex machine appeared first on Engineering.com.

]]>
David Widegren, Head of Engineering Information Management at CERN, at ACE 2025 in Boston, discussed the role of Product Lifecycle Management (PLM) strategies in managing the world’s most complex scientific instrument. (Image: Lionel Grealou)

CERN stands for the European Organization for Nuclear Research (from the French ‘Conseil Européen pour la Recherche Nucléaire’). It operates the world’s largest and most powerful particle accelerator—the Large Hadron Collider (LHC), which spans a 27-kilometre loop buried about 600 feet beneath the France-Switzerland border near Geneva. The LHC accelerates protons and ions to near-light speeds, recreating the conditions just after the Big Bang. This enables physicists to test fundamental theories about the forces and particles that govern our universe—and provides invaluable data on the building blocks of reality.

Operating at an astonishing temperature of -271.3°C—colder than outer space—the LHC’s superconducting magnets require cryogenic cooling systems, creating one of the coldest environments on Earth. Although some sensationalized media reports have raised concerns about the LHC creating black holes on Earth, CERN’s scientific community has rigorously demonstrated that these fears are unfounded. The LHC’s energy levels, while impressive, are a fraction of those generated by natural cosmic events that occur regularly without incident.

CERN operates with a collaborative network of 10,000 staff across 80 countries, supported by an annual budget of $1.4 billion. This immense collaboration drives groundbreaking research that demands the highest levels of reliability and precision. Managing the LHC’s enormous infrastructure—including millions of components—requires a comprehensive approach that integrates engineering and scientific disciplines. This is where digital transformation, powered by PLM and digital twins, becomes essential.

New digital backbone for an evolving scientific platform

Historically, CERN used legacy CAD systems such as CATIA V5, AutoCAD, SolidWorks, Inventor, Revit, and SmarTeam to manage critical design and operational data, alongside multiple asset and document repositories. However, as the LHC grew in complexity, these tools created inefficiencies, data silos, and challenges around synchronization, verification, and scalability.

To modernize its approach, CERN adopted Aras Innovator—a CAD-agnostic, part-centric PLM platform—redefining its approach to integrated data management. This shift enables CERN to track components across their full lifecycle, providing real-time insights into performance, wear, and maintenance needs. With over 100 million components—many exposed to extreme radiation and high-energy fields—this capability is critical for ensuring resilience and longevity. The integration of digital twins into the ecosystem allows CERN to predict component failures, optimize performance, and plan preventive maintenance.

Towards an integrated digital engineering platform. (Image: CERN presentation at ACE 2025)

Given the LHC’s extraordinary expected lifespan—over 50 years—the management of its components and systems from design and construction through decommissioning is a monumental task. Some systems, such as superconducting magnets and cryogenic infrastructures, must remain functional for decades. PLM helps CERN manage these long-term needs by providing a unified, scalable solution that integrates data across all lifecycle phases. This is essential not only for maintaining operational efficiency but also for ensuring the LHC’s systems continue to meet high standards of scientific precision and safety.

Sustainability is integral to CERN’s long-term strategy. Managing the LHC’s lifecycle includes minimizing environmental impact and optimizing energy consumption. PLM and digital twins enable CERN to optimize resource usage, reduce waste, and extend the life of crucial systems, ultimately supporting the organization’s long-term sustainability goals.

CERN’s shift to Aras Innovator has also facilitated the integration of various data streams—ranging from engineering documents to enterprise asset management. By connecting this information through a robust digital thread, CERN ensures that all stakeholders, from engineers to researchers, operate with a unified, reliable view of the system. This shared information base enhances collaboration, reduces errors, and accelerates decision-making.

While PLM manages engineering and operational data, experimental research outputs are handled separately by specialized Laboratory Information Management Systems (LIMS). However, synergies between PLM and LIMS are increasingly being explored, with the goal of creating faster feedback loops between research and engineering to enable more data-driven innovation.

Managing complexity without digital overload

As CERN continues to push the boundaries of scientific discovery, the need for real-time monitoring and predictive analytics becomes more critical. Digital twins enable real-time health checks on LHC components, tracking their condition and ensuring compliance with safety standards.
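As a toy illustration of the idea (not CERN’s actual tooling), a digital twin can project a component’s remaining useful life from its observed wear trend and surface the components that need attention first. The component names, wear fractions, service limit, and linear wear model below are all invented for the sketch:

```python
# Hypothetical twin records: each entry mirrors a physical component
# with its current wear fraction and its wear fraction one year ago.
components = [
    # (name, wear fraction now, wear fraction one year ago)
    ("magnet-a",  0.30, 0.25),
    ("cryo-pump", 0.72, 0.60),
    ("sensor-x",  0.10, 0.09),
]

def remaining_years(now, year_ago, limit=0.8):
    """Linear projection: years until wear reaches the service limit."""
    rate = now - year_ago            # observed wear per year
    if rate <= 0:
        return float("inf")          # no measurable degradation
    return (limit - now) / rate

report = {name: round(remaining_years(now, prev), 1)
          for name, now, prev in components}
print(report)
```

Even this crude projection turns raw condition data into an actionable ranking—here the fast-wearing pump would surface as the near-term maintenance priority—which is the kind of decision-ready output the text argues digital tools must deliver.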

Yet the real challenge is not simply managing technical complexity but doing so without introducing unnecessary digital complexity. The scale of the LHC, with its intricate interconnected systems, requires CERN to balance advanced technologies with operational simplicity. Digital tools must enhance operations without becoming another layer of complication.

The key question: How can CERN manage scientific complexity while minimizing the complexity of digital tools?

New technologies must deliver actionable insights that enable faster, better decisions, instead of overwhelming stakeholders with excess data or redundant processes.

Some key questions that arise:

  • What measurable reductions in maintenance costs or unplanned downtime can CERN achieve through predictive digital models?
  • How will real-time monitoring improve energy efficiency, system lifespan, and reliability?
  • How much faster can experimental setups and calibrations be completed using simulation and virtual commissioning?

Ultimately, the success of CERN’s digital transformation will not be judged by the sophistication of its tools, but by clear, quantifiable outcomes: lower downtime, improved reliability, energy-efficient operations, and faster scientific throughput.

Lessons from the LHC to the FCC

CERN’s digital transformation is not just about adding tools—it is about making complex systems easier to manage and enabling faster, more informed decisions. This mindset is critical as CERN embarks on its next major project: the Future Circular Collider (FCC).

The FCC will dwarf the LHC, with a circumference of 91 kilometers and particle collisions at energy levels up to 100 TeV—far beyond the LHC’s 13 TeV. Construction costs are estimated between €20 billion and €25 billion, with initial operations targeted around 2040. The scale of the FCC presents massive engineering challenges, from magnet design to cryogenic systems.

Here, lessons learned from the LHC’s digital journey will pay dividends.

The LHC’s digital twins—validated over years of operation—will serve as the foundation for FCC simulations. Virtual modeling allows CERN to identify risks earlier, test complex designs in silico, and optimize operations before construction even begins. By compressing design timelines and minimizing construction risks, CERN can potentially save both operational and capital costs while improving reliability.

CERN’s approach shows that digital transformation is not about complexity for its own sake. It is about ensuring that scientific and operational challenges are met with clarity, efficiency, and sustainability—building a stronger foundation for the next generation of discovery.

The post Managing the world’s most complex machine appeared first on Engineering.com.

]]>
Making sense of tariff impact: why digital transformation was never optional https://www.engineering.com/making-sense-of-tariff-impact-why-digital-transformation-was-never-optional/ Mon, 14 Apr 2025 17:36:13 +0000 https://www.engineering.com/?p=138693 Are you simply reacting to disruption or leading your company through it? PLM is the secret weapon at the center of a resilient response to volatility.

The post Making sense of tariff impact: why digital transformation was never optional appeared first on Engineering.com.

]]>
Tariffs have stormed back into the global spotlight, shaking global trade and putting pressure on manufacturers with international supply chains. For companies already facing inflation, material shortages, and geopolitical instability, new tariff and political wargames add another layer of complexity to navigate.

This is not just another opinion on trade policy—it’s a wake-up call. Companies that postponed digital transformation are now struggling to manage disruption with enterprise systems unfit for today’s pace of change. Many still rely on spreadsheets, siloed software, and disconnected teams. In contrast, organizations that invested in a connected digital backbone—especially one centered around Product Lifecycle Management (PLM)—are better equipped to assess impacts, respond rapidly, and protect margins.

Understanding tariff impact across the value chain

Tariffs create ripple effects across operations, affecting cost, compliance, and sourcing decisions:

  • Importers and contract manufacturers experience the first wave of cost increases.
  • Brand leaders and OEMs must decide whether to absorb, offset, or pass along those costs.
  • Suppliers across borders face pressure to renegotiate contracts, timelines, or terms.

At the center of this complexity is the costed Bill of Materials (BOM), listing parts, sub-assemblies, components, raw materials, formulations, and quantities needed to manufacture a product, along with the associated cost information for each item. It should be the single source of truth for real-time cost impacts; yet too often, BOM updates lag behind key decisions—causing margin erosion and compliance issues.

The critical question extends beyond “What became more expensive?” to “Who is going to pay for it?” The answer depends on industry dynamics, supply agreements, market conditions, and strategic intent. Highly commoditized sectors may need to absorb added costs to remain competitive, while premium markets might have room to pass on increases—if brand value and pricing power allow.

Complexity deepens when tariffs affect multiple tiers of the supply chain. Without timely insight into these dynamics, companies default to reactive choices—accepting margin erosion, postponing decisions, or making trade-offs that compromise long-term strategy and customer trust. A digitally connected value chain creates clarity. With end-to-end PLM integrating procurement and finance data, manufacturers can model how shocks flow through their products and portfolios—and respond before it is too late.
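A hedged sketch of what such modeling looks like at its simplest: roll a costed BOM up into a landed cost under a set of assumed tariff rates by country of origin, then compare against the duty-free baseline. All parts, costs, and rates below are illustrative, not real data:

```python
# Hypothetical costed BOM: each line carries quantity, unit cost, and
# country of origin (the fields a tariff-impact model needs at minimum).
bom = [
    {"part": "housing",   "qty": 1, "unit_cost": 12.00, "origin": "CN"},
    {"part": "pcb",       "qty": 2, "unit_cost": 8.00,  "origin": "CN"},
    {"part": "fasteners", "qty": 8, "unit_cost": 0.15,  "origin": "MX"},
    {"part": "motor",     "qty": 1, "unit_cost": 22.00, "origin": "DE"},
]
tariff_by_origin = {"CN": 0.25, "MX": 0.00, "DE": 0.10}  # assumed rates

def rolled_up_cost(bom, tariffs):
    """Roll the BOM up into a landed cost, applying duty per line by origin."""
    total = 0.0
    for line in bom:
        base = line["qty"] * line["unit_cost"]
        total += base * (1 + tariffs.get(line["origin"], 0.0))
    return round(total, 2)

baseline = rolled_up_cost(bom, {origin: 0.0 for origin in tariff_by_origin})
with_tariffs = rolled_up_cost(bom, tariff_by_origin)
print(baseline, with_tariffs, round(with_tariffs - baseline, 2))
```

Swapping in alternative rate tables or substitute-part lines turns the same rollup into the scenario modeling described above; a real implementation would pull these figures live from PLM, ERP, and trade-compliance systems rather than hard-coded dictionaries.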

Integrated digital approach to tariff management

Addressing tariff impact requires more than PLM. It demands a fully integrated digital thread that connects product, sourcing, financial, and customer data.

  • PLM captures product structures, supplier dependencies, and alternate design paths—anchoring the impact assessment.
  • Enterprise Resource Planning (ERP) maintains costed BOMs and tracks profitability changes as duties or sourcing costs shift.
  • Supply Chain Management (SCM) evaluates alternate suppliers and logistics to manage cost and lead time risks.
  • Trade compliance systems monitor changes to tariff codes and cross-border regulations.

Supporting systems play complementary roles:

  • Product Data Management (PDM) ensures the correct version of product data is used when making design or sourcing changes—critical to avoid rework or compliance issues.
  • Material Requirements Planning (MRP) provides forward visibility into procurement and inventory to avoid overbuying high-tariff parts or facing shortages.
  • Customer Relationship Management (CRM) contributes commercial insight, highlighting customer sensitivities, regional exposures, and contract constraints that influence pricing strategies.

The value of this integrated technology stack lies in connecting innovation with sourcing, costing, and compliance. With this comprehensive view, manufacturers can confidently simulate trade-offs, evaluate impacts, and execute necessary changes. When systems work together, companies can coordinate across engineering, procurement, finance, and sales using shared data and aligned risk thresholds. This enables scenario modeling, supplier exposure analysis, and implementation of controlled design or sourcing shifts with full traceability and governance.

Proactive risk management in an era of uncertainty

In today’s context, tariffs have become instruments of urgency—tools used to pressure negotiations rather than long-term policy levers. For manufacturers, this translates to operational volatility, with abrupt announcements, unclear duration, and vague scope that disrupt planning cycles.

Leaders must now make critical decisions with limited clarity and compressed timelines. Digital maturity enables companies to manage these situations with structure and foresight by:

  • Simulating tariff scenarios before they take effect
  • Identifying high-risk suppliers or parts and activating contingency plans
  • Evaluating design alternatives and reconfiguring BOMs with version control
  • Maintaining compliance as sourcing or target markets shift

Without these capabilities, companies resort to instinct, delay decisions, or absorb unnecessary costs. With them, responses become structured, traceable, and repeatable.

Digital transformation as a competitive imperative

Beyond tariffs, digital transformation has always been a strategic foundation for resilience against various forms of disruption. Companies that invested in connected systems now operate with speed and alignment, ready for AI and analytics-driven decision making. Those that delayed now face both the disruption and the steep learning curve of modernization.

True resilience emerges from more than technology alone—it comes from embedding governance, data discipline, and cross-functional collaboration into the organization’s DNA. Digital transformation enables faster, better decisions under pressure by connecting development, sourcing, operations, compliance, and customer functions into a coherent ecosystem.

As tariff volatility evolves—potentially settling into a new reality shaped by regional sourcing and revised trade agreements—the strategic question remains: Are we still reacting to disruption, or are we built to lead through it? Trade policy may eventually stabilize, but volatility will not. Digital transformation is not tied to election cycles or regulatory changes. It represents a long-term investment in adaptability, insight, and resilience. The companies that will thrive are those that stopped waiting for stability—and started building for it.

The post Making sense of tariff impact: why digital transformation was never optional appeared first on Engineering.com.

]]>
Decoding Dassault’s 3D Universes jargon: combining virtual and real intelligence https://www.engineering.com/decoding-dassaults-3d-universes-jargon-combining-virtual-and-real-intelligence/ Mon, 24 Mar 2025 18:00:02 +0000 https://www.engineering.com/?p=137969 Can Dassault Systèmes convince the market that this is more than just another buzzword-laden evolution?

The post Decoding Dassault’s 3D Universes jargon: combining virtual and real intelligence appeared first on Engineering.com.

]]>
Product Lifecycle Management (PLM) reimagined: from static digital twins to an AI-powered, generative intelligence ecosystem. (Image: Dassault Systèmes.)

Dassault Systèmes has unveiled 3D Universes (styled as 3D UNIV+RSES for branding), a bold step toward reimagining how industries engage with digital and physical realities. This is not just another 3D modeling update. It represents a fundamental shift from static digital twins to an AI-powered, generative intelligence ecosystem. The branding itself—3D UNIV+RSES instead of “3D Universes”—signals a new paradigm where virtual and real (V+R) are seamlessly integrated, enabling continuous learning, automation, and adaptability across product lifecycles.

But with this shift comes a set of key challenges: What does this mean for legacy users? How will intellectual property be managed in an AI-driven world? And can Dassault Systèmes convince the market that this is more than just another buzzword-laden evolution?

Virtual + real: more than just digital twins

The concept of V+R (Virtual + Real) is not new to Dassault Systèmes. It has been a central theme in the company’s Virtual Twin Experience, where digital twins are no longer mere representations but are continuously evolving with real-world inputs.

In 3D Universes, this vision is taken further:

  • AI-powered models learn from real-world behaviors and adjust accordingly.
  • Virtual companions provide intelligent assistance in decision-making.
  • Generative AI and sense computing optimize designs and simulations in real time.

This moves beyond the traditional “digital twin” approach. Rather than acting as a static mirror of the physical world, 3D Universes enables a dynamic, self-improving system that continuously integrates, analyzes, and adapts. The idea is not new. For instance, Siemens and other ‘PLM software’ providers are actively exploring opportunities for AI to add an intelligent layer to the PLM data backbone.
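The mechanics of that feedback loop are easy to picture in miniature. The sketch below is purely illustrative (it is not Dassault Systèmes' API, and all names and numbers are invented): a twin that starts from a nominal model parameter and recalibrates it as real-world readings stream in, so its predictions drift toward observed behavior.

```python
# Minimal illustration of a "virtual + real" feedback loop: a twin whose
# model parameter is recalibrated as real-world sensor readings arrive.
# All names and values here are hypothetical.

class VirtualTwin:
    def __init__(self, gain: float, smoothing: float = 0.2):
        self.gain = gain            # model parameter (e.g., a thermal gain)
        self.smoothing = smoothing  # how quickly real data overrides the model

    def predict(self, load: float) -> float:
        """Virtual side: predict output from the current model."""
        return self.gain * load

    def ingest(self, load: float, measured: float) -> None:
        """Real side: nudge the model toward the observed behavior."""
        observed_gain = measured / load
        self.gain += self.smoothing * (observed_gain - self.gain)

twin = VirtualTwin(gain=1.0)
for load, measured in [(10.0, 12.0)] * 3:   # reality keeps reading ~20% high
    twin.ingest(load, measured)
print(round(twin.predict(10.0), 2))  # → 10.98, converging toward the real 12.0
```

The point of the exercise: the twin is never "done"; every physical observation becomes a model update, which is the behavioral difference between a static digital twin and the self-improving loop described above.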

From static to generative intelligence

Dassault Systèmes has long been a leader in 3D modeling, PDM/PLM, and simulation, though 3D Universes marks a significant departure from traditional software functionality. It introduces an AI-driven, generative framework that transforms how products are designed, validated, and maintained.

Key differentiators from this new positioning include:

  • AI-assisted workflows that automatically refine and evolve designs.
  • Predictive simulations that adapt based on real-world sensor data.
  • A “living” knowledge platform that evolves with industry trends and user inputs.

You get the idea. Rather than designing a product in isolation, cross-functional teams spanning Product Development, Engineering, Quality, Procurement, and the supply chain can now co-create with AI, allowing for an iterative, automated process that reduces risk, enhances efficiency, and accelerates innovation cycles.

Beyond software—a living digital ecosystem

The shift to 3D Universes also seems to represent a move away from traditional licensing-based software models toward a consumption-based, Experience-as-a-Service (XaaS) framework, a commercial model similar to the “AI-as-a-Service” approach recently described by Microsoft CEO Satya Nadella. This aligns with broader industry trends where companies are transitioning from one-time software purchases to continuous, value-driven digital services.

What does this mean in practical terms?

  • Customers will consume intelligence rather than static software.
  • Real-time virtual twins will become decision-making hubs, constantly updating based on real-world inputs.
  • AI-generated designs will automate engineering iterations, dramatically reducing manual effort.

This is a major shift for legacy customers who are accustomed to on-premises, private cloud hosting, and transactional software ownership. Dassault Systèmes will need to provide a clear roadmap to help these organizations transition without disrupting their existing workflows and wider integration landscape.

IP, trust and the generative economy

One of the most critical challenges in this transformation is intellectual property (IP) ownership and data security. In an AI-driven, generative economy, where does human ingenuity end and machine-driven design begin? If AI generates a product variation based on learning from past designs, who owns the output?

Some key concerns include:

  • Ensuring IP integrity when AI continuously iterates on existing designs.
  • Managing security risks as real-world data feeds into digital models.
  • Addressing industry adoption barriers for companies that have built their entire business around traditional IP protection frameworks.

Dassault Systèmes, and other enterprise solution providers in this space, will need to provide strong governance mechanisms to help customers navigate these complexities and build trust in the generative AI-powered design process.

Dassault Systèmes released a teaser video on YouTube outlining the core ambitions of 3D Universes, reinforcing its role in shaping a new generative economy and elaborating on key messages:

  • Virtual-Plus-Real Integration: A seamless blend of digital and physical data enhances accuracy and applicability in simulations.
  • Generative AI Integration: AI-driven processes enable more adaptable and intelligent design iterations.
  • Secure Industry Environment: A trusted space for integrating and cross-simulating virtual twins while ensuring IP protection.
  • Training Multi-AI Engines: Supports the development of AI models within a unified framework, promoting more sophisticated AI applications.

While the video presents a compelling vision and sets expectations for an aspirational 15-year journey to 2040, it introduces complex terminology that might not be easily digestible for a broad audience. The use of “Universes” as branding adds an extra layer of abstraction that would benefit from clearer explanation and, in due time, a gradual transition roadmap for legacy users.

Additionally, the practical implementation and real-world applications remain vague, leaving some unanswered questions about industry adoption and integration. How will companies transition to this model? What are the concrete steps beyond the conceptual framework? The challenge will be ensuring that this does not become another overcooked marketing push that confuses rather than inspires potential adopters. Users demand clarity and pragmatism in linking solutions to problem statements and practical value realization.

A bold leap into the future

The potential of 3D Universes is enormous, but its success hinges on several key factors:

  • Market Education: Dassault Systèmes must articulate the value proposition beyond buzzwords, demonstrating tangible ROI for both new and legacy users.
  • Seamless Transition Strategy: Organizations need a clear pathway to adopt 3D Universes without disrupting their current operations.
  • AI Governance & IP Assurance: Addressing industry concerns around AI-generated designs, IP ownership, ethical AI, and data security will be crucial for widespread adoption.

If 3D Universes delivers on its promise, it has the potential to redefine how industries design, simulate, and optimize products across their entire lifecycle. By truly integrating Virtual + Real intelligence, Dassault Systèmes is making a bold statement about the next frontier of digital transformation.

The question now is: Are industries ready to embrace this generative future, or will skepticism slow its adoption? Furthermore, where should organizations start on this journey? Can solution providers be bold enough to share a pragmatic roadmap towards this goal, and keep us posted on their learnings in this space? Will 3D Universes bring us one step closer to the “Industry Renaissance” previously advocated by Dassault Systèmes Chairman Bernard Charlès? Time will tell, but one thing is certain—Dassault Systèmes is positioning itself at the forefront of the next industrial/digital revolution.

The post Decoding Dassault’s 3D Universes jargon: combining virtual and real intelligence appeared first on Engineering.com.

]]>
RIP SaaS, long live AI-as-a-service https://www.engineering.com/rip-saas-long-live-ai-as-a-service/ Thu, 16 Jan 2025 21:04:52 +0000 https://www.engineering.com/?p=135747 Microsoft CEO Satya Nadella recently predicted the end of the SaaS era as we know it, which could level the playing field for smaller manufacturers.

The post RIP SaaS, long live AI-as-a-service appeared first on Engineering.com.

]]>
Artificial Intelligence (AI) is no longer just a buzzword—it is a game-changer driving new insights, automation, and cross-functional integration. AI is transforming industries by powering digital transformation and business optimization; and a lot more innovation is expected. While some sectors are advanced in leveraging AI, others—particularly traditional manufacturing and legacy enterprise software providers—are scrambling to integrate AI into their existing digital ecosystems.

Many executives foresee AI revolutionizing Software-as-a-Service (SaaS) by transitioning from static tools to dynamic, personalized, and intelligent capabilities. AI-as-a-Service (AIaaS) offers businesses unprecedented opportunities to innovate and scale. The promise is a future powered by AI agents and Copilot-like systems that streamline infrastructure, connect enterprise data, and reduce reliance on traditional configuration and system integration.

In a recent BG2 podcast, Satya Nadella shared his vision for AI’s role in reshaping technology and business. He stated, “The opportunities far outweigh the risks, but success requires deliberate action.” These opportunities extend beyond industry giants to startups and mid-sized enterprises, enabling them to adopt AI and leapfrog traditional barriers. Smaller enterprises, in particular, stand to gain by avoiding the pitfalls of complex digital transformations, taking advantage of AI to innovate faster and scale effectively.

Revolutionizing Experiences and Integration

AI is (or will be) fundamentally changing how users interact with SaaS platforms. Traditional SaaS tools are often said to be rigid, offering one-size-fits-all interfaces that require users to adapt. In contrast, AI brings opportunities to disrupt this model by analyzing user behavior in real time to offer personalized workflows, predictive suggestions, and proactive solutions. Nadella emphasized this transformation, saying, “The next 10x function of ChatGPT is having persistent memory combined with the ability to take action on our behalf.”

This aligns with the emergence of Copilot systems, where AI acts as a collaborative partner rather than a mere self-contained tool. Imagine a SaaS platform that not only remembers user preferences but actively anticipates needs, offering intelligent guidance and dynamic adjustments to workflows. Such personalization fosters deeper engagement and loyalty while transforming the management of business rules and system infrastructure.
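At its simplest, "persistent memory plus anticipation" is a matter of recording what a user does and learning which action tends to follow which. The sketch below is a hypothetical toy, not any vendor's Copilot implementation; the action names are invented:

```python
# Hypothetical sketch of "persistent memory + anticipation": record each
# user action and suggest the historically most frequent follow-up.
from collections import Counter, defaultdict

class CopilotMemory:
    def __init__(self):
        # action -> Counter of the actions that followed it
        self.follow_ups = defaultdict(Counter)
        self.last = None

    def record(self, action: str) -> None:
        """Persist the transition from the previous action to this one."""
        if self.last is not None:
            self.follow_ups[self.last][action] += 1
        self.last = action

    def anticipate(self, action: str):
        """Suggest the most common next step seen after this action, if any."""
        nxt = self.follow_ups.get(action)
        return nxt.most_common(1)[0][0] if nxt else None

mem = CopilotMemory()
for a in ["open_bom", "run_check", "open_bom", "run_check", "open_bom", "export"]:
    mem.record(a)
print(mem.anticipate("open_bom"))  # → run_check
```

A production system would add decay, context, and an actual model, but even this frequency table captures the shift from a tool that waits for input to one that proposes the next step.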

Empowering Smaller Enterprises

The promise of AI extends not only to large enterprises but also to smaller businesses, particularly those in manufacturing and traditionally underserved sectors. For example, a small manufacturer could adopt AI-driven tools to optimize supply chain management, automate repetitive tasks, and deliver personalized customer experiences—all without the complexity of traditional ERP systems.

To ensure successful adoption, businesses must:

  • Identify high-impact areas: Focus on processes that benefit most from automation and predictive analytics, such as customer service, supply chain management, or marketing optimization.
  • Leverage scalable solutions: Choose AI platforms that align with current needs but can scale as the business grows.
  • Build internal expertise: Invest in upskilling employees to work alongside AI tools, ensuring alignment between human and machine capabilities.
  • Partner strategically: Collaborate with AI vendors that prioritize interoperability and ethical standards to avoid vendor lock-in and compliance risks.

Redefining Value: Pricing Models and Proactive Solutions

AI is not only transforming technical capabilities but also redefining pricing models for SaaS platforms. Traditional subscription fees are being replaced by real-time, usage-based pricing, powered by AI algorithms that align revenue with the value delivered. Nadella warned, “Do not bet against scaling laws,” underscoring AI’s potential to adapt and optimize at scale. For instance, AI can analyze customer usage patterns to calculate fair, dynamic pricing, ensuring customers pay for the outcomes that matter most.
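A usage-based price is mechanically simple even before any AI is layered on top: meter consumption, apply a rate, and adjust for volume. The sketch below is an invented example (rates, tiers, and function names are all hypothetical, not any SaaS vendor's pricing):

```python
# Hypothetical sketch of usage-based pricing: the bill is driven by metered
# consumption rather than a flat subscription. All rates and tiers invented.

def usage_bill(api_calls: int,
               rate_per_1k: float = 2.0,
               volume_discount: float = 0.10,
               discount_threshold: int = 100_000) -> float:
    """Charge per 1,000 calls, with a discount above a usage threshold."""
    base = api_calls / 1000 * rate_per_1k
    if api_calls > discount_threshold:
        base *= 1 - volume_discount
    return round(base, 2)

print(usage_bill(50_000))   # → 100.0 (below the discount threshold)
print(usage_bill(250_000))  # → 450.0 (above threshold, 10% discount applied)
```

The AI part of the story sits upstream of a function like this: learning what the rate, discount, and threshold should be for each customer from observed usage and delivered value, rather than fixing them in a price list.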

This shift to value-based pricing can help SaaS companies differentiate themselves in competitive markets, reinforcing their commitment to customer success. Additionally, as AI drives data integration, traditional software vendors (ERP, CRM, PLM, MES, etc.) will need to adapt their business models. With AI, vendor lock-in could become obsolete, or at least redefined, as businesses migrate data seamlessly across platforms, fueled by open standards and interconnected data assets.

Overcoming Adoption Challenges

While the promise of AIaaS is immense, transitioning from traditional SaaS is not without its hurdles. Businesses must address:

  • Cost barriers: AI solutions can require significant upfront investment, especially for smaller firms. Clear ROI metrics and phased implementation plans can mitigate this challenge.
  • Technical expertise gaps: The lack of in-house AI expertise can slow adoption. Partnering with AI-savvy consultants or platforms can bridge this gap.
  • Resistance to change: Shifting from static tools to dynamic AI-driven systems requires cultural change. Leadership must communicate the benefits clearly and provide training to ease transitions.

Responsible AI: Trust, Compliance, and the Road Ahead

The rise of AI-powered SaaS platforms presents both immense opportunity and significant responsibility. As these platforms analyze vast datasets, safeguarding user privacy and ensuring compliance with regulatory standards will be non-negotiable. Nadella’s remark that “Innovation must go hand in hand with ethical considerations” underscores the need to balance technological advancement with accountability.

To build trust and ensure accountability, businesses must prioritize:

  • Transparent data policies: Clearly communicate how user data is collected, stored, and used.
  • Robust security measures: Safeguards against data breaches are critical for maintaining trust.
  • User-centric governance: Empower users with control over their data while ensuring compliance with global regulations.

Final Thoughts…

Looking ahead, adaptive AI systems and large language models will continue to redefine how SaaS platforms deliver value, addressing evolving customer needs with precision and speed. Nadella’s vision for AIaaS is inspiring, but businesses must remain grounded. To lead in this new era, organizations must tackle critical questions:

  • How can they balance AI’s immense potential with the risks of misuse or ethical lapses?
  • What steps are necessary to ensure AI enhances—not replaces—human decision-making?
  • How can smaller enterprises leapfrog traditional barriers to scale with AI?
  • Can persistent memory systems foster meaningful personalization without sacrificing user trust?
  • What role will regulatory frameworks play in ensuring accountable innovation?

By addressing these questions and embracing the opportunities AI presents, SaaS providers can chart a path toward sustained success. The question is not whether AI will transform SaaS, but how organizations will adapt to lead in this new digital era.

The post RIP SaaS, long live AI-as-a-service appeared first on Engineering.com.

]]>